Imputation by Predictive Mean Matching: Promise & Peril

March 5, 2015 By Paul Allison

Predictive mean matching (PMM) is an attractive way to do multiple imputation for missing data, especially for imputing quantitative variables that are not normally distributed. But, as I explain below, it’s also easy to do it the wrong way.

Compared with standard methods based on linear regression and the normal distribution, PMM produces imputed values that are much more like real values. If the original variable is skewed, the imputed values will also be skewed. If the original variable is bounded by 0 and 100, the imputed values will also be bounded by 0 and 100. And if the real values are discrete (like number of children), the imputed values will also be discrete. That’s because the imputed values are real values that are “borrowed” from individuals with real data.

PMM has been around for a long time (Rubin 1986, Little 1988), but only recently has it become widely available and practical to use. Originally, it could only be used in situations where a single variable had missing data or, more broadly, when the missing data pattern was monotonic. Now, however, the PMM method is embedded in many software packages that implement an approach to multiple imputation variously known as multiple imputation by chained equations (MICE), sequential generalized regression, or the fully conditional specification (FCS). It’s available in many statistical packages, including SAS, Stata, SPSS, and R, all of which allow you to use PMM for virtually any missing data pattern.

There are two major pitfalls to PMM, however. First, only a handful of studies have evaluated its performance, so it’s not clear how well it compares with alternative methods. Second, at least two statistical packages, SPSS and Stata, have implemented PMM with a default setting that actually invalidates the method. If you use either of those packages, you must override the default.

Before explaining that problem, I first need to provide a brief description of how PMM works. Suppose there is a single variable x that has some cases with missing data, and a set of variables z (with no missing data) that are used to impute x. Do the following:

  1. For cases with no missing data, estimate a linear regression of x on z, producing a set of coefficients b.
  2. Make a random draw from the “posterior predictive distribution” of b, producing a new set of coefficients b*. Typically this would be a random draw from a multivariate normal distribution with mean b and the estimated covariance matrix of b (with an additional random draw for the residual variance). This step is necessary to produce sufficient variability in the imputed values, and is common to all “proper” methods for multiple imputation.
  3. Using b*, generate predicted values for x for all cases, both those with data missing on x and those with data present.
  4. For each case with missing x, identify a set of cases with observed x whose predicted values are close to the predicted value for the case with missing data.
  5. From among those close cases, randomly choose one and assign its observed value to substitute for the missing value.
  6. Repeat steps 2 through 5 for each completed data set. 

Unlike in many imputation methods, the linear regression here is not used to generate the imputed values themselves. Rather, it serves to construct a metric for matching cases with missing data to similar cases with observed data.
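The steps above can be sketched in a few lines of code. This is a minimal illustration in Python/NumPy, under a hypothetical function name; it is not the implementation used by SAS, Stata, SPSS, or mice, and it follows the variant described above in which the b* draw is used to predict for all cases:

```python
import numpy as np

def pmm_impute(x, Z, k=5, rng=None):
    """One PMM imputation of a single variable x given complete covariates Z.

    A sketch of Steps 1-5 above (illustrative only, not a package implementation).
    """
    rng = np.random.default_rng() if rng is None else rng
    obs = ~np.isnan(x)
    Zd = np.column_stack([np.ones(len(x)), Z])       # add an intercept column

    # Step 1: OLS regression of x on z for the cases with observed x
    Zo, xo = Zd[obs], x[obs]
    XtX_inv = np.linalg.inv(Zo.T @ Zo)
    b = XtX_inv @ Zo.T @ xo

    # Step 2: draw b* from the (approximate) posterior predictive distribution:
    # first a draw of the residual variance, then b* given that variance
    resid = xo - Zo @ b
    df = len(xo) - Zd.shape[1]
    sigma2_star = resid @ resid / rng.chisquare(df)
    b_star = rng.multivariate_normal(b, sigma2_star * XtX_inv)

    # Step 3: predicted values for all cases, observed and missing
    pred = Zd @ b_star

    x_imp = x.copy()
    for i in np.flatnonzero(~obs):
        # Step 4: the k observed cases whose predictions are closest
        donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
        # Step 5: borrow the observed value of one randomly chosen donor
        x_imp[i] = xo[rng.choice(donors)]
    return x_imp
```

Because the imputed values are borrowed from donors, every imputation is a value that actually occurs in the observed data, which is what preserves skew, bounds, and discreteness.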

There are several variations to this method (Morris et al. 2014), but the most important issue to settle is how many cases (k) should be in each match set. The default in the SAS procedure MI and in the MICE package for R is k=5. That is, each case with missing data on x is matched to the 5 cases (with data present) that have the closest predicted values. One of the 5 is chosen at random and its x value is assigned to the case with missing data. Solas and the user-written ice command for Stata set the default at k=10. 

On the other hand, for the SPSS missing values module and for the built-in mi command in Stata the default is k=1. That is, each case with missing data is matched to the single case whose predicted value is closest to the predicted value for the case with missing data. With only one matched case, there is no random draw at Step 5 in the scheme above.

That’s a serious error. With no random draw at Step 5, the only source of random variation in the imputation process is the random draw of regression coefficients in Step 2. That’s not nearly enough to produce proper imputations. As a result, estimated standard errors tend to be much too low, leading to inflated test statistics and confidence intervals that are much too narrow (Morris et al. 2014). 
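The problem can be seen in a toy simulation. The sketch below (illustrative numbers only, not any package's implementation) imputes one missing value repeatedly, mimicking the small shift in predictions that the Step 2 draw of b* produces. With k=1 the same donor is selected almost every time, so the imputed value barely varies across imputations; with k=10 the random donor choice at Step 5 restores the needed variability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed donor values, sorted by their predicted means (illustrative numbers)
donor_x = np.sort(rng.normal(size=100))
target_pred = 0.3   # predicted value for one case with missing x

def draw(k, jitter):
    # jitter mimics the small shift in predictions from redrawing b* (Step 2)
    d = np.abs(donor_x - (target_pred + rng.normal(scale=jitter)))
    # Step 4/5: pick one of the k nearest donors at random
    return donor_x[rng.choice(np.argsort(d)[:k])]

results = {}
for k in (1, 10):
    imps = [draw(k, jitter=0.02) for _ in range(200)]
    results[k] = float(np.std(imps))
    print(f"k={k:2d}: SD of imputed value across imputations = {results[k]:.3f}")
```

The between-imputation spread is far smaller for k=1, and it is exactly that spread that feeds the between-imputation variance component in Rubin's combining rules, which is why k=1 yields standard errors that are too low.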

Why did SPSS and Stata get it so wrong? Well, I’m guessing that they relied on Don Rubin’s classic 1987 book Multiple Imputation for Nonresponse in Surveys. In his description of PMM (p. 168), he proposed matching to a single case. But later work makes it clear that this is not the way to go.

So, if not k=1, then how many? That’s not clear. Schenker and Taylor (1996) did simulations with k=3 and k=10. Differences in performance were small, but with k=3 there was less bias and more sampling variation. Based on their simulations, Morris et al. (2014) recommended k=10 for most situations. But a lot depends on sample size. With large samples, k=10 is probably the better choice. But with smaller samples, k=10 will probably include too many cases that are rather unlike the case to which they are matched. Personally, I’m reasonably happy with the k=5 default of SAS and MICE.
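Whichever value you settle on, it is safest to set k explicitly rather than rely on a default. The fragments below sketch how to request k=5 in three of the packages mentioned above; the data set and variable names (`mydata`, `x`, `z1`, `z2`) are placeholders, and option names can change across releases, so check the documentation for your version:

```
* Stata (mi impute pmm; the knn() option sets k, and recent releases require it):
mi impute pmm x z1 z2, add(20) knn(5)

# R, mice package (the donors argument sets k):
library(mice)
imp <- mice(mydata, method = "pmm", donors = 5, m = 20)

/* SAS, PROC MI with FCS (the K= option sets k): */
proc mi data=mydata out=imputed nimpute=20;
  fcs regpmm(x / k=5);
  var z1 z2 x;
run;
```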

The other major drawback of PMM is that there’s no mathematical theory to justify it (which is also true of MICE methods more generally). We have to rely on Monte Carlo simulations, and no simulation can study all the possibilities. Results reported by Schenker and Taylor (1996) and Morris et al. (2014) are very encouraging, but hardly definitive. In brief, it appears that PMM does almost as well as parametric methods for a correctly specified model, and a little better than parametric methods in certain misspecified models. So the current consensus seems to be that this is an acceptable and potentially useful method. But, as they say, more research is needed.


Little, Roderick J. A. (1988) “Missing-data adjustments in large surveys.” Journal of Business & Economic Statistics 6: 287-296.

Morris, Tim P., Ian R. White and Patrick Royston (2014) “Tuning multiple imputation by predictive mean matching and local residual draws.” BMC Medical Research Methodology 14: 75-87.

Rubin, Donald B. (1986) “Statistical matching using file concatenation with adjusted weights and multiple imputations.” Journal of Business & Economic Statistics 4: 87-94.

Rubin, Donald B. (1987) Multiple Imputation for Nonresponse in Surveys. Wiley.

Schenker, Nathaniel and Jeremy M.G. Taylor (1996) “Partially parametric techniques for multiple imputation.” Computational Statistics & Data Analysis 22: 425-446.

14 Responses

  1. Stef van Buuren says:

    Paul, wonderful post. Another pitfall is that PMM may not work well in areas where the predictions are sparse, e.g. when imputing extremes. But I have always been impressed by what it can do. I have found that a well-implemented PMM also performs well for semi-continuous data, for categorical data, for multilevel data, and for compositions, and it will only be a matter of time before more evidence on such applications appears. It’s a great all-round method. Hopefully, SPSS and Stata will improve their implementations. Stef.

  2. Paul Hoon says:

    Dear Drs. Allison, Royston, Morris, van Buuren, Rubin, White, Schafer, and others I have not named.

    First, thanks to all of you who have published fine imputation papers and books which are guiding my efforts.

    Paul, I appreciate your excellent posts here at Statistical Horizons.

    Am tackling a big imputation problem and I’m running into a speed problem.

    I have three questions numbered below. First, some introductory information.

    I read with interest Morris et al. recent paper entitled: “Tuning multiple imputation by PMM and LRD.”

    I have a large health database, approaching 3/4 million cases, with 100 variables, many of which have 40% missing. The missing-value mechanism is MAR, and univariate tests show the variables are non-normal.

    I plan to “improve” normality with linskew(0) or bcskew(0) in Stata first, before imputation, and then back-transform to raw scores after imputation. In tests, these methods greatly improve skew, but many variables remain with kurtoses above 3 or 4, which I have to live with. Several papers show, and Royston recommends, that “transformation toward normality” before imputation will improve imputation point estimates for all methods, even chained PMM (which relies on a linear model to find “donors”).

    Test runs with an 11-variable subset show that Schafer’s MVN imputation method in Stata will run in about 5 minutes on a hex-core Xeon machine with 22 gigs of ECC RAM, but PMM with k = 1 and iterations = 10 takes about nine hours. All data, program, and operating system calls are in RAM on a machine I built myself, so I’m getting max speed. Speed under Mac OS X and Windows 8.1 (same hardware) is virtually the same.

    Schafer’s MVN “imposes a normal distribution” when filling in missing values, and distorts or changes the character of the distribution, especially the max and min. Chained PMM creates filled-in data that faithfully reproduce the original, unfilled distribution, to the extent that there is almost an exact “fingerprint” match based on histograms. This holds even for variables that have 40% missing. I am concerned about bias in the regression and covariance structure after Schafer’s MVN. Applying Meng’s congeniality principle, chained PMM so far seems the best approach.

    Contact with tech support at Stata, and further test runs with timing code, have shown that the Stata implementation of PMM has not been parallelized; that is, the computational load cannot be distributed across multiple processors, and speed improvements are marginal after four cores (processors). Since ice was the progenitor for implementations in SPSS and other packages, I doubt that there is any existing implementation of chained PMM that has been parallelized. It may well be that a parallelized chained PMM algorithm is simply not possible.

    Am trying to look at all my options, but “hot deck” methods so far seem to be my best bet.

    1) Do you know of an implementation of Chained PMM or Local Residual Draw (LRD) that has been “sped up”, that is, which can take advantage of multiple processors? That is, an implementation that has been parallelized?

    Otherwise I’m looking at a ninety-hour run to get a single imputation set under chained PMM for ten iterations. I will need m = 20 imputation sets for aggregation under Rubin’s rules, or 1800 hours of computation, or 75 days.

    Dr. Allison, since you argue cogently for k = 3 or more, this could be longer! Dr. van Buuren suggests k can be small, but I don’t yet have timing tests to see if larger values of k imply substantially longer run times.

    I can probably cut this down to 8 weeks if I run several instances of Stata simultaneously on each machine and cut chained PMM iterations to five (convergence is reached after 4 to 5 iterations).

    This is doable if you use marine batteries with 1000 cold-cranking amps to power UPS backups in case of a power failure that lasts 24 hours. I am going to have to build, in my electronics workshop, a new, fast, black-cylinder Mac Pro Xeon. ECC (error-checking) RAM on a Xeon motherboard is a necessity for an eight-week run. I must also monitor CPU temperatures.

    2) I cannot find LRD under Stata. Do you know if LRD has been parallelized? Does it run faster than PMM?

    3) Are there other “quasi-parametric” methods, hot deck methods, that run fast? I would appreciate any and all suggestions and ideas from you gentlemen. Criticisms welcomed. I would like to use a method such as PMM, which does an excellent job of maintaining distribution shape (even U or Poisson), max, min, skew, kurtosis, mean, and SD in the “filled-in” distribution, but which will run faster.

    Thank you for taking the time to read this email, and I look forward to your comments. Please contact me directly: phoon at fas dot harvard dot edu Please feel free to share this email with other researchers whom you believe may be able to offer suggestions.


    Paul Hoon
    Visiting Fellow

    • Paul Allison says:

      I’m really sorry, but this is just way more content than I have time to address. Others are welcome to comment and suggest answers.

  3. Stef van Buuren says:

    Paul, there is a C code routine for finding matches in the mice package that gave a speed up of 5-6 times compared to the older R version. It’s fairly dumb, and I am sure it could be made much faster than this. It can also be made parallel by someone with proper skills. Hope this helps, Stef.

  4. Daniel Klein says:

    Stata has already changed the default in the latest release (14), and forces users to specify the number of closest values (neighbors) to be used.

    Paul, thanks a lot for your very interesting and inspiring blog entries. Much appreciated!


  5. Christian Czubacki says:

    Great posting about the PMM procedure in SPSS. Is there a way to change k to, say, 10, as suggested?

    best regards,

  6. Michael McIsaac says:

    Thank you for the interesting post. I wonder how your point about the need for random variation at Step 5 relates to your discussion of discrete data and to Stef van Buuren’s comment that the method works well for categorical data; surely in some settings the k nearest observed neighbours will always have identical values, even for k > 1. Could you comment on the use of PMM in such a setting?

  7. David Disabato says:

    Hi Paul and others,

    I am having an interesting situation where the fraction of missing information from pooled analyses (e.g., OLS regression) increases the more auxiliary variables I add to my pmm imputation models. I was taught to include all available auxiliary variables in a dataset, so I have 30 auxiliary variables in my pmm imputation models. However, when I decrease the number of auxiliary variables to 20 or even 10, my fraction of missing information significantly decreases (from .80 to .60 and .50, respectively; I have about 50% missing data for analysis variables). Has anybody else experienced this? I have noticed it with traditional parametric imputation models as well, but it seems to be more common and extreme with pmm (in my experience).


    • Paul Allison says:

      This can happen. Auxiliary variables often introduce more noise than help. Concentrate on auxiliaries that have high correlations with the variables that have missing data.
