
Sensitivity Analysis for Not Missing at Random

Paul Allison
September 25, 2014

When I teach my seminar on Missing Data, the most common question I get is “What can I do if my data are not missing at random?” My usual answer is “Not much,” followed by “but you can do a sensitivity analysis.” Everyone agrees that a sensitivity analysis is essential for investigating possible violations of the missing at random assumption. But, unfortunately, there’s little guidance in the literature on how to actually do one. And, until recently, no commercial software package had options for doing a sensitivity analysis.

I’m happy to report that PROC MI in SAS 9.4 has several options for doing a sensitivity analysis based on multiple imputation. I’ve recently had a chance to read the documentation and do a few test runs. A little later in this post, I’ll tell you what I’ve learned.


But first, some background. There are two widely-used “modern” methods for handling missing data: multiple imputation and maximum likelihood. In virtually all implementations of these methods in commercial software, the underlying assumption is that data are missing at random (MAR). Roughly speaking, this means that the probability that data are missing on a particular variable does not depend on the value of that variable, after adjusting for observed variables. This assumption would be violated, for example, if people with high income were less likely to report their income.

The MAR assumption does allow missingness to depend on anything that you observe; it just can’t depend on things that you don’t observe. MAR is not a testable assumption. You may suspect that your data are not missing at random, but nothing in your data will tell you whether or not that’s the case.

It’s possible to do multiple imputation or maximum likelihood when data are missing not at random (MNAR), but to do that, you first need to specify a model for the missing data mechanism—that is, a model of how missingness depends on both observed and unobserved quantities. That raises three issues:

  • For any data set, there are an infinite number of possible MNAR models.
  • Nothing in the data will tell you which of those models is better than another.
  • Results may depend heavily on which model you choose.

That’s a dangerous combination. And it’s why a sensitivity analysis is so important. The basic idea is to try out a bunch of plausible MNAR models, and then see how consistent the results are across the different models. If results are reasonably consistent, then you can feel pretty confident that, even if data are not missing at random, that would not compromise your conclusions. On the other hand, if the results are not consistent across models, you would have to worry about whether any of the results are trustworthy.

Keep in mind that this is not a test. Inconsistency of results does not tell you that your data are MNAR. It simply gives you some idea of what would happen if the data are MNAR in particular ways.

There’s nothing very deep about this. The hard part is figuring out how to come up with a reasonable set of models. It’s particularly hard if you’re using maximum likelihood to handle the missing data. Elsewhere I’ve argued for the advantages of maximum likelihood over multiple imputation. But one attraction of multiple imputation is that it’s easier to do a decent sensitivity analysis.

That’s where the new options for PROC MI come in. I think they’re easiest to explain by way of an example. In my Missing Data seminar, I use an example data set called COLLEGE, which contains information on 1302 four-year colleges and universities in the U.S. The goal is to estimate a linear regression in which the dependent variable is graduation rate, the percentage of students who graduate among those who enrolled four years earlier.

There are lots of missing data for the five predictor variables, but we’re going to focus on the 98 colleges that did not report their graduation rates. It’s plausible that colleges with low graduation rates would be less likely to report those rates in order to avoid adverse publicity. If so, that would probably entail a violation of the MAR assumption. It would also imply that colleges with missing data on graduation rates would tend to have lower (unobserved) graduation rates than those colleges that report their graduation rates, controlling for other variables.

PROC MI allows us to build that supposition into the multiple imputation model. We can, for example, specify an imputation model that says that the imputed values of GRADRAT are only 80% of what they would be if the data were actually missing at random. Here’s the SAS code for doing that:

PROC MI DATA=MY.COLLEGE OUT=MIOUT;
VAR GRADRAT CSAT LENROLL STUFAC PRIVATE RMBRD ACT;
FCS;                              /* fully conditional specification */
MNAR ADJUST(GRADRAT / SCALE=.80); /* multiply imputed GRADRAT values by .80 */
RUN;

This program produces five completed data sets, stacked in a single output data set (MIOUT) and distinguished by the _Imputation_ variable, with missing values imputed by linear regression. For a sensitivity analysis, the essential ingredient is the MNAR statement. The ADJUST option says to multiply the imputed values of GRADRAT by .80 at each step of the iterative process. To do a proper sensitivity analysis, we would redo both the imputation and the analysis for several different values of the SCALE parameter, ranging between 0 and 1.
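To carry out the analysis step, you fit the substantive regression to each completed data set and then pool the results with PROC MIANALYZE. Here is a minimal sketch, assuming the substantive model regresses GRADRAT on five of the predictors, with ACT serving only as an auxiliary variable in the imputation model (adjust the MODEL statement if your analysis model differs):

/* Fit the regression separately to each of the five completed data sets */
PROC REG DATA=MIOUT OUTEST=ESTS COVOUT;
   MODEL GRADRAT = CSAT LENROLL STUFAC PRIVATE RMBRD;
   BY _Imputation_;
RUN;

/* Pool the estimates across imputations using Rubin's rules */
PROC MIANALYZE DATA=ESTS;
   MODELEFFECTS Intercept CSAT LENROLL STUFAC PRIVATE RMBRD;
RUN;

Repeating this pair of steps for each SCALE value, and comparing the pooled coefficients across runs, is what constitutes the sensitivity analysis.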

The MNAR statement only works if you specify the MONOTONE method or the FCS method, which is what I used here. FCS stands for fully conditional specification, and it’s equivalent to the chained equation or sequential regression method used in many other packages. The MNAR statement does not work if you use the default MCMC method. [It could probably be done for MCMC, but that would mess up the elegant computational algorithm. FCS is already a “messy” algorithm, so a little more mess is no big deal].

Instead of multiplying the imputed values by some constant, we could add or subtract a constant, for example,

MNAR ADJUST(GRADRAT / SHIFT = -20);

This would subtract 20 points from any imputed graduation rates. Again, to do a sensitivity analysis, you’d want to try out a range of different SHIFT values to see what effect that would have on your results.
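One convenient way to automate that is a small macro loop. The sketch below (the macro name SHIFT_SENS and the range of shifts from 0 to -40 points are my own choices, not part of PROC MI) re-runs the imputation once per SHIFT value; the SHIFT=0 run reproduces the MAR benchmark:

%MACRO SHIFT_SENS;
   %DO K = 0 %TO 4;
      %LET DELTA = %EVAL(-10 * &K);  /* shifts of 0, -10, -20, -30, -40 */
      PROC MI DATA=MY.COLLEGE OUT=MIOUT&K;
         VAR GRADRAT CSAT LENROLL STUFAC PRIVATE RMBRD ACT;
         FCS;
         MNAR ADJUST(GRADRAT / SHIFT=&DELTA);
      RUN;
   %END;
%MEND;
%SHIFT_SENS;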

The SHIFT and SCALE options can be combined. The SHIFT option can also be used for adjusting the imputations of categorical outcomes (binary, ordinal or nominal), except that the changes are applied on the log-odds scale.
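For example, a single run that both shrinks and lowers the imputed values might look like this (the particular numbers are arbitrary; as I read the documentation, the scaling is applied before the shift):

MNAR ADJUST(GRADRAT / SHIFT=-10 SCALE=.90);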

Another option allows you to restrict the adjustments to certain subsets of the data, e.g.,

MNAR ADJUST(GRADRAT / SHIFT = -20 ADJUSTOBS=(PRIVATE='1'));

This says to subtract 20 points from the imputed values of graduation rates, but only for private colleges, not for public colleges. If you use the ADJUSTOBS option, the subsetting variable (PRIVATE in this case) should be listed in a CLASS statement.
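Putting the pieces together, the full program would look something like this (a sketch assuming PRIVATE is coded ‘1’ for private colleges):

PROC MI DATA=MY.COLLEGE OUT=MIOUT;
   CLASS PRIVATE;  /* required for the ADJUSTOBS subsetting variable */
   VAR GRADRAT CSAT LENROLL STUFAC PRIVATE RMBRD ACT;
   FCS;
   MNAR ADJUST(GRADRAT / SHIFT=-20 ADJUSTOBS=(PRIVATE='1'));
RUN;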

There are also other options, which you can read about in the SAS documentation for PROC MI. An introductory article by Yang Yuan, the SAS developer of PROC MI, is also available.

If you don’t use SAS, you can make adjustments like these with other multiple imputation software plus a little programming: first produce the data sets under the MAR assumption, then modify the imputed values by adding or multiplying by the desired constants. But the SAS method is more elegant because the adjustments are made at each iteration, and the adjusted imputations are then used in imputing other variables with missing data at later steps of the algorithm.
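To make the post-hoc approach concrete, here is a minimal sketch in SAS terms (the GRADMISS flag and the 20-point shift are illustrative; the PROC MI run itself would be an ordinary MAR imputation without the MNAR statement):

/* Flag the originally missing values before imputing */
DATA COLLEGE2;
   SET MY.COLLEGE;
   GRADMISS = MISSING(GRADRAT);
RUN;

/* ... run PROC MI on COLLEGE2 under MAR, producing MIOUT ... */

/* Post-hoc adjustment: shift only the values that were imputed */
DATA MIOUT_ADJ;
   SET MIOUT;
   IF GRADMISS THEN GRADRAT = GRADRAT - 20;
RUN;

Because the adjustment happens only after imputation is finished, the shifted values never feed back into the imputation of the other variables, which is exactly the limitation the MNAR statement avoids.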

This particular way of doing a sensitivity analysis is based on something called pattern-mixture models for MNAR. You can read more about pattern-mixture models in Chapter 10 of the book Multiple Imputation and Its Application by James Carpenter and Michael Kenward.

Finally, it’s worth noting that the inclusion of appropriate auxiliary variables into the imputation model can go a long way toward reducing the likelihood of MNAR. The best auxiliary variables are those that are highly correlated with both the variable that has missing data and the probability that the variable is missing.
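In PROC MI, adding an auxiliary variable is just a matter of including it in the VAR statement, even though it will not appear in the analysis model. In the running example, ACT can play that role; ALUMNI_RATE below is a second, purely hypothetical auxiliary variable:

PROC MI DATA=MY.COLLEGE OUT=MIOUT;
   /* ACT and ALUMNI_RATE (hypothetical) serve only as auxiliary variables:
      correlated with GRADRAT and, ideally, with its missingness */
   VAR GRADRAT CSAT LENROLL STUFAC PRIVATE RMBRD ACT ALUMNI_RATE;
   FCS;
RUN;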


Comments

  1. July 22, 2022

    Dear Dr. Allison,

Several years ago (possibly 2018 or 2020), you mentioned statisticians who had initially thought they had found useful MNAR sensitivity tests, I believe for covariance estimands in a “dropout” context, but who subsequently withdrew their reports. I need to study their original articles and their thinking at that time while I take a deep breath and a long walk. I cannot find that comment in your earlier Statistical Horizons posts today, July 22, 2022. Could you post all references concerning these efforts that you are aware of?

Muthén, Muthén, and Asparouhov, in Regression and Mediation Analysis Using Mplus (2016), pp. 464-487, present code and discussion for modeling the MNAR mechanism of either missingness (for whatever reason) or dropout. In your view, would similar efforts in a research report show peer reviewers that an investigator at least sought to explore a likely MNAR mechanism?

Or would they reject the paper, stating: “You have not met the MAR assumption. We appreciate your efforts to examine what you believe to be the most likely MNAR mechanism, but please realize there are very likely other MNAR mechanisms in effect that you have not tested for. Therefore, we unfortunately cannot accept and publish your work at this time. Regards, The Editors.”

    What are your thoughts concerning this perplexing matter?

1. If you are concerned that your data may be not missing at random, I do think it’s worth exploring the kinds of models described by Muthén et al. Note that reviewers cannot legitimately say “you have not met the MAR assumption” because nothing in your data will tell you whether you have or have not met that assumption.

2. Thank you. This post is very helpful. I usually use Stata, but I agree that the MNAR statement in SAS is quite elegant.

3. Hi Dr. Allison,
    I am confused about the use of selection models and pattern-mixture models. Are selection models and pattern-mixture models considered multiple-imputation methods or likelihood-based methods? In the book “Longitudinal Data Analysis,” these methods are attributed to the likelihood-based approach, while they are also discussed under Rubin’s multiple imputation. The first book says, “regardless of the particular imputation method adopted, subsequent analyses of the observed and imputed data are valid when missingness is MAR (or MCAR),” while Rubin, in his book, discusses multiple imputation for nonignorable nonresponse.
    What is the link between multiple imputation and likelihood-based methods? And is the likelihood-based approach the same as the maximum likelihood method discussed here?

1. Maximum likelihood and multiple imputation can both be used for pattern-mixture models and for selection models. Although multiple imputation is not maximum likelihood, it can be regarded as a “likelihood-based” method that has the desirable properties described by Rubin. Bayesian methods are likelihood-based, and the goal of multiple imputation is to make random draws from the posterior predictive distribution of the missing data given the observed data.

  4. Paul, thanks for your lucid description of these new and useful options.
I used the shift method in my 1999 Statistics in Medicine paper and observed problematic behavior when shifting is applied to two correlated variables at the same time. Would you advise restricting this option to just one variable?
    Stef.

    1. Stef: Sounds like you’ve already had more experience with these methods than I have. However, it does make some sense to me to focus on one variable at a time. Otherwise it’s just too hard to figure out what’s going on.
