Interpretable Machine Learning - Online Course
A 4-Day Livestream Seminar Taught by Adam D. Rennhoff
Tuesday, June 9 — Friday, June 12, 2026
Session 1: 10:30am-12:30pm (convert to your local time)
Session 2: 1:30pm-3:00pm
This seminar is part of our Machine Learning Certification, a flexible 4-course pathway designed to build practical expertise in modern machine learning. Contact us to learn how you can complete the certification and access discounted pricing.
Machine learning models routinely outperform traditional statistical models in predictive accuracy, yet their complexity can make them difficult to understand and communicate. For many applied researchers, this lack of transparency can limit the adoption of powerful predictive tools.
This course offers a practical and conceptually clear introduction to interpretable machine learning. You will learn how to understand, explain, and trust complex machine learning models using modern tools that naturally connect to familiar statistical concepts, such as marginal effects, uncertainty quantification, and variable importance. The course emphasizes both global interpretation (how variables influence predictions on average) and local interpretation (why a specific prediction was made).
Hands-on demonstrations will be conducted in R, with equivalent Python code provided where feasible. No prior experience with machine learning beyond basic modeling knowledge is required.
Starting June 9, this seminar will be presented as a 4-day synchronous, livestream workshop via Zoom. Each day will feature two sessions with hands-on exercises, separated by a 1-hour break. Live attendance is recommended for the best experience, but if you can’t join in real time, recordings will be available within 24 hours and can be accessed for four weeks after the seminar.
Closed captioning is available for all live and recorded sessions. Captions can be translated to a variety of languages including Spanish, Korean, and Italian. For more information, click here.
ECTS Equivalent Points: 1
More details about the course content
This seminar will give you the tools and skills to…
- Understand why machine learning models often outperform traditional models and why interpretability is essential.
- Differentiate between intrinsically interpretable models and black-box models.
- Use global interpretation tools such as partial dependence plots (PDPs), ALE plots, feature importance, and surrogate models.
- Use local interpretation tools such as individual conditional expectation (ICE) curves, LIME, anchors, and SHAP values.
- Interpret direction, magnitude, and heterogeneity of effects using model-agnostic tools.
- Apply bootstrapped uncertainty methods to assess statistical confidence in ML interpretation.
Computing
To participate in the hands-on exercises, you are encouraged to use a computer with the most recent version of R, RStudio, and relevant ML packages installed. The list of relevant packages will be distributed prior to the first course meeting. Equivalent Python code will be provided where applicable.
If you’d like to take this course but are concerned that you don’t know enough R, there are excellent online resources for learning the basics. Here are our recommendations.
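As a preview of the setup, here is a minimal sketch of what installing a working toolkit might look like. The packages named below (randomForest, iml, pdp, fastshap) are common choices for this kind of work but are only illustrative assumptions; the official list will be distributed before the first course meeting.

    # Illustrative setup only; the official package list will be
    # distributed before the first course meeting.
    install.packages(c(
      "randomForest",  # a typical black-box model to interpret
      "iml",           # PDPs, ICE curves, local surrogates, Shapley values
      "pdp",           # partial dependence plots
      "fastshap"       # fast approximate SHAP values
    ))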
Who should register?
This seminar is designed for applied researchers across various fields, including medical research, public health, biostatistics, economics, government analysis, industry data science, and consulting, who utilize predictive models and require tools for explanation, transparency, and effective communication.
You should have a basic familiarity with statistical modeling (e.g., regression) and data analysis in R. No prior experience with machine learning is required.
Seminar outline
Foundations of interpretable machine learning
- Why interpretability matters: transparency, trust, fairness, and reproducibility.
- Overview of model complexity and the accuracy–interpretability tradeoff.
- Intrinsically interpretable models: linear/logistic regression, decision trees, sparsity via LASSO (a short R sketch follows this section).
- Introduction to “black-box” models: random forests, gradient boosting (XGBoost), neural networks.
- Global vs. local interpretability and model-agnostic frameworks.
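To make the intrinsically interpretable side of this tradeoff concrete, here is a minimal R sketch: a decision tree whose split rules can be read directly off the fitted object. The rpart package and the built-in mtcars data are illustrative choices, not necessarily the examples used in class.

    library(rpart)                         # recursive partitioning trees
    tree <- rpart(mpg ~ ., data = mtcars)  # predict fuel economy from car specs
    print(tree)                            # split rules are human-readable
    plot(tree)
    text(tree, use.n = TRUE)               # label nodes with sample counts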
Post-hoc interpretation methods
- Global interpretation tools (an R sketch follows this section):
  - Partial dependence plots (PDPs).
  - Accumulated local effects (ALE) plots.
  - Feature interaction measures (e.g., H-statistic).
  - Permutation feature importance with uncertainty.
- Local interpretation tools:
  - Individual conditional expectation (ICE) curves.
  - Local surrogate models (LIME).
  - Anchors and scoped rules.
  - Counterfactual explanations.
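As referenced in the outline above, here is a minimal sketch of one global tool (a PDP with ICE curves) and one local tool (a LIME-style surrogate) in R. The iml package, random forest model, and mtcars data are illustrative assumptions, not the course's official examples.

    # Global view: overlay the average effect (PDP) on per-observation
    # ICE curves for a single feature of a random forest.
    library(randomForest)
    library(iml)
    rf   <- randomForest(mpg ~ ., data = mtcars)
    pred <- Predictor$new(rf, data = mtcars[, -1], y = mtcars$mpg)
    eff  <- FeatureEffect$new(pred, feature = "wt", method = "pdp+ice")
    plot(eff)  # the PDP is the average of the ICE curves

    # Local view: a sparse linear surrogate (LIME-style) fitted around
    # one observation to explain its individual prediction.
    loc <- LocalModel$new(pred, x.interest = mtcars[1, -1], k = 3)
    plot(loc)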
Applications, uncertainty, and advanced topics
- SHAP and Shapley values: theory, intuition, and practice.
- SHAP-based feature importance and model explanation.
- Bootstrapped uncertainty for interpretability tools (sketched in R below):
  - Confidence intervals for PDPs and ICE curves.
  - Bootstrapped SHAP-based marginal effects.
- Evaluating model stability and robustness.
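The bootstrap idea above can be sketched in a few lines of base R: refit the model on resampled data, recompute the partial dependence curve each time, and take pointwise quantiles. All specifics here (data, model, grid size, 50 replicates) are illustrative assumptions.

    # Sketch: pointwise 95% bootstrap band for a partial dependence curve.
    library(randomForest)
    set.seed(1)
    grid <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 20)
    pd_curve <- function(d) {
      rf <- randomForest(mpg ~ ., data = d)
      # average prediction with wt fixed at each grid value
      sapply(grid, function(g) { d2 <- d; d2$wt <- g; mean(predict(rf, d2)) })
    }
    boot <- replicate(50, pd_curve(mtcars[sample(nrow(mtcars), replace = TRUE), ]))
    ci   <- apply(boot, 1, quantile, probs = c(0.025, 0.975))
    plot(grid, rowMeans(boot), type = "l", ylim = range(ci),
         xlab = "wt", ylab = "partial dependence of mpg")
    lines(grid, ci[1, ], lty = 2)  # lower 2.5% bound
    lines(grid, ci[2, ], lty = 2)  # upper 97.5% bound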
Payment information
The fee of $995 USD includes all course materials.
PayPal and all major credit cards are accepted.
Our Tax ID number is 26-4576270.