
Interpretable Machine Learning - Online Course

A 4-Day Livestream Seminar Taught by

Jens Hainmueller
Course Dates:

Tuesday, June 24 – Friday, June 27, 2025

Schedule: All sessions are held live via Zoom. All times are ET (New York time).

10:30am-12:30pm
1:30pm-3:00pm

A Practical Guide to Unpacking the Black Box

Machine learning models often outperform traditional models such as linear or logistic regression in predictive accuracy, but this advantage typically comes at the cost of greater complexity and output that can be difficult to explain and interpret.

Accurate prediction is often not enough—researchers need to understand why a model performs well, which features are driving its decisions, and how predictions differ across subgroups. This transparency helps researchers develop fair, reliable, and robust models and translate their work to real-world scenarios where explainability is often a requirement.

This course is designed to teach you how to make machine learning models more transparent and interpretable. After reviewing interpretable models such as linear and logistic regression, we will examine several popular machine learning models and demonstrate how they can be made more interpretable using a range of post-hoc and model-agnostic methods that provide insights at both the aggregate and individual levels. These methods include partial dependence plots, Accumulated Local Effects (ALE) plots, feature interaction measures (H-statistic), functional decomposition, permutation feature importance, global surrogate models, individual conditional expectation (ICE) curves, local surrogate models (such as LIME), scoped rules (anchors), counterfactual explanations, Shapley values, and SHAP values.
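To give a flavor of what these model-agnostic methods look like in practice, here is a minimal sketch (not course material) of one of the techniques listed above, permutation feature importance, using scikit-learn. The synthetic dataset and model choice are illustrative assumptions, not the course's actual examples.

```python
# Illustrative sketch: permutation feature importance with scikit-learn.
# The dataset and model below are assumptions for demonstration only.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 of which actually drive the target.
X, y = make_regression(n_samples=500, n_features=5, n_informative=2,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test-set score:
# large drops indicate features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {imp:.3f}")
```

Because the method only needs a fitted model and a scoring function, the same code works unchanged for any estimator, which is what makes it model-agnostic.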

Throughout the course, core technical concepts will be demonstrated with real-world datasets and hands-on coding exercises. This will ensure that you not only understand the theory behind interpretability but also acquire practical skills to apply these techniques in your own projects.

By the end of the course, you will be equipped with the knowledge and tools necessary to interpret machine learning models effectively, allowing for better insights, improved model transparency, and greater trust in your systems.

Starting June 24, we are offering this seminar as a 4-day synchronous*, livestream workshop held via the free video-conferencing software Zoom. Each day will consist of two lecture sessions which include hands-on exercises, separated by a 1-hour break. You are encouraged to join the lecture live, but will have the opportunity to view the recorded session later in the day if you are unable to attend at the scheduled time.

*We understand that finding time to participate in livestream courses can be difficult. If you prefer, you may take all or part of the course asynchronously. The video recordings will be made available within 24 hours of each session and will be accessible for four weeks after the seminar, meaning that you will get all of the class content and discussions even if you cannot participate synchronously. 

Closed captioning is available for all live and recorded sessions. Captions can be translated into a variety of languages, including Spanish, Korean, and Italian.

More details about the course content

Computing

Who should register?

Seminar outline

Payment information