This is the repository for the book *Interpretable Machine Learning: A Guide for Making Black Box Models Explainable*.
You can read the current version of the book online here: https://christophm.github.io/interpretable-ml-book/
This book is about interpretable machine learning. Machine learning is being built into many products and processes of our daily lives, yet decisions made by machines don't automatically come with an explanation. An explanation increases trust in the decision and in the machine learning model. As the programmer of an algorithm, you want to know whether you can trust the learned model. Did it learn generalizable features? Or are there some odd artifacts in the training data that the algorithm picked up? This book gives an overview of techniques that can be used to make black boxes as transparent as possible and to explain their decisions. The first chapter introduces algorithms that produce simple, interpretable models, together with instructions on how to interpret their output. The later chapters focus on analyzing complex models and their decisions. In an ideal future, machines will be able to explain their decisions and make the transition into the algorithmic age more human. This book is recommended for machine learning practitioners, data scientists, statisticians, and also for stakeholders deciding on the use of machine learning and intelligent algorithms.
See CHANGELOG.md for version history.
If you found this book useful for your blog post, research article, or product, I would be grateful if you cited it. You can cite the book like this:
Molnar, Christoph. *Interpretable Machine Learning: A Guide for Making Black Box Models Explainable*. 3rd ed., 2025. ISBN: 978-3-911578-03-5. Available at: https://christophm.github.io/interpretable-ml-book
Or use the following BibTeX entry:
@book{molnar2025,
  title = {Interpretable Machine Learning},
  subtitle = {A Guide for Making Black Box Models Explainable},
  author = {Christoph Molnar},
  year = {2025},
  edition = {3},
  isbn = {978-3-911578-03-5},
  url = {https://christophm.github.io/interpretable-ml-book}
}
I'm always curious about where and how interpretation methods are used in industry and research. If you use the book as a reference, it would be great if you dropped me a line and told me what you used it for. This is, of course, optional and only serves to satisfy my own curiosity and to stimulate interesting exchanges. My email is [email protected]
If you find any errors in the book, I welcome your help in fixing them! To contribute:
- Fork the repository.
- Create a new branch for your fix.
- Address the error(s) you found.
- Submit a pull request with a clear description of the fix.
Additionally, if you have content suggestions or requests, feel free to open an issue. While I can't promise that all suggestions will be added, I appreciate the feedback.
Thank you for helping improve the book!
- Most R examples in the book use the iml R package that I developed; see the minimal sketch after this list.
- For a deep dive into SHAP, check out my book *Interpreting Machine Learning Models with SHAP*.
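For a quick impression of what working with iml looks like, here is a minimal sketch. The random forest model, the Boston housing data, and the choice of permutation feature importance with MAE loss are illustrative assumptions on my part, not the book's prescribed workflow:

```r
# Minimal sketch of the iml workflow (model and dataset are
# illustrative assumptions, not the book's prescribed setup)
library(iml)
library(randomForest)

# Fit any model -- here, a random forest on the Boston housing data
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap model and data in a Predictor, iml's central abstraction
X <- Boston[, names(Boston) != "medv"]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Compute and plot permutation feature importance
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)
```

The same Predictor object can be reused with iml's other methods, such as FeatureEffect or Shapley.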