Probability Calibration: Latest Techniques
It is often desirable for a model to output well-calibrated probabilities. This tutorial will discuss why calibration is important, how to assess the calibration of a model, and how to calibrate a model by post-processing its scores. We will review calibration techniques such as Isotonic Regression, Platt scaling, Beta calibration, and Spline calibration, and apply them to real-world examples.
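As a taste of the post-processing approach described above, here is a minimal sketch of two of the listed techniques, Platt scaling and Isotonic Regression, using scikit-learn. The dataset, model choice, and split sizes are illustrative assumptions, not part of the tutorial; the key idea is that a mapping from raw scores to calibrated probabilities is fit on a held-out calibration set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (illustrative only).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# Fit a base model; tree ensembles are often poorly calibrated out of the box.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
p_cal = model.predict_proba(X_cal)[:, 1]    # scores on the calibration set
p_test = model.predict_proba(X_test)[:, 1]  # scores to be calibrated

# Platt scaling: fit a logistic regression on the raw scores.
platt = LogisticRegression().fit(p_cal.reshape(-1, 1), y_cal)
p_platt = platt.predict_proba(p_test.reshape(-1, 1))[:, 1]

# Isotonic Regression: a monotone, nonparametric score-to-probability map.
iso = IsotonicRegression(out_of_bounds="clip").fit(p_cal, y_cal)
p_iso = iso.predict(p_test)

# The Brier score (lower is better) is one way to assess calibration quality.
print("Brier, raw scores:  %.4f" % brier_score_loss(y_test, p_test))
print("Brier, Platt:       %.4f" % brier_score_loss(y_test, p_platt))
print("Brier, isotonic:    %.4f" % brier_score_loss(y_test, p_iso))
```

Both calibrators are fit on data the base model never trained on, which avoids mapping overconfident in-sample scores; the same pattern extends to Beta and Spline calibration, which simply substitute a different functional form for the score-to-probability map.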
Brian Lucena is an applied mathematician and machine learning expert with experience in industry, research, and teaching. He has built predictive models in healthcare and finance that are used in numerous hospitals and financial institutions. He is the creator of two open-source Python packages: ML_Insights, which features tooling for calibration and model interpretability, and, more recently, StructureBoost, a gradient boosting package that incorporates the structure of categorical variables. The latter package is based on his paper at AISTATS 2020. His goal is to build bridges between the academic research and practicing data scientist communities. He has taught at UC Berkeley, Brown, USF, and the Metis Data Science bootcamp, and is a frequent speaker at conferences such as ODSC and MLConf.