We propose ConformalPrediction.jl: a Julia package for Predictive Uncertainty Quantification in Machine Learning (ML) through Conformal Prediction. It works with supervised models trained in MLJ.jl, a popular, comprehensive ML framework for Julia. Conformal Prediction is easy-to-understand, easy-to-use and model-agnostic, and it works under minimal distributional assumptions.
A first crucial step towards building trustworthy AI systems is to be transparent about predictive uncertainty. Machine Learning model parameters are random variables and their values are estimated from noisy data. That inherent stochasticity feeds through to model predictions and should be addressed, at the very least in order to avoid overconfidence in models.
Beyond that obvious concern, quantifying model uncertainty opens up a myriad of possibilities for improving upstream and downstream tasks like active learning and model robustness. In Bayesian Active Learning, for example, uncertainty estimates are used to guide the search for new input samples, which can make ground-truthing tasks more efficient (Houlsby et al., 2011). With respect to model performance in downstream tasks, predictive uncertainty quantification can be used to improve model calibration and robustness (Lakshminarayanan et al., 2016).
Conformal Prediction (CP) is a scalable frequentist approach to uncertainty quantification and coverage control (Angelopoulos and Bates, 2022). CP can be used to generate prediction intervals for regression models and prediction sets for classification models. There is also some recent work on conformal predictive distributions and probabilistic predictions. The following characteristics make CP particularly attractive to the ML community: it is model-agnostic, so it can be wrapped around any supervised learner; it is distribution-free, requiring only exchangeability of the data; it comes with finite-sample coverage guarantees; and, in its split form, it is computationally lightweight, since it involves no model retraining.
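To make the interval construction concrete, the following is a minimal from-scratch sketch of split conformal regression under the usual exchangeability assumption. It is purely illustrative and not the package's implementation; the toy data and the `predict_fn` stand-in are assumptions made for this example.

```julia
using Statistics

# Stand-in for any fitted point predictor (here: predict the training mean).
y_train = randn(100)
predict_fn(x) = mean(y_train)

# Held-out calibration data, assumed exchangeable with future test data.
X_calib, y_calib = randn(500), randn(500)

# 1. Nonconformity scores on the calibration set: absolute residuals.
scores = abs.(y_calib .- predict_fn.(X_calib))

# 2. Conformal quantile of the scores at level ⌈(n+1)(1-α)⌉/n.
α = 0.1
n = length(scores)
q̂ = quantile(scores, min(1.0, ceil((n + 1) * (1 - α)) / n))

# 3. Prediction interval for a new input; marginal coverage is at least 1-α.
x_new = 0.5
lower, upper = predict_fn(x_new) - q̂, predict_fn(x_new) + q̂
```

For classification, the same quantile step is applied to scores derived from predicted class probabilities, yielding prediction sets instead of intervals.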
Because CP is model-agnostic, it can in principle be applied to any supervised model, including those trained in the MLJ.jl toolkit.
Open-source development in the Julia AI space has been very active in recent years, and MLJ is just one great example testifying to these community efforts. As we gradually build up an AI ecosystem, it is important to also pay attention to the risks and challenges facing AI today. With respect to Predictive Uncertainty Quantification, there is currently good support for Bayesian methods and ensembling, but a fully-fledged implementation of Conformal Prediction in Julia has so far been lacking.
ConformalPrediction.jl
Through this project we aim to close that gap and thereby contribute to broader community efforts towards trustworthy AI. Highlights of our new package include:
- A simple and intuitive API: any supervised MLJ model can be conformalized with a single call to conformal_model(model::MLJ.Supervised), as the short example below illustrates.
- Implementations of multiple conformal methods for both regression and classification.
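The snippet below sketches this workflow; the synthetic data, the choice of DecisionTreeRegressor and the train-test split are placeholder assumptions for illustration, not requirements.

```julia
using MLJ
using ConformalPrediction

# Synthetic regression data via MLJ's built-in helper.
X, y = MLJ.make_regression(1000, 2)
train, test = partition(eachindex(y), 0.8, shuffle=true)

# Load any supervised MLJ model; a decision tree is used here for illustration.
DecisionTreeRegressor = @load DecisionTreeRegressor pkg=DecisionTree
model = DecisionTreeRegressor()

# Wrap the model, then fit and predict as with any other MLJ machine.
conf_model = conformal_model(model)
mach = machine(conf_model, X, y)
fit!(mach, rows=train)
ŷ = predict(mach, selectrows(X, test))   # prediction intervals, not point estimates
```

Because the conformalized model is itself just another MLJ model, the usual machine-based workflow of fitting, predicting and evaluating carries over unchanged.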
During development we have been in touch with the developers of MLJ.jl and some of the leading researchers in the field. Thankfully, we have also already received a lot of useful feedback and contributions from the community. Our primary goal for this package is to become the go-to place for conformalizing supervised machine learning models in Julia. To this end we currently envision a number of future developments; for more information, see the list of outstanding issues.
Take a quick interactive tour to see what this package can do: link. Aside from this Pluto.jl notebook you will find links to many more resources on the package repository: ConformalPrediction.jl.