Conformal Prediction in Julia πŸŸ£πŸ”΄πŸŸ’

Conformal Prediction in Julia β€” Part 1

conformal prediction
uncertainty
Julia
A (very) gentle introduction to Conformal Prediction in Julia using my new package ConformalPrediction.jl.
Published

October 25, 2022

Prediction sets for two different samples and changing coverage rates. As coverage grows, so does the size of the prediction sets.

A first crucial step towards building trustworthy AI systems is to be transparent about predictive uncertainty. Model parameters are random variables and their values are estimated from noisy data. That inherent stochasticity feeds through to model predictions and should be addressed, at the very least in order to avoid overconfidence in models.

Beyond that obvious concern, it turns out that quantifying model uncertainty actually opens up a myriad of possibilities to improve up- and down-stream modeling tasks like active learning and robustness. In Bayesian Active Learning, for example, uncertainty estimates are used to guide the search for new input samples, which can make ground-truthing tasks more efficient (Houlsby et al. 2011). With respect to model performance in downstream tasks, uncertainty quantification can be used to improve model calibration and robustness (Lakshminarayanan, Pritzel, and Blundell 2017).

In previous posts we have looked at how uncertainty can be quantified in the Bayesian context (see here and here). Since in Bayesian modeling we are generally concerned with estimating posterior distributions, we get uncertainty estimates almost as a byproduct. This is great for all intents and purposes, but it hinges on assumptions about prior distributions. Personally, I have no quarrel with the idea of making prior distributional assumptions. On the contrary, I think the Bayesian framework formalizes the idea of integrating prior information in models and therefore provides a powerful toolkit for conducting science. Still, in some cases this requirement may be seen as too restrictive or we may simply lack prior information.

Enter: Conformal Prediction (CP) β€” a scalable frequentist approach to uncertainty quantification and coverage control. In this post we will go through the basic concepts underlying CP. A number of hands-on usage examples in Julia should hopefully help to convey some intuition and ideally attract people interested in contributing to a new and exciting open-source development.

πŸƒβ€β™€οΈ TL;DR
  1. Conformal Prediction is an interesting frequentist approach to uncertainty quantification that can even be combined with Bayes (Section 1).
  2. It is scalable and model-agnostic and therefore well applicable to machine learning (Section 1).
  3. ConformalPrediction.jl implements CP in pure Julia and can be used with any supervised model available from MLJ.jl (Section 2).
  4. Implementing CP directly on top of an existing, powerful machine learning toolkit demonstrates the potential usefulness of this framework to the ML community (Section 2).
  5. Standard conformal classifiers produce set-valued predictions: for ambiguous samples these sets are typically large (for high coverage) or empty (for low coverage) (Section 2.1).

πŸ“– Background

Conformal Prediction promises to be an easy-to-understand, distribution-free and model-agnostic way to generate statistically rigorous uncertainty estimates. That’s quite a mouthful, so let’s break it down: firstly, as I will hopefully manage to illustrate in this post, the underlying concepts truly are fairly straightforward to understand; secondly, CP indeed relies on only minimal distributional assumptions; thirdly, common procedures to generate conformal predictions really do apply almost universally to all supervised models, therefore making the framework very intriguing to the ML community; and, finally, CP does in fact come with a frequentist coverage guarantee that ensures that conformal prediction sets contain the true value with a user-chosen probability. For a formal proof of this marginal coverage property and a detailed introduction to the topic, I recommend Angelopoulos and Bates (2022).
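Formally, this marginal coverage property states that for a user-chosen error rate \(\alpha \in (0,1)\) and exchangeable data, the conformal prediction set \(C(X_{\text{test}})\) satisfies

\[ P\left(Y_{\text{test}} \in C(X_{\text{test}})\right) \ge 1-\alpha \]

where the probability is marginal, that is, taken over both the calibration data and the test sample.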

Note

In what follows we will loosely treat the tutorial by Angelopoulos and Bates (2022) and the general framework it sets as a reference. You are not expected to have read the paper, but I also won’t reiterate any details here.

CP can be used to generate prediction intervals for regression models and prediction sets for classification models (more on this later). There is also some recent work on conformal predictive distributions and probabilistic predictions. Interestingly, it can even be used to complement Bayesian methods. Angelopoulos and Bates (2022), for example, point out that prior information should be incorporated into prediction sets and demonstrate how Bayesian predictive distributions can be conformalized in order to comply with the frequentist notion of coverage. Relatedly, Hoff (2021) proposes a Bayes-optimal prediction procedure. And finally, Stanton, Maddox, and Wilson (2022) very recently proposed a way to introduce conformal prediction in Bayesian Optimization. I find this type of work that combines different schools of thought very promising, but I’m drifting off a little … So, without further ado, let us look at some code.

πŸ“¦ Conformal Prediction in Julia

In this first short post on CP we will look at how conformal prediction can be implemented in Julia. In particular, we will look at an approach that is compatible with any of the many supervised machine learning models available in MLJ: a beautiful, comprehensive machine learning framework funded by the Alan Turing Institute and the New Zealand Strategic Science Investment Fund (Blaom et al. 2020). We will go through some basic usage examples employing a new Julia package that I have been working on: ConformalPrediction.jl.

ConformalPrediction.jl is a package for uncertainty quantification through conformal prediction for machine learning models trained in MLJ. At the time of writing it is still in its early stages of development, but it already implements a range of different approaches to CP. Contributions are very much welcome.
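To follow along, the package first needs to be installed. A minimal sketch using Julia's package manager, assuming the package can be added from the General registry (otherwise it can be added directly from its GitHub URL):

Code
using Pkg

# Install ConformalPrediction.jl (assumes the package is registered):
Pkg.add("ConformalPrediction")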

Split Conformal Classification

We consider a simple binary classification problem. Let \((X_i, Y_i), \ i=1,...,n\) denote our feature-label pairs and let \(\mu: \mathcal{X} \mapsto \mathcal{Y}\) denote the mapping from features to labels. For illustration purposes we will use the moons dataset πŸŒ™. Using MLJ.jl we first generate the data and split it into a training and test set:

Code
using MLJ
using Random
Random.seed!(123)

# Data:
X, y = make_moons(500; noise=0.15)
train, test = partition(eachindex(y), 0.8, shuffle=true)

Here we will use a specific case of CP called split conformal prediction which can then be summarized as follows:1

  1. Partition the training data into a proper training set and a separate calibration set: \(\mathcal{D}_n=\mathcal{D}^{\text{train}} \cup \mathcal{D}^{\text{cali}}\).
  2. Train the machine learning model on the proper training set: \(\hat\mu_{i \in \mathcal{D}^{\text{train}}}(X_i,Y_i)\).
  3. Compute nonconformity scores, \(\mathcal{S}\), using the calibration data \(\mathcal{D}^{\text{cali}}\) and the fitted model \(\hat\mu_{i \in \mathcal{D}^{\text{train}}}\).
  4. For a user-specified desired coverage ratio \((1-\alpha)\) compute the corresponding quantile, \(\hat{q}\), of the empirical distribution of nonconformity scores, \(\mathcal{S}\).
  5. For the given quantile and test sample \(X_{\text{test}}\), form the corresponding conformal prediction set:

\[ C(X_{\text{test}})=\{y:s(X_{\text{test}},y) \le \hat{q}\} \tag{1}\]

This is the default procedure used for classification and regression in ConformalPrediction.jl.
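To make the five steps above concrete, here is a minimal, self-contained sketch of split conformal classification built directly on MLJ primitives. It is not the package's internal implementation, just an illustration of the same logic; the names `calibrate` and `prediction_set` are made up for this example:

Code
using MLJ, Statistics

# Steps 1–4: split the data, fit the atomic model on the proper training set
# and compute the calibrated quantile of the nonconformity scores.
function calibrate(model, X, y; coverage=0.9, train_ratio=0.5)
    # 1. Proper training set and calibration set:
    train, calib = partition(eachindex(y), train_ratio, shuffle=true)

    # 2. Train the atomic model on the proper training set:
    mach = machine(model, X, y)
    fit!(mach, rows=train, verbosity=0)

    # 3. Nonconformity scores on the calibration set: s = 1 - p̂(true label)
    p̂ = predict(mach, selectrows(X, calib))
    scores = 1 .- pdf.(p̂, y[calib])

    # 4. Empirical quantile with the usual finite-sample correction:
    n = length(scores)
    qΜ‚ = quantile(scores, clamp(ceil((n + 1) * coverage) / n, 0, 1))
    return mach, qΜ‚
end

# Step 5 (Equation 1): all labels whose nonconformity score is at most q̂.
function prediction_set(mach, qΜ‚, Xnew)
    p̂ = predict(mach, Xnew)[1]
    return [c for c in classes(p̂) if 1 - pdf(p̂, c) <= qΜ‚]
end

Given any probabilistic MLJ model, `calibrate` returns a fitted machine together with \(\hat{q}\), and `prediction_set` then yields the set-valued predictions described by Equation 1.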

You may want to take a look at the source code for the classification case here. As a first important step, we begin by defining a concrete type SimpleInductiveClassifier that wraps a supervised model from MLJ.jl and reserves additional fields for a few hyperparameters. As a second step, we define the training procedure, which includes the data-splitting and calibration step. Finally, as a third step we implement the procedure in Equation 1 to compute the conformal prediction set.

Development Status

The permalinks above take you to the version of the package that was up-to-date at the time of writing. Since the package is in its early stages of development, the code base and API can be expected to change.

Now let’s take this to our πŸŒ™ data. To illustrate the package functionality we will demonstrate the envisioned workflow. We first define our atomic machine learning model following standard MLJ.jl conventions. Using ConformalPrediction.jl we then wrap our atomic model in a conformal model using the standard API call conformal_model(model::Supervised; kwargs...). To train and predict from our conformal model we can then rely on the conventional MLJ.jl procedure again. In particular, we wrap our conformal model in data (turning it into a machine) and then fit it on the training set. Finally, we use our machine to predict the label for a new test sample Xtest:

Code
# Model:
KNNClassifier = @load KNNClassifier pkg=NearestNeighborModels
model = KNNClassifier(;K=50) 

# Training:
using ConformalPrediction
conf_model = conformal_model(model; coverage=.9)
mach = machine(conf_model, X, y)
fit!(mach, rows=train)

# Conformal Prediction:
Xtest = selectrows(X, first(test))
ytest = y[first(test)]
predict(mach, Xtest)[1]
import NearestNeighborModels βœ”
           UnivariateFinite{Multiclass{2}}      
     β”Œ                                        ┐ 
   0 ─■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.94   
     β””                                        β”˜ 

The final predictions are set-valued. While the softmax output remains unchanged for the SimpleInductiveClassifier, the size of the prediction set depends on the chosen coverage rate, \((1-\alpha)\).
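Since the conformal classifier returns a UnivariateFinite distribution rather than an explicit set, it can be handy to recover the set itself. A minimal sketch, assuming that labels excluded from the set carry zero probability mass:

Code
# Collect all labels with positive probability mass (assumed to form the set):
p̂ = predict(mach, Xtest)[1]
C = [c for c in classes(p̂) if pdf(p̂, c) > 0]

For the test sample above this should return a single-element set containing the class labelled 0, in line with the output shown.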

When specifying a coverage rate very close to one, the prediction set will typically include many (in some cases all) of the possible labels. Below, for example, both classes are included in the prediction set when setting the coverage rate equal to \((1-\alpha)\)=1.0. This is intuitive, since high coverage quite literally requires that the true label is covered by the prediction set with high probability.

Code
coverage = 1.0
conf_model = conformal_model(model; coverage=coverage)
mach = machine(conf_model, X, y)
fit!(mach, rows=train)

# Conformal Prediction:
Xtest = (x1=[1],x2=[0])
predict(mach, Xtest)[1]
           UnivariateFinite{Multiclass{2}}      
     β”Œ                                        ┐ 
   0 ─■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.5   
   1 ─■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.5   
     β””                                        β”˜ 

Conversely, for low coverage rates, prediction sets can also be empty. For a choice of \((1-\alpha)\)=0.1, for example, the prediction set for our test sample is empty. This is a bit difficult to think about intuitively and I have not yet come across a satisfactory, intuitive interpretation.2 When the prediction set is empty, the predict call currently returns missing:

Code
coverage = 0.1
conf_model = conformal_model(model; coverage=coverage)
mach = machine(conf_model, X, y)
fit!(mach, rows=train)

# Conformal Prediction:
predict(mach, Xtest)[1]
missing
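Downstream code may want to handle this case explicitly; a small sketch of what that could look like:

Code
ŷ = predict(mach, Xtest)[1]
if ismissing(ŷ)
    # Empty prediction set: abstain or fall back to the atomic model.
    @info "Empty prediction set at this coverage rate."
else
    @info "Prediction set:" ŷ
end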

Figure 1 should provide some more intuition as to what exactly is happening here. It illustrates the effect of the chosen coverage rate on the predicted softmax output and the set size in the two-dimensional feature space. Contours are overlaid with the moon data points (including test data). The two samples highlighted in red, \(X_1\) and \(X_2\), have been manually added for illustration purposes. Let’s look at these one by one.

Firstly, note that \(X_1\) (red cross) falls into a region of the domain that is characterized by high predictive uncertainty. It sits right at the bottom-right corner of our class-zero moon 🌜 (orange), a region that is almost entirely enveloped by our class-one moon πŸŒ› (green). For low coverage rates the prediction set for \(X_1\) is empty: on the left-hand side this is indicated by the missing contour for the softmax probability; on the right-hand side we can observe that the corresponding set size is indeed zero. For high coverage rates the prediction set includes both \(y=0\) and \(y=1\), indicative of the fact that the conformal classifier is uncertain about the true label.

With respect to \(X_2\), we observe that while also sitting on the fringe of our class-zero moon, this sample populates a region that is not fully enveloped by data points from the opposite class. In this region, the underlying atomic classifier can be expected to be more certain about its predictions, but still not highly confident. How is this reflected by our corresponding conformal prediction sets?

Code
Xtest_2 = (x1=[-0.5],x2=[0.25])
cov_ = .9
conf_model = conformal_model(model; coverage=cov_)
mach = machine(conf_model, X, y)
fit!(mach, rows=train)
pΜ‚_2 = pdf(predict(mach, Xtest_2)[1], 0)

Well, for low coverage rates (roughly \(<0.9\)) the conformal prediction set does not include \(y=0\): the set size is zero (right panel). Only for higher coverage rates do we have \(C(X_2)=\{0\}\): the coverage rate is high enough to include \(y=0\), but the corresponding softmax probability is still fairly low. For example, for \((1-\alpha)=0.9\) we have \(\hat{p}(y=0|X_2)=0.72.\)

These two examples illustrate an interesting point: for regions characterised by high predictive uncertainty, conformal prediction sets are typically empty (for low coverage) or large (for high coverage). While set-valued predictions may be something to get used to, this notion is overall intuitive.

Code
using Plots

# Setup (`contourf_cp` below is a small plotting helper not shown here):
coverages = range(0.75,1.0,length=5)
n = 100
x1_range = range(extrema(X.x1)...,length=n)
x2_range = range(extrema(X.x2)...,length=n)

anim = @animate for coverage in coverages
    conf_model = conformal_model(model; coverage=coverage)
    mach = machine(conf_model, X, y)
    fit!(mach, rows=train)
    p1 = contourf_cp(mach, x1_range, x2_range; type=:proba, title="Softmax", axis=nothing)
    scatter!(p1, X.x1, X.x2, group=y, ms=2, msw=0, alpha=0.75)
    scatter!(p1, Xtest.x1, Xtest.x2, ms=6, c=:red, label="X₁", shape=:cross, msw=6)
    scatter!(p1, Xtest_2.x1, Xtest_2.x2, ms=6, c=:red, label="Xβ‚‚", shape=:diamond, msw=6)
    p2 = contourf_cp(mach, x1_range, x2_range; type=:set_size, title="Set size", axis=nothing)
    scatter!(p2, X.x1, X.x2, group=y, ms=2, msw=0, alpha=0.75)
    scatter!(p2, Xtest.x1, Xtest.x2, ms=6, c=:red, label="X₁", shape=:cross, msw=6)
    scatter!(p2, Xtest_2.x1, Xtest_2.x2, ms=6, c=:red, label="Xβ‚‚", shape=:diamond, msw=6)
    plot(p1, p2, plot_title="(1-Ξ±)=$(round(coverage,digits=2))", size=(800,300))
end

gif(anim, fps=0.5)
Figure 1: The effect of the coverage rate on the conformal prediction set. Softmax probabilities are shown on the left. The size of the prediction set is shown on the right.

🏁 Conclusion

This has really been a whistle-stop tour of Conformal Prediction: an active area of research that probably deserves much more attention. Hopefully, though, this post has helped to provide some color and, if anything, made you more curious about the topic. Let’s recap the TL;DR from above:

  1. Conformal Prediction is an interesting frequentist approach to uncertainty quantification that can even be combined with Bayes (Section 1).
  2. It is scalable and model-agnostic and therefore well applicable to machine learning (Section 1).
  3. ConformalPrediction.jl implements CP in pure Julia and can be used with any supervised model available from MLJ.jl (Section 2).
  4. Implementing CP directly on top of an existing, powerful machine learning toolkit demonstrates the potential usefulness of this framework to the ML community (Section 2).
  5. Standard conformal classifiers produce set-valued predictions: for ambiguous samples these sets are typically large (for high coverage) or empty (for low coverage) (Section 2.1).

Below I will leave you with some further resources.

πŸ“š Further Resources

Chances are that you have already come across the Awesome Conformal Prediction repo: Manokhin (2022) provides a comprehensive, up-to-date overview of resources related to conformal prediction. Among the listed articles you will also find Angelopoulos and Bates (2022), which inspired much of this post. The repo also points to open-source implementations in other popular programming languages including Python and R.

References

Angelopoulos, Anastasios N., and Stephen Bates. 2022. β€œA Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification.” https://arxiv.org/abs/2107.07511.
Blaom, Anthony D., Franz Kiraly, Thibaut Lienart, Yiannis Simillides, Diego Arenas, and Sebastian J. Vollmer. 2020. β€œMLJ: A Julia Package for Composable Machine Learning.” Journal of Open Source Software 5 (55): 2704. https://doi.org/10.21105/joss.02704.
Hoff, Peter. 2021. β€œBayes-Optimal Prediction with Frequentist Coverage Control.” https://doi.org/10.3150/22-bej1484.
Houlsby, Neil, Ferenc HuszΓ‘r, Zoubin Ghahramani, and MΓ‘tΓ© Lengyel. 2011. β€œBayesian Active Learning for Classification and Preference Learning.” https://arxiv.org/abs/1112.5745.
Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. β€œSimple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” Advances in Neural Information Processing Systems 30.
Manokhin, Valery. 2022. β€œAwesome Conformal Prediction.” Zenodo. https://doi.org/10.5281/zenodo.6467205.
Stanton, Samuel, Wesley Maddox, and Andrew Gordon Wilson. 2022. β€œBayesian Optimization with Conformal Coverage Guarantees.” https://arxiv.org/abs/2210.12496.

Footnotes

  1. In other places split conformal prediction is sometimes referred to as inductive conformal prediction.β†©οΈŽ

  2. Any thoughts/comments welcome!β†©οΈŽ

Citation

BibTeX citation:
@online{altmeyer2022,
  author = {Altmeyer, Patrick},
  title = {Conformal {Prediction} in {Julia} πŸŸ£πŸ”΄πŸŸ’},
  date = {2022-10-25},
  url = {https://www.paltmeyer.com/blog//blog/posts/conformal-prediction},
  langid = {en}
}
For attribution, please cite this work as:
Altmeyer, Patrick. 2022. β€œConformal Prediction in Julia πŸŸ£πŸ”΄πŸŸ’.” October 25, 2022. https://www.paltmeyer.com/blog//blog/posts/conformal-prediction.