Understanding the pitfalls and complexities we face when AI models make decisions

Introduction

Are we building overconfident models?

Navigating the world with the help of AI has become as common as using a smartphone. We rely on these advanced algorithms to suggest the fastest route home, to offer medical diagnoses, and even to invest our money. But what happens when AI, our digital co-pilot, leads us astray?

Consider the implications of an AI system in healthcare that doesn't ask for a second opinion, or an autonomous vehicle that turns into oncoming traffic. These are not mere inconveniences; they can be life-threatening.

Let's take a closer look at the overconfidence phenomenon in AI models and the real-world impacts that may pose broader implications for trust in technology.

Definition

What is model uncertainty?

Uncertainty in deep learning acknowledges that AI predictions are not infallible. It stems from limitations in the data, potential overfitting, and how well the model can adapt to new situations. Essentially, it's an admission that AI outputs come with a "confidence range," and our ability to identify and measure this uncertainty is vital as we continue to apply AI output to critical everyday tasks.
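
As a rough illustration (a hypothetical stand-in model, not one from this article), a plain softmax classifier always returns a full probability distribution, even for input it has never seen, so its top probability alone tells us little about how much to trust it:

```python
# Minimal sketch (hypothetical untrained classifier): a softmax network produces
# a full probability distribution even for pure-noise input, so the top
# probability by itself is a poor measure of how much we should trust the model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in digit classifier
noise = torch.randn(1, 1, 28, 28)                            # input the model has never seen

probs = torch.softmax(model(noise), dim=-1)
print(f"Predicted class: {probs.argmax().item()}, 'confidence': {probs.max().item():.2%}")
```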

Demonstration

Let's take a look at an example

Let's take a scenario where we use a model designed to identify numbers. We present the model with a number and ask it to determine what it thinks it is. In this case, we compare two models: one that utilizes prior networks with uncertainty quantification, giving us a sense of how sure it is about its guesses, and another using a traditional network that doesn't provide this extra layer of insight into its own confidence levels.

Here, we can select the number for the model to test in addition to an "Angle." This angle will rotate the number before giving it to the model to test.

You'll see that the traditional model is often exceedingly confident, even when guessing the incorrect number. On the flip side, the "uncertainty model" is quite a bit more hesitant regarding its own predictions.


Uncertainty model: The image is an 8 but I'm only 0.34% certain

Traditional model: The image is a 2 but I'm only 41.39% certain
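
For readers who want to reproduce this demonstration outside the interactive demo, here is a rough sketch (model names and the evidence transform are placeholder assumptions, not the article's exact code) of rotating a digit and asking each kind of model for a prediction and a confidence:

```python
# Rough sketch of the rotation demo (names are placeholders, not the article's code):
# rotate a digit image by a chosen angle and compare how confident each model is.
import torch
import torchvision.transforms.functional as TF

def predict_with_confidence(model, image, is_dirichlet=False):
    """Return (predicted class, confidence) for a single (1, 28, 28) image."""
    logits = model(image.unsqueeze(0))
    if is_dirichlet:
        # Prior/evidential network: outputs parameterise a Dirichlet via non-negative evidence.
        alpha = torch.relu(logits) + 1.0                   # Dirichlet concentration parameters
        probs = alpha / alpha.sum(dim=-1, keepdim=True)    # expected class probabilities
    else:
        probs = torch.softmax(logits, dim=-1)              # traditional softmax confidence
    conf, pred = probs.max(dim=-1)
    return pred.item(), conf.item()

# digit = ...  # a (1, 28, 28) tensor of the chosen number; both models assumed pre-trained
# rotated = TF.rotate(digit, angle=120)                    # rotate before testing, as in the demo
# print(predict_with_confidence(traditional_model, rotated))
# print(predict_with_confidence(uncertainty_model, rotated, is_dirichlet=True))
```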

I don't know!

Learning to say "I don't know"

We should want our models to admit "I don't know" when encountering something they haven't seen before. In deep learning terms, this translates to incorporating uncertainty quantification—a method that assesses a model's confidence level when making predictions on fresh, unseen data. It's about giving the model a way to express its own reliability on new inputs.
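
Concretely, prior networks make this possible because the Dirichlet parameters they output carry an explicit "I don't know" mass. A minimal sketch of one common formulation, the belief/uncertainty split used in evidential deep learning (discussed later in this article):

```python
# Minimal sketch of how a prior/evidential network can "say I don't know":
# alpha are the Dirichlet concentration parameters output by the model, and the
# leftover mass K / S is reserved for "I don't know" (1 means total ignorance).
import torch

def dirichlet_uncertainty(alpha: torch.Tensor):
    """alpha: (K,) Dirichlet parameters for one input, with alpha_k >= 1."""
    K = alpha.numel()
    S = alpha.sum()                      # total Dirichlet strength
    probs = alpha / S                    # expected class probabilities
    uncertainty = K / S                  # "I don't know" mass
    return probs, uncertainty.item()

# A confident in-distribution prediction vs. a completely uncertain one:
print(dirichlet_uncertainty(torch.tensor([1., 1., 50.])))  # low uncertainty
print(dirichlet_uncertainty(torch.tensor([1., 1., 1.])))   # uncertainty = 1: "I don't know"
```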

Comparing

Comparing the models

Imagine a scenario in which traditional models, accustomed to dealing with familiar data, encounter something entirely unexpected. Picture a situation where these models are shown an image of a shoe, yet they are only trained to recognize digits.

When comparing traditional models to those trained to quantify uncertainty, their approach to errors and confidence levels is markedly different. For instance, a traditional model misidentifies a shoe as the digit 2 with a 99.66% confidence level, showcasing severe overconfidence. This is problematic, especially in critical situations where accuracy is crucial.

In contrast, a model trained on uncertainty quantification responds much more cautiously to the same image, offering a mere 2.17% confidence in its prediction that it is the digit 2. This cautious approach highlights a significant advantage: it avoids the pitfalls of overconfidence by acknowledging its own limitations, making it a safer choice for applications requiring high reliability and safety.
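
In practice, that caution is what lets a system defer instead of acting on a wrong guess. A hypothetical decision rule (the threshold value is illustrative, not taken from our experiments) might look like this:

```python
# Hypothetical rejection rule: act on the prediction only when the model's
# confidence clears a threshold; otherwise hand the case off ("I don't know").
def decide(prediction: int, confidence: float, threshold: float = 0.5):
    return prediction if confidence >= threshold else "I don't know"

decide(2, 0.9966)  # overconfident traditional model: acts on the wrong guess
decide(2, 0.0217)  # uncertainty model: defers with "I don't know"
```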


Uncertainty model: The image is a 2 but I'm only 2.17% certain

Traditional model: The image is a 2 and I'm 99.66% certain

Papers used

Prior Networks

To quantify uncertainty, we used the loss functions described in two papers: "Information Aware max-norm Dirichlet networks for predictive uncertainty estimation" [1] and "Evidential Deep Learning to Quantify Classification Uncertainty" [2].

Both papers introduce the idea of learning a Dirichlet distribution over class probabilities rather than learning the probabilities directly. Learning a Dirichlet distribution in this way falls into a category of uncertainty-aware approaches known as "prior networks," which come with some nice properties (a minimal loss sketch follows the list below):

  • Simple implementation
  • No extra inference cost
  • Accuracy that nearly matches traditional networks
  • No changes to the core model architecture
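
Here is that sketch: one of the Dirichlet losses from [2], the expected cross-entropy term without the KL regulariser the paper adds. It illustrates the idea and is not the exact code behind our experiments:

```python
# Minimal sketch of a Dirichlet (evidential) loss from [2]: the network's outputs
# are turned into Dirichlet parameters alpha, and the loss is the expected
# cross-entropy under that Dirichlet, computed against one-hot labels.
import torch

def dirichlet_cross_entropy(logits: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
    """logits, y_onehot: (batch, K). Returns the mean expected cross-entropy."""
    evidence = torch.relu(logits)                  # non-negative evidence per class
    alpha = evidence + 1.0                         # Dirichlet concentration parameters
    S = alpha.sum(dim=-1, keepdim=True)            # Dirichlet strength
    loss = (y_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=-1)
    return loss.mean()
```
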
Other models

Monte Carlo Dropout

A comparison with another popular technique, Monte Carlo Dropout[3], highlights the strengths of prior networks. Monte Carlo Dropout necessitates the inclusion of dropout layers in the model, limiting architectural flexibility. Moreover, it requires multiple runs of each example through the network to obtain uncertainty scores, potentially increasing inference costs by up to ten times. In contrast, prior networks maintain zero extra inference cost, impose no architectural constraints, and are implemented via a straightforward modification of the cross-entropy loss function.
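
For contrast, a minimal sketch of Monte Carlo Dropout inference (stand-in model, T chosen arbitrarily) shows where the extra cost comes from: the same input is passed through the network T times with dropout left on.

```python
# Sketch of Monte Carlo Dropout inference [3]: the model must contain dropout
# layers, dropout stays active at test time, and each input is run T times,
# multiplying inference cost by roughly T.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(128, 10))   # stand-in classifier

def mc_dropout_predict(model, x, T=10):
    model.train()                                   # keep dropout stochastic at test time
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return samples.mean(dim=0), samples.std(dim=0)  # mean prediction and its spread

mean_probs, spread = mc_dropout_predict(model, torch.randn(1, 1, 28, 28))
```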

Conclusion

Encouraging the Use of Prior Networks

Considering these advantages, we encourage researchers and practitioners to delve deeper into the potential of prior networks. In our experiments, employing the Dirichlet loss function resulted in models that nearly matched the accuracy of traditional networks while also bringing the critical advantage of more precise uncertainty quantification to the table. Reliable uncertainty estimates are a crucial aspect of many scenarios across the landscape of predictive modeling.

