Keeping humans in the loop with XAI

What place does explainability (XAI) occupy in Machine Learning and Data Science today? For the last ten years, the main challenge in data science has been to find the right "algorithmic recipe" to create ML models that are ever more powerful, more complex, and therefore less and less understandable.

14/09/2022

Trusted AI


AI systems based on these algorithms often make predictions without anyone, whether the designer or the user of the AI, being able to understand what motivates them. These systems are nicknamed "black boxes" because of the opacity of the data processing that takes place inside them.

The phenomenon of black boxes, and the natural distrust they generate, is a major obstacle to the adoption of AI. R&D efforts on trusted AI must therefore focus on finding a form of explainability for every algorithm, even the most complex ones.

Among the actions needed to create trustworthy AI (explainability, confidentiality, frugality, fairness), XAI is of particular importance because it concerns the model as a whole, including both algorithms and data. Trust then becomes the responsibility of the creator of the AI: the person who designed the model and chose the algorithm.

Many software publishers whose solutions are used maliciously take refuge behind the excuse of the neutrality of their algorithm. They want to place the blame on the end user and their use of the tool, thereby relieving themselves of any ethical responsibility.

Social networks are the most eloquent example. They prefer not to have to explain their algorithms, even if it means losing control, rather than take on ethical responsibility for their creations. The very purpose of explainability is to prevent these excesses and to enable trustworthy AI.

In addition to these ethical and regulatory aspects, explainability has above all an operational interest. The example of the autonomous car is undoubtedly the most telling illustration of what explainability (XAI) can and should bring to our daily lives. Whether from the point of view of the designer or the end user, XAI gives access to the three levels of understanding of AI:

  • Understanding: knowing in detail what the AI does and what influences it.
  • Mastery: knowing what the AI can do.
  • Trust: knowing what the AI cannot do.

Benefits of XAI from a designer's perspective

The process of creating a machine learning model most often consists of three phases, design, training and evaluation, iterated until the model is "frozen" as desired and put into production.

Explainability here reveals its first major operational benefit. With a better understanding of the algorithmic recipe, designers may choose not to put their model into production if the explanations are not satisfactory, despite good statistical results. They can then relaunch a "design, training, evaluation" iteration and avoid deploying an unreliable AI.
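This "explanation gate" before production can be sketched with scikit-learn's permutation importance. The dataset, accuracy threshold, and set of "expected features" below are illustrative assumptions standing in for domain expertise, not a prescribed recipe.

```python
# Sketch of a "design, training, evaluation" iteration with an explanation
# gate: good statistics alone are not enough to freeze the model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# Explainability step: which features actually drive the predictions?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top_features = X.columns[result.importances_mean.argsort()[::-1][:5]]

# Hypothetical expert knowledge: features we would expect to matter.
expected = {"worst radius", "worst perimeter", "worst area",
            "worst concave points", "mean concave points"}

# Deployment gate: freeze the model only if the statistics are good AND
# the explanation matches what the domain expert expects.
deploy = accuracy > 0.9 and bool(expected & set(top_features))
print(f"accuracy={accuracy:.2f}, top features={list(top_features)}, deploy={deploy}")
```

If the gate fails, the designer loops back to the design phase instead of shipping a model whose behavior they cannot justify.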

XAI makes it possible to understand what the algorithm does during the training phase and offers the opportunity to:

  1. correct what it does (understanding),
  2. know its potential, what it can do (mastery), and
  3. see its limits (trust).

XAI makes it possible to verify that the model being built corresponds to what was intended to be modeled, and thus helps accelerate the development phase by allowing Data Scientists to "freeze" the model with confidence and put it into production.

Once deployed in production, the model is confronted with dynamic data, and explainability becomes necessary again in order to:

  1. Debug and understand design errors that could not surface from simply training the model on static data (understanding).
  2. Understand how the model makes its prediction (mastery). Take the example of an autonomous car that stops at a stop sign. Of course, the model must make the car stop every time, without exception. But it is essential to understand how the car stops at the sign, and what its level of understanding is. Does it recognize the stop sign itself, or will it stop at any red sign? Can it adapt to unknown data, such as a degraded sign (stickers, deformation, etc.)?
  3. Know what the machine learning model cannot do and how it will react to unknown data (trust). Explainability makes it possible to be confident that the model will respond well to a case not anticipated during training, where a simple statistical evaluation cannot help. To continue with the autonomous-car example, several cases of cars stopping for no reason in the middle of a lane have been reported. The only parameter common to these cases is that they all happened at night. Statistical evaluation ends there; XAI, however, made it possible to understand that the AI in these cars confused the moon with traffic lights.
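One concrete way to ask "how does the model make its prediction?" for a single production input is to trace the decision path of a tree-based model: the sequence of rules it actually applied, analogous to checking whether the car recognizes the stop sign itself or merely reacts to "anything red". A minimal sketch with scikit-learn, where the dataset and model are illustrative:

```python
# Trace which rules a decision tree applied to one specific input.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[:1]                            # one "production" input
node_indicator = tree.decision_path(sample)
leaf_id = tree.apply(sample)[0]

# Walk the path from root to leaf and print the rule tested at each node.
for node_id in node_indicator.indices:
    if node_id == leaf_id:
        continue  # the leaf holds the final decision, not a rule
    feat = tree.tree_.feature[node_id]
    thr = tree.tree_.threshold[node_id]
    op = "<=" if sample[0, feat] <= thr else ">"
    print(f"node {node_id}: {feature_names[feat]} {op} {thr:.2f}")

prediction = tree.predict(sample)[0]
print("prediction:", prediction)
```

Reading the path tells the designer *why* this input received this prediction, which is exactly the level of insight a purely statistical evaluation cannot provide.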

Benefits of XAI from a user's perspective

From the point of view of the end user, explainability is also of major interest. We believe that XAI has a decisive role in the adoption of AI by the general public, by enabling trusted AI and by streamlining the user experience. Adoption by as many people as possible is an essential step in the development of AI today.

If software companies want to continue promoting artificial intelligence, they need to open the black box and explain how it works. End users must be given elements of understanding to keep them in the decision loop: should they follow the AI's predictions or recommendations, or not?

Just like designers, AI users need access to all three levels of AI understanding:

  • Understanding: knowing in detail what the AI does and what influences it.
  • Mastery: knowing what the AI can do.
  • Trust: knowing what the AI cannot do.

Understanding AI

To understand a service using artificial intelligence, each user must be in a position to question the decisions made by the algorithm.

Keeping humans in the AI loop is possible; it requires the use of XAI methods.
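The simplest form of such a loop is a deferral policy: the AI's recommendation is applied automatically only when its confidence is high, and every other case is escalated to a person. A minimal sketch, where the confidence threshold and the model are illustrative assumptions to be tuned per use case:

```python
# Minimal human-in-the-loop sketch: low-confidence predictions are
# escalated to a human reviewer instead of being applied automatically.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

CONFIDENCE_THRESHOLD = 0.9  # assumed policy value, tuned per use case

def decide(sample):
    """Return (route, label, confidence) for one input."""
    proba = model.predict_proba([sample])[0]
    confidence = proba.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", int(proba.argmax()), confidence)
    # Below the threshold, the human stays in the decision loop.
    return ("human_review", None, confidence)

route, label, conf = decide(X[0])
print(route, label, round(float(conf), 3))
```

In a real system the escalated cases would be shown to the reviewer together with an explanation of the prediction, so the human can decide with the same elements of understanding the model used.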

Mastering AI

To master artificial intelligence, you need to know what it can do and understand the interactions between data and their causes:

  • Knowing that my autonomous car will keep to the right or to the left depending on the country I am in.
  • Understanding the factors that led to a recommendation.
  • Knowing that the assignment of a schedule or a course was made fairly.
  • Being able to act on my energy consumption to save money and pollute less.

The challenge of mastering AI is to create a fruitful complementarity between humans and algorithms. Augmenting humans with machines can only be envisaged in the context of trusted AI, made possible in particular by explainability.

Trusting AI

XAI is the pillar of trusted AI, alongside fairness (removing biases and discrimination from data), frugality (minimizing the energy required to train models), and confidentiality (respecting privacy and data confidentiality).

Trust is based on tangible evidence; it is the result of understanding and mastery.

An ML model is built on a limited set of past data. Having explanations about how it works makes it possible to trust its behavior in production, especially when it is confronted with data it has never seen.

It is the role of explainability to go beyond statistical guarantees to convince as many people as possible that a trustworthy AI, which keeps humans at the heart of technology, is possible and already operational.

Building trust to encourage adoption is also the approach taken by the European Union. In order to create a safe environment favourable to innovation, the European Commission proposed in 2021 a set of initiatives that will help strengthen trusted AI and make explainability the norm in artificial intelligence.
