Trusted AI: can an LLM be explained?

17/10/2023

The explosion in the use of generative AI raises a major question: can the results of AI be trusted and understood?

💡 Description
The race toward ever more powerful LLMs that we have witnessed in recent months makes the explainability of algorithms as crucial as it is complicated: can we still understand the reasoning of a model with several tens of billions of parameters?
While Shapley Values have established themselves as the standard for explainability in Machine Learning (see the sketch below), their usefulness vanishes on a model as complex as an LLM. The very nature of the explanation sought for an LLM differs from what is sought for a classification-type model.
So how can you tell whether a Large Language Model is acting like a “parrot”, repeating what it has learned, or whether its answer stems from a genuinely acquired concept?
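For reference, here is a minimal sketch of the kind of Shapley-value attribution that works well on a classical classification model, using the open-source `shap` library with a scikit-learn random forest (both assumed installed; the dataset and model choices are illustrative, not from the webinar):

```python
# Minimal sketch: Shapley-value explanations on a classical classifier.
# Assumes `shap` and `scikit-learn` are installed (pip install shap scikit-learn).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a small tree ensemble on a tabular dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles:
# each prediction is decomposed into one additive contribution per feature
# (and per class for a classifier).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])

# The per-feature contributions plus the explainer's expected value sum
# back to the model's output, which is what makes them interpretable.
print(explainer.expected_value)
print(shap_values)
```

This additive, per-feature decomposition is tractable for a few dozen tabular features, but it does not carry over to thousands of tokens flowing through billions of parameters, which is precisely the gap this webinar addresses.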

⏰ Agenda
14:00-14:10: Introduction
14:10-14:30: Presentation
14:30-14:45: Q&A

🎙 Speakers
Bastien Zimmermann, R&D Engineer at Craft AI
Hélen d'Argentré, Head of Marketing at Craft AI

👋 About
A French startup pioneering trusted AI, Craft AI specializes in the industrialization of artificial intelligence projects. For 8 years, Craft AI has developed unique technological expertise in operationalizing Machine & Deep Learning models. In particular, the company enables its customers to develop, specialize and operate their own generative AIs. Finally, it promotes a responsible vision of AI: energy-efficient, explainable and respectful of privacy.
