Mario Günther (LMU Munich)
When Should We Attribute Beliefs to AI Systems?
11:30 – 13:00 CET
Abstract
How should we explain a decision made by an AI system to a layperson? It has been suggested that an AI system can be seen as a rational agent whose behavior is explainable in terms of beliefs and desires. This answer requires a theory that allows us to attribute beliefs to an AI system in a justified way. Here we propose such a theory of belief attribution, one that helps the layperson understand how a given AI system works at a level that abstracts away from its underlying probability calculations. We argue that the resulting explanations engender trust in AI systems where trust is appropriate.