
Evolution of inherently interpretable multimodal programs

May 27, 2025

12:45 PM
Manufacture des Tabacs
Salle MH003

Camilo De la Torre - IRIT, REVA team

Abstract: Vision-based decision-making tasks encompass a wide range of applications, including safety-critical domains where trustworthiness is as important as performance. These tasks are often addressed using Deep Reinforcement Learning (DRL) techniques, based on Artificial Neural Networks (ANNs), to automate sequential decision-making. However, the “black-box” nature of ANNs limits their applicability in these settings, where transparency and accountability are essential. To address this, various explanation methods have been proposed; however, they often fall short of fully elucidating the decision-making pipeline of ANNs, a critical aspect for ensuring reliability in safety-critical applications. To bridge this gap, we propose an approach based on Graph-based Genetic Programming (GGP) to generate transparent policies for vision-based control tasks. Our evolved policies are constrained in size and composed of simple, well-understood operational modules, enabling inherent interpretability. We evaluate our method on three Atari games, comparing explanations derived from common explainability techniques to those derived from interpreting the agent’s true computational graph. We demonstrate that interpretable policies offer a more complete view of the decision process than explainability methods, enabling a full comprehension of competitive game-playing policies.
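To give a flavor of the kind of technique the abstract describes, the sketch below is a minimal, illustrative graph-based genetic program: a size-constrained computational graph of simple arithmetic primitives, evolved by a (1+λ) mutation-only loop. All names (`OPS`, `N_NODES`, the toy fitness cases) are assumptions for this example, not the speaker's actual method, which operates on Atari visual inputs.

```python
import random

# Illustrative primitive set: simple, well-understood modules (an assumption,
# not the primitives used in the talk's actual system).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "max": lambda a, b: max(a, b),
}

N_NODES = 8  # size constraint on the evolved graph


def random_node(idx, n_inputs):
    """A node applies one primitive to two earlier values (inputs or prior nodes)."""
    op = random.choice(list(OPS))
    return (op, random.randrange(n_inputs + idx), random.randrange(n_inputs + idx))


def random_genome(n_inputs):
    return [random_node(i, n_inputs) for i in range(N_NODES)]


def execute(genome, inputs):
    """Evaluate the graph in topological order; the last node is the output."""
    vals = list(inputs)
    for op, a, b in genome:
        vals.append(OPS[op](vals[a], vals[b]))
    return vals[-1]


def mutate(genome, n_inputs, rate=0.2):
    """Point mutation: resample each node with probability `rate`."""
    return [
        random_node(i, n_inputs) if random.random() < rate else node
        for i, node in enumerate(genome)
    ]


def fitness(genome, cases):
    """Negative total error over (inputs, target) pairs; 0 is a perfect fit."""
    return -sum(abs(execute(genome, x) - y) for x, y in cases)


def evolve(cases, n_inputs, generations=300, lam=4, seed=0):
    """(1+λ) evolution strategy: keep the best of parent and λ mutants."""
    random.seed(seed)
    parent = random_genome(n_inputs)
    best = fitness(parent, cases)
    for _ in range(generations):
        for _ in range(lam):
            child = mutate(parent, n_inputs)
            f = fitness(child, cases)
            if f >= best:
                parent, best = child, f
    return parent, best
```

Because the final genome is just a short list of named primitives wired to explicit inputs, it can be read off directly as the agent's true computational graph, which is the sense in which such policies are inherently interpretable.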
Updated May 19, 2025