
Evolutionary Synthesis of Interpretable Control Policies for Complex Systems

24 March 2026

12:45 pm
Manufacture des Tabacs
Room MF103

Giorgia Nadizar

Automated control policies that determine actions in response to changing conditions play a central role in many real-world systems. Reinforcement learning methods based on artificial neural networks have achieved strong performance in such tasks, but their decision-making processes are often opaque, making them difficult to interpret, validate, and trust, particularly in safety-critical domains such as healthcare, transportation, or aerospace.
In my research, I investigate interpretable learning-based control methods using evolutionary computation, with a particular focus on genetic programming. This approach evolves control policies represented as symbolic expressions, graphs, or programs composed of human-readable building blocks. Such representations provide decomposability and transparency, enabling complex policies to be analyzed and understood in ways that are typically not possible with neural networks.
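To give a concrete flavor of the idea, here is a minimal, purely illustrative sketch (not the speaker's actual system) of genetic-programming-style search: a (1+1) evolutionary hill climber evolves a human-readable symbolic policy u = f(p, v) over expression trees, here fitted to imitate a hypothetical PD controller u* = -1.0*p - 0.5*v. All names and parameters are invented for the example.

```python
import random

# Building blocks of the symbolic policies: binary operators and terminals.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
TERMS = ["p", "v", -1.0, -0.5, 0.5, 1.0]  # state variables and constants

def random_tree(depth=2):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, p, v):
    """Interpret an expression tree on state (p, v)."""
    if tree == "p":
        return p
    if tree == "v":
        return v
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, p, v), evaluate(right, p, v))

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def fitness(tree, samples):
    """Mean squared error against the target controller u* = -p - 0.5*v."""
    return sum((evaluate(tree, p, v) - (-1.0 * p - 0.5 * v)) ** 2
               for p, v in samples) / len(samples)

def evolve(generations=500, seed=0):
    """(1+1) hill climbing: keep the child whenever it is no worse."""
    random.seed(seed)
    samples = [(random.uniform(-1, 1), random.uniform(-1, 1))
               for _ in range(50)]
    best = random_tree(3)
    best_f = fitness(best, samples)
    for _ in range(generations):
        child = mutate(best)
        f = fitness(child, samples)
        if f <= best_f:
            best, best_f = child, f
    return best, best_f
```

The evolved `best` is an ordinary nested tuple such as `("-", ("*", -1.0, "p"), ("*", 0.5, "v"))`, which can be read, simplified, and audited directly; this decomposability is what distinguishes such policies from neural-network controllers.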
In this talk, I will present results obtained on several benchmark problems, including continuous robot control and visual decision-making tasks, where interpretable policies can achieve competitive performance. I will then discuss key limitations that remain, notably scalability to more complex environments and the computational cost of evolutionary search. Finally, I will outline my ongoing and future research directions aimed at improving the efficiency and scalability of interpretable policy synthesis, with the long-term goal of making interpretable controllers a practical alternative to black-box neural policies. 

Updated 11 March 2026