Toward Interpretable Machine Learning

Many of today’s decisions are increasingly made by computer-based systems: from whether a company receives a loan, to whether an individual is granted parole. However, modern machine learning techniques, including deep learning, prioritize predictive accuracy and largely disregard interpretability. This shortcoming has significantly curtailed the adoption of state-of-the-art machine learning techniques in many industries, including healthcare, finance, insurance and law, in which regulations and business practice require transparent, trustworthy and auditable decision support systems.

In this platform we initiate research that forms the basis of the next generation of machine learning techniques, able to rationalize and explain their decisions. We aim to bridge the gap between predictive systems and humans by advancing research toward easily consumable and interpretable predictions. This is a timely and relevant research field, closely aligned with the spirit and imperatives of the GDPR legislation, which calls for decision systems to provide “meaningful explanations”.

UNIL

Michalis Vlachos

IMD

Amit Joshi
Howard Yu
Goutam Challagala

EPFL

Guillaume Obozinski
Pascal Frossard