How should an explanation be? A mapping of technical and legal desiderata of explanations for machine learning models
Comandé
2025-01-01
Abstract
Machine learning (ML) systems are abundant in our world. However, most of these systems are not understandable, which poses several challenges, including to their safety, proper functioning and accountability. Further, ML models are susceptible to social biases, which can lead to unjust and discriminatory situations. The field of eXplainable Artificial Intelligence (XAI) attempts to address these challenges by providing explanation methods for ML models. However, there is still an open debate about the necessary desiderata of such methods, including the often-missing consideration of the legal side of explanation desiderata. In this work, we put forward a set of five technical and five legal desiderata of XAI and develop a multi-layered mapping encompassing the dynamics among and between the two sets and linking them to actual requirements. From the standpoint of legality, we rely on the European requirements of explainability and justificability. In our mapping, we draw the interdependencies and the intersections between the technical and legal desiderata, creating an image that visualises the assessment of the technical and legal driving forces (desiderata matching requirements) in the design and provision of explanations. Ultimately, explainability and justificability desiderata must be systematic, understood as a dynamic, circular and iterative process.
File | Access | Type | License | Size | Format
---|---|---|---|---|---
How should an explanation be A mapping of technical and legal desiderata of explanations.pdf | Open access | Pre-print/Submitted manuscript | Creative Commons | 1.12 MB | Adobe PDF