SOPhiA 2022

Salzburgiense Concilium Omnibus Philosophis Analyticis


Programme - Talk

The Use of Works on Scientific Explanation for Explainable Artificial Intelligence
(Philosophy of Science, English)

European Union regulations have hastened the need for Explainable Artificial Intelligence (XAI), especially through the "right to explanation" (Goodman & Flaxman 2017) in automated high-stakes algorithmic decisions. The early realization of XAI shows that computer scientists initially took the ordinary usage of the term __explanation__ into account, insofar as the focus was on clarifying opaque ML mechanisms through visualization, the "translation" of machine reasoning into flowcharts of human-like inferences (Doran, Schulz, & Besold 2017), and so on (Gunning & Aha 2019).

The problem, however, is that "researchers in artificial intelligence often use epistemological notions in a fast and loose manner that wouldn't pass muster with philosophers" (Páez 2009, 131), and this holds for scientific explanation as well. This gap is being filled by scientists and philosophers alike, from both broad (general philosophy of science and epistemology: Páez 2009, Hoffman, Klein & Mueller 2018, O'Hara 2020, Valentino & Freitas 2022) and narrow (social sciences: Miller 2019; economics: Kaul 2022) perspectives.

Among the neglected points, researchers indicate: the role of abductive reasoning at the initial stage of inquiry, where a surprising fact triggers the search for an explanation and the statement of a problem; the processual view of explaining, which should be accompanied by justifications and the use of counterfactuals and contrast cases; the necessity of a selective phase among many explanatory hypotheses; and the preference for qualitative (especially causal) rather than quantitative appraisal of hypotheses. Some authors insist on the social nature of explanations (Miller 2019) and on pragmatic criteria, namely the close relation between explanation and understanding revealed in the consideration of dependence relations (Páez 2019), and the co-adaptation of user and AI, especially in clarifying boundary conditions (Hoffman, Klein & Mueller 2018, 199).


References

Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3), 50-57.

Gunning, D., & Aha, D. W. (2019). DARPA's explainable artificial intelligence program. AI Magazine, 40(2), 44.

Hoffman, R. R., Klein, G., & Mueller, S. T. (2018). Explaining explanation for "Explainable AI". In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 62, No. 1, pp. 197-201). Sage CA: Los Angeles, CA: SAGE Publications.

Kaul, N. (2022). 3Es for AI: Economics, Explanation, Epistemology. Frontiers in Artificial Intelligence, 5.

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267, 1-38.

O'Hara, K. (2020). Explainable AI and the philosophy and practice of explanation. Computer Law & Security Review, 39, 105474.

Páez, A. (2009). Artificial explanations: the epistemological interpretation of explanation in AI. Synthese, 170(1), 131-146.

Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441-459.

Valentino, M., & Freitas, A. (2022). Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI. arXiv preprint arXiv:2205.01809.


Chair: Wojciech Grabon
Time: 10:00-10:30, 09. September 2022 (Friday)
Location: SR 1.004
Remark: (Online Talk)

Vera Shumilina 
(HSE University, Russia)
