Cited by Lee Sonogan
Abstract by Facchini, Alessandro and Termine, Alberto
The research program of eXplainable AI (XAI) has been developed with the aim of providing tools and methods for reducing opacity and making AI systems more understandable to humans. Unfortunately, most XAI scholars classify a system as more or less opaque by comparing it with traditional rule-based systems, which are usually assumed to be the prototype of transparent systems. In doing so, the concept of opacity itself remains unexplained. To overcome this issue, we propose to view opacity as a pragmatic concept. On this basis, we then make explicit the distinction between structural opacity, link opacity, and informational opacity, thereby providing the groundwork for a conceptual taxonomy for XAI scholars and their practice.
Publication: PhilSci-Archive (preprint archive)
Pub Date: 31 Oct 2021 | URL: http://philsci-archive.pitt.edu/id/eprint/19766
Keywords: Opacity, Explainable AI, Machine Learning, Scientific Understanding
http://philsci-archive.pitt.edu/19766/ (plenty more sections and references in the full research article)