AI3SD Video: Interpreting opacity: understanding gaps in our explanations of artificial neural networks
McNeill, William
be33c4df-0f0e-42bf-8b9b-3c0afe8cb69e
Frey, Jeremy G.
ba60c559-c4af-44f1-87e6-ce69819bf23f
Kanza, Samantha
b73bcf34-3ff8-4691-bd09-aa657dcff420
Niranjan, Mahesan
5cbaeea8-7288-4b55-a89c-c43d212ddd4f
2 March 2022
McNeill, William
(2022)
AI3SD Video: Interpreting opacity: understanding gaps in our explanations of artificial neural networks.
Frey, Jeremy G., Kanza, Samantha and Niranjan, Mahesan
(eds.)
AI4SD Network+ Conference, Chilworth Manor, Southampton, United Kingdom.
01 - 03 Mar 2022.
(doi:10.5258/SOTON/AI3SD0198).
Record type: Conference or Workshop Item (Other)
Abstract
We know everything that goes on within artificial neural networks. We typically know all the data on which such systems have been trained. And designers will also be aware of the various design decisions, training algorithms and techniques that went into their construction. At the same time, leading AI designers tell us that their systems are in some sense uninterpretable, inexplicable or opaque. That’s puzzling. Drawing on discussions in the philosophy of neuroscience, and of science more generally, I will use this puzzle to try to advance our understanding of what explanations we lack with respect to ANNs, and hence of the nature and scope of explanation itself. The puzzle helps us to distinguish between the different phenomena in need of explanation, and to identify some limits to the mechanistic explanatory strategies so often helpfully employed in the cognitive neurosciences.
Video
ai4sd_march_2022_day_2_WillMcNeill
More information
Published date: 2 March 2022
Additional Information:
William has been a lecturer in Philosophy at the University of Southampton since 2016 and is part of the Philosophy of Language, Philosophy of Mind and Epistemology Research Group. Prior to this he lectured at King's College London, the University of York and Cardiff University. His research interests are centred on the epistemology of perception, social cognition and inferential knowledge.
Venue - Dates:
AI4SD Network+ Conference, Chilworth Manor, Southampton, United Kingdom, 2022-03-01 - 2022-03-03
Identifiers
Local EPrints ID: 470014
URI: http://eprints.soton.ac.uk/id/eprint/470014
PURE UUID: 3d575357-3cba-4f68-ae00-8a5f88ddecdc
Catalogue record
Date deposited: 30 Sep 2022 16:38
Last modified: 17 Mar 2024 03:52
Contributors
Editor:
Mahesan Niranjan