AI3SD Video: Neural Networks and Explanatory Opacity
Mcneill, William (2020) AI3SD Video: Neural Networks and Explanatory Opacity. Kanza, Samantha, Frey, Jeremy G., Niranjan, Mahesan and Hooper, Victoria (eds.) AI3SD Summer Seminar Series 2020, Online, Southampton, United Kingdom, 01 Jul - 23 Sep 2020. (doi:10.5258/SOTON/P0049).
Record type: Conference or Workshop Item (Other)
Abstract
Deep artificial neural network (DANN) designers often accept that the systems they construct lack interpretability and transparency – in other words, that they are ‘inexplicable’. It is not obvious what they mean. Explanations, particularly in the neurosciences, are often thought to consist in the mechanisms that underpin observed phenomena. But DANN designers have complete access to the mechanisms underpinning the systems they build – as well as to their training sets, design parameters, training algorithms and so on. In this talk I distinguish various senses of ‘explanation’ – ontic, epistemic, objective, subjective. The aims are (1) to help map out the various questions we might be interested in, (2) to scope the limits of mechanistic approaches to the question of explanation, and (3) to narrow down the sense in which DANNs are supposed to be explanatorily opaque.
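The abstract's central premise – that designers have complete access to a network's underlying mechanism without thereby having an explanation – can be made concrete. Below is a minimal, purely illustrative sketch in NumPy (the toy architecture, weights and input are assumptions for illustration, not anything from the talk): every parameter and intermediate activation of a small network can be printed and inspected, yet the resulting numbers do not by themselves explain the network's behaviour.

```python
# Illustrative sketch only: complete mechanistic access to a tiny
# feedforward network, with nothing that reads as an 'explanation'.
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer network with random weights standing in for a trained
# model (the sizes and values here are arbitrary assumptions).
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    """Compute the output while exposing every intermediate quantity."""
    h = np.tanh(W1 @ x + b1)   # hidden activations: fully observable
    y = W2 @ h + b2            # output: fully observable
    return h, y

x = np.array([0.5, -1.0])
h, y = forward(x)

# 'Complete access to the mechanism': every parameter and activation.
print("weights:", W1, W2, sep="\n")
print("hidden activations:", h)
print("output:", y)
# None of these numbers answers *why* x is mapped to y in any sense a
# human would accept as an explanation - the sense of opacity at issue.
```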
Video
AI3SDOnlineSeminarSeries-4-WM-290720-orig - Version of Record
More information
Published date: 29 July 2020
Additional Information:
William has been a lecturer in Philosophy at the University of Southampton since 2016 and is part of the Philosophy of Language, Philosophy of Mind and Epistemology Research Group. Prior to this he lectured at King's College London, the University of York and Cardiff University. His research interests centre on the epistemology of perception, social cognition and inferential knowledge.
Venue - Dates:
AI3SD Summer Seminar Series 2020, Online, Southampton, United Kingdom, 2020-07-01 - 2020-09-23
Keywords:
AI, AI3SD Event, Artificial Intelligence, Neural Networks
Identifiers
Local EPrints ID: 446698
URI: http://eprints.soton.ac.uk/id/eprint/446698
PURE UUID: e2e52bc8-88a8-4960-9a3b-0f94c0e110de
Catalogue record
Date deposited: 18 Feb 2021 17:30
Last modified: 17 Mar 2024 03:51
Contributors
Editor: Mahesan Niranjan
Editor: Victoria Hooper