AI3SD Video: How can explainable AI help scientific exploration?
Zednik, Carlos (2021) AI3SD Video: How can explainable AI help scientific exploration? In Frey, Jeremy G., Kanza, Samantha and Niranjan, Mahesan (eds.) AI3SD Autumn Seminar Series 2021, 13 Oct - 15 Dec 2021. (doi:10.5258/SOTON/AI3SD0156)
Record type: Conference or Workshop Item (Other)
Abstract
Although models developed using machine learning are increasingly prevalent in scientific research, their opacity poses a threat to their utility. Explainable AI (XAI) aims to diminish this threat by rendering opaque models transparent. But XAI is more than just the solution to a problem: it can also play an invaluable role in scientific exploration. In this talk, I will consider different techniques from Explainable AI to demonstrate their potential contribution to different kinds of exploratory activities. In particular, I argue that XAI tools can be used (1) to better understand what a "big data" model is a model of, (2) to engage in causal inference over high-dimensional nonlinear systems, and (3) to generate algorithmic-level hypotheses in cognitive science and neuroscience.
Video
AI3SDAutumnSeminar-201021-CarlosZednik
- Version of Record
More information
Published date: 20 October 2021
Additional Information:
My research centers on the explanation of natural and artificial cognitive systems. Many of my articles specify norms and best-practice methods for cognitive psychology, neuroscience, and explainable AI. Others develop philosophical concepts and arguments with which to better understand scientific and engineering practice. I am the PI of the DFG-funded project on Generalizability and Simplicity of Mechanistic Explanations in Neuroscience. In addition to my regular research and teaching, I do consulting work on the methodological, normative, and ethical constraints on artificial intelligence, my primary expertise being transparency in machine learning. In this context I have an ongoing relationship with the research team at neurocat GmbH, and have contributed to AI standardization efforts at the German Institute for Standardization (DIN). Before arriving in Eindhoven I was based at the Philosophy-Neuroscience-Cognition program at the University of Magdeburg, and prior to that, at the Institute of Cognitive Science at the University of Osnabrück. I received my PhD from the Indiana University Cognitive Science Program, after receiving a Master's degree in Philosophy of Mind from the University of Warwick and a Bachelor's degree in Computer Science and Philosophy from Cornell University. You can find out more about me on Google Scholar, PhilPapers, Publons, and Twitter.
Venue - Dates:
AI3SD Autumn Seminar Series 2021, 2021-10-13 - 2021-12-15
Keywords:
AI, AI3SD Event, Artificial Intelligence, Chemistry, Data Science, Data Sharing, Machine Learning, ML, Responsible Research, Scientific Discovery
Identifiers
Local EPrints ID: 451912
URI: http://eprints.soton.ac.uk/id/eprint/451912
PURE UUID: db610125-ce5d-481a-90bb-2dbfb7ac3833
Catalogue record
Date deposited: 03 Nov 2021 17:32
Last modified: 17 Mar 2024 03:51
Contributors
Author:
Carlos Zednik
Editor:
Mahesan Niranjan