University of Southampton Institutional Repository

AI3SD Video: How can explainable AI help scientific exploration?

Zednik, Carlos (2021) AI3SD Video: How can explainable AI help scientific exploration? Frey, Jeremy G., Kanza, Samantha and Niranjan, Mahesan (eds.) AI3SD Autumn Seminar Series 2021. 13 Oct - 15 Dec 2021. (doi:10.5258/SOTON/AI3SD0156).

Record type: Conference or Workshop Item (Other)

Abstract

Although models developed using machine learning are increasingly prevalent in scientific research, their opacity poses a threat to their utility. Explainable AI (XAI) aims to diminish this threat by rendering opaque models transparent. But XAI is more than just the solution to a problem: it can also play an invaluable role in scientific exploration. In this talk, I will consider different techniques from explainable AI to demonstrate their potential contribution to different kinds of exploratory activities. In particular, I argue that XAI tools can be used (1) to better understand what a "big data" model is a model of, (2) to engage in causal inference over high-dimensional nonlinear systems, and (3) to generate algorithmic-level hypotheses in cognitive science and neuroscience.
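
The record itself includes no code; as a minimal sketch of the kind of XAI tooling that point (1) gestures at, the example below uses permutation feature importance (a standard attribution technique from scikit-learn's inspection module) to ask which inputs an otherwise opaque model actually depends on. The synthetic dataset, feature names, and choice of model are assumptions made purely for illustration, not material from the talk.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Invented data: only the first two of four features actually drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out performance drops;
# large drops flag the inputs the fitted model genuinely relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["x0", "x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {score:.3f}")

On this toy example, x0 and x1 should dominate the ranking; attribution scores of this kind are one way of clarifying what a trained model is a model of.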

Video: AI3SDAutumnSeminar-201021-CarlosZednik - Version of Record. Available under License Creative Commons Attribution. Download (678MB).
Text: 20102021-AI3SDQA-CZ. Available under License Creative Commons Attribution. Download (59kB).

More information

Published date: 20 October 2021
Additional Information: My research centers on the explanation of natural and artificial cognitive systems. Many of my articles specify norms and best-practice methods for cognitive psychology, neuroscience, and explainable AI. Others develop philosophical concepts and arguments with which to better understand scientific and engineering practice. I am the PI of the DFG-funded project on Generalizability and Simplicity of Mechanistic Explanations in Neuroscience. In addition to my regular research and teaching, I do consulting work on the methodological, normative, and ethical constraints on artificial intelligence, my primary expertise being transparency in machine learning. In this context I have an ongoing relationship with the research team at neurocat GmbH, and have contributed to AI standardization efforts at the German Institute for Standardization (DIN). Before arriving in Eindhoven I was based at the Philosophy-Neuroscience-Cognition program at the University of Magdeburg, and prior to that, at the Institute of Cognitive Science at the University of Osnabrück. I received my PhD from the Indiana University Cognitive Science Program, after receiving a Master's degree in Philosophy of Mind from the University of Warwick and a Bachelor's degree in Computer Science and Philosophy from Cornell University. You can find out more about me on Google Scholar, PhilPapers, Publons, and Twitter.
Venue - Dates: AI3SD Autumn Seminar Series 2021, 2021-10-13 - 2021-12-15
Keywords: AI, AI3SD Event, Artificial Intelligence, Chemistry, Data Science, Data Sharing, Machine Learning, ML, Responsible Research, Scientific Discovery

Identifiers

Local EPrints ID: 451912
URI: http://eprints.soton.ac.uk/id/eprint/451912
PURE UUID: db610125-ce5d-481a-90bb-2dbfb7ac3833
ORCID for Jeremy G. Frey: orcid.org/0000-0003-0842-4302
ORCID for Samantha Kanza: orcid.org/0000-0002-4831-9489
ORCID for Mahesan Niranjan: orcid.org/0000-0001-7021-140X

Catalogue record

Date deposited: 03 Nov 2021 17:32
Last modified: 17 Mar 2024 03:51

Contributors

Author: Carlos Zednik
Editor: Jeremy G. Frey
Editor: Samantha Kanza
Editor: Mahesan Niranjan

Download statistics

Downloads from ePrints over the past year. Other digital versions may also be available to download, e.g. from the publisher's website.


