AI3SD Video: Explainable machine learning for trustworthy AI
Giannotti, Fosca
2a55dd44-44d9-42bc-b2d9-f55e05756278
Frey, Jeremy G.
ba60c559-c4af-44f1-87e6-ce69819bf23f
Kanza, Samantha
b73bcf34-3ff8-4691-bd09-aa657dcff420
Niranjan, Mahesan
5cbaeea8-7288-4b55-a89c-c43d212ddd4f
Giannotti, Fosca (2021) AI3SD Video: Explainable machine learning for trustworthy AI. Frey, Jeremy G., Kanza, Samantha and Niranjan, Mahesan (eds.) AI3SD Autumn Seminar Series 2021, 13 Oct - 15 Dec 2021. (doi:10.5258/SOTON/AI3SD0157).
Record type: Conference or Workshop Item (Other)
Abstract
Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also because the algorithms may inherit biases from human prejudices and collection artifacts hidden in the training data, which can lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied the topic, leading to different definitions, evaluation protocols, motivations, and results. This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date and surveys the literature with a focus on machine learning and symbolic AI approaches. We motivate the need for XAI in real-world, large-scale applications, present state-of-the-art techniques and best practices, and discuss the many open challenges.
Video
AI3SDAutumnSeminar-201021-FoscaGiannotti - Accepted Manuscript
More information
Published date: 20 October 2021
Additional Information:
Fosca Giannotti is a Director of Research in computer science at the Information Science and Technology Institute “A. Faedo” of the National Research Council, Pisa, Italy. She is a pioneering scientist in mobility data mining, social network analysis and privacy-preserving data mining. Fosca leads the Pisa KDD Lab – Knowledge Discovery and Data Mining Laboratory (http://kdd.isti.cnr.it), a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the earliest research labs centered on data mining. Her research focuses on social mining from big data: smart cities, human dynamics, social and economic networks, ethics and trust, and the diffusion of innovations. She has coordinated dozens of European projects and industrial collaborations. Fosca is currently the coordinator of SoBigData (http://www.sobigdata.eu), the European research infrastructure on Big Data Analytics and Social Mining, an ecosystem of ten cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation. She is the PI of the ERC Advanced Grant XAI – Science and technology for the explanation of AI decision making, and a member of the steering board of the CINI-AIIS lab. On March 8, 2019, she was featured as one of 19 inspiring women in AI, Big Data, Data Science and Machine Learning by KDnuggets.com, the leading site on AI, Data Mining and Machine Learning (https://www.kdnuggets.com/2019/03/women-ai-big-data-science-machine-learning.html).
Venue - Dates:
AI3SD Autumn Seminar Series 2021, 2021-10-13 - 2021-12-15
Keywords:
AI, AI3SD Event, Artificial Intelligence, Chemistry, Data Science, Data Sharing, Machine Learning, ML, Responsible Research, Scientific Discovery
Identifiers
Local EPrints ID: 451923
URI: http://eprints.soton.ac.uk/id/eprint/451923
PURE UUID: f0437e22-f5c5-4853-a814-85bfddab26b7
Catalogue record
Date deposited: 03 Nov 2021 17:37
Last modified: 17 Mar 2024 03:51
Contributors
Author:
Fosca Giannotti
Editor:
Mahesan Niranjan