ICAI: The Labs – AI & the Public Sector in NL
This Lunch at ICAI session focuses on AI technology in the public sector. The Cultural AI lab and the Civic AI lab each present their work and discuss the challenges and developments in this field.
The Cultural AI lab is a collaboration between Centrum Wiskunde & Informatica (CWI), the KNAW Humanities Cluster, the National Library of the Netherlands, the Rijksmuseum, the Netherlands Institute for Sound and Vision, TNO, Vrije Universiteit Amsterdam and University of Amsterdam.
The Civic AI lab is a collaboration between the City of Amsterdam, the Ministry of the Interior and Kingdom Relations, the University of Amsterdam (UvA) and the Vrije Universiteit Amsterdam (VU).
12:00 (noon): Introduction to the Cultural AI lab by Jacco van Ossenbruggen (VU)
12:05: Andrei Nesterov (CWI) on ‘Detecting and modelling contentious words in cultural heritage collections’
12:20: Introduction to the Civic AI lab by Sennay Ghebreab (UvA)
12:25: Emma Beauxis-Aussalet (VU) on ‘Modelling and Explaining AI Error and Bias’
12:40: Discussion: what’s next for AI in the public sector?
‘Detecting and modelling contentious words in cultural heritage collections’
One of the key questions in the Cultural AI Lab is how AI can take the complexity of cultural contexts into account and show different perspectives on digitised artefacts in heritage collections. In short: how can AI be “culturally aware”?
As a first step towards answering this question, we investigate statistical and symbolic approaches to dealing with the use of outmoded, inaccurate, and offensive words (which we call contentious) in heritage collections and their descriptions. For example, in which contexts might using the word ‘exotic’ be problematic?
During the ICAI Lunch, we will talk about how we use crowdsourcing and domain expert knowledge in two parallel but interconnected projects: (1) detecting contentious terms in cultural heritage collections (historical newspaper articles) and (2) modelling the usage of such terms in different contexts.
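To make the detection task concrete, here is a minimal, purely illustrative sketch. The lab's actual methods combine crowdsourcing with expert knowledge; this example only shows the simpler lexicon-based half of the problem, flagging candidate contentious terms together with the context they appear in (the lexicon below is a hypothetical stand-in for an expert-curated resource).

```python
import re

# Hypothetical mini-lexicon; the real resource would be curated by domain
# experts and crowd workers, not hard-coded.
CONTENTIOUS_TERMS = {"exotic", "primitive"}

def flag_candidates(text, window=3):
    """Return (term, context) pairs for each lexicon match in `text`.

    The surrounding `window` tokens are kept, because whether a term is
    problematic depends on the context in which it is used.
    """
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok in CONTENTIOUS_TERMS:
            context = " ".join(tokens[max(0, i - window): i + window + 1])
            hits.append((tok, context))
    return hits

print(flag_candidates("A collection of exotic artefacts from the colonies."))
# → [('exotic', 'a collection of exotic artefacts from the')]
```

A detector like this only proposes candidates; deciding whether a given usage is actually contentious is the modelling problem the second project addresses.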
‘Modelling and Explaining AI Error and Bias’
Systematic error discrepancies between populations create bias, discrimination, and fairness issues. Such issues are widely discussed in high-level guidelines for ethical computing, yet we lack scalable methods to manage them in practice. One line of research at the Civic AI Lab aims to bridge this gap with algorithm-agnostic methods that i) model the patterns of error; ii) explain the features that underlie those patterns; iii) account for human bias in ground-truth data; and iv) scale to the large range of systems used by public institutions. Our goal is to develop comprehensive sets of methods that support transparency, non-discrimination, and due process, and that inform policies for responsible AI.
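As a rough illustration of what "algorithm-agnostic" means here, the sketch below (not the lab's actual method) measures error discrepancies between populations from predictions alone, without any access to the model that produced them. The group labels and toy data are invented for the example.

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute false positive and false negative rates per group.

    Works on any classifier's outputs, so it is agnostic to the
    underlying algorithm.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        c = counts[g]
        if t == 1:
            c["pos"] += 1
            if p == 0:
                c["fn"] += 1  # missed positive
        else:
            c["neg"] += 1
            if p == 1:
                c["fp"] += 1  # false alarm
    return {g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
                "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
            for g, c in counts.items()}

# Toy data: group "b" receives far more false alarms than group "a".
rates = error_rates_by_group(
    y_true=[0, 0, 1, 1, 0, 0, 1, 1],
    y_pred=[0, 0, 1, 0, 1, 1, 1, 1],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates)
# → {'a': {'fpr': 0.0, 'fnr': 0.5}, 'b': {'fpr': 1.0, 'fnr': 0.0}}
```

A discrepancy like the one above (equal overall accuracy, but errors of different kinds concentrated in different groups) is exactly the pattern that such methods aim to surface and explain.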