ICAI: The Labs – Trust and Transparency in AI

This ICAI: The Labs session focuses on Trust and Transparency in AI in the Netherlands. The Police Lab and the KPN Responsible AI Lab share their stories. Two speakers highlight their recent work, and two leaders in the field point out future directions.

12.00 (noon): Floris Bex (Universiteit Utrecht) presents the Police Lab
12.05: Esther Nieuwenhuizen and Daphne Odekerken on Explainable AI for citizens’ trust at the Netherlands National Police
12.20: Eric Postma (Tilburg University) presents the KPN Responsible AI Lab
12.25: Georgios Vlassopoulos on Transparency in AI: Explaining the Decision Boundary
12.40: Floris Bex and Eric Postma discuss what’s next in Trust and Transparency, in the Netherlands and beyond
13.00: End

All times are CEST.

*** This is an online meeting. Make sure to (1) sign up for the meetup on the meetup page and (2) ensure you receive emails from Meetup. Shortly before the event we will send you the Zoom link to attend, as well as the info you need to log in via a web browser (in case your organization does not allow you to install Zoom). You will only receive this link if you have completed both steps. ***

“Explainable AI for citizens’ trust at the Netherlands National Police”

In a democratic society, it is important that citizens trust the police – in general, but specifically in their use of AI systems. In this talk, we discuss experimental research into how different types of explanations of decisions made by an AI system influence citizens’ actions and, by extension, their trust in the police AI system. We first briefly describe the system for intelligent crime reporting and its capability to provide different types of explanation. We then discuss in more detail our survey experiment, which compares people’s reactions to different types of explanation. It turns out that a substantive explanation – in which reasons for the decision are given – is better received than a procedural explanation – in which the procedure by which the system reached its decision is given.

“Explaining the Decision Boundary”

Why does my machine learning model output this decision? Can I trust my model? Which properties of the data have been learned by a complex neural network? How can we communicate the reasoning of such models to a user? Can we “explain” their decision boundary? How can we build explanations based on simple, interpretable attributes?
This technical talk shows how these questions can be answered in a novel way: by approximating the local decision boundary of a complex model with a simple, interpretable model.
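To illustrate the general idea (not the speaker’s specific method), here is a minimal LIME-style sketch: we train a black-box classifier on a toy dataset, sample perturbations around one instance, weight them by proximity, and fit a simple linear surrogate whose coefficients describe the local decision boundary. All names and parameter choices (perturbation scale, kernel width) are illustrative assumptions; scikit-learn and NumPy are assumed available.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Complex "black-box" model trained on a toy dataset
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Pick an instance near the decision boundary to explain
proba = black_box.predict_proba(X)[:, 1]
x0 = X[np.argmin(np.abs(proba - 0.5))]

# Sample perturbations in a neighbourhood of x0 and label them
# with the black-box model (scale is an illustrative choice)
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(1000, 2))
labels = black_box.predict(Z)

# Weight samples by proximity to x0: closer points matter more
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5 ** 2)

# Fit a simple, interpretable surrogate to the local behaviour;
# its coefficients approximate the local decision boundary
surrogate = LogisticRegression().fit(Z, labels, sample_weight=weights)
print("local coefficients:", surrogate.coef_[0])
```

The surrogate is only faithful near `x0`; a different instance generally yields a different local explanation, which is exactly what makes the approach local rather than global.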


17 Dec 2020
12:00 - 13:00
Online Event