
ICAI: The Labs – AI for individual medical precision NL
Please note that this is a hybrid event.
In May, ICAI: The Labs focuses on AI for individual medical precision in the Netherlands. The Civic AI Lab and the AI for Precision Health, Nutrition & Behaviour Lab each present their work and discuss challenges and developments in this field.
The Civic AI Lab is a collaboration between the City of Amsterdam, Vrije Universiteit Amsterdam (VU) and the University of Amsterdam (UvA). The lab focuses on the application of artificial intelligence in the fields of education, welfare, environment, mobility, and health.
The AI for Precision Health, Nutrition & Behaviour Lab is a collaboration between Radboud University, OnePlanet Research Center, Radboudumc, Wageningen University & Research and nine industry partners. This lab focuses on developing AI solutions that encourage healthy behaviour, with a specific focus on precision health and nutrition.
To join this event:
- To participate on-site, please register here. The physical location is room A1.28, Science Park 904, Amsterdam.
- To participate online, please register here.
Program
Chairwoman: Emma Beauxis Aussalet (VU)
12:00 (noon): Opening by chairwoman
12:05: Introduction of the AI for Precision Health, Nutrition & Behaviour Lab by Tibor Bosse (RU)
12:10: Erkan Başar (RU) presents “On Controlled Usage of Open-domain Language Models for Chatbots in Highly Sensitive Domains”
12:25: Introduction of the Civic AI Lab by Emma Beauxis Aussalet (VU)
12:30: Ilse van der Linden and Sara Altamirano present “AI for Health in the Public Sector: XAI for Assessing Fairness and Informing Decisions”
12:45: Discussion of what’s next in AI for individual medical precision in the Netherlands
13:00: End
Abstracts
“On Controlled Usage of Open-domain Language Models for Chatbots in Highly Sensitive Domains”
Open-domain large language models can now generate natural-sounding, coherent text. Yet even though the generated text appears human-like, the main stumbling block is that the output is never fully predictable, which risks producing harmful content such as false statements or inflammatory language. This makes it difficult to apply these models in conversational agents for highly sensitive domains such as personal health counselling. Hence, most conversational agents for highly sensitive domains are developed using pre-scripted approaches. Although pre-scripted approaches are highly controlled, they suffer from repetitiveness and scalability issues. In this project, we explore combining the best of both worlds: we propose and describe in detail a new, flexible, expert-driven hybrid architecture for harnessing the benefits of large language models in a controlled manner in highly sensitive domains, and discuss the expectations and challenges.
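The abstract does not specify the architecture's internals, but the general idea of a controlled hybrid can be sketched as follows. Everything here is illustrative: the intent keywords, the scripted responses, the blocklist, and the stand-in `generate_with_llm` function are all assumptions, not the lab's actual design.

```python
import re

# Hypothetical hybrid dialogue controller: expert-written scripts for
# sensitive intents, a language-model fallback for open chat, and a
# safety filter on anything the model generates.

SCRIPTED_RESPONSES = {
    "medication": "Please discuss medication changes with your health professional.",
    "diagnosis": "I cannot give a diagnosis; a clinician can help you with that.",
}

# Toy safety filter standing in for a real content classifier.
BLOCKLIST = re.compile(r"\b(cure|guaranteed|miracle)\b", re.IGNORECASE)


def detect_intent(user_message: str) -> str:
    """Toy intent detector: keyword lookup standing in for a trained classifier."""
    text = user_message.lower()
    for intent in SCRIPTED_RESPONSES:
        if intent in text:
            return intent
    return "open"


def generate_with_llm(user_message: str) -> str:
    """Placeholder for a call to an open-domain language model."""
    return f"That's interesting, tell me more about {user_message.strip('?.!')}"


def respond(user_message: str) -> str:
    intent = detect_intent(user_message)
    if intent in SCRIPTED_RESPONSES:
        # Sensitive topic: stay fully controlled, return the expert script.
        return SCRIPTED_RESPONSES[intent]
    candidate = generate_with_llm(user_message)
    if BLOCKLIST.search(candidate):
        # Generated text failed the safety filter: fall back to a safe default.
        return "Let's get back to your health goals."
    return candidate


print(respond("Should I change my medication?"))
print(respond("I enjoy cycling on weekends"))
```

The control flow mirrors the trade-off the abstract describes: the scripted branch keeps sensitive exchanges predictable, while the generative branch avoids repetitiveness for low-risk small talk.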
“AI for Health in the Public Sector: XAI for Assessing Fairness and Informing Decisions”
Our projects primarily focus on developing inclusive, accountable and fair AI systems that can benefit everyone in society. We will present two PhD research projects in the domain of health and well-being, which particularly concern causal models and counterfactual fairness. One project focuses on the diverse needs of a variety of stakeholders (e.g., families, schools, health professionals, public servants) and on providing policymakers with actionable explanations that inform their decisions on resource allocation and intervention. Through user studies we evaluate whether these explainable AI (XAI) techniques empower public health actors and contribute to the objectives of our public partners. The other project focuses on causal models that explain prediction errors, comparing the factors that explain correct and incorrect detection of mothers and infants at risk (true positives and false negatives). We will also investigate the use of synthetic data, which can protect patients’ privacy and may extend the test set with relevant counterfactuals.
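The counterfactual-fairness idea mentioned above can be illustrated with a minimal check: compare a model's prediction for an individual against the prediction for a counterfactual version of that individual in which only a protected attribute is flipped. The risk model, the feature names, and the protected attribute below are all hypothetical stand-ins, not the lab's actual models or data.

```python
# Minimal sketch of a counterfactual fairness check. A model is
# counterfactually fair (informally) if flipping only the protected
# attribute leaves its prediction unchanged.

def risk_model(features: dict) -> float:
    """Toy risk score; note it deliberately ignores the 'group' attribute."""
    score = 0.3 * features["prior_visits"] + 0.5 * features["symptom_severity"]
    return min(score, 1.0)


def counterfactual_gap(features: dict, protected: str) -> float:
    """Absolute change in the prediction when only the protected attribute flips."""
    flipped = dict(features, **{protected: 1 - features[protected]})
    return abs(risk_model(features) - risk_model(flipped))


person = {"prior_visits": 1.0, "symptom_severity": 0.8, "group": 0}
gap = counterfactual_gap(person, "group")
print(f"counterfactual gap: {gap:.2f}")  # 0.00: this toy model ignores 'group'
```

A nonzero gap would flag the model for closer inspection; the full causal treatment (modelling how the protected attribute influences other features) is beyond this sketch.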