On 6 October, between 12:00 and 13:00, ‘ICAI: The Labs’ will host a meetup on AI for computer vision in the Netherlands. Two labs will each present their work and discuss challenges and developments in the field.
QUVA Lab is a collaboration between Qualcomm and the University of Amsterdam. The lab's mission is to perform world-class research on deep vision: using deep learning to automatically interpret what happens where, when, and why in images and video.
Thira Lab is a collaboration between Thirona, Delft Imaging Systems and Radboud UMC. The mission of the lab is to perform world-class research to strengthen healthcare with innovative imaging solutions.
12:00 Opening
12:05 Introduction of the QUVA Lab by Yuki Asano (UvA)
12:10 QUVA Lab: Philip Lippe (UvA) presents: “CITRIS: Causal Identifiability from Temporal Intervened Sequences”
12:25 Keelin Murphy (Radboudumc) introduces the Thira Lab and presents “BabyChecker: AI-assisted ultrasound for maternal care in low-resource settings.”
12:45 Discussion: what’s next for AI in computer vision in the Netherlands
“CITRIS: Causal Identifiability from Temporal Intervened Sequences”
Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents reasoning in complex environments, a task often referred to as causal representation learning. In this talk, we present CITRIS, a variational autoencoder framework that learns causal representations from temporal sequences of images in which underlying causal factors have possibly been intervened upon. In contrast to the recent literature, CITRIS exploits temporality and observed intervention targets to identify scalar and multidimensional causal factors, such as 3D rotation angles. Furthermore, by introducing a normalizing flow, CITRIS can be easily extended to leverage and disentangle representations obtained by already pretrained autoencoders. Extending previous results on scalar causal factors, we prove identifiability in a more general setting, in which only some components of a causal factor are affected by interventions. In experiments on 3D rendered image sequences, CITRIS outperforms previous methods in recovering the underlying causal variables. Moreover, using pretrained autoencoders, CITRIS can even generalize to unseen instantiations of causal factors, opening future research areas in sim-to-real generalization for causal representation learning.
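To give a feel for the setup the abstract describes, the sketch below illustrates (with plain NumPy and random linear maps, purely for illustration; this is not the authors' code) a one-step objective in the CITRIS spirit: an encoder splits the latent space into blocks, one per causal factor, reconstructs the current observation, and matches the approximate posterior to a transition prior conditioned on the previous latent state and a binary vector of intervention targets. All names, dimensions, and weights here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 causal factors, each a 2-d latent block
# (multidimensional factors, e.g. a 3D rotation would use more dims).
n_factors, factor_dim, obs_dim = 4, 2, 16
latent_dim = n_factors * factor_dim

# Random linear "networks" standing in for encoder, decoder, and prior.
W_enc = rng.normal(size=(obs_dim, 2 * latent_dim)) * 0.1
W_dec = rng.normal(size=(latent_dim, obs_dim)) * 0.1
W_pri = rng.normal(size=(latent_dim + n_factors, 2 * latent_dim)) * 0.1

def encode(x):
    h = x @ W_enc
    return h[:latent_dim], h[latent_dim:]          # mean, log-variance

def step_loss(x_prev, x_t, targets):
    """Reconstruct x_t and pull the posterior towards a transition prior
    conditioned on z_{t-1} and the intervention-target vector I_t
    (one binary entry per causal factor)."""
    mu, logvar = encode(x_t)
    z = mu + rng.normal(size=latent_dim) * np.exp(0.5 * logvar)  # reparameterize
    recon = np.sum((x_t - z @ W_dec) ** 2)
    z_prev, _ = encode(x_prev)
    h = np.concatenate([z_prev, targets]) @ W_pri
    p_mu, p_logvar = h[:latent_dim], h[latent_dim:]
    # KL divergence between the Gaussian posterior and the conditional prior.
    kl = 0.5 * np.sum(np.exp(logvar - p_logvar)
                      + (mu - p_mu) ** 2 / np.exp(p_logvar)
                      - 1 + p_logvar - logvar)
    return recon + kl

x_prev, x_t = rng.normal(size=obs_dim), rng.normal(size=obs_dim)
targets = np.array([0., 1., 0., 0.])               # factor 1 was intervened on
loss = step_loss(x_prev, x_t, targets)
```

Knowing which factor was intervened on at each step is what lets this kind of objective attribute a change in the observations to a specific latent block, which is the intuition behind the identifiability results in the talk.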
“BabyChecker: AI-assisted ultrasound for maternal care in low-resource settings”
Approximately 99% of maternal deaths occur in low-resource settings, where the ability to diagnose and manage pregnancy complications is limited. Prenatal ultrasound screening is used routinely in wealthier countries around the world and can detect high-risk pregnancies that should be managed at a medical center. The BabyChecker project develops Artificial Intelligence (AI) tools that run on a smartphone connected to a low-cost ultrasound device. Image acquisition can be done by an operator with minimal training, following a standard protocol. The BabyChecker AI software performs standard prenatal checks to determine, for example, the gestational age, the position of the baby, and the presence of more than one fetus. In this way, high-risk pregnancies can be identified and referred for clinical management. In this talk, we describe the AI systems underlying BabyChecker, along with current and future developments and challenges.