ICAI’s 2022 Graphic Overview

Although we are already halfway through 2023, it is always good to look back at what our institute has achieved so far. In the spirit of ICAI’s mission to support open talent and technology development and ‘Make AI for Everyone’, we accomplished great things in the past year.

Therefore, we would like to present the ICAI 2022 graphic overview, a comprehensive summary of our achievements in 2022.

This infographic showcases four of the six pillars that form the foundation of our institute: ICAI Labs, ICAI Academy, ICAI Venture and ICAI Connector.

Labs

ICAI ended 2022 with a portfolio of 32 labs. 334 PhD students and 100 partners worked together to deliver an impressive 254 research papers, of which we are very proud. We continue to grow, and expect to add more valuable labs to our ecosystem this year.

Connector

We organised many interesting events within our community, giving visitors the opportunity to network and learn about the latest trends in the field of AI. Interested in our next event? Don’t forget to regularly check the events page on our website, or sign up for our monthly newsletter.

Academy

With our Dutch podcast ‘Snoek op Zolder’ (produced by Henny Huijgens) we reached the magic number of over 5500 downloads! A little teaser: a new season is in the making, and we can’t wait to share it with our community.

Announcement: two new ICAI labs

We are expanding! In the past weeks, two new ICAI labs have launched. ICAI is happy to have them as a part of our ecosystem.

Explainable AI for Health

The Explainable AI for Health lab is a collaboration between the Leiden University Medical Center, Amsterdam UMC and Centrum Wiskunde & Informatica. Together, they will work on developing new forms of artificial intelligence that help physicians and patients with clinical decisions. 

The goal of the lab is to tailor and validate new inherently explainable AI techniques and guidelines that can be used for clinical decision-making.

Scientific director Peter A.N. Bosman says: “Clinical decision support based on AI can have great added value for physicians and patients”.

More info about this lab can be found on the dedicated lab page: https://icai.ai/explainable-ai-for-health-lab/

Responsible and Ethical AI for Healthcare Lab (REAiHL)

Our 49th lab, REAiHL, is a collaboration between SAS Institute, Erasmus Medical Center and Delft University of Technology. The lab aims to develop and deploy AI technologies that are safe, transparent, and aligned with ethical principles to improve healthcare outcomes.

The new AI Ethics Lab was initiated by internist-intensivist Michel van Genderen from Erasmus MC. Diederik Gommers, Professor of Intensive Care Medicine at Erasmus MC, is also closely involved. “Initially, the new AI Ethics Lab will focus on developing best practices for the Intensive Care Unit,” Buijsman says. “But our ultimate goal is to develop a generalized framework for the safe and ethical application of AI throughout the entire hospital. We therefore expect to soon start addressing use cases from other clinical departments as well.”

More info about REAiHL can be found here: https://icai.ai/reaihl/

Let’s Make AI for Everyone

Today is ICAI’s fifth anniversary. We started ICAI from the fundamental belief that if AI is going to transform and permeate every single aspect of our society, we should help ensure that we can influence, steer, and own these developments ourselves, here in The Netherlands and in Europe. A core motivation behind ICAI has been to help address the significant concentration of power and data in the hands of a very small number of companies that have almost exclusive access to these technologies outside of any form of democratic control. Giving ourselves the means to develop talent and technology in AI is an issue of industrial and, ultimately, societal sovereignty.

The mission that ICAI has pursued to support open talent and technology development in AI since its launch on April 23, 2018 can best be summarized in three phrases: shared ownership, augmented intelligence, and many voices.

“Shared ownership” refers to democratizing AI. It refers to the collaborative development of talent and technology in AI, with different types of stakeholders – knowledge institutes, industry, government, civil society – based on shared innovation agendas that the stakeholders determine, work on, and revise themselves. Learning-by-doing is a key ingredient of shared ownership, so that all stakeholders become smarter through their participation in collaborative development. ICAI’s labs are our primary vehicle for putting shared ownership into practice. I am very proud that as of today, when we turn five, 47 labs around the Netherlands have been launched, each with at least five PhD students. Between them, they bring together more than 140 partners from all sectors of Dutch society.

“Augmented intelligence” targets the development of AI systems not as autonomous systems that are meant to replace people but as systems that complement and support people to help them decide and act better. There is no lack of challenges where we can use help: global pandemics, resource scarcity, energy transition, aging populations, collapsing biodiversity, digital divides, climate change, staff shortages in key sectors, growing inequality, food waste, unaffordable healthcare, eroding democratic institutions. Increasingly, ICAI’s labs do not just target technological or economic goals but align their innovation agendas with the UN’s sustainable development goals.

Finally, “many voices” recognizes the diversity of perspectives on the development, roles, and impacts of AI. It also refers to the need to optimize for goals that go beyond accuracy. With the recently launched ROBUST program, ICAI now includes a large number of labs that focus on different aspects of trustworthiness of AI-based systems, such as explainability, reliability, repeatability, resilience, and safety. Rather than relying on end users, or indeed on society, to deal with the consequences of AI technologies that have been optimized for accuracy only, ROBUST emphasizes the range of meaningfully different trade-offs that technology development and deployment may and should make. Above all, the ROBUST program fosters the collective intelligence of diverse and collaborating groups of stakeholders.

In just five years, ICAI has grown into a nationwide ecosystem that is organized in a bottom-up fashion. With this ecosystem we seek to decentralize and democratize technological power and to make sure that technology is applied for human empowerment, with broad and genuine benefit.

Maarten de Rijke
Scientific director Innovation Center for Artificial Intelligence

ICAI joins Partnership on AI

AI has the power to transform the world, but only when guided by the shared values and visions of its stakeholders. Therefore, ICAI is proud to become part of the Partnership on AI, a worldwide coalition of academic institutions, media organizations, industry leaders, and civil society groups dedicated to promoting the positive impact of AI on people and society. 

The Partnership on AI (PAI) was founded with six thematic pillars, aimed at addressing the risks and opportunities of AI:

  • Safety-Critical AI;
  • Fair, Transparent, and Accountable AI;
  • AI, Labor, and the Economy;
  • Collaborations Between People and AI Systems;
  • Social and Societal Influences of AI;
  • AI and Social Good.

Partners from all over the world, and from every AI subdomain, take part in the partnership with one goal: to promote best practices in the development and deployment of AI. Through collaboration, the Partnership on AI develops tools, recommendations, and other resources, inviting voices from across the AI community and beyond to turn insights into actions and to ensure that AI and ML technology puts people first.

“We are pleased to be part of PAI and connect on a global level to other organizations to work on AI challenges”, says Maarten de Rijke. Our membership in the Partnership on AI reflects our commitment to the responsible deployment of AI technology and our belief in the importance of collaboration in shaping the future of AI. By sharing expertise, taking part in steering committees, and dedicating ourselves to sharing resources and education regarding the development of AI policy, we can all learn from each other.

If you are interested in learning more about the Partnership on AI, please visit their website: https://partnershiponai.org/. If you specifically want to know more about ICAI’s involvement in the partnership, please contact Esther Smit at esmit@icai.ai.

ROBUST AI programme receives additional €25 million in funding from Dutch Research Council

Total project budget of over €87 million, including 17 new labs and 170 new PhD candidates over 10 years

ROBUST, a new initiative by the Innovation Center for Artificial Intelligence (ICAI), is supported by the University of Amsterdam and 51 government, industry and knowledge-sector partners. The programme aims to strengthen the Dutch artificial intelligence (AI) ecosystem by boosting fundamental AI research. ROBUST focuses primarily on the development of trustworthy AI technology for the resolution of socially relevant issues, such as those in healthcare, logistics, media, food and energy. The research sponsor, the Dutch Research Council (NWO), has earmarked €25 million for the programme for the next 10 years.

ROBUST unites 17 knowledge institutions, 19 participating industry sponsors and 15 civil-social organisations from across the Netherlands. Maarten de Rijke, UvA university professor of Artificial Intelligence and Information Retrieval, is the ROBUST programme leader.

The additional €25 million grant comes from the research council’s call for Long-Term Programmes, which gives strong public-private consortia the chance to receive funding for a ten-year period. This is part of the Netherlands AI Coalition’s initiative to invest in explainable and trustworthy AI. In addition to the research council, companies and knowledge institutes contribute to the programme. The total ROBUST budget amounts to €87.3 million, of which €7.5 million comes from the Ministry of Economic Affairs and Climate Policy. The ROBUST programme is complementary to the AiNed programme and will shape collaboration on the dissemination, consolidation and valorisation of results, as well as on retaining talent in the Netherlands. This contributes to the ambitions of the cabinet’s Digital Economy Strategy to be at the forefront of human-centred AI development and applications.

170 new PhD candidates
Seventeen new public-private labs will be set up under the ROBUST umbrella and form part of the Innovation Center for Artificial Intelligence (ICAI), thus bringing its lab total to 46. ICAI focuses on AI talent and knowledge development. In the coming year, ROBUST will recruit no fewer than 85 new PhD candidates, followed by another 85 in five years’ time.

Human-centred AI for sustainable growth
‘What makes ROBUST unique is that not only will the new labs contribute to economic and technological objectives, they will also aid the United Nations’ sustainable development goals aimed at reducing poverty, inequality, injustice and climate change’, says De Rijke. ‘One important focus of all projects is to optimise reliable AI systems for qualities such as precision, soundness, reproducibility, resilience, transparency and security.’

Twin-win study
Just like the other ICAI labs, the ROBUST labs will put the twin-win principle into practice: intensive public-private research partnerships in AI technology that lead to open publications and solutions that have been validated in practice. ‘We test our scientific findings within an industry context. Research and practice thus come together at an earlier stage, allowing for far better validation of the results. This way, research validation doesn’t end in the lab but extends into the outside world.’

Startups, SMEs, and policymakers
‘AI is a systemic technology that touches all aspects of society. That’s why it’s important to ensure that the application of AI technology becomes a widely shared responsibility. ROBUST collaborates with regional civil-social partners throughout the Netherlands, and especially with startups and small to medium-sized enterprises (SMEs).’ The objective is not only to develop knowledge and innovations with ROBUST partners, but also to make them more widely available to other parties within the Dutch ecosystem. New findings and their policy implications will also be shared with national and European policymakers.

Contact
Journalists who wish to contact Maarten de Rijke or other relevant scientists, or to find out more about ROBUST, can contact persvoorlichting@uva.nl.

Journalists who wish to find out more about Long-Term Programmes or the funding from the Dutch Research Council can visit https://www.nwo.nl/calls/lange-termijn-programmas-strategiegedreven-consortia-met-impact

Artificial Intelligence and Improved Hearing – The Opening of FEPlab

From the first of November onwards, knowledge institution Eindhoven University of Technology (TU/e) and globally leading hearing aid manufacturer GN Hearing will join forces in FEPlab. The lab is dedicated to improving the participation of hearing-impaired people in both formal and informal settings.

Research

FEPlab will focus its research on transferring a leading physics- and neuroscience-based theory of computation in the brain, the Free Energy Principle (FEP), to practical use in human-centered agents such as hearing devices and VR technology. FEP is a general theory of information processing and decision-making in brains that is rooted in thermodynamics. The principle states that biological agents must take actions (or decisions) that minimize their (variational) free energy, a measure of the total prediction error in a system. Practically, by minimizing free energy, an agent takes actions that optimally balance information-seeking behavior (reducing uncertainty) against goal-driven behavior. Theoretical foundations for AI applications of FEP-based synthetic agents have been developed by BIASlab at TU/e. FEPlab now aims to bring FEP-based AI agents to the professional hearing device industry. Professor Bert de Vries, the scientific director of FEPlab alongside Associate Professor Jaap Ham, believes FEP-based synthetic agents have much to offer to signal processing systems:

I believe that development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposeful (signal processing) behavior from situated environmental interactions.

Bert de Vries, Scientific Director FEPlab
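To make the principle concrete, here is a minimal numerical sketch of variational free energy for a discrete two-state model. This is purely illustrative and not FEPlab or BIASlab code; the prior, likelihood, and state space are hypothetical numbers chosen for the example.

```python
import numpy as np

def free_energy(q, prior, likelihood):
    """Variational free energy F = KL(q || prior) - E_q[log p(o|s)].

    F upper-bounds surprise -log p(o); minimising it trades off
    complexity (staying close to the prior) against accuracy
    (explaining the observation).
    """
    q = np.asarray(q, dtype=float)
    complexity = np.sum(q * np.log(q / prior))   # KL(q || prior)
    accuracy = np.sum(q * np.log(likelihood))    # E_q[log p(o|s)]
    return complexity - accuracy

# Hypothetical two-state agent: uniform prior over hidden states,
# and a likelihood p(o|s) for the observation the agent just made.
prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.1])

# The exact Bayesian posterior minimises F; at the minimum F = -log p(o).
posterior = prior * likelihood
evidence = posterior.sum()
posterior /= evidence

assert free_energy(posterior, prior, likelihood) <= free_energy([0.5, 0.5], prior, likelihood)
assert np.isclose(free_energy(posterior, prior, likelihood), -np.log(evidence))
```

The assertions check the two defining properties: any other belief (here, the unrevised prior) incurs a higher free energy, and at the optimum the free energy equals the negative log evidence, i.e. the agent's surprise about the observation.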

Expertise and Focus

FEPlab brings together experts from fields such as Audiology, Autonomous Agents & Robotics, Decision Making, and Machine Learning to tackle the complex multidisciplinary challenges at hand. The lab will employ five PhD students at TU/e: four will join the BIASlab research group in the EE department, and one will join the Human-Technology Interaction group at the IE&IS department. Key research topics include reactive message passing for robust inference, generative probabilistic models for audio processing, and interaction design for hearing aid personalization.

Sustainable Development Goals

FEPlab will focus on two SDGs. Firstly, the research goals of the lab resonate with SDG 3 (Good Health and Well-being), since untreated hearing loss in the elderly increases the risk of developing dementia and Alzheimer’s disease, as well as emotional and physical problems. Secondly, the lab’s research goals support SDG 8, achieving higher levels of economic productivity through technological upgrading and innovation, as hearing loss has also been shown to affect work participation negatively.

Partners

The ICAI FEPlab is a collaboration between Eindhoven University of Technology (TU/e) and GN Hearing.

ICAI Deep-Dive: Working with Medical Data

Working with medical data comes with many challenges, ranging from improving data usability to maintaining privacy and security. To outline some of these challenges, ICAI organizes the ICAI Deep-Dive: Working with Medical Data on the 3rd of November, 15:00-18:00. This hybrid event will be moderated by Nancy Irisarri Méndez and will take place on location at Radboud University and online.

Artificial intelligence solutions are rapidly transforming the world by automating tasks that have long been performed solely by humans. Training on increasingly massive datasets is one of the enablers of this widespread use of robust, trailblazing models. However, due to socioeconomic and legal restrictions, the industry lacks the large-scale medical datasets needed to develop robust AI-based healthcare solutions. Therefore, there has been increasing interest in technical solutions that can overcome such data-sharing limitations while maintaining data security and patient privacy.

We will open this ICAI Deep-Dive event with an introduction to two specific data-related challenges in the medical field. The first challenge will be introduced by Bram van Ginneken of the Radboud UMC, who will discuss FAIR (Findability, Accessibility, Interoperability and Reusability) requirements for data sharing in practice. Thereafter, Gennady Roshchupkin of the Erasmus UMC will conclude part I of the event by discussing the challenges of using Federated Learning in genomics research.

The second part of the ICAI Deep-Dive event will be a panel discussion centred on the question “How do we tackle challenges in medical data usage through collaboration?”. Nancy will moderate the discussion among Bram van Ginneken, Clarisa Sánchez, Gennady Roshchupkin, and Johan van Soest; the discussion is also open to everyone interested in the challenges raised in the two talks.

After the panel discussion, there will be time for networking over drinks.

UvA and Bosch extend collaboration with new ICAI research lab

The UvA and world-leading technology company Bosch have agreed to extend their established collaboration with the launch of a new public-private research lab. Delta Lab 2 – the follow-up to the successful collaboration Delta Lab 1 – will focus on the use of artificial intelligence and machine learning for applications in computer vision, generative models and causal learning. Delta Lab 2 will form part of ICAI, the national Innovation Center for AI, headquartered on the Amsterdam Science Park. The lab will be headed by the UvA’s Dr Jan-Willem van de Meent and Prof. Theo Gevers. Dr Eric Nalisnick will be the daily lab manager.

‘For Bosch, collaboration and close exchange with academic institutions is an essential component of our efforts in the development of safe, robust, and explainable AI. By expanding and realigning the previously successful collaboration in the UvA-Bosch Delta Lab, we are realizing our ambition of combining cutting-edge research with high application potential,’ says Michael Fausten, Senior Vice President and Head of the Bosch Center for Artificial Intelligence (BCAI).

In Delta Lab 2, ten PhD students, one postdoc and one lab manager will work on projects over the next five years with a total budget of €5.2 million, aiming at new research on deep (causal and partial differential equation-based) generative models; certainty and causality in machine learning; and 3D computer vision.

Building on success

Gevers: ‘The collaboration between UvA and Bosch in Delta Lab 1 has been a great success. We want to build on that success with new fundamental research into machine learning and computer vision technologies. We are grateful to Bosch for continuing to invest in fundamental research. We also thank the previous directors and researchers for all their efforts, and we will continue to work enthusiastically on unexplored areas of AI.’

Van de Meent: ‘We are excited to continue our productive collaboration. Working with Bosch is a win-win. It not only facilitates uptake of AI innovations in industry, but also provides a wealth of use cases that can inspire new innovations, such as incorporating physical knowledge into models, reasoning about their causal structure, and evaluating the level of confidence that can be attributed to predictions.’


Civic AI Lab on UNESCO’s top 100 list of AI solutions worldwide

Civic AI Lab’s proposal to UNESCO’s International Research Centre on Artificial Intelligence (IRCAI) has been rated early stage with great potential and has therefore made it onto the top 100 list of AI solutions worldwide. IRCAI is releasing a list of 100 projects that apply artificial intelligence to problems related to the 17 United Nations Sustainable Development Goals, drawn from all five geographical regions: Africa, Europe, the Americas, Asia and the Pacific, and the Middle East. Civic AI Lab is one of ICAI’s 29 labs.

IRCAI’s list of 100 projects

Nine Veni grants for AI researchers

The Dutch Research Council (NWO) has awarded nine Veni grants to researchers involved in groundbreaking AI research. The recipients can use the grants – up to a maximum of €280,000 per researcher – to further develop their research ideas over the next three years. ICAI congratulates these colleagues on their achievements!

The AI recipients of a Veni grant:

  • Continual Learning under Human Guidance
    dr. E.T. Nalisnick (M), Universiteit van Amsterdam
    Artificial intelligence (AI) systems need to adapt to new scenarios. Yet, we must ensure that the new behaviours and skills that they acquire are safe. The researcher will develop AI techniques that allow autonomous systems to adapt but to do so cautiously, under the guidance of a human.
  • Intelligent interactive natural language systems you can trust and control
    dr. V. Niculae (M), Universiteit van Amsterdam
    Artificial intelligence agents are seemingly approaching human performance in natural language tasks like automatic translation and dialogue. However, deployed in the wild, such systems are out of control, learning to produce harmful language even unprompted. Using recent machine learning breakthroughs, the researcher rethinks language generation for trustworthiness and controllability.
  • Efficient AI with material-based neural networks
    dr. H.C. Ruiz Euler (M), Universiteit Twente
    The unprecedented success of artificial intelligence (AI) comes at the price of unsustainable computational costs. This project will research the potential of a novel technology for highly efficient AI hardware: “material-based neural networks”. This technology will enable the next generation of efficient AI systems for edge computing and autonomous systems.
  • Helping computers say what they mean to say
    dr. J.D. Groschwitz (M), Universiteit van Amsterdam
    When a computer talks to us, for example when answering a question, it must translate that answer from its inner computer representation to fluent human language. This project combines linguistics and state-of-the-art machine learning to create a language generation system in which the output text expresses exactly what the computer meant to say.
  • The life and death of white dwarf binary stars
    dr. J.C.J. van Roestel (M), Universiteit van Amsterdam
    Double white dwarf stars are a rare but important type of binary star. They are potential supernova progenitors, some merge to form massive rotating white dwarfs, and they also emit gravitational wave radiation. I will combine data from the Dutch BlackGEM telescope with multiple other telescope surveys and use novel machine learning methods to uncover the population of short-period eclipsing white dwarf binary stars across the entire sky. By comparing the observed population and characteristics with binary population synthesis models, I will determine how these double white dwarfs end their life.
  • Personalizing radiotherapy with Artificial Intelligence: reducing the toxicity burden for cancer survivors
    Lisanne van Dijk PhD, University Medical Center Groningen (UMCG)
    Many head and neck cancer patients suffer from persistent severe toxicities following radiotherapy. As survival rates increase, toxicity reduction has become more pivotal. This project uses Artificial Intelligence techniques to predict toxicity trajectories, which can facilitate personalized decision-support to guide physicians in finding optimal strategies to reduce these severe toxicities.
  • Towards realistic models for spatiotemporal data
    dr. K. Kirchner (V), Technische Universiteit Delft
    Many environmental factors, such as temperature or air pollution, are recorded at several locations and dates. Because of limited computing power, a realistic analysis of the resulting large datasets is often unachievable. This project develops computational approaches which enable efficient accurate data analysis and reliable forecasts for phenomena with uncertainty.
  • Explainable Artificial Intelligence to unravel genetic architecture of complex traits
    Gennady Roshchupkin PhD, Erasmus MC Medical Center
    While we have learned that most diseases have a genetic component, we are still far away from understanding the underlying processes. Using Artificial Intelligence, I will investigate the complex relationship between DNA mutations and human health. This will be the basis for development of novel diagnostic, prognostic and therapeutic tools.
  • Neural networks for efficient storage and communication of information
    dr. J. Townsend (M), Universiteit van Amsterdam
    The brain is an extremely efficient system for storing and communicating information. This research will study the use of artificial neural networks, inspired by the mechanisms in the brain, for data compression, enabling faster internet communication and more efficient storage of computer files.

Find the complete list of Veni grants 2021 here.