On 27 October, ICAI organizes the ICAI Day: A Deep Dive into AI. This hybrid event will take place on location in Den Bosch and online. The focus of this ICAI event is the technological side of AI. Registration is now open to everyone who is interested. You can register here.
Together with the ELSA Labs community of the NLAIC, we are organizing a lunch event for all the labs as part of the ICAI Day. In small table settings we will take a deep dive into specific areas and into how to build AI with trustworthiness integrated into the technology. Introductions will be given by the Police Lab and the ELSA Lab. You must be physically present to attend this part of the programme.
We have speakers from inside and outside the labs who will dive into the latest AI technologies, with talks on Geometric Deep Learning: from Euclid to drug design (Michael Bronstein, Imperial College & Twitter) and graph convolutional networks (Xie Weiyi, Radboudumc/Thira Lab). Finally, our experts in the labs, from academia and industry, will share insights into lessons learned from the collaborations in the ICAI labs.
Topics of the lunch table discussions:
– Inclusive society with data engineering
– Autonomous systems in mobility
– Robotics & autonomous agents
– Computer vision in healthcare
– Using AI in education and governmental organizations
– Online personalization and impact
13:45 – 17:00 Part 2: ICAI plenary event
13:45 – 14:00 Welcome by chair Nathan de Groot and ICAI director Maarten de Rijke
14:00 – 14:45 Keynote Michael Bronstein (Imperial College & Twitter) – Geometric Deep Learning: from Euclid to drug design
14:45 – 15:00 Break
15:00 – 15:30 Lecture Xie Weiyi (Thira Lab) – Graph Attention Networks for airway labeling
15:30 – 15:35 Short videos of different ICAI labs
15:35 – 16:20 Discussion table ICAI labs – Lessons learned in collaboration: Elvan Kula (ING, AI for Fintech Lab), Georgios Tsatsaronis (Elsevier, Discovery Lab), Cees Snoek (UvA, QUVA, AIM & Atlas Lab)
16:20 – 16:30 Closing words
ICAI is very proud to expand its network with the ROBUST program “Trustworthy AI systems for sustainable growth”, supported by NWO with €25 million under the new Long-Term Program (LTP).
AI technology promises to help with many tough societal challenges. For the technology to be adopted and to benefit everyone, it is essential that the AI systems we develop are trustworthy. The ROBUST Long-Term Program addresses this challenge and has now been given the opportunity to build its program.
First and foremost, ROBUST focuses on attracting talent to work on the challenges of trustworthy AI. Talent is the core of any AI ecosystem. Second, it makes trustworthy AI research and innovation a shared responsibility of knowledge institutes, industry, governmental organizations, and other societal stakeholders. And third, it practices learning by doing in the Dutch context, through use-inspired research, connections with startups and SMEs, and extensive knowledge-sharing efforts.
17 new ICAI labs
The ROBUST program builds on ICAI, the Innovation Center for Artificial Intelligence. It intends to add 17 labs to ICAI’s current ecosystem of 30 labs, in areas as diverse as health, energy, logistics, and services. The labs that make up ROBUST are driven by economic opportunities and contributions to the UN’s Sustainable Development Goals. They will develop algorithms that advance the state of the art in accuracy, reliability, repeatability, resilience, and safety of AI systems – all essential hallmarks of trustworthy AI.
ROBUST is a collaboration of 21 knowledge institutes, 23 companies, and 10 societal organizations. ROBUST is supported by the Netherlands Organisation for Scientific Research (NWO) and the AiNed National Growth Fund Investment Program.
The project leader for the ROBUST program is prof. Maarten de Rijke of the University of Amsterdam and ICAI. The co-applicants are prof. Mark van den Brand (Eindhoven University of Technology), prof. Arie van Deursen (Delft University of Technology), prof. Bram van Ginneken (Radboudumc), dr. Eva van Rikxoort (Thirona), prof. Clarisa Sánchez Gutiérrez (University of Amsterdam), and prof. Nava Tintarev (Maastricht University).
Ben Luijten is halfway through his PhD research at the e/MTIC AI-Lab and is already working with his team on a number of prototype Philips ultrasound devices that incorporate their algorithms. Luijten: ‘We work really closely with the clinicians right now to find out what the best image quality is for them.’
Ben Luijten is a PhD Candidate at the Biomedical Diagnostic Lab at Eindhoven University of Technology and is a member of ICAI’s e/MTIC AI-Lab.
e/MTIC AI-Lab is a collaboration between Eindhoven University of Technology, Catharina Hospital, Maxima Medical Center, Kempenhaeghe Epilepsy and Sleep Center and Philips.
What can artificial intelligence and deep learning techniques mean for ultrasound imaging?
‘At the core of our research, we are trying to maximize image quality of ultrasound images. Ultrasound imaging has been around for almost fifty years now. It’s a fantastic technique that makes it possible to convert sound reflections into images, and take a look inside the human body in real-time. Technicians however, have always struggled with image quality. MRI and CT scans have a very high image quality, but they don’t give real-time feedback, and are very expensive. We are now trying to improve ultrasound image quality with the use of AI.’
Why is real-time image processing so hard?
‘Our devices have to operate within a fraction of a second. Ultrasound imaging needs at least thirty frames a second, and sometimes even up to a thousand frames a second in the case of blood flow measurements for example. In order to process all this information in real-time, the reconstruction algorithms have to be very small and lightweight in order to run on medical devices.’
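To make that latency constraint concrete, a quick back-of-the-envelope calculation of the per-frame time budget at the frame rates mentioned above (how that budget is split across processing steps is device-specific and not shown here):

```python
# Per-frame time budget at the frame rates mentioned in the interview.
for fps in (30, 1000):
    budget_ms = 1000.0 / fps  # milliseconds available to produce one frame
    print(f"{fps:>4} fps -> {budget_ms:.2f} ms per frame")
```

At a thousand frames per second the reconstruction has roughly one millisecond per frame, which is why the networks have to be so small and lightweight.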
What is your approach to this problem?
‘We look at the algorithms early on in the signal chain, so before the image formation. That way we are close to the signal processing. The signal processing is everything that happens in between the measurement and the image formation; the sampling of the signal, putting it into a digital form, filtering it, etc. Since these techniques are already well understood, we try to implement intelligent solutions that stay close to these conventional steps. In doing so, we can improve the image quality in a robust way, with relatively compact neural networks, and a small margin of error.’
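A minimal numpy sketch of this general idea: keep conventional delay-and-sum beamforming, but let a small learned function replace the fixed apodization weights applied to the (already time-aligned) channel data. The array shapes, the random input, and the tiny two-layer network here are all illustrative, not the lab’s actual architecture; in practice such a network would be trained against a high-quality reference.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_samples = 32, 256
# Time-aligned (delayed) channel data for one scan line -- illustrative random input.
channel_data = rng.standard_normal((n_channels, n_samples))

# Conventional delay-and-sum: fixed apodization weights (here: uniform).
fixed_weights = np.full(n_channels, 1.0 / n_channels)
das_line = fixed_weights @ channel_data            # shape: (n_samples,)

# "Intelligent" step: a tiny per-sample network predicts adaptive weights
# from the channel data itself (one hidden layer, randomly initialized here).
W1 = rng.standard_normal((16, n_channels)) * 0.1
W2 = rng.standard_normal((n_channels, 16)) * 0.1

def adaptive_weights(x):
    h = np.tanh(W1 @ x)                            # hidden layer
    w = W2 @ h
    return w / (np.abs(w).sum() + 1e-8)            # normalize like apodization

adaptive_line = np.array([
    adaptive_weights(channel_data[:, t]) @ channel_data[:, t]
    for t in range(n_samples)
])

print(das_line.shape, adaptive_line.shape)         # both (n_samples,)
```

Because the network only replaces the weighting step inside an otherwise conventional pipeline, it can stay compact, which matches the real-time constraints described above.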
How far have you already come with improving the image quality?
‘Well, the first question here is: what actually is the best image quality? Sometimes the images that we as technicians find really good, with very high contrast and resolution for example, the doctors don’t like at all, because they don’t see the same things as they did before. So we work really closely with the clinicians right now to find out what the best image quality is for them. At this point we already have some well-working AI algorithms that we have started to implement in prototype systems.’
What will the next generation of ultrasound devices look like?
‘We want to head in the direction of image processing with small, portable ultrasound devices. That way a doctor could go to a patient’s home to do some basic scanning with lightweight AI running on his or her smartphone. And more complex processing could be done in the cloud. This could be a solution to the triage problem we saw during Covid, for example. We did a side project on automatic assessment of Covid severity, based on ultrasound scans of the lungs. And it worked pretty well. Eventually, the development of these portable ultrasound devices could change the way we use diagnostic tools.’
The e/MTIC AI-Lab promises ‘to provide a fast track to high-tech health innovations’. What does this ‘fast track’ look like?
‘e/MTIC connects a lot of collaborations and people in one lab. The e/MTIC Lab tries to create a knowledge hub to minimize the friction between all the research teams, the industry partners and the hospitals. It really lowers the barriers to getting in touch with each other. If I have a question about a certain image, they can connect me to a doctor who can help me with that specific image or question.’
What’s next for deep learning in ultrasound imaging?
‘We are getting to a point where the difference between conventional signal processing and deep learning becomes smaller and smaller. But within medical imaging we’re not likely heading towards AI that can do everything. Instead, the focus will be on assisting the doctor in his or her work. In our case, this means developing fast, lightweight neural networks that get as much information out of the ultrasound measurements as possible.’
On September 16, 2021, Ben Luijten will talk about ‘Deep learning for ultrasound signal processing’ during the Lunch at ICAI Meetup on Deep Learning in the Netherlands. Want to join? Sign up!
LUMO Labs announces a TTT.AI investment in Autoscriber, a Dutch health tech software startup developed in a clinical setting. Autoscriber is developing AI-supported voice recognition software to capture and summarize consultations between healthcare professionals and patients. TTT.AI is part of the ICAI Venture Program.
Autoscriber’s value proposition is at the intersection of three crucial healthcare trends:
affordable and accessible healthcare for all
growing importance of structured/discrete data capture to support data-driven healthcare initiatives
increasing desire for understanding and self-determination among patients
Autoscriber is offered to health professionals as a subscription-based software-as-a-service (SaaS) solution, for large hospitals and practices, smaller practices and general practitioners. LUMO Labs’ pre-seed funding will allow Autoscriber to go live in multiple hospitals during the next 12 months.
The technology, which promises to streamline clinical interactions into a seamless experience for patients and caregivers, was developed in collaboration with the Clinical Artificial Intelligence and Research Lab (CAIRELab) at Leiden University Medical Center.
“We are very excited to work so closely with the LUMC. Every design choice we make, we validate with the physician in a clinical setting,” said Koen Bonenkamp, co-founder and CTO.
Autoscriber software records, transcribes and extracts clinical concepts during consultations. It allows for automated summaries and integration in the patient’s Electronic Health Record that can be easily edited by the physician, providing real-time support for diagnostics and personalized care.
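As an illustration of that record → transcribe → extract → summarize flow, here is a toy Python sketch; the concept lexicon, function names, and summary format are invented for this example and are not Autoscriber’s actual software:

```python
import re

# Hypothetical concept lexicon -- a real clinical NLP system uses far richer
# models; this only illustrates the extract-and-summarize steps.
CONCEPTS = {
    "headache": "symptom",
    "ibuprofen": "medication",
    "hypertension": "diagnosis",
}

def extract_concepts(transcript: str):
    """Return (concept, category) pairs found in a consultation transcript."""
    found = []
    for term, category in CONCEPTS.items():
        if re.search(rf"\b{term}\b", transcript, re.IGNORECASE):
            found.append((term, category))
    return found

def draft_summary(transcript: str) -> str:
    """Draft one editable summary line per extracted concept."""
    lines = [f"- {cat}: {term}" for term, cat in extract_concepts(transcript)]
    return "\n".join(lines)

transcript = "Patient reports a severe headache; advised Ibuprofen 400 mg."
print(draft_summary(transcript))
# prints:
# - symptom: headache
# - medication: ibuprofen
```

The draft would then be reviewed and edited by the physician before anything enters the Electronic Health Record, matching the human-in-the-loop design described above.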
LUMO Labs is investing because Autoscriber meets and/or exceeds their investment fundamentals, including a strong entrepreneurial founding team with profound expertise, proof-of-concept and the promise to dramatically improve the lives of caregivers and patients.
“The problem Autoscriber is solving is universal: reducing time and money spent on repetitive, administrative tasks by physicians while increasing transparency, comprehensibility and human interaction in deeply personal treatment situations,” said LUMO Labs founding partner Andy Lürling. “Their solution is dynamic and highly scalable because of the strong technological and human-centered set-up.”
The TTT.AI consortium is looking for 4 Business Developers AI (Amsterdam, Utrecht, Nijmegen and Eindhoven) to help set up and supervise academic start-ups and spin-offs.
Together with the TTT.AI initiative, ICAI is working on a programme through which employees of the universities can apply for support. This ICAI Venture programme aims to stimulate PhD students and staff of the knowledge institutes to explore the possibilities of starting a business with the solutions that are built.
Are you an experienced business developer in the field of software and digital innovations, and do you have an affinity with artificial intelligence (AI)? Does working in a dynamic environment with top scientists, using your knowledge and experience to bring AI innovations to market, appeal to you? Then take a look at the position of AI Business Developer for the new national TTT.AI programme. The application deadline is September 19.
During the second edition of the ICAI Day we will take a closer look at the way ICAI brings together knowledge institutes, industry, and government to catalyze knowledge creation in AI.
We will address questions such as: What does it look like for industry to work closely together with academia? How can you bring academic work into practice?
Experts and talent from academia, industry and government will share lessons learned on this topic. We will also hear from speakers who invest in public-private collaboration by working simultaneously on the academic and on the industry side.
27 October 2021 12:00 – 17:00 hrs Den Bosch*
More information about the program and how to register will be shared soon!
*This ICAI Day will have a hybrid format. Participation will be possible both in-person and online.
Do you have a creative mind in setting up new projects? And do you like to work independently with multiple stakeholders? Send your application before 31 August!
The project manager will be working on a big ICAI consortium proposal. This consortium is an initiative of the Innovation Center for Artificial Intelligence (ICAI) and will be funded by NWO and industry. The project manager will work closely with the researchers, business developers, industry partners and internal faculty departments to build the full proposal.
The consortium is being built around ICAI and the ICAI way of working with labs, where researchers and industry partners work very closely on topics of high importance for the industry partner(s). This specific proposal concerns a 10-year project in close collaboration with industry in the Netherlands.
The Partnership for Online Personalized AI-driven Adaptive Radiation Therapy (POP-AART) is the 28th lab to join ICAI. The lab will focus on the use of artificial intelligence for precision radiotherapy.
It is a major challenge to give patients the right dose of radiation, at the right spot, with the least damage to healthy tissue, while the patient and the tumor move and change shape during radiation and over time. Within the POP-AART lab, six PhD researchers will develop novel AI strategies for improving the images on which the radiation treatment is based, predicting changes of the tumor over time, and incorporating these in automatic treatment planning and adaptation.
POP-AART will run for five years. Research topics range from improving the CT images obtained just before radiation to the level of diagnostic-quality CT images, to predicting deformations and segmentations of the tumor and organs at risk, and incorporating these data in online, automated treatment-plan optimization for each individual patient at each radiation session.
The lab will be led by scientific directors Efstratios Gavves and Jan-Jakob Sonke. Gavves is assistant professor of Computer Vision and Deep Learning at the UvA. Sonke is theme leader Image-Guided Therapy at the Netherlands Cancer Institute and Professor by special appointment of Adaptive Radiotherapy at the Faculty of Medicine at the UvA. The Governing Board will consist of academic partners Lodewyk Wessels (NKI-AvL) and Mark de Graef (UvA) and industry partner Rui Lopes (Elekta).
About the Netherlands Cancer Institute
The Netherlands Cancer Institute, founded in 1913, is among the top 10 comprehensive cancer centers, combining world-class fundamental, translational, and clinical research with dedicated patient care. Its initiatives to promote excellent translational research have been recognized by the European Academy of Cancer Sciences, which designated the institute ‘Comprehensive Cancer Center of Excellence in Translational Research’.
For almost five decades, Elekta has been a leader in precision radiation medicine. Their more than 4,000 employees worldwide are committed to ensuring everyone in the world with cancer has access to – and benefits from – more precise, personalized radiotherapy treatments. Headquartered in Stockholm, Sweden, Elekta is listed on NASDAQ Stockholm Exchange.
MindLabs recently joined the national Innovation Center for Artificial Intelligence (ICAI). This makes Tilburg one of eight locations with an ICAI lab. Together with the partners in the MasterMinds project, MindLabs collaborates in this public-private research lab, named MasterMinds AI Lab. The MasterMinds AI Lab is one of the many initiatives within MindLabs and is dedicated to the development and evaluation of new technologies, focusing on the cross-fertilization between artificial minds and human minds.
Artificial Intelligence and Human Behavior
ICAI is a national collaboration of several universities, companies and the Dutch government, with 27 labs at eight locations. The goal of the Dutch government and the universities is to remain at the forefront of AI through knowledge development and by nurturing young talent. MasterMinds contributes research at the intersection of robotics and avatars, serious gaming, decision making, and virtual and augmented reality. The results of these studies are applied to realize AI solutions for the benefit of society.
Professor Max Louwerse, scientific director of the MasterMinds AI Lab: “The future of AI increasingly lies at the intersection between artificial intelligence and human behavior. The MasterMinds project works at this interface to develop new technologies and will be able to apply them immediately. This is entirely in the nature of MindLabs.”
Five MasterMinds research projects
The MasterMinds project consists of five innovative research projects, aiming to develop breakthroughs with interactive AI technologies such as serious gaming, augmented and virtual reality, intelligent tutoring systems, natural language processing and data science. Research questions include: Can we train and improve complex decision-making using serious gaming? What is the learning effect of training pilots using virtual reality? How do we effectively design AR and VR training modules? How can we develop and use intelligent tutoring systems? The project is funded to stimulate regional ecosystems to develop a resilient, sustainable, and future-proof economy with a central role for SMEs.
The MasterMinds project brings together knowledge institutions, industrial partners, and governmental organizations to work on AI solutions that the partners can readily use. The project develops AI technologies in combination with the impact on, and input from, human behavior, across sectors such as aerospace, logistics, maintenance, and education, focusing on robotics and avatars, serious gaming and learning, language and data science technologies, and virtual and augmented reality solutions. The project provides a T-shaped profile of explainable AI solutions: depth is achieved per subproject, while breadth is achieved across the five projects. It answers questions formulated by the industrial partners to prepare for the technology-driven future that lies ahead.
The five MasterMinds research projects:
Serious Games in logistics – Port of Rotterdam
VR for air force simulations – Dutch Royal Airforce and MultiSIM
AR & VR for production and maintenance – Actemium, Marel and CastLab
Evidence-based prevention: predictive analytics – Interpolis and Gemeente Tilburg
Virtual Reality in Education – WPG Zwijsen, Spacebuzz and Timeaware
MasterMinds brings together the “brightest minds” in artificial and human intelligence.
July 1, 2021 – ICAI Interview with Martijn Kleppe: gaining insights much quicker by combining AI and Humanities
Martijn Kleppe is a trained historian who collaborates with AI scientists. Kleppe: ‘When I was doing research as a historian, I could analyze twenty books in a month. For the computer this is a matter of seconds.’
Martijn Kleppe is one of the founding members of Cultural AI Lab and the head of the Research Department of the KB National Library of the Netherlands.
The Cultural AI Lab is a joint effort of the heritage institutions Rijksmuseum, Institute for Sound and Vision, the KB and knowledge institutions CWI, KNAW, UvA, VU and TNO.
What problems within the humanities can be solved with AI?
‘The biggest challenge that we face within the humanities is scale. There is a shift happening right now within historical research from close reading, which we have done for centuries, to distant reading, where we use a computer to detect patterns in all sorts of humanities data. In books, newspapers, television programs, artworks, social media outlets etcetera. This is the biggest opportunity that we have right now, but also the biggest challenge because we have to rely on other competences to make this kind of research possible.’
What is the interest in AI like in the cultural world?
‘It is really gaining momentum right now. There is an ecosystem evolving with partners from the culture and media domain – like the Rijksmuseum, the National Archive – and the creative industry, that are interested in applying AI within their services or processes. And several members of our lab recently founded the working group ‘Culture & Media’ within the National AI Coalition with all sorts of cultural and media partners.’
Do the humanities also influence AI?
‘Yes, it creates new academic research questions. Most of the algorithms within the AI domain have been trained on new and high-quality data. But the datasets of the heritage institutions contain 200-year-old data. Digitized newspapers from the end of the nineteenth century with very low quality, for example. And newspapers from colonial Indonesia and Suriname with a completely different vocabulary. Bringing those kinds of datasets into the AI domain offers new perspectives and questions on polyvocal data. How can we handle these kinds of data and how can we improve them? My experience so far is that the computer scientists involved in our project love these new questions.’
At the ICAI meetup on July 8, the lab will speak about contentious words in cultural heritage collections. How does the lab approach this?
‘Handling issues like that is the essence of our research. We try to answer the question: how can you detect bias in the descriptions of artifacts in museum collections? And can you also help the museum by suggesting other kinds of words? Especially last year, with the Black Lives Matter movement, we saw how relevant these questions were. How do we deal with the past? It is a technical, but also really a societal and ethical challenge.’
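A toy Python sketch of the first half of that question (detect contentious terms, suggest alternatives for a curator to review). The wordlist and suggested alternatives are invented placeholders; the lab’s actual work relies on curated vocabularies and contextual models, not a flat list:

```python
# Illustrative wordlist mapping contentious terms to possible alternatives.
CONTENTIOUS = {
    "primitive": ["early", "historical"],
    "exotic": ["non-European", "unfamiliar to the cataloguer"],
}

def flag_description(description: str):
    """Return flagged terms with suggested alternatives for curator review."""
    words = description.lower().split()
    return {term: alts for term, alts in CONTENTIOUS.items() if term in words}

desc = "A primitive mask from an exotic culture"
print(flag_description(desc))
```

The point of keeping a human curator in the loop, as the interview stresses, is that whether a word is contentious depends on context; the tool only surfaces candidates.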
As a historian, what attracts you in AI?
‘I have to ask new types of questions. Instead of doing source criticism on books, newspapers or television programs, now I have to do source criticism on algorithms. At the KB we have the Delpher.nl platform, for example, where you can search millions of digitized texts from Dutch newspapers, books and magazines. In order to search efficiently, you have to understand the basics of the algorithm behind it. What I also really like about the AI research is the teamwork. Traditionally, humanities scholars are more solitary researchers. But to be able to collaborate with other disciplines you have to be vulnerable and develop yourself.’
Does it happen that you get a new perspective on heritage collections because of the AI research?
‘Yes, for sure. During my PhD research I wrote an article about the first moment in time when a Dutch photograph was published in a Dutch newspaper. That research was based on manually going through a selection of newspapers. But then, four years later, I participated in a research project with a historian who was one of the first to apply Computer Vision to historical newspapers. He ran an algorithm over all the newspapers and immediately said: ‘Martijn, you were completely wrong. The first photograph was published earlier than the moment you mention in your paper.’ That was fantastic. Science is always about gaining new insights. When I did my research, those techniques did not exist yet. We now gain insights much quicker than we did before.’
On July 8, 2021, Cultural AI Lab will talk about making AI ‘culturally aware’ during the Lunch at ICAI Meeting. Want to join? Sign up here.