Registration for ICAI Day: AI & Climate Change is open!

On November 16th, 2022, ICAI is organizing the ‘ICAI Day: AI & Climate Change’. This hybrid event will take place on location in Utrecht and online. Registration is now open for anyone interested.

During the first part, the Lunch Table Discussion (on-site only), from 12:00 to 13:30 hrs, you will have the opportunity to talk to others in small table settings during a catered lunch. Each table revolves around a specific topic at the intersection of Artificial Intelligence and Climate Change. You can choose from seven sectors: Heavy Industry and Manufacturing, Electricity Systems, Transportation/Mobility, Agriculture, Societal Adaptation, Ecosystems and Biodiversity, and Markets and Finance. Seats for this part are limited, so sign up quickly.

The second part of the ICAI Day, the Plenary Session from 13:30 to 18:00 hrs, will be held in a hybrid setting moderated by Felienne Hermans (Leiden University). You can sign up for this plenary program on-site or online. Keynote speakers Chiem van Straaten (KNMI), Elena Verdolini (EIEE, Università degli Studi di Brescia) & Ralf Herbrich (Hasso Plattner Institute) will share their insights on how Artificial Intelligence can be used to tackle Climate Change. Thereafter, there will be a panel discussion and drinks to close the event.

Launch of the National Course on AI & Ethics

On Tuesday 20 September the AI and Ethics course was launched at Startup Village in the Science Park of the University of Amsterdam. The course is a follow-up to the original Dutch National AI Course that focused on the basic building blocks of artificial intelligence. 

Jim Stolze explained during the official launch: “There were a lot of lingering questions among the participants of our first course, questions such as: ‘How do we keep it fair, and how do we avoid prejudice entrenching itself in algorithms?’ To make sure that everyone understands how the sector grapples with these topics and thinks these things through for themselves, there is now the National AI Course: AI and Ethics.”

After the opening words, there was a group discussion with, among others, Maarten de Rijke, founder of ICAI and distinguished professor of artificial intelligence and information retrieval at the University of Amsterdam. Maarten recounted how trustworthiness is one of the most important pillars on which a broader rollout of AI technology could rest. People will only trust AI if it proves itself worthy of that trust. This means that AI should be explainable and transparent, have reproducible results, and be trustworthy for less prominent groups.

The way that he and his colleagues approach this issue is to push the gas while making sure there are guardrails. Explainability, transparency, trustworthiness for less prominent groups, and reproducibility could serve as some of these guardrails.

From left to right: Sennay Ghebreab, Mieke van Heesewijk & Quirine Eijkman

After the talk, it was finally time for the official launch. Sennay Ghebreab, director of the Civic AI Lab at ICAI, Mieke van Heesewijk from the SIDN Fund, and Quirine Eijkman from the Netherlands Institute for Human Rights had the honor of pressing the big red button and giving the world access to the course. These three domain experts are featured in the course due to their long-standing work on human rights, AI, and ethics.

The conversation on AI and Ethics continued with food and drinks. All attendees brought their experience and expertise to a lively discussion, followed by a group picture.

The course is free and accessible to everyone: https://ethiek.ai-cursus.nl

Using Artificial Intelligence to Enable Low-Cost Medical Imaging – Phillip Lippe interviews Keelin Murphy

Medical imaging is a cornerstone of medicine for the diagnosis of disease, treatment selection, and quantification of treatment effects. Now, with the help of deep learning, researchers and engineers strive to enable the widespread use of low-cost medical imaging devices that automatically interpret medical images. This allows low- and middle-income countries to meet their clinical demand and radiologists to reduce diagnostic time. In this interview, Phillip Lippe, a PhD student at the QUVA Lab, talks to Keelin Murphy, a researcher at the Thira Lab, to learn more about the lab’s research and the development of the BabyChecker project.

Keelin Murphy is an Assistant Professor at the Diagnostic Image Analysis Group in Radboud University Medical Center. Her research interests are in AI for low-cost imaging modalities, with a focus on applications for low- and middle-income countries. This includes chest X-ray applications for the detection of tuberculosis and other abnormalities, as well as ultrasound AI for applications including prenatal screening.
Phillip Lippe is a PhD student in the QUVA Lab at the University of Amsterdam and part of the ELLIS PhD program in cooperation with Qualcomm. His research focuses on the intersection of causality and machine learning, particularly causal representation learning and temporal data. Before starting his PhD, he completed his Master’s degree in Artificial Intelligence at the University of Amsterdam.

The QUVA Lab is a collaboration between Qualcomm and the University of Amsterdam. The mission of the QUVA Lab is to perform world-class research on deep vision: automatically interpreting, with the aid of deep learning, what happens where, when, and why in images and video.

The Thira Lab is a collaboration between Thirona, Delft Imaging, and Radboud UMC. The mission of the lab is to perform world-class research to strengthen healthcare with innovative imaging solutions. Research projects in the lab focus on the recognition, detection, and quantification of objects and structures in images, with an initial focus on applications in the area of chest CT, radiography, and retinal imaging.

In this interview, both Labs come together to discuss the challenges in deep learning regarding the medical imaging domain.


Phillip: Keelin, you witnessed the transition from simple AI to deep learning. What do you think deep learning has to offer in medical image analysis?

I believe deep learning has a huge role to play in medical image analysis. Firstly, radiology equipment is expensive and requires the training of dedicated physicians, which means that low- and middle-income countries cannot meet their clinical radiology demands. Deep learning-powered image analysis therefore has the potential to homogenize access to medical imaging around the world.

Secondly, even in richer countries such as the Netherlands, we can use deep learning to reduce the costs of radiology clinics. Every minute a radiologist spends looking at an x-ray, for example, is expensive, and radiologists have to review a lot of x-rays every day. While every x-ray still requires the radiologist’s utmost attention, many of these x-rays actually show no signs of abnormality. Deep learning could be used here to prioritize radiologists’ worklists, putting cases that seem normal at the bottom and cases deemed urgent at the top of the list. When artificial intelligence can really be relied upon, we could even start removing items from the radiologists’ workflow entirely.
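The triage Keelin describes boils down to sorting cases by a model's urgency score. A minimal sketch, with hypothetical case IDs and made-up scores (the actual scoring would come from a trained model, not shown here):

```python
def prioritize_worklist(cases):
    """Sort cases by predicted urgency: likely-urgent scans are read first,
    likely-normal scans sink to the bottom of the radiologist's list."""
    return sorted(cases, key=lambda c: c["urgency"], reverse=True)

# Hypothetical model outputs in [0, 1]; higher means more likely abnormal.
worklist = [
    {"id": "A", "urgency": 0.05},   # looks normal
    {"id": "B", "urgency": 0.92},   # suspected urgent finding
    {"id": "C", "urgency": 0.40},
]
print([c["id"] for c in prioritize_worklist(worklist)])  # ['B', 'C', 'A']
```

The "removing items entirely" step Keelin mentions would amount to dropping cases below some validated urgency threshold, which is exactly why the reliability questions discussed later matter so much.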

Phillip: You mentioned that you use deep learning, which of course has many facets of neural networks, such as graph neural networks (GNNs) or transformers. Since you are working in imaging analysis, I assume you mostly work with computer vision models. Are you using convolutional neural networks (CNNs) for classification and segmentation or do you even go beyond that scope?

As you mentioned, we almost always use a CNN, where the type of CNN depends on the application. More often than not, due to the confidential nature of medical data, the most important factor in determining which model to use is actually the amount of data that is available. Training a model on too little data risks overfitting and introduces a lot of uncertainty. Therefore, we have to use models cleverly to mitigate these risks, for example through class balancing, data augmentation, or adjusting the network architecture. Other factors include the size of the model. For instance, to enable global use of the BabyChecker, the model must fit on a mobile phone, which constrains the size of the network we can use.
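Of the mitigations mentioned, class balancing is the easiest to sketch. The inverse-frequency weighting below is one common scheme, shown here as an illustrative assumption rather than the lab's actual recipe:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: rare classes get larger weights,
    so the training loss is not dominated by the majority class."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# e.g. a skewed dataset of 90 "normal" scans vs 10 "abnormal" scans
weights = class_weights(["normal"] * 90 + ["abnormal"] * 10)
# each abnormal example now counts roughly 9x as much as a normal one
print(weights["abnormal"] / weights["normal"])
```

In practice these weights would be passed to the loss function (most deep learning frameworks accept per-class weights directly), often combined with the data augmentation Keelin mentions.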

Phillip: We know that deep learning models can create false predictions, so it might happen that the system indicates that a measurement looks all good, while that person actually needs to go to the hospital. How do you deal with this uncertainty and possible mistakes?

First, we should acknowledge that uncertainty is inevitable. Radiologists make mistakes, just like models can make mistakes. Only through strict quality control processes can we ensure that these models are reliable enough so that they do more good than harm. Especially in the medical field, this poses many challenges. For instance, on the technical side, we should figure out how to deal with domain shifts. On the legal side, we should determine who is responsible if the model makes a mistake and what legal actions can be taken. Those things are incredibly unclear at the moment.

Right now, I still see that artificial intelligence has a big role to play as a suggestive assistant to a radiologist when one is present, or as a screening tool when one isn’t. For instance, in Africa tuberculosis is very prevalent, but most often there is no physician available. One of the products developed in our group, now scaled by Delft Imaging, is able to detect tuberculosis-related abnormalities in inexpensive chest x-rays and refer patients for a more accurate and expensive microbiological test when necessary. While this product is not flawless, it does allow us to help people we couldn’t have helped otherwise. So until we reach the stage where systems are sufficiently quality-controlled, using deep learning for screening and suggestions can be really useful.

Phillip: This sounds similar to challenges in autonomous driving, where it is hard to determine who is really at fault in an accident. We know that another problem is that neural networks tend to be overconfident, also in situations where they should not be. Are there ways to address this problem?

Yes, I have not mentioned it yet, but this is actually really important for getting artificial intelligence accepted in the clinical workflow. Sometimes an image of pure noise accidentally makes its way into the database due to a malfunctioning scanner. If the system still gives you a score for emphysema, you lose faith in that system. In such cases we want the system to report that the image is very different from the images it was trained on and that it cannot classify the image. It would be even better if the system provided an interpretable explanation of why it made a certain prediction, since transparency in the prediction process is crucial for clinicians to be able to trust the system.
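The abstention behavior described here is essentially out-of-distribution detection. As a rough illustration, the sketch below flags images whose mean intensity lies far outside the training distribution; real detectors use much richer features, and the data here is synthetic:

```python
import numpy as np

def fit_intensity_stats(train_images):
    """Summarize training data by the mean and spread of per-image mean intensity."""
    means = np.array([img.mean() for img in train_images])
    return means.mean(), means.std()

def should_abstain(image, mu, sigma, z_thresh=6.0):
    """Refuse to score an image whose statistic is far from anything
    seen during training, instead of emitting a confident prediction."""
    z = abs(image.mean() - mu) / sigma
    return z > z_thresh

rng = np.random.default_rng(0)
train = [rng.normal(0.5, 0.05, (64, 64)) for _ in range(100)]
mu, sigma = fit_intensity_stats(train)

typical = rng.normal(0.5, 0.05, (64, 64))  # looks like the training data
blank = np.ones((64, 64))                  # e.g. a saturated, malfunctioning scan
print(should_abstain(typical, mu, sigma))  # False: safe to classify
print(should_abstain(blank, mu, sigma))    # True: "cannot classify this image"
```

A deployed system would combine several such checks (or a learned OOD detector) and surface the abstention to the clinician rather than silently dropping the case.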

Phillip: You mentioned interpretability, a topic that has gained a lot of attention recently, especially due to discussions about whether interpretability techniques are truly interpretable. Have you already tried out interpretability methods for neural networks, or are those methods still a bit too noisy?

While interpretability methods work well in theory, for me the field is still too under-researched to have practical value. One popular method for explaining predictions in the medical field is producing heat maps based on the weights of the network. However, such methods are hard to quantify and tend to look pretty rather than provide genuinely useful explanations.
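For comparison, occlusion sensitivity is a model-agnostic heat-map technique that is somewhat easier to quantify than weight-based maps, though it shares the caveats raised above. A toy sketch, where the "model" is a stand-in scoring function rather than a trained network:

```python
import numpy as np

def occlusion_heatmap(model, image, patch=8):
    """Occlusion sensitivity: cover one patch at a time with the image's
    mean intensity and record how much the model's score drops.
    Large drops mark regions the model relies on."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Toy "model": its score depends only on the top-left quadrant.
toy_model = lambda img: img[:16, :16].sum()

image = np.zeros((32, 32))
image[:16, :16] = 1.0  # bright structure in the top-left
heat = occlusion_heatmap(toy_model, image)
print(heat[0, 0] > 0)   # True: occluding the structure hurts the score
print(heat[3, 3] == 0)  # True: the bottom-right never mattered
```

Because the heat values are score drops in the model's own units, they can at least be compared and thresholded, which partially addresses the quantification problem, at the cost of many forward passes per image.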

Phillip: In the low data regime, where models are trained on small amounts of data, the explanations might also quickly overfit to random noise.

Yes, indeed.

Phillip: When clinicians hear about these topics in AI, are they reluctant to participate in research on artificial intelligence?

My experience is really positive, but I work mainly with doctors who are interested in what artificial intelligence has to offer. I believe that clinicians recognize that AI is coming to their field and that they either have to get on board or be left behind. As I mentioned before, though, the technology in most cases is not ready to be left unattended. Therefore, the use cases that researchers and clinicians prefer right now often assume a suggestive/assistant role for the AI algorithm, or a screening role in scenarios where no trained reader is available.

Phillip: Since we talked a lot about data sparsity, how important is it for you to have collaborations across hospitals or medical companies to get access to data?

Collaboration with your partners is super important, in lots of ways. I believe that researchers should never try to develop medical image analysis solutions if they are not collaborating with clinicians. First off, to do research, you are dependent on the availability and the quality of data that is gathered. If we want to move the field forward, we should communicate about the following topics more: which data can be used for what research purpose; how should the data be gathered so that the quality is the highest; and how can we get consent from patients to use their data more often than we do now.

Secondly, there is knowledge sharing involved. Sometimes I read a paper that had no clinical input, and you can really see the difference: either the researchers made mistakes that could have been prevented, or the research has no practical value.

Phillip: Do you consider the agreement of a patient to use their data to be the biggest hurdle for developing something like a medical ImageNet?

The problem is that patients are not asked often enough whether their data can be used for commercial purposes. Even when they are asked, patients might be reluctant to share such private data without being aware of how and by whom it is used. While everybody working in the field of artificial intelligence knows that data is the cornerstone of everything, we should think about how we can communicate this effectively to the community, for instance by providing more education to create public awareness of what AI is and why large amounts of data are necessary to create successful solutions.

Phillip: From the perspective of a patient, it is a small thing to give, but for the research domain, every single patient who is willing to share their data makes a big difference in enabling better medical image analysis.

Yes indeed. Still, there are challenges that need to be addressed. For instance, do patients feel comfortable with sharing their data with all companies or do they prefer to share their data selectively? What does it mean for competition if all companies have access to the same data? These are questions that we need to find an answer to, together with the community.

Phillip: Yes, maybe patients even want to be as specific as approving exactly which applications their data can be used for. As scientists, we of course assume that data will be used for good, but we need to make sure that data is really only used for beneficial applications and not for applications that might harm people.

Yes, and we should also make sure that data is completely de-identified, so that the person an image was taken of can never be traced back from that image.

Phillip: Now, what is the research focus of the Thira lab?

Our research focuses on two things: the scalability of existing methods in the medical domain, and the reliability of the predictions made by the new methods we are developing. Whatever research we do, I would say the common thread is always the clinical applicability of our solutions rather than pure theoretical knowledge.

Phillip: When developing a device like the BabyChecker, do you only use data acquired with that device to train the model, or is there some domain adaptation involved?

In general, in the minimum viable product stage, we only use data acquired with the actual device, so no domain adaptation is necessary. At this early stage, BabyChecker’s software works with a selected ultrasound probe so that early adopters in our projects can gain easy access to BabyChecker. Over 70 operators trained to use BabyChecker are scanning pregnant women in Tanzania, Ghana, Sierra Leone, Ethiopia, and soon Uganda as well. The data comes back to our partner Delft Imaging, where experts keep a close check on how well the software is working and physicians assess the quality of the data. This way we make sure that the system is rigorous and that patients get the correct care.

Phillip: You have already mentioned some future improvements to the BabyChecker, where do you want to be in four years?

At the moment, the BabyChecker checks a few things: 1) The gestational age of the baby to determine the estimated due date, 2) The position of the baby, so that when the baby is in a breech position, the woman can make sure to deliver in a hospital, and 3) The presence of twins, since this is also a high-risk pregnancy where the woman should go to the hospital to deliver. Additionally, we are looking to perform placenta localization and detect the fetal heartbeat to discover possible pregnancy complications.

Phillip: Let’s say that in four years the field of AI has taken one or more steps forward. Where do you see that AI needs to improve, especially in the medical domain?

In general, I would like to see how we can use low-cost x-rays and ultrasounds for lots of other diagnoses. For example, heart failure or lung disease. However, in order for such applications to be feasible, we need AI methods that can work well with small amounts of training data. I think that is really the biggest challenge that we have to overcome.

Phillip: In terms of evaluation, when would you consider your research to be successful? Is it when doctors use the products that you have developed or is it when you feel like there is nothing to improve in the short term?

While I believe I will never feel like there is nothing to improve, I would say my research is successful if we can reliably screen large amounts of people in low-resource settings for all sorts of illnesses and possible complications and get them referred for the treatment they need.

On October 6th, 2022, the Thira Lab and the QUVA Lab will talk about their current work during the lunch meetup of ‘ICAI: The Labs’ on AI for Computer Vision in the Netherlands. Want to join? Sign up!

AI technologies allow us to do more with less – An interview with Geert-Jan van Houtum

The manufacturing industry is undergoing a paradigm shift. Because of increasing connectivity, we can gather a lot of data from manufacturing systems for the first time in history. This connectivity also enables the linking, analysis, and performance optimization of supply chain components, even if they are geographically dispersed. The AI-enabled Manufacturing and Maintenance Lab (AIMM) aims to accelerate developments in this field using Artificial Intelligence. In this interview with Geert-Jan van Houtum, we take a brief dive into some complex challenges in predictive maintenance.

Prof. Geert-Jan van Houtum is a professor of maintenance and reliability in the Industrial Engineering and Innovation Sciences (IE&IS) department at Eindhoven University of Technology. His expertise includes maintenance optimization, inventory theory, and operations research, focusing on system availability and Total Cost of Ownership (TCO).

The EAISI AIMM Lab is a collaboration between Eindhoven University of Technology, KMWE, Lely, Marel, and Nexperia.

What is predictive maintenance, and what is its purpose?

Traditionally, businesses either replace components when they fail, so-called “reactive” maintenance, or use lifetime estimations to determine the best moment for maintenance, called age-based maintenance. Usually, reactive maintenance leads to machine downtime, while age-based maintenance carries the risk of replacing expensive components too soon. Predictive maintenance aims to be more proactive. Using data and AI, we can actively monitor the condition of components in real time; this allows us to predict more accurately when a component is on the verge of failure and needs replacing.

What is the role of data analysis and AI in predictive maintenance?

For many components, you know why they deteriorate over time: you know the failure mechanism and how to measure the component’s condition. For instance, when you drive a car, you know that the tire tread wears down. You can regularly check whether the remaining tread is still within safety limits and replace the tire if necessary.

There are also components where the failure mechanism is known, but the best way to measure the component’s state is unknown. Before predictive maintenance can be used in these situations, it is required to find a way to measure its state. Artificial Intelligence may be used as part of an inspection solution, such as visual inspection using computer vision, but this is not always necessary or desirable.

Finally, there are cases where the failure mechanism is unknown or has not yet been accurately mapped. Here the first step is to conduct a root-cause analysis. By collecting large amounts of data on all possible root causes, you can try to match patterns in the data to failure cases. Here, data analysis and artificial intelligence play an important role because they provide critical insights into the data that can be interpreted to create knowledge. This process drives innovation.

What is the most challenging aspect of determining the root cause using data?

Many failure mechanisms either occur infrequently or only under specific conditions. In these cases, there is simply insufficient data to perform data analysis or train a neural network, making it incredibly difficult to identify the root cause. Honestly, those situations are real head-scratchers.

Nonetheless, some businesses have found great success using anomaly detection algorithms. Such algorithms identify perturbations of normal behavior, which indicate the presence of a defect or fault in the equipment. Before Artificial Intelligence gained relevance, statistical process control was the gold standard for measuring anomalies. Through the integration of AI-based techniques, anomaly detection has become more refined and gives more intricate insights into the nature of anomalies.
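Statistical process control, the pre-AI gold standard mentioned here, fits in a few lines. A minimal sketch with simulated sensor readings (the three-sigma rule and the data are illustrative, not a specific AIMM Lab method):

```python
import numpy as np

def control_limits(baseline, k=3.0):
    """Classic statistical process control: readings more than k standard
    deviations from the in-control mean are flagged as anomalies."""
    mu, sigma = baseline.mean(), baseline.std()
    return mu - k * sigma, mu + k * sigma

rng = np.random.default_rng(1)
healthy = rng.normal(2.0, 0.1, 500)  # vibration readings of a healthy machine
lo, hi = control_limits(healthy)

print(lo < 2.05 < hi)  # True: an ordinary reading stays within the limits
print(lo < 3.5 < hi)   # False: a drifting reading falls outside and is flagged
```

AI-based detectors generalize this idea: instead of fixed limits on one signal, a learned model of normal behavior across many correlated sensors scores each new observation, which is what gives the more intricate insights into the nature of anomalies.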

What does AI research in manufacturing and maintenance mean to the world?

When equipment and manufacturing lines do not function properly, it leads to disruptions throughout service and manufacturing supply chains, running all the way back to the consumer. These disruptions are accompanied by pressure on the environment, the increased cost of serving customers in an alternative way, and in some cases the unavailability of life-saving equipment or medicine. AI technologies allow us to do more with less. For instance, predictive maintenance allows us to avoid potentially catastrophic equipment failures while preventing unnecessary maintenance. It is the perfect combination of businesses’ financial incentives, societal values, and the Sustainable Development Goals (SDGs).

Is applying predictive maintenance techniques always more beneficial than more traditional forms of maintenance?

Initial investments, as well as the running costs for predictive maintenance solutions, are significant. Therefore, right now, predictive maintenance is most valuable to businesses that suffer large losses when their equipment fails or when equipment failures cause safety concerns. By working together with industry partners in our Lab, we ensure that our solutions are not only technically feasible and novel but also adhere to societal, industrial, and financial requirements. Predictive maintenance will play a large role in the manufacturing industry, but developments go slowly and it will not replace all traditional maintenance.

On the 15th of September, 2022, Geert-Jan will speak about a predictive concept for geographically dispersed technical systems during our Labs Meetup on AI for Autonomous Systems in the Netherlands. Want to join? Sign up here!

Recap of the ICAI National Social Meetup

The first ICAI national social meetup on June 30th was a lot of fun. Our community members, from scientific directors, support staff, project managers, and lab managers to PhD students, all had a great time meeting people face-to-face after two years of pandemic.

There were two locations: Amsterdam and Nijmegen. Each location had its own program and highlights. In addition to tasty food and drinks at the Amsterdam site, there was a ping-pong table where people could stretch their legs during the meetup. Community members from Wageningen and Amsterdam labs had come to the Nijmegen site to chat and chill together. We got lucky with the weather too; the rain only came after the meetup.

Thank you all for making such a great social gathering possible! We will see you again in early 2023 for the next national social meetup.

Looking back at the ICAI Day Summer Edition 2022

On June 1st, 2022, the ICAI Day ‘AI Entrepreneurship: From the lab to the market’ took place at Startup Village Amsterdam. During this edition, Dutch AI Startup experts and professionals discussed the different phases of a startup’s life cycle. Difficulties, challenges, and possible solutions were shared.

The ICAI Day started with inspiring lunch table sessions where new ideas were developed, new connections were made, and feedback was given. Afterwards, during the plenary part, the speakers inspired the participants on the different elements of an AI startup journey, from how to start with an idea to how to raise VC money. During the presentations, questions were discussed and answered such as ‘What is more challenging: fundraising or delivering a product/service that fits the market?’ and ‘Where do you find the help you need to kick off your start-up?’.

Thanks to all the moderators, speakers, and panel members who contributed to this event:
Anita Lieverdink, Arjan Goudsblom, Bram van Ginneken, Chris Slootweg, Giulia Donker, Hennie Huijgens, Hinda Haned, Jakub Zavrel, Judith Dijk, Maarten Stolk, Thijs Dijkman, Tristan Van Doorn, and Vladimir Nedović.

We thank all participants for their presence and contribution. We hope to see you at the next ICAI Day on November 16th, 2022!

The ICAI Day summer edition 2022 was sponsored by Startup Amsterdam.

Partners were Thematic Technology Transfer Artificial Intelligence (TTT-AI) and Startup Village Amsterdam.

For more ICAI Day videos, please click here.

ICAI Interview with Jeanne Kroeger: Making ICAI a household name

As project manager of ICAI Amsterdam, Jeanne Kroeger deals with the business and organizational side of the labs, occasionally receives delegates from abroad to talk about ICAI, and is now busy organizing the first physical social meetup on June 30th for the Amsterdam location. Kroeger: ‘It is important to create environments where people can meet their colleagues in an informal setting. I hope that all ICAI cities can join this social event.’

Jeanne Kroeger

Jeanne Kroeger is project manager of ICAI Amsterdam; before that she was community manager of Amsterdam Data Science. Kroeger holds a Master’s degree in Chemistry from the University of Liverpool.

What is the idea behind the ICAI National Social Meetup on June 30?

‘The purpose of this social event is to have one moment where ICAI members across the whole country can come together at their location to meet their colleagues in an informal and relaxed way. The idea is that other ICAI cities will join in and that they will host their own physical meetup for all ICAI members involved in that city. Amsterdam and Nijmegen will host their own events. There will be a broadcast at the same time, with a five-minute connection on screen with a few words from Maarten de Rijke, director of ICAI. Other than that, it is an informal gathering. It’s really an opportunity for everyone to meet and chat. It is accessible to all ICAI members, from junior and senior staff to support staff, and across academia, industry, non-profit and government. We will host the meetup from three to five pm, so it’s within working hours.’

Why is ICAI organizing social events like this?

‘I think there is a lack of community feeling in every organization right now. Because of covid, all the people who started in the last two and a half years have not had the opportunity to come into the office. In Amsterdam, for some people this event will be the first time they meet other ICAI members in person. All the labs focus on specific things, but there’s transferable knowledge across the labs. In my previous role for Amsterdam Data Science, I could see that some people were working on very similar topics, but had no idea about each other. It is important to create environments where people feel like they can come and meet their colleagues in an informal setting. The environment in which you work is so crucial. For me it’s almost more crucial than the content because it’s what gives me the energy and motivation to continue.’

How well do the people from the different ICAI Amsterdam labs know each other?

‘I recently organized a lunch for the ICAI Amsterdam lab managers. There were ten of us in the room and only two people really knew each other. The rest had never spoken to each other, while some of them have their offices maybe five doors down from one another. So there’s something to be said for creating more of a community in ICAI Amsterdam and the other hubs, and then across those hubs.’

What should the ICAI community look like in four years?

‘I think ICAI should be a household name. The general knowledge about ICAI is starting to build. The ICAI labs have been producing incredible results in the last five years and have made incredible collaborations. We are forming a solid network of labs and the aim is to build more connections across the country. I’ve had meetings with delegates from other countries to talk about ICAI. The word is going out about ICAI!’

Which organizations from abroad visited you to talk about ICAI?

‘We had a delegation from Estonia and I’ve had conversations with large international companies. I think in four years it would be great for the ICAI format to be more standardized. The Netherlands is really well-positioned: it’s a great international hub, easy to get to and it has an amazing standard of living. We are at a point where new AI initiatives are coming out, and it would be great if we can make sure that we position all of these initiatives together, so that they are acting in the same direction as opposed to competing against one another. ICAI has really put itself on the right path to make the Netherlands an important research AI hub.’

What were the main questions these delegations came with?

‘A lot of them were amazed by the amounts of money the labs received for fundamental research. Their main question was basically how the ICAI labs managed to do that. You don’t see this willingness of companies to fund fundamental research in many other countries. To get a five-year commitment from companies, that’s just phenomenal.’

What will be the main challenge for ICAI in the future?

‘ICAI has got that nimbleness. It’s very agile and flexible. Prestigious organizations like the European ELLIS, the Royal Society in the UK or the KNAW in the Netherlands have become so large that things can start to move very slowly. ICAI is growing right now, but I hope it can keep that nimbleness. I think this is possible if ICAI keeps evaluating and keeps seeing what it needs to be.’

Would you like to get to know your fellow ICAI members and have a drink with them? Sign up for the ICAI National Social Meetup – Summer Drinks on June 30th!

ICAI Trio Interview: AI entrepreneurship and a shared ownership of talent

It has been four years since ICAI kicked off, and in that time it has grown from 3 to 29 labs. How is ICAI doing so far? We take stock of the situation with a lab manager, a PhD student and the scientific director.

Efstratios (Stratis) Gavves is the former lab manager of the QUVA lab, co-director of the QUVA and POP-AART ICAI labs, associate professor at the University of Amsterdam and co-founder of Ellogon AI BV.

Natasha Butt is a first-year PhD student in the QUVA lab and holds an MA degree in Data Science and a BA degree in Econometrics.

Maarten de Rijke is the scientific director and co-founder of ICAI, professor of AI and Information Retrieval at the University of Amsterdam.

What was ICAI’s original purpose? Has that changed in the last four years?

Maarten: ‘The original vision was that we felt that more needed to be done to attract, train and create new opportunities for AI talent, while at the same time we wanted to work with a diverse set of stakeholders on shared research agendas. The underlying idea was that AI can make a positive contribution in lots of societal areas. We have been trying things out. And you learn by doing; that has been the mantra since day one and that will not change. One thing that is changing though, is that the first ICAI labs have matured and that there is a follow-up contract that is not just about attracting and training talent, but also about retaining talent. With the Launch Pad program we want to help the PhD students find their next opportunity in interesting places, ideally here in the Netherlands. Similarly, as PhD students begin to graduate from their lab, some of them have entrepreneurial plans. With the new Venture program we look at how we can help them connect to the right stakeholders and funding. So it’s still the same mission, but the instruments expand.’

ICAI has grown from 3 labs to 29 labs in the past four years. What is it like to work in a research lab with external partners?

Natasha: ‘What I really like is that you get to meet and collaborate with so many different researchers within industry. For a PhD student starting out this is really interesting and exciting. I can’t really weigh in on the negatives because we haven’t published a paper yet.’

Maarten: ‘Especially in labs where the non-academic partners don’t have a long tradition of research, it can be a challenge to identify good problems that matter academically and industrially. You need good problems that don’t need ten years to solve, but that also cannot be solved in three months. Aligning the horizons and expectations is something that needs attention.’

Stratis: ‘Working with external partners is inspiring and fruitful. The cornerstone of a successful relationship is managing expectations. Generally one could say that companies like stability and structure, while researchers at the university thrive on creative chaos. Finding a good balance between these two can bring great results. In fact, in my experience I have seen this work quite smoothly, because we have been lucky that the people involved are very conscientious and open-minded.’

‘From now on, funding will come less from government structures and more from private initiative.’

Stratis Gavves

To what extent do universities and companies or governmental organizations need each other in developing AI that can make us more future-proof?

Maarten: ‘We see a slow change right now in the ownership of big challenges. It is no longer just governmental, academic, or industrial, but much more a shared ownership. We are coming to the realization that the best way to tackle climate, health, energy and logistics problems, is to go after these problems together. All of these big challenges are multi-stakeholder and multi-disciplinary. For example, when you’re working on computer vision, at some point you will run into some legal or ethical questions that are tough. Think of all the deep fakes. On the one hand these generative models are fantastic and creative, but there’s another side. An algorithm developer should hang out every now and then with people who bring a different perspective to the table.’

Natasha, you are from Great Britain. Stratis, you are from Greece. Are there initiatives like ICAI over there?

Stratis: ‘I think ICAI is a very successful experiment that will be followed, one way or the other, by other countries. We had some preliminary conversations in Greece and I think that there is interest for sure.’

Natasha: ‘In the UK I haven’t come across many things like ICAI. But when I studied at UCL in London, there were a lot of AI societies and entrepreneurship societies that would hold events and invite students from other universities. So there’s definitely an appetite for it. Especially in London there are a lot of hubs and all the universities are pushing it.’

‘Collaborating with so many different researchers within industry is really exciting for a PhD student just starting out.’

Natasha Butt

Are there countries that were an inspiration for ICAI?

Maarten: ‘Yes, the Von Humboldt fellowships in Germany for example. And especially the attitude behind it was an inspiration for us: start with talent, bring the talent to the country, and then invest and create opportunities. We also saw the same attitude in France.’

Stratis: ‘The instrument that ICAI presents is an innovation by itself. And this success will be broadcast to other countries, because there is a need for it. This is how things will work from now on: funding will come less from government structures and more from private initiative. People are searching for alternative sources of funding and I think that ICAI presents a fair way of doing this in such a way that both sides benefit.’

What are the plans for the next four years?

Maarten: ‘We are working on a large new program, funded by NWO, to expand ICAI with 17 new labs. I hope that by the end of this year we will have around 50 labs. Part of the plan is to expand to all academic cities. We would like to reach out and help people there to get going. Another thing is that our colleagues in Nijmegen, with whom we are heavily involved, have set up AI course programs for medical professionals. We are trying to see how we can do similar things for other sectors, such as logistics, and for civil servants.’

Stratis: ‘My goal is to get Natasha and her lab mates to graduate. And to attract more industries to the concept of ICAI, perhaps export it outside the Netherlands and maybe even to Greece. And of course, to keep doing top-notch research.’

‘More and more people are coming to the realization that the best way to tackle climate, health, energy and logistics problems, is to go after these problems together.’

Maarten de Rijke

Do you have questions for each other?

Natasha: ‘I would like to know what plans there are for the future. What sort of events do you hope to put on, especially from a PhD perspective?’

Maarten: ‘We want to organize whatever the PhD students need. So we should listen to what would help you. The ICAI Launch Pad program helps PhD students who are towards the end of their PhD trajectory. But of course early-stage PhD students have different needs, plans and questions. So we’d like to hear how we can make this a better experience. So far, we have put a lot of focus on sharing expertise and experiences, but of course there’s more to being an AI PhD student than that. You, Natasha, and the other PhD students should be the ones who tell us.’

And where can she go with her ideas?

Maarten: ‘YaSuei Cheng, the ICAI community manager, can help organize things or find the right people to get something going. And here in Amsterdam we have quite some experience in setting up internships. But I’m sure that there are many things that we’re not seeing, so please let us know.’

Stratis: ‘I was wondering, what is the plan for bringing new spin-offs into existence? Is there guidance there? Let’s say that Natasha comes up with a great idea that her lab partner Qualcomm is not interested in. What should she do?’

Maarten: ‘We’ve teamed up with an initiative called TTT-AI. This organization is all about tech transfer and helping people find out if there’s a market for their ideas. This initiative operates across the whole country. It wants to connect the local ecosystem with local researchers, but also share systems across the country.’

The next ICAI Day on June 1st will be about AI entrepreneurship. Stratis, as co-founder of Ellogon AI, you know a thing or two about this. What is it like to launch a company from lab to the market?

Stratis: ‘I’m still learning, so I can’t tell you the full story from A to Z, but maybe from A to F. It is a lot of fun actually. We are the new generation of academics. It is expected, or at least appreciated, that we explore possibilities like this. But I’m not sure that everyone will be cut out for it. In a way we are working double jobs. It’s really rewarding though, in many ways. What I found really interesting is that so many academics and researchers have already moved to industry. And maybe there is something beyond the obvious argument that people only go there for the better salaries. I can confirm that creating your own company, working on real problems and solving completely different issues, is really interesting.’

Natasha, how do you feel about making the move to industry in the future?

Natasha: ‘I’m pretty open-minded. It would be really nice and useful to hear the experience of people who went to industry and people who stayed in academia. Doing internships would also help.’

Maarten, what would you advise PhD students in finding the next step?

Maarten: ‘I think it’s a great idea, like Natasha says, to try out a few internships. I generally recommend going to a completely different team and working on different problems. A different experience helps you shape your thinking about what you’d like to do next. Maybe even consider doing an internship with an NGO. The Red Cross, for example, has loads of interesting challenges.’

And what can be done to help researchers to set up AI startups?

Maarten: ‘Mentoring is always useful. To hear other voices and to speak with friendly but critical colleagues who can walk alongside you for a while and connect you to potential customers and challenging problems.’

Stratis: ‘Once you’re in a company, you’re living on borrowed time until you really make it. Learning how to run a company while developing a product can be hard. So one thing that can be done is to familiarize people with this aspect of entrepreneurship so that they can anticipate the difficulties. And there are so many things that can be quite easily solved that can still make a huge difference.’

Would you like to meet your fellow ICAI members? On June 1st, the hybrid Summer Edition of the ICAI Day takes place. The theme of this edition is ‘AI Entrepreneurship: From the lab to the market’. Sign up!

Coming up: The National course on AI & Ethics

The National AI course will be continued! More than 300,000 people have been reached since its launch in 2018. This course explains the basics of artificial intelligence in an understandable way. A special course on AI and Ethics will be launched after this summer.

This new course focuses on topics such as algorithmic bias, combating disinformation, the power of tech companies and the importance of human rights in the digital world.

With experts from many walks of life the course wants to raise awareness around the pitfalls of digitization and stimulate the debate on human-centered AI. Experts include Sennay Ghebreab (ICAI), José van Dijck (Utrecht University), Mieke van Heesewijk (SIDN Fund), Merel Koning (Amnesty International), Quirine Eijkman (College for Human Rights) and Sander Duivestein (author of Echt Nep).

Upon completion of this free online program, the student will receive a certificate.

After summer, the course can be found at https://ethiek.ai-cursus.nl.

TTT-AI Workshop: Setting up an AI Startup

On July 5th, 2022, Thematic Technology Transfer – Artificial Intelligence (TTT-AI) will organize a two-hour workshop on setting up an AI startup.

TTT-AI offers a specialized venture-building program and investment fund for knowledge- and research-based AI startups. During this two-hour workshop at the University of Amsterdam, you’ll get a clear understanding of the different stages an AI startup goes through. Topics include technology development, product-market fit, team formation, customer relations, IP protection, the startup lifecycle, funding, and many more. During the workshop, the presenters will not only teach you some important tools, but will also let you use and experiment with them. To close the workshop, TTT-AI has invited two successful startups to share their best practices and answer some of your questions.

Please send an email to Giulia Donker (g.donker@uva.nl) if you want to come to the workshop.