The Complexities of Train Schedule Management: A Look at NS’ Planning Process and the Push for Optimization – An interview with Bob Huisman

Train schedules are a crucial part of our daily lives, but have you ever thought about the complexity behind creating them? From ensuring seats are available to deploying personnel, there are numerous factors to consider. Despite significant advancements in planning automation, there is still room for improvement, especially in hub planning optimization. So, what’s happening under the hood at NS, and why is optimizing railway planning so challenging? We interviewed Bob Huisman, Manager of Research and Development Hub Logistics at NS, to learn more about how AI is revolutionizing the train schedule management process.

Bob Huisman is a respected figure in the railway industry, with a career spanning several decades. Huisman currently holds the position of Manager Research & Development Hub Logistics at NS, where he is responsible for delivering innovative methods and tools for planning and scheduling shunting-related processes at railway hubs, as well as assessing the logistic process capacity of railway hubs. Beyond the technical and scientific aspects of his work, Huisman sees it as an opportunity to make a social contribution. He describes himself as ‘having one foot firmly planted in the business world and the other in academia’. His career path is a testament to his ability to bridge the gap between research, development, creativity, and challenging problems. Huisman is one of the principal investigators of the LTP ROBUST programme and the chair of the users’ committee.

‘At first glance, the process of train schedule management may seem simple – travelers at the station, a train ready to board.’ But as Huisman points out, while gazing in the direction of Utrecht Central Station, one of the important hubs in the railway network, it is actually a complex interplay of factors. Hub planning, as part of the overall railway planning, involves ensuring trains are in the correct composition to maximize seat availability, that they arrive at the right platform at the right time, and that they are in good technical condition. Additionally, once a train reaches its final destination, it must be checked for technical issues, cleaned, potentially rearranged, and parked in a way that maximizes space efficiency.

But that’s not all. Railway planning and control involves fleet assignment and the deployment of personnel, which comes with its own set of complicated and important limiting factors. Employment conditions, work variation, and the ability of colleagues to come home at the end of the day are just some of the factors that Huisman’s team must take into account. ‘Train schedule management is almost paradoxical. As a traveler you may experience that we have one timetable during the entirety of the year, but under the hood, the rail sector makes a unique plan for every single day of the year. This involves planning for the timetable, fleet and staff, as well as for the 34 hubs – the stations with connected yards where multiple train lines converge – months in advance.’

Over the past two decades, NS has seen a significant increase in automation when it comes to network planning. However, there’s still no automation in hub planning, which Huisman notes remains an obstacle to overcome. Currently, this daunting task rests solely on the shoulders of human hub planners, who are responsible for what is called the “knitting process”. ‘The “knitting process” involves juggling a multitude of factors simultaneously and making decisions in real time, ensuring that passengers arrive at their destinations safely and smoothly.’

How do those train scheduling experts manage to make everything run like clockwork?

‘It is a multi-step process. First, we create a timetable, then assign our fleet and lastly assign our colleagues.’ Albeit the largest, NS is only one of seven operators on the Dutch rail network, with ProRail responsible for infrastructure management, capacity allocation and traffic control. Huisman notes that planning NS’ train operation involves multiple iterations before arriving at the final basic pattern, which is then translated into a blueprint for each weekday, and subsequently each specific day. The schedule is finalized a month in advance for a specific calendar day, and from then on NS still has the ability to make adjustments up to two days prior. ‘Once the plans are handed over for operational execution, controllers at ProRail and NS must act quickly in the event of a collision, malfunction, or employee illness. At that moment, it resembles tinkering more than actual planning by optimization’, Huisman notes.

NS has been making significant progress since 2015 towards automating the hub planning, but Huisman emphasizes that there are many “dirty details” that need to be taken into account. One of the most gratifying moments of this project came in 2020, just before the onset of the pandemic. ‘We were able to demonstrate, on the basis of a proof of concept, that we are going to make it’, Huisman recalls. ‘Moments like these, when colleagues have confidence in the success of a project and more resources become available, keep me young. I have resolved to have this project standing by the time I retire,’ he says with a sense of pride. ‘This project represents a unique collaboration with young researchers, a continuous flow of PhD and master students, and has been one of the most personally rewarding projects of my career.’

Why is automation and optimization so important?

‘Two goals for the rail system are difficult to reconcile: on the one hand, meeting a growing demand for transport and, on the other hand, maintaining a robust operation. The driving force for automated hub-planning support is the need to fix plans as late as possible and to be able to make changes online. That will improve the robustness of the transportation process, rail infrastructure usage, and seat availability,’ Huisman notes.

Currently, to anticipate uncertainties and unforeseen disruptions, slack is incorporated into the planning, with respect to both space and time. This slack allows the railroad planners to be flexible and to deal with details that only become clear on the day of operation itself; it gives space to breathe when things go wrong. ‘Reducing slack to facilitate future passenger volumes increases the risk of a domino effect of disruptions; however, fast automated support for planning and control may compensate for this. All of these challenges beg the question: how do we achieve sustainable growth, act more dynamically and be more robust, while using our existing resources more efficiently?’

NS seems to have already invested considerable resources into research that helps professional planners create more optimized plans in less time. Why is this process so tough?

‘Good question. Now, picture the entire rail system as a massive, interconnected wirework – a complex maze that requires meticulous planning to operate smoothly. There are countless variables that can impact the system, making optimization a daunting task. It’s not just a matter of flipping a few switches, pushing a few buttons and moving a few trains around’.

‘Hub planning is a combinatorial problem with an enormous search space in which it is hard to find good and feasible solutions. Moreover, the need to fix plans as late as possible requires modeling many dirty details of the real world and complex safety rules, which excludes linear optimization methods. When we started in 2015, hub planning had been the topic of a well-known international competition in the field of Operations Research. Curious as we were, we reached out to all the competition’s prize winners to see if they knew something we didn’t. Eventually we concluded that no practical solution had been found yet, which was why we decided to set up a long-term research and development program ourselves. One way or another, we had to find a way. Together with our academic partners we finally succeeded in building a working system, mainly based on powerful local search, combined with other methods like linear optimization and constraint programming. We coined it the Hybrid Integrated Planning method (HIP). Although the system generates plans that are acceptable to professional planners, continuation of the research is needed to enhance the system’s functionality.’
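The HIP system itself is far richer than any snippet, but the ‘powerful local search’ Huisman mentions can be illustrated in miniature. The sketch below applies hill-climbing local search to a hypothetical, heavily simplified track-assignment problem; the train lengths, track names and capacities are invented, and real hub planning involves many more constraints (safety rules, orderings, service tasks):

```python
import random

def plan_cost(plan, lengths, capacity):
    """Cost of a parking plan: total train length parked beyond each track's
    capacity. plan[i] is the track assigned to train i; a feasible plan has
    cost 0."""
    load = {}
    for train, track in enumerate(plan):
        load[track] = load.get(track, 0) + lengths[train]
    return sum(max(0, used - capacity[t]) for t, used in load.items())

def local_search(lengths, capacity, iterations=5000, seed=0):
    """Hill climbing: propose moving one train at a time and keep any move
    that does not worsen the cost (equal-cost moves help escape plateaus)."""
    rng = random.Random(seed)
    tracks = list(capacity)
    plan = [rng.choice(tracks) for _ in lengths]   # random starting plan
    cost = plan_cost(plan, lengths, capacity)
    for _ in range(iterations):
        if cost == 0:                              # feasible plan found
            break
        train = rng.randrange(len(lengths))
        previous = plan[train]
        plan[train] = rng.choice(tracks)           # propose a single-train move
        new_cost = plan_cost(plan, lengths, capacity)
        if new_cost <= cost:
            cost = new_cost                        # accept non-worsening moves
        else:
            plan[train] = previous                 # undo worsening moves
    return plan, cost

lengths = [200, 150, 100, 100, 80]                 # train lengths in meters (invented)
capacity = {"A": 300, "B": 250, "C": 200}          # track capacities (invented)
plan, cost = local_search(lengths, capacity)       # cost 0 means every train fits
```

The appeal of local search here is exactly what Huisman describes: the cost function can absorb arbitrary “dirty details” that would break a linear formulation.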

‘Following the progress made by DeepMind, preceding AlphaZero, we started a research track specifically focused on applying deep reinforcement learning to multi-agent pathfinding. The idea was to build up a logistic plan by generating a chain of individual actions, representing train shunting movements. After six years of research together with our academic partners, we found that brute force local search still outperformed our various complex reinforcement learning approaches, even on simplified models of reality. Taking a step back to see the forest for the trees, we halted the program and shifted our focus. Our current research direction is aimed at using machine learning to complement local search, constraint programming, and linear optimization. The challenge is to find and modify plans more quickly, specifically plans that are easily understood by humans. To build a system that exhibits some intelligent behavior in the future, it must be able to learn from previous situations and to communicate at some abstract level with its users. We still have a long way to go, which asks for perseverance and creativity’, Huisman notes.

“Humans in the loop” does not conform to the traditional view of automation, in which AI systems entirely replace human operators. However, researchers have recently started to view automation as a collaborative process, in which humans and machines work together and complement each other’s strengths and weaknesses to achieve a common goal.

‘In our unique use case we are looking for automatic tools to support people’, Huisman emphasized. ‘Hub planning is a challenge that humans are quite good at and a task that requires substantial creativity, ingenuity and the ability to color outside the lines – but it takes a lot of time. On the other hand, algorithms can speed up the planning processes but currently cannot handle their full complexity. That is why we are focused on creating systems that help people to adjust the “knitting process” more efficiently, reducing slack and maximizing space usage to meet the growing demand for train travel.’

When it comes to AI, Huisman has a unique perspective. ‘AI research can bring us closer to understanding human intelligence, yes. However, as long as we have not understood and defined human intelligence, we should stop talking about artificial intelligence that has to replace humans.’ Instead, Huisman believes that we should focus on building super tools that exhibit intelligent behavior, regardless of whether there’s a steam engine or a neural network powering them. ‘I see neural networks and reinforcement learning as ingredients, among others, to create value’, Huisman explained, adding that it’s all about developing an overall system that can deal with the intricacies of logistic planning in cooperation with humans. ‘AI has the potential to disrupt the approach we take in this, but it is not just about generating plans automatically; you have to communicate the output to planners and make it understandable to them, give humans control over plan qualities, and link the output to the systems of other parties.’

Recently, ICAI announced the LTP ROBUST program, a new initiative supported by the University of Amsterdam and 51 partners from government, industry, and academia. As part of the program, 17 new AI labs are being established that focus on the development of trustworthy AI technology to address socially relevant issues in areas such as healthcare, logistics, media, food, and energy. Can you elaborate on the importance of trust in AI systems and your role in the program?

‘Trust is a crucial factor in the adoption of AI systems. Our perspective is that the public, customers, travelers, patients, users and authorities all base their judgment not only on the functionality of the AI algorithm in isolation. Rather, they base it on the whole of interacting ICT, the organization behind it, the procedures put in place to regulate it, the UI, and the availability of the system. Trustworthiness of an individual AI algorithm is a necessary, but not sufficient condition for its effective use in a system. Therefore, research into creating such AI systems necessitates a symbiotic relationship between academia and industry. In the end, private or governmental organizations set the specifications of the system, design and build the system and operate the system over the years. In LTP ROBUST, research, development and system engineering meet to obtain social impact through operational, trustworthy systems.’

Trust is also contextual and domain-specific; the risks of a medical diagnosis differ from the risks of logistics planning or music recommendation, and people rely on different systems in different ways. The program’s approach is to start with a system vision and a targeted research question for each lab, with the private partners playing a vital role in validating the output and asking the right questions. ‘While the validation and questioning may vary for different fields, the general approach to winning the trust of the user can be similar. As one of the principal investigators and the chairman of the overarching user committee, my role is to oversee the cooperation between the partners and ensure knowledge transfer between labs.’

The RAIL Lab, a collaboration between Delft University of Technology, Utrecht University, ProRail and NS, is one of the LTP ROBUST Labs joining ICAI. Its goal? Working towards algorithmic support to ensure safe and reliable logistic operations and capacity planning that is trusted by human experts. Explainable AI plays a role in this.

‘Explainability is often seen in the research world as: if I could just explain why my algorithm came to this conclusion, and if I change something about my input, how would my evaluation change? It’s almost an internal accountability of your algorithm to the outside world, which is necessary, but might only be sufficient to accept or reject an individual prediction of the system. The question is whether that is sufficient for humans to use and accept the system as a sparring partner for decision support.’

Huisman emphasized the importance of setting standards for what an explanation of an algorithm should look like. ‘Authorities often require a deeper understanding of how an algorithm works, including how it makes considerations, what information it looks at, what information is necessary to make a good choice, and how uncertain the algorithm is in its output. Furthermore, for specific instances, humans may ask counterfactual questions to understand why some decision is proposed and not some other. By understanding the requirements for human decision-making, we can create more effective explanations that provide a more complete understanding of the algorithm’s decision-making process. Since the user is often responsible for the final decision to be taken, they want to be sure it is the right one.’

To address these challenges, each LTP ROBUST lab will include a researcher with a background in social and behavioral science. The RAIL Lab is a testament to this effort, with one PhD student focusing on the cooperation between human and AI planners. This study will reveal requirements, expectations, and potential pitfalls of human-AI interaction, specifically of interaction with algorithmic planners. These results will be augmented with data science techniques to extract important factors from past decision-making and planning processes, to develop a computational cognitive model of the decision and planning process.

Huisman sees a colorful future ahead: ‘NS has its fair share of critics – some say it’s too big, bureaucratic, or slow. On the other hand, I know few other companies that have invested as much time and resources into innovative projects like railway planning as NS has in the Netherlands.’ Optimizing rail systems is a complex task that will require many more years of research and a delicate balance between human expertise and advanced AI algorithms. However, Huisman and his colleagues are committed and up for the challenge. ‘With LTP ROBUST and RAIL Lab’s ongoing efforts, we can hope to see more trustworthy, efficient and seamless rail systems in the near future.’

———

We hope that through this interview you learned a bit more about NS and the intricacies of railway planning. NS, ProRail and their academic partners, TU Delft and Utrecht University, are currently recruiting PhD students for the RAIL Lab. If you are interested in a complex technical AI challenge with a social contribution, check out their webpage: https://icai.ai/icai-labs/rail/. The next time you’re waiting for your train, take a moment to appreciate the intricate dance of 22,000 employees happening behind the scenes to help you get to your destination, and maybe consider joining us!

Artificial Intelligence in Agriculture and Weather Forecasting

The world is facing a number of converging challenges: population growth, more frequent extreme weather events, and a need for the sustainable production of nutritious food. Some say that machine learning can help us mitigate and prepare for such consequences of climate change; however, it is not a silver bullet. In this interview, Congcong Sun and Chiem van Straaten discuss the challenges of machine learning in agriculture and weather forecasting, and the similarities and differences between their respective fields.

On November 16th, 2022, ICAI organizes the ‘ICAI Day: Artificial Intelligence and Climate Change’ where Congcong, Chiem, and many other researchers will talk about how AI can be used to mitigate and prepare for the consequences of climate change. Want to join? Sign up!

Congcong Sun is an assistant professor in learning-based control at Wageningen University & Research (WUR) and Lab Manager of the ICAI AI for Agro-Food Lab. Her research interests are in using learning-based control to explore the overlap between machine learning and automatic control, and in applying it to agricultural production.
Chiem van Straaten is a PhD student at the Vrije Universiteit Amsterdam (VU) and the Royal Netherlands Meteorological Institute (KNMI). His research focuses on improving sub-seasonal probabilistic forecasts of European high-impact weather events using machine-learning techniques.

Congcong and Chiem, could you tell me what your research is about, and how it is connected to artificial intelligence?

Congcong: Yes, of course. My research focus is on learning-based autonomous control in agricultural production. For instance, in a greenhouse or vertical farm, climate control can be optimized to make the crops grow under more favorable conditions and produce a better quality crop. Another example is logistical planning for agro workers, such as harvesting robots in a multi-agent setting. Learning-based control applications are complex, which is why I mainly use deep reinforcement learning, which is the combination of reinforcement learning algorithms with neural networks.
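Congcong’s deep reinforcement learning work is far beyond a short snippet, but the underlying reinforcement-learning loop she refers to can be illustrated with a toy example. The sketch below uses tabular Q-learning (a lookup table rather than the neural network used in deep RL) on an invented, heavily simplified greenhouse temperature task; all states, actions and rewards are made-up assumptions for illustration only:

```python
import random

# Toy greenhouse: discrete temperature states 0..10, target 7 (all invented).
# Actions: 0 = cool (-1 degree), 1 = hold, 2 = heat (+1 degree).
TARGET = 7

def step(state, action):
    """Deterministic toy dynamics plus a reward for being near the target."""
    nxt = max(0, min(10, state + (action - 1)))
    return nxt, -abs(nxt - TARGET)

def train(episodes=500, horizon=20, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning; deep RL replaces this table with a neural network."""
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in range(11)]        # q[state][action]
    for _ in range(episodes):
        s = rng.randrange(11)                 # random initial temperature
        for _ in range(horizon):
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda act: q[s][act])
            s2, r = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max(range(3), key=lambda act: q[s][act]) for s in range(11)]
# The learned policy heats when too cold, holds at the target, cools when too warm.
```

Real greenhouse control replaces this single temperature state with the many interacting variables Congcong mentions later (humidity, irrigation, light), which is precisely why neural networks are needed in place of a table.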

Chiem: The research that I conduct pertains to studying and making predictions about weather and climate extremes. Many industries, such as agriculture production, depend on accurate weather forecasting. Understanding our climate better is crucial for preparing ourselves for extreme weather and at the same time allows industries to use their resources more efficiently. However, predicting weather events far in advance is extremely tough due to time lags, the conditional nature of observed patterns, and the multitude of factors influencing one another. Machine learning has the potential to deal with such levels of complexity, which is why I am interested in applying it to weather forecasting.

Do you see any similarities or differences between your research?

Congcong: I believe our research is interconnected. As Chiem mentioned, weather patterns are a large source of uncertainty within the agricultural industry, particularly for those applications where the farm is located in an uncontrolled environment, such as open-air farms.

In agriculture, however, the weather is not the only source of uncertainty. Uncertainty also arises from the crops themselves. Different crops have different optimal growing conditions, which means that a control policy that is effective for one crop might not be effective for another. Even if you were to place a different crop in the exact same greenhouse environment, you would need a vastly different policy for controlling it. What are your thoughts on that, Chiem?

Chiem: Yes, you are trying to tackle something that inherently is multivariate, which is similar to weather forecasting. Although I am not well-versed in the specifics of agriculture, I can imagine that you need to take into account many factors such as irrigation, lighting, and temperature?

Congcong: Yes, indeed. When we seek to regulate the climate within a greenhouse, there are a lot of variables we need to consider, like humidity, irrigation, fertilization, light, and temperature. Analyzing the relationships between these variables requires knowledge from various disciplines such as plant physiology and biology. Additionally, certain relationships might not have been discovered yet, which adds to the complexity of balancing these variables. The combination of machine learning and automatic control can help us explore some of these relationships and translate them into knowledge about how to best regulate these environments.

Chiem: Ah, exactly. Here, I see a great similarity between autonomous control of agriculture environments and the prediction of weather patterns. For a long time, physical numerical prediction models have been developed in order to incorporate as many of the processes that are known to be important for weather prediction as possible. However, it is also known that these models are not perfect, as the weather is extremely complex. Therefore, we attempt to replace parts of the numerical models with statistical models to capture yet-to-be-discovered processes.

Congcong: Yes, indeed. What kind of data do you use to make weather forecasts?

Chiem: In the non-statistical forecasting models specifically, we use a plethora of data to make weather forecasts, including humidity, pressure, air temperature, and wind speed. Like the input, the output is often multivariate, similar to learning-based agriculture control. Another similarity might be that in both domains you encounter challenges due to cycles. For instance, I could imagine that in agriculture you need to take the growing cycle of plants into account, which is different for every plant. In weather forecasting, you also have to deal with many different cycles at the same time, such as the seasonal cycle, weekly cycles, and daily cycles.

Congcong: Yes, exactly! Plants have different optimal growing cycles. In greenhouses with multiple plants, it could be that different growing cycles overlap similarly to how cycles overlap in weather forecasting. It is interesting to see so many similarities between our two domains!

In your conversation, you mentioned some applications of machine learning in your respective domains. One challenge we often hear about is related to trustworthiness, especially in applications with high degrees of uncertainty. Are companies in your industry enthusiastic or reluctant to work with machine learning?

Congcong: Greenhouse climate control is quite mature in the Netherlands. Some commercial greenhouses have already implemented automated control; however, we are still not making use of all available cutting-edge sensing techniques. The adoption of such techniques by farmers might be slow, since they are expensive and, if they do not work as intended, they could ruin a farmer’s business. Also, farmers might be hesitant to trust machine learning, since it is a relatively new technology.

Chiem: As Congcong noted, the trustworthiness of a system is crucial for its widespread acceptance. Applications such as heatwave prediction are not quite ready for widespread use, because heat waves have to be predicted far in advance, which is immensely tough to do accurately. Short-term forecasting applications, such as rainfall prediction, do have a track record of successful predictions, however. Moreover, weather forecasting has rapid update cycles, so if you make an errant forecast today, you still have a chance tomorrow to forecast the same thing with greater accuracy. For heatwave prediction, errant predictions have far more severe consequences. In agriculture, I could imagine the consequences are similarly severe. What do you think, Congcong?

Congcong: I agree with you, Chiem. Plants are quite sensitive, so if a wrong prediction leads to hazardous conditions in which the plants cannot survive for long, the grower might lose all of their plants. While system control in agriculture does not come with direct harm to humans, as in autonomous driving, the margins on crops are small. Therefore, growers are in general more averse to using machine learning and statistical modeling approaches.

Chiem, during the ICAI Day, a day revolving around the numerous challenges regarding machine learning and climate change, you will walk us through a heat wave prediction use case. What would you say the largest hurdle is in this research?

Chiem: The primary challenge in climate change research is the interaction between processes across different scales. On a local scale, processes such as heat exacerbation due to dry soil conditions or particular local atmospheric configurations can influence heat waves. However, such local conditions can also be synchronized across the scale of the complete northern hemisphere, which means that hundreds of kilometers away, very specific conditions might also be an indication of an impending heatwave. This becomes increasingly complex when you, for instance, include global connections.

The interaction across these many scales creates challenges in determining the resolution of the data you need and also what algorithm is most suitable to use. Additionally, climate change is actively changing our data distributions as we speak. Data that we gathered in the past might therefore have different weather dynamics than the weather right now, which makes generalizing very difficult. To an extent, your machine learning model is always extrapolating.
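Chiem’s point that “your machine learning model is always extrapolating” can be made concrete with a toy stationary model. Below, a hypothetical climatology “model” (simply the historical mean) is evaluated on data shifted by a warming trend; all numbers are invented for illustration:

```python
def fit_climatology(history):
    """A stationary 'model': predict the long-term mean of past observations."""
    return sum(history) / len(history)

def mean_error(prediction, observations):
    """Average signed error of a constant prediction on new observations."""
    return sum(obs - prediction for obs in observations) / len(observations)

past = [14.0, 15.0, 16.0, 15.0]        # historical summer temperatures (invented)
model = fit_climatology(past)           # predicts the past mean, 15.0
warmed = [t + 1.5 for t in past]        # same weather patterns, warmer climate
shift = mean_error(model, warmed)       # systematic error equal to the warming
```

A model trained on a changing distribution inherits exactly this kind of systematic bias, which is why generalization under climate change is so difficult.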

That is intriguing, thank you for your explanation! Congcong, during the ICAI Day you will moderate a lunch table discussion on Artificial Intelligence and Agriculture. What do you plan to discuss and why should people join?

Congcong: During the lunch table discussion, I would like to come together and talk about the current challenges of applying AI to agriculture, the popular and potential AI solutions to these challenges, and the future trends of applying AI to agriculture. I believe it is valuable to join, since it will be a very good chance for researchers, engineers, and students who are working in this area, or who are simply interested in it, to ask questions, share opinions, and perhaps get some answers through the discussions! Beyond that, it will also be a very good chance to build your network and explore potential collaborations for the future.

To round off: when would you say your research is a success?

Congcong: Any progress in my research I consider a success, and it makes me happy: my PhD students achieving a small step, solving pressing challenges for farmers, or making food production more sustainable by reducing emissions and energy use.

Chiem: One large success would be the ability to answer questions regarding climate change attribution such as: how much has climate change exacerbated the impact of this specific extreme weather event or made it more frequent? Being able to answer such questions confidently would allow us to hold parties, such as big emitters, accountable. While far off, I believe that machine learning has the potential to give us the tools necessary to do this in the future.



Using Artificial Intelligence to Enable Low-Cost Medical Imaging – Phillip Lippe interviews Keelin Murphy

Medical imaging is a cornerstone of medicine for the diagnosis of disease, treatment selection, and quantification of treatment effects. Now, with the help of deep learning, researchers and engineers strive to enable the widespread use of low-cost medical imaging devices that automatically interpret medical images. This allows low and middle-income countries to meet their clinical demand and radiologists to reduce diagnostic time. In this interview, Phillip Lippe, a PhD student at the QUVA Lab, interviewed Keelin Murphy, a researcher at the Thira Lab, to learn more about the lab’s research and the developments of the BabyChecker project.

Keelin Murphy is an Assistant Professor at the Diagnostic Image Analysis Group in Radboud University Medical Center. Her research interests are in AI for low-cost imaging modalities with a focus on applications for low and middle-income countries. This includes chest X-ray applications for the detection of tuberculosis and other abnormalities, as well as ultrasound AI for applications including prenatal screening.
Phillip Lippe is a PhD student in the QUVA Lab at the University of Amsterdam and part of the ELLIS PhD program in cooperation with Qualcomm. His research focuses on the intersection of causality and machine learning, particularly causal representation learning and temporal data. Before starting his PhD, he completed his Master’s degree in Artificial Intelligence at the University of Amsterdam.

The QUVA Lab is a collaboration between Qualcomm and the University of Amsterdam. The mission of the QUVA Lab is to perform world-class research on deep vision: automatically interpreting, with the aid of deep learning, what happens where, when, and why in images and video.

The Thira Lab is a collaboration between Thirona, Delft Imaging, and Radboud UMC. The mission of the lab is to perform world-class research to strengthen healthcare with innovative imaging solutions. Research projects in the lab focus on the recognition, detection, and quantification of objects and structures in images, with an initial focus on applications in the area of chest CT, radiography, and retinal imaging.

In this interview, both Labs come together to discuss the challenges in deep learning regarding the medical imaging domain.


Phillip: Keelin, you witnessed the transition from simple AI to deep learning. What do you think deep learning has to offer in medical image analysis?

I believe deep learning has a huge role to play in medical image analysis. Firstly, radiology equipment is expensive and requires the training of dedicated physicians, which means that low and middle-income countries cannot meet their clinical radiology demands. Using deep learning-powered image analysis therefore has the potential to homogenize medical imaging accessibility around the world.

Secondly, even in richer countries such as the Netherlands, we can use deep learning to reduce the costs of radiology clinics. Every minute a radiologist spends looking at an x-ray, for example, is expensive, and radiologists have to review a lot of x-rays every day. While every x-ray still requires the radiologists’ utmost attention, many of these x-rays actually show no abnormalities. Deep learning could be used here to prioritize the radiologists’ worklist, putting cases that seem normal at the bottom and cases that are deemed urgent at the top of the list. When artificial intelligence can really be relied upon, we could even start removing items from the radiologists’ workflow entirely.
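The prioritization Keelin describes is, at its core, a sort over model outputs. A minimal sketch, assuming a hypothetical model has already produced an abnormality score per case (the case names and scores below are invented):

```python
def prioritize(cases, scores):
    """Order a radiology worklist so the most suspicious cases come first.

    'scores' maps case id -> a model's predicted abnormality score in [0, 1];
    likely-normal cases sink to the bottom of the queue.
    """
    return sorted(cases, key=lambda case: scores[case], reverse=True)

cases = ["case_a", "case_b", "case_c"]
scores = {"case_a": 0.12, "case_b": 0.93, "case_c": 0.48}   # hypothetical model outputs
queue = prioritize(cases, scores)    # case_b is reviewed first, case_a last
```

The hard part, of course, is not the sort but producing scores reliable enough that pushing a case to the bottom of the list is safe.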

Phillip: You mentioned that you use deep learning, which of course encompasses many kinds of neural networks, such as graph neural networks (GNNs) and transformers. Since you work in imaging analysis, I assume you mostly work with computer vision models. Are you using convolutional neural networks (CNNs) for classification and segmentation, or do you go beyond that scope?

As you mentioned, we almost always use a CNN, where the type of CNN depends on the application. More often than not, due to the confidential nature of medical data, the most important factor in determining which model to use is actually the amount of data available. Training a model on too little data risks overfitting and introduces a lot of uncertainty. We therefore have to use models cleverly to overcome such consequences, for example through class balancing, data augmentation, or adjustments to the network architecture. Other factors include the size of the model: to enable global use of the BabyChecker, for instance, the model must fit on a mobile phone, which constrains the size of the network we can use.
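One of the techniques mentioned, class balancing, is commonly implemented by weighting each class inversely to its frequency, so that a rare "abnormal" class is not drowned out by the majority class. A minimal sketch with made-up labels (not the actual BabyChecker pipeline):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so rare classes
    (common with scarce medical data) count more during training."""
    counts = Counter(labels)
    total, n_classes = len(labels), len(counts)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# Illustrative labels: 8 "normal" scans vs 2 "abnormal" ones.
weights = inverse_frequency_weights(["normal"] * 8 + ["abnormal"] * 2)
```

These weights would typically be passed to the loss function during training, so misclassifying a rare abnormal scan costs more than misclassifying a common normal one.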

Phillip: We know that deep learning models can create false predictions, so it might happen that the system indicates that a measurement looks all good, while that person actually needs to go to the hospital. How do you deal with this uncertainty and possible mistakes?

First, we should acknowledge that uncertainty is inevitable. Radiologists make mistakes, just like models can make mistakes. Only through strict quality control processes can we ensure that these models are reliable enough so that they do more good than harm. Especially in the medical field, this poses many challenges. For instance, on the technical side, we should figure out how to deal with domain shifts. On the legal side, we should determine who is responsible if the model makes a mistake and what legal actions can be taken. Those things are incredibly unclear at the moment.

Right now, I still see that artificial intelligence has a big role to play as a suggestive assistant to a radiologist if one is present, or as a screening tool when one isn’t. For instance, in Africa tuberculosis is very prevalent, but most often there is no physician available. One of the products developed in our group, now scaled by Delft Imaging, is able to detect tuberculosis-related abnormalities in inexpensive chest x-rays and refer patients for a more accurate and expensive microbiological test when necessary. While this product is not flawless, it does allow us to help people we could not have helped otherwise. So until we reach the stage where systems are sufficiently quality controlled, using deep learning for screening and suggestions can be really useful.

Phillip: This sounds similar to challenges in autonomous driving, where it is hard to determine who is really at fault in an accident. We know that another problem is that neural networks tend to be overconfident, also in situations where they should not be. Are there ways to address this problem?

Yes, I haven’t mentioned this before, but it is actually really important for getting artificial intelligence accepted in the clinical workflow. Sometimes an image of pure noise accidentally makes its way into the database due to a malfunctioning scanner. If the system still gave you a score for emphysema, you would lose faith in that system. In such cases we want the system to report that the image is very different from the images it was trained on and that the model cannot classify it. It would be even better if the system provided an interpretable explanation of why it made a certain prediction, since transparency in the prediction process is crucial for clinicians to trust the system.

Phillip: You mentioned interpretability, a topic that has gained a lot of attention recently, especially due to discussions about whether interpretability techniques are truly interpretable. Have you already tried out interpretability methods for neural networks, or are those methods still a bit too noisy?

While interpretability methods work well in theory, for me the field is still too under-researched to have practical value. One popular method for explaining predictions in the medical field is producing heat maps based on the weights of the network. However, such methods are hard to quantify and tend to look pretty rather than provide genuinely useful explanations.

Phillip: In the low data regime, where models are trained on small amounts of data, the explanations might also quickly overfit to random noise.

Yes, indeed.

Phillip: When clinicians hear about these topics in AI, are they reluctant to participate in research on artificial intelligence?

My experience is really positive, but I work mainly with doctors who are interested in what artificial intelligence has to offer. I believe that clinicians recognize that AI is coming to their field and they either have to get on board or they are going to be left behind. As I mentioned before though, the technology in most cases is not ready to be left unattended. Therefore, use cases that researchers and clinicians prefer right now, often assume a suggestive/assistant role for the AI algorithm or a screening role in scenarios where no trained reader is available.

Phillip: Since we talked a lot about data sparsity, how important is it for you to have collaborations across hospitals or medical companies to get access to data?

Collaboration with partners is super important, in lots of ways. I believe that researchers should never try to develop medical image analysis solutions without collaborating with clinicians. First off, to do research you are dependent on the availability and quality of the data that is gathered. If we want to move the field forward, we should communicate more about the following topics: which data can be used for which research purposes; how the data should be gathered so that its quality is highest; and how we can obtain patient consent to use data more often than we do now.

Secondly, there is knowledge sharing involved. Sometimes I read a paper that had no clinical input, and you can really see the difference: either the researchers made mistakes that could have been prevented, or the research has no practical value.

Phillip: Do you consider the agreement of a patient to use their data to be the biggest hurdle for developing something like a medical ImageNet?

The problem is that patients are not asked often enough whether their data can be used for commercial purposes. Even when they are asked, patients might be reluctant to share such private data without knowing how and by whom it will be used. While everybody working in the field of artificial intelligence knows that data is the cornerstone of everything, we should think about how to communicate this effectively to the community, for instance by providing more education to create public awareness of what AI is and why large amounts of data are necessary to create successful solutions.

Phillip: From the perspective of a patient, it is a small thing to give, but for the research domain, every single patient who is willing to share their data makes a big difference in enabling better medical image analysis.

Yes indeed. Still, there are challenges that need to be addressed. For instance, do patients feel comfortable with sharing their data with all companies or do they prefer to share their data selectively? What does it mean for competition if all companies have access to the same data? These are questions that we need to find an answer to, together with the community.

Phillip: Yes, perhaps patients even want to approve the specific applications their data can be used for. As scientists, we of course assume that data will be used for good, but we need to make sure that data is really only used for beneficial applications and not for applications that might harm people.

Yes, and we should also make sure that data is completely de-identified, so that the person an image was taken of can never be traced back from that image.

Phillip: Now, what is the research focus of the Thira lab?

Our research focuses on two things: the scalability of existing methods in the medical domain, and the reliability of the predictions made by the new methods we are developing. Whatever research we do, I would say the common thread is always the clinical applicability of our solutions rather than purely theoretical knowledge.

Phillip: When developing a device like the BabyChecker, do you only use data acquired with that device to train the model, or is there some domain adaptation involved?

In general, in the minimum viable product stage, we only use data acquired with the actual device, so no domain adaptation is necessary. At this early stage, BabyChecker’s software works with a selected ultrasound probe so that early adopters in our projects can gain easy access to BabyChecker. Over 70 operators who have been trained to use BabyChecker are scanning pregnant women in Tanzania, Ghana, Sierra Leone, Ethiopia, and soon Uganda as well. The data comes back to our partner Delft Imaging, where experts keep a close check on how well the software is working and where physicians determine the quality of the data. This way we make sure that the system is rigorous and that patients get the correct care.

Phillip: You have already mentioned some future improvements to the BabyChecker, where do you want to be in four years?

At the moment, the BabyChecker checks a few things: 1) the gestational age of the baby, to determine the estimated due date; 2) the position of the baby, so that when the baby is in a breech position, the woman can make sure to deliver in a hospital; and 3) the presence of twins, since this is also a high-risk pregnancy for which the woman should go to the hospital to deliver. Additionally, we are looking to perform placenta localization and detect the fetal heartbeat to discover possible pregnancy complications.

Phillip: Let’s say that in four years the field of AI has taken one or more steps forward. Where do you see that AI needs to improve, especially in the medical domain?

In general, I would like to see how we can use low-cost x-rays and ultrasounds for lots of other diagnoses, for example heart failure or lung disease. However, for such applications to be feasible, we need AI methods that work well with small amounts of training data. I think that is really the biggest challenge we have to overcome.

Phillip: In terms of evaluation, when would you consider your research to be successful? Is it when doctors use the products that you have developed or is it when you feel like there is nothing to improve in the short term?

While I believe I will never feel like there is nothing to improve, I would say my research is successful if we can reliably screen large amounts of people in low-resource settings for all sorts of illnesses and possible complications and get them referred for the treatment they need.

On October the 6th, 2022, the Thira Lab and the QUVA Lab will talk about their current work during the lunch Meetup of ‘ICAI: The Labs’ on AI for Computer Vision in the Netherlands. Want to join? Sign up!

AI technologies allow us to do more with less – An interview with Geert-Jan van Houtum

The manufacturing industry is undergoing a paradigm shift. Because of increasing connectivity, we can gather a lot of data from manufacturing systems for the first time in history. The increasing connectivity also enables the linking, analysis, and performance optimization of supply chain components, even if they are geographically dispersed. The AI-enabled Manufacturing and Maintenance Lab (AIMM) aims to accelerate developments in this field using Artificial Intelligence. In this interview with Geert-Jan van Houtum, we will take a surface dive into some complex challenges in predictive maintenance.

Prof. Geert-Jan van Houtum is a professor of maintenance and reliability in the Industrial Engineering and Innovation Sciences (IE&IS) department at Eindhoven University of Technology. His expertise includes maintenance optimization, inventory theory, and operations research, focusing on system availability and Total Cost of Ownership (TCO).

The EAISI AIMM Lab is a collaboration between Eindhoven University of Technology, KMWE, Lely, Marel, and Nexperia.

What is predictive maintenance, and what is its purpose?

Traditionally, businesses either replace components when they fail, so-called "reactive" maintenance, or use lifetime estimations to determine the best moment for maintenance, called age-based maintenance. Usually, reactive maintenance leads to machine downtime, while age-based maintenance carries the risk of replacing expensive components too soon. Predictive maintenance aims to be more proactive. Using data and AI, we can actively monitor the condition of components in real time; this allows us to predict more accurately when a component is on the verge of failure and needs replacing.
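The three maintenance policies can be contrasted in a minimal sketch; the wear limit, lifetime estimate, and sensor trace below are invented purely for illustration:

```python
# Three maintenance policies, contrasted. All thresholds are illustrative.

def reactive(has_failed):
    """Replace only after failure; risks machine downtime."""
    return has_failed

def age_based(age_hours, lifetime_estimate=1000):
    """Replace at an estimated lifetime; risks replacing healthy parts."""
    return age_hours >= lifetime_estimate

def predictive(wear_readings, wear_limit=0.8):
    """Replace when the monitored condition nears the failure threshold."""
    return wear_readings[-1] >= wear_limit

readings = [0.10, 0.30, 0.55, 0.82]  # invented wear-sensor trace
```

The predictive policy triggers replacement only when the monitored condition approaches the failure threshold, rather than after failure or at a fixed age.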

What is the role of data analysis and AI in predictive maintenance?

For many components, you know why they deteriorate over time. You know the failure mechanism, and how to measure the component’s condition. For instance, when you drive a car, you know that the profile on the tire wears down. You can regularly check to see if the amount of profile is still within safety limits and replace the tire if deemed necessary.

There are also components where the failure mechanism is known, but the best way to measure the component’s state is unknown. Before predictive maintenance can be used in these situations, a way to measure that state must first be found. Artificial intelligence may be used as part of an inspection solution, such as visual inspection using computer vision, but this is not always necessary or desirable.

Finally, there are cases where the failure mechanism is unknown or has not yet been accurately mapped. Here the first step is to conduct a root-cause analysis. By collecting large amounts of data on all possible root causes, you can try to match patterns in the data to failure cases. Here, data analysis and artificial intelligence play an important role because they provide critical insights into the data that can be interpreted to create knowledge. This process drives innovation.

What is the most challenging aspect of determining the root cause using data?

Many failure mechanisms either occur infrequently or only under specific conditions. In these cases, there is simply insufficient data to perform data analysis or train a neural network, making it incredibly difficult to identify the root cause. Honestly, those situations are real head-scratchers.

Nonetheless, some businesses have found great success using anomaly detection algorithms. Such algorithms identify perturbations of normal behavior, which indicate the presence of a defect or fault in the equipment. Before Artificial Intelligence gained relevance, statistical process control was the gold standard for measuring anomalies. Through the integration of AI-based techniques, anomaly detection has become more refined and gives more intricate insights into the nature of anomalies.
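Classic statistical process control, the pre-AI gold standard mentioned here, typically flags readings outside control limits set around three standard deviations from the mean of normal operation. A minimal sketch, with invented vibration readings:

```python
import statistics

def control_limits(normal_readings, k=3.0):
    """Statistical process control: limits at mean +/- k standard
    deviations of readings observed under normal operation."""
    mu = statistics.mean(normal_readings)
    sigma = statistics.stdev(normal_readings)
    return mu - k * sigma, mu + k * sigma

def is_anomaly(value, limits):
    """Flag any reading that falls outside the control limits."""
    lo, hi = limits
    return not (lo <= value <= hi)

# Invented vibration readings under normal operation.
limits = control_limits([1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.08])
```

AI-based anomaly detectors generalize this idea from a single threshold to multivariate, nonlinear patterns of normal behavior.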

What does AI research in manufacturing and maintenance mean to the world?

When equipment and manufacturing lines do not function properly, it leads to disruptions throughout service and manufacturing supply chains, all the way back to the consumer. These disruptions put pressure on the environment, increase the cost of serving customers in alternative ways, and in some cases make life-saving equipment or medicine unavailable. AI technologies allow us to do more with less. For instance, predictive maintenance allows us to avoid potentially catastrophic equipment failures while preventing unnecessary maintenance. It is the perfect combination of the financial incentives of businesses, societal values, and the Sustainable Development Goals (SDGs).

Is applying predictive maintenance techniques always more beneficial than more traditional forms of maintenance?

Initial investments, as well as the running costs for predictive maintenance solutions, are significant. Therefore, right now, predictive maintenance is most valuable to businesses that suffer large losses when their equipment fails or when equipment failures cause safety concerns. By working together with industry partners in our Lab, we ensure that our solutions are not only technically feasible and novel but also adhere to societal, industrial, and financial requirements. Predictive maintenance will play a large role in the manufacturing industry, but developments go slowly and it will not replace all traditional maintenance.

On the 15th of September, 2022, Geert-Jan will speak about a predictive concept for geographically dispersed technical systems during our Labs Meetup on AI for Autonomous Systems in the Netherlands. Want to join? Sign up here!

ICAI Interview with Jeanne Kroeger: Making ICAI a household name

As project manager of ICAI Amsterdam, Jeanne Kroeger deals with the business and organizational side of the labs, occasionally receives delegates from abroad to talk about ICAI, and is now busy organizing the first physical social meetup on June 30th for the Amsterdam location. Kroeger: ‘It is important to create environments where people can meet their colleagues in an informal setting. I hope that all ICAI cities can join this social event.’

Jeanne Kroeger

Jeanne Kroeger is project manager of ICAI Amsterdam and before that she was community manager of Amsterdam Data Science. Kroeger has a Master’s degree in Chemistry from the University of Liverpool.

What is the idea behind the ICAI National Social Meetup on June 30?

‘The purpose of this social event is to have one moment where ICAI members across the whole country can come together at their location to meet their colleagues in an informal and relaxed way. The idea is that other ICAI cities will join in and host their own physical meetups for all ICAI members involved in that city. Amsterdam and Nijmegen will host their own events. There will be a simultaneous broadcast, with a five-minute on-screen connection featuring a few words from Maarten de Rijke, director of ICAI. Other than that, it is an informal gathering: really an opportunity for everyone to meet and chat. It is accessible to all ICAI members, from junior and senior researchers to support staff, across academia, industry, non-profits and government. We will host the meetup from three to five pm, so it’s within working hours.’

Why is ICAI organizing social events like this?

‘I think there is a lack of community feeling in every organization right now. Because of covid, all the people who started in the last two and a half years have not had the opportunity to come into the office. In Amsterdam, for some people this event will be the first time they meet other ICAI members in person. All the labs focus on specific things, but there’s transferable knowledge across the labs. In my previous role for Amsterdam Data Science, I could see that some people were working on very similar topics, but had no idea about each other. It is important to create environments where people feel like they can come and meet their colleagues in an informal setting. The environment in which you work is so crucial. For me it’s almost more crucial than the content because it’s what gives me the energy and motivation to continue.’

How well do the people from the different ICAI Amsterdam labs know each other?

‘I recently organized a lunch for the ICAI Amsterdam lab managers. There were ten of us in the room and only two people really knew each other. The rest had never spoken to each other, while some of them have their offices maybe five doors down from each other. So there’s something to be said for creating more of a community in ICAI Amsterdam and the other hubs, and then across those hubs.’

What should the ICAI community look like in four years?

‘I think ICAI should be a household name. The general knowledge about ICAI is starting to build. The ICAI labs have been producing incredible results in the last five years and have made incredible collaborations. We are forming a solid network of labs and the aim is to build more connections across the country. I’ve had meetings with delegates from other countries to talk about ICAI. The word is going out about ICAI!’

Which organizations from abroad visited you to talk about ICAI?

‘We had a delegation from Estonia and I’ve had conversations with large international companies. I think in four years it would be great for the ICAI format to be more standardized. The Netherlands is really well-positioned: it’s a great international hub, easy to get to and it has an amazing standard of living. We are at a point where new AI initiatives are coming out, and it would be great if we can make sure that we position all of these initiatives together, so that they are acting in the same direction as opposed to competing against one another. ICAI has really put itself on the right path to make the Netherlands an important research AI hub.’

What were the main questions these delegations came with?

‘A lot of them were amazed by the amounts of money the labs received for fundamental research. Their main question was basically how the ICAI labs managed to do that. You don’t see this willingness of companies to fund fundamental research in many other countries. To get a five-year commitment from companies, that’s just phenomenal.’

What will be the main challenge for ICAI in the future?

‘ICAI has got that nimbleness. It’s very agile and flexible. Prestigious organizations like the European ELLIS, the Royal Society in the UK or the KNAW in the Netherlands have become so large that things can start to move very slowly. ICAI is growing right now, but I hope it can keep that nimbleness. I think this is possible if ICAI keeps evaluating and keeps seeing what it needs to be.’

Would you like to get to know your fellow ICAI members and have a drink with them? Sign up for the ICAI National Social Meetup – Summer Drinks on June 30th!

ICAI Trio Interview: AI entrepreneurship and a shared ownership of talent

It has been four years since ICAI kicked off and in the meantime ICAI has grown from 3 to 29 labs. How is ICAI doing so far? We take stock of the situation with a lab manager, a PhD student and the scientific director.

Efstratios (Stratis) Gavves is the former lab manager of the QUVA Lab, co-director of the QUVA and POP-AART ICAI labs, associate professor at the University of Amsterdam and co-founder of Ellogon AI BV.

Natasha Butt is a first-year PhD student in the QUVA Lab; she holds an MA degree in Data Science and a BA degree in Econometrics.

Maarten de Rijke is the scientific director and co-founder of ICAI, and professor of AI and Information Retrieval at the University of Amsterdam.

What was ICAI’s original purpose? Has that changed in the last four years?

Maarten: ‘The original vision was that we felt that more needed to be done to attract, train and create new opportunities for AI talent, while at the same time we wanted to work with a diverse set of stakeholders on shared research agendas. The underlying idea was that AI can make a positive contribution in lots of societal areas. We have been trying things out. And you learn by doing; that has been the mantra since day one and that will not change. One thing that is changing though, is that the first ICAI labs have matured and that there is a follow-up contract that is not just about attracting and training talent, but also about retaining talent. With the Launch Pad program we want to help the PhD students find their next opportunity in interesting places, ideally here in the Netherlands. Similarly, as PhD students begin to graduate from their lab, some of them have entrepreneurial plans. With the new Venture program we look at how we can help them connect to the right stakeholders and funding. So it’s still the same mission, but the instruments expand.’

ICAI has grown from 3 labs to 29 labs in the past four years. What is it like to work in a research lab with external partners?

Natasha: ‘What I really like is that you get to meet and collaborate with so many different researchers within industry. For a PhD student starting out this is really interesting and exciting. I can’t really weigh in on the negatives because we haven’t published a paper yet.’

Maarten: ‘Especially in labs where the non-academic partners don’t have a long tradition of research, it can be a challenge to identify good problems that matter academically and industrially. You need good problems that don’t need ten years to solve, but that also cannot be solved in three months. Aligning the horizons and expectations is something that needs attention.’

Stratis: ‘Working with external partners is inspiring and fruitful. The cornerstone for a successful relationship is managing expectations. Generally one could say that companies like stability and structure, while researchers in the university thrive with creative chaos. Finding a good balance between these two can bring great results. In fact, in my experience I have seen this work quite smoothly, because we have been lucky that the people involved are very conscious and open-minded.’

‘From now on, funding will come less from government structures and more from private initiative.’

Stratis Gavves

To what extent do universities and companies or governmental organizations need each other in developing AI that can make us more future-proof?

Maarten: ‘We see a slow change right now in the ownership of big challenges. It is no longer just governmental, academic, or industrial, but much more a shared ownership. We are coming to the realization that the best way to tackle climate, health, energy and logistics problems, is to go after these problems together. All of these big challenges are multi-stakeholder and multi-disciplinary. For example, when you’re working on computer vision, at some point you will run into some legal or ethical questions that are tough. Think of all the deep fakes. On the one hand these generative models are fantastic and creative, but there’s another side. An algorithm developer should hang out every now and then with people who bring a different perspective to the table.’

Natasha, you are from Great Britain. Stratis, you are from Greece. Are there initiatives like ICAI over there?

Stratis: ‘I think ICAI is a very successful experiment that will be followed, one way or the other, by other countries. We had some preliminary conversations in Greece and I think that there is interest for sure.’

Natasha: ‘In the UK I haven’t come across many things like ICAI. But when I studied at UCL in London, there were a lot of AI societies and entrepreneurship societies that would hold events and invite students from other universities. So there’s definitely an appetite for it. Especially in London there are a lot of hubs and all the universities are pushing it.’

‘Collaborating with so many different researchers within industry is really exciting for a PhD student just starting out.’

Natasha Butt

Are there countries that were an inspiration for ICAI?

Maarten: ‘Yes, the Von Humboldt fellowships in Germany for example. And especially the attitude behind it was an inspiration for us: start with talent, bring the talent to the country, and then invest and create opportunities. We also saw the same attitude in France.’

Stratis: ‘The instrument that ICAI presents is an innovation in itself. And this success will be broadcast to other countries, because there is a need for it. This is how things will work from now on: funding will come less from government structures and more from private initiative. People are searching for alternative sources of funding, and I think ICAI presents a fair way of doing this, in such a way that both sides benefit.’

What are the plans for the next four years?

Maarten: ‘We are working on a large new program, funded by NWO, to expand ICAI with 17 new labs. I hope that by the end of this year we will have around 50 labs. Part of the plan is to expand to all academic cities; we would like to reach out and help people there get going. Another thing is that our colleagues in Nijmegen, with whom we are heavily involved, have set up AI course programs for medical professionals. We are trying to see how we can do similar things for other sectors, like logistics and civil servants.’

Stratis: ‘My goal is to see Natasha and her lab mates graduate. And to attract more industries to the concept of ICAI, perhaps export it outside the Netherlands, maybe even to Greece. And of course, to keep doing top-notch research.’

‘More and more people are coming to the realization that the best way to tackle climate, health, energy and logistics problems, is to go after these problems together.’

Maarten de Rijke

Do you have questions for each other?

Natasha: ‘I would like to know what plans there are for the future. What sort of events do you hope to put on, especially from a PhD perspective?’

Maarten: ‘We want to organize whatever the PhD students need, so we should listen to what would help you. The ICAI Launch Pad program helps PhD students who are towards the end of their PhD trajectory, but of course early-stage PhD students have different needs, plans and questions. So we’d like to hear how we can make this a better experience. So far, we have put a lot of focus on sharing expertise and experiences, but of course there’s more to being an AI PhD student than that. You, Natasha, and the other PhD students should be the ones that tell us.’

And where can she go with her ideas?

Maarten: ‘YaSuei Cheng, the ICAI community manager, can help organize things or find the right people to get something going. And here in Amsterdam we have quite some experience in setting up internships. But I’m sure that there are many things that we’re not seeing, so please let us know.’

Stratis: ‘I was wondering, what is the idea on how to get new spin-offs into existence? Is there guidance there? Let’s say that Natasha comes up with a great idea that her lab partner Qualcomm is not interested in. What should she do?’

Maarten: ‘We’ve teamed up with an initiative called TTT-AI. This organization is all about tech transfer and helping people find out whether there is a market for their ideas. The initiative operates nationwide: it wants to connect local ecosystems with local researchers, but also to share systems across the country.’

The next ICAI Day on June 1st will be about AI entrepreneurship. Stratis, as co-founder of Ellogon AI, you know a thing or two about this. What is it like to launch a company from lab to the market?

Stratis: ‘I’m still learning, so I can’t tell you the full story from A to Z, but maybe from A to F. It is a lot of fun actually. We are the new generation of academics. It is expected, or at least appreciated, if we look at possibilities like this. But I’m not sure that everyone will be cut out for it. In a way we are working double jobs. It’s really rewarding though in many ways. What I found really interesting, is that so many academics and researchers already have moved to industry. And maybe there is something beyond the obvious argument that people only go there because there are better salaries. I can confirm that creating your own company, working on real problems and solving completely different issues, is really interesting.’

Natasha, how do you feel about making the move to industry in the future?

Natasha: ‘I’m pretty open-minded. It would be really nice and useful to hear the experience of people who went to industry and people who stayed in academia. Doing internships would also help.’

Maarten, what would you advise PhD students in finding the next step?

Maarten: ‘I think it’s a great idea, like Natasha says, to try out a few internships. I generally recommend going to a completely different team and working on different problems. A different experience helps you to shape your thinking about what you’d like to do next. Maybe even consider doing an internship with an NGO. The Red Cross, for example, has loads of interesting challenges.’

And what can be done to help researchers to set up AI startups?

Maarten: ‘Mentoring is always useful. To hear other voices and to speak with friendly but critical colleagues who can walk alongside you for a while and connect you to potential customers and challenging problems.’

Stratis: ‘Once you’re in a company, you’re living on borrowed time until you really make it. Learning how to run a company while developing a product can be hard. So one thing that can be done is to familiarize people with this aspect of entrepreneurship so that they can anticipate the difficulties. And there are so many things that can be quite easily solved that can still make a huge difference.’

Would you like to meet your fellow ICAI members? On June 1st, the hybrid Summer Edition of the ICAI Day takes place. The theme of this edition is ‘AI Entrepreneurship: From the lab to the market’. Sign up!

ICAI Interview with Rianne Fijten: Tightening the relationship between medical clinics and commercial parties

In order to implement new AI technology in medical clinics in a sustainable way, close collaboration between the clinic and commercial parties is crucial, argues Rianne Fijten. ‘You need to make sure that if the grant money runs out, which it always does, the product that you built is not just lost.’

Rianne Fijten

Rianne Fijten is one of the scientific directors of Brightlands Smart Health Lab, assistant professor and senior scientist of clinical data science at Maastro Clinic.

Brightlands Smart Health Lab is a collaboration between Maastricht University, Brightlands Institute for Smart Society, Zuyd University of Applied Sciences, Maastro Clinic, Maastricht UMC+, ilionx and Netherlands Comprehensive Cancer Organization.

Could you tell me about the research happening in the lab? What makes this research unique?

‘What is interesting about our lab is that we really go from technology to the clinic. That’s a concept I’ve not seen anywhere else. Usually a research group focuses on a very specific part of a pipeline, problem or societal issue. Within the lab we have three pillars: data infrastructure, data science and clinical implementation. It is a pipeline from start to finish: we set up the infrastructures to get the data out of the hospitals, extract the data, build AI models and then implement them in the clinic.’

‘Another important thing is that we are close to business. Getting data science into a medical clinic is difficult, but getting it into the clinic without a commercial party involved is even more difficult. To make sure that the new techniques are supported and maintained, it is crucial to connect the clinic to commercial parties, because researchers will not sustain it after their research is done. They have other research to do.’

What is your personal mission within this lab?

‘My main focus is on the last pillar. Since AI is booming, so many AI models have been built. But what you see in healthcare is that implementing those in the clinic is the difficult part. So we try to implement clinically relevant tools, but also find out why research doesn’t end up in the clinic, and what the problems and issues are in that process.’

What kind of clinical needs are you addressing?

‘A good example is a decision aid for prostate cancer patients that we built with the company Patient Plus. As every treatment has different side effects, this tool gives patients the option to find their personal risks of getting side effects, based on their personal characteristics. Prostate cancer is an interesting choice for a decision aid tool, because this disease has a very high survival rate, which makes it possible for patients to choose between different treatments. Patients answer questions like ‘what is your age?’, ‘do you smoke?’ or ‘are you a diabetic?’ Those are all risk factors for incontinence, for example. At the end the patient gets a visualization of their personal risks and learns about the disease along the way. For this tool we have set up a collaboration with urologists that we know very well. And we then offered it to a company, under certain conditions of course, so that they can make sure it will be used in the clinic in the future.’

The lab collaborates with seven different partners. What is it like to work with so many partners?

‘It gives us a lot of flexibility. Working with this big pool of collaborators allows us to set up different alliances that are suited to answer a specific question or solve a specific problem.’

All nine PhD students of the lab are located physically at the partners and mentored by senior scientists at the partners. Why did you choose that approach?

‘In order to keep the collaborations alive and the relationships good, it is important to work together, even if you don’t have a specific project that you are working on at that very moment. I think it is very important to establish long-term relationships, and by jointly supervising these PhD students you achieve that.’

What do you want to have achieved in four years?

‘If anything comes out of our ICAI lab, I hope that it is raising more awareness about closer collaboration between the clinics and industrial partners. What we see a lot within the projects is that at first the people at the clinic don’t really see the need to involve industrial parties, or are a bit anxious about doing so. I don’t know why; I think it’s the non-profit versus for-profit problem. I hope that this is one of the take-home messages that we can deliver with the projects we are going to do within the ICAI lab. We are currently forming the bridge, and hopefully in the future they can keep finding each other without our help.’

On April 21, 2022, the Brightlands Smart Health Lab will talk about their current work during the lunch Meetup of ‘ICAI: The Labs’ on AI for Radiation Treatment in the Netherlands. Want to join? Sign up!

ICAI interview with Evy van Weelden: Finding your way in the PhD maze

The corona pandemic has made it more difficult for PhD students to find each other, while this group benefits a lot from being part of a community. Evy van Weelden started her PhD in 2020 and only met her fellow PhD students in person half a year later. ICAI is now organizing its first PhD social meetup. Van Weelden: ‘A PhD is like a maze in which you have to find your way. I feel like I could learn a lot from PhD candidates that are in their third or fourth year.’

Evy van Weelden

Evy van Weelden is a PhD candidate within MasterMinds Lab.

MasterMinds Lab is a collaboration between Tilburg University, Fontys Hogescholen, ROC Tilburg, Actemium, CastLab, Interpolis, Marel, MultiSIM BV, Municipality of Tilburg, Port of Rotterdam, Royal Netherlands Air Force, SpaceBuzz, TimeAware and WPG Zwijsen.

The research reported in this study is funded by the MasterMinds project, part of the RegionDeal Mid- and West-Brabant, and is co-funded by the Ministry of Economic Affairs and Municipality of Tilburg.

Working on flight simulations with the Royal Netherlands Air Force and MultiSIM sounds exciting. What amazed you so far?

‘Before I started I had some experience with virtual reality (VR), but when I first tried the flight simulation I was very impressed with how realistic it was. The company MultiSIM models PC-7 aircraft exactly as they are in real life. Some people are prone to simulator sickness, but I’m not, so it was very fun. You feel very present in that visual environment, which is very important for the motivation to learn. There are several reasons why this simulation is so realistic: the sounds, the environment, and if you put pressure on the stick or throttle, the response of the aircraft is exactly how it would be in real life.’

What exactly are you researching?

‘My project focuses on neurophysiological indicators of learning in VR flight simulations. I am currently looking at the difference between desktop flight simulators and VR flight simulators. To what extent does the fidelity of the simulation – so the degree to which the flight simulation resembles a real flight – influence the subjective workload or flight performance of the user and their brain activity? This topic fits in several types of fields, but the main one is neuro-ergonomics. With ergonomics you look at how a person interacts with a system. But neuro-ergonomics is more specific: you’re actually looking in the brain while this person interacts with a system, computer or machine. Once we have established models of the brain activity during training, we can try to predict the learning curve in VR flight simulations. Eventually we want to give neurofeedback to the user, in the hope that it would increase their learning curve.’

What is it like to do your research with two external partners?

‘There is a lot of communication involved, with the partners, and internally with my supervisors at Tilburg University. And there is a lot of brainstorming. Everyone is enthusiastic and proactive. The meetings with the people from the partners are fun and inspiring. They are intelligent and have a lot of content-related feedback.’

What does the collaboration look like in practice? Do you go there?

‘My past data collection took place at MindLabs, but for the next studies I plan to use the pilot trainees. That will take place in Soesterberg, where the Air Force and MultiSIM are based, or I will go to Woensdrecht, where the pilot training takes place.’

You started with a study in neuroscience. How did you get into AI?

‘During my master’s I did an internship on brain-computer interfaces (BCIs), and after that I knew for sure I wanted to continue with this kind of research. In short, BCIs are AI-driven interfaces that translate brain activity into device commands. In other words, we use AI to make sense of the electrical signals that are measured from the brain. BCIs could be applied to find out whether a person is cognitively overloaded, which can impact safety, attention, but also learning. Our research concerns learning. We hope that with the use of BCIs in VR, we can increase the learning curve of these pilot trainees. Although BCIs in this field of work are relatively new, a lot of research groups worldwide are working on them right now. But as far as I’m aware, no one is researching the impact of BCIs on the learning curve in VR flight training yet.’

Is a PhD something you have to discover along the way?

‘Yes, it always starts with an idea and then you have to find more information and advice. You have to find out whether your ideas are practical. It takes a long time before you can actually start a study or data collection. There are so many fields, so many devices, so many ideas.’

You started your PhD in the middle of Corona time. How was this?

‘Well, everyone was in the same boat of course. And there were a lot of online meetings. Also meetings where we could interact with other PhD candidates and sometimes even play games, which was nice. When the lockdowns were less restrictive we got to see each other in person and we could really interact. And then another lockdown came. Right now we are starting up again, but we will probably continue to work flexibly.’

What role could ICAI play in this for you and other PhD students?

‘The last time I had an in-person meetup at ICAI, at the ICAI day in October, I was able to connect with a lot of people from different levels and fields. We were seated at tables with a certain topic, where we could brainstorm. I got a chance to talk to people from different universities, PhD candidates, postdocs and even professors. It was really nice to learn about other projects within ICAI. I learned as well that there are some projects that involve BCIs.’

Would you like to see more meetups specifically aimed at PhD students within ICAI?

‘Something PhD-specific is always nice to have. As a PhD candidate you have different needs than someone who is a postdoc or beyond. If you’re struggling with something in your project, data analysis for example, or the AI part of machine learning, other PhD students can think along with you and recommend something.’


On Friday, March 11, 2022, ICAI organizes the first Social Meetup for PhD students (invite only). Do you want to get to know your fellow ICAI PhD students? Sign up!

Save the date: The ICAI Day – 2022 Summer edition will take place on Wednesday June 1, 2022!

ICAI interview with Renger Jellema: Deploying AI to develop sustainable food processes to feed the world

The modern world is facing a number of converging megatrends: population growth, increasing scarcity of natural resources, and a need for the sustainable production of nutritious food. Through biotechnology, DSM develops sustainable products using nature’s toolbox, such as microorganisms. The AI Lab for Bioscience (AI4b.io) aims to accelerate this innovation process using AI technology. Renger Jellema: ‘More time to use our human creativity is going to be the most important thing we will gain from AI.’

Renger Jellema

Renger Jellema is program manager of AI4b.io, the ICAI AI Lab for Bioscience, and he is Senior Data Scientist at the Biodata & Translation group at DSM Science & Innovation.

The AI Lab for Bioscience is a collaboration between Delft University of Technology and DSM.

The lab’s first press release stated that you are the first lab in Europe to apply AI to life science and bioproduction. Why hasn’t this been done before?

‘Engineers have already been applying mathematical models in life science for decades, but now there are rapid developments in computing power and breakthroughs in AI. The combination of methods and techniques has become a unique playing field to take biotechnology, process technology, food science and even health and nutrition to the next level.’

Is this approach being used now by other researchers as well?

‘Yes, more and more biotech scientists and engineers worldwide are now launching initiatives similar to what we are doing. What is unique about AI4b.io is that we scale down, from cubic meters to nanoliters and from months to milliseconds, rather than following the more common reverse order, which brings scale-up issues. We have defined five lines of research: from scheduling in factories, to unit operations, to automated labs, to microbial strain development and screening, to microbial cultures and health relationships in the gut.’

What can AI mean for bioscience?

‘Developments can go much faster. Because many of the patterns in the data are already captured in the AI models, researchers can go directly to the core of the problem. And then there will be more time for the researchers to be creative. That’s also how I explain it to colleagues who are a bit hesitant: we free up time to interpret results and come up with novel ideas. Right now about 80 percent of our time goes into managing data and doing things repeatedly.’

What kind of questions can you try to answer with the help of AI that you couldn’t answer before?

‘We want to reduce the cost of innovation while accelerating our development cycles. Mathematical models already play an important role in reducing experimental work by calculating possible scenarios in advance. What we expect is that with the help of AI we can develop better models, leading to so-called Digital Twins of microbes, processes, and factories. At DSM, for example, we produce food and feed ingredients using the process of fermentation. We grow microorganisms on sustainable, plant-derived sources such as sugar and carbohydrates. The microorganisms convert the sugar into valuable products in large steel vessels. Using advanced simulation models, we can then predict the behavior of microorganisms and their interaction with their environment in such large vessels. Based on that, we can optimize these processes to become more energy efficient and produce fewer by-products.’

Can you give an example of a typical application?

‘We have developed advanced process models that can be used for large-scale fermentation vessels with a scale of 100 m3 and above. The problem with this is that calculating a few minutes of the behavior of such a vessel quickly takes a few days of computational time on a multi-core computing platform. This makes it impossible to track or monitor the process in real time. For this application, AI can be trained to represent these models – easily speeding up the calculations by a factor of 100 – acting as Digital Twins of the real fermentation vessel. The Digital Twin becomes a sophisticated digital copy of the real process.’

What can this research eventually mean to the world?

‘At DSM, we develop novel ways to produce healthy nutritional ingredients to feed the world in a more sustainable way. The Digital Twins I mentioned before help us in the development of such processes and products, working for example toward meat alternatives using plant-based material. We combine different protein materials with ingredients such as vitamins and other micronutrients to create food solutions that taste good, have an appealing texture and keep you healthy.’

We have just set up a Launch Pad program to coach PhD students entering the job market. You have been working as a researcher in industry for quite some time. What advice would you give them?

‘Connect with scientists in companies that inspire you. If you get a chance to present your work at a company, seize that opportunity. It’s easy to shy away and stay behind your computer. But know that companies are interested in your research and are willing to help you further. Also, exploring how your research findings can be applied in practice, will improve your thought process.

‘Personally, I did my PhD in collaboration with Hoogovens, the steel giant now called Tata Steel. I could have stayed behind my computer and emailed them regularly to pick up the samples I needed for my modelling activities. But often I chose to visit the plant and talk to the operators who had to collect the samples. There I saw how difficult it was to take those industrial samples from the extremely hot processes and I learned to understand why the samples were sometimes not that good. As a result, I was able to change the procedures to improve my research. You have to get your hands dirty to get the best insights.’

On February 17, 2022, the AI Lab for Bioscience will talk about their current work during the lunch Meetup of ‘ICAI: The Labs’ on AI for Food in the Netherlands. Want to join? Sign up!

Not sure about your next step as a PhD student in AI? Knock on Kai Lemkes’ door

Kai Lemkes has been a recruiter in the AI domain for ten years. Since a few months he has been a matchmaker within the ICAI Launch Pad program where he coaches PhD students. Lemkes: ‘PhD students have a blind spot when entering the labor market.’

Kai Lemkes

What does the ICAI Launch Pad program look like?

‘After a first introductory meeting with the PhD students, I coach them in how they can best prepare for a job application, how to build a resume, how to present themselves on LinkedIn, et cetera. We evaluate that and then look at how this person can best present themselves and enter the labor market. We can also hold a closing meeting on request. My door is always open.’

Why is there a need for Launch Pad?

‘Many of these PhD students are at a crossroads where they don’t really know what they want next. What I encounter a lot is that students want to stay in the domain they’re already in, purely because they already know it. I recently spoke to a young woman who was strongly attached to the research domain. But when I asked her to describe her ideal job, she said that she would prefer to keep improving products, give presentations and a number of things that you see much more in the commercial domain. It is therefore very important to show this group clearly what they are actually choosing. That’s a blind spot.’

‘The AI domain has exploded in just a few years. At the moment, almost every company I work with – mainly top-500 companies – is investing in AI. For that reason, many young professionals are quickly lured abroad by companies. Foreign companies are sometimes a bit more ‘aggressive’ when it comes to recruiting talent. They proactively approach PhD students and offer them a substantial salary.’

What are Dutch companies not doing well besides less actively recruiting talent?

‘I see the recruitment process go wrong quite often. Candidates have to sell themselves in a very short time and that does not always result in a good match. Based on two or three conversations, it is quite difficult to determine whether someone is a good fit for a company for the long term. Right now, you need the luck of meeting someone who likes you. If you’re having a bad day, you’re not going to look good. And especially in the technical domain you will find many specialists who are a bit more introverted or who find it less easy to present themselves, meaning they enter such a process already quite tense or uncertain.’

How can companies better handle this?

‘It is better to set up a process in which a company really experiences a candidate and to schedule interviews with several people from the company and not just one person. It is also a good idea to let a promising candidate speak with the whole team. Because the demand for AI specialists is so enormous right now, you sometimes see that companies present themselves as super high-tech and that a young professional later finds out that it is not that high-tech at all. Or they find out that there are no other specialists with whom they can consult. And then they can feel terribly alone. Companies need to be honest about what they have to offer.’

What is your solution to this problem?

‘I am developing the digital platform Future Impact. This should become a lively community in which students and young professionals can help each other, have peer-to-peer conversations and give ratings to companies. On this platform, companies can also present themselves and tell what they have to offer as an employer. Virtual appointments can then be scheduled for a first acquaintance. I also want to organize meetups here with people who, as PhD students, have made the step into the commercial world and can coach others in this process.’

How did you get into this job?

‘I really enjoy making matches and connecting people. I like to network and chat. I stumbled into the AI domain by accident, but I really fell in love with it. Such beautiful things happen here; startups working on zero-CO2-emission technologies, for example. AI offers so many possibilities.’

What does ICAI mean to you?

‘I am originally a commercial recruiter, but for ICAI I am really more of a coach. And actually, as I found out again, I think that’s the most wonderful job. In this role as a coach, I enter the conversation with a different intention than as a recruiter. That gives me a great deal of satisfaction. In addition, my network is growing. My highest goal is to get to know the entire AI ecosystem of the Netherlands.’

Kai Lemkes is a matchmaking expert within AI. He is the founder of several matchmaking platforms including Future Impact.

Interested as a PhD student or organization to participate in ICAI Launch Pad? Register here or send an email to kai@future-impact.io.