AI has the power to transform the world, but only when guided by the shared values and visions of its stakeholders. Therefore, ICAI is proud to become part of the Partnership on AI, a worldwide coalition of academic institutions, media organizations, industry leaders, and civil society groups dedicated to promoting the positive impact of AI on people and society.
The Partnership on AI (PAI) was founded with six thematic pillars, aimed at addressing the risks and opportunities of AI:
Safety-Critical AI;
Fair, Transparent, and Accountable AI;
AI, Labor, and the Economy;
Collaborations Between People and AI Systems;
Social and Societal Influences of AI;
AI and Social Good.
Partners from all over the world, and from every AI subdomain, take part in the partnership with one goal: to promote best practices in the development and deployment of AI. Through collaboration, the Partnership on AI develops tools, recommendations, and other resources, inviting voices from across the AI community and beyond to turn insights into actions and to ensure that AI and ML technology puts people first.
“We are pleased to be part of PAI and to connect with other organizations at a global level to work on AI challenges,” says Maarten de Rijke. Our membership in the Partnership on AI reflects our commitment to the responsible deployment of AI technology and our belief in the importance of collaboration in shaping the future of AI. By sharing expertise, taking part in steering committees, and dedicating ourselves to sharing resources and education regarding the development of AI policy, we can all learn from each other.
If you are interested in learning more about the Partnership on AI, visit their website at https://partnershiponai.org/. If you specifically want to know more about ICAI’s involvement in the partnership, please contact Esther Smit at esmit@icai.ai.
ROBUST AI programme receives additional €25 million in funding from Dutch Research Council
January 10, 2023 | Press release
Total project budget of over €87 million, including 17 new labs and 170 new PhD candidates over 10 years
ROBUST, a new initiative by the Innovation Center for Artificial Intelligence (ICAI), is supported by the University of Amsterdam and 51 government, industry and knowledge-sector partners. The programme aims to strengthen the Dutch artificial intelligence (AI) ecosystem by boosting fundamental AI research. ROBUST focuses primarily on the development of trustworthy AI technology for the resolution of socially relevant issues, such as those in healthcare, logistics, media, food and energy. The research sponsor, the Dutch Research Council (NWO), has earmarked €25 million for the programme for the next 10 years.
ROBUST unites 17 knowledge institutions, 19 participating industry sponsors and 15 civil-social organisations from across the Netherlands. Maarten de Rijke, UvA university professor of Artificial Intelligence and Information Retrieval, is the ROBUST programme leader.
The additional €25 million grant comes from the research council’s call for Long-Term Programmes, which gives strong public-private consortia the chance to receive funding for a ten-year period. The call is part of the Netherlands AI Coalition’s initiative to invest in explainable and trustworthy AI. In addition to the research council, companies and knowledge institutions contribute to the programme. The total ROBUST budget amounts to €87.3 million, of which €7.5 million comes from the Ministry of Economic Affairs and Climate Policy. The ROBUST programme is complementary to the AiNed programme; the two will collaborate on the dissemination, consolidation and valorisation of results, as well as on retaining talent in the Netherlands. This contributes to the ambition of the cabinet’s Digital Economy Strategy for the Netherlands to be at the forefront of human-centred AI development and AI applications.
170 new PhD candidates
Seventeen new public-private labs will be set up under the ROBUST umbrella and will form part of the Innovation Center for Artificial Intelligence (ICAI), bringing its lab total to 46. ICAI focuses on AI talent and knowledge development. In the coming year, ROBUST will recruit no fewer than 85 new PhD candidates, followed by another 85 in five years’ time.
Human-centred AI for sustainable growth
‘What makes ROBUST unique is that not only will the new labs contribute to economic and technological objectives, they will also aid the United Nations’ sustainable development goals aimed at reducing poverty, inequality, injustice and climate change’, says De Rijke. ‘One important focus of all projects is to optimise reliable AI systems for qualities such as precision, soundness, reproducibility, resilience, transparency and security.’
Twin-win study
Just like the other ICAI labs, the ROBUST labs will put the twin-win principle into practice: intensive public-private research partnerships in AI technology that lead to open publications and solutions that have been validated in practice. ‘We test our scientific findings within an industry context. Research and practice thus come together at an earlier stage, allowing for far better validation of the results. This way, research validation doesn’t end in the lab, but extends into the outside world.’
Startups, SMEs, and policymakers
‘AI is a systemic technology that touches all aspects of society. That’s why it’s important to ensure that the application of AI technology becomes a widely shared responsibility. ROBUST collaborates with regional civil-social partners throughout the Netherlands, and especially with startups and small to medium-sized enterprises (SMEs).’ The objective is not only to develop knowledge and innovations with ROBUST partners, but also to make them more widely available to other parties within the Dutch ecosystem. New findings and their policy implications will also be shared with national and European policymakers.
Contact
Journalists who wish to contact Maarten de Rijke or other relevant scientists, or to find out more about ROBUST, can contact persvoorlichting@uva.nl.
From the first of November onwards, Eindhoven University of Technology (TU/e) and globally leading hearing aid manufacturer GN Hearing will join forces in FEPlab. The lab is dedicated to improving the participation of hearing-impaired people in both formal and informal settings.
Research
FEPlab will focus its research on transferring a leading physics- and neuroscience-based theory of computation in the brain, the Free Energy Principle (FEP), to practical use in human-centered agents such as hearing devices and VR technology. FEP is a general theory of information processing and decision-making in brains, rooted in thermodynamics. The principle states that biological agents take actions (or decisions) that minimize their (variational) free energy, a measure of the total prediction error in a system. In practice, by minimizing free energy, an agent takes actions that optimally balance information-seeking behavior (reducing uncertainty) against goal-driven behavior. The theoretical foundations for AI applications of FEP-based synthetic agents have been developed by BIASlab at TU/e. FEPlab now aims to bring FEP-based AI agents to the professional hearing device industry. Professor Bert de Vries, the scientific director of FEPlab alongside Associate Professor Jaap Ham, believes FEP-based synthetic agents have much to offer to signal processing systems:
“I believe that development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposeful (signal processing) behavior from situated environmental interactions.”
Bert de Vries, Scientific Director FEPlab
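To make the principle concrete, here is a minimal numerical sketch (an illustration for this article, not FEPlab code) of variational free energy in a toy model with two hidden states; the prior and likelihood values are invented. Free energy splits into a complexity term (how far a belief moves from the prior) minus an accuracy term (how well the belief explains the observation), and the belief that minimizes it is the Bayesian posterior.

```python
import numpy as np

# Toy generative model: two hidden states, two possible observations.
p_s = np.array([0.5, 0.5])            # prior over hidden states
p_o_given_s = np.array([[0.9, 0.1],   # likelihood: rows = states,
                        [0.2, 0.8]])  # columns = observations

def free_energy(q, o):
    """Variational free energy of belief q after observing outcome o."""
    complexity = np.sum(q * np.log(q / p_s))          # KL(q || prior)
    accuracy = np.sum(q * np.log(p_o_given_s[:, o]))  # expected log-likelihood
    return complexity - accuracy

o = 0  # observed outcome
# The exact posterior minimizes F; any other belief has higher free energy.
posterior = p_s * p_o_given_s[:, o]
posterior /= posterior.sum()
print(free_energy(posterior, o))             # lowest achievable F
print(free_energy(np.array([0.5, 0.5]), o))  # unrevised belief: higher F
```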
Expertise and Focus
FEPlab will comprise experts from different fields of expertise, such as Audiology, Autonomous Agents & Robotics, Decision Making, and Machine Learning, to tackle the complex multidisciplinary challenges at hand. The lab will employ five PhD students at TU/e: four will join the BIASlab research group in the EE department, and one will join the Human-Technology Interaction group at the IE&IS department. Key research topics include reactive message passing for robust inference, generative probabilistic models for audio processing, and interaction design for hearing aid personalization.
Sustainable Development Goals
FEPlab will focus on two SDGs. Firstly, the research goals of the lab resonate with SDG 3 focused on Good Health and Well-being since untreated hearing loss in the elderly increases the risk of developing dementia and Alzheimer’s disease as well as emotional and physical problems. Secondly, the lab’s research goals also support SDG 8 of achieving higher levels of economic productivity through technology upgrading and innovation as hearing loss is also shown to affect work participation negatively.
The world is facing a number of converging challenges related to climate change: population growth, more frequent extreme weather events, and the need for sustainable production of nutritious food. Some say that machine learning can help us mitigate and prepare for the consequences of climate change; it is, however, not a silver bullet. In this interview, Congcong Sun and Chiem van Straaten discuss the challenges of machine learning in agriculture and weather forecasting, and the similarities and differences between their respective fields.
On November 16th, 2022, ICAI organizes the ‘ICAI Day: Artificial Intelligence and Climate Change’ where Congcong, Chiem, and many other researchers will talk about how AI can be used to mitigate and prepare for the consequences of climate change. Want to join? Sign up!
Congcong Sun is an assistant professor in learning-based Control at Wageningen University & Research (WUR) and Lab Manager of the ICAI AI for Agro-Food Lab. Her research interests are in using learning-based control to explore the overlap between machine learning and automatic control and apply them to agricultural production.
Congcong and Chiem, could you tell me what your research is about, and how it is connected to artificial intelligence?
Congcong: Yes, of course. My research focus is on learning-based autonomous control in agricultural production. For instance, in a greenhouse or vertical farm, climate control can be optimized to make crops grow under more favorable conditions and produce better-quality crops. Another example is logistical planning for agro workers, such as harvesting robots in a multi-agent setting. Learning-based control applications are complex, which is why I mainly use deep reinforcement learning: the combination of reinforcement learning algorithms with neural networks.
Chiem: The research that I conduct pertains to studying and making predictions about weather and climate extremes. Many industries, such as agriculture production, depend on accurate weather forecasting. Understanding our climate better is crucial for preparing ourselves for extreme weather and at the same time allows industries to use their resources more efficiently. However, predicting weather events far in advance is extremely tough due to time lags, the conditional nature of observed patterns, and the multitude of factors influencing one another. Machine learning has the potential to deal with such levels of complexity, which is why I am interested in applying it to weather forecasting.
Do you see any similarities or differences between your research?
Congcong: I believe our research is interconnected. As Chiem mentioned, weather patterns are a large source of uncertainty within the agricultural industry, particularly for those applications where the farm is located in an uncontrolled environment, such as open-air farms.
In agriculture, however, the weather is not the only source of uncertainty. Uncertainty also arises from the crops themselves. Different crops have different optimal growing conditions, which means that a control policy that is effective for one crop might not be effective for another. Even if you were to place a different crop in the exact same greenhouse environment, you would need a vastly different policy for controlling it. What are your thoughts on that, Chiem?
Chiem: Yes, you are trying to tackle something that inherently is multivariate, which is similar to weather forecasting. Although I am not well-versed in the specifics of agriculture, I can imagine that you need to take into account many factors such as irrigation, lighting, and temperature?
Congcong: Yes, indeed. When we seek to regulate the climate within a greenhouse, there are a lot of variables we need to consider, like humidity, irrigation, fertilization, light, and temperature. Analyzing the relationships between these variables requires knowledge from various disciplines such as plant physiology and biology. Additionally, certain relationships might not have been discovered yet, which adds to the complexity of balancing these variables. The combination of machine learning and automatic control can help us explore some of these relationships and translate them into knowledge about how to best regulate these environments.
Chiem: Ah, exactly. Here I see a great similarity between autonomous control of agricultural environments and the prediction of weather patterns. For a long time, physical numerical prediction models have been developed to incorporate as many of the processes known to be important for weather prediction as possible. However, it is also known that these models are not perfect, as the weather is extremely complex. Therefore, we attempt to replace parts of the numerical models with statistical models to capture yet-to-be-discovered processes.
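One common way to implement this hybrid idea is to train a statistical model on the residuals of the physical model, so that it learns only the part of the dynamics the physics does not capture. The sketch below is purely illustrative (toy data and scikit-learn, not KNMI code; the predictors, coefficients, and model choice are invented for the example):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy setup: the "true" weather depends on four predictors plus a
# nonlinear effect that the physical model does not represent.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
coef = np.array([1.0, -0.5, 0.3, 0.0])
truth = X @ coef + 0.5 * np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=2000)
physics = X @ coef                      # the (imperfect) numerical forecast

X_tr, X_te, t_tr, t_te, p_tr, p_te = train_test_split(
    X, truth, physics, random_state=0)

# Statistical component: learn the residual (truth minus physics forecast).
residual_model = RandomForestRegressor(random_state=0).fit(X_tr, t_tr - p_tr)
hybrid = p_te + residual_model.predict(X_te)

print("physics MSE:", np.mean((t_te - p_te) ** 2))
print("hybrid  MSE:", np.mean((t_te - hybrid) ** 2))
```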
Congcong: Yes, indeed. What kind of data do you use to make weather forecasts?
Chiem: In the non-statistical forecasting models specifically, we use a plethora of data to make weather forecasts, including humidity, pressure, air temperature, and wind speed. Like the input, the output is often multivariate, similar to learning-based agriculture control. Another similarity might be that in both domains you encounter challenges due to cycles. For instance, I could imagine that in agriculture you need to take the growing cycle of plants into account, which is different for every plant. In weather forecasting, you also have to deal with many different cycles at the same time, such as the seasonal cycle, weekly cycles, and daily cycles.
Congcong: Yes, exactly! Plants have different optimal growing cycles. In greenhouses with multiple plants, it could be that different growing cycles overlap similarly to how cycles overlap in weather forecasting. It is interesting to see so many similarities between our two domains!
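For readers curious how such overlapping cycles are typically fed to a machine learning model, a common trick (a generic sketch, not code from either lab) is to encode each cycle as a sine/cosine pair, so the model sees that the end of a cycle is adjacent to its start:

```python
import numpy as np
import pandas as pd

# Generic sketch: encoding overlapping cycles (daily, weekly, annual) as
# sine/cosine pairs, a common way to expose periodicity to an ML model.
idx = pd.date_range("2022-01-01", periods=24 * 365, freq="h")
hours = np.arange(len(idx))
features = pd.DataFrame(index=idx)

for name, period in [("daily", 24), ("weekly", 24 * 7), ("annual", 24 * 365.25)]:
    features[f"{name}_sin"] = np.sin(2 * np.pi * hours / period)
    features[f"{name}_cos"] = np.cos(2 * np.pi * hours / period)

print(features.head())  # six columns, one sin/cos pair per cycle
```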
In your conversation, you mentioned some applications of machine learning in your respective domains. One challenge we often hear about is related to trustworthiness, especially in applications with high degrees of uncertainty. Are companies in your industry enthusiastic or reluctant to work with machine learning?
Congcong: Greenhouse climate control is quite mature in the Netherlands. Some commercial greenhouses have already implemented automated control; however, we are still not making use of all available cutting-edge sensing techniques. The adoption of such techniques by farmers might be slow, since they are expensive and, if they do not work as intended, could ruin a farmer’s business. Farmers might also be hesitant to trust machine learning, since it is a relatively new technology.
Chiem: As Congcong noted, the trustworthiness of a system is crucial for its widespread acceptance. Applications such as heatwave prediction are not quite ready for widespread use, because heat waves have to be predicted far in advance, which is immensely tough to do accurately. Short-term applications, such as rainfall forecasting, do have a track record of successful predictions, however. Moreover, weather forecasting has rapid update cycles: if you make an errant forecast today, you still have a chance tomorrow to forecast the same thing with greater accuracy. For heatwave prediction, errant predictions have far more severe consequences. In agriculture, I could imagine the consequences are similarly severe. What do you think, Congcong?
Congcong: I agree with you, Chiem. Plants are quite sensitive, so if a wrong prediction leads to hazardous conditions in which the plants cannot survive for long, the grower might lose all of their plants. While system control in agriculture does not come with direct harm to humans, as in autonomous driving, the margins on crops are small. Growers are therefore in general more averse to using machine learning and statistical modeling approaches.
Chiem, during the ICAI Day, a day revolving around the numerous challenges regarding machine learning and climate change, you will walk us through a heat wave prediction use case. What would you say the largest hurdle is in this research?
Chiem: The primary challenge in climate change research is the interaction between processes across different scales. On a local scale, processes such as heat exacerbation due to dry soil conditions or particular local atmospheric configurations can influence heat waves. However, such local conditions can also be synchronized across the scale of the complete northern hemisphere, which means that hundreds of kilometers away, very specific conditions might also be an indication of an impending heatwave. This becomes even more complex when you, for instance, include global connections.
The interaction across these many scales creates challenges in determining the resolution of the data you need and which algorithm is most suitable. Additionally, climate change is actively changing our data distributions as we speak. Data gathered in the past might therefore reflect different weather dynamics than the weather right now, which makes generalizing very difficult. To an extent, your machine learning model is always extrapolating.
That is intriguing, thank you for your explanation! Congcong, during the ICAI Day you will moderate a lunch table discussion on Artificial Intelligence and Agriculture. What do you plan to discuss and why should people join?
Congcong: During the lunch table discussion, I would like to come together and talk about the current challenges of applying AI to agriculture, the popular and potential AI solutions to confront these challenges, and the future trends of applying AI to agriculture. I believe it is valuable to join, since it will be a very good chance for researchers, engineers, and students who are working in this area, or who are simply interested in it, to ask their questions, share their opinions, and perhaps have some of their doubts resolved through the discussions! Beyond that, it will also be a very good chance to build your network and explore potential collaborations for the future.
To round off: when would you say your research is a success?
Congcong: Any progress in my research I consider a success, and it will make me happy: my PhD students achieving a small step, solving pressing challenges for farmers, or making food production more sustainable by reducing emissions and energy use.
Chiem: One large success would be the ability to answer questions regarding climate change attribution such as: how much has climate change exacerbated the impact of this specific extreme weather event or made it more frequent? Being able to answer such questions confidently would allow us to hold parties, such as big emitters, accountable. While far off, I believe that machine learning has the potential to give us the tools necessary to do this in the future.
Working with medical data comes with many challenges, ranging from improving data usability to maintaining privacy and security. To outline some of these challenges, ICAI organizes the ICAI Deep-Dive: Working with Medical Data on the 3rd of November, 15:00-18:00. This hybrid event will be moderated by Nancy Irisarri Méndez and will take place on location at Radboud University and online.
Artificial intelligence solutions are rapidly transforming the world by automating tasks that have long been performed solely by humans. Training on increasingly massive datasets is one of the enablers of this widespread use of robust and trailblazing models. However, due to socioeconomic and legal restrictions, the industry lacks the large-scale medical datasets needed to develop robust AI-based healthcare solutions. There has therefore been increased interest in technical solutions that can overcome such data-sharing limitations while maintaining data security and the privacy of patients.
We will open this ICAI Deep-Dive event with an introduction to two specific data-related challenges in the medical field. The first challenge will be introduced by Bram van Ginneken of the Radboud UMC, who will discuss FAIR (Findability, Accessibility, Interoperability, and Reusability) requirements for data sharing in practice. Thereafter, Gennady Roshchupkin of the Erasmus UMC will conclude part I of the event by discussing the challenges of using Federated Learning in genomics research.
The second part of the ICAI Deep-Dive event will be a panel discussion centered on the question “How do we tackle challenges in medical data usage by collaborating together?”. Nancy will moderate the discussion among Bram van Ginneken, Clarisa Sánchez, Gennady Roshchupkin, and Johan van Soest; the discussion is also open to everyone who is interested in the challenges raised in the two talks.
After the panel discussion, there will be time for networking over drinks.
On November 16th, 2022, ICAI organizes the ‘ICAI Day: AI & Climate Change’. This hybrid event will take place on location in Utrecht and online. Registration is now open for anyone interested.
During the first part, the Lunch Table Discussion (on-site only), from 12:00 to 13:30 hrs, you will have the opportunity to talk to others in small table settings during a catered lunch. Each table revolves around a specific topic with regard to Artificial Intelligence and Climate Change. You can choose from seven sectors: Heavy Industry and Manufacturing, Electricity Systems, Transportation/Mobility, Agriculture, Societal Adaptation, Ecosystems and Biodiversity, and Markets and Finance. Seats are limited for this part, so sign up quickly.
The second part of the ICAI Day, the Plenary Session from 13:30 to 18:00 hrs, will be a hybrid session moderated by Felienne Hermans (Leiden University). You can attend this plenary program on-site or online. Keynote speakers Chiem van Straaten (KNMI), Elena Verdolini (EIEE, Università degli Studi di Brescia) & Ralf Herbrich (Hasso Plattner Institute) will share their insights on how Artificial Intelligence can be used to tackle Climate Change. Thereafter, there will be a panel discussion and some drinks to close the event.
On Tuesday 20 September the AI and Ethics course was launched at Startup Village in the Science Park of the University of Amsterdam. The course is a follow-up to the original Dutch National AI Course that focused on the basic building blocks of artificial intelligence.
Jim Stolze explained during the official launch: “There were a lot of lingering questions among the participants of our first course, questions such as: ‘How do we keep it fair and how do we avoid prejudice entrenching itself in algorithms?’ To make sure that everyone understands how the sector grapples with these topics and thinks these things through for themselves, there is now the National AI Course: AI and Ethics.”
After the opening words, there was a group discussion with, among others, professor Maarten de Rijke, founder of ICAI and distinguished professor of artificial intelligence and information retrieval at the University of Amsterdam. Maarten recounted how trustworthiness is one of the most important pillars on which a broader rollout of AI technology could rest. People will only trust AI if it proves itself worthy of that trust. This means that AI should be explainable and transparent, produce reproducible results, and be trustworthy for less prominent groups.
The way that he and his colleagues approach this issue is to push the gas pedal, but make sure there are guardrails. Explainability, transparency, trustworthiness for less prominent groups, and reproducibility could be some of these guardrails.
From left to right: Sennay Ghebreab, Mieke van Heesewijk & Quirine Eijkman
After the talk, it was finally time for the official launch. Sennay Ghebreab, director of the Civic AI Lab at ICAI, Mieke van Heesewijk from the SIDN fund, and Quirine Eijkman from the Netherlands Institute for Human Rights had the honor of pressing the big red button and giving the world access to the course. These three domain experts are featured in the course thanks to their long-standing work on human rights, AI, and ethics.
The conversation on AI and Ethics continued over food and drinks. All attendees brought their experience and expertise to a lively discussion, followed by a group picture.
Using Artificial Intelligence to Enable Low-Cost Medical Imaging – Phillip Lippe interviews Keelin Murphy
October 3, 2022 | ICAI Interview
Medical imaging is a cornerstone of medicine for the diagnosis of disease, treatment selection, and quantification of treatment effects. Now, with the help of deep learning, researchers and engineers strive to enable the widespread use of low-cost medical imaging devices that automatically interpret medical images. This allows low and middle-income countries to meet their clinical demand and radiologists to reduce diagnostic time. In this interview, Phillip Lippe, a PhD student at the QUVA Lab, interviews Keelin Murphy, a researcher at the Thira Lab, to learn more about the lab’s research and the developments of the BabyChecker project.
Keelin Murphy is an Assistant Professor at the Diagnostic Image Analysis Group in Radboud University Medical Center. Her research interests are in AI for low-cost imaging modalities with a focus on applications for low and middle-income countries. This includes chest X-ray applications for the detection of tuberculosis and other abnormalities, as well as ultrasound AI for applications including prenatal screening.
Phillip Lippe is a PhD student in the QUVA Lab at the University of Amsterdam and part of the ELLIS PhD program in cooperation with Qualcomm. His research focuses on the intersection of causality and machine learning, particularly causal representation learning and temporal data. Before starting his PhD, he completed his Master’s degree in Artificial Intelligence at the University of Amsterdam.
The QUVA Lab is a collaboration between Qualcomm and the University of Amsterdam. The mission of the QUVA Lab is to perform world-class research on deep vision: using deep learning to automatically interpret what happens where, when, and why in images and video.
The Thira Lab is a collaboration between Thirona, Delft Imaging, and Radboud UMC. The mission of the lab is to perform world-class research to strengthen healthcare with innovative imaging solutions. Research projects in the lab focus on the recognition, detection, and quantification of objects and structures in images, with an initial focus on applications in the area of chest CT, radiography, and retinal imaging.
In this interview, both Labs come together to discuss the challenges in deep learning regarding the medical imaging domain.
Phillip: Keelin, you witnessed the transition from simple AI to deep learning. What do you think deep learning has to offer in medical image analysis?
I believe deep learning has a huge role to play in medical image analysis. Firstly, radiology equipment is expensive and requires the training of dedicated physicians, which means that low and middle-income countries cannot meet their clinical radiology demands. Deep learning-powered image analysis therefore has the potential to help equalize access to medical imaging around the world.
Secondly, even in richer countries such as the Netherlands, we can use deep learning to reduce the costs of radiology clinics. Every minute a radiologist spends looking at an x-ray, for example, is expensive, and radiologists have to review a lot of x-rays every day. While every x-ray still requires the radiologist’s utmost attention, many of these x-rays actually show no abnormalities at all. Deep learning could be used here to prioritize the radiologists’ worklist, by putting cases that seem normal at the bottom and cases that are deemed urgent at the top of the list. When artificial intelligence can really be relied upon, we could even start removing items from the radiologists’ workflow entirely.
Phillip: You mentioned that you use deep learning, which of course has many facets of neural networks, such as graph neural networks (GNNs) or transformers. Since you are working in imaging analysis, I assume you mostly work with computer vision models. Are you using convolutional neural networks (CNNs) for classification and segmentation or do you even go beyond that scope?
As you mentioned, we almost always use a CNN, where the type of CNN depends on the application. More often than not, due to the confidential nature of medical data, the most important factor in determining which model to use is actually the amount of data that is available. Training a model on too little data risks overfitting and introduces a lot of uncertainty. We therefore have to use models cleverly to counter such consequences, for example through class balancing, data augmentation, or adjusting the network architecture. Other factors include the size of the model. For instance, to enable global use of the BabyChecker, the model must fit on a mobile phone, which sets requirements for the size of the network we can use.
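To illustrate two of the small-data tricks Keelin mentions, here is a minimal, hypothetical PyTorch sketch (not Thira Lab code; `dataset` is an assumed image dataset yielding (image, label) pairs) combining mild augmentation with class-balanced sampling:

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import transforms

# Mild, anatomy-preserving augmentation; in a real pipeline this would be
# passed to the dataset as its transform so each epoch sees varied views.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

# Class balancing: sample rare classes more often instead of reweighting
# the loss. `dataset` is an assumed map-style dataset with integer labels.
labels = torch.tensor([dataset[i][1] for i in range(len(dataset))])
class_counts = torch.bincount(labels)
sample_weights = 1.0 / class_counts[labels].float()  # rarer = sampled more
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels),
                                replacement=True)

loader = DataLoader(dataset, batch_size=16, sampler=sampler)
```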
Phillip: We know that deep learning models can create false predictions, so it might happen that the system indicates that a measurement looks all good, while that person actually needs to go to the hospital. How do you deal with this uncertainty and possible mistakes?
First, we should acknowledge that uncertainty is inevitable. Radiologists make mistakes, just like models can make mistakes. Only through strict quality control processes can we ensure that these models are reliable enough so that they do more good than harm. Especially in the medical field, this poses many challenges. For instance, on the technical side, we should figure out how to deal with domain shifts. On the legal side, we should determine who is responsible if the model makes a mistake and what legal actions can be taken. Those things are incredibly unclear at the moment.
Right now, I still see that artificial intelligence has a big role to play as a suggestive assistant to a radiologist if one is present, or as a screening tool when one isn’t. For instance, in Africa tuberculosis is very prevalent, but most often there is no physician available. One of the products developed in our group and now scaled by Delft Imaging is able to detect tuberculosis-related abnormalities in inexpensive chest x-rays and refer patients for a more accurate and expensive microbiological test when necessary. While this product is not flawless, it does allow us to help people we couldn’t have helped otherwise. So until we reach the stage where systems are sufficiently quality controlled, using deep learning for screening and suggestions can be really useful.
Phillip: This sounds similar to challenges in autonomous driving, where it is hard to determine who is really at fault in an accident. We know that another problem is that neural networks tend to be overconfident, also in situations where they should not be. Are there ways to address this problem?
Yes, I have not mentioned this before, but it is actually really important for getting artificial intelligence accepted in the clinical workflow. Sometimes an image of noise accidentally makes its way into the database due to a malfunctioning scanner. If the system still gives you a score for emphysema, then you lose faith in that system. In such cases we want the system to report that the image is very different from the images it was trained on and that the model cannot classify it. It would be even better if the system provided an interpretable explanation for why it made a certain prediction, since transparency in the prediction process is crucial for clinicians to be able to trust the system.
Phillip: You mentioned interpretability, a topic that has gained a lot of attention recently, especially due to discussions about whether interpretability techniques are truly interpretable. Have you already tried out interpretability methods for neural networks, or are those methods still a bit too noisy?
While interpretability methods work well in theory, for me the field is still too under-researched to have practical value. One popular method for explaining predictions in the medical field is producing heat maps based on the weights of the network. However, such maps are hard to evaluate quantitatively and tend to look pretty rather than provide genuinely useful explanations.
Phillip: In the low data regime, where models are trained on small amounts of data, the explanations might also quickly overfit to random noise.
Yes, indeed.
Phillip: When clinicians hear about these topics in AI, are they reluctant to participate in research on artificial intelligence?
My experience is really positive, but I work mainly with doctors who are interested in what artificial intelligence has to offer. I believe that clinicians recognize that AI is coming to their field and they either have to get on board or they are going to be left behind. As I mentioned before though, the technology in most cases is not ready to be left unattended. Therefore, the use cases that researchers and clinicians prefer right now often assume a suggestive/assistant role for the AI algorithm, or a screening role in scenarios where no trained reader is available.
Phillip: Since we talked a lot about data sparsity, how important is it for you to have collaborations across hospitals or medical companies to get access to data?
Collaboration with your partners is super important, in lots of ways. I believe that researchers should never try to develop medical image analysis solutions without collaborating with clinicians. First off, to do research, you depend on the availability and quality of the data that is gathered. If we want to move the field forward, we should communicate more about which data can be used for which research purposes, how the data should be gathered so that its quality is as high as possible, and how we can get consent from patients to use their data more often than we do now.
Secondly, there is knowledge sharing involved. Sometimes I read a paper that clearly had no clinical input, and you can really see the difference. Either the researchers made mistakes that could have been prevented, or the research has no practical value.
Phillip: Do you consider the agreement of a patient to use their data to be the biggest hurdle for developing something like a medical ImageNet?
The problem is that patients are not asked often enough whether their data can be used for commercial purposes. Even when they are asked, patients might be reluctant to share such private data without knowing how and by whom it will be used. While everybody working in the field of artificial intelligence knows that data is the cornerstone of everything, we should think about how to communicate this effectively to the community, for instance by providing more education to create public awareness of what AI is and why large amounts of data are necessary to create successful solutions.
Phillip: From the perspective of a patient, it is a small thing to give, but for the research domain, every single patient who is willing to share their data makes a big difference in enabling better medical image analysis.
Yes indeed. Still, there are challenges that need to be addressed. For instance, do patients feel comfortable with sharing their data with all companies or do they prefer to share their data selectively? What does it mean for competition if all companies have access to the same data? These are questions that we need to find an answer to, together with the community.
Phillip: Yes, maybe patients even want to go so far as to approve the specific applications their data can be used for. As scientists, we of course assume that data will be used for good, but we need to make sure that data is really only used for beneficial applications and not for applications that might harm people.
Yes, and we should also make sure that data is completely de-identified, so that the person an image was taken of can never be traced back from that image.
Phillip: Now, what is the research focus of the Thira lab?
Our research focuses on two things: the scalability of existing methods in the medical domain, and the reliability of the new methods we are developing to make predictions. Whatever research we do, I would say the common thread is always the clinical applicability of our solutions rather than the development of purely theoretical knowledge.
Phillip: When developing a device like the BabyChecker, do you only use data acquired with that device to train the model, or is there some domain adaptation involved?
In general, in the minimum viable product stage, we only use data acquired with the actual device, so no domain adaptation is necessary. At this early stage, BabyChecker’s software works with a selected ultrasound probe so that early adopters in our projects can gain easy access to BabyChecker. Over 70 operators who have been trained to use BabyChecker are scanning pregnant women in Tanzania, Ghana, Sierra Leone, Ethiopia, and soon Uganda as well. The data comes back to our partner Delft Imaging, where experts keep a close check on how well the software is working and physicians assess the quality of the data. This way we make sure that the system is rigorous and that patients get the correct care.
Phillip: You have already mentioned some future improvements to the BabyChecker, where do you want to be in four years?
At the moment, the BabyChecker checks a few things: 1) the gestational age of the baby, to determine the estimated due date; 2) the position of the baby, so that when the baby is in a breech position, the woman can make sure to deliver in a hospital; and 3) the presence of twins, since this is also a high-risk pregnancy where the woman should go to the hospital to deliver. Additionally, we are looking to perform placenta localization and to detect the fetal heartbeat to discover possible pregnancy complications.
Phillip: Let’s say that in four years the field of AI has made at least one or multiple steps forward. Where do you see that AI needs to improve, especially in the medical domain?
In general, I would like to see how we can use low-cost x-rays and ultrasounds for lots of other diagnoses. For example, heart failure or lung disease. However, in order for such applications to be feasible, we need AI methods that can work well with small amounts of training data. I think that is really the biggest challenge that we have to overcome.
Phillip: In terms of evaluation, when would you consider your research to be successful? Is it when doctors use the products that you have developed or is it when you feel like there is nothing to improve in the short term?
While I believe I will never feel like there is nothing to improve, I would say my research is successful if we can reliably screen large amounts of people in low-resource settings for all sorts of illnesses and possible complications and get them referred for the treatment they need.
On October the 6th, 2022, the Thira Lab and the QUVA Lab will talk about their current work during the lunch Meetup of ‘ICAI: The Labs’ on AI for Computer Vision in the Netherlands. Want to join? Sign up!
AI technologies allow us to do more with less – An interview with Geert-Jan van Houtum
September 8, 2022 | ICAI Interview
The manufacturing industry is undergoing a paradigm shift. Because of increasing connectivity, we can gather a lot of data from manufacturing systems for the first time in history. The increasing connectivity also enables the linking, analysis, and performance optimization of supply chain components, even if they are geographically dispersed. The AI-enabled Manufacturing and Maintenance Lab (AIMM) aims to accelerate developments in this field using Artificial Intelligence. In this interview with Geert-Jan van Houtum, we will take a surface dive into some complex challenges in predictive maintenance.
Prof. Geert-Jan van Houtum holds a position as professor of maintenance and reliability at the Industrial Engineering and Innovation Sciences (IE&IS) department at Eindhoven University of Technology. His expertise includes maintenance optimization, inventory theory, and operations research, focusing on system availability and Total Cost of Ownership (TCO).
The EAISI AIMM Lab is a collaboration between Eindhoven University of Technology, KMWE, Lely, Marel, and Nexperia.
What is predictive maintenance, and what is its purpose?
Traditionally, businesses either replace components when they fail, so-called “reactive” maintenance, or use lifetime estimations to determine the best moment for maintenance, called age-based maintenance. Usually, reactive maintenance leads to machine downtime, while age-based maintenance carries the risk of replacing expensive components too soon. Predictive maintenance aims to be more proactive. Using data and AI, we can actively monitor the condition of components in real time; this allows us to predict more accurately when a component is on the verge of failure and needs replacing.
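The difference between the policies can be made concrete with a small simulation (an illustrative sketch; the degradation process and thresholds are invented): an age-based rule replaces at a fixed age regardless of condition, while a condition-based rule replaces only when measured wear approaches the failure level.

```python
import numpy as np

# Illustrative sketch: age-based vs condition-based replacement on a
# simulated component whose wear accumulates as random gamma increments.
rng = np.random.default_rng(42)

def simulate(policy, horizon=100_000, fail_level=100.0, age_limit=80,
             alarm_level=85.0):
    wear, age, replacements, failures = 0.0, 0, 0, 0
    for _ in range(horizon):
        wear += rng.gamma(shape=1.0, scale=1.2)  # stochastic degradation
        age += 1
        if wear >= fail_level:                   # failure: forced replacement
            failures += 1
            wear, age = 0.0, 0
        elif (policy == "age" and age >= age_limit) or \
             (policy == "condition" and wear >= alarm_level):
            replacements += 1                    # planned replacement
            wear, age = 0.0, 0
    return replacements, failures

for policy in ("age", "condition"):
    print(policy, simulate(policy))
```

With these invented parameters, the age-based rule still suffers regular failures (components that happen to wear faster than average), while the condition-based rule almost never fails because it reacts to the measured wear itself.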
What is the role of data analysis and AI in predictive maintenance?
For many components, you know why they deteriorate over time: you know the failure mechanism and how to measure the component’s condition. For instance, when you drive a car, you know that the tread on the tires wears down. You can regularly check whether the amount of tread is still within safety limits and replace a tire if necessary.
There are also components where the failure mechanism is known, but the best way to measure the component’s state is unknown. Before predictive maintenance can be used in these situations, a way to measure that state must first be found. Artificial Intelligence may be used as part of an inspection solution, such as visual inspection using computer vision, but this is not always necessary or desirable.
Finally, there are cases where the failure mechanism is unknown or has not yet been accurately mapped. Here the first step is to conduct a root-cause analysis. By collecting large amounts of data on all possible root causes, you can try to match patterns in the data to failure cases. Here, data analysis and artificial intelligence play an important role because they provide critical insights into the data that can be interpreted to create knowledge. This process drives innovation.
What is the most challenging aspect of determining the root cause using data?
Many failure mechanisms either occur infrequently or only under specific conditions. In these cases, there is simply insufficient data to perform data analysis or train a neural network, making it incredibly difficult to identify the root cause. Honestly, those situations are real head-scratchers.
Nonetheless, some businesses have found great success using anomaly detection algorithms. Such algorithms identify deviations from normal behavior, which indicate the presence of a defect or fault in the equipment. Before Artificial Intelligence gained relevance, statistical process control was the gold standard for detecting anomalies. Through the integration of AI-based techniques, anomaly detection has become more refined and gives more intricate insight into the nature of anomalies.
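The contrast can be sketched in a few lines (an illustration with toy sensor data, not the lab's code): a classic control-chart rule flags anything outside three standard deviations, while a learned detector such as an isolation forest models what "normal" looks like.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy sensor readings: healthy baseline plus two injected anomalies.
rng = np.random.default_rng(7)
normal = rng.normal(loc=50.0, scale=2.0, size=(1000, 1))
readings = np.vstack([normal, [[62.0], [38.5]]])

# Statistical process control: flag readings outside mean +/- 3 sigma.
mu, sigma = normal.mean(), normal.std()
spc_flags = np.abs(readings - mu) > 3 * sigma

# AI-based detector: an isolation forest fit on healthy data only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
iforest_flags = detector.predict(readings) == -1   # -1 means anomaly

print(spc_flags[-2:].ravel(), iforest_flags[-2:])  # both should flag the pair
```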
What does AI research in manufacturing and maintenance mean to the world?
When equipment and manufacturing lines do not function properly, it leads to disruptions throughout service and manufacturing supply chains, all the way back to the consumer. It is accompanied by pressure on the environment, increased costs of serving customers in an alternative way, and in some cases the unavailability of life-saving equipment or medicine. AI technologies allow us to do more with less. For instance, predictive maintenance allows us to avoid potentially catastrophic equipment failures while preventing unnecessary maintenance. It is the perfect combination of businesses’ financial incentives, societal values, and the Sustainable Development Goals (SDGs).
Is applying predictive maintenance techniques always more beneficial than more traditional forms of maintenance?
Initial investments, as well as the running costs, of predictive maintenance solutions are significant. Therefore, right now, predictive maintenance is most valuable to businesses that suffer large losses when their equipment fails or whose equipment failures cause safety concerns. By working together with industry partners in our lab, we ensure that our solutions are not only technically feasible and novel but also meet societal, industrial, and financial requirements. Predictive maintenance will play a large role in the manufacturing industry, but developments are slow and it will not replace all traditional maintenance.
On the 15th of September, 2022, Geert-Jan will speak about a predictive maintenance concept for geographically dispersed technical systems during our Labs Meetup on AI for Autonomous Systems in the Netherlands. Want to join? Sign up here!
The first ICAI national social meetup on June 30th was a lot of fun. Our community members, from scientific directors, support staff, project managers, and lab managers to PhD students, all had a great time meeting people face-to-face after two years of pandemic.
There were two locations: Amsterdam and Nijmegen. Each location had its own program and highlights. In addition to tasty food and drinks at the Amsterdam site, there was a ping-pong table where people could stretch their legs during the meetup. Community members from Wageningen and Amsterdam labs had come to the Nijmegen site to chat and chill together. We got lucky with the weather too; the rain only came after the meetup.
Thank you all for making such a great social gathering possible! We will see you again in early 2023 for the next national social meetup.