Although we are already halfway through 2023, it’s always good to look back at what our institute has achieved so far. In the spirit of ICAI’s mission to support open talent and technology development and ‘Make AI for Everyone’, we accomplished great things in the past year.
We are therefore pleased to present the ICAI 2022 graphic overview: a comprehensive look at our achievements in 2022.
This infographic showcases four of the six pillars that form the foundation of our institute: ICAI Labs, ICAI Academy, ICAI Venture and ICAI Connector.
ICAI ended 2022 with a portfolio of 32 labs. 334 PhD students and 100 partners worked together to deliver an impressive 254 research papers, of which we are very proud. We continue to grow, and expect to add more valuable labs to our ecosystem this year.
We organised many interesting events within our community, giving visitors the opportunity to network and learn about the latest trends in the field of AI. Interested in our next event? Don’t forget to check the events page on our website regularly, or sign up for our monthly newsletter.
With our Dutch podcast ‘Snoek op Zolder’ (produced by Henny Huijgens) we passed the magic number of 5,500 downloads! A little teaser: a new season is in the making, and we can’t wait to share it with our community.
We are expanding! In the past weeks, two new ICAI labs have launched. ICAI is happy to have them as a part of our ecosystem.
Explainable AI for Health
The Explainable AI for Health lab is a collaboration between the Leiden University Medical Center, Amsterdam UMC and Centrum Wiskunde & Informatica. Together, they will work on developing new forms of artificial intelligence that help physicians and patients with clinical decisions.
The goal of the lab is to tailor and validate new inherently explainable AI techniques and guidelines that can be used for clinical decision-making.
Scientific director Peter A.N. Bosman says: “Clinical decision support based on AI can have great added value for physicians and patients”.
Responsible and Ethical AI for Healthcare Lab (REAiHL)
Our 49th lab is REAiHL, a collaboration between SAS Institute, Erasmus Medical Center and Delft University of Technology. The lab aims to develop and deploy AI technologies that are safe, transparent, and aligned with ethical principles to improve healthcare outcomes.
The new AI Ethics Lab was initiated by internist-intensivist Michel van Genderen from Erasmus MC. Diederik Gommers, Professor of Intensive Care Medicine at Erasmus MC, is also closely involved: “Initially, the new AI Ethics Lab will focus on developing best practices for the Intensive Care Unit. But our ultimate goal is to develop a generalized framework for the safe and ethical application of AI throughout the entire hospital. We therefore expect to soon start addressing use cases from other clinical departments as well.”
Do you have a passion for video? Do you have an eye for images and are you good with a camera? If yes, this vacancy might be interesting for you!
We are looking for an enthusiastic student assistant for video editing (8 hours per week), starting 1 September.
What you will do
As a video editor, you will be responsible for producing videos for our social media channels. Your main task will be editing (and subtitling) interviews with different stakeholders within our ecosystem. Occasionally, you may also be sent out (together with a colleague) to shoot the content yourself.
Besides editing videos, you may also be asked to help develop and publish (graphic) content on our website. Experience with WordPress, Canva and/or Photoshop is an advantage, but not a requirement.
Who are we looking for?
– You are a student (at the University of Amsterdam);
– You have experience with Adobe Premiere Pro;
– You have excellent communication skills in English;
– You are sociable and enjoy working together, but can also work well independently;
– You are eager to learn and work hard to improve your video, editing and interviewing skills;
– You are good with a camera and know how to take nice shots;
– Graphic skills and/or experience with web design are a plus, but not a must.
The National Innovation Center for Artificial Intelligence (ICAI) has the mission to keep the Netherlands at the forefront of knowledge and talent development in AI. Creating and nurturing a national AI knowledge and talent ecosystem is our central aim. In doing so, we want to use the resources that arise from our activities sustainably and to reinvest them in the Dutch ecosystem. This way, the Netherlands will become a strong European catalyst in the field of AI talent and AI knowledge development by
– attracting talent to work on problems;
– attracting problems and data for talent to work on; and
– feeding local and national ecosystems for talent and knowledge development.
What do we offer?
– The work is part of a Student Assistant position. Your gross monthly salary starts at €2,618 for full-time employment and depends on the academic year you are in.
Do you recognize yourself in the job profile? Then we look forward to receiving your application!
Today is ICAI’s fifth anniversary. We started ICAI from the fundamental belief that if AI is going to transform and permeate every single aspect of our society, we should help ensure that we can influence, steer, and own these developments ourselves, here in The Netherlands and in Europe. A core motivation behind ICAI has been to help address the significant concentration of power and data in the hands of a very small number of companies that have almost exclusive access to these technologies outside of any form of democratic control. Giving ourselves the means to develop talent and technology in AI is an issue of industrial and, ultimately, societal sovereignty.
The mission that ICAI has pursued to support open talent and technology development in AI since its launch on April 23, 2018 can best be summarized in three phrases: shared ownership, augmented intelligence, and many voices.
“Shared ownership” refers to democratizing AI. It refers to the collaborative development of talent and technology in AI, with different types of stakeholders – knowledge institutes, industry, government, civil society – based on shared innovation agendas that the stakeholders determine, work on, and revise themselves. Learning-by-doing is a key ingredient of shared ownership, so that all stakeholders become smarter through their participation in collaborative development. ICAI’s labs are our primary vehicle for putting shared ownership into practice. I am very proud that as of today, when we turn five, 47 labs around the Netherlands have been launched, each with at least five PhD students. Between them, they bring together more than 140 partners from all sectors of Dutch society.
“Augmented intelligence” targets the development of AI systems not as autonomous systems that are meant to replace people but as systems that complement and support people to help them decide and act better. There is no lack of challenges where we can use help: global pandemics, resource scarcity, energy transition, aging populations, collapsing biodiversity, digital divides, climate change, staff shortages in key sectors, growing inequality, food waste, unaffordable healthcare, eroding democratic institutions. Increasingly, ICAI’s labs do not just target technological or economic goals but align their innovation agendas with the UN’s sustainable development goals.
Finally, “many voices” recognizes the diversity of perspectives on the development, roles, and impacts of AI. It also refers to the need to optimize for goals that go beyond accuracy. With the recently launched ROBUST program, ICAI now includes a large number of labs that focus on different aspects of trustworthiness of AI-based systems, such as explainability, reliability, repeatability, resilience, and safety. Rather than relying on end users, or indeed on society, to deal with the consequences of AI technologies that have been optimized for accuracy only, ROBUST emphasizes the range of meaningfully different trade-offs that technology development and deployment may and should make. Above all, the ROBUST program fosters the collective intelligence of diverse and collaborating groups of stakeholders.
In just five years, ICAI has grown into a nationwide ecosystem that is organized in a bottom-up fashion. With this ecosystem we seek to decentralize and democratize technological power and to make sure that technology is applied for human empowerment, with broad and genuine benefit.
Maarten de Rijke, Scientific Director, Innovation Center for Artificial Intelligence
March 29, 2023 | ICAI Interview: The Complexities of Train Schedule Management: A Look at NS’ Planning Process and the Push for Optimization – An interview with Bob Huisman
Train schedules are a crucial part of our daily lives, but have you ever thought about the complexity behind creating them? From ensuring seats are available to deploying personnel; there are numerous factors to consider. Despite significant advancements in planning automation, there is still room for improvement, especially in hub planning optimization. So, what’s happening under the hood at NS, and why is optimizing railway planning so challenging? We interviewed Bob Huisman, Manager of Research and Development Hub Logistics at NS, to learn more about how AI is revolutionizing the train scheduling management process.
Bob Huisman is a respected figure in the railway industry, with a career spanning several decades. Huisman currently holds the position of Manager Research & Development Hub Logistics at NS, where he is responsible for delivering innovative methods and tools for planning and scheduling shunting-related processes at railway hubs, as well as assessing the logistic process capacity of railway hubs. Beyond the technical and scientific aspects of his work, Huisman sees it as an opportunity to make a social contribution. He describes himself as ‘having one foot firmly planted in the business world and the other in academia’. His career path is a testament to his ability to bridge the gap between research, development, creativity, and challenging problems. Huisman is one of the principal investigators of the LTP ROBUST programme and the chair of the users’ committee.
‘At first glance, the process of train schedule management may seem simple – travelers at the station, a train ready to board.’ But as Huisman points out, gazing in the direction of Utrecht Central Station, one of the important hubs in the railway network, it is actually a complex interplay of factors. Hub planning, as part of the overall railway planning, involves ensuring trains are in the correct composition to maximize seat availability, that they arrive at the right platform at the right time, and that they are in good technical condition. Additionally, once a train reaches its final destination, it must be checked for technical issues, cleaned, potentially rearranged, and parked in a way that maximizes space efficiency.
But that’s not all. Railway planning and control involves fleet assignment and the deployment of personnel, which come with their own set of complicated and important limiting factors. Employment conditions, work variation, and the ability of colleagues to come home at the end of the day are just some of the factors that Huisman’s team must take into account. ‘Train schedule management is almost paradoxical. As a traveler you may experience one timetable for the entire year, but under the hood, the rail sector makes a unique plan for every single day of the year. This involves planning the timetable, fleet and staff, as well as the 34 hubs – the stations with connected yards where multiple train lines converge – months in advance.’
Over the past two decades, NS has seen a significant increase in automation in network planning. However, there is still no automation in hub planning, which Huisman notes remains an obstacle to overcome. Currently, this daunting task rests solely on the shoulders of human hub planners, who are responsible for what is called the “knitting process”: juggling a multitude of factors simultaneously and making decisions in real time, ensuring that passengers arrive at their destinations safely and smoothly.
How do those train scheduling experts manage to make everything run like clockwork?
‘It is a multi-step process. First, we create a timetable, then assign our fleet and lastly assign our colleagues.’ Albeit the largest, NS is only one of the seven operators on the Dutch rail network, with ProRail responsible for infrastructure management, capacity allocation and traffic control. Huisman notes that planning NS’ train operation involves multiple iterations before arriving at the final basic pattern, which is then translated into a blueprint for each weekday, and subsequently for each specific day. The schedule is finalized a month in advance for a specific calendar day, and from then on NS can still make adjustments up to two days prior. ‘Once the plans are handed over for operational execution, controllers at ProRail and NS must act quickly in the event of a collision, malfunction, or employee illness. At that moment, it resembles tinkering more than actual planning by optimization’, Huisman notes.
NS has been making significant progress since 2015 towards automating hub planning, but Huisman emphasizes that there are many “dirty details” that need to be taken into account. One of the most gratifying moments of this project came in 2020, just before the onset of the pandemic. ‘We were able to demonstrate on the basis of a proof of concept that we are going to make it’, Huisman recalls. ‘Moments like these, when colleagues have confidence in the success of a project and more resources become available, keep me young. I have resolved to have this project standing when I retire,’ he says with a sense of pride. ‘This project represents a unique collaboration with young researchers and a continuous flow of PhD and master students, and has been one of the most personally rewarding projects of my career.’
Why is automation and optimization so important?
‘Two goals for the rail system are difficult to reconcile: on the one hand, meeting a growing demand for transport and, on the other, running a robust operation. The driving force for automated hub-planning support is the need to fix plans as late as possible and to be able to make changes online. That will improve the robustness of the transportation process, rail infrastructure usage, and seat availability,’ Huisman notes.
Currently, to anticipate uncertainties and unforeseen disruptions, slack is incorporated into the planning, with respect to both space and time. This slack allows the railroad planners to be flexible and to deal with details that only become clear on the day of operation itself; it gives room to breathe when things go wrong. ‘Reducing slack to facilitate future passenger volumes increases the risk of a domino effect of disruptions; however, fast automated support for planning and control may compensate for this. All of these challenges raise the question: how do we achieve sustainable growth, act more dynamically and be more robust, while using our existing resources more efficiently?’
NS seems to have already invested considerable resources into research that helps professional planners create more optimized plans in less time. Why is this process so tough?
‘Good question. Now, picture the entire rail system as a massive, interconnected wirework – a complex maze that requires meticulous planning to operate smoothly. There are countless variables that can impact the system, making optimization a daunting task. It’s not just a matter of flipping a few switches, pushing a few buttons and moving a few trains around.’
‘Hub planning is a combinatorial problem with an enormous search space in which it is hard to find good, feasible solutions. Moreover, the need to fix plans as late as possible requires modeling many dirty details of the real world and complex safety rules, which rules out purely linear optimization methods. When we started in 2015, hub planning had been the topic of a well-known international competition in the field of Operations Research. Curious as we were, we reached out to all the competition’s prize winners to see if they knew something we didn’t. Eventually we concluded that no practical solution had been found yet, which is why we decided to set up a long-term research and development program ourselves. One way or another, we had to find a way. Together with our academic partners we finally succeeded in building a working system, mainly based on powerful local search, combined with other methods like linear optimization and constraint programming. We coined it the Hybrid Integrated Planning method (HIP). Although the system generates plans that are acceptable to professional planners, continuation of the research is needed to enhance the system’s functionality.’
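To give a flavor of how local search attacks such a combinatorial problem, here is a minimal, self-contained sketch on an invented toy shunting instance. The instance, function names, and neighborhood move are our illustration only, not NS’s HIP system:

```python
import random

def local_search(initial, cost, neighbor, iters=2000, seed=42):
    """Greedy local search: accept any candidate that is no worse,
    so the search both descends and drifts across cost plateaus."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    for _ in range(iters):
        cand = neighbor(current, rng)
        cand_cost = cost(cand)
        if cand_cost <= current_cost:
            current, current_cost = cand, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best, best_cost

# Toy instance: park 5 trains on 3 yard tracks; trains that share a
# track must not be present at overlapping times.
TRAINS = [(0, 4), (1, 3), (2, 6), (5, 8), (7, 9)]  # (arrival, departure)
TRACKS = 3

def conflicts(assignment):
    """Number of train pairs parked on the same track with overlapping stays."""
    count = 0
    for i in range(len(TRAINS)):
        for j in range(i + 1, len(TRAINS)):
            if assignment[i] == assignment[j]:
                (a1, d1), (a2, d2) = TRAINS[i], TRAINS[j]
                if a1 < d2 and a2 < d1:  # time intervals overlap
                    count += 1
    return count

def move_one_train(assignment, rng):
    """Neighborhood move: reassign one randomly chosen train to a random track."""
    cand = list(assignment)
    cand[rng.randrange(len(cand))] = rng.randrange(TRACKS)
    return cand

plan, residual = local_search([0] * len(TRAINS), conflicts, move_one_train)
print(plan, residual)
```

Real hub planning layers safety rules, train compositions, and movement sequencing on top of such a skeleton, which is where the hybrid combination with constraint programming and linear optimization comes in.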
‘Following the progress made by DeepMind in the work leading up to AlphaZero, we started a research track specifically focused on applying deep reinforcement learning to multi-agent pathfinding. The idea was to build up a logistic plan by generating a chain of individual actions, each representing a train shunting movement. After six years of research together with our academic partners, we found that brute-force local search still outperformed our various complex reinforcement learning approaches, even on simplified models of reality. Taking a step back to see the forest for the trees, we halted the program and shifted our focus. Our current research direction is aimed at using machine learning to complement local search, constraint programming, and linear optimization. The challenge is to find and modify plans more quickly, specifically plans that are easily understood by humans. To build a system that exhibits some intelligent behavior in the future, it must be able to learn from previous situations and to communicate at some abstract level with its users. We still have a long way to go, which asks for perseverance and creativity’, Huisman notes.
“Humans in the loop” does not conform to the traditional view of automation, in which AI systems entirely replace human operators. Recently, however, researchers have started to view automation as a two-dimensional process, in which humans and machines work together and complement each other’s strengths and weaknesses to achieve a common goal.
‘In our unique use case we are looking for automatic tools to support people’, Huisman emphasized. ‘Hub planning is a challenge that humans are quite good at and a task that requires substantial creativity, ingenuity and the ability to color outside the lines – but it takes a lot of time. Algorithms, on the other hand, can speed up the planning process but currently cannot handle its full complexity. That is why we are focused on creating systems that help people adjust the “knitting process” more efficiently, reducing slack and maximizing space usage to meet the growing demand for train travel.’
When it comes to AI, Huisman has a distinct perspective. ‘AI research can bring us closer to understanding human intelligence, yes. But as long as we have not understood and defined human intelligence, we should stop talking about artificial intelligence that has to replace humans.’ Instead, Huisman believes that we should focus on building super tools that exhibit intelligent behavior, regardless of whether there’s a steam engine or a neural network powering them. ‘I see neural networks and reinforcement learning as ingredients, among others, to create value’, Huisman explained, adding that it’s all about developing an overall system that can deal with the intricacies of logistic planning in cooperation with humans. ‘AI has the potential to disrupt the approach we take in this, but it is not just about generating plans automatically; you have to communicate the output to planners and make it understandable to them, give humans control over plan qualities, and link the output to the systems of other parties.’
Recently, ICAI announced the LTP ROBUST program, a new initiative supported by the University of Amsterdam and 51 partners from government, industry, and academia. As part of the program, 17 new AI labs are being established that focus on the development of trustworthy AI technology to address socially relevant issues in areas such as healthcare, logistics, media, food, and energy. Can you elaborate on the importance of trust in AI systems and your role in the program?
‘Trust is a crucial factor in the adoption of AI systems. Our perspective is that the public, customers, travelers, patients, users and authorities do not base their judgment on the functionality of the AI algorithm in isolation. Rather, they base it on the whole of the interacting ICT, the organization behind it, the procedures put in place to regulate it, the UI, and the availability of the system. Trustworthiness of an individual AI algorithm is a necessary but not sufficient condition for its effective use in a system. Therefore, research into creating such AI systems necessitates a symbiotic relationship between academia and industry. In the end, private or governmental organizations set the specifications of the system, design and build it, and operate it over the years. In LTP ROBUST, research, development and systems engineering meet to achieve social impact through operational trustworthy systems.’
Trust is also contextual and domain-specific; the risks of a medical diagnosis differ from the risks of logistics planning or music recommendation, and people rely on different systems in different ways. The program’s approach is to start with a system vision and a targeted research question for each lab, with the private partners playing a vital role in validating the output and asking the right questions. ‘While the validation and questioning may vary for different fields, the general approach to winning the trust of the user can be similar. As one of the principal investigators and the chairman of the overarching user committee, my role is to oversee the cooperation between the partners and ensure knowledge transfer between labs.’
The RAIL Lab, a collaboration between Delft University of Technology, Utrecht University, ProRail and NS, is one of the LTP ROBUST Labs joining ICAI. Its goal? Working towards algorithmic support to ensure safe and reliable logistic operations and capacity planning that is trusted by human experts. Explainable AI plays a role in this.
‘Explainability is often seen from the research world as: if I could just explain why my algorithm came to this conclusion and if I change something about my input, how would my evaluation change? It’s almost an internal accountability from your algorithm to the outside world, which is necessary, but might only be sufficient to accept or reject an individual prediction of the system. The question is whether that is sufficient for humans to use and accept the system as a sparring partner for decision support.’
Huisman emphasized the importance of setting standards for what an explanation of an algorithm should look like. ‘Authorities often require a deeper understanding of how an algorithm works, including how it makes considerations, what information it looks at, what information is necessary to make a good choice, and how uncertain the algorithm is in its output. Furthermore, for specific instances, humans may ask counterfactual questions to understand why one decision is proposed and not another. By understanding the requirements for human decision-making, we can create more effective explanations that provide a more complete understanding of the algorithm’s decision-making process. Since the user is often responsible for the final decision, they want to be sure it is the right one.’
To address these challenges, each LTP ROBUST lab will include a researcher with a background in social and behavioral science. The RAIL Lab is a testament to this effort, with one PhD student focusing on the cooperation between human and AI planners. This study will reveal requirements, expectations, and potential pitfalls of human-AI interaction, specifically of interaction with algorithmic planners. These results will be augmented with data science techniques to extract important factors from past decision-making and planning processes, to develop a computational cognitive model of the decision and planning process.
Huisman sees a colorful future ahead: ‘NS has its fair share of critics – some say it’s too big, bureaucratic, or slow. On the other hand, I know fewer other companies that have invested as much time and resources into innovative projects like railway planning, as NS has in the Netherlands.’ Optimizing rail systems is a complex task that will require many more years of research and a delicate balance between human expertise and advanced AI algorithms. However, Huisman and his colleagues are committed and up for the challenge. ‘With LTP ROBUST and RAIL Lab’s ongoing efforts, we can hope to see more trustworthy, efficient and seamless rail systems in the near future.’
We hope that through this interview you have learned a bit more about NS and the intricacies of railway planning. NS, ProRail and their academic partners, TU Delft and Utrecht University, are currently recruiting PhD students for the RAIL Lab. If you are interested in a complex technical AI challenge with a social contribution, check out their webpage: https://icai.ai/icai-labs/rail/ The next time you’re waiting for your train, take a moment to appreciate the intricate dance of 22,000 employees happening behind the scenes to get you to your destination, and maybe consider joining us!
AI has the power to transform the world, but only when guided by the shared values and visions of its stakeholders. Therefore, ICAI is proud to become part of the Partnership on AI, a worldwide coalition of academic institutions, media organizations, industry leaders, and civil society groups dedicated to promoting the positive impact of AI on people and society.
The Partnership on AI (PAI) was founded with six thematic pillars, aimed at addressing the risks and opportunities of AI:
Safety-Critical AI;
Fair, Transparent, and Accountable AI;
AI, Labor, and the Economy;
Collaborations Between People and AI Systems;
Social and Societal Influences of AI;
AI and Social Good.
Partners from all over the world and from every AI subdomain take part in the partnership with one goal: to promote best practices in the development and deployment of AI. Through collaboration, the Partnership on AI develops tools, recommendations, and other resources, inviting voices from across the AI community and beyond to turn insights into action and ensure that AI and ML technology puts people first.
“We are pleased to be part of PAI and connect on a global level to other organizations to work on AI challenges”, says Maarten de Rijke. Our membership in the Partnership on AI reflects our commitment to the responsible deployment of AI technology and our belief in the importance of collaboration in shaping the future of AI. By sharing expertise, taking part in steering committees, and dedicating ourselves to sharing resources and education regarding the development of AI policy, we can all learn from each other.
If you are interested in learning more about the Partnership on AI, visit their website: https://partnershiponai.org/. If you specifically want to know more about ICAI’s involvement in the partnership, please contact Esther Smit at email@example.com.
Total project budget of over €87 million, including 17 new labs and 170 new PhD candidates over 10 years
ROBUST, a new initiative by the Innovation Center for Artificial Intelligence (ICAI), is supported by the University of Amsterdam and 51 government, industry and knowledge-sector partners. The programme aims to strengthen the Dutch artificial intelligence (AI) ecosystem by boosting fundamental AI research. ROBUST focuses primarily on the development of trustworthy AI technology for the resolution of socially relevant issues, such as those in healthcare, logistics, media, food and energy. The research sponsor, the Dutch Research Council (NWO) has earmarked 25 million euros for the programme for the next 10 years.
ROBUST unites 17 knowledge institutions, 19 participating industry sponsors and 15 civil-social organisations from across the Netherlands. Maarten de Rijke, UvA university professor of Artificial Intelligence and Information Retrieval, is the ROBUST programme leader.
The additional €25 million grant comes from the research council’s call for Long-Term Programmes, which give strong public-private consortia the chance to receive funding for a ten-year period, as part of the Netherlands AI Coalition’s initiative to invest in explainable and trustworthy AI. Alongside the research council, companies and knowledge institutes contribute to the programme. The total ROBUST budget amounts to €87.3 million, of which €7.5 million comes from the Ministry of Economic Affairs and Climate Policy. The ROBUST programme is complementary to the AiNed programme and will shape the collaboration on dissemination, consolidation and valorisation of the results, as well as on retaining talent in the Netherlands. This contributes to the ambitions of the cabinet’s Digital Economy Strategy to be at the forefront of human-centred AI development and AI applications.
170 new PhD candidates
Seventeen new public-private labs will be set up under the ROBUST umbrella and form part of the Innovation Center for Artificial Intelligence (ICAI), bringing its lab total to 46. ICAI focuses on AI talent and knowledge development. In the coming year, ROBUST will recruit no fewer than 85 new PhD candidates, followed by another 85 in five years’ time.
Human-centred AI for sustainable growth
‘What makes ROBUST unique is that not only will the new labs contribute to economic and technological objectives, they will also aid the United Nations’ sustainable development goals aimed at reducing poverty, inequality, injustice and climate change’, says De Rijke. ‘One important focus of all projects is to optimise reliable AI systems for qualities such as precision, soundness, reproducibility, resilience, transparency and security.’
Twin-win study
Just like the other ICAI labs, the ROBUST labs will put the twin-win principle into practice: intensive public-private research partnerships in AI technology that lead to open publications and solutions that have been validated in practice. ‘We test our scientific findings within an industry context. Research and practice thus come together at an earlier stage, allowing for far better validation of the results. This way, research validation doesn’t end in the lab, but extends into the outside world.’
Startups, SMEs, and policymakers
‘AI is a systemic technology that touches all aspects of society. That’s why it’s important to ensure that the application of AI technology becomes a widely shared responsibility. ROBUST collaborates with regional civil-social partners throughout the Netherlands, and especially with startups and small to medium-sized enterprises (SMEs).’ The objective is not only to develop knowledge and innovations with ROBUST partners, but also to make them more widely available to other parties within the Dutch ecosystem. New findings and their policy implications will also be shared with national and European policymakers.
Contact
Journalists wishing to contact Maarten de Rijke or other relevant scientists, or to find out more about ROBUST, please contact firstname.lastname@example.org.
From the first of November onwards, knowledge institution Eindhoven University of Technology (TU/e) and globally leading hearing aid manufacturer GN Hearing will join forces in FEPlab. The lab is dedicated to improving the participation of hearing-impaired people in both formal and informal settings.
FEPlab will focus its research on transferring a leading physics- and neuroscience-based theory of computation in the brain, the Free Energy Principle (FEP), to practical use in human-centered agents such as hearing devices and VR technology. FEP is a general theory of information processing and decision-making in brains, rooted in thermodynamics. The principle states that biological agents must take actions (or decisions) that minimize their (variational) free energy, a measure of the total prediction error in a system. In practice, by minimizing free energy, an agent takes actions that optimally balance information-seeking behavior (reducing uncertainty) against goal-driven behavior. The theoretical foundations for applying FEP-based synthetic agents in AI were developed by BIASlab at TU/e. FEPlab now aims to bring FEP-based AI agents to the professional hearing device industry. Professor Bert de Vries, the scientific director of FEPlab alongside Associate Professor Jaap Ham, believes FEP-based synthetic agents have much to offer to signal processing systems:
“I believe that the development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposeful (signal processing) behavior from situated environmental interactions.”
Bert de Vries, Scientific Director FEPlab
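As a rough illustration (in its standard variational form, not specific to FEPlab’s models), the free energy that such agents minimize can be written as follows, where o denotes observations, s hidden states, p(o, s) the agent’s generative model of how observations arise, and q(s) its approximate posterior belief:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\ge\,0}
  \;-\; \ln p(o)
```

Because the KL term is non-negative, F upper-bounds the surprise −ln p(o); driving F down therefore both sharpens the agent’s beliefs and reduces its total prediction error, which is the sense in which minimizing free energy balances uncertainty reduction against goal-driven behavior.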
Expertise and Focus
FEPlab will bring together experts from fields such as Audiology, Autonomous Agents & Robotics, Decision Making, and Machine Learning to tackle the complex multidisciplinary challenges at hand. The lab will employ five PhD students at TU/e: four will join the BIASlab research group in the EE department, and one will join the Human-Technology Interaction group at the IE&IS department. Key research topics include reactive message passing for robust inference, generative probabilistic models for audio processing, and interaction design for hearing aid personalization.
Sustainable Development Goals
FEPlab will focus on two SDGs. Firstly, the lab’s research goals resonate with SDG 3 (Good Health and Well-being), since untreated hearing loss in the elderly increases the risk of developing dementia and Alzheimer’s disease, as well as emotional and physical problems. Secondly, the lab’s research goals support SDG 8, which aims for higher levels of economic productivity through technological upgrading and innovation, as hearing loss has also been shown to affect work participation negatively.
The world is facing a number of converging climate change challenges: population growth, more frequent extreme weather events, and a need for the sustainable production of nutritious food. Some say that machine learning can help us mitigate and prepare for such consequences of climate change; it is, however, not a silver bullet. In this interview, Congcong Sun and Chiem van Straaten discuss the challenges of machine learning in agriculture and weather forecasting, and the similarities and differences between their respective fields.
On November 16th, 2022, ICAI organizes the ‘ICAI Day: Artificial Intelligence and Climate Change’ where Congcong, Chiem, and many other researchers will talk about how AI can be used to mitigate and prepare for the consequences of climate change. Want to join? Sign up!
Congcong Sun is an assistant professor in Learning-Based Control at Wageningen University & Research (WUR) and lab manager of the ICAI AI for Agro-Food Lab. Her research explores the overlap between machine learning and automatic control, applying learning-based control to agricultural production.
Congcong and Chiem, could you tell me what your research is about, and how it is connected to artificial intelligence?
Congcong: Yes, of course. My research focuses on learning-based autonomous control in agricultural production. For instance, in a greenhouse or vertical farm, climate control can be optimized to make the crops grow under more favorable conditions and produce a better quality crop. Another example is logistical planning for agro workers, such as harvesting robots in a multi-agent setting. Learning-based control applications are complex, which is why I mainly use deep reinforcement learning: the combination of reinforcement learning algorithms with neural networks.
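As a toy illustration of that combination, the sketch below implements a plain (tabular) Q-learning loop for a made-up greenhouse temperature task. Everything here, the temperature range, the drift model, and the reward, is invented for the example, and the Q-table stands in for the neural network that a real deep reinforcement learning agent would use:

```python
import random

# Toy greenhouse: temperature drifts randomly; the agent can heat (+1),
# cool (-1), or do nothing (0). Reward is highest near the crop's optimum.
OPTIMUM = 20          # hypothetical optimal crop temperature (degrees C)
ACTIONS = [-1, 0, 1]  # cool, do nothing, heat

def step(temp, action):
    """Apply a control action plus random drift; the reward penalizes
    distance from the optimal temperature."""
    new_temp = max(10, min(30, temp + action + random.choice([-1, 0, 1])))
    return new_temp, -abs(new_temp - OPTIMUM)

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Q-learning: estimate action values from simulated episodes."""
    q = {(t, a): 0.0 for t in range(10, 31) for a in ACTIONS}
    for _ in range(episodes):
        temp = random.randint(10, 30)
        for _ in range(20):  # 20 control steps per episode
            if random.random() < epsilon:                      # explore
                action = random.choice(ACTIONS)
            else:                                              # exploit
                action = max(ACTIONS, key=lambda a: q[(temp, a)])
            new_temp, reward = step(temp, action)
            best_next = max(q[(new_temp, a)] for a in ACTIONS)
            q[(temp, action)] += alpha * (reward + gamma * best_next
                                          - q[(temp, action)])
            temp = new_temp
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    # The learned policy should heat when too cold and cool when too hot.
    print({t: max(ACTIONS, key=lambda a: q[(t, a)]) for t in (12, 20, 28)})
```

In a real deep RL setup, the Q-table would be replaced by a neural network that generalizes across the much richer state of an actual greenhouse (humidity, light, crop stage, and so on), which a lookup table cannot enumerate.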
Chiem: The research that I conduct pertains to studying and making predictions about weather and climate extremes. Many industries, such as agriculture production, depend on accurate weather forecasting. Understanding our climate better is crucial for preparing ourselves for extreme weather and at the same time allows industries to use their resources more efficiently. However, predicting weather events far in advance is extremely tough due to time lags, the conditional nature of observed patterns, and the multitude of factors influencing one another. Machine learning has the potential to deal with such levels of complexity, which is why I am interested in applying it to weather forecasting.
Do you see any similarities or differences between your research?
Congcong: I believe our research is interconnected. As Chiem mentioned, weather patterns are a large source of uncertainty within the agricultural industry, particularly for those applications where the farm is located in an uncontrolled environment, such as open-air farms.
In agriculture, the weather is not the only source of uncertainty, however. Uncertainty also arises from the crops themselves. Different crops have different optimal growing conditions, which means that a control policy effective for one crop might not be effective for another. Even if you placed a different crop in the exact same greenhouse environment, you would need a vastly different policy for controlling it. What are your thoughts on that, Chiem?
Chiem: Yes, you are trying to tackle something that inherently is multivariate, which is similar to weather forecasting. Although I am not well-versed in the specifics of agriculture, I can imagine that you need to take into account many factors such as irrigation, lighting, and temperature?
Congcong: Yes, indeed. When we seek to regulate the climate within a greenhouse, there are a lot of variables we need to consider, like humidity, irrigation, fertilization, light, and temperature. Analyzing the relationships between these variables requires knowledge from various disciplines such as plant physiology and biology. Additionally, certain relationships might not have been discovered yet, which adds to the complexity of balancing these variables. The combination of machine learning and automatic control can help us explore some of these relationships and translate them into knowledge about how to best regulate these environments.
Chiem: Ah, exactly. Here I see a great similarity between autonomous control of agricultural environments and the prediction of weather patterns. For a long time, physical numerical prediction models have been developed to incorporate as many of the processes known to be important for weather prediction as possible. However, these models are known to be imperfect, as the weather is extremely complex. Therefore, we attempt to replace parts of the numerical models with statistical models to capture yet-to-be-discovered processes.
Congcong: Yes, indeed. What kind of data do you use to make weather forecasts?
Chiem: In the non-statistical forecasting models specifically, we use a plethora of data to make weather forecasts, including humidity, pressure, air temperature, and wind speed. Like the input, the output is often multivariate, similar to learning-based agriculture control. Another similarity might be that in both domains you encounter challenges due to cycles. For instance, I could imagine that in agriculture you need to take the growing cycle of plants into account, which is different for every plant. In weather forecasting, you also have to deal with many different cycles at the same time, such as the seasonal cycle, weekly cycles, and daily cycles.
Congcong: Yes, exactly! Plants have different optimal growing cycles. In greenhouses with multiple plants, it could be that different growing cycles overlap similarly to how cycles overlap in weather forecasting. It is interesting to see so many similarities between our two domains!
In your conversation, you mentioned some applications of machine learning in your respective domains. One challenge we often hear about is related to trustworthiness, especially in applications with high degrees of uncertainty. Are companies in your industry enthusiastic or reluctant to work with machine learning?
Congcong: Greenhouse climate control is quite mature in the Netherlands. Some commercial greenhouses have already implemented automated control; however, we are still not making use of all available cutting-edge sensing techniques. Farmers may be slow to adopt such techniques because they are expensive and, if they do not work as intended, could ruin a farmer’s business. Farmers might also be hesitant to trust machine learning, since it is a relatively new technology.
Chiem: As Congcong noted, the trustworthiness of a system is crucial for its widespread acceptance. Applications such as heatwave prediction are not quite ready for widespread use, because heat waves have to be predicted far in advance, which is immensely tough to do accurately. Short-term forecasting applications, such as rainfall forecasting, do have a track record of successful predictions. Moreover, weather forecasting has rapid update cycles: if you make an errant forecast today, you still have a chance tomorrow to forecast the same thing with greater accuracy. For heatwave prediction, errant predictions have far more severe consequences. In agriculture, I could imagine the consequences are similarly severe. What do you think, Congcong?
Congcong: I agree with you, Chiem. Plants are quite sensitive, so if a wrong prediction leads to hazardous conditions in which the plants cannot survive for long, the grower might lose all of their plants. While system control in agriculture does not pose direct harm to humans, as in autonomous driving, the margins on crops are small. Growers are therefore generally more averse to using machine learning and statistical modeling approaches.
Chiem, during the ICAI Day, a day revolving around the numerous challenges regarding machine learning and climate change, you will walk us through a heat wave prediction use case. What would you say the largest hurdle is in this research?
Chiem: The primary challenge in climate change research is the interaction between processes across different scales. On a local scale, processes such as heat exacerbation due to dry soil conditions or particular local atmospheric configurations can influence heat waves. However, such local conditions can also be synchronized across the scale of the entire northern hemisphere, which means that hundreds of kilometers away, very specific conditions might also indicate an impending heatwave. This becomes increasingly complex when you include, for instance, global connections.
The interaction across these many scales makes it challenging to determine the resolution of the data you need and which algorithm is most suitable. Additionally, climate change is actively shifting our data distributions as we speak. Data gathered in the past might therefore reflect different weather dynamics than the weather right now, which makes generalization very difficult. To an extent, your machine learning model is always extrapolating.
That is intriguing, thank you for your explanation! Congcong, during the ICAI Day you will moderate a lunch table discussion on Artificial Intelligence and Agriculture. What do you plan to discuss and why should people join?
Congcong: During the lunch table discussion, I would like to talk about the current challenges of applying AI to agriculture, promising AI solutions to confront these challenges, and future trends in applying AI to agriculture. It will be a very good chance for researchers, engineers, and students who work in this area, or are simply interested in it, to ask questions, share opinions, and perhaps resolve some doubts through the discussion. Beyond that, it is also a great opportunity to build your network and explore potential future collaborations.
To round off; when would you say your research is a success?
Congcong: I consider any progress in my research a success, and it makes me happy: my PhD students achieving a small step, solving pressing challenges for farmers, or making food production more sustainable by reducing emissions and energy use.
Chiem: One large success would be the ability to answer questions regarding climate change attribution such as: how much has climate change exacerbated the impact of this specific extreme weather event or made it more frequent? Being able to answer such questions confidently would allow us to hold parties, such as big emitters, accountable. While far off, I believe that machine learning has the potential to give us the tools necessary to do this in the future.
Working with medical data comes with many challenges, ranging from improving data usability to maintaining privacy and security. To outline some of these challenges, ICAI organizes the ICAI Deep-Dive: Working with Medical Data on the 3rd of November, 15:00-18:00. This hybrid event will be moderated by Nancy Irisarri Méndez and will take place on location at Radboud University and online.
Artificial intelligence solutions are rapidly transforming the world by automating tasks that have long been performed solely by humans. Training on increasingly massive datasets is one of the enablers of this widespread use of robust, trailblazing models. However, due to socioeconomic and legal restrictions, the industry lacks the large-scale medical datasets needed to develop robust AI-based healthcare solutions. There has therefore been increased interest in technical solutions that can overcome such data-sharing limitations while maintaining data security and patient privacy.
We will open this ICAI Deep-Dive event with an introduction to two specific data-related challenges in the medical field. The first challenge will be introduced by Bram van Ginneken of the Radboud UMC, who will discuss FAIR (Findability, Accessibility, Interoperability, and Reusability) requirements for data sharing in practice. Thereafter, Gennady Roshchupkin of the Erasmus UMC will conclude part I of the event by discussing the challenges of using Federated Learning in genomics research.
The second part of the ICAI Deep-Dive event will be a panel discussion centered on the question “How do we tackle challenges in medical data usage by collaborating?”. Nancy will moderate the discussion among Bram van Ginneken, Clarisa Sánchez, Gennady Roshchupkin, and Johan van Soest; the discussion is also open to everyone interested in the challenges raised in the two preceding talks.
After the panel discussion, there will be time for networking over drinks.