On 1 November 2023, the ICAI Day Autumn Edition took place at Radboud University, Nijmegen. The event explored the intersection of Artificial Intelligence (AI) and healthcare, offering a comprehensive look at AI’s influence on the medical landscape. Bringing together experts, researchers, and enthusiasts, it featured a diverse agenda covering both the technical intricacies and the broader implications of AI applications in medicine.
In-Depth Presentations by Leading Experts
The event kicked off with an insightful overview of ICAI health labs by Colin Jacobs, setting the stage for presentations by members of several healthcare ICAI labs and field experts. Highlights of the day included the Healthy AI Lab on AI applications for prostate cancer diagnosis with MRI, the CARA Lab on AI-guided cardiac interventions, the Brightlands Smart Health Lab on AI in Radiation Oncology, and the AI4 MRI Lab on challenges and opportunities in AI-based MRI reconstruction.
The CARA Lab’s presentation by Jos Thannhauser highlighted the potential of AI-guided cardiac interventions, utilising Optical Coherence Tomography (OCT) for increased efficiency and precision during cardiac procedures. Henkjan Huisman from the Healthy AI Lab discussed the use of medical imaging AI, covered issues with current-generation AI systems, and focused on recent work in AI for prostate MRI. Thijs van Osch from the AI4 MRI Lab presented specific AI-based approaches to address challenges in MRI reconstruction, including motion artefacts, ultimately accelerating MRI examinations. Petros Kalendralis showcased the research done by the Brightlands Smart Health Lab, offering a transformative perspective on AI in Radiation Oncology with case studies on FAIR data infrastructures, image analysis, radiotherapy outcome prediction, and quality assurance. James Meakin introduced the Grand Challenge platform, a cloud-based solution leveraging Amazon Web Services (AWS) to advance machine learning applications in biomedical imaging. The platform fosters collaboration and the development of cutting-edge ML solutions, bridging the gap between research and clinical practice.
Engaging Round Table Discussions
On-site participants immersed themselves in enriching round table discussions covering pivotal healthcare topics. These discussions addressed ethics in healthcare, data governance strategies, challenges and opportunities in implementing AI, the use of foundation models like LLMs for medical text, techniques for explainable AI, legal aspects, and safety measures in the context of AI applications in healthcare.
Showcasing Cutting-Edge Work
The event also featured a poster and demo session, allowing attendees to explore the latest work from ICAI Labs across the Netherlands. The labs’ contributions, showcasing their remarkable work, were invaluable to the success of the event.
A Platform for Networking and Collaboration
The event concluded with a borrel and networking session, allowing participants to connect, share insights, and discuss the future of AI in healthcare. The session also gave attendees the chance to explore the posters further, fostering a collaborative environment for professionals in the field.
The ICAI Day Autumn Edition left attendees with a deeper understanding of AI applications in medicine and a strengthened network of professionals poised to shape the future of healthcare through innovative technology. As AI continues to revolutionise the medical landscape, events like these play a crucial role in fostering collaboration, knowledge-sharing, and the advancement of cutting-edge solutions in healthcare.
ICAI is proud to launch its 50th collaborative research lab. The new lab, dubbed “AI for Oversight,” will develop trustworthy and responsible AI-driven methods and applications to support governmental inspectors in the oversight domain. It is a collaboration between Inspectie Leefomgeving en Transport, Nederlandse Arbeidsinspectie, Inspectie van het Onderwijs, Nederlandse Voedsel- en Warenautoriteit, Universiteit Utrecht, Universiteit Leiden, and TNO.
A next generation of responsible AI that enhances the effectiveness of inspectors? This can only be achieved by developing algorithms and approaches that ensure optimal support for inspectors. And on this front, there is good news: the recently launched ICAI Lab AI4Oversight is committed to making this possible.
5 partners: Human Environment and Transport Inspectorate (ILT), Netherlands Labour Authority, Inspectorate of Education, Netherlands Food and Consumer Product Safety Authority and TNO.
2 universities: Utrecht University and Leiden University.
Their joint goal? Not to reinvent the wheel individually but to combine forces to achieve responsible and explainable AI. They aim to ensure that inspectors and AI systems work optimally together, leveraging each other’s strengths.
Supervision does not mean inspecting everything everywhere continuously; that’s impossible. Choices must be made. The challenge is to inspect precisely where the societal contribution is greatest. How do you achieve a risk-based approach, deploying inspectors as effectively as possible at the right times and places? This is the task for which regulatory bodies are collectively seeking a solution. The use of artificial intelligence (AI) plays a significant role here, especially as these techniques become more sophisticated.
Optimal Support through Algorithms “We already use AI where possible for a responsible, selective, and effective deployment of our inspectors. But there are more opportunities ahead,” says Mattheus Wassenaar, Inspector General of ILT, summarizing the motivation for this collaboration. “Together with universities, we will develop methods to ensure that our people are optimally supported by algorithms. Inspectors are scarce, and they don’t generate much data. This means we need algorithms that learn faster with limited data. There is also a focus on preventing unwanted selection bias. We are doing everything to collectively develop AI that can be deployed in the oversight domain responsibly and reliably.”
Developing and Testing New Methods Practical experiences have highlighted the need for new methods in AI within the oversight domain. This led to the current research agenda, where universities are developing methods that align well with practice. All this is done in close collaboration with the participating inspectorates. In this way the gap between theory and practice is bridged: inspectorates and TNO can use the new methods, and universities can build their research on practical case studies. The research agenda focuses on three topics: collaboration between humans and machines, faster and fairer learning algorithms, and the contribution of AI to behavior improvement.
How Humans and AI Can Strengthen Each Other “With the use of AI, inspectors gain a colleague,” says Jasper van Vliet, one of the scientific leads of the lab. “It’s a digital colleague with a strong memory, that is tireless and consistent, and can advise inspectors on where they can have the most impact.”
“The strength of AI algorithms is particularly evident with large and complex datasets, while humans excel in individual cases and placing information in the right context,” adds Cor Veenman, another scientific lead of the new lab. “By closely integrating human inspectors and AI systems, you get a very effective team.”
Testing New Approaches Participants of the new ICAI Lab will not only share their knowledge and expertise but will also conduct joint experiments to test new approaches. There is a strong emphasis on the interaction between inspectors and AI applications. This is a crucial success factor in achieving responsible AI that is fair, just, and explainable. Moreover, inspectors can play a vital role in the learning process that algorithms must undergo. An additional challenge is that inspections take a lot of time, and obtaining the right data is difficult. This explains why data is so precious and scarce in the oversight domain.
Researchers in the Field with Inspectors “Feedback from inspectors is essential,” emphasizes Van Vliet. “They are familiar with the application area and often have insight into whether an inspection is worthwhile. If we can incorporate this knowledge into the AI learning process, we can learn much faster. How this will work in detail will be the focus of the PhD candidates. They will not only work behind their desks but also in the field, accompanying inspectors to experience how AI can make a difference.”
Preventing the Mirror Effect “It is essential that we support inspectors only with reliable and fair algorithms,” emphasizes Veenman. “In the AI4Oversight Lab, there is ample attention to challenges such as unwanted steering in advice. During data collection, human biases that color the data often occur. If an algorithm then adopts those biases, you encounter the mirror effect. Highly undesirable, of course. The new lab is fully focused on addressing this. In collaboration with all participants, we will develop new forms of data collection and algorithms to counteract the mirror effect.”
AI and Behavior Change It’s essential to realize that inspectorates are not there to impose fines, but their ultimate goal is to contribute to positive behavior changes. AI applications can also contribute to this goal. The new lab aims to develop a data-driven approach that allows modeling the dynamics between behavior and inspections.
Building Bridges Between Theory and Practice The four inspectorates in the lab already see the benefits of deploying AI, but they all face the same challenges. Effectively organizing teamwork and feedback and preventing the mirror effect are high on the agenda. The difficulty lies in the fact that theoretical knowledge about these topics is often not yet tested in practice. Therefore, bridges need to be built between theory and practice. TNO works extensively on this bridge and is happy to invest in the lab: “We collaborate with governments and businesses on the valuable use of AI that can make an impact. Jointly developed methods, grounded in practice, are particularly important,” says Frans van Ette, Program Director AI at TNO.
Collaboration between PhDs and Data Scientists The new collaboration also offers great opportunities for universities. Thomas Dohmen, Director of AI labs at Utrecht University, says, “The accessible and versatile case material of the partners in this lab provides a range of opportunities for research. We see it as our joint responsibility to develop concrete, usable methods that advance inspections in daily practice. We also offer talented graduates the opportunity to delve into this theme through a PhD trajectory.”
Extending a Hand to Other Inspections The ICAI Lab AI4Oversight currently has funding for five years. This provides enough time to complete PhD research and build a reliable, effective use of AI in oversight. “We want to show that this collaboration benefits all parties and hope that other inspection services will join during the project. So, while we are taking the first step, we are keeping our hand extended to other inspections,” Van Vliet concludes.
The opening event of MindLabs drew approximately 250 attendees, including representatives from all MindLabs partners, representatives of the government, and more.
After a welcome by Tilburg Mayor Theo Weterings, visitors were invited for a tour along five knowledge stations demonstrating the research that takes place at MindLabs. Tilburg University presented the five MasterMinds projects together with its (business) partners. The MasterMinds project, led by prof. dr. Max Louwerse, consists of five innovative research projects in which 15 partners work together to develop breakthroughs with the help of AI technology.
In the Gaming Lab, Gianluca Guglielmo, MSc, presented his research about complex decision making processes in logistics using serious games, together with partners The Barn (Bas van Nuland), who developed the game, The Port of Rotterdam (Annemieke Hol-van Besemer) and Fontys IT (Olaf Janssen).
In the VR-lab, Mohammad Ali Mousavi, MSc, presented his research about the effectiveness of virtual reality training in engineering and maintenance tasks. He told the audience about the experiments performed at MasterMinds partner Actemium. In addition, Luuk Schepers, from partner Marel, demonstrated their training simulation, which the audience was able to experience using a VR headset.
In the Brain & Behavior Lab, Evy van Weelden, MSc, explained her research, which focuses on neurophysiology, virtual reality and aerospace, a project with the Royal Netherlands Air Force and multiSIM. She demonstrated multiSIM’s virtual flight simulation, and the audience was invited to experience virtual flight themselves.
Laduona Dai, MSc, gave a presentation about his research on virtual humans in education, together with partners Zwijsen (Femke van der Lecq and Natasja Corver), Spacebuzz (Janine Geijsen) and Fontys IT (Olaf Janssen). Laduona also demonstrated, using VR headsets, a 360 virtual environment in which an astronaut teaches about space.
In one of Tilburg University’s college rooms, Niloy Purkait, MSc, together with Johan Ringeling from Interpolis, gave a presentation about the research on predicting behavior using data science, and the goal of being able to communicate with specific groups that are more prone to particular risks.
Watch an introduction to the MasterMinds project here:
Want to know more about the MasterMinds projects? Visit the following pages:
Although we are already halfway through 2023, it’s always good to look back at the successes our institute has achieved so far. In the spirit of ICAI’s mission to support open talent and technology development and ‘Make AI for Everyone’, we managed to accomplish great goals in the past year.
Therefore, we would like to present you: The ICAI 2022 graphic overview, in which we offer a comprehensive overview of our achievements in 2022.
This infographic showcases four of the six pillars of ICAI, which represent the foundation of our institute: ICAI Labs, ICAI Academy, ICAI Venture and ICAI Connector.
ICAI ended 2022 with a portfolio of 32 labs. 334 PhD students and 100 partners worked together to deliver an impressive 254 research papers, which we are very proud of. We continue to grow, and expect to add more valuable labs to our ecosystem this year.
We managed to organise a lot of interesting events within our community, giving visitors the opportunity to network and educate themselves about the latest trends in the field of AI. Interested in our next event? Don’t forget to regularly check our events page on the website, or sign up for our monthly newsletter.
With our Dutch podcast ‘Snoek op Zolder’ (produced by Henny Huijgens) we reached the magic number of over 5500 downloads! Little teaser: a new season is in the making, and we can’t wait to share this new season with our community.
We are expanding! In the past weeks, two new ICAI labs have launched. ICAI is happy to have them as a part of our ecosystem.
Explainable AI for Health
The Explainable AI for Health lab is a collaboration between the Leiden University Medical Center, Amsterdam UMC and Centrum Wiskunde & Informatica. Together, they will work on developing new forms of artificial intelligence that help physicians and patients with clinical decisions.
The goal of the lab is to tailor and validate new inherently explainable AI techniques and guidelines that can be used for clinical decision-making.
Scientific director Peter A.N. Bosman says: “Clinical decision support based on AI can have great added value for physicians and patients”.
Responsible and Ethical AI for Healthcare Lab (REAiHL)
Our 49th lab, the Responsible and Ethical AI for Healthcare Lab (REAiHL), is a collaboration between SAS Institute, Erasmus Medical Center and Delft University of Technology. The lab aims to develop and deploy AI technologies that are safe, transparent, and aligned with ethical principles to improve healthcare outcomes.
The new AI Ethics Lab was initiated by internist-intensivist Michel van Genderen from Erasmus MC. Diederik Gommers, Professor of Intensive Care Medicine at Erasmus MC, is also closely involved: “Initially, the new AI Ethics Lab will focus on developing best practices for the Intensive Care Unit,” Buijsman says. “But our ultimate goal is to develop a generalized framework for the safe and ethical application of AI throughout the entire hospital. We therefore expect to soon start addressing use cases from other clinical departments as well.”
Do you have a passion for video? Do you have an eye for images and are you good with a camera? If yes, this vacancy might be interesting for you!
We are looking for an enthusiastic Student Assistant Video Editing (8 hours p/w), starting 1 September.
What you will do
As a video editor, you will be responsible for producing videos for our social media channels. Your main task will be to edit (and subtitle) interviews with different stakeholders within our ecosystem. You may also be sent out (together with a colleague) to shoot the content yourself.
Besides editing videos, you may also be asked to help develop and publish (graphic) content on our website. Experience with WordPress, Canva and/or Photoshop is an advantage, but not a requirement.
Who are we looking for?
– You are a student (at the University of Amsterdam);
– You have experience with Adobe Premiere Pro;
– You have excellent communication skills in English;
– You are sociable and enjoy working together, but can also work well independently;
– You are eager to learn: you can’t wait to improve your video, editing and interviewing skills and work hard to do so;
– You are good with a camera and know how to take nice shots;
– Graphic skills and/or experience with web design is a plus, but not a must.
The National Innovation Center for Artificial Intelligence (ICAI) has the mission to keep the Netherlands at the forefront of knowledge and talent development in AI. Creating and nurturing a national AI knowledge and talent ecosystem is our central aim. In doing so, we as an organization want to deal sustainably with resources that arise from the activities and further activate the resources in the Netherlands. This way, the Netherlands will become a strong European catalyst in the field of AI talent and AI knowledge development by
– attracting talent to work on problems;
– attracting problems and data for talent to work on; and
– feeding local and national ecosystems for talent and knowledge development.
What do we offer?
– The work is part of the Student Assistant position. Your gross salary starts from €2,618 for full-time employment and depends on the academic year you are in.
Do you recognize yourself in the job profile? Then we look forward to receiving your application!
Train schedules are a crucial part of our daily lives, but have you ever thought about the complexity behind creating them? From ensuring seats are available to deploying personnel, there are numerous factors to consider. Despite significant advancements in planning automation, there is still room for improvement, especially in hub planning optimization. So, what’s happening under the hood at NS, and why is optimizing railway planning so challenging? We interviewed Bob Huisman, Manager of Research and Development Hub Logistics at NS, to learn more about how AI is revolutionizing the train scheduling management process.
Bob Huisman is a respected figure in the railway industry, with a career spanning several decades. Huisman currently holds the position of Manager Research & Development Hub Logistics at NS, where he is responsible for delivering innovative methods and tools for planning and scheduling shunting-related processes at railway hubs, as well as assessing the logistic process capacity of railway hubs. Beyond the technical and scientific aspects of his work, Huisman sees it as an opportunity to make a social contribution. He describes himself as ‘having one foot firmly planted in the business world and the other in academia’. His career path is a testament to his ability to bridge the gap between research, development, creativity, and challenging problems. Huisman is one of the principal investigators of the LTP ROBUST programme and the chair of the user’s committee.
‘At first glance, the process of train schedule management may seem simple – travelers at the station, a train ready to board.’ But as Huisman points out, gazing in the direction of Utrecht Central Station, one of the important hubs in the railway network, it is actually a complex interplay of factors. Hub planning, as part of the overall railway planning, involves ensuring trains are in the correct composition to maximize seat availability, that they arrive at the right platform at the right time, and that they are in good technical condition. Additionally, once a train reaches its final destination, it must be checked for technical issues, cleaned, potentially rearranged, and parked in a way that maximizes space efficiency.
But that’s not all. Railway planning and control involves fleet assignment and the deployment of personnel, which comes with its own set of complicated and important limiting factors. Employment conditions, work variation, and the ability of colleagues to come home at the end of the day are just some of the factors that Huisman’s team must take into account. ‘Train schedule management is almost paradoxical. As a traveler you may experience that we have one timetable during the entirety of the year, but under the hood, the rail sector makes a unique plan for every single day of the year. This involves planning for the timetable, fleet and staff, as well as for the 34 hubs – the stations with connected yards where multiple train lines converge – months in advance.’
Over the past two decades, NS has seen a significant increase in automation when it comes to network planning. However, there is still no automation in hub planning, which Huisman notes remains an obstacle to overcome. Currently, this daunting task rests solely on the shoulders of human hub planners, who are responsible for what is called the “knitting process”: juggling a multitude of factors simultaneously and making decisions in real time, ensuring that passengers arrive at their destinations safely and smoothly.
How do those train scheduling experts manage to make everything run like clockwork?
‘It is a multi-step process. First, we create a timetable, then assign our fleet and lastly assign our colleagues.’ Although the largest, NS is only one of the seven operators on the Dutch rail network, with ProRail responsible for infrastructure management, capacity allocation and traffic control. Huisman notes that planning NS’ train operation involves multiple iterations before arriving at the final basic pattern, which is then translated into a blueprint for each weekday, and subsequently each specific day. The schedule is finalized a month in advance for a specific calendar day, and from then on NS can still make adjustments up to two days prior. ‘Once the plans are handed over for operational execution, controllers at ProRail and NS must act quickly in the event of a collision, malfunction, or employee illness. At that moment, it resembles tinkering more than actual planning by optimization’, Huisman notes.
NS has been making significant progress since 2015 towards automating hub planning, but Huisman emphasizes that there are many “dirty details” that need to be taken into account. One of the most gratifying moments of this project came in 2020, just before the onset of the pandemic. ‘We were able to demonstrate on the basis of a proof of concept that we are going to make it’, Huisman recalls. ‘Moments like these, when colleagues have confidence in the success of a project and more resources become available, keep me young. I have resolved to see this project standing by the time I retire,’ he says with a sense of pride. ‘This project represents a unique collaboration with young researchers, a continuous flow of PhD and master students, and has been one of the most personally rewarding projects of my career.’
Why is automation and optimization so important?
‘Two goals for the rail system are difficult to reconcile: on the one hand, meeting a growing demand for transport and, on the other hand, a robust operation. The driving force for automated hub-planning support is the need to fix the plans as late as possible and to be able to make changes online. That will improve the robustness of the transportation process, rail infrastructure usage, and seat availability,’ Huisman notes.
Currently, to anticipate uncertainties and unforeseen disruptions, slack is incorporated into the planning, with respect to both space and time. This slack allows the railroad planners to be flexible and to deal with details that only become clear on the day of operation itself; it gives space to breathe when things go wrong. ‘Reducing slack to facilitate future passenger volumes increases the risk of a domino effect of disruptions; however, fast automated support for planning and control may compensate for this. All of these challenges beg the question: how do we achieve sustainable growth, act more dynamically and be more robust, while using our existing resources more efficiently?’
NS seems to already have invested considerable resources into research that helps professional planners create more optimized plans in less time. Why is this process so tough?
‘Good question. Now, picture the entire rail system as a massive, interconnected wirework – a complex maze that requires meticulous planning to operate smoothly. There are countless variables that can impact the system, making optimization a daunting task. It’s not just a matter of flipping a few switches, pushing a few buttons and moving a few trains around.’
‘Hub planning is a combinatorial problem with an enormous search space in which it is hard to find good and feasible solutions. Moreover, the need to fix plans as late as possible requires modeling many dirty details of the real world and complex safety rules, which excludes linear optimization methods. When we started in 2015, hub planning had been the topic of a well-known international competition in the field of Operations Research. Curious as we were, we reached out to all the competition’s prize winners to see if they knew something we didn’t. Eventually we concluded that no practical solution had been found yet, which was why we decided to set up a long-term research and development program ourselves. One way or another, we had to find a way. Together with our academic partners we finally succeeded in building a working system, mainly based on powerful local search, combined with other methods like linear optimization and constraint programming. We coined it the Hybrid Integrated Planning method (HIP). Although the system generates plans that are acceptable to professional planners, continuation of the research is needed to enhance the system’s functionality.’
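The ‘powerful local search’ Huisman mentions can be sketched in miniature. Everything below (the tracks, capacities, departure order and cost function) is invented for illustration and bears no relation to NS’ actual HIP system; it only shows the general shape of the technique: repeatedly propose a small change to a candidate plan and keep it whenever the cost does not get worse.

```python
import random

# Toy yard: park five trains on three tracks. Penalize overfull tracks and
# trains parked in front of earlier-departing ones. All data is invented.
TRACK_CAPACITY = {"A": 2, "B": 2, "C": 3}                # max trains per track
trains = ["T1", "T2", "T3", "T4", "T5"]                  # listed in arrival order
departure = {"T1": 3, "T2": 1, "T3": 4, "T4": 0, "T5": 2}  # departure order

def cost(assignment):
    """Penalize capacity violations and pairs parked in the wrong departure order."""
    penalty = 0
    for track, cap in TRACK_CAPACITY.items():
        parked = [t for t in trains if assignment[t] == track]  # arrival order
        penalty += 100 * max(0, len(parked) - cap)       # hard capacity violation
        for i, a in enumerate(parked):
            for b in parked[i + 1:]:                     # b arrived after a
                if departure[b] < departure[a]:
                    penalty += 1                         # b is blocked behind a
    return penalty

def local_search(iterations=5000, seed=1):
    """Hill climbing with sideways moves: re-park one train at a time."""
    rng = random.Random(seed)
    current = {t: rng.choice(list(TRACK_CAPACITY)) for t in trains}
    best, best_cost = dict(current), cost(current)
    for _ in range(iterations):
        candidate = dict(current)
        candidate[rng.choice(trains)] = rng.choice(list(TRACK_CAPACITY))
        if cost(candidate) <= cost(current):             # accept non-worsening moves
            current = candidate
            if cost(current) < best_cost:
                best, best_cost = dict(current), cost(current)
    return best, best_cost

plan, penalty = local_search()
print(plan, penalty)
```

Real hub planning adds safety rules, train compositions and personnel on top of this, which is why, per the interview, HIP combines such search with linear optimization and constraint programming.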
‘Following the progress made by DeepMind, preceding AlphaZero, we started a research track specifically focused on applying deep reinforcement learning to multi-agent pathfinding. The idea was to build up a logistic plan by generating a chain of individual actions, representing train shunting movements. After six years of research together with our academic partners, we found that brute-force local search still outperformed our various complex reinforcement learning approaches, even on simplified models of reality. Taking a step back to see the forest for the trees, we halted the program and shifted our focus. Our current research direction is aimed at using machine learning to complement local search, constraint programming, and linear optimization. The challenge is to find and modify plans more quickly, specifically plans that are easily understood by humans. To build a system that exhibits some intelligent behavior in the future, it must be able to learn from previous situations and to communicate at some abstract level with its users. We still have a long way to go, which asks for perseverance and creativity’, Huisman notes.
“Humans in the loop” does not conform to the traditional view of automation, in which AI systems entirely replace human operators. However, researchers have recently started to view automation as a two-dimensional process, where humans and machines work together and complement each other’s strengths and weaknesses to achieve a common goal.
‘In our unique use case we are looking for automatic tools to support people’, Huisman emphasized. ‘Hub planning is a challenge that humans are quite good at and a task that requires substantial creativity, ingenuity and the ability to color outside the lines – but it takes a lot of time. On the other hand, algorithms can speed up the planning processes but currently cannot handle its full complexity. That is why we are focused on creating systems that help people to adjust the “knitting process” more efficiently; reducing slack and maximizing space usage to meet the growing demand for train travel.’
When it comes to AI, Huisman has a unique perspective. ‘AI research can bring us closer to understanding human intelligence, yes. However, as long as we don’t understand and have defined human intelligence, stop talking about artificial intelligence that has to replace humans.’ Instead, Huisman believes that we should focus on building super tools that exhibit intelligent behavior, regardless of whether there’s a steam engine or neural network powering it. ‘I see neural networks and reinforcement learning as ingredients, among others, to create value’, Huisman explained, adding that it’s all about developing an overall system that can deal with the intricacies of logistic planning in cooperation with humans. ‘AI has the potential to disrupt the approach we take in this, but it is not just about generating plans automatically; you have to communicate the output to planners and make it understandable to them, give humans control over plan qualities, and link the output to the systems of other parties.’
Recently, ICAI announced the LTP ROBUST program; a new initiative supported by the University of Amsterdam and 51 partners from government, industry, and academia. As part of the program, 17 new AI labs are established that focus on the development of trustworthy AI technology to address socially relevant issues in areas such as healthcare, logistics, media, food, and energy. Can you elaborate on the importance of trust in AI systems and your role in the program?
‘Trust is a crucial factor in the adoption of AI systems. Our perspective is that the public, customers, travelers, patients, users and authorities all base their judgment not only on the functionality of the AI algorithm in isolation. Rather, they base it on the whole of interacting ICT, the organization behind it, the procedures put in place to regulate it, the UI, and the availability of the system. Trustworthiness of an individual AI algorithm is a necessary, but not sufficient, condition for its effective use in a system. Therefore, research into creating such AI systems necessitates a symbiotic relationship between academia and industry. In the end, private or governmental organizations set the specifications of the system, design and build it, and operate it over the years. In LTP ROBUST, research, development and system engineering meet to achieve social impact through operational, trustworthy systems.’
Trust is also contextual and domain-specific; the risks of a medical diagnosis differ from the risks of logistics planning or music recommendation, and people rely on different systems in different ways. The program’s approach is to start with a system vision and a targeted research question for each lab, with the private partners playing a vital role in validating the output and asking the right questions. ‘While the validation and questioning may vary for different fields, the general approach to winning the trust of the user can be similar. As one of the principal investigators and the chairman of the overarching user committee, my role is to oversee the cooperation between the partners and ensure knowledge transfer between labs.’
The RAIL Lab, a collaboration between Delft University of Technology, Utrecht University, ProRail and NS, is one of the LTP ROBUST Labs joining ICAI. Its goal? Working towards algorithmic support to ensure safe and reliable logistic operations and capacity planning that is trusted by human experts. Explainable AI plays a role in this.
‘In the research world, explainability is often seen as: if I could just explain why my algorithm came to this conclusion, and how my evaluation would change if I changed something about my input. It’s almost an internal accountability of your algorithm to the outside world, which is necessary, but may only be sufficient to accept or reject an individual prediction of the system. The question is whether that is enough for humans to use and accept the system as a sparring partner for decision support.’
Huisman emphasized the importance of setting standards for what an explanation of an algorithm should look like. ‘Authorities often require a deeper understanding of how an algorithm works, including how it makes considerations, what information it looks at, what information is necessary to make a good choice, and how uncertain the algorithm is in its output. Furthermore, for specific instances, humans may ask counterfactual questions to understand why some decision is proposed and not some other. By understanding the requirements for human decision-making, we can create more effective explanations that provide a more complete understanding of the algorithm’s decision-making process. Since the user is often responsible for the final decision, they want to be sure it is the right one.’
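To make the idea of a counterfactual question concrete, here is a toy sketch (our illustration, not RAIL Lab code; the decision rule, feature names, and the `counterfactuals` helper are all invented for this example). Given a rejected plan, it searches for the smallest single-feature changes that would flip the decision, answering the planner’s question ‘what would have to be different for this plan to be accepted?’

```python
# Toy illustration of counterfactual explanation for a decision rule.
# The rule and features below are hypothetical, not an actual planning system.
def plan_accepted(buffer_min: int, platform_free: bool, crew_available: bool) -> bool:
    """Accept a plan only if the turnaround buffer is long enough,
    the platform is free, and a crew is available."""
    return buffer_min >= 4 and platform_free and crew_available

def counterfactuals(instance: dict) -> list:
    """Find single-feature changes that flip the decision for this instance."""
    base = plan_accepted(**instance)
    candidates = {
        "buffer_min": range(0, 11),
        "platform_free": [True, False],
        "crew_available": [True, False],
    }
    flips = []
    for feature, values in candidates.items():
        for v in values:
            if v == instance[feature]:
                continue
            changed = dict(instance, **{feature: v})
            if plan_accepted(**changed) != base:
                flips.append((feature, v))
    return flips

# A plan rejected only because its buffer is too short:
rejected = {"buffer_min": 2, "platform_free": True, "crew_available": True}
print(counterfactuals(rejected))  # which single changes would make it acceptable
```

The output shows that only increasing `buffer_min` flips the decision, which is exactly the kind of actionable, instance-level answer a human planner needs alongside a global explanation of the algorithm.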
To address these challenges, each LTP ROBUST lab will include a researcher with a background in social and behavioral science. The RAIL Lab is a testament to this effort, with one PhD student focusing on the cooperation between human and AI planners. This study will reveal requirements, expectations, and potential pitfalls of human-AI interaction, specifically of interaction with algorithmic planners. These results will be augmented with data science techniques to extract important factors from past decision-making and planning processes, to develop a computational cognitive model of the decision and planning process.
Huisman sees a colorful future ahead: ‘NS has its fair share of critics – some say it’s too big, bureaucratic, or slow. On the other hand, I know few other companies that have invested as much time and resources into innovative projects like railway planning as NS has in the Netherlands.’ Optimizing rail systems is a complex task that will require many more years of research and a delicate balance between human expertise and advanced AI algorithms. However, Huisman and his colleagues are committed and up for the challenge. ‘With LTP ROBUST and RAIL Lab’s ongoing efforts, we can hope to see more trustworthy, efficient and seamless rail systems in the near future.’
We hope that through this interview you learned a bit more about NS and the intricacies of railway planning. NS, ProRail and their academic partners, TU Delft and Utrecht University, are currently recruiting PhD students for the RAIL Lab. If you are interested in a complex technical AI challenge with a clear social contribution, check out their webpage: https://icai.ai/icai-labs/rail/ The next time you’re waiting for your train, take a moment to appreciate the intricate dance of 22,000 employees happening behind the scenes to help you get to your destination, and maybe consider joining us!
March 29, 2023 – ICAI Interview: The Complexities of Train Schedule Management: A Look at NS’ Planning Process and the Push for Optimization – An interview with Bob Huisman
AI has the power to transform the world, but only when guided by the shared values and visions of its stakeholders. Therefore, ICAI is proud to become part of the Partnership on AI, a worldwide coalition of academic institutions, media organizations, industry leaders, and civil society groups dedicated to promoting the positive impact of AI on people and society.
The Partnership on AI (PAI) was founded with six thematic pillars, aimed at addressing the risks and opportunities of AI:
Safety-Critical AI;
Fair, Transparent, and Accountable AI;
AI, Labor, and the Economy;
Collaborations Between People and AI Systems;
Social and Societal Influences of AI;
AI and Social Good.
Partners from all over the world, and from every AI subdomain, take part in the partnership with one goal: to promote best practices in the development and deployment of AI. Through collaboration, the Partnership on AI develops tools, recommendations, and other resources, inviting voices from across the AI community and beyond to turn insights into actions that ensure AI and ML technology puts people first.
“We are pleased to be part of PAI and connect on a global level to other organizations to work on AI challenges”, says Maarten de Rijke. Our membership in the Partnership on AI reflects our commitment to the responsible deployment of AI technology and our belief in the importance of collaboration in shaping the future of AI. By sharing expertise, taking part in steering committees, and dedicating ourselves to sharing resources and education regarding the development of AI policy, we can all learn from each other.
If you are interested in learning more about Partnership on AI, please find their website via the following link: https://partnershiponai.org/. Moreover, if you specifically want to know more about ICAI’s involvement in the partnership, please contact Esther Smit at email@example.com.
Total project budget of over €87 million, including 17 new labs and 170 new PhD candidates over 10 years
ROBUST, a new initiative by the Innovation Center for Artificial Intelligence (ICAI), is supported by the University of Amsterdam and 51 government, industry and knowledge-sector partners. The programme aims to strengthen the Dutch artificial intelligence (AI) ecosystem by boosting fundamental AI research. ROBUST focuses primarily on the development of trustworthy AI technology for the resolution of socially relevant issues, such as those in healthcare, logistics, media, food and energy. The research sponsor, the Dutch Research Council (NWO), has earmarked €25 million for the programme for the next 10 years.
ROBUST unites 17 knowledge institutions, 19 participating industry sponsors and 15 civil-social organisations from across the Netherlands. Maarten de Rijke, UvA university professor of Artificial Intelligence and Information Retrieval, is the ROBUST programme leader.
The additional €25 million grant comes from the research council’s call for Long-Term Programmes, which gives strong public-private consortia the chance to receive funding for a ten-year period. This is part of the Netherlands AI Coalition’s initiative to invest in explainable and trustworthy AI. In addition to the research council, companies and knowledge institutions contribute to the programme. The total ROBUST budget amounts to €87.3 million, of which €7.5 million comes from the Ministry of Economic Affairs and Climate Policy. The ROBUST programme is complementary to the AiNed programme, and the two will shape collaboration on the dissemination, consolidation and valorisation of results, as well as on retaining talent in the Netherlands. This contributes to the ambitions of the cabinet’s Digital Economy Strategy to be at the forefront of human-centred AI development and AI applications.
170 new PhD candidates
Seventeen new public-private labs will be set up under the ROBUST umbrella and form part of the Innovation Center for Artificial Intelligence (ICAI), bringing its lab total to 46. ICAI focuses on AI talent and knowledge development. In the coming year, ROBUST will recruit no fewer than 85 new PhD candidates, followed by another 85 in five years’ time.
Human-centred AI for sustainable growth
‘What makes ROBUST unique is that not only will the new labs contribute to economic and technological objectives, they will also aid the United Nations’ sustainable development goals aimed at reducing poverty, inequality, injustice and climate change’, says De Rijke. ‘One important focus of all projects is to optimise reliable AI systems for qualities such as precision, soundness, reproducibility, resilience, transparency and security.’
Twin-win study
Just like the other ICAI labs, the ROBUST labs will put the twin-win principle into practice: intensive public-private research partnerships in AI technology that lead to open publications and solutions that have been validated in practice. ‘We test our scientific findings within an industry context. Research and practice thus come together at an earlier stage, allowing for far better validation of the results. This way, research validation doesn’t end in the lab, but extends into the outside world.’
Startups, SMEs, and policymakers
‘AI is a systemic technology that touches all aspects of society. That’s why it’s important to ensure that the application of AI technology becomes a widely shared responsibility. ROBUST collaborates with regional civil-social partners throughout the Netherlands, and especially with startups and small to medium-sized enterprises (SMEs).’ The objective is not only to develop knowledge and innovations with ROBUST partners, but also to make them more widely available to other parties within the Dutch ecosystem. New findings and their policy implications will also be shared with national and European policymakers.
Contact
Journalists wishing to contact Maarten de Rijke or other relevant scientists, or to find out more about ROBUST, can contact firstname.lastname@example.org.
From 1 November onwards, knowledge institution Eindhoven University of Technology (TU/e) and globally leading hearing aid manufacturer GN Hearing will join forces in FEPlab. The lab is dedicated to improving the participation of hearing-impaired people in both formal and informal settings.
FEPlab will focus its research on transferring a leading physics- and neuroscience-based theory of computation in the brain, the Free Energy Principle (FEP), to practical use in human-centred agents such as hearing devices and VR technology. FEP is a general theory of information processing and decision-making in brains that is rooted in thermodynamics. The principle states that biological agents must take actions (or decisions) that minimize their (variational) free energy, a measure of the total prediction error in a system. Practically, by minimizing free energy, an agent takes actions that optimally balance information-seeking behavior (reducing uncertainty) against goal-driven behavior. Theoretical foundations for AI applications of FEP-based synthetic agents have been developed by BIASlab at TU/e. Building on this work, FEPlab aims to bring FEP-based AI agents to the professional hearing device industry. Professor Bert de Vries, the scientific director of FEPlab alongside Associate Professor Jaap Ham, believes FEP-based synthetic agents have much to offer to signal processing systems:
“I believe that the development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposeful (signal processing) behavior from situated environmental interactions.”
Bert de Vries, Scientific Director FEPlab
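To make the principle concrete, here is a minimal numerical sketch (our illustration, not FEPlab or BIASlab code; the prior, likelihood, and observation values are invented). For a discrete generative model p(o, s) = p(o | s) p(s) with two hidden states, the variational free energy F[q] = E_q[ln q(s) − ln p(o, s)] is minimized exactly when the belief q(s) matches the Bayesian posterior p(s | o), at which point F equals the surprise −ln p(o):

```python
import math

prior = [0.5, 0.5]       # p(s): prior over two hidden states (invented values)
likelihood = [0.9, 0.2]  # p(o = 1 | s) for each hidden state (invented values)
observation = 1          # the observed binary outcome

def free_energy(q):
    """Variational free energy F[q] = sum_s q(s) * (ln q(s) - ln p(o, s))."""
    F = 0.0
    for s, qs in enumerate(q):
        if qs == 0.0:
            continue  # lim x->0 of x*ln x is 0
        p_o_given_s = likelihood[s] if observation == 1 else 1 - likelihood[s]
        F += qs * (math.log(qs) - math.log(p_o_given_s * prior[s]))
    return F

# Exact posterior p(s | o) by Bayes' rule, for comparison.
evidence = sum(likelihood[s] * prior[s] for s in range(2))
posterior = [likelihood[s] * prior[s] / evidence for s in range(2)]

# Grid search over beliefs q = (q1, 1 - q1): the minimum of F sits at the
# posterior, where F equals the negative log evidence ("surprise").
best_q = min(((q1 / 100, 1 - q1 / 100) for q1 in range(1, 100)),
             key=free_energy)
print(best_q, posterior, -math.log(evidence))
```

The grid search recovers the Bayesian posterior purely by minimizing free energy, which is the perception half of the FEP; an FEP agent additionally selects actions expected to keep this quantity low, yielding the balance between uncertainty reduction and goal pursuit described above.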
Expertise and Focus
FEPlab will comprise experts from different fields, such as Audiology, Autonomous Agents & Robotics, Decision Making, and Machine Learning, to tackle the complex multidisciplinary challenges at hand. The lab will employ five PhD students at TU/e, four of whom will join the BIASlab research group in the EE department, while one will join the Human-Technology Interaction group at the IE&IS department. Key research topics include reactive message passing for robust inference, generative probabilistic models for audio processing, and interaction design for hearing aid personalization.
Sustainable Development Goals
FEPlab will focus on two SDGs. Firstly, the lab’s research goals resonate with SDG 3 (Good Health and Well-being): untreated hearing loss in the elderly increases the risk of developing dementia and Alzheimer’s disease, as well as emotional and physical problems. Secondly, they support SDG 8 (achieving higher levels of economic productivity through technological upgrading and innovation), as hearing loss has been shown to affect work participation negatively.