AI-FAIR

Full name

AI-FAIR – AI for Automotive Imaging Radar

Research Lines

  • Accuracy
  • Explainability
  • Resilience


The AI-FAIR Lab focuses on improving the completeness, generalization, explainability, and deployability of deep data-driven methods for automotive imaging radar processing, with collision risk prediction involving vulnerable road users as its primary use case. The AI-FAIR Lab is a collaboration between Eindhoven University of Technology, NXP Semiconductors, and NLAIC.

We expect recruitment for this lab to open in early 2023. Please check back frequently for updated information about how to apply, and to register for an online information session in February. If you’d like to join our mailing list, please fill out the form, and we will be happy to keep you informed of all the latest developments.

The scientific aim of the AI-FAIR Lab is to improve, using deep data-driven methodologies, automotive imaging radar processing with respect to the following aspects:

  • Completeness, i.e. estimating a world model that is as information-rich as a human's, including the spatial and temporal presence of objects/elements/events for which no or only very incomplete measurements are available;
  • Generalization, i.e. recognizing objects/elements/events that fall outside the distribution of the training data thereby extending the conditions under which the system can be used effectively;
  • Explainability, i.e. providing supporting evidence (on an expert level) to justify the actions taken by the deep data-driven system;
  • Deployability, i.e. guaranteeing that AI methods can effectively be used on available compute platforms and thus meet operational (real-time) constraints.

The primary use case for automotive imaging radar processing is collision risk prediction involving vulnerable road users. The lab's goals of Completeness and Deployability contribute to the overall program's research line of Accuracy, by improving the ability to predict potential collisions in a timely manner on embedded automotive processors. The goal of Generalization contributes to the program's research line of Resilience, by extending the operational conditions/domain under which such predictions remain effective. Finally, the goal of Explainability directly contributes to the program's research line of Explainability, by improving the system's ability to provide supporting evidence (on an expert level) that justifies the actions it takes.

Researchers will focus on improving the quality of the 4D data tensor by increasing spatial resolution and reducing artifacts such as radar clutter. They will also work to improve collision risk prediction by increasing its completeness and generalizability. In addition, they will research optimal mappings of the deep neural networks developed in the first two components onto embedded real-time compute hardware, investigate the technical requirements for explainable AI in deep data-driven systems, and implement and validate extensions that make these neural networks explainable. These efforts form an iterative process in which the research outputs of one team are shared and reused by other researchers as the project progresses.
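To give a rough sense of the kind of data involved, the sketch below builds a toy 4D radar data tensor and applies a crude global threshold to separate a strong target response from a clutter-like noise floor. This is illustrative only: the tensor dimensions (range, Doppler, azimuth, elevation bins), the Rayleigh clutter model, and the fixed-multiple threshold are all assumptions for the example; production radar pipelines use far more sophisticated detectors (e.g. CFAR variants) and the deep data-driven methods this lab researches.

```python
import numpy as np

# Hypothetical 4D imaging-radar tensor: (range, Doppler, azimuth, elevation) bins.
# A Rayleigh distribution is a common simple model for a clutter/noise floor.
rng = np.random.default_rng(0)
tensor = rng.rayleigh(scale=1.0, size=(64, 32, 16, 8))

# Inject one synthetic target response well above the clutter floor.
tensor[40, 10, 5, 3] = 30.0

# Crude global-threshold detector: keep cells whose amplitude exceeds
# the tensor mean by a fixed multiple of the standard deviation.
threshold = tensor.mean() + 10.0 * tensor.std()
detections = np.argwhere(tensor > threshold)

print(detections)  # cell indices that exceed the threshold
```

In practice a single global threshold either misses weak targets or passes clutter; this is exactly the kind of limitation that motivates learned, data-driven processing of the full tensor.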

Sustainable Development Goals

AI-FAIR Lab is part of the ROBUST program on Trustworthy AI-based Systems for Sustainable Growth which is financed under the NWO LTP funding scheme.

The AI-FAIR Lab is committed to contributing towards Sustainable Development Goal 3, Target 3.6. The lab aims to develop deep data-driven in-vehicle technologies that can predict and prevent potential collisions in dynamic and complex traffic situations, with a particular focus on protecting vulnerable road users. The effectiveness of these technologies will be demonstrated through controlled field tests, which provide quantifiable indicators of their impact on traffic safety. The ultimate goal of the AI-FAIR Lab is to increase trust in deep data-driven in-vehicle technologies and drive their adoption by the automotive industry. By bringing forward novel AI algorithms and system designs based on the results of our research, we hope to influence the design of in-vehicle technologies and ultimately contribute to the overall goal of reducing the number of global deaths and injuries from road traffic accidents.

Our efforts also align with Sustainable Development Goal 11, Target 11.2: provide access to safe, affordable, accessible, and sustainable transport systems for all. By modernizing and improving the infrastructure of the automotive industry through the development and implementation of AI solutions, we aim to drive innovation and efficiency in the sector.

SDG 3: Ensure healthy lives and promote well-being for all at all ages

Target 3.6: By 2020, halve the number of global deaths and injuries from road traffic accidents

SDG 11: Make cities and human settlements inclusive, safe, resilient, and sustainable

Target 11.2: By 2030, provide access to safe, affordable, accessible and sustainable transport systems for all, improving road safety, notably by expanding public transport, with special attention to the needs of those in vulnerable situations, women, children, persons with disabilities and older persons.

Staff

Gijs Dubbelman

Scientific Director

Hala Elrofai

Scientific Director

Partners

Eindhoven University of Technology (TU/e) and NXP have an overarching strategic partnership of which the AI-FAIR Lab will be a part. In the AI-FAIR Lab, TU/e is responsible for organizing and managing the lab's scientific research activities, which will benefit both TU/e's internal fundamental artificial intelligence roadmap and NXP's applied technology roadmap. NXP, in turn, will provide the lab's PhD students with access to technical infrastructure and valuable domain knowledge. This collaboration is highly synergistic: many former TU/e students, PDEngs, and PhDs have professional careers at NXP, and many NXP researchers are part-time researchers or professors at TU/e.