The AI-FAIR Lab focuses on improving the completeness, generalization, explainability, and deployability of deep data-driven methods for automotive imaging radar processing, with collision risk prediction involving vulnerable road users as its primary use case. The AI-FAIR Lab is a collaboration between the Eindhoven University of Technology, NXP Semiconductors, and NLAIC.
The scientific aim of the AI-FAIR Lab is to use deep data-driven methodologies to improve automotive imaging radar processing with respect to the following aspects:
- Completeness, i.e. estimating a world model that is as information-rich as a human's, including the spatial and temporal presence of objects/elements/events for which no measurements, or only very incomplete ones, are available;
- Generalization, i.e. recognizing objects/elements/events that fall outside the distribution of the training data, thereby extending the conditions under which the system can be used effectively;
- Explainability, i.e. providing supporting evidence (at an expert level) to justify the actions taken by the deep data-driven system;
- Deployability, i.e. guaranteeing that AI methods can run effectively on available compute platforms and thus meet operational (real-time) constraints.
The primary use case for automotive imaging radar processing is collision risk prediction involving vulnerable road users. The lab's goals of Completeness and Deployability contribute to the overall program's research line of Accuracy, by improving the ability to predict potential collisions in a timely manner on embedded automotive processors. The goal of Generalization contributes to the program's research line of Resilience, by extending the operational conditions/domain under which such predictions are effective. Finally, the goal of Explainability also contributes directly to the Resilience research line, by improving the system's ability to provide supporting evidence (at an expert level) that justifies the actions it takes.
Researchers will focus on improving the quality of the 4D data tensor by increasing spatial resolution and reducing artifacts such as radar clutter. They will also work to improve collision risk prediction by increasing its completeness and generalizability. In addition, they will research optimal mappings of the deep neural networks developed in the first two components onto embedded real-time compute hardware, investigate the technical requirements for explainable AI in deep data-driven systems, and implement and validate extensions that make the neural networks explainable. These efforts follow an iterative process in which the research output of one team is shared and reused by the other researchers as the project progresses.
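As a point of reference, the sketch below illustrates what such a 4D radar data tensor might look like in code. The (range, Doppler, azimuth, elevation) axis layout, the cube dimensions, the Rayleigh noise background, and the global threshold used as a stand-in for clutter suppression are all illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np

# A minimal sketch of a 4D imaging-radar data tensor. The axis layout
# (range, Doppler, azimuth, elevation) and the cube dimensions are
# illustrative assumptions; actual shapes depend on the radar front end.
N_RANGE, N_DOPPLER, N_AZIMUTH, N_ELEVATION = 128, 64, 32, 8
rng = np.random.default_rng(seed=0)

# Rayleigh-distributed magnitudes stand in for a noise/clutter background.
radar_cube = rng.rayleigh(scale=1.0, size=(N_RANGE, N_DOPPLER, N_AZIMUTH, N_ELEVATION))

# Naive global thresholding as a placeholder for clutter suppression;
# real pipelines typically use adaptive (e.g. CFAR-style) detectors.
threshold = radar_cube.mean() + 3.0 * radar_cube.std()
detections = np.argwhere(radar_cube > threshold)  # (range, Doppler, azimuth, elevation) indices

print(f"tensor shape: {radar_cube.shape}, candidate cells above threshold: {len(detections)}")
```

In this framing, the deployability research described above would concern mapping neural networks that consume tensors of this kind onto embedded automotive processors within real-time budgets.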