REPAIR - Robust and ExPlainable AI for Radarsensors
- Period: 2021–2024
- Funding: Österreichische Forschungsförderungsgesellschaft mbH (FFG), Austria
- Partners: Infineon Technologies Austria AG, Austria
A basic requirement for the widespread acceptance of autonomous vehicles is robust and reliable automatic vehicle guidance in all possible situations, without the need for human intervention. The foundation for this is provided by sensors for environmental perception, with radar sensors being an indispensable technology due to their independence from weather and lighting conditions. In real-world traffic, a multitude of disturbances and environmental influences can occur, such as temperature effects, varying traffic situations, and targeted manipulation of the input data (i.e., adversarial attacks). The sensor system must handle these influences robustly and recognize when its own predictions are uncertain. If the prediction of a sensor is classified as “uncertain” in a specific situation, other sensors with higher confidence can be given more weight, or a safe emergency routine can be triggered.
The main project goal is the development of explainable and robust machine learning (ML) models for object detection with radar sensors. For safety-critical applications, it is essential to understand exactly how an ML model arrives at its prediction. Explainable AI can be used to identify which influencing factors are decisive. To increase the robustness of the ML models, research focuses on three areas:
- Hybrid models are developed that combine the robustness of classical modeling techniques with the superior performance of ML methods.
- Bayesian neural networks provide uncertainty information about the object detections; this information can be exploited in higher-level processing steps, and safety-relevant aspects can be guaranteed (a minimal sketch of one common approximation follows this list).
- The robustness of a neural network is directly related to the “distribution shift” between training and test data. To ensure robust behavior under distribution shifts, a non-parametric method based on the Wasserstein distance is developed and compared with different normalization methods (see the second sketch below).
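To illustrate the second research area, the following minimal sketch approximates a Bayesian neural network with Monte Carlo dropout, a common practical approximation; the actual project models may differ. The network architecture, feature dimensions, and number of forward passes are illustrative assumptions. The spread of the sampled predictions serves as the per-detection uncertainty estimate that higher-level processing could exploit.

```python
# Minimal sketch: predictive uncertainty via Monte Carlo dropout, a common
# approximation to Bayesian neural networks. Architecture, sizes, and sample
# count are illustrative assumptions, not project code.
import torch
import torch.nn as nn

class RadarDetectionHead(nn.Module):
    """Toy classification head over radar feature vectors (hypothetical)."""
    def __init__(self, n_features=64, n_classes=4, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active at inference and average the sampled softmax outputs."""
    model.train()  # enables dropout; in practice only dropout layers would be switched
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_prob = probs.mean(dim=0)   # averaged class probabilities
    std_prob = probs.std(dim=0)     # per-class spread, used as uncertainty
    return mean_prob, std_prob

if __name__ == "__main__":
    model = RadarDetectionHead()
    features = torch.randn(8, 64)   # 8 hypothetical radar feature vectors
    mean_prob, std_prob = mc_dropout_predict(model, features)
    # A downstream fusion step could down-weight detections with high std_prob.
    print(mean_prob.argmax(dim=-1), std_prob.max(dim=-1).values)
```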
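The third research area can be illustrated in a similarly simplified way: the sketch below quantifies the shift between training-time and test-time feature statistics with the one-dimensional Wasserstein distance from SciPy, computed per feature channel. The feature layout, data, and decision threshold are hypothetical and only stand in for the non-parametric comparison described above.

```python
# Minimal sketch: flagging a distribution shift between training and test
# features with the 1-D Wasserstein distance. Data and threshold are
# illustrative assumptions, not project results.
import numpy as np
from scipy.stats import wasserstein_distance

def feature_shift(train_features: np.ndarray, test_features: np.ndarray) -> np.ndarray:
    """Per-channel Wasserstein distance between two sample sets of shape (n_samples, n_features)."""
    return np.array([
        wasserstein_distance(train_features[:, j], test_features[:, j])
        for j in range(train_features.shape[1])
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(1000, 16))  # training-time feature statistics
    test = rng.normal(0.5, 1.3, size=(200, 16))    # shifted test-time statistics
    shift = feature_shift(train, test)
    # A large distance in any channel flags a shift; the threshold is a
    # hypothetical tuning parameter.
    print("max per-channel shift:", shift.max(), "flagged:", bool((shift > 0.3).any()))
```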
The aim is to demonstrate the usefulness and advantages of the developed robust and explainable ML models in safety-critical applications. Furthermore, these methods make the risk of a wrong decision quantifiable. Evaluating the ML models on different use cases enables a critical comparison across a wide range of problems in the context of autonomous vehicles and environmental perception with radar sensors.