Signal Processing and Speech Communication Laboratory

PRISM - Perceptive Radar Induction via Symbolic Modeling

Period
2025 — 2028
Funding
Österreichische Forschungsförderungsgesellschaft mbH (FFG), Austria
Partners
  • Infineon Technologies Austria AG (Austria)
  • Levata GmbH

Recent advances in sensor technology have paved the way for multi-sensor systems to become a practical and cost-effective solution across a range of industries, including mobile radio systems, energy grids, and autonomous vehicles. These systems improve perception, reliability, and robustness by integrating diverse sensors; however, they introduce new challenges for effective joint processing. Traditional multi-sensor processing relies either on classical analytical techniques or on purely data-driven approaches such as deep learning. Analytical methods may fail to exploit crucial information from individual sensors, while data-driven models require large volumes of high-quality data and often lack interpretability. Coherent multi-sensor processing is further hampered by environmental factors and by variations between sensors, both of which can degrade performance.

Our research proposes a hybrid approach that combines subsymbolic AI with higher-level symbolic representations to address these challenges. The method uses deep learning to capture essential low-level features and their statistical properties before lifting the data to a more abstract, symbolic level. This abstraction enables coherent processing despite sensor calibration issues and allows domain knowledge and physical constraints to be integrated. The symbolic representations also enhance system interpretability, giving humans insight into system behaviour and performance.

One crucial application area for hybrid multi-sensor systems is (semi-)autonomous driving. Modern vehicles rely on multiple sensors, such as lidar, ultrasonic, and radar sensors, to perceive their environment and make safety-critical decisions. Among these, radar sensors are particularly important for robust environment perception. However, radar systems are prone to calibration errors, synchronization issues, and limited bandwidth, all of which hinder effective coherent processing. These limitations can lead to missed or false detections, posing safety risks.

Our project will therefore focus on enhancing automotive multi-radar systems. First, we will improve local perception at each radar sensor through subsymbolic machine learning. Next, we will address calibration and bandwidth limitations by lifting the sensor data to a symbolic level, where probabilistic and AI-aided graph-based methods combine the abstracted data with domain knowledge into a coherent perception of the environment. Finally, we will implement a feedback mechanism in which the symbolic-level representation is used to optimize sensor configurations dynamically; for radar systems this could mean adapting the field of view, aperture, or resolution to focus on critical objects, improving data quality and perception accuracy.

In summary, our hybrid approach, which blends subsymbolic and symbolic processing, aims to create adaptive, efficient, and interpretable multi-radar systems, ultimately advancing vehicle safety and autonomous driving technologies.
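
To make the envisaged processing chain more concrete, the following Python sketch outlines how the three stages could fit together: subsymbolic detection at each radar sensor, fusion of the abstracted detections at the symbolic level, and a feedback step that derives the next sensor configuration from the fused scene. All names used here (SymbolicObject, LocalRadarPerception, fuse_symbolic, feedback_configuration), the gating-based merging, and the configuration parameters are illustrative assumptions, not the project's actual implementation.

from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class SymbolicObject:
    """Sensor-agnostic, abstract description of a detected object."""
    position: np.ndarray    # 2D position estimate in a common world frame [m]
    label: str = "unknown"  # e.g. "vehicle", "pedestrian"


class LocalRadarPerception:
    """Subsymbolic stage: per-sensor detection on raw radar data.

    In the project this would be a learned model (e.g. operating on
    range-Doppler maps); here it is only a placeholder interface.
    """

    def detect(self, radar_frame: np.ndarray) -> List[SymbolicObject]:
        raise NotImplementedError("replace with a trained detector")


def fuse_symbolic(per_sensor: List[List[SymbolicObject]],
                  gate_m: float = 1.0) -> List[SymbolicObject]:
    """Symbolic stage: merge per-sensor detections into one scene.

    A probabilistic, graph-based fusion would go here; this stand-in simply
    averages detections that fall within a gating distance, which avoids
    relying on exact inter-sensor calibration.
    """
    fused: List[SymbolicObject] = []
    for detections in per_sensor:
        for obj in detections:
            match = next((f for f in fused
                          if np.linalg.norm(f.position - obj.position) < gate_m),
                         None)
            if match is None:
                fused.append(SymbolicObject(obj.position.copy(), obj.label))
            else:
                match.position = 0.5 * (match.position + obj.position)
    return fused


def feedback_configuration(scene: List[SymbolicObject]) -> Dict[str, object]:
    """Feedback stage: derive the next sensor configuration from the scene.

    Example policy: narrow the field of view and refine the resolution
    around the closest object, i.e. spend bandwidth where it matters most.
    """
    if not scene:
        return {"field_of_view_deg": 120, "range_resolution_m": 0.5}
    closest = min(scene, key=lambda o: float(np.linalg.norm(o.position)))
    return {"field_of_view_deg": 60,
            "range_resolution_m": 0.1,
            "steer_towards": closest.position.tolist()}

The gating-based merge is only a stand-in for the probabilistic, AI-aided graph-based fusion described above; its purpose is to show where differences in calibration between sensors can be absorbed once the data has been lifted to the symbolic level, and how the fused scene can feed back into the sensor configuration.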