Robust AI

Artificial intelligence systems are increasingly deployed in critical and dynamic environments where reliability is essential. However, AI models are often vulnerable to uncertainties, adversarial inputs, and data shifts, which can compromise their performance and trustworthiness. Robust AI focuses on ensuring that AI systems remain reliable and effective under such challenging conditions.

Our expertise lies in developing AI models and methods that are resilient to noise, adversarial attacks, and real-world variability. We focus on techniques such as uncertainty quantification and modelling, adversarial training, and robust optimization to ensure dependable AI systems even in complex and unpredictable scenarios. Uncertainty is modelled with methods from probabilistic machine learning, combining expert (prior) knowledge with the uncertainty observed in gathered data. Such models give users better insight into a model's uncertainty and operational domain, which supports AI/ML users in high-impact and safety-critical domains.
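As a minimal sketch of the probabilistic approach described above, the example below uses Bayesian linear regression: a Gaussian prior on the model weights encodes prior knowledge, the observation-noise level captures uncertainty in the gathered data, and the resulting predictive variance grows outside the region covered by training data. The parameter names `alpha` (prior precision) and `beta` (noise precision) and the toy data are illustrative, not taken from any specific project.

```python
import numpy as np

def fit_bayesian_linreg(X, y, alpha=1.0, beta=25.0):
    """Posterior mean and covariance of the weights.

    alpha: precision of the Gaussian prior on the weights (prior knowledge).
    beta:  precision of the Gaussian observation noise (data uncertainty).
    """
    d = X.shape[1]
    S_inv = alpha * np.eye(d) + beta * X.T @ X   # posterior precision
    S = np.linalg.inv(S_inv)                     # posterior covariance
    m = beta * S @ X.T @ y                       # posterior mean
    return m, S

def predict(X_new, m, S, beta=25.0):
    """Predictive mean and variance (noise + weight uncertainty)."""
    mean = X_new @ m
    # Per-row quadratic form x S x^T, plus the irreducible noise term.
    var = 1.0 / beta + np.sum((X_new @ S) * X_new, axis=1)
    return mean, var

# Toy data: y = 0.5 + 2x with Gaussian noise, inputs in [-1, 1].
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(50, 1))
X = np.hstack([np.ones_like(x), x])              # bias column + feature
y = 0.5 + 2.0 * X[:, 1] + rng.normal(0.0, 0.2, 50)

m, S = fit_bayesian_linreg(X, y)

# Query inside the training range (x=0) and far outside it (x=5):
# the predictive variance reflects how far we are from observed data.
X_query = np.array([[1.0, 0.0], [1.0, 5.0]])
mean, var = predict(X_query, m, S)
```

This kind of predictive variance is exactly the "insight into a model's uncertainty and operational domain" mentioned above: a downstream user can flag or reject predictions whose variance exceeds an application-specific threshold.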

Robust AI is critical for applications in sectors like energy, healthcare, and autonomous systems, where failures can have significant consequences. By prioritizing reliability and adaptability, our work enables AI solutions that can operate confidently in real-world environments.

Related project

RICO