Our expertise lies in developing AI models and methods that are resilient to noise, adversarial attacks, and real-world variability. We focus on techniques such as uncertainty quantification and modelling, adversarial training, and robust optimization to ensure dependable AI systems even in complex and unpredictable scenarios. Using methods from probabilistic machine learning, we model both the uncertainty expressed in expert (prior) knowledge and the uncertainty observed in gathered data. Such models give users better insight into a system's uncertainty and operational domain, which supports AI/ML practitioners working in high-impact and safety-critical domains.
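As a minimal illustration of combining expert prior knowledge with observed data (a sketch only, not our production tooling; the function name and the example numbers are hypothetical), a conjugate Beta-Bernoulli update yields a posterior failure-rate estimate with an explicit uncertainty measure:

```python
import math

def beta_posterior(prior_a, prior_b, events, non_events):
    """Conjugate Beta-Bernoulli update: an expert prior Beta(prior_a, prior_b)
    combined with observed counts gives the posterior Beta(a, b).
    Returns the posterior mean and standard deviation of the event rate."""
    a = prior_a + events
    b = prior_b + non_events
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Hypothetical example: an expert believes a component fails roughly
# 1 time in 10, encoded as the prior Beta(1, 9). We then observe
# 3 failures in 50 trials (3 events, 47 non-events).
mean, std = beta_posterior(1.0, 9.0, 3, 47)
print(f"posterior failure rate: {mean:.3f} +/- {std:.3f}")
```

The posterior standard deviation shrinks as more data arrives, making explicit how much of the estimate rests on the prior versus the observations.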
Robust AI is critical for applications in sectors like energy, healthcare, and autonomous systems, where failures can have significant consequences. By prioritizing reliability and adaptability, our work enables AI solutions that can operate confidently in real-world environments.