Research

Developing autonomous systems is inherently challenging due to perception uncertainties, model disturbances, and dynamic environments. Machine learning is often seen as the best approach to handle this complexity, yet machine learning models typically lack interpretability and safety guarantees and require large datasets. In contrast, model-based approaches often demand substantial engineering knowledge, which limits their transferability, but they usually provide guarantees and explainable decision-making.

In my research, I aim to unlock the potential of learning-based techniques for real-world systems by incorporating formal methods to achieve data efficiency, reliability, and interpretability. I currently focus on guiding machine learning with abstract system knowledge, e.g., traffic rules or descriptive observations of a disease, which formal methods make computationally tractable. I validate my research on a variety of applications, e.g., standard control tasks, motion planning for autonomous systems, and cell-cell interactions. My main focus is autonomous vessels, since they are a relevant safety-critical autonomous system and feature low-frequency, uncertain traffic data as well as abstract knowledge from dynamical models and expert handbooks. Ultimately, I aim for a foundational framework for real-world autonomy in which different information sources, e.g., time-series data, system models, and text, can be seamlessly integrated into machine learning algorithms, resulting in robust and interpretable models.

In a nutshell, I work at the intersection of machine learning, formal methods, and robotics. My work can be clustered into three thrusts:

  1. Algorithms for reliable machine learning: Develop learning algorithms that provide guarantees with respect to task or safety requirements (see the sketch after this list).
  2. Guidance with formal methods: Formally integrate abstract system knowledge to efficiently guide the learning process to a performant model.
  3. Solving complex real-world systems: Validate theoretical results on complex tasks and develop open-source benchmarks.
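
To make the first two thrusts concrete, below is a minimal, hypothetical sketch of one core idea behind provably safe reinforcement learning: the agent's action is corrected toward a verified-safe action set before it reaches the environment. The box-shaped safe set, its interval bounds, and the Pendulum-v1 environment are illustrative assumptions only; they stand in for safe sets obtained, e.g., from reachability analysis and formalized traffic rules, and do not reproduce the methods of the publications listed below.

```python
# Hypothetical sketch: wrap an environment so that every action is projected
# onto a verified-safe action set before execution. The safe set here is a
# simple axis-aligned box (placeholder); in practice it would be computed
# online from system dynamics and formalized requirements.
import numpy as np
import gymnasium as gym


class SafeActionWrapper(gym.ActionWrapper):
    """Project every agent action onto a (placeholder) safe action set."""

    def __init__(self, env, safe_low, safe_high):
        super().__init__(env)
        self.safe_low = np.asarray(safe_low, dtype=np.float32)
        self.safe_high = np.asarray(safe_high, dtype=np.float32)

    def action(self, action):
        # Projection onto a box reduces to element-wise clipping; more general
        # safe sets require solving a small constrained optimization problem.
        return np.clip(action, self.safe_low, self.safe_high)


if __name__ == "__main__":
    # Pendulum-v1 has a 1D continuous action in [-2, 2]; we restrict it to an
    # arbitrary "safe" subinterval purely for illustration.
    env = SafeActionWrapper(gym.make("Pendulum-v1"),
                            safe_low=[-1.0], safe_high=[1.0])
    obs, info = env.reset(seed=0)
    for _ in range(10):
        action = env.action_space.sample()  # stand-in for a learned policy
        obs, reward, terminated, truncated, info = env.step(action)
```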

Selected Publications

2024

  1. Provable Traffic Rule Compliance in Safe Reinforcement Learning on the Open Sea
    Hanna Krasowski and Matthias Althoff
    IEEE Transactions on Intelligent Vehicles, 2024
  2. Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
    Roland Stolz, Hanna Krasowski, Jakob Thumm, Michael Eichelbeck, Philipp Gassert, and Matthias Althoff
    In Proc. of the Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024

2023

  1. Provably Safe Reinforcement Learning via Action Projection Using Reachability Analysis and Polynomial Zonotopes
    Niklas Kochdumper, Hanna Krasowski, Xiao Wang, Stanley Bak, and Matthias Althoff
    IEEE Open Journal of Control Systems, 2023
  2. Provably Safe Reinforcement Learning: Conceptual Analysis, Survey, and Benchmarking
    Hanna Krasowski, Jakob Thumm, Marlon Müller, Lukas Schäfer, Xiao Wang, and Matthias Althoff
    Transactions on Machine Learning Research, 2023