Research

Learning-based techniques are a powerful and efficient way to design versatile controllers that rapidly adapt to complex, dynamic environments. In contrast to classical control approaches, learning does not require comprehensive domain knowledge but instead distills this knowledge from data. Thus, learning techniques generalize to a broader range of problems. Yet, they usually lack the theoretical guarantees that are essential in safety-critical robotic applications.

In my research, I aim to unlock the potential of learning-based techniques for cyber-physical systems by incorporating safety guarantees into the learning of controllers for autonomous systems. I currently focus on integrating safety specifications formulated via formal methods – e.g., temporal logic and reachability analysis – into deep reinforcement learning. To evaluate my theoretical approaches, I validate their effectiveness and efficiency on various motion-planning tasks for autonomous vehicles, unmanned aerial vehicles, and mobile robots.

In a nutshell, I work at the intersection of reinforcement learning, formal methods, and robotics. This work can be clustered into three strands:

  1. Provably safe reinforcement learning: Develop reinforcement learning algorithms that provide absolute guarantees with respect to safety requirements (a minimal sketch follows this list).
  2. Formal methods for system safety: Design a language for formal safety specifications that is versatile and well-suited for cyber-physical systems.
  3. Motion planning for cyber-physical systems: Validate theoretical results on safety-critical motion planning tasks using real-world data or physical experiments on autonomous systems.
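
As a minimal, illustrative sketch of the first strand, the snippet below shows how a safety layer can wrap a learned policy: every action the policy proposes is projected onto a verified safe set before it reaches the system. The environment, policy, and box-shaped safe set are hypothetical placeholders for illustration only; the methods in the publications below compute the safe sets via reachability analysis and polynomial zonotopes.

```python
import numpy as np


def project_to_safe_set(action, safe_low, safe_high):
    """Project a proposed action onto a box-shaped safe action set.

    In the actual research, the safe set is computed via reachability analysis
    of the system dynamics; a fixed box is used here purely for illustration.
    """
    return np.clip(action, safe_low, safe_high)


def safe_step(env, policy, obs, safe_low, safe_high):
    """Run one environment step in which only verified-safe actions are executed."""
    proposed = policy(obs)  # action suggested by the learned policy
    safe_action = project_to_safe_set(proposed, safe_low, safe_high)
    return env.step(safe_action), safe_action
```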

Selected Publications

2024

  1. Provable Traffic Rule Compliance in Safe Reinforcement Learning on the Open Sea
    Hanna Krasowski and Matthias Althoff
    IEEE Transactions on Intelligent Vehicles, 2024
  2. Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
    Roland Stolz*, Hanna Krasowski*, Jakob Thumm, Michael Eichelbeck, Philipp Gassert, and Matthias Althoff
    In Proc. of the Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024

2023

  1. Provably Safe Reinforcement Learning via Action Projection Using Reachability Analysis and Polynomial Zonotopes
    Niklas Kochdumper*, Hanna Krasowski*, Xiao Wang*, Stanley Bak, and Matthias Althoff
    IEEE Open Journal of Control Systems, 2023
  2. Provably Safe Reinforcement Learning: Conceptual Analysis, Survey, and Benchmarking
    Hanna Krasowski*, Jakob Thumm*, Marlon Müller, Lukas Schäfer, Xiao Wang, and Matthias Althoff
    Transactions on Machine Learning Research, 2023