Research

Learning-based techniques are a powerful and efficient way to design versatile controllers that adapt rapidly to complex, dynamic environments. In contrast to classical control approaches, learning does not require comprehensive domain knowledge but instead distills this knowledge from data, which makes learning techniques generalizable to a broader range of problems. Yet, learning-based techniques usually lack the theoretical guarantees that are essential for safety-critical robotic applications.

In my research, I aim to unlock the potential of learning-based techniques for cyber-physical systems by incorporating safety guarantees into the learning of controllers for autonomous systems. I currently focus on integrating safety specifications formulated via formal methods (e.g., temporal logic and reachability analysis) into deep reinforcement learning. I validate the effectiveness and efficiency of my theoretical approaches on various motion-planning tasks, including autonomous driving, unmanned aerial vehicles, and mobile robots.

In a nutshell, I work at the intersection of reinforcement learning, formal methods, and robotics. This work can be clustered into three strands:

  1. Provably safe reinforcement learning: Develop reinforcement learning algorithms that provide absolute guarantees with respect to safety requirements.
  2. Formal methods for system safety: Design a language for formal safety specifications that is versatile and well-suited for cyber-physical systems.
  3. Motion planning for cyber-physical systems: Validate theoretical results on safety-critical motion planning tasks using real-world data or physical experiments on autonomous systems.

Here is a list of my published research (* indicates equal contribution):

Preprints

2024

  1. Maximizing Seaweed Growth on Autonomous Farms: A Dynamic Programming Approach for Underactuated Systems Navigating on Uncertain Ocean Currents
    Matthias Killer*, Marius Wiggert*, Hanna Krasowski, Manan Doshi, Pierre F.J. Lermusiaux, and Claire J. Tomlin
    2024
  2. Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
    Roland Stolz*, Hanna Krasowski*, Jakob Thumm, Michael Eichelbeck, Philipp Gassert, and Matthias Althoff
    Accepted at NeurIPS, 2024

Publications

2024

  1. Provable Traffic Rule Compliance in Safe Reinforcement Learning on the Open Sea
    Hanna Krasowski and Matthias Althoff
    IEEE Transactions on Intelligent Vehicles, 2024

2023

  1. Provably Safe Reinforcement Learning via Action Projection Using Reachability Analysis and Polynomial Zonotopes
    Niklas Kochdumper*, Hanna Krasowski*, Xiao Wang*, Stanley Bak, and Matthias Althoff
    IEEE Open Journal of Control Systems, 2023
  2. Provably Safe Reinforcement Learning: Conceptual Analysis, Survey, and Benchmarking
    Hanna Krasowski*, Jakob Thumm*, Marlon Müller, Lukas Schäfer, Xiao Wang, and Matthias Althoff
    Transactions on Machine Learning Research, 2023
  3. Safe Reinforcement Learning with Probabilistic Guarantees Satisfying Temporal Logic Specifications in Continuous Action Spaces
    Hanna Krasowski, Prithvi Akella, Aaron D. Ames, and Matthias Althoff
    In Proc. of the IEEE Conference on Decision and Control (CDC), 2023
  4. Stranding Risk for Underactuated Vessels in Complex Ocean Currents: Analysis and Controllers
    Andreas Doering*, Marius Wiggert*, Hanna Krasowski, Manan Doshi, Pierre F.J. Lermusiaux, and Claire J. Tomlin
    In Proc. of the IEEE Conference on Decision and Control (CDC), 2023

2022

  1. CommonOcean: Composable Benchmarks for Motion Planning on Oceans
    Hanna Krasowski and Matthias Althoff
    In Proc. of the IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), 2022
  2. Safe Reinforcement Learning for Urban Driving using Invariably Safe Braking Sets
    Hanna Krasowski*, Yinqiang Zhang*, and Matthias Althoff
    In Proc. of the IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), 2022

2021

  1. CommonRoad-RL: A Configurable Reinforcement Learning Environment for Motion Planning of Autonomous Vehicles
    Xiao Wang, Hanna Krasowski, and Matthias Althoff
    In Proc. of the IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), 2021
  2. Temporal Logic Formalization of Marine Traffic Rules
    Hanna Krasowski and Matthias Althoff
    In Proc. of the IEEE Intelligent Vehicles Symposium (IV), 2021

2020

  1. Safe Reinforcement Learning for Autonomous Lane Changing Using Set-Based Prediction
    Hanna Krasowski*, Xiao Wang*, and Matthias Althoff
    In Proc. of the IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), 2020