Research
Learning-based techniques are a powerful and efficient way to design versatile controllers that adapt rapidly to complex, dynamic environments. In contrast to classical control approaches, learning does not require comprehensive domain knowledge but instead distills that knowledge from data. Thus, learning techniques generalize to a broader range of problems. Yet, they usually lack the theoretical guarantees that are essential in safety-critical robotic applications.
In my research, I aim to unlock the potential of learning-based techniques for cyber-physical systems by incorporating safety guarantees into the learning of controllers for autonomous systems. I currently focus on integrating safety specifications formulated via formal methods – e.g., temporal logic and reachability analysis – into deep reinforcement learning. I validate the effectiveness and efficiency of my theoretical approaches on various motion-planning tasks, including autonomous driving, unmanned aerial vehicles, and mobile robots.
In a nutshell, I work at the intersection of reinforcement learning, formal methods, and robotics. This work can be clustered into three strands:
- Provably safe reinforcement learning: Develop reinforcement learning algorithms that provide absolute guarantees with respect to safety requirements.
- Formal methods for system safety: Design a language for formal safety specifications that is versatile and well-suited for cyber-physical systems.
- Motion planning for cyber-physical systems: Validate theoretical results on safety-critical motion planning tasks using real-world data or physical experiments on autonomous systems.
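One common mechanism behind provably safe reinforcement learning is a safety shield: before an action is executed, a formal verifier (here, one-step reachability) checks whether it can lead to an unsafe state and substitutes a verified-safe fallback if so. The sketch below is purely illustrative and not my actual method; the corridor environment, the `UNSAFE` set, and all function names are assumptions made for this example.

```python
import random

# Illustrative sketch (not the actual research method): a safety shield that
# filters a learner's actions via one-step reachability on a 1-D corridor.
UNSAFE = {0}          # state 0 is unsafe (e.g., a collision zone)
ACTIONS = [-1, +1]    # move left / move right

def successor(state, action):
    """One-step reachability: the successor state under this action."""
    return max(0, min(9, state + action))

def safe_actions(state):
    """Actions whose one-step successor avoids every unsafe state."""
    return [a for a in ACTIONS if successor(state, a) not in UNSAFE]

def shielded_step(state, proposed_action):
    """Keep a safe proposed action; otherwise substitute a verified fallback."""
    if successor(state, proposed_action) not in UNSAFE:
        return proposed_action
    fallback = safe_actions(state)
    return fallback[0] if fallback else proposed_action  # no safe option exists

# Even a purely random "learner" never reaches an unsafe state under the shield.
random.seed(0)
state = 5
for _ in range(100):
    action = shielded_step(state, random.choice(ACTIONS))
    state = successor(state, action)
    assert state not in UNSAFE
```

Because the shield only intervenes when the proposed action is provably unsafe, the learning algorithm itself can remain unchanged, which is what makes this composition attractive for cyber-physical systems.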