Robotic Systems, Powered by Formal Methods

This is a summary of the introduction to my PhD thesis defense talk. The talk was based on my work at the Max Planck Institute for Software Systems, where I explored how to combine formal methods (FM) and autonomous systems.

Robots helping in a hospital

When deciding whether to use formal methods for the development of a software system, there are three strong arguments in favor of doing so:

- they let you state precisely what the system is supposed to do,
- they provide rigorous guarantees that the implementation actually does it,
- they make it possible to synthesize parts of the system automatically from the specification.

This sounds like a great value proposition. And yet, when decision time comes, most teams decide against using formal methods.

Why is that so?

There are, of course, some strawman arguments. “Programmers are terrified by math,” says Leslie Lamport. “FM as a discipline constantly overpromises but rarely delivers,” say others. There is also a question of semantics: as soon as a method starts being used in mainstream software development, it is no longer considered part of formal methods - it is simply regular software engineering.

Realistically, though, it always comes down to what benefit formal methods bring to the table. The three selling points of FM are easily dismissed by:

- reading the code and testing it already gives a good enough idea of what the system does,
- being almost correct is acceptable, so hard guarantees are not worth the effort,
- there is nothing worth synthesizing that programmers cannot simply write by hand.

And given that using FM does not come for free (there is a somewhat steep learning curve, immature tool support, and the need to keep the specification in sync with code changes), these arguments are sometimes valid. For instance, when programming a mobile app: it can always be safely restarted after crashing.

Picture by Blutgruppe/Corbis

But what if we are developing a (multi-)robot system? It will likely contain a couple of learned components, so inspecting the code and having a clear idea of what it does is illusory. Furthermore, robots are big, strong machines operating near humans: we cannot be almost correct; guarantees are necessary. And finally, developing a robotic system requires knowledge from many diverse disciplines, so synthesizing even parts of the overall system and letting programmers focus on the application logic is valuable.

Synthesis for Multi-Robot Systems

Motivated by this reasoning, I decided to explore interactions between formal methods and robotic systems. Concretely, with the help of many great collaborators, I first developed a programming model for multi-robot systems in which users give commands describing what needs to be done (in the language of linear temporal logic, LTL), and the system synthesizes all the robots' actions to accomplish the task. We implemented a system supporting this model, both in simulation and on a group of TurtleBot robots.

The implemented system receives commands in linear temporal logic (LTL) and takes care of lower-level details (planning, coordination, task assignment, failure recovery,…)
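To give a flavor of what such LTL-style commands express, here is a toy sketch (with made-up propositions, not the actual system's input format): two common temporal operators, "eventually" (F) and "always" (G), checked over a finite execution trace where each step is the set of propositions that hold at that moment.

```python
# Toy illustration of LTL-style properties over a finite trace.
# Propositions and the delivery scenario are hypothetical examples.

def eventually(prop, trace):
    """F prop: prop holds at some step of the trace."""
    return any(prop in step for step in trace)

def always(prop, trace):
    """G prop: prop holds at every step of the trace."""
    return all(prop in step for step in trace)

# A hypothetical hospital-delivery run: the robot picks up medicine
# and delivers it to a ward.
trace = [
    {"at_pharmacy"},
    {"carrying_medicine"},
    {"carrying_medicine"},
    {"at_ward", "delivered"},
]

print(eventually("delivered", trace))             # liveness: F delivered -> True
print(not eventually("corridor_blocked", trace))  # safety: G !corridor_blocked -> True
```

Real specifications combine such operators (e.g. "always, if a request arrives, eventually serve it"), and the synthesizer turns them into concrete robot behavior.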

Adaptable Formal Language for Instructing Robots

Following that, I set out to answer the question of how to help non-expert users write correct specifications. This line of work culminated in LTLTalk (read: little talk), a system that uses a natural language description of a task and a single demonstration to infer the formal specification of the task. Through interactions with users, LTLTalk augments its underlying formal language: the language always remains formal (retaining all its benefits, such as interpretability of the system's decisions), but it converges towards the fragment of natural language spoken by its users.
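The core idea of inferring a specification from a demonstration can be sketched in a few lines. This is a deliberately simplified toy (the candidate templates, the task, and the propositions are invented for illustration; LTLTalk's actual inference is far richer): keep only the candidate formulas that the user's single demonstration satisfies.

```python
# Toy sketch of specification inference from one demonstration.
# Candidate formulas are (operator, proposition) pairs: F = eventually, G = always.

def holds(formula, trace):
    kind, prop = formula
    if kind == "F":  # eventually prop
        return any(prop in step for step in trace)
    if kind == "G":  # always prop
        return all(prop in step for step in trace)
    raise ValueError(f"unknown operator: {kind}")

# Hypothetical candidates the system might consider for "water the plant".
candidates = [("F", "plant_watered"), ("G", "plant_watered"), ("F", "door_open")]

# One demonstration: the robot fetches water, then waters the plant.
demo = [{"has_water"}, {"has_water", "plant_watered"}]

consistent = [f for f in candidates if holds(f, demo)]
print(consistent)  # -> [('F', 'plant_watered')]
```

The natural language utterance then helps disambiguate when several candidates remain consistent with the demonstration.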

Reinforcement Learning with Inferred Reward Machines

Finally, the last line of work was about helping reinforcement learning algorithms learn faster and become more interpretable by inferring automaton-like structures called reward machines (prior to our work, these machines were always assumed to be given). For more details about each of these projects, please have a look at the thesis :)
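To make "automaton-like structures" concrete, here is a minimal reward machine sketch (the task and state names are invented for illustration, not taken from the thesis): a finite automaton over high-level events whose transitions emit rewards, so the machine encodes the temporal structure of the task.

```python
# Minimal reward machine: transitions map (state, event) -> (next state, reward).
# Hypothetical two-subgoal task: first get the key, then open the door.

DELTA = {
    ("u0", "got_key"):     ("u1", 0.0),  # first subgoal reached
    ("u1", "opened_door"): ("u2", 1.0),  # task complete, reward emitted
}

def run(events, state="u0"):
    """Run the machine on a sequence of events; return final state and total reward."""
    total = 0.0
    for e in events:
        # Events with no matching transition leave the state unchanged.
        state, r = DELTA.get((state, e), (state, 0.0))
        total += r
    return state, total

print(run(["moved", "got_key", "moved", "opened_door"]))  # -> ('u2', 1.0)
```

An RL agent equipped with such a machine sees which subgoal it is pursuing (the machine state), which both speeds up learning and makes the learned behavior easier to interpret; our work infers the machine instead of assuming it is given.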

Conclusion

Coming back to the initial question of when to insist on using formal methods, the conclusion is this: the more intricate the system and the greater the price of failure, the greater the need for formal methods. The robotics domain is one such example, and it furthermore offers some unique opportunities: objects and actions are concrete, so interacting with users through examples becomes easier.

Of course, having high incentives is only a part of the story. The other part is building scalable, polished, user-friendly FM tools that every programmer will love to use, as well as providing learning material to ease the adoption of the technology.

Given how much of the world runs on software, increasing the level of trust in our software is a moral imperative. I thus hope that the work described in this post contributes towards making software (and autonomous systems in particular) more reliable.


This post was first published on 6.6.2022.
