
Artificial Intelligence: Not Only Fast, But Also Safe


by Birgit Baustädter, published on 09.04.2026, Research
Artificial intelligence should be fast, expandable and deliver great results. And it should also be safe, trustworthy and understandable, especially in everyday applications. These aspects are now being combined.
Graphic of a computer chip with a shield on it.
Artificial intelligence with guarantees. Image source: Adobe Stock – generated with AI

Would you ride in an autonomous vehicle without being able to understand why it makes certain decisions, when it brakes and which route it chooses? Sounds rather risky, doesn’t it?

Bettina Könighofer and her working group at the Institute of Information Security at TU Graz are focusing on precisely this: how autonomous systems can be designed to be fast and trustworthy at the same time. There are currently two main approaches to artificial intelligence, symbolic AI and sub-symbolic AI, and they now need to be combined.

Security or safety?

But let’s start at the very beginning. Strictly speaking, Bettina Könighofer’s work is not about security, but about safety. It sounds confusing, but the German term “Sicherheit” can mean safety or security. Safety ensures that artificial systems do exactly what they are supposed to do; in other words, that they are trustworthy, reliable and safe for users. Security protects artificial systems from attackers who want to cause damage – for example, a classic firewall in a computer. Bettina Könighofer deals with the first of the two areas; in other words, with the design of autonomous AI systems whose decisions are correct and understandable.

New research area

Her research area is still young and is called “bilateral AI”. The aim is to combine the two strands of artificial intelligence that are common today. “There is symbolic and sub-symbolic AI,” explains the researcher. “Symbolic AI makes decisions based on clear rules, draws logical conclusions, plans ahead and provides mathematical guarantees. Sub-symbolic AI systems learn from data, for example from observed images. We are primarily concerned with reinforcement learning. This is a type of machine learning in which the system tries out different strategies and receives rewards for correct decisions.” Chess AIs or AlphaGo, for example, are based on reinforcement learning. The AI system learns from each game, tries out new strategies on its own and gets better and better.
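The reward-driven trial-and-error loop described above can be illustrated with a minimal tabular Q-learning sketch. This is a generic toy example (a short corridor the agent must cross), not the group's actual setup, and all names and parameters are illustrative:

```python
import random

def train_q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy corridor: start at state 0, the goal is
    the rightmost state. Actions: 0 = left, 1 = right. Reward 1 at the goal."""
    q = [[0.0, 0.0] for _ in range(n_states)]

    def greedy(values):
        # break ties randomly so the untrained agent still explores
        best = max(values)
        return random.choice([a for a, v in enumerate(values) if v == best])

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            a = random.randrange(2) if random.random() < epsilon else greedy(q[s])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-update: nudge the estimate toward reward plus discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = train_q_learning()
# the learned policy should come to prefer "right" (action 1) in every non-goal state
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
```

The system is never told the rule "go right"; it discovers it purely from the reward signal, which is the essence of reinforcement learning.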

This type of AI also serves as a basis for intelligent autonomous systems; from drones to autonomous vehicles and robots. Using a video, she shows a drone that learns to fly through a complicated course on its own. “You can see wonderfully how its path is very chaotic at first, but develops step by step into an optimal trajectory.” Even with such a clearly defined and small course, the control system has to calculate billions of states that the drone can enter. This example shows how quickly a safe, calculation-based system reaches its limits as the world to be calculated becomes larger.

Bilateral AI is now about combining symbolic AI and machine learning in such a way that AI systems learn independently how to perform their tasks highly efficiently, but at the same time users can rely on the system not causing any harm. For example, a robot learns how to deliver a parcel, but takes into account the people working in the building.

Reliable and fast in a complex world

In her doctoral thesis in 2018, Bettina Könighofer published the first approach for provably correct machine learning. In this work, she used symbolic AI to calculate a so-called “shield”, which checks all the decisions of the learnt AI system before they are executed. “This has the advantage that the entire system does not have to be verified. The learning system makes a suggestion and the calculating shield then only calculates whether this suggestion is safe or not. It can then intervene in an emergency. We are currently working on integrating the shield with an LLM so that users can easily have the shield’s decisions explained to them.”
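The shield pattern can be sketched as a thin wrapper around the learned policy. Everything here is a hypothetical illustration (the function names, the corridor world and the "always step right" stand-in policy are invented for this sketch): the learner proposes, a symbolic safety check validates, and a safe fallback only intervenes when the proposal would violate a constraint:

```python
def shielded_step(state, propose_action, is_safe, safe_fallback):
    """Shielding: the learned policy proposes an action; the shield checks it
    against a safety model before execution and overrides only if necessary."""
    action = propose_action(state)
    if is_safe(state, action):
        return action            # proposal passes: execute unchanged
    return safe_fallback(state)  # proposal unsafe: shield intervenes

# Toy instantiation: the state is a position on a line, the "learned" policy
# always proposes a big step forward, and the shield keeps it inside [0, 10].
def propose(pos):            # stand-in for the trained agent
    return 4                 # always suggests moving 4 units right

def is_safe(pos, step):      # symbolic safety model: stay inside the corridor
    return 0 <= pos + step <= 10

def fallback(pos):           # minimal safe intervention: stand still
    return 0

pos = 0
trace = []
for _ in range(5):
    pos += shielded_step(pos, propose, is_safe, fallback)
    trace.append(pos)
# trace → [4, 8, 8, 8, 8]: the shield blocks the step once 8 + 4 would exceed 10
```

Note the advantage described in the quote: only the small `is_safe` check needs to be verified, not the learned policy itself.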

There is currently a very large research initiative in Austria on the topic of bilateral AI, named BilAI, which brings together leading research institutions. Its main aims are to solve new, complex problems and to anchor this new type of AI deeply in industry.