
#1: Neural Networks and Artificial Intelligence

06/08/2022

By Birgit Baustädter

Robert Legenstein is a computer scientist at TU Graz and heads the Institute of Theoretical Computer Science. He researches artificial neural networks and tells us about his work.

Robert Legenstein heads the Institute of Theoretical Computer Science. © Lunghammer – TU Graz

Talk Science To Me is the most curious science podcast in the podcast world – or at least at TU Graz. We ask the questions, and our researchers provide the answers. From artificial intelligence to sustainable construction to microorganisms that feed on CO2 to produce proteins: it's all found here! Listen in and be inspired.
Subscribe to the latest episodes on the podcast platform of your choice:
iTunes
Google Podcasts
Spotify
Deezer
Amazon Music
RSS-Feed

Talk Science to Me - the science podcast of TU Graz

Welcome to the TU Graz science podcast Talk Science to Me. It's great that you're listening today. My name is Birgit Baustädter, and in this very first episode we are talking about artificial intelligence and machine learning. Today's guest is Robert Legenstein, who will answer all my questions about his work. He is a scientist and professor at TU Graz and heads the Institute of Theoretical Computer Science, where he researches brain-inspired artificial intelligence. But it's best if he tells us himself.

Talk Science To Me: Professor Legenstein - thank you very much for being our guest today on the very first episode of the TU Graz Science Podcast and for answering all the questions we have about artificial intelligence and machine learning. You are a computer scientist at TU Graz and head the Institute of Theoretical Computer Science. Can you describe your day-to-day work? What are you researching?

Robert Legenstein: In principle, my team and I work on two related issues. I am a computer scientist and work on fundamental questions of information processing. I also started working on neuroscience quite early on. That's not so far-fetched, because neuroscience deals with the brain, and the brain does nothing but process information. Of course, I'm not a neuroscientist. But if you combine computer science and neuroscience, you get something called computational neuroscience. So I deal with the question of how information is processed in the brain. And we do that with more or less abstract models, which we then simulate and also analyse.

And since the brain is so efficient at processing information, this automatically leads to our second question: we are trying to develop novel paradigms for new computer systems. These systems are inspired by the brain, and the field is called brain-inspired computing, or neuromorphic computing. Our task is to make sure that these systems do something useful. This goes very much in the direction of artificial intelligence and machine learning.

Talk Science To Me: You have already mentioned that you work on the fundamentals. But when you look towards applications - what is the longer-term aim? Where do you want it to go?

Legenstein: AI systems. Systems that are self-learning. Systems that can learn from data by themselves. An important aspect of this is energy efficiency in the application. Now, we are not building chips ourselves, but our research partners build chips that are significantly more energy-efficient than conventional computers.

Talk Science To Me: You already touched on why the brain is taken as a model: because it is energy-efficient and very powerful. But can you expand on that? Why the brain, exactly? What are the advantages of building computer systems inspired by the human brain?

Legenstein: On the one hand, it's about AI research - artificial intelligence. There are some very powerful systems and models that you can do a lot with. But of course they are still miles away from the intelligence of the human brain. There is a very long way to go.

In terms of functionality, the brain is an ideal model, because it is ultimately a proof of concept for an intelligent system. You can draw inspiration from it and look at what could be improved in the AI systems we have in order to get more intelligent systems.

Energy efficiency is an important point. Just to illustrate: the brain needs about 25 watts, and its performance is comparable to or better than that of our best supercomputers - which need power plants to run. There is a huge gap in between that people are trying to exploit technologically.

Talk Science To Me: What exactly does it look like when you use the brain as a model?

Legenstein: Well, mainly it's a lot of reading. What I do in my work is read papers. There are libraries full of publications about what we know about the brain. We know an insane amount, but not how it really works. We know a lot of details, but not how it all comes together to produce intelligence. That means you have to read an insane amount to get inspiration and see where you can start. A lot of my work is then also done on paper: you sit in front of a sheet of paper and think about how you could build a model that has a certain functionality. Or you analyse things.

And one of the main points is, of course, simulations. We do a lot of simulations. You have to imagine it like this: we work with spiking neural networks, which are even more brain-inspired. They practically live in these simulations. That means I - or mainly my colleagues - sit in front of the computer and code neural networks, define them and simulate them. They actually run on our cluster in the basement on campus, a cooled room full of computers, or on supercomputers, like the one in Vienna. Then the results come back and we can analyse and visualise them: What did the activity in the network look like? Did it solve the tasks it was set? And so on. That's how you can imagine it.

Talk Science To Me: Can you see what the network is doing? Can you see the steps?

Legenstein: Yes, spiking neurons work by sending out impulses. For example, we simulate five minutes of our network - five minutes of real time, as if the brain were functioning for five minutes. The individual neurons fire so-called spikes, which are short voltage pulses, and we simulate that. Then you can look at what the voltage pulses look like over those five minutes. Patterns emerge - point clouds. Each spike is a point, and over time there are many points. But you want to have few points, because the fewer the spikes, the more efficiently the network works. And yet the network is supposed to solve a certain task with them.
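To make this concrete, here is a minimal sketch of the kind of neuron such simulations are built from: a leaky integrate-and-fire neuron, one of the simplest spiking neuron models. This is an illustrative Python example with invented parameters, not the simulation code actually used at the institute.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron: an illustrative
# sketch with invented parameters, not research code. The membrane
# voltage integrates its input, slowly leaks back towards rest, and
# emits a spike (a short voltage pulse) when it crosses a threshold.
rng = np.random.default_rng(0)

dt = 1.0        # time step (ms)
T = 1000        # total simulated time: 1000 ms
tau = 20.0      # membrane time constant (ms)
v_thresh = 1.0  # spike threshold (arbitrary units)
v = 0.0         # membrane voltage, starting at rest

spike_times = []  # each recorded spike is one dot in a raster plot

for t in range(T):
    i_in = rng.normal(1.1, 0.5)     # noisy input drive
    v += dt / tau * (-v + i_in)     # leaky integration of the input
    if v >= v_thresh:               # threshold crossed:
        spike_times.append(t)       #   record the spike...
        v = 0.0                     #   ...and reset the membrane

print(f"{len(spike_times)} spikes in {T} ms; first few at {spike_times[:5]}")
```

Plotting the spike times of many such neurons against time gives exactly the point clouds - raster plots - described above: one dot per spike per neuron.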

Talk Science To Me: That means you use your brain to recreate a brain?

Legenstein: Yes, I try (laughs).

Talk Science To Me: We've been talking about artificial intelligence all along. Is what you are describing artificial intelligence?

Legenstein: That raises the question of what artificial intelligence is, and at what point something becomes artificial intelligence. It is certainly a goal to build intelligent systems with such spiking neural networks. Or rather, to make them usable for artificially intelligent systems once our partners implement them in hardware - special, very efficient hardware called neuromorphic hardware.

We have already briefly touched on why the brain is being studied. If you look at the systems and models that exist now, among the most successful are so-called artificial neural networks. They are even more abstract than the networks we simulate, but they are also brain-inspired: networks of even more simplified neurons that likewise have synaptic connections to each other. And what is very important is that these systems are not programmed, but learn by themselves what they have to do. You give them large amounts of data - training examples - and from that they learn how to behave. That is very important, because it's the same with humans - with biological learning systems. True, the human brain is not born a tabula rasa; there are many genetic predispositions. But much of what we can do, we learn through our development. We have to learn how to control our bodies. We even have to learn to see, to hear and to understand what we hear. That is what is modelled in artificial neural networks and in our even more brain-inspired networks.
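As a small illustration of "not programmed, but trained", here is a toy artificial neural network that learns the XOR function purely from examples. Nothing in the code states the rule; the weights are adjusted to fit the data. The architecture and parameters are arbitrary choices for this sketch.

```python
import numpy as np

# A toy two-layer neural network trained on XOR: a sketch of learning
# from examples rather than explicit programming. All choices here
# (4 hidden units, sigmoid, plain gradient descent) are illustrative.
rng = np.random.default_rng(1)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # training inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # target outputs

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of the squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

# Should approach [0, 1, 1, 0]; an unlucky random start may need more steps.
print(np.round(out.ravel(), 2))
```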

Talk Science To Me: What is the difference between human learning and the way these systems learn?

Legenstein: That's a very good question. There are big differences. I think the biggest is that machine learning systems learn tabula rasa. Neural networks, for example, can in theory learn anything that can be learned - they are so-called universal approximators. This also has disadvantages, because it means they take a very long time to learn something specific. They need a lot of training data. That is one of the main problems: you need huge training data sets, and you have to train for ages with a lot of energy. Humans are learning machines too - the brain is a learning machine. But it is optimised to learn certain things that are important for us. We don't have to be able to do everything - we can't - but certain things, and evolution has optimised the brain for them. Look at animals: some things animals don't have to learn at all. A horse is born and can walk immediately; it doesn't need to learn that. A human being has to learn to walk. So you can see it's an interplay between genetic coding and learning. And that is largely missing in these artificial systems. There are approaches being researched on how to improve this.

Talk Science To Me: Do you think it will ever be possible to build an artificial system that works the way it does in humans? Is that even the goal of the research?

Legenstein: In principle, it is possible to build systems that function similarly. But you will probably never have a system that functions 100 per cent like a human being, or behaves like a human being - intelligent behaviour that is human-like, but not exactly human. The reason I believe that is learning. I think that only learning-based or partially learning-based systems can produce behaviour as complex as a human's. That is one thing. But when you learn, you of course learn from the experiences you have, and humans learn from their experiences from birth. Birth, if you can trust psychology, is an extremely formative event. The relationship with parents, the relationship with other people. Simply being human. Machines do not have these experiences and will not be able to have them in the same way. Truly human artificially intelligent machines will probably never exist. But that is not necessarily the goal, nor would it make sense. The goal is to get machines that achieve a certain goal and have a certain function. That often requires intelligent behaviour, but not necessarily human behaviour in that sense.

Talk Science To Me: Where does intelligence begin for you?

Legenstein: Another very difficult question. Intelligence itself is very difficult to define; I don't think there is a universally valid definition. If you look at biology, it's usually about behaviour that adapts to the environment and is accordingly good for the organism. That starts with worms and goes up to humans, and adaptability naturally increases as you go up the evolutionary ladder. The ability to learn is certainly an important principle in intelligent systems, and precisely this adaptability. A very important concept in machine learning is generalisation. Generalisation means that you can apply learned knowledge to new situations - situations that you have not encountered before during your training. School is a good illustration. There are subjects at school where you can learn by heart. I always hated that as a child - I have such a bad memory, and it just didn't work. But learning something by heart is not a sign of intelligence either; that's just memory in the end. What's important is that you can adapt what you've learned to a new situation. And that is precisely this generalisation.

But how does one generalise? With humans, we say that she or he must understand something. But that doesn't settle it for me, because I don't know what understanding means. People have certainly thought about it; I don't know exactly. I think it is very difficult to say. I think it involves thought processes in the brain that are partly a bit chaotic. The brain is, I think, a network of chaotic associations. And if you understand something, then maybe it just means that you can put it into different contexts through those associations.

How do you implement generalisation in AI systems? That is perhaps another interesting question in this area. There are two methods - two streams. One is classical AI, which is symbol-based. The other is learning-based AI, which used to be called connectionism. In classical AI, the approach was to set up certain rules, which computer programmes can then connect, combine and apply in many different contexts.

In machine learning, generalisation arises from large training sets. If you have a lot of training data, a certain generalisation emerges automatically, provided the model fits accordingly. And that's how intelligence emerges, I would say: the ability to apply what has been learned to new things.
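A tiny sketch of what generalisation means operationally: a model is fitted on training data only and then judged on situations it has never seen. The toy task, a noisy straight line, is invented purely for illustration.

```python
import numpy as np

# Generalisation, operationally: fit on training data, evaluate on
# unseen data. The task (learn y = 2x + 1 from noisy points) is a
# made-up toy example.
rng = np.random.default_rng(2)

x_train = rng.uniform(0, 1, 50)     # situations seen during training
x_test = rng.uniform(1, 2, 20)      # new situations, never seen before
y_train = 2 * x_train + 1 + rng.normal(0, 0.1, 50)
y_test = 2 * x_test + 1 + rng.normal(0, 0.1, 20)

# Fit a straight line to the training data only
slope, intercept = np.polyfit(x_train, y_train, deg=1)

pred = slope * x_test + intercept            # apply the learned knowledge...
test_mse = np.mean((pred - y_test) ** 2)     # ...to unseen situations
print(f"learned y = {slope:.2f}x + {intercept:.2f}, test MSE = {test_mse:.4f}")
```

A small error on the held-out points means the model has captured the underlying regularity rather than memorising the training examples - the machine-learning analogue of not just learning by heart.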

Talk Science To Me: Science fiction likes to dream up visions of the future featuring artificial intelligent systems that pursue not-so-positive goals. How much do we have to worry about artificially intelligent systems?

Legenstein: Science fiction actually plays an enormously strong role in public perception - an important role. Science fiction always tends to show the negative; it has to be exciting, after all. The question is how close it is to reality. What do we have to worry about when it comes to AI? I think there are different levels.

The level you mentioned - the fear that an AI will develop a certain independence, perhaps even a consciousness of its own - is, in my opinion, firmly in the realm of science fiction and will remain there. At least at the moment, we don't need to worry about that at all. There are certain intelligent systems, but, as I said, they are still a long way from human intelligence. But there are other levels that are perhaps more interesting.

One level would be the question of reliability. Are the systems reliable when you use them? There is a lot of research here, because this is recognised as a problem: if you have a self-learning system, you give a certain autonomy to the system. The human being no longer directly determines what the system does; the learning system determines that itself - or the training data determine it - which can cause problems. So reliability is a big question, and we are also working on this to some extent in the so-called Dependable Systems Lab as part of Silicon Austria Labs. And it is clear that there is no 100 per cent certainty in any technical system; I think that's easy to say.

One problem that people often have is that they ask why an AI makes a certain decision. People want to be able to understand it. Then you can say: it made a wrong decision - was there a mistake, in the software or somewhere else? In a neural network you can't say that, because it is so complex that you can't really understand why it made a certain decision. There are tools - and a lot of ongoing research - with which you can read off certain things, but you can't really say exactly. That is another important area of research, called explainable AI: how can I make decisions comprehensible? By the way, with humans it is also the case that people think they make decisions rationally and can explain them. However, there are indications from neuroscience that, at least in some settings, decisions tend to be made intuitively and unconsciously and are only explained afterwards - similar to a neural network. Only afterwards do we say: OK, that's why I did this or that. In reality, intuition is very, very important and perhaps even dominant.

The third level I want to address is the area of application. This is perhaps also something to worry about. I would put it this way: we should perhaps not worry so much about AI, or be afraid of AI; we should be more concerned about the people who use it. Because of course there are applications that are problematic. In the area of surveillance, for example, this is already being exploited. Or in the military area there are possibilities. And there are already efforts to regulate this.

Talk Science To Me: What are the biggest challenges in research at the moment?

Legenstein: I have already mentioned a few: explainable AI, dependability... Generalisation is one that is being researched a lot at the moment, because we can see that generalisation can bring the greatest progress. I'll give a few examples. Object recognition is a classic application of learning systems and neural networks. Such a network can already recognise objects very well - for example, whether there is a car in an image. However, if you train such a network and then apply it to a rain situation - if there is suddenly rain in the picture, which the system has not been trained for - then it will fail, because it cannot generalise. As a human being you ask yourself: how is that possible? For humans it is quite clear: I know it's a car; it doesn't matter whether it rains or snows, it remains a car. Many people say: the network simply lacks understanding. Again, it's not quite clear what "understanding" actually means. These systems work on a statistical level: they see a lot of pictures and from them derive the statistics of how to map an image to "car". What is missing is background knowledge about the world. This is also a big area of research: how can I improve generalisation, for example through background knowledge? How can I incorporate it? How can I combine learning systems with classical AI? Classical AI has certain methods for incorporating background knowledge - knowledge bases, for example. How can I combine them so that I get the best of both worlds?

Another example from robotics - robotics, or acting agents in general, is very important: here there is the area of reinforcement learning, a technical term. Ultimately, it's about training systems not by telling them what to do, but by telling them when they have done something well. That is much closer to biological learning: the system gets positive feedback, and that reinforces the action. There has been a lot of progress in reinforcement learning. We know this from many newspaper articles: there has been great progress in chess, in Go, and in computer games - StarCraft, for example. These are very complex games, and the neural networks trained to play them are practically always better than humans - much better, typically. But there are also problems here. Learning in robotics or with such agents typically works very well in simulated environments that you can control well and simulate extensively. But if you want to take it into reality - I train a robot and want it to act in the real world - then it becomes difficult, because in the real world everything is much more variable and there are many more possibilities. Overcoming this so-called reality gap is also an important part of current research.
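Reinforcement learning reduced to its core can be sketched in a few lines: a two-armed bandit agent that is never told which action is correct, only whether an action was rewarded, so that rewarded actions are reinforced. The numbers are arbitrary, and this is not any specific system mentioned in the episode.

```python
import numpy as np

# A two-armed bandit: the simplest reinforcement learning setting.
# The agent only receives rewards; rewarded actions are reinforced
# through growing value estimates. All parameters are illustrative.
rng = np.random.default_rng(3)

p_reward = [0.3, 0.8]   # hidden reward probability of each action
q = np.zeros(2)         # the agent's learned value estimate per action
alpha = 0.1             # learning rate
epsilon = 0.1           # exploration probability

for step in range(2000):
    if rng.random() < epsilon:
        a = int(rng.integers(2))           # occasionally explore at random
    else:
        a = int(np.argmax(q))              # otherwise take the best-valued action
    r = float(rng.random() < p_reward[a])  # reward: 1 or 0, nothing else is told
    q[a] += alpha * (r - q[a])             # positive feedback reinforces action a

print("value estimates:", np.round(q, 2))  # approaches [0.3, 0.8]; arm 1 wins
```

The game-playing systems mentioned above rest on the same principle of reward-driven learning, scaled up enormously and combined with deep neural networks.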

Talk Science To Me: What projects are you working on at the moment?

Legenstein: I have already briefly discussed Silicon Austria Labs. My team is involved in three other projects. There is the SMALL project, funded by the FWF - a collaborative project with partners from ETH Zurich and Southampton, for example. Our partners are building very energy-efficient chips here. And the basic question for us is something I already mentioned: the ability to learn. If I want to put a smart chip into an application, I can't just put it in straight away - I can't say I'll buy it, install it, and it will monitor some process - because it has to adapt to what is specifically required there. That means it has to be trained on site, and this training should of course take place as quickly as possible. As we have already heard, one problem in machine learning is that you need a lot of training data. So we want to build these systems in such a way that we pre-train them and they learn how to learn quickly later. In other words, the machines learn how to learn, so that they are ready for use in the application as quickly as possible.
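The "learning to learn" idea can be sketched in miniature, loosely in the spirit of Reptile-style meta-learning; this is an assumption made for illustration, since the method used in SMALL is not described in this episode. A shared initialisation is pre-trained across many related toy tasks, so that a brand-new task can then be learned in just a few steps:

```python
import numpy as np

# Learning to learn, in miniature (loosely Reptile-style; illustrative
# only, not the SMALL project's method). Tasks are lines y = a*x + b
# with task-specific a and b. Pre-training nudges a shared
# initialisation towards the solutions of many tasks, so a new task
# can be learned "on site" in a few steps.
rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 20)

def sample_task():
    a, b = rng.normal(2.0, 0.3), rng.normal(1.0, 0.3)  # related tasks
    return a * x + b

def adapt(w, y, steps, lr=0.1):
    """Plain gradient descent on the mean squared error of w[0]*x + w[1]."""
    for _ in range(steps):
        pred = w[0] * x + w[1]
        grad = np.array([np.mean(2 * (pred - y) * x), np.mean(2 * (pred - y))])
        w = w - lr * grad
    return w

meta = np.zeros(2)                       # the shared initialisation
for _ in range(200):                     # pre-training across many tasks
    adapted = adapt(meta, sample_task(), steps=5)
    meta += 0.1 * (adapted - meta)       # move the init towards the solution

y_new = sample_task()                    # a brand-new task
for name, w0 in [("from scratch", np.zeros(2)), ("pre-trained ", meta)]:
    w = adapt(w0, y_new, steps=3)        # only a few training steps allowed
    print(f"{name}: MSE = {np.mean((w[0] * x + w[1] - y_new) ** 2):.3f}")
```

After the same three training steps, the pre-trained initialisation ends up with a far smaller error than training from scratch, which is exactly the point: the system has learned how to learn this family of tasks quickly.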

Then we have another project that is very interesting because it combines biology and artificial intelligence. It's called SYNCH and is funded by the EU; it also has many cooperation partners. The idea is to try to connect our artificial spiking neural networks with biological networks - possibly also for medical applications, because the hope is that this will enable new therapeutic methods. Our partners are building implants - or rather, we do research on how implants could be built; we don't really build, we do basic research. But it's ultimately about implants that can pick up neuronal activity from the brain, process it in the network and then perhaps interact with the brain by sending impulses back. This could be used, for example, to make brain stimulation more efficient. Brain stimulation is used for some diseases, but it is often very crude: huge numbers of neurons are stimulated. Such technologies could be used to improve that.

Another project is called ADOPD and is also funded by the EU. This one is about learning with light. Most of the time, the hardware you use is electronic - hence the name electronics. But you can also build computer systems that work with light: instead of moving electrons, they send photons. In this project, we want to build learning machines that work with light. This has the advantage that you can compute extremely fast with light, and it would give us very fast learning machines that can also learn very quickly.

Talk Science To Me: Your work is very creative - where do you get the ideas?

Legenstein: That's a great question (laughs). I think creativity in this field, or in science in general, is the most important thing. I think the best scientists are very creative people - almost artists, probably. Where do I get it from? That's a very personal question. But I can say that I am a person who likes to be creative. I love playing the piano; that is the perfect counterbalance and also something where you can recharge your creativity. I think it's one of the best things you can do. Apart from that, creativity doesn't come from nowhere. Inspiration needs a foundation: you need knowledge to be able to be creative. Learn as much as you can and then wait for something to jump out. They say you get the best ideas while sleeping - they come unexpectedly. And you have to make time for that, time for creativity. If you are constantly on the work treadmill, nothing will come. You really have to take time for creativity.

Talk Science To Me: Money is also always a very big issue in research. If it really didn't matter at all, what would you implement? What would you want to do?

Legenstein: In reality, money always plays a role, of course, and in this area it is very important. For me it's like this: you are always limited by the resources you have - by the computing resources in particular. The dream, of course, is to be able to scale up the systems, the networks that you have - to simulate networks that are as large as possible. With the technical possibilities available, you are very limited compared to what a brain can do. This upscaling is something you can really dream about. In general, if you look at where the big advances in AI research are being made, it's the big companies - Google, Facebook and so on. They now dominate the research, and the reason is exactly that: their financial possibilities are almost unlimited, and you have to see how you can keep up. Some countries have recognised this. Germany, for example, has invested enormously in AI research. In Austria, that hasn't quite happened yet. We hope that something will happen and that there will be more funding; otherwise it is very difficult to keep up.

Talk Science To Me: Thank you very much for being our guest today and answering our many questions!

Legenstein: Thank you, too! My pleasure.

And thank you for listening today. We'd love to have you with us again next time.