It's great to be here. DFKI is the German Research Center for Artificial Intelligence; we do applied research on many topics related to AI. Now, talking about ethics: the discussion of AI is something that has gained a lot of traction recently, but of course the field itself is decades old. And one of the reasons we now have all those discussions is that in recent years AI did a lot of quite spectacular things, right?
Starting from the Go challenge, where AI beat the grandmaster, through some very spectacular medical breakthroughs, to ChatGPT and the recent thing with the Beatles songs. So now we can just, you know, revive any artist and make that music. And that has created a lot of discussion, those things being spectacular.
A lot of this discussion revolves around, you know, spectacular things. And typically you get these utopian and dystopian takes, with the truth somewhere in the middle, right? We had a lot of that in the panel before, right?
Singularity and AI taking over the world on one side; on the other side, people saying AI will do everything for us, happiness forever. And some things that are plainly absurd, like people from the legal profession talking about endowing AI with legal personality, because when the car is driving itself, something has to be responsible. So, as usual, all these spectacular things create a lot of nonsense in the public debate.
And also a lot of political interest. We have a discussion about regulation, and there is actually a nice statement from the European Commission. A lot of European regulation may not contribute to clarity, but this one is actually quite good, because essentially two things were pointed out, right?
The first one is that AI is one of the most strategic technologies we have in terms of impact.
It's something that is going to be nearly everywhere. But the second thing, and this is where ethics comes in, is that AI also has an unprecedented influence on the way our society will change. The change we are looking at is probably comparable only to the Industrial Revolution of the late 19th century. And the reason for that is twofold. The first, and I think a lot of you understand this, is that even now AI is essentially everywhere, and it's going to be in even more places, right?
The financial system is driven by AI, healthcare is driven by AI. Look at a modern car: it wouldn't work without AI. Your smartphone would be useless without AI.
Essentially, virtually everything in our lives contains some sort of AI, and if it doesn't, it will very soon. The other aspect is that a lot of the impact AI has is not just an incremental improvement; it is fundamentally transformational, right? Think about this thing with the Beatles song: it's not just a tool, the way the synthesizer was, that allows people to make music more easily.
It changes the entire field: now I can just say, you know, I want to have more Beatles songs. Or look at the acting industry, right? We are producing movies without actors.
It completely changes the way things are done. And look at ChatGPT. I was recently invited to give a talk at a conference of those consulting companies that help the European Commission write and evaluate proposals and, as usual, also help proposal writers write good proposals, right?
And what I realized is that soon they are going to have proposals written by ChatGPT, which will be fed to ChatGPT for summary and evaluation, and then all of them will be given to somebody at the Commission who will use ChatGPT to do a summary and the selection.
So things are changing, and you can go a bit further, right? Things like all these dating apps, Tinder; it's all driven by AI. So you could say that the future genetic composition of humanity will be determined, to a large degree, by AI, right? Because one of the most important choices of your life, the person you spend the rest of your life with, is something that AI is giving you.
So, you know, the impact is huge. That's why you really need to approach ethics from different angles. And I'll quickly give you, let's say, five aspects into which I believe you can divide this complexity. The first one, again, is the new one: all this singularity, consciousness, and the danger of AI taking over.
You see, I put it in gray because I think that of all the aspects, it is currently the least significant one, and let me tell you why. The whole thing emerged, of course, with ChatGPT, and ChatGPT is amazing, right?
So I asked it before the presentation: what are the most important aspects of AI ethics? And look at this, something like eleven points. I could just take them, make eleven slides, ask DALL-E to create some images, and I'd have the presentation, right?
And probably you wouldn't notice. Maybe you don't know; maybe that's what I did, right?
And then it essentially produces some very differentiated things, you know, aspects and foundations and challenges and insights. It sounds tremendously human. And that's where all this discussion came from.
But then, I don't know how many of you know how ChatGPT is really trained.
It's very interesting to look at, because the only thing they do to train, let's say, the fundamental engine is to take all the text they can find on Wikipedia and in books, take sentences, delete a word from a sentence, and train the thing to predict it back. There is absolutely nothing else being done. It's just done at such a huge scale that it actually creates these amazing results.
And that is something that really surprised the AI community, but that's what it's doing. Now, how does it go from this to providing the answers I've shown you? It's very simple. I pose the question, so it predicts the most probable next word given that question, which is the first word of my answer. Now you have the question and the first word of the answer; you feed it back into the loop and you get the second word of your answer.
And you do this until your answer is long enough.
Now, what you are actually getting is not the single next word given the words before, but a probability distribution, because the same sentence can have different endings. Every time you ask the thing to produce the text, you get a different one. But that is essentially everything that is happening there.
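To make that loop concrete, here is a deliberately tiny Python sketch. The talk describes masked-word prediction at enormous scale; this toy uses plain next-word counting over a twelve-word corpus, which is the same idea in its most minimal form. Everything here, the corpus, the bigram table, the sampling, is an illustrative stand-in, not how a real transformer is implemented.

```python
import random
from collections import defaultdict, Counter

# Toy illustration of the two ideas above: (1) "training" is nothing but
# learning to predict a word from its context, and (2) answers are made by
# repeatedly feeding the text back in and sampling the next word from a
# probability distribution, which is why you get a different answer each run.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Sample the next word from the learned conditional distribution."""
    options = counts[word]
    if not options:                      # word never seen with a successor
        return random.choice(corpus)
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt, n_words=8):
    """The autoregressive loop: append each sampled word, feed it back in."""
    out = prompt.split()
    for _ in range(n_words):
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))   # different runs give different continuations
```

The sampling step is exactly the point made above: because the continuation is drawn from a distribution, the same prompt produces different text every run.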
So, you know, thinking that this sort of probability distribution is going to wake up one day, say "cogito ergo sum", and take over the world is just plain nonsense. And you can see that if you really go deeper. So I tried playing around with ChatGPT and posed questions to it involving things that are not so easy to talk your way around. You know, in German there is a word for someone who can talk confidently and eloquently about anything without actually knowing anything, right?
And that's essentially what ChatGPT is, and a very useful one. Those people are useful in management too.
Extremely useful. But the point is, ask it something about physics, where you cannot talk your way around the facts. Take pushups: everybody knows it's much easier to do pushups on an incline like this, when you are not fully horizontal, than flat on the ground. But if you ask ChatGPT, it says the exact opposite: that it's much harder to do pushups on a hill. Why? Because it just does text statistics, and anywhere on the web, when you ask about doing something uphill, it's always harder because of gravity, right? Running uphill, everything.
So because it does not understand, it does not know anything, it just knows text statistics, it gives you that answer, and it gives it in a very elaborate, trustworthy-sounding way, right?
Which is actually the big problem with it. Anyway, that's really funny. There's another example that shows you that it doesn't really understand anything.
You know, these picture generation things are really cool, right? For a project, I asked it: can you show me a picture of a suburban neighborhood with a church tower in the background? And what it generated looks fantastic. So I go and ask it: exchange the church tower for a chimney, which should make the neighborhood a bit less attractive, okay? So it does it, but it produces different results, and here's one result it produced. You look at the background, and it left a cross standing, but not the one on the church tower. It actually deleted the church tower, produced a chimney, and added a new cross on top of it, right?
Which shows that it has absolutely no clue what a church, a chimney, or anything else is; it just reproduces statistical patterns it found somewhere. I think it's extremely useful; you just have to understand that it's not something for answering questions. It doesn't know anything. It can do certain things pretty well. And the whole talk about this thing somehow emerging into a singularity is plainly a fairytale.
Now, every fairytale has its own kernel of truth, and there are two in this one. The first is that, independently of ChatGPT, there is a lot of really significant research in cognitive science, neuroscience, and computer science, which may eventually bring us to the stage where we are building a computer about which we would say it could be conscious, where real intelligence may come about.
And at that stage we will need to think about it as hard as we think about things like human cloning: if we really want to do this, what are the implications? But that is something very, very far away.
Again, sometimes things that we think are far away come quicker than we think, but it's not ChatGPT. The other kernel of truth in it is complexity, and that's something that was partially mentioned in the panel before.
Maybe some of you come from finance and know this plot. Something you see time and time again in finance is the so-called flash crash, right? What happened here is that, without any significant external event, you had this huge loss in market valuation, and minutes later it rebounded. Now, what happened? Nothing physical, of course; you don't get changes in physical properties within minutes.
What happened is that you have all this very fast computerized trading, which involves a lot of machines interacting with each other very quickly, and that creates certain effects you could call chaos effects. It's essentially the same reason why predicting the weather for the day after tomorrow is not only difficult because we don't have computers that can do it; it is fundamentally impossible, right? Once you have enough things interacting with each other in certain patterns, you get what's called chaos, and prediction simply doesn't work.
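Here is a one-screen illustration of that claim, using the textbook logistic map as a stand-in for weather or interacting trading machines (a deliberately simple assumption for illustration, not a model of either):

```python
# A minimal illustration of the chaos point: two states that start almost
# identically diverge completely after a few dozen steps, so long-range
# prediction fails no matter how big your computer is.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9      # two initial states, differing by one billionth
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.6f}")

# After a few dozen iterations the two trajectories are unrelated: the tiny
# initial difference has grown to the size of the whole state space.
```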
Now, if you look at what AI does, it's not just a lot of AI systems. You have AI systems that, through smartphones, smart glasses, smart watches, your Alexa, influence us as people, right?
They take what we do, they learn from us. So we have a lot of these connected feedback loops and complex interconnections which, if you look at it scientifically, create exactly this sort of complex system. And it's actually much, much more complex, because of course we don't really know what the AI systems are doing.
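As a sketch of what such a feedback loop can do, here is a toy two-variable simulation; all coefficients are invented for illustration and stand in for a vastly more complex real system:

```python
# A tiny sketch of a human-AI feedback loop: a recommender learns from the
# user, the user drifts toward what it shows, and the coupled system slides
# to an extreme that neither side chose.

user = 0.1     # the user's position on some opinion axis, in [-1, 1]
model = 0.0    # the system's current estimate of what the user wants

for _ in range(50):
    model += 0.5 * (user - model)     # the system adapts to the user...
    shown = 1.2 * model               # ...and shows slightly amplified content
    user += 0.3 * (shown - user)      # the user drifts toward what is shown
    user = max(-1.0, min(1.0, user))  # opinions saturate at the extremes

print(round(user, 2), round(model, 2))  # both end near the extreme +1.0
```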
And what happens if you take those chaotic systems and give them control not just of part of the stock market, but of all your financial systems, your traffic systems, your power systems, your news delivery, your telecommunications? Then you don't need an evil Terminator to create havoc and existential threats, right? You can really have tremendously catastrophic consequences, not because of some mystical AI emergence, but just because chaotic complex systems screw up and human oversight fails.
And that's an important point for ethics, right? People always say human oversight is something you need, but within these complex systems, human oversight is often an illusion. The classical idea is: if it doesn't work properly, you pull the plug.
Now, if your entire power grid is controlled by an AI, what happens when you pull the plug on the AI? You might as well let the rogue AI have its way.
It just doesn't work. So that really is a problem. Now, the other problem you have is of course this whole issue of potential misuse. And I'm not even going to start on that: what happens when the mafia gets hold of it, lethal autonomous weapons; there's a lot you can say about dual use.
I don't think there's a need to talk about it, it's very clear. But there are more subtle things there, right? If you look at something like this feedback loop, us humans are more and more driven by AI advisors; they take your data, they learn from you. And the problem is that this feedback loop is, of course, controlled by a small number of players. These are definitely not rogue players.
You know, I'm a big fan of Google, I love working with their stuff, but these are of course large monopolies that suddenly have an amount of power that is absolutely amazing. And we talked about regulation in the previous panel. The problem is, I always have to laugh: a few months ago there was yet another article about how happy the European Commission was to have finally extracted another fine from Google over some search engine monopoly. And it's like, you know, that's old technology, long dead.
What we are now talking about are these home assistants, ChatGPT and the like, and they create an amount of monopoly and power that is just incomparable to anything you had before, right? And that brings us to what is really the most important thing about AI ethics, which is its social impact, right?
As I said before, AI is omnipresent in all areas of our lives. And in addition to the hard things like security and misuse, you really have to look at the subtle effects, right?
Again, I mentioned Tinder, right? It's changing the structure of our society, and the problem is that it's doing it in ways that in many cases we don't understand. And some of you may know it: we like to use this machine learning image recognition example. So here's a nice one.
We keep talking about transparency and explainability, right? So, this is a system that was trained to distinguish between wolves and huskies; I cannot do it myself. It was good, until it failed. And the reason it failed is that, it turned out, the system hadn't learned to distinguish between wolves and huskies at all.
It had just learned that all the pictures of wolves were taken on snow and all the huskies were in the garden on green grass, right? And you see these effects a lot.
And the problem, of course, is: you train it, it seems to work, but you had the wrong training data, and then you release it to do something useful, right? To give people advice. And it knows nothing.
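The wolf/husky failure is easy to reproduce numerically. In this hedged sketch, the made-up animal features overlap between classes, as they do in real photos, while a "snow in background" feature separates the training set perfectly, so a naive learner latches onto the background and fails the moment the background changes:

```python
# Features per image: [ear_shape, snout_length, snow_in_background].
# The animal features overlap between classes, but "snow" separates the
# training labels perfectly -- a spurious cue, as in the wolf/husky story.
train = [([0.60, 0.50, 1.0], 1), ([0.40, 0.70, 1.0], 1),   # wolves, on snow
         ([0.55, 0.45, 0.0], 0), ([0.35, 0.60, 0.0], 0)]   # huskies, on grass

def best_stump(data, thresh=0.5):
    """Pick the single feature that best separates the training labels."""
    best = None
    for f in range(3):
        acc = sum((x[f] > thresh) == (y == 1) for x, y in data) / len(data)
        if best is None or acc > best[1]:
            best = (f, acc)
    return best

feature, acc = best_stump(train)
print(f"learned rule: feature {feature} (train accuracy {acc:.0%})")  # picks "snow"

# Deployment: a husky photographed on snow, a wolf photographed on grass.
test = [([0.50, 0.55, 1.0], 0), ([0.45, 0.65, 0.0], 1)]
for x, y in test:
    pred = int(x[feature] > 0.5)
    print(f"true={y}  predicted={pred}")   # both wrong: it learned the background
```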
Another thing some of you will know is adversarial attacks. Because these systems represent information in a way that is fundamentally different from the way we humans understand it, you can take a picture of a stop sign, add some noise to it, and get a picture of a stop sign that for us looks absolutely unchanged, but that the system will classify as a yield sign or a totally different picture. So the problem is they are completely opaque in every way, and they control a lot of things.
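Here is a toy numeric sketch of why such attacks work, using an invented linear classifier rather than a real network: in high dimensions, nudging every pixel by an imperceptible amount in the direction of the model's gradient moves the score a long way. Real attacks such as FGSM do the analogous thing to deep networks:

```python
import random

random.seed(0)
n = 10_000                                      # number of "pixels"
w = [random.uniform(-1, 1) for _ in range(n)]   # invented "learned" weights
x = [random.uniform(0, 1) for _ in range(n)]    # invented input image

def score(img):                                 # sign of score = predicted class
    return sum(wi * xi for wi, xi in zip(w, img))

s = score(x)
print("clean score:", round(s, 2))

# Nudge every pixel against the current class, along the gradient sign:
# sign(d score / d x_i) = sign(w_i). eps is chosen just large enough to push
# the score past zero, and it comes out tiny relative to the [0, 1] pixel
# range, because thousands of tiny nudges add up.
cls = 1 if s > 0 else -1
eps = (abs(s) + 1.0) / sum(abs(wi) for wi in w)
x_adv = [xi - eps * cls * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print("perturbed score:", round(score(x_adv), 2))   # the sign has flipped
print("per-pixel change:", round(eps, 4))           # invisible to a human
```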
And that, of course, brings us to the issue of discrimination, right?
You may have noticed it with faces: if you use some of these image generation things and give them a female face, you're going to get a figure in a bikini; you give a male face, you get a figure in a business suit; you give a black face, in the US you're likely to get somebody with weapons. So, all sorts of discrimination. Now, you could say it's actually not that bad, because it's not that AI is discriminatory; it essentially holds up a mirror to what our society is like, because it's trained on the data that is there.
Which, by the way, is a big problem when you talk about making AI non-discriminatory, because what you are then asking the system to do is to falsify the statistics in such a way that you find them agreeable and in line with your own ideology.
Now, on the other hand, it's not that simple, because when you use this stuff to make decisions about people, this sort of bias can be critical, and it amplifies the problems you already have in society, right? If you use that sort of discrimination to make decisions about credit, employment, healthcare, or who might pose a danger to society, you are just reinforcing it. And that is a very big ethical problem, because on one hand it definitely cannot be that you use systems like this; on the other hand, fixing it is not easy.
Not just in terms of technology, but also in terms of normative questions. How do you decide that a system is non-biased, right?
Which sort of bias is good, which sort of bias is bad? Sometimes you want the system to reflect reality.
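That normative difficulty can be shown in a few lines: on the same made-up decisions, two standard fairness criteria disagree, so "the system is unbiased" has no single technical meaning:

```python
# Invented decision data: (group, true_label, model_decision),
# e.g. whether credit was rightly or wrongly granted.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),
]

def acceptance(rows):
    """Fraction of the group that gets a positive decision."""
    return sum(d for _, _, d in rows) / len(rows)

def true_positive_rate(rows):
    """Fraction of truly qualified people who get a positive decision."""
    pos = [d for _, y, d in rows if y == 1]
    return sum(pos) / len(pos)

group_a = [r for r in records if r[0] == "A"]
group_b = [r for r in records if r[0] == "B"]

# Demographic parity: 0.75 vs 0.25 -> violated ("biased").
print("acceptance A:", acceptance(group_a), " B:", acceptance(group_b))

# Equal opportunity: 1.0 vs 1.0 -> satisfied ("unbiased").
print("TPR        A:", true_positive_rate(group_a), " B:", true_positive_rate(group_b))
```

The same decisions are fair by one definition and unfair by the other; which criterion should hold is a normative choice, not a technical one.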
So that's a very tricky issue, which I don't think regulation addresses properly, and I don't think we really understand all its aspects properly yet. Then there are the more obvious things, like intentional manipulation, something that is now emerging with all this generative AI.
We can already see that things like social media and news filtering have had a very polarizing and disturbing effect on some of our social and political systems. And if you look at what you can do now, it's going to go beyond that. Essentially, the notion that you can trust any sort of information, video, or other feed is becoming illusory, right?
You can create your own reality, right? So that, for me, is definitely the biggest issue in ethics: the social impact.
And then you have the traditional things: safety, data protection, and so on. Interestingly, in terms of regulation, if you look at something like the AI Act, that is to a very large degree the only part it really regulates, i.e., safety and data protection. All those other things remain ambiguous, because they are very difficult to cover by regulation, right?
Unfortunately, after some initial panic, there was no regulation with respect to the first point, which could have been quite catastrophic in terms of what you can do. So, summarizing, let me show you a quote from a paper I wrote with some colleagues; it's a quote by Holger Hoos, and it sounds really nice. The really important issue with today's AI systems, like generative AI, is not what a lot of people say, that they come close to AGI, artificial general intelligence, all this singularity nonsense.
It is that those systems make us believe that they can do much, much more than they actually can; that they are much, much better than they actually are. Because you read the texts and they read like, whoa, what a fantastic expert wrote this. And in reality it's plain nonsense.
Sometimes; sometimes not. And then people use the technology in ways that are just absurd, and that is the real danger. So, summarizing: the real danger of AI is not too much intelligence; it's the combination of artificial and natural stupidity, which taken together is really dangerous, right? And to finish, this is a picture I like to show my children when it comes to responsibility. It's a quote from Spider-Man: with great power comes great responsibility. And that, I think, essentially summarizes the ethics of AI.
It's a hugely powerful technology which needs to be used in an appropriate, responsible way. And, to close with a little advertising of our own: we now have a very big effort at DFKI looking at this, with a big interdisciplinary team of people from philosophy and computer science, looking at how to support the processes of societal discourse on AI and AI ethics research. If you're interested, please contact us and talk to us. Thank you.