Hi everyone. For a pretty long time now, there have been warnings about both human stupidity and artificial intelligence. One of the critics we hear the most in the broader media is Elon Musk, who warns that artificial intelligence could be even more dangerous than nuclear weapons. And he doesn't stop with artificial intelligence as such; he goes on about the artificial intelligence experts who disagree with him and believe that AI is an opportunity for mankind.
He says the biggest issue he sees with so-called AI experts is that they think they know more than they do, and they think they're smarter than they actually are. He continues that smart people define themselves by their intelligence, and they don't like the idea that a machine could be smarter than them, way smarter than them, so they dismiss the idea altogether.
So he's not happy with people doubting his criticism of artificial intelligence. In other words, he believes intelligent people tend to be stupid at times. That makes one thing clear:
we don't really know what intelligence is, and that makes it a fair question to ask what artificial intelligence is. Research shows that human intelligence rose until around 1975; ever since, birth cohorts have become bit by bit less intelligent. So, to summarize: while we are all becoming, on average at least, less smart, our lives are becoming more and more comfortable. We really are experiencing the convenience that smart machines offer us. So if human intelligence is declining and machine intelligence is on the rise, is it time to regulate artificial intelligence?
Now, if artificial intelligence simply makes our lives more convenient, I'm happy. Even I, as a lawyer, Martin, don't need a regulation. But if there's any kind of danger, to summarize it, I think we should at least think about a regulation, whatever that would be in detail.
So crucial in the end, I guess, is the definition of what artificial intelligence actually is. And of course you can approach it from a legal background or from a technical perspective, or you can just talk to anyone on the street; most people have an opinion on it.
Some people believe that artificial intelligence is anything that's smart, any tool that adds to our convenience. In that sense, any mobile phone would already be artificial intelligence. But even though our lives have changed a lot through the use, at times the heavy use, of mobile phones, I don't believe we have to fear that situation or that we need regulation for it. Merely smart tools aren't really artificial intelligence, and I guess most of you see it the very same way.
So for this definition, we have to say that autonomy is required: it needs to be a truly intelligent machine, with the capacity to take decisions on its own.
That capacity to take decisions is really profound and crucial to this definition. It is worth mentioning that the European Union believes this very much as well and repeatedly talks about autonomy, interconnectivity, the trading and analysis of data, and the possible self-learning criterion, which in its opinion is an optional one.
Additionally, at least minor physical support, the adaptation of actions to the environment, and, of course, the absence of biological intelligence and a biological body. With this definition, this fairly precise definition for here and now, I believe there are three good reasons to find a regulation for artificial intelligence: ethics, privacy, and liability. Ethics first. I'm pretty sure we need a non-discriminatory due process. We need transparency. We need to really understand what kind of decisions a machine would take for us or about us, and what the process behind them is.
That's an ethical question. The labor market will change drastically, and we shouldn't be too bored when machines take everything over once they're as smart as we all are. For a future perspective, we need social policies. We need to assure human dignity, autonomy, and the self-determination of the individual, and all of that especially in the field of human care. So that's the ethical part, and I think this will be one of the three big pillars that we will eventually see rising.
It has already started a bit, but I guess this is an important part of finding a good approach and of really understanding the challenges for mankind. Privacy is the second good reason to find a regulation for artificial intelligence, because already today we have laws in place, especially the GDPR, the General Data Protection Regulation, which prohibits automated decision-taking.
So it is legally impossible for a machine to autonomously take a decision about a human being that refers to personal information.
And there will probably be more and more privacy concerns coming up, so maybe there is even a little war between privacy and artificial intelligence. Privacy is something that has been around for 30 or 40 years, even though it feels like it's only been a year, more or less. And I guess it's really a big counterpart to artificial intelligence, because artificial intelligence without the use of personal information is pretty difficult to maintain.
Finally, liability is a very good reason for a regulation. We need to build security and ethics into artificial intelligence, and we need to accept legal liability for the quality of the technology produced. For any product it's important to have good quality; with artificial intelligence that's even more important, especially if our definition asks for a decision-taking machine, because there we really need good quality.
We need a good algorithm, and we need a rule on what's okay and what's not, after all.
And if a border is crossed, it's important to know how far liability goes and who would be liable. So, after all, legal responsibilities need to be clarified. Let me briefly introduce you to the current rules on liability. The current rules foresee legal liability of various parties, depending on their opportunity to either cause or prevent damage.
That might be a manufacturer, an operator, an owner, or a user of any device, of any possibly smart tool. Take, for example, an autonomous car that decides, without my intervening, to drive into a traffic light or anything else. Who's responsible for the damage? Well, it's going to be the manufacturer, even if he maintains current technical standards. If I didn't do anything to it, if I didn't program the machine, the car, myself to do it, if I didn't do anything else for this to happen, then it's not going to be me as a driver or an owner.
It's going to be the manufacturer. Of course it can be the driver at times, or it can be both the manufacturer and the driver. The AI itself? That's not possible yet. There is no legal personality behind artificial intelligence yet, so there cannot be any liability there now, and I guess the point is whether that's going to change in the future. The question is whether the current liability rules, as just introduced, remain applicable in the future. I think it's very important to understand: the less AI can be considered a simple tool in the hands of mankind,
and the more AI takes its own decisions, the less the cause of harm can be traced back to a specific human actor. That's the basic idea today: only if the damage can be connected to a human being is there liability. And if the machine is very smart and there is no immediate connection between the action of the human being and the machine anymore, there will be no liability connection between the two.
So this all undermines the logic of the current legal setup, and that makes the current liability rules inapplicable to a truly autonomous artificial intelligence.
So we need a new framework. The question is: what could that be? It could be business standards, it could be ethical guidelines, it could be standards coming from lawyers or from governments. It could be pretty much anything, or a mixture of all of them, and I guess that will probably be the case in the end. We would also have to ask whether it should be a local one or an international one. I personally believe that an efficient regulation only works globally, and that's going to be difficult, because we don't have that much on the market yet.
Talking about rules, we have some, but none of them are global yet; all of them are local. There have been some attempts by the G7 and the G20 to establish a code of conduct.
But beyond that, it's only local initiatives, and they don't have anything to do with each other. Ethically, but also regarding liability, they have nothing in common yet. So it's really decentralized regulation right now, and I think that's the real gap we are witnessing here.
So however it will be done, whether locally or globally, whether now or a bit later, I guess it's important to be in time. To exaggerate a bit: we should regulate it before we are being regulated as human beings. And once we have the impression that very soon artificial intelligence will be smart enough to regulate us and not care about our rules anymore, I think it's about time. And that's less a legal question than a factual one.
I always ask myself: what happens if we don't understand the processing algorithms anymore, because they're too complicated for us, because we just don't have a clue how they work, because they're very dynamic? They're too difficult for us, and not only because we have been getting less and less smart since 1975.
So what could we agree on? The EU, again, has several considerations, and even I as a lawyer, and yes, I'm very used to thinking in rules and regulations, even I am pretty stunned by what I read here. And this is just a very small glance; you can read a lot from the European Union. First, they think about establishing an insurance scheme. So this would be just like a number plate on your car: your artificial intelligence, your robot, as they say, would get a number, a license plate, probably something technically more thrilling than what we know from a car, but still the same system. We would have an insurance scheme behind it.
And you would be, so to speak, the owner of your artificial intelligence. I don't see that working: in a situation where artificial intelligence is very intelligent and has its own mind, it's going to be difficult to have such ownership.
I believe so. I personally don't see this coming, but it tells us where we are in the discussion; this is really being discussed heavily. In the long run, they are even saying that there should be the creation of a specific legal status for robots.
And this is probably something that robots, that artificial intelligence, would like, if they were asked, if they were intelligent enough to go through that discussion. So maybe that's where we will finally end up, and for liability reasons it would make a lot of sense. Because if there's damage caused by an artificial intelligence, even though I always imagine artificial intelligence to be pretty perfect at things, there may at times be issues, just like with a human being, with damage being caused by it. Then it makes sense to have a legal status.
We have seen in individual cases that there have been marriages between humans and robots; there it would be good to have a legal status for ethical reasons and all of that. This is something I read a lot, and it's not only lawyers saying it. The legal status, I think, is something we should really think about. It's something we have had to discuss for animals for a long time: should they have a legal status? They don't have a full legal status, but they do have some legal status by now, and that has only developed over the last 20 to 25 years.
So if this all goes wrong and Elon Musk is right to be afraid: already in 2025 he's planning his manned mission to Mars, and I'm sure he will not take artificial intelligence along. I guess that makes it safer to be on Mars if you are afraid of artificial intelligence. Thank you very much.
Thank you, Carson.
In fact, you can't be a lawyer; you didn't use the full time, probably because you're not paid per hour here.
No, it's because I promised, actually.
Oh, okay. Enough with the lawyer jokes. We have a few questions here; maybe we can display them. I liked that talk, and I think these are very important, very big questions to discuss. I think we have two interesting questions here. The first one, I think you touched on a little: you talked about machines getting smarter and humans getting dumber. The good thing is, I was born in 1965, ahead of the peak.
Yeah, I told you, I was born in '75. Okay.
Hopefully that's the reason for all these things.
So you said it's time to regulate the machines. The question is: if people are getting dumber, wouldn't it be better to regulate the humans instead of the machines?
Yeah, I think if you want to be cynical, that would probably be your answer. And I think we've all had situations in life where we would have believed that was the right answer for the situation. Who should do that? I don't know. Are we smart enough to regulate ourselves?
I mean, this is very philosophical.
I think it goes directly to the next question.
So we know that human decisions are frequently prejudiced. How do we train an ethical AI when we can't trust our own ethics?
That probably goes to the core of the question.
We as humans live a lot with failure and errors, and after all we learn through failure; that, I think, is one of the core ideas of mankind, if I understand it right. And artificial intelligence, as I mentioned, I always see as something perfect in the end. If we compare the two in an ideal sense, I think that's the difference. Our decisions may very often turn out to be wrong, and the AI's are not; maybe those are just two different approaches to perfection, each in its own sense.
So how do we train an ethical AI when we can't trust our own ethics? I guess that as flawed as our ethics may be, so will be the ethics for artificial intelligence in the future. There will of course be mistakes that we make, and we already witness this today, because if we didn't make mistakes, we would probably already have a setup and an idea and would be further along with that question.
And we might also face scenarios where it's virtually impossible to make the right decision. This is the common autonomous vehicle thing.
So you have to decide whom to kill, so to speak. Some decisions are probably virtually impossible to make.
I guess so. And if you want to see it the old-fashioned way, in a legal sense, then in most jurisdictions on this planet such decisions are not allowed. That's what I meant when I was talking about the GDPR prohibiting automated decisions by machines right now. Yeah, cars are not allowed to decide whether to kill a younger person or an older person.
Okay, Carson, thank you very much.
Thank you, Martin.
Applause for.