Thank you very much, and a warm welcome, in my case from Stuard, to all of you. My name is Ling. I'm a senior consultant with PD; PD is the German state's in-house consultancy. I'm also the author of a number of books and papers, and lucky for you, they're all in German, so I'm not going to be summarizing those. What I will talk about today is AI ethics. And I thoroughly dislike everything about that term; why that is, is something I will discuss with you. So what can you expect in the next 20 minutes?
The first part of this presentation will be de-buzzwording, because this is a hype and there are a lot of buzzwords that need to be clarified. And I'll be trying to talk with as little technical background as possible. If you want to go into the details, you'll find my contact details on the last slide, and I'll be very happy to go into the nitty-gritty of how to implement all of this.
So: first, de-buzzwording. The second part is about the tool that artificial intelligence represents and what we can do to use it wisely, or less wisely.
The third part is a very clear message of enabling, because a lot of my clients in projects, people with non-technical backgrounds, step back when it comes to implementing AI systems or designing them. And I want you to step in, because you do not need a technical background for most of what designing AI entails. And as the camera zooms out, so to speak, I want to use the last minutes to talk to you about what policy makers and regulators can and should do to create an ecosystem.
And I will be talking about what I call the impossibility injunction, where companies tell us that it's impossible to regulate AI. And that, frankly, is BS. Part of the goal of my presentation today is to calibrate your bullshit sensor.
And that's the last time I'm going to use that word in full. So: your BS sensor, to detect certain claims that I think are used as an excuse. Three goals for today, then.
First, I want you to know what AI can and cannot do. Second, I want you to know what you can and should do to use it well.
And third, I want you to know what the policy makers and regulators around you can and should do. And then I'd be very happy to discuss, and I mean that, so please prepare questions. I'll start with the de-buzzwording. Artificial intelligence, first and foremost, means that a machine can do something at least as well as a human being. That's it. And we differentiate between weak AI, which means the machine can do one thing at least as well as a human being, and strong AI, which means it can do a lot of things at least as well as we can.
An example of weak AI is Google Maps, because it can navigate better than I can; that's not hard to do. An example of strong AI would be any sci-fi spaceship control unit. And the fact that that's the example I'm giving already tells you that, as of today, we do not have strong AI applications. So that's the first distinction you need to make, and it will be important later on to differentiate between one thing and all of the things. How do we do that technically?
And again, I'm going to try to keep the tech stuff out of this. An algorithm is basically a static recipe for apple pie: every time you go through the steps, you're going to get apple pie. There's no way this is going to come up with anything else. Machine learning basically works by presenting the machine with the ingredients and an image of apple pie and telling it to come up with a recipe of its own. So you can expect different recipes that you've maybe never tried, but they will give you apple pie if all goes well.
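To make that contrast concrete, here is a minimal, purely illustrative Python sketch (the apple pie framing is just the metaphor above, not any real project): a static algorithm is a rule we write by hand, while machine learning derives its own rule from labeled examples.

```python
from sklearn.tree import DecisionTreeClassifier

# The "algorithm": a fixed, hand-written recipe. Same input, same output, every time.
def is_apple_pie(has_apples: bool, has_crust: bool) -> bool:
    return has_apples and has_crust

# "Machine learning": we only supply examples (ingredients plus the label "apple pie or not")
# and let the model work out its own recipe for telling them apart.
X = [[1, 1], [1, 0], [0, 1], [0, 0]]  # columns: has_apples, has_crust
y = [1, 0, 0, 0]                      # 1 = apple pie, 0 = not apple pie
model = DecisionTreeClassifier().fit(X, y)

print(is_apple_pie(True, True))       # the rule we wrote ourselves
print(model.predict([[1, 1]]))        # the rule the machine learned from the examples
```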
And the thing I'm focusing on, and that I want to talk to you about today, is algorithmic decision-making. So the machine works out the best way to make apple pie, actually bakes it, and people have to eat it. And that's where it becomes important, because as long as it's just theoretical recipes, we're all up for that. As soon as I have to eat the result, I'm getting suspicious.
So with regard to the tool itself, I think artificial intelligence can best be compared to fire. It's a very powerful tool, and we can use it to make very nice and yummy steaks or to burn down Notre-Dame.
And which of the two we do depends on how we use it. So one BS detection trick that I like to use when I read about AI in the papers is to replace the words "artificial intelligence" with "fire". If the headline still makes sense after that, I keep reading; if it doesn't, I stop. For example: "Is AI good?" Is fire good? I don't know, it depends on how you use it. So I'm going to stop reading the question "Is AI good?", because it's too broad.
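Just to show how mechanical that trick is, here is a throwaway sketch; the headlines are made up.

```python
# A deliberately silly helper: swap "artificial intelligence" / "AI" for "fire"
# and see whether the headline still means anything.
def fire_test(headline: str) -> str:
    for term in ("artificial intelligence", "Artificial Intelligence", "AI"):
        headline = headline.replace(term, "fire")
    return headline

for headline in ("Is AI good?", "Artificial Intelligence will transform everything"):
    print(fire_test(headline))
# "Is fire good?" is too broad a question, so stop reading.
```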
Now, I want to tell you three things about what we can do technically to do the steak version of using AI. And the first one, I think, is not going to be a surprise: data quality. I'm going to read the picture out loud so you can stay with me and not get distracted. The guy says: "That was surprisingly easy. How come the robot uprising used spears and rocks instead of missiles and lasers?" The reply: "If you look at historical data, the vast majority of battle winners used pre-modern weaponry." And the comic encapsulates something I find a lot in projects: they often use big data without looking into whether it is correct data. So: is the data up to date?
If you use historical data from 50 years back, you're going to get a lot of bias and a lot of false representations. Second: is the data representative? Facial recognition data samples, for example, often consist predominantly of white males, and those systems have a proven bad track record at identifying images of people of color and of women.
And the third one is: do you know the causalities behind your data? Because it's nice to know it correlates somehow, but do you know why?
For example, there is a credit scoring agency that discovered that if you have bought Converse sneakers within the last six months, you're less likely to pay back your debt. I agree that that might correlate, but as a sneaker wearer myself, I'd be very curious what the causality between default risk and Converse sneakers is. So please check your data for its quality.
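A minimal sketch of what those three checks can look like in practice, with entirely hypothetical file and column names, might be:

```python
import pandas as pd

# Hypothetical training data; the file and column names are placeholders.
df = pd.read_csv("training_data.csv", parse_dates=["recorded_at"])

# 1. Is the data up to date?
print("Oldest record:", df["recorded_at"].min())

# 2. Is the data representative? Compare group shares with the population you care about.
print(df["gender"].value_counts(normalize=True))

# 3. Do you understand the causality, or only the correlation?
print(df[["bought_converse", "defaulted"]].corr())
# A high correlation here still tells you nothing about why;
# that question has to be answered by people, not by this table.
```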
The second thing is: understand your tool. Again, it's something I get a lot when we procure AI from vendors for state use: they tell us that it's too complicated to be explained. And again, your BS sensor should flare up at that, because if it's too complicated to be explained, then you should probably not use it.
So the second thing we can do to actually create good AI is to take the time and understand how the tool works: not down to the last line of code, but the general logic. What is it used to infer, with what model and what assumed causality, to come up with what result, and how is that result then processed? And please do not stop at the technology; look at the implementation, because there are studies showing, for example, that if you present the result of an AI judgment as a traffic light, people are more likely to follow it than if you present it as a percentage, because a percentage makes us skeptical.
How does it come up with 27 percent? If we get a red light as the result of an AI judgment, we stop, because psychologically we respect traffic lights, at least up to a certain age. The third thing that a lot of the AI projects I work with get wrong is attributing responsibility.
So: a dog asking, "If I poop and you spread it around the room, who gets the blame?" And responsibility is never just one person. So you need to differentiate: what is the technicians' responsibility? What is the developers' responsibility? What is management's responsibility?
For example, the developers will not and cannot come up with the goals of your AI system; that's management's responsibility. What are the users' responsibilities, and what is the responsibility of the people the system is used on? And that's also something that's very ambiguous when you read about AI systems: who uses the AI system on whom? So we need to differentiate: are the users the people or things being evaluated, or are the people doing the evaluating the users? And who needs to do what?
For example, Google has been receiving a lot of criticism over a document from an EU meeting that got released yesterday, in which they say that users of their products need to make sure the products don't discriminate.
And that's not entirely false. Of course, Google needs to make sure first and foremost that their products don't discriminate, but it's also true that we as users need to work with the technology.
So attributing responsibility along the chain of developing, implementing, using, and evaluating an AI system is a task that needs to get the time and attention it deserves for us to do AI well. And time and attention does not only mean developers' attention. What I'm about to show you here is very important to me, because, again, my clients often refrain from joining the discussion about goals and about system design because they're non-techies. I'm going to show you six steps to a good AI system.
From my perspective, I'm going to show you how many of those require programming skills, and, spoiler alert, it won't be too many. So you start by defining your goals. What is this AI system of yours supposed to do?
Does it sort bad products off your production line, or does it sort people?
Why are you even using it? The second step, and here we come back to the image about the ancient weaponry, is to choose the data that actually serves your purpose.
And again, please do not only look for big data samples; take the time to look into your data sample and ask whether that particular data gives you a qualified picture of what you're trying to achieve. The third step is your model.
And again, this requires math, yes, but not programming skills. And it's something that should be explained to you; if nobody can explain the model to you, something should smell fishy. That holds for static algorithms; for machine learning, that part is done by the machine itself.
And even then it's explainable. Machine learning often gets the excuse that you do not need to explain it, that you cannot explain it, because the machine does its magic in a black box, and you've certainly heard that term.
And then we can supposedly only see the result. But there are technologies, there are methods, to make machine learning results explainable, and you should not be deterred from asking for that kind of transparency when procuring or developing your system.
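One such method, and only one of several, is permutation importance: shuffle one input at a time and see how much the model's performance drops. A minimal scikit-learn sketch on made-up data could look like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Made-up data standing in for whatever your system actually scores.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Features the model leans on heavily hurt the most when scrambled,
# which gives you a first, explainable view into the "black box".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```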
The fourth step is the only part where you actually need to program: turning everything we discussed before, your goals, your data, your model, into the actual system. That's something your developers need to do. And then you need to train the appliers, because you cannot just push the system into an existing field of social interaction. For example, there is a technology I'm certain you've heard of called predictive policing. The idea behind it is that burglars strike repeatedly within a given area. So the police monitor burglaries, and once they find out that burglars are striking in a certain area, they dispatch resources, meaning police officers, and the hope is to catch more burglars.
The problem is that when they implemented this, they didn't account for any other sort of theft. So yes, burglaries went down, but bike thefts went up, because once the burglars found out about the system, which wasn't too hard, because there were press statements about it, they just cut back on burglaries and switched to bike thefts, for example.
So train your appliers. And last but not least, evaluate your tool: look into whether you actually achieved the goals you defined up front, whether there were any unexpected consequences, like the rise in bike thefts in your area, and recalibrate, again and again. The whole message of this part of my presentation is: please get involved in the discussion and the debate in your organization, or in any context in which you are asked to debate ethical AI or responsible AI or whatever hype buzzword gets put on this. It's basically AI governance.
The problem is that our responsibility ends at the boundaries of our organization, and we cannot go further than that. And that is why I think we need an ecosystem to use AI safely. We need rules, and we had the same problem with fire. We often hear that regulating AI is impossible: because it's too dynamic, because it's developing too fast, because it's too complex and people don't understand it, because it will stifle innovation. There are a lot of arguments against regulating AI, and most of them contain a kernel of truth. But the thing is, not regulating AI is the same as not regulating fire.
That just does not work. So what do we need? We did it before with fire. In Germany, at least, we have regulations on how to build houses: to make them fireproof, to have escape routes at every point. So before designing our house, or in this case a system, we need to prove that we have thought about what the exit strategy would look like.
And then you have smoke alarms in every room, so if something goes wrong, there is an automatic reply, a piece of feedback that tells you: there is smoke in this room, is this correct? The algorithmic counterpart to this would be to implement feedback loops into your algorithms.
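In its simplest form, such a smoke alarm is just a recurring check on the system's own outputs. Here is a sketch with made-up group labels and an arbitrary threshold, not a description of any real system:

```python
# A minimal "smoke alarm": compare how often the system flags each group
# and raise an alarm when the gap exceeds a threshold chosen up front.
ALERT_THRESHOLD = 0.10  # illustrative value, to be set by the people responsible

def smoke_alarm(decisions: list[dict]) -> None:
    rates = {}
    for group in {d["group"] for d in decisions}:
        subset = [d for d in decisions if d["group"] == group]
        rates[group] = sum(d["flagged"] for d in subset) / len(subset)
    gap = max(rates.values()) - min(rates.values())
    if gap > ALERT_THRESHOLD:
        # In a real system this would page a human, not just print.
        print(f"SMOKE: flag rates differ by {gap:.0%} across groups: {rates}")

smoke_alarm([
    {"group": "A", "flagged": True}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True}, {"group": "B", "flagged": True},
])
```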
Then you are not caught by surprise the way Google was when its image recognition, for example, labeled Black people as gorillas; they should have checked for this. One example where this was done very well is Twitter's META team, because they ran a bias bounty hunt on Twitter: for 24 hours, hackers could find biases on Twitter and report them to Twitter's own META team, and the best, or in this case worst, bias would get a reward, a monetary reward. So Twitter actually incorporated a sort of smoke alarm and gave a monetary incentive to report smoke in its own house.
And that is, I think, a very good example of how regulating these kinds of systems could work. Then, once the damage is done, we have firefighters: trained professionals, emergency teams who come in and fix whatever needs fixing, or whatever can still be fixed.
And again, that's something I miss with algorithmic systems, because in Germany, and I think Switzerland, we have AlgorithmWatch, which is a watchdog NGO that does some of this, but they cannot go inside the organization. So they can only report that something is broken; they cannot fix it. I would ask for emergency teams who can actually respond, the same way we have them for cybersecurity: if you have a cybersecurity problem, there are response teams who will help you get through it. I would love to have the same for high-risk AI.
And last but not least, we train our citizens in how to react when a fire alarm goes off. The AI regulation that the EU has proposed does not provide any route for citizens to give notice if something has gone wrong.
Organizations can notify the notifying bodies that the regulation draft provides for, but there is no citizen route to complain about being misjudged.
And if I look at the algorithms that administrative bodies, so my clients, are currently using: they are being used on people. They are being used to assess unemployment benefit rates or other aid; in the US, for example, they are used to judge whether prisoners should go free or not. These are vital decisions, and citizens cannot escape their consequences. It's not like: if I don't like Amazon, I'll buy it on eBay. If I don't like the algorithmic judgment on whether I can get out of prison, I'll stay in prison. There is no alternative system that I can turn to.
So we need complaint mechanisms for citizens who feel they have been judged unfairly, by state actors but also by private actors, if those private actors' decisions have that kind of vital impact on people's lives.
Now, this ecosystem that we have created around fire is not fireproof. We have pyromaniacs, we have accidents; Notre-Dame still burned down a year ago. So it will not make us absolutely safe, but without it, the chances of falling victim to AI gone wrong are much higher.
So even if a system like this is not perfect, even if the regulation is not as dynamic as the field is, I personally feel that if we do not regulate this, if we leave it up to chance or to companies to do it themselves, we as a society are in grave danger, because these algorithms will make decisions around us that impact our lives; think unemployment benefits and prisons. And most of the mistakes made in the projects I work with are not made out of evil genius. Nobody is stroking their cat and aiming for world domination.
Most mistakes come from the banality of evil: from not having thought things through, from not having defined goals, from not having taken the time to deal with data quality, or with questions like: what are the criteria I'm using?
And do I understand those criteria well? So I think the most important leverage we have to do good AI, or ethical and responsible AI if we cut the buzzwords, is to take the time and understand what we're doing.
And a regulatory framework would help us and give us guidelines on what we need to do and what steps we need to go through. And if something goes wrong, there is an infrastructure that will save us: there are smoke alarms, there are fire teams, and we have been trained to deal with this. This is my very short and very dense five cents on AI ethics, and I'm very, very eager to answer your questions. If we keep those four things in mind, I think the steak is ours. I hope most of you will have a steak at lunchtime. Please contact me if you want to go into the nitty-gritty details. And now I'm open for your questions.
Thank you very much.