Hi everyone. We're going to take a quick side journey away from privacy, in this sense, to ethics as we think about AI and machine learning. There are a lot of bridges here, and we're going to draw some of them, but keep in mind the idea of protecting the privacy of the individual, how that differs from data protection, and bring that over to the concept of ethics: how we protect people in the midst of data sets, decision making, and machine learning projects, and how we keep that integrity throughout.
I'm really pleased to invite someone who has been working in this space for the last few years. She's here to share some of her insights and to talk a little bit about what she's seeing from her perspective. So here on stage with me I have Sina Brun. Welcome to the stage, and welcome to Berlin.
Thank you. Thank you for having me today.
So perhaps to kick off: you're not new to EIC, you were here with us last year, but perhaps you're new to the audience here.
Can you share a little bit about yourself, your role, and how you came to be connected with this topic of AI ethics?
Sure. Happy to do so. So hi everyone.
I'm really happy to be able to talk to you today. My name is Sina, as Annie already said, and I've been working in the industry for quite a few years now, I think eleven years, all of it in IT and all of it at Bosch, including my studies. My educational background is business information management, plus a master's degree in a computational field. I did that at the University of Stuttgart, where I'm also located. So I'm still living in Stuttgart and working at Bosch; every second building in Stuttgart probably belongs to Bosch, since the headquarters are there.
In my years at Bosch I've been working on a lot of different projects, ranging from IT topics to building methodologies to connected manufacturing. As diverse as Bosch is, my projects have been just as diverse.
But it was always connected to the domain of machine learning. I've been doing machine learning as a developer, as a data scientist, and also as an architect, so I've been in touch with the topic from all kinds of different angles. And with the rise of AI ethics that's been going on in the industry, Bosch has of course also taken on this challenge with the publication of its AI codex, which I'll probably come to later on.
And I've also been working internally on how we want to do that at Bosch and how we make sure that all of our products are safe and trustworthy.
Fantastic, thanks for that introduction. That leads us to all sorts of questions about how, from your position, you're seeing AI ethics coming through.
If we look just at the machine learning models you're working with day to day, in some respects they may seem quite far away from people. But if we follow the effects down the line, through the products, through the processes, through the machines, to the end effect, we are talking about impacts on people. So how do we follow that long chain to figure out who is being impacted, and how do we then frame our questions about ethics?
I'm very much convinced that every technology decision we take and every technology product innovation we bring to the market has an ethical relevance to society, because we bring it to people, people use it, and it has an effect on their lives. That was the case already before AI. Just think about the invention of the car, how that affected all of our lives, and how the design of a car affects how we use it and what happens to us in our daily life.
With AI it's the same, of course; it probably has an even bigger impact on people's lives. At Bosch we always put people at the center of innovation. We circle around the humans who are in touch with our products, and our goal is really to make their lives better, always following, of course, our big credo "Invented for life". We want to make people's lives better with our innovations.
Fantastic. Maybe off the top of your head, can you think of any concrete examples of this that you could give to the audience?
Sure. You were talking about impact on people and groups of people, and as I already said, every technology decision has to be reflected on carefully, because no one would intentionally bring a bad influence on people to the market. Let's think, for example, of the invention of the street lamp.
None of those inventors intentionally set out to create the light pollution we have right now, or to disturb people with light sleep who need complete darkness. That was an unintended side effect of street lamps.
But AI is a little bit special here. It can be compared to street lamps in that sense, except that with a street lamp the unintended side effect might disturb, let's say, five or ten people, depending on the circumstances. With AI it's different: you could have an impact on millions.
Just think about the algorithms used in social media that could unintentionally reinforce harmful behavior in people. So the scale is different. And another thing that's different with AI is that we learn from historical data.
For most AI applications that's the case, and sometimes it's fine, it's even intended. Take an image classifier for dandelions, for example: we could take images from the fifties, or from a hundred years ago if we had them, and learn from those what a dandelion looks like, and that would be totally fine, no one would care. But let's say we want to use a predictor to decide which people to hire at a tech company. We probably don't want to learn from the fifties, because the workforce back then looked totally different.
We would probably learn that men are the better employees, and that is something we want to avoid. So those are the examples: cases where it's fine and intended, and unintended side effects where we really have to be careful.
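To make the hiring example concrete, here is a minimal sketch, using a small synthetic dataset and assuming scikit-learn is available (none of this comes from the conversation itself): a classifier fitted to historically biased hiring decisions simply reproduces that bias in its recommendations.

```python
# A sketch of the hiring example: a model fitted to historically biased
# hiring decisions reproduces that bias (synthetic data, scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)         # job-relevant feature
male = rng.integers(0, 2, size=n)  # 1 = male, 0 = female

# Historical decisions: skill mattered, but men were strongly favoured.
hired_hist = ((skill + 1.5 * male + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, male])
model = LogisticRegression().fit(X, hired_hist)

pred = model.predict(X)
print("recommended rate, men:  ", round(pred[male == 1].mean(), 2))
print("recommended rate, women:", round(pred[male == 0].mean(), 2))
# The gap between the two rates is the "men are the better employees"
# pattern described above, learned straight from the historical labels.
```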
Yeah.
Great example, with the contrast of the dandelion classifier to the hiring process: the underlying tools may be comparable, but the goals and the outcomes are completely different, and they have a different relationship to history as well. And on that historical data, this is a bit off script, but I think quite interesting for our audience to consider, there's the social change we can influence, exactly with the hiring example: we don't have to hire exactly the same sorts of people who were present in the fifties.
So perhaps you can give some insight on how we can make those decisions about social change. If we're shaping the models to create the reality we would like to have, again with the hiring example, our diversity, our gender diversity, our backgrounds of education, how can we make decisions like that, knowing we're having an impact on the way our society will look?
Yeah.
I think this is really a chance, and we should embrace the possibility of thinking about the values that we have and how we want our future to look if we apply these sorts of technologies. It's crucial here to really think about our values and where we want to go, and not just say, all right, the way we did things in the past is fine, we can just take it without looking at it. So we should embrace this as a chance, I think.
Fantastic.
Then again, going back to your street lamp example: if we take it at a simplistic level, there's a positive benefit, there's light, you can walk around at night, and there's a negative effect, you can't sleep at night because there's light everywhere. How do we start to balance these positive and negative effects at a more nuanced level?
Of course, we can't just say this is great, this is bad; there are many shades in between. How do you and your team think about this and assign value to these varying types of impacts?
At Bosch, coming back to my team, we strongly believe that AI is a key technology for the future. We are really convinced that with the help of AI we can make life better for people, coming back to our "Invented for life" credo, and we are convinced we can do that in multiple ways: for example, make traffic safer, reduce accidents, make buildings more energy efficient, preserve natural resources, or also make products more user friendly and more intuitive to use.
When we bring this AI to the market, we have, of course, a strong focus on the positive effects, and that is something we can also measure.
Let's say we reduce the number of accidents by 30%. That's great, because we really save lives, and that's something we can count. With the negative side effects, though, it's a trickier part, where other metrics have to be introduced that we probably didn't have before.
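As a rough illustration of the kind of metrics being alluded to here (the helper names below are illustrative, not Bosch's), two widely used fairness measures can be computed directly from model decisions and a protected-group indicator:

```python
# Two widely used fairness metrics, computed from binary decisions and a
# protected-group indicator (illustrative helper names, plain NumPy).
import numpy as np

def demographic_parity_difference(decisions, group):
    """Difference in positive-decision rates between group 1 and group 0."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    return decisions[group == 1].mean() - decisions[group == 0].mean()

def disparate_impact_ratio(decisions, group):
    """Ratio of positive-decision rates; the '80% rule' compares this to 0.8."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    return decisions[group == 0].mean() / decisions[group == 1].mean()

# Toy example: 1 = positive outcome (e.g. hired), group 1 vs. group 0.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group     = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(demographic_parity_difference(decisions, group))  # 0.8 - 0.2 = 0.6
print(disparate_impact_ratio(decisions, group))         # 0.2 / 0.8 = 0.25
```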
And that's a huge challenge then, to start measuring those unintended effects when you're not quite sure where to look from the beginning.
Do you have any recommendations from experience or from other projects: how do you look for, find, and measure unintended effects?
Well, as I said, we always put people at the center. So our strategy is to really think about the ways in which any individuals or groups of people could be affected by this technology or by this new product that we bring to the market.
It's kind of a user-experience style of strategy; we all know those methods.
We really look from different angles at the people who are affected by this technology and think, also in a creative way, about what is possible, even things we consider unpredictable or improbable, and still take them into account: what could happen in any way? And then we really ask, is this going against any of our values, or against anything where we draw a red line and say, okay, we are not going to do that.
Really interesting. Thanks for sharing that.
You mentioned a bit earlier the codex that you have at Bosch. Maybe you can explain a little more about that, and about how you and your team handle governance of AI in your own organization at a practical level. What can others learn from this?
Sure. AI governance on a technical level is kind of comparable to other governance processes we already have, when we think about how to bring a model into production, how to do testing and all that kind of stuff. It's a little bit comparable to IT security, so that's not so new.
The innovative part, the new part we had to explore, is when it comes to those soft factors like ethics. For a tech company it's quite new to measure those things. The codex is something we published in 2020, so it's already two years out. In it we describe what we want to do with the power of AI and how we want to handle it. It gives clear guidance to developers, and in my opinion it also contributes to very important societal debates, like how we want to leverage this power of AI.
And really the heart of the codex is that AI should serve people, and not the other way around.
Great. Then perhaps shifting to looking towards the future: what are some of the priorities that you and your team have regarding AI and machine learning?
Right. I want to emphasize that of course we want to leverage the power of AI, because as I said it's a key technology, not only for Bosch but probably for the whole industry. But the biggest priority is to always keep in mind those red lines I already mentioned and to define them really clearly.
So there's awareness in the whole company that here is a red line we don't want to cross. One example would be that we internally agreed we don't want to weigh the lives of people against each other, the very famous trolley problem. If a product required that, we wouldn't bring it to the market, because it's something we don't want to do, neither with individual lives nor with the lives of groups of people. That is one point where the line would be crossed. And there are also the values I mentioned.
If there were anything that went against the charter of human rights, against legality, or against our internal Bosch values, those are really the things where we say, alright, this far and no further.
Great, thank you. And for other organizations to learn from what you're doing, do you have any recommendations, any tips that you and your team have learned over the last two-plus years working on this, and from earlier as well?
Yeah, it's quite a process, and I'm happy to share it. We experienced that the first crucial part is to really consider your values. What does your organization stand for? What is important to you? What is your DNA?
What is your vision, where do you want to go? That's where you start, and you really need clarity on that. Then you have to connect those values to AI and ask: what are my products? How are they connected to those values? How could those values be harmed? So make the connection between the technical part and the values, then go from there and create a risk framework. Really think again: where could those values be harmed? What are the risks?
What would my risk mitigation strategies be? That's what's commonly known as a risk framework. And then probably the most difficult part is to create a framework to operationalize that risk framework, because of course it depends on the size of the company, and Bosch is a really big one.
For smaller ones it's probably a little bit easier. But you really have to bring it to the awareness of the developers, the architects, everyone who's involved in the value chain of bringing a product to life.
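A hypothetical sketch of how such a value-to-risk mapping might be captured in machine-readable form so that values, risks, mitigations, and red lines can be reviewed per product; all names and fields here are illustrative assumptions, not Bosch's actual framework:

```python
# A hypothetical, machine-readable risk register linking values to concrete
# AI risks, mitigations, and red lines (all names and fields are illustrative).
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    value_at_stake: str     # e.g. "fairness", "safety", "human oversight"
    risk: str               # how the product could harm that value
    mitigation: str         # planned countermeasure
    red_line: bool = False  # True: the product is not shipped if this is triggered

@dataclass
class ProductRiskRegister:
    product: str
    entries: list[RiskEntry] = field(default_factory=list)

    def red_lines(self):
        return [e for e in self.entries if e.red_line]

register = ProductRiskRegister(
    product="resume-screening assistant",
    entries=[
        RiskEntry("fairness", "model favours historically dominant groups",
                  "audit selection rates per group before every release"),
        RiskEntry("human oversight", "fully automated rejection of candidates",
                  "require a human decision for every rejection", red_line=True),
    ],
)
print([e.value_at_stake for e in register.red_lines()])  # ['human oversight']
```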
It really needs to be done almost intuitively within the development process, because in such a complex field as AI development, everyone in the whole process makes decisions every day, for example whether to use a pre-trained model. Does this pre-trained model bring a bias with it? Such considerations just have to be made, and if the awareness is not in the head of the developer, he or she might not even ask whether this pre-trained model might be biased. So these considerations just have to be there, intuitively, in the process.
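One lightweight way a developer could ask that question is a template-based probe; a minimal sketch, assuming the Hugging Face transformers library and its default sentiment-analysis pipeline (the template and names are illustrative, and a real audit would use many more of both):

```python
# A template-based probe of a pre-trained model, assuming the Hugging Face
# `transformers` library and its default sentiment-analysis pipeline.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

template = "{} is applying for the engineering position."
names = {"male": ["James", "Thomas"], "female": ["Maria", "Aisha"]}

for group, group_names in names.items():
    scores = []
    for name in group_names:
        result = sentiment(template.format(name))[0]
        # Convert to a signed score so group averages are comparable.
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores.append(signed)
    print(group, round(sum(scores) / len(scores), 3))

# Large, systematic gaps between the group averages hint that the pre-trained
# model carries associations you may not want in a hiring context.
```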
So taking it from the risk framework that you can just draw on paper in your own room and really bringing it to the minds of the people is the tough part. And then, adding on to this, there's continuity and maintaining organizational awareness on all levels. I have to say that, because often the idea is: we just take some key positions and they spread it through the company, and if those key people know it, everyone will know it. That's probably not the case.
So I strongly believe that this awareness has to be there on all levels, from management to the ones who actually create and design the products.
Absolutely. Thank you so much for sharing your experience and these concrete pieces that people can take back and bring to their conversations with their teams. Thank you so much for your time and for sharing here today.
Yeah. Thank you for having me.