So I'm very happy to have the chance to present here today. Like all the other slots, my presentation is planned for 17 minutes, and then I will be happy to receive questions. All right. So, as already introduced, my topic is implementing AI ethics at Bosch. And I would like to start with the question: why is AI ethics even relevant for big companies?
I mean, we heard some perspectives already today, but as an introduction I would like to touch on the topic once more. AI itself is a technology that sometimes feels a little uncomfortable to people, because it challenges the core principles of human self-esteem. Questions can come up like: if knowledge processing and learning are the key to supremacy, what will happen if humans are not the most intelligent entities anymore? That can sound a little scary to people.
With the goal of generating trust in this technology and, at the same time, communicating competent risk awareness, many firms start to communicate their values and risk awareness when utilizing AI. The concrete risks that arise from the ethical aspects of AI are a potential loss of reputation, of course, and insufficient trust in the products, which results in less revenue. So it's quite clear to us that AI introduces ethical risks to a major extent. At Bosch we define AI as a key technology, and we are convinced that AI is able to improve the quality of life of many people.
So we defined our way of implementing AI ethics, and this is what I would like to take a closer look at over the next minutes. All right, but where to start?
So, as we already heard in the previous talk today, this topic is not so tangible. So how can we tackle it? We all know that AI ethics doesn't come in a box: with varying values among cultures, and also varying use cases of AI, it's not so easy to grasp. So it's clear that a data and AI ethics program has to be tailored specifically to the business and the industry's needs. As it's so diverse, here are five steps towards building a customized, operationalizable, scalable, and sustainable data and AI ethics program.
The steps that I would like to go through today are: identifying existing infrastructure that we could use for this undertaking; identifying existing industry standards, as we don't have to reinvent the wheel once again; creating the ethical risk framework for the company; then creating guidance to operationalize the framework, to help the associates; and, as a final step and probably the most difficult one, building the organizational awareness. I will go through each of these steps and explain what we did and what we are still doing. All right.
So first of all, identifying existing infrastructure that a data and AI ethics program can leverage. For Bosch, this is the Bosch Center for Artificial Intelligence; for short, we call it BCAI internally. The Bosch Center for Artificial Intelligence has locations all over the world and around 270 AI experts. The locations range from Sunnyvale and Pittsburgh in the US, to Tübingen and Renningen in Germany, to Israel, India, and China. And it's built up in five pillars. We have AI consulting, which supports project strategies with AI expertise inside of Bosch.
We have AI enabling, to accelerate the digital business by training internal Bosch employees. We have AI marketing, promoting Bosch AI internally and externally. We have AI research, which probably speaks for itself. And we have AI services, which drives commercialization by scaling AI solutions with engineering expertise and AI knowledge. Combining those competencies, Bosch is focusing mainly on industrial AI. All right, the second step, as we don't need to reinvent the wheel ourselves.
As I said, identification of existing industry standards. For the ones who've been listening the majority of the day, this might sound familiar, because the think tank, the Dutch think tank, also leverages this industry framework. We also use the High-Level Expert Group on AI. As explained on the first slide, trustworthiness is key to successful AI business, and Bosch obligates itself to trustworthy AI. The High-Level Expert Group, which is an initiative of the EU, sees this the same way.
So this is a great fit for us as an industry framework that we can leverage. Some words on the High-Level Expert Group: as already said, this group is an EU initiative, and its objective is to support the implementation of the European strategy on artificial intelligence. The members of this group are renowned representatives from research and the economy all over Europe. Bosch is also represented in this group by Dr. Christoph Peylo, the head of the Bosch Center for Artificial Intelligence.
This year, they published the Ethics Guidelines on Artificial Intelligence, where they put forward a human-centric approach to AI and define key requirements for an AI to be trustworthy. The key requirements are, as we already heard before: human agency and oversight; accountability; societal and environmental well-being; diversity, non-discrimination, and fairness; transparency; privacy and data governance; and technical robustness and safety. The major pillars of their publication are legality, ethical values, and robustness. Okay.
So, having an internal infrastructure and having external frameworks that we can reuse, we can now create the Bosch Codex on AI and ethics, our own framework that we want to tailor specifically to our own needs. So with the input of the High-Level Expert Group and the social values which are already part of Bosch's DNA, we created, as I said, the codex that we want to be the guidance for ourselves. This undertaking was a joint community effort with participation from all levels and departments in the company. This completes the third step.
So, as I said, with the creation of the data and AI ethical risk framework: the major pillars of our Bosch AI Codex are always respecting the frameworks of social consensus. This reaches from the universal human rights, which are the global consensus, to the IoT principles, which are a Bosch-specific consensus regarding IoT development. We want to stay true to ourselves, and we want to stick to our internal values as defined in the mission statement "We are Bosch" and our credo "Invented for life". AI should be safe, robust, and explainable.
We already heard a lot about explainable AI during this day, which I personally found really interesting. We also don't want to build human-out-of-the-loop products, as we want AI to be a tool for people and not the other way around. The major pillars for our products are trustworthiness, as interpreted by the High-Level Expert Group of the EU; transparency, so that AI behaves in a comprehensible, understandable, and plausible way; and fairness, which regards the transparency and origin of the kind of data that is used to train the AI. All right.
So some of you might remember the next step of the process, which is operationalizing the high-level framework. And this is where the fun begins. Values like fairness are very easily watered down, because, I mean, who would describe her own actions as unfair? Who would say of herself that she's unfair, or that she disrespects the universal human rights?
I mean, no one would say that, but still we need to be more precise to also transfer those values to our actions in regards to AI. In interdisciplinary teams, discussions often start with a view on ethics and data and AI. One thing that occurs regularly when thinking about the topic in the business context is the fight of the academic view versus the business-oriented view. The academic would ask: should we do this? Will this be good for society as a whole? Does this contribute to human flourishing? These are essential questions. On the other side, business would ask: given that we are going to do this, how can we do it without making ourselves vulnerable to ethical risks? You see that those are very different approaches to a topic. In businesses, the ones asking the questions are in most cases enthusiastic engineers, data scientists, and product managers. They know the business view in and out.
And they also know how to ask the business-relevant and risk-related questions, because they are the ones making the products to achieve the business goals. What they sometimes don't have is the kind of training that academics receive; as a consequence, they miss the skill, knowledge, and experience to respond to ethical questions systematically and efficiently. So it is key to find the best of both worlds and to start on the topic of AI ethics in a business context. And as I understood, this is also the topic of the PhD that we heard about earlier, so I would be thrilled to exchange with you on that.
Okay, but how did we start at Bosch? So what did we do to tackle the step of creating guidance and tools for product managers and to operationalize the framework? We formed cross-functional working groups, which were led by the BCAI, the Bosch Center for Artificial Intelligence. The members of these working groups came from all over the company, such as security, automated driving, central audit, or also central quality management.
Our goal in the working groups was to create guidelines and guidance for four central topics: AI business development; operations and maintenance for our own AI products; deploying third-party AI products; and also how to control, work, and interact with decision-supporting AI systems. All of those results were cast into corporate best practices, with a collection of the relevant questions that everyone who is in touch with AI should ask themselves.
So, to make this a little more tangible: one of our major aims was to heal the pain points of AI projects. Because when talking to colleagues who are involved in AI development, they often tell me: "As a project lead or project developer, I want to be ethically compliant, but I don't know how. What do I do?" People want to be compliant, but they are very insecure about how to do that, because, I mean, as Bosch is also very heavily involved in the automotive industry, we all remember what can happen if you are not compliant.
So people are sometimes insecure, and we want to help them tackle all these insecurities. And as I said, this is what we want to answer with the corporate best practices. So to start off, we recommend asking the following questions. First of all, the category: what does the system look like? Is it human-in-command? Is it human-in-the-loop? Is it human-on-the-loop? This gives a hint about the criticality of the system.
Second: who are the participants that are impacted by the system and the system's decisions? One would think that this question is very easy to answer for, let's say, a navigation system. The aim of a navigation system is to guide the driver as efficiently as possible from A to B. Easy. So the major participant is the person driving the car, or maybe also the co-driver, who also has access to the console. But when you think about this case systematically, there are more participants coming up, for example people whose lives are also impacted by the recommendations of the system.
So what if the fastest route leads through a residential area where people want to enjoy their garden and peace, instead of the route over the highway? So how do we prioritize the needs of the participants? Which participant is more important than the other one? So as you see, the questions on this slide can be considered as opening questions for a whole discussion. The next question, the assets, which assets are involved, is probably easier to answer: which hardware, which software, which data. Then the adaptiveness: is it an adaptive system that we are dealing with, and also, how are the responsibilities distributed for the adaptive system? Knowing all this, a project lead or a developer has already considered the crucial questions and can start thinking about the relation and impact of assets and participants, based on central AI guidance and guidelines. All right.
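The opening questions above (oversight category, impacted participants, involved assets, adaptiveness) can be sketched as a simple intake checklist in code. This is a hypothetical illustration only, not Bosch's actual tooling: all class, field, and hint names here are assumptions made for the example.

```python
# Hypothetical sketch of an AI-ethics intake checklist, following the
# opening questions from the talk. Names and review hints are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class OversightModel(Enum):
    HUMAN_IN_COMMAND = 1   # a human makes the final decision
    HUMAN_IN_THE_LOOP = 2  # a human confirms each system decision
    HUMAN_ON_THE_LOOP = 3  # a human monitors and can intervene

@dataclass
class AIProjectIntake:
    name: str
    oversight: OversightModel
    participants: list[str] = field(default_factory=list)  # everyone impacted, direct or indirect
    assets: list[str] = field(default_factory=list)        # hardware, software, data
    adaptive: bool = False                                  # does the system change after deployment?

    def criticality_hints(self) -> list[str]:
        """Derive simple review hints from the opening questions."""
        hints = []
        if self.oversight is OversightModel.HUMAN_ON_THE_LOOP:
            hints.append("Low direct oversight: clarify intervention responsibilities.")
        if self.adaptive:
            hints.append("Adaptive system: define who is responsible for post-deployment behaviour.")
        if len(self.participants) <= 1:
            hints.append("Few participants listed: check for indirectly impacted groups (e.g. residents on a routed street).")
        return hints

# The navigation-system example from the talk: the driver is the obvious
# participant, but the checklist prompts a search for indirect ones too.
nav = AIProjectIntake(
    name="Navigation system",
    oversight=OversightModel.HUMAN_ON_THE_LOOP,
    participants=["driver"],
    assets=["console hardware", "routing software", "map and traffic data"],
    adaptive=True,
)
for hint in nav.criticality_hints():
    print(hint)
```

A structure like this doesn't answer the ethical questions; it only makes sure a project lead has asked them before the discussion about the relation and impact of assets and participants begins.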
The final step: building organizational awareness, which, as I already said, is probably the most difficult and longest one. This means for us: transferring the guidance and guidelines into the culture of Bosch. We want to make all associates understand that living the Bosch values also means transferring them into AI products. The major challenges of everyday work that we identified are, for everyone, to understand the role and relevance of ethics in AI and non-AI products, and to identify a starting point for how to deal with it.
One message that we want to be very clear about is that giving it thought is key: starting to think about the ethical dimension of AI is the first step to also transferring those reflections to everyday work. And finally, where do we want to be in the future? For the ones who don't recognize the man on the left: this is good old Robert Bosch. The goal is that not only developers and project leads are aware of the ethical dimension of AI, but every associate. One example: let's say an HR associate is using AI-based software to filter incoming applications, which is quite convenient and can also be very helpful.
We want the HR associate to critically question the recommendations that are based on AI. And by that, we don't want our associates to distrust AI systems; we want to give them the skills to decide why and when they can trust an AI-based system. Additionally, as an organizational add-on, we want to have an interdisciplinary ethics commission, a committee that is available for every individual request, all of the above always following the "Invented for life" credo.
All right, to sum up once again the five steps that we went through in the last couple of minutes and the solutions that we came up with. Identifying existing infrastructure: this is, for us, the Bosch Center for Artificial Intelligence. The existing industry standard that we leverage is from the High-Level Expert Group of the EU. The ethical risk framework that we created for ourselves is the Bosch Codex on AI and ethics. For creating guidance for the associates, we formed the cross-functional working groups. And building the organizational awareness is where we are facing our current challenges. So we checked every box, but this does not mean that we are done: as the technology is constantly changing, those steps also need to be repeated in an iterative approach. So basically, when dealing with an emerging technology, which AI definitely is, its governance also has to evolve constantly and needs to be adapted repeatedly. And this is what I would like to make my closing statement. Thank you very much for your attention. Here are my references, and I'm looking forward to your questions.