Thank you. Okay. So thank you everybody for having us here today. We're going to talk about building trust in AI and how and why mainstream representations of AI are a key element to consider in building trust. Next slide please.
So we have already introduced ourselves, so we can go to the next slide. Okay. Let me start with where this presentation came from. It actually emerged from the three of us complaining to each other about how difficult it was to find suitable images for our AI-related work.
We somehow all felt uncomfortable with the current representations of AI, especially when talking about AI ethics and its social impacts. And we wanted to better understand the problem and think about what we can do. So before we start, we would like to understand where you and your organizations stand when it comes to these depictions of AI, with a lot of human-like robots, brains, and suits, and more. So please take the poll, and feel free to use the chat if you would like to contribute more about your experience with these depictions of AI. The first question of the poll is: have you worked with an organization or project which uses images like this, and might your organization be featured in the media with images like this? So feel free to contribute, and we can go to the next slide.
So you might be wondering why it's important to think about how we represent AI. As the buzz around artificial intelligence has increased, so have the issues around trust.
There's an increasing polarization in the discourse around AI, automated decision systems, and automation. On the one hand, there's excitement over the transformational efficiencies and life-enhancing, even world-changing applications that AI can bring. This is the AI utopia, which is represented in this scenic view here.
Next slide, please. On the other hand, there are high-profile people and platforms warning that automation is the precursor of mass unemployment, that data-driven technology already surveils and shapes our opportunities and identities, and that machine learning is the trigger of an unparalleled amplification of biases and inequalities. And that AI will be weaponized and ultimately lead to the technological singularity: the moment when we as humans lose control of machines, leaving us unable to intervene in systems we no longer control. So with visions like these embedded in the movies we've grown up with, from HAL to the Terminator, from Blade Runner to The Matrix, there's little wonder that there's fear of a future in which we lose control.
And indeed, there are few roadmaps for translating ethical principles into practice, and not yet enough mechanisms to regulate the deployment and uses of AI. Next slide please.
And these concerns are growing. The global increase in reliance on digital technologies as a result of the pandemic has started to surface, among the wider population, the hidden dangers of AI: the bias embedded in the code, the exclusion and marginalization, and the rapid move to automation. There's now more of a spotlight than ever on the capabilities and limitations of the technology that we as organizations and consumers are turning to in order to enhance every aspect of our performance and our lives, and to which we are devolving decisions and operations. Next slide, please. And so we come to trust. Businesses and organizations in a market economy are based on trust between stakeholders, that each party is acting in good faith. Yet even before the pandemic, in January 2020, the Edelman Trust Barometer reported that for the first time in 20 years, a growing sense of inequity is undermining trust in developed markets. Against this background, it's interesting to see that if you do a Google image search for "trust in technology", you get an array of images on a similar theme. In May 2018, digital sociologist Lisa Talia Moretti led a piece of research called It Speaks that explored the language used within the AI industry. And one of the things she looked at was this use of disembodied robotic and human hands. She says, and I quote: "We've often seen the recreation of Michelangelo's Creation of Adam."
What is really interesting in these images is that from 2008 to 2010, the robot's hand is modeled from Adam's hand. And the human hand is modeled from God's hand.
However, in 2011, the hands suddenly switch positions. The human hand representing Adam moves to the left, and the robot hand representing God moves to the right, and is therefore in a higher position. Questions arise from these images: who has the power to make, create, or destroy life? Is it human or machine? So that's her quote. I found it very telling that the images portraying trust not only suggest that the people who need to trust technology are white males in suits, and that technology is something like us that we can metaphorically shake hands with, but also too often represent trust as meaning that one party has this God-like, unquestionable quality, which doesn't sound like the basis of many trusting relationships. And so, next slide please, and handing back to Buse.
Okay. So before we explore what we can do about it, let's continue looking at the most common patterns that we see in these representations, what they make us think about, and how they make us feel. What we are going to show you is not at all an exhaustive collection of all the representations out there and all the patterns that we can see in them.
But we are focusing here on just a few of them during this talk. So the question that comes to my mind when I see a humanoid robot like this: I ask myself, how much does this make me think about the many ways in which AI is already being used in our society, in diabetes care, in banking, in hiring, in education and healthcare, and more? Next slide, please. And why do we represent robots in professional settings as white, male, Caucasian robots? What does this actually mean? Next slide please.
But we see woman-like characteristics, for example the voice of Amazon Alexa, when artificial intelligence systems are made to serve us, or when sex robots are represented with a woman's body. Next slide. And another common pattern is that we keep referencing the human brain with electric circuits in these images. Next slide please. So we are going to dive deeper in this section into this mental connection that we have between intelligence, the human brain, and humanness, and try to understand where this might be coming from.
Because I believe that human-like representations of AI are partly caused by this association that we have between intelligence and humanness. This mental model is actually the fruit of a historical process. It is only during the Middle Ages in the West that human intelligence started being represented with the brain, through anatomy, art, science, religion, and more; before that, scientists, artists, and others were seeking to locate the human soul.
So intelligence moved from the heart to the brain only during the Middle Ages and the Enlightenment, and the mind-body duality in Descartes' thought became predominant and paved the way to thinking of thought as something that can be mechanized, a consequence of a mechanical instrument: the human brain. And we can see that the idea of artificial intelligence, and this willingness to replicate the human mind through artificial intelligence, is rooted in this belief. Intelligence is also the marker of humanness, and related to power in the Western philosophical tradition. Dr. Stephen Cave's work actually shows how the concept of intelligence has been used to form systems of power, such as colonialism and patriarchy, throughout history. And you can check the quote from Plato's Republic that he uses in his article, which I cited at the bottom of this slide. So the intelligence in the term "artificial intelligence" is a historically loaded concept. When we create statistical models that mimic intelligence, by association we tend to attribute human-like qualities and expectations to them. Next slide, please.
So these representations actually show us more about ourselves and our existing social and political landscapes than about the technology itself. We call this attribution of human characteristics and behaviors to robots anthropomorphism, and it creates certain social and policy implications that we need to think about, because it gives way more power and agency to AI systems, meaning statistical models, than they should currently have. And it blurs the lines of responsibility and accountability in the case of harm.
And therefore it confuses policymakers and regulators. For example, if you think about the many hundreds of AI ethical principles that have been published over the last years, we see that in these texts AI is referred to and asked to be fair and ethical. But actually, is it the AI, or the people, organizations, systems, and resources that organize around AI, that should be ethical and should not produce harm?
And when representations of AI have these race and gender dimensions, AI being a white male robot, or a woman robot that serves, they actually reflect current power structures and depict an unequal and non-diverse picture. And this picture drives women and people of color away from the field.
And if we look at the statistics, less than 10% of the researchers in major tech companies' AI laboratories are women and other minorities. And lastly, these utopian and dystopian narratives that Tanya talked about at the beginning of this presentation create a sort of cognitive distance in the minds of the general public, who don't engage with AI on a daily basis or as experts, right? Because this automatically makes us think of AI as something that is in a distant future or in a movie.
And it hinders our collective ability to reclaim our agency and decision-making power to shape the technologies that we want, that benefit us. So thinking about the social impacts of AI also necessitates thinking about the representations of this technology. And if you would like to understand better and dig deeper into this, I recommend you read the research on AI narratives. I'll hand over from here.
Thank you. So, next slide. And thanks, Buse, for explaining the hidden implications and problematic nature of these images.
And now we'd be really interested to hear from you: how many of you have previously felt motivated or able to challenge any images like this when you see them, or has it been too easy to accept them as just normal? This session, for us, is the start of finding out how we can help people to feel empowered to take a role in making AI more accessible and inclusive through images. So please put your thoughts in the comments for us to get feedback, along with any ideas, examples, or alternative images that you might have. Next slide, please.
So changing images shouldn't just be about ethics-washing, though. Although we're looking for alternatives, it's not just about making things look better. It's an opportunity to think about honest images which help people to understand what AI actually is. And until the general population gets more literate about AI, how AI is used in their lives and how it manifests, they'll not be able to participate in the critical thinking or the dialogue which underpins a decision on what technology we deem worthy of our trust.
Images reminiscent of AI movies make people more prone to reject and fear these technologies, and images from the marketing briefs of companies who want to bombastically hype up the idea of general or super intelligence, in a bid to seem super smart, don't foster transparent relationships between stakeholders.
So practically, what can you do?
Well, what are the next steps? We appreciate you might not be an artist or in charge of your organization's design or visual communications, but we hope that making the case for why this is so important can empower you to do at least one of the following: audit your company's use of images, and if there are any like the ones we've discussed, explain why they might be bad for business; seek out artists and fund their work, attribute them, and introduce them to others. And you can support our project.
You can suggest organizations in the comments, or contact us on the emails or Twitter handles that we've put here. So thank you very much. We'd love to take some questions. I was hoping to be in the networking lounge afterwards, but I've had technical problems, and also dog problems, as you may have seen.
So if I can join the networking lounge afterwards on the platform, I'll be in there after this session as well.