Okay, we have 20 minutes to dive into a new software reality, and we're going to look at the risks and the rewards of this new world. When I started to think about this keynote, I was very inspired by a conversation with Jorg, and I think the paradigm shift right now is this: we had amazing productivity tools with the invention of the PC, then the evolution towards smartphones that are almost an extension of our arms. And now we are moving into the spatial web, this software-defined reality.
And if Web 2.0 was about software eating the world, Web3 is about us living in the software. So we're about to have this hybrid experience in the way we live life.
We work and we play through this immersive experience. So where did this term first come from? It was actually coined in 1992, in the novel Snow Crash, by its author, Neal Stephenson.
And like all good sci-fi novels, it was a little dystopian. It looked at the metaverse as this collision of the virtual world, where we can interact with each other through digital identities in an immersive environment.
So again, us moving into the software: this convergence of virtual reality, augmented reality, and the internet as we've come to know it, where we can explore and socialize in virtual spaces. And the interesting thing is, if anyone has young children playing Minecraft or Roblox, this is the world they already live in. They're moving between the physical and the digital all the time.
So let's move from children to what the C-suite are currently worried about with this shift. This is a very recent McKinsey study on CEOs' top priorities for 2023: disruptive technology,
economic downturn, and geopolitical issues; we'll dive into those. And when we start to think of the new actors, or the new friend, that will accompany us in the metaverse, we see how these things converge. Meta, as Facebook renamed itself, is one of the organizations that has tried to blaze the trail in defining what the metaverse is.
But in the context of this economic downturn, we also see a significant reduction in headcount and a reprioritization back to existing business models. And the shaping of the metaverse now has competition, in terms of the things we're talking about, with the entrance of AI, and specifically ChatGPT. So maybe this move into the metaverse in the short term is less about the game space, and much more about things that are close to those of us in the room and those of us watching online.
And that is the fusion between the physical and the digital with the introduction of AI: specifically, how we're going to seamlessly move and use our identities and our digital assets between the physical and the digital. How we will be able to prove something about who we are in the physical, and then take something about that identity, an avatar or a digital asset, and transact.
And so there's a whole new metaphor, and a new requirement, for what that means when we are dealing with customers: being able to work out who they are, and whether we are dealing with a physical representation of that person, with something they are wearing, or with something acting as a proxy for them, like their car or their coffee machine.
So back to those concerns: the 2023 issues driving leadership to contemplate strategy are digital disruption, developing advanced analytics, enhanced cybersecurity, and automating work.
Again, enter ChatGPT, which right now has around a hundred million active users; over the last six months we have seen end consumers pulling this technology towards them. And when it comes to monitoring and advanced analytics, one CEO in the report said: along comes ChatGPT, and it's pouring gasoline on an already well-lit fire.
So over the last few months we have heard and seen cries from around the world for us to halt. But the problem is that stopping has geopolitical impacts: either everyone stops, or whoever stops runs the risk of losing the race. And whilst there have been some commitments, in particular from OpenAI, to slow down some of the training for GPT-5, we already have GPT-4 writing code, correcting its own code, teaching itself. So I think the genie is out of the bottle.
The interesting thing now is how we're going to navigate that.
So one of the things that is understood, particularly in democratic environments, is this need for governance, input, and mechanisms that we don't currently have. Partly that's because those who actually understand the technology may be focused on the technology, while some of the decisions we need to make are ones we need to make as a society: decisions about ethics, about the type of society we want to live in. So there is no end of risks surfacing right now, such as loss of privacy and data security concerns. Those things are not new.
We recognize those things, we're dealing with them every day, but how we deal with them is beginning to change. This is an example from the United States, so not some far-flung totalitarian regime, but Radio City Music Hall, where MSG Entertainment defended their use of facial recognition by making up their own rule: if opposing counsel, somebody they were in some kind of litigation with, visited Radio City Music Hall, they would have the right to make that person leave.
And in fact, an attorney who went with her daughter's Girl Scout group to watch the Rockettes was asked to leave because she was detected in the audience. So these privacy rights issues in the physical world, extending into the digital world, are starting to shape even the way we move about society. And that's only amplified by social inequalities and discrimination. Why? Because if we use the training data we currently have, with its inherent biases, what we're doing is compounding that bias.
And so one call, in terms of how we evolve this extension of the physical into the digital, is an interesting study out of Canada that focuses on bringing sociologists into the design of AI and this emerging metaverse space: specifically by creating an environment where critique is possible, fighting inequality through technology and its actual design, and obviously focusing on the governance of the algorithms.
At the same time, however, there was a very interesting study published last week from Stanford and Google about Smallville, where agents were trained with a backstory: I live in this village, you are my neighbor, I bump into you at the grocery store. And what they found is that very quickly these avatars, these bots, these representations not only started to mimic the way we socialize, but actually started gossiping about each other. So it's very interesting how these things are woven into the DNA of our training data.
Obviously, AI-generated content and the sophistication of deepfakes is also a new phenomenon. We understand that we haven't solved the phishing problem, and now we've got a whole new layer of it. In fact, an MP in the UK last week called on the UK parliament, asking: what is the government's response, and what steps are being taken to mitigate the risk of misinformation in the pending elections?
And so how is the UK going to respond to the decisions that their physical constituents make, based on the information they are reading or absorbing in the digital space?
This is a small but interesting use case from Australia: InnovationAus, a newsletter I get each day that tells me everything happening across technology and cybersecurity. They recently announced that they were going to introduce a paywall, and I rolled my eyes and thought, oh my God, why, logging in and all the rest of it. And then I read their CEO's letter explaining it. Her argument was that they're adding this to defend against their data being taken by a bot, republished, repackaged, and then competing with their service.
And so the idea of putting that barrier in place was to defend their content and make sure it wasn't being used by an agent.
So we understand that this evolution is already starting to impact us and the way we work. In a recent survey, 62% of respondents agreed that this AI-enabled world was going to impact people and their work, but only 28% thought it was going to impact themselves. So we have this widening gap between the perceived likelihood of the impact and the likelihood of thinking it will be you or me.
So I think we've found plenty of rogues. The question, with those rogues and the capabilities we have to hand, is: are we going to amplify those things, or are there things we can do to mitigate them? What if we could code for trust? What if we could start looking at the training data and the rules, reinforcing the types of outcomes that we want, and enhancing the user experience through AI-powered features?
What if we could actually make it easier for people to trust and understand the decisions they need to make in the moment they're swiping right or left? And what if we could increase efficiency and convenience in virtual environments by removing some of the friction? And of course there's the endless opportunity to create, to imagine things and have that creative process assisted in some way. Those are all positive things. But for that to be possible, we need to think about whether it means everything becomes identity.
Everything we see, touch, hear, and interact with may need to be verified in some way: to give us confidence along that path of convenience, confidence about something we see on YouTube or read in the news, or when we transfer an asset or engage in any kind of commerce.
And increasingly, when we go to pay for something, proofs may come with that transaction, either to prove that we're eligible or to make the transaction smooth and seamless.
So if we start to think of provenance being embedded in everything, one step towards this is the EU's introduction of the digital product passport, with a target date of 2030 (earlier for some sectors), aiming to create a circular economy: to cut down on waste and CO2 emissions, and to raise trust, transparency, and confidence in the supply chain.
Well, that might seem like a real pain, but post-COVID we've seen that some of the organizations forced to do this have already seen the benefits: not only a revenue uplift, but better business continuity, stronger partnerships, improved end-to-end visibility, better scalability, and faster pivots in what are very strange and interesting times.
So we need an identity to anchor trust and commerce in this transfer between the physical and the digital.
We need to be able to prove something in the physical space and, with all the requirements of privacy and consent, transfer that into another environment and be confident in it. One initiative that hasn't helped much in recent months was Twitter Blue's move to grant a verification badge simply if you could produce a credit card. Those of you following the story saw that people were very easily able to impersonate public figures and obtain the verification.
And so one of the challenges with something like this is that, on the one hand, it raises the idea of being verified, but it actually trains consumers and users to not understand what verification means.
What we really need is the means to determine who or what it is that we are dealing with. I'm excited to see and hear, just in the last day, how much interest there is in the world of credentialing, verifiable credentials, digital signatures, and cryptography. And I think this is really where the opportunity is.
We are going to need to develop new tools and new workflows for managing verification of everything, because we all want to be able to issue, store, share, prove, and revoke. And any time we can wrap that with a cryptographic signature, a verifiable credential, some sense of provenance and truth we can rely on, then all of the positive things are possible. That's particularly true within the EU context; I would love to think it applies everywhere in the world, but it doesn't.
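[Editor's illustration] The issue, store, share, prove, revoke lifecycle can be sketched in a few lines of code. This is a minimal toy, not any real credential standard: production systems such as W3C Verifiable Credentials use asymmetric signatures and decentralized identifiers, whereas here a symmetric HMAC stands in for the issuer's signature, and a simple in-memory set stands in for a revocation registry.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-key"  # hypothetical issuer signing key (toy only)
REVOKED: set[str] = set()          # toy revocation registry

def issue(claims: dict) -> dict:
    """Issuer signs a set of claims, producing a credential."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def credential_id(cred: dict) -> str:
    # In this sketch the signature doubles as the credential's ID.
    return cred["signature"]

def verify(cred: dict) -> bool:
    """Verifier checks the signature, then the revocation registry."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["signature"]):
        return False  # claims were tampered with, or wrong issuer
    return credential_id(cred) not in REVOKED

def revoke(cred: dict) -> None:
    REVOKED.add(credential_id(cred))

# Prove eligibility (e.g. age) without any further detail about the holder.
cred = issue({"subject": "did:example:alice", "over18": True})
print(verify(cred))   # True: signature checks out and not revoked
revoke(cred)
print(verify(cred))   # False: credential has been revoked
```

The point of the sketch is the shape of the workflow, not the cryptography: whoever holds the credential can present it anywhere, and any verifier can check both integrity and revocation status without contacting the holder.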
But here, we need to build in control and consent so that at the same time we are empowering citizens and individuals to meaningfully interact with this.
And if we are doing this responsibly, in terms of training and development, we also need improvements around fairness, transparency, and accountability. Now, this is something that's been close to my heart for almost a decade: the idea that we will be able to get equity and value by engaging in digital transactions and sharing data.
So the challenge for us now is to build software-defined realities that are private, secure, and wired for trust. Now, if we can do that, we have the foundation for life management platforms and human agency in the metaverse. It means we can start to have an integrated life where we move seamlessly between the physical and the digital: health, financial services, transport, location. And we can act with agency, because we know those avatars, those bots, those algorithms are working for us. So I'm optimistic and confident that together we can create this world of provenance and trust.
But I'm also excited by the fact that it's really only just begun. Thank you.
Katrina, what can I say? Brilliant as ever, thank you. Thought-provoking and exciting as ever.
And scary a bit.
Well, yeah, just a bit scary. I have some questions for you. In terms of risk mitigation, what's likely to be most important: technology, regulation, or education?
Oh, that's a great question. I think it would depend on where you are in the world.
I actually think the huge opportunity right now, for us as professionals, is building some of these mitigations into the technology. Here in Europe we are used to regulation as a way to remove those harms, but the problem we always have with regulation is that it lags at least a decade behind, and we're struggling enough for technologists to understand how quickly this technology is changing. So rather than relying on regulation, I would be appealing to our better selves in the way we design and architect these systems.
And then the education piece will come in one of two ways: people will see those positive outcomes, or they will be the victims of all of these things,
You know, fakes, fraud, and then learn the hard way. And I think that's where we have a responsibility around education.
Great, thank you. That brings us nicely to time. Please, another round of applause for Katrina Dow.