AI Governance and Regulation

I'd like to ask Scott David to come to the stage. Scott has a long title, which he can explain himself, because it takes three lines on the app. The other one will be me, Martin, and, as I said, we will talk for the next 20 minutes about AI Governance and Regulation. Feel free to raise your questions at any time, and maybe Warwick can watch for online questions coming in, if there are any, and alert us.
So I think what we want to look at is navigating the landscape of AI regulations: where we stand and where we should probably go, and to share a bit of our thoughts on that. Because at the end of the day, it's a very interesting challenge we are facing, between having governance and not disrupting innovation, which is surely one of those things where balance is needed. I think many of us have learned there are some cool things we can do with GenAI, and there are things where the results are below expectations, so finding the right mix is very important.
So, welcome, Scott. Maybe you can introduce yourself quickly. Thank you. My name is Scott David, and I'm at the University of Washington Applied Physics Laboratory, and also, as of several weeks ago, a fellow analyst with KuppingerCole. At the physics lab I run an initiative called the Information Risk and Synthetic Intelligence Research Initiative.
And so, delighted to be here. A pleasure. And I think, as a starting point, since we already heard a couple of thoughts from Srei, could you summarize a bit the state of what we're talking about?
Yeah, it was funny, because Martin said he'd like me to summarize global regulations on AI in three minutes. For me, three minutes can extend into many minutes, but I'll keep to it. The problem is, you've been a lawyer, so three minutes is very short for you anyway. That's right, we get paid by the word.
So, there are a couple of resources I'd like to point out to folks. The OECD maintains a list of global AI regulations, and that's very useful.
Also, George Mason University recently put out a report in which they did a survey of AI policies. One of the things emerging now is that there are a number of measures that aren't really regulatory in the sense of constraining behaviors, but rather are AI policies generally, because a lot of what you're seeing now is a combination of restriction and encouragement of innovation. As for the themes, I made a little list because it's not easy to remember all of them, but there are five.
The George Mason report was very interesting because they talked about patterns that are emerging globally. They describe it as a wardrobe: you go get a shirt and a pair of pants and some socks, that kind of thing. There are certain categories of things that are available, and if you look at different jurisdictions, you can see what combination, what outfit, got put together from the wardrobe in order to deal with AI. And there are really five themes. The first is safety and security.
Another is transparency and explainability. Another is accountability. Non-discrimination is another. And then data protection. What Srei said made me think about something I hadn't considered before, and it's very interesting: in a sense, AI invites us to think about how to render information reliable. We've been focusing on making data reliable, and bringing forward the GDPR and other data learnings is part of that. It's necessary, but not sufficient, to make information reliable.
And so the question would be: if data plus meaning equals information, or data plus context equals information, how do we make meaning and context reliable? Srei, thank you for making me think about that; I hadn't thought about it before. It may be that we're transitioning now, and that creates a lot of new risks, but also a lot of opportunities for new products and services that figure out what it means to make meaning and context reliable. And I think what you're saying about information and data is interesting.
In the end, we are all in a space called IT, information technology. But in fact, a lot of what we do is data processing rather than really dealing with information. Right now, with LLMs for instance, we go, so to speak, from data to information. And I think it's important to understand that we are finally arriving at the point where the promise of information technology is being fulfilled. By the way, cybersecurity was once called information security, not that long ago, before the cyber term came into play.
I remember, a while ago in the CISO Council KuppingerCole was running, one of the CISOs said, in a conversation that was more about the data security aspect, that we didn't care enough about data and definitely didn't care enough about information. The same holds when I go to my domain, identity management: most of the things we are doing deal neither with information nor even with data; they deal with functions I have in a product.
I can use that function, and I think this is something we definitely need to change; if we want to get a grip on this, we need to change our understanding. Srei, any points from your end? When it comes to regulations, I always believe one simple thing: we always wait for regulators. Regulators as of now, or any agencies that make regulations and recommendations, are in the back seat of the car. The pace at which the technology is moving forward is much, much faster than the pace at which the regulators are moving.
So it's very important for these bodies, whether self-regulatory or governmental, to move from the back seat to the front seat, drive the car along with the drivers, who are the technology, and match the pace. As long as you have backward-looking regulations, you will not win; regulations have to be forward-looking. For example, if I am a product owner at a tech company, I know what I will do in the next five or ten years, what my development plan is.
Regulators need to understand what will happen in five or ten years and plan regulations that are futuristic, that have a forward-looking viewpoint. Is it that they need to understand what will happen in five or ten years, or that they need to understand what constraints to set so that things can evolve without getting out of control? I think there's a bit of a difference there. I was just going to say, Srei, you're making me think about all sorts of new things.
One of the things that just made me smile is that, you know, Biden just came out with that executive order, and he's talking about floating point operations. Now, if we have Biden talking about floating point operations, he obviously didn't write the executive order himself, right? So we can't rely on government for the tech; we got that. But it made me realize that if we're talking about meaning and context reliability, that's something we do all the time in human governance.
So maybe we can start there. Obviously we need to understand the tech when establishing regulation, but maybe it's a much more natural process to have regulators working with meaning and context. If the tech can help us understand enough, well, humans have dealt with meaning and context forever.
You know, there's a gentleman who asserts that property itself, the concept of property, is a collective hallucination. There's no property; it doesn't exist in the world. It's a relationship among humans with respect to an article that's out there in the world. So we can start to cast the regulatory construct in terms of meaning and context management and leave the data management to the tech people. Maybe we've been asking the government to be too knowledgeable about the tech directly.
And, you know, when we say meaning and context reliability, what does that mean? Well, we were one village in Africa a hundred thousand years ago, and we migrated out around the world and developed different food and language and politics and music, and then the internet brought us together for show and tell. So we didn't have shared meaning, right? We have different languages and so on, and sharing those kinds of things means making bridges among them. Yesterday it was remarkable to see AI used for instantaneous translation.
So meaning sharing is already being made possible by the technology. The technologists can verify whether you have an accurate transcription, and then a linguist can come in and verify the wording, just like a translator's notes at the beginning of a book. So we have these other mechanisms, already instantiated in non-technical fields, to measure degrees of reliability and predictability in meaning and context. Those are the kinds of metrics that maybe we need to bring into regulation, in addition to the data metrics and other technical metrics. Yep, absolutely.
Nice. Completely agree. And I'll add one more point to it. You are completely right.
But, you know, I always believe, and I certainly have the impression, that governments are handicapped when it comes to understanding AI. They do not understand artificial intelligence or the fundamentals on which artificial intelligence is built. They see it as something like a humanity-destroying initiative, a concept that will overshadow humanity, destroy everything, go to singularity, and so on, a perception that largely comes from sci-fi movies. So it's very important that governments have an AI governing body.
I don't remember the country exactly, but one Middle Eastern country, be it Saudi Arabia or the UAE, has created something called a Ministry of Artificial Intelligence, and they have a good number of people who know what AI is. I think that kind of initiative is required. You can't just lump AI in with technology generally and say, hey, it will destroy the entire world.
Yeah, but isn't it also partly because we are still using this term AI, which is a very old term with a history? At the beginning, when I go back to the 60s, there was a lot of talk of, oh, we will soon be at general AI, et cetera. We are far away from that. What we actually have is not artificial intelligence; we have augmenting intelligence. And we could sell this a bit differently, because what you describe is the perception that the purpose is to replace humans.
In fact, it's about augmenting humans, about making things better. When we have an assistant or assisting system in a car, it's augmenting us as drivers; it's making us better. GenAI is intended to make us humans better at doing our jobs, and I think there are some great examples of what's out there. Then the scary factor can recede a bit. And this needs to be explained, not only by a governing body; it needs to be explained by everyone, by not overpromising and by making clear where the value is. We use it in many, many areas to really make things better.
But I want to come to another point first. There was a bit of this 'regulator in the back seat' theme. From my perspective, with AI we have a quite positive situation, because at least the regulators are in the back seat. In most cases of technical development, it has been, and still is, that the regulator at best arrives at some point in the trailer attached to the car, not in the back seat and nowhere near the driver's seat.
And I think we are much further along in discussing and thinking about regulations than we have been in the past for most other areas. I was being polite.
You know, it's funny, because when I was a kid, I was at the beach, and a wave knocked me down. I got up again, and the wave knocked me down again. I got up again, and the wave knocked me down. And I looked down the beach, and there were surfers riding the same wave. So with regulation, if we look at it as something that constrains, then we're going to get knocked down. The question is: is there an inevitability here? Is this wave something we can manage directly?
Or really, if you look at regulation, what we're doing is managing human-to-human relationships with respect to this thing, whatever it is: AI, railroads, whatever the new technology is. It took 50 years to come up with regulations for railroads, so we have to have the experience to know how to manage. I won't go into all of it, but there's some inevitability here. In part of our work, we look at AI as just an artifact of the exponential increase in interaction volume since the Big Bang, quite frankly.
I've talked about that exponential increase in Berlin at EIC. As humans, we have no management of, and no metrics for, exponential change; we just don't know how to do that. We have logarithmic metrics like the Richter scale and the decibel scale, but those are instantaneous measurements. In the little time we have left, I'd like to bring up one more point. When I look at some of what is currently going on around regulation, it is about sorting the very critical and dangerous from the less critical, et cetera.
When I think about this, it sometimes looks to me as if some of the most compelling use cases, unfortunately, are the ones closest to security and safety.
I think it starts with autonomous vehicles, where a lot of safety concerns arise, and it holds true for many other areas. That's a bit of a dichotomy we are facing, and it also means we need governance, we need regulations, that find a good balance between enabling things and accountability, as you said, and data privacy and all the other aspects, but that really focus on the use cases that are most attractive, and also most critical.
Well, one thing is that we've sort of been here before. I sometimes make reference to self-driving corporations, and what I mean by that is: if you want to see what the AI future looks like, look at what corporate relationships look like to humans, because there are levels of abstraction that happen in systems that are not entirely human. You have legal personhood for corporations, some independence and discretion. And abstraction can be a form of violence. The painter Kandinsky once asserted that violent societies yield abstract art, and I often wonder if the reverse is also true: is abstraction itself a form of violence? When you're dealing with corporations, you know, you call the phone company and you want to get something corrected, and they treat you as a number, not as a person.
In identity, we want to be known as we are, but in all group dynamics you have to have some level of abstraction to deal with the group. So part of what we're dealing with here is fundamentally an identity issue, a human identity issue, and I think the question we need to ask ourselves is not how we regulate AI, but what it means to be a human in an AI future.
Okay, that's very philosophical, but let's get a bit more down to earth to close this panel. A very simple question, where I'd like a very short and precise answer: what is the main thing you'd like to see from the work on AI regulations from now on? What is the most important thing?
What I would like to see is regulations that talk to business, to academics, to industry, to experts. And I would like to see a proper governing body that is multi-country, spanning different geographies, a global body, so that whatever regulations we or they come out with are not contradictory in nature. China's regulations contradicting someone else's, that should not be the situation; we should avoid that. Okay, thank you. Scott?
Yeah, and along the same lines, I think we need something that's uniform enough that we can de-risk together. I've found that the most effective systems are the ones that allow us to mitigate risk and gain leverage in ways we can't alone, because AI is a challenge for all humans and all countries. There will be new things that we can de-risk together that we can't do alone, and if we're explicit about that, it can actually help not just with AI risk but with the geopolitical and other risks we face as humans. So it can actually be a time for healing of the human experience.
I'd like to add: I'd like to see regulations that are clear, tangible, and precise in what they do, and that look not only at the risks but also at the opportunities we have. So enable, but mitigate the risks. Srei and Scott, thank you very much for being on the panel.