We could start with the ladies first.
Hello. So thanks for having me here. My name is Drs. I'm a director in the Cyber & Privacy practice at PwC. And the topic of AI is quite important for me, because in cybersecurity, AI will be implemented more and more. A couple of years ago I did research with the Munich University on how to apply machine learning, for example, for security monitoring. And we figured out that yes, it is a huge use case for security monitoring and for protective use cases in the cybersecurity domain.
But this research also showed that there is a lot of research, especially in China, for example, on how to fool AI with AI. That's from my practice. In the Cyber & Privacy practice, of course, we want to build in trust and security, which is one of the main guiding principles at PwC. That's why we have also established, beyond the cyber practice, a practice around AI.
Thank you. Scott, please.
Thank you. It's a pleasure to be here, and I really enjoyed our time together at the conference. My name is Scott David and I work at the University of Washington Applied Physics Laboratory.
I run an initiative there called the Information Risk and Synthetic Intelligence Research Initiative. It's basically a way of thinking about engineered language. People talk about standards for technologies; we're looking into standards for the language, the policies, the business methods, and the other elements that support the technology, and trying to make sure that we have a hybrid engineering of these systems, what Martin has been describing as socio-technical systems, things where people and technologies are brought together.
How can we integrate standards that can appropriately offer predictability in the area of AI? There's a proliferation of both technical and social implications, and so the dance continues.
Thank you. Martin?
Yeah. So I'm Martin Kuppinger, Principal Analyst at KuppingerCole Analysts.
Yeah, I think we're all set, aren't we? By the way, we rarely bite, so feel free to move a bit closer to us and not just sit in the last row of the room. We promise to behave.
Yeah. And I think this is a topic that concerns all of us. By the way, we are missing our online panelist; we are having a connection problem. Once he's back, I'm going to let him introduce himself. Okay.
So let's start with a simple question: why is AI governance important or necessary, and what are the key principles for guiding it? What do you expect?
Shall I start?
You ask why AI regulation or governance is important. The better question would be: is AI governance important? Is AI regulation important?
And if so, where, for which use cases, and how? And I think this is very important.
We had a panel yesterday which was really more on the regulation side. I think we need to be very careful to find a good balance between enablement and control.
And to me, all this is about use cases and the risk of use cases. In the end, the more that can go wrong, the more governance we need, and then we need regulations.
I could make an analogy to access governance.
We have regulations saying we need to enforce the least-privilege principle, which basically makes sense, but usually it is then applied uniformly across everything. Some organizations are smart enough to say: we really take a risk-based approach. But we should always take one: be much better than we usually are in access governance for the high-risk data, and be more relaxed with the low-risk access.
And AI might be a great moment right now to say: let's differentiate better between the use cases with higher risks, where we need to be really cautious and conscious, and the things where we say, hey, who cares?
Well, I couldn't agree more with what Martin was saying. I'm a security architect, which means I like to build in technology for new use cases. In the talk before, we saw the AI lifecycle, and where we all have to look at the governance piece: data privacy, for example, bias, and so on. And around the data and the datasets, there could be misuse that causes certain risks. We've all seen the hacks and the misuse by cybercriminals, which have caused financial losses, for example; data is lost, stolen, encrypted, and so on.
But now, with AI coming in, AI and digitalization are being built into more and more processes and use cases where there is not only a security aspect but also a safety aspect. Just a simple example: I have a car that drives partly autonomously, and at the moment the regulator forces the manufacturer to require that I keep my hands on the wheel. There are good reasons for that, because I live in a rural area, where I sometimes see that the car is not yet capable of driving without my assistance.
So there's a good reason to have governance for AI, but as Martin said, it has to be a balance, balancing opportunities and risks.
I'm just wondering whether there would be a business in selling artificial hands for the steering wheel.
Artificial hands, indeed.
Yeah, it's interesting. I look at it with a slightly different take, consistent with that, but a little different. It's funny that we say AI governance, AI regulation. It's like saying I want to regulate hammers: I want to make hammers soft so that they can't be used to hit people in the head. But instead we make hammers hard and have a rule that says: don't use a hammer to hit someone in the head. Or, more relevantly here in Germany: we don't have a regulation that says let's have all cars go five miles an hour so that they can't be used for bank robbery getaways; the Germans would not like that, from what I can see. Instead we make cars that go a hundred miles an hour, or in Germany 150 miles an hour, 200 kilometers an hour.
And we say: don't rob banks and don't use cars as getaways.
So the regulation really is about us. And that goes back to the comments of my colleagues here: what shall we do about us using AI? Someday it may be that AI itself needs to be regulated, kind of an Isaac Asimov, I, Robot, three-laws-for-robots thing. But right now it's about regulating us. And later in the panel, and this is something I've just been able to cultivate in the last few days through the inputs of EIC and the people here,
I'm going to offer a specific proposal for how we as humans can, right away, tomorrow, create a regulatory environment for global AI. So we'll talk about that a little later, but I do think it's important for us to realize it's us we're regulating, not the AI itself.
Yeah.
But, you know, the problem with a panel where we all agree is that it's boring. So maybe the counterpoint. The thing we had was this: oh, we need to stop development for six months.
So the point is: maybe AI is so big and so complex that we need to take a different approach, because otherwise it will go out of control. Access governance affects one organization; AI affects everyone, and it affects a lot of areas of our lives. That could be, so to speak, the counter-argument. I really don't buy into it, but that could be the counter-argument: it's just so big.
And so, unless we have good regulations and good governance in place, we would need to stop it. Honestly, back in 2008 and the years after, no one said: hey banks, stop your business until you have implemented the new regulations. No one had the idea of saying: hey banks, no financial business anymore until you're perfectly done with all the governance efforts. That would have been the same thing. So I don't buy into it. But surely there is a bit of that thinking sometimes that we need to keep in mind, and at least we need to have good arguments
for why this is not the right way to act; the banking comparison is maybe a good one.
I'm rather unsure, because I have not yet thought it through, this idea of regulating us. If you look at who is implementing AI, it's us. And the question is: how good are we? How good are we as implementers? How good are we as humans? That's what AI now brings to the table: thinking about our biases. Are we fair enough? Because we are implementing AI in many, many use cases, and it has an impact.
And now this kind of AI regulation makes us think about how good we are and what we have to consider when we implement regulation.
But you're taking the simple approach here. You say it's us, in the sense of the organizations. But where would we need to start with regulation and governance? Take Martin bringing ChatGPT to provide responses to politically unacceptable questions.
Then we would need to regulate Martin; there would need to be a Martin governance, someone doing the governance for Martin, because, so to speak, somebody potentially is doing that already. So it's solved already. That's the good thing. At least I think that's what he intended to say.
But I think: where does this end? Very simply, I don't think it will work.
One thing is for sure: we need to make this more boring, because the world is too exciting. I mean that seriously; I was a tax lawyer for 25 years. Boring always wins. Exciting is risky. And seriously, we need to make ourselves more boring, and make AI more boring and predictable and reliable.
Well, to take the example of regulating Martin: I think you are already regulated. Maybe you are not conscious of it, because we are all in Germany.
I grew up in Bavaria, in a Catholic environment. And you are here under German jurisdiction, which means there is already some kind of social and legal environment which regulates you: not to kill someone, not to do whatever.
For the Germans, I could bring up the counter-example of Boris Palmer, who escaped regulation for quite a while, so to speak, through self-regulation. It's a joke only the Germans understand, unfortunately.
Yeah, but you're right. You know, I could also say ChatGPT is not a good way to make it more boring, because when I ask ChatGPT a short question, I get such a long, long response. Which is maybe part of making it more boring.
Do you want... can we still talk, or do you have any questions?
You have all the time.
If I could take a minute just to introduce the notion: again, it's something I've just cultivated during EIC, so I want to thank the conference for allowing this to bubble around. It'll take about two minutes to describe. The punchline: my proposal is that we look at strict liability and a self-regulatory reserve. Let me describe how it works.
Some of you were in a presentation yesterday where I introduced this. If I'm a farmer and I have a cow, and the cow goes onto the neighbor's property and destroys something, then if the neighbor wants to get damages for that, they have to prove negligence; they have to prove that I left the gate open or something like that as part of their case. If I have a hippopotamus and it gets out and destroys the neighbor's property, I'm strictly liable, because I was keeping a wild animal in the neighborhood.
They don't have to show negligence.
Same thing if I keep explosives and fireworks and they blow up and there's damage: the person doesn't have to show that there was negligence. The damage itself is enough. It's called inherently dangerous. So what if we treat AI as inherently dangerous right now? Because we don't know causation, it's a black box; we don't know the supply chains and how things manifest, who's responsible for what. So that's the first element. The second element is: okay, we say there's strict liability. Well, how does it get funded?
One thing we can do is look at things like FDIC insurance in the United States, or no-fault auto insurance. You can have the beneficiaries, and the users can be beneficiaries, the designers, the operators of AI, each of them contributing to funds or a fund.
And that fund then pays out damages.
How much do they pay? We'll decide that based on experience, like an insurance product. So it's a self-funded reserve that insures against those damages. So we have strict liability, and if there's damage done, we have a fund. The last part of it is: well, how do we do that? It's a mass contract.
If you look at the legal systems around the world that have a billion or more people under them, it's India, China, Facebook's terms of service, Google's terms of service, Microsoft's terms of service. We can do mass contracts over billions of people. So think of a mass insurance contract. And the last piece: why would a country do that? We have Italy saying no AI. Well, what if you had Italy saying no AI, but you're allowed if you're under the HIPPO contract? I call it HIPPO: Harmful Intelligence, People Protection Organization. Because the hippo is our thing, right?
Pretty good, huh? I thought of that when I was half asleep this morning.
So you have that, and the country can say: if you're doing HIPPO, then you may have AI implemented in our country. End of story. So that's my suggestion, that we start to explore that, not forever, but as an interim matter. Because if AI is unstoppable and unbannable, as I suggested in my keynote about its exponential increase, the only thing we can do is get together and synthetically, intelligently de-risk.
That's what we've done before: things like strict liability for what we can't control, and then reserves and pooling of funds to help pay for damages. So I just wanted to introduce that as one possible notion.
And I heard about this from you yesterday already, by the way, so it originates from you. What it reminded me of: in a few parts of Germany, including the state of Baden-Württemberg, we in the past had an obligation for every owner of real estate to pay into an insurance against natural disasters. The point is, if your house is on a river, you may have a higher risk; if it's in an earthquake area, you may have a higher risk; but everyone has to pay a similar share into it.
And in the case of a natural disaster, that insurance then paid, and that worked well. It was not very expensive, because everyone paid a little bit, instead of the few with high risks paying huge amounts while the others say, oh, I don't need it.
They gave it up.
And it's not that long since they gave it up; the discussion about bringing it back keeps coming up again and again. When we had a huge natural disaster two years ago in some regions of Germany, that proved it would have been extremely helpful to have something like it, because it would have sped things up. It wouldn't have avoided the disaster, and your insurance also won't avoid AI disasters.
Honestly, to a certain extent it would work even better than for natural disasters, and I'll tell you in a second why. It's basically an idea that has been proven to work in a similar way. And I think for the AI case it would help even more, because in the end it probably would be a transaction fee, or a flat fee per month or whatever, and everyone would think: oh, don't let that fee go up too much, so better behave responsibly.
So it would drive self-governance.
Well, you mentioned this kind of transaction fee. At PwC, I also think of it as a kind of enablement, not as a fee. What we are doing, for example, is supporting clients with ESG reporting, because more and more people deciding how to invest their money want environmental, sustainability, and also governance aspects considered. And we have developed, for example with the German BSI, a control catalogue for cloud-based AI systems, called AIC4, which consists of 40 controls.
If you are able to use this in your ESG audit as an attestation, that is a kind of enablement, because the financial market will value it if you look at it from a governance perspective.
Yes. And I like those comments, and Martin, your comment about it driving behavior is perfect. I hadn't thought about that kind of feedback mechanism: if you want to keep down your rates for paying into the fund, then you change the behaviors, right? Yeah.
At the organization and individual level.
Exactly.
No, exactly. It hadn't occurred to me, that part of it: the actuarial analysis. You're driving the behaviors, and then sectors and groups that have certain exposures will start to have standards, including identity standards, technical standards, and business, operating, legal, technical, and social standards in each category, to try, as a group, to drive the behaviors toward something with less exposure.
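To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of experience-rated pooled reserve being described. It is an illustration only: the class names, the load factor, and all the amounts are hypothetical, not anything specified on the panel.

```python
# Hypothetical sketch of a strict-liability pooled reserve with
# experience rating: participants pay into a shared fund, the fund
# pays claims without a negligence finding, and each participant's
# next premium is adjusted by its own claims history.
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    premium: float                               # current per-period contribution
    claims: list = field(default_factory=list)   # damages attributed to this participant

class PooledReserve:
    def __init__(self, participants, load_factor=0.5):
        self.participants = participants
        self.load_factor = load_factor  # how strongly experience moves the premium
        self.fund = 0.0

    def collect(self):
        # Everyone pays a share into the reserve, like the natural
        # disaster insurance example.
        for p in self.participants:
            self.fund += p.premium

    def pay_claim(self, participant, amount):
        # Strict liability: the fund pays without proof of negligence.
        paid = min(amount, self.fund)
        self.fund -= paid
        participant.claims.append(amount)
        return paid

    def reprice(self):
        # Experience rating: blend last period's losses into the premium,
        # so riskier behavior raises the fee and careful behavior lowers it.
        for p in self.participants:
            losses = sum(p.claims)
            p.premium = (1 - self.load_factor) * p.premium + self.load_factor * losses
            p.claims.clear()

# Example: two AI operators; the one causing damage pays more next period.
a = Participant("careful-op", premium=100.0)
b = Participant("risky-op", premium=100.0)
pool = PooledReserve([a, b])
pool.collect()               # fund = 200.0
pool.pay_claim(b, 150.0)     # fund pays out; no negligence shown
pool.reprice()
print(a.premium, b.premium)  # 50.0 125.0
```

The repricing step is the feedback loop the panel is pointing at: keeping your own fee down means keeping your own losses down, which is what drives the self-governance.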
Shall we, after 20 minutes, give Osmond the chance to raise a second question?
I didn't mean to be impolite; I was planning to ask a couple of questions, but you really carried it through, which is nice, actually. I would like to see if anyone wants to contribute to the discussion we've had so far, or has any questions about it. Yeah, sure.
Not the same question as yesterday, but related, maybe for Scott. Imagine we had regulated AI three years ago: would it still fit? Or imagine we are about to regulate AI now: will it be appropriate three years from now, or how should it be shaped to be that way?
You know, I think AI is just the latest artifact of the exponential increase in interactions. And so, as I've said before, there's only so much we can do. This was the bunnies idea, right? The bunnies can take a vote to say they don't want to be eaten by eagles anymore, but the eagles don't care. And so we can do whatever we want to try to hope that things aren't going to come at us. So with AI, you look at all the different risks we have: they're going to increase exponentially.
And the best we can do is put our minds together, combine and share practices, and share how to de-risk. Quite frankly, there's no stopping this thing. It's like knowing an asteroid is coming to hit us. Recently they've deflected an asteroid, maybe, but if we know an asteroid is coming, we can't stop these things. Similarly, I don't think we can stop the asteroid of interactions that's coming at us, which proliferates data and information, and proliferates differentials, and those are risks.
So to me, I don't think of what I'm saying here as special.
It's what we've always done. It's nice and boring: insurance. Nobody would claim that insurance is exciting.
It's nice and boring, right? It makes things boring.
And we won't always be able to make things boring, unfortunately. But I think this is a way for us to pool our mentality and pool our behaviors at scale; we don't have to have conversations with each other when this actuarial analysis is helping to drive the behavior.
And so it allows for the behavioral standards. So whether we had done this before or do it in the future, it's a continuous battle against disorder: increasing interactions lead to increasing risk, and so to increasing disorder. Quite frankly, I don't know of another way to do it, and fortunately humans have been doing it ever since we've had societies. Traffic rules are another example.
You know, it inconveniences me to stop at a red light, but I like that other people will stop at the red light too.
You have been a lawyer, haven't you?
Can you tell? Such a short question and such a long response.
As lawyers, we always get paid by the word. Yeah.
As an analyst, my response would probably start with: it depends. So, the short answer to your question: I believe some of the models that we currently discuss can work forever. Whether we would have come up with these models three years ago? Probably more likely not than yes.
That was much shorter.
Yeah.
Yes, but I had ten minutes to think about it. Yeah,
Okay, I had two minutes more. When we talk about regulation of AI, I always ask: what's the definition of AI? You've seen from the speaker before all this mathematics around it. When I studied 20 years ago, we had a subject called knowledge discovery in databases. It was a lot of statistics, a lot of mathematics, and also things like fuzzy sets, the AI part of it. Which means that when you're talking about regulation of AI, you're regulating mathematics and statistics.
But I think what we really talk about is regulating the risk, the use cases, and the damage it might cause.
You know, it's funny to talk about regulating math. I remember, a number of years ago in Kentucky, there was actually a legislative proposal to say that pi equals three.
Makes it easier.
It's not always appropriate to try to regulate math.
Right. Maybe we could continue with the second question.
Considering the other stakeholders getting involved with AI and its regulation: what should be the roles of government, the private sector, and academia specifically in regulating AI, and what role can each of these stakeholders play in ensuring that AI is developed, and maybe also deployed, securely? Do you think that maybe, for example, we need some concrete rules, like "this is government's job," or something like that?
I'll go last, because we only have 14 minutes left.
Okay, I have to leave in four minutes or so, because I have to open the closing keynote. Basically, I believe the best thing academics can deliver is help in understanding risks: where are the risks, worked out in an academically well-structured, well-proven manner. For government, it would probably be something like saying: okay, we enforce, we do that insurance thing, that's it. But don't go over the top and don't make it too theoretical. If the risk model is something no one understands, then it doesn't help.
It must be pragmatic.
Well, I'm a little bit biased here, because at PwC we are supporting the BSI, for example, to develop these kinds of control catalogues.
However, if you look at the topic of AI and its use cases, I think it's important to have a broad spectrum of opinions involved: academia, of course, but also legislators, data protection authorities, and society, because it has an impact on society.
You know, it'll be interesting. I was reading Microsoft research on a public health matter a couple of weeks ago, and they said the amount of information about health was doubling, I think it was every 76 days, something like that.
So part of the challenge with academia and academic papers is that there's lots of knowledge, but it's not available, not discoverable; there's too much. One of the things that will be quite interesting: we talk about regulating AI, but obviously AI is going to help us with discovery too. So there's the hybrid element: we're going to keep producing knowledge, and academia is one of the places we look to for the production of new knowledge. How might we then use that knowledge? How do we bring it to market?
You know, when people talk about tech transfer at universities, it's typically just patent licensing.
It's really not a lot of other things, and there's so much other knowledge. If something's not patentable as a matter of law, should it not be brought into markets? So part of what academia can contribute is new perspectives, new ideas, but part of what we're going to need to do in business and in technology is find ways to take those ideas and make them real and actionable. And that's always a challenge.
And now, with the proliferation of knowledge, it's even more of a challenge, because there may be wonderful solutions, or pieces that could go into solutions, that already exist and we're simply unaware of them. So they can't do us any good.
Anyone want to contribute to that? No.
Alright, then my next question. Actually, I had prepared a really good question, but I received a really precise question in a similar form. I was going to ask how we could actually build a global consensus on this, maybe by collaborating with the other stakeholders. So my question will be: how can we build a global consensus around AI governance and regulation? And one of our online attendees has just asked it in a really simple way: is it possible to regulate AI on a global level, or is it too late?
Well, it's a good question, an interesting question, and I'm not sure I have the whole answer yet. In the talk before, you've seen all the different kinds of impacts and biases. If we want a global understanding for regulation and also governance of AI, we have to have a unified global understanding about its effects. Do we have it? Take the biases: we have all this discrimination, around gender, et cetera. In Germany, we have legislation saying that we treat women and men equally. Is it like this across the globe?
So we would have to fundamentally discuss the basics of equal rights, so that we have some kind of agreed set of them. And I think that's still a challenge.
Yeah, please share. That would have been exactly my question. I fully agree with the notion that came out in yesterday's discussion, and today more precisely, that AI is unbannable in the terms most people would think of. But having regions in this world where democracy, equal rights, inclusion, these things don't play a role or are kicked aside: how can we make sure that values we are used to, the UN charter of human rights, for example, are not mistreated by these AIs?
There should be some body, which might be, I don't know, similar to the UN today, or maybe a task force on the UN level, to finally find some sort of regulation for how these things are handled. Yeah,
So I love where this is going, because fundamentally we're talking about global governance. We have global governance now through the nation states: after the Thirty Years' War and the Eighty Years' War, we had the Peace of Westphalia in 1648, and that, along with the French Revolution and other pieces of history, kind of established the idea of the nation state.
In my keynote I talked about the exponential increase in interactions. When each of us woke up this morning, there were certain companies in the world, there are certain countries in the world, there are structures, and those structures are doing their best to help govern, but they're all artifacts of earlier problems. And now we have new types of problems.
And part of the reason I was excited about this strict liability and reserve idea is that it's global governance right now. We have derivatives, forward contracts; they're global governance. They work as contracts: as long as they're not illegal in a given country, they can work.
So what we have now is an opportunity to start having things that work around the world.
Now, we're not yet talking about the human rights issue, but once you look at that proliferation of interactions, that blank space that's opening up, we can start to develop habits of governing on a global basis. They can be contractual initially, but then be normatively cross-referenced, just as governments normatively cross-reference technical standards. And then you start to have more interoperability in terms of regulation among countries in the blank space. The blank space is going to be the place where the new risks emerge,
the risks that force us together, rather than the old risks. That's my view on it. It's a pathway to recognizing the possibility of global governance in different types of structures, in my mind. Did you have a question?
Yeah,
I do have a question. There's the notion that war is the mother of all invention and drives development forward. We are currently not in a global war, but we are near to a cold war already, at least in geopolitical tensions. Does this situation not render all efforts to regulate AI obsolete, because it might be useful to one or the other side? Or can we even afford to have these tensions going forward? I mean, it would become quite dangerous in this world. Yeah.
So it's fascinating.
So, was it Weber or Mills who said the nation has the monopoly on legitimate violence? If I drop dead right here, instantaneously, I think of it like quantum entanglement: my physical body is entangled with the agencies I set up, the contracts I set up. Instantaneously, there's a change in legal authority associated with those things. So as long as the nation state has auspices over physical violence, there will still be relevance in those differences, in that war.
I was talking one time to someone who's involved in the military in the United States, and they said something that initially offended me, but now I think about it a lot, and I embrace it. I always thought of myself as a peacenik, you know; I'm a vegetarian and a dove. And he said: what if violence and war is the natural state, not peace, and it takes energy to keep peace?
And that made me think about it a lot.
We have to work at peace, because organisms, it's like the bunnies and the hawks again, have to work at peace; there's an intrinsic conflict in their relationship. And I think we have a chance in this context, because with this anxiety now upon us, we have a chance to revisit some of the presumptions we have about how we'll interact as a species.
Now, that sounds very aspirational, but the challenges are increasingly similar across countries. I think we will still have wars and local conflicts, but ultimately our survival may in fact depend on our ability to deconflict in order to survive these other things that happen: climate change, these other complexities that we haven't been able to deal with. I see our online panelist has rejoined; maybe it would be nice to give him some time.
Yeah, I think it's only fair to also give him some time to talk. But maybe I could ask you the last question, and then, if we have time, I'll also ask Ray's opinion on a couple of matters. So, what could be the potential challenges of implementing AI regulations?
Well, as we see with any kind of regulation: if you start to implement it, the questions are where to start and how to digest it.
That's what we typically do in consulting: take the regulation, translate it for the organization, look at what it means for the organization, bring it down to the pieces of the lifecycle, and take it from there.
For me, looking at regulation as a product of government: I'm a former tax lawyer, and I love it when you have different laws, because we used to venue-shop. We had all sorts of wonderful structures, you know, triple-dip leases and all sorts of things like that. So one of the big challenges is that we do this nation by nation.
AI has no physicality; I can put a server anywhere. And so there are all sorts of ways in which that could disadvantage us. So one of the challenges for me is: how do we get it to be global? That's why I talk about global governance through contracts. We don't have to ask permission or forgiveness to make a contract; we could do one right now, and it could be effective tomorrow. So one of the big challenges is looking to governments to do this, and whether that will ever be effective, because it might be too isolated.
Thank you. Ray, maybe you could also tell us something.
Yeah, absolutely. I completely agree. The one thing which I feel is a major issue with regulation is that the pace of regulatory development is very slow compared to the pace of technological development, right?
If the technology is at ten, the regulation is at three. And again, regulations are never future-proof, never, while technology ambitions and growth are very much oriented toward the future. We know that we'll see GPT-10 or something like that next year, or two or three years down the line, and there's planning going on for it.
But when it comes to regulation, there is no such planning. So, number one, regulation needs to move at the same speed as technological advancement; and number two, regulation should also be future-proof, as technology is. These are the two main points which I feel will make regulation implementable. For example, I implement a regulation today, and I know that after two years there'll be a new regulation. Every year I get a new regulation; every six months I get new regulations.
What I see working quite well in projects: GDPR, of course, has privacy by design, and I say trust by design, which means getting the legal people, the security architects, and all the people who architect for trustworthiness involved right at the start. Because what traditionally happens is that the project is going on, something has been implemented, and only then do security or legal people check whether we are compliant.
So by implementing it that way, incorporating legal people but also security people in the design phase, we can sort it out while we are implementing, instead of waiting until it has cost us something.
That's very nice, also because of the design aspect. You know, we always go from practices to best practices to standards to institutions.
And so with the regulations: if we can have a structure and architecture for the design, the practices and best practices, then the regulations can be localized for different contexts, different cultures, but still be interoperable, because they're based on the same principles from the practices and best practices. It's similar to what I was hearing from the OpenWallet folks. They were saying: we're not going to do standards, we're going to do the principles and the structures generally, recognizing that they can then be localized.
And similarly, it feels like in the regulatory context, if we can develop structures for patterns that can be iterated in different regulations, then as regulations are added by different jurisdictions, they'll resemble each other and potentially be more interoperable. Yeah.
Thank you, Scott. But I think we should wrap up here, because we are running out of time, and we are going to have a closing keynote in the C1 room upstairs. So thank you very much for your participation, and sorry for the technical inconvenience.
We will make it up to you next time. Thank you very much.
Yeah, thank you. Bye.