I would like to introduce the speakers we will hear. Continuing with the last topic about the regulations and all the changes that are happening nowadays, our next panel will touch on some of these topics as we discuss Crossing the Divide – Bridging Legal Accountability and Tech Innovation. Please join me in welcoming our speakers for today. We have Dr.
Jacoba, who was here in the previous presentation; Emilie van der Lande, Independent Cyber Risk Consultant; Michael Palage, Chief Trust Officer at InfoNetworks; Valentin Knobloch, Funding Partner and Legal Counsel of Souschef Ventures; Thomas Lohninger, Executive Director of epicenter.works; and Pamela Dingle, Director of Identity Standards at Microsoft. Thank you so much. This is great.
So, I just want to give a sense of what we're going to cover. We have a nice chunk of time here, 40 minutes. The things we're going to cover, the title of the panel is the Bridging of Legal and Tech, and so among the things we're going to cover are how do we make tech more compatible with law, how do we make law more compatible with tech, and then we're going to start, I think, with some of the current deficiencies in the regulation of tech before we move to the AI amplifications and new risks.
So I wonder if any of you, or each of you, have some thoughts on current deficiencies in the regulation of information technologies, and you can allude to how those might be amplified in the AI context. Anybody care to do that? Would you like to try that? I think harmonization, so we don't need more, but we need more simply understandable things, because I see a lot of duplication if I read all these texts. So I think in the context of AI, it's all about trying to catch up.
So one of the things I've looked at is the context of copyright law, where there has been an ongoing tension going back to player pianos, radio, television, photocopiers, VCRs, and what I've always found interesting is that the laws have adapted to the technology to protect the underlying principles. So to me, what's particularly challenging with AI is the pace at which the change is coming, how one adjusts, and whether the legislation and private contracts can compensate accordingly.
Yes, I would add to that, I'll keep it quite short.
I think it's fundamental that we have very multidisciplinary teams working on legal regulation for technologies, because very often, as Yuko mentioned in our previous talk, you've got the legal people and you've got the techies, but there are still very big gaps to bridge between these two. I think this starts very early in the education of the people who are going to be writing the legislation, so I believe it starts in law school already: people should have stronger technical knowledge to be able to build better, implementable laws.
I fully agree, and I will add that as products nowadays are more digitalized, compliance gets more important and governance gets more important. All the legal aspects in the products that we see are so heavy, and have such a big influence on the product, and you can fall out of compliance with regulation so quickly, that this is actually an area where legal and tech should really work closely together.
Well, I agree with a lot of what was said. I also think that we have to be serious, when we are drafting legislation, about how to enforce it. I mean, we live in a world where Meta is fined for breaking the law in the US, a ridiculously low fine, and the next moment the stock price goes up.
So clearly we have a problem here, clearly we have a situation where breaking the law can be a successful business model on the internet. And this is also very much a European problem, because in the privacy community we call it the Irish problem: we have certain countries in the EU who made it their business model not to enforce laws against big corporations, and that creates an unfair playing field, particularly unfair for European corporations, by the way; this is not a civil society problem.
And so the biggest challenge in my opinion when we draft EU legislation is enforcement, how do we get it right? The DSA has some good approaches there, I'm happy to go into detail.
Pamela, are you online? We're working on the audio here; keep talking and we'll, wait, I think I heard you. Can you hear me now? Excellent.
Yeah, I think everything everyone has said is absolutely right, especially the pacing. But the other piece that is a concern, certainly for Microsoft as a multinational company, and I think for many folks, is fragmentation: we need to comply with so many regulations in so many countries that are all framed just slightly differently, so there is a burden for folks to comply, and a burden to prove that we are complying.
You know, it's interesting, because in some ways what we're dealing with is not new, and in some ways it is new. I mentioned in an earlier session that eavesdropping, the word now used for monitoring, was originally a trespass violation: someone dropped in under the eaves of the house, sufficiently close to the window that they could hear. Why did they use a trespass violation? Because there was no privacy law at the time.
That was a result of agriculture as a technology, because agriculture allowed us to have cities and more concentrated populations. Then the printing press comes along, and we have the publication problem, and that's Brandeis; some of you may have heard of the famous article of Brandeis: don't publish things about me, that's an intrusion. Then technology keeps changing our interactions and our interaction volumes, the quality and quantity of our interactions. So in some ways what's happening now is not new, but in some ways it is entirely new. Are there differences that we observe now, either in AI going forward or in what we've experienced to date with the internet, that let you draw from the past and make assertions about how we might approach things in the future? Is it just a matter of picking up what's already there, like trespass laws or other existing laws, and trying to apply them in a new place, or are there new patterns and new ways we might think about this? Some of you raised some of those issues. How might we even start to address this new set of problems? Is that of interest to anybody on the panel?
From my previous life as an anthropologist, before I started in digital rights a decade ago, what I can say is: you're absolutely right, the law evolves with technology, but cultural norms in many of these respects are universal. You always had a connection between an artist and the art; you always had a distinction between the public and the private. Where we draw these boundaries was always culturally specific, but the fact that we have these boundaries is universal. In a way, particularly when we talk about privacy, this stems from human dignity; there's a reason why stripping someone naked is a form of torture that's illegal under international law. So now that we are in a world where technology has become ubiquitous and intrusive, we also have to update these norms. That's why digital rights, digital policy making, is so interesting: it's our generation that drafts these foundational laws. The questions in 20, 30 years will be totally different, and we are the generation that gets to decide, so we better all pay attention.
Michael, I'll turn to you next on GLEIF and things like that. In the last panel there was reference made to licensing as a model for individuals to control data flows, and I started salivating as an ex-lawyer, because licensing means I can write some 40-page document, which would be delightful for me, but an individual wouldn't be able to consume it. I know you've been doing some work on using the LEI system and existing systems for identity. Starting with Michael, but for all of us: what are the existing legal tools, legal structures, legal approaches that we might bring, that are sufficiently familiar, so we're not introducing something entirely new but reusing them in these contexts for new challenges? Do you have some thoughts, Michael?
Yeah, and to build on Pam's comment about predictability for businesses. One of the things I've always found interesting about GLEIF is that it was basically the G20 that conceived of the concept of GLEIF and LEIs after the 2008 economic meltdown. The G20 got together and said that part of the meltdown was attributable to the fact that different financial instruments could not be attributed to different organizations. So the G20 came together; GLEIF is the Global Legal Entity Identifier Foundation, and they assign unique 20-character identifiers to legal entities. What this provides is predictability, and that is the one thing most organizations, particularly those operating on a global scale, want: how can we have predictability? So one of the things I think is very interesting about what GLEIF has done with LEIs is that the system was originally created to solve a specific problem, identifying financial instruments, but now with vLEIs they are actually looking at how they can extend it into other use cases. Again, this is part of how you look at what tools are in your toolbox and how you can repurpose them for different situations, and when those tools have global applicability, those are some of the better ones to use.
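The 20-character LEI mentioned here has a well-defined structure under ISO 17442: 18 alphanumeric characters identifying the entity, followed by two check digits computed with the ISO 7064 MOD 97-10 scheme, the same family of check used by IBANs. As a minimal sketch of that validation, assuming the MOD 97-10 scheme and using a purely hypothetical 18-character base value:

```python
def _to_digits(s: str) -> str:
    # Map '0'-'9' to themselves and 'A'-'Z' to 10-35, as in IBAN/LEI checks.
    return "".join(str(int(c, 36)) for c in s.upper())

def lei_check_digits(base18: str) -> str:
    # Compute the two check digits so the full 20-char string passes MOD 97-10:
    # append "00", take the numeric value mod 97, and use 98 minus the remainder.
    remainder = int(_to_digits(base18 + "00")) % 97
    return f"{98 - remainder:02d}"

def lei_is_valid(lei: str) -> bool:
    # A well-formed LEI is 20 alphanumeric characters whose MOD 97-10 value is 1.
    if len(lei) != 20 or not lei.isalnum():
        return False
    return int(_to_digits(lei)) % 97 == 1
```

The check digits mean a single mistyped character makes the identifier fail validation, which is part of what makes the scheme predictable to consume at global scale.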
I'd just like to weigh in here. I agree that there are certain regulations, or certain tools, let's say, that we have to work with. The challenge I see is that AI and technology are evolving very fast, and sometimes the regulations are not evolving at the same pace, so I think that would be the main challenge here. I don't know, what are your thoughts on that? It's absolutely a challenge that regulation and tech don't move at the same pace, and you were asking for tools.
I think a very effective tool, one that you can apply every day in every product and every situation, is: be clear about what you're doing. What is the product? Who does what? Define that on a very factual basis, and then everything on top, who is accountable, who is liable, becomes so much easier. So the first step is to understand what you're doing. Very often we have a fantastic technical product and no idea what we're doing on the legal side, what the legal framework is, where we position it; and not only in start-ups.
Earlier today I saw a presentation on the framework for the EUDI Wallet, I'm not saying which country, but it's a pure technical proposition: no business, a little bit of operations, no legal, no social. You said the word BOLTS: business, operations, legal, technical, social. If you take only one part, it makes it difficult to add the other parts at the end. Be clear about the product, be clear about what you're trying to do, analyse it from the different sides, and then everything becomes easier. The second step is accountability, liability, responsibility, et cetera.
So, to expand on that notion, the BOLTS idea, business, operations, legal, technical, social, is kind of like a handy checklist.
Information products, because these are socio-technical products and services, involve all of those elements, and a failure to take account of what's going on in any of those categories will cause a failure of the product. I know Pamela's been involved in a lot of product development cycles and I'm sure has seen that. And it's hard, because there are different vocabularies in each of those domains. Part of what we're talking about here is that the technologists are not going to go to law school, and the lawyers are not going to learn all the technology completely, so how can we bring those things together more effectively?
Is it that you put it out in the market, have a failure, and that teaches you, or are there ways we can have those communications? Obviously, EIC and other meetings like this provide us with an opportunity to share vocabulary.
Michael, did you have a thought on that? One of the things I found interesting in the NIS2 directive is, I think it's Article 14, on the cooperation group. One of the things I think they got right is that one of the tasks of the cooperation group is specifically to look at best practices. There's also Article 28, which deals with the accuracy of domain name registration data, and in Recital 111 they specifically made reference to best practices in the area of digital identity.
So what I thought was interesting there was that they didn't say X or Y; they recognized that this was going to continue to evolve, and they provided the flexibility for emerging technologies and for the cooperation group to feed that back in. So I don't look at NIS2 as static, but as something dynamic.
I think that security legislation can never be static, because once your ink is dry, the legislation is already outdated. That's why a lot of the operational and technical standards are not in the directive or the regulation itself, but in underpinning implementing acts, and some of these accountabilities are defined at the member state level. But as a company, and I just presented this in my previous slide for those who were not there, I saw it working quite well before any product development was done in our bank.
I had this problem all the time. I was head of IAM, and the business kept developing lots of apps and things and using data, and only afterwards would the risk department, the audit department, the CISO department come in, and then it all had to be redeveloped to be compliant with this, that, and the other.
So I founded a multidisciplinary staff group: the head of compliance was there, the head of the sanctions desk, the head of legal, the head of audit, and so on; all really content-knowledgeable people in these fields. Because reality is location and department agnostic, while companies are built as silos, and on a global scale it's the same, only larger. And every week we had a feet-on-the-table session: okay, business, what are you planning?
Well, we're planning this and that. Do you know that these are the compliance requirements? It would really be with a lot of coffee and tea and cakes, it would take two hours, and in the end we became really good friends. It took away the threshold for the business to show what they were doing to those nasty mafia guys from legal, risk, and compliance.
Well, now that I know that cakes are going to be involved, please invite me. Thank you. And Pamela, I wanted to go to you on the product development and on the standards work you've done and other experiences.
You know, what's your take on that integration? You know, how do you bring these different voices in so that you make sure you don't have a failure out there in the market? Can you comment on that a little bit?
Yeah, I think, and this is just my personal experience, not any kind of corporate statement, but it has been my experience that many of the technology efforts are negotiated on a bilateral basis, especially in the beginning. If you look at federation, for example, there are legal and operational considerations there, but many companies are negotiating individual contracts with individual companies for individual reasons.
So what I feel like we lack, there goes my camera, sorry, is an initial business reason for us to start with design patterns that match regulatory asks. We do things individually first to solve individual problems, and then we try to retrofit those processes. I don't know if that's anyone else's experience. One of the things I want to pick up on, while people are thinking about that, is the invention of design patterns; it's something I want to make sure folks in the audience are aware of.
In the early years of secure software development, and Pamela or others, correct me on this, they were referencing the work of a gentleman named Christopher Alexander, who wrote books on architecture. His assertion was that if you want to design new architecture, you should look at the patterns of what's been done in the past. So if I want to design a portico, an entryway, I should look at the entryways of yurts and churches and residences, look at the patterns, and then use those patterns to inform what I do.
That was picked up in secure software development, and I think Pamela's alluding to it here: even if we don't have direct precedent to use, it might be a way for us to think about what patterns we saw in the past that were effective, and how we might use those. To give a specific example, and this actually involves Microsoft's involvement in the ICANN stakeholder process: ICANN is the multi-stakeholder organization responsible for the Internet's unique identifiers, domain names and IP addresses.
As part of the consultation process, there was a policy development working group on the accuracy of and access to domain name data, and Mark SV from Microsoft actually participated in that group. There were a lot of efforts by businesses to advance certain initiatives, but unfortunately many of the contracting parties pushed back, and those recommendations did not come to fruition.
But what was interesting is one of the people also participating in that was someone from the commission, and that person actually then was in part responsible for writing Article 28 regarding the accuracy and access of domain name information.
So I think what's important, and Pam, this goes to what you were talking about, is that when businesses participate in these dialogues, it's really important to listen to what regulators or other government bodies are thinking, because if you cannot reach mutual agreement, something may be imposed upon you that is not very favorable. So there you go. Emilie?
You were talking about patterns earlier, and listening to this conversation I was thinking: we are talking about regulations and how they have been set up for the way society is currently organized, but I'm actually thinking about the future. There were quite some conversations yesterday about agentic AI: AI agents that are able to set up companies on a whim, on the fly. I know some people who set up a dropshipping company; actually it wasn't people, it was one guy.
He spent an afternoon, got several different AI agents to outsource several parts of the business, and in one go he had a dropshipping company that was running all year. At the same time, I was watching a YouTube video just the other day about how multi-agent frameworks are also used to imitate more complex structures. For instance, a small hospital was simulated, giving medical advice based on several AI agents, all controlled by one person.
So taking this into account, I think we may face a lot of institutional overwhelm when every single person technically has the capability of setting up several companies and several institutions that work together, with an eye on collective intelligence. Just like you mentioned that transport really changed the way people interact, liability has become very hard to define; in law there is always a very big question about the territory in which something took place.
That's really hard to define when you look at digital services. But as technology evolves even further, and we fragment businesses even further, I'm very curious to see how we're going to tackle the question of liability. So that's my answer on the patterns you were mentioning.
Emilie, staying with you for a minute: your presentation last year freaked me out. So let's talk a little bit about patterns of harm.
Because, you know, I remember you alluded to children and the protection of children, and throughout the ages we've had theft, protection of children; these are perennial issues. Those come from the human aspect, what humans do, and I wonder if we can be informed by that: we kind of take the precedent, take what we did before, and say, well, let's bring that forward, and that's what our focus has been so far.
But are there other ways we might be thinking about this, since we are assuming that our existing tools will be ineffectual to some extent? Maybe looking at the patterns of harm and saying, okay, we have people who are innocents. How do we protect the innocent? And so are there some thoughts you have on that, and others as well?
Yeah, that's a really good question. I'm always very intrigued when I look at the younger generation. I feel like there's again a very big gap to bridge, increasingly, as the generations keep coming with technology. And this ties into something you said earlier about how the perception of harm is evolving in terms of privacy.
Before, it was very strange to have someone eavesdrop on your house. Today, it's very weird to have someone look at the neurological patterns in your brain. But looking at tomorrow, at my little cousins for instance, I'm not sure they are going to find it so strange for people to have direct access to what is going on inside their mind, which freaks us out, right? But the next generation of regulators may think it's fine.
So, answering your question, the most important thing we can do today, as parents, as brothers, as sisters, as colleagues, as teachers, as people dealing with young people whom we increasingly no longer understand on fundamental levels, is to really go into that dialogue with them: to keep the fundamental human values going as the generations keep coming, to keep that dialogue open. Thomas? I agree with you that ultimately we have to look at society as it is right now.
And to quote Bruce Schneier, fundamentally, technology is nothing more than a lens and a method to increase power, wherever power lies. At the beginning, it could be activists like me that suddenly appear much stronger, but in the long run, big corporations and governments will keep up. We talked a lot here at this conference about digital identity and all the goods it would bring.
This week a report also came out on how digital identity systems are abused in Uganda for a very draconian government control scheme and mass surveillance. So there always is a cost. And if we proliferate these systems on a population level, we have to look to the most vulnerable parts of society: ethnic minorities, people affected by racism, by sexism. And, as you just mentioned earlier, there's only the technical debate in eIDAS.
Yes, and I mean, we have paid attention to that for the last three years, since the law came out, and did what we could to make the regulations safe. But there are so many unanswered questions. Just to give you a glimpse: when you have a general-purpose, universal system to identify people and share data about them, data including things like family status, sexual orientation, gender, we have not even thought about what the standardization of these values would mean.
And I can, even within Europe, where we have quite a harmonized system, see that we'll not agree. So there are so many central issues that are out of the spotlight.
And again, coming back to patterns: the pattern that's repeated here is that these laws are discussed in a silo, without the people that are affected by them, without proper processes to really understand what the technology will do once it's out in the wild. And that's just hugely irresponsible. The only people I see with a big plan are the big tech platforms, and it's only their plan, for them. Of course, after the pandemic, everybody agrees that we need to digitize government and offer things online. But that's not the end of it.
We really have to ask the question: what type of society will these systems bring us into? If we are not even capable of asking, let alone answering, these questions for simple things like digital identity, where we have understood the nature of passports for a hundred years, then we have no chance of answering them for AI, a technology that so few people have actually understood, if at all, so that any meaningful regulation is out of the question.
I would make one exception to that statement, which is biometric recognition, face recognition. There, I think it is actually very clear: this is dangerous. This is removing freedom from physical spaces. This eliminates the public nature of public spaces.
Yet the EU's big AI Act failed to establish strong safeguards for this technology. It kicked the can down the road; now it's up to the member states. And it's just irresponsible. That's not the Europe that I want.
So, we have a question from our virtual audience. Once again, I would like to thank you for engaging, and I remind you here in the room, if anyone has a question, please feel free to raise your hand. The question is, with the use of generative AI, what are your thoughts in terms of intellectual property and copyright?
That is, you know, I would like to make a comment on this. I'm a lecturer as well, and I remember, with colleagues in academia, what we say is: how do we know whether the paper was actually written by a person, or co-authored, or written by GPT?
So I have a kind of unique perspective: I'm an engineer and a lawyer, so I look at this through two different lenses. Yes.
So, going back, I talked about how, under U.S. law, it's kind of always evolved, and one of the interesting cases was, under the original copyright law, it only protected physical works, right? And what happened was, in the early 1900s, the piano roll came out, which was a mechanical roll, and the authors were quite upset when their music was being automatically played on pianos, and they sued, and that case went to the United States Supreme Court, and the Supreme Court said, no copyright protection.
So, the copyright owners went to the U.S. Congress, and in 1909, they amended the copyright law.
So part of it is, you know, lawyers are really good at suing at times, and sometimes those lawsuits can result in good or bad action, but that, I think, is part of the thing. Going back to AI, which I find interesting, some of the early litigation involving AI tends to be about copyright.
Was it fair use for certain copyrighted material to be included in certain LLMs? Then also, as far as the output, right now, at least in the U.S., there still has to be a human author.
One of the cases in the U.S. involved, I think it was an ape or a primate, that took a selfie, and the person whose camera it was tried to claim copyright protection, and the courts said no: it was the animal that took the selfie.
So you are not able to claim that. Again, one of the things that's interesting is that copyright and patents are actually provided for in the Constitution, Article I, Section 8, Clause 8. It is literally embedded there to protect authors and inventors.
So that, I think, is the issue. Now, if someone is involved in prompt engineering, that is where I think the struggle is going to be. It's not the AI generating something on its own; when you have a human interacting, crafting that prompt, evolving it, that's where I think the law will extend copyright protection to the work. I don't believe that using a search engine like Google to help me write a paper somehow prohibits me from asserting copyright protection.
So, again, to be determined, but this is where I think the lawyers, for better or for worse, will probably help provide some clarity. This comment reminds me of the presentation by Jacob Ehrlich, you know, the lawyer who said: it depends.
Well, yeah. That was literally the very first day of law school. The professor asked this question, a bunch of eager law students offered this answer and that, and after about 15 minutes he said: wrong.
The first answer is: it depends. The second statement out of your mouth is: once you provide a retainer, I will provide further clarity.
So that was literally my first day of law school. So, yes. Just to follow up on that copyright notion: my recollection is that before the Statute of Anne, the money from copyright went to the printing press owner, not the author.
And so we need to talk about economics and power here also. The rule of law is ultimately about the perpetuation of power structures. Thomas is itching on this one, so let's finish this thought. Spotify all along. That's right.
It's interesting, because you hear a lot of weeping about author rights and things like that, but the printing press was the equivalent of the Large Hadron Collider at the time: it was extremely expensive and you had to amortize the cost.
We saw similar things play out in music with the publisher versus the composer, and there's all sorts of division there. So one of the things for us to keep in mind is that ultimately a lot of this may be choreography in front of power.
And so, for us to be aware of, and we've alluded to it many times: we have nation states, which are sovereigns. They don't ask for permission or forgiveness.
And now we have companies that are accreting power in the information domain, not in the physical domain like the nation states did when the Peace of Westphalia divided up the world. So, Thomas, I don't know if you have some thoughts on the economics issues and what you might want to add there.
I mean, as we just said, and it's similar in continental European copyright, ultimately it is meant to protect the artist, the author. Yet if you look at the current economic realities, that's the least it does. If you're in music, most likely you will not be able to live off your art. And that's to the detriment of the production of music, but to the huge benefit of a few very big corporations: the labels and the streamers.
And here we also see that current licensing deals give those companies the rights to train AIs on copyrighted works. And we know from the big Hollywood strike what the right response to that should be: that creatives protect their art and the rights to it, and that we have this debate.
Again, I would agree with you that this needs to be decided by the legislators; I don't think we should toss this question to the courts. But it's very hard to move things on copyright in particular. I don't want to make an assessment on when the next U.S. legislation will be passed on this, but even in Europe, having worked on a copyright directive from 2015 till 2019, I can tell you that even the EU has very little wiggle room here, because we have international treaties dating back over a hundred years that limit what even the EU can do.
So it is a really difficult space, where I fear that the most powerful players will just create realities. The licensing deals that OpenAI is currently striking with many publishers are a worrying sign in that direction. The only good thing that could come from it is the eventual downfall of Google. I have a remark, not specifically on copyright, but on legislation in general. Legislation tries to capture reality, physical reality, life, in a rule-based approach, to set a rule set for what should and could be done and what should not.
And that will always fail, because you can't describe the complete multifaceted reality of the world and define what should and could and should not be done; there will always be new realities emerging. That's what we see with AI.
So, by default, at the highest abstraction level, law can never be sufficient. In the closing minutes, I just wanted to ask each of you for one sentence on what good looks like in the year 2040, 15 years from now. Anybody want to start? What does good look like? What is a satisfactory state of the human condition, with law and technology in balance, looking way out? I can start. I've been thinking about this, and something that's a bit on my mind is, I think that many of us, we have artificial intelligence. One sentence, you said, right?
Okay, I need to cut it down. I want people to take accountability themselves and to make sure that they stay accountable in an age where they use technology that is smarter than they are.
And, Pamela, you've been very patient. Do you have a thought on this? I do.
I mean, I think innovation in this space means bringing legislation, regulation, and technology closer together so that you have patterns for success that are easy and that do the right thing, according to guardrails, that are responsive in less than a decade's worth of turnaround. Other folks? Last thoughts? Yeah. In my ideal world, in 15 years, people are still thinking, doing the thinking, and technology is supporting, helping to do it better. But the thinking part is still with the humans. Thomas?
We know that we've won when regulation makes technology equalize the power imbalances in our society instead of amplifying them. Michael? Symbiosis. If there is a symbiotic relationship between humans and AI, that's a good thing, in my opinion. I wanted to close with Plato, who said the world will be better when kings become philosophers and philosophers become kings. So I would change it a little: when engineers become lawyers and lawyers become engineers. And on that note, please join me in thanking the panelists for a wonderful discussion. Thank you.
And with this, we conclude this part of the talk.