Today I would like to introduce you to the thoughts and considerations of lawyers in any project aiming to regulate AI, and to explain what the mind of a lawyer looks like in case the technology is actually explained to us. I asked our research associates what they understand when they hear the term AI, and they came up with, as you can see, a variety of names.
That's one of the major difficulties, because there is a language barrier between the people who actually conceive and design AI and the lawyers: in general, the majority of lawyers doesn't really understand what stands behind the term AI. Moreover, there is a huge problem whenever you try to draft a law shaping the actual use of a technology, because shaping and designing the law itself takes a huge amount of time, as can be seen here.
If you consider a few points on the steps of creating AI in the present understanding, you can see that, legally, AI is a rather new area. In fact, prior to the GDPR, the majority of lawyers didn't even consider AI as something that actually needed to be regulated.
And maybe you know that the GDPR came into force in 2018, so it's a really, really young field of law. But that also correlates with the fact that the law is always behind the present technical status quo.
That's also something you can see here. Research was carried out based on the categories you see here: the categories were invented by Russell and Norvig, who used them to classify the definitions of AI, and based on these categories, Krafft and others tried to analyze whether the legal definitions and the technical definitions they found actually match.
And the result is quite horrifying for us lawyers, because it shows that from the researchers' perspective, lawyers don't actually understand what AI is. Of the researchers they asked and the papers they read, as you can see, 72% actually tended to define AI as something which either thinks rationally or acts rationally. The laws, on the contrary, tended to define AI as something which thinks humanly or acts humanly.
Having said this, there's another very big concern if you ask my clients, or our clients, about AI: there's always this big risk of being fined due to a violation of the GDPR or, as of now, a violation of the AI Act. It has to be stated that in fact there is no such thing as being legally compliant in the understanding that there are no breaches of law at any time.
When I started my career as a lawyer, one of my older colleagues told me: Daniel, there is actually always someone being bribed. And that could not be closer to the truth, because this is something we face every day. There's always the one employee who doesn't care about rules and who has to be sanctioned in some way. So as a corporation, in a variety of fields, you must be aware that there is no such thing as comprehensive compliance in the sense in which you would ordinarily understand compliance.
Moreover, you should always be aware that there is a risk of legal endogeneity, which means that the bigger the corporation, the bigger its influence on the laws. That is something I would like to address as well. Even though there is an AI Act or a GDPR in force right now, that does not mean that it has to stay the way it is forever. As some of you might know, the GDPR has actually been revised already, and there will be new measures implemented trying to address difficulties. In judicial literature there's a term for this: the risk of legal endogeneity.
It's a negative term. It says that if you're big enough as a corporation, you can shape the laws by stipulating a compliance management system, and this system will likely be approved by a supervisory authority. And then you will shape the law, because after the actual approval by the supervisory authority, the smaller corporations will always follow, and everyone will actually do the same.
And I would rather point this out as a positive aspect, because if there is a use case with AI in legal terms, then you should be aware that if you follow these cases, you cannot be wrong in the understanding of the supervisory authority.
However, there are several aspects to be considered focusing on the GDPR, in particular with respect to personal data.
Also, it is critically analyzed whether the principles of the GDPR are actually an adequate measure to address artificial intelligence in data protection terms. If you look at the first point, there's the purpose limitation: the GDPR states that if you have a data processing activity of any kind, you have to limit its purposes. And if you try to design an AI for which you don't actually know yet what purpose you will process the data for, then it is argued that this cannot actually be compliant with the GDPR from a present perspective.
Moreover, and that's what the data minimization principle says, you have to use as little data as actually necessary, and not as much data as possibly helpful. The third point is accuracy.
You have to use correct data in terms of personal data. So whenever you actually use personal data, say you were to process my personal data, you are prevented from modifying this data. And that's a big issue, because if you want to test what the algorithm, or the future AI, can actually do, then you may need to modify data, since you are trying to gain results: you don't know yet what these results might be, and you don't know yet what the AI will actually provide you.
Finally, there's the principle of storage limitation, which more or less states that you have to delete data whenever the purpose you initially had to define is not applicable anymore. So there's only a limited possibility to use different purposes for the same data, which is also difficult whenever you want to increase the use of an algorithm, or want to use data for several processing activities.
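To make these principles concrete in engineering terms, here is a minimal Python sketch, not taken from the talk, of how a training-data pipeline could carry purpose limitation, data minimization, and storage limitation as metadata. The field names and retention periods are assumptions for illustration; the real values are legal decisions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PersonalRecord:
    """One personal-data record, tagged with the purpose it was collected for."""
    data: dict
    purpose: str            # purpose limitation: fixed at collection time
    collected_at: datetime

# Assumed retention period per declared purpose (a legal decision in practice).
RETENTION = {"model_training": timedelta(days=365)}

# Data minimization as an allow-list: only fields the purpose actually needs.
ALLOWED_FIELDS = {"age_band", "region"}

def minimize(record: PersonalRecord) -> PersonalRecord:
    """Drop every field that is not strictly necessary for the purpose."""
    record.data = {k: v for k, v in record.data.items() if k in ALLOWED_FIELDS}
    return record

def expired(record: PersonalRecord, now: datetime) -> bool:
    """Storage limitation: the record must be deleted once retention ends."""
    return now - record.collected_at > RETENTION[record.purpose]

record = minimize(PersonalRecord(
    {"name": "Jane Doe", "age_band": "30-39", "region": "EU"},
    "model_training", datetime(2024, 1, 1)))
print(record.data)                            # {'age_band': '30-39', 'region': 'EU'}
print(expired(record, datetime(2025, 6, 1)))  # True: past the retention period
```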
Having stated all this, I asked myself: are the frictions with the GDPR in particular, but possibly also with the AI Act, actually so big that compliance in this case is a utopia and frictions are predictable? As always, there's a solution, which is quite easy, because in fact, as I already stated, the good news is that corporate compliance does not require a hundred percent fulfillment of the principles in the understanding that there will be no breaches.
However, as a corporation, you must truly not breach laws intentionally. But the good thing about both the AI Act and the GDPR is that there is, in general, a risk-based approach. So if you have a closer look at the AI Act, only particular results which might be created by the AI are forbidden. Forbidden, for example, is trying to analyze the facial expressions of employees to determine whether these employees are actually feeling good or bad. That's something you must not do.
However, apart from these specifically enumerated cases, AI is a rather open field in legal terms. So in any case, you start with an adequate risk analysis: you analyze whether the data you actually need carries a risk for the individuals, and you moreover go on to impose appropriate measures to address these risks. These not only comprise the general technical and organizational measures, as you might know them, but in particular you also have to train the employees who are presently working with the AI tools.
You have to train the employees who will maybe get in contact with the data you use, and you have to enforce the rules and measures you have incorporated.
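As an illustration of what such a risk analysis might look like when written down by an engineering team, here is a hypothetical sketch; the data categories, risk levels, and measures are invented for the example, and real entries would come out of the legal assessment described above.

```python
# A hypothetical risk register: each data category the tool needs is mapped
# to the risk it carries for the individuals and the measures addressing it.
RISK_REGISTER = [
    {"data": "usage logs",        "risk": "low",
     "measures": ["access control"]},
    {"data": "health indicators", "risk": "high",
     "measures": ["pseudonymization", "staff training", "access control"]},
]

def unaddressed(register: list[dict]) -> list[str]:
    """Flag high-risk data categories backed by only a single baseline measure."""
    return [entry["data"] for entry in register
            if entry["risk"] == "high" and len(entry["measures"]) < 2]

print(unaddressed(RISK_REGISTER))  # [] -> every high-risk category is covered
```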
Enforcing, in terms of labor law, always means you have to terminate contracts whenever an employee does not actually feel like complying with the rules. That's what I said: there is always the one getting bribed, and the employment relationship of the one getting bribed must be terminated, for sure.
If a corporation does all of these steps, then in general there is no risk. However, something you cannot prevent from a present perspective are the difficulties related to personal data, because the GDPR does not actually allow any corporation to use personal data in terms of the current understanding of an AI.
However, the data can be anonymized. That's also why it's important to start contacting lawyers in the initial process of designing the AI: if you anonymize the data you collect, in whatever form, from the beginning, then in general there will be no frictions.
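As a sketch of what "anonymizing from the beginning" can mean at the ingestion step, the following Python fragment hashes direct identifiers before anything reaches a training pipeline. The field names and salting scheme are assumptions, and note that salted hashing is strictly pseudonymization rather than full anonymization; whether it suffices is exactly the case-by-case question addressed next.

```python
import hashlib
import os

# Assumed direct identifiers in the incoming records; adjust per data source.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

# A secret salt so the hashes cannot be reversed via a simple lookup table.
SALT = os.environ.get("PSEUDO_SALT", "change-me")

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted hashes at ingestion time.
    Strictly speaking this is pseudonymization, not anonymization; whether
    it is enough for a given tool is the case-by-case legal question."""
    return {
        key: hashlib.sha256((SALT + str(value)).encode()).hexdigest()
        if key in DIRECT_IDENTIFIERS else value
        for key, value in record.items()
    }

print(pseudonymize({"name": "Jane Doe", "age_band": "30-39"}))
```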
However, something I cannot do is speak for every algorithm conceivable; for sure, that's something we, or you, should decide on a case-by-case basis. But in general, if you actually carry out these steps, then from our experience, the majority of tools and algorithms designed from a current perspective are legally possible.
So that's all from my side.
Daniel, I have one question for you. Just one quick question.
Do you think that the AI Act and the GDPR... you know, the GDPR looks at privacy as a data security measure, and you pointed out that AI needs data and there are some inconsistencies. Do you think that the AI Act, or not the AI Act but AI itself, may signal an end to the GDPR as a privacy approach? Is it going to become irrelevant because there will be so much data in AI systems, and used by AI systems, that it will overwhelm data minimization and the other things you mentioned? Is the GDPR on the way out, do you think? Unofficially, of course. Yeah.
Not exactly on the way out in the sense that the GDPR will not exist anymore. What I consider likely is that, for the specific design of AI, there will be categories stipulated by the GDPR, or whatever act comes next, which actually contain a green list, as we lawyers call it, and enable corporations to process the data for a specific purpose.
And I would also consider it likely that the data should then be collected from publicly available sources, so from locations where anyone can actually collect it.
So we have a question here from the audience. Let me just pass the microphone, please.
Hi, George Fletcher. So I have a question on the GDPR deletion aspect. Is there an expectation that if I go to a site and say "delete my data", and my data had been used in the AI, in the LLM, right, they would be required to retrain?
Is that a legal requirement, or is that sort of assumed? I mean, I know that within LLMs there are ways you can muck with the prompts to extract personal data back out anyway. So that's the question.
The deletion of personal data is actually a two-step assessment. If a data subject requires you to delete their data, then you have to analyze, first, whether the deletion is actually possible at all, and second, if it is possible, whether there are any reasonable grounds why the data must nevertheless be stored.
In this case, I don't know the exact algorithm, but if the data is actually used for training and you cannot extract it under any circumstances, then I would consider it likely that, if there was a legal basis prior to the actual request for deletion, the deletion won't be necessary.
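The two-step assessment described here can be written down as a tiny decision procedure; this is only a sketch of the reasoning, not legal advice, and in practice the inputs are legal judgments rather than booleans.

```python
def must_delete(deletion_possible: bool, retention_grounds: list[str]) -> bool:
    """Two-step GDPR erasure assessment as described above:
    step 1: is deletion technically possible at all?
    step 2: if so, are there no reasonable grounds to keep storing the data?"""
    if not deletion_possible:
        # e.g. data absorbed into trained model weights and not extractable
        return False
    return len(retention_grounds) == 0

print(must_delete(False, []))                           # False: not extractable
print(must_delete(True, ["statutory retention duty"]))  # False: grounds to store
print(must_delete(True, []))                            # True: must delete
```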
The tricky part is going to be, with some of these LLMs, the way you ask the prompt. I'm no expert, but I was talking to some of the experts at my company, and there are ways you can manipulate your prompts to actually get at the individual data that was used in training the LLM and extract it out. So that would be the danger if you don't delete it: the person could have said "delete my data", but someone might be able to get something about them back out. And would that then cause risk for the company?
It's probably a relatively low risk, or difficult to do, but anyway, I think that's an area where it would be interesting to see the law get a little bit clearer, because there are huge impacts on companies if they have to retrain.
Yeah, thank you. That's actually one of the most interesting aspects of the GDPR and AI.
Something my clients tend to do, and that's actually something I'm not allowed to recommend to you under any circumstances, this won't be legal advice, but in general they do a risk assessment in this case as well: not solely, as I stated here, in terms of which risks will occur for the corporation, but also related to the GDPR, looking at which fines have already been imposed. And if the fines imposed do not actually seem critical, then my clients also tend to... yeah, you can imagine the rest.

Thank you so much for this presentation. It's actually very interesting. Thank you so much.