After this very practical, organizational, let's say also daily perspective on things, on how to work with these issues day to day, I'm very happy to introduce Professor Grimm from the University of Koblenz, faculty of informatics. He will certainly also give some daily insights, but as I have seen his slides and discussed with him, I'm sure the two fit together perfectly: having two different perspectives, his and that of a data protection officer and privacy officer.
Of course, I really want to know more about your conception, your preconception, of the risk of privacy. Is there a risk in doing privacy? Is there a risk only if you don't do enough privacy? We certainly don't have the same perspective; coming from the informatics side, you see it with different eyes, and I'm very happy to have you here. Welcome.
Oh, maybe we take this one here then. Yep, there you go. I'll give you a clicker.
Thank you very much.
Okay. Thank you for this introduction. And thank you, Tom, for the way you speak about risk and security. I like the way you do it; it's very close to my own approach, by the way. But of course, from an academic point of view, you have another style of talking about it. I have not only learned to take better care of my toilet lid and my toothbrush, but, more importantly, to make a difference between hygienic risk and real risk.
Actually, this is my topic, my theme, as well. What I would like to tell you today is understanding privacy and its risks: not only what the risk to privacy is, but how we learn to assess this risk, to take measures against this risk, to identify a risk. And we start with understanding: what is privacy? I want to tell you mainly two things.
One is how computer scientists understand privacy and how we try to make it operational. What are those tools, technical and organizational, that work well; and, on the other side, those that do not work well?
So we have a big history of failure in privacy technology. And second, not only technology, or technology and organization, is responsible for dealing with the risk to privacy; there is also a change in our societal approach to privacy. Let's start with the risk. And where there's risk, there's also opportunity. Let's have a first look at the opportunity of what we can do with all these data, exploiting these data, new business models. Well, that's my suggestion for a business. On the business side it's quite clear: they might get a service, they might earn money out of it.
The risk side is on the user side, who is confronted with a prediction of his future which he might not have known before. And this is one of the risks: due to algorithms and data, there is a prediction about your own behavior in the future. And you don't know your own future. Do you want to know it? Or maybe it's even wrong? So there's an obvious risk with that.
I would like to share one observation, which I made in recent weeks, about the change in our societal approach to privacy.
This painting is from the impressionism exhibition at the Städel in Frankfurt. Claude Monet is wonderful; whoever hasn't seen it yet, I suggest you go and see it. This painting is just beautiful. A person of today, like me and like you, approaches this painting and just loves it. It's so friendly, so intimate; it's family, it's private. I learned from the description at this exhibition that this painting was a scandal at the time when Claude Monet presented it to the critics, to the Salon in Paris, and they refused it.
They said: this painting is a scandal, it must not be exhibited to the public. It is absolutely impossible for such a private scene to be made public.
Why? At that time, around 1860, I learned, paintings of this big format, this huge format, were reserved only for religious topics, for ethical topics, or for historical topics, but for nothing private. And this is not only private, as we see the family sitting there having lunch; it is very private.
Everybody knew that the woman in the middle was Claude Monet's wife, except that he was not married to her: an unmarried relationship. And that was their child, an illegitimate child, a bastard. These things were not to be discussed in public, not to be shown to the public. So today we don't understand what the problem with such a picture is. But if we shift to today: we blame our kids for putting everything out in public, on Facebook. Why do you put all your private things on Facebook?
This does not belong in public; or maybe you do not really know how public this is. But maybe these young people, this young generation, mark a coming change in privacy; that privacy will change in the same way. It is not only that we have to learn to come back to our old approach to privacy and teach ourselves to restrain ourselves; the whole approach to privacy may be due to change. So I would like to call this painting the first Facebook scandal in history. Now, to a computer scientist's understanding of privacy.
The first step to understanding is: privacy in relation to what? Obviously, hiding something from the public; but what kind of public? There are at least two very important publics in modern life. One public is the governmental public, the state public, and the risk is the NSA. Without Edward Snowden, we wouldn't have this huge increase of interest in privacy issues.
The other one is the economic public. That is the customer towards the service provider, or the employee towards the employer.
And there is a private area which is to be protected against this kind of business public. This is Facebook, this is Google, our relationship with them and our privacy concerns. And these two directions are very different from one another; not very, but different enough to take into account when building technology. We have to build different technology for protecting our economic privacy, doing e-business and e-commerce or being an employee, than towards the government, for not being observed in our political behavior.
Privacy, in the first place, for the computer scientist, was always a legal concept. We learned from legal experts, from the jurists, what privacy is. And they told us privacy means the following things. It is the right to be left alone. It is a personality right, which is very important: a personality right, not an ownership right. And we have a lot of laws already in the legal tradition which protect privacy outside the IT area: secrecy of telecommunication, inviolability of the home, and so forth.
And then they told us even more: the legal approach to privacy can be broken down into principles, and computer scientists love principles. Because if you have principles, you have an idea what kind of technology you can build, and these principles can be supported by automated measures. By measures, I don't mean metrics; I mean means: functions, technology, organizational measures and means.
So the first principle we learned from the legal experts is purpose binding. Personal data can be processed, collected, and stored if there is a specified purpose, and then they can be used only for this purpose and for no other purpose. And if you have a purpose principle, then you can also formulate the data minimization principle, because this goes with purpose. Minimization means: it is minimal if it is restricted to the strict purpose; if it is more data than the purpose requires, then it is not minimal anymore. A wonderful principle; what kind of technology can you build on that? A minimal sketch follows below.
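To make this concrete, here is a minimal sketch, in Python, of what purpose binding and data minimization could look like in code. Everything here is hypothetical and invented for illustration; it is not any particular framework's API.

```python
# Minimal sketch: every personal data item carries the purpose it was
# collected for, and access is refused for any other purpose.
from dataclasses import dataclass

@dataclass(frozen=True)
class PurposeBoundRecord:        # hypothetical name, for illustration only
    value: str                   # the personal data item
    purpose: str                 # the purpose it was collected for

def access(record: PurposeBoundRecord, requested_purpose: str) -> str:
    # Purpose binding: data may be used only for its specified purpose.
    if requested_purpose != record.purpose:
        raise PermissionError(
            f"collected for {record.purpose!r}, not for {requested_purpose!r}")
    return record.value

email = PurposeBoundRecord("alice@example.org", purpose="billing")
print(access(email, "billing"))   # allowed
# access(email, "advertising")    # would raise PermissionError
```

Data minimization then falls out of the same rule: anything you cannot tie to a declared purpose should not be collected in the first place.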
Then consent: you can use personal data if the user consents to it and says, yes, you can have it, I give it to you, I agree. Wonderful for computer scientists, because this is communication, and we can build that, with email or with buttons.
We know how to deal with that.
Legal permission, of course, is another way which allows you to work with personal data. Then transparency and notice. Consent is choice, and notice is transparency: showing to you, telling you, again a communication act. The service provider tells you: I am going to do the following things with your personal data. That is a privacy policy, and expressing policies is something you know how to do with computers: again, by pressing a button, a web page comes up which tells you what is going to happen with your data. Notice and choice is an easy communication act. Then personal control, something like notice and choice plus correction: the right to correct wrong data, to delete data which are out of date. Again, you can build this into communication functions; a small sketch follows.
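Since notice, choice, and correction are all communication acts, they can be sketched as a simple append-only log of the user's declarations. Again a minimal, hypothetical Python sketch, not a real system:

```python
# Minimal sketch: consent as a recorded communication act that can be
# withdrawn later; the latest entry for a policy is the one that counts.
from datetime import datetime, timezone

consent_log: list[dict] = []

def record_choice(user: str, policy_version: str, granted: bool) -> None:
    consent_log.append({
        "user": user,
        "policy": policy_version,   # which notice the user actually saw
        "granted": granted,         # the user's choice
        "at": datetime.now(timezone.utc).isoformat(),
    })

record_choice("alice", "privacy-policy-v3", granted=True)   # choice
record_choice("alice", "privacy-policy-v3", granted=False)  # withdrawal
print(consent_log[-1]["granted"])   # False: the latest declaration wins
```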
External control is a very important principle, which takes the burden off the individual user. There is a control which is done by somebody else, probably better equipped to do it. And if you are too weak, you can find help; or you even get the help without asking for it, because it is already organized, by data protection officers, the Datenschutzbeauftragte, in different areas, who do this work instead of, and in help and support of, the individuals.
And then, of course, confidentiality is another very important principle, and computer scientists love confidentiality, because this goes with encryption. Encryption goes with mathematics, and this lies deep in the basics of academic life. And I think it is not by chance, it is quite natural, that the confidentiality principle is the one which is best understood in academic life and in basic research.
Well, and if you have these principles, you can also organize your risk assessment around these principles. And that's what we are doing.
Even in consultancy with firms, when making a privacy risk assessment, they would always ask: what does that mean? We can say: we make a risk assessment against these principles. And then we go through the principles one by one and do the risk assessment. Another very important distinction which we make, for technology and organization around privacy, is the distinction between self data protection technology and system data protection technology.
Well, self data protection means tools in the hands of users, which they can use and which help them enforce their privacy, even if the world outside is bad. Awareness, of course, is the very first thing; awareness is an individual thing: you get educated, you learn it, you become literate. Or abstinence: you don't use it. You decide: no, I am not going to this service.
You make a choice: this service, but not that service. Or you download and use tools like NoScript, Ghostery, or an ad blocker in your browser, which, by the way, I recommend to you: very easy to use and very effective self data protection tools.
And then, of course, end-to-end encryption, which can be done between the service provider and the user: all communication between the two is encrypted, and no intermediary, not even the internet provider or the NSA, should be able, I don't dare to say is unable, to look into it.
Why don't I say it is a hundred percent unable? Well, of course the encryption mechanisms must work. So there is still a remaining trust in something; but at least if you use safe systems, then end-to-end encryption is such a self data protection tool. A small sketch of the idea follows.
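As an illustration of the principle, here is a minimal end-to-end encryption sketch using PyNaCl, the Python bindings to libsodium; the exchange of public keys is assumed to have happened safely out of band, which is exactly the remaining trust just mentioned.

```python
# Minimal sketch with PyNaCl (pip install pynacl): only the two endpoints
# hold private keys; any intermediary on the line sees ciphertext only.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # stays on Alice's device
bob_key = PrivateKey.generate()     # stays on Bob's device

# Each side combines its own private key with the peer's public key.
alice_box = Box(alice_key, bob_key.public_key)
bob_box = Box(bob_key, alice_key.public_key)

ciphertext = alice_box.encrypt(b"meet at noon")   # what the network carries
print(bob_box.decrypt(ciphertext))                # b'meet at noon'
```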
System data protection is another thing. This is technology and organization which is, so to speak, built in; it is embedded, it works without users needing to do anything, and it takes all the burden from the users. Data minimization, for example, cannot be enforced by a single user, unless the user makes sure only to provide those data which are restricted to the purpose.
But if, for example, you would just like to surf the internet anonymously, you can't do that with an individual decision. If there is, for example, a mix net of anonymizing nodes, this could help with anonymous surfing; but this must be a service provided to the users. The same goes for the deletion of purposeless data.
If data are out somewhere in the network, a user has no power to delete them, unless there is some service, some system, which does it for you. Minimization, I said it already. Another good example is IPsec, where on the IP level, the internet protocol level, there is already encryption, and the users don't even notice it. It is organized by the internet providers at the lowest level; this is good help, users don't need to do anything. Or SSL, the browser-to-server encryption with the lock icon you see, which is used with HTTPS.
So if you do your home banking, then you have SSL encryption. As a user, you don't have to do anything with it; it's built in, it works on its own. It works fine.
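The user-visible part of that is nothing more than this: open an https:// URL and the TLS layer underneath negotiates encryption and checks the server certificate on its own. A minimal Python illustration:

```python
# Minimal sketch: transport encryption as system data protection.
# Certificate verification is on by default; a forged certificate would
# raise ssl.SSLCertVerificationError instead of returning data.
import urllib.request

with urllib.request.urlopen("https://example.org") as response:
    print(response.status)   # 200, carried over an encrypted TLS channel
```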
Again: it should work fine. So there, too, you need some trust that this technology is really working, and as we know, it is not always working. This is what we call privacy enhancing technology: we have the legal principles, we have ideas how we could implement these principles by technological and organizational means, and we have done a lot in the past to do that. And I must admit, we failed at a lot of it; a lot of failure over there. PET means privacy enhancing technology and is everything that supports privacy.
The contrary would be called PIT, privacy invading technology. The bad guys are PITs; the good guys are PETs.
Here is a PET typology that computer scientists have built up and are using to this day: PETs which work through communication with users; tools for encryption; tools for anonymity and pseudonymity; filter tools, to keep away, for example, trackers; policy tools, which explain privacy policies or negotiate privacy policies.
And then number six, which I would like to emphasize a lot: rights management, especially in the modern mobile world. So, you download an app. By the way, the word app: every one of us knows, of course, what an app is.
But I think three or four years ago nobody knew what an app was; there were no apps. It's a very new thing, and everybody uses it now. And on which principle are apps built? On the principle of notice and choice. There is a notice: before download, the app tells you, this app wants to have this and that permission from you. And now you can select yes or no; that is the choice. You say yes, and then you have agreed to the access to your personal data, as specified in the notice. That is rights management; a sketch of this permission gate follows.
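The mechanics of this notice-and-choice gate are simple enough to sketch in a few lines; the permission names and the all-or-nothing install step below are hypothetical simplifications, for illustration only, of how mobile platforms of that time behaved.

```python
# Minimal sketch of the app permission model: the app declares what it
# wants (notice), the user answers once (choice), and later access is
# gated on that single answer.
REQUESTED = {"contacts", "geolocation", "microphone"}   # the notice

def install(requested: set[str], user_says_yes: bool) -> set[str]:
    # All-or-nothing: agree to every permission, or get no service at all.
    return set(requested) if user_says_yes else set()

granted = install(REQUESTED, user_says_yes=True)

def read_contacts(granted: set[str]) -> str:
    if "contacts" not in granted:
        raise PermissionError("contacts permission not granted")
    return "contact list"

print(read_contacts(granted))   # works, because the user had to say yes
```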
And I think it is pretty obvious that this way of rights management with mobile phones, with smartphones, does not really work. This is a flawed way of asking for the consent of users, more or less.
I believe, and I can argue for the "more or less": we are helpless with this way of deciding. We cannot really decide, and most often we decide the wrong way, and we must decide the wrong way, because otherwise we would be out of the service. Let me tell you a little bit about data sources and traces. What is the reason that all these data are collected? What are the opportunities on the side of the data collectors? Law enforcement and fighting crime is a wonderful reason.
And we say: yes, we want to be protected against the bad world. Predictive policing: drawing conclusions out of data about where the next crime might happen, and organizing police work so as to prevent these criminal activities.
That sounds very interesting; if it works well, if it is not misused, we say okay. But remember the first picture I showed you: there might even be some wrong decisions coming out of it. Scoring for insurances: on the one hand, we have the feeling,
no, I do not want to be a prisoner of these metrics, of these algorithms. On the other side: yes, but to have a fair insurance, maybe, or a fair money credit, maybe there is some sense in it. Customer relationship management:
well, I think every customer likes to be treated well by the service. So if there is a well organized customer relationship, well, I agree to that. If there is too much profiling across different branches, then we have the feeling: that's too much.
Again, human resource management: before you are employed, you are already screened completely.
And everything is known about you. We feel that's too much. On the other hand, that's what we all do; that's what I do myself. Before I get into contact with a new employee, or even a colleague to be called to the university, I look at the internet: what do I know about this person? What can I find out? So there is some good opportunity, but a huge area of misuse. User-specific advertisement, we hate it; and political persecution, we hate it.
But if we take the very first and the very last bullet point, again, they are not so different. It depends on the state and the culture and the society in which you are whether this is political persecution, which we claim happens in China or Russia, or whether it is crime, where they would say: this is not a political issue,
it is a criminal act in our country. So these two things, at least technologically, are more or less the same. So this is one view on the opportunities for data collection. And there are three types of data sources, as far as I have observed.
Number one is well known and is discussed everywhere, and we are aware of this data source: one's own action. I agree, for example, through the apps, or when I make a contract through e-commerce: I agree, I provide the data. I act on the internet or with a mobile phone, and I am observed in the data I use. Or Facebook: we tell our kids, don't put so much of your private life out in public. This is your own action; don't do so much of it on the internet. But this is not the only source, and maybe not the most important one.
Number two is very interesting.
This is data about the individual, about you, which you have not put into the net, which others put into the net. Alumni lists, for example: every school does that, every university does that, every club does that. Or the credit reports of Schufa in Germany: I don't put these data into the network, this is done by the services in the background. Or if there is a newspaper article about you, then something about you is in the network. These data are not put into the network by you, but nonetheless they are still related to individuals. Number three is very interesting; it is becoming the most important one.
And it is completely underestimated as a very important source. These are data which are not about individuals, and they are collected by global measurements, like global metrics. So there is a survey of habitation, occupation, behavior, education.
It is all put into metrics, and every person, every individual, in this huge metric space would build up an individual vector; but it is not yet placed, as long as it is not in the network. Yet the measurement is already there; the metrics are there. And then you do just one action,
and then already you are known, and in the whole metrics, the whole world, you are already placed, and there is a full profile about you, although you have done only very little to provide these data. And this is, so to speak, the most risky way of building up the data, because no individual can do anything against it. So you are asked, for example, on a purchase, before you pay: please give me the postal code of the city where you come from. The postal code of my city is not individual data, so you give the postal code. But this is exactly what happens here.
It is another data point put into the surveys of the whole world, and you provide data which are then kept, which then build up and identify individual vectors whenever you do a small action in this world. Not about myself as such; these are trans-individual clusters. After just one match, the vector is placed in the cluster that has been built, and then you have a very precise scoring of individuals. A small sketch of this placement follows.
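A minimal sketch of this placement, with made-up numbers: aggregate surveys define cluster centroids in a feature space, and a single observation, say, features derived from one postal code, is enough to assign an individual the full profile of the nearest cluster.

```python
# Minimal sketch: trans-individual clusters from aggregate data; one
# small action places the individual and attaches a whole profile.
import numpy as np

# Centroids learned from aggregate, non-personal survey data
# (features might encode income level, urbanity, age band, ...).
centroids = {
    "urban-young-affluent": np.array([0.9, 0.8, 0.3]),
    "rural-older-thrifty":  np.array([0.2, 0.1, 0.8]),
}

def place(observation: np.ndarray) -> str:
    # One observation suffices: the nearest centroid assigns the profile.
    return min(centroids,
               key=lambda name: np.linalg.norm(centroids[name] - observation))

one_action = np.array([0.85, 0.75, 0.35])   # e.g. derived from a postal code
print(place(one_action))                     # 'urban-young-affluent'
```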
Traces by one's own activity: well, yes, on the web, by email, by social networks, by telephone. The most interesting and most modern way is by mobility. You know what they always ask when you download an app: full internet access; read contact data; I want to read the telephone status; I want to read your geolocation data; I want to actively call the installed applications; I will switch the microphone on and off; and so forth. These all establish traces, by mobility and by usage.
Now, how do we do the risk analysis of privacy? I will do it only by a small example. To understand risk, it is important also to understand the other side of risk, which is trust. If there is a risky situation, you have two ways to deal with it. One is: you control the situation and mitigate the risk that way. Or, if you cannot control it, then you must trust somebody else who controls it for you. In this model, on the left hand side, you have the trustee, whom you trust; and this might well be the technology, the privacy technology, that you have to trust.
There is the trust propensity: the person who has to trust has some individual propensity to trust more or less. And then there is the situation into which you go, the perceived risk. And this might be only perceived, which is the toothbrush, or it may be an actual risk; but the decision goes through the perceived risk.
Then you take a decision and go into the relationship, and then something comes out: either your trust is broken, and then you will take another decision the next time, or you feel well with this trust.
And then you can go into the trust again. So risk is the situation into which you go. And with the risk assessment of privacy, you would go through the different privacy protection principles, identifying: what is the subject of trust? What is the risk, what can go wrong, by likelihood and by amount of damage? How can you limit the risk, what is the measure? And then, once you have taken some measures to mitigate the risk, there is still a remaining risk; you will never come to a zero point
where you can say: there is no risk left. Even that remaining risk is to be identified. And it also helps to identify the reason why, after having taken all these measures and accepting the remaining risk, you have a reason to trust.
Is there an exchange of goods, something like tit for tat? Is there a common value, a cultural value for example, or a law which helps you? Is there a contract? Is there a common interest which makes it very likely that your partner will behave well, so that you can trust the partner?
So you go through the risk with purpose binding, data minimization, consent, legal permission, transparency, and so forth, through the single principles, and then you can identify pretty clearly and precisely the single elements of privacy risk. A small sketch of such a walk-through follows.
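One way to organize such a walk-through is a simple score per principle: estimate likelihood and damage, apply a mitigation factor for the chosen measures, and report what remains, which is never zero. The numbers below are illustrative placeholders, not calibrated values from the talk.

```python
# Minimal sketch: per-principle privacy risk with residual risk after
# measures. risk = likelihood * damage; measures mitigate but never erase.
risks = {
    # principle: (likelihood 0..1, damage 0..10, mitigation 0..1)
    "purpose binding": (0.6, 7, 0.3),   # few enforcement tools exist
    "confidentiality": (0.4, 8, 0.8),   # encryption mitigates well
    "transparency":    (0.5, 4, 0.5),
}

for principle, (likelihood, damage, mitigation) in risks.items():
    raw = likelihood * damage
    remaining = raw * (1 - mitigation)   # the risk you must accept, with reasons
    print(f"{principle:16s} raw={raw:4.1f} remaining={remaining:4.2f}")
```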
Of course, I can't do all of that here. Just as an example, let me do it with confidentiality. The risk with confidentiality is: even if you have the best measures, the trustee, the one with whom you communicate, transfers the data to others. Even if the communication line was very well encrypted, your partner, of course, gets all the information.
And now you have to trust that your partner keeps the confidentiality.
So which measures do you have to enforce that your partner keeps the confidentiality? That is one risk. Or maybe you trust that the partner has integrity and will not make an onward transfer, but is perhaps careless: is not really able to keep confidentiality, applies no measures, or applies the wrong measures. For example, there is no server protection when you do your home banking and the SSL on the server doesn't work properly. Or there are protection measures, but the protection measures are vulnerable. So these are three examples of risks.
With confidentiality, you put measures against these risks in order to limit them, and then you come to a remaining situation. What are the remaining risks? A lack of integrity or benevolence of the trustee will always remain, even if you have the best measures; and you still need to trust the system.
So even if you have built in encryption, you must trust the encryption. How do you deal with this risk? A wonderful tool: I encrypt everything, asymmetric, with huge keys. But are the keys in the right hands? What does the NSA do in the background? And so forth.
So there is still a remaining risk for confidentiality. My working hypothesis on this is: institutional trust in the medium strengthens the trust in the trustee. So we should build a better medium, should build better tools and make these tools usable. The medium's security must work more automatically, not so much that users always have to decide. And a very interesting research question, by the way: does technical knowledge strengthen or weaken our trust in the medium? The more you know, do you feel better or worse? I know so much about the medium, I know what can go wrong; so that knowledge might weaken my trust.
There seems to be an upper limit. As for the limits of PET: you do the same with all the principles, and I will skip all of that for time reasons.
Let me again take just the last one of all the principles: external control.
Again, with confidentiality, there are reasons for distrust: business with personal data is profitable, there is a lot of money in it, and that is a big reason to be very, very careful with it. Broken security is very hard to detect: even if confidentiality is broken, there are very few means to find that out. IT forensics may be a solution for the future, but we have almost no technology to prove or enforce purpose binding, and then to prove that data have been leaked at this or that place.
So the measures require personal integrity and benevolence of the partners, which is maybe a cultural or educational thing, plus technical and organizational integrity of the medium. Working hypothesis, again: a combination of network and end-to-end encryption. It's not one or the other; you must put them together.
Network encryption is system data protection; end-to-end is self data protection. Plus ethics, which is awareness, teaching, education; plus external control, which helps even if you are helpless, even if you are not well educated or not well informed. Measures for purpose binding are required; this is really a huge research gap. Finally, hypothesis one: individual actions do not lead to individual disadvantages, but they lead to social damage. This goes with my third data source: if I provide personal data about myself in most places, I personally will not have a disadvantage from it.
"I have nothing to hide", which is more or less true; but it doesn't help much, because you build up the huge metrics of the world for all the others.
It is a societal problem, and then of course it also damages individuals, mostly others. Individual consent and transparency are an overload for users and remain toothless without external control; the mobile communication permission model is the example. It's an overload; we are not really able to take the right decisions, and we need technology which unburdens users. And data networks should be anonymous on the network level already.
So data has to be taken out of the network. Of course we need personal information for personalization, but not on the network level; this must happen on the content level. This is a very nice claim, because in modern life you cannot really separate
the two. For example, Facebook is both: it is a network provider and it is a content provider. So it must be able to do both: help with anonymous communication and with personal communication. So that's what I wanted to tell you. And if you want to learn even more, I have some references for you which you can use for further reading. Thank you.
Thank you very much.
Well, I offered the perspective a bit earlier that everything is happening at the intersection of law and technology, law and IT, and I think this is what you brought across as well. You started off with the legal ideas. Some people call them pretty ancient; I do sometimes too, when I'm not having a good day believing in my own laws. Not that I made them, but they are my laws in the sense that I have to defend them sometimes. And I had hoped that you could give me the message that, whatever conception of law there is, technology would be able to bridge it into practice.
And in a way you gave me a clear yes and no. Yes and no; is that my right understanding? Yes.
Yes. I think one of the most important pieces of work we do now, and have done for a couple of years already, is to bring law and technology together in another way than was traditional. The old way is that the law shapes technology which makes sure that the law is kept; and we learn that technology breaks the law, that most things happen outside of the law, and we know the law doesn't work. So the law also must learn to cope with the new reality, and the new reality is mostly built up by new technology.
So all these apps, WhatsApp for example, or Facebook, are a new reality. It came in by technology, and we have no law about it; at least no law which really works. So...
While the law would work if the world weren't around it, right? That's right.
In theory it does work, as technology always works in theory. That's my personal comment.
However,