Dr. Jacques Bus, Secretary General, Digital Enlightenment Forum
April 19, 2012 9:30
Well, we have our third presentation of the morning now, from Dr. Jacques Bus, whom those of you who were here for the awards ceremony met last night; he was participating there. He is now the Secretary General of the Digital Enlightenment Forum, after a career which made him very well known with the European Commission. And he has one of the shortest titles: at least, everybody else has long titles in the program and short titles on their slides. Here we have trust and complexity in digital space, and we have added self-organization. Thank you very much.
Okay, thank you very much for the introduction. I will not give a very technical presentation. The title is long because I wanted, of course, to express complexity, not just complication, by the way, but complexity. I would like to move a little away from the technology environment, try to place it in a broader context, and see what the consequences are and what it means for the kinds of things we have been discussing here at the conference: issues on trust, platforms and all kinds of other things. So it is particularly about trust.
What I will do first is go a little bit through the normal thinking about trust: what it is, and trust and security, which is of course part of that relationship. First of all, security and safety in general are essential for people, and they have always driven their organization in society, their self-organization, their organization in communities. People brought themselves together in groups that could be defended.
And they created certain relations with other groups, which were close or not close, or with whom they had war. But nevertheless, they always organized themselves based on the fact that they wanted to feel secure, do their own things and live securely. And trust is a mechanism that facilitates this social behavior.
It means you are replacing something: you do not have enough knowledge to say securely that everything is all right, but you have a sort of indirect way of finding out whether you feel that something is indeed trustworthy, and that you can indeed be secure that things work as you would like them to work. So basically, if everything is secure, you don't need trust. And trusting while things are certainly not completely secure can be very natural; if you think about the mother-child relation, for instance, it can be extremely naive.
There can also be no trust at all: there are known societies based on distrust instead of trust. Trust can be based on norms, on contracts, on law, on institutions and so on. It can be rational, it can be intuitive, it can be completely irrational, and it can be a mix of these. So if we look at some views of trust, first of all we might say that it is a relationship of reliance, of a trusting person towards a trusted entity.
So it is based on a prediction, or particularly a belief, of reliance, a belief in the trustworthiness of the trustee in the absence of full knowledge. I think it should be very clear that this is the normal social way of thinking about trust. I know that particularly in the IT environment the words trust and trustworthiness are used in a completely different way, far more towards complete-security thinking. But I want to start from the social standpoint in the first place, in order to understand some of the difficulties that arise when you build systems in the end.
So trust provides a mechanism for the truster, the one who needs to trust, to compensate for the shortage of knowledge. It helps him deal with social complexity and with the uncertainty that goes beyond rational reasoning. And there are several aspects. The social one: sociology covers a whole series of aspects, the relationships, the belonging to certain communities and so on, which play a role in the social side of trust. Then psychology: the belief in the honesty of another person, or the idea that you understand him, in order to be able to trust him.
Then of course the economic issues of the market, where the organization of the marketplace plays an important role, but also, as you know, reputation, brands and whatever else is there for the consumer to trust a company or a product and to go ahead with it. And then, of course, politics.
And I think this is an important issue that needs to be looked at; I come back on it also in relation to some discussions I heard yesterday. Politics means the institutions we have built up, in particular this institutionalized trust, in order to find a balance in the power relations in society. Because if one party can enforce the behavior of the other, then the other can try to trust it, but whether he trusts is not very relevant: even if he doesn't trust, he still has to act as if he trusted the tyrant, the one who is controlling him completely.
So this particular part of trust is very important if you think about the constructs we want to build in the digital society. Then we have the philosophical aspect, which is based on the fact that it is in the nature of humankind to trust, and the norms that come with that; whether they come from heaven or from somewhere else I am not quite sure, I leave that to you to decide, but that is what we would discuss if we think about the philosophical side of trust.
And then of course the systems-science viewpoint, with which you are probably more familiar: the overall system in which we are working, how that system is definable, and how we can create procedures and tools and architectures in order to, say, model trust and trust behavior. So, some views. As I said already, trust is in the cognitive category: it is a belief in the trustworthiness of the other. I also mentioned that it reduces complexity; it allows us to cope with uncertainty.
Nibo mentioned a couple of mechanisms that are particularly used when we are building trust relations, and I think it is useful to look at them in a little more detail. There are five things. First, history and reputation: you learn from experience, and a reputation is built up from experience. Then the inference based on personal characteristics: the conclusions you draw from the discussion you have with a person, from seeing the person, how he is dressed, how he is talking and whatever; from that you can conclude certain things.
These are often based completely on prejudices, by the way, but nevertheless this is how we do it. Then the relationships in communities, in families, one-to-one friendships and so on. Role fulfillment, of course, is an important thing.
We trust the surgeon to do surgery, but we do not trust the pilot to do surgery. So it is the role in a particular situation which creates trust or not, and that also has some institutional consequences. And then, of course, the context: the group, the community, the social norms that are valid in that particular culture, the law, the punishment possibilities, the insurances we have, the liabilities that are there, and all these kinds of things that are relevant.
If you look at human beings communicating on the internet, that means using the digital world as it is now, then we conclude that basically we are missing identities. It is difficult to connect attributes to a particular person; it is difficult to have this relationship and to find these psychological things.
So it really creates an enormous extra barrier to building trust between persons. But of course we should be careful that in solving that problem we do not create another problem by no longer being able to live a private life, for instance by no longer being able to be anonymous. We also miss the personal characteristics.
We don't see many things of the other person on the other side of the line, and it is difficult then to find out what is happening. By the way, if you don't know the identity, it becomes very difficult to have a transaction with someone, because the transaction is, in your view, connected to the particular identity you have in mind, and it might be somebody else. And then there is something that is coming up more and more.
It has certainly been mentioned yesterday already in various contexts: the inscrutable, perplexing, confusing, obscure contexts that are created by this very intransparent technical environment that is being built, and which make it difficult to understand how we have to react. So these are things we have to find solutions for, if we want to build a digital society where people can still live in it, use these digital tools and trust each other.
Let me say a little about trustworthiness and trust in general, and what the parameters are, as I see them. First of all, trustworthiness I consider a property of the trustee, the one to be trusted, in a given context, for a given task. This is important, of course. And it should be clear that it is not measurable, and it is not binary, yes or no. It can be compared, in the sense that I can trust one person more than another person.
And I know that for myself, because it is a belief in the first place; for me it then means that that person is more trustworthy than the other person. It is not symmetric:
if I trust you, you do not necessarily trust me. It is also not transitive, in the sense that if I trust a person and that person trusts somebody else, I do not necessarily trust the third person, although it might help to improve my trust if my friend trusts you too. So that is clear. And of course there are mechanisms; we need mechanisms to obtain information and various things. So you could think about a kind of mathematical relationship: you could define trustworthiness as a relation involving the one that has to be trustworthy,
Y in this case, where it is about the behavior of Y within a context T, which conforms to a representation R that is given about Y and provided by a certain agent A, who provides this description, this representation of Y, for that particular set of contexts. So you could define a number of parameters here which I think are very important. And then if you say X trusts Y, it means that X believes that Y is trustworthy.
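[Editor's note: the relation just described can be sketched informally in code. This is only an illustrative formalization, not something presented in the talk; all class and attribute names are invented. It models trustworthiness as a claim Tw(Y, T, R, A) and trust as X's belief in that claim, which makes visible why trust is a property of the truster, not of the system.]

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TrustworthinessClaim:
    """Tw(Y, T, R, A): trustee Y behaves within context T in
    conformity with representation R, provided by agent A."""
    trustee: str         # Y: the party to be trusted
    context: str         # T: the context/task of the behavior
    representation: str  # R: the published description of Y's behavior
    agent: str           # A: who provides the representation

@dataclass
class Truster:
    """X trusts Y iff X *believes* the claim: a belief, so it is
    neither symmetric nor transitive, and not derivable from Y alone."""
    name: str
    beliefs: set = field(default_factory=set)

    def trusts(self, claim: TrustworthinessClaim) -> bool:
        return claim in self.beliefs

# The same technically sound claim may be believed by one truster
# and not by another.
claim = TrustworthinessClaim(
    trustee="biometric ID system",
    context="citizen identification",
    representation="functional specification",
    agent="system vendor",
)
alice = Truster("Alice", beliefs={claim})
bob = Truster("Bob")            # same claim exists, no belief held
print(alice.trusts(claim))      # True
print(bob.trusts(claim))        # False: trustworthiness is not trust
```

The point of the sketch is exactly the speaker's: widening the context T or the set of agents A produces a different claim object, so a belief in the narrow technical claim says nothing about the broader societal one.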
This is more to give you some idea of the parameters around it, so that I can go a little further into what we can say and think when we talk about trustworthy technology. You could say, based on this particular definition, that a technology is trustworthy if its behavior within a given context, that is, the functionality of that particular technology system, is in conformity with the representation as it has been published by the accredited agent.
That means with the functional specifications, with the sales brochures and whatever, which are given by the software maker and by the one who is selling it, for instance. And that means that often it will be secure, robust, reliable and compliant. That looks like a rather logical definition of trustworthy technology. But what is the problem if we do that? Just one particular example. Say that an agent, or let's say a government, introduces a biometric identification system, which is then the thing to be trusted.
The system is extensively tested in a particular context, with a lot of user involvement and all kinds of other things, very nicely architected and designed, maybe even with privacy-by-design thinking included. The company who makes the system has a very high reputation; it has done everything nicely, according to the specifications, tested well and so on. And then the system is there.
And you could say, from the viewpoint of the system designer, the system analyst and the system producer, that this is a trustworthy system and that the user should trust this particular system. But in general it doesn't work like that at all, because it might very well be, and it is even very likely, that the user does not trust the biometric ID system in the normal sense of the word. That means the technical trustworthiness does not automatically lead to the user's belief, to trust in the system. And that can be for many, many reasons.
And often that has not much to do with the system. There is this government: I might think that the government doesn't have the right to take my fingerprints at all, that it shows distrust, and I don't like that. I might think that the security of the system is not good enough, and fear that my identity will be stolen. Governments are always prone to function creep:
you never know, if you do something today, how they are going to use that data later. And of course the government uses its power position to push on me a system which I consider not beneficial, not for me and not for society at all. There are all kinds of other reasons, which means that the statement that users should trust the system because it was trustworthy in the technical sense does not mean that the user really trusts the system itself.
And that is the basic problem we have all the time when we are designing systems: we think we have done it all so beautifully, and basically it is not accepted. And the reason is quite easy, as you saw in the arguments.
In fact, the context T has been widened from the normal sense to a context which is far broader. It includes the arguments about the government, your political feelings, your norms and whatever, and so it is not comparable anymore. The representation R of the system, of course, does not cover all these additional elements that I have in mind when I am saying whether something is to be trusted, yes or no.
And the agent that we had first, the producer and the retailer and so on, should also be extended with the government as the owner, with the operators and many others, and even with all kinds of other representations of morality and whatever you can think of. So what we should realize is that, with the same words, we have moved in effect from the deterministic technology realm to a very complex societal realm. And that creates a lot of discussion, and those discussions we see all the time.
And I would like to add to that: do we actually know which technology we trust? Maybe in the case of an identity system, or a biometric identity system, we can still more or less define the boundaries of the system. But if we think about the targeted advertising market, we have no idea what is basically happening behind the scenes when I just go to one particular website, and how that is going to be used in that targeted advertising market.
So the system is becoming extremely complex and indeed inscrutable, and it becomes rather impossible to define properly what the system is; maybe we should just say it is all "the cloud" or so. Summarizing: in the daily discussions on trusting technology, the technology itself is often very complex and difficult to understand for the people who use it. The context extends to the whole complexity of our life, our political and social working and living, and the representations cannot cover these kinds of models.
And the agents must have credibility in this particular technology; there must be good representations, but this is very difficult to achieve in just a technical sense. So there is the belief of X in the trustworthiness of such a system, and in addition, of course, all these variables are also time dependent, which creates an additional element of complexity. Now, self-organization. Let me make one or two steps aside on complexity and self-organization in the general sense.
If I say self-organization, I don't mean the organization by one person of his own life. I mean, in general, how society organizes itself and takes autonomous decisions in order to do something for its own security, or in order to be able to trust another organization or not to trust it, and if it doesn't trust it, what to do to get around that, and so on. That is what I mean by self-organization, and it is quite normal for complex systems in the general sense of the word. And these complex systems are time-irreversible: they cannot go back in time.
You cannot think of them like many mathematical systems that are nicely continuous, where you can go both ways in time and it doesn't matter. That is not the case.
A complex system is self-learning, it is self-aware, and it is non-linear. And that is something people should realize very well: it means that such a system is not equal to the sum of its parts. By bringing more and more parts together it creates emergent behavior, and that can lead to instability, but it can also lead to stable situations. There is data compression: systems are defined implicitly by rules on compressed data, like genes play a role in the definition of, say, the human body.
There are spontaneous creations of new structures in bifurcations, and there is exponential growth. And as I said, trust and security are typical drivers in this particular self-organization in society. So, essentially, complex systems cannot be fully understood by reduction to smaller systems, or predicted by extrapolation, as sometimes happens in economic models, where most of the market parameters for the next year are just suggested by pure extrapolation and nothing more.
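[Editor's note: the non-linearity, bifurcations and failure of extrapolation mentioned above can be illustrated with a textbook toy, the logistic map. This sketch is an editorial illustration, not part of the talk: a one-line update rule yields a stable fixed point, a two-valued oscillation, or chaos, depending only on a single parameter, which is precisely why such systems resist prediction by extrapolation.]

```python
def logistic_orbit(r, x0=0.5, warmup=500, keep=8):
    """Iterate x -> r*x*(1-x) and return the long-run values."""
    x = x0
    for _ in range(warmup):     # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):       # sample the long-run behavior
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

# r = 2.8: the system settles to a single stable fixed point.
print(logistic_orbit(2.8))
# r = 3.2: past the first bifurcation, it oscillates between two values.
print(logistic_orbit(3.2))
# r = 3.9: chaotic regime, highly sensitive to the starting point.
print(logistic_orbit(3.9))
```

The same rule, the same starting point, and a small change in one parameter produce qualitatively different long-run behavior: no linear extrapolation from one regime tells you anything about the next.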
Another side step: the relation between humans and technology. Humans shape the technological world, of course, but the technological world is also shaping human beings, and that means things are changing. That is why people say at this moment that we are losing privacy, which is something that can be discussed; but at least people think that we are indeed being influenced by the technology, and that it can change our life.
And I think a very important part of the discussion about trust and trustworthiness is this: where in the past we had technology that supported the physical activity of people, by creating a car, or having a fork to eat with, or tools to work the land or whatever, now we are creating technology that influences, helps, supports and enhances the thinking activity of people.
And once you realize that, this is an extremely important step over a threshold, and it creates in normal people an enormous fear that we get technology that can basically influence your brain, your body, your thinking, your perception and whatever you can think of. This is far more threatening than getting a car and maybe getting into an accident, and that's all. So it should be very much realized that we are in that situation.
And thinking about trust is then extremely important, particularly if we think about technical trust. With that I come more or less to some conclusions. What can we do if we try to develop trustworthy technologies? We should, first of all, ensure transparency, accountability and open discussion, in order to make sure that people can build up their trust in what we are doing.
So, trust by design, and not only at the design stage but through the whole life cycle. We should develop trust platforms in the sense of, say, the controllable institutions of the digital society, like the institutions we have been building up in our society over the last couple of hundred years that keep power in balance and create democratic control on what is happening. Yesterday I listened to a discussion about trust platforms, and there were two visions.
The one, let me say it very brutally, was the peer-to-peer thinking, the distributed thinking; and on the other side the tyrant, which is also a bit of a brutal term, meaning the one who is very centralistic and organized. But it is not a choice between the two when we talk about trust, and that is why I made my comment about the political part of trust being an important thing. We have been building institutions in our society that try to keep a certain control on this power relation.
And we should go further and try to integrate and transform these kinds of institutions into our digital society. That is difficult, but it is certainly the kind of step forward we can take. We shouldn't go back to the old thinking of the division between centralism and distributed freedom; that is a very old political discussion, and I think we shouldn't come back to it all the time. We should work on how to keep the balance, how to keep control of the thing.
And maybe we should build these kinds of trust platforms, with policies, with certification, with auditing and so on, first in sectors, and then see how they grow together, as in a normal situation with complex systems: you start with all the elements, build them up, and they will organically come together via standards, via interoperability and so on. So we should think in terms of ecosystems, which means thinking in terms of competing entities in the system that will therefore also create rules, in order to make the system work better.
And again, maybe sector-dependent tools and systems first. And of course we should always include the normative part of the systems and the social organizations; we should never think only from the technology view. The norms, what happened in the past with the system, and how the system changes or affects the norms we have in our social organizations: that is very important, and a whole book has been written about that. And then use all the instruments: data protection law, liability law, consumer protection law and contract law.
And that is particularly for the governmental organizations. Then the final thing, for the longer term: let us also try to develop tools and interfaces that really focus on the functional effect for the users, with an understandable, meaningful, but limited set of controls, the way people deal with cars instead of knowing exactly how the engine works. That would stimulate self-learning and it would help create systems that are useful.
And at the higher level, at the higher mathematical level: try to understand complex systems, try to understand user behavior vis-a-vis technology, and try to develop measures of social trust that really reflect the complexity, maybe via statistical distributions and so on. So, in summary: facilitate and speed up the ever ongoing social self-organization process in the digital space. That, I think, is the essence of what we should aim for, if we are making systems that are so important and go so deep into our social life that we might get scared about it. Thank you. Thank you very much.