CIO of InfoCert, and he will talk about digital trust. It's your turn.
Thank you, Martin.
Here you go.
Good evening, everybody. It's been a long day, but very interesting. What I'll try to do is to connect the dots, some of the dots we have seen today, but with a different perspective: the perspective of a trust service provider, which is InfoCert. Our footprint is pretty much all over Europe, with offices in Italy, Spain, and the UK, and we have been growing double digit, very fast, since we were born ten years ago. Why does the market need a certification authority like InfoCert? We have heard it today: because the internet was not built for security.
Now, the internet was built less than 30 years ago, and in less than 30 years it has become so entrenched in our lives that denying access to the internet is considered a breach of human rights, according to the United Nations. We do everything these days over the internet.
We work with it, we have it at home, in the car, in the office. We even do our social networking over the internet. And we do all this.
Now, everything has happened so fast that we did not have the time to build the right governance model around it.
So the need for trust in the digital world comes from this. And what is trust? Trust is basically centered around identity. This is the right conference to talk about trust. Identity is a core part of trust in digital transactions, but alone it is not enough. A trustworthy digital transaction requires a certification of digital identities, a clear liability framework stating what happens when things go wrong, and things can always go wrong,
and a secure process that ensures the integrity, confidentiality, and privacy of all data exchanged in a digital transaction. Because of the critical role played by trust service providers like InfoCert, we are of course strictly regulated, and in Europe we are regulated under eIDAS. eIDAS came into force in 2016. Before eIDAS we had a patchwork of local regulations that made interoperability between countries very difficult.
Within eIDAS you have several degrees of trust, if you want to say so, and to get to the highest level of trust you need to rely on a qualified trust service provider.
The role of the qualified trust service provider is critical within the eIDAS framework, because it is responsible for identifying the parties taking part in a transaction, authenticating them, assessing their willingness to enter into the transaction, and ensuring validation and non-repudiation, and it will do all these activities while carrying the entire liability for the process.
So for the first time, whatever your business is, you can outsource all the identity-related tasks to a qualified provider that will manage them for you and will take all the liability for doing so. eIDAS is so important these days that if you read all the forthcoming directives in the financial space, for example PSD2 or the anti-money-laundering directives, they all refer to eIDAS when it comes to digital identity or trust.
And it works extremely well when we are dealing with digital transactions among physical persons or legal entities. The classical example is opening a bank account.
We manage that process for more than 50 banks around Europe, but the world is changing, and it's changing very fast. The counterparts in digital transactions are more and more being substituted by software agents. We have heard that many times today. Think of the Google I/O conference of last week, where we saw clearly what it means when a software agent replaces a counterpart in a digital transaction without the other party even realizing what is happening.
What is actually happening right now, as we speak, is what Marc Andreessen forecasted in 2011: that software is eating the world.
Software is replacing all counterparts in digital transactions, in every industry. It can be the power grid, it can be the financial system, it can be banking. Software is, as a matter of fact, conquering everything and becoming a critical piece of the services we get, and everything is happening so fast that we did not have the time to get a clear, proper governance model for the software itself. Let me give you an example. Today, if you want to build a bridge, a power grid, or whatever, you need a specific qualification to do so; it can be a degree, something like that.
If you want to build the software that manages those assets, you don't need any of those certifications.
So there is no clear licensing, let me say, of the people that build the software, there is no clear liability framework for software producers, and there is no accountability schema when pieces of software interact with each other. Let's take one example we heard many times today: autonomous vehicles. And I like this example because it is straightforward that soon, very soon, we will all be sitting in autonomous vehicles.
And this is going to happen because, I don't know if you know the statistics, but every year 1.3 million people die in car accidents, and between 20 and 50 million get injured or disabled because of car accidents. This is happening because humans are terrible drivers. So there is no doubt that autonomous vehicles will do better, because they don't get drunk and they don't get tired. They will do a better job, but we should think now about how to manage the process.
Sorry.
Today, a modern car is a bunch of software agents, hundreds of thousands of software agents that interact with each other, get data from the surrounding environment, be it other cars or the street infrastructure, have incredible processing power to analyze those data, and take decisions and actions: activate the brakes, activate the steering wheel. So the software has an impact on real life. And the question is, can we trust this? Do we think there is a clear trust framework to guarantee this behavior?
Actually, we should ask ourselves three questions: first, whether each single piece of software involved in this decision can be trusted; second, whether the interaction among software components can be trusted; and third, whether the AI behavior involved can be trusted. The short answer to all three questions is no, and let me give you more insight on that.
I mean, today software is very poorly written.
It's very poorly written because there is no incentive for the companies that develop software to aim for high quality; the market buys on price. And there is no specific qualification needed for the people that develop software, as we said before. So we cannot trust any single software component. And even when software components are of high quality, there are insecurities that emerge from the interaction between software components. A typical example is high-frequency trading algorithms.
You probably know that investment banks these days rely on software to do all the trading, because it is faster and more accurate. But what we have seen happening many times is the phenomenon called a flash crash, where two trading algorithms start trading one against the other.
They go into a selling spiral and cause, in minutes, a crash of the market, up to the point that a human being realizes it, unplugs one of the components, and we get back to normality.
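To make that dynamic concrete, here is a minimal sketch in Python, purely illustrative and with hypothetical numbers, in which two identical momentum-driven selling rules react to each other's price impact and push the price into a spiral:

    # Minimal sketch (illustrative only): two momentum-driven selling rules
    # feeding on each other's trades can drive the price into a spiral.
    price = 100.0
    history = [price]

    def agent_sell_volume(last_drop):
        # Hypothetical rule: sell more aggressively the faster the price falls.
        return max(0.0, last_drop * 10)

    for step in range(20):
        last_drop = history[-2] - history[-1] if len(history) > 1 else 0.5
        # Both agents react to the same drop; their combined selling moves the price.
        total_sold = agent_sell_volume(last_drop) + agent_sell_volume(last_drop)
        price -= 0.1 * total_sold  # simplistic linear price impact
        history.append(price)
        print(f"step {step:2d}  price {price:8.2f}")
        if price <= 0:
            break

Each rule is sensible in isolation, but once its selling feeds the other agent's trigger, the drop accelerates at every step, which is exactly the kind of emergent, irrational joint behavior the speaker describes.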
So we have pieces of software that behave in a rational way when they are alone, but that become irrational when they start interacting with other pieces of software. The third aspect, maybe the worst, is AI. The point with AI is that AI behavior is not intentionally built.
What I mean is that the behavior of an AI depends in part on the neural network that some team is developing, but in part on the training that the AI receives. The neural network gets fed with a huge amount of data, with the goal of identifying patterns in those data. Those patterns will determine the behavior of the AI. This means that not even the software developers who built the system know how the system is going to behave in real time, and why it is behaving that way. And you have to add that once the system is released to the real world,
it keeps on learning, it keeps on changing its behavior. This makes the trustworthiness of the system even harder to assess. So we are fast moving toward a world where software will permeate everything, and we should be asking some basic questions about how trustworthy this is going to be.
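As a minimal sketch of why the builder alone does not determine the behavior, here is a toy example, with hypothetical data and a deliberately simplistic one-weight model, showing that the same training code produces different decisions for the same input depending only on the data it was trained on:

    # Minimal sketch (illustrative only): identical learning code, different
    # training data, different behavior for the same input.

    def train_threshold(samples, epochs=50, lr=0.1):
        # Toy one-weight "model": predicts 1 if w * x > 0.5.
        w = 0.0
        for _ in range(epochs):
            for x, label in samples:
                pred = 1 if w * x > 0.5 else 0
                w += lr * (label - pred) * x  # perceptron-style update
        return w

    data_a = [(0.2, 0), (0.9, 1)]  # one hypothetical training set
    data_b = [(0.2, 1), (0.9, 1)]  # another hypothetical training set

    w_a = train_threshold(data_a)
    w_b = train_threshold(data_b)

    x = 0.5  # the same input, e.g. the same sensor reading
    print("model A decision:", 1 if w_a * x > 0.5 else 0)  # -> 0
    print("model B decision:", 1 if w_b * x > 0.5 else 0)  # -> 1

The code the developers wrote is identical in both cases; only the data differs, and with it the decision.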
It's not easy to come up with a trust framework for the software world. That's why we don't have it yet.
I mean, in Europe, thanks to eIDAS, we have the most advanced trust framework for the digital world when it comes to physical persons or legal entities. What makes things much more difficult in the software world is the international nature of software. When we speak to Siri on our phone or to Alexa in our home, we are speaking with a software agent that can be anywhere in the world, in any jurisdiction. And the next time we talk to the same software agent, it can be relocated to a different jurisdiction, and this makes things particularly difficult to address.
In any case, a proper, solid trust framework for this world will need at least three pillars. We need a technology pillar on which to build this trust framework. Technology is required, but it is not sufficient. For example, let's take cryptocurrencies. Cryptocurrencies are based just on software; in theory they are pure perfection, they cannot be broken. But what happens if the cryptocurrency in your wallet disappears? There is no one you can complain to, no one you can claim against for the loss of your wallet. This is not theory.
According to public figures, the amount of cryptocurrency fraud exceeds one billion as we speak.
So technology is important, but it's not enough. The second pillar is people. People are the human factor within which such technology must work, and they are often the weakest link of any trust framework, because they are not educated about and not aware of the digital trends. We as a society need to work on this aspect. And the third pillar, in order of increasing complexity, is regulation.
To properly address the trust of software, we need a regulation that cannot be country-based, that cannot be a patchwork of local regulations, because companies will exploit the weaknesses between countries. It has to be managed at a supranational level.
It's not going to be easy; that's why we are in this situation today. So let me hurry up a little bit to stay on time.
What I believe we should do, as leaders in this market, is somehow help and support our governments in making those decisions about software governance, because it takes a lot of skills: skills in legal, technology, compliance, business, a mix of skills that is not easy to find on the market and much more difficult to find on the government side. So we, as market leaders in this space, should somehow support the governments in this direction.
That's the reason why InfoCert is getting involved in some initiatives, like the ones shown on the screen: the Cloud Signature Consortium, where we are trying to set the standard for digital signatures in the cloud together with Adobe; the Sovrin Foundation, you've heard about Sovrin many times and you will keep on hearing about it tomorrow as well, where we are trying to come up with a model for self-sovereign identity on a distributed ledger; and MID PKI, which is our implementation of PKI for the IoT world and tries to give an answer to these questions.
Before closing, here is a short video that shows what we are doing in this space.
IoT transactions are booming, with tens of billions of devices connected by 2020. IoT means that for the first time in history, a digital system is empowered to take decisions that have impacts on real life.
Therefore, some important questions need to be addressed. Are we able to identify a machine? Can an automated system be certain about the authenticity of a received signal? The answer is no, unless a clear trust framework is in place. Trust in IoT is the result of a mix of ingredients: technology, regulation, and people. Trust between objects is key to ensure the identity and authorization level of each object taking part in a transaction, the integrity and authenticity of exchanged data, and a clear liability framework in case something goes wrong.
InfoCert, the leading European qualified trust service provider, has developed a framework to guarantee trust in IoT: the MID PKI platform. Interconnected entities, devices, software, or people, are equipped with digital certificates issued by a trusted PKI, able to prove the identities and the responsibility for the actions carried out by each entity. The MID PKI platform allows the secure spread of IoT technology in our society with the proper level of trust.
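As a rough, hypothetical sketch of this kind of mechanism, using the Python cryptography package and invented names rather than the actual MID PKI implementation: a CA issues a certificate to a device, the device signs a sensor reading, and the receiver verifies both the certificate and the signature before trusting the data.

    # Minimal sketch (illustrative only, not InfoCert's MID PKI): a CA issues a
    # device certificate, the device signs a reading, and the receiver verifies
    # the certificate against the CA and the signature against the device key.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def make_cert(subject, issuer, public_key, issuer_key, is_ca):
        now = datetime.datetime.utcnow()
        return (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject)]))
            .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer)]))
            .public_key(public_key)
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
            .sign(issuer_key, hashes.SHA256())
        )

    # Trusted PKI: a root CA key pair and its self-signed certificate.
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_cert = make_cert("Demo IoT Root CA", "Demo IoT Root CA", ca_key.public_key(), ca_key, True)

    # Enrolment: the CA binds the device identity to the device's key pair.
    device_key = ec.generate_private_key(ec.SECP256R1())
    device_cert = make_cert("sensor-0042", "Demo IoT Root CA", device_key.public_key(), ca_key, False)

    # The device signs a reading before sending it.
    reading = b"temperature=21.5C"
    signature = device_key.sign(reading, ec.ECDSA(hashes.SHA256()))

    # The receiver verifies the device certificate, then the reading's signature.
    ca_cert.public_key().verify(device_cert.signature, device_cert.tbs_certificate_bytes,
                                ec.ECDSA(device_cert.signature_hash_algorithm))
    device_cert.public_key().verify(signature, reading, ec.ECDSA(hashes.SHA256()))
    print("reading accepted from", device_cert.subject.rfc4514_string())

If either verification fails, the library raises an InvalidSignature exception and the reading is rejected; that rejection point is where a liability framework can be attached.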
InfoCert. The digital future is now.
Perfect timing.
Yeah, I think that's a perfect ending for your presentation. Thank you very much.
Thank you. And I think we have one very interesting question here, which I want to quickly pick. It's the first one I want to use: do you think it's possible to create a liability framework for software when there is no such framework for hardware? Think about Intel Spectre and other issues. So will this work out?
I wouldn't make that difference between hardware and software. We are fast moving toward a world where software components will be released into the real world, will survive the creators of the software, and will keep on evolving.
The hardware will stay as it was built for a longer time, let's say. So what makes it very difficult on the software side is its ability to adapt. We saw before that AI can build AI better than humans can.
I mean, this is the world where we are heading. Hardware is difficult to change once a piece of it has been put out into the world.
Okay. So thank you very much for your presentation.