Welcome to the KuppingerCole Analyst Chat. I'm your host, my name is Matthias Reinwarth. I'm an Analyst and Advisor at KuppingerCole Analysts. My guest today is Marina Iantorno. She works with facts and figures at KuppingerCole, and she is an expert on AI, on all things AI and its results and effects. And this is something where we want to continue our discussion. But first of all, hi Marina, good to see you.
Hi, Matthias. Thanks for having me here. I'm very happy to be here again.
It's great to have you, because you are an expert in all things artificial intelligence and you are working in this area. On the other hand, you are working a lot in the area of data analytics, so you're contributing that part of research to what KuppingerCole does. We want to pick up a topic where we left off, one we last talked about at EIC: digital trust and why it is so crucial in our AI-driven world. This is also in the run-up to our next big event, cyberevolution in December in Frankfurt. But first of all, when we talk about digital trust, how do you define it? What is it, and how should one think of digital trust?
Well, when we talk about trust, we are talking about confidence, right? So digital trust would be the confidence that users have in the ability of AI systems to operate securely, ethically and reliably. This is extremely important nowadays, because everyone is talking about how AI is evolving and how technology is changing, but the idea is to have, let's say, an ethical framework around it. It would include the guarantee, or the assurance, that data is handled with care, with privacy and with transparency. In today's digital world, AI is touching all spheres and is driving decisions in many of the areas we see nowadays, because all of us interact with AI at least once a day. Digital trust is very important because AI systems influence decisions that affect our daily lives. Take financial transactions: I think all of us have a banking app, for example, that suggests whether you might want to invest, go into cryptocurrencies, or maybe buy shares. Or healthcare suggestions: if you have an app to book an appointment with your doctors, maybe the app tells you, okay, it's been three months since your last visit to the dentist, maybe you should plan to go again. So it is important that users feel trust, that they feel confident and safe when they are using the technology.
Right, there's a saying, at least in German, that trust takes years to earn and deserve, but it's easy to lose in a second if something goes wrong. This is not only related to AI, but I think it's really amplified by AI. If we look back into history, can you provide an example of a major incident, a data breach or whatever, that really destroyed trust from one second to the other?
Yes, totally. Well, there are a couple of cases. We already know that there were massive data leaks at some companies. One of them was Equifax in 2017, where the data of millions of people was compromised. And when I say millions, I'm talking over 100 [million] people. This data included social security numbers, birth dates, addresses. Of course, the customers lost trust, so they stopped working with the company. There were legal repercussions and financial losses, so it was a really, really bad impact. And while I'm talking about this one, we can talk about another very famous case: Facebook. Facebook had an enormous data leak in 2020 and many people were affected. What happened is that a lot of people stopped using WhatsApp and started using other apps to communicate. Even though it may seem like this would not be a problem for Facebook, it is, because it means lost money and lost data. At the end of the day, once customers lose digital trust, it's very hard to come back from that.
Right, just as a side thought: if we think of all the information and data that has leaked over time from various breaches, this is all training material for AIs, both now and in the future. It can be used to fabricate synthetic data, data that looks real but isn't, or that is composed from different sources. I think that also plays into the matter of trust when it comes to how we get better. And getting better is the topic of my next question: what are some successful, and maybe also some failed, attempts to build digital trust in AI, so that we as users and partners in an ecosystem can rely on AI systems and have digital trust in them? What are you seeing in the market?
Well, we can see, as you said, both sides of the coin, right? There are some cases that are really good. IBM is one of the companies that is working really well; they put a lot of emphasis on transparency, on educating people and informing users what their data is used for. Another good case could be Google. When you open a Google website, it explains what your data will be used for and asks whether you accept or not, which is actually very good because it's not that long. It's not, you know, five pages that people never read. It's something that everyone can read and be aware of, which in my opinion is a show of transparency. And this is great; this is actually something that boosts trust. Now, on the other hand, something that was really bad, and there is even a movie about this, was the case of Cambridge Analytica, because they literally took the data of Facebook users and manipulated it to drive political campaigns. In that sense, we can see real misconduct and a bad use of the data, because people didn't really know; there was no transparency in those cases. And then many users lost trust in Facebook. Of course, this one involved other entities as well. So at the end of the day, I believe these two cases show both sides: what ethically correct, transparent behavior looks like, and the importance of keeping all users informed about what is happening with their data and how it is used.
Right, you've mentioned the term transparency several times now. We started with the user perspective: just understanding what happens with my data, that's transparency. You've also mentioned transparency about what happens with the data inside a system, for example when it comes to deriving political recommendations towards somebody. When we come to artificial intelligence and machine learning, the role of transparency is elevated even further. Are there other examples of AI systems that are designed to be transparent, and what is the benefit of that?
Yes, of course. Well, as I mentioned, Google is one of the examples. Google has a framework around AI; they call it Google's AI Principles. What they are doing is providing a guide that shows how to correctly develop and deploy AI technologies. The idea is to have detailed information on how AI is working with the data of the users, and on what measures are in place to ensure that the data and the machine learning models that are used are fair and accurate. Because sometimes what happens, and this is a very common topic among data scientists and data analysts, is the black box problem, where you cannot really explain the algorithms. I believe that now there are many academics and many people in the industry working together, which is great, especially in open source, trying to improve transparency. Because, as I said, transparency builds trust in the users, and it is important that people know how their data is used. We see that there are a lot of privacy advocates, and some companies are not really following this well. So it would be very good if all the organizations that are using AI moved in this direction.
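To make the black box point concrete, here is a minimal sketch of one common transparency technique, permutation importance: you shuffle one input feature at a time and measure how much the model's accuracy drops. It assumes Python with scikit-learn, and the dataset is a synthetic stand-in rather than anything discussed in the episode:

```python
# Minimal sketch: probing a black-box model with permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real tabular dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Techniques like this don't open the black box completely, but they give users and auditors a first answer to the question of which data actually drives a model's decisions.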
So now I have to put in a shameless plug for what I do elsewhere. I'm running a webinar series together with our colleagues and friends Scott David and John Tolbert, and we talk about the good questions to ask when using AI in an enterprise context. Acceptable, proper use of AI is one of the most important parts there. That goes from the individual, how do I use ChatGPT, what can I type in and what should I avoid, up to how do I create a proper system? And I think that is very close to what you just said: really understanding where the benefit is and how to do it right. Is there a framework, some guardrails, that guide me in doing things right? And I think transparency is one of the key aspects. So if...
Well, I watched the episode and I agree that it is important to understand the advantages and the disadvantages of using AI, because of course AI is there, and it is there to help us. But there are many things that set a limit, for example copyright; these kinds of things cross into ethics. So there are certain questions that should be asked before putting some things in place.
Exactly. And you've mentioned privacy advocates. Closely related, but a different beast, is data protection. So how do regulations like the GDPR in Europe play a role in ensuring data privacy in AI? And why is this important in our main context, digital trust?
Well, I would say that we are very lucky to live in this region because of the GDPR. There are other regions that are adopting similar regulations, Australia, Singapore, the US, but some regions are still lagging in creating regulations that ensure data privacy, for example Latin America. I'm originally from Argentina myself. If I don't want certain data about myself on the web, the right to be forgotten is something I am entitled to because I live in Europe. But when people who live in other countries, say in Latin America or even in India, claim "I don't want this data to be revealed in a Google search", the answer is no, because there is no regulation such as the GDPR. So in that sense, I would say this is extremely important. Regarding trust, it also contributes to trust in organizations, because if people know that the organizations they are clients or customers of follow the GDPR, then it's a kind of relief for the end user. I talked with Max Schrems at EIC; there is a video in this series that we always publish where we talk about this. We mentioned that some people say data privacy will eventually be a luxury. And what he said, and I agree with him, is that it cannot be a luxury; it has to be a right. We can see this in the services: the right to be forgotten on the web is a right. So I believe that when users deal with companies that follow these regulations, they feel safer. And if companies don't comply with the regulations, the fines can go really high. We saw this, for example, with Facebook and its data leak. I remember Facebook paid millions of euros, and it was not the only company; there are other companies that had to pay a lot to compensate for what happened. So I would say that the legal framework helps prevent data breaches and misuse of data, and it focuses on providing a very trustworthy digital environment.
But paying money is really just the aftermath. I think these systems should be designed so that this does not happen in the first place.
Maybe we don't really realize how much, but we're all affected. If you pay attention, there are organizations that are using your data constantly, maybe for good things. Take Amazon: if you are buying a product, maybe Amazon tells you when there is another offer, just a recommendation system, machine learning, which is okay. And you agree to give your data, let's say. But some people don't really understand that this is what they are doing. And I'm just mentioning one case; imagine all the other cases we can see every day, all the time.
Exactly. Before we close up, because this is a really interesting topic and I could continue for hours, but this format is limited: when I prepared for talking to you about digital trust in the age of AI and in an AI-driven world, I sat down and asked, okay, what are the key tenets I would like to see when we implement systems that enforce and facilitate digital trust in AI-driven systems? And that's easy to do: transparency, you've mentioned that, fairness, following ethical principles, et cetera. It's easy to write them down; I think the hard part is to do it. So if you would like to give a recommendation and an outlook, some actionable strategies and best practices for organizations to implement these easily written down key tenets of digital trust, what would you say?
Well, there is a great video from IBM. They are doing a great job; I mentioned this before and I'll mention it again. They started at the very beginning, when AI started becoming a kind of revolution. They would say that there are some pillars that companies should keep in mind. The first one is fairness: if you are using data and implementing a model, you need to ensure that this model will be fair to everybody, not making differences, let's say. For example, if the gender of a person would actually affect the outcome positively or negatively, then you should be careful about whether you add this feature or not. Then robustness: you need to ensure that the AI or machine learning models you are using are robust enough not to produce misleading results. Transparency, as I mentioned before: people need to know how their data is used and for what purposes. There are organizations that do not inform you what they are using your data for, or even that they are using your data at all, which is of course not correct. And then ensuring privacy: do you have measures in place to ensure that even though you are using the data, let's say my data or the users' data, this data is not compromised? You mentioned synthetic data earlier; this is a very good solution, for example. You take an original data set and transform it so that the result no longer lets you identify a person. It's a way to ensure privacy. So these are the pillars where I agree with IBM that companies should follow them. And I know it is challenging, I know there is a lot of work to do, and there are so many things going on. Unfortunately, technology is progressing faster than regulation, because regulations in general need a lot of time for people to discuss them. But I'm really looking forward to seeing what is coming next. I love AI, so I expect to see very good changes, especially after the EU AI Act.
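To illustrate the synthetic data idea from this answer, here is a minimal sketch: it fits simple per-column statistics on an original table and samples artificial rows from them, so no generated row corresponds to a real person. It assumes Python with numpy and pandas; the table and column names are purely illustrative, and a production system would use dedicated tooling with formal privacy guarantees:

```python
# Minimal sketch: synthetic tabular data sampled from per-column
# statistics of the original data. Illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Illustrative stand-in for a real customer table.
original = pd.DataFrame({
    "age": rng.integers(18, 80, size=500),
    "monthly_spend": rng.gamma(2.0, 50.0, size=500).round(2),
    "country": rng.choice(["DE", "AR", "IN"], size=500),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample each column independently from its fitted marginal."""
    out = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            # Draw from a normal fitted to the column's mean and std.
            out[col] = rng.normal(df[col].mean(), df[col].std(), size=n)
        else:
            # Draw categories with their observed frequencies.
            freqs = df[col].value_counts(normalize=True)
            out[col] = rng.choice(freqs.index, size=n, p=freqs.values)
    return pd.DataFrame(out)

synthetic = synthesize(original, n=500)
print(synthetic.head())
```

Note that sampling each column independently discards the correlations between columns, which is part of what makes the output hard to link back to individuals, but also limits its analytical value; real synthetic data tools trade these properties off more carefully.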
Exactly. And I think that's a good closing for our topic, because the genie is out of the bottle, and whether we like it or not, it's there. So we have to deal with it, and if we have to deal with it, we want to do it properly, adequately, and following all these nice key tenets for digital trust in AI-driven systems. So thank you very much, Marina, for being my guest today. That was a great conversation; thank you for sharing your thoughts on digital trust. This is a topic we will continue at cyberevolution in December and in the run-up to cyberevolution. If you, the audience, have any comments, questions or topics to cover, please leave them in the comments. If you're watching this on YouTube, put your thoughts into the comments section just below the video. If you are listening to or watching this somewhere else, on our website or in one of the podcast apps you usually use, please reach out to us. We are easy to find on our website, kuppingercole.com; leave us a message by mail, or connect with us on LinkedIn, with Marina and/or me. We are eager to learn more from your side. This is a vibrant topic, things are changing so fast, and by the time we close this episode and open the next news site, there will be news around AI again. So thank you, Marina, for being my guest, and I'm looking forward to talking to you soon.
Thank you, Matthias. Same here. Have a great day.
See you, bye bye.