As Mike said, my name is Marina. I am a research analyst here at KuppingerCole, and I am pleased to present this session with my colleague,
Alejandro Leal, also here at KuppingerCole. And the name of the session is Reflections and Predictions on the Future Use and Misuse of Generative AI. So before we begin the presentation, we'll just go over the agenda. We're going to start by introducing what traditional AI is, and then we'll talk about generative AI. Then we'll explore some use cases and talk about generative AI in the context of cybersecurity.
And then at the end we'll discuss some of the ethical considerations.
Let's just start with one quote from Alan Turing, the father of artificial intelligence. He said, artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. What we want is a machine that can learn from experience, and this is where we are actually getting to.
There has been significant progress in recent years. I would say from 2000 onwards we can see many, many impactful uses of artificial intelligence, and reinforcement learning is gaining ground in this game.
So the main question, right: what is artificial intelligence? There are many definitions, and if I asked the people in this room, you would probably each have a unique definition, but in essence it is always the same.
AI refers to the use of machines to perform tasks that usually require human intelligence and cognitive abilities, such as learning, content generation, problem solving, or decision making. As the technology evolves, we can make a distinction between traditional AI and generative AI.
But that is not all, because nowadays we also have conversational AI. So we can actually make a distinction between traditional AI, which is how it all started, conversational AI, and generative AI.
Of course, these different kinds of techniques have different goals and different data inputs, because we don't need the same data for all of them, and we have different outputs. And the main point here is how the world is actually changing or moving towards more generative and conversational AI in its uses. So for example, if we talk about traditional AI, the objective is to perform a specific task. And how does the machine perform this task?
Well, through algorithms. The algorithms go through training and then they perform a specific task for us. Now, if we compare with conversational AI, conversational AI is trying to mimic human language.
So the machine will try to establish a conversation with us, and generative AI will create content. And the idea of creating content is having something that is similar to what humans create.
Now, how can the machines accomplish this? Well, through data. We need to train the model. How do we train the models?
Well, traditional AI uses structured data and predefined datasets. If we talk about structured data, imagine a dataset that has different columns. So then we have different variables and everything is very well defined, very well structured. But we also have unstructured data, and I would say that 80% of the data that is around is unstructured. For instance, a post on LinkedIn: that one is unstructured data. It is saying something about the person, and there are some features that we can extract, but the data is unstructured.
Conversational AI is using structured, semi-structured, and unstructured data, and it is using natural language processing, which is another technique that is actually evolving nowadays. And generative AI is using unstructured data; it is extracting data from different text files or from images, and it creates content. So then we will have different outputs according to what we want. What are the outputs that we can actually expect?
Well, with traditional AI we can expect predefined responses. So it's something that we expect to receive. In conversational AI,
well, it would depend on what the user is actually talking about with the machine, because the idea is that the machine actually mimics human language. And on the other hand, generative AI is creating content. So what we will see is an output with unique responses generated according to what the user is asking.
So if we go through some examples: for traditional AI, we can look at chess playing. In the nineties there was a very famous case where the machine actually won against the world chess champion. And at that point the question arose: so what is happening?
Are the machines actually more intelligent than humans? And this is where we can see that machines can actually learn from experience. For conversational AI, well, all of us eventually had to, I don't know, contact an airline. And especially after the pandemic, the chatbots were rising a lot. So then you can contact any airline and you start talking with the chatbot, and the chatbot is asking you, okay, how can I assist you today? Can you give me your booking number?
Et cetera, et cetera. And for generative AI, well, Google Translate, for example, could be one of the examples that we can see, or ChatGPT, okay, where we can create content.
I won't be spending too much time on this slide. It's basically just a timeline of how generative AI has evolved over the years. We can trace the origins back to the 1950s, but essentially we see that people from both academia and the private sector have been investing their time and resources heavily to apply algorithms and new theories, leading to the technologies that we have today.
Like Marina said, the biggest example is ChatGPT. We are all talking about it. We see some media articles doing a lot of hype; maybe some of them are overreacting, some of them are maybe not tackling the issue in the right way, but that's why we wanted to have this session today, so we can explore some use cases and give real examples of how generative AI has been used in different industries.
These are some facts about how users feel about generative AI. But I would like to take a step back and talk about traditional AI. In different countries and regions, people see AI in a different way.
For example, the United States and China are both investing heavily in artificial intelligence. Some people say that if during the Cold War the Soviet Union and the US were competing in outer space, now China and the United States are competing in cyberspace. So for them it's more, let's say, political, more for the defense industry. When it comes to countries like Japan and Korea, it's more about the problems that they're trying to address. For example, these two countries have an aging population, so they're investing a lot in robotics and in how to tackle that issue.
In other places, like in Europe, the conversation is perhaps more about how we can make sure that artificial intelligence is going to address the challenges that we have, but also maintain the values that we have. In other countries or regions, like Latin America, where we both come from, it's a bit more challenging, because there are benefits, there's potential there, but it can also exacerbate the social and economic inequality in these countries.
And there's some regions in the world that have a history of authoritarianism and dictatorships.
So there's fear and skepticism among the population about how AI can really bring benefits to the region. But that's about traditional AI. When it comes to generative AI, if you look at this graph, you can see that the adoption is worldwide and it's being used in most regions of the world. But just to conclude this slide: if we really want to address this challenge, whether generative AI or traditional AI, we need to really have a conversation going between everyone: academia, the private sector, psychologists, philosophers, educators.
Really everyone needs to be involved in this because just like climate change or like the covid pandemic has demonstrated, we are now a global society and we need to have this conversation going.
Now, here comes the big question. How does generative AI work? How does this great technique work? First of all, we need to understand that there are different components. It is not just one algorithm or one technique; there are different techniques involved. So, as was shown in the timeline, the generative adversarial network came into the game.
And this is an unsupervised learning model. Here we talk about supervised learning, unsupervised learning, reinforcement learning.
I know that these are kind of technical terms, but an unsupervised learning model will help to actually create different clusters, different groups, and it will learn from patterns. So I guess many of you have a Facebook account, and you receive a notification that, oh, your friend is having a birthday today, wish him or her the best. And when you see this, you just think, I haven't heard from this person in so long.
So then maybe the person is not active.
And when you go to the Facebook account, you see that the person was actually active, and you just wonder, why don't I see any notifications? Well, this is probably because the machine is learning that you are not engaging with the content. So if you don't give a like, a heart, or a dislike or something, it will not show up. And this is an example of an unsupervised learning model. The generative adversarial network is working with that, and so, for example, are spam filters. We also have variational autoencoders, and this is where statistics comes into play.
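To make the clustering idea concrete, here is a minimal sketch in plain Python. It groups hypothetical per-user engagement scores into two clusters with a tiny one-dimensional k-means; the numbers and the two-cluster setup are invented for illustration, not taken from any real recommendation system.

```python
def kmeans_1d(values, k=2, iterations=20):
    """Cluster 1-D values into k groups with plain k-means."""
    # Initialise the two centroids with the smallest and largest values.
    centroids = [min(values), max(values)] if k == 2 else list(values[:k])
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical per-user engagement scores (interactions per week).
engagement = [0, 1, 0, 2, 14, 17, 15, 1, 16]
centroids, clusters = kmeans_1d(engagement)
print(sorted(round(c, 1) for c in centroids))  # low vs. high engagement groups
```

No labels are given to the algorithm; the "low engagement" and "high engagement" groups emerge from the data alone, which is exactly what makes it unsupervised.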
For those who know me, I am a statistician, so this is actually the science behind it, and there are probabilities in place. So what are the probabilities that we consider? There is a dimensionality reduction.
So imagine that the model is trained with a lot of data, like, I don't know, thousands, millions of data points, okay? It goes through a reduction of that data, taking the most important features, and then gives the maximum or best quality result. That is the idea.
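As a toy illustration of that reduction step, the sketch below keeps only the highest-variance columns of a small dataset. A real variational autoencoder learns a compressed latent space rather than simply picking columns; the data here is made up, and the variance-ranking trick is only an assumption for illustration.

```python
def variance(column):
    """Population variance of one column of numbers."""
    mean = sum(column) / len(column)
    return sum((x - mean) ** 2 for x in column) / len(column)

def reduce_features(rows, keep=2):
    """Keep the `keep` columns with the highest variance."""
    columns = list(zip(*rows))
    ranked = sorted(range(len(columns)),
                    key=lambda i: variance(columns[i]), reverse=True)
    selected = sorted(ranked[:keep])
    return [[row[i] for i in selected] for row in rows], selected

# Made-up dataset: column 0 never varies, so it carries no information.
rows = [
    [1.0, 5.0, 0.1],
    [1.0, 9.0, 0.2],
    [1.0, 2.0, 0.1],
]
reduced, kept = reduce_features(rows, keep=2)
print(kept)     # indexes of the columns that survived the reduction
print(reduced)  # the same rows, with the constant column dropped
```

The constant column is discarded because it can never help distinguish one data point from another, which is the intuition behind keeping only "the most important features."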
And this is why probabilities are needed behind it, because there are conditions to actually accomplish the last results, the outcomes. And then we have large language models, and this is where NLP comes into place. When we talk about how human language works, here is where artificial intelligence is actually struggling, because there are different languages, or within the same language we have different slang or different ways to say something. So then we need to train the machine to understand every single word that we want the machine to know.
And it happens: if any of you use ChatGPT, for instance, if you write, let's say, "summarised" with an S, ChatGPT understands that you're using British English, so then the responses come in British English. Now, if you write exactly the same using a Z, then it understands that you're using American English, and this is something wonderful that is happening. And these three techniques are working together to generate content.
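The "summarised" versus "summarized" behaviour can be mimicked with a trivial rule-based sketch. The word lists below are tiny invented samples; a real language model picks this up statistically from its training data, not from a lookup table.

```python
# Tiny illustrative word lists (assumptions, not a real resource).
BRITISH = {"summarise", "colour", "analyse", "organise"}
AMERICAN = {"summarize", "color", "analyze", "organize"}

def detect_variant(text):
    """Guess the spelling variant from which variant-specific words appear."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    british_hits = len(words & BRITISH)
    american_hits = len(words & AMERICAN)
    if british_hits > american_hits:
        return "British English"
    if american_hits > british_hits:
        return "American English"
    return "unknown"

print(detect_variant("Please summarise this report."))  # British English
```

Having detected the variant, a system could then answer in the same variant, which is the effect described above.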
Now, this is an example, okay, just in a picture, to show you how it works.
If you remember, we started saying, okay, we have a lot of data. So imagine a lot, a lot of data. And the algorithm says, okay, these are the most important features. So we have unstructured data, we select the most important features, and the algorithm structures this data into something that it is able to read. This is the process where we have an encoder. In the end, when the user asks for an image of something, okay, the machine will decode all these most important features, and that will be the outcome. Now, what do we have to consider?
Well, if the data is not good, the outcome is probably not good either. So we need to have very good data to train the model, to be able to have the best outcome later on.
But it's not only about images and text. What about video editing, what about audio? It is also possible. Now there are even apps where you can upload pictures of a person that you know from different angles, with some audio behind, and it can create videos that actually have some content to say. And this is something that is actually not that new, but it has been booming nowadays.
For example, look at what happened with TikTok, with the TikTok videos. You have a background with some audio, and then you are able to upload an image or some videos and it creates something for you, and you can even edit it there. And in the end, these kinds of techniques are very useful for video editing and for virtual reality as well.
So now we're gonna talk about the use cases. When Marina and I were making this presentation, we wanted to make sure that we could provide some useful information. And what we did is, for each use case, we have a picture with a real-case scenario.
So something that really happened. So what's the power of generative AI? It can be applied in different industries, right? And that raises ethical concerns. But it always depends on which industry we're talking about, because the way it's used always depends on how people are using it, how it is affecting that environment, and what questions we need to address. The first example that we have is customer service.
Some people say that what they like about customer service is that they have an interaction with another human.
So maybe extroverted people are happy about it because they get to talk to someone, but maybe introverts are happier that now they can just talk to bots. It's important to consider that there are two different types of chatbots. The first one is task-oriented dialogue systems. That's when, for example, you make a reservation in a restaurant or in a hotel, or when you buy your train tickets. And then there are open-domain dialogue systems; those are the chatbots that we usually see on websites.
When you want to check a vendor or any other website and you see a little robot at the bottom of the screen saying hi, that's the open-domain dialogue system that we're talking about. And we believe that in the future, with the rise of ChatGPT and as it becomes more popular, we will see more efficient customer service experiences.
The next use case is marketing. And this is an interesting one, because some small agencies can benefit by using, for example, DALL-E, right?
But there are also some problems, because people working as photographers or as artists may struggle to compete with this new generation of marketing content. Because something that is very interesting about DALL-E is that you can really come up with content that is specific to your own use cases, your own requirements and needs. And the next slide is somewhat relevant to what I previously said. Here we see the picture of Fountain. It was a piece of art that was exhibited in New York in 1917 by Marcel Duchamp. And there were many critics when this was exhibited.
Many people said that this represents the decline of art, but what does that mean? The decline of art?
If we look at the history and evolution of art in the West, it has always been shaped by secular and religious ideas. In the very beginning, we see people doing small paintings, let's say in caves back in the day, of living things, plants, animals. And then it evolved to religious symbols. And if we look at the history of art, especially in Europe, it was largely dominated by religious themes.
During the Renaissance, we see that people are getting knowledge from humanism, because they found, let's say, Greek and Roman texts and art. So then we see the rise of the portrait. Then in the Baroque period, we see the use of art to demonstrate the power of monarchs, and that's the rise of the Westphalian state, the rise of the modern state. In the Enlightenment, we now see the appearance of the human in the picture.
We don't really see religious themes anymore; we see landscapes, as an example, as well.
So in the history of art, we slowly see the disappearance of the human out of the picture. And then in the 1900s we see movements like cubism or surrealism, where art is completely changing. And the question is, what does art represent today? When it comes to a machine doing it, what does that mean for the whole evolution of art? And I think the answer is that art represents the cultural, political, and philosophical trends of the time.
So I think this is a very important question to ask and if there are any artists in the room, maybe you can ask or tell us at the end of the presentation.
Well, healthcare is another industry that is taking advantage of generative AI. There are many uses. One that is actually very famous concerns psychology, or the psychiatric industry. I remember the whole situation with LaMDA, this machine that was talking, you know, it came from Google, and people were asking, is it true that machines can actually have feelings?
Well, of course the machine cannot feel in the sense that we do, but the machine can identify the feelings or the emotions of people. They can actually identify the emotions of the users through facial recognition, or through natural language processing, for example with sentiment analysis. So we can actually see whether the emotion is positive or not, and now we have machines that can actually prompt or create positive emotions in people. Another very important use in healthcare are these nurse bots.
So if the user just inputs the symptoms, the machine can actually detect the illness or the disease or the problem that the person is facing, and in 80% of the cases it actually matches the real scenario. So then the questions would be: what will happen with the doctors in the future? What will happen with the nurses? Can it really improve the healthcare system, let's say the hospitals? Can we reduce the waiting time in emergency, for instance? This is a great use that professionals can actually benefit from.
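A minimal sketch of the sentiment-analysis idea mentioned above, in plain Python. The word lists are tiny invented samples; real systems use trained models over far richer features than single words.

```python
# Tiny illustrative sentiment lexicons (assumptions, not a real resource).
POSITIVE = {"good", "great", "happy", "love", "better"}
NEGATIVE = {"bad", "sad", "angry", "hate", "worse"}

def sentiment(text):
    """Score a text by counting positive and negative lexicon words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    # Each positive word adds 1, each negative word subtracts 1.
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I feel great today, much better than yesterday"))
```

Even this crude counting shows the principle: once a machine can label an emotion as positive or negative, it can also be programmed to respond in a way that nudges the emotion in a positive direction.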
Another one is robotics. Of course, when we talk about artificial intelligence, it is very easy to relate this to robots, and we can imagine, okay, there will be a robot here in front of me doing everything like a human person. It is not exactly the case, because what we actually have are, for example, robotic arms distributing content in packages. And ChatGPT is a great help nowadays, and it is actually being integrated with other systems. For example, Anaconda Navigator, which is in charge of managing Python, Spyder, Jupyter Notebook: they are planning to integrate ChatGPT with it as well.
Why? Because it can actually access functions to get more control, to monitor, and to detect anomalies. And the idea here is that ChatGPT and the robotic machines can be combined and could create more powerful models. For instance, the humanoids that are doing night care of people or places.
So this is something that will also benefit from all this software development. And here comes the big question: who is creating the code?
If you go to ChatGPT, for instance, which is, let's say, one of the generative AIs in vogue, you can ask, okay, I want a graph that is doing this and that with linear regression, and you can see that it is coding in the moment.
Yes, fantastic, it is good, it is helpful, but we need to consider that the machine was trained with data that is static. It's data from a certain period of time, and then time moves on and the technology is evolving. So whatever we get, yes, it is very useful, but there could be some progress in the meantime that was not actually in the data used to train the system.
And the point here is that yes, it is very good to have it, it is a great help for developers, and more companies are actually allowing their employees to use generative AI to improve the system, to detect whether their code is actually good or not. But we will still need to keep a human eye on it, to see if it is actually up to date and if it is what the company needs.
And the pharmaceutical industry is another industry that actually used generative AI. Insilico Medicine used, well, the GANs I was talking about earlier today, and they could develop a new drug in 46 days.
And developing a new drug is actually something that usually takes years, and they could reduce the time to only 46 days. Of course, if we talk about businesses, yes, they save a lot of money, and they save resources as well. But what is happening with employment? So then here comes the question as well: if we are implementing this in different industries, what will happen with employees? Do we really need all of them? Well, this one was a very good experience. Now the regulators around the pharmaceutical industry are actually wondering, okay, so what are the limitations?
And up to what extent can companies use artificial intelligence, and up to what extent do we actually need humans?
The next use case is politics. So, you know, are politicians going to start using ChatGPT to come up with new policies and decision-making processes? But I think the main question when it comes to politics is the problem of disinformation. I believe there are a few challenges when we use the term disinformation. Do we mean that disinformation is just false information, or is it misleading information, or is it both?
Another problem with disinformation is that it seems like many policymakers and people in academia talk a lot about disinformation, almost like they admire the problem. But what we really need to do is study and look back at the history of how technology has disrupted information ecosystems in the past.
I can go as far back as the printing press. As soon as the printing press was created in the year 1440, a few decades later we saw the Reformation, and religious tension in Europe, in France, in England, and in Germany was exacerbated by the use of the printing press, because it allowed more people to exchange information, to perhaps deliver false information, and to disrupt the social and political structures of the time.
The internet is another example and I believe that generative AI is another technology that will disrupt the way we process and share information.
So perhaps the best way to deal with disinformation is not to always talk about disinformation, but also to talk about how we can educate users. How can we maybe launch programs in schools so we can teach teenagers how to distinguish what's real and what's not? Although the problem is also there for older people, right? So it depends on the age, right? The next use case is important for us, right? Are there still going to be analysts in five years?
Well, I hope to be here in five years talking about something. I think it's going to be like a tool for us. I don't think it's going to completely replace analysts, because of what we do at KuppingerCole: we have a lot of meetings with companies on a weekly basis.
So we see updated information, we talk to them and we know what they're doing and they share with us some experiences.
You know, they tell us, oh, we were just at the RSA and we learned this and this. So I think that human interaction is very valuable and very important to have. So I still believe that there will be analysts for some time, but I think that generative AI is going to help us do more research and write more content, kind of like Wikipedia or Google is used. The next use case is manufacturing. We hear a lot about digital twins, right? These are simulations of real-world environments.
And in the timeline slide that I showed earlier, one of the first applications of generative AI was in manufacturing, and digital twins have also been widely used in this sector. If you want to know more about this topic, I know that Sylvia did a session yesterday on digital twins, and she had a whole slide on how it's used in the manufacturing industry, so I encourage you to check that out. On the next slide we're going to talk about generative AI in the context of cybersecurity.
We started talking about this a little bit this morning: what is happening when companies start using artificial intelligence to improve their cybersecurity systems? Well, it is actually a good thing if we think that anomaly detection could be faster using artificial intelligence. The generative models can actually detect the anomalies earlier, and if we detect them earlier, then we have a way to sort them out, right? So this is actually the main point. Machine learning algorithms as well are used to do fraud detection.
So if there is malware trying to break into the system, machine learning is able to detect it. Now, something very important about this: it is crucial not to rely entirely on automation, because the code can also be hacked, and if there is something wrong in the data input, then of course we will see the issue later on, right?
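To make the anomaly detection idea concrete, here is a minimal statistical sketch in plain Python: flag values that deviate strongly from the historical mean. The login numbers and the threshold are invented for illustration; real systems use trained models over many more signals.

```python
def zscore_anomalies(history, threshold=2.5):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    std = variance ** 0.5
    if std == 0:
        return []  # no variation at all, nothing stands out
    return [x for x in history if abs(x - mean) / std > threshold]

# Hypothetical logins per minute; the final spike could be a brute-force attempt.
logins = [4, 5, 6, 5, 4, 5, 6, 5, 4, 90]
print(zscore_anomalies(logins))  # only the spike is flagged
```

The point of the example is the principle, not the formula: the earlier an unusual pattern is surfaced, the earlier a human or an automated response can act on it.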
So it is something that would otherwise be dragged along until later, and the GANs are also actually detecting the anomalies earlier.
For instance, with medical records: this is something that is very important in the healthcare industry, because they manage a lot of sensitive data. So then it is very important to actually keep the data secure, and how do we do this using generative AI? Yes, we can use it, but we still have to monitor it, and there is a variety of purposes, right? The main point here is to identify the data, and with this data that is valuable we can get an output. And from this, well, we are actually learning from the trainings that we are using, and this is also why we have to keep it up to date.
So yes, it is good to use generative AI in cybersecurity, but there is always a dark side. On the positive side, we can predict new attacks, because generative AI can see the potential new attacks. Imagine that it is actually using historical data: then it is possible for the machine to see or to detect, okay, if this was happening before, and if these threats were trying to break in and they didn't work, and the attackers were trying with a different one, then we can be alert. Then there is the automated response.
Well, so automation I would say is the main advantage of artificial intelligence, right? So this is what we all know.
Now, generative AI could actually automatically respond to and mitigate the threats. Again, it is very important that humans actually keep control over this and check that it is actually working. And adaptability as well: the models have to be able to adapt to the new threats and to detect the new attack methods, because as the technology evolves, there are also more sophisticated attacks. So then it is actually, you know, both sides that we need to pay attention to.
Sure, I'll go for it. So when it comes to the negative aspects, we see there's a lack of transparency. The models can be very complex, and that makes it very difficult for security analysts to understand and interpret them. Then there's a dependency on data: the data used to train the model could be biased or just incomplete, which creates issues for the model. And then there's also the potential for adversarial attacks. Some attacks can be facilitated with the use of generative AI.
Another point I learned at our conference in November: I had a conversation with one of the presenters, her name was Sia Iuk, and she told us about how the Russian invasion of Ukraine led to a vast amount of, let's say, propaganda by using bots. But something interesting that she said is that since Ukrainian and Russian are different languages, the way that they could distinguish them was that some of these bots were using incorrect grammar when it came to Ukrainian. So it was very easy to spot the, let's say, false information.
But she told us that when it comes to generative AI, that people are becoming more creative and they can use new tools to launch more sophisticated attacks.
Now, when we say AI is fighting AI, it is actually happening nowadays. The use of ChatGPT actually created a revolution in the industry.
Now, yes, it is true that we can improve the cybersecurity system. It is true that we can have a more robust plan in place, and we can get help from AI to do this. But at the same time, how can we prevent generative AI from also helping the attackers? So it cuts both ways. These two articles, we have the headlines here; both of them are from the same source, both of them are from Forbes. One of them says ChatGPT can't stop helping hackers make cybercriminal tools.
And the other one says four ways that ChatGPT is changing cybersecurity.
So then, is it actually helping or is it damaging the industry? And this is what we have to start asking.
Okay, yes, we can have a better response time, we can have better automation, we can have machine learning algorithms that detect the anomalies. Now, the question that we have to ask in cybersecurity, if we talk about cybersecurity, right, is: is it real that ChatGPT is actually helping attackers to create better techniques, or is it actually helping attackers who don't have experience? Because, you know, if we look at the two sides of the coin, it could be that, okay, yes, attackers with very little experience could actually use it.
On the other hand, it was proved that ChatGPT can actually create very good phishing emails, so that people actually believe that they are real. And this is why companies must have cybersecurity training for all employees, because you don't need to be a technical person to fall for one of these attacks. Anyone who is online can actually fall for it.
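As a tiny illustration of the kind of cues that such training teaches, here is a naive red-flag checker in plain Python. The urgency keywords, the look-alike domain, and the whole scoring scheme are invented for illustration; real mail filters rely on trained classifiers and many more signals.

```python
# Illustrative urgency words often drilled in awareness training (assumed list).
URGENCY = {"urgent", "immediately", "suspended", "verify", "password"}

def phishing_signals(subject, sender_domain, trusted_domains):
    """Count simple red flags in an email's subject line and sender domain."""
    words = {w.strip(".,!?:").lower() for w in subject.split()}
    flags = len(words & URGENCY)       # pressure language is a classic cue
    if sender_domain not in trusted_domains:
        flags += 1                     # unexpected sender domain is another
    return flags

score = phishing_signals(
    "Urgent: verify your password immediately",
    "examp1e-support.com",             # hypothetical look-alike domain
    {"example.com"},
)
print(score)  # number of red flags found
```

The obvious limitation is the point of the slide: generative AI can write phishing mails that avoid exactly these surface cues, which is why training has to go beyond keyword spotting.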
The deepfakes, well, Alejandro mentioned the deepfakes before. And the thing here is that fake news spreads six times faster than real news.
So then it is important to actually know, or try to distinguish, whether these deepfakes are coming from a real person or not. And this is where generative AI is actually contributing to more sophisticated attacks, because it learns from the patterns: it learns how the person actually writes and what the person usually says. So here the point is, isn't it really helpful for attackers that are experienced? Can they get some new information from it?
Well, if we base our answer only on ChatGPT: ChatGPT was trained with data from 2021, and we are already in 2023, so I believe that in the meantime there were more sophisticated attacks coming up. So I would say yes, it could be helpful for inexperienced attackers, but we have to keep an eye out and see how we can actually help companies to create a more robust cybersecurity system.
That's why educating users is very important. And regarding the example that you mentioned about disinformation, it is also important to distinguish between disinformation and misinformation.
Maybe someone is going to share some fake news, but maybe that person believes that it is true. So is that person spreading disinformation, or is it just misinformation? So it's important to distinguish that too. And in the next slide we'll talk about generative AI and enterprises. We recently did a Leadership Compass on SOAR, and IBM QRadar was rated there, and they benefited greatly by integrating the Watson capability, which uses these features.
Also, we see Google and Microsoft entering the race, and we see that it's a very competitive and dynamic arena where all these companies are really investing in this technology. So, exciting times. On the next slide we'll talk about the ethical concerns behind generative AI, and some of the questions that arise are included here.
There's uncertainty.
How sure are you that this is the right answer? How do you know this is the right information? Where is it coming from, right? Accuracy: can you always refer to the source? And as generative AI becomes more sophisticated, it becomes more difficult to distinguish. There's also a question of governance. In this conference we've talked a lot about standards, about protocols, about how to govern the implementation of these technologies. Are there any regulations in place? What are the legal consequences? How can we keep pace with these technological developments?
And then there's the question of accountability. Who is responsible? Can you sue ChatGPT? Who owns the truth, right? Some of these are very philosophical questions that are necessary to ask. And it goes back to my previous point: we need to keep this conversation going with all members of society, not only identity and cybersecurity experts.
Something that I would like to add is regarding the source. Okay, so we can have a lot of information, or read something, and we talked about the deepfakes: how do we know that this source is actually trustworthy?
And something that happens, for instance with ChatGPT, is that we can ask it, okay, give me a case about this or that. It gives you the case, and then when you ask for the source, maybe the link is broken or maybe it doesn't really exist. So then, in that sense, we say yes, perfect, I have this text that looks beautiful, it looks wonderful, I can put it in a report; but if I don't see where it is coming from, how do I know that I can trust it? And at the same time, are we actually complying with the regulations in place?
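The broken-link problem just described suggests at least one mechanical check: pull every cited URL out of AI-generated text so each one can be verified by hand before the text goes into a report. A small sketch, with an invented sample text; this only lists what must be checked, it does not prove a source is real.

```python
# Sketch: extract cited URLs from generated text for manual verification.
# The regex and sample text are illustrative only.
import re

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def extract_cited_urls(text):
    """Return the list of URLs found in generated text, for manual review."""
    return URL_PATTERN.findall(text)

sample = ("According to the study (https://example.org/fake-study-2021) "
          "the effect was significant.")
print(extract_cited_urls(sample))  # → ['https://example.org/fake-study-2021']
```

Each extracted URL still has to be opened and read by a person; the tooling only makes sure no claimed source slips through unexamined.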
Five years ago the EU established the GDPR, the General Data Protection Regulation, and most companies must comply with it. It is a must, not a nice-to-have; you must comply, otherwise you could have issues. Now, with generative AI, are we sure that we are compliant with these regulations? This is something that we need to keep an eye on, especially if you are using generative AI or open tools to ask for things. So we need to be careful about what information we are actually giving up.
And as Alejandro said, if something goes wrong, can you sue ChatGPT? Some years ago there was a Rembrandt painting, the first one created by AI, by a group of artificial intelligence specialists. And the ethical question came up: okay, so who is the creator of this painting?
Is it the group? Is it the machine? Is it Rembrandt himself?
And this was an ongoing question. Luckily for us, there are many meetings and many sessions going on around ethics in artificial intelligence, globally, because we need to actually settle it. Now, it is very hard, because every society has different rules, so what is accepted in one society may not be accepted in another. So I would say this is still a kind of gray area, but it is something that we have to keep in mind.
And like we said earlier, this is a global challenge.
And I know in England, back in the early 1800s, there was a group of people called the Luddites, I think I'm pronouncing that right, yeah. These were people who were very angry because they were losing their jobs and could not compete with the new machines and tools in the factories during the Industrial Revolution, and they went into the factories and started to smash all these new machines.
So the question is: the average person out there, who maybe doesn't know much about identity and cybersecurity like the people in this conference, these people just want to know, what is the future going to look like for me? Is my job going to exist in the future? So we need to address these problems, and we need to keep this conversation going.
And I know I keep saying that, but I just cannot stress it enough. Some of the ethical considerations: truthfulness and accuracy, like we said; we just need to educate ourselves on how to distinguish the two.
Then there are copyright ambiguities. Like Marina said, with the example of the Rembrandt: who is the owner of the painting? Is it Rembrandt? Is it the group of developers who made the solution? Or is it the machine? And there's also the increase in biases. There were models, for example, used by the Los Angeles police in the United States for facial recognition, and they used a database that had more data from a particular racial group, and it turned out to be a very biased tool. So that's also something that we need to address.
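The biased-database problem described above can at least be measured before training. A brief sketch of the idea, counting how groups are represented in a dataset; the group labels and counts are invented for illustration, and a heavily skewed distribution is one warning sign of the bias being discussed.

```python
# Sketch: measure group representation in a training set before using it.
# Labels and counts are hypothetical.
from collections import Counter

def group_shares(labels):
    """Return each group's share of the dataset, largest first."""
    counts = Counter(labels)
    total = len(labels)
    return {g: round(n / total, 2) for g, n in counts.most_common()}

# Hypothetical demographic labels attached to training images.
labels = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
print(group_shares(labels))  # → {'group_a': 0.8, 'group_b': 0.15, 'group_c': 0.05}
```

A skew check like this does not fix bias by itself, but it makes the imbalance visible before the model bakes it in.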
Well, the misuse of generative AI. Are we actually using this for something good, or for something harmful? Is it inappropriate content that we are creating with this? For instance, one of the examples that we mentioned was about politics. So is it ethical for a politician to rely on generative AI?
Okay, so I need a text to tell people what they want to hear, so that they feel this, this, and that. Is it actually appropriate to use it for that? Or in education, for instance: a big challenge that universities are facing is, who are the people actually writing the content that gets submitted? A couple of weeks ago I read an article on LinkedIn from a professor in Belgium, and he asked, how long will it take for ChatGPT to be a co-author of the papers that we receive in academia?
Because it could happen that you say, okay, so I want this paraphrased, and this and that.
So then the point is: will it be accepted? Is it correct that we accept it? Is it ethical? Who is actually the creator of this?
And there are many topics touching this area. The risk of unemployment: well, as Alejandro mentioned, what happens when a revolution comes?
And yes, it is true that generative AI could do the jobs of people. Now, the main point, and what history shows us: I had a conversation with a very experienced colleague recently who told me, we cannot stop technology. We can't. History shows us that we need to adjust; we cannot stop it. Another point, and this is my perspective: okay, yes, generative AI could perform a lot of tasks. We still need the humans.
Now, the point is that those who don't know how to use artificial intelligence, yes, they will face the risk of being out of the market. But it is not that artificial intelligence is replacing people themselves. And then the deepfakes: well, cybercriminals are using them for scams, to fake identities, for example, to create emails, to perform transactions.
And this could actually be a very important problem to discuss.
When it comes to deepfakes, something happened to my family. My grandparents, you know, they're old, so maybe they don't really know how to distinguish what's fake and what's real. They received a call one day from what was supposed to be my uncle, and it had the exact voice of my uncle. It turned out to be criminals who had somehow obtained my uncle's voice. And basically they were telling my grandparents: I need help, I need you to send me money.
These people have taken me, whatever, right? So my grandparents were freaking out, and they tried to call my uncle, and luckily he picked up and said, no, no, I'm good. But these criminals are getting very creative, very sophisticated, and sometimes you don't even know how they got your voice. Maybe they found it on social media, I don't know, or on YouTube. But that's just an example.
So then, a summary of this deep dive. And after this conversation, thank you all for being here all this time with us.
The use of generative AI is expected to keep growing. This is what we expect: that it will break into different industries and different regions around the world.
And when it comes to cybersecurity, generative AI can improve how we do cybersecurity.
But also, like Marina said earlier, it can help cybercriminals launch sophisticated attacks, or during periods of war it can be used to generate fake content to create alarm and panic. So that's also something that we need to address.
The ethical concerns as well have to be on the table, because organizations need to prioritize transparency: how are they using AI, and are they compliant with all the regulations when using it?
And when it comes to innovation, like we said earlier, there are companies investing heavily in this technology. And I think in conferences like this it's necessary to have these kinds of sessions, so we can say, yes, let's promote innovation, but we also need to address the problems that come along with it.
And finally, we would like to say that the benefits are too big to ignore. But we need to use generative AI cautiously. Let's say it is like having a lighter in the forest.
You decide if you just make a fire to protect yourself, or if you burn down the entire forest. In this case it is exactly the same. The idea is to use generative AI in a way that we are just making a fire to warm ourselves and to make our lives easier when we are working. So here we conclude the presentation. Thank you for your attention.
Well, thank you very much indeed. That was a splendid presentation that provided some real substance behind a lot of the hype that we hear about this.
And this has been a very thoughtful and useful presentation, I think, for people to go away with some knowledge. But if there are any questions, please put your hands up; otherwise I'll ask a few questions.
Yes, please.
At one point, when you were talking about images, you showed how images are detected, and indeed what you showed there is typical of the way that AI works. And the problem with that is that whereas you and I have a general knowledge of the world, so we exclude noise, this doesn't. And often what happens is that it detects in an image, or in the data, something that is really the wrong indicator.
And so, because of that, there was a time when I could buy a t-shirt with a little bit of something on it that was guaranteed to defeat all of the visual scanning systems. And there were these tales about Teslas that could be steered off the road by an almost invisible strip.
So how big a problem is that?
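The adversarial effect the questioner describes can be illustrated with a toy example: for a simple linear classifier, nudging each input feature slightly in the direction of the weights can flip the decision even though the change is small. The weights and inputs below are invented for the sketch; real adversarial attacks on deep networks follow the same gradient-direction intuition.

```python
# Toy adversarial-example sketch on a linear classifier.
# All numbers are hypothetical illustrations.

def predict(weights, x, bias):
    """Classify as 1 if the linear score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def perturb(weights, x, eps):
    """Shift every feature by eps in the direction that raises the score."""
    return [xi + eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

w = [0.9, -0.8, 0.7]   # hypothetical learned weights
x = [0.1, 0.2, 0.1]    # an input the model classifies as 0
b = -0.05

print(predict(w, x, b))                   # → 0
print(predict(w, perturb(w, x, 0.2), b))  # → 1, flipped by a small shift
```

The human eye would call the perturbed input nearly identical to the original, which is exactly why adversarial patches on t-shirts or road markings are effective.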
Well, it is actually a problem, and the noise in the data really is the problem for the outcome. This is why it is very important to have professionals who understand the data that is used to train the model, because once you train it, it is static data. Now the point, or the challenge, and what scientists are working on nowadays, is how you help the machine learn from this experience. Because yes, reinforcement learning is in the game, but not particularly for this generative AI.
So how can we actually implement it here? And this is what is expected, you know, in the coming years.
Yes, and that is exactly the problem. This leads on to the issue of explainability.
Exactly, exactly. And it is very hard sometimes to explain what is inside the algorithm. Probably most of you have heard about the black box. The black box has this name because it is opaque: you cannot see from outside what is inside. And the idea is that the scientists themselves have to learn how to explain what the algorithm is doing. And this is one of the main challenges as well.
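For contrast with the black box being discussed, the simplest fully explainable case is a linear model, whose score decomposes exactly into per-feature contributions. A hedged sketch, with feature names and weights invented for illustration; deep models lack this direct decomposition, which is precisely the challenge described.

```python
# Sketch: the one case where explainability is trivial -- a linear model,
# whose score splits exactly into per-feature contributions.
# Feature names, weights, and inputs are invented for illustration.

def explain_linear(weights, x, feature_names):
    """Return each feature's contribution to the model's score, plus the total."""
    contributions = {name: w * xi
                     for name, w, xi in zip(feature_names, weights, x)}
    return contributions, sum(contributions.values())

names = ["login_attempts", "bytes_sent", "off_hours"]
w = [0.5, 0.1, 1.2]
x = [4.0, 2.0, 1.0]

parts, score = explain_linear(w, x, names)
print(parts)            # → {'login_attempts': 2.0, 'bytes_sent': 0.2, 'off_hours': 1.2}
print(round(score, 2))  # → 3.4
```

Techniques like feature attribution for deep networks try to approximate this kind of breakdown for models where no exact decomposition exists.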
Yes, and that also... well, first of all, explainability is a problem for training, because if you can't understand why it's getting the wrong answer, it makes it very difficult to actually coach it to do the right thing. Absolutely. But in terms of utility and value, explainability is a critical thing. So you talked about these people that are developing drugs in 46 days.
But how can you trust that the generative AI is in fact developing something that is good, if it can't explain why it has done it?
Absolutely, and this is a great question, Mike. This is why the regulations, the legal requirements, are now actually trying to change.
Okay, perfect, you can use generative AI, but up to what extent? Because you are dealing with people's health. Yes. You know?
Okay. So I know you've got to go.
Yes, but I'm going to ask one further question, which is: where do you draw the line? You talked about human intervention in the ethics and all the other things. So how do you decide where it is that you need this human intervention?
This is actually a great point. There is an AI ethics organization, and they have meetings every year with representatives from around the world, and they try to reach an agreement. It is available on their website, covering the uses that are actually allowed in artificial intelligence.
Now, the main issue or challenge they are facing is exactly what I said earlier: whatever is accepted in one society may not be accepted in another. So they need to come up with an agreement that is accepted internationally and can be used globally.
The problem is also that some of these companies are competing with each other. So I think there was recently an open letter saying that we should pause this development, and, well, yeah. Right. But...
So, thank you so much.
Thank you very much indeed.