Hello, everyone. Welcome. Good afternoon, good morning, good evening, depending on what part of the world you are watching us from. My name is Marina Iantorno. I'm very happy to be here. I'm a research analyst at KuppingerCole Analysts. And today in this webinar session, on our road to the EIC, I will talk about generative AI with a very good colleague of mine, Jianchao Wang. He is the director and co-founder of AltoSice. How are you, Jianchao? Thank you for joining us.
Hi, Marina. I'm all good here. I'm really glad to be here.
Yeah, thank you. Thank you. That's great. That's great.
Well, the intention of this webinar is actually to give an introduction to an amazing workshop that we are preparing with Jianchao. You know that we will be at the EIC in Berlin. And as a pre-event, we will offer a workshop where we will discuss generative AI. We will have this workshop divided into two blocks, so it will be kind of special because it will be very comprehensive.
And, well, the idea is to have different case scenarios and maybe some concepts that people need to know if they want to actually embrace this wonderful world of generative AI. Maybe Jianchao wants to explain a bit more about this.
Oh, yeah, 100%. You will spend a whole morning with us. We prepared a workshop that contains not only practical cases, but also explanations, foundations, and recommendations that are useful in today's digital business. Many organizations are already using generative AI, and it is touching all industries.
And, of course, it's touching IAM and cybersecurity as well. So we've prepared the whole morning with a detailed workshop for you. That's great. That's fantastic. So some details for everybody in the audience regarding today's webinar. You are muted, so you don't need to mute or unmute yourself. During this webinar, we will run some poll questions.
And, well, the idea is to discuss the results at the end. You will also have the opportunity for questions and answers, so you can add your questions at any time using the event control panel. Please feel free to do so. At the end of our presentation, we will go through the questions, and we will be very happy to answer them. For your information, this webinar is recorded, so the presentation will be available along with the slides on our website. So let's just get started. And we will start with a quote from Alan Turing. He is considered the father of AI.
And in 1950, he said: "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." This is actually a great quote, I would say, because it was written less than 100 years ago. And I would say that this is what is happening now.
Jianchao, what do you think about it? Yeah, I believe what he was talking about is happening now. I remember watching movies many years ago where computers were smarter than humans. He would probably be thrilled to see that AI can mimic thinking to an impressive degree now. I'm wondering what the results of the Turing test would be at the moment for generative AI.
Yes, absolutely. And, you know, it is not just the movies; there were also stories and books talking about what would happen with robots, even the cartoons. If we go back in time, I remember The Jetsons, where they attended the doctor just through a screen. And now this is something that for us is pretty common. We even interact with AI tools every day and really have no idea about it. And I would say that when we look back from some time ago until now, so many changes have happened.
We started, for example, with the self-service machines: when you go to the supermarket, you don't really need to go to the cashiers; you can do self-service. Or, for example, the use of AI in medicine to identify different diseases; this is an area where AI is actually making a great breakthrough, I would say. Alan Turing was a visionary, and if he could see what is happening now, I wonder what he would say will happen in the future.
Yeah, I totally agree. I mean, I believe Alan Turing would love to see how amazing generative AI is now; it not only mimics aspects of human intelligence but also excels at tasks like data analysis and complex problem solving. It's exactly those achievements that blur the line between human and machine capabilities. From the use of ChatGPT, we know, right? It's not 100% right all the time, but it helps us and works along with us.
Yes, totally. This is also something that many people ask me: okay, so you are working in AI, what will happen? Will AI replace humans? Will we lose our jobs?
Because, of course, if we refer to what happened in the Industrial Revolution, yes, there were so many changes. Machines were brought into factories to make production faster. And it is true that some roles were no longer necessary. But people who adapted and learned how to use the machines kept their roles. And I believe that this is pretty much what will happen now: people will have to learn how to work with AI.
And there is one more thing that we have to consider: the machine is trained, it is learning. The machines need a human to put something into them in order to train. There is one session that we will have at the EIC that will speak specifically about this: is the machine really intelligent, or are we just giving it information and training it the way we train, for example, an animal that doesn't have a conscience? Maybe, I don't know, Jianchao, what is your opinion on this? It will be a very extensive session.
What do you think about this, about intelligence and machines? Yeah, I do think, if you boil it down to the bare bones, generative AI is still predicting the next word, right? And from the training data, like you said. So right now, generative AI still doesn't have the capability of self-reinvention or self-discovery. But I do think there's talk about it. Because right now, generative AI is like the best student in your class: it always tells you the answer, so fast, there you go. But people are talking about giving generative AI a question.
So let it think for two or three or four days. That capability is not there yet. What we're talking about here is probably the next level, a kind of self-discovery, self-reinvention, and that will help us in scientific areas and all that. Definitely. I love your example: generative AI is the best student in your class. It is true. And today we will explore the reasons for that. So let's continue a little bit.
And let's talk about what happened with AI in the last years. If we compare what happened six months ago with now, there were already so many changes; imagine over all these years. If we remember many years ago, when all of this started and people started talking about AI, people connected AI only with robots. They said, okay, so this is robotics, and they imagined maybe a robot that will clean your house or something like that. But AI is actually much more extensive than that.
And the main point here, I would say, is this jump that happened, let's say, between 2015-2016 and now. What do you think, Jianchao, was the most pivotal moment of AI in the last years? I think it really started around 2010, the early 2010s, probably with the development of deep learning architectures, like, say, CNNs and transformers. Those really gave AI a significant spike in progress.
Those techniques enabled breakthroughs in areas like image recognition and natural language processing, and set the foundation for systems like autonomous cars, real-time translators, even the generative AI we know today. Yes, well, you know, talking about self-driving cars, this is something so curious.
Recently, I was talking to someone who is working for a company, and he told me that they started their business in 2016 believing that in maybe two years all the cars around the world would be self-driving. And I don't know about you, Jianchao, but I don't have a self-driving car. I believe that was a wrong prediction at the moment.
But then we also need to consider that all the changes that are happening, or all the changes that we expect to see, also depend on the environment. The environment is changing, and the needs of the market are also changing. So this self-driving technology that they were thinking of for cars was translated, for example, into other industries like agriculture. And there it is actually much more productive, I would say, than having a self-driving car on the street.
But coming back to the changes: you started talking about the transformers and CNNs. What do you think actually happened between the transformer and GPT? Because I see that there was a massive breakthrough when people started having access to GPT, or BERT, for example, very accessible for everybody, with a free version, or a paid version if you want more features and capabilities. So what was the big jump from one thing to the other? I think there are three points, to my thinking.
The first is probably several crucial advancements on the algorithm side. There are different deep learning architectures now, advanced ones like deep neural networks, DNNs, and different kinds of transformer models. The second, which I think is really big, is the data, the growth of big data: we collect so much data now and are able to train on it. And computation power is the other big factor. So those three are really the key elements behind this big jump for AI.
Coming back to what you mentioned about self-driving cars: I think we all know Tesla and their FSD are going through this kind of moment as well, with big data to collect and also the onboard computation power of those Tesla vehicles. So definitely not in two years, but maybe it's on the right track there. Yeah.
Yes, absolutely. I totally agree with you, especially with all the capabilities that we have now to capture a picture, let's say, of what is happening in the environment. They can take real-time photos, protecting privacy, replicating the same location, even with satellite imagery from, let's say, Google Maps. It is amazing what is happening, something we couldn't really imagine would evolve so fast. And there is one more thing that you mentioned: the growth and flood of data.
I also believe that with all the regulations in the market, especially in the EU, synthetic data will eventually overshadow real data, trying to replicate the features and providing large amounts of data to keep training the models. Well, it is incredible. So let me continue with our presentation here and go to our first poll question. The question is: in which area do you expect the biggest impact of AI on identity and access management?
And the answers could be: supporting access and role analytics and management; AI-based identity threat detection and response; generative AI supporting identity and access management administrative tasks; others; or you're not sure. I invite our audience to submit their answers to this poll question. As I mentioned at the beginning of our session, the results will be discussed at the end. So we appreciate your participation here.
And sorry, yes, go on. No, I was just saying, I would like to vote as well. You want to participate in the poll. So let's jump now a little bit.
Well, to our main topic: the mixture of experts. This is a new kind of model compared with the previous models that AI was using. If we compare the two graphs here, we can see that in the original models we had an input, there were some rules that were set, then the algorithm would process this information and provide an output.
But if we think about what is happening now with the mixture of experts, the machine doesn't have only one model to process it. It has multiple experts. It's like having, as Jianchao said, the best student in your class.
Well, imagine the best student of different fields in your class. There is an input that we can ask. For instance, when we ask GPT whatever question we have, there is a router that will detect what your query is and which expert can answer it. And then this expert will produce an output for you. Something that is amazing is that nowadays it is not only one expert that can answer; many experts could create an answer together, combining their knowledge to produce the best outcome.
In that sense, I would say that the mixture of experts is much more powerful than what we had before, because the algorithm reinforces the learning through different methods and in different areas, and then combines them into one approach. So I would say it's very good in terms of technology. In other words, the idea here is to send the signal to the correct expert and take the output from it, or from several experts covering different topics.
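The routing idea described above can be sketched in a few lines of Python. Everything here is a toy illustration: the "experts" are plain functions and the router is a keyword matcher, standing in for the learned gating network a real mixture-of-experts model would use; all names and keywords are made up.

```python
import math

def softmax(xs):
    # Turn raw scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "experts": each is just a function standing in for a specialized sub-model.
EXPERTS = {
    "code": lambda q: f"[code expert] answering: {q}",
    "text": lambda q: f"[text expert] answering: {q}",
    "math": lambda q: f"[math expert] answering: {q}",
}

# Toy router: scores each expert by keyword overlap with the query.
KEYWORDS = {
    "code": {"python", "bug", "function"},
    "text": {"summarize", "write", "translate"},
    "math": {"sum", "integral", "probability"},
}

def route(query, top_k=2):
    words = set(query.lower().split())
    scores = [float(len(words & KEYWORDS[name])) for name in EXPERTS]
    weights = softmax(scores)
    # Keep only the top_k experts and let each produce a weighted contribution.
    ranked = sorted(zip(EXPERTS, weights), key=lambda p: -p[1])[:top_k]
    return [(name, round(w, 2), EXPERTS[name](query)) for name, w in ranked]

for name, weight, answer in route("write a python function"):
    print(name, weight, answer)
```

Here the query matches the code expert most strongly, so its output gets the largest weight, with the text expert contributing second; a real model combines the selected experts' outputs per token rather than per query.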
Anyway, even though we have this, the main comparison we can draw between the traditional models and this one is: before, we had the limitation that whatever data we had needed to go through the whole single model. Now we have different experts. What would be the main comparison that you would draw, Jianchao? I don't know if you believe that I am on the right track, or you have a different opinion.
Yeah, you're totally on the right track. I think traditional models often struggle with diverse tasks simultaneously, but a mixture of experts delegates efficiently, much like a well-coordinated team. Like you said, there are different kinds of experts, the best student of different classes, right? And this model also gives an advantage on the operations side, because we know large language models like Llama or GPT-3.5 normally have around 70 billion parameters, and a mixture-of-experts model can actually operate a little bit smaller.
Only some parameters are active, maybe a lot less. We will talk a little bit more about this in the workshop later on.
Yes, but for now, we can talk a little bit about Mistral. When we talk about Mistral, we need to understand that it combines, as we said, different experts: it combines eight experts, and each of them specializes in a different domain.
Maybe, well, as Jianchao said, we have other methods as well, or other, well, I don't know if you call them methods or rather algorithms. How would you call them, Jianchao? They're individual small models, seven billion parameters each, and they're specifically fine-tuned for some area. Let's say one could be trained on coding, one could be trained on text generation. When I say trained, it could be fine-tuned.
Yeah, so that's the area. We'll discuss Mistral quite a bit at the conference in Berlin. It has about 42 billion parameters in total, and you need that much memory to run it, which is smaller than, let's say, Llama 2 at 70 billion.
Earlier I talked a little about how it selects two experts. So the active parameters are only about 12 billion. I don't know exactly where they come up with the math; it's not simply 7 billion plus 7 billion.
Somehow they arrive at about 12 billion. There are different methods people are considering for swapping those experts for their own use.
Also, by swapping those 7-billion-parameter experts into active memory, you don't need to keep the whole 42 billion parameters in memory, but more like 12 billion. So there's a lot of modification going on behind the scenes around this model's efficiency on tasks.
Yeah, it can save quite a lot of money in the long run when you're hosting the infrastructure. Well, when you say 7 billion, 12 billion, what I imagine is a tremendously large amount of data.
Yeah, yes. Each of these parameters contributes; each would be a different data point, so to speak.
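A rough back-of-the-envelope sketch of how the total and active parameter counts discussed above can relate: in these architectures, some layers (attention, embeddings) are shared by all experts, while only the feed-forward blocks are duplicated per expert, which is why two active 7-billion-parameter experts use far less than 14 billion extra parameters. The split between shared and per-expert parameters below is invented for intuition; it is not the published breakdown of any real model.

```python
# Illustrative parameter accounting for a mixture-of-experts model.
# The shared/per-expert split is made up; only the shape of the math matters.
shared = 2e9           # attention, embeddings, etc., shared by all experts
expert_ffn = 5e9       # feed-forward parameters held by each expert
n_experts = 8          # experts stored in the model
active_experts = 2     # experts the router activates per token

total_params = shared + n_experts * expert_ffn        # memory you must hold
active_params = shared + active_experts * expert_ffn  # compute used per token

print(f"total:  {total_params / 1e9:.0f}B")
print(f"active: {active_params / 1e9:.0f}B")
```

With these illustrative numbers the totals come out to 42B stored versus 12B active, which mirrors the figures mentioned in the conversation: you pay for all experts in memory, but each token only pays the compute cost of the shared layers plus the two selected experts.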
Well, you work with generative AI on a daily basis. Do you have any example or something you want to share about the models that AltoSice is using? We mainly use a platform like AWS; we're working to be a partner with them. So any kind of model they offer: Claude, Titan, their own one, and Llama as well. What we're really looking into is the efficiency of the model when we're using it. We actually help customers find the model they need, and not necessarily a big one. We're also looking into the efficiency side: how do we run it efficiently?
Also on the engine side: we all know they all run on GPUs, but there are different methods to run the AI a little bit faster. So yeah, we explore; we are students at the moment as well. We're trying to be the best in the class, because AI evolves every day, so we have to study just to keep up. Totally. You know what, now that you mention that we are students, I believe most companies are too, in terms of: okay, how can we actually do something to improve? Some organizations think, okay, shall I create something big that competes with OpenAI?
For example, OpenAI is massive. And others say, okay, shall I partner with them, or maybe try to use their strengths and connect through their API?
So it is, I would say, the very beginning of the journey. One more thing that I would like to say about the mixture of experts, going now to Mistral, is that there are different variants of the mixture of experts. You mentioned before that we can have, for example, 7B or 12B, and then you can train this model on different topics. The main goal of this model in particular is to handle very complex data. At the very beginning, when you started talking, Jianchao, you said: we have very complex scenarios, we work with very complex data. And I believe this is basically what is happening. We will talk about Mistral in our workshop.
Do you have any example where you can say, okay, Mistral was successfully implemented and worked well? I think Mistral is outshining everybody else, or at least, on an even playing field, where it shines is the performance versus the infrastructure cost. That's the one big reason there is such big talk about it. And also, you can play with it, because the architecture of the model makes it very interesting. I think that's why a different kind of engine called Groq is using Mistral at the moment.
It is different from a normal GPU architecture; they call it a language processing unit, I believe, though I can't really get the right word, as the naming is normally very long. But they achieved amazing speed in producing responses. I think that's one of the best implementations. That's a great example. Let's go back now to our track and some of the topics, and let's go to retrieval-augmented generation. I know that many people are really talking about this, and here we have a simplified graph of how it works.
So, Jianchao, what do you want to tell us about this? What does our audience need to know before attending the workshop? From my experience talking to our customers, probably 80 or 85% of our customers want to have something specific to their business domain. And the generative AI model is really broad in a sense, right? We're talking about the best student in the class, maybe the whole grade; I'll use that analogy again.
But with what we call RAG here, you can actually build your own databases. You can push the different documentation from your business domain into it. I won't go into the details, but there's indexing, and the documents are chopped into different chunks and embedded.
That way, you can actually build your own assistant, per se. This is one of the areas we'll talk about quite a lot in the workshop later in Berlin as well.
So one of the use cases could be that you're building a personal assistant, or you have project management: you push all the documents into this, and you can do a kind of smart search. You can ask, when did this happen? When did that happen?
That saves us a lot of time otherwise wasted in document searching; sometimes we just wonder, where's that document? Where's this? So this is a great use case for business as well. Yeah. Absolutely, absolutely.
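The retrieve-then-answer flow described above can be sketched as follows. This is a deliberately tiny stand-in: the "embedding" is just a bag-of-words count and the document store is three hard-coded sentences, where a real RAG system would use a trained embedding model, a vector database, and proper chunking of long documents.

```python
import math
from collections import Counter

# Toy document store standing in for indexed, chunked business documents.
CHUNKS = [
    "Project kickoff happened on 3 March with the identity team.",
    "The access review process was updated in the Q2 security audit.",
    "Invoice templates live in the finance shared drive.",
]

def embed(text):
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Indexing step: embed every chunk once, up front.
INDEX = [(chunk, embed(chunk)) for chunk in CHUNKS]

def retrieve(question, top_k=1):
    q = embed(question)
    ranked = sorted(INDEX, key=lambda item: -cosine(q, item[1]))
    return [chunk for chunk, _ in ranked[:top_k]]

# The retrieved chunk would then be pasted into the model prompt as context.
print(retrieve("When did the project kickoff happen?"))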
And now, fine-tuning. This is a very interesting topic, because many people, when they start learning machine learning or these kinds of models, say: okay, just measure the accuracy of the model; if it is good, that's it. But we also need to tune our algorithm, and many people don't really know what fine-tuning is, or what the difference is between measuring the accuracy and fine-tuning. Fine-tuning has become kind of a buzzword in AI. Why do you think this is happening?
Yeah, for our audience who doesn't know: every time you're talking to ChatGPT, there is prompt engineering that basically says, you are such-and-such, you do such-and-such things. Fine-tuning is giving the model a large amount of data for a specific kind of task, to actually train your generative AI model to perform better in a specific area. It's similar to turning a general practitioner into a medical specialist.
So the answers won't be as generic as what we get from ChatGPT when you don't give it specific instructions. It's kind of like going from high school to university: you choose your topic and you branch deep into it. Yeah.
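In practice, fine-tuning starts from a dataset of example exchanges in the target domain. The sketch below prepares such a dataset in a JSONL chat format of the kind several fine-tuning APIs accept; the examples themselves are made up, the file name is arbitrary, and a real run would need far more examples than this.

```python
import json

# Made-up domain-specific examples; a real fine-tune needs many more.
examples = [
    {"messages": [
        {"role": "system", "content": "You are an IAM policy assistant."},
        {"role": "user", "content": "What is least privilege?"},
        {"role": "assistant",
         "content": "Granting each identity only the access it needs."},
    ]},
]

# Write one JSON object per line, the usual fine-tuning upload format.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The fine-tuned model then learns the tone and domain answers from examples like these, rather than needing the same instructions repeated in every prompt.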
Well, I would say that this is very specific, and definitely we will keep talking about it in Berlin. This is another very important thing that we need to know. Let's go to our next poll question. This one says: what impact do you think the increase of AI adoption will have on cybersecurity in the next two years? Increasing vulnerability identification, increasing automation of cyber attacks, improved MDR capabilities, or increasing AI-leveraged complex attacks? We invite our audience to answer. As I said before, we will discuss the results when we finish our webinar.
And please feel free to leave any questions if you have any. So we are reaching the end, at least of what we really wanted to share, which we consider very important for you to keep in mind for our event.
Well, you already know we will be working the whole morning on different case studies, and there are several topics. The good part of this EIC is that we will have an entire track dedicated to AI, called Upgrade Reality. So apart from the pre-event in June, which will be amazing, because Jianchao and I will be there and we will help you understand more concepts and put your hands on different cases, we invite you to participate in the sessions. Something that I would like to mention is that Jianchao is very skilled in this. He's a cloud professional.
And it is not the first time that we are working together on a project. So hopefully it will be something very interesting for our audience.
Jianchao, I'm very excited to work with you. Yeah, me too. I can't wait for the conference itself. This is much more like a community; everyone is a student, like I said. So if anyone watching the webinar has questions or wants to chat with us,
yeah, I think there's a Q&A, and there are different ways to contact us for sure. Yeah, can't wait to see you in Berlin.
Jianchao, we have a couple of questions in the Q&A chat.
Well, we can start with this first one: how do you balance the ethical considerations of generative AI, particularly concerning issues like authenticity and misuse, while fostering creativity and advancement? This is the ethical part of it.
Yeah, I do think the European Union is definitely ahead of the curve here. The Parliament actually approved regulation on AI use, and there is GDPR as well.
Yeah, this is definitely a big topic. And I remember an interviewer chatting with ChatGPT... oh, sorry, OpenAI: one of their product managers or someone, talking about Sora. They couldn't really tell where the data came from either.
And yeah, it's definitely a big area. Well, I will tell you something here, because I would like to mention that the criteria for ethics cannot be left to chance. That's why there are many meetings and many regulations going on or in progress.
This year, the EU AI Act, regulating the use of artificial intelligence and preventing its misuse, was approved. It is a very long regulation if we want to read all of it, but there are certain limits that we have to set. And this is because technology is evolving very fast, as we said before. There is a phrase that says: your rights stop where my rights come into place.
Well, I believe something very similar applies to people's privacy. You can be creative, you can create something amazing, which is happening nowadays. We see what is happening with GPT: all the data that we provide is used to train the model later on. So with every interaction we have with these kinds of tools, the model is taking the data, learning from it, training, making something better.
But we also need to protect our privacy, and this is why there are many regulations going on. I believe this has a much bigger scope, and there will be a lot to explore in the coming years. I will go to the other question here: what is the difference between Llama, Mistral, and GPT?
Well, so first of all, yeah, I can try to answer that now. They are all generative AI, they are all large language models, and they are very large, trained by different companies. Llama and GPT are similar in kind; they're just large language models. Mistral, as we showed a little earlier, has a slightly different architecture: it has eight smaller models. When you see large language models, you often see a number behind the name, like 7B or 70B.
Going into the details, those are the parameters trained inside the model; a 70B model has 70 billion parameters inside. And the larger the model, the better it is supposed to be at predicting the next word, so the outcome is a bit better. Mistral itself, as we showed in the webinar, uses a different approach, with different experts inside.
So yeah, I think Mistral is slightly different than the other two. Yeah.
Well, something that I would like to add here is that the three models are, as you said, generative AI. All three use transformer architectures and are designed for a wide range of applications. They differ in terms of scale and capabilities, for example; not all of them can scale in the same way. They also differ in efficiency, the specific approach to training, and generalization.
This is what makes each of these models unique and suitable for different uses, different tasks, or even different applications in the field. I would like to go through the polls. Let's see the results of our first poll, where Jianchao wanted to participate. What are our results here? Whoa. So the question was: in which areas do you expect the biggest impact of AI on identity and access management?
Well, the main answer here is AI-based identity threat detection and response. Yeah, I would have chosen that one too. You wanted to vote for that one. Okay. So this one is the winner, let's say, among all the changes that are expected. I know that at the moment, identity threat detection and response is on the rise and everyone is talking about it.
It is, from my perspective, not a market per se, not yet, but it is going in that direction. Why do you think, Jianchao, we got this kind of result here? I don't know, but I do think it's both sides of the coin. Because the large language models are trained on code as well; there's a lot of human code, and there are exploits and bugs already in that code, so it's easier to do detection. And the models are trained on them too. So it's both sides: it can detect and it can respond as well. That's what I think. Yes.
And something that I would like to mention: we always think about how cybersecurity experts can use AI, how identity management professionals can use it, but we also need to remember that attackers have access to AI too. AI has the capability to help the defenders, but also to help the attackers. Let's see the results of our second poll, please. Wow, here we have much more variety. So it says: what impact do you think the increase of AI adoption will have on cybersecurity in the next two years?
So answer B was our winner here: increasing automation of cyber attacks. Absolutely. I think our audience read my mind.
But this one is a bit more split in terms of the results. D is increasing AI-leveraged complex attacks. This one can be particularly important, something we need to keep our eye on, especially for cybersecurity professionals.
And then we have the other two: increasing vulnerability identification and improved MDR capabilities. Jianchao, do you have any comments on these results?
No, not really. I think B and D are pretty much where AI can be used, for sure. With a capability like RAG, you can pretty much produce a script with internet search capability at the back end to exploit those bugs, and common or known exploits as well. I can sketch it in my head, probably. It's easy to do.
Well, same here. Thank you so much to our audience for answering these polls. A couple of last words: the event we will have in Berlin will be full of good speakers. And as I mentioned, Jianchao and I will be at the pre-event on the 4th of June. We will spend the entire morning with you working on generative AI and some real case scenarios, where you will have hands-on practice trying to find solutions.
And well, we have these key topics that will be part of the event. Thank you so much, Jianchao, for being with me today. It was my pleasure working with you once again, and it was great that you shared your knowledge with everybody here.
Oh, thank you. Thanks very much. I'm glad to be here as well. Can't wait to see any of the audience at the conference in Berlin. Yes, same from my end. I'm really looking forward to meeting you all. And if you have any questions, you can always reach out to me or to Jianchao, and we will be happy to be in touch with you. Thank you so much, everybody. Thank you so much, Jianchao. And see you in Berlin.