Welcome to our KuppingerCole Analysts webinar, "Understanding the Impact of AI on Securing (Not Only) Privileged Identities". This webinar is supported by BeyondTrust and the speakers today are Morey Haber, who is Chief Security Advisor at BeyondTrust, and me, Martin Kuppinger, I'm Principal Analyst at KuppingerCole Analysts. So in contrast to many of the other webinars we're doing, this will primarily be sort of a fireside chat without a fireside. So it will be a chat, a conversation between Morey and me. And anyway, ahead of this, we will do a quick poll.
Some housekeeping notes: we are recording the webinar. You are muted, and you can enter your questions at any time. The more questions we have, the better it is. On the right-hand side of the web version of the webinar, you'll find the chat, and especially the Q&A. The Q&A is most important, and the polls are important. So enter your questions in the Q&A and we'll pick them up, or at least try to integrate them into our talk, into our conversation. And the more we have, usually the more interesting, the more lively it will become. We will do two polls, one at the beginning, one more towards the end.
And with that, I directly go to the first poll here. And basically the concrete question is: where do you see the biggest impact due to the increase of AI adoption on cybersecurity over the next two years? Is it more on the cyber attack side, so attackers benefit, including leveraging new types of attacks? Or is it more on the defender side, with improving MDR and XDR capabilities? Or do you say, okay, not much will happen?
I have a pretty strong conviction about which of the three options will get zero responses, but we'll leave it open for a bit. We'll see, and if time allows, we will pick up the poll results during the flow of the webinar.
As I said, we don't have a typical agenda here. It's just Morey and me talking about the changing landscape.
Oh, sorry. I have the wrong title. It's not Application Access Governance. It's Morey and me talking about how AI impacts identity in general and privileged access management in particular. So with that, we will directly move to our first topic, which I would call AI identity today. So AI and identity, it makes a nice term, AI identity. And we will look a bit at the state of today. And with that, I'd like to ask Morey to introduce himself as a first step, and then we'll proceed from there. Morey. Thank you so much, Martin, for having us today. My name is Morey Haber.
I'm the Chief Security Advisor here at BeyondTrust. I operate as a forward-facing asset for the organization, helping people understand modern security threats, helping them along their privileged access journey, et cetera. I've been with the organization about 20 years now, and I'm the former CTO and CISO for the company, as well as the author of six cybersecurity books covering various attack vector landscapes.
So Martin, it's a pleasure to be speaking with you today. And I think this topic of AI identity today is something that's quite fascinating considering all the definitions of identity. Yeah. And what I think about a lot is this relationship of AI and identity, and, picking up the poll, cybersecurity as well: attackers use it, take deepfakes, and we use it to counter, take AI-based identity verification, for instance. But I think it goes even beyond that, to my thinking.
So what about an AI-powered agent, a bot, however we name it, that works on behalf of us? So you can, so to speak, book the bot to do things for you. Others can do it as well, which leads us, I would say, to very interesting identity relationship challenges already, because this AI bot must have an identity, like every service has an identity, but it acts on behalf of a human identity. So we are probably even intersecting non-human and human identities.
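One way to make that dual relationship concrete is to model the agent as having its own identity while only ever acting under a scoped, time-bound delegation that names the human it works for. A minimal Python sketch of that idea, with entirely hypothetical names, fields, and scopes:

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import time

@dataclass(frozen=True)
class AgentGrant:
    """Hypothetical delegation record: the agent has its own identity,
    but every action it takes is tied back to the human it acts for."""
    agent_id: str            # the non-human identity (the agent's own account)
    on_behalf_of: str        # the human identity that delegated the work
    scopes: Tuple[str, ...]  # what the agent may do, e.g. ("hotel:book",)
    expires_at: float        # delegation should always be time-bound

def may_act(grant: AgentGrant, scope: str, now: Optional[float] = None) -> bool:
    """Allow an action only inside the delegated, unexpired scope."""
    now = time.time() if now is None else now
    return scope in grant.scopes and now < grant.expires_at

grant = AgentGrant("travel-bot-7", "martin", ("hotel:book",), time.time() + 3600)
assert may_act(grant, "hotel:book")          # allowed, and attributable to martin
assert not may_act(grant, "payments:send")   # outside the delegated scope
```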
We are, and this really goes to that machine account relationship. When you have a machine account or machine identity, it has a human owner. When you then say, I'm going to train a machine identity to have the characteristics of yourself, you then have deepfake video, you have deepfake audio, but that may be a little bit of a stretch for some to say, how can that impact my business today?
But you can take a paid version of OpenAI and, if you are prolific in writing, train it on your writing style, upload your work, upload your documentation in private so that that data is protected, being a licensed version of the solution, and give it a name, like: this is the writing style of Morey Haber, since I have written multiple books. If I then post questions or ask it to write something, it will use my key terms in the responses. I always use things like threat actors. I don't say hackers.
That then creates a very interesting machine identity type of persona I control, but that can actually generate text for me. AI today, in terms of identities, can extend what the human can do, but that could also be used in a nefarious way to impersonate somebody. And that's where we start getting into some interesting challenges.
Yeah, so the one thing is you, Morey, or me, Martin, writing things and training our AI to create something that reads like we would write it. So I tend to say, if I do something like that, and occasionally I have started doing it, I always say I'm pleased that it does it in better English than I would write myself, which also is an advantage of training the AI. So not being a native speaker, there's always room for improvement.
But yes, someone could also say, write something that reads like Martin has written it, and post it that way. Yes, I think this is an interesting point. So should we, at the end of the day, allow AI to impersonate someone without checking whether the impersonated person approves this? So we then have deepfake video, audio, potentially text, but as for AI detection methods to distinguish between human and machine, using AI to actually figure that out, we're not quite there yet, in my opinion. There are some good models out there that can look for deepfake videos.
They basically break them into segments and look for anti-aliasing artifacts or other types of things in the video that might say, you know what, that's been digitally created. But for text, you can pretty much upload it to plagiarism engines, and they will look at whether the text exists elsewhere or whether they believe it was created by AI. But is there truly anything wrong if you're using it to help you as a professional create better content? I think the jury's still out on that. What is the purpose of it? The purpose is that it augments.
In that sense, I would say, and we will touch on this a bit later also, that at the end it is an augmenting intelligence, which I think is probably much closer to what AI is than artificial intelligence. It augments people in doing certain things more efficiently, better. And I think there's something good in that.
But yes, I think it's an interesting point that makes me think, what is the risk of abuse? And on the other hand, I would dare to say that one can distinguish between a text that has been written by a human and one that has been written by, so to speak, the AI impersonating that human. I think it's probably doable. Like always, it will sort of come closer, and then maybe we have better technologies to detect it. At the end of the day, I think it's the same as with deepfakes, and you brought up deepfakes as well.
I think we have this increase in deepfakes, and deepfakes are getting better and better, but clearly we will also see the AI becoming better at detecting deepfakes. And for deepfakes, by the way, and maybe it's the same with text, it's probably easier for the AI to detect that there's a bit of an anomaly than for the deepfake to be perfect. Because I think this is where we flip things a bit: usually in cybersecurity we say, hey, an attacker needs only one working attack vector, we need to defend against all. For deepfakes, it's a bit the other way around.
The deepfake must be perfect, but we only need to spot one anomaly, which is a point we should keep in mind. If we're able to raise risk signals from small anomalies, we at least can increase the barriers. And imagine the tool we are on right now, Zoom, pops up a small alert: oh, be suspicious, this might not be the real Morey. Then I would probably be very, very cautious in that call. So I think AI and identity, it's a very interesting interplay with impersonation, et cetera.
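That defender advantage, needing only one anomaly, is also roughly how the per-segment video check described earlier can be framed: score each frame and flag outliers against the clip's own baseline. A minimal sketch, assuming frames are already decoded to grayscale NumPy arrays; the Laplacian-variance heuristic and the threshold are illustrative assumptions, not a production deepfake detector:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float32)

def laplacian_variance(gray_frame: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response; a rough proxy for the
    smoothing/anti-aliasing artifacts generated frames often show."""
    h, w = gray_frame.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray_frame[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def flag_suspicious_frames(frames, z_threshold=3.0):
    """Score every frame and flag the ones that deviate strongly
    from the clip's own baseline: one anomaly is enough to raise a signal."""
    scores = np.array([laplacian_variance(f.astype(np.float32)) for f in frames])
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return [i for i, zi in enumerate(z) if abs(zi) > z_threshold]
```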
But also when we take it further into what happens in the systems, and we saw it with deepfakes and big payments being made, it might also be something which is over time used to act as someone inside systems, in a way that tries to impersonate. There are some interesting thoughts here, yes. It brings up an interesting point. So we have video, audio, we just talked about text, but you now also have the behavior of an account, and I don't know if it's machine or human behavior using AI. Is it an AI-driven component or is it really a human doing the work?
So when you think about a breach or an attack, a lot of automation does occur, but is that particular account being compromised, a human account being driven by AI or a machine account being driven by a human? And AI really could help determine behind the scenes whether that identity account relationship is truly being run by a human, being done by a machine, or a machine account being run by a human because it's doing things that a machine account would normally not do. And I think Gen AI's impact can help us in many ways there.
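As a rough illustration of that human-versus-machine distinction (not BeyondTrust's actual model), a couple of timing statistics over session events can already separate scripted activity from interactive use; the event shape and thresholds here are assumptions for the sketch:

```python
from dataclasses import dataclass
from statistics import mean, pstdev
from typing import List

@dataclass
class SessionEvent:
    timestamp: float   # seconds since session start
    command: str

def classify_actor(events: List[SessionEvent]) -> str:
    """Heuristic: machines emit commands at very short, very regular
    intervals; humans are slower and far more irregular."""
    if len(events) < 3:
        return "unknown"
    gaps = [b.timestamp - a.timestamp for a, b in zip(events, events[1:])]
    avg_gap = mean(gaps)
    jitter = pstdev(gaps) / (avg_gap + 1e-9)   # relative variation in timing
    if avg_gap < 0.5 and jitter < 0.2:
        return "likely-machine"                # fast and metronomic
    if avg_gap > 2.0 and jitter > 0.5:
        return "likely-human"                  # slow and irregular
    return "needs-review"

# A service account suddenly classified "likely-human", or a human account
# suddenly "likely-machine", is exactly the mismatch worth raising as a signal.
```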
Yeah, but what you're saying here is a bit more on the other side of it, and I think we should keep this in mind also for the privileged access management piece, but more in general, I think the one risk of AI is that it can impersonate in a way which is at least problematic, maybe totally unacceptable. On the other hand, it may also over time help us to understand who is really acting here. And I think both are important. We already talked a lot about Gen AI.
So basically, when we look at Gen AI, what do you see as the most important impacts of Gen AI on the entire theme of identity on one hand, but probably also on identity management in the broader sense, and on cybersecurity, since we already touched these topics? So what would you see as the key aspects that impact these areas, the benefits, maybe also the challenges? So when you're training any model, any model, you need concrete data that will be reliable in the training effort, so it doesn't produce AI hallucinations later, like missing gaps where the AI just makes something up, right?
If you had video content where you knew this type of change or this command created a behavior that was unacceptable, or you had roles within your organization incredibly well-defined, where the privileges in the account relationship, whether it's a privileged or non-privileged user, contractor, vendor, et cetera, are well-defined, that's good, solid data. Then when you do train for the identity account relationship to make recommendations, to look for flaws, your outcome is gonna become much better. If you train with garbage, garbage comes out.
If you train with a solid model, it's like machine learning, you're gonna get better responses out. And I think for organizations that have worked on their maturity curve for PAM and identity and access management that have gotten past a lot of the initial steps of good roles, good governance, good privileged access, session recording that's monitored, and then processing that, I think you're gonna find that the results are gonna speed up threat identification significantly.
I especially like, by the way, the session recording part, the session monitoring part, which, when we go to PAM, is always a bit the hard part, because the human effort tends to be very high. And that clearly is one of these areas where AI can help in reducing to a minimum the parts where a human really needs to pay attention, especially if it's good, if it's well-trained.
Because then it's always about this: I think the training directly impacts the false positive and false negative ratios, which you want to go down. By the way, you talked about the hallucination thing, which can sometimes also be helpful.
So, I just played around with some prompts, also asking it to create a list of links relating to research and other publications of KuppingerCole Analysts. The result was really hallucinated. But at the end of the day, I would say four out of the six links were great ideas for what we could produce in research.
So, basically it came up with things which didn't exist, but some of them were, in that sense, good hints on what we should do. So, even that can be sometimes helpful.
So, just as a small anecdote here, but I think what we also learn is that we need to learn how to use Gen AI. So prompt engineering is an art in itself, and you need to do it very properly to get the right results, just as you need to have the right information in the models to train them the right way to come up with proper results. And then there's the chance of potentially poisoning the models or exploiting the models, using them in ways that they were never intended.
Some of the audience may be familiar with the case from several years ago where an attorney in the United States went to court with a list of cases to make his argument, but they were hallucinations from ChatGPT, which had created false cases for his argument. So, again, garbage in, garbage out. The verification, or the abuse, of the model is something, in my opinion, that most people are not used to yet. They're not comfortable with that verification, or with trusting what is being produced as output, or even the classification.
If I were a company that had to hire people to look at sessions over and over and over every day, and I had AI doing it and flagging appropriately, is there hallucination? Is it a false positive, et cetera? You might get more complacent, or it might be a false negative, something never seen before and completely missed, versus the human sitting there and watching it every day. So I think we have this positive and negative aspect that we're still gonna have to work through in the next couple of years.
Yeah, but we are learning, and with every new technology, we will learn how to use it the right way. There will probably be some things introduced as general rules over time. So like with vehicles: when they were invented, at the beginning they were slow, and then they became faster and faster, and then some rules and speed limits and other things were introduced. And in that sense, we may see something a bit similar over time that helps us avoid the potential negative events.
But we're talking a lot about Gen AI, and there's the other angle of it, the other element, which I tend to call analytical AI, the stuff which has been around for much longer, which we use in every modern SIEM tool, in security information and event management, to analyze a lot of data and look for anomalies. So this is less about generating, but really about analyzing huge amounts of data. So is this also, and I think it's a bit of a rhetorical question, but I ask, is this also relevant in our domains of identity and cybersecurity?
And maybe also, how do you see the interplay between these two types of AI? Well, there's an interesting concept here that I want to explore real briefly.
For 2025, BeyondTrust produced its predictions. They're available on our blog site. And one of the things that we honed in on is something called AI squared: artificial inflation for artificial intelligence. And that is that there is some bubble that's going to burst from a marketing perspective regarding AI.
Now, AI is here to stay. It has a lot of great practical applications that Martin and I have already discussed, but there are places where it's just going to pop, where marketing has said, hey, I've got an AI toothbrush, seriously, where it is just not realistic that it'll come to market. The analytics piece is one that I would argue is probably one of those areas that will pop in 2025, because it's really data analytics in many cases, or machine learning, and not true AI.
While there is AI analytics out there where it will form new results, I personally believe that when you apply AI to a SIEM or any type of log, and it's learning and learning and learning, it's really not AI. It's more data analytics at the core. I have a very strong opinion on this.
Martin, we probably will go head-to-head on this one, so I'll let you jump in before I monologue. No, no, no. I think not necessarily head-to-head, because I'm an analyst, so there's a lot of marketing thrown at me day by day, so to speak. In that sense, I think I get your point, and already a couple of years ago I started asking what is really AI and what is ML. To take a simple identity example, role mining: a lot of role mining is just plain statistics, so not necessarily really AI.
I would also say some of the things we see can't really be called ML: when you just analyze your internal role models, you don't have the data for a full-blown ML model, which relies on much more data. So I think, yes, there's a blurring line between what is what, but I think we have this in many areas, and at the end of the day, I think we all should think about what the value of it is. Should we put AI in the focus, or should we better look at the value of what it is?
It's interesting to see, for instance, that we see a lot of co-pilots nowadays from everyone, and I think people understand this is an AI use case, this is basically AI, but it's notable that most of these co-pilots are not called AI co-pilots, they are just called co-pilots, because the co-pilot, in the end, is the use case, the value, and AI is something that is powering it.
We also talk about autonomous driving, not AI-driven autonomous driving, even while AI is in there. And I think we probably will move to a point where it's just there, and this is not new to IT, when we are honest: a lot of technology becomes very prominent when it comes up, and then it basically disappears again at various levels. So databases or data stores, of whatever type, are just there for most of us nowadays; we don't care about them, they are just there.
Or if we look at something like DLTs, used underneath for some things, we talk much less about them, but we may use them in more cases than we envision. And I think the same holds true for AI: currently it's the big hype thing, but it may just become something which is there and which we use in other tools which are then much more use case-focused. Well, let me ask you from this perspective, because you brought up co-pilot: does it truly learn like we expect AI to learn? Does it truly improve the algorithms to predict our words better, more and more, over time?
Or does it just pick up the keywords, like predicting you're gonna use the word 'very' next, or the words 'thank you' when you start keying them into your mobile device? I would argue that that's just some form of statistical analytics versus an AI engine truly behind the scenes that's getting better and better and better. So I would say if something is just the type-ahead I have had in my word editor for years, then it's probably nothing I would call a co-pilot.
So for me, a co-pilot is really something which is doing more complex tasks, and really, I think the co-pilot stuff we receive from many vendors, this is really the augmenting stuff, which helps by walking me through complex challenges and providing me with the information, the data, the analytics, the results I need. So for me, a co-pilot really would be more than just, I think, a bit maybe... More than just the email bot that's searching through all the emails.
Yes, okay. So we're back to a definition problem then, right? How far and where? Yeah. But at the end of the day, we could also see it differently. I had a lot of discussions with vendors, to take another example, about what is a cloud solution. So what is really a cloud or IDaaS solution? Is it only when it's public multi-tenant? Can I only call it IDaaS then, or can I call it IDaaS when it's a managed service deployment?
At the end of the day, what I said at some point is, it depends. I think when we take the perspective of the user, then it's whether it meets the requirements of the user in that definition. So it's something that is elastic like a cloud service, with sort of a pay-per-use model, a subscription model instead of a traditional license model; some of these factors would define it. And so for AI, I could say, okay, at the end of the day, what counts is: it helps me. It augments me in doing my stuff better.
It helps me, like the assistive systems in an autonomous or partially autonomous vehicle that help me drive more safely. If it helps, I don't care how much or how little AI it is, if it works. I agree. And I think there's something to it, but I can tell you my electric car with AI driving still has not learned that I grab the steering wheel from the bottom versus the sides. It just hasn't learned. Yeah. So I think there's a lot of interpretation and I think there's a lot of growth there. I know that's probably a really silly example.
Maybe you've got to laugh, but the difference between analytics, as you said, statistics, and it actually learning or being able to give you the information, that's gonna be key, because maybe it will start recommending: hey, did you look, is there another hotel? You know what, you always book this hotel; this one's got a better rating and more stars at a lower price. Those are things that I might think it would improve upon for travel or other types of events. Yeah.
But I think it would be really smart if it says: and for the meetings you have, this one is really better located, because it really saves you a lot of time. And then it starts to get a bit scary, and we are probably in this what's-next discussion already. So it may then also say, you know, okay, let's take Paris: it's in one of these areas you really like in Paris, or it's just the type or style of hotel you always rate a bit higher, so maybe more the boutique hotel thing. And so I can recommend this. But clearly there's also this question: what should the AI know about me?
Which again is a bit of an AI identity thing. So if this is my own AI agent, which acts on my behalf and where I say, okay, you do that for me and I'll tell you what I want. It's probably a different thing than when it just happens. And I wonder how did the AI get my data? So that might be an interesting area, but I believe you can do a lot of things where we have this impersonation where we also clearly end up with liability challenges. So if the AI does something on our behalf, who's liable?
Well, it's also the reverse. And I think this is a common challenge a lot of organizations are having, especially when they sell products and services. It's how far do you take AI being forward-facing for your company? For example, whether you use AI to generate emails, conversational emails or a chat bot, do you give it a human name? How far do you take it in telling the person that there is AI behind the scenes or it's fictitious? Do you go far as creating a LinkedIn profile for that name?
When are you legally responsible to tell the person they're interacting with a bot or an AI engine behind the scenes? And I think that's what's next: not only what we're gonna allow it to do for us, like book me a trip to Paris, you already know all my preferences, you already have all my information, you know what I like, go do it. But how much, as a vendor or a company, do you need to disclose, even ethically?
Yeah, I think that's an interesting question. So should I know that I'm talking with Morey, or is it Morey's avatar talking to me in this conversation?
So, yes, I think that is something which is important, because, you know, when you feel that something, so to speak, behaves as someone and you catch it out, then this might also be perceived very negatively. So that is always an interesting question: what is acceptable also from the recipient side; I think you can look at it from that perspective as well. So people may feel, yeah, betrayed, maybe too harsh a word, I don't think so, but when they learn, oh, this wasn't the real counterpart.
Well, let's just take conversational marketing emails that are AI-driven as an example. If you knew you were being solicited by an AI-driven email system, would you disclose the same information to it, NDA aside, that you would to a human, knowing it's a chatbot?
To me, that's kind of the what's-next question. If I'm looking for a new product as a CISO and I know that I'm talking to an AI engine, I might not wanna tell it certain information, because I know it's getting processed in places that I can't even conceive of, versus knowing it's a salesperson I'm talking to on the phone. I think there's a lot of what's-now or what's-next ethics that are gonna start popping up with AI impersonating people, or how far companies take it. That might be way outside of even our conversation today, but these are things that people have got to start thinking about.
Yeah, fair point. Maybe before I come to my next response or input on that, just a reminder on the Q&A. So to everyone in the audience: you can enter questions at any time, and we will pick them up. We have time left for the Q&A. There's also a voting capability, so you can vote for questions that you find more interesting, so that we focus on the questions that have the highest number of votes. So don't miss this opportunity in the event tool here. But by the way, around impersonation, there's another thing I think about sometimes: how do I communicate with the AI?
I think I sometimes handle my ChatGPT as if it's a real counterpart; I say, could you please, or things like that. Why are you being polite to it? It doesn't know. Maybe the results are better. You don't know. Maybe it says, okay, if this is an unfriendly person, I deliver worse results, probably not. But it is an interesting thing, because I think there's also a bit of a blurring line in how we perceive the AI, and the more it behaves like a human, the more the risk of misinterpreting things may increase.
So definitely a very interesting theme, and we will have to learn a lot of things, and also probably figure out ways that are well-balanced in enabling or leveraging the potential of AI and finding a good balance with regulation. This is always tricky, as we all know. So what should be regulated, what not? So do you see any big, any breakthrough innovations coming up, things that, because of AI, will be breakthrough innovation? I do see some breakthrough innovation coming through.
I mean, being polite to AI, if it's consuming text or voice, I mean, there's science fiction from Star Trek to I, Robot that covers things like that. Who knows, right? But in terms of breakthrough innovation, there are some things that are helpful. We touched on video a little bit before, session types, but there is also, in terms of defensive strategy, the ability to determine whether a human or a machine is actually conducting the action.
And I think this is an interesting piece of breakthrough technology, something, for example, BeyondTrust has done, where any activity, behavioral activity, can pretty much be determined to be a machine operating an account that looks like a human, or vice versa. And I think that's really important when trying to determine if an incident is occurring in your organization, because you're now seeing that that identity account relationship is being misused, or used in a way that it was never intended to be.
For example, someone should never be logging in as a service account with interactive login. You know it's a machine account, but the behavior is a human doing something that they're not supposed to.
Well, it's either a bad configuration, or it's an attack, or it's been compromised. And that type of AI, to me, is one of the breakthrough things that I think is gonna get bigger and bigger in the future. I think also the ability to consume many more signals will probably bring us some interesting innovation. So when I think about this ability, maybe let's take three scenarios. We all heard about this 25 million dollar transaction after a deepfake video call.
So what AI could do is help us dealing with more signals and not only relating IT signals like between XDR or SIEM, but also relating business signals. So if there's an anomaly on that side and something just happened with involved people on the other side, then we have different risk signals because there's this transaction anomaly. And if you can relate it to things that happened on IT before, we can probably say, okay, let's have a closer look on this. So we strengthen risk signals.
I think another thing is probably the way we handle authentication, with signals coming in, maybe at some point even having enough signals to not need to authenticate in the narrow sense. That might also be one of the things we see. And I think we probably have the potential of getting rid of a lot of entitlements by being much more dynamic, based on all the information, the context information.
Again, we gather the context to say, okay, we don't need that many static entitlements, because we can probably predict pretty well what someone needs at a certain time for doing certain things, and keep this under control and very limited. So AI to enforce least privilege, is that where you're going with it? Yes.
Okay, I like that. I think you summarized my very long sentence. That's wonderful. I really like that model, because if AI is learning what entitlements are needed to perform actions by role, by location, like you're gonna have different entitlements in Europe potentially than you would have in the United States just based on data privacy laws, then I think it could learn that on an individual basis for a company and then just ensure that that's what's granted. Anytime you need something different, it requires some form of step-up approval process.
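A minimal sketch of that grant-on-demand idea: a learned baseline of entitlements per role and region, with anything outside it routed to step-up approval. The baseline data, function name, and entitlement strings are hypothetical placeholders, not any vendor's API:

```python
from typing import Dict, Set, Tuple

# Hypothetical baseline a model would learn over time:
# (role, region) -> entitlements normally needed for that work.
LEARNED_BASELINE: Dict[Tuple[str, str], Set[str]] = {
    ("dba", "EU"): {"db:read", "db:backup"},
    ("dba", "US"): {"db:read", "db:backup", "db:export"},
}

def request_entitlement(role: str, region: str, entitlement: str,
                        step_up_approved: bool = False) -> str:
    """Grant only what the baseline predicts is needed; everything else
    requires an explicit step-up approval before it is granted."""
    baseline = LEARNED_BASELINE.get((role, region), set())
    if entitlement in baseline:
        return "granted"
    if step_up_approved:
        return "granted-with-approval"   # time-bound, logged, reviewed
    return "step-up-required"

# e.g. an EU DBA asking for db:export falls outside the learned baseline
# (data privacy constraints) and triggers the approval workflow.
print(request_entitlement("dba", "EU", "db:export"))  # -> step-up-required
```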
I think that's probably a fairly good way of thinking about breakthrough innovation for AI in terms of privileged accounts, which is the topic of our discussion. Yes. Which brings us to that part of the discussion. So we talked about a lot of different aspects of AI and its impact on identity and cybersecurity. And clearly there's the very concrete question about AI and PAM, so privileged access management. So where's AI there? I think it's been there for a while. We saw this with what at the time was called the user behavior analytics stuff around privileged access.
And nowadays we call it, which is much smarter, ITDR, because user behavior monitoring sounds like a really bad thing. If we name the same thing identity threat detection and response, we do something very positive, because we don't watch the user, we protect against threats. Even if we sometimes do the same thing, it sounds much better if you call it ITDR. That is here. But what else do you see happening around this space? What other things are happening in this area?
Well, I mentioned just before the overlap of determining behavior to be machine or human as an indicator of compromise. There's also the traditional cluster mapping technology, which is more ML, but it gives you the idea of baselining who is in what group and where their deviations are. That's still very relevant. And even though it's an ML subset of AI, it's relevant to PAM, like: these are unique outliers in terms of behavior. The bigger pieces with PAM and AI come about when you start seeing things that have no entitlements behind them, when you see behaviors that just should not occur for any good reason.
Simple example: I have authorizations occurring in my environment without authentication. Now, that would be data that you typically see in a SIEM or other type of log store, but from a privileged access management standpoint, you never should be able to authorize something unless an authentication matches somewhere, someplace, sometime, et cetera.
Yeah, AI starts to learn. Authentications occur here, here, and here. These are the expected authorizations afterwards. These are the rights, permissions, et cetera. And then flagging them when something is wrong. This can lead to things like, I can determine if token hijacking has occurred. I can learn when man in the middle attacks occur because now I'm seeing behavior based on a stolen session. And I think PAM in terms of using gateway technologies, proxy technologies, and monitoring for privileged activity is gonna benefit the most. And some of it exists today.
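A minimal sketch of the correlation Morey describes: every authorization should be preceded, within some window, by a matching authentication for the same identity. The event shape and the window length are assumptions for illustration only:

```python
from bisect import bisect_right
from collections import defaultdict
from typing import Dict, List

AUTH_WINDOW_SECONDS = 8 * 3600  # assumed maximum session/token lifetime

def find_orphan_authorizations(auth_events: List[Dict],
                               authz_events: List[Dict]) -> List[Dict]:
    """Flag authorization events with no preceding authentication for the
    same identity within the window; a possible hijacked token or stolen session."""
    auths = defaultdict(list)                 # identity -> sorted auth timestamps
    for e in auth_events:
        auths[e["identity"]].append(e["ts"])
    for ts_list in auths.values():
        ts_list.sort()

    orphans = []
    for e in authz_events:
        ts_list = auths.get(e["identity"], [])
        i = bisect_right(ts_list, e["ts"])    # auths at or before this authorization
        if i == 0 or e["ts"] - ts_list[i - 1] > AUTH_WINDOW_SECONDS:
            orphans.append(e)
    return orphans

# Feeding this with IdP/SIEM login events and PAM authorization events is the
# kind of cross-source correlation a learning model would automate.
```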
When AI is applied to this, you expect authentication, authorization and the entire approval model to flow, and it spots the deviations from it. It's not a hard stretch to think about. You have to be able to collect data, as you've indicated, from multiple signals or multiple locations: from your SIEM, from your IdP, from other types of data sources or your privileged sources.
But once you start doing that, you basically have taken a lot of the threat hunting away from the human who's trying to link things together, and automated it to find even living-off-the-land attacks, where commands are being run that should never have occurred or been authorized based on the previous chain of authentication, escalation, configuration management, et cetera. And honestly, a bit more down to earth... I think this is super important, yes. And I think the other area I see is really what I would call the real copilots. Let's call them real copilots. I get that, I get that. Go ahead.
Because I think there's so much complex stuff. And also in privileged access management, there's a lot of stuff which we only do occasionally, which we don't do daily. And being guided, being supported by copilots, is also a huge advantage. So how to do these things properly, how to do them right, also how to do them with fewer errors. Take command line interfaces. I see a lot of demos, and when the demos are real-time live demos and someone does something on the command line, I would say that in at least a good share of those demos there are typos in the command line.
And that means also we have a lot of typos in real world on the command line, which can be unproblematic or which can be problematic. I think all of this is something which can be improved. So a lot of things in really this augmenting perspective of AI, I think also are a huge potential for privileged access management. Plus also combining signals from many, many more areas to have really a better risk signal across everything.
So Morey, I think we touched a lot of points. We still have to... Can I ask you just to clarify on that? Because you just hit on something I've never thought about before, truthfully. If you were a threat actor, automating scripts or running something pre-built, there are complex commands you're gonna enter every time, but a human typing them probably would make a typo. Is that an indicator of compromise in your opinion? Looking for those? So it depends. It's a type of indicator of compromise. It depends on who you expect to be doing it.
But if there's usually a human doing it and then everything runs smooth and fast, then something has changed. It's probably not a human. Then you have an anomaly.
Yeah, it's a good point. Thank you.
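A rough sketch of that idea: compare an interactive session's correction rate and typing cadence against what you expect for the account. A service account that suddenly types like a human, or an admin whose commands arrive with machine-perfect speed and zero corrections, is worth flagging; the thresholds are illustrative assumptions:

```python
from statistics import mean
from typing import List, NamedTuple

class Keystroke(NamedTuple):
    ts: float      # seconds since session start
    key: str       # e.g. "a", "Backspace", "Enter"

def session_features(keys: List[Keystroke]) -> dict:
    gaps = [b.ts - a.ts for a, b in zip(keys, keys[1:])]
    corrections = sum(1 for k in keys if k.key == "Backspace")
    return {
        "mean_gap": mean(gaps) if gaps else 0.0,
        "correction_rate": corrections / max(len(keys), 1),
    }

def looks_scripted(keys: List[Keystroke]) -> bool:
    """Machine-driven input: near-zero corrections and sub-human key timing."""
    f = session_features(keys)
    return f["correction_rate"] < 0.01 and f["mean_gap"] < 0.05

# If an account that historically shows human-like typos and pauses suddenly
# produces sessions where looks_scripted() is True (or the reverse), raise a
# risk signal and require step-up verification before the session continues.
```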
Oh, welcome. So maybe let's quickly look at the second poll and then move into the Q&A. I was picking up the questions which came in. There are already a lot of questions. Please enter additional questions here so that we can provide you with the responses. So the second poll is a bit related to the first one, but really looking more at the IAM side of things and the positive impacts only. So in which areas do you expect the biggest impact of AI on IAM? And is it more supporting access and role analytics and management?
Is it identity threat detection and response, which I would say is always AI-based, by the way? Or privileged identity, which is a good match for some of the ideas Morey also brought up here? Or the advanced behavioral analysis for session management, where we really look much deeper into what is happening? So what do you see as the most important? The poll is open; you can respond here.
And we, in the meantime, will directly shift, so to speak, to the Q&A and look a bit deeper into the questions you've provided here. So while we pick the first question, maybe a quick hint: we have a conference coming up on cybersecurity in December, in Frankfurt, well worth attending, I believe. So the first question I'd like to bring up here is: do you think the impact or value of Gen AI or of analytical AI is higher? So on which side are you, Morey? Can you read that one more time, Martin? I'm sorry.
So is it, where do you see the bigger value in the newer generative AI or in the traditional more analytical AI? Which do I believe is more important? Yes. I do believe generative AI. I think analytical AI will occur a lot more behind the scenes. It will not be as consumer or business oriented. It may show up in ways that we don't understand as a user. But I think generative AI has more of an impact to everyone's everyday lives.
For example, one of the large shopping companies, everyone can think of the jungle in South America, has put a limit on the number of books that anybody can publish per day, because using generative AI you can actually write novels at a very fast pace, upload them and try to sell them. Crazy impacts across the board that people can't even think about today, just based on creativity. So I would say Gen AI. Okay. Another question, which I find very... oh, maybe in a moment. And by the way, in the meantime, maybe we can display the results of the first poll.
But while this pops up, maybe one more question, one which I find very hard to answer, so I'll leave it to you.
That is: will general AI, not generative but general AI, arrive during our lifetimes or not? Wow. Okay. So for those that are not familiar, there are several concepts of AI out there. There's narrow AI, which has a very specific focus; it does an exact task. There is generative AI like ChatGPT, DALL-E, Canvas, and others that can create video content, et cetera. And then there's artificial general intelligence. This is a debatable topic, but this is more in line with Star Trek's Doomsday Machine, or I, Robot, or even HAL from 2001.
This is where multiple AI models can actually logically think together and produce output that is unique, not trained, and have deduction. Scientists argue we are anywhere from 10 to 100 years out before that could even occur. My opinion, I think with the massive investments in GPUs, we might start seeing some forms of it, despite the amount of power requirements and anything else. But I think in my lifetime, just like we said, we may or may not get back to the moon. I do believe there's a strong possibility that artificial general intelligence will make it somewhere in our life.
Maybe not mainstream, but somewhere. Well, I'm a bit more skeptical.
So I think answering this question is like trying to answer the question: when will quantum computing be at a level where we can use it to break cryptography at scale, at a real scale? Sooner or later? Or when is fusion energy available at scale? Sooner or later? So I think there are some things which are definitely very hard to predict. And the funny thing is, I've been around long enough. I would say it was sometime in the 80s, I read a lot of the AI books back in those days, where it felt like some of these things were relatively close.
While programming expert systems with Turbo Prolog, some may remember. And it took really long until we had the real, real-world impact. So it remains to be seen, I would say. Let's have a look at the poll results from the first poll. I agree with you, but if we had one breakthrough, like room-temperature superconductors, I think all of this changes. It just takes one breakthrough.
Wait, wait, wait. Wait and see. I think it's hard to say.
Anyway, first poll, pretty clear result: the majority expects that we will see new types of attacks that are just enabled by AI. That's an interesting... I agree with that. I agree with that. Yeah. Yeah. I'd like to address the last option, because even though no one chose it, I think the whole point of this webinar, and for all of the audience to understand, is that it will impact us in some way, especially as cybersecurity professionals. And we have to be prepared for even very obtuse attack vectors or new technology. We cannot stick our head in the sand on this one. Okay.
So I think we can remove the display of this poll and then we continue. So for the other questions we have here, let's look which one I'll pick.
Yeah, here's one that I think fits well. We touched on this very specifically for AI, but it also clearly has a more generic angle, which is, generally speaking: what about the identity of bots and robots? So probably in the sense of software robots. What about them? So I always think of a machine identity as an account. It has to, it has to operate as having a human owner. That classification of who owns it can be quite fuzzy. Let's just take a personal assistant that you may have in your home. You are the owner of that physical device. You bought it.
But its operation is owned by the provider, whether that is a company that does home automation, alarm systems, et cetera. So the bot that might drive that, or respond to you, or make a recommendation for a purchase, it's not yours, it's somebody else's. You can turn it off; you have the right to, or you have the right to say do not collect my data, almost anywhere in the world. I think we're getting into fuzzier lines when that becomes more of our daily life, even conversational with us: how are you feeling today? Do you need something? That future is not that far out.
I would agree with the question. There's a lot of area there that we just don't know yet.
Yeah, what I believe is that where we need to get much better is identity relationship management. I think we still treat identities as something which is pretty flat, and I think different aspects of this conversation have demonstrated we need to get better. We are still not very good at handling the relationship of humans to technical accounts, even in privileged access management. Even that is something we are not excelling at, I would say. So from an ownership, from a user's perspective, et cetera, we can do some things quite well, but we are definitely a bit far away from being perfect.
And I think the more we do this, when you look at the complex relationships of someone using an application that uses a lot of service accounts to access resources across different clouds, then we have already quite complex relationships. And when we say, okay, there's something acting on our behalf, it becomes even more complex. So I believe we still see a bit of a tendency, I dare to say, where it's basically that robot has one identity.
I think a software robot having only one identity might be right in some cases, but in most cases it's wrong; it should be much more differentiated. And I think we need to make really massive progress here to deal with the challenges we are facing. I could see that. I could see that. You had up before the privileged access management report that Paul put out. A lot of the concepts that we're talking about today, while abstract, do have a lot of relevance for privileged access management, especially your privileged accounts. How do you know the owner?
How do you govern whether what they're doing is appropriate? When is it a machine account? When is it a human account? Even though we're very AI-and-identity focused in this conversation, privileged access management will benefit tremendously from AI technologies to help determine appropriate behavior, make recommendations, enforce least privilege, many of the concepts we've spoken about today. And I think Paul's report here really showcases how valuable the Leadership Compass is in helping companies establish a privileged access program.
And Martin, just based on you and I speaking, we know you guys do a ton in the privileged access space, not only with BeyondTrust, but many other vendors. And for any listener, I would encourage you to embrace the diversity of all of the material you have to offer.
Yeah, thank you. And I think it's interesting to see that we probably need to think more about identities. I'm just currently working on a Leadership Compass; my working title is Enterprise Secrets Management for Humans, Workloads, and Machines. And we currently have a lot of discussion going on on LinkedIn about machine identity versus non-human identity. Probably there are way more different types of identities we should think about, or we should just say there's an identity of whatever.
But it is something where we learn there's significantly more complexity we need to deal with. And I think it is really very important that we make up our minds here, but also that we can clearly use the technology, and AI, to deal better with this complexity. And I think this is a huge advantage we have, because otherwise we will fail. And there are these numbers for non-human identities, so everything which is not, say, Martin's account, which are, depending on who brings it up, 40, 50, 80 times more numerous than human identities.
So we have, so to speak, identity management challenges on steroids we need to deal with. And this is really where I believe we need to make progress in AI.
Overall, I think there's AI, there are dangers in AI, there are challenges in AI. AI going wild, we touched it already. This was also one of these questions, but I'm also very positive that we will get better. We will learn, there will be things that go wrong.
Yes, as usual, when there's something new, but overall, I think the augmenting potential and the potential of doing things better is incredible. Morey, we probably still have a lot of things to discuss.
We can, and Martin, we didn't disagree, but I will not put AI in charge of any missile defense. I just won't, not happening.
Yeah, I think there are things where having one or even more humans in control in addition to it may make a lot of sense. But maybe having the AI also say, okay, sorry, humans, you're doing something wrong, could also be helpful. So both things could be the right way, but yes, we shouldn't. And I think what you're saying is basically: don't blindly trust. Correct. It needs to be supervised and continue to be supervised. Yeah. Always. That again brings us back to augmenting, in the sense that it helps us do things, but we are still the humans. Yes.
So with that, I think we touched a lot of things. It's always very enlightening to talk with you. Thank you for everyone listening.
Thank you, Morey, for being here. Thank you to BeyondTrust for sponsoring this webinar. It was a pleasure.
Martin, always a pleasure as well. Take care, everyone.