Hello, everyone. Welcome to our webinar today.
Today, we're following up on another webinar we had last month where we were talking about what are the top questions you should be thinking about for using generative AI at the enterprise level. And today, I'm also joined, as like last time, by two colleagues.
First up, Matthias Reinwarth, our IAM Practice Director, and Scott David, a fellow analyst. Gentlemen, welcome.
Matthias, would you like to introduce yourself further? Yeah, not much additionally to say.
As I said, in my real life, I'm working in the IAM area, but like everybody who's working in tech right now, I'm quickly becoming a practitioner in AI and trying to do things right. So that's the reason why I'm here. Matthias is definitely one of our internal experts on AI.
Scott, over to you. Yes, thank you. It's great to be here today. I just wanted a couple of lead off things. I know there's a lot of CIOs and CISOs and other similarly situated folks in the audience. And I wanted to invite folks to start to think of their roles as like the CEOs of information space.
You know, things are really changing a lot. And there's a lot of emphasis on information systems as really offering lifeblood for companies. And so it's not a matter of taking over the roles of CEOs, but just recognizing that a lot of the responsibilities are less responsive now and more active and really getting out in front of the issues. And then secondly, really wanted to invite in our presentation, we're going to talk about a bunch of different categories and functions of companies and organizations, governmental organizations, etc.
And I want to invite you to think about what links them together. A lot of it is the risk. Each of the parts of the organization thinks of risk in a slightly different way, HR, tax, marketing. But think of the, from your role as a CISO or CIO, how all risks arise from information differentials. A lack of information about an attacker, lack of information on a maintenance schedule leads to negligent harms. So all risks, start thinking of information differentials when you see more risks, when you see them as information differentials, who's in charge of information differentials, CIOs and CISOs.
So I wanted to start with both that CEO reference and also thinking of yourself as risk leaders in the organization's new information space. And then when we talk about now the specific categories that I think it'll offer an additional perspective on how you might get out in front of a lot of the issues that your organizations face. Thanks. And it's great to be here today.
Well, welcome. So let's get started here. So everybody's muted centrally. There's no need to mute or unmute yourself. We're going to do two polls at the beginning, and then we'll talk about the results a little later. We're going to be taking questions and trying to answer them. We can do that throughout. So there is a space in the CMINT control panel where you can enter questions. We'll keep an eye on that, and we'll try to answer those questions as we go. So definitely encourage you to ask questions or leave feedback, and we'll try to address that. And then lastly, we're recording this.
So both the recording and the slides will be available in a couple of days. So with that, I thought I would just start off again with a list of some of the more common generative AI tools that we find in the marketplace. And with a little reminder, too, that it's highly likely that your employees are using these every single day already.
So, you know, having a policy, acceptable use policies, understanding the guidelines, you know, and really trying to come up with ways that employees can use these tools while being mindful of, like Scott said, the various risks, you know. And we started off last time by going over, you know, some risks, and we will finish up with the ones that we did the last time today. But we're going to talk about five other risks that we did not get to last time. So any comment on the tools that are out there, either one of you?
Any observations about, you know, maybe what you've seen, how people are using them? I mean, I think, you know, OpenAI, ChatGPT, Google Gemini, and the various co-pilots here are probably in pretty widespread use, just what I've seen anecdotally.
Yeah, I can just agree, because I see that with all of my colleagues. And I just did an internal training with our teams to make sure, or our team, the advisory team where I'm working in, to make sure that they do it right, they hold it right, to stay with Steve Jobs, to make sure that they actually get the results that they want to have. And that is really having the right prompts and the right content provided at runtime, and not just asking one question and then wondering what the result was. So that really the basics of how to use them properly.
And as you said, there are more and more platforms coming out. Co-pilot is gaining more traction.
ChatGPT, of course, is in everybody's mind, but there are so many more. And it's just, yeah, it's just spreading. And every week gives us new tools, as I said last time. And I'll just observe that I do some work with the IEEE on agentic AI and basically the multi-step kind of AI systems. And I think you're seeing in all these models as just a maturation of the tools. And as they're used more and as the use cases improve and become more granular, you're seeing a lot more use throughout the organization, even if it's unauthorized use.
Because the same thing is bring your own device problems we had 10 years ago. Then we had bring your own platform problems. Now we have bring your own AI problems with the employees bringing them in and using them. And then we'll talk later about the employee or responsibility for those kinds of things if they're used in the course of employment. So as these things become more sophisticated and the use surface increases, well, we all know that the threat surface equals the use surface and information space. So as the use surface increases in your organization, the threat surface increases.
And all the issues we're going to be talking about today get iterated more and more. So even though the models, the names, it's like the ship Theseus, there you change out all the boards and the nails. Is it still the same ship? The name Google Gemini or Microsoft Copilot is there, but there's a lot of changes that keep happening. So just be really aware of that and the reality of employee use. Thanks.
Yeah, right. While we're speaking one week ago, there was Lama 3.1 coming out from Meta and that is the next big thing that they're all trying out, including me. And so that's, it's really just changing while we are working. So this is really awesome. It's great times to be in.
Yeah, it's very exciting. So let's take our first poll question. Since we think it's already in use in most organizations, we're curious, has your organization actually acquired some sort of enterprise license for any of these gen AI tools? So our selections are yes, not yet, or not that I'm aware of, or no, we still prohibit this by policy. So we'll give you a few seconds to chime in there. And you can continue to answer these. We'll look at the second poll question and get that one launched too. So these are the risks that we outlined last time.
And you know, there's obviously quite a few of them. And we debated whether or not we should make this a multiple choice answer or just pick the single most concerning. And we decided to go with what, which of these do you think is most concerning to your organization? And so is it the quality and accuracy, or is it data privacy concerns, other legal or compliance concerns, ethics and bias, HR, supply chain issues, intellectual property?
And you know, that can go in both directions, you know, losing control of intellectual property, or, you know, how do you credit the creation of intellectual property that may be have been derived from another source through Gen AI? There's reputation risks, security vulnerabilities that are associated with Gen AI usage, and then financial.
So yeah, just take a minute here and consider our poll questions. And like I said, we'll take a look at the results at the end, but definitely very interested in your feedback. So let's take a look at those risks. We'll drill down into them in a little bit more detail here.
First up, operational and supply chain risks. So, you know, our description is, how do you integrate these in the proper way with vetting so that you're not disrupting, you know, your employee workflows, not causing undue burden on the supply chain, and not increasing delays or inefficiencies? Any thoughts on that? We'll start with Scott.
Yeah, this is a big deal because you're talking about the interactions. And so the, you know, the CIOs, it could be chief interaction security officer and chief interaction officer, as well as chief information officer.
You know, we focus on information, which is the result of interactions. But really, it's about the regularizing of the interactions themselves are your responsibility also. So in your, you know, operational and supply chain, you can kind of look at internal and external. Let's use that as one framing for this. And you could say operational is kind of internal, let's say. And that could be a big enterprise with multiple offices and around the world, different jurisdictions, different borders, different cultures, et cetera.
And then supply chain, let's say for purposes of argument that that's with third parties trying to figure out how to get a regularity of inputs and outputs. And usually can't look more than one contract up or down with any transparency in those who have problem anyway. So in those situations, you've got AI. And as we mentioned before, just a few minutes ago, AI takes that surface, makes it even more granular, even more expanded. And you have those existing threats in those domains. And now they become iterated because the elaboration of the interaction surface is possible. So one thing to do.
So what do you do? The things get more complicated. It's reading new interactions and new insights and all that.
Well, in internal and external, there's two different things you can do specifically next Tuesday. You can go and talk to your legal folks and make sure that your supply chain contracts, you can't make the problem go away, but you can lay it off on somebody else or share it with somebody else up or down the supply chain. So you can have a provision which says, hey, no way I was used here. If AI is used in any part of this process, then you'll be liable, blah, blah, blah. All this stuff has to be developed, but you can do that with contracts so that you're sculpting the arrangement.
Is it enforceable? Is it going to work? Is it going to be the right language? Who knows? But at least you're doing something. And so that's on the external side. And then just quickly on the internal side, people are treating IEEE and others that are coming up with standards are treating AI as a tool. So it's like a procurement situation. You're buying a truck or buying an AI system. And so the standards of care are on the procurer, your company. So you're probably getting asked already to do evaluations, AI systems, figure out how it works in your security system.
And basically it's like an acquisition of a tool. So one of the things to, when AI is used by your organization, then it's deployed by you, it's deployed by you, then there's potential liability and responsibility in your supply chains that you feed.
So again, what are you doing on the marketing side in terms of, are you working with the marketing people to say, hey, is our packaging correct? If there's an actual hardware instantiation of something or a warning or notice or a alert or something, even if there's no standards out there, you guys all work with standards on the technical side. But even if there's no standards out there, if you start doing something, a practice, then you're in better condition. If somebody has a problem, you say, well, we tried. And so that's the question, how do you try?
And the last point is on the internal side, training, right? You have the internal as the employees and they're within the ambit of what you can do. And so operationally, just having people understand and trying to say, don't do it at all is going to be like bring your own device. It's not going to work.
So having some reasonable understanding of where it's being, may be used and how it's being used and what the liabilities are there can help you have those discussions internally to, again, you want to be able to say that you're trying to do the right thing, even if the right thing is unclear at this time. Thanks. Mateusz? Right. Yeah. And if I jump in, I think we have that topic. If we ignore the AI part a bit, it's cyber supply chain risk management. This is something that we've done before. And actually it's more or less the same.
So we are trying to use services that are provided by somebody else and we want to use them and we want to pay for them. And we want to have them make it as we want it to be. And there are many issues that can arise from that, that could be compatibility issues. So that the service that we're actually using is not working to the standards that we want to have it or are not working at all. It's reliability and stability today. Today it was nice, but tomorrow JGPT is down. Data security. What are they actually doing with this data?
And this is something that we do in all areas of cyber supply chain risk management. Important factor, and Scott just talked about standardization, is vendor lock-in. The bigger they get, they will want to lock us into an ecosystem because it works fine and it's the state of the art. And that is something that we need to consider when we look at the supply chain management. And in the end, it's regulatory compliance. It's making sure that they treat the data according to the standards that we should follow or need to follow. And we need to make sure that we understand that.
And how do we get that? That is the typical kind of mitigating measures. This is vetting processes. That is compatibility checks. That is really trying to make them display some documentation by trusted third parties that they are doing it right because we won't go there and check them. So it's really the complete full process of cyber supply chain risk management applied to AI plus what Scott just said.
Yeah, I think that's a really good point. And I think these are kind of closely related to general intellectual property issues that we talked about last time where, okay, so let's pretend we're a member of a massive supply chain. We're sharing our trade secret designs amongst others in the supply chain. We maybe as a primary contractor take great care to make sure that those designs don't get lost out in the world.
But if you're sharing that information with subcontractors and you don't have any legal protections in place, any policies that would prohibit them, say, taking that and dumping it into any of these AI tools, then that's a potential for an IP escape. So looking at the broad supply chain, if you are a member of a supply chain, if you're a primary in a supply chain, then I think you have an obligation to start developing policies that apply to all the members of the supply chain so that you have uniformity in how these policies are enforced. Yeah.
And if you're a secondary, you need to band together a new trade association seriously to try to force it because otherwise you're just going to be subjected to that contract. One other thing occurred to me is kind of an analogy, but it might be useful. So I work at the Applied Physics Lab, University of Washington, in addition to my Copenger-Cole engagement. And I have a guy next to me who's a flow chemistry guy. He works with the international flow chemistry folks. And flow chemistry, you have industrial scale chemistry.
And so when you have chemicals going through, you can mix them and make sure that your inputs are pretty consistent. But when you have biologics, you get a lot of variation because biology is complex and does all sorts of weird things. Why am I talking about that?
Well, as CIOs and CISOs and other people working in those kind of similar settings, you're managing flow. And what I think I'm suggesting here is that with AI-fueled flows, it's not biologic, certainly, but it's starting to resemble the complexities of biological systems. So some of those things that are encountered when you're working in industrial scale biologics, like pharmaceutical biologics, et cetera, that could start to be models for how do you do management of inputs and outputs when you're managing.
It's more like a zoo managing a zoo than it is a warehouse when you start to have the AI inputs because the variables become so wild. Something to think about. Thanks. So just a reminder, feel free to put some questions or comments in the Q&A blank, and we'll take a look at them as we go.
Next up, oh, intellectual property. I guess I was foreshadowing it there. So mismanaging AI, entering information that's covered by an agreement with, let's say, a third party, introduces additional risks to your organization if, for some reason, that intellectual property is able to escape whatever boundaries are established within the framework that the AI tools have established.
And that's one reason why, not that a contract is any kind of actual enforcement guarantee, but having say, an enterprise license or a license that dedicates a specific space for the use of your IP within the AI tool is at least a good place to start. Any thoughts on intellectual property management risks?
Matthias, we'll start with you. Yeah, I'm not a lawyer, so everything that I say, please take it with a pinch of salt. But I think we all agree that intellectual property management, I think we are talking about different aspects. And starting with a simple example, I know you, John, and I, we are part-time home musicians, and we try to make music, and we do music. And if we generate music, we consider that this music that we generate through AI, because we maybe even paid for the use of the AI, is our property, and we can use it for what we want.
But what do we do if this song that has been generated really sounds very much like yesterday? So the question is, when is the result that comes out of the AI? And that is what you meant to also this IP escape. So that would be even simpler, that would be the Beatles, that is easy. But to understand that some parts of the results that came out of that AI are actually intellectual property of somebody else, maybe my partner within the supply chain or whatever. The question is really the content that comes out of there. Who is the owner? Who has which usage rights?
Do you can acquire usage rights to this content that just returned from that AI? Can you license that? Can I license that from you? How do I realize that I have to license that? There are so many open questions that I, as a non-lawyer, I'm still struggling with, and I'm really hoping that people like Scott, who have the right education, can educate me in that. I see lots of issues when it comes to generating text content, when JetGPT is fed with actually the whole of the internet. So the question is, where does it come from? Who owns that? I'm an advisor, I'm an author.
If I use that, who's the owner? So this is really an intellectual property open question for me. So I hand that question over to Scott. I don't know whether to thank you or shake my fist at you.
No, it's fascinating. So a couple of observations here. As lawyers, what they train you to do is spot issues. And once you spot them, you go figure them out. What we're doing here is spotting issues, and Mateus and John both are spotting exactly those are the issues. And the lawyers, the problem is, when there isn't a law or isn't a precedent, they go to industry practice and look at, well, what are people doing? Because all laws and all institutions came from practice somewhere.
And so you look at practice, if there isn't a law, and we're in a lot of space where there isn't law yet, there isn't regulation, there aren't contract provisions. And so we'll look at practice. And so let's back up, let's talk about IP. So all I'm going to do is I'm going to spot some issues that I think are interesting, but not resolve them. I am a lawyer, but I haven't practiced in 12 years, but this is not legal advice, I'm obliged to say. And you should consult with your lawyers about any important legal decisions you make in copyright or otherwise.
So, okay, so IP management. First of all, the most important thing you can do, and you'll be just, people will really respect if you just do the following thing. If somebody says, well, there's IP risk. And the first thing you should always say is, are you talking about copyright, patent, trademark, certification mark, trade secret, or something else?
Now, data is not protected per se by any of those. So if people are talking about data, they're talking about IP, they're not talking about legal IP, they're talking about our property in the form of data, which is intellectual in nature. So the first thing is when someone says IP, that's what they talk about, because you have a whole different analysis in copyright and patent. And it's just different, it's a different asset. It's like saying a chair versus a truck in your inventory. Okay. So in copyright, when AI gets really interesting, it has to be a work of authorship.
Well, who's the author? That's what we were talking about before. If it's not a work of authorship, it can't be copyrighted at all. So if something's generated just by a computer, whatever that means, there's a question, is it authorship? It might not be copyrightable. And anybody who gets really upset about author rights, we need to protect author rights, you can remind them that before the statute of Ann, which I think was in the 1500s, the copyright owner was the owner of the printing press. It's a copyright. And the author got nothing.
And the statute of Ann changed it, if I recall correctly. And why am I saying that? The printing press was like the Large Hadron Collider of that time, it was extremely expensive. And you had to amortize the costs of it. So the copyright went to the printing press owner. And now there's all sorts of differences in music over radio, and my point is, it's always been economic. It's not an ethical question of the author getting, maybe it's an ethical question of whether people should get compensated for expressive works.
But what I'm saying is, don't be too wedded to the idea that the authorship is so important. Okay, back to the AI, sorry. I used to work with, two more points. I used to work with a guy, I represented Microsoft, I was Bill Gates's dad's partner for 18 years before he retired.
And a guy, Bob Gomachiewicz, is one of their first software lawyers at Microsoft. And he said, in the world of software, the contract is the product. And what he meant by that, it's intellectual property. Where does it exist? It exists in words and narratives and the description and subroutines and blah, blah, blah. But the point is, work with you. This is the bottom line with the IP questions. Don't try to do it yourself. Go to your legal folks and work with them because the fence around these things is legal. It's also business and technical, I get it.
But when you're working with standards and things, you realize they kind of work together. It's a hybrid technical, legal, business fence that you have to put around these things. It doesn't exist physically. And so work with those teams to try to understand how AI, both on the creation side and on the consumption side, is going to affect your risk profile.
Sorry, that was rambling, but it's a lot of interesting issues that we will never even get to in this call. Yeah, that was really interesting. Yeah. And if we look at the use cases that we talked about last time, so really the use of AI in a commercial environment, like cooping a cold, doing it for our purposes and providing services to third parties, I think then things might get a bit easier.
Again, the layman's view. If we are using content that is produced by coping a cold and we apply AI to that and we could generate new content from that, then we know where the origin comes from because we've trained it ourselves. So there could be legal frameworks and agreements, and maybe even a clear IP ownership policy applied because it's based on what we do. So if we make sure that the data sourcing is clear, that could be something that we could really use in that context.
So to make sure that we keep the sources clean and understand where they come from, I think that makes it much easier than on the other side, having the full internet as the source. So that would be really just, yeah, well-maintained input leads to controllable output when it comes to IP, but that's just my notion. One thing's for sure is that there's a respondeat superior is the legal term. I always like to throw out some Latin, but it's employee activities. When they're engaged in the course of employment, the employer is liable.
So if your employees are not just using it, there's a concept in common law of the frolic and the detour. If your truck driver takes a detour around, but is still on route and they crash, then the employer is liable. If the truck driver goes to a bar or goes to have lunch and they're off their employment, it's a frolic and the employer is not liable. So you have these AI being used right now by your employees in their devices that they bring to the office. And one of the questions you might have in your policies, hey, don't use any IP, talk to your lawyers, but how do you want to do that?
But make sure the employees know not to do it at all, maybe when they're on the premises, you could say, or something like that, just to, again, to try to ring fence it, even though it's not, language is not ultimately ring-fenceable. So the next one, reputational risks. AI generated content that is inappropriate, offensive, or incorrect can harm company's reputation and public trust.
I'd go one step further and say, there've been a few cases that I'm aware of where maybe some reputable media companies have incorporated directly some AI content and that was discovered and that led to a bit of reputational questioning at the very least. So what are the reputational risks and how do businesses that use AI deal with them? Scott?
Yeah, this one is really interesting because it's kind of a soft, you know, it's not legal, it's cultural and social and language. So talk about brand here, we're talking about consumer confidence, we're talking about stock market value, those all come into play. And one of the things, there are some places where it's kind of soft exercise, you know, what is the market that you're in? What do they tolerate? What are the cultures that you're deployed in? What do they tolerate? And then the use of AI, will it push the envelope on that?
And this is a place where humans in the loop on marketing material is probably a useful, a useful notion, right? You wouldn't have AI directly talking to consumers, but when you have a chatbot you do, and that's brand and reputation right there. And if somebody has a bad chatbot experience and they then talk about that, then it becomes a big brand problem for you from one interaction.
So again, that's a threat surface thing. The interaction surface is the threat surface. And that's an example of your brand could, one chat going wrong could be bad. Like when the guy got beat up on United Airlines that time when he was trying to get a seat, I mean, one incident. So that's one notion. The other thing is it could be an actual regulatory problem for you.
So if you have, we talk about inappropriate, inoffensive or incorrect, well, incorrect in the United States, the Federal Trade Commission, their main input into the internet and privacy and all those things is whether it's an unfair or deceptive trade or practice.
So if somehow you have something that's incorrect, an analysis done by AI or whatever, and it's in the stream of commerce using marketing materials in the United States, then you can have a regulatory challenge also, because if that thing isn't demonstrably provable, and you hear about that all the time, you know, people make claims, four to five doctors recommend or three people like Pepsi. And so they have to be able to have data to back that up.
Well, with AI, a black box, maybe you don't have data to back up all your assertions. So I'm going to think about in terms of that regulatory declarations that are made, and then a general, two general statements and what you can do about it. So these are, basically, we're talking about nonlinear systems.
Now, the systems are so complex that they will have interactions that you can't predict. They're just going to have nonlinear, complex systems yield nonlinear results sometimes. And that's what the interact, when you have CISOs and CIOs, you know that already, you can't take care of everything. So one of the things, back in the, I guess it was the 80s, the Bhopal disaster, where they released chemical in an urban area in India, the thing was DuPont. And what happened afterwards is a lot of big companies and industries realized they didn't have pre-made disaster plans.
After Bhopal, people started saying, you know, you really need to have your materials together, whatever you can put together, you don't know what the problem is going to be, but you want to know who to contact, what to do, you know, do you talk to marketing, when do you convene? And you already have those, many of you already are involved in those.
But now, just take a look at it from the AI nonlinearities, that these things could be additional set of risks that can arise from anywhere in your supply chain or internal operations and try to put a plan together. And at least, again, the plan shows you tried.
That says, when there's no standards of care, showing you tried is really important. Thanks.
Yeah, the question is what you can try. So, and what you should do and what the appropriate measures are. It's one of my dark hobbies to collect these AI gone wrong events.
So, my favorite one is really the one of that attorney who faced legal consequences because he was filing non-existent court cases and then was fined and dismissed from the court case and all of this. So, but there would have been a simple mitigating measures that would be check it. Is this a thing or is this just hallucinated? And the same is true for every, almost every of these negative aspects when it comes to real reputational risks. What could it be? It could be inappropriate or offensive content.
And that is something that maybe grows over time, but if there is a process or be a human that takes care of that, it won't work without humans at that point, at least to a certain degree. Then human moderators can catch that and misinformation and inaccuracies can be checked. Maybe it's a lot of work and maybe another AI checks the first AI. This is possible, but there should be checks and balances to make sure that this does not go out into the wild unchecked. I think that is one of the most important things.
Unethical use is a bit more difficult because then you need to have somebody who can apply ethical standards towards what is happening that might call for a person or people or a really well-trained AI. And so we can go on and on for privacy violations, for bias and discrimination, all difficult, all good and can be checked. And I think that is really something where we need to get better. There should not be any content produced by an AI that goes out there unchecked. Maybe that's one of the rules that we need to make sure otherwise.
And that these are just chains of machines that control each other. That should be doable.
You know, that's really, really well said. I think, you know, introducing the notion of peer review, you know, maybe you need a preponderance of independent AIs to corroborate something is not a hallucination, you know, maybe that's the next step. But until then, you're absolutely right.
I mean, I think it's essential to, and to use chat GPT words, it's crucial to check what output you get. Does it cohere with reality? Because hallucinations are real.
And, you know, there was a recent article I read that said, you know, this is a feature, not a bug. I mean, that's kind of the way we work too.
You know, you can start going down a path and convince yourself of something just the way apparently an LLM can do. I mean, when it gets right down to it, an LLM is just a text prediction tool. It doesn't have, you know, deep knowledge of the subject areas that it's, you know, giving you text about. It's just predicting, you know, the right words that should come afterward. And sometimes it's right, and sometimes it's not.
So, the idea of a peer review, you know, people, multiple AIs, I think that's definitely got to be something that happens in the future. I mean, it has to happen right now with people using it. It's got to pass the sniff test, you know, does this make sense? But then even go further and follow up, you know, it has to be verified, I think, before it can be used in any of the output.
Oh, wait, we do have a question that's kind of related to this. As a marketer, do I have to worry about public perception of using Gen AI when everyone else is using Gen AI? I think that's an interesting way of looking at it.
You know, from the beginning, I thought, you know, creating marketing material is probably a great job for LLMs. And I'm convinced that that's what we see a lot of these days. I guess I would say, I would not worry so much about it if I was a marketer. Any thoughts from you all on the use of Gen AI in marketing? Maybe if I start, it's really what you get.
So, the perceived quality, the perceived originality, if I understand that, hey, that reads like all the stuff that I've read from JetGPT before, and I realize, I even can detect it even in the wording. There are some specific words that you can say, okay, this is generated because these words are not that heavily used. And finally, they come out because some machine is using that.
So, really, the quality, the originality, and then in the end, my trust in that service, in that marketer, and that is what really counts. Of course, we need to make sure as a marketer that we don't violate everything that we said before.
So, ethical considerations, bias, everything, that should be fine anyway. But when it comes to having messages sent to me, ideally targeted ones, they need to be really made in a way that they actually make me feel recognized, understood, and then this is something where marketers should not care too much if their creation process and their input material, that's training data in the end, is not good enough to target me appropriately, then I should worry. That would be my stance.
Yeah, just to expand on that, totally agree. And when everyone's doing something, this is like when you do something as a kid and you go home and you do something wrong and you say to your mom, well, everyone was doing it.
And so, the question is, well, is that where you want to position you want to be in? And that's what I was talking about with standards of care. In standard setting and for technical specifications, everyone says, okay, we're all going to make a USB plug this size. And then everyone does that and then it's a standard of care and it's recognized. And if you do it, you're good.
But here, we don't have the standards of care. So, I think in answer to the question, what I was reading there is if everyone's doing it, do I have to say anything about it? The answer, I think, is if you're depending on it and it's a customer-facing thing or reputation-facing thing, maybe being like in a privacy policy, you'd be explicit about your use of data.
You say, we do it for this, we don't use it for this, we won't do it for that. And the main thing the Federal Trade Commission looks at, and it's a pretty good test generally for reputation, is say what you're going to do and then do what you said you were going to do. That's usually a good thing because then you say, well, I told them, and you say, and I did it.
So, on the LLM side, yes, a lot of people are using it, but are customers and other parties you're dealing with aware of the ways you're doing it? Because it's like saying, well, I'm using electricity and these guys are using electricity over there, but I used it and I electrocuted somebody's cow accidentally.
Well, that's not acceptable. So, the use of just merely using it may not be enough of a, we call it zebra stripes. You're in a standard of care, the zebras all have stripes because they can't pick off an individual. And it's the same thing in trade associations or standards. You want to develop zebra stripes saying, no, no, no, I'm just in the group. But the question is, are there sufficient zebra stripes?
Is there a sufficient standard of care just by the use of it generally that any specific harm, reputational harm, or liability type harm, or just a customer concern that arises, will you have sufficient cover? If you're more explicit with your customers about it, then maybe even get ahead of it in terms of brand. And then people say, well, that's it. I trust them.
Look, they told me what they were doing and they followed through on that. So, you can kind of, if it's appropriate in your industry, separate yourself from the herd saying, yeah, we're using it too. And we're taking these extra precautions. We're using IEEE this, we're using this standard of there.
So, it just shows that customer trust then might be something that can help again, depending on your domain in other lines that are important to your business. Thanks.
Well, the next one here is security vulnerabilities. Gen AI tools do have vulnerabilities that can be exploited by cyber criminals leading to unauthorized access. And there are a few examples of these that you may have heard of in the recent past, data poisoning, which is tampering with the data that's used to train tampering with the data that's used to train AI models, that can introduce biases and other vulnerabilities and just generally affect the model's behavior.
There's model evasion, where the attackers modify inputs to AI models in ways that cause them to misclassify or misinterpret data. There's model extraction, and this is kind of tied to the intellectual property risk debate we were having earlier about crafting prompts that sort of reveal the underlying intellectual property that was used to generate the response. There are inversion attacks, attacks that are aimed at reconstructing training data by exploiting the outputs, supply chain attacks, we covered that, and then prompt injection attacks that have made the news over the last year or so.
This is where they manipulate the input to bypass filters and try to generate harmful or inappropriate content. And I think the companies that are in the Gen AI business have been working fairly hard on that particular avenue of attack.
Matthias, what are your thoughts on security vulnerabilities that have been discovered heretofore in Gen AI LLMs? Okay, actually, this is a huge topic, and the question is where to start, actually, because we are talking about a very complex set of very complex systems with a huge attack surface, as Scott pointed out earlier. One speciality of these systems is that we have this thing called the prompt, which can be used to get data into it, transfer content context into the system, but also extract data.
And the way how we formulate these prompts can be used as we expect them to be, or they can be even identified as hostile, as dangerous. So the prompt, as you said, prompt engineering or really the attack via a prompt is the usual one. The question is what is behind that prompt and what is the actual operation that is the security attack? So this is really a huge thing, and we will see a lot of attacks and methods to mitigate these attacks, just like we do in all things cybersecurity.
A few months ago, I came across a document that is called the AI attack surface map by Daniel Meisler, that is well documented, and had a very large and a very well-defined structure where things can happen from agent to the system, to the prompt, to the knowledge base behind that. That was very interesting to read, and while I was preparing for today, because I knew that question would come up, I did a bit of research and I sent that over to John earlier today.
There is a website that is called llmsecurity.net, and if you want to have nightmares using LLMs, just read the first 10, 15 entries on that page, and what you need to do to make sure that this does not happen, and it's not only saying this is the attack and this is how to mitigate that, but this builds upon each other. So this is really a pyramid of attacks. So protecting these systems, making sure that all the attacks that John just mentioned cannot happen, that will be the same cat and mouse game going on for, which is going on in cybersecurity, just in LLMs.
So these are highly attacked, highly endangered systems, and I think we will have lots to do, and if you are a cybersecurity analyst, maybe that is an area you want to make a living from, because that will be not solvable by AIs. Scott? Scott?
Yeah, it is. Excuse me, it is nightmarish. A couple of observations about what you can do, because none of us is responsible for solving the world's problems, and this is a world problem. So what we are responsible for is, in our organization, trying to mitigate the risk as much as possible, and ethically trying to not lay it off on your neighbors more than is, in the commercial context, is competition.
I get it, but so what do you do? So why did I frame it that way?
Well, a lot of this has to do with shared behavior, the equivalent of neighborhood watch. So you have different patterns of things.
Again, in non-linear systems, you are going to have – you are going to have non-linear – excuse me, in complex systems, you have non-linear behaviors, but what are the patterns? You know, Boeing has this service that they sell with their airplanes, where anyone who buys or leases an airplane can report in, and it is like a public health registry for airplanes. So if a rivet pops out, you report it, and then they have a database, and it shows, oh, if this rivet pops out, then you should ground just immediately, or if this rivet pops out, you are okay for 10 months.
But you do not have – they cannot generate that information alone. And so one of the questions is, can you join, like this call, is the sharing of information among peer-to-peer. So how do you deal with these kind of things, these things that are so global, so exponentially increasing? And one of the things is to be aware, for years, the jobs that we had were data security. We called it privacy, but GDPR is called privacy, but it is about data security.
You limit data, you do not secondarily use data, and that is fine, because that came from 1970s, from fair information practice principles, and before the Internet, the security of data was equivalent to maintaining privacy and proprietary advantage. Now, with the Internet, that is no longer true. So why am I mentioning that in an AI context? Most of our institutions and our rules are meant to be data security rules, and with AI, what we have is a problem beyond data security. We have symbolic, semiotic questions. We have language, semantic questions. These are not data questions.
They are content and context questions. They are language questions. And so you have, you know about it, all these attacks, phishing attack. It is about fooling somebody. That is not a data question. That is a question of being, fooling someone, mimicking that you are someone else, and we have mimicry throughout biology. So why am I talking about all this crazy stuff? Because one of the big problems with AI is the hallucination is a two-way hallucination, and you need to be aware of this in your organizations to help people understand what they are doing, and let me explain.
So humans, the humans that saw the lions in the grass, even if there was no lion, survived better. They over modeled the lion. So that is a hallucination. If I hallucinate a lion more often, and there is not a lion many times, okay, false alarm, but I am going to survive better than someone who does not do that. That is called pareidolia, p-a-r-e-i-d-o-l-i-a. It is human. We project out patterns, okay? Why am I talking about that? Because LLM output is computational. It has no content. We project patterns into the LLM content just like the lions.
We have been doing it for millions of years for survival, and so right now if I say pass a medical exam and do it in the style of Ernest Hemingway, that is a pattern. It does a computational thing. It does not know Hemingway. It does not know medical exam. It produces, it meaning large language models, AI systems, computationally extract that from language, plunk it down in front of us. We stare at it and go, oh my God, look at this. It can do medical exams in the style of Ernest Hemingway. That is us seeing lions.
So when your organization, when people start overfitting to the output, that is, I do not know anybody else's job. It is the chief information or interaction officers and chief information security officer's job to help people understand. You are hallucinating. You do not see, that is a mirage. It is not an oasis.
And so, and no dig on the standard, oasis standard setting organization there. They are terrific.
Anyway, but that is, it sounds crazy, but just, you are already tracking data security and making that happen, and you have already been moving into, for many years now, into semantic, you know, phishing and other kind of ways that we fool humans. Well, now we need to be aware of this, we project out meaning into these computational systems. How do we get our people and our organizations and in society generally to be able to step back and say, whoa, whoa, whoa, I am watching a simulcra. It is a simulation.
And I recognize that it may be useful simulation, but we have to understand that it is not human speech. It is not authorship for copyright purposes. It just resembles that. So whenever you hear somebody talk about AI hallucination, remind them that it is a mutual hallucination that is going on. It is productive, it can be productive if we are aware of it. Thanks. And it has to do with training.
I mean, it is just training, training question ultimately. Yep. That is a really, really good observations there, Scott. One thing I would like to add about security vulnerabilities before we move to the last category is that MITRE.org, you know, in addition to MITRE ATT&CK, which is sort of a database of all the different tactics, techniques and procedures that can be used to launch cyber attacks has a relatively new initiative called Atlas, where you can see all the AI specific attacks that can be launched.
So if you want to dive a little deeper into this, the site that Matias mentioned, LLM Security, and then MITRE Atlas, you might want to check those out. So our last consideration here today is financial risk. And I think saving it for last is probably good because that is ultimately what everyone is concerned about in the business world. How do we mitigate the risks that come from using AI? What are the unforeseen costs around implementing and maintaining it?
And, you know, some of those could potentially be fines or litigation costs from IP infringements or, you know, when data gets breached. So, yeah, Matias, what are your thoughts on the financial risk situation?
Yeah, I want to leave those that you've mentioned up against for Scott. And I want to talk about the foreseeable risks and the foreseeable costs. The question is, when you're building a solution, we're talking about the use of LLM, of GPTs, of AI in a commercial context, providing services to our customers, to partners, to the supply chain, whoever.
It could be just the high implementation and maintenance costs, underestimated, not a well-executed cost-benefit analysis, and that leading to implementing AI tools that are expensive, that you need to purchase software, hardware, hire people that do it for you. And then you end up with a huge amount of money to spend without the proper outcome.
And I think that is the first financial risk that we should have a look at, really understanding what makes sense for my company, where does it provide benefit, where do I want to spend which amount of money to make sure that this is really, yeah, a win for the organization, that I really create something that I want to create and provide a service that adds to my brand, that adds to my services, that serves my customers or whoever my peer group will be. I think that is the first risk that people should really look at. And nobody needs yet another chatbot if there is not a good reason for that.
And that is the question what really needs to, we really need to make sure that we understand what it really means. Gathering training data, gathering the right training data, correct, clean training data, and then providing a system, having a nice user interface, integrating that into our, all our other systems, that really will cost a lot of money. And that is a financial risk if it does not work. And I don't want to be the AI project lead that does not deliver in the end and has a record of lots of millions spent and no value gathered from that.
I think that is a financial risk that we should cover first before we think of fines and litigations and copyright infringements and I don't know, any other problems that we can come up with. I think if we look at that as an analyst from a business perspective, that should be the starting point. Absolutely.
I mean, it's expensive and I don't think people are aware of how high those costs can run just for these upfront foreseen things. That was a great observation. Scott?
Yeah, I think that is absolutely what Matthias and you were alluding to is the absolutely core thing. That's what businesses are.
Even, you know, obviously I'm in the United States where there's even less regulatory and governmental kind of overlay on business, but even in Europe, the businesses, the reason we have that kind of entity is to achieve a financial efficiency in something. And so that is the, that's the baseline. And so that make, what makes it difficult obviously is the unknowns.
Then, you know, when you have knowns that you can kind of look at the past to predict the future. But when you have unknowns in the future, you don't have anything to look to. And so you can't assess what kind of resources you should put into it. And we have that problem across society and agriculture, public health, things, externalities that we can't predict. And here there's a lot of externalities. And that's when we get into the neighborhood watch idea again is don't try to do it alone. That's the key consideration is there are certain things you cannot de-risk alone, forget it, no way.
So any, any situation where there's, you can de-risk in ways you can't do alone, like insurance, trade associations, information sharing, co-ops, things like that, where you are de-risking, you get together because you can de-risk or leverage in ways you can't do alone. And that's why those entities exist. And so here, one of the things to think about is you already have the resources, the accountants right now at your organizations are keeping their books with GAP, Generally Accepted Accounting Principles, or something similar, GAS or one of those other similar things.
Why am I mentioning that? Well, the accountants, the company organization spends money on things that they can't do by themselves. If you have an employee, you can do a thing. If you internally, but if you have to spend money on the accountants for keeping track of that, and so they are in the best position to tell you what your exposures are in general, then you can go in as a CISO or CIO and do an analysis of the AI implications in that area. So maybe talk to your accountants next Tuesday or Wednesday, whatever, and say, hey, what are exposures in this?
Oh, let's be especially attentive of AI implementations in those areas because those reveal risk. And if I have one more minute, I'll just, there's one other thing I wanted to alert people to.
Oh, maybe I don't. Well, anyway, look up the question of functional identity and functional privacy. We just wrote a paper on that, and it's a way of more specifically identifying the systems that are relevant with information systems and then also the accounting for them. So that may be helpful to folks. Thanks.
Well, yeah, we're up against the top of the hour. I wanted to, I'll just quickly give you the survey results. 78% said that they did have enterprise licenses for Gen AI tools, and legal and ethics combined were somewhere in the neighborhood of 70% of what people were most concerned about. So thanks everybody for joining us today. If you've got any questions, reach out to us. If you'd like to see more of this kind of content, drop us a note. Let us know. We'd be happy to do this. And thanks for those who have participated. And thanks to my fellow panelists, Matias and Scott. Thank you.
Thanks everybody. Take care now. Have a good rest of your day, everyone.