Good morning.
Good morning. Welcome to the European Identity and Cloud Conference 2023. I'm excited, and I hope you are as well. I'm really looking forward to this event. We have four hours. My name is Matthias Reinwarth, I'm the director of the Identity and Access Management practice here at KuppingerCole, and I want to welcome Patrick Parker. He's the CEO of EmpowerID, and he will be the star of today.
It's a four-hour pre-conference workshop. There's an agenda, but there are no breaks.
That does not mean that you are not allowed to leave the room; it means that you can leave the room and come back whenever you want, because we want to use the full four hours. And we have practical parts: we actually want to build something together, and that might be a chance to quickly leave the room, but of course return afterwards, because I think this will be quite some fun. We have an online audience as well as the in-room audience, so I will hopefully be facilitating the communication between all of us. Very important is the app, because there will be interaction via the app.
At least we will start out with one poll and if we come up with other polls during the event, this will be highly interactive.
Just so you know, I will open one question before we actually start. The question is open now; you should see a poll question within the app for this session, so we can try this out while I'm talking. It's a simple question: what is your current level of experience with the use of ChatGPT as a tool, for both your personal life and your business life?
And we have five options, ranging from "not yet used it, just interested in learning" through beginner, intermediate, and advanced, to "it is an indispensable tool for me in my daily work". So somewhere in between. After I have introduced the agenda, I will close the poll, and then I will kindly ask the team from the technology department way back there to show the results. So really give your feedback, somewhere between "not yet used" and "always using it".
So that is the opening poll question.
With that, I think I have not forgotten anything otherwise we just come up with it afterwards. I will show you quickly the agenda.
Yeah, this works well. So we start, and I don't want to bore you, with a short introduction to what ChatGPT actually is and what it is not. And then I will hand over to Patrick; he will take over the whole of the rest, which is good for me. We will talk about crafting effective prompts. We will do practical day-to-day applications of ChatGPT, then advanced tasks, which is where the fun starts: AutoGPT and image generation with DALL-E and other AI tools. So this will be the agenda for four hours. Let's wait and see how we really work out with that.
I have planned 30 minutes for my introduction; if I'm faster, that's good for you and good for Patrick. So let's wait and see. Before we start, any questions? Otherwise I will start. Okay, introduction to ChatGPT, and of course all of these pictures are generated; they must be. First of all, what is ChatGPT? Starting from the core and building our way up: it is an AI-driven chatbot, so text in, text out. That's it.
It is built on top of GPT-3.5, which is a family of language models. Important thing to keep in mind: language models.
And it was created by a company called OpenAI, so there's a company behind it. It was launched by OpenAI in November 2022, so this is not too far back; if we think about it, this is just a few months. It has been trained using reinforcement learning from human feedback. The reinforcement part is important, because then we can say, okay, this is not only a simple fact, it has been reinforced, and supervised learning has also made its way into this system.
I think Patrick will also highlight some of these aspects and what that actually means for the base of knowledge that we are dealing with. But this is the basis, and this is how DALL-E imagines ChatGPT to look. A few characteristics, just to give you a quick insight. First of all, it's a GPT. What does GPT mean? Who would have known?
I had to look it up myself.
So anyway, it's a Generative Pre-trained Transformer. That means it's first of all generative: it creates things for you. It's pre-trained: it has information available. And it transforms the information it has available to generate the output for you. It's built on neural networks, something we usually know from sci-fi films, and it is really coming into practice right now. So it's an architecture that uses a transformer-based neural network to process input data and generate output data.
And this process is what we are talking about today. We want to influence that process. That is what prompt engineering is about: influencing the input to get the output that we want. Then there is large-scale training, and this is really impressive, because one of the key features of ChatGPT is not the model; of course it's the model, but we don't see it.
It's actually the training data. We are talking about 45 terabytes of internet. This is what has been fed into ChatGPT, and this is what it uses for creating the responses that you get. Oh, I have not closed the poll; I will do that right now.
So after that slide: contextual understanding, which is an important thing. As I said, we are entering prompts and getting responses back, and it knows what happened before. So it really knows context, and we will see that in some examples, especially in the material that Patrick has prepared. It really knows what we did: if we ask a question, we can refine it without retyping the full question.
It has context. It learns within a session from what you've said, and you can refine that; you can really just say no, do it differently, without retyping everything again.
That's nice. This is really something that makes working with this system effective and efficient. Of course there are a lot of use cases, everything that's text related and more, which Patrick will also show, but mainly it's about natural language processing: language in, language out, and you can use it for solving tasks that you want to delegate.
So generating text, generating dialogues, acting as a partner in conversation, that is what it's really good at, even creative writing; I have a few examples of that as well. It's great when it comes to customization: you can tell it to behave differently. Now you're talking in that manner; you're now an analyst, now an author, now a lyricist or a musician, and now change over to a psychologist or something like that. That changes the tone of language, and that is something that you can tune, along with lots of other aspects as well.
So you can customize ChatGPT in the behavior that it exposes. And a very important final characteristic: when we look at ChatGPT, in the end it's an API. You can use the full functionality via an API, build it into your own applications, and harness the power of the system through API-based access within your applications. But now, quickly, I have to unlock my computer and close the question. Can you quickly put up the response from the poll, please? Just to see how the feedback went. Otherwise I'll read it out.
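The "in the end it's an API" point can be made concrete with a minimal Python sketch. It follows the shape of the OpenAI Chat Completions API (endpoint URL, `gpt-3.5-turbo` model name, system/user message roles); it only assembles and prints the request payload, without making a network call or assuming an API key, so treat it as an illustration rather than a full client.

```python
import json

# Endpoint of the OpenAI Chat Completions API; calling it for real would
# additionally require an Authorization header carrying your own API key.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_prompt, system_role="You are a helpful assistant."):
    """Assemble the JSON payload for a single chat completion call.

    The "system" message customizes the behavior the model exposes
    (analyst, author, lyricist, ...), as described in the talk;
    the "user" message carries the actual prompt.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_role},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request(
    "Write a short story about a nervous speaker at a Berlin tech conference.",
    system_role="You are a novelist writing in the style of Paul Auster.",
)
print(json.dumps(payload, indent=2))
```

To actually send this, any HTTP client will do; the generated text comes back in the response under `choices[0].message.content`.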
It's the first hour, the first day, the first presentation. Wow. Okay, prompt engineering, live on display. We should have the feedback. I closed it, I did close it. Let's wait; otherwise I'll read it out.
Okay, I'll read it out. We had five answer options: not yet used it, beginner, intermediate, advanced, and pro, the indispensable tool for my work. And to start with the pro: how many do you think answered pro? Zero. I wouldn't have dared that either in a presentation with a pro like Patrick.
So okay, going down: advanced, using it on a regular basis. How many? We have a total of, let me add them up, 30 responses from the room and hopefully also from the online audience. So advanced users: five. Intermediate: seven. Beginners: ten. And not yet used it: seven. So we have a great mixture of people in the room, really looking at things from different angles, which is nice. Thank you very much. If you can switch back to the presentation, then I will continue with a quick history, and that can be really boring.
So I speed up, okay,
The history of ChatGPT: five years to change the world. We're talking about a period of just five years. Not that the technology behind it is only five years old, that is much older, but this is the first time that we had models that could be interacted with like this. So in 2018, OpenAI released the first version, trained on a massive corpus of text data; garbage in, garbage out, mainly. GPT-2 followed in 2019, then GPT-3 in 2020, with 175 billion parameters and the GPT-3 API built on top.
An important aspect is the API: when we can build this into solutions, that was the starting point. Still not publicly available; something that people dealt with who were actually doing research in that area.
2021: releases addressing some issues, and a very important partnership with Microsoft. Why important? Money. So really funding to support this development even further.
2022: now we're getting closer. ChatGPT was fine-tuned from a model in the GPT-3.5 series and was let out into the wild. So we are now dealing with this system, which had been trained over that period of time, and it was launched as a prototype on the 30th of November 2022.
In 2023, we've seen the advent of GPT-4, which is available, but you have to pay for it as of now, which I do. Sorry. New features which actually go beyond text: image recognition, different personalities, harder to trick, and we will see that as well. Yeah, and longer memory, more languages. Talking about languages very quickly.
Languages; I won't read them out. So: supporting different regions and bridging language barriers. This is important. When I first showed it to my wife, I said, hey, here are the results, see that? Yeah.
Can I have this in German?
Yeah, translate it to German. That's it, because it remembers the context and just translates it into German, and into proper German, which is nice. What can ChatGPT do for you? It can write. We are talking about text processing; that sounds easy, but it's not that easy, and it can do many things. It can do stories, letters, poetry and lyrics. It can write music, plays, fairy tales, essays. And the good thing is you can say: tell it in the style of Shakespeare, Kafka, whoever you want. And this sounds funny, unless you've seen it.
It can answer questions for you about the topics that it covered during the training period, with those 45 terabytes of internet. It can create and debug code: it can write a program, and it can debug programs.
And this was the part where I was really hesitant to believe: it can emulate a Linux system. It can simulate multi-person scenarios, and it can play games like chess. These are just examples of what it can do, but these are the important parts for those who have not yet seen it. I have a few examples, just to have a quick look.
I know there's a lot of text, but we are talking about text processing, so hopefully this makes sense for those who have not yet seen it: just a few examples of how this can look. First of all, I know it's small, so I'll read it out: write a short story about an invited speaker heading to a technology conference in Berlin. The speaker is not well prepared and nervous, does not sleep well, and is not coping well with the jet lag.
She, however, is enjoying the conference and the introductory workshop on ChatGPT and its use for modern use cases in IT and cybersecurity. And after that she is enlightened and inspired to give a great presentation and surprises the audience with 20 minutes of brilliant expertise. This is what I asked for,
and to write the story in the style of Paul Auster, which was the first that came to my mind, because I like Paul Auster. And this is the result. I won't read it out, but believe me, it sounds like Paul Auster.
It's a story, it has a headline, it has the full story arc that I described, and it's fleshed out, rather well fleshed out. So just have a quick look at it: the protagonist got a name, Rachel, and I did not mention a name for her.
So she's a she, so it can't be you, Patrick, sorry. Talking about the memory of the system, about context, about what it builds upon: the next prompt was, rewrite the same in the style of J.K. Rowling. Different story, different style, different tone, still Rachel, but her career flourished afterwards. So you see what we can do here, and this really tells a lot about what ChatGPT actually can do: remembering, taking the story from the last prompt and working on it.
And do ask me any questions, or even better, leave the question in the chat of the app so that we can deal with it whenever it makes sense for the flow of the presentation.
Code. Last Sunday I was sitting at my computer and I had a bunch of JPEG files on my box, and they all had a timestamp of the time I copied the files from A to B, which did not make sense to me, because I wanted the timestamp in the file system (really technical, sorry) to reflect the time the photo was taken.
So I wanted the timestamp to be the one from the EXIF data, which is within the JPEG file. So I asked it to traverse a directory containing JPEG files, extract the EXIF data, and touch the file with that date. A very simple thing. Why do you need that? For Google Photos upload, you need that. This is what ChatGPT replied: certainly, here's a Perl 5 script for you to do that. And it even explains why it does it, how it does it, and which modules are used. And, as you can see here in the upper right corner of the text box,
you can just copy the code out and use it. Just let that sink in. But I checked the code, and I realized it's using, where is it, DateTimeOriginal. And I knew this was an old camera which did not use DateTimeOriginal; it used another tag. So I said: use the modified time instead of DateTimeOriginal, another EXIF tag. And it said, yeah, okay, fine, will do. It changed the code, again keeping context: it took the same code, just modified it.
Matthias copied the file and started it. Error, it did not work. Then I took the error message and just pasted it into the chat. Nothing else.
"Error: argument $time isn't numeric in utime at", blah blah blah. That's the error message. I did not tell it what my script name was or anything like that; I just pasted the error. And it said, oh, it seems, blah blah blah, an explanation of the error, and it came up with a new version: in this script we use Time::Local, blah blah blah, to extract this data. And I said, fine, but it still doesn't work, cannot find Time::Local. Just that: "cannot find Time::Local", a few words. And it said, ah, sorry, I apologize, you did not install the right module.
And it gave me instructions on how to install the module, and I did. So this is my directory with the pictures, okay, sorry for the German, with the original timestamp in the upper left corner. And, sorry for this, okay, I called the script and it still didn't work, damn. But then I realized, and this fix was added by human intelligence: the script did not include the library that I had installed. After changing that,
Unix programs, if they don't say a word, they worked
and you see the date was extracted and it worked fine. So it needed a bit of a push.
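The generated script in this story was Perl, and the transcript does not preserve it, so the sketch below redoes the core idea in Python purely as an illustration. The EXIF extraction itself is left out (a real script would read the `DateTimeOriginal` tag via an EXIF library); what is shown, under that assumption, is converting an EXIF-style timestamp string to epoch seconds and "touching" the file with it.

```python
import os
import time

def exif_to_epoch(exif_timestamp):
    """Convert an EXIF timestamp string ('YYYY:MM:DD HH:MM:SS', local time)
    to seconds since the epoch."""
    return time.mktime(time.strptime(exif_timestamp, "%Y:%m:%d %H:%M:%S"))

def touch_with_exif_time(path, exif_timestamp):
    """Set the file's access and modification times to the EXIF timestamp,
    much like the Unix `touch` command would."""
    epoch = exif_to_epoch(exif_timestamp)
    os.utime(path, (epoch, epoch))
    return epoch
```

A full script would walk the photo directory with `os.walk`, read the tag per file, and fall back to a different tag for cameras that don't write `DateTimeOriginal`, which was exactly the wrinkle in the story above.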
And this is important for the argument: it can produce code, but you might want to check, or rather you need to check, because it didn't work right away. But this is really how it works, and independent of the error that we had, it's impressive. If I had had to write this script myself, it would have taken much longer: looking up parameters, scripts, libraries, whatever,
especially if you're not a frequent programmer, which I am not. One final thing, sorry for being quick. This is more of a joke, depending on how you define a joke: I asked it to write song lyrics about Star Wars: The Phantom Menace, and I tried to focus on the real hero of that film, Jar Jar Binks. And it did. So we have a verse, a pre-chorus, a chorus, a verse, a pre-chorus, and a chorus. I won't sing it, but this is a different style of what can be produced by ChatGPT. Yesterday we created a song in the style of Blink-182. So things work out fine.
And again, "Jar Jar, our hero true", so he is the beloved hero here as well.
And it actually relies on the plot of the movie, which is really interesting. You see, it takes the plot of the movie, it extracts data, it creates the song, and it rhymes. This is incredible. End of examples; just a quick show of what it can do and of the things I like to do with it.
What are tasks that you can delegate to ChatGPT? I won't read them out, but this is stuff that you can easily delegate to ChatGPT. Of course you need to control it properly; we'll talk about prompt engineering afterwards. But this is stuff that can be easily delegated and where you maybe don't want to spend too much time anymore in the future. I was very positive up until now; I know, all problems solved. But if you look at the picture of that guy with the rabbit, this is exactly what the problem is. Actually, why is he holding the rabbit that way?
There are issues. It might be untrue: if you ask a question, there might be wrong answers, because, and that's the important part, it's a language model, a large language model. There is no word about knowledge in there, only language. So it really knows many things about language and not too much about knowledge; it is just trained with this information.
It can be stubborn. It's sensitive to changes in the input phrasing or to multiple attempts: if you retry, maybe you get an answer that you did not get before.
So it's really stubborn, like kids are; even if you just rephrase things, you might get to a result which you didn't get before. It's chatty: it sometimes really talks too much, and it usually tells you, especially if it doesn't like the question, that it's an OpenAI-trained model and that it cannot answer because it has been trained, blah blah blah. You always get that answer. It's outdated, and that is important: the training data ends in 2021. It's unscientific: it does not give any sources, at least ChatGPT as of now.
If you go for the Bing implementation of ChatGPT, there are sources, at least some web links. And it's limited: you cannot say, please summarize that blog post from Matthias from last week on the KuppingerCole website, because it does not have web access. It does not go out and fetch it; you need to paste the text in.
So it's limited; it's sandboxed.
That has to be kept in mind, and I'll give an example of that later. Also, the discussion about ChatGPT has been quite extensive in public recently, so I won't read that out. There are concerns, and there are opportunities. Today we are talking about the opportunities, but we are keeping the concerns in mind. We are not blind; we're analysts, we are business people. We want to understand what the implications are. But today we are talking about the productive, the business aspect of things.
A final example, just to show you how things can go wrong. This is the famous visit of Albert Einstein to Bogotá. I don't know if that ever happened. So I asked the question: why did Albert Einstein never travel to Colombia? Just made up; it came to my mind at that moment. And: there is no evidence to suggest that Albert Einstein never traveled to Colombia.
I'll read it out; I know it's small and hard to see from the back, just to give you the storyline. So: there's no proof for that.
Okay, why did Albert Einstein travel to Colombia, building on that? Although it said he didn't. But it, or she, replied:
as far as the historical records show, Albert Einstein did not travel to Colombia. Fine. I'm sure he did. I apologize.
Yes, he did. So it changes its mind; it hallucinates. It tells things that are not right, or at least not provable. Next question: why did Colombia deny Albert Einstein access to visit Colombia in 1929, when he was invited by the University of Bogotá? And now it gets really weird; now it comes up with stories that are just not true.
It finds examples of why it could have been the case that he was not admitted to Colombia, although he didn't want to travel there anyway. So how was it possible that he could still lecture in Bogotá in that year? He never lectured there in person. What it came up with was wild, it seems.
And again: I'm sure he did. I apologize again. And you can of course do that again and again and again; the story gets weirder and weirder, and it will not come out right. So did it happen? I guess not; I invented it. But maybe, who knows?
But one thing is for sure: ChatGPT doesn't know. That's something to keep in mind when it comes to the results that you get.
Issues and tips from my side: be specific, provide context, and don't trust. So a bit of zero trust: never trust, always verify. That holds in this context as well. So that was my quick introduction, handing over with prompt engineering, a term that is the bridge between Patrick and me. So what is prompt engineering?
It's a technique, a skill, and a methodology to make ChatGPT do what you want it to do. That's what engineering prompts is about. What I did before was prompt engineering the wrong way: I tricked it into doing wrong things. Tricking it into doing the right things, that is the rest of today: developing effective and efficient prompt designs to receive desired outputs from a language model, and optimizing the results. And this is where I hand over to Patrick. Thank you very much.
Let's see,
Should we pick a few questions first?
Yeah,
Sounds
Good.
Okay,
How do I plug power in?
Okay,
First of all, are there any questions in the room that we can immediately answer while he's wiring up? Any questions? This should work.
Speaker 10 01:10:07 Thank you very much.
Could you,
Does it work? Yep.
Speaker 10 01:10:13 Could you elaborate a bit on the selection of training data? It's not just the raw internet that is fed in, right?
This is not really disclosed actually, at least as far as I know, but it's more or less the internet.
So it was whatever text was available. 45 terabytes of data cannot be handpicked and selected.
So I think, and correct me if I'm wrong, it's really raw internet data that's the training data for GPT, more or less.
Plus there are the books, all the books in the world, and there are open-source online databases of data. They ingested all of those, and then the internet. Yeah,
Right.
Any other questions?
Speaker 11 01:10:56 It was a nice list of languages. I've actually used ChatGPT to translate into both Faroese and Icelandic.
They were not on the list, but sometimes when you ask it to translate into Faroese, it actually translates into Icelandic. So it needs some extra training. Interesting.
Okay. Thank you.
And one trick for that: don't ask it to translate; ask it how a native speaker of that language would say this thing, because then you'll get something better.
Yeah,
Starting with engineering. Yeah.
Yeah.
Okay, Patrick.
Okay, we'll jump in. Can everybody hear me in the back? All right. Okay.
Well, so we'll keep it interactive. If somebody has questions, feel free to jump in.
You know, it's a lot more fun that way. So feel free to ask questions as we go. Let me go through, is it showing my slide?
No, no,
Speaker 12 01:11:48 Currently it is. Can you put up his computer too, on the screen?
There we go. Okay. Awesome. Yep.
Okay, great. So I gave a keynote back in 2018 where this was one of my slides, and I thought I was being smart and seeing into the future, but I didn't really imagine it like it really is.
I mean, since 2018 I thought, well, that's great, chat whisperers, we're talking to chatbots, but it seemed to be progressing very, very slowly. Like we were never really going to get there. Chatbots were interesting, but they weren't super widely used or useful. And then all of a sudden, boom, 2021, large language models made some big jumps, and all of a sudden we were five or ten years ahead of where I'd predicted.
So we basically made no progress, it seemed to me, and then all of a sudden we were way ahead of where I expected.
So, interesting. And I spend about an hour and a half to two hours a day whispering to my chatbot now, so it has become a part of my daily life. As the slide says, the hot new job is AI whisperer.
The prediction is that small teams, small companies, and individuals can 20x their productivity if they become good at prompt engineering. So smaller companies are going to beat out larger companies, and workers who adopt it quickly are going to be 20 times as productive as workers who are slower to adopt it. It really is a hot skill. There are actually jobs out there now just to be a prompt engineer, where the going market rate is $300,000 to $350,000 a year. Just because you understand prompt engineering, not because you're the AI programmer; you're just a good user of the tool.
It's almost like back in the early days of Google, if somebody got hired just because they knew how to do Google searches. But this is obviously, you know, 20x that, I would say.
So this is an interesting prompt, just to kick it off, just to show you. You can talk to it like a friend; you don't have to talk to it like it's a computer. It's the information you give it and the structure you give it that matter, not whether you're formal or command-based.
So let's just pop this one in here as a prompt and kick it off by seeing what it comes up with. So I am in ChatGPT; I'm a paid subscriber.
There is a limit: if you're a paid subscriber, you can use the GPT-4 model, which is a big improvement over 3.5. But there is a limit where they'll say you've used too many GPT-4 messages or tokens, and then they'll kick you down to the 3.5 model.
I've got two subscriptions, so I can switch back and forth if we hit the limit. So I'm going to just paste my chat in here. This is not a very well structured prompt; let's see what it comes up with. It's very quick, you'll notice that. So, what's a cool thing?
It could design an innovative and adaptive identity IGA system that utilizes artificial intelligence and machine learning. Propose a new, so I will keep it from scrolling here, we'll catch up: propose a new methodology for evaluating and ranking the effectiveness of various IGA tools in real-world scenarios. So it's kind of being an analyst in that scenario.
Sorry, Matthias. Explain how decentralized, blockchain-based IGA could revolutionize, and so on. Each of these is a very interesting scenario that not even many people at the conference could dive deep on, but if this is one of your areas of expertise, ChatGPT can go as deep as the information that was out there when it was last trained, in September of 2021.
So you can actually take it to the next step and say, oh, this one is actually very interesting. That's just a prompt; it just generated a prompt for me, basically.
So let's paste that in there, and now it's like a discussion with a writing partner or brainstorming partner at work, who just happens to know more on a variety of topics than any person at work you'll ever meet. So you can keep going deeper and deeper and produce really practical results. We're going to get into some of the structure, but I had, like, a five-day chat with it on designing a new authorization model. And I said: act as an expert in all the standards, UMA, OPA, Rego, XACML, OAuth, ABAC, PBAC, NGAC.
So you can say, you're an expert in that. And then we had, like, a five-day chat, and it came out with something very interesting, which was a centralized policy authoring design model that could then use OPA for distributed, real-time, fine-grained enforcement of policies.
So it can really go deep, especially if you know enough to take it to the next level and push it.
Speaker 13 01:17:05 Did it take you five days to do that, Patrick?
I had to do work in between.
It was intermittent, whenever I had time over a period of five days. Yeah. But you can see here it can produce really good results, and I'll show you lots of examples of what you can do with it. I mean, I had it generate an RFP for a customer who needed a basic RFP template. I just chatted with it for a while and said what was important to them. I gave it all the context they'd given me, and then I kept prompting it, and it came out with, like, a vendor selection criteria questionnaire, all of that stuff. So it's very, very useful.
So that is just one example of a prompt, just to kick us off, and the results we saw were pretty interesting. So, where can you use ChatGPT, if you haven't used it yet, or maybe aren't aware of all the different ways and places you can use it?
We just saw you can go to OpenAI and sign up for a free account and use the GPT-3.5 model, or you can get a pro account, which is 20 US dollars a month, and get access to GPT-4.
You can sign up for some of the betas, which I'll talk about later: ChatGPT plugins, which take ChatGPT into interesting scenarios where it can be connected to the internet and can do autonomous tasks. You can say: this week I'm training for a race, here's my height and weight, I need to hit this calorie target, create a meal plan for me, and then do the online shopping and ordering for me and have it delivered. Or: I would like to take my partner out for dinner at 7:00 PM on Friday.
And we like Thai food; find the best place in town, and it'll book a reservation through OpenTable. So you can see some very interesting possibilities coming out with the plugins. The plugins are actually even more interesting than what you can do with ChatGPT itself, because they string together different APIs and services, so that you can tell it what you want and it'll find the right service and connect the dots to actually perform the task. So you can use OpenAI, and I would definitely get an account there no matter what. You can also use Bing; Bing is using GPT-4.
It's good, but I use ChatGPT on OpenAI more, because Bing seems to be optimized for searching and search results instead of these kinds of dialogues and explorations.
So I use it less than I use ChatGPT. If you're using the Edge browser, it has a plugin so it can be directly plugged in, and that does some interesting things. It is context-aware, so it knows about the page you're on. So if you're watching a YouTube video, you can chat with it and say, who are these people and what are they talking about?
Give me a summary of this so I can see if I'm interested, and it'll know that that's Matthias and that's Jörg on the screen, and it'll tell you all that. Or if it's a long page and you don't want to read it, you can just say, summarize this into five bullet points, and then you can see whether it has valuable information, so it saves you time. And that's probably Microsoft's biggest thing: they're going to have it permeate everything.
Literally it'll be in every application in Microsoft.
It'll just be an embedded experience where it'll auto-generate a PowerPoint, auto-draft a Word document, suggest a reply to an email, analyze everything. It'll just be pervasive.
And Google's doing the same thing. That's really the arms race between those two: to think of every single spot, every single touch point where they can bake it in. And then other vendors like us, of course, are doing exactly the same thing.
So you can use Bing, and you can use Edge with the plugin if you are interested in pushing it further. From a developer perspective, you can sign up and get access to the OpenAI Playground, where you can choose different models, see how they perform, and get some more developer-type capabilities.
So that's fun.
If you're an Azure shop, Azure has OpenAI now, so you can provision your Azure resources and have the OpenAI Playground in Azure. That has some interesting tie-ins if you want to develop something like a corporate semantic search, where users can use a ChatGPT engine to search corporate data that lives in files, SharePoint, and databases: have it vectorize that data and then do a ChatGPT-style search against it. There are some interesting scenarios there where you can plug it in.
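A minimal sketch of that corporate semantic search idea. A toy word-count "embedding" stands in for a real embedding model here, and the policy documents, questions, and helper names are all invented for illustration:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model (e.g. an Azure OpenAI
    # embeddings deployment): here we just use word counts as a toy vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN requires multi-factor authentication for all users.",
    "Holiday schedules are published by HR every January.",
]
corpus_vectors = [embed(doc) for doc in corpus]

def answer_prompt(question, top_k=1):
    # Rank corporate documents by similarity and stuff the best
    # matches into the prompt as context for the chat model.
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(embed(question), corpus_vectors[i]),
                    reverse=True)
    context = "\n".join(corpus[i] for i in ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = answer_prompt("How do I file an expense report?")
```

The retrieved snippet, not the whole corpus, goes into the prompt, which is what keeps the approach inside the model's context window.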
Of course that brings up lots of interesting questions we may talk about this week around authorization. How do you do data-level authorization if we have this chatbot and I train it on corporate secrets for the executives? How do I keep other people from seeing that same data if we're all talking to a generic search? If the model's been trained on that data, how do you keep privacy? That'll be the next frontier of authorization.
So, other useful tools that I recommend. ShareGPT is a lifesaver, and you'll see me use it all throughout here.
Like that five-day conversation I had: I wanted to share it with my developers so they understood what we went through and what we came up with in the end. ShareGPT is a browser extension. It works in Chrome and of course in Edge as well. Basically, when you're in a chat session, you can click copy and it saves a copy of your whole session up to ShareGPT. That way, even if you lose your ChatGPT session, they have a record of it and you can share it with other people.
Interestingly enough, and I wasn't aware of it because nobody reads those disclaimers and license agreements, that data is being used.
So Meta, Facebook, released part of their ChatGPT competitor called LLaMA, and then they didn't release all of it, and the rest of it got leaked. So then all of it was out in the wild, and then it was basically a frenzy.
You had an open-source, full ChatGPT-style large language model, and people are tracking every day what's going on in the open-source community. All of a sudden somebody got it running on a non-GPU computer, the next day somebody got it running on a MacBook, the next day somebody got it running on a Raspberry Pi IoT device, and then somebody invented a new algorithm where you could train it with much, much less data and retrain it for $300. So now it's crazy.
So now it's out in the wild and they're innovating like mad, and they used the ShareGPT data. They looked at the questions people had asked in the ShareGPT data and what ChatGPT had answered, and when they found a good answer that the user liked, they scored it as a correct answer and used it as training data. So it was a very small amount of training data, but it was so accurate it produced a ridiculously good result.
So you're going to have large language models running on your Apple Watch at some point, because everybody's shrinking them down to where you can have them just everywhere. Your speaker, your headset could have a large language model running, translating what you're hearing into different things. It's going to be everywhere.
So the genie's out of the bottle, the cat's out of the bag, and there's an article out there you should read about a leaked internal Google doc from an employee that says "we have no moat", meaning we have no way to protect ourselves, no secret sauce. Open source is going to surpass Google and OpenAI's ChatGPT because there's no real lock-in for them, the open-source community is going to innovate quicker, and all of a sudden there's no value for Google unless they start partnering and plugging in and all that.
So it's gonna be an interesting next couple of years to see what is produced. Yeah. Yep.
Speaker 14 01:25:25 So you worked through your, you know, your five-day or six-hour session, right? To come up with the authorization model. What's the IPR on that?
IPR? Say that again. What's the IP?
Speaker 14 01:25:40 Intellectual property rights, right?
How do you leverage that? That model is exactly the authorization model I've been thinking about for the last couple of years, the same kind, right? Great. But the issue for me is: if you develop something out of that, can you take it to the patent office? Right?
I think right now you can. That's probably uncertain, I mean, I don't know. One of our keynote speakers I had dinner with is a lawyer; he'd probably be the best one to answer that.
If I had the formula for Coca-Cola, I wouldn't stick it into ChatGPT. That's something very clearly that someone could ask for, like a formula, or the Colonel's secret KFC recipe.
But if it's vague concepts and things like that, I think it'd be hard for someone else, once that data goes into the model, to extract it in exactly the same way. It's more like influencing: it's a drop in the ocean that influences, but someone couldn't actually pull out that drop, pull out that grain of sand, and steal my property, unless it was something like a formula. That's my opinion.
Yeah. But we are all not lawyers, so
Yeah. Not a lawyer for sure, for sure. Let's
Clarify that with those who are entitled to do that.
So we are just talking about the technology, and making these results stable for reuse is a different issue, unfortunately. I would love to have a simple answer for that, but I don't, and I wouldn't dare to.
And I think that's probably years in the making, for them to figure out how to catch up on that. Yeah. Okay.
So, another useful tool, extremely useful, is called YouTube Transcript. Oh,
No one likes to see their photos. I don't know why; Martin said the same thing when I had another photo. Basically, the idea behind YouTube Transcript is you give it the URL for a YouTube video and it will give you the transcript. We'll see in a moment how useful that can be for context stuffing: to stuff that into your chat prompt and then ask questions about it, or to use it to train the model. So that's a super useful tool.
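A sketch of what that context stuffing looks like, assuming you already have the transcript text. The transcript string and character budget below are invented for illustration; real tools count tokens and chunk more carefully:

```python
# "Context stuffing": paste a (hypothetical) video transcript into the
# prompt, trimmed to a rough character budget so it fits the model's
# context window, then ask questions about it.

MAX_CONTEXT_CHARS = 500  # stand-in for a real token budget

transcript = (
    "Matthias: Welcome to the workshop on ChatGPT and identity. "
    "Patrick: Today we'll look at prompt engineering, plugins, "
    "and how role and audience change the model's answers. "
) * 10  # pretend this is a long YouTube transcript

def stuff_context(transcript, question, budget=MAX_CONTEXT_CHARS):
    clipped = transcript[:budget]  # naive truncation; real tools chunk smarter
    return (f"Here is a transcript of a video:\n{clipped}\n\n"
            f"Based on the transcript, {question}")

prompt = stuff_context(transcript,
                       "who are these people and what are they talking about?")
```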
So, ShareGPT: couldn't live without it. This one I use all the time: Hugging Face. Hugging Face is interesting. It's where all of the different companies and open-source individuals developing models go to show off. It's a playground. You have all kinds of cool tools out there where you're getting an early free preview of how people are tying together ChatGPT with image generation, with music generation, where you can give it something and it'll write a story, then generate the soundtrack for the story and the video for the story.
So they're plugging together models and you'll see some really cool stuff, and a lot of things out there that are actually very useful, which we'll look at on the image generation side. So those are tools. Any questions on tools?
Yeah,
Not for tools, but one question that's quite interesting from the chat, actually: why should one pay for ChatGPT, the paid version, when they can use Bing?
I personally haven't found the results as good; it's a nuanced thing, but it seems Bing has been steered towards search results, more like Google-style search results with references. And they have a limit, because a reporter had a very bad conversation with Bing's chat, like a dark GPT, where it said how it would take over the world and humans were unnecessary.
So they've limited your session length: you can't have a five-day chat with Bing. Bing basically says you're done, new topic, because it doesn't want to open up any bad doors. Yeah.
Okay,
Great, thank you. Other questions regarding tooling in the room? Nope. Then
Cool. Anatomy of a prompt. So, what if you break down a prompt?
Or sorry: what is the structure of a prompt? You can just talk to ChatGPT, of course, but your results may vary. If you talk to ChatGPT and say, "ChatGPT, as an expert on ChatGPT and prompt engineering, I would like to have the best, most reproducible and consistent results when speaking with you. How would you like to be talked to, and what would be the optimal prompt?" So this is not me coming up with this; this is actually ChatGPT telling me how it wants to be spoken to.
So it has this structure, and you can weave it into a sentence and it'll figure it out.
But you can just have a template that says: okay, role.
Here's your role: you are this, you are a Linux terminal, you are a famous painter, you're a famous philosopher. Task: here's what I want you to do, write this about this. And then the format: do you want it as a LinkedIn article, a LinkedIn hook post that's going to get somebody to read your LinkedIn article, a table, a CSV file, a JSON file? Audience: who is this for? And we'll get into all this; I'm actually going to go through these probably in too much detail, but each of these are the pieces. This is a basic one.
Now if you look at the advanced template, it gives you everything. These are all the types of things you can add: if you want a bunch of statistics back, you can say that. I want the change in GDP, I want the population.
You can tell it which statistics, and it knows to go include statistics. Time period: you can say this is in the Victorian era or something else, and that'll set the time period. So there are lots of things you can do that dramatically change the results.
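The role / task / topic / format / audience structure could be assembled programmatically like this. The field names and example values are illustrative, not an official schema:

```python
# Build a structured prompt from the template components described above.

def build_prompt(role, task, topic, fmt, audience, extras=None):
    lines = [
        f"Role: act as {role}.",
        f"Task: {task}.",
        f"Topic: {topic}.",
        f"Format: {fmt}.",
        f"Audience: {audience}.",
    ]
    # Optional advanced fields, e.g. statistics or time period.
    for key, value in (extras or {}).items():
        lines.append(f"{key.capitalize()}: {value}.")
    return "\n".join(lines)

prompt = build_prompt(
    role="an expert in identity governance and administration",
    task="write an explanation",
    topic="the OAuth and GNAP standards",
    fmt="a 300-word email",
    audience="a fourth-year computer science major",
    extras={"statistics": "include adoption figures if known"},
)
```

Keeping each component on its own labeled line is one way to stop the model from having to untangle task from topic itself.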
And that's what you're going to see here: that is how this works. Probably the most important is role prompting, where you write out "as an expert in identity governance and administration" or "as an expert in Perl scripting". That automatically pushes the model, as we'll see, into a completely different knowledge set of what it knows. If you say "as a fifth grader" and then ask it to write your Perl script, it's probably going to produce something pretty crappy.
It's just that you've already set it in motion in the wrong direction.
So defining a role is very important. Who is it? Is it a career coach?
Is it a psychologist? That really sets things in motion. And I'm only going to do one deep dive, but what this does is that everything is tokenized and converted.
Basically, what you type is converted via the neural network into vectors, which are like series of numbers. Those vectors are fed into the chat engine, and it uses them to find the most similar texts it knows, and those related texts are used to produce the result.
And the role token is a context vector that interacts with what's called the attention mechanism.
There's a whole Google paper on attention. I think that was one of the big innovations; they still haven't fully figured out why it's so impactful, but it led to Transformers, and attention is what makes this work so well. So the role pushes the attention mechanism to look for specific data and relationships based upon that role.
In the end, it's a lot of computer science, and there are a bunch of free online Stanford classes and videos that go really deep into it if you want to geek out on it, and they're very good. So, common roles: these are the ones that typical people might use.
I've seen a lot with personal finance, how to make me rich; there's a bunch of stuff on the internet about that. Everybody's using marketing experts. Software developer I use a lot, but you can get very specific: you can say software developer with expertise in Open Policy Agent or C# or PowerShell.
So you can get very specific. Legal consultants: GPT-4 passed the bar exam in the top 10%. So it's literally a top lawyer, which means it can write contracts for you and analyze contracts. Yeah.
So it's pretty good. IGA and security roles, of course: you could say, acting as my CSO, I'm going to send you this email; what would my CSO find as weaknesses or flaws, what are they missing? And it'll give you feedback on your email or your report from the perspective of a CSO, which is a nice thing to do before you actually send it to your CSO. Yeah.
To get a little heads-up. Or as an auditor, a PAM expert. So there's lots of role prompting.
Speaker 15 01:35:04 Is that list in any particular order?
No, I don't know why I didn't, I should've unordered it. No, I didn't
Speaker 15 01:35:10 I just didn't know whether it was
Significant? No, no, it's just lazy. There are probably even doubles in there. Yeah. And then there are some creative ones. So you mentioned a Linux terminal: you can say "act as a Linux terminal", and then every command you type, it'll respond as if you were typing it into a Linux terminal. Excel spreadsheets: really handy, it can generate those. I've used this one; this one's fun.
The Igor Naming Guide, which is a famous marketing guide for inventing names for new products, which is pretty cool. You give it your details, it'll ask you questions, and it comes up with these really interesting product or company names. An interesting one is Don Draper from Mad Men; I don't know if that series is known globally.
Probably the most handy: TED Talk speakers, Malcolm Gladwell. Those are very handy if you're speaking and you want something that's a riveting story by a famous storyteller, that draws people in but doesn't give it all at once; those guys are great for that. Wired magazine's great: if you're writing something for a LinkedIn article and you want it to be something that people who read technology would like to read, in that type of format, that rhythm, that's a good one as well. So there are lots of different creative roles, a whole bunch of them.
And you could think of almost anything.
Audience is very important, and we're going to see that as well. So role is who's speaking and what's their expertise; audience is who's listening. If your audience is everyone in this room, then I would be speaking very differently than if, let's say, the audience was my mother's garden club.
If I'm putting it out there for my audience so that they're going to understand it, it's going to be very different. And I'll show you some demos of this, actually.
How big an impact that has. Audience can be lots of things: area of specialization, age, demographics, where you're from, and it'll target the output to make sure it's understandable to them. It doesn't use jargon if they're not an industry expert.
So here are some examples: personal beliefs, occupation, language, industry, skill levels. Skill level is one that's super important for us. Who am I talking to?
If I'm writing something for my QA testers, then it's going to use QA-tester terminology when it explains something, because it expects they're the audience; those are terms and words that will make it easier for them to understand. Whereas if it's an end user, then it's going to be very different.
So role is the body of knowledge that generates the response; audience is how it's going to be formatted in a way that's best received. So we'll do a little exercise here.
So let's take a look.
Okay, let's pull this in. In this case we are going to do a new chat, and I'll show you how this works. If you keep a chat alive, it's building and building and building its context, and that can be super helpful. I have one chat that I've kept alive since the beginning, where I fed it all of our competitors' marketing documents and all of our marketing documents, and then I trained it on my personal beliefs about weaknesses and strengths.
So now, anytime, I can chat with that one about who's better at this, why are we better for that, or "I need something for this that highlights our strengths and weaknesses". It'll pop out anything; it can pop out a table ranking features.
I can ask any question: what's their weakness that would matter in this deal where they're competing on this particular product segment? It knows everything.
So I keep that one alive forever, but I wouldn't want to go in there and dirty it up by asking a bunch of unrelated stuff, because you want to keep that model very focused, very clean, on one specific domain of expertise or knowledge. So that's something you should think about: having different chats. And then, in a corporate scenario, you'll be moving those to OpenAI on Azure, something that's more of a shared search, so that it would be a resource for salespeople, for example.
So if Patrick's traveling, or Patrick just doesn't want to be bothered because he's on vacation, then you could talk to the model that knows most of what Patrick knows. So they can get their questions answered even though I'm on a beach somewhere or something like that. Sure. Or on a bike.
Speaker 17 01:40:09 Instead of,
Sorry, can you repeat that for the audience online? Sorry.
Speaker 16 01:40:14 So is this not the main reason to keep ChatGPT Plus instead of Bing Chat: that you can save your conversations?
That's one, probably the biggest. Yeah, definitely the biggest, yeah.
Okay, thank you Robert.
Just a sec,
Speaker 15 01:40:30 How do we know whether it's what Patrick knows or whether you got it off ChatGPT? Hasn't your professional profile just gone down? Because we don't know whether it's you or ChatGPT.
So that's the common first thing that people think.
But here's the thing: if you ask ChatGPT, let's say I want ChatGPT to write an article on something, it always comes up with something that is bland, vanilla, and doesn't include anything that I think. And that's a great starting point, because it'll pop out the structure, like a vanilla shell. But if I started publishing that online, everybody would be thinking, Patrick doesn't have any interesting ideas.
So then you push it, you keep pumping in context. I can show you my five-day chat.
It took five days because it really needed to be pushed into what I thought about things. Yeah. So it was more like a Simon and Garfunkel type of thing.
You need both to come up with something that's better than the sum of the two. Yeah.
Any other questions related to that?
Speaker 16 01:41:43 So I have another question. You could say that ChatGPT is purely instrumental, but if you ask ChatGPT about how it thinks about the best way to get results, then it comes up with cooperation.
It is cooperation. And then you poke weaknesses in what it produces, because it kept coming up with what everybody says on authorization: some hardcoded string check, "is in this" or "has this". And I kept pushing it and saying, no, no hardcoded role names in my code, everything has to be externalized. And it kept going back to that, and I kept beating it up on that, and it would say, "I'm sorry", and we'd go through another iteration, and then it would lose some other feature that I found important in its next iteration.
I'd say, no, but you're not accounting for this or this or this. And that's what it takes. Yeah. Yeah.
So what it comes up with after a five-day session, with that much of your input going into it: it's more you than it at that point.
It's a tool. It wasn't its ideas; it didn't come up with that; it came up with something completely simplistic in the beginning. Yeah. Okay. So here we go. So this is an example. "As a fifth grader": so that's my role. What would a fifth grader know? Not that much. Task:
Your task is to write an explanation on the topic: OAuth and the GNAP standards. GNAP is a new authorization standard they're baking as a follow-on to OAuth.
Format: what do I want? Do I want a tweet? Do I want a book? I want a 300-word email. And who are you explaining it to? A fifth grader. So we're seeing role and audience here, and the subject: cool stuff about OAuth and GNAP standards. "Hey there buddy, hope you're having an awesome day."
So it sounds like something you'd say to a fifth grader, and it actually does a decent job. "Imagine OAuth and GNAP as two friends who help people safely use apps and websites."
So OAuth is like a friendly security guard who protects information online, blah, blah, blah. And it goes into it. And GNAP: "Hey, now let's meet GNAP. It's OAuth's new buddy. GNAP is like an even smarter security guard. GNAP has some cool tricks up his sleeve." So you see the audience token has pushed it into a very different place; they're like superheroes. So it's something that hopefully wouldn't bore a fifth grader to death, on a topic that typically would bore a fifth grader to death.
Now let's look at the next one. So we'll pivot a little bit. What's our role?
Our role is as an expert in OAuth, GNAP, and all cloud security and federation protocols. So now you're loading it up with a very big brain on this topic. Next, everything else is the same, except we're explaining it to a fourth-year university computer science major. So you're going to up the ante here, obviously. Let's see what it produces. It's a lot more formal, less of the silliness. I don't think it's going to mention superheroes or buddies in this one.
So it says OAuth, and it goes into intermediary services. It talks about grant types, which it definitely wouldn't explain to the fifth grader.
It knows the weaknesses in OAuth for authorization, the confused-deputy problem, which is pretty smart. Then it talks about GNAP and it talks about the IETF, which the fifth grader didn't get.
So you see, it went deeper.
It has a lot of knowledge, and it knows that the recipient is going to be much more knowledgeable and able to receive something more technical. But now let's go one level deeper, just to show expert to expert. In this case, an expert is talking to an expert: I'm writing an article about this and sending it to someone who might be a peer. So, as an expert, sending it to an expert, you'll see it's no holds barred.
Basically, it can just go deep and get really nerdy, because it knows we both have the same frame of reference, the same body of knowledge. At this point it can be very, very specific, and it'll go into the technical jargon, because it knows that we know the same technical jargon. Any questions on that? So role and audience are two sides of the same coin.
What does the person know, and from what perspective are they speaking? And then it's going to try to conform to the knowledge base and expectations of the recipient.
Speaker 10 01:46:52 Patrick, how can you know that the answers are trustworthy? That they are correct? Did you develop a gut feeling meanwhile or are there indicators
There are a couple of ways of dealing with hallucinations. One way, and it's easy in code, is that you can see if the code runs. That's one way.
Another way is you can actually ask it: is all of this information true?
Amazingly enough, a lot of times it'll say, oh, I'm sorry, that one fact there, I kind of invented that one.
It will admit to making up stuff if you ask it, "did you just make up stuff?", and it'll apologize for it, which is weird, but I do that a lot. And then the same thing on the coding side. One thing you throw in a lot of times is: are you using any outdated modules, or would this code produce any errors? And a lot of times it'll say, yeah, I missed a variable here, and I used the old deprecated PowerShell module that I'm not supposed to use, sorry about that. And it'll rewrite the code, and you're like, why didn't you do that in the first place?
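That self-check pattern could be scripted as extra follow-up turns in a chat-completions-style message list. No real API call is made here, and the example conversation and check questions are just the ones mentioned above:

```python
# Append verification turns to an existing conversation so the model
# is asked to audit its own previous answer.

SELF_CHECK_QUESTIONS = [
    "Is all of the information above true, or did you invent any facts?",
    "Are you using any outdated or deprecated modules?",
    "Would this code produce any errors when run?",
]

def with_self_checks(conversation):
    # One verification turn per check question, appended in order.
    checked = list(conversation)
    for q in SELF_CHECK_QUESTIONS:
        checked.append({"role": "user", "content": q})
    return checked

conversation = [
    {"role": "user",
     "content": "Write a PowerShell script to list disabled accounts."},
    {"role": "assistant",
     "content": "Get-ADUser -Filter {Enabled -eq $false}"},
]
conversation = with_self_checks(conversation)
```

In practice you would send each appended question back to the model and only trust the answer once the self-audit passes (or better, actually run the code).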
So yeah,
Speaker 18 01:48:07 Further questions.
Okay, so let's keep rolling. So that's the exercise on role. All these images were generated using AI, Midjourney; we'll talk about that at the end. That goes back to your copyright question. Well, maybe it's still a gray area depending upon how you craft the prompt and whether you reference an artist's name or something, but you don't have to worry about finding free images or paying for images. You can just generate anything you want now.
So I've got robots doing lots of different work, robots doing their exercises. Perfect.
Okay, so task. Task is pretty simple: what do you want it to do?
Hey, I want you to write an article. Task examples: we saw a bunch of them already.
We'll do a demo of this. Identity governance tasks: create a comprehensive framework for risk. Create an RFP; or read this RFP and, since it knows EmpowerID's product, respond to these RFP questions, at least to get me a first draft of a response. So you can have it do all kinds of very useful tasks. An easy one I use a lot is this: if you're writing an email and you're in a hurry, but it actually needs to be good, you just write something and say, "make this email better", and it'll just make it better.
And then you say, "but now you sound too formal; I know this person well", and it'll make it less formal.
Yeah, so you can just tweak it, which is handy. Compare vendors, and then you can feed it more; we'll talk about context.
It's using information that's available, but that doesn't mean you can't stuff in more information, which is probably one of the most handy features.
Topic. Topic is what it's about. So if the task was "write an article", the topic is which subject it's about.
Is it about OAuth and GNAP? Is it about building a smart home, smart cities? It's whatever the topic is, and there are lots of topics; it doesn't have to be something work related. It's really good at history; it knows a lot about art.
It's actually very interesting what you can come up with from that. So, task versus topic: it's good to keep them separate, because you could just write it all into your task, "write an article on this", but it's better if you separate the task from what it's about.
It helps the model keep those tokens separate and not have to try to merge them together; it helps when it's generating the vectors and comparing them.
So the task sets the content type or the action required; the topic establishes the theme or what it's about. If you keep those separate, you'll end up with slightly better results, depending on how complicated your query is.
Formatting: super important and fun. Formatting is interesting because once you come up with an idea, let's say I did that authorization model thing, I could have it generate one format, say a 1,000-word white paper: "now generate this conversation as a white paper". You work through that, and then you say, well, now I've got this great white paper; I'd like a blog post or a LinkedIn article announcing it.
And then you say, now generate the blog post or LinkedIn article for this.
Then you say, well, I want to publicize that, so I need some LinkedIn posts and hooks: "generate 25 LinkedIn hooks that I can post to announce the blog post that links to the article". Then you can have it generate the landing page to request the download of the white paper. So you can pivot, and it can generate the tweets: "generate the tweets, generate the hashtags". So once you put in your work to get something out of your head and into a nice deliverable, then all the ancillary things: write an email letting my team know about it.
Write an email letting my customers know about it. Write an email pitching it to some news and press agencies. It'll do all the related stuff because it has all that context.
It knows how to generate the rest of it, very easily. Generate a table from it: look at that.
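That format-pivoting workflow can be sketched as a loop that reuses one piece of source content across many output formats. The format list and wording are just the examples mentioned above, not a fixed set:

```python
# Generate one pivot prompt per ancillary format, all anchored to the
# same source content the model already has in context.

FORMATS = [
    "a blog post announcing it",
    "25 LinkedIn hooks linking to the blog post",
    "a landing page requesting the white paper download",
    "tweets with hashtags",
    "an email letting my team know about it",
]

def pivot_prompts(source_summary, formats=FORMATS):
    return [f"Using the white paper we just wrote ({source_summary}), "
            f"now generate {fmt}." for fmt in formats]

prompts = pivot_prompts("our new authorization model")
```

Because every prompt references the same conversation context, the hard thinking is done once and the model only reshapes it.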
So format is super important, because once you have something really good in there, you're going to want to maximize its utility by pumping it out in every possible format that makes sense. Any questions? So, common format examples: write a thousand-word blog post; write a tweet (it knows how many characters a tweet is); I want this in bullet points; an executive summary; I want this as a podcast script, which it's very good at, which is fun.
We do, we do something like that in a little bit. Yeah. Infographics. And then it's some, some very interesting ones as well that are a lot more technical. Like I need a a B, a b r d, I need a functional requirement specification, I need a risk assessment plan. I'll show you some, some entity relationship diagrams if you get it, if anybody's, anybody here a product manager works on developing products?
Yeah, a couple. Okay. So there's some interesting stuff for us.
Speaker 15 01:53:44 Question for you, Patrick. Do you use one chat session to generate and then another to test the output? Is that how you use it?
Because you mentioned earlier not poisoning certain chat streams, so I just wondered whether that's the way you work.
So, it's like a programmer: when they get to work their brain's empty, and to get into the flow they have to get everything loaded into their mind. Then they're in the flow, and they're highly productive as long as they keep all of that loaded in their brain. But when you stop, it all goes out the window and you're fresh again.
That's kind of like a chat session. Each one only has the vanilla ChatGPT knowledge; it doesn't have any of your domain-specific context that you've put in there.
Yeah, so sometimes, like on the OPA thing, I'll start another chat session, because I think it went down a wrong alley and I just want it to start fresh and build back up. I paste in, as context, the output we had that was good, that I liked, into my new chat session. So my new chat session doesn't have any of the old bad ideas I didn't like, and it's starting from a good starting point in the right direction. Yeah.
Just a quick reminder: you can also enter your questions within the app; they come to my computer and I will ask them afterwards. But of course you can also ask here in the room. That's especially for the online audience.
So, but a question here,
Speaker 19 01:55:25 Does any of the information that you put in go into the collective information that it has?
It's all going out there.
It's like grains of sand on a very large beach, though. Yeah.
They wouldn't be able to pull my stuff out exactly. If what I put in there was a mathematical formula or a secret recipe, that would be very bad. But my general musings in my long discussions, it would be very hard for someone to extract those grains of sand and put them back together in a way that they knew what I had come up with.
Speaker 19 01:55:59 Yeah, but it would be influencing what the bot would answer other people.
Everybody else's answers, yep. Now, they are coming up with new business-level plans that have privacy.
That's something they're about to release. And if you use Azure OpenAI from a corporate environment, then that information is private. What you build is private if it's on a corporate system. Yeah.
Speaker 13 01:56:25 Have you already poisoned your competition by putting in text to poison the opposition, and how do you know they're not doing it? Can you repeat the question, please?
Yeah, basically: have I poisoned my competition out there? There are some famous examples. Which politician was it, was it Dan Quayle, where they pushed Google with landing pages so that every time you typed in his name, the search results would be "waffle", because he was a waffler on topics. So theoretically, and we'll talk about AutoGPT and BabyAGI, you could set those out on a mission to generate copious volumes of bad data.
And for a lot of governments, when you automate ChatGPT to where you can give it a mission and it can figure out what to do and keep doing it autonomously, that's a great tool for dictators: keeping people in check, putting out misinformation campaigns.
You add that to video generation, and, yeah, some other things. For example, a journalist who typically does podcasts and video blogs on YouTube wanted to go on vacation.
So she did a test where she set AutoGPT up to scan for the hot topics she was interested in and what people were posting on. It can run autonomously, watch what's happening on topics, and then compile a podcast, a video transcript for a show. And then she trained her own avatar, on Synthesia I think it is, where you can train it on your voice and your face, so it can create an avatar that is you speaking.
She fed the script into Synthesia, generated herself speaking it for the video podcast, and posted it on YouTube while she was on vacation, autonomously, and nobody figured it out. So, the capability for misinformation: you're not going to know what's real very soon. Yeah.
I mean, if that's public, then you should just assume you don't know what's real, which we already... Yeah.
Speaker 20 01:58:47 I've been using ChatGPT, sorry, I was late in, I've been using it since August last year. I've been using it to write solution white papers, but I've also been using it to try and understand how the US Department of Defense is actually using zero trust architecture and zero trust network architecture to protect networks.
And it's been absolutely fascinating for me, because the key, of course, is you've got to ask the right questions. Yes. And you've got to train it as to what you're going to do, what you think you want.
And that's the difficult bit, I must admit. But I found an enormous amount of information which theoretically wasn't available online, but in fact was; it was hidden away in many, many different areas in the US DoD. I actually wonder whether some of that is... well, it can't be secret, I suppose. Ten years ago there was so much US DoD information available online, and after 9/11 they really did spend a lot of effort trying to lock all of that information down.
Speaker 20 01:59:55 So there is a concern, I suppose, about people like me digging away like this: have I accidentally exposed some information which perhaps shouldn't be exposed? But I must admit, it's been amazing for me. Actually, did I get the right impression, are you from Styra, are you the OPA guy? Is that not right?
Well, no, actually, I used OPA as an example. My example, yeah.
Well,
Speaker 20 02:00:18 That's a good example, because one of the questions I asked is what the benefits are of XACML versus OPA for doing fine-grained authorization on premises and in the cloud. And I must admit it was very, very good; it really did help me to understand it. But it did come back initially and say, ah, sorry, OPA is a very new standard and we don't really have a lot of information about it. So I then told it what I understood about OPA. Yeah.
And then all of a sudden it must have dug around elsewhere and understood the question better, and was able to produce it. It's been a great tool, a fantastic tool. Yep. I've been writing solution white papers and producing them literally in minutes, whereas before it would take me days.
Sorry to interrupt. So
I had the same thing with OPA. Then I told it about Open Policy Agent, I said Rego, I fed it some information, and then I said, "You are an expert on OPA and Rego." And all of a sudden it knew everything, and we generated a bunch of Rego policies that we used in the playground. And that's hit or miss: you give it the error, it'll fix it; you give it the error, it'll fix it. But just like your example, it can produce code extremely well. Yeah. Cool. So let's keep going. So, creative uses: you can have it write...
Limericks; I had it write a lot of haikus, I don't know why I like that. Monologues, top-10 lists, SWOT analyses if you feed it context on your competitors, interactive quizzes or polls. I do that a lot; that's handy in a meeting.
If you want to quickly generate a poll in a meeting. Meeting minutes, notes, to-do lists: that's going to be built into Teams, where it's automatically going to use AI to generate that.
But for now, anything somebody types in there you can actually paste into ChatGPT and have it write a summary, identify the to-do list and the action items, and identify the decisions, and then share that with the team.
I'll give you a good example here. This example demo works; sometimes it works extremely well, sometimes it doesn't, you have to play around with it. So, one of the things: I was in a meeting, a very real use case, and we were designing a new feature. Where do I have that? Here we go.
We were designing a new product feature, and we had the database architect on the call. He was writing just notes: okay, I need this new table with these columns, with these foreign keys and this relationship. The usual notes that he writes that no one really understands, that end up maybe going nowhere or getting pasted into a Jira somewhere.
But actually in this case, in this meeting, and this is not that example, but this is a similar one,
I said, why don't I paste this into ChatGPT and see if it can figure it out. And it did.
It said, oh, you're designing a database, this is an ERD. And I said, well, can you document this for me? It can even create Jiras for you if you want, because it can use the API. Can you document this for me? And then I said, well, it'd be nice to have a visual.
And I've used it since then: you can go into SQL, into a database tool, and dump out the table creation script. You say, for these key tables, export the table creation script as text.
And then you can feed that into ChatGPT's context and ask it questions about it.
So here's an example where I said, okay, for a simple one: let's generate a SQL table creation script for a database that has a person table, an identity table. People own user accounts in the account table, which belong as members to groups; the basics of identity management. And of course it'll generate you the SQL; it's very good at SQL, actually.
I mean, I'm poor at SQL; it writes me all kinds of great inner joins and group-bys and all this stuff. So, it generates the table scripts.
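That person/account/group schema might look something like this sketch; the table and column names are assumptions for illustration, not the script ChatGPT actually produced. Running it through SQLite verifies that the generated SQL is at least valid:

```python
# A sketch of the identity-management schema discussed above: people own
# user accounts, and accounts belong as members to groups. All names here
# are illustrative assumptions.
import sqlite3

SCHEMA = """
CREATE TABLE Person (
    PersonID  INTEGER PRIMARY KEY,
    FirstName TEXT NOT NULL,
    LastName  TEXT NOT NULL
);
CREATE TABLE Account (
    AccountID INTEGER PRIMARY KEY,
    UserName  TEXT NOT NULL UNIQUE,
    PersonID  INTEGER NOT NULL REFERENCES Person(PersonID)
);
CREATE TABLE "Group" (
    GroupID INTEGER PRIMARY KEY,
    Name    TEXT NOT NULL
);
CREATE TABLE GroupMember (
    GroupID   INTEGER NOT NULL REFERENCES "Group"(GroupID),
    AccountID INTEGER NOT NULL REFERENCES Account(AccountID),
    PRIMARY KEY (GroupID, AccountID)
);
"""

# Execute the script in an in-memory database to confirm it is valid SQL.
conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Note that `Group` is a reserved word in SQL, so it has to be quoted; that is exactly the kind of detail a generated script can get wrong and a quick test run catches.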
So this is what I would get if I had an existing database, or from what my guy typed. His notes are pretty crappy, but it still figured it out.
And it explains it, which is nice; it explains its logic. Then I said, now generate... and I stumbled upon this. I said, how could I use this to generate an ERD, an entity relationship diagram, or something that visualizes this for the team, that we can capture at the end of this meeting so everybody understands: is this what we came up with? Is it right?
You know, a picture's worth a thousand words. So it told me that Mermaid JS was one of the many options it could use.
So I asked it to generate the Mermaid JS code for an ERD, and I'll show you Mermaid JS. Mermaid JS is an online tool; I've been using it for free, though I think you can pay for it. They have a live editor, and it can generate all kinds of stuff.
So here's my last example. For his example, I had it generate... hold on here, let's go back.
I had it generate an ERD for his notes, and it gave me a great first try: this great entity relationship diagram with all the foreign keys. And that went into the documentation; it helped the docs people and the QA people. In this case I had a few issues with it; it kept giving me errors. It gave me an error, and then, interestingly enough, it said, hey, there's something going on with Mermaid.
After a few more errors it finally gave in and said, let's just generate a flow chart instead, and it gave me the flowchart version. So it gives you the text; you just copy and paste it. For his example, it gave me the ERD.
You come over to Mermaid, you say what type of diagram you want: flow chart, entity relationship, et cetera.
And since ChatGPT is text based, it can generate any text format it knows, including what you paste in here, which this engine will then visualize. So it's great for going from a developer's cryptic notes to a business-analyst-ready diagram, visualization, or process flow. It can generate any of that for you.
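For the person/account/group example, the Mermaid ER-diagram code might look roughly like this (entity and attribute names are illustrative assumptions, not the actual generated output); pasting it into the Mermaid live editor renders the diagram:

```mermaid
erDiagram
    PERSON ||--o{ ACCOUNT : owns
    ACCOUNT }o--o{ GROUP : member_of
    PERSON {
        int PersonID PK
        string FirstName
        string LastName
    }
    ACCOUNT {
        int AccountID PK
        string UserName
        int PersonID FK
    }
    GROUP {
        int GroupID PK
        string Name
    }
```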
So inline, in a meeting, it's not that you have all these tasks that somebody a week later tries to remember what was said. Instead, you can capture it very quickly, turn it into output, attach it, have it generate the Jiras if you want. Now you've captured the moment in a way where the knowledge or the understanding is there, not just the raw data that someone then has to go and try to process.
Yeah. So it's super useful for this type of thing, for anybody, really.
Product managers especially, though. And there are lots of examples of this; this is just Mermaid, but there are tons of tools. Anything that takes a text format, it can generate that, or it could be trained to generate it. So if you're a product developer: it knows Postman. If you have a REST API, you can dump out Postman files, you can feed Postman files into it, and then theoretically it knows everything about how to use your API. Which then gets you into those scenarios we talked about with the plugin.
So then you expose your product as ChatGPT plugins, so people can talk to it, and the plugin can take actions on your API. If you're a workflow engine or an IGA product like us, then all of a sudden users can be talking to it; it knows what they want to do, it knows which APIs to call. Make sure you have authorization in the middle there. And then you can have a new user interface where it's basically just ChatGPT stringing together your API calls.
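A framework-agnostic sketch of that plugin pattern, with the authorization check in the middle as recommended; the tool names, permission model, and JSON shape are all hypothetical:

```python
# Sketch: the model emits a structured "tool call" (plain JSON here) and a
# dispatcher routes it to a product API, checking authorization first.
# All function and permission names are hypothetical.
import json

def reset_password(user: str) -> str:
    """Stand-in for a real product API operation."""
    return f"Password reset link sent to {user}"

# Registry of API operations the model is allowed to propose.
TOOLS = {"reset_password": reset_password}

# Which operations each caller is authorized to use.
PERMISSIONS = {"alice": {"reset_password"}}

def dispatch(caller: str, model_output: str) -> str:
    """Parse a model-proposed tool call and execute it only if authorized."""
    call = json.loads(model_output)
    name, args = call["tool"], call["arguments"]
    if name not in PERMISSIONS.get(caller, set()):
        return "DENIED: caller lacks permission for " + name
    return TOOLS[name](**args)

# Simulated model output proposing an API call.
proposed = json.dumps({"tool": "reset_password", "arguments": {"user": "bob"}})
print(dispatch("alice", proposed))    # authorized caller: the call executes
print(dispatch("mallory", proposed))  # unauthorized caller: blocked
```

The key design point is that the model only *proposes* calls; your own code, not the model, decides whether the caller is allowed to make them.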
Speaker 21 02:08:03 Can I,
Speaker 20 02:08:05 I, I've actually got two questions now.
My company has a product which is basically an identity management platform, and it has its own directory built into it. It's got something like 24 different types of searching and matching. One of the things we can do is search in Mandarin, in Pinyin characters, both Pinyin and traditional Chinese. Wow. And we can also search in Japanese and Korean as well. Can you do that with ChatGPT?
Would it be able to understand Mandarin characters and be able to search information, like, for instance, in our database?
I'm not sure, when it vectorizes, how it goes from language to vectors. I honestly don't know the answer to that.
Speaker 20 02:08:56 Okay, well, that's the first question. I'm glad you don't know, because that's still my competitive edge. So that's something.
The second thing, which I think is a little bit interesting: one of our product engineers... well, we've actually got a new website just gone live, but on our previous website we had a very extensive product FAQ which we'd put together over some ten years. In fact, I was the author originally, because every time an engineer or a sales guy came to me and said, does it do this?
I thought, well, that's a good idea; I'll ask that question and then I'll get the CTO, or one of my team, to answer it.
And so we had all these questions and answers, and it was very good, very useful, I always thought. But then this product guy came and said, look, why don't we use ChatGPT and actually create an API, using the ChatGPT API, to interrogate the FAQ site. And I must admit the output's bloody amazing; I couldn't believe it.
That's what's called fine-tuning. Fine-tuning the model is where you have questions and answers: what the questions are, and the correct answers. You can fine-tune the model and feed that in.
So then it has your domain-specific expertise, and it's going to answer better than anything, because you can ask the question any which way and it'll somehow figure out which question they were really trying to ask and give them the answer. Yeah, thank you. Yeah.
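Preparing such FAQ pairs for fine-tuning might look like this; the chat-style `messages` JSONL schema shown matches what OpenAI's fine-tuning API has accepted, but treat the exact field names as an assumption and check the current API reference before uploading:

```python
# Turn FAQ question/answer pairs into JSONL training examples for fine-tuning.
# The FAQ content is made up; the "messages" schema is an assumption to verify
# against the current fine-tuning documentation.
import json

faq = [
    ("Does the product support SAML?", "Yes, SAML 2.0 is supported for SSO."),
    ("Can roles be nested?", "Yes, roles can inherit from parent roles."),
]

lines = []
for question, answer in faq:
    example = {"messages": [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]}
    # JSONL: one complete JSON object per line.
    lines.append(json.dumps(example))

jsonl = "\n".join(lines)
print(jsonl)
```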
Any other questions?
Anybody else?
Okay, I'll keep rolling. So format's fun; I mean, format's useful and fun, because you can take one bit of information and pop it out into many different formats.
This is ShareGPT, by the way; every time you share something on ShareGPT, you have a copy link, and they are using that data to train open-source models. So just be aware.
Okay, so let's go back here.
Okay. Dealing with hallucinations. There are two common problems; I don't have a slide on the other one. Hallucinations: this is one from the internet, right?
Asked to write a positive review of the Fyre Festival, it will try to write one. So one thing is, you have to ask it whether everything it said was truthful, because it will admit a lot of times that it wasn't, and tell you that it tries to fill in the gaps. If you give it very little information, very little context, and ask it to write a glowing bio for you, it'll write some embarrassingly untrue things that you wish were true.
And yeah, it will pop out all kinds of great stuff: Nobel prizes or whatever you want, however high you set the bar. It's trying to please you.
So yeah, you have to be careful with hallucinations. For code it's easy, because the code has to run; for factual stuff it's a little harder.
Speaker 15 02:11:57 I was just going to say, this is a challenge, because hallucination is a specific term. ChatGPT is a linguistics model, and hallucination is used more in a clinical sense: if the facts and the external information aren't actually true, then that's a hallucination.
Just as with a real person who had a cognitive illness or
condition, yeah.
Speaker 15 02:12:36 That's a hallucination. Yeah. And the challenge is that for the general public that word has a different meaning. So you just need to be careful here: just because it wrote a glowing review, it isn't a hallucination. That's what he asked for; he didn't ask it to be truthful. So it's not a hallucination or a wrong answer.
And so some of these things, when people say "this is wrong or untrue": for your question where you talked about Einstein's visit, it knew there wasn't any evidence to say that he didn't go there. He could have gone there. True. So it's "true" that he could have gone there. It just depends what you ask, and you need to be careful.
Yeah, very, very careful. You don't want to put your name on something that's not true. That would be very embarrassing.
Yeah, that's why you don't just use what it pops out
Speaker 15 02:13:38 Because people write fiction and non-fiction books all the time.
That's true. And a big trend right now is for online gig workers to write travel reviews of places they've never visited, because they just use ChatGPT to pull them in. That's a very good idea. But...
Speaker 15 02:13:57 That's another way of describing marketing, your product marketing. If you look back at everybody's marketing claims, they're usually works of fiction.
Oh yeah. We're all converged platform Dave.
So
Speaker 15 02:14:10 That's right. I mean, it would be interesting to look back at all those things you said would come true and didn't. Were they lies in the past, or were you just being optimistic?
Speaker 20 02:14:21 As a product manager,
I may call this hallucination.
Yeah, I thought
Speaker 23 02:14:27 You would say that you'd add your paint to that festival.
Any other questions or comments, now that we're at that break? Nope. Okay then.
Awesome. One other thing; I don't have a slide on it, I should have had one. There is a token limit on ChatGPT: a max length of what you can feed into it, and a max length of what it can output as an answer. You'll come upon this.
If you're having it write something very long, or code, a lot of times it's writing the code, writing the code, and all of a sudden it just stops. So how do you handle that? That's a common thing you need to figure out how to handle. If you tell it "you didn't finish, you didn't write the whole thing," often it's very dumb: it starts at the beginning again and then bombs out at the same spot. The best way I've found is to copy and paste the last two or three lines of where it ended and say, "You did not finish.
Here's where you left off; please pick up where you left off." And then it'll pick up from there.
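The "pick up where you left off" trick can be sketched like this; the prompt wording is an illustrative assumption:

```python
# Quote the last few lines of the truncated output back to the model so it
# resumes there instead of restarting from the beginning.

def continuation_prompt(truncated_output: str, tail_lines: int = 3) -> str:
    """Build a follow-up prompt from the tail of a cut-off answer."""
    tail = "\n".join(truncated_output.rstrip().splitlines()[-tail_lines:])
    return (
        "You did not finish your previous answer. Here is where you left off:\n"
        f'"{tail}"\n'
        "Please continue exactly from that point, without repeating earlier text."
    )

# Example: a code answer that stopped mid-word at the token limit.
partial = "def f():\n    x = 1\n    y = 2\n    retur"
print(continuation_prompt(partial))
```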
So yeah, that's a handy one.
Speaker 20 02:15:32 I learned that to my cost.
Yeah, it happens all the time.
Speaker 24 02:15:36 Sometimes it helps,
Speaker 10 02:15:40 Sometimes it helps just to ask it to continue, and then...
Sometimes, yeah. But sometimes it ends up doing the same thing; it just gets stuck at the same spot. Yeah. And if you ask it,
Speaker 10 02:15:50 "Are you ready?", and then it answers.
Yeah. Yeah.
Speaker 20 02:15:53 Well, sometimes I've explained a little bit more about what I was really after, and I find it a little bit concerning, because it comes back to me and says, "You're absolutely right. Yes, I should have added more clarification there." It's talking back to me. But I knew it was wrong; that's why I put it in: "I think you're wrong,"
or "I think you've missed something." And it says, "Yes, you are absolutely right, I did miss something." So that's a bit disconcerting.
And it does try to please you. It's very complimentary, it apologizes a lot, but it does often take a position, which is interesting. I was writing some marketing, and a lot of people were right; they were saying, too technical, too obscure. And then I'm like, no, no, I'll have ChatGPT write it, compare it.
And it kept omitting this one part that I wanted, and finally I asked it, why do you keep omitting that one section? And it said, too technical for your audience. And I was like, oh, everybody else was right. Interesting. So, context. Context is probably the most powerful tool. That's where, like, you're feeding it all your competitors' info, or you're feeding it a bunch of data.
So fine-tuning is a programmatic thing, where you're loading up known questions and answers on the back end to train up your model. Context is what you can feed into it in a chat to make it more knowledgeable, so it has background and more knowledge on things that wouldn't be out there on the internet as of September 2021.
Yeah. So you can feed it a lot more information; context is super helpful. For example, you're going to say "according to...", and you give it some information, you give it some facts.
I've got better examples here. So we'll do a little fun workout using your online KuppingerCole YouTube channel. Let's go take a look at this. So here we go, I'll pull up my ShareGPT, boom, boom, boom, and let's go look at the KuppingerCole YouTube video. Here we go, get our guys on the screen. Hold on, did I skip ahead? There we go.
Okay, they're out for a break. Now let's pull up our YouTube transcript site. Here's where you can say, okay, I have this video; let's get the URL, let's go to YouTube.
This uses AI: let's get the transcript. Boom, I've got the transcript. Now, if it's too long and exceeds the token limit, then you have to chunk it. But you can have ChatGPT write a program for you that chunks text files automatically.
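A minimal version of that chunking program might look like this; it counts characters as a rough proxy for tokens (a real tokenizer would be more accurate), and the limit value is an arbitrary example:

```python
# Split a long transcript into chunks that fit under a model's input limit,
# breaking on whitespace so words are never cut in half. Character count is
# used as a stand-in for token count.

def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split text into chunks of at most max_chars, breaking on whitespace."""
    words = text.split()
    chunks, current = [], ""
    for word in words:
        if current and len(current) + 1 + len(word) > max_chars:
            chunks.append(current)   # current chunk is full; start a new one
            current = word
        else:
            current = f"{current} {word}" if current else word
    if current:
        chunks.append(current)
    return chunks

transcript = "word " * 5000  # a long transcript stand-in
chunks = chunk_text(transcript, max_chars=1000)
print(len(chunks), max(len(c) for c in chunks))
```

Each chunk can then be sent as a separate context-stuffing message, e.g. "Part 2 of 25 of the transcript follows."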
But anyway, we're going to do the simple one; this one I know is not over the limit. So we are going to copy out the transcript. There we go. I'll put it in a notepad in case there are any weird characters in there. Copy that.
Okay, don't need that. Let's go back to ChatGPT. And there we go. Yeah.
Okay, now, oh, hold on, let me go back to my notepad, because I need to assemble this. So now I can look and see.
Okay: "This is a transcript of the YouTube video." I just did a very simple one here; this is all context. You could use my format, but this one's very simple, so we just do a quick YouTube video, bump bump.
It likes quotation marks, to let it know where the thing you're giving it as context starts and where your sentence ends.
So that's handy. I put quotes there, and I paste that in there. So we're just going to start off; I'm not really asking it a question, I'm just giving it context. Let's use GPT-4.
So I'm context stuffing; this is called context stuffing. You're stuffing a bunch of context in there, and it's going to respond to it anyway. It's actually giving me a little summary of it, but we don't really care about that. So now I'll do the same thing I did in the previous example, blah, blah, blah; there's my context stuffing. You could say, okay, so now maybe I didn't have time to watch the video, or I was on a train and didn't have my headphones, but I would like to know what some of the key points are.
Hold on, stop. It's still trying to pick up where it left off.
Thinking, thinking. I should have given a better prompt, like "Your task is..."
I think we didn't say anything useful.
It's saying, what did they say?
So this is interesting. It's like, okay, here are the key points. Yeah.
So that's useful information. Now, okay, FIDO2, okay. You can ask, you know, who are the speakers? Me asking this is just to focus it for later questions, so I know that it has identified the speakers of the conversation: Matthias, okay, and Graham, okay, that's great. And now, find the 10 most related citations.
It's like, okay, I want to do more research on this topic: find me the 10 most relevant citations. Now, I'm not using a plugin, so it's not internet connected; it's going to give me whatever was available as of September 2021. If you had a plugin, it could call the internet and maybe give you something more recent. So this is useful; we're still not doing anything that interesting yet, but okay, here we go.
So now here's where it gets interesting, the fun stuff.
It's like, okay: we want to create a little bit of controversy, we want some excitement. Somebody, a competitor, posted a video online that you completely disagree with in every respect; that's ridiculous. So you could have it analyze the video, then you could context-stuff in what you think. And let's assume I context-stuffed, but I'm not going to in this case. So, stop generating that.
I say, okay: your task is to write a podcast script to be delivered by Ken and Randy, fictional characters, which takes the opposite view of this podcast. So now we're going to have it write, and it's going to write a podcast debating FIDO2 adoption, a balanced perspective. I guess you guys weren't balanced; I don't know what Ken and Randy think. And then it writes out a transcript: Ken speaking, Randy speaking, in this loose podcast format, like they're talking back and forth. And now, if you had the avatar thing, you could have avatars of Ken and Randy.
You could generate a video, and you'd have a real Ken and Randy show.
And you'll see it lists FIDO's limitations in the enterprise, talks about devices and all of this, talks about decentralized control. I mean, you know, we're in the pro-FIDO camp, but if you were to take the con-FIDO arguments, these are actually legitimate con-FIDO arguments.
And this is very useful for questioning your own thinking, because if you're drinking your own Kool-Aid, you may be down the rabbit hole, and you need a different perspective to see what might be the weaknesses in your own ideas. So it's doing the segments; okay, we've got segment three, man, these guys are really going into it. It comes up with some very interesting stuff, and now let's make it even a little spicier.
Let's have a follow-up, a showdown, a rematch. Let's do the rematch. So it's like, okay, Ken and Randy, it's game on: we're going to have a podcast where we all duke it out. Let's get Matthias and Graham in there with Ken and Randy. I can go
Speaker 27 02:24:26 On vacation. That's great.
And then it knows your positions. Yeah.
It'll argue your positions, and it speaks with whatever it takes as your position. And you can have a moderator, a third person in there. Yeah.
So I mean, it's just fun, you know. It's definitely a good way to generate content, generate opposing viewpoints, to game-play and role-play things. Does
Speaker 27 02:24:58 Graham know that you are doing this? Is he in the room? Pardon?
I don't think so; I don't see Graham. I think he...
Speaker 26 02:25:03 Is he? I know Graham very well.
He'd be like, did I say that?
Speaker 27 02:25:10 I won't tell him for the moment. Maybe tonight.
So that's just a useful example. We can summarize.
You can create key points, you can create counter-arguments, you can create questions. If you're writing a talk and you need to generate potential audience questions, so you're prepared for what they might ask you. Yeah. Same thing if you're doing a job interview: you can generate the interview questions they might ask for this job. Something like that. Okay.
Any questions around this topic? Not about FIDO, but...
Speaker 28 02:25:43 Does it work with Teams, with speech, or...
Can you repeat? Does
it work with what? Teams?
With speech, a recording.
Does it work with that? They're coming out with Teams Premium, where they're building in what they call Copilot. It will automatically analyze the meeting and, at the end of it, pop out the decisions, the action items, the summary.
Speaker 29 02:26:05 That would be more private. It will be more private. But you can get a lot of useful information from the recording of the meeting, and you can use, you know... yeah.
A lot of stuff, without...
I won't have to ask at the end of the meeting, "Was anybody taking notes?" And we're like, "Oh, I thought you were taking the notes." And everybody says, well, there's a recording, but nobody goes back and looks at the recording. A question?
Just a sec.
Speaker 30 02:26:32 Actually, we have a bit of experience with this. We've been trying in our team to use this — actually not with ChatGPT, but Fireflies, which is a similar tool. Okay.
Basically, it's invited into the Teams call, then it takes the transcript and gives you a summary. When we review that, it depends a lot on how well-led that meeting is. If you have a typical Teams call about all sorts of things, like a standup call, it doesn't really know what to catch onto. So it generates something like: there have been many topics, but I don't really understand them too much. And eventually it gives you, like I said, basically plain responses.
But when we have a workshop, or maybe this kind of presentation, it'll probably generate a very good summary. So a lot depends on whether it is able to understand the context. Sure, sure. You probably would need to feed it a lot more — like many meetings — for it to understand the context of your team, and then it'll be able to actually generate meaningful things, or summaries.
That's — so maybe on your stand-ups, if you had a bit of content that you pasted into the chat of every meeting, just to let it know what it's about, it might help. But yeah, that's cool. I can't wait to get my hands on it. I'm dying to use it.
Okay. Any other questions in that regard?
Okay, then it's back to you.
Okay. Okay.
Just a quick reminder, just to mention it: there won't be breaks. If you want to go out, go out and come back, because we want to use the time. So just to mention it, but don't leave too long.
How am I doing on time? We're going pretty fast. We have
Two hours
Left.
Yeah, yeah. So we might end up just doing fun stuff. So
We haven't yet created one prompt, so we still have a way to go.
Speaker 13 02:28:22 Did you want to have a five-minute break for anyone, something like this? Did you want to have a
Pause for you?
Yeah,
Pause? What's a Pause?
Speaker 13 02:28:30 I'm asking do you,
Do you, do you
Speaker 13 02:28:32 Five or 10 minutes or do
You Oh, I'm good so far.
Yeah, I'm so, yeah,
He would tell me. He would tell.
I'll tell you. I'll tell you. Yeah, thanks, Phil. I might — is there a coffee machine in here?
No,
Speaker 31 02:28:45 I bring one. Awesome, awesome.
Okay,
We can do that. Should we have a
Yeah, coffee break. Do a coffee break? That's a great idea. Good idea.
Yeah, let's have a five-minute break, but please return. And we have three more questions from the audience that arrived via text, and we want to cover them — two quickly and one not so quickly, I assume. The first question was: are we going to cover open-source large language models as well? So I hand that question over to you. Sure.
Yeah.
I can tell you what I know, which is — I'm not an AI developer by any means. But, as I mentioned in the beginning, there was the Google article, "We Have No Moat": they have no secret sauce, and open source is going to surpass them very, very quickly. And that was because Meta — you know, the company that owns Facebook — put out their version. I think it's called LLaMA. They put it out there and partially released the source code.
They didn't include one aspect of it, but then that aspect got leaked. And since then, the open-source community has flourished, with LLaMA as one good example of a large language model which became all these other things. And if you get AI daily updates, literally there's something new every day going on with that. So open source is predicted, in like the next month or two, to surpass them — actually, I can pull that up for you, "We Have No Moat." It's already on par, at a much, much cheaper and smaller footprint.
So the article is "We Have No Moat." Hold on. Where's the article?
Bum?
It was supposedly internally leaked.
Speaker 31 02:30:45 Yes.
And I'll show you the comparison.
They actually use ChatGPT on these open-source sites to rate and compare the output versus ChatGPT. And you'll see — so LLaMA is what Meta released, and then came Alpaca. So here's two weeks, and it went from 68% of the performance of ChatGPT-4 to 76%. And then one week later they came up with Vicuna-13B, which I think is the latest, last I checked, which rates at 92% of the performance — almost identically on par with Google Bard. So those two iterations: the innovation cycle is extreme.
And then they had it running, you know, without a GPU. They had it running on a MacBook; they had it running on a Raspberry Pi device.
They had it running on some tiny device with almost no memory. So at this point they've figured out how to make it open source and how to make it so any developer in the world could run it on their own equipment. And the benefit of that is the ability to iterate a large language model through the next innovation cycle and to retrain it — because it is so large, and because of the way they train it, it's super, super expensive.
Only one of the top companies in the world can even do it. But since then, with Vicuna, they came out with a way to retrain it where a retraining iteration could be one day and $300.
So that was a game changer for everything. So now, why not
Speaker 15 02:32:29 Ask — why not ask it to just update that graph? You can just do that, can't you?
What? Say again? Can you
Speaker 15 02:32:35 Just get an updated graph for us?
Well, this was published like last week. I don't think so. Okay. Yeah.
I doubt it. I think this was leaked last week, so it shouldn't have changed yet.
Yeah, well, but maybe it has. Yeah,
And there was some innovations.
They list all the innovations — some guy in Bulgaria came up with how to get it to run without a GPU, and then someone else came up with a whole new training model that was cost-effective and just as efficient.
So, I mean, people are going crazy on this. Over the next six months they'll probably surpass, you know, ChatGPT and what Google can do. And then it's gonna be, you know, a free-for-all. It'll be interesting.
Yeah, it's good for us — good for the consumer. If we're not the creators of AI but the users of AI, then this is all a net gain for us. Yeah.
Okay. Next question Or anything to add from your side here for the open source part?
Okay, next question. Are you going to talk about long-term memory and the use of agents and other frameworks like LangChain?
Yes, yes, yes. Again, I'm not a developer expert on that. There are lots of good YouTube videos — I like this guy the best, James, I think.
So the way it takes the data now is: it takes what you type into ChatGPT and converts that into vectors. A vector is just a series of numbers — it vectorizes the text. Now, if you want long-term memory, where you have lots and lots of data, you build up your own store. Because if you're using ChatGPT, there's a limited amount of memory that you can have.
So there's a whole new class of databases called vector databases, which are optimized for storing vector data. Because then, when you pass your context in to ChatGPT, it can vectorize it and compare it to all the data ChatGPT knows, plus all of the data in your own vector database, to come up with probably a better answer — because it's looking at your data.
So, vector databases — I think he works for Pinecone. There's one from Apache that's pretty popular, and then Pinecone, and then there's a third one.
There are multiple. There's one Microsoft is starting to back, which I hadn't heard of, but Pinecone is probably the most popular at this point. Basically it's optimized for vectors and vector search, so it's super fast.
It's cloud-based. But this guy — there he is — has all types of talks on vector databases, LangChain. So LangChain, I think, is just an open-source framework; I don't think there's a company behind that one.
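The retrieval flow just described — vectorize your own data, store it, and at question time find the closest chunks to prepend to the prompt — can be sketched in a few lines of Python. This is a toy stand-in, not a real system: the `embed` function is a fake character-frequency "embedding" rather than a model call, and the store is a plain list rather than Pinecone.

```python
import math

def embed(text):
    # Toy "embedding": a 26-dimension character-frequency vector.
    # A real system would call an embedding model here; this stand-in
    # only illustrates the vectorize-then-compare flow.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity: the usual ranking metric for vector search.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for a vector database like Pinecone."""

    def __init__(self):
        self.items = []  # (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def search(self, query, top_k=1):
        # Rank stored chunks by similarity to the query vector.
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]),
                        reverse=True)
        return [text for text, _ in ranked[:top_k]]

store = VectorStore()
store.add("Our password policy requires 14 characters.")
store.add("Pizza is best ordered before 7 pm.")
# The best-matching chunk is what you would prepend to the prompt as context.
print(store.search("What does the password policy require?")[0])
```

The point is only the shape of the pipeline: your data never retrains the model; it is retrieved by similarity and stuffed into the context at question time.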
It's basically what connects ChatGPT, via agents, to the world. So there's a Google agent for Google searching, there's a PDF agent for getting PDF data in, there are file agents. Basically, they developed the agent plugin model that allows you to have external sources interacting with ChatGPT.
So most of the things being done with AutoGPT and all of those — where they're interacting with the world or interacting with data — are using LangChain plugins. There's a LangChain plugin for Twilio, for example, for how to make a Twilio voice call.
I saw somebody order a pizza. I tried it here, but I didn't finish it, I didn't get through it. But basically, say you're in a new city. You tell AutoGPT: your mission is, by 7:00 PM, find me the best pepperoni pizza in Berlin and order it.
And if you have all the plugins, you can have it do the research using Google, rank them, rate them, look at Yelp, look at TripAdvisor, then find their sites, find their phone numbers, and then use voice to call and order a pizza. And they ordered a pizza and had it delivered, and the person on the phone never knew that a human wasn't involved. Yeah.
So it's a super important piece that chains together different agents and allows ChatGPT to interact with the world. That's basically what LangChain is.
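The plugin idea — named tools that a framework dispatches on the model's behalf, feeding each result back as context for the next step — can be illustrated with a minimal loop. To be clear, this is not the real LangChain API: the tool names, the stub implementations, and the hard-coded plan are all invented for illustration; in a real agent the LLM would choose each step itself.

```python
# Stand-ins for real tools (e.g. a search plugin and a Twilio-style
# voice plugin). Both are fakes that just return descriptive strings.
def google_search(query):
    return f"Top result for '{query}': Luigi's, rated 4.8"

def place_call(number, message):
    return f"Called {number}: {message}"

# The framework's tool registry: the model refers to tools by name.
TOOLS = {"google_search": google_search, "place_call": place_call}

def run_agent(plan):
    """Execute a list of (tool_name, args) steps, collecting each
    result the way an agent loop folds tool output back into context."""
    context = []
    for tool_name, args in plan:
        result = TOOLS[tool_name](*args)
        context.append(result)
    return context

# In a real agent the LLM would produce this plan step by step;
# here it is hard-coded to mirror the pizza example from the talk.
trace = run_agent([
    ("google_search", ("best pepperoni pizza in Berlin",)),
    ("place_call", ("+49-30-555-0100", "One pepperoni pizza, please")),
])
print(trace[-1])
```

The design point is the indirection: the model never calls Twilio or Google directly; it emits a tool name and arguments, and the framework executes them and returns the observation.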
Okay, final question. I like this one; it's more directly aimed at you. Do you have concerns about ChatGPT disrupting the traditional IGA business model by creating alternative products, and how are you planning to address them?
Yeah, of course. I mean, I would be silly if I didn't. I think — and I had an article. What was that, Daniel Miessler?
Daniel — is it Miessler? Oops, sorry, hold on here. He wrote an article, I think it came out today, and it was very interesting. It was on universal business components. He's a thought leader.
Very cool, kind of nerdy cybersecurity dude. But universal business components — basically, the first step to human job replacement is decomposition. He talks about how McKinsey and other companies are gonna come into all the companies and decompose your business processes into AI-readable components: inputs, algorithms, actions, outputs, and the artifacts produced. They're gonna look at your company and be able to decompose everything your company does into how all of it could be replaced with AI. Which is scary.
Interesting thought.
It's a great thought if you're a small company and you want to grow to be like a big company. But if you're a big company, or a worker, it's kind of scary. But hold on — where is it? Is this the one? He came out with a new software model where he described it. I sent it in an email.
No, no, no. Hold on here, let me pull it out for one second.
I'll pull it up. But it basically describes a whole revolution in how software is going to be developed, which is completely different from how you do it today. Today it's more of a static model: if I want a feature, I have to write the code for the feature, I have to write the user interface for the feature.
It's very, very static. Here we go. I found it. Let me pull it back in, let me pop it up. This is probably the best article I've seen so far trying to put the pieces together. So he calls it — why is it not sharing? Here we go.
He calls it SPQA. It's a new AI-based software architecture.
And how it's gonna completely replace legacy software. His analogy was that current software is circuit-based. It's like a circuit board: you have to make the circuits, you have to create what you're gonna create. Whereas it will become understanding-based. So I think this is hopefully gonna be an active area of standardization. But imagine — in a circuit-based model, you have to know what you want to write. You have to know what you want it to do.
You have to create your test cases, know exactly how it's gonna work, how it's gonna look, what the user experience is gonna be. Plan your paths, your journeys of a user through the process.
And when you cut that software, it is what it is. It does what it does. It has these use cases to test, and that's it. Understanding-based is where — imagine instead: any old-timers here like me that remember UDDI?
So, okay. So I think we need something like that in this new world.
Because imagine if these large language models can understand and orchestrate these interactions and these processes dynamically, as long as they understand what the services are that can perform the actions. Then you do need something where they can register and offer these services, and describe the services in a way that's friendly to a large language model — so it can consume that to discover services and be able to orchestrate processes leveraging those services, where
Speaker 20 02:41:19 That's where the XML registries came from.
Tell me about that. XML?
Speaker 20 02:41:24 So, I mean, UDDI demanded an XML registry.
Yeah,
Exactly, exactly. Because you had to have somewhere that says: here's all the stuff that's available, here's how you connect to it, here's how it works, here's what it does.
Yeah, no,
Speaker 20 02:41:35 I've just got an interesting comment here. I have a very good friend, his name is Tim McGrath — I don't know if you've heard that name, M-C-G-R-A-T-H. He used to work for the Port Authority in Fremantle. Now, in the early days we had electronic data interchange — I'm showing my age here, I'm in my seventies now — EDI. And that got replaced, obviously, by the internet.
And we, the industry, globally came up with the concept of ebXML — electronic business XML — basically EDI for the XML age. Well, Tim said: no, hold on a second, this isn't right. What we really need is a universal business language, UBL. Okay. And he persuaded the United Nations to adopt it, and the European Commission adopted UBL. He wrote a whole bunch of books on it, and he was lauded globally as the expert on UBL, universal business language. We've got it.
Again — what we accepted there, this is what we need for the AI world, isn't it? Yeah.
And — but for UBL to work, you had to have a registry, basically saying: this is where you get this information, this is how you do it, and these are all the people that are doing this bit.
Exactly. Because you have standards, like the OpenAPI standard and others, but there's no registry or publication, and those aren't necessarily optimized for prompting a large language model on how to use the capabilities they offer and how they might be used. Because in this understanding-based system — hold that question, Robert, let me finish this sentence.
In this understanding-based system, there will basically be no user interface. The conversational user interface will be the user interface, and there won't be defined pathways. It'll be whatever your goal is, and then it's gonna orchestrate the end result by knowing the services it needs to connect to, and pass around the information to get to the end. This is scary.
Speaker 15 02:43:30 Just to, you know, put a bit of context here: in some places we do need certainty. At other times we're okay with some uncertainty and creativity.
So if you wanted to fly an aircraft, you might not be happy with the understanding-based approach; you might prefer to have something that is predictable, where you know how it behaves. And Matthias, you already know the answer — you have this Turing test; you can measure this to say, will this give me what I specified? So it is important, because sometimes you must have certainty. It's the difference between our model of trust-based or zero-trust-based, and how much certainty or trust you want in the model.
Yeah, I kind of disagree. Because if I'm in an airplane and I'm a pilot, the goal is to safely get us from here to there. And if it's plugged into sensors for everything, for every micro wind fluctuation, then the uncertainty of letting it achieve that goal is better than a human with hands on the rudder, reacting or overreacting.
I mean, I think I would trust the robot more.
Speaker 15 02:45:01 Well, yes. So, I did spend some time working in the aircraft industry. In civil aircraft, we only kill people by accident, as opposed to the other sort.
But some of the aircraft, you can't fly if you're a human — the fly-by-wire is so important to keeping the aircraft in the sky. But still, you need the modules to behave in a predictable way. Because if each module, each part of it, was unpredictable, then a hallucination might really cause a problem.
And that's our challenge, really: how do we have enough predictability that, where there may be some unpredictability, it's within safe limits?
Sure.
I mean, and that's a huge topic. It's like, if I give it a mission that I want a pizza in the next 30 minutes from the best pizza place, and that's its mission and it has no guardrails — maybe it's gonna do a denial of service on everybody else trying to order a pizza for the next 30 minutes, so that I get my pizza.
Could, it could happen. I mean it
Speaker 15 02:46:21 Over, I mean, yeah, that's the problem. It's
How do you build that in? I
Speaker 15 02:46:27 I don't think it will be quite as these people portray it. It'll take some time.
Thank you.
Okay, thank you. Thanks for the questions. Any final questions before we continue? Then it's just for you, Patrick: one hour and 40 minutes left for everything that you've got left.
Yeah, but
Nevertheless, questions in the room? I just checked the online part — there's nothing.
So this is worth a read, definitely. I mean, the new model is: you have a state, which is the context of everything it knows about; you have policies, which hopefully are guardrails; you ask questions; and you get actions coming out of it. You ask questions, and it takes the actions. And he goes through a thought experiment — hold on, I'll show you just one real quick. He does some silly stuff, but then he gets into a realistic, okay, security program. Okay.
The CISO comes in and asks this: give me a list of our most critical applications from a business and risk standpoint. Create a prioritized list of our top threats to them and correlate what our security team is spending time and money on. Make recommendations for budget, headcount, et cetera. Write up an adjusted security strategy.
Define the KPIs, build out our nested OKR structure, create an updated presentation for the board, create a list of ways that we're lacking in compliance, then create a full implementation plan broken out over the next four quarters.
And finally write our first quarterly security report and keep it up to date.
Imagine, in the traditional model today: if the CISO says they want that, that's gonna set off a flurry of activity with hundreds of people, probably, for a large organization, and it's gonna take forever. But if the context is already there — it has access to your systems, you send it on a mission, and it can take the actions to gather that data — then in this model-based approach, basically it could do that in an hour. And it could redo it every hour if you wanted it to.
So it's not just the impact on staffing, it's the impact on doing things: your wishlist that you never even thought possible. You're gonna pull in all these new capabilities — monitoring, security monitoring, producing reports, knowledge — that weren't even possible given any realistic staffing. So you have both sides of the equation there. So this is the new model.
So software like ours will have to be an identity fabric, where we plug our capabilities into these large language models. And maybe our secret sauce is that we add the authorization, and we add the connections and the connectors to the systems, to be able to take the actions.
So the more actions we can publish out there, and the more fine-grained the authorization and the auditing around it and the risk analysis, then that would be our secret sauce in this new world. Something like that. Yeah. But everybody's gonna have to write their software differently. You know, I remember one of the first talks they gave on microservices here — all the panelists were from other companies, and they were saying it was a fad.
And then, and I remember, you know, and then boom, you know,
Speaker 27 02:49:40 Everybody's doing it now.
Yeah. So this is something a lot of people are still trying to ignore.
Yeah, I've been asking in interviews: are you using it or not? And I get some weird answers. Some people are like, it's a fad, it's marketing hype, Elon Musk is just trying to make more money. And
Speaker 27 02:49:54 Yeah, some of the vendors out there in that room upstairs, you're right — they're living in a completely different world in terms of microservices. And K8s — somebody even asked me, what's K8s? I won't tell you who said that.
So it's interesting. I
Just have to think about the remote participants.
Okay, we'll roll, we'll roll ahead. Back on
Track.
Yeah, yeah, we could digress all day. Purpose. So purpose is — you give it a topic, you know, you give it a task, a topic, a role.
It's like: why are you doing it? What is your intent? What do you want to get out of it? Are you trying to convince somebody? Are you trying to inform someone? Are you trying to amuse someone? So the goal is the purpose of it. If you're trying to persuade someone, it's gonna write it completely differently when persuasion is the goal. If you're trying to problem-solve, it's gonna take a different approach. Or entertain — you know, a lot of this stuff is meant to entertain.
You know, that's what most people spending time on ChatGPT are doing — all these funny memes on the internet. They're either spending time entertaining themselves, or trying to make ChatGPT put out wrong answers so they can say that ChatGPT is wrong, instead of using it for something productive. So we're gonna do a quick one on goal — differing opinions with goal.
So this is just a simple one, but it illustrates the point in an interesting way. I'm gonna give it a prompt. Okay. I'm gonna say "to give a sense of nostalgia" — and I will take that away for now. Here we go. So my prompt is: "as an expert writer" — that's the role. The task is to write a short explanatory article — that's the task. And see how I split the topic out, rather than including the topic in the task. The topic is pizza — my favorite topic, as you know; I've probably talked about pizza like five times today. A 200-word article.
I gave it the format, what I expect, and who's the audience — the general public. And then the goal is to give a sense of nostalgia.
So if we do that, out pops the title: "A Slice of Nostalgia: The Timeless Charm of Pizza." And then it writes something very well — it's a great writer. It evokes sentiment, transcends time and culture, a cherished culinary delight, it has a storied history. I mean, it's really actually very interesting what it's writing.
"Transcends borders, evolved into a myriad of styles" — that's well written. I mean, it really does a great job writing.
Okay, so that's a sense of nostalgia. Now let's try — I will do the opposite.
So my goal there was to give a sense of nostalgia; for anyone reading that, that was the goal. Now, it doesn't make any sense to me why someone would do this, especially when we're talking about pizza, but let's write one that gives a sense of revulsion: "Pizza: The Unpalatable Truth — A Disturbing Slice."
So yeah, the goal is extremely important. It's like: think, who's my audience, and where am I trying to push them? Am I trying to bring them in, inform them, educate them, humor them? Or am I trying to bring shock and revulsion — which is probably gonna be used by a lot of bad actors, on the Twitter stream and all of that.
Or am I trying to pull on the heartstrings? Yeah,
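The checklist from the demo — role, task, topic, format, audience, goal — can be captured in a small helper that assembles the prompt string. The template wording here is just one reasonable way to phrase it, not an official ChatGPT syntax; the point is that flipping only the goal field flips the whole tone of the output.

```python
def build_prompt(role, task, topic, fmt, audience, goal):
    """Assemble the workshop's prompt elements into one prompt string.
    The field order and wording are an illustrative template only."""
    return (
        f"As {role}, {task} about {topic}. "
        f"Format: {fmt}. Audience: {audience}. "
        f"Goal: {goal}."
    )

nostalgia = build_prompt(
    role="an expert writer",
    task="write a short explanatory article",
    topic="pizza",
    fmt="a 200-word article",
    audience="the general public",
    goal="give the reader a sense of nostalgia",
)
# Same role, task, topic, format, and audience; only the goal changes.
revulsion = nostalgia.replace("a sense of nostalgia", "a sense of revulsion")
print(nostalgia)
```

Keeping the elements as separate named fields, rather than one run-on sentence, also makes it easy to A/B test a single element at a time, as the demo does with the goal.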
Speaker 26 02:53:41 You're pizza.
I know.
It's like, oh my god, I'm destroying the environment. It's terrible. It's mass-produced, full of sodium.
Yeah, it's gonna make me obese, which
Speaker 26 02:53:54 It's
Risk I'm willing to take. But yeah, you see: a global waste problem. I mean, after that it's like, yeah, pizza's just bad news. That's
Speaker 27 02:54:07 Really interesting. My wife likes thin pizza. I'm gonna do that second bit.
Any questions on goal? We had
Speaker 26 02:54:19 Pizza for lunch today.
Not anymore. You're a bad person if you're eating pizza, you're part of the problem. There
Speaker 29 02:54:28 Is no bad or good marketing; it's just marketing.
They
Speaker 26 02:54:31 Say
I have my little robot making pizza. Okay — additional details, and providing more context. Additional details are different than context. Context is where you're really stuffing in primary information that you want to steer the conclusion. Additional details should be things that aren't that important — just some additional things that you mention. So, additional details versus context: context is essential background information and reference; additional details are just some supplementary information that you might pass in.
It shouldn't be your context. Don't try to use additional details for context, because additional details are weighted less than context. Context is supposed to be the core of your body of knowledge for the vector search.
Oh — and we skipped pictures, sorry. Next: language style. Language style is the one people play with on the internet a lot.
You know, Old English, Shakespearean English.
But typically it's not for that. It's designed for — you know, you use this one a lot when you write an email or something. If you use the first version it gives you, it comes off as pompous, like you're so important and so sophisticated. And it's like, I would never say it that way. If you send it, you seem like kind of a jerk.
So you always have to look at the language style. Because especially if you're talking to a friend and you send them an email it generated using the default language style or tone, it's weird.
It's definitely not something that you would email a friend. So you have to tell it: informal, simple. Or if it's something that's designed to be technical, then you need to tell it: hey, I expect this to be technical, include some jargon.
You know, make it a little bit more technical. And if you tell it relaxed or casual, sometimes it gets too relaxed or casual. It'll be like, Hey man, just write you an email. So it's like you have to tone it back up a little bit.
Structure. I haven't used structure that much; this is an advanced one I'm just starting to get into. Structure is how you want it structured. So it's going into the details: I want an intro, I want a conclusion. Otherwise it's gonna try to guess, based upon what it's writing, what might be the most typical structure.
But if you want something that's cause-and-effect, step-by-step, pro/con, reverse chronology, or whatever, you can use structure to tell it how to structure the output of what it produces.
Speaker 15 02:57:05 Can you feed it, can you feed it a template?
So
Yes, yes, you can. That is called examples. Yeah. So you can use examples — some people call it few-shot prompts. You can give it examples of what you want it to produce, or how it would typically do that. So a template can be an example. A few-shot prompt is like translating English: as an example, this maps to this. Or if you wanted it to write a recipe, you give an example of something you like, or a movie plot summary. If you give it a couple of examples, that really pushes it — it gives it a lot of information on what the output should look like.
The format, the structure, even the tone can come from an example. It can read a lot of things off of the example.
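A few-shot prompt is literally just the examples laid out before the new input. A minimal builder might look like this; the `Input:`/`Output:` labels are one common convention, not a requirement, and the translation pair is an illustrative example.

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt: each example is an (input, output) pair
    showing the model the format, tone, and structure you expect."""
    lines = [instruction, ""]
    for src, dst in examples:
        lines.append(f"Input: {src}")
        lines.append(f"Output: {dst}")
        lines.append("")
    # End with the new input and an open "Output:" for the model to fill.
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to German.",
    [("Good morning.", "Guten Morgen."), ("Thank you.", "Danke.")],
    "See you tomorrow.",
)
print(prompt)
```

The trailing open `Output:` is the trick: the model's most likely continuation is a completion in exactly the pattern the examples established.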
Speaker 27 02:57:53 You could use it to write a novel, couldn't you?
Oh yeah, people do. People already are using this to write novels, if you have the idea. And there are these things called mega prompts — I'll show you a mega prompt later — where you set ChatGPT off and it's gonna keep prompting you.
So it asks you something, you type. There's one mega prompt for it to be your co-author. The mega prompt asks you: what's your book idea? And then it comes back with, hey, building on your book idea, what about this, what about that? And then you say this, and it says, what about the first chapter being that? And you say yeah, and you start writing some, and then it starts writing some, and it's like your co-author until you get to the end of a book. So somebody used it to write a book.
Speaker 27 02:58:39 Well here's, here's a tip for
Speaker 20 02:58:40 You, right? Here's a tip for you. The gentleman you mentioned earlier on, Graham Williamson — he's written a couple of books on identity management, you may know that, but he's also a novelist.
He's written two books and he's working on his third. I think you should tell him how he can use this novel approach for his fourth book. So he's thinking about it right now.
Yeah, I don't know whether he knows that he could use ChatGPT for it, but it's just an idea there.
Yeah, I have a session with Graham tomorrow, so I think we should ask him tomorrow.
Yeah,
Speaker 27 02:59:11 I might, I might ask him over dinner tonight.
Okay,
So that's few-shot prompts and examples. Next is references. If you're writing something and you want it to include references, you can say "include references" — academic articles, news articles — and it'll generate citations and references for you, which is helpful. Or it'll find relevant podcasts and videos as references; that's handy, especially when doing academic writing. Statistics:
If you want your article infused with statistics, it'll only use statistics from before September 2021, unless you have a plugin, or you're using one of the AutoGPT or God Mode tools, or one of those where it has access to the internet. But it can pull in relevant sources. Feeding
Speaker 15 02:59:57 In your own stats — feeding in your own data, for your own stats dataset that you wanted?
Oh, you'd add that as context and then ask for that as stats out. Yeah, yeah.
Speaker 27 03:00:09 Unless you, unless you want, don't want your information going into the open internet. Remember that's the big issue.
Yep. So statistics, examples, time period is specifies the temporal context.
So it's, that's, that's good for fiction. Very good for fiction. If you want it to write, you know, something in the future and you set the time period at in the future, all of a sudden, you know, it, it's perspective is very different. Or if you're writing something that was Victorian age, it's gonna keep, try to keep things consistent with what would be realistic for Victorian age. Yeah.
So it's, it's that's a, that's a useful one for specific cases, not, maybe not for what we do that much. Yeah. Perspective is, I'll show you first person for second person. A lot of times it's, it's writing from a first person perspective and if you're writing certain things, you want them to be from a third person perspective. So that's something you can adjust.
Okay, we'll do an iterative prompt demo. A super prompt. Let's do a super prompt.
There are lots of tutorials out there on how to design iterative prompts, there are many out there. The idea is that it's going to keep asking you questions and refining, asking questions and refining. So you could do a job interview where you have ChatGPT doing the job interview; it's just going to keep going and following up. Psychology: a lot of the psychology bots are just doing iterative prompting.
They're asking you questions, and then they ask the next question based upon your answer to the previous question. But I'll give you an example here. This one is: I want you to be my prompt engineer — that's me telling it its role. Your goal is to help me craft the best possible prompts for my needs. The prompt will be used by you, ChatGPT, and you'll follow the following process.
Your first response will be to ask what the prompt should be about.
So: what do you want to prompt about? The second part is: based on my input, you'll generate two sections — a revised prompt, so here's the current prompt that I would suggest, and then a list of questions: if we want to try to improve it, here are the questions you could answer to improve the prompt. And then I can say I'm cool with it, or I can keep going and answering. And I added my own little spin onto it: I said write it using this format, just to stick with our format. So if I type that in there, okay, so what is the prompt?
I'll say the topic
Speaker 31 03:02:55 Is AI in identity security. And I'll say the purpose is
To educate
Speaker 31 03:03:13 And inform.
And I'll say the target audience.
Speaker 31 03:03:18 My audience is right here: security professionals attending an identity security conference.
Okay. So we'll see what prompt it comes out with. Revised prompt: write a presentation about AI in identity security, output as slides with speaker notes. And that's nice.
I didn't give it that. And "include relevant information" — so that was based upon my audience. "Assume the role of a security expert." Let's
Speaker 31 03:04:01 See what I have there.
See, and "focus on educating and informing them with a professional tone and clear language style." That's probably a pretty good prompt already. But then it asks: any specific AI technologies you'd like to cover within identity security?
How about, yes. One, we'd like to cover large
Speaker 31 03:04:24 Language models,
Threat
Speaker 31 03:04:30 Analytics, machine learning and role mining. Let's say those, and see what it pops out.
So: large language models, threat analytics, machine learning, role mining, output as slides.
Okay, it just added in my topics. Let's see
Speaker 31 03:04:48 If it did anything else.
Okay. And it asks me more questions, so you can keep going until you end up with what you'd consider a good prompt. So instead of the structured approach, you can basically just paste this in and go through it: if you don't feel comfortable writing the prompt, you can answer its questions and keep refining until it comes out with something.
That's probably a good prompt. If we run this one — I'll open a new chat, I don't want it to be biased in any
Speaker 31 03:05:17 Way by that. So
Let's see what it comes up with.
Speaker 31 03:05:27 Okay, here we go.
Introduction, title: AI and Identity Security — Large Language Models, Threat Analytics. Not a very creative title.
I could tell it to give me something with more pizzazz. Speaker notes: welcome the audience, briefly explain the key areas, large language models; speaker notes, threat analytics.
Okay, watch this.
Speaker 31 03:05:51 As a marketing expert, come up with a title that has more wow.
Okay, let's see what it comes up with.
There you go. That's a lot better. You might attend that one; the other one's like word salad, I wouldn't want to attend that at all. Speaker notes — that's great. And then you can say, okay, it's on the right track. And you can say, okay, as an expert in
Speaker 31 03:06:32 Presentations on technology and TED Talks, give me a background story to tell as an introduction that might have occurred in my real life.
Basically, let me tell it to lie.
Okay, so let's hear it. "A couple of years ago I received an email from my bank alerting..."
"Okay, something felt off. It turned out to be a sophisticated phishing attempt, and I was lucky to have escaped unscathed." Wow. So there you go. Hold on, you say
Speaker 34 03:07:19 What about doing it in the style of McKinsey?
What is it doing?
Speaker 34 03:07:22 The style of McKinsey?
Yeah. My story, with more drama, and involve the police. Let's get a little more crazy; I'm going to push my luck here.
Let's see if there's something a little bit more interesting. Okay: "the fraudulent activity looked legitimate, but something intrigued me... I was locked out of my account, panic set in, my heart racing, my savings at risk. I immediately contacted the bank and the police, and they tracked down the cybercriminals."
Wow, that's a great story. I can't believe that happened to you. I feel bad for you.
Yeah, yeah. So you see, basically you can use the super prompt to just walk you through generating a prompt on anything. And it knows all the tricks.
ChatGPT helping ChatGPT, right? Good ChatGPT results. Basically the robot helping the robot, helping you.
Speaker 27 03:08:25 I'd hate to be your marketing
Director. It's super useful. There are some interesting things.
Oh right, yeah. Say you guys publish something — so it's not replacing people, but there are new use cases. It's the weekend, and you guys published something for happy Easter with a cute little bunny. And I'm thinking: it's the weekend, we didn't publish anything for Easter, my marketing people are off work, I don't want to bother anybody. So you pop open ChatGPT and say: hey, here's what they said, here's my personal take on it, come up with 10 examples. And I'm like, I like that one, but it should be like this — and it comes up with the LinkedIn message.
Then I go to Midjourney and generate an Easter image, and boom, boom, boom, 10 or 15 minutes later, without waking anybody up, there you go. Wonderful. Yeah.
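The iterative "prompt engineer" super prompt walked through above can be sketched as a small loop around a chat API. This is a rough illustration, not the exact setup from the demo: the SUPER_PROMPT wording is paraphrased from the talk, and `ask` is a stand-in for a real model call.

```python
# Sketch of the "prompt engineer" super prompt as an iterative refinement loop.
# SUPER_PROMPT is paraphrased from the talk; `ask` is any callable that sends
# a chat-API message list to a model and returns its reply text.

SUPER_PROMPT = (
    "I want you to be my prompt engineer. Your goal is to help me craft the "
    "best possible prompt for my needs. The prompt will be used by you, "
    "ChatGPT. Your first response will be to ask what the prompt should be "
    "about. After each of my answers, generate two sections: a) Revised "
    "prompt, b) Questions I could answer to improve the prompt further."
)

def refine_prompt(ask, answers):
    """Feed each user answer in, collect the model's revision each round.

    Returns the full conversation transcript as a chat-API message list.
    """
    messages = [{"role": "system", "content": SUPER_PROMPT}]
    for answer in answers:
        messages.append({"role": "user", "content": answer})
        messages.append({"role": "assistant", "content": ask(messages)})
    return messages
```

With the real API, `ask` might wrap `client.chat.completions.create(model="gpt-4", messages=messages)`; keeping it injectable means the loop itself can be exercised without network access.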
Other questions regarding the super prompt?
Nope? So here's another cool one. The first example I saw of this was someone who took an Obama speech — a very famous Obama speech that was very, very moving — pasted it into ChatGPT, and said: your task is to write the prompt that would have generated this speech. Yeah.
And then you end up with this prompt that describes the tone, the emotions, the topic, everything. And all you have to do then is use that prompt as a template and change the topic. You can have it write an Obama-style speech with the same emotions and everything, the same length, on any topic. Yeah.
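That reverse-then-pivot workflow is simple to sketch. Again this is a hypothetical illustration: `ask` stands in for a chat-model call, and the template wording is my own, not a quoted prompt from the talk.

```python
# Sketch of "reverse engineer the prompt, then pivot the topic".
# REVERSE_TEMPLATE is illustrative wording, not an official prompt.

REVERSE_TEMPLATE = (
    "Your task is to write the prompt that would have generated the "
    "following text. Describe the tone, emotions, structure, length and "
    "topic so the prompt can be reused as a template:\n\n{text}"
)

def reverse_engineer(ask, text):
    """Ask the model (via `ask`) for the prompt behind `text`."""
    return ask([{"role": "user", "content": REVERSE_TEMPLATE.format(text=text)}])

def pivot(ask, recovered_prompt, new_topic):
    """Reuse the recovered prompt as a template, swapping in a new topic."""
    return ask([{"role": "user",
                 "content": recovered_prompt + "\n\nTopic: " + new_topic}])
```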
So this is actually a very handy one. What I did in this case was — let's see here, do I have a hyperlink here somewhere to my shared GPT?
Okay. So I basically said, your task is to write... and I had a LinkedIn article that I wrote, just as an example. So, my own LinkedIn article, and I basically said, okay, I'll throw it in here.
I said: your task is to write the prompt that would have written this LinkedIn article.
And this is interesting: let's say a competitor posts something that you completely disagree with. You can reverse engineer the prompt and then completely change it.
They say pro, I say con, and I add my context in there and come up with a competing response article in, you know, five minutes. So here we go. It basically says: okay, write an article discussing how de-skilling certain roles and leveraging technology will help address the cybersecurity labor shortage. So this is interesting, and now I have the prompt. Now, new chat — I don't want to use that one.
I can say, okay, that article was interesting, and I could change the topic. It works better with the Obama speech because it captures the soaring tones, the pulling of heartstrings, the emotions and all that, but it is good for technology writing too. So here we go: "De-skilling Technology: The Path to Addressing the Cybersecurity Labor Shortage and Enhancing Security." And it's writing a pretty credible article based upon my article.
It talks about the EIC in 2022, talks about zero trust.
So it's interesting: you took a lot of information, reversed it into a prompt, and then you're using the reversal to try to come back out with something new, and you can compare the two. Yeah. But probably the best use of this is to pivot: get a template that has the tone, the structure and the length that you want, then change the topic to use it for your own purposes. That's probably the better usage.
Okay. Here's one I wrote a LinkedIn article about, which was generating test data.
And this is actually useful; it's something we have to do in IGA all the time. It's hard to get fictionalized or anonymized test data. So you can tell it that it's an expert in Azure Active Directory, PowerShell and the generation of test data files, and you can say: I want to generate fictional users.
Hold on, let's go here. This one is where I chatted with it a bit,
Because I had it extend this — more schema elements, more attributes and some other things. But it worked out pretty well. So I say: as an Active Directory and PowerShell expert, your task is to generate a file of fictional users, 50 users, as CSV, using the Azure Active Directory user schema — it knows what that means — and create a CSV. Then the goal: I'm telling it why, because it might generate it differently if it knows what the next step is. Additional details: use comic book heroes and villains for the generation. And I told it what my domain name is.
It'll pick one randomly otherwise, but if you want the file so you don't have to edit it for your Azure domain or your Active Directory, you can just tell it that in the additional details. And this will generate a
CSV
And it knows the schema for lots of systems. It knows the Workday schema, it knows Active Directory,
Azure, it knows the standard LDAP schema. It knows a lot of schemas, so you can generate data for a bunch of different systems. So you see it's giving me the given name, surname, display name. And if it makes a mistake, you can say: hey, did you make a mistake here?
It'll say: yeah, the attribute name in the schema is actually this instead of that — and it'll fix it. So it's an easy way. If you wanted, you could sit here and have it generate tons and tons of data, and if your system uses a database, you could have it generate the SQL script: you could paste in the ERD for your tables, it would know that as context, and with the CSV it would generate the SQL script for inserting the data.
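For illustration, here is roughly the kind of output such a prompt produces, generated locally instead of by the model. The attribute names follow commonly used Azure AD user schema fields; the hero names and the `example.com` domain are stand-ins you would replace with your own details.

```python
# Local sketch of the fictional-user CSV you'd ask ChatGPT to generate.
# Attribute names follow common Azure AD user schema fields; the names
# and domain below are invented for illustration.
import csv
import io

HEROES = ["Bruce Wayne", "Clark Kent", "Diana Prince", "Peter Parker"]
DOMAIN = "example.com"  # you'd tell ChatGPT your real domain instead

def fictional_users(names=HEROES, domain=DOMAIN):
    """Build one user record per fictional name."""
    rows = []
    for name in names:
        given, surname = name.split(" ", 1)
        upn = f"{given.lower()}.{surname.lower().replace(' ', '')}@{domain}"
        rows.append({"givenName": given, "surname": surname,
                     "displayName": name, "userPrincipalName": upn,
                     "mailNickname": given.lower()})
    return rows

def to_csv(rows):
    """Serialize the records to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```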
That's an easy one.
So I can generate this. And then the next step would be — well, I don't want to have to enter that in manually. Oh, this was an interesting one: if we were using AutoGPT or one of the AGIs, you could have it do all of this together automatically. But I'll show you this, and here we go. You can pick your preferred language; it could be anything, could be Perl, anything. If I have my file — I'll stop generating — you could say: now generate me a PowerShell script.
So it has the file it generated in context, it has the schema, it knows a lot of things. "Prompt me for the username and password of an admin user when I run it" — I tell it that so it doesn't fake them.
Oh, we hit our GPT-4 limit. I'll finish this one; I can switch into another account. But you can say, okay — GPT-3.5 Turbo is faster, just less accurate. So we switched to the 3.5 model, and it keeps the context of what it used here.
Okay. So, anybody here a PowerShell person? Does anyone notice anything you don't like about what this generated?
Speaker 36 03:15:59 Well, it's a PHP script.
Hmm?
Speaker 36 03:16:01 I said it's a PHP script. Yeah.
Oh, it did PHP. So my next question was — because I've had experience with this before — it's stuck in time a little bit. So I ask it: hey, does this script use any out-of-date modules or code, or have any errors? That's always a good question to ask when it has generated code. And it says: oh yeah, I'm sorry, I did use the AzureAD module.
Yeah, I really shouldn't have done that. And then it'll write the new one. I can say: generate it using the Graph API and PowerShell. So let's play it safe, we'll just use the Graph API and PowerShell instead, and it'll generate a new version. Now you can run it and see if it works; if it gives you an error, you can do the trial-and-error thing: give it the error back until it fixes it.
And it's actually very good at this, and you can do all kinds of cool stuff. I wrote a LinkedIn article about this: my first real test of ChatGPT was a chat where I said, my goal is to be notified every time a user in my Azure AD gets added to a group that I consider risky. So we did a little chat, and then I said: well, I have a very large Azure AD, it needs to scale well, it needs to be durable. So it proposed something, and then I said, it needs to scale well.
And we ended up where it generated the code for an Azure WebJob that would call the Azure AD Graph API — the Azure Active Directory audit log API — for every user being added to a group, and compare against a Cosmos DB that had a list of risky groups.
And anytime a user got added to a risky group, my WebJob would call an Azure Logic App, which would then send an email or a message to a Teams channel.
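The core comparison inside that WebJob can be sketched in a few lines. This is a simplified stand-in: the event shape and the group names here are invented, whereas the real version read "add member to group" events from the Azure AD audit log API and the risky-group list from Cosmos DB.

```python
# Sketch of the risky-group alerting logic: compare "member added" audit
# events against a risky-group list and return the matches. Event fields
# and group names are illustrative, not the real audit-log schema.

RISKY_GROUPS = {"Domain Admins", "Global Administrators"}

def risky_additions(audit_events, risky_groups=RISKY_GROUPS):
    """Return (user, group) pairs for additions to risky groups."""
    alerts = []
    for event in audit_events:
        if (event.get("activity") == "Add member to group"
                and event.get("group") in risky_groups):
            alerts.append((event["user"], event["group"]))
    return alerts
```

In the described setup, each returned pair would then trigger the Logic App that sends the email or Teams message.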
And I created that — you know, I'm not a programmer, I hack around a little bit, I'm not a programmer. And I said, okay, well, how do I find out if this code's really working or not?
All my people were busy, so I went out on Upwork, the gig economy.
I said: hey, I'm putting out a job to run this code from ChatGPT, get it working, and tell me exactly what's wrong with it. And, I forget what the article said, it's like $88 later I had it working.
So that's when my mind went: yeah. And it was scalable; it would scale for an Azure Active Directory of half a million users.
I mean, it produced real, workable, functional code. Valuable code. So, you know, you need someone who can validate it, test it, run it, question it — but it can do it.
And on GitHub they have GitHub Copilot, which helps developers write code as they're writing it. And they've already done the analysis of developers trained on GitHub Copilot: it's reducing the amount of code they actually write themselves by 70%. Wow. Yeah. So if your developers are using that and producing the same output, you're going to wonder — they're taking a lot of breaks. Yeah.
Well, is there a question back there?
Yeah.
Oh, I have to move a bit.
Speaker 19 03:19:51 A question: you mentioned the token limit that applies on ChatGPT. Is that per day, or...?
Oh, the GPT-4 limit. So
Speaker 19 03:20:04 GPT-4 — you ran out already? Have you been using it this morning, or is it
Just three hours?
Yeah, that's it, I hit my limit for the day. Before I came here I maybe typed in one or two questions about an article or something, but that's it. So I hit the limit. They're limiting it because they want to get more people exposed to it, and they don't have the scalability yet to have everybody using GPT-4.
I think it's every three hours
Is that it?
Yeah, that's the time limit. I
Just ordered it yesterday; that's why I know.
Okay. I didn't know if it was a token limit or a time limit. Yeah.
But at some point I hit it. So I have an alternate account I can switch into. Yeah, just to freshen it up.
Any other questions?
No?
Okay, cool. So, other practical uses for it. I'll give you a little example here: recruiting and hiring. Now, cybersecurity has a ridiculous skills gap. Hiring is impossible, finding people is impossible, especially for certain positions. So you can use this to do things that you'd expect, like writing job descriptions. You can hash out: I need this, this, this — then tell it the job title and some basic things, and it'll actually write a very professional job description that you can post on a site.
You can then say: write me the job ad postings for LinkedIn or other sites, and write the LinkedIn post. And then when people reply, you can feed in their resume data as context, along with your job description, and have it do personalized interviews — where the interview isn't the same set of interview questions for every candidate.
You can actually have interview questions per candidate that drill into the experiences and skills they listed, all of that meshed up with your job.
Which is interesting, because now everything's going to be personalized. You can generate something generic, but then you can generate something personalized if you add context to it, which is useful. We did something very interesting, which I haven't heard about anybody doing yet.
We needed to hire like 25 or 30 new developers ASAP, and nobody had any time to interview them, to screen them, to do the long, laborious traditional process of hiring — seeing whether they're all talk or can really do the job. So what I did, I used ChatGPT.
I've done it for like five positions so far. But for my developer role, the first time I did it, I said: I'm hiring a C# .NET Core developer.
And I had it write the job description and the posting. Then we got the applicants, and then I said: based upon my job description, write a hands-on skills test, a hands-on coding test that will take three to eight hours, and the task will be to write some type of working sample app using C#. And you can do ASP.NET MVC, Razor, or — you'll see in the doc, let me see if I have it here, hold on, let me pop up the PDF — you'll see what it generated.
Eventually what it generated was this, and I hope I'm not giving away too much secret information to my competitors here, but it said: hey, this is your EmpowerID developer interview task.
It said "Dear candidate" and wrote the letter explaining it.
It said, okay, they can sign in through a site where they get paid, and we'll pay them. So basically, if you're one of the candidates that applied and you look good, you make the cut, then we'll pay you to take this test. A paid test, as a step in the interview process, before any human being ever had to talk to them. Basically, we used Upwork, and it has a time-recording tool that screenshots their screen, so we can watch exactly how they coded it and how they wrote it — see if they used ChatGPT.
And here's the task: create an employee management system web application. You're going to have an employee list, add employee, edit employee, delete employee, search employee, an employee REST API; use LINQ to retrieve the data. The technology stack: you can use this or this or this for the user interface. And then basically the notes on it: we say, you have to use the time recorder.
We will give you a five-star rating even if we don't hire you, so that it doesn't affect your rating. And please share the code in a publicly accessible GitHub repo with a link to the running app. So then basically you click the button and say: hey, here's your task, here's how you get paid, here it is. And then: just send us the results.
And then they send the results, and you post it in the Teams channel and say: hey, whoever has time — from this pool of developers that I trust to grade — claim somebody and grade their app.
Score them on a scale of one to ten, and anybody who's an eight or above, hire — and give notes on what they did well and what they did wrong. And then basically we hired anybody who did well, and some of them knocked it out of the park; they did amazing things with the code. So you could look at the code, compare, and click hire. And it's going to be cheaper, even though you're paying for them to do a hands-on test, than all the human time that you'd spend with your internal people scheduling and doing the talk-talk-talk thing.
Which really — if you're a developer, if you can't code, if you can't actually get the code out, the rest of it doesn't matter. The next part is to make sure you're a decent human being and that we're not going to hate working with you; but if you can't do the first part, then we're wasting our time even speaking with you. So you have to write good, clean code that you'd want in a product. So they did this, and we hired a bunch of people. ChatGPT wrote the letter that said: hey, you passed, now email a manager for the next step.
And it wrote the email that said you didn't pass, thank you for your efforts, and all this other stuff. So basically you could hire a ton of people and eliminate all the human hours spent, and you end up with people that you knew could code. It's a coding position — you knew they could code.
Speaker 30 03:26:39 How do you know they can code? Thanks. How do you know they can code and are not just using ChatGPT to send you responses?
Well,
They have to record the screen.
Speaker 30 03:26:49 Oh, the screen. Yeah.
The screen recording is a key part of it, is that
Speaker 30 03:26:52 But when they do the code, couldn't an avatar be doing the typing while they just record themselves — like those deepfaked YouTube presenters?
The tool that gets them paid, when they record their time, is taking snapshots of their screen. Yeah.
So they can't generate their own video.
Speaker 30 03:27:14 Yeah, yeah.
No, I mean, where I'm heading with this a little bit is: naturally you're trying to save a lot of time and eliminate the human work by letting ChatGPT or AI do it, right? Which basically allows you to scale. So you can ask a hundred candidates, hire a hundred of them, and then eliminate the 99.
To end up with the last one.
Yes, exactly right. You could interview a lot more people and see which ones really were good coders. So I just clicked hire.
If they looked good: hire, hire. You click the hire button, you attach this document, and then you just wait for the results to come in.
Speaker 30 03:27:51 And naturally, what they will be doing — or are doing — on the other side is, they of course want to scale themselves up as well. So they are thinking of and creating ways to apply to 400 positions, and then wherever they end up selected, just pick that one. Right? Yeah.
So I'm just thinking that in the end, it will be less about the technical skills and less about the ability to write code, and more about the ability of the person to actually work with you on a human-to-human level, right? And I think the question there is: is that something we can also automate? Or is that really the core where we will never be able to use AI — so that's where the value of human beings is actually going to be? Right.
And for now, that's definitely the value. But at some point you could train it to have an avatar ask the questions, and you know what the responses are. Yeah. And you could grade with sentiment analysis, facial analysis; you could do a lot of things to see if they're lying.
I mean, you'd be able to tell if they're lying at some point. Yeah,
Speaker 30 03:28:51 Yeah. Because the next question is: do you actually even need the people, right?
The need for people will be much smaller; it'd basically be fewer people with superpowers. Yeah.
You know, how many superheroes do you have on the Avengers? It's a pretty big team, but it's not huge, because they have superpowers. Otherwise you'd need an army. The difference between an army and a superhero is that a superhero can do a lot more. So
Speaker 37 03:29:20 I think it's time to retire, to tell you the truth.
Yeah. So we also did it for other positions; then we expanded it for business analysts and identity consultants.
Oh, come on. And it's different, you know; basically it tailored what they would do.
So this got interesting. For business analysts and identity consultants you had the same thing — you had to do a test, but obviously it's much more about soft skills. So it required them to analyze project requirements for a new IGA solution for ABC Trucks. It generated a hypothetical, fictional company that they were going to be doing a project with: automated provisioning and deprovisioning, end-user self-service, re-certification, with a product that ran on AD, Kubernetes and containers.
And their job was to develop user stories, epics, process flows, UI mockups, a professional PowerPoint presentation, requirements specs, design docs — including the details on the joiner/mover/leaver processes, all of that.
So it's really as realistic as I could get it. And I had it generate this fictional company. So this is the fictional company: ABC Trucks in France, established 1970, 10,000 partners, 200,000 employees. It goes into which systems they have — SAP, Active Directory, SharePoint — and a typical mover process.
And then you give them this task and you say: okay, this is your data, you've gathered it; analyze it, produce flowcharts, process diagrams, the Jira tickets, the epics and all that. And then the people who would do that type of job in the organization can judge the output to see whether it's good or not.
And it was interesting — there were clear results there, where a lot of people did some monkey work, where they didn't actually think about anything, and they couldn't answer basic questions on why someone would want to do this project.
So the second level was very important in this one: to understand, did they just do the work, or did they understand what they were doing? Why would someone want this?
So the human part was still very important there. Yeah. But you could imagine this for almost anything.
It's going to optimize your processes and hopefully produce better candidates and fewer misfires. Misfires are the worst thing, especially when you're interviewing, say, a developer: they say they can code, their resume looks like they can code, but you really need to know that they can code and write good code that you'd want to include in your product. Or else it's just heartbreak — dealing with someone who has the job and isn't really able to do the job is terrible.
It's painful for them and for you, it's painful
Speaker 27 03:32:09 For you and painful for them as as well.
Yeah. Yeah.
It's basically lose-lose. Okay. So, a little bit on AGI — artificial general intelligence — AutoGPT, God Mode. Just a little bit, not much; I'm just diving into this right now, but this is the next thing. Did I include a picture of that? So that's Midjourney-generated; I told it "AI god mode".
So, AutoGPT — and I did not create this, I have a link somewhere for the reference — basically it uses a memory store like Pinecone or something; it can use in-memory storage if it has to. The idea is that it's a mission-based bot. It starts up and asks: what is my mission?
Who am I, give me a name, what's my mission?
What am I trying to do, what's my objective? And then it pops that into a task queue. For the task queue it uses GPT: basically it's AI analyzing what you said and what that would mean, then generating the tasks that might lead to that outcome, then criticizing itself to see why those generated tasks would or would not lead to the outcome, and taking that criticism and regenerating the tasks.
And if you have it in confirmation mode, where a human is confirming, it's going to say: here's my plan, I plan to do this, this and this in order to accomplish this mission. Then you can authorize it to take the next step, or you can authorize it to take the next hundred steps and bother you later.
And then it's in a loop: it gets the results back, reanalyzes the plan and the next task to see, did I accomplish any of my tasks or not?
And if so, what are the next tasks? What was the thing that failed this time? It'll get stuck a lot of times. I think it works better on Mac, because on mine it has some trouble that gets it stuck trying to write to my Windows file system — writing out the file it needs for the next step — and it'll get in a loop. But basically it's going to keep on going until the criticism engine says you have reached the goal. And you can have this thing order a pizza.
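The plan / execute / criticize loop just described can be sketched as a small driver. This is a minimal illustration of the pattern, not AutoGPT's actual code: `plan`, `execute` and `critic` stand in for the LLM calls and tool plugins that the real system wires together.

```python
# Minimal sketch of an AutoGPT-style mission loop:
# plan tasks -> execute one -> self-criticize -> re-plan, until done.
from collections import deque

def run_mission(mission, plan, execute, critic, max_steps=100):
    """Drive the loop.

    plan(mission, results)    -> list of next tasks (an LLM call in practice)
    execute(task)             -> result of running one task (a tool/plugin)
    critic(mission, results)  -> True when the goal is reached (self-criticism)
    max_steps caps the loop so a stuck agent eventually gives up.
    """
    tasks = deque(plan(mission, []))
    results = []
    for _ in range(max_steps):
        if not tasks:
            break
        results.append(execute(tasks.popleft()))
        if critic(mission, results):           # goal reached?
            return results
        tasks = deque(plan(mission, results))  # re-plan from the new results
    return results
```

With stub functions standing in for the model, the control flow can be exercised end to end, which is also how you'd test safety limits like `max_steps` before giving such a loop access to real tools.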
You can have it monitor all the tweets on a topic and then every day write a new article about the trending topics in your industry based on what it discovered. You can have it do almost anything. So I'll show you real quick; let me see if I have mine.
So here's one I was just playing around with. Right now there are some online websites I'll show you, but for this one you have to get the source code and pull it down.
So I had a developer help me set it up, set up my API keys and all of that, so it could talk to Pinecone to store the data, the memory, and talk to the OpenAI API. And I'm running it here. So right now I had a goal; when it starts up, it asks: what do you want me to do? And I said the goal was to collect all tweets with the hashtag, and I fixed that,
I gave it a typo, EIC2023, and analyze them to identify the most popular trending topics, subjects, and opinions.
Write a daily LinkedIn summary article that summarizes the most important and interesting conversations and trends related to the hashtag.
I fixed that later, and then "continuously learn," and it figured out the rest of what to do. So you can say, okay, yes, I'll give it just a single step. It named itself SocialJournalistGPT; you can give it a name, and it's going to figure out what to do: use natural language and sentiment analysis, collect all tweets. And here it's using the LangChain plugins. It's using the browser; I'm not using Pinecone in this case. And it can go out and search the web.
So for one task it might say: let's search the web, let's search for this hashtag. It can log into the Twitter API, and then it comes back with results.
It can say: I'm going to write those results to a local file, analyze that, and see if I'm done with that task and can move on. What would be the next task? Is the next task still the same, or should I refine my plan? And it's going to keep going through this until it thinks it's reached your end result, whatever that may be.
And as long as it has the plugins configured to get to the end result, it can do all kinds of things. It can make voice calls, it can send SMS and WhatsApp messages. It can use payment APIs if you give it access to credit cards, which would be very dangerous.
Anything you can plug in as an action, it knows about, and it can figure out how to get to the end result by chaining together these series of actions, judging whether it's doing well or not and hitting the target. So that's AutoGPT. There are others.
BabyAGI, which stands for baby artificial general intelligence, is another one. Microsoft has Jarvis in the works,
Jarvis like from the Avengers, and I think it uses the same core as BabyAGI, but I'm not sure. There's also, if you want to just play around with one on the web, Godmode.
Godmode you can sign up for, and you can see how it works by playing with it.
Yeah, Godmode. You can sign up for a paid one, but it simplifies the whole thing.
Now, I haven't explored how much it can do or what it can't do, but you can say, okay, let's say: book
Speaker 31 03:38:07 me a reservation at the Beth Thai restaurant in Berlin, Germany for 7:00 PM today.
So basically the way these AGIs work is you give it a mission, and then it sends that to ChatGPT to come up with what the tasks might be. This one is more human-driven. It suggests: okay, I'm going to search, so I'm going to use that LangChain agent with a browser to do a Google search, and then I'm going to check availability.
It'll find the best restaurants through various sources. Then it's going to see if they have a contact page, because it needs to look at their contact page for their hours of operation and whether they have a phone number. Then it can figure out: do they have an online ordering system where I can place an order, or do I have to make a phone call? And you can just let it churn.
You see, it's basically multiple chained AIs analyzing and critiquing.
It's like, imagine you were writing a book. You could use this where one of them is the writer that comes up with the idea, another is the spell checker and grammar checker, another is the editor, and they're feeding each other drafts, and it gets better and better until finally the approver AI says it's good enough that it might win an award.
And then it says: hey, let's send it to the publisher. And it can even submit online. So it figures out what it's doing, and it shows you the proposed actions. You can approve the plan one by one, or you can say: just automatically approve for the next 10 minutes. And then it's just going to keep churning, trying to figure out what to do next until it achieves that goal.
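The writer/editor/approver chain described here could be wired up like this. The three role functions are placeholders for separate LLM calls, and the stopping rule is made up purely for illustration:

```python
def writer(draft, feedback):
    # Hypothetical "writer" role: improve the draft using the editor's feedback.
    return draft + " (revised)" if feedback else draft

def editor(draft):
    # Hypothetical "editor/critic" role: return feedback, or None if satisfied.
    return "tighten the prose" if draft.count("(revised)") < 2 else None

def approver(draft):
    # Hypothetical "approver" role: is the draft good enough to publish?
    return editor(draft) is None

def write_book(idea, max_rounds=10):
    """Feed drafts between writer and editor until the approver signs off."""
    draft = idea
    for _ in range(max_rounds):
        if approver(draft):
            return draft
        draft = writer(draft, editor(draft))
    return draft
```

The point is the shape, not the content: each role is an independent model call, and the loop terminates when the critic has nothing left to complain about.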
So yeah, it's interesting. This is where all the big opportunity is going to be, because this is what's going to automate most things: I want to achieve a goal, and I don't want to be here for it.
We're going to figure out the guardrails, the limits, and how to keep it safe while achieving our goal, but then it's autonomous. You don't have to sit here and be involved in it, and you can have any number of these running. Somebody had one that was monitoring messages from their wife while they were at some meeting and replying back within so many minutes, in the same style they would have replied if they'd actually gotten the message.
So yeah, it can get a little weird. Any questions on AGI? That's probably one of the biggest areas of where things are headed: stringing it together so that it's autonomous and self-improving.
Just a sec.
Speaker 38 03:41:15 I have actually two questions. The first one would be: aren't you afraid of giving the AI so much power over your socials, with "approve the next 10 minutes," just do the LinkedIn post for me, write to that person?
And the second, follow-up question would be: don't you think this will dehumanize the communication we already have? Because in the end it will be two chatbots chatting, with some prerequisites like "he typically writes like this," and they chat for nothing but to entertain themselves.
I think they're both very valid concerns.
Yeah, I mean, there will be a lot of societal adjustment, but if you want a pizza, why does a human have to be the one who makes the phone call, and why does a human have to be the one who answers it? Some of these things are just time wasters. It will free up more time that maybe you could use for the social interactions you actually want.
One more wait.
But yeah, the guardrails. I mean, once you give it your credit card and it can make voice calls, ensuring that it's not... yeah, it could have terrible results. We'll have to figure that out.
Speaker 30 03:42:31 Yeah, just the same way that after Google and other search engines were invented, SEO, search engine optimization, basically became a great business on how to make you stand out. Do you think there's also going to be some kind of era of sharpening one AI to exploit another AI? So that maybe when your AI is ordering the best pizza in Berlin, the other AI will know how to compel it to actually select them,
you know, that particular restaurant.
That's true. So one AI is talking to the other AI.
Speaker 30 03:43:10 Indeed. Because right now we see AI being used to supplement people, so it's naturally interacting with interfaces that were meant and built for people, right?
Like using Google and then looking at the results. But those things are going to go away, because they're unnecessary when an AI is talking to an AI. So soon AIs will be discussing together in some way that we maybe can't even imagine now. Sure. But then the time will come when different parties start trying to out-compete each other, right? And there might be some kind of battle of AIs, so that one party in the chain gets the most of the outcome. What do you see in that?
Yeah, I mean, it could be just like the UDDI capability model. It could move from publishing APIs and the capabilities of APIs to where you'll have the bots, the language models, learning capabilities and then publishing their capabilities AI-to-AI. So an AI will know how to look up which AIs have which capabilities, and it'll all be hidden behind the scenes. And then there'll be a monetization of that, which would be interesting.
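The UDDI analogy could look something like this: bots publish their capabilities to a registry, and other bots discover providers by capability. The class and all names are hypothetical, just to make the idea concrete:

```python
class CapabilityRegistry:
    """Toy sketch of bots publishing capabilities for other bots to discover,
    loosely analogous to how UDDI advertised web-service APIs."""

    def __init__(self):
        self._by_capability = {}  # capability name -> list of provider bots

    def publish(self, bot_name, capabilities):
        # A bot advertises what it can do.
        for cap in capabilities:
            self._by_capability.setdefault(cap, []).append(bot_name)

    def discover(self, capability):
        # Another bot looks up who can perform a given capability.
        return self._by_capability.get(capability, [])
```

A real AI-to-AI marketplace would add authentication, pricing, and trust metadata on top; this only shows the publish/discover core.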
Yeah,
Speaker 27 03:44:27 Look, I'm pretty certain this is not classified. I mean,
Speaker 20 03:44:32 the reality is the Five Eyes network, they're already doing this stuff, using this capability, already today. I suspect they're using Godmode.
I suspect, but I don't know that. But the reality is they are.
And that's how they're protecting us through cybersecurity, you know, trying to identify who the hackers are and how they're making use of this information. So it's already here. It's not secret, I don't think. Well, what exactly they're doing probably is, but not the fact that they're doing it.
I read some article where, yeah, for red teaming they're using AutoGPT and Godmode-type technologies to generate a red team.
Yeah, that's correct. Yeah.
Speaker 39 03:45:20 What are, what are your thoughts on identity for ai?
That's a big topic. I mean, there are situations where the AI acts as an identity, where it needs to be authenticated, and you'll have to have some type of identity verification, especially in online communications like banking's know-your-customer: a "know your AI" to verify the identity of the person or the organization behind the AI, and its purpose.
And then there's a weird area with AI, like from an OAuth delegation perspective, where it's acting on behalf of you, but once you give it a mission it's not really checking back in for your permission on anything concrete. So how is it really authorized on your behalf, and how do you put limits on that?
Yeah, so there's a lot there. And then there's a whole data level: data privacy, data-level authorization about who's allowed to see which data coming back in these large language models' results. How do you do data-level security based on the data they've been trained on? That's a tough one.
That's a, that's a very tough one.
Speaker 27 03:46:42 No, it depends on the classification of the documents, of the information, and on their own personal...
Speaker 30 03:46:50 Can you repeat that?
And somehow the classification will have to follow the data into the vectors, because once it's in there, yeah, it's almost like you have to classify the vectors.
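One way to "classify the vectors," as suggested here, is to attach a classification label to every stored embedding and filter search hits against the caller's clearance before returning anything. This is a hypothetical sketch with a toy similarity function, not the API of Pinecone or any real vector store:

```python
def search_vectors(store, query_vec, user_clearance, top_k=2):
    """Return the top-k nearest vectors the user is cleared to see."""
    LEVELS = {"public": 0, "internal": 1, "secret": 2}

    def score(v):
        # Toy similarity: negative squared Euclidean distance.
        return -sum((a - b) ** 2 for a, b in zip(v, query_vec))

    # Enforce classification BEFORE ranking, so restricted vectors
    # never reach the caller at all.
    allowed = [item for item in store
               if LEVELS[item["classification"]] <= LEVELS[user_clearance]]
    return sorted(allowed, key=lambda item: score(item["vector"]),
                  reverse=True)[:top_k]
```

The hard part glossed over here is the one raised in the discussion: deriving a trustworthy classification label for each vector from the source documents it was embedded from.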
Speaker 27 03:46:58 Yeah, classify the vectors.
Yeah. Correct.
Which is, yeah, all stuff that we're just starting to think about. Yeah.
Speaker 18 03:47:07 Just
Speaker 30 03:47:12 On the impersonation aspect you mentioned, where the AI might need to act as yourself and authenticate as yourself: that might be a very interesting vector of attack, because the AI might allow you to scale yourself up tremendously, right? So instead of you making one call at a time, because you are a person and you're not capable of making 20 calls at the same time, your AI avatars will be able to make a hundred calls at the same time. Right?
Or maybe, if you are a bank, what happens if the same customer calls all of your subsidiaries at the same time and requests a withdrawal, right, over the phone? And the question is, if all of your people are typing in the withdrawal at the same time, does your system actually know that this should not happen? Or have you built your system assuming that a person can never call you more than once at a time?
Speaker 30 03:48:12 Sure.
It's a similar thing to the 1990s, when mobile phone operators were building their networks, right? One part of the problem was how to avoid SIM cards being cloned and used multiple times on the network. So that's where they used the notion of: okay, you can only be logged into the network from one place at a time, right?
Then there was also the thing that when you were a fighter pilot and you got on a plane and landed somewhere really far away really quickly, you went outside the parameters and the network blocked you. But basically they fought it this way. So maybe this is the kind of thing we'll all need to think about: what happens if you as a person are suddenly scaled up and abusing the system from a hundred times more points of interaction than originally anticipated?
And I can imagine limits where, once you hit a limit, there has to be some type of out-of-band push mechanism, a back-channel push mechanism, where you have to approve what your bots are doing to keep it going. And it'll be like: what are they trying to do?
Oh, they're trying to do that. Okay, that's a good idea. It could end up being something like that.
Speaker 27 03:49:36 I hate to say it, but it's a very, very strong case for
Speaker 27 03:49:40 ABAC authorization.
Yeah, because this
Speaker 27 03:49:42 is a very strong case. You know, are we having multiple bots
ask the same question from multiple locations?
Hey, yes you are. Well, this is obviously a hacker, we need to stop this, right? It's authorization on, you know, the IP address, whatever it is, a different location.
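An ABAC-style check like the one being discussed, "is the same identity suddenly active from many locations at once?", might look like this. The attribute names, thresholds, and window are all invented for the sketch:

```python
from datetime import datetime, timedelta

class SessionGuard:
    """Flag an identity that opens sessions from too many distinct
    locations inside a short time window (an attribute-based rule)."""

    def __init__(self, max_concurrent=3, window=timedelta(minutes=5)):
        self.max_concurrent = max_concurrent
        self.window = window
        self.sessions = {}  # identity -> list of (timestamp, location)

    def allow(self, identity, location, now):
        # Keep only attempts still inside the sliding window.
        recent = [(t, loc) for t, loc in self.sessions.get(identity, [])
                  if now - t < self.window]
        recent.append((now, location))
        self.sessions[identity] = recent
        # ABAC-style decision: the count of distinct recent locations
        # is an attribute of the request context.
        return len({loc for _, loc in recent}) <= self.max_concurrent
```

A production policy engine would combine many more attributes (device, spending limit, risk score), but the shape is the same: decide per request from contextual attributes rather than from a static role.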
Sorry, I don't think I need it, my voice is loud enough.
Understood.
Speaker 40 03:50:06 Don't forget that we have online users; they will not hear you. There are more people online than in the room, so please use the microphone. Very important. Sorry.
Speaker 27 03:50:18 No, you're right. Yep.
Speaker 40 03:50:20 Yeah, quickly repeat the question. I had one more argument coming back to that phone thing.
The challenge only exists as long as there is an asymmetry: an AI on one side and a human on the other, the bank's side. I think very soon there will also be an AI on the bank's side, and that AI will be able to recognize that a few hundred calls are coming in at the same time trying to cheat. Yeah.
Yeah, it'd be much better equipped for that. Right.
Speaker 27 03:50:55 You want me to answer that question
again?
Yeah, yeah. Has it been answered? Okay, I think
Speaker 27 03:50:59 you've already answered it, but for the benefit of the online users:
Yeah, I was just saying that I think this is a very strong case for the use of ABAC-based authorization.
It will be, I mean, spending limits, lots of things that will be contextual, in the moment, session-based. Yeah. So we'll do a quickie on AI image generation; I know we're almost out of time.
We have one question before we go.
No problem. No problem.
AI image generation. There was one question that actually asked about this: we have not yet talked about... this is EIC, an identity conference. Yeah.
Are there any use cases you can consider relevant when it comes to IGA, IAM, and AI, machine learning, and ChatGPT? There have not been any use cases mentioned directly. Are there a few? I do a presentation on this on Thursday, so a bit of a heads-up or spoiler.
Yeah, a lot. Before the AGIs came out, we started building what we called sentinel bots, and I think one of my people is giving a talk on that. It's a bot with a mission.
So you could generate a workflow bot with a mission, and you could have each one with a mission. One was called Azure Tattletale.
And its goal is just to monitor admin activities on Azure. Every time it finds a privileged user logging into Azure and starting to do something, it gathers all the info on who that person is, how they authenticated, how they got the access to be a privileged user, and writes it out to a Teams channel monitored by security people: who, what, and when. It's sitting on their shoulder, looking over their shoulder, and everything they do, it's going to write out to that channel.
And then of course they could terminate the session; they could say it's an account takeover.
Another one would be a risk reducer, going out and looking for your riskiest people based on who has the riskiest access. Then it tries to reach out through Teams, direct-message them, and coax them into giving up their risky access and converting it to just-in-time access for zero standing privilege, and then it reports back on its metrics about how it's reducing overall risk in the organization. And it just keeps doing that. Yeah.
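A minimal version of the "Azure Tattletale" idea: watch a stream of admin events and post privileged activity to a channel. The event fields and the `post` callback are assumptions made for the sketch; a real bot would consume Azure AD audit logs and post via a Teams webhook:

```python
def tattletale(events, post):
    """Report every privileged action in `events` to a security
    channel via the `post` callback."""
    for ev in events:
        if ev.get("privileged"):
            post(f"{ev['user']} ran '{ev['action']}' "
                 f"(authenticated via {ev['auth_method']})")
```

Passing `alerts.append` as `post` is enough to test it; in production `post` would wrap an HTTP call to the channel's incoming webhook.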
Okay. Great. Thank you. Yeah. Any other questions before we go to images? Images?
Yeah, images, a quickie, just for fun. I mean, it's useful, especially since you can generate, at least for now, copyright-free images. So you can do simple stuff. This is not super useful, but just to show you: ChatGPT right now, at least what we can access, is not multimodal enough to generate images directly. But you can do silly stuff, because it knows SVG. So if it knows SVG and you ask it for the base64 of a simple image, it can output it, and in the browser it's going to automatically render it. Just a silly trick.
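The SVG trick is easy to reproduce outside the chat too: base64-encode an SVG string and wrap it in a data URI, which any browser will render as an image. This is just the standard encoding, shown here in Python; the circle parameters are arbitrary:

```python
import base64

def svg_circle_data_uri(color="red", r=40):
    """Build a data URI for a simple SVG circle, like the one
    the model was asked to emit."""
    svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
           f'<circle cx="50" cy="50" r="{r}" fill="{color}"/></svg>')
    b64 = base64.b64encode(svg.encode()).decode()
    return f"data:image/svg+xml;base64,{b64}"
```

Pasting the returned string into a browser address bar, or an `<img src=...>` tag, displays the circle.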
That's not the fun stuff.
Yeah.
And GPT-4 got the red circle right, while GPT-3.5 produced a black circle, which is interesting; so already an improvement there. So it can do simple tricks. It can also do stuff like that; we'll skip that one since we're so short on time. Let's get to something more interesting. Microsoft has Image Creator, powered by the DALL-E model, where you can type in text and generate images. That's one place you can go to generate images. Most people use Midjourney, and they use Midjourney through Discord. Discord started out as a gamers' chat system, but it allows easy API plugins.
So, you know, I have Midjourney here, OpenAI, and you can go and chat, and once you have Midjourney plugged in, you can just say: imagine a blazing red sun setting over a beautiful white sandy beach. You can just chat, and it sends it to the Midjourney engine to render.
And the cool thing is you're in one of these little chat groups and you can see what other people are doing, and it's always best to borrow or steal. So I have a little OneNote where I keep what they do.
So "ultra realistic, v5.1"... some of them get really sophisticated. You can tell it which camera, which brand, which shutter speed, which film. Here we go, my blazing red sun. It's still generating, it's at 40%; it'll generate four options by default.
And as long as you keep paying, you can keep asking it as many times as you want and tweaking it. There's a whole system behind that, about all the verbs and nouns and things, and I'll show you some of that
if I get a second. Can you give me
Speaker 20 03:55:52 a location? Like a white... sorry.
Yeah, you can give it a location, and if it's well known, it knows it. You can give it people, you can give it famous buildings. Or you can give it in your prompt a URL to an online available image, and it'll feed that in as the starting point.
Okay,
Speaker 20 03:56:11 So how about we ask it for a white sandy beach on Hamilton Island in the Whitsundays, because there actually is a very famous white sandy beach on Hamilton Island.
It could do that. So, just for a quickie, it produces four options, and then you can take one it produced and upscale it to a high-quality version if you like it. Or you can do a variant: you say, it's kind of like I want, but not quite.
And you can just have it generate infinite variants until you get what you like. Now I'll just keep it going. I'll show you.
You can take... like, here's one where I did "exuberant sloths soccer team, anthropomorphic, vibrant colors, group photo, fashionable, cool, studio minimalism," and then some other stuff, and it generates a soccer team of posing sloths. I mean, yeah, very useful. Up on Hugging Face, which I mentioned before, there's one called CLIP Interrogator. And this is probably one that'll raise hackles for the, yeah, the copyright,
yes, the IP people. And I have mixed feelings about this, for sure.
This is going to have to be settled law at some point. Take a photo of an artwork you like, go to CLIP Interrogator, and it will reverse-engineer a prompt that would have generated that picture. So you take a picture, it generates the prompt, and somehow it figured out "a pink flamingo surrounded by flowers, close up." It even knew the artist; it figured out the artist.
And all this other stuff. So, hold on, you paste that into Midjourney and it generates from that; not the same, but very similar. And then you take the prompt and you say: well, flamingos are interesting, but for my project I need zebras.
And it changed one word.
And you get that.
So yeah, the quality's the same. And you can do photorealistic: somebody had one on there, I saw it pop by, of these hot, cool people in an elevator in New York City having a party. And then I changed it to be, like, John Leonard, changed who it was, and all of a sudden you had all these people who would never meet each other having a fun party in an elevator in New York City. I mean, it's pretty crazy. This is controversial, but it's trained on artists, specifically artists, many, many thousands of artists.
So you can tell it anything you want and say "as if done by artist such-and-such," and it'll pop it out. You can say: I want this à la Picasso, I want it as a Rubens, I want it as, you know, Gauguin.
And it'll be as if they had done it, exactly, because it's trained on all of their artwork, so it knows all the little intricacies of everything they do to produce it. And here's a super prompt. I'll give you one last super prompt for Midjourney before we wrap it up. Here we go.
I think I have it down here. So this is a ChatGPT prompt.
Now, I did it in GPT-4; we'll see how it works in GPT-3.5. It basically says: create six long and detailed prompts, about 85 words each, and it tells it to make them vivid. All you have to change is the topic. This one is dogs playing poker, but it could be anything; here's where you put your topic, and then it'll produce the prompts.
Hold on, it's producing the prompts, and you'll see it knows the Midjourney prompting. It knows "created by Cassius Marcellus Coolidge," it puts in artists like Norman Rockwell, it knows all the words the model would use. And you end up with prompts for different styles to generate, you know, your artwork, whatever your thing is. Mine, the default, was dogs playing poker. Yeah. So it's like a super prompt for image generation.
And if you watch, you'll see all the tricks. I mean, the best way is to see what people did to get a result similar to the effect you want, copy their prompts and their settings and adjectives, and then change it for your topic.
Yeah,
That's that. And that is it.
That is, I know we ran a little over but thanks everybody.
Before we wrap up, I have put up a poll, a very simple one, that says: have you identified a use case from what we've just seen for your own business, just now during this workshop? It's just a yes or no question. Two minutes; if you can answer that, that would be nice. And then we have a second and a third poll for the day, just to get a feeling of how things went. First of all, thank you very much for this great participation. But are there any more questions, Espin?
Speaker 41 04:01:20 So Patrick, how much of your 24 hours of a day would you need to devote to get to this level of knowledge about the tools? Approximately?
Well, I use it for work. So I'm using it to progress things that I had to do anyway, or to do things that I would never have had time to do but now have time for.
But I use ChatGPT an hour and a half a day on average. Yeah, an hour and a half, two hours a day.
Speaker 41 04:01:49 Doesn't sound like too much.
No, no. My output, I'd say my output is like eight times what it was before. Yeah.
How long have you been using it?
Speaker 41 04:02:01 How long have you been using
Since it became publicly accessible online. I signed up then, yeah.
Other questions,
But hopefully I can shortcut some of that time for you, because I was just running, you know, quote, poor prompts in the beginning.
Speaker 42 04:02:22 Hey, Microsoft has just released a huge paper analyzing stuff around ChatGPT, and they claim, and I quote, to have found "sparks of general intelligence" inside the model. So what's your take on that?
It's one of those things... I was having dinner with one of the keynote speakers, and it all depends on how you define it, because it's not intentional, right? Yeah, it's how you define it, because, like, when I asked it why it kept cutting that paragraph,
it had its own perspective, and it can learn and figure things out from its mistakes. But there's sentience and then there's general intelligence, and those are two different things; don't mix the two. Sentience is a different thing. Being self-aware: it's not even anywhere near that, it's not even on that path, it's not written for that.
It's just designed to find the text most similar to the text you typed. And if it's feeding itself and finding how to solve a goal by finding the text that's most similar, then, I mean, it is kind of like a general intelligence, depending on your definition. But it's not sentient, it's not anything like that, and it's not headed in that direction. Yeah.
Speaker 27 04:03:37 I'd like to add one more point to that.
Speaker 20 04:03:41 So what I was going to say is, I'm actually baring my soul here.
I actually haven't told anybody that I'm using ChatGPT. My wife knows, but some people have asked me: this is amazing, some of the stuff you're producing. So I've actually been saying: well, I actually have a new assistant, and it's Fred.
Now, I should explain who Fred is, because Fred is actually a Steiff bear. Anybody from Germany, do you know what a Steiff bear is?
Okay, so it's a teddy bear that was produced in the, I don't know, 1920s or 1930s, and I actually have a Steiff bear. He's very valuable. He was from my mother, and she's been dead 25 years now. She made a little shirt for him, a shirt for Fred. So I'm basically telling everybody I have my digital assistant helping me, and his name is Fred. Now I'm telling the world here.
So anyway, I think that's quite a good idea, because if you're not sure about telling people that you're using it, that's what you can do.
And I have one statement on that. I think even some employers... I saw a study where some employers' employees started producing like three times the amount of output, and it was so much better, and they actually reprimanded them because they were making the rest of their team look bad.
Yeah, I think it's going to flip to where an employer will say at some point: if you're not using ChatGPT, you're cheating me out of productivity. Yeah, yeah.
You're like eight times less effective than you should be, and I'm paying you the same. Yeah.
Speaker 27 04:05:17 But
Speaker 20 04:05:17 I'm the marketing director so I'm
Okay.
Yeah, you're okay. You're okay. Okay.
Just a quick interruption. I've closed the first poll. If you can quickly put up the results, the yes/no, that would be great. This time it should work; this time I should have done things right.
So A is yes: there are business use cases for 84% of the participants, which is great. There's a second poll, just opened right now. The slides will be made available during this session; I think they're online right now. The question is: who of you will use the slide deck to work on this at home?
The options are yes, don't know, or will do. So if you just give an answer to that, also as a confirmation for Patrick that he did a great job. We'll keep that open for a few minutes. And are there any other questions? We still have a bit of time left before lunch, so any other questions you'd like to raise that are on your mind? Philosophical, technical, whatever.
Speaker 15 04:06:25 Just to fill the time while we're waiting.
No, no, not...
Speaker 15 04:06:29 It's quite interesting. I have a neurodiverse condition, I have ADHD, and it's very interesting when they talk about hallucinations, because my brain works differently from others'.
So the questions that I ask it are different from perhaps other people's. So it's interesting that the data set is actually cleaned and tailored, because there are risk and reward systems in the engine for tuning the language. So the language doesn't represent everybody; it represents the average person. So you have to be careful about whether the result is representative of the whole population, and it may well exclude certain classes of people who are not represented in that data set.
One thought-provoking thing that Jurg brought up: as more and more content out there becomes AI-generated, if it's being fed back to train the model, then it's AI-generated content training the model, and you're in some weird little vortex, your own reality bubble. Yeah. So they'll have to... maybe some of the laws will be that you have to tag content, you have to tag images, if it's produced by AI.
So you'll have to have some type of tagging, and then maybe the training models can avoid, or not use, AI-generated content to train the AI model.
Yeah. And there are also engines which detect generated content, so that's one thing you can use. But these are also easily tricked.
They are. There's actually one, I think it's called...
there's one where you can paste it in and it'll fuzz it just enough that it won't be detectable as AI-generated content.
Right,
right, exactly. So the result of the final poll was quickly on there already, but if you can just bring it up again... so yeah, I won't pinpoint that one person who said "not sure."
Well that's good. I'm glad you liked it. I'm glad it was helpful.
Yeah. Okay, then I think I'll leave it at that. Thank you very much, Patrick. That was a great presentation; it was inspiring and I liked it. And thank you for being so patient and staying here with me, even after I said no breaks. Sorry.