Hey everybody. Let me see, I need the clicker. I've got a lot of material to get through here, but this is going to be fun, at least for me. What I'd like to talk to you about today is AI, mainly, and how we come to trust it. Is this thing working? Is this thing on? Okay.
Okay, one more. Alright, let's just go ahead and start. This is funny: does anybody know Flight of the Conchords? Is that ringing a bell? They're this really funny band out of New Zealand, and they have a song called "The Humans Are Dead."
When they introduce this song, they say they wrote it back in the 1990s, and that it's really intended more for robots than for humans, because they're trying to get into that market once the humans are dead.
Anyway, it reminds me that whenever you hear about AI, the very next sentence is always "we're all going to die," right? And here we are. I don't think that's true, by the way, but let me show you a couple of perspectives.
Alright, so Elon Musk, depending on what day you catch him, says AI is far more dangerous than nukes. I don't know exactly what that means. But then there's Yann LeCun, whom I'm going to quote a little later, who runs the AI team at Meta. He says that LLMs are autoregressive and won't do any such thing, and neither will any future AI.
So he's a complete no on this one. Sam Altman of OpenAI says, well, you know what? That's the government's problem. That's my summary, okay?
And then there's Yuval Noah Harari. If you don't know him, you should definitely look him up on YouTube; he's a very, very thoughtful author.
He says we're potentially talking about the end of human history, the end of the period dominated by human beings. He's saying that the biggest actors in society are about to become AIs of some sort, rather than human beings. That's actually not quite correct.
We, human beings, are actually not today's biggest actors. The biggest actors, and they have been for a long time, are corporations, right? Corporations are much stronger, bigger, and more capable than any of us individually, and they influence our lives to a very great degree. So the question then becomes: are we still human on the internet?
I mean, what are we, actually, on the internet?
It's a completely alien world.
And we are not natives in that world. There are biological mechanisms we rely on every day, things like pheromones, or the mirror neurons in our brains.
Those don't work on the internet, okay? We are on a completely foreign planet any time we're on the internet. So it's kind of ridiculous to believe that our sense of self, our identity, our trust mechanisms, and all the rest of our biological machinery simply transfers to the internet when we're there. Okay?
Actually, in that space, AIs are the true inhabitants of the metaverse. And in that world, there aren't a whole lot of constraints.
Unlike real life, there's no gravity, none of the other biological realities we live with.
There are just a couple of protocols, things like that.
So AIs exist in a world that is bereft of anything we would think of as biology. Okay? So what does that mean?
Well, alright. AI, as I see it, is changing our approach to identity and security on the internet, and it reminds us that the only things that actually live on the internet are bots, AIs, agents, those sorts of things. And if you step back and think about that for a moment, these beings that live on the internet, these AIs,
they look a lot more like corporations than like people, right? And we know how to control corporations, to a certain degree. So that's what I want to talk to you about.
Okay? What is the relationship between AI and human beings?
Well, we've done a lot of research in AI based on our best understanding of how the human brain works. Not the whole brain, though.
We're going to point out that AI is missing what we would call a prefrontal cortex. Meaning that right now, we give AIs guidelines and regulations, we're trying to figure out watermarking and all these other mechanisms to try to control the AI, right?
Well, okay, they're antisocial as well; we know that. This is where LLMs get a little weird. Access management and identity verification are already very difficult processes, as we know; that's why we have a conference about it. But it's going to get a lot worse when suddenly you're not just you anymore, but you plus all the GPTs you just created.
Now, I don't know if everybody noticed, but when OpenAI held their dev day a couple of weeks ago, they announced this new concept called GPTs, right?
The GPT idea is an agent that works on your behalf. Anybody can have any number of GPTs now; they're very easy to create, it only takes a couple of minutes.
And so now your identity is you plus any GPTs out there doing things for you. Okay? So what is your identity at that point, when it gets extended into the AI world?
You can have agents out there operating on your behalf, doing anything you want. So my identity now includes all of that. How do I authenticate that? How do you validate that ID? What if it's just your agent doing things while you're sleeping, right?
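One hedged way to picture authenticating an agent that acts on your behalf is a scoped, signed, expiring delegation token: the human signs a grant naming the agent and what it may do, and services verify the grant rather than the agent itself. This is only a toy sketch in Python, not any real product's API; the key, the agent name, and the scopes are all made up for illustration:

```python
import hashlib
import hmac
import json
import time

def issue_delegation(user_key: bytes, agent_id: str, scope: list, ttl: int = 3600) -> dict:
    """The human signs a short-lived grant naming the agent and what it may do."""
    claims = {"agent": agent_id, "scope": scope, "exp": time.time() + ttl}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_delegation(user_key: bytes, token: dict, action: str) -> bool:
    """A service checks the signature, the expiry, and that the action is in scope."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and time.time() < token["claims"]["exp"]
            and action in token["claims"]["scope"])

# The agent can act "while you're sleeping," but only within the granted scope.
key = b"alice-secret"
token = issue_delegation(key, "alice-shopper-gpt", ["browse", "purchase"])
print(verify_delegation(key, token, "purchase"))    # True
print(verify_delegation(key, token, "send_email"))  # False
```

The point of the sketch is just the shape of the answer: you don't authenticate "the agent," you verify a delegation that your identity issued to it.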
These modes of identity are coming our way right now; they're already here. So I made up this term, homo exm, I guess I was trying to be funny. But I think the operative thing here is that even the way OpenAI talks about what these GPTs are going to look like, they talk about them like some kind of kitchen appliance, right?
Something outside of you, right? But really, I see it more as a superhuman suit, a Superman or Superwoman kind of suit. By the way, this is AI-generated art, so hopefully you're appreciating all of these graphics. The file size is huge, by the way, but it only took me a couple of seconds.
I just typed a few things in, and that's the art that was produced. But that's the idea: we will be living inside these GPTs, wearing them like a suit that makes us superhuman, right?
Alright. So given the new reality of what identity looks like on the internet, that it's some combination of a human being projecting a certain kind of power onto the internet, and that's the identity we're talking about,
how do you learn to control, and live in, a society like that, where you have these superhumans out there, in addition to the internet services we already know?
This is really the crux of it. There are four steps we have to take. One, we need to build next-generation AI, and it's not an LLM, okay? LLMs are done in a year, right? We're going to be done with LLMs; I'll talk about that in a moment. So we need better AI. Then we need to understand that these identities, which approximate corporations more than they do human beings, are a persona: a personal GPT swarm of your own that belongs to you and that you control.
But we need an entity for that, some kind of ID, right?
And then we need to create and foster a pro-social environment, so that as these things start moving around and acting autonomously to some degree, there are enough constraints in the system that they won't simply go crazy and do whatever they want. And then, of course, they interact, and so on. Now, this next part is something I borrowed, again, from Yann LeCun, who is at Meta.
The idea is that if today's LLMs are based on some portion of the human brain, his concept is that to really get to a full AGI, you have to add more layers and more types of neurons. That's his slide off to the side there. If you get a chance, go watch his presentation on this, because it's fantastic.
But he literally just comes out and says: LLMs, as cool as they seem right now, they suck. They're not that intelligent. Okay?
So first of all, we need to establish a much better foundation for AI, and that's coming very soon. Alright? Now, as I mentioned, OpenAI, not Meta, sorry, announced that they have these GPTs coming out, and they announced an app store, right? So it's a lot like an iPhone. And by the way, I put the prompt up there. It's funny that it produced a really nice picture, but you can't read any of the text; I don't know where those icons are coming from.
Oh, what happened here? Well, okay. Anyway, the thing is, the app store is actually not what we should be doing. Okay?
The reason I put this iPhone up there is that the app store is the wrong way to think about it. Let's say you want to send a message to your friend, okay?
You pick up your phone, and the very first problem you have is figuring out which of the 15 apps you have to open in order to send that message. Then you have to figure out: should it be WeChat? Should it be Facebook Messenger?
Who knows, right? So how did we develop that problem? All you're trying to do is talk to somebody, and yet there are 15 vendors that got in the way, right?
"Pick me, pick me." So the idea is, as cool as GPTs are, and we're going to see a lot of them, we don't necessarily want a GPT app store. What we want instead is relationships: each one of these icons should represent some relationship that I have with somebody else's swarm,
somebody else's identity. It's the nature of that relationship that we should be talking about. So when you click on a button, it should mean: okay, I'm going to pick up where I left off in that relationship.
We were just having a conversation; I need to go back to that.
Or I'm transacting with somebody, I'm working right now, whatever that transaction needs to be. So just to be clear: we have better AIs, much better, more brainy. And then we have an entity. I've been playing with names for it, MeGPT, GPTMe,
I'm still trying to figure out what to call these, but essentially it's a persona. Everybody has a persona: you have a persona, I have a persona, and we build these relationships together, right?
Now, that's all well and good, but how do you actually keep that relationship intact and successful? That's what we're going to talk about next.
Let me give you just a brief rundown of some of the literature on these topics, because I think it's really relevant to making a relationship work over time.
You need certain qualities. One, and this is something I alluded to earlier with James Coleman's book, is symmetry. You can't have a relationship that's asymmetrical in power, though we do this all the time now, where there's a central authority with all the ability to set the rules, and somebody else who is basically unempowered. That's not a relationship. You have to have symmetry, okay? Then, reciprocity.
Robert Axelrod, known for his work on the evolution of cooperation, talks about reciprocity: you can have a relationship, even with an enemy, that is a very good relationship, as long as you have reciprocity.
Okay?
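Axelrod's reciprocity result is usually illustrated with tit-for-tat in the iterated prisoner's dilemma: cooperate first, then mirror whatever the other side did last. A minimal sketch, using the standard payoff matrix (the strategy names are the conventional ones, not anything from this talk's slides):

```python
# Payoffs: (my points, their points) for (my move, their move); C = cooperate, D = defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)  # each sees the other's history
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

The takeaway is the one the talk is after: a simple, symmetric rule of reciprocity sustains a good relationship without any central authority enforcing it.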
So great, you've got those two things. And then there's Elinor Ostrom as well, who put out a theory of trust. I won't read through all of it, and there's actually a lot more to it than that. And then there's costly signaling. This is a biological reality that says: if you don't know me, but you see that I'm standing on this stage, or that I own a certain car or a yacht, those things say something about me, right?
So a persona could basically have something like that as well: a way to signal that you have reputation, that you have authority, something like that. Okay? Now, I have a Medium account where I've stored a lot of this material; you can go check it out there. This is just an aside, but I saw an article about these little Kilobots, a swarm of small wiggling robots.
They signal to each other, and then collectively they decide where to move and where to be. Bees do this in real life, for example; they use basically the same practice to figure out where to build a hive. This is emergent intelligence based on a swarm, okay?
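A toy version of that swarm decision, with each agent repeatedly adopting the majority view of a few random neighbors, might look like the sketch below. All the numbers and option names are arbitrary, and real Kilobots and bees use physical signals rather than shared memory; this only shows how a collective choice can emerge with no leader and no global view:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def swarm_decide(n_agents=50, options=("site_a", "site_b"), rounds=50, sample=5):
    """Each agent starts with a random preference, then repeatedly polls a few
    random neighbors and adopts the local majority."""
    prefs = [random.choice(options) for _ in range(n_agents)]
    for _ in range(rounds):
        new_prefs = []
        for _ in range(n_agents):
            neighbors = random.sample(range(n_agents), sample)  # odd sample: no ties
            votes = [prefs[j] for j in neighbors]
            new_prefs.append(max(set(votes), key=votes.count))  # local majority wins
        prefs = new_prefs
    return prefs

final = swarm_decide()
print(set(final))  # typically collapses to a single shared choice
```

No agent ever sees the whole swarm, yet the population tends to converge on one option, which is the kind of emergent, bottom-up decision-making the talk is pointing at.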
We need these sorts of things on the internet. So here's what I'm proposing.
First of all, there needs to be something that I call a trust protocol.
And that means that when two avatars meet on the internet for the first time, there has to be some way to initiate a relationship that will be successful over time. There's a lot of verbiage here that I'm not going to go through. Let's see, did I skip a slide?
Oh, we missed that one. Okay, I took out a slide.
But the idea here is this: think about radioactive decay, for example. When an unstable nucleus decays, everybody can detect that it has decayed, because it emits radiation. What I'm trying to say is that if you have a real protocol, then when somebody forms a relationship and that relationship falls apart, everybody knows about it.
It's written in the ledger; it creates reputation; everybody knows at that point what happened to that relationship. So you have a means of understanding whether a relationship worked out or not. The second part is that we need a fabric for the internet that is pro-social in nature. That would include things like micro-currencies, digital title services, scoreboards (that's what I call them), personal agents, press and media for personas, and metaverses. I'll talk about those more later.
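One hedged sketch of what "written in the ledger" could mean: an append-only log where relationship events are hash-chained, so the record can't be quietly rewritten, and where a reputation score is derived from publicly visible outcomes. The event names, personas, and scoring rule here are all invented for illustration; this is a shape, not a protocol specification:

```python
import hashlib
import json

class RelationshipLedger:
    """Toy append-only log: each entry commits to the previous entry's hash,
    and reputation is derived from public relationship outcomes."""

    def __init__(self):
        self.entries = []

    def record(self, actor: str, counterparty: str, event: str):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "counterparty": counterparty,
                "event": event, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def reputation(self, persona: str) -> int:
        """Naive score: +1 for each relationship honored, -1 for each broken."""
        score = 0
        for e in self.entries:
            if persona in (e["actor"], e["counterparty"]):
                score += {"honored": 1, "broken": -1}.get(e["event"], 0)
        return score

ledger = RelationshipLedger()
ledger.record("alice-gpt", "bob-gpt", "formed")
ledger.record("alice-gpt", "bob-gpt", "honored")
ledger.record("alice-gpt", "carol-gpt", "broken")
print(ledger.reputation("alice-gpt"))  # 0: one honored, one broken
print(ledger.reputation("bob-gpt"))    # 1
```

The hash chain is the "radioactive decay" property from a moment ago: when a relationship breaks, the event is observable by everyone and can't be erased after the fact.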
But we also need to understand that institutions get involved in these relationships as well. They could be government services or private institutions, but fundamentally, there are certain things a society needs, even online, to make those relationships work.
Alright? Now, that, as you might imagine, was generated by an AI.
But anyway, once you have these things in place, once we understand that we're going to have personas, or my GPTs or whatever they end up being called, those things are going to form relationships with other personas, and they're going to behave in an environment that requires them to be good, right? And there's all kinds of infrastructure required to get there. Alright? So, in summary, I think we're going to have a really good time.
I think AI is basically going to make us all more powerful, more knowledgeable, more useful; all kinds of really great things are going to come of this. But we still have to create better AI, and we still have to create the systems and mechanisms through which we will relate to each other in that kind of world. Okay? So that's really all I had. I can open it up for questions.