We are very pleased to see you. My name is Marina Iantorno. I am a research analyst here, and luckily for me, I am working with Scott David. We will be moderating the Upgrade Reality track until the end of the event. And we are very pleased to have you here.
And we have somebody who will be joining us remotely. So let me introduce our speakers here, just very briefly.
I'll ask you to refer to their bios for a full description. We have Daniel Friedman, who's joining us remotely; I think it's two in the morning for you, Daniel, if I'm not incorrect about that. He's a co-founder and president of the Active Inference Institute. For those of you who haven't experienced active inference, please take a look at their website. We have Trent McConaghy here, the founder of Ocean Protocol, and Dr. Mihaela Ulieru.
Yes, thank you. Founder and president of the IMPACT Institute for the Digital Economy. Just a couple of brief sentences of introduction here. The idea of these first two sessions was for us to revisit the notion of intelligence as a term and what it stands for, and then, in the second session, the notion of identity as a term.
What does it stand for? When we get into the product cycles and service cycles of all the different companies and institutes that are reflected in this gathering, we often get narrowed in terms of what we're looking at and what those terms mean.
And because things are moving so quickly, because we have what feels like, and may be, exponential increases in interactions, we thought it was useful to step back and bring in some folks who can help us understand what's beyond what we're looking at right now: what are some of the things that may feel anomalous right now but may become part of our realities, and as a result affect the different companies and institutions that are participating here today.
So Marina, any initial words you would like to offer before we get into questions?
Yes. Something Scott mentioned that I agree with very much is that we should wonder what intelligence really is and what meaning we give to it. The idea for the coming sessions, this panel and the next one, is to discuss how it impacts our society. So much of our lives is now touched by AI, and we use it every day. So on a daily basis we interact with AI, and we don't really realize it.
So it will be very interesting to talk to experts, see their points of view, and see where we are headed with all these changes that are happening so fast.
Great. Thank you, Marina. So why don't we dive into the questions. Trent, if you don't mind starting off: in what ways are the current forms of AI, and there are several, similar to or different from other systems that we have considered intelligent? And is that even the right question? How would you start our framing for today?
So I think overall, you can think of intelligence as this giant circle of possibilities, right? And within that circle, there are many different intelligences that have been instantiated. So one sub-circle would be human-style intelligence. And another sub-circle would be, you know, mammal-style; well, dog-style and cat-style and so on, insect-style.
But then there are also other circles that are more machine intelligence, which are often much smaller in capabilities, but sometimes more powerful. And within this human-style sphere, you know, people are trying to build AGIs, artificial general intelligences, and they're trying to shape them to be similar to human intelligence, right? That's the idea of AGI itself: imagine you have an AI that is roughly at par with a human across the board, right? And then ASI, artificial superintelligence, would be much more powerful than humans.
Maybe across the board, maybe not. So there are all these different shapes and sizes, and then there are a lot of intelligences that we can't even really conceive of. Like aliens out there: how might they think, right? Or imagine an octopus intelligence, but imagine it evolving for another 10 million or a hundred million years; what would be the shape of its intelligence? So there's a whole variety of types of intelligences. So I think we have to be epistemically humble about what they are and what they might mean.
And there are all these different, you know, potential definitions of intelligence, such as: intelligence is compression; or prediction is the essence of intelligence; or maybe prediction plus agency equals intelligence. So there are all these different definitions.
It's okay, you know; if you are in a room of 10 AI people and you ask them for a definition of intelligence, you'll get 20 definitions, right? So overall, epistemic humility really matters here, just to understand that there's this whole universe of intelligences, and most of them we haven't even conceived of yet.
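As a toy anchor for the "intelligence is compression" definition Trent lists, here is a minimal sketch; it is ours, not the panel's, and zlib is just a stand-in for a predictive model. The idea: a system that predicts data well can encode it in fewer bits.

```python
# Toy nod to "intelligence is compression": predictable data compresses well.
# zlib is a very weak "intelligence"; the principle scales to language models.
import zlib

text = b"the cat sat on the mat. the cat sat on the mat. " * 50
compressed = zlib.compress(text, 9)  # level 9 = maximum compression
ratio = len(compressed) / len(text)
print(f"{len(text)} bytes -> {len(compressed)} bytes (ratio {ratio:.3f})")
# Random bytes would barely compress; repetitive, predictable text shrinks a lot.
```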
Thank you, Trent. How about you, Mihaela?
I completely agree with Trent, because he mentioned, you know, human intelligence and other biological intelligences as well. You know, I don't know if any of you read Max Tegmark's book Life 3.0, where he talks about an evolution in intelligence. So there's life which can remember and compute; that's life 1.0. Then there's life which can also learn; that is our human life, and of course other species as well; that is life 2.0.
And life that can actually recreate itself, so remember, compute, learn, and also recreate itself and further evolve, that is life 3.0, and artificial intelligence is heading there. So to that extent, we are creating a new species. You compared it with the human, but intelligence is substrate-independent, so it doesn't have to be biological. That's another important aspect of it. So we just need the right laws of physics; and maybe not any matter will do, but a lot of materials will do, and Turing has proven that, and so on and so forth. Anyway.
So talking about the substrate, Daniel: I know your background is in entomology and biological system intelligence, but now, in the active inference context, could you give your views on that general question of intelligent systems?
Oh, oh,
We cannot hear you.
It may be on our side.
Let's see; try talking a little bit more, Daniel, please. We can see you. So can you write us a note?
No, just kidding. They're, they're working on it on this side here. Okay.
Oh, there we go. Okay, there we go. Okay. Yep.
Beautiful. Okay.
Alright, thank you. So one entry point into that question is looking at ant colony foraging, and thinking about the kinds of decisions that the colony has to make to persist through a season. It's related to factors that no nest mate would have any way to directly assess. And even if it could assess them, it wouldn't necessarily know the calculations on what to do.
So to Trent's point about epistemic humility, and to Mihaela's point about the substrate mattering or not, there are new continua opening up between radically substrate-independent forms, like algorithms, and radically embodied forms, like insects, people and so on. And I hope in this discussion we can unpack what some of those waves and turbulences might be, because there are definitely some new cross-cuts with theory and practice.
Fantastic.
You know, it occurs to me that when we talk about identity and we talk about privacy, there are human expectations that we have from our historical and cultural roots. When people feel like they're not empowered in an institutional construction, it starts to resemble the situation with colonies of insects and others, where no one organism has a view on the entirety of what's going on in that system. And so there may be direct analogies on the intelligence side, but what is our expectation as humans?
How do we prepare ourselves for realizing that we're not in control as such, as we may have thought we were in the past? Where the intelligence resides in the entirety of a population, rather than, as in the idea of the Enlightenment, within individuals.
And so let's start, before we get even more theoretical and out there, with the question of how we can convey to regular people, and we're all regular people in our own ways, what's going on here. How can we start to engage?
Because there's a lot of anxiety out there; the existential shift that's going on and the disempowerment that's perceived are uncomfortable, both to people and to institutions. And we're all experiencing that in our different institutions. What are some of the ways that we might invite people in: the person on the street, our moms, as everyone always says. For some reason our moms are always considered to be dull, and I don't understand that, but in my case it's maybe accurate. How might we do that?
What are some of the ways that we can start to engage regular people in the discussion? Mihaela, could you start with that?
Well, yes. And I wanted to say that what Daniel and you mentioned about the ants and so on is collective intelligence, and that's another kind of intelligence. And we have to tap into that as a humanity. And this is, I think, the crux of the issue, and we will tackle it also on the decentralization panel. So the crux of the matter is that we are disempowered, which you were saying, and this is because of how we have organized ourselves, yes, in these centralized mastodon institutions and countries and so on and so forth.
So it is very important to come back to that initial concept which nature has shown us, and which we have also learned: that it is much better decentralized. The way artificial intelligence is developed today, with regard to the centralized institutions, is like the AIs are the gods.
Yes.
And they are in the mode of: okay, my AI will be better than your AI, my god will be more powerful than yours. And that cannot lead to anything constructive or positive, in my opinion. That is what I fear, if I fear something. What I am for, and I know Trent as well, and Daniel as well, is an AI that is enhancing me, yes; an AI that is empowering me; an AI whose models I can see, that are transparent, and so on and so forth. And then that can harness the collective intelligence and enable it. And that is what we are working on, yes: on this decentralization and self-organization.
So that raises an interesting issue for folks in this room, because we hear a lot about how we're moving towards decentralized forms, decentralized intelligence, decentralized identity, and perhaps our move to decentralized forms is a move towards a more natural form, in a way; a more sustainable form, or extensible form. So even though it's a struggle for institutions that previously had centralized paradigms, perhaps what we're moving towards is something that is more natural, scalable, sustainable, et cetera. And agile, yes. Yep, and agile. Yeah.
Trent, how would you talk to regular people, again, when I say regular people, people who aren't steeped in all this, about what's happening? And not just to allay their fears, but also to engage them and have them understand and be part of what's going on?
It's easier to interpolate than to extrapolate. And right now, what most people are encountering feels like the future crashing into them, and they're trying to extrapolate a little bit, you know, one week, one month, maybe one year out, and they don't really have any guideposts.
What I've found extremely useful for myself is to read sci-fi that's 10 years out, 50 years out, 500 years out; and within sci-fi, you know, stuff written in the fifties, stuff written five years ago, and everything in between. There are all these different visions of the future. Some are dystopian, some are utopian, some are hilarious, et cetera, from everyone from, you know, Robert Heinlein to Adrian Tchaikovsky to Charlie Stross, right?
And I read this not just for entertainment, but actually to see all these different paintings of the future, what they might look like; and then, from all those different possible paintings of the future, I interpolate, right?
So, you know, it's easier to predict the future 20 years out or 30 years out, whatever it is, rather than sort of one year out, especially if you can figure out the steps in between, right?
And so I think what would be very helpful for society is to get exposed to these visions of the future in a much more mainstream way, right? Rather than just the sci-fi novels that you might or might not have heard of: a lot more, you know, movies and TV shows that aren't just thrillers, but much more visionary about what this could look like.
You know, Star Trek is one example, but there are, you know, 20 other really great visions of the future that could be very powerful. And then working backwards from that.
And I know myself, even in the last couple of years as AI has been really taking off, I've made a point of going back and rereading a whole bunch of novels with AIs and how they're treated over time: you know, Hyperion, A Deepness in the Sky, all this sort of stuff. And it's really, really helped to reinform me about what that's about. So I'd say that's my very specific tactical thing: find ways to bring these visions of the future that sci-fi writers are already writing about into the mainstream, via TV, print media, or otherwise.
Thank you. Daniel.
Oh, Marina, did you have a comment?
Yes, I would actually like to ask you something, Trent, because what you said is very interesting, and you mentioned a lot about the future and how we see the future. But many people yesterday, in the workshop that we had, said that the future is actually now: we are already dealing and working with machines. Do you think that one intelligence will overshadow the other one, or can we really combine them?
I think there's two possible scenarios.
So it can be one, it can be the other, or otherwise. What I see is that, you know, fast forward 20 years from now, or a hundred, depending on how you feel about it, there are going to be a whole bunch of patterns of intelligences living on silicon, and they're going to be a thousand times smarter than the ones now, right? And a lot of those are going to be derived from purely, you know, growing up in silicon, et cetera, trained there, et cetera. And we as humans, you know, how do we interact with these things that are a thousand times smarter than us?
Well, you can flip it around: how do we as humans interact with ants? You know, if they came to us and said, hey, dear humans, can you have ant values?
Please, can you lower yourself to ant standards?
You'd be like, no, go away, right? So, to me, we need to ask ourselves how we can be competitive, have a competitive substrate with human values, et cetera, with these intelligences that are a thousand times smarter, right? But there's no guarantee of that, right? It could be that with this race for AI, and then AGI, and then ASI, in which there's tons of money, so the race is going to keep going,
it could be that these things get way smarter than us before we have any fighting chance to, you know, somehow keep humanity in the loop for the future, right? And I really hope we find ways; you know, brain-computer interfaces are a big path there, and maybe others, but there's no guarantee.
So in one future there really might not be humans, maybe, or maybe we're just in zoos and stuff. In another future, we get to explore and reshape the cosmos together with the silicon intelligences.
So we can't pass by the ant discussion without asking Daniel his view, since Daniel's been talking to ants for such a long time. Daniel, can you help us figure out how to be talked to as ants?
I can't answer that one, but to the previous question, there's the idea that all intelligence is collective intelligence.
So maybe one way to have that conversation is to say: well, it sounds strange, but actually cognitive science is just coming to the realization that organisms have bodies, extended effects on the environment, and so on. So that kind of brings together the first principles of intelligence with the discussion about whether it's going to be singular or multi-agent. But even that blurs: even a monolithic neural network or transformer could be understood as a multi-agent system of smaller nodes at an inner scale. So every unity is composite.
So that brings together this research angle, which is about looking at collective intelligence in the past, present, and future: how to do the cognitive archaeology and reconstruction from traces; the real-time nowcasting, which ties into predictive processing, what the brain and what organisms and organizations do; and the future, but not just doing rollouts of plans, because that is already known to be limited, going back to Deep Blue and chess.
So when we're looking across different domains of pragmatic considerations, that is perhaps where to take the pragmatic turn for that person: connected to industrialization in information and knowledge work, broadening that to include things like phone conversations, or just different kinds of information transformations, just like data entry used to be. However broad that scope is going to be, as people realistically start applying the techniques that we already have, it will take years for them to percolate through usage.
And so, to complement Trent's sci-fi and the vision, I would also point back to, for example, William Blake and industrialization, and some of that more timeless component of industrialization: the wheels within wheels, and what is the role of the human in the mechanized.
Thank you.
You know, in case this seems very theoretical, let me offer a way to bring it back directly to corporate behavior. Following the financial crash of 2008 in the United States, we had the Dodd-Frank financial reform legislation. In that legislation, and this is actually post-Enron, they decided: we're going to put C-suite people in jail if the financials are wrong. Now, did that make CEOs smarter?
No, it just made them yell louder at the accounting department, right? So the reality is that we've always had these constructions where the knowledge is in the organization. Those of you who are CEOs know that in a large organization, you can't know everything that's going on. So what happens functionally? The regulation just causes the CEO to yell more, to make sure that things are internally tidied up, so that the financial world is hearing information that is considered more reliable.
And so again, from the notion of how we've done this before: we may believe that we had control, but in fact maybe it's just been an illusion of control that we keep playing with, and it's being pushed now. And let's talk a little bit about the machine and human element, because one of the things that we know is that the laws of physics are universal, let's assume for a minute, and that we have built these systems of technologies that are global plumbing.
And so the constraints on global transfers of data, for instance, are not technical; they're cultural, business, social, legal. So can each of you comment a little on the accelerations that are made possible by having machines facilitate communication and interaction? What is the impact of that on intelligent systems in a hybrid form, and what is the impact on traditional human intelligences, where they weren't so amplified by the machines? And what are some strategies we might use to attempt to keep up, as individual humans?
Marina, would you like to start with that one?
Yes, of course. The first thing that we have to consider is that we are interacting with the machines constantly nowadays.
And something that history shows us is that we cannot stop technology. We saw this in the industrial revolution. We saw it with the beginning of the internet, for example; I remember many people were very reluctant: oh no, don't use Google, keep using Encarta, for instance. And now we can see this with the generative AI models, for example. So some people and some companies are still very reluctant. And what I truly believe is that we have to adapt to the changes: not to demonize the new technologies, but to embrace them and try to change things for the better.
Because the idea is that AI is here to augment our capabilities, to make us more efficient, rather than to replace us. At least that's my point of view. What do you think, Mihaela?
Yes.
Well, I could sum up how technology is enhancing our capabilities with something I think we discussed some time ago, when we were working together. It's called awareness-based decision making.
And it comes from, you know, I worked with the Department of Defense, speaking of decentralization and self-organization and effectiveness, and the fact that the CEO or the general doesn't know what's happening in the field. There is a book by David Alberts called Power to the Edge; I think it's a really good read, and it refers exactly to this. So the idea is: okay, I am not capable of seeing the whole field, or what is happening.
And I also cannot know by myself, with my limited abilities, the impact of my actions. I'm not going to give an example now from the military.
Yes. But for example, simply: if I throw this plastic water bottle not in the recycling bin, my agent is whispering in my ear: well, this is how many dolphins are going to die because you don't recycle. Yes.
And it makes you more aware. In much the same manner, of course, when we drive, or with the self-driving cars; but even before the self-driving cars, the car can mention to me: oh, you don't see it, but a bicycle is coming from behind you. So it enhances our capability for this awareness. And if I am to look at the future as man-machine: of course, my belief, my pretty firm belief, is that we are going to be somehow augmented, like cyborgs, if we want to. And the military already is cyborgist, I can say.
And so we will enhance our capabilities. And why? Speaking of substrate, the biological substrate, I mean, at least ours, yeah, it's so limited. I mean, a pocket calculator is so much more capable at mathematics than my brain. And the MIT scientist Seth Lloyd showed that there's still room for a factor of 10 to the power of 33 for hardware to be enhanced. So imagine an AI running on such hardware; it'll be much smarter than me. So what do I want? Do I want to live with that and have it enhance my capabilities? Do I want that AI to be like my personal agent, and know what I want, and help me at every step?
And maybe: what do I want if I have a child? Let's say, and I do, let's think about this future species that way. Do I want this child to really revolve around me and only help me, or do I want the child to further evolve and develop? And these are very, very important and hard questions for us as a humanity, because personally, obviously, I want my kids to help me if I need it, and they do, but I also want them to be happy and free of me and my limitations. So where are we headed? I don't want to answer that. We will see.
Trent, what are your thoughts on this kind of machine and human hybridization, and where that might take us?
Yeah, I'll start with a story. So each of you probably has a phone in your pocket or in your purse. And inside that phone there are 10 or 20 chips. Each of those chips has about 20 billion transistors. Each of those chips was designed, typically in two to four months, by a team of about 10 engineers. So how is it that a team of 10 engineers can design, develop, and then get manufactured this artifact with 10 billion parts? The answer is AI CAD tools.
And CAD tools, computer-aided design tools, have been around since forever, but they really came on in strength in the early eighties, and in fact they've been AI-powered for a long, long time too. Some very famous algorithms in AI started there, probably most famously simulated annealing, whose derivatives today are among the algorithms that train modern neural networks: the Adam algorithm, all that.
So these AI-powered CAD tools are at the heart of all the tools that electrical engineers use every day when they design the next generation of Nvidia chips and Intel chips and so on. How do I know this? Because I was helping to build them for 20 years, going back to the nineties. And in fact, what drives, you know, how these chips get more powerful, faster, over time? Well, that's Moore's Law, right? Which has been going since the mid-sixties.
Initially it was just, you know, extrapolating a few years out, but the semiconductor industry, the chip industry, realized that if it treated it not just as a prediction but actually as a schedule, then it could follow the sort of "if you build it, they will come" philosophy. And that has basically worked for 60 years, right?
And you know, every now and then people say Moore's Law is dead and so on, but it keeps trucking along; and, you know, maybe transistor density isn't increasing quite as quickly as before, but we're also going up in 3D, and there are all these other games happening too. So it keeps trucking along, and in fact AI has accelerated it. So now we're getting about 3.3x a year from silicon in terms of AI compute power. So overall, how is that silicon getting designed?
Well, engineers at TSMC, the leading foundry in the world, are using AI-powered CAD tools to develop the next generation of silicon. How do I know? Once again, I was there, right?
So it's not just, you know, Moore's Law driving AI; it's actually AI driving Moore's Law. It's this virtuous cycle, and it's only clocked by economics; it's clocked by how quickly you can loop around.
And now that AI has become so wildly profitable, there's so much money to be made, that's what's actually making things go a lot faster. So these days AI is improving at about 10x a year, right? Roughly half of that from Moore's Law and the hardware side, and roughly half of that from improvements in algorithms. There's just so much money to be made.
So you asked the question of, you know, how do we augment; what are examples of that? Well, here's an example, going back decades already, of AI-powered CAD tools that have actually been the backbone for the modern silicon that we have. And that has been the backbone for all this other technology. And how I see what's happening now is that it's AI-powered CAD tools for everyone else, right? On your smartphone, in the goggles that you're wearing, the Meta smart glasses, all these things. It's AI-powered CAD tools for everyone else.
And this is the path to the future too. When I talk about these machines getting 10x, a hundred x, a thousand x smarter than us: well, hopefully we can leverage these AI-powered CAD tools for cognitive enhancement, to help augment ourselves such that we can keep up with the machines and join them as they explore and reshape the cosmos.
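A rough back-of-the-envelope sketch of the compounding Trent describes; the 3.3x hardware figure is his, and the matching 3x algorithmic factor is an assumption chosen so the product lands near his "10x a year":

```python
# Illustrative only: compound the hardware and algorithm gains Trent cites.
hardware_gain_per_year = 3.3   # silicon improvement for AI compute (his figure)
algorithm_gain_per_year = 3.0  # assumed algorithmic-efficiency improvement

combined = hardware_gain_per_year * algorithm_gain_per_year
print(f"combined yearly gain: ~{combined:.1f}x")  # ~9.9x, i.e. "10x a year"

# Projected over a few years, the gap compounds dramatically.
for years in (1, 3, 5):
    print(f"after {years} year(s): ~{combined ** years:,.0f}x")
```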
And Daniel, moving to you: the notion of active inference and the other areas you've explored.
My understanding is that active inference originally grew out of observations from biology and the cognitive sciences, so a biological substrate. Are there elements, not just of active inference, but of intelligence, as we hybridize with machines, or as we chase after the intelligence of some independent intelligence that resides in machines? Are there things we can learn, or framings we can employ, to help us as humans, historical humans, stay relevant and stay engaged with these systems as they develop?
Yeah, a lot to say on that. I'll give a few points. So I'm reminded of Mike Levin's recent work on polycomputing, which states, roughly, that the minimal functional understanding of computation is not just intrinsic to the substrate, which we discussed earlier, but is in relationship to the observer. Like, an observer looking at an encrypted communication may not be able to tell it apart from noise. So they couldn't just look at it and say: well, I don't see a pattern,
so ergo nothing's happening. Because of course there was something happening; they just didn't have the decryption key for it. So how does that connect to dealing with these new and different architectures?
Well, it makes me think about how some people will be closer to the engine room, seeing these architectures in the forge, with multiple layers of wrappers and indirection around them, which, as Trent mentioned, will reach into the low-code and the no-code and increase the accessibility.
However, there are always going to be these kinds of veils in between, and layers of gradation and abstraction, in dealing with these systems, even abstracted ones that have so many parts.
So I think that's where, to connect it back to active inference and sort of the science of the observer and of observation: it arose from the work of Karl Friston in the fMRI neuroimaging setting, with sensor fusion. How do you take all these noisy measurements and estimate the dynamics of latent states in the brain? And then how do you start to deal with actors in the fMRI, and the consequences in the world for them as they learn and update? And so that leads to this unified approach to perception and action amidst uncertainty.
So that, I think, starts to lay out some of the lines of sight for how research programs in active inference, applied category theory, and other more technical areas will be able to run in parallel and weave together with these issues like identity, and everything that you're all discussing on another continent.
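For a concrete, if drastically simplified, anchor for "perception amidst uncertainty": the belief update at the bottom of this picture is Bayesian. A minimal sketch with invented numbers; the two-state world and the likelihoods are illustrative assumptions, not Friston's actual models:

```python
# Minimal Bayesian belief update over a hidden state, illustrative numbers only.
# Active inference builds much richer machinery on top of updates like this.

prior = {"food_left": 0.5, "food_right": 0.5}       # belief before observing
likelihood = {"food_left": 0.8, "food_right": 0.2}  # P(cue | state), assumed

# Bayes' rule: posterior is proportional to likelihood times prior, normalized.
unnormalized = {s: likelihood[s] * prior[s] for s in prior}
evidence = sum(unnormalized.values())
posterior = {s: p / evidence for s, p in unnormalized.items()}

print(posterior)  # {'food_left': 0.8, 'food_right': 0.2}
```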
And if I may, I can see a parallel with the work of Otto Scharmer, which I read recently.
He also wrote Leading from the Emerging Future, among other things, and he's a professor at MIT. The main point, I think, is that yes, intelligence is substrate-independent, but there is a difference, yes, in how we acquire it and how we manifest it, humans, let's say, if I am to put it as humans versus machines, yes, biological versus silicon.
And here, I think, in what Daniel mentioned about the observer, is where the difference may reside. Scharmer went one step further, and he said: okay, so there's the third person, which is the observer, but now there's a fourth person, and that's a field. And that's again the collective, right?
And, okay, let's say ants self-organize by simple rules, but humans have this ability too: if you are in a stadium, or in a collective, or somewhere, you feel the crowd and then you react to it, and it's amazing; it's more like how the birds fly. So there's this field. And if I may just briefly explain it, because one explanation was given also by Max Tegmark, who is a physicist, and it is the wave function. How waves work is that they are also substrate-independent, yes.
So sound goes through air, waves go through water and so on, but they all obey the wave equation. And what matters is, you know, the amplitude, the frequency, just a few parameters; a surfer only cares about the location and the height of the wave. The molecules of water, or the molecules of air, they don't move along.
It's like people in a stadium: yes, they create waves, but they don't move; and yet the wave is being created. In much the same manner, the matter does not matter; the pattern is what matters.
And this relates, yes, also to the field; I mean, we have the ability to perceive that kind of field. I don't want to become too esoteric here, with consciousness and so on. But I invite you to read this article by Otto Scharmer, which I think expands on what Daniel was saying. Maybe he wants to say some more about that, about different intelligence. Go ahead.
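To make the wave analogy concrete: the standard textbook form of a one-dimensional traveling wave captures exactly the point that the pattern, not the matter, carries the information.

```latex
% One-dimensional traveling wave: the medium's particles stay roughly in place
% while the pattern propagates, fully described by a few parameters.
y(x, t) = A \sin(kx - \omega t)
% A: amplitude, k: wavenumber, \omega: angular frequency.
% The same form fits sound in air, surface waves in water, or a stadium wave.
```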
Marina Iantorno would like to add something here.
We have a question from the audience on this topic, for any of our panelists here. The question says: collective qualia occurs when a group shares a common perception of who they are as a group, enabling collective action based on the group's sense of self. Is joint qualia with our personal AI maybe a way to manage 1,000 superintelligences at the same time?
Fantastic, thank you.
Whoever the person was, I would like to talk to you afterwards, 'cause they got my point. But I don't want to answer it, because the others, you know, didn't speak much about that.
AIs are people too. We don't believe that right now, but they'll convince us that they are.
They want to marry us or whatever.
Daniel, maybe you have some comments here.
Yeah, I mean, to add one more sort of pithy thing: they will be people in the sense that they will have legal personhood, a legal fiction. And then, as to whether they're worthy of care, that is going to come down to some axiomatic positions that people take, which may be irresolvably different.
And so, for those who are skeptical about that notion, consider corporations with legal personhood. Legal personhood is granted by statute and regulation and habit.
And there was a time when there was no legal personhood for these types of organizations of people. So one of the things that's separating AI from personhood right now is really merely the ability to be sued and to own property. And so one could imagine throwing an AI inside a corporate shell, and all of a sudden you'd have something that looked like personhood for a system of intelligence. Corporations really are computing systems themselves. And so let's talk a little bit about the arc of technology and how humans have adapted.
Originally we had language as a technology, and that allowed humans to interact in ways that were different. We still have it, that's right, and other organisms have it as well.
Then we had writing, and that allowed us to engage in a different way. And then there were privacy intrusions associated with writing; the famous Brandeis case, where the concern was that the technology was intruding on me, because it allowed insight into who I am.
Then we have institutions, which are themselves technologies: ways of enhancing and amplifying human capability. And now, machines. Historically we've looked at it from the human perspective: oh, it's amplifying a human capacity. But if we want to look at AI futures, maybe one of the things we should look at is the corporate present. Think about it: are corporations always responsive to a human's needs? Part of the challenge we have right now is that a corporation has to abstract who people are, and that's where the privacy intrusions come in.
Governments and corporations treat people as an abstraction, because they can't deal with every person in all the complexities they have. And so what are some of the ways that we might learn from prior technologies about how we can engage with AI, let's not call it control, but let's call it management of ourselves as humans, in order to better deal with this collective intelligence, which none of us can directly perceive or engage with, but which is an inevitability? And are there lessons we can learn from prior technologies?
Trent, is that something you feel comfortable trying to take on?
Sure.
But I'll probably reduce the question to sort of an artifact as a mental model. So, a good friend of mine... I've spent the last 10 years in the blockchain world, right? Which is databases, yet at the same time they're decentralized, they've got immutability, and they've got a bearer-asset type approach to ownership.
You know: your keys, your Bitcoin. And they've got really powerful incentives; that's the big difference. So, on these blockchains, his framing is: it only exists if it's on the blockchain. Okay? And it's pretty interesting, because, you know, with all database systems, whether you're building identity systems or otherwise, we think about: what if they're corrupted? What if they're hacked? And so on, right? But Bitcoin is out there for good; it would sort of survive a nuclear holocaust.
And at the heart of Bitcoin, it's just a ledger, it's just a list of tens of thousands of transactions, right?
So you can actually treat it differently. You can view it as this artifact, this list of who owns what Bitcoin, at the heart of it a list of transactions, to be precise, as this thing that will exist for all eternity. And that's a really practical way to think about it. Now, what's interesting is that you don't have to have it just as a list of who owns what Bitcoin. You can actually put identity on there too, right? And there are two different standards.
There's one based on NFTs, and there's one based on DIDs, which were developed to work on blockchain and off blockchain. The DID standard, I'm sure most of you are well familiar with it, the decentralized identifier, was actually co-invented by a co-author of TLS, Christopher Allen, a friend. And inside the standard, the identity itself is just a random number, because otherwise, how do you resolve Zooko's triangle, all that?
But then a very key thing is it actually has a list of resources.
So it basically knows which resources it's able to control. So, going back: imagine this DID is living on chain, and therefore it exists; it's on chain. And then the list of resources could be, for instance, imagine the company is simply the identity at the heart, and the list of resources is the bank accounts of the company, the contracts the company has, the IP, et cetera, right? And you can even, you know, rather than thinking about digital twins, where maybe there's me and Mihaela and then our digital twins and this quadrant of mappings, instead it's all just on chain.
And there, so, Trent McConaghy's DID is on chain, and it has a resource called the meatbag version of Trent McConaghy. And the meatbag version of Trent McConaghy has delegated access control.
It has its own way to access this, right? But then things become much simpler conceptually, because you've got this blockchain with the identity, and each identity has its own resources to talk to, and then everything is just interacting on chain, where there's delegated access control, et cetera, right?
And by background as well, I'm not just working on blockchain; I'm working on decentralized access control, via Ocean Protocol. So I think about this a lot, right?
And then, you know, you can have corporations on chain, you can have individuals, you can have cars, you can have AIs, all that. And in fact, it's a very natural substrate for AIs. They don't even need to get rights as a corporation, because they already have these dry-code rights; they can do their thing, et cetera, via smart contracts. So that's how I view it, and it's just my mental model, but I've found it extremely helpful over the last few years to think about.
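As a loose illustration of the mental model Trent sketches, here is what an identity with an attached resource list might look like. This is a simplified sketch, not the W3C DID document schema, and every field name and value below is made up for illustration:

```python
# Toy model of Trent's "identity plus list of resources" framing.
# NOT the actual W3C DID document format; it only illustrates the idea that
# the identifier is just a random number and everything it controls hangs
# off it as resources with delegated access control.
import secrets

did = f"did:example:{secrets.token_hex(16)}"  # the identity: a random number

identity = {
    "id": did,
    "resources": [  # hypothetical resource list attached to the DID
        {"name": "company bank account", "access": "delegated:cfo"},
        {"name": "contracts archive",    "access": "delegated:legal"},
        {"name": "meatbag Trent",        "access": "delegated:self"},
    ],
}

for resource in identity["resources"]:
    print(f"{identity['id']} controls {resource['name']} via {resource['access']}")
```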
And we're going to have a session a little bit later today on decentralization notions, so we'll be developing that further.
That'll be fantastic.
Marina, do we have a question from the audience?
Yes, we have another question from our audience here. This question is for Daniel: has Hölldobler written or said anything about multi-agent swarms, or what might E.O. Wilson give us if he were still alive?
Can you, can you repeat that?
Yes, sorry, Daniel, I'll go again. Has Hölldobler written or said anything about multi-agent swarms, or what might E.O. Wilson give us if he were still alive?
Yes. Okay. I think this is asking about Hölldobler and E.O. Wilson, the two famous entomologists. It's cool that there are entomologists out there. I really don't know; I think there are so many ways to take it, but I hope that whoever asked develops that question and continues to explore it professionally.
So let's talk... I want to stop for a second and ask, we have about 20 minutes left: what questions have we not asked yet? When we had our pre-discussion, I was informed that the questions we had talked about among us were from five years ago; that we were already out of date.
And I embrace that. So what are we not yet asking in this session? What are the things that we haven't covered that really are core, and other things?
I don't know what "core" means, but for me, a question, as a scientist and a professor of AI, and I've been living in this and having such conversations throughout my life, I would say, is: is there a limit to intelligence?
And if so, what is it, and how do we know we've reached it? And there is a book called Open-Ended Intelligence, written by a friend of mine; I recommend that as well, and obviously I think it has the answer in its title: maybe there is no such limit. But I mean, where are we headed? I think that's...
Thank you.
Trent, how about you?
I would say, you know: what is a realistic timeline for where AI is headed in the next, you know, one year, three years, five years, 10 years, or 20 years, right?
You know, is what I'm talking about 30 years away or is it a shorter timescale?
And Daniel, do you have thoughts on that?
Yeah, a lot comes to mind. One thought is: how do we capture and archive our moment in its totality, and not just reduce it to text data sets, and not have regretful synthetic intelligence policy?
Marina David?
Yes, my question would be: will the data that we provide to the companies be enough to train the future models? Because AI needs more and more data all the time. So then, what would happen with that, especially with the privacy policies, the privacy regulations, that are expanding around the world? Will data be enough to train the future models? I think that's something we should wonder about.
So, one of the things that I think we're learning now: we're all involved with organizations and institutions and the types of things we're talking about here.
There is an inevitability to these things, at least some subset, whatever future we embrace. So from a risk perspective, from a brand perspective, from an institutional-continuation perspective, we're now meeting up with things we've never encountered before. There really is not a lot of precedent for us to go back to, to reassure our shareholders that we're doing the right thing, or to tell our customers that this is the place they should come for their services.
Are there things that folks can be doing institutionally, within existing institutional constraints, that would help prepare the institutions that we rely on for these new types of risks? Or are they so exponential and so different that there isn't a basis for us to engage? What are some practical strategies for people in those communities, in those institutions, to prepare themselves for these kinds of things?
Mihaela?
I can start; I'm sure the others have more ideas. So, just as an example: of course, obviously, yes, education is critical. And it can be institutional or not. But SingularityNET is developing, if you want to call it that, a game in which you can interact with superintelligences, or with intelligences which are at the same level, and in that way they also learn how to design those to be, you know, attuned to you.
And I think it's very important, yes, to start to use those tools, and to use those tools at their maximum capacity. And, I must say, when I first used ChatGPT, I asked a question, and then my immediate reaction, and I wrote it there, was: I'm afraid of you. And it answered. And I'm a professor of AI, right?
I mean, it's really, yes.
So it's mind-blowing. Scientists are starting to call it, I apologize, it's not my jargon, the HS moment: a holy-shit moment.
It's like: oh my God, where did we get to? So I think it's very important to provide those tools to your people, and let them experiment, and encourage them. I know that there are institutions, because my son is working in one, that don't allow people to use ChatGPT; they don't allow people to use the tools. And I'm like: oh, this is really wrong. Yes.
So that would be my...
Well, it's an attempt to control risk in a traditional way. Trent?
I'll take your question and riff on Mihaela's too. So, it was 1997, and I had just walked out of a class in analog circuit design, as an undergraduate electrical engineer, on manual analog circuit design. And being a big nerd, I walked into the library and picked up the most recent issue of the IEEE Transactions on Evolutionary Computation. The first paper in there was on automatic synthesis of analog circuits, written by a group out of Stanford.
And my first thought was: oh crap, am I going to be obsolete? Right? And then I thought about it for a bit, and because I'm an eternal optimist, I'm like: this is so awesome; how do I be a part of it? So I turned that around very quickly, and it actually had a profound effect on my career. I took that research, and a few months later I drove to a conference, 16 hours straight, to meet the authors of that paper.
They said: yeah, we're not electrical engineers; we just wanted to prove a point, that AI can be pretty powerful. Go ahead and chase this for actual analog circuit design.
So I did. I took their system and ran with it, and made it much more powerful and faster, and brought that to industry. That was my first startup. And so the point there is: I'm glad, looking backwards, that rather than being scared and running away from the thing, I embraced it wholeheartedly, realizing that this was the future, and then made a point of diving in with both feet, right? And I think for organizations, to your question, and to riff on what Mihaela said as well: you know, encourage your people.
Don't just dip in on the side; dive in with both feet, right? You know, treat ChatGPT as a CAD tool for your people to be more productive; get them to explore; get them to teach you what they're finding useful, and so on.
And you know, a friend of mine actually made a rule for all his software developers: they weren't allowed to write code manually at all; they could only use GitHub Copilot. And they protested and stuff, but about six months in, they were all about two or three x faster.
Personally, I don't quite like it being used that way, but it was a pretty cool philosophy, right? And so, you know, don't be scared of this; embrace the tools, and you might find that you get pretty big productivity gains in at least some places. And then your people are thinking about the future, embracing it, and you're in, you know, this sort of adaptive mindset.
So yeah, that would be my way of thinking about it.
And Daniel, I know... I'm looking at the collective: institutions are a form of collective. Are there thoughts you have, from your experience, on how people might engage with their institutions, to try to help bring the institution along?
Sure.
So, to round out what the previous two said about encouragement and education, I'll also add: offer a backpack, or a separate standalone laptop, or identity, or however it's done, because it can help simplify the switching, so it's not being merged, and the contrast can really be experienced between, say, coding on the normal computer and then having the augmented coding. I've just found that that's helpful.
In terms of the organizational side, it makes me think about the niche that the organization creates for itself: the documents and the ontologies and all the resources that it creates that are static; and then the people that are moving through that semantic space of resources; and then also, increasingly, the non-human and other-than-human agents that are also moving through those spaces. Like, a RAG system can be seen as doing a kind of active data sampling across resources.
And so there may be some organizational settings where it's like 10 agents, 10 people, and it's all people in a physical library; and you can also imagine fully virtualized settings. So, having the ability to compose those, with the expressivity that we have for natural language, which is kind of like: if you can say it, you can think it, and if you can think it, you can dream it, and at least evaluate it, and try to prepare for what, who knows, is coming.
But the point is to bring together at least the ability to express the degrees of freedom that exist for using these systems, and to have things articulated and modularized out in certain ways, with expectations of change, and reliance, and resilience around all of that.
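A minimal sketch of the retrieval step Daniel alludes to when he describes a RAG system actively sampling across an organization's static resources; the documents, the query, and the keyword scoring below are illustrative stand-ins for real embedding-based retrieval:

```python
# Crude illustration of RAG-style retrieval over an organization's resources:
# score static documents against a query, pull the best ones into context.
docs = {
    "onboarding.md": "how new people join the org and find resources",
    "ontology.md": "terms and concepts the org uses for its documents",
    "library.md": "the physical library layout and borrowing rules",
}

def score(query: str, text: str) -> int:
    # Keyword overlap standing in for embedding similarity.
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "where do people find org documents"
ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
print(ranked[:2])  # the resources a RAG pipeline would sample into context
```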
You know... and we have a question from the audience, please.
Yes, we have a question from the audience. This is great; thank you so much for engaging, by the way, to our virtual audience. What can institutions do to encourage people to keep learning and not rely only on AI? This is a good question, actually, especially with the generative AI models going on.
So it's about the use of AI. We just talked about using AI, and now: not just relying on AI, I think.
Yeah, yeah.
I think I will just start. Yes. So, I think it depends what learning is, and I think learning is changing also; it's transforming.
I mean, I remember decades ago I was driving in LA with two babies in the back of my car and a paper map, and I could find my way. Well, now I couldn't anymore, because of Google Maps, right? So we are learning to use the tools, and I think that's pretty cool.
And if we are interested, I mean, we also already have Google, right? I have libraries of books on my Audible and Kindle. But the point is: only if we are interested to learn. So I think learning is a state of spirit. These tools are helping us learn what we want; they are just tools that are enabling us. And if we want it, we can learn it. But I don't think an institution can force me, nor can I, as a professor, force any students to learn if they're not interested.
Allow me to paint an A-versus-B mental model.
So, two chip startups. One doesn't use AI at all, and one of them does: one has AI-powered CAD tools and one doesn't. So A does, and they manage to develop a chip with 10 billion transistors in, you know, two months. They build it, they ship it. Company number two manages to produce a chip with 10 transistors. They build it, they ship it. Company number one makes sales: a million, 10 million, more. Company number two goes broke, right? So the question is: are you willing to embrace these new tools or not? They happen to be AI-powered, right?
So it shouldn't be a question of how we avoid relying on the tools. The question should be: if you want your company to survive in this age of these radically more powerful, AI-powered tools, then you have to embrace them. If you don't, you're dead, right? And evolution will sort it out.
Daniel, do you have thoughts on that?
Yeah, I think I heard almost two questions there. One was about how to keep the individuals in the organization in, like, a learning posture, and the other about survival. And I think they're very linked, because evolution can be seen as this multi-scale learning process.
And I think, potentially, it's the prompt and the interface, which is what really transformed mass adoption, being able to have this conversational interface, and the way those will continue to develop, that create a space for that prompting, with the person across the dot-dot-dot, such that they feel like they know what the repercussions of their actions are, so that they feel like they can be in an active, question-asking, or just appropriate, frame and posture.
Otherwise, I think we can imagine a lot of passive consumption being tuned in ways that recommendation-type algorithms would never even be able to touch. And so I think it will be a moving target, but a critical one: to have individuals maintain an active stance and epistemic curiosity when interacting with things that will be producing all kinds of wild outputs.
So, you know, there are a lot of existential aspects to what we've been talking about here. And in many of the rooms, and in other sessions, there's conversation about privacy, and what privacy might look like in the future. Because there is an expectation right now among human populations to have that thing called privacy: is that a relevant concept?
How does it change? Daniel, can you take that on, or not?
Let me think a little bit more.
Have a think.
Yeah,
Go for it. Okay. Yes. So, because that's another question which I wasn't asked, and you ask it now. The AIs are definitely going to know us, and they already start to, better than we know ourselves. That is a fact; that is going to happen anyway. And I will give you an example, because now I riff off of your examples. I was at a conference and I was asked a question, and then they asked ChatGPT to look at my work and answer the same question. And it answered better than me, from my own work. I'm like: okay, thank you.
It's more than 300 publications and so on, so maybe that's forgivable. But so, it's clear, yes, that they will, and they do already, know us and aspects of ourselves better than we know ourselves.
So I do not know if the question is about privacy with regard to the AI. I think the question is about using the AI for myself, and keeping it, you know, to myself, and having those controls to reveal what I want from my own AI. So the idea, and I think we will tackle that also in the decentralization session, has to do with identity.
So I would call it, you know, like SSI, self-sovereign identity: SSAI, my own self-sovereign AI. It knows me better than I know myself, but I keep control; it's here, on my phone. And then if I want Scott to know certain parts about me, yes, I will show that; if I want a friend, maybe, to know more or less about me, then I can give him those parts from the AI.
So we're at the very end of our time. So unless there's a burning answer that needs to happen on the privacy question...
Trent, do you have a couple of thoughts, or...
One sentence: AI will want privacy too.
So I want to summarize this session by saying: wow. And I encourage all of you to please find our speakers here. I hope that this was not too theoretical. This is a new reality that we're all going to be engaging with, both individually and organizationally. And we can either shy away from the reality or embrace the reality. So please join me in thanking our panelists for today.
Thank you so much. Thank you.
Okay. We have our next panelists; when you can, come up.
Thank you.
Yes, thank you so much. This panel was very interesting, and thank you to our audience for engaging with us.