Yes, we will now introduce the panel, and I will allow your question afterwards, because it is actually related as well. We are talking about transparency.
Please, Wenjing, we invite you to sit here, and please join me in welcoming Mihaela and Trent again, who were with us this morning, and Richard for this panel. In this panel we will talk about decentralization. Is decentralization the best way to achieve transparency and verifiability in AI? That's a good question. Thank you and welcome again.
And while the panelists are coming up to the stage, I think it was interesting, I heard that you started with coal, Wenjing, because I read recently, though I haven't confirmed it, that when you do an AI query, 20% of the energy associated with answering that query comes from coal at present. So it's kind of interesting and ironic that we're moving to that.
Well, thank you. That was a fantastic presentation.
And, you know, it's interesting because, in a sense, from our earlier presentations today, you talk about the socialization of technology, and it feels as if the technology has become so intimate now and so connected that there's almost a mutual socialization going on: technology of us, and us of the technology. And I'd like to ask the question, starting with RJ: what can we learn from the history of decentralized knowledge systems that can help us build and effectively manage or deal with decentralized AI-type systems? Mihaela, could you? Thank you. Okay.
All right. Well, I could spend 30 minutes answering that.
I think, so I've been working on applied knowledge management and thinking about the history of knowledge management, not corporate management of information, but how we act as a synthetic intelligence in managing information, at civilizational scale, down to organizational, down to individual. And I've been working on that for 20 years now, which means I either started very young or I am a very deceptive 45. So I think, you know, I think we can see, one, that requirements have always kind of been in the same set of categories, regardless of the tools used, right?
We need to be able to collect, to set requirements for collection, to summarize, to disseminate, right? These structures have always been the same. And now we're just pumping the gas really, really hard, if that makes sense.
Sorry, we have a question from down in the audience. So then, yes, exactly. Do other folks have notes on that idea: is there something we learned from earlier knowledge systems that we can apply towards dealing with, again, the mutual socialization that's happening now with AI? Is that a question that any of the three of you would like to answer as well? I can add in maybe a very non-tech anecdote, and it is a question I think about: what is agency?
And if we outsource decision-making to another party, no matter what that party is or how much control you have, you've lost quite a bit. You lose quite a lot sometimes, right? I have respect for, you know, all the doctors and lawyers and everybody, but when you delegate, you always lose something. And this is the dilemma we're in: with AI systems, we are really delegating a lot of the decision-making or work to these systems. And they're going to make decisions that are not exactly what we want, because there's a communication barrier, even if you have full control.
So there's complexity involved in delegation; it's never complete, and the cost is very high if you want to complete it. These are really external intelligence systems, they're like aliens, and they will never quite follow what you want. So we have to learn to manage that, just as we learn to deal with, you know, maybe a business client who doesn't speak your language, for example. In this case, luckily, AI learned our language, maybe too well, but they really learned our language. So be careful: we should learn their language too, otherwise they will start to manipulate you.
And before we take the question, I just wanted to comment on that. So I work with the IEEE agentic AI workgroup, and one of the tasks they gave me was: what are the general agency constraints we should put on agentic AI? And as a lawyer, I went to the precedents, and there's uniform agency law out there, which sets the expectations of human-to-human agency: all sorts of elements, and sub-delegations, and the limitations, and the cancellations, all those things. As you go through it, it's fascinating, because it becomes a catalog of paradox, right?
And so because our expectations from the human constructs don't all map directly, it's very interesting that it is a mutual socialization that's going to happen. Trent, do you have a thought on that? I think overall, right, if these entities are, you know, we're on a path for ASI to come as soon as 2027, right? About 10x improvement a year for the next few years. And maybe just to set context, right: about five years ago we had AIs at the level of a kindergartner, right, a GPT-2. About three years ago we were at the level of a grade 2 student, a GPT-3.
About a year ago we were at the level of a grade 8 student, right, a GPT-4. And in the next six to 12 months we'll probably be at the level of a, you know, first-year university student, GPT-5, roughly, right? And then it's going to keep going, right, every, you know, 10x a year, right? Thanks to Moore's Law, thanks to AI algorithm improvements, and thanks to other sorts of tricks that people put in.
And so pretty soon, right, 10x, 10x, 10x takes us, you know, from university student to, you know, wizard sage programmer, or professor emeritus, to someone smarter than anyone we've ever met before, to a thousand times smarter than anyone we've ever met before, right? And this is all happening soon, right? We're not talking 10 or 30 years from now. It's happening soon: three years, five years, seven years. It's really hard for us to comprehend. Our brains think linearly.
In a sense, the conversation today shouldn't just be about how we deal with the GPT-4s of the world. It's how we deal with the GPT-8s of the world, which will have agency and could be a thousand times smarter than us. And then we can ask how we control these guys, right?
Well, who here has watched Avengers 1? Raise your hand. Right? Do you remember the scene where they put Loki into this super fancy prison, right? Loki is Thor's brother, right? He's super sneaky, super smart, right? Within two minutes he had escaped. He tricked Thor, right? And if you read books, for example, about hackers, they are basically making Swiss cheese of security systems. Why? Because the weakest link is humans themselves, giving away the keys accidentally, all of this, right?
Phishing, whatever, right? So we can think all we want about how we're going to manage these systems a thousand times smarter than us, but it ain't going to happen, right? So instead, we should ask how we cooperate with these people, these AIs, whatever we want to call them. If you want to call them people, they'll want equal rights. I mentioned that, right? So we just have to be realistic about this. Let's not design for the today. Let's design for the tomorrow.
But again, before we take the question, let's complete this question, RJ and Mihaela. Oh, sorry. Either way. I think I haven't said anything so far. Okay. My human rights, as I was saying. So thank you for setting the context, because initially I didn't want to say anything to your initial question about knowledge management.
I'm like, what is that? Some decades ago, I was at the University of Calgary, as I was just telling Trent, and I was working on an expert system with knowledge elicitation from experts, right? And at the time, it was a pain. You go into the surgery room and ask the surgeon what he's doing and how, and so on and so forth.
And now, we have all that knowledge. We have access to it through, you know, GPT and LLMs and so on and so forth. So the questions are radically changing, right? So it's about, okay, a system which can discover things, Nobel-level kinds of discoveries, ten of them per day. So what do we do then? And so I just wanted to maybe ask you, Scott, I'm sorry, to come back to you. To rephrase the question from that future: is it about management, or is it about co-creation maybe? And how do we work with that? How can we benefit from that?
How can we stimulate that to make more discoveries, or maybe to stimulate us to co-discover? I might actually answer. So I think it's not just co-, I think sometimes we hear AI and it's like one big thing, right? And I feel that a lot. It's just like this one megabot matrix, even in the way we talk about it. And what you're getting at is cooperation between us and them. Not just us and agents, but us and a variety of ChatGPT agents, right?
So when I say knowledge management, I'm thinking in terms of, and that's why I said, not like a corporate sense of enterprise knowledge management, but how we engage in trade in information, right? So something I think about is that the Roman army, for as much as they're known for engineering prowess, had more information specializations than they did engineering specializations.
And it had to do with structuring and making sure that different groups that spoke different languages, that could or couldn't write, could all trade in information in a way that allowed, you know, record keeping, dissemination, intelligence synthesis, et cetera. So, when I think about that cooperation, what comes to mind is the notion of requisite variety. That is: the variety in the mechanisms of regulation and control and management and coordination in a system needs to be roughly proportionate to the variety of states that that system is capable of expressing.
And information and knowledge have infinite variety. Intelligence has infinite variety, and not just infinite variety, but infinite-to-the-infinite variety, because there are infinite potential valid states, and infinite ways to validate those states, and to validate the validation, right?
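[Editor's note: the principle invoked here is Ashby's Law of Requisite Variety. A compact statement in entropy terms, in our notation rather than the speaker's:]

```latex
% Ashby's Law of Requisite Variety, entropy form (editorial notation):
% H(D) is the variety (entropy) of the disturbances hitting the system,
% H(R) the variety of the regulator, H(E) the residual variety of outcomes.
H(E) \;\ge\; H(D) - H(R)
% Only variety in the regulator can absorb variety in the disturbances:
% to keep outcomes E tightly controlled, R needs at least as much variety
% as D. If H(D) is effectively unbounded, as the panelist argues for
% intelligence, no fixed regulator suffices.
```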
So, yeah, I think it is very much about how we cooperate with these structures. And as a last thing, and then I'll shut up: I think that cognition is very expensive. We're very rarely at the wheel in terms of our own, you know, cognitive security. Cognitive control is fleeting and mostly illusory.
So maybe management of AI is the wrong word, because we can't manage our own intelligence; how are we going to manage this? Yeah. Let's go to the question from the audience, please. Thank you. This is a great philosophical discussion, so I hope you can bear with me on a financial question. I come from the blockchain space, and we have been able to distribute the value chain of services across multiple different participants through the economic incentives of tokens, gas fees, and mining.
With generative AI, I've been told that there may be only around four companies that have the resources to train models. And I've even heard the training can cost $100 million; I don't know if that's a correct number or not. But if that's the case, then how can decentralized AI models be designed so that there are incentives for a distribution of, essentially, all the inputs that go into creating generative AI models today? I'll answer this in two parts. First of all, does anyone know what the largest computer in the world is? Anyone? Anyone? Bitcoin. It's not just the largest computer in the world.
It's the largest computer in the world by far. Yet it was only written by like one dude or maybe a team of people, you know, and just released via a mailing list. But with the power of incentives, like you hinted at, it grew to a computer that has more power than anything else in the world.
Now, in its case, it's not doing super useful computations, just hashing. But it's securing the network, which is actually very useful. And in that sense, it uses way less compute and energy and money than the world's banking systems to provide that security. And it's ultimately securing hundreds of billions of dollars of value. And that was just with the power of incentives. And so you can ask yourself: what if we did proof of work, but rather than just barely securing a chain, we somehow twisted the incentives to incentivize people to build super powerful models?
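[Editor's note: for readers unfamiliar with proof of work, here is a minimal sketch of the hashing loop Trent is describing. It is deliberately simplified: real Bitcoin mining applies double SHA-256 to an 80-byte block header against a numeric target, but the incentive logic is the same, whoever finds a valid nonce first earns the reward.]

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that sha256(block_data + nonce)
    starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # proof found; in Bitcoin this wins the block reward
        nonce += 1

# Each additional zero makes the search roughly 16x harder, which is why
# the network's aggregate hash power dwarfs any single computer.
print(mine("some block data", difficulty=4))
```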
And there are actually quite a few approaches to this. I'd asked this question initially in 2013, and finally went for it in 2017 by founding Ocean Protocol, right, and at the same time helping to kickstart a field called token engineering, which takes mechanism design and incentive design and recasts them as an engineering discipline, working with Michael Zargham, whom I work closely with, actually, and others. And we really ran with token engineering, and then thought about how we decentralize AI towards these generative models.
And in the meantime, the centralized AI folks went super far, super fast, like impressively fast. Right.
So, you know, fast forward seven years: there were three OG projects in blockchain in 2017, Ocean, Fetch, and SingularityNET. And we all started asking ourselves, like, crap, we've moved too slowly compared to what's going on in centralized land. What do we do about it? So we put our money where our mouth is and merged our tokens, right? Ocean and Fetch and SingularityNET have merged to become an eight-billion-dollar entity, basically, where we're going for it. We're chasing. We're basically in catch-up mode right now.
But now we can deploy serious capital to actually have a fighting chance at decentralized large models, et cetera. And the reason we have a fighting chance is the superpower called incentives.
And, you know, it's interesting: I thought about those four companies as a form of colonization of information space. And it's interesting, the notion of a non-colonial power being able to be assembled in a new space. Yeah. And it really aligns a lot with the identity notion. For years we've been talking about decentralization. What's that going to look like?
What we're seeing in technology may be the liberating force to actually make it happen, because even though we've been talking about it for years, it's been hosted on the apparatus of large, powerful centralized structures. We do see in commercial space that decentralization is also happening.
Remember, at the end of the day, all these algorithms we're talking about require a physical processor to run. And we now all have a pretty powerful computer in our pocket that keeps getting better. We also see a renewal of interest in more powerful laptops and PCs, which you can actually own and run yourself. And we see a lot of effort to get the know-how of AI into the public domain, so there are a lot of open source projects.
Llama 2, for example, is wonderful. There are many; there are some in Europe as well. So, just like the AI, this software is learning our language. They speak English very well now and can do quite nice things. We should learn their language too. I don't think we should be thinking about somebody giving me a so-called decentralized thing. I think we need to go acquire them. Other folks?
Marina, do we have a question? Yes, we have a question here also from our audience. Once again, thank you for engaging our audience online. And I would like to mention that if someone here in the room has a question, please feel free to raise your hand. The question is: with all the data that we are giving to the companies, how can we be sure that the algorithms that they are using are transparent and verifiable? It's a good question, actually. I can talk a little bit about that. If you were here earlier for my presentation, there was this little diagram about understanding how the sausage is made.
So we go through every single step: what information gets into it, etc. And we do give a lot of information. Not all of that information is very useful for AI training. AI training actually needs more high-quality data; you've heard about garbage in, garbage out. Some of it is not very useful, at least for foundation model training. But it is one of the factors: how the data goes in somewhere and becomes a model, and how the model is tested. And all of that, I think, is a very valid place for us, as a society, to sort of understand how this is made. Is it being made safely, fairly, etc.?
So those are all very good. But I want to caution on this notion of controlling input data.
Because, remember, we tend to think about data as the data we sort of type in, so-called structured data. If we type something into a form: oh, I'm releasing data.
Now, I'm sitting here talking, releasing I don't know how many gigabytes of data. All of this is data, and all of it is very useful for AI training. So let's keep in mind that, at the end of the day, we cannot control all data. We may be able to control a very small slice of it.
So, on that note of caution: first there was naming instability, nomenclature instability, in how we reference an object, right? And we solved that with location addressability: the name of the object may change, but we know where it is. And then we can change it with, you know, version addressing, etc.
And then, okay, what happens when location becomes unstable? When our organizations are trading in information and we have different locations for the same object, and we're using it twice, redundantly, then it becomes content addressability, which is hashing, in addition to a variety of other mechanisms. But then we have content instability: we're trying to refer to what you just said in your last response, but from this angle, and from that angle, and from somebody's cell phone video. And we want to recognize that we're actually looking at the same thing.
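[Editor's note: a minimal sketch of the content addressability step, using plain SHA-256; production systems such as IPFS layer chunking and multihash formats on top of the same idea.]

```python
import hashlib

def content_address(blob: bytes) -> str:
    """Derive the object's identifier from its bytes: identical content
    yields the identical address, no matter where it is stored."""
    return hashlib.sha256(blob).hexdigest()

# Two redundant copies held by two organizations resolve to one identity.
copy_a = content_address(b"quarterly intelligence report v3")
copy_b = content_address(b"quarterly intelligence report v3")
assert copy_a == copy_b

# But content instability breaks this: the "same" thing captured or
# phrased differently hashes to something entirely different.
other_angle = content_address(b"quarterly intelligence report v3, reworded")
assert copy_a != other_angle
```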
Now we have to go to, I think, question addressability, and we need modes of common reference. And I think there are a lot of discussions going on about this right now: how we establish those common reference systems for things where it's subjective whether or not we decide it's the same thing. And a good example of this is record keeping for museums.
So one case is that we have the same object, but we disagree on fundamental attributes about it, yet we need to have common reference, even though what the canonical name of this object is may be fully disputed between, say, a British museum and an Indian museum. And there is no reconciliation, but they still need to be able to structure it. So there's that piece. And then second is the fact that truth isn't scale-free. We can pinch, tear, and zoom. So is gravity real as it's been described? I have some friends who are physicists who say no.
But I do not want my architects to be confused about the nature of gravity and how it's applied. So I think there is a piece of this where it's not just the verification of the data, but mapping to what context we actually want it as training data. I think that transparency and verifiability and unbiasedness are losing games. You can improve things a bit, but you'll never get that far, and it's not going to help that much in the end. And we have to be honest with ourselves about it. It's very politically incorrect of me to say this, but I have to be blunt and candid about this.
So let's give some examples. If you have an AI model that's 10 layers deep with, I don't know, 10,000 weights per layer, you can look all you want. You can see the exact floating point value of every single weight. You can draw pretty pictures of what's going on, and maybe if it's a CNN, you can see some images and such. But overall, you're not really going to know what's going on. So it's not going to be that transparent in terms of verifiability. AI is explainable, just not in a language we understand.
Well, yes, but it's not in a language that matters at all to anyone, in my view, right? So let me elaborate here. Overall, that's an example for an AI model. And then in terms of verifiability: sure, you can verify that you have clean data going in, but if it goes through something you don't understand, then it doesn't really matter, right? Clean data in, through something you don't understand, means you don't understand the output anyway, right? It's a black box. And then in terms of bias, too: you'll never get to a system that every human will agree is unbiased.
And let me give examples of all three for humans: transparency, verifiability, and bias. For transparency: we don't see what's going on in our brains, yet we trust ourselves, right? We have friends. We have no idea what's going on in their brains, for sure, but we trust them if we know them well, right? So transparency and trust are two different things. Or if you're driving a car, maybe someone knows how the motor works, but you don't, and you trust it anyway, right? Okay? There are lots of things we really don't know, and the human brain is the best example.
For verifiability: we definitely have no mathematical proofs or numbers for how our brain works, but it works, right? It's an existence proof. That's it. That's all we need. So we don't need verifiability of neural networks from fancy math theorems. It doesn't matter. And in terms of bias: ask one religion to agree with another religion, and they will agree maybe 20%, maybe 50%, right? Or any two humans, right? We can agree on a few basic things, like don't kill people, and a few others. But after that, it's wide open. We can disagree on 80% of things.
And then imagine that a government stipulates, hey, you need to have an AI that is unbiased, like the European government just passed. It's actually impossible to comply with, because one person might say it's good, and another might not, right? So I think we need to get off our politically correct high horse and think about what's actually practical and what the real problems are. There's no bright line between bias and expertise. Thank you.
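[Editor's note: returning to the 10-layer example above, here is a small sketch at the speaker's hypothetical scale (the network and sizes are illustrative, not any production model). Full access to every weight still tells you essentially nothing about behavior.]

```python
import numpy as np

rng = np.random.default_rng(0)

# The speaker's hypothetical: 10 layers, 100 x 100 = 10,000 weights each.
layers = [rng.standard_normal((100, 100)) for _ in range(10)]

def forward(x: np.ndarray) -> np.ndarray:
    """Push an input vector through the stack of ReLU layers."""
    for w in layers:
        x = np.maximum(x @ w, 0.0)
    return x

# "Full transparency": every one of the 100,000 weights is inspectable.
print(layers[0][:2, :5])                      # exact floating point values
print(forward(rng.standard_normal(100))[:5])  # ...yet staring at raw floats
# does not predict what forward() computes for a given input; that is the
# transparency gap being described.
```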
Yeah, so, just because this is also the title of our session, right? It is about verifiability and all that.
And, you know, when I read the title of our session, I remembered the keynote yesterday, the first speaker, which was Mr. Kuppinger, and his title, and he changed it. So I was wondering, I think we also should change our title to, you know: is decentralized AI better than centralized AI? Because what Trent is saying is actually that the title of our session is a moot point.
Yes, there's no point with the verifiability. However, on the point of verifiability, I have to say something. There are two concepts: verification and validation, right? When it comes to AI: intelligence is the ability to achieve complex goals. And what are the goals?
Yes, kill all humanity, or enable humanity, each and every individual, to flourish according to their abilities. Okay. In order to know that the AI is on the path to achieving those goals, we can do validation.
That is, validation is related to the goals. Verification is one step below: once I have set the goal, is the system doing the operations correctly in order to achieve it? Checking the operations is the verification of the system; checking it against the goal is the validation of the system.
And again, Max Tegmark has a lot of work on that, and I invite you to read that work on verification and validation. Sure, yeah. I would add: there's a philosopher, Chandramaneeva, who said, where my language ends, my world ends. And almost all important problems are terminology problems. And so with that notion, right, I think you can see where the debate is going. We need to readjust what these words mean. What is verifiability? What is understandability? It used to be that people insisted: explain this to me.
Well, I can print out all the parameters for you to read. What does that really explain? Not much. And I would say any intelligent system is fundamentally not explainable; otherwise it wouldn't be intelligent.
That follows from, you know, complexity theory: it is not reducible, right? And so we have to accept that and then come to a new notion of explaining.
You know, it's like a person explaining, and this is the work we need to do: making them explainable in that fashion. But there's one thing we can talk about, decentralization, which is about the system. We don't want a matrix; I hope we all agree. We don't want one single system that happens to know everything, where no one knows what's going on inside.
And that, I think, is very fundamental. There are two things that I wildly agree with.
One is, and this maps to what I wildly agree with in what you're saying, not wanting the matrix. There is so much talk about removal of bias. As I said before, really quickly: there's no bright line between bias and expertise. And I think we should want bias, just not in the way that people are using the term. There needs to be variation in terms of priorities and goals, and there shouldn't be convergence across AI agents on the same answers. That's bias too, right? I want a certain agent that's biased toward representation in math.
I want another agent that's biased toward representation in, let's say, legal engineering. So we want that bias. That avoids us converging on, you know, one giant, you know, we already tried single-vendor lock-in with information during the Middle Ages. It didn't work very well, and I'd rather us not do it again.
So yeah, I think it comes back to an ontology problem, because what we're really looking at is, you know, how this thing achieves its goals, its intelligence, under what conditions, and what its reachability is, which is something we're able to do with black boxes. We write requirements. So why aren't we talking more, and I notice this in conversations about AI safety, why aren't we talking more about writing requirements around the black box rather than trying to verify what's in it? That's what we do with any other system that's represented as a black box.
We, you know, write and validate requirements, though I guess it's hard with natural language. I'm sorry, we are already at the time to finish the session. I know that we could keep the conversation going on and on, but I would like to say thank you very much. All your words were very interesting.
No, like, I think that one is not correct. It's not correct. Yes. Can I add one final sentence? There are great USPs to decentralized AI; verifiability and unbiasedness are just not at the top of the list. Thank you so much.
Yes, thank you so much. It was really, really interesting. I believe even more time would not be enough to finish this topic. Thank you so much.