Welcome to this second part of the sessions. I really hope you had a nice lunch. As our first session for this part of the track, we will have a very interesting discussion panel, "From Products to Life Management Ecosystems in the Next Digital Epoch." Please join me in welcoming Joerg Resch, CEO and co-founder of KuppingerCole Analysts; Ian Glazer, founder of Weave Identity; Patrick Parker, founder and CEO of EmpowerID; Katryna Dow, CEO and founder of Meeco; Martin Kuppinger, Principal Analyst at KuppingerCole Analysts; and John Wunderlich, Chief Privacy Officer at JLINC Labs.
And thank you so much.
So this is gonna be quite interesting. I think all of the presenters moments ago first learned that I was the moderator, and there was no joy in any of their faces at that discovery. So hopefully we'll overcome that challenge. Since we're talking about life management platforms and AI, let's first start off with the idea of life management platforms, to make sure that everyone's on the same page.
So could one or more of you talk about what was intended originally with the life management platform idea, before we start developing it into what it might mean in the future? Martin, would you take that one?
Probably, can you hear me?
No.
Okay.
This is gonna be unruly. I think we agreed that the two of us would behave. We behave. Yes.
Anyway, it's something from, I think, 13 years ago. So in 2011, at EIC, we published a paper around life management platforms. I think platform is a bit of a tricky term nowadays.
By the way, we will probably launch an updated, modernized version of that paper this summer, taking into account all the new things like decentralized identity, all the new options. So we are working on that. But basically the idea was to say: I have control over all the digital parts of my life.
So my contracts, my preferences, whatever. I can share it in a controlled manner with other parties. I can maybe even have bots or agents (I think we didn't have the term bots back then) that act on my behalf using that data, again under some sort of license: share information for a defined period, bring it back, et cetera.
And that would enable us to do a lot of things, like the banking loan example I had in my opening keynote, where we use all the information about salary, marital status, income, employment, et cetera, and share it in a controlled manner to get the best offer for a loan and then finally sign the loan. So this was basically the idea. I have to admit, we probably were some 13 years early with that. But I believe we are very close to making this a reality, with wallets that are much bigger than a wallet is today.
Because there's much more information in that thing we can use. And I still believe there's an incredible business potential behind that.
Maybe to add one more thing: what we had in mind those 13 years ago was that this is really something that is capable of helping me manage my daily life.
To give you an example: if I wanted to find an insurance policy that is best for me in my current situation, there would be some magic helpers, or whatever (you know, we were not able to really name these things 13 years ago) to find that best possible contract and propose it to me. Or to have a joint statement across all the bank accounts I have at various places, advising me on what to do with my money to best serve my needs. So that was behind those life management platforms.
Wrong term, you know, at that time. And we see, possibly, that this is coming true now.
Even beyond life, in the sense of delegating this in the case of death, and all this stuff. So there were a lot of things in there that I think we are currently discussing again. And I think the big difference is we have a ton of cool technology nowadays to really make it work.
Yeah.
And also the idea behind that: it's not enough to just be safe on the privacy side. That's good to have; it's a basic requirement.
Of course I require this for my digital life, but beyond that, I want to take advantage of being in that digital world. And these advantages, this is what we called life management platforms.
I think the thing that was so interesting about that paper, and I would encourage everyone to go back and read it, is that it was incredibly prescient. We were talking about banking aggregation, but there was no open banking. Yeah. So there was no FAPI. We were talking about bots and algorithms.
We use that language now, but when you describe the outcomes in that paper, of course they need an agent or a bot or an algorithm. We described the idea of our identity being portable, which is decentralized identity. We didn't have that language. And if you go back and read that paper (I think I first read it in 2012), for me it was like a sign from the heavens. It was like, oh, this is the trajectory, this is the way the thinking is moving.
And I think to your point about privacy, what's so important about that paper is it really talks about data not being monetized, but data moving. And inherent in that is provenance, is verification, is trust. You can't write a mortgage if you can't rely on the data.
But this idea that the data needs to flow, I think that is what's so exciting about decisions or outcomes. But maybe a lot of things took a side track and had to be built first, and that's really what's happened in the last decade.
And this builds on top of things. If we go a little bit further back, like Doc Searls' concept of intent casting, right? Yeah. And now we actually have technology to effectively facilitate intent casting, which is this concept that says, you know, I want a new piece of auto insurance; go find the best rate, right?
We've been so hamstrung into thinking that search, activated by the human, is the end goal of what we've built in this digital realm. And it's such a paltry expression of what we could do. But we needed to get to an API economy.
Okay, we got one of those. Now we needed brokerage and agency, and we start to see how AI gives us those components. So we can start to put these puzzle pieces together, you know, back to your keynote, and actually see it.
And it's interesting: at this year's EIC, Eve and I started another round of talking about how we can bring together decentralized identity and all the consent aspects. And I think there's a huge potential to do that.
And in one of my comments on LinkedIn, I also referred back to Doc Searls, because at the end of the day, I think this great thinking of Doc's around vendor relationship management, as he called it, about an intention economy instead of an attention economy, is about to come to real life now.
And I think a lot of the consumer use cases are really gonna move it forward. Because even right now, storing information about yourself is always very server-centric. Even in the consumer market, like on Facebook: your likes, your dislikes, your profile. There is no model where you can be kind of self-sovereign with your own information about yourself, whether you're a vegan, your food allergies and other things. And to make it so that you mint it, you control it, you can share it when you choose, and it's portable.
'Cause imagine the future: walking through a food court, you might want, for convenience's sake, to share your food preferences and allergies with an agent that can then interact with the stores around you to find what the options are for you. Which is just a nice convenience. And we all know that convenience is what typically drives a lot of the things we do, instead of security.
And John, did you have a comment? I saw you waving your hand there.
Oh.
Oh, we can't hear. We're gonna work on your sound. John, we can't hear you yet. The sound bites haven't reached us from Canada.
That's it. Do we have it?
Hello?
Oh,
Perfect. Yep, go ahead.
Yeah, I just wanted to say a couple of things. I love the vision set out in life management platforms, but like a lot of these things, there's the what you want and the how you get there, and the how you get there is an interesting problem. Katryna talked about provenance, and that's absolutely key, especially in AI systems.
But, and I have to phrase this carefully, we have to get past consent the way consent is currently implemented. Because consent, as many writers have said, not least among them Daniel Solove, is murky and it's a weak reed.
It's a way for the platform, in the bad sense of that word, to take over control of your data. So having permission, or dynamic consent, that you can operate on and your agent operates on, empowers you so that you can co-manage your data when you share it with another entity. So: having the ability to control your permissions in a dynamic manner or through your agent, and then having provenance on where that data comes from. I think those are key elements that have to be addressed.
So before we get to the AI speculations and possible problems, I want to develop a little bit more the notion of the challenges of this kind of platform. From an individual perspective, some of the pathways to trust are mechanistic. If you have a reliable system, a predictable system, that is a pathway to trust. It's not the grandma kind of trust, but a mechanistic trust, like a machine.
And so when you have a platform where you're integrating a banking identity and relationship, a healthcare identity, your children's schools, and all those different systems (again, you can talk about AI if you want, but even before AI's additional layer of problems), what are some of the challenges that may be amplified in the AI context? What are some of the challenges with that integration: providing a user interface and something so people can actually manage that reasonably?
Martin,
I think we need to be careful with the term platform, because, and I think it's somewhere deep in this paper as well, we said at the end of the day it will not be one tool. It had a very decentralized idea already then. So, for instance, all the data does not reside in one place, but probably resides in a lot of places. From a logical perspective it's one thing; from an implementation perspective it's something like a mesh network, something which is really different.
And I think this makes a lot of things simpler. We had a very interesting panel today about the wallet we want, and Pramod, I believe is his name, from India, said that we need to get away from the issuer putting something into the wallet; the issuer issues something to the holder, and the holder then decides where to put it.
And I think basically this is exactly the thing. From a holder perspective, it's his or her world, to look at all the things in a sort of homogeneous way, even while they might reside in different places.
So, one of the things I talked about on Tuesday was the notion that for these things, platform, whatever you want to call it, to be useful, they've really gotta understand preference.
And I don't mean in a mechanistic way, like, you know, on Tuesdays I like coffee at 3:00 PM, but actually learning our preference. That's gonna have to be represented somewhere. And the concern I have, the challenge I think about, is that all of those preferences, all of those things that are learned, now become incredibly powerful to an adversary, because it is essentially the nucleus of a digital twin. And so it shifts the attack surface from, hey, here's a large database in the sky that some platform provider in the classic sense provides.
Now what we're saying is there is a very juicy representation that may be more diffuse, but the individual now is having to present a defense against these adversaries.
So it very much changes the calculus around what we, and I'm gonna use small-t trust from a technical perspective, what we secure and who's responsible for that. And this starts to touch on some of the liability issues, which also touches on provenance.
So I have a concern, and this is not a new concern, it goes back a long way, deep into the consent days: if you flood the individual with the libertarian vision of choice as freedom, it means you are responsible for those choices, whether you understand them or not. Now, if on top of that we're flooding you with security decisions, which I don't think you necessarily should be making (I actually take a paternalistic view), what is the accountability there too? So I think there are some big challenges around that.
That's one of the things I talk about in this area: we have to get past user-centric design. People may have heard me reference this before: hog farms are very hog-centric, but they're not beneficial to the hog at the end of the day. So user-beneficial systems need to be talked about. And I know Scott is the lawyer on the panel, and I'm not a lawyer, but it seems to me that there needs to be a duty of care or a duty of loyalty for the operators of these platforms, for them to take ownership and responsibility.
So that they're not acting paternalistic, as Ian said, but in the user's benefit, and to help them get past the overload of choice that's presented.
Maybe one thing that seems to be very important: we have been misled by the term platform. We do not talk about anything that has central components. We are talking about a decentralized, decentrally organized or unorganized, whatever, structure where there is no single point of, of whatever.
One thought that maybe already leads into the AI thing. I'll touch on this thought, maybe, if there's time left: how we can control AI, or how we can make AI work on our behalf in a well-sorted manner. But I think, just as was said regarding the security thing, there's also the AI element: AI can understand our individual behavior quite well and can help us by asking, maybe: Martin, do you really want to do that? That is something you never did.
So I think we can leverage AI, and I like augmented intelligence more than artificial, to augment ourselves, or the people all out there, and say: hey, okay, be conscious,
Cautious. I totally agree, but at the same time it just flashes through my head: well, I can deploy an end-user behavior analysis system on myself. Do I want to see the output? Like, do I actually want to see how frequently I do things, or where I share information? Well, yeah, you would. I would be horrified.
It's like: Martin, are you drunk again? You're behaving strangely.
Sadly that's consistent.
But the one good thing, back to Ian's comment: I mean, you are putting more responsibility on the user, the user becomes an attack surface, but you are eliminating the possibility of an Ocean's Eleven type scenario where there's the big heist and everything's in one spot. Yeah. I mean, it'd be hard to get rich robbing every little piggy bank if there's only a couple of dollars in each one.
Can I push back on that one a little bit? I'd push back.
John, just one second.
Katryna, you had a comment. We'll go to you next.
I think we've had some evolutions that help with some of that attack surface now, which is find my device, wipe my device. Yeah. And we're all becoming increasingly educated.
I mean, I don't fly anymore without one of those little Apple things in my suitcase, you know. And I'm getting off the plane, and is my bag here? And I'm already at baggage handling saying I can't see it, and they're going, oh, not another person with an AirTag. But the point of it is, I know if something goes wrong, I have this kill switch that will work really, really quickly, and I know that within a reasonable period of time I can make that decision to flip that kill switch. Yeah. Whether it's to wipe it or shut it down or change a PIN.
So I don't think that removes all of the threats, but I think we've evolved to actually understand that in everyday life we can manage many of the devices that are essential to our life in some sort of remote way. Which doesn't mean we don't need a fail-safe capsule with some root keys and a backup somewhere safe. So I don't think it's solved, but I think it's solvable.
It's 2:00 AM Do you know where your digital twin is?
Just John, you wanted to say something?
Yeah, that's a hundred percent true, with the attack surface shifting from the individual to the database. If somebody gets the database, that's a view from the organization; it's a hundred percent of the database. But now, if the attack surface is the individual: if my account is breached at Acme Roadrunner Associates, then all the data that's there is at risk. But if I'm hacked, it's a hundred percent of me that is hacked. Now the attack surface is a hundred percent me.
So if you flip the mode of analysis from the organization managing the database to me, and now I have self-sovereign identity, or an LMP that's centered on me, where all the data is that I manage, now it's a hundred percent of me that's been breached. So your frame of analysis matters when you say a hundred percent versus just one user.
Yes, I would like to weigh in here, because we talk about the challenges and how we trust, we don't trust. And yesterday one of our speakers said something that made me realize I had never considered AI from that point of view: if you have a friend, you don't know what is happening in the mind of your friend. So maybe your friend is acting for certain reasons that you don't know, but you trust your friend. It's not that you don't trust him. So then, when people get very afraid about, oh, shall I trust the outcome of AI? Well, pretty much it is the same.
So then, if you trust your friend, why wouldn't you trust, let's say, a machine that is using your own data, or data that is good quality data? Because we know that the data we use will actually drive the outcome that we get. Yeah.
But I think there's a massive difference here too, in the use cases. The one thing is to trust the friend. The other thing is: do I trust the friend to have access to my bank account? Yeah. And in the second case, and that would be the AI case, I probably would take additional mitigating measures, at least. And this is, I think, what we need to have in place: to ensure that if something happens to an individual, it doesn't impact everything in that world.
And I think we have some starting means, like biometric authentication, maybe even recurring liveness detection. We can bring in a lot of things which I think really help to bring the attack surface down.
And I think it's also, again, a bit of anomaly detection.
You know, if a lot of things at one time start to happen in that environment of me, for instance, then I would expect that to trigger alarm signals. It's the same thing, you know, we had a lot of discussions around deepfake detection. Hey, if there's a 25 million transaction, it's not only that we need deepfake detection; all the GRC triggers around fraud prevention should also raise an alarm. If they don't, then we did something fundamentally wrong.
Because the problem is not just the deepfake; the problem is that your fraud detection, the financial systems, don't work. And we need to do that properly across all the elements. And then I think the risk will go down to a reasonable level.
So, that was a nice way to set it up. We haven't even talked about the problems of AI, but we see that there's complexity but a pathway forward.
So, putting aside the platform, let's call it life management enhancement systems: how might these systems be more effectively realized with AI, on the positive side first? You can bring in the negative if you want; we'll certainly get to that. But Katryna, please, certainly.
Interesting. When we first read the paper, we were used to hearing enterprises talk about share...
Is that better? Can we use that? Try talking. Is that better?
Yeah, let's use that. Okay.
What we used to hear enterprises talk about was share of wallet, that marketing idea that, you know, you could extract so much value from a consumer. And I think one of the things we found really insightful about that paper was this idea that you would move from share of wallet to share of value. That you could actually help people to make much better decisions, but you would have to be worthy of that. When you're extracting share of wallet, the individual probably doesn't know that's going on.
But if you are saying you want to actually generate value, then you need to be demonstrating why this insurance product is better than another, or why this is a good health decision, or why it's helpful to signal something three hours ahead of when you might need it. There is inherent in that the need for a value proposition. And I think, again, that was one of the very prescient threads through the paper: this idea that what would shift was making the value proposition really inherent and very visible.
So maybe, tactically speaking, as to where AI can actually help here, I think there are three areas. One is: because LLMs are good at translation, translating legalese into human is at least imaginable as an outcome. So reading a privacy policy and privacy notice and making that digestible is a very valuable service.
Two: observing what data I share in what contexts, and to begin with, just play that back to me, and then be able to actually take action. So, observation of data sharing and context. And then third, I would say, preference extraction: watching the way I interact in a digital setting and saying, these tend to be the things, the services you use, the things that you are looking for. Would you like me to formalize that, so that I as the agent can take agency and go autonomously start to look for these things or express these things?
So I think those are three things that are not science-fiction use cases, that can happen reasonably near term.
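As a rough illustration of the second and third capabilities above (observing data-sharing events and turning repeated patterns into candidate preferences), here is a minimal sketch. All class names, field names, and the simple count-based rule are hypothetical stand-ins, not anything a panelist described:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SharingEvent:
    """One observed data-sharing event: which attribute went where, and in what context."""
    attribute: str   # e.g. "food_allergies"
    recipient: str   # e.g. "food-court-agent"
    context: str     # e.g. "dining"

@dataclass
class PersonalAgent:
    events: list = field(default_factory=list)

    def observe(self, event: SharingEvent) -> None:
        # Capability two: record every sharing event so it can be played back.
        self.events.append(event)

    def playback(self) -> Counter:
        # Summarize which attributes were shared in which contexts.
        return Counter((e.attribute, e.context) for e in self.events)

    def candidate_preferences(self, min_count: int = 2) -> list:
        # Capability three: once a pattern repeats, propose formalizing it
        # as an explicit preference the user can confirm or reject.
        return [pair for pair, n in self.playback().items() if n >= min_count]
```

A real system would of course put a user confirmation step between `candidate_preferences` and any autonomous action, rather than letting the agent act on inferred patterns alone.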
But also then stopping. When you've done that for the third time: no, don't ask me that question. So it's like, you know, I sometimes go to a vendor website and the bot pops up and I click it away, and then I go to a product and the bot pops up, and then I go further down into that product and the bot pops up. It's totally annoying.
By the way, I'm not exactly sure whether I agree with life management enhancement systems, because to me it sounds a bit like life management crutches. But maybe we figure out a good term, I think. Yeah, well, no, go ahead.
I thought, John, go ahead.
Oh, John, go ahead. I was just gonna say, I agree with Ian, but there might be an intervening step. One of the things I think we've all found is that AI is really good, or reasonably good, at summarizing. So if you are tracking your data exchange events, that's what we call it,
when you exchange data under an agreement or with consent, then before the AI starts making inferences you also have the opportunity to say: these are the choices you've made in the last month; does this represent what you intended to do, or is this what you were doing inadvertently? Ian could take pictures of his socks every day and say, am I repeating my crazy socks?
Or something like that. But making sure that you have yourself in the loop, to make sure that the inferences are the ones that you consciously want to make.
Sorry. I think what John's suggesting then is also maybe policies forming. And to your point, as policies form, you know, we are not so complex as humans. We like to think we are, but there are many things that we do every day that are habitual or repeatable, that fit into certain patterns. Now, one hopes those patterns evolve over the course of life and they change.
But we would learn very quickly which policies and patterns move us towards objectives we have in our life, which policies and patterns don't move us, and which policies and patterns actually put us into a state of vulnerability. And if we think of AI really as extended intelligence, those are things that you can work out over the course of a life. It's taken me some decades to recognize those patterns, which ones work, which ones don't. But imagine if I knew what I know now at 21, because some of those patterns or policies had already started to form.
And so I think if we think of it in access management terms, of good policies that are being applied, then there are some heuristics that could really help with how that could work for our benefit. Yeah.
And I think as AI becomes more pervasive, when you walk into a hotel it's gonna be interrogating, you know, your lighting preferences, your room preferences, everything else. As the pervasiveness and the speed of interactions increase, the human's not gonna want to keep consenting.
So you're gonna need some type of consent registry that is somehow distributed and shared, and then your agents can interact with those agents, and you're gonna have to have a consent...
No, it can't be consent. It can't be this checkbox bullshit. It's gotta be, but
It's not checkbox, it's very granular.
It's: what are my preferences? Yeah. The last three times you've been here, you've set the lights at this level; I'm just gonna go do that. That's really different from how we express consent today. Maybe that's gonna change.
Wait, it's not looking backwards. It's looking at: you have preferences, and whether you're willing to share them with that entity, that commercial entity, and their agents.
Okay, that's different. That's not consent in my mind, but I agree with that.
Well, it's consent to share.
I think consent and permission is something we will be arguing about for a long time. So, with the 26-minute warning: consent, you know, consent is kind of binary.
Yes or no. Yeah. Whereas permission can be fine-grained. So with permission I can say: at this hotel group, over the years I have come to trust them. So there's, again, back to this policy idea: these are the permissions which, the minute I walk through the door, are in place. I can still surface whether or not I wanna change those.
But I think the problem is, with consent we've been so schooled in it being binary. If I agree, I can keep going, I can check in. If I don't agree, I can't check in. Sure. Yeah. Yeah.
And with permission we can get fine-grained as well. Yeah.
And you can have a duration.
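The distinction the panel is drawing, binary consent versus scoped, expiring permission, can be sketched as a tiny data model. The field names and matching rule here are hypothetical illustrations, not any standard's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Permission:
    """A fine-grained grant: one attribute, one grantee, one purpose, with an expiry."""
    attribute: str    # e.g. "lighting_preferences"
    grantee: str      # e.g. "hotel-group-x"
    purpose: str      # e.g. "room_personalization"
    expires: datetime

    def allows(self, attribute: str, grantee: str, purpose: str,
               now: datetime) -> bool:
        # Every dimension must match and the grant must still be live;
        # revoking is just deleting (or shortening) the record.
        return (self.attribute == attribute
                and self.grantee == grantee
                and self.purpose == purpose
                and now < self.expires)

now = datetime.now(timezone.utc)
grant = Permission("lighting_preferences", "hotel-group-x",
                   "room_personalization", expires=now + timedelta(days=30))
```

Unlike an agree/disagree checkbox, a holder can carry many such records at different scopes and durations, and an agent can evaluate them on arrival without asking the user again each time.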
John, I think John has a comment. Let's go to John
Just very briefly: consent, at its core, is a ceding of control. I'm gonna give you this notice, I'm gonna tell you what I'm gonna do, do I have your consent? Now I take control of your data. That's why I like the word permission better, 'cause that can be dynamic and ongoing and controllable. Consent works in medical research, where you actually do give control of your research data over to a researcher. There are very few places where consent, the way it's formulated (and I think Katryna's right, as a binary), works, especially in terms of the kind of quote platform unquote we're talking about here.
I like that proposal to think of this in terms of licensing. So really, I license the right to do something, and that can bring in permissions on behalf of me. And with all the ideas by Ian, by Eve, et cetera, which popped up at this conference, I think we have a really good chance that decentralized identity will help us kill the cookie consent boxes on websites.
And that would be a really great achievement.
I would like to go back to Patrick, because we had a question from the virtual audience. Once again, thank you for engaging. And here in the room, if there is any question, please feel free to raise your hand. Patrick, the question is: the challenge with preferences is that they create a conscious bias. How do you think we can enable users to have a service that still includes the opportunity for new content or services, not constrained by who they are today? It's a challenging one.
That's a challenging question, because we do not like biases, but we also love biases. The reason TikTok is so addictive is that it literally learns who you are, better than you want to admit to yourself who you are.
So I don't know. I mean, we don't wanna be biased, but we want personalized offerings. Well,
Pretty much what you said at the beginning, Ian, do I really want to see exactly the outcome of this?
Yeah, I think we all wanna see Ian's TikTok stream. So it is a challenge, but again, then you'll have the option to share or not share how much information for it to formulate your, I mean, a bias, but a bias that you're guiding. And I think at some point, with AR, the fashionistas will even be able to have different looks and share with others which look they want them to see. So visiting your parents, they see one thing; walking past somebody else in the street, they see something completely different.
Well, you know, Patrick, you just hit on something that we're kind of chuckling about, but there's an interesting social behavior there. If we suggest, hey, I'm gonna show you my TikTok stream, my Instagram stream, most people have some sort of small physical reaction: whoa, what do you wanna do?
No, right, exactly. Okay. And this is a room full, a conference full, of professionals who know more about security and privacy and identity than anybody else, and we still have that sort of revulsion reaction to it. And now what we're proposing is: it's cool, don't give it to a human, give it to your phone. One of the things that will have to be overcome is behavioral norm changes. There's a whole slew of them, but one of the big ones is, I wouldn't say a ceding of control, but a sharing of control over the outcomes and decisions that this thing will start to produce for us.
Which means we're gonna have to move very slowly through the kinds of outcomes that are permissible, and how society then starts to actually make that a normative behavior over time.
Well, I think in reality sometimes this works quite well. If you have a somewhat autonomous vehicle, at the beginning it's a bit scary: shall I trust the vehicle? But you very quickly get used to it and say, there are some really cool things. So when I'm in my vehicle in a queue, I always let it drive for me, because it's so much more convenient and so much less risky for me, especially when it's stop-and-go, very slowly. The risk is relatively low for me, and the risk that I crash into the car before me goes almost to zero. So it makes a ton of sense.
I got used very, very quickly to giving away things which, if I wound the clock back 10 or 15 years, I probably wouldn't have thought about giving away.
But that's absolutely right, I totally agree with that. But I'm just doing the math in my head.
I'm like, I'm cool with risking life and limb to my automobile, which weighs a couple of tons, but the idea of sharing a view into my interests is uncomfortable.
And there's no comment on that other than: that's where we are, right? And starting to feel out, how does that change?
John, before we have the other comments here, John, you had a comment or question. Go ahead, John.
Me?
Yes. Yeah. Yes.
Oh
Yeah, I was gonna say that it's really interesting what he brings up, because we're all happy to push out: I'm gonna perform something, I'm gonna do a TikTok dance. I won't, I won't impose that on you. But sharing what it is I consume is completely different. And if you look at the sociology of privacy, Erving Goffman talking about how we perform our lives, we don't just live our lives, or Sandra Petronio and her work on communication privacy management: what we choose to perform is fine, we'll share that, but inferences about what we consume? No. I think that's the difference.
One is performative, and we can help our AIs, help train them, to see what we wanna perform, but we can't do that in terms of sharing what it is we consume. And that's a much more intimate sharing than just what we choose to perform.
We, we,
Speaker 12 00:39:01 We have a question here
From our audience Live. Please go ahead.
Speaker 13 00:39:05 Thanks. Steve from Consult Hyperion. It's kind of building on what John was just saying. So just to give a silly example: I have a niece who really likes unicorns. And so let's say, for example, I decide to buy her a toy unicorn for Christmas. That has nothing to do with me; it's nothing to do with my preferences or my personal interests. But, you know, I've actually done an action there: I've gone and bought a toy unicorn.
So how do you, you know, and there's this complex set of things: some of those things are relevant to me, and some of the things I just engage with because I'm a human and I'm living.
How do you differentiate those things? Thank you.
Speaker 14 00:39:47 If I pick up the thread from what we've just been talking about and connect it to what Steve's just asked, I think the reality is that the view we're afraid of, service providers already have that view.
So what we're really talking about is: are we ready to have the same insight that others have about us, that is being used either positively or negatively? And then, to flip that to Steve's question: remember, this paper was written in the middle of the big data buzz.
You know, you think 10, 12 years ago: big data, big data, big data lakes, hoover up all the data you can get. And what seemed so insightful was that the paper was really talking about small data. And we've always imagined that data is most valuable when it's accurate, or, you would say now, has provenance or is verified; when it's real time.
It's time bound, it is in context: the unicorn doll is not for me, it is for my niece. And it has some intent: I have bought it, you know, her birthday's over.
And I think this is the magic that's possible. Big data has always been able to do a lot of the predictive stuff, but if you can get context and intent to stay close to us, then that makes that interaction incredibly valuable. And then maybe it helps with the feedback loop, because we then start to see things about ourselves that others are assuming, either correctly or incorrectly, but it maybe creates a safer proximity for that.
Yeah, I'm getting nervous, you've been talking long.
No, I think there's another angle in that. You know, your niece could just say, in that world: hey, uncle, this is what I want to share with you so that you buy me the right thing. And then if she says, okay, I've moved from unicorns to whatever else, she sort of ends the license, or she changes what she's sharing with you.
So I think that fits extremely well. What we probably didn't touch on much is the relationships thing, because in that world we envision we can handle all these relationship things much better.
So, you know, what I hate with Amazon is I still really can't manage well what my wife is purchasing, what's mine,
what is for my daughter, what is for my son, et cetera, et cetera. And these are, I think, things we can handle much better.
And I also would dare to say, you know, I can envision that this works, and I'm probably amongst the most reluctant people when it comes to new technology; I know that about myself quite well. So I'm the one with the analog watch. I'm the one who started using Uber three weeks ago, finally.
I've never used TikTok, I've never used Instagram so far. And by the way, I thought about getting rid of Uber again, because then they sent me: oh, if you become this whatever sort of member, and the second mail was a 300-page terms-and-conditions thing. And I thought, okay, come on, forget about this, this is really annoying.
So if I can envision that this works, I have a feeling that it could work for other people too, because I'm really reluctant.
And I think there's a clear segregation between the personal level of discomfort in sharing our activity history, what we do, versus who we say we are. I mean, it's almost like the Ben Stiller movie where he goes out on a date and she's like, you love spicy food, right? And he's like, of course I love spicy food. But if you look at the activity history, he probably never ate spicy food in his life.
So definitely, what we do is icky to share in most cases, but who we say we are is completely under our control.
But now maybe the question I would really have is: what do those wallets and counselors look like, technically or from the business side, to become a reality?
Speaker 15 00:44:19 Store all of those permissions and the history.
Well, so I've been thinking a little bit about this, and I keep coming to the conclusion that this is something that no one would build with commercial interest, because, and "no one" is maybe a bit strong, the value is so intensely localized to the individual and there's not a clear path to monetization. Sadly, that is one of the things that's concerning. Which leaves, and this is the really bad conclusion, the platform providers in the capital-P sense: the mobile operating systems. Exactly right.
So we're back to the same mess. One of the strangest decisions in history, in my mind, was Microsoft getting out of the mobile operating system business. Why? Because it left them out of the conversation in a lot of ways for what's happening today around wallets and strong authentication and all of these other things, because they are missing a toehold as an agent for the individual for a large population. And so I'm still trying to think about ways that this can happen, but right now I keep coming back to the same conclusions.
It's the same couple of companies, and I'm not sure that's a great outcome. I really don't like that as an outcome; I'm just not sure there's another path I haven't thought about enough. So school me, Katrina, please.
So, well, before that I have a comment, then we go to John and then Katrina, and then we have a question. One of the things to think about is that there are plenty of economic models where people make money as fiduciaries. Medical doctors are fiduciaries. Priests, they don't make a lot of money, but they're fiduciaries. Lawyers are fiduciaries. Priests and lawyers in the same sentence, that never happened before.
So maybe what we're missing is a new layer, both a machine and institutional and individual layer, of dedication to the individual as such. That is the economic proposition, and where it gets paid for can come out of the entire system. But the one question, maybe, and not to be explored here, is: is it possible to program AI systems to be fiduciaries, reliably? But that's for a whole other thing.
Let's go to John, who had a question, a comment I think, and then we'll go to, I think, a question from the audience.
Well I
I think, building on what you said, Scott, and going back to what I said earlier about duty of care and duty of loyalty: Bruce Schneier has talked about public AI in that space, building an infrastructure and availability that can be a common space or a common AI. We don't have public space on the internet, it's all private space. Building a public AI addresses that.
And to your point about other business models: if you think of data as an asset, think about credit unions, which are collective action, where the members are acting in their own collective interest. They could be data fiduciaries, or acting that way, and you could choose a data collective. Or, and I'm not sure that this works, but I've seen startups in the DAO space, decentralized autonomous organizations, that work along the same line.
So I think there are options, we just have to break out of our thinking box on this.
And I think we have a question online.
We have a question from our virtual audience. The question is: human decisions are based on relations, and this is something that AI cannot mimic. In this sense, can we really trust AI-powered decisions, and how so?
You know, I think we touched on this a bit before, when we say, okay, we have the AI acting on our behalf, and I think we need a control mechanism. Sure, we need to understand what it's doing and why it's doing it; we need explainable AI and all that stuff. But take the vehicle example: I think we have quite a good feeling about when we have to trust, like we have in daily life. What do we trust people to do on our behalf or not?
So I think, yes, we need a bit of a control mechanism. But what I also wanna do quickly is come back maybe to the who-builds-it-and-who-pays-for-it thing. And I like the credit union analogy, which is an interesting one I didn't think about before, but it's a good one.
The other thing is, part of that will come because there will be wallets which provide a lot of information. So the decentralized or digital identity wallet is an element of that.
There's an element of, yes, this information is held somewhere, and there will be special use cases, like: where do you put all your really legally relevant papers, into what sort of place? And that might be something where someone says, hey, we offer you something which works in that model.
And when we take the bots or agents, there's also the model of, you know, I provide you with something that helps you with all the insurance stuff, and then it's something where you say, okay, I need to use this one. So I think there are elements in that. If we have the ecosystem where this works together, then we also have a lot of elements where interested parties can earn money with it and will pay for it.
Yeah, that's very interesting, that model with credit unions. And if we combine it with a global-reach governance body like the United Nations' ITU, which the Open Wallet Foundation recently announced they are now partnering with, I think that could become something which really works and which really could start helping us make this concept a reality.
I'd like to pick up on the thread around the monetization aspect as well. What we've got to layer into today is the fraud costs.
And even, you know, I had this conversation some years ago now with ad agencies that were not getting the outcomes they wanted from a campaign because there were bots involved: they were charging their customers, and the customers were not getting the return. So we've been on this slippery slope for some time around data monetization. I think what we're talking about is a shift to outcomes, decisions or outcomes.
And it comes back to: what's the net new value that comes as a result of what you're just talking about now? The idea that the data is verified, or the provenance of it, or you can get to decisions faster, or this whole collapsing of all that back-office cost and waste and fraud. And that is such a significant commercial opportunity, because you're dialing up the value proposition at the same time as reducing cost, at the same time as increasing trust. It's just we've become so habitual about "I want the data" instead of "I want the outcome." Yeah.
So it's interesting: credit unions had their start in Germany in the 1850s as Protestant and Catholic support organizations, and the credit function was just a subset of that. So it is interesting that the trust that started there was because of shared religion. And it's interesting that maybe AI provides enough of a common threat that humanity can now be the thing that brings folks together to think we need to have collective action on this, and credit will just be part of that. Mina, do we have any other questions?
We do. The question is about the centralization of the information.
Just before we do that, John, John sent a big,
Sorry, John,
Go ahead.
Another trial balloon from John.
Right. Sorry about that. Just on the question of confidence, what popped into my head is: we should treat artificial intelligence recommendations the same way the US treats the intelligence community. We don't know what's going on beneath the surface; it comes up with recommendations and we get a confidence indication, but a human makes the final decision. I mean, that's a model that makes a lot more sense to me. And sorry about that.
I engaged my AI on my computer and you ended up with balloons,
He likes balloons, apparently. So let's go to the question then: how do you think this decentralization of the information is helping in the AI ecosystem?
Well, so
So digitization helping in the AI ecosystem? Yes.
Oh, decentralization.
Decentralization, yeah, the decentralization of the information.
Does anyone wanna take that?
I can, well
Just very quickly, I think this is really where the power of the model is. We know Max Schrems talked about it this week: there is already some action against ChatGPT because it got a date of birth wrong, which is really important for many services, to be able to say you are eligible, or this is the cost of your insurance, or your policy's going up, or whatever. And as I understand it, the person that sparked this just wants to correct that data. Yeah.
So I think there's a massive opportunity in the AI space for securing that data, checking the provenance. And I don't want to use the word "centralized" and make everyone start screaming, but there is a central source of truth for that data, where you can actually train a model where you have the provenance and the verification, which is so much more valuable. And that's something we haven't been able to do before over the course of someone's life.
Yeah, I think
That follows on. In this room a couple of hours ago, Kalia gave an amazing talk about the quality of institutional memory, how long that should persist, and started to ask questions about the way institutions hold that information. An absolutely fascinating talk; go back and listen to it, go see the slides, because there are implications of this in there too, right? Yeah.
Interesting idea. There's that, well, you can have an AI generated from a loved one, before they pass or after they pass.
And I know in the development realm, we always have the challenge of the guy or the gal who wrote the code who's no longer here. Maybe before they pass, have them train an AI, so you still have Bob or June around to ask questions.
Thousands of COBOL programmers have been uploaded to ChatGPT. Let's see what happens next.
Suspenders and all. Yeah.
So, so
I think, going back to your point: yes, at the end of the day, from the decentralized world we'll deliver a ton of information, potentially, to the AI, which helps the AI be trained on our behalf for our specific purpose. But it also implicitly has a back channel, in the sense that when we feel the AI is not doing well, it's much easier to retrain it than to explain the same thing to ChatGPT again. So I originally tried ChatGPT to find the best beaches with sand on the mainland of Italy.
And it always brought up Sardinia, and I said, oh no, not on the island, on the mainland. And I finally gave up. But it'll become much easier, because it will know my special interests, that I don't want to take the ferry in this case, blah, blah, blah, and then it'll be better. And the whole back channel thing also will be better; it will not be lost after the session.
Until it gives you the right answer, you have multiple agents arguing on your behalf, until you come up with something that's not a hallucination.
With this, we could go on, and knowing this group, we would go on and on. This was, I think, absolutely a spectacular treatment of a very difficult subject. I hope people agree. Please join me in thanking the panelists for this discussion. Thank you so much.