In this panel session we will discuss how decentralized identity techniques can help mitigate threats posed by LLMs such as deep fakes. Approaches relying on detection can result in an arms race; in contrast, decentralized identity techniques allow stronger infrastructure enabling trust by design, easy content authenticity detection, and more.
We will begin with a quick round of introductions as soon as we're all mic'd up. Actually, there was a prequel to the conversation we're going to have right now. I think it was just last week, though it feels a lot longer. Just last week, some of us from this group sat down and did the pre-work for today's conversation: what are the threats that are out there, and how is decentralized identity actually part of the solution, part of disarming some of this? So we're going to pick up where we left off there.
If you're interested in that conversation, feel free to look it up on the KuppingerCole site; it was one of our recent webinars leading up to this event. But we'll start here with a round of introductions, and then we'll dive in.
Wayne, do you want to quickly introduce yourself, and also how you're connected to the topic here? Yeah, great. My name is Wayne Chang. I'm the founder and CEO of SpruceID. Our mission is to let users control their data across the web: instead of people signing into platforms, platforms sign into a data vault that you control entirely. This is the architecture we used to build the California mobile driver's license. We also work with some other states in the US, and with federal agencies as well, largely around identity credentials but also other credential types.
So we care a lot about this topic, because digital identity has a clear tie-in with content authenticity. C2PA was mentioned earlier, and some of you might have just heard about remote proofing. All of that is very connected, and we'll dive more into it.
Hi, I'm Linda Jeng. I actually began my career as a financial regulator and then went to work at blockchain startups. Now I'm founder and CEO of Digital Self Labs, a tech advisory firm working with Web3 startups. I also teach and do research at Georgetown Law.
Hi, I'm Kim Hamilton Duffy, executive director of the Decentralized Identity Foundation, and I'm very passionate about the ability of decentralized identity to enable choice, opportunity, and trust for individuals in their increasingly digital lives. DIF is a community of innovators and builders making that happen. Yeah, thank you, all of you. So now we'll pick up the conversation where we left it off and build the bridge here: how does decentralized identity function as a mitigation measure against some of these threats we were talking about in earlier sessions?
Deepfakes: content that looks and feels real but isn't, that we're not able to filter out. How does decentralized identity stand against that and equip us? It might be most productive to first align on a definition of decentralized identity, because it's a bit of a dismal swamp of semantics.
So to me, decentralized identity is the ability for any party to play the role of issuer, holder, or verifier. The idea is that it's not just one central issuer, as in an enterprise setting: you might have data that originates from a lot of players and is used by a lot of different people, who have wallets or agents or whatnot, and many parties can verify it. This really explodes the complexity, and I think it requires new approaches to handle that.
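To make the issuer/holder/verifier triangle concrete, here is a minimal, illustrative sketch. All names are made up, and HMAC stands in for the public-key signatures that real systems (e.g. W3C Verifiable Credentials, ISO mDL) would use; it is only a stand-in so the example runs with the standard library.

```python
# Toy model of decentralized identity roles: any party can issue,
# hold, or verify a credential; verifiers check against a trust
# registry rather than calling back to a single central issuer.
import hashlib
import hmac
import json

TRUST_REGISTRY = {}  # issuer id -> verification key (toy: shared secret)

def issue(issuer_id: str, key: bytes, claims: dict) -> dict:
    """Issuer signs a set of claims; the holder keeps the credential."""
    payload = json.dumps({"iss": issuer_id, "claims": claims}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(credential: dict) -> bool:
    """Any verifier checks the signature against the trust registry."""
    iss = json.loads(credential["payload"])["iss"]
    key = TRUST_REGISTRY.get(iss)
    if key is None:
        return False  # unknown issuer: reject, or let the user decide
    expected = hmac.new(key, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

# Many issuers can register; many verifiers can check, independently.
TRUST_REGISTRY["dmv.example"] = b"dmv-secret"
cred = issue("dmv.example", b"dmv-secret", {"license_class": "C"})
assert verify(cred)
```

The point is structural: adding a second issuer is one more registry entry, and any verifier can make its own acceptance decisions without a central gatekeeper.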
But it also unlocks a lot of new use cases, like moving to models where there are many ways to certify information, instead of relying on one ministry of information or something like that. Yeah. And to build on that: to me, decentralized identity isn't really about identity, it's about data governance. And frankly, I can't think of a better place than Berlin to have a panel on data governance. I'm actually staying with a friend right now whose apartment building is built on top of where the Wall used to be.
And I am meeting a bunch of East Berliners who tell me what it was like to grow up in East Berlin. One of the very interesting comments was that the surveillance capitalism we live in now actually reminds them of what it was like growing up under the Stasi in East Berlin.
I thought that was really interesting to hear, because one of them said: I just assume whatever I say is going to be recorded somewhere, and that's actually how I felt while I was growing up in East Berlin. So I think this particular technology set is extremely powerful, and we need to figure out how to build an ecosystem where we can actually get to use it and enjoy it. Most recently, we actually put out a paper today about the ideas we're discussing.
If you're interested, it's on SSRN; it's called "Chains of Trust: Combating Synthetic Data Risks of AI." And it's not really about digital identity so much as using the technology to allow ourselves to express who we are.
And I'm the resident lawyer here, so I want to give Kim the opportunity first, but afterwards I really hope you'll give me the opportunity to talk a bit about the legal and policy foundations for why we think this is a really good policy solution. We certainly need to hear that, because I know there are a lot of questions on the legal aspect. But first, Kim, please. Yes, and thank you for that, Linda.
We did miss out on Linda's perspective in the previous conversation, so I would like to spend a good amount of time there; Wayne and I fortunately did not have to wade into legal territory. And I have decided that I like Wayne's definition of decentralized identity the most. The one thing I want to add is that I also think of it as a set of standards, technologies, and principles. We focus a lot on the standards and technologies, but part of it is also the underlying principles that restore individuals' control over their data, as Linda mentioned.
Now, when we talk about threats due to LLMs, whether it's misinformation or fraud, a lot of these are not new; we're just talking about their acceleration. A first wave came as people started interacting online: the ability to interact with more anonymity, but also the possibility for misinformation to spread.
And so we were already uncomfortable, but just barely plodding along, dealing with these sorts of threats. Now, with the advent of very cheap, easy-to-implement deepfake technology, whether through voice, images, or videos, the risks are just accelerating, and we're past our breaking point. So with these approaches, technologies, and principles, we have the possibility to build on stronger foundations.
Yeah, just a really spot-on example, probably related to the previous presentation on remote proofing. It used to be, and it still is today, that you would hold a driver's license or some identification up to a webcam, front and back. You might have done it yourself, or even built those systems. And a lot of the security features were just never meant to be held in front of a webcam: how are you supposed to feel the material, activate the holograms, the UV-activated markings, et cetera? Those things are totally inappropriate for the medium already.
And just by luck, facts, and circumstances, we've been able to get by and reduce a lot of fraud that way as an industry. But it is getting cheaper and cheaper to generate compelling renditions of these documents, and even of someone holding their card. This is stuff that's possible today, especially with targeted attacks where you use AI to generate a very convincing image with technology like Stable Diffusion. You're able to fool a lot of these systems, to the point where I think someone was able to get through a cryptocurrency exchange's KYC process that way.
Who is to say how that KYC system was implemented, but these attacks are starting to show up at the edges, and I think it's only going to intensify. So we're transitioning to systems where you're able to use cryptography-based trust. We're seeing some of the things recommended by the ARF, which you've probably been hearing about all day, that contain high-fidelity portrait images or other biometric material held by the end user, maybe issued by their government or some other entity.
Why not an organization they trust, or even a financial institution: many different issuers for verifiers to check against. Yeah. And isn't it interesting that if you look at the policies coming out now on AI regulation, from President Biden's executive order to the EU AI Act, they all rely on the AI model to self-report. Self-reporting is an important component of a good AI regulatory regime, but it's not sufficient.
So I think it's important that we as individuals, as the creators of authentic content, also have the right to certify our data as authentic. This is a new personal data right that really should be added to the pantheon of personal data rights already enshrined in the GDPR. Absolutely. And I think a really interesting concept you both just brought up is that the security features need to match the medium. When we're working with physical documents, those security features match the medium you're able to share them in.
Now we're working much more in digital and hybrid settings, and those security features exist in cryptography, for example. I think that's a very compelling argument for bringing decentralized identity to the table: it's not a fringe technology, it's security features that match the medium we're working in. Yeah. But policymakers for some reason haven't really been thinking about decentralized technologies; they have been tackling AI regulation in a silo.
Yeah, I think one of the big connections here is that if you look at how a lot of identity is provisioned and used, it's already decentralized. Especially in the US, it's the DMVs that control identity. But wait: USCIS documents, passports, and utility bills are also used as part of the proofing process. So who really owns it? My favorite joke is: do the pants hold the belt up, or does the belt hold the pants up? It's very complicated; it's all interconnected.
So you have many different issuers, and aligning with protocols that allow for that interoperability just makes a ton of sense. In my opinion, working with several US states, that's one of the strongest arguments for it. Even within the same state, oftentimes people can't agree on who owns identity: does the DMV own it, or does the state CIO own it? Sometimes those are different entities with their own mandates. Or is it the Apple Wallet?
Yeah. And then, additionally, there's having these digital identities decentralized and issued by many different parties, and the role that plays in trust and safety in AI. I think a lot of it is content certification, something people maybe don't think enough about: what content goes into the AI, what can it be trained on? From my personal experience, most state CIOs in the US are only training AIs on publicly available information, because they haven't done the extensive data inventory necessary to use user data,
to know that they're allowed to train the model on it and that it's not going to leak PII or something. Because of this big data inventory problem, it's really difficult to leverage the full utility of that data in a way that's trusted and safe. I think of digital credentials as statements about reality that have digital representations. So if we had a bunch of statements about the data, and some projects like C2PA have the ambition to provide frameworks for this,
then it would be much simpler to have good data inventory and a complete data supply chain: what goes into the AI, what gets scrubbed of PII so it can be used for training, and then the results and where they came from. Yeah. And another aspect of the kinds of threats people are encountering is that they're not nicely contained in one channel of communication.
So if you're getting, say, a call with a deepfake of a voice that sounds like your CEO asking you to transfer money: the expectations of security in each of these channels are very different, and we need to integrate those into a consistent expectation for users. With decentralized identity, one of the key aspects is the ability to provide that consistency, and I think that provides a net safer environment for individuals.
So if it's okay, I want to provide a little more context on the legal rights you have to use decentralized identity technologies. Going back in history, probably one of the first instances of privacy legal writing, upon which a lot of our rights are based today, was a paper called "The Right to Privacy," written by Louis Brandeis, at the time a lawyer, who later became a US Supreme Court Justice. In it, he describes the right to privacy as a right to be let alone.
That is a very passive right, and one that has become part of a lot of our laws, especially in the US, under the Gramm-Leach-Bliley Act, for example. We have a lot of passive data rights, where banks are required to notify us of their data-sharing policies, but we don't actually have active data rights to control our financial data. Fast forward to today: the GDPR actually has active data rights that allow you to control your data, to decide how to port it, with whom you want to share it and for how long, the right to erase, the right to correct, et cetera.
And it's actually decentralized identity technologies that allow you to take these very actions. Right now a lot of these rights are very difficult to enforce, because we don't have the ecosystem built out, but it's that legal foundation that will really help support the use of these data governance tools. And if you're a technologist in the space, it's really exciting to watch how this can go hand in hand. There are several exciting work items at Kantara, at ISO, a lot of the places that Andrew Hughes and this audience care about.
Basically, things that can create automated compliance programs for data regulations. So imagine you could give someone a personal data license for your data to do something specific. You can sign off on it, and it can be linked to your national ID, but in a way that's selectively disclosed and privacy-preserving: being able to have that consent receipt or personal data license (we'll figure out the branding of it) matters for policymakers as well.
Then if you're a data processor working on some information, it's actually very easy to automatically demonstrate compliance with all the terms of the consent receipts or data licenses, even if one of those use cases is training an AI model with that data.
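A minimal sketch of what such a consent receipt or "personal data license," and the automated compliance check a processor would run against it, might look like. The field names (`purposes`, `expires`, and so on) are illustrative assumptions, not drawn from any published standard.

```python
# Toy consent receipt: the data subject grants a scoped, time-limited
# license; the processor checks every proposed use against its terms.
from datetime import datetime, timezone

def make_receipt(subject: str, processor: str,
                 purposes: list, expires_iso: str) -> dict:
    """Subject-signed grant (signature omitted for brevity)."""
    return {"subject": subject, "processor": processor,
            "purposes": set(purposes), "expires": expires_iso}

def is_use_permitted(receipt: dict, processor: str,
                     purpose: str, now=None) -> bool:
    """Automated compliance: is this processor allowed this purpose now?"""
    now = now or datetime.now(timezone.utc)
    expires = datetime.fromisoformat(receipt["expires"])
    return (receipt["processor"] == processor
            and purpose in receipt["purposes"]
            and now < expires)

receipt = make_receipt("did:example:alice", "acme-analytics",
                       ["analytics"], "2030-01-01T00:00:00+00:00")
assert is_use_permitted(receipt, "acme-analytics", "analytics")
assert not is_use_permitted(receipt, "acme-analytics", "model-training")
```

Because every use is checked mechanically against the receipt, the processor's compliance log falls out of the same code path, which is the "demonstrate compliance automatically" idea described above.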
And maybe there's even the ability to scope that further. It makes the "accept all" button problem go away, because now you can have policies in your wallets and browsers where you decide what kinds of consent receipts you might want to generate automatically, and for what kinds of parties. So I think a lot of this stuff is on its way. And there's one exciting use case that I think is technically possible.
Now, if you look at some of the technologies, the mDL and the OpenID for Verifiable Credentials work you've probably heard a bunch about (how do we send these digitally signed credentials online?), plus C2PA, you could create a demonstration or even a product where you take a mobile driver's license and present it in such a way as to link it to content you authored. Maybe a politician does so for an election video. And that's all possible with how things stand today.
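The content-linking idea just described, binding a piece of content to a credential the author holds, can be sketched roughly as follows. This is an illustration in the spirit of C2PA identity assertions, not the actual C2PA manifest format; HMAC again stands in for a real public-key signature, and all identifiers are made up.

```python
# Bind a hash of the content to a credential reference, signed by the
# author. Verifying the credential itself against its trust chain is
# a separate step, omitted here.
import hashlib
import hmac
import json

def author_manifest(content: bytes, credential_id: str,
                    signing_key: bytes) -> dict:
    digest = hashlib.sha256(content).hexdigest()
    claim = json.dumps({"content_sha256": digest,
                        "credential": credential_id}, sort_keys=True)
    sig = hmac.new(signing_key, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def content_matches(manifest: dict, content: bytes) -> bool:
    """Does this manifest actually describe these bytes?"""
    claim = json.loads(manifest["claim"])
    return claim["content_sha256"] == hashlib.sha256(content).hexdigest()

video = b"...campaign video bytes..."
m = author_manifest(video, "mdl:example:politician", b"author-key")
assert content_matches(m, video)
assert not content_matches(m, b"edited deepfake bytes")
```

Any edit to the content breaks the hash binding, so a viewer can tell the published video apart from a tampered or synthetic copy, provided they trust the credential behind the signature.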
So I think it just takes will and people collaborating on this effort, and that's why I think we're at a very exciting time. And that's a question, then, to bring these two together. We have digital identity frameworks with really passionate people behind them, with working proofs of concept for a lot of different use cases, with a lot of benefits for users and for organizations. And we have a policy framework that, in the US at least, is still oriented more toward passive privacy rights. How do we link those two together? What's happening?
I think we should demand it; there should be economic incentives to get consumers to want these tools. The issue with decentralized technologies, including identity, is that they do break business models, and we have all grown used to getting a lot of services for free because we are letting data aggregators take our data. Abby told me something slightly horrifying earlier today, which I didn't know: all car manufacturers actually keep track of your data whenever you use the car, and then they sell that onwards to other parties.
And so the issue is, we have grown complacent and we have grown used to not having to pay financially for our services. That model is going to have to change in order for us to be incentivized to use decentralized identity technologies. But that means we'll have to start paying for services, and that is definitely not going to feel like an economic incentive. So there may be other avenues: perhaps we will start getting paid for our data, and that could be something enabled by decentralized technology.
And from my understanding, a lot of the exciting work on mapping the policy framework is underway at OIX, the Open Identity Exchange. So there are no doubt some sessions on that topic here; I encourage anyone to check that out. Absolutely, thank you. And you know, we've been thinking about this from the consumer, the end-user perspective, but this also has benefits for organizations.
So what are some recommendations you have for them to get started, in terms of implementation, in terms of identifying their use cases? Where's the starting point, and where do you go from there? I think what we have to do, so that we don't set ourselves back and this decentralized identity technology doesn't take even longer to boot up, is really focus on the end-to-end value for the user.
So I think the end-user experience is the place to think about and the place to start, and to do so in a way that provides value for them but also meets the criteria we have for security, privacy, and sustainability. Sustainability meaning: if you have to pull a vendor, is there going to be another vendor that can do the job? I think that's very important for a public-sector customer or an enterprise, and some of these standards are getting to the place where they can create that sustainability. But I think it all comes from the end user.
And a lot of the risk, I think, from a legislation-heavy approach to creating identity requirements is: what if we reach a point where everyone just checks the boxes, and it's a kind of synthetic existence rather than genuine use, people actually wanting the end use cases? For example, we work with the state of Utah on a credential pilot, not mDL, but things like off-highway vehicle certificates. If you want to ride an ATV, a four-wheel vehicle, very fun, in the mountains of Utah, and a lot of people go to Utah to do exactly that,
you have to take an online education course so you can be safe on the trail. And they run into this issue where it takes like 30 minutes to check someone's pass, or they can't pull it up because they're in the mountains and the WiFi isn't working. So having this digitally signed as a credential makes a ton of sense. Other use cases are things like food handler cards: all restaurant workers must take training, and sometimes restaurants don't have great internet, or we need to check it right away; it's paper or plastic today.
Think about how much money it saves someone to not have to make a paper or plastic thing again, and this technology is good enough for that use case. So I think really focusing on the end value, and thinking about the restaurant manager who doesn't have to spend their whole weekend making photocopies of a paper thing, really helps drive adoption. The commercials can result from that too, because when you solve a problem for someone and create convenience, there's typically some money that can be involved.
So I think focusing on the end-user experience is going to be critical for these to be a success. I agree. And with decentralized identity, one thing that's been sort of a bummer for us historically is that we're seen as the naysayers or the hand-wringers, worrying about privacy, worrying about this and that. I think what we have here is a real chance to shift the narrative, because it's clear that people want to use LLM-based technology in their lives.
There are the productivity enhancements, and we're seeing increasingly aggressive interfaces between, say, the user and the LLM. It's not just a chat interface anymore: you might be giving it more and more access to your documents and your data, and in some cases they're asking for full access to your desktop environment. In today's world that sounds a little bit scary, and we have to balance usability against the privacy concerns there.
And I think what we're saying is that there are ways to provide people the convenience they want along with the trust as well. So I think this can work out really well, given that we've already worked long and laboriously on the standards and the technical primitives, and there are people finding strong product-market fit. This is a really good environment right now.
And you took it right where I wanted to bring the conversation next: what is the impact on public trust here? We're currently in a situation where we're halfway expecting not to trust the written text or the video we see, or we're unsure what has been generated and what is authentic. What impact can these solutions have on public trust? Yeah, and I realize it's three people from the US at the EIC, but in the US the FTC, the Federal Trade Commission,
also monitors and has things to say about communications. They recently banned robocallers using AI, automated messaging. But how do you detect and stamp that out? It's always a cat-and-mouse game: if you've heard some of the voice generation, it's really good, to the point where a lot of people say that voice is no longer a valid factor for authenticating critical transactions. And a lot of the kinds of fraud on the rise today, not theoretically but today, are basically phishing attacks.
There's a really interesting one called pig butchering, a really grisly name, but it's on the rise. It's typically a romance attack: someone pretends to be a potential romantic partner, fattens the victim up and gets them all in love, and then says, oh by the way, I invested $10,000 in this really cool cryptocurrency.
Just send it to this address and you can get some too. And then they never see the money again, because there's all this trust built up. Think about how difficult it was to build that trust before: you'd have to be a very competent native-language speaker, you'd have to sound the part. But asking AI bots to use mimicry and approximate that is now very inexpensive for anyone, even if you don't speak the language natively. So we're actually seeing attacks like that.
I wish I had raw statistics, I can dig those up later, but yes, these things are impacting systems now. So what about adding a layer of authenticity to communications? I think we also don't want to require all chat messages to present your strong identity. Look at some adult sites in the US that have put up walls to make sure people accessing them are over 18: people just use VPNs, right?
And I also personally don't really want to live in a society where I have to present my strong identification every time I want to use a messaging app, so that's another concern. So having systems that allow for attribute-based identity from different issuers, again, decentralized identity, makes a lot of sense, to have some modicum of understanding that, okay, this is a human I'm talking to over this channel, or something like that.
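The attribute-based flow described here, presenting only the predicate a service needs ("over 18," "is human") rather than a full identity document, can be sketched minimally. The predicate names and token format are illustrative assumptions, and HMAC stands in for real issuer signatures.

```python
# Attribute-based disclosure: the issuer signs a single predicate,
# and the verifier learns only that predicate, not name or birthdate.
import hashlib
import hmac
import json

def issue_predicate(issuer_key: bytes, predicate: str, value: bool) -> dict:
    payload = json.dumps({"predicate": predicate, "value": value},
                         sort_keys=True)
    sig = hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check_predicate(issuer_key: bytes, token: dict, predicate: str) -> bool:
    """Verifier: is this exact predicate attested to by a trusted issuer?"""
    expected = hmac.new(issuer_key, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    claim = json.loads(token["payload"])
    return claim["predicate"] == predicate and claim["value"] is True

key = b"issuer-key"
token = issue_predicate(key, "over_18", True)
assert check_predicate(key, token, "over_18")  # age gate passes
```

Real systems would use selective-disclosure or zero-knowledge schemes so that even the issuer's signature doesn't become a tracking handle, but the shape of the exchange is the same: minimum threshold in, nothing else out.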
So I think that by providing more options to prove the minimum threshold you need to access services in this new world, we can build public trust by giving optionality, instead of people opting out of identity entirely, which is one consequence we're seeing; in areas where it's failed before, it takes like 10 or 15 years to try again. Yeah. And we also need to educate policymakers. They don't realize that this technology is actually available and can be applied in this way.
You can just see, in some of the legislation that's already out there, a complete lack of knowledge about how decentralizing technologies and blockchain can be a really great check and balance for generative AI. That doesn't mean this couldn't be a good policy solution.
Actually, it is a very good policy solution, and it could be one of many options. But that means we need to spend a lot of time educating policymakers, getting them to help support building the ecosystem, because this ecosystem only works if there are issuers and verifiers, and getting that kind of public support. Of course, in the end, it all does come down to the end users.
A good example of how powerful end users are is open banking in the US. We don't have a requirement for open banking in the US, but it happens because bank customers want their banks to share their data with fintechs so they can access fintech services via their apps. So here we have to come up with the financial incentives to shake users out of their complacency and get them wanting to use this, and make the user experience as smooth, easy, and comfortable as possible.
Yeah, and I think we've touched on the role of education and usability, UX. Similar to the browser checkmark we've grown accustomed to looking for, there is a huge role for education in informing users, but then also ideally expanding the scope of trust, as opposed to trying to minimize the wild west in which you have to be paranoid of everything by default, as you do right now when you're interacting online. That is the goal with what we're building here. Absolutely. I'll take a quick pause here and scan the room.
Do we have questions from our audience members here in the room or online? You can of course send those in through the app. You can think about it. We'll come back to you. Yeah.
In the meantime, while people are thinking of questions, I want to elaborate more on this personal data right to certify your own data as authentic.
This is an active right: it allows you to express yourself, and it's voluntary. You don't have to certify, but if you want to, you can, and then others can distinguish between what's authentic data and what's synthetic data. Right now, that's the problem: we can't tell what is synthetic, generated by an AI model, versus what came from an original creator. And this is an ecosystem that doesn't necessarily judge whether synthetic data is good or bad; synthetic data can actually be very productive and useful.
It just gives us additional information about the data we're using, whether for our own analysis or to feed back into an AI model. Absolutely. Scanning the room. Yes.
Actually, can I borrow your microphone for a second? Come on up. Yeah. This is mostly for our virtual audience: if you're not on a microphone, they won't hear you.
So please. Thanks for the talk, really interesting. I was very encouraged to hear that the C2PA protocol, or data model, includes a provision for verifiable credentials in the identity attestation. That's great news, but it brings us back to an issue I always see coming up, one that predates the AI revolution: the lack of a really clear trust anchor for verifiable credentials. I know there have been a lot of improvements; there's a lot of talk, for example, about web of trust, and I think the EUDI wallet will also help a lot.
But maybe you can just summarize: what are the best options? Which options do you think are most likely to be adopted when it comes to verifying this chain of trust in a decentralized world? Thank you. Great question. So the question is: how do we establish a trust framework and governance for things like C2PA, which have these as really necessary inputs? Who are actually going to be the qualified signers of these things, so that we can make use of them?
I think we prefer solutions where you can have many different signers, and hopefully the people viewing the content can decide which of the signers they trust, which ones they think assertions are legitimate from. This lets you choose at the wallet level, and it avoids creating an information choke point where you can do all sorts of nefarious things.
So basically, I think what's important to realize is that a lot of these trust frameworks are booting up independently of initiatives like C2PA, and because they're being booted up on interoperable protocols, the hope is that you can just directly rely on them. Take the use case of using your mDL to certify content.
In the United States, you can either collect all the certificates from the different DMVs that have mDL programs, or AAMVA, the American Association of Motor Vehicle Administrators, hosts a digital trust service, which is a registry of the certificates across the states that are participating and meeting certain qualifications, right? They call that a Verified Issuer Certificate Authority List, or VICAL, and it can potentially be a root of trust if you want to allow that in your implementation of C2PA, right? So there are other organizations like that.
There are some bank IDs in Canada and parts of Europe that could also be interesting. But the missing link is basically going from the establishment of those, and defining an interoperability profile, to something like C2PA, and how you just start pointing to that as a potential root of trust. Having systems that do that is really powerful, because then you don't have the same six or seven companies deciding who gets to issue the credentials, right? You can accept a variety of sources. Yeah. And Wayne touched on the risks.
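The idea described here, a verifier accepting several independent trust registries rather than one fixed certificate authority, can be sketched in a few lines. This is an illustrative sketch only: the registry names, DID-style identifiers, and the `trusted_by` helper are hypothetical, not the actual C2PA or VICAL data formats.

```python
# Hypothetical trust registries a verifier might accept. In practice these
# would be fetched lists of issuer certificates (e.g. a VICAL-style registry
# of DMV certificates, or a bank-ID registry); here they are just sets of
# illustrative issuer identifiers.
TRUST_LISTS = {
    "aamva_vical": {"did:example:dmv-ca", "did:example:dmv-ny"},
    "bank_id_registry": {"did:example:bank-1"},
}

def trusted_by(issuer_id, accepted_lists):
    """Return the names of the accepted registries that contain this issuer."""
    return [
        name for name in accepted_lists
        if issuer_id in TRUST_LISTS.get(name, set())
    ]

# The verifier (e.g. at the wallet level) chooses which registries to accept:
print(trusted_by("did:example:dmv-ca", ["aamva_vical"]))  # ['aamva_vical']
print(trusted_by("did:example:bank-1", ["aamva_vical", "bank_id_registry"]))
```

The point of the design is in the second argument: each relying party picks its own set of acceptable registries, so no single list becomes a choke point for who may issue credentials.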
So what are the risks here? The situation right now in C2PA, which is fantastic, is that it's already using verifiable credentials. But if you're using certificate authorities as the root of trust, do you end up being sort of locked in in terms of who can issue these claims? The good news is that we are already talking with them, and there's strong interest in generalizing that.
So you could imagine that we're talking about decentralized identifiers and enabling more open-world kinds of ecosystems, right? We have time for either a last question from the audience or a quick wrap-up statement from our speakers, but I'm seeing a very content and quiet audience, so we'll turn it over to you. Can you wrap up in a quick statement? What do we need to take with us from this conversation? Kim? I'll go first. Yeah. So I think the main thing is that everything we're talking about is there; it's just about connecting the pieces.
So I think it's just a really good time. Whether you have a company and you're building products, I think this is a really great space to look into. As a consumer, it's worth looking into what the options are and making sure that whatever solution you're using is providing these sorts of open-world expectations, portability, things like that. So essentially, if you're interested in talking more about this, come talk to me at DIF; we're looking into this very heavily. Thank you. Right. To certify: we have the right to control our data.
And if your jurisdiction does not have that, you should fight for it. Sadly, in the US we don't have this right, unless you live in one of the, I think, 13 states that have adopted data privacy laws. But we should have the right not only to control our data, but to mark our data as ours. Thank you. If you look at early mailing list discussions about the internet, you'll see a lot of mentions of user agents. Today it's kind of like a string in an HTTP request that says Chrome or whatever. But I think back then it was a lot loftier of a goal, right?
It would be your agent in cyberspace, and it would do a bunch of things for you: negotiate protocols, or, you know, decide what you want to share or not, right? So I think getting back to that original definition, we really need that to make sure we can take the most advantage of these new tools to originate or generate content, but also to be able to certify content, things that are done on your behalf. So I'd like to see us move towards user agents' original definition again.
Thank you to all of you. Thank you for your interest here. If you have more questions, feel free to get in touch with our panelists. They'd love to speak more with you about it. Thank you very much.