When I was thinking about this topic — and it was really prompted by a research paper we published a few years ago — the question was the overall approach to thinking about AI. Even though we're talking about socialization and related ideas, these concepts are, I think, a little vague, and I want to use this session to set the right approach to thinking about the problem. So it's more thinking about thinking: which direction should we be going? That's why it's a path we're looking for, rather than an automatic solution.
There's no simple solution to it, but we can set the right path. So with that, if I can move — I'll do it. There we go.
But yeah, I was going to start with a quick introduction of some of my work, which relates to the topic as well. There's a bit of identity work I'm involved with: I serve on the governing board of the OpenWallet Foundation, so wallets and digital wallets are definitely something I think about every day. I'm also involved in the Trust over IP Foundation, which works on protocols and technical standards for what a trustworthy digital ecosystem would look like.
The "IP" echoes the Internet Protocol: how do we bring trust, or a concept of trust, into the internet and web world? And we're going to talk about how that relates to AI. So I want to start with my term, the sociotechnical approach, or the sociotechnical path. This story goes back to the postwar period in Britain.
At the time — I'm not very good with history — I believe the Labour Party had won, just as today; it looks like they're going to win again. They had a lot of ambition to design a new economic system to help the workers, including the miners. So they introduced a lot of new regulations, new rules, and new technology, hoping that miners' lives would be improved. And remember, at the time coal was still the main source of energy for Britain.
They hoped this would be a no-brainer, right? It turns out it didn't work that way. Morale actually dropped significantly during that period for the miners, there was a lot of conflict, and labor relations were not getting better.
At this juncture, some researchers learned about a particular coal mine where things happened to work better: the miners were happier and productivity was high. So they sent researchers there to understand what was going on, hoping to learn something. And that's the beginning of the so-called sociotechnical method.
I needed a photo to show what this mine looked like, so I gave a prompt to ChatGPT-4o: here is a period, can you show me something about the miners then? And this is the picture it produced. I think it's pretty good, but when I first saw it, I immediately said: that looks like an earlier era — this is the 1950s, it should look better than that. So I pushed back, and in this case ChatGPT did not apologize and remake one; it actually argued back and told me why it was right. So it wasn't as good as I thought. I had to go Google photos from that period, and sure enough, this is a good one. Just to share an anecdote.
The researchers then started working on this sociotechnical method, and these are the diagrams they'd show. What it comes down to is a recognition that humans, or workers, have needs other than work, productivity, and efficiency. They want to exercise autonomy; they want to have some say in what their lives will look like.
Clearly their needs are more than just the work itself. And when we talk about sociotechnical methods, they're usually contrasted with the previous methods, which used to be called "scientific" — scientific management, basically: long lines of people working on an assembly line, fully automated, and so on.
The people are actors in this system; they have agency, they have needs and social structure. So it would be best to understand the whole thing in a more integrated fashion.
They came up with the word "sociotechnical," and to them, specifically, "social" really meant social structure: governance, who your boss is, who you report to, who sets your working hours, and so on. I'm going to use the word more broadly, to include what they called the "psycho" part as well — the psychological, the human part, the humanities side of social science. In any case, they were talking about the emergence of a new paradigm of work.
It turns out the bureaucrats in Britain and elsewhere didn't really like it, even though this particular group had wonderfully self-organized and made things more efficient. The overall approach did not get implemented anytime soon.
What did happen is that it became part of management studies. In that light, I want to start talking about my own work: when we study AI, I want to think about the same issue. In AI systems, we already have a lot of technical research about fairness, bias, and accountability. One particularly important study concerns the United States legal system's decisions on parole — a report by ProPublica. Some of you may know it: a deep piece of investigative journalism about court systems using computer tools to predict, or to make decisions on, who may commit a crime again if released. They found plenty of bias in it, and it began a whole subfield of study on AI: what bias is and how it comes to be.
But to me, what's more interesting about this issue is that people started to ask: what do you mean by fairness? Because the company producing the tool will show you, "Look, here is the history of how many predictions were correct and how many were wrong," and they show there is no bias across different race groups or income groups. It turns out that if you dig deeper, there are many, many different definitions of "fair." Fairness is really a concept we constructed, which we thought had a clear definition — similar people should be treated similarly. It's not that simple.
It turns out there is no such thing, and the social sciences and legal professions can't find a better answer either. I looked at a lot of EU and United States regulations about technology and how it's used; I've since added the EU AI Act and the other AI regulations as well. I think there is a gap in between: these are very noble notions we want to achieve, and the systems may or may not support them.
So I thought I would study sociotechnical systems with this methodology of looking at the two together, hopefully optimizing the outcome by combining both. That was the beginning of this research work. Now let me very quickly jump into the first topic: fairness.
I won't have time to go into a lot of detail here, but there's a wonderful presentation by a professor from Princeton called "21 Fairness Definitions and Their Politics." It's a very entertaining and interesting way of looking at this, and it provides a very concise and accurate description of these problems. There is no easy definition, and you can't really argue that one is better than another.
There's the clear distinction between group fairness and individual fairness. There are tensions between privacy and fairness — Dr. Dwork has a wonderful paper along the lines of "it's not fair, and it's not private."
All of this brings out a lot of complexity. I think many people in the criminal justice system understood these concepts already, but this new line of study brings them out with more clarity and mathematical rigor in defining what the issues are.
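To make the conflict concrete, here is a minimal sketch — my own illustration, not from the talk or the ProPublica analysis, with made-up numbers — of two common fairness definitions applied to the same toy predictions. Both groups are flagged at the same overall rate (one notion of fairness), yet the false positive rate differs between them (violating another), which is exactly the kind of disagreement the COMPAS debate exposed.

```python
# Illustrative sketch: two fairness definitions applied to the same toy
# predictions, showing they can disagree. y = 1 means the person actually
# reoffended; yhat = 1 means the tool predicted reoffending.

def flagged_rate(pairs):
    """Fraction of people the tool flags (demographic parity compares this)."""
    return sum(yhat for yhat, _ in pairs) / len(pairs)

def false_positive_rate(pairs):
    """Among people who did NOT reoffend, fraction wrongly flagged."""
    negatives = [(yhat, y) for yhat, y in pairs if y == 0]
    return sum(yhat for yhat, _ in negatives) / len(negatives)

# Hypothetical (yhat, y) pairs for two groups, chosen so overall flag rates match.
group_a = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 0)]
group_b = [(1, 1), (1, 0), (1, 0), (0, 0), (0, 0), (0, 0)]

print(flagged_rate(group_a), flagged_rate(group_b))  # 0.5 vs 0.5: parity holds
print(false_positive_rate(group_a))                  # 0.25
print(false_positive_rate(group_b))                  # 0.4: error burden differs
```

Known impossibility results in this literature show that, outside of degenerate cases, several such definitions cannot all be satisfied at once — which is why "just be fair" is not a specification.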
In my own work, I've worked with Sam Smith and others on what we call the Trust Spanning Protocol. It's a protocol we designed for internet use that tries to assure privacy, authenticity, and confidentiality. We said: hey, those three things all sound very good, we should have all of them. It turns out that's not possible; we have to make choices. These are the kinds of findings that aren't surprising once you think about them, but a lot of the time we throw the terms around as if you can have it all. I also want to very quickly bring up a more recent case.
On the right, what I have is a Washington Post headline, I believe, which says: "Scarlett Johansson says OpenAI copied her voice after she said no." Has anybody not seen the movie "Her"? If you haven't, please go watch it — it's very interesting, very entertaining. In this case, she thinks OpenAI copied her voice. Now, I happened to pick that particular voice when they announced it. I myself did not think of her; I picked it for certain qualities of the voice — it is interesting, and definitely comforting.
But is this copying? Is that her voice? Do I have ownership of my voice? Is there a right to the likeness of my voice, or the likeness of my look? These are much harder questions. We can always say, "Hey, I feel offended," but it's very hard to delineate exactly what has been taken away from you, where the right starts and where it ends. It's very difficult.
What I really enjoyed reading about is the Writers Guild of America, the WGA, and also SAG-AFTRA, the actors' union in Hollywood. I really like their thinking. Through strikes, they negotiated deals on how their likeness rights are preserved: how to enable AI training, how to use AI tools for movie generation, voice, and so on, while preserving their rights. I think that is the right way to approach it — consciously working out where these rights lie, because they are not very clear.
Even when we have the choice to decide on our own, we cannot decide easily. So I invite you to think about these issues; I think you'll come to a deeper understanding of them.
From there, we published a paper, and it was very well received — that's why I've been invited to talk to you here. But we also started working on systems: we were thinking, how can we design something out of what we have today on the internet that incorporates these factors and takes a more sociotechnical approach?
I'm going to talk about some of the work on the technical side first, and then the social side. The technical side is related to digital identity — of course, that's why we're here — and we would like to have verifiable identifiers. "Verifiable" just means the receiver has a way to appraise, to assess. It's not that something is verified once and for all; it's verifiable — you have the ability to try to verify. Nothing is ever simply "verified." So it's an appraisal process, and it's a power for a person's agency: if you can judge well, then you have agency; you can decide whether something is good or bad for you, acceptable or not. You're in the driver's seat, with a choice to make. That's one enabler for a lot of these things. Digital identity is important not because somebody guarantees it for you, but because it gives you the ability to make an assessment. The second piece is that we need a common, interpretable language for trust.
We need a way to talk to each other that, again, gives you the ability to make judgments. We don't have such a language, and we thought we really needed one — that's the Trust Spanning Protocol. I won't go into details, but if you do a Google search, we have a draft specification written for it, and there's some code in the open-source repo if you really want to dig in.
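As a flavor of what "verifiable means appraisable" looks like in practice, here is a minimal sketch — my own illustration, not the Trust Spanning Protocol specification; the `vid:` identifier scheme and helper names are hypothetical. An identifier is derived from a public key, so a receiver can appraise any signed message against it without anyone guaranteeing anything.

```python
# Minimal sketch of appraisal (hypothetical, not the actual TSP spec).
# Requires: pip install cryptography
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def derive_identifier(public_bytes: bytes) -> str:
    # Hypothetical key-derived identifier scheme, for illustration only.
    return "vid:" + hashlib.sha256(public_bytes).hexdigest()[:16]

# Sender side: a key pair and an identifier derived from the public key.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
identifier = derive_identifier(public_bytes)
message = b"hello, here is my claim"
signature = private_key.sign(message)

# Receiver side: an appraisal, not a guarantee -- check that the key
# belongs to the identifier, then check the signature over the message.
def appraise(identifier: str, public_bytes: bytes,
             message: bytes, signature: bytes) -> bool:
    if identifier != derive_identifier(public_bytes):
        return False  # key does not belong to this identifier
    try:
        Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(appraise(identifier, public_bytes, message, signature))  # True
```

The point is that the receiver runs the check themselves; the result is their judgment to act on, which is where agency comes from.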
With that, we can formalize so-called trust tasks — things we actually do in life. Think of consent, for example, which we talked about this morning. Currently, if you're asked for consent on the internet, you usually get yes or no, and you usually say yes because there's no other choice. That's not consent; there's no negotiation on your part, because you're not given any choice there, right?
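As a rough illustration of the point — hypothetical, my own sketch and not from any trust task specification — here is what consent as negotiation, rather than a single yes/no, might look like as a data structure:

```python
# Hypothetical sketch: consent as negotiation over separable purposes,
# instead of one take-it-or-leave-it yes/no.
from dataclasses import dataclass

@dataclass
class ConsentTerm:
    purpose: str           # e.g. "analytics", "model training"
    retention_days: int    # how long the data may be kept
    granted: bool = False

# The requester enumerates what it wants, per purpose.
request = [
    ConsentTerm("provide the service", retention_days=30),
    ConsentTerm("analytics", retention_days=90),
    ConsentTerm("model training", retention_days=365),
]

# The person answers each term, possibly with a counter-offer.
request[0].granted = True
request[1].granted = True
request[1].retention_days = 30   # counter-offer: shorter retention
# request[2] stays rejected -- and the service must still work.

print("granted:", [t.purpose for t in request if t.granted])
```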
Now, when we talk about AI — on the right side is this sociotechnical model. We've since drawn more complex, more complete pictures, but I like to use the original one because it's relatively simple: where the information gets collected, how the model is trained, how the model is produced, and how the model is then used in a particular service.
Think of GPT-4 as a model, while ChatGPT is a service — a piece of software. And it has all the problems of any other software: it can be hacked; a lot of things can happen. So if we want to bring in some idea of where the power balance should be and where regulation should go, it really helps to have a good picture of these stages in your mind. Those boundary lines are where I think regulation or rules can be introduced most efficiently. And I use these terms very generally: they can be socially based rules, cultural rules — we think that's good, that's bad — or government-imposed rules too.
That's where I think these technical mechanisms should be derived from: a particular model, incorporated into all the actors within it.
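Here is a toy rendering of that idea — my own sketch, not the talk's diagram or any standard; all stage and rule names are made up — treating each boundary between pipeline stages as a checkpoint where a rule attaches:

```python
# Toy sketch: rules attach at the boundaries between pipeline stages,
# rather than regulating "AI" as one undifferentiated blob.
from dataclasses import dataclass, field
from typing import Callable, List

Rule = Callable[[dict], bool]  # a rule inspects whatever crosses the boundary

@dataclass
class Boundary:
    name: str
    rules: List[Rule] = field(default_factory=list)

    def cross(self, payload: dict) -> dict:
        for rule in self.rules:
            assert rule(payload), f"rule violated at boundary: {self.name}"
        return payload

# The four boundaries of the simple model: collection -> training -> model -> service.
boundaries = [
    Boundary("data collection", [lambda p: p.get("consented", False)]),
    Boundary("training",        [lambda p: p.get("anonymized", False)]),
    Boundary("model release",   [lambda p: "eval_report" in p]),
    Boundary("service use",     [lambda p: p.get("labeled_ai", False)]),
]

payload = {"consented": True, "anonymized": True,
           "eval_report": "...", "labeled_ai": True}
for b in boundaries:
    payload = b.cross(payload)
print("all boundary rules passed")
```

Whether a given rule is cultural, market-based, or legally imposed is a separate question; the sketch only shows that each one has a well-defined place to live.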
The second part I want to talk about is the social aspect. This really gets into what sociologists, and scholars in the humanities, call the socialization of technology: the process by which we embed technologies in, and accept them into, our world.
We have agency over AI at the beginning of the whole process, so we can shape it into something we would really enjoy. This is where we use the technical system we just built on the previous page to express rules. Those can be institutional rules — how your data must be anonymized, how things should be forgotten; all good things. We can also have market rules, for example ensuring portability. A lot of identity and privacy issues are actually better served by introducing portability — being able to take your data and go somewhere else. That's very important to have.
Again, once you have a delineation of different modules, you can get into all those interactions and build accountability, audit, or visibility into those systems — whether that's imposed from outside or we simply decide it's a good thing to do. I think this opens up more of the design space for technology that serves a social need.
This will be my last, sort of, meaty slide. It's another piece of work we're doing in a task force I co-chair, called AI and Metaverse.
We talk about what the big issues in this world are and where we can make a difference, and one area we identified is the authenticity of AI information. In this space, I personally got involved in three different organizations, each with a wonderful group of people specializing in their respective areas. OpenWallet talks about wallets. A lot of the current wallet conversation is about identity in the wallet, but a wallet should be thought of as a vehicle for our agency. It's an enabler: we should be asking, what do I want my wallet to be able to do for me? That's the good way to think about enabling agency. A digital wallet is a good thing because it's the closest thing we actually grab and carry with us; I have much stronger control over that device than any other. So it's a good anchor point, where you actually have custody of the device itself.
The second is Trust over IP, which I've talked about already, designing protocols for trust spanning and interaction. The last one is C2PA. Many of you may have heard of it: a lot of the US-based AI companies decided to adopt C2PA as a way to label the content they generate. They add metadata so the receiver can interpret it and make an assessment. Again, it doesn't guarantee anything; it's an ability for you to make an assessment.
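To illustrate the principle — this is a toy, not the real C2PA manifest format, which is a signed container structure (see c2pa.org); the field names and the HMAC stand-in for a real certificate-based signature are my own assumptions — the idea is to bind a hash of the content to an origin claim, sign the bundle, and ship it with the content so the receiver can assess it:

```python
# Toy illustration of content provenance labeling (not the real C2PA format).
import hashlib, hmac, json

KEY = b"demo-key"  # stand-in for the generator's real signing credential
sign = lambda s: hmac.new(KEY, s.encode(), "sha256").hexdigest()
verify = lambda s, sig: hmac.compare_digest(sign(s), sig)

def make_manifest(content: bytes, claim: dict) -> dict:
    # Bind the origin claim to a hash of the exact content bytes, then sign.
    body = {"content_sha256": hashlib.sha256(content).hexdigest(), **claim}
    return {"body": body, "signature": sign(json.dumps(body, sort_keys=True))}

def assess(content: bytes, manifest: dict) -> bool:
    body = manifest["body"]
    if body["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after labeling
    return verify(json.dumps(body, sort_keys=True), manifest["signature"])

image = b"...pixels..."
m = make_manifest(image, {"generator": "some-model", "ai_generated": True})
print(assess(image, m))          # True: label checks out
print(assess(image + b"x", m))   # False: the receiver's assessment fails
```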
Okay?
So with all that study, we thought we should update the sociotechnical approach. I keep using the same name, but we're really incorporating not just the traditional sense of "social" but also the cultural and, in the original terminology, the "psycho" — the psychological. All those factors come together now, because AI breaks the old frame: for the miners, we were really talking about machines that are passive. Now we're talking about a machine that is an active participant, and we need to rethink human agency and how much agency we give the machine. Many machines already make many, many decisions today; we may put time limits or other limits on them, but they already decide a great deal.
If we define those agencies clearly, then we can produce a system that is actually understandable within those boundaries. And that immediately relates to the meaning of work and the meaning of human life.
The copy I have here on the right side is an old document written in 1984 by Trist, the original sociotechnical researcher, who studied the management of work. What we see on the right are the intrinsic values of our work and life, and they are becoming more and more important. We will be offloading a lot more to systems and machines, and the harder question for us to really think about is what remains for us.
Those values include variety of work, continuous learning, discretion and autonomy, recognition, and social contribution. What, other than money, am I getting out of this job? And what kind of desirable future do we want? That was written in 1984, and the original ideas were formulated in the fifties. I think not much has changed in that sense.
I think AI simply brings all this into much sharper focus, and that's a good opportunity for all of us — not just as human beings, but also for companies and businesses: where is your business model going to be? This is the area where we should be thinking about new business models. There will be new work, and there will be new business.
I'll quickly try to wrap up — I have a few more minutes. I think it's about which path we take.
This is the familiar picture of the red or blue pill. There's a lot of debate in Silicon Valley between the boomers and the doomers — I don't know if you're into that, but I've heard a few of those terms used here. It's a very interesting philosophical debate, and a consequential one; to people outside that group it may sound a little strange, but I think it is the right debate to have today. But it's not just that, and it's not just "human in the loop." I never liked "human in the loop," because it means I'm simply being kept in the loop — as if I'm a figurehead of my own life and other people are just keeping me informed. That's never a good thing.
We can be in the driver's seat, but we also need to be conscious that technology will advance very fast, and it's going to break some things. I know Zuckerberg was blamed for the statement "move fast and break things," but he was taken out of context: he was talking about software, about redesigning software so you don't have to keep maintaining backward compatibility. It's very similar to the Hollywood actors' debate — that was also taken out of context.
But it brings us back to what's really important to us. I would say we should be making the effort to design sociotechnical systems. In Europe especially, the European Union has taken a very active role in trying to influence and regulate AI. I think this is a really good approach: think about how we put the two things together. And this message is for regulators and society at large, but also for technical designers.
In my last few seconds, I'll end by going back to the English coal mine story. First of all, the mine no longer exists; the last closure was in December 2015. I hope coal will no longer be important to us, and in between, tremendous progress has been made. So I'm very positive that AI will make a great contribution. As was said earlier this morning, we should be thinking less about scarcity. Productivity and all of that has been the dominant feature of our lives and our companies' work, but we should be thinking about the meaning of all of it.
What will make our lives flourishing and meaningful, without having to do so much of what we used to call work? I think that's a much more interesting question: what kind of life do we want to have, what kind of society do we want to have?
That will help us build the AI systems together. I think the future is very exciting and bright, and I'll end with a quick example.
Someone told me this — I forget who now. You know when you're waiting at the subway station and the trains are coming? While you wait, you keep looking up: when is it going to arrive? Now they have clocks, but it used to be that you were never sure. In an exponentially growing system, you wait and wait and nothing comes, and you get desperate — and then when it actually comes, it's so fast it goes right past you and you never even see it. Thank you.
Thank you. It was certainly a very interesting session.
Now we have a question for you from our audience. The question is: do you think that a mathematical approach in the algorithms would help to ensure transparency?
Can you repeat that last one?
Yes, yes. Do you think that a mathematical approach in the algorithms would help to ensure transparency? I believe it's related to the explainability that you mentioned here.
Yeah, yeah.
Well, the idea — I haven't said much about decentralization, even though the topic is supposed to be decentralizing AI. But look at it: what do you do with AI? We don't want to build the Matrix, you know what I'm saying? I hope you can all agree we don't want the Matrix; I think that would be the worst outcome. What we want is for the system to be understandable, and the best way to understand it — sure.
Oh, thank you.
Yes. So decentralization, accountability, visibility — I think they're about the same idea: look at the AI system end to end, divide it into pieces, and then you can start to regulate the interactions among them. And a very important part of this is the concept of agency. In this system you need to own something. What do you own? If you own nothing, you will have nothing. So I think that's a very important concept: divide these systems so that we have decentralization, rather than everything aggregated into one.
And then the wonderful feature of these systems, the one we can actually enjoy — that's where agency comes in. We are also building this system together: we contribute data every day, and we had better contribute it to the right place, through the right means. I think that's the real challenge, rather than saying "AI this, AI that" — AI is, after all, not an entity yet, right? So we need to define and really put a shape on this system. Maybe that's a way of thinking about institutions: there are a lot of old institutions that need updating, and some new institutions we need to invent.
Thank you. Thank you so much.