So last week I went to my local pool, and I have this really fine piece of high-security tooling to get in: a plastic card with a barcode carrying my pool pass number.
I put it under the scanner and they let me into the pool. And no matter which lifeguard is there, even one who just started working that week, they let me in, because the institution remembers who I am through my pool pass. And there's a backup mechanism.
I misplaced my pass last week, and I said, hey, can I just tell you my name and you can look up my record and deduct one of my pool swims from my tab? And they did it. And I didn't even show an ID, right? Because it's the local pool. But it had this memory of me in its records, the information tied to my pool pass.
Then I did something much more high-assurance. I recently decided to actually change my name at the Social Security Administration. Technically I got divorced 12 years ago, but I had never changed any of the paperwork.
So I went there with the paperwork from the divorce, my driver's license with my name and photo, and my green card with my name and photo. And I interacted with a lady behind a window in one of those little semi-cubicles and said, hey, I am me, please change my records. I'd never met this woman before, and I probably never will again.
But she was able to change this record for me in this government system because it remembered me, and she used the documents I had from other authoritative sources that they trust to match me, the human sitting in front of the window, with the request that was being made. And lo and behold, a few weeks later, my new Social Security card came with this very fancy new feature: it's printed in shiny letters. Woo-hoo. People are always surprised at how low-tech these cards are, but it happened. So this is a slightly more advanced form of institutional memory.
And so this is the definition that we're putting forward: the information, particularly identity data, that institutions have about us, which gives them the capacity to know and recognize us and to let the people who work for them interact with us in a trusted manner.
So it's these systems that help institutions know us, even though we may never have met the people we interact with, that I'm calling institutional memory.
Now, we had institutions before we had computers, and institutions used a really advanced technology called paper, which, if you study the history of paper records and how they worked, is pretty incredible. They used index systems, they used log books. Things got really advanced with Hollerith machines, which used cards with holes punched in them that tabulating machines could collate. And in fact, the whole US Social Security number system, founded in the 1930s, was entirely paper-based until computers came along in the sixties.
And I remember as a child going to the dentist and seeing walls and walls of paper charts, right? They don't have those anymore. So we had a revolution in the capabilities of institutional memory with the advent of computer systems and databases, which started to proliferate in the 1960s and seventies. Now I'm going to go to a diagram to consider what's happening with institutional memory. This is a diagram that I worked on for a World Economic Forum report on rethinking personal data over 10 years ago.
It wasn't specifically about institutional memory, but it's very helpful for thinking through the issues we're facing today. On the rows we have linkages to the individual, and along the bottom we have the nature and source of the data, and things get creepier up and to the right.
At the bottom we have information that isn't linked to a person's identity and is volunteered; then attributes linked without identity, linkable through combination, and directly linked to an individual's identity. And we often really only think about volunteered information.
So these are things we knowingly tell the institution: the things we write on the form, the tweet we tweeted, the video we posted. We are taking a conscious act to share information. And that's not actually particularly creepy, because we kind of get it; we understand we've taken an action to do this. But then you have observed data. In an electronic system this could be them recording your IP address and knowing where in the world you are talking to them from, even though we aren't really thinking of that data as observed.
It might be other things about us, like, if it's in person, our eye color or other attributes that can be easily observed.
So if we thought about it, we would understand: oh yeah, they could see that in their interactions with us. But then we get to the inferred side, which is taking information that's both volunteered and observed and combining it with other data to make inferences about us. So if we had geolocation data and we knew you went to a mosque, synagogue, or church, we could probably infer your religion if you did that on a weekly basis.
If you happen to patronize a gay bar, we might infer things about your sexuality. And the list goes on for the range of inferences we can make from geolocation data, browsing-habit data, and interactions with any number of institutions.
So institutional memory has created tensions in society and with people. One of the inflection points was in the late sixties and early seventies, when people started to get catalogs in the mail that were relevant to them but couldn't figure out why. It was like, wait a second, how am I getting this catalog for fishing gear?
Or, you know, whatever else it was, and people didn't understand. It was also the era when mainframes were coming in and data brokers were being digitized, and this raised concerns in the public imagination. You also had American government officials seriously considering building a national data bank in the late 1960s, which would have merged all the data from all departments about all people into a kind of super database. The story was, we need to compete against the Soviet Union, so we need more data about ourselves so we can be better.
And this also freaked people out.
There were covers of magazines with long-form articles about what would happen. Books got written, and basically people rejected this idea and said, no, you can't do it. There was a whole series of presidential commissions and hearings in the Senate and Congress, and the end result was legislation in 1974, the Privacy Act, which basically limited the federal government's ability to share data between different agencies.
Another thing that came out of this was a report called Records, Computers and the Rights of Citizens in 1973, which was the source of the original fair information practices and principles. These evolved in another report in 1977, Personal Privacy in an Information Society, and by 1980 the OECD had adopted them. So these are widespread throughout the OECD, and you can see them here: collection limitation, data quality, purpose specification, use limitation, individual participation, accountability.
Now, there's a challenge that has led to a whole evolution, with privacy becoming a big deal, but privacy notice and consent frameworks are really limited. They're hard to read, you have to decipher them, they're long, and it's really difficult to communicate what organizations have about you, especially with observed and inferred data. It's not even clear that observed and especially inferred data are covered by these frameworks at all, because the institution is just making leaps, putting things together, and the law doesn't say they can't do that. It just says they have to treat the information you give them well.
So now that privacy framework is inadequate, and we're adding this whole other layer of compute and sensors into our environments. I keep getting bombarded with new sensors that find new things and do new things, and we have AI. So now you can take things like video feeds, run them through AI analytics capabilities, and read the emotional valence of the people in a room. We could do that right now with all of you, and it would tell us how you feel.
Things that were really only possible for humans are now possible with algorithms and machines. And the area we don't understand, of what people know about us, especially with all this compute tech, is growing, right?
So we already had this problem in the sixties and seventies, you're getting the sports catalog, and it has just accelerated. For example, it makes a lot of sense with our medical records for the doctor to write a lot of good notes.
But what if there are cameras in the offices where the appointment happens, and those cameras are recording information about us and also putting that in our records? That's not inconceivable in the near future.
Or when we go into a store, the store reads our face, figures out who we are, maps everywhere we go in the store, and adds our emotional valence on top of it. We could think of all sorts of things. So now this is potentially in our customer record with the store. And sometimes, when I listen to privacy advocates talk about identity and the issues with privacy, I feel like they're saying: stop collecting any data about anybody at any time. It's almost like they are inviting the institutions as we know them, institutions that have records about people, to evaporate.
We have to have institutional memory. We have to know who the patients, the customers, the employees are.
We must have records about people for our contemporary institutions to work. But the question we need to start asking ourselves, and what we need to collectively interrogate, is: how good should institutional memory be?
This is a collective question that gets us out of personal privacy being the frame of the problem and into a place where we can hopefully have a better conversation with civil society and other groups, one that isn't "protect my privacy" but rather, what are the norms for different types of institutions? Because we may come to understand certain contexts where more information kept in the records is better and helps them serve us well, and other places where having that much information is superfluous and perhaps dangerous.
So this is the question I leave you with, and hopefully we can start a conversation and shift the narrative to one that's forward-looking and responsive to the advent of AI and intensive compute capabilities: how good should institutional memory be? Thank you very much.
Any questions?
Thank you so much. The presentation was great. Maybe we have a question for you before you finish.
Yeah, Kaliya, thank you so much. That was very clear, it really makes things accessible, and it's a really nice reframing. We have institutions represented out here in the audience: what are some suggestions for ways in which individuals can go back to their institutions and start to address this? What are some of the sensitivities and initiatives they might take up immediately to start these processes?
I know that's like a four-day conference worth of a question, but are there some suggestions people can take away and start to act on immediately? Sure.
That's a good question. I hadn't thought about the answer until you asked it.
One is, I guess, a sort of inventory as a first step, right? Which is: what does the institution do now, and what could it decide to adopt technologically in the near future?
And then have a conversation about whether that's appropriate or not, or which aspects of that new thing are useful, relevant, and meaningful, and which aspects we maybe shouldn't do. Just because you can do it doesn't mean you should.
And last question.
Okay, this is an impossible question to answer, so I'll preface it that way. Fifteen years out, you're in 2040, which is ridiculous, right? Ridiculous.
What would, you asked me this yesterday,
What would awesome look like for the relationship between institutions and individuals in this regard?
Great. I think awesome would look like there being deliberative dialogues about this for different sectors.
Because I also think this isn't about just one institution, right? That's part of the problem: my company can behave well, but other companies in my industry don't, and because of the differential in intensive compute and so on, they have an advantage. We need to be able both to have a discerning conversation about values-oriented companies doing the right thing, and also to have a regulatory stick where people agree, yeah, that's where the line should be.
And I think we need to get better at understanding that there should be lines, and then better at figuring out how to draw those lines fast enough to keep up with technological evolution. I think this question can stand the test of time as one that can keep getting asked and keep getting new answers as technology evolves, whereas I think the privacy question has already run its course and become rather useless for solving the problem of how we do right by people.
And so lastly, I'd just like to thank you, Kaliya, not just for the presentation, but for your tireless work over decades in this area, for your promotion of the IIW workshop, which I'd encourage people to attend, and also for your new book, which I'd encourage people to go and find and purchase.
You mean Domains of Identity?
Yes,
That's okay. Great.
And truly, you help us to reframe things that are very complex in a way that makes them tractable. So please join me in thanking Kaliya for the presentation today.