Good morning. We'd like to start this morning with a simple question: who was the real Tara Simmonds on November 17th, 2017? She stood before the Washington State Supreme Court in the United States, asking that court to allow her to become a lawyer. Tara had a troubled background. She grew up in a hard home, a home with abuse and a home where drugs were abused. And after starting out well in life, she fell into drug abuse and addiction herself, and was convicted and sentenced to almost two years in US federal prison.
But while she was there, she found help. She had therapy. She had rehabilitation. Once she left prison, she went to law school, and she graduated magna cum laude from her class, with a prestigious fellowship awaiting her upon graduation.
She took the bar exam to become a lawyer, but the Washington State Bar Association said no. They said, your troubled past indicates you will have a troubled future. And so that was the question before the Washington State Supreme Court, to whom she had appealed: who would the real Tara Simmonds be?
Would it be the woman who was troubled and was addicted to drugs, or was it this new Tara who had rehabilitated herself and then graduated at the top of her class in law school? In short, the question was: would the past patterns of Tara's life dictate her future? And if that scenario sounds familiar, it should, because algorithms and artificial intelligence use a lot of past data to predict our futures every day, right? From predicting what we're going to watch on television tonight, to whether or not we are going to be financially stable, to even, as in Tara's case, trying to predict whether or not we will commit a crime.
And the thing is, as we've seen this rise in use, we've seen the danger that blind trust in algorithms and artificial intelligence can present. We've begun to see the bias that can be perpetuated by bad data and bad models, and how it can lock people who are already disenfranchised and disempowered into their current states.
And so we as identity professionals must not equate possible technology with beneficial technology. We shouldn't use artificial intelligence or technology merely because we can, but because we morally and ethically should. So this morning we're going to stand on previous work from organizations such as IBM and the IEEE, who have done well to lay out pillars of ethics for artificial intelligence. And we're going to try and give you some practical ideas and tools to think about this and to implement it.
And we're going to start this morning where we think it should start: with a laser-like focus on human wellbeing.
Absolutely. And so when you take a look at this circle, right, there are different aspects that we need to consider, the main one being wellbeing. And when we think about wellbeing, it's more than the Hippocratic oath of thou shalt do no harm. So let's take the case of Tara Simmonds. If we were to consider wellbeing, whose wellbeing should we be considering?
Should we be considering Tara's wellbeing? She had risen from her troubled background, stood as a pillar of her community, and was ready to serve and give back. Should we think about her wellbeing and the fact that we were judging her based on something she did in her past, when she had shown that she could change? Thinking of her wellbeing, we might say, okay, we should trust you with this power to come back and give back to your community.
Or should we be thinking of the community's wellbeing? Here is a person we would imbue with the legal power to represent her community and her peers in a court of law. With their wellbeing in mind, can we trust her not to slide back into the past she had before? When the pressures start to get to her, is she going to stay sober? Is she going to stay committed to her oath and protect the people? So it becomes a very interesting situation where we have to figure out whose wellbeing we want to prioritize. Right?
So then we think about what harm is, right? Who could we possibly harm with the technology that we're creating, with the algorithms?
As Mike talked about, with the bias that can creep in, how do we make sure that we aren't doing harm?
How do we guard against the harm we don't intend? One of the ways, which we've put up on this board, is the online Ethics Canvas, which we can work through before we take a step on a project. It asks things like: who are the individuals affected? What are the relationships? What's the behavior? What can we do? What are some of the worldviews around what we're doing? What this forces us to do is ask the tough questions ahead of time.
So we can look and see, okay, what are the potential impacts of what we could do? Because sometimes the harm is not in the immediate thing we think of; it's the thing we didn't think of. It's the unintended consequence that can sometimes cause the most harm. All in all, what this allows us to do is look at this and be accountable, accountable for what it is that we're going to create and the actions that this thing could take.
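For readers who want something concrete to start from, here is a lightweight sketch, in Python, of the kind of pre-project checklist the ethics canvas encourages. The questions echo the ones mentioned above; the EthicsCanvas class and its unanswered() helper are purely illustrative and not part of any official tool.

```python
# A lightweight, hypothetical sketch of a pre-project ethics checklist in the
# spirit of the Ethics Canvas discussed above. The categories echo the talk;
# the structure and helper are illustrative, not an official tool.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EthicsCanvas:
    # Each question maps to the team's written answer (or None if unanswered).
    answers: dict[str, str | None] = field(default_factory=lambda: {
        "Who are the individuals affected?": None,
        "Which groups or relationships are affected?": None,
        "What behaviour could this system change?": None,
        "What worldviews surround what we're doing?": None,
        "What are the potential (and unintended) impacts?": None,
        "Who is accountable for the actions this system takes?": None,
    })

    def unanswered(self) -> list[str]:
        """Return the tough questions still waiting for an answer."""
        return [q for q, a in self.answers.items() if not a]

canvas = EthicsCanvas()
canvas.answers["Who are the individuals affected?"] = "Applicants with prior convictions."
print("Open questions before we start:")
for question in canvas.unanswered():
    print(" -", question)
```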
That's right, David.
And so once we have that work done with the ethics canvas to establish what our ethics are, we need to hold ourselves accountable for meeting those standards. And it is really easy to cede control here. That sounds counterintuitive; didn't we just decide on an accountability standard, an ethics standard? If you're old like me, think about how many phone numbers you used to have in your head: 15, 20, 30 friends and family, depending on how popular you were.
But over time, once you got a mobile device, that number shrank more and more, until maybe you only had your own number in your head. What you'd done is you'd ceded that function, the memorization of numbers and the maintaining of contacts, to technology. And that's part of the danger that we face with AI and ethics.
We're running the danger of saying, well, that's the right answer because the algorithm, the AI, told me so. Because math. And that's a mistake, to cede that control.
And not only do we need to take ownership and accountability in the general sense, it's not just for the people at the top of the food chain either. It's everyone in our organizations, from the C-level all the way down to the person running documentation for our AI implementation. It's similar to a naval fleet.
Back in the 18th century, everyone had to know what the battle plan was before the battle. Early on it was easy, right? You had friends and you had foes, and they were geographically separated. But once the battle started, things got a lot more complicated: friend and foe were intermingled, and it was too late to figure out what the standard was, what the battle plan was. Every man, from the captain down to the lowest sailor, had to know it and had to be equipped.
And the same thing is true of ethical standards.
And then, not only do we have to be accountable, not only does everyone in our organizations have to embrace that accountability and know the ethics, we also have to provide a method of feedback for our users. I heard about a preschool classroom of three-year-olds recently. They took a phone like this, a red phone, not connected to anything, and they put it in one corner of the class. And then they told all the three-year-olds, they said, look, if you have a problem, if you see something that's not fair, you go and call the phone.
And they all did. Twenty of them lined up the first day, calling to say, this is unfair, this is unethical, Timmy treated me poorly. That kind of opportunity for real feedback is essential for our users to be empowered to hold us accountable to the ethical standard that we claim to hold ourselves to. And for them to do that well, for them to complain in ways we can understand and to know what's going on so that we can make changes to comply with our ethics, it's essential that we have the characteristic of transparency, right?
And so when we think about transparency, let's go back to that class of three-year-olds. Right? I don't know about you guys, but how many of you have been around three-year-olds? Raise your hand. Right?
So, the magic power of three, because we all know what happens when we turn three, right? We question everything. It's just a constant day of why, why, or what is this, or what is that. So imagine we take this three-year-old, and they love bubbles, and they ask you a question: hey, why are bubbles round? We could go, okay, great, let's just break down this mathematical equation. We'll explain surface tension to you, right? And explain to them this long derived equation for why bubbles are round.
Remember, we're talking with a three-year-old who has an attention span of about that long.
They just want to know why bubbles are round, right? So you can't sit there and break down a mathematical equation for them, because at the end of the day, they just want to go chase the bubbles. There has to be a transparent, clear, easy answer that we give them. Right? So think about trying to explain your job and what you do to a three-year-old. We have to break it down simply and say, look, when you see the bubbles, water is very tight-knit. It has friends.
The drops like to be around each other. And so when bubbles get made, the closest way the water can be together is to take this shape, because they're friends and they like to be close. And the three-year-old goes, that's awesome, and they go and chase the bubbles. The key there is that it was a very transparent and clear answer that the three-year-old could understand.
This is the same type of thing that we have to have when it comes to using these AI models.
When we interact with the user and a decision is made, we've got to be able to inform the user, just like we would a three-year-old, why exactly this decision was made. We can't bring out the data scientists and say, hey, break down this algorithm so this user understands why the model made the decision it did, because the user is clearly not going to get it. And so here, using another example, we show how you can explain the decisions of one of the mathematical models being used behind the scenes. Right? So here we have the model, we have the data, and we have a prediction.
It's trying to figure out if a human being is sick or has some kind of condition. So it takes the inputs, sneezing, headache, no fatigue, age, weights them, and says, we're going to say you have the flu.
The program being used here is called LIME. It's an open-source tool that says, okay, here's the input that came in, and based on that, here's the explanation of why we think you have the flu. And then at the end we have the human who makes the decision, right? There have to be these checkpoints.
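To make that concrete, here is a minimal sketch of the "sneezing, headache, no fatigue, age, so probably flu" example, assuming the open-source lime Python package and a toy scikit-learn classifier. The feature names, training data, and label rule are made up for illustration; only the LIME calls reflect the actual tool.

```python
# Minimal sketch: explaining a single "flu / no flu" prediction with LIME.
# Assumes `pip install lime scikit-learn`; data and model are toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["sneezing", "headache", "fatigue", "age"]
class_names = ["no flu", "flu"]

# Toy training data: binary symptoms plus age, with a made-up label rule.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 2, 500),          # sneezing
    rng.integers(0, 2, 500),          # headache
    rng.integers(0, 2, 500),          # fatigue
    rng.integers(18, 80, 500),        # age
])
y = ((X[:, 0] + X[:, 1] >= 2) & (X[:, 2] == 0)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME builds a local, human-readable explanation for one prediction.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=class_names,
    discretize_continuous=True,
)
patient = np.array([1, 1, 0, 34])     # sneezing, headache, no fatigue, age 34
explanation = explainer.explain_instance(patient, model.predict_proba, num_features=4)

# Each (condition, weight) pair shows how much a feature pushed the prediction
# toward or away from "flu" -- the checkpoint a human reviewer can inspect.
for condition, weight in explanation.as_list():
    print(f"{condition:>20s}  {weight:+.3f}")
```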
So we can answer the user who asks: hey, why did you come up with that decision? Why was I denied for this? Why did I get that?
And again, we can't bring out the data scientists and give them that long equation. It's got to be very transparent, because when it's transparent, then it can be fair. Now we can say, okay, cool, I understand the decision, and I feel it's fair, whether or not I agree with it, because we have that transparency. We now have a foundation for fairness.
And when we're talking about fairness, obviously we all know this, but what we're actually talking about is examining our own bias. And we don't have time this morning;
there have been other sessions this week that talked about bias. But I just wanted to cover three general areas to give us a handle on it. The first, of course, is shortcut bias. This is where you're saying to yourself, I don't really have the time or the energy to investigate this more fully, so I'll take the easiest, quickest answer. Then there's impartiality bias, which is almost self-deceiving, because you say, yes, in the past I've had some issues with my own bias, my own tendencies, but that was the old me; today I'm right about this.
And then the third is straight-up self-interest bias.
This is where you feel like you deserve more than other people, and that's probably the easiest to understand. And lest you or I or David this morning make the false assumption that we've moved beyond bias, it's helpful to have concrete use cases, concrete examples, to think about and explore. So this morning we've been telling the story of Tara Simmonds, but we haven't told you anything other than basically where she probably lives, her name, and that she identifies as female.
We haven't told you her age, her height, her skin color, or any other physical characteristic. So in your mind's eye, whatever image you have right now of Tara Simmonds automatically gives you insight into your own inherent bias. And this bias is endemic: only 18% of speakers and 20% of faculty related to artificial intelligence in 2018 were female. That's it. And if you look at the numbers for underrepresented minority groups, the numbers are much worse.
And we are convinced that the only way to really defeat bias is to have representation from the groups that suffer bias, representation from groups that don't normally have a voice. That is the way to eliminate bias in the long term. And all of this, wellbeing, accountability, transparency, and fairness, is undergirded by what should be an easy task, a front-of-mind thought for all of us in this room: user data rights.
Right?
And so as we wrap up our circle of trust here, as I like to call it, we end with what I think is probably the most important piece, which is user data rights, right? To do all of this right, to protect the user's wellbeing, to make sure we're being transparent and accountable and fair, we have to give the user an entrance into this cycle. The user has got to be able to control the data and say, here is the data that I have. I have access to this data. I have control over this data. I am now an integral part of this decision-making process, because I'm giving you access to the data
you need to make those decisions, and I'm able to control what that data is. Right? We've seen this storm coming for the last couple of years. I could name all the regulations and go through the alphabet soup, but I'm pretty sure we all know them by heart by now.
So the fact is, this area is one that needs work, that needs us as identity professionals to really step in and drive, to make sure we make the user an active part of the cycle as we start to build these systems. Because in the end, this is who they benefit, and quite frankly, that's who we're going to profit from. So we might as well include them, to make sure, again, that the process is transparent, that we are accountable, that it's fair, and that we are protecting their wellbeing.
And a final note about practicality.
You know, we've covered these five different areas, and that's great and good, but like any standard, we need some aspect of measurability. This radar chart is one way of doing that. I think it speaks to self-assessing where you're at at any point in time, and to comparing ethical standards and how you're implementing them across different artificial intelligence solutions.
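As one possible way to build that kind of chart, here is a short, hypothetical sketch using matplotlib. The five pillars are the ones from this talk; the 0-to-5 scores are placeholder values you would replace with your own self-assessment.

```python
# A minimal sketch of the self-assessment radar chart described above.
# The pillar names come from the talk; the scores are placeholders.
import numpy as np
import matplotlib.pyplot as plt

pillars = ["Wellbeing", "Accountability", "Transparency", "Fairness", "User data rights"]
scores = [3, 2, 4, 3, 2]              # example ratings, 0 (weak) to 5 (strong)

# Close the polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(pillars), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(pillars)
ax.set_ylim(0, 5)
ax.set_title("AI ethics self-assessment (example scores)")
plt.show()
```

Re-running the assessment on a regular cadence and overlaying the results shows whether the shape is actually moving outward over time.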
And again, this is a self-evaluation that you would do on a regular basis. The goal here is not perfection, because that's impossible; you're not going to be 100% accountable, ever. But the idea is that you are slowly moving outward, consistently extending your reach each successive day, week, month, and year. Progress is what the goal should be. And as for Tara Simmonds, well, she found out within days that the Washington State Supreme Court had found unanimously in her favor.
They said, no, no, no, you've changed. You're not the same Tara you were before. You're different now.
And they let her become a lawyer. Today she is serving on behalf of clients just like her, in settings like the one she found herself in back in November of 2017. And there was one quote that stood out to us in the ruling that the Supreme Court wrote, and it was this:
They said: we affirm this court's long history of recognizing that one's past does not dictate one's future. And that's a great caveat for all of us as we seek to employ identity data in the service of artificial intelligence and algorithms: one's past is not completely determinative of one's future. That should ring in our ears, because we as identity professionals cannot lose sight of the individual lives that are affected by the systems we build, lest we be seduced by the power and the potential of artificial intelligence.
We cannot let it outpace our ethics. In short, we must seek to embrace artificial intelligence with our humanity intact. Thank you.
Thank you, the two of you, very, very insightful. And you know, sometimes I think there might be a reason, or a logic, in the fact that the pronunciation of math, M-A-T-H, is very close to mess, because it's super easy to create a mess with math, yes,
in AI, but also in big data analytics. And so we definitely need to be super, super careful about that. I have one question here, so someone was already really awake: are our ethics holding up against the incentives of the markets, and if not, how could we counter that?
I think what we're seeing is the start of a shift in how companies and organizations present themselves. If you saw the Apple advertisement at CES last year, they've started a campaign where they're saying, we will be trustworthy with your data.
Basically, they're saying what happens on your iPhone stays on your iPhone, or something like that. Right? They're marketing ethics as part of their brand, and I think we'll begin to see that more and more, which will provide an incentive for those ethics to hold up.
Okay. So let's be hopeful on that. Thank you again for your keynote, Mike and David. Very insightful, very informative. Very helpful. Thank you. Thanks.
Thank you, Martin. Thank you.