First of all, I'd love to welcome you to the stage. My name is Annie, I'm moderating the panel today, and I'd love for each of you to briefly introduce yourself: share your name, your role, and a bit about why you're here on this panel about AI governance today. Any volunteers to go first?
Sure. I'll start.
Go for it.
All right, thanks Annie. I'm Al Lynn. My team and I invent tech for good in Cisco's future product innovation and engineering group, or FPI. A little bit about my background:
I've been vice president of engineering for Cisco's emerging technology and incubation group, and also a VP in Cisco's Chief Technology and Architecture Office. And believe it or not, I've done tech for good as a general in the US Army. So that's a little bit about my background. I've got the best job in the world: I invent new things.
Just this month I worked on digital identity, firefighting drones, gaming, school testing, some machine learning, some intelligent data capabilities, and augmented reality. So that's my life in a nutshell.
It's a good life. Why I'm here: I've done some work in identity and have a couple of inventions in that space that I think might be interesting, and I've also got some interesting ideas on AI for the future.
Fantastic.
Well, we're glad you're here, Al. And can you continue?
Yeah.
So, I'm Armin Bauer, CTO of IDnow and one of the founders. First of all, thank you for the invitation.
Of course. IDnow is one of the leading identity providers in Europe, and I'm responsible for our technology and the regulation side. I founded IDnow in 2014, as we saw that digital identity was still not solved and that identity was becoming more important for the future. Since then we have grown quite strongly: we now have roughly 950 customers across Europe and roughly 1,000 employees. And we are big believers in standardization, so we actively work in standardization bodies to get the technology standardized across the industry.
And why I'm here: we actively use AI technology in our identity verification methods, and we clearly see this as a trend across Europe, with more and more regulators adopting and approving that technology. So of course we see this as a very important technology.
Fantastic.
Well, thank you again to both of you for being here. As a heads up, I heard both of you referring to identity solutions and identity verification; this question came up in a session earlier, so maybe we can talk about it at the end of the panel if we still have time. A first question:
We're talking about the governance of AI, and often wrapped up in this are the societal impacts of AI: who is this marginalizing? Who is this enabling?
What are the impacts here? But oftentimes it's not clear who is actually being impacted by AI.
When you think about this question, which groups come to mind, or how do you think about who is being impacted?
I think the question is maybe even a little bit backwards, because it's not about who is impacted but who is not impacted by the technology. Even today, AI is already everywhere; you just often don't see it yet. Anytime you are searching, using Facebook, using Instagram, you're actually interacting with AI in the background. Anytime you unlock your phone, AI is already there.
It's verifying your biometrics with one-to-one matching, as we've just heard. So it's already there and everybody's impacted by it. The same is true for identity: we clearly see the trend towards AI-based solutions, simply because AI is already much better at detecting fraud than human operators, and we see the gap between what AI can do and what human operators can do only widening over the next years. So the trend is clearly there.
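A minimal sketch of the kind of one-to-one biometric matching described here, comparing a stored face embedding against a probe embedding with a similarity threshold; the embedding representation and the threshold value are illustrative assumptions, not any vendor's actual pipeline:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one verification: does the probe face match the enrolled face?

    The threshold trades false accepts against false rejects and would be
    tuned on labeled data in a real system (0.6 is an assumed placeholder).
    """
    return cosine_similarity(enrolled, probe) >= threshold
```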
Yeah. Very interesting. Thanks for sharing that.
And that's a great way to be thinking about this as well: who's being left out, who is not being touched by AI at all when they could be benefiting from it. So thank you for adding that. Al, what's your take on this?
Yeah, so I think generally AI is tech for good, but there are some consequences, especially in the automation of things that mean jobs to people. If there's a negative side, it's those mundane jobs, the ones that are almost mechanical, where AI is used to actually take over some of that work.
I think that would probably be your negative impact. And perhaps, if you're looking at governance, maybe something along the lines of reeducating that workforce and broadening their ability to work in other fields.
I think that's probably the big downside. But for me, it's all in how you design it and the things you think about. Working in a way where you're aiming at tech for good means you have to think about how technology could be used negatively, and you need to build those safeguards in when you're working on new tech.
Yeah. That's a great insight there.
Of course, for any of those people who may be directly edged out of their work, making sure that there are reeducation programs for them. A question that I have, and something that stuck out to me: I had spent some time going through the draft legislation from the EU Commission on regulating AI, and it's of course very focused on protecting the individual. That is much easier to do than protecting people as a group. But if we think of the impact of fake news on social media, that has had a group impact as well as an individual impact.
Do you think there's any way to address those sorts of societal impacts of AI? I don't have any answers to this.
Let me take you through a quick use case, and maybe this will get at a group way of thinking about this.
Imagine that 30% or 40% of the workforce is working from home. What if we could use AI to take care of something basic, like human bias? There are actually 188 human biases built into human beings. They're shortcuts, and some of our shortcuts are flawed simply because of the way our brains developed: they were developed for survival.
So if our biases are what keep people, as a group, from moving forward, because we're used to looking for people who look like us, then perhaps AI could be used in a video call like this to actually change that.
Imagine a world where, whenever I come on a call with you, the face rendered to you is something more similar to who you are. If I could mimic the way you look and decrease your bias towards me, maybe that's something we could do to move forward.
But if we got identity right as part of that, then whoever you're talking to, the identity would match every single time. So if you look at a group, that might be a way of increasing that group's ability to move towards the future, because the bias would be less, which means your ability to move up in the world would be more. Just a thought.
Yeah. And there it is:
An answer to the question.
Yeah.
Well, it's again a question perhaps without an answer at this point, and that's part of why we're here talking about it and brainstorming today. That made me think of something, and now the thought is gone, unfortunately.
Armin, do you have any comments on this?
So I think we have to be careful about how we focus on the positive and negative sides of this technology. Just focusing on the negative side is maybe a little bit shortsighted, because any new technology will always have some downsides. Balancing the benefits that a new technology can bring against the dangers it can bring is, I think, very important.
And you mentioned the AI Act; we definitely see it as a first step in that direction of trying to balance the dangers with the benefits. Whether the AI Act achieves that remains to be seen and can be discussed, but we think it's a step in the right direction. And like I said, technology can also be used to actually help certain groups, not only for negative effects.
I agree with you. I think it's really, really early. And Annie, you talked about the hype curve.
This is really early in the development of this technology, and I'm hoping we don't weigh it down so much that we can't get the benefits out of it, because I think there are a lot more benefits than downsides.
So I agree with you. You're right.
And by the way, these biases already exist in us every day.
AI has just made the biases that exist in us more visible; it's not that AI is in any way more biased than we are. It's just more visible now.
But actually, we now have the possibility to work on these biases, because they are now visible, and to reduce them. I don't think that AI is in any way worse than we are.
And that's a very good point you both bring up: the bias exists in very much in part because we as humans are biased. It's our problem that we've passed on.
And then, on the other hand, focusing not only on the negative but also on the positive: Al, what you said reminded me that when we talk about governance, we often get fixated on having representative or fair AI for the people and situations around us, and on not changing that. But on the other hand, how do we decide what is fair?
And part of that is having some sort of social intervention, say the gender balance in a certain sector, or giving out loans equally across different races or different backgrounds. That is a social intervention that isn't reflecting the world we have today.
And in part, as you said, Armin, because we can sometimes recognize biases more easily, or because we're focusing on them
and spending the time to discover them, that allows us to make, in some way, a social intervention for the world we would like. Now, how you come to that conclusion, what the right proportion of female CEOs is, for example, that's a question that I don't think any of us is able to answer. And then comes the next question, which I don't think we'll find an answer to today. What I could do is go back to the list that I had prepared instead of going on this rabbit trail.
I think metrics are such a huge part of the governance of AI. So how do we go about determining the metrics that we need? Armin, what's your take here?
So first of all, I think it's important to separate two things that we are talking about. Number one are uses of AI that are clearly negative, for example mass surveillance, or deep fakes that are used for deception. I think society basically agrees that using AI for those use cases would be negative.
And there, I think it's not a good idea to talk about metrics, about whether it's performing right or not. There, regulation simply needs to be in place to control these types of uses, and if necessary to prohibit them, if society agrees that a use of AI is clearly negative. The other bucket is uses of AI where there are benefits to society, but where there can of course also be downsides. And for those, I think it's important to establish metrics so that we really see whether we are getting the benefits that we want,
while at the same time ensuring that we are limiting and controlling the downsides. But actually, I think just talking about the metrics is not enough, because quite often the question behind it is the data set that you have. The data set controls how you can train your algorithm and also how you can actually measure your metrics. If a certain group of people is not included in your data set, then of course you will not be able to measure any metrics for that group either.
So I think it's really important not to focus just on the algorithms but also on the underlying data sets behind them, because a lot of the time the biases and downsides are coming from the data sets.
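A short sketch of the point about data sets constraining measurement: per-group error rates can only be computed for groups that actually appear in the evaluation data. The record format and group labels below are illustrative assumptions.

```python
from collections import defaultdict

def per_group_false_reject_rate(records):
    """records: iterable of (group_label, accepted) pairs for genuine
    (same-person) verification attempts only.

    Returns the false-rejection rate per group. A group absent from
    the data simply produces no entry: you cannot measure what you
    never collected.
    """
    attempts, rejects = defaultdict(int), defaultdict(int)
    for group, accepted in records:
        attempts[group] += 1
        rejects[group] += not accepted
    return {g: rejects[g] / attempts[g] for g in attempts}

# Example: group "C" is missing entirely, so no metric exists for it.
data = [("A", True), ("A", True), ("A", False), ("B", True), ("B", True)]
print(per_group_false_reject_rate(data))  # {'A': 0.33..., 'B': 0.0}
```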
Yeah, absolutely, I couldn't agree with you more. Al, do you have anything to add here?
Yeah, so I like that you picked out deep fakes, because I think that's an interesting one: I think you can use deep-fake technology for good as well.
If we have human biases, and you present a deep-fake face that is more acceptable to the person receiving it, then I think you could use that for good too. So I think there's something to understanding that. More than governance, there probably needs to be a group, a physical group, that actually looks at data sets to really understand what those data sets are doing within the context of the AI itself.
And you're a hundred percent right. When you look at a lot of the early facial recognition AI work, they'd use huge swaths of data sets and leave out a whole group of people.
And then, when the facial recognition tried to recognize those people, it couldn't; they weren't in the data set. So you would exclude a whole race of people, or a whole group of people, just because that wasn't the group of people in the data set.
So I really think a watchdog group might be more important than trying to do a one-over-the-world governance model for this, because it's so broad. I don't think that would work as well as a smaller group that's really looking for bad design.
Yeah. Thanks for sharing that.
Armin, do you have any comments on that as well? Or any responses?
Yeah.
So I would actually not agree that a watchdog is the right approach, simply because: who would that watchdog be, and how could they actually monitor such technology? I think what the European Union is trying with the AI Act goes in the right direction: to decide on a political level
what is prohibited use of AI and what is acceptable use of AI, because in the end this is a societal, political question. And then, for those uses that are acceptable, to define boundaries that companies have to respect and adhere to. And of course, if a company crosses that boundary, there have to be sanctions that are used to enforce the rules. In the end, I think that approach might work.
Well, I wasn't suggesting either/or. For me, at least, both would be good.
Yeah. And this is again part of the conversation of finding how we proceed forward in a way that both facilitates and enables further development, but also does so in a way that respects people's rights and protections. So then, the next question: for the projects that you've been working on, or for your own organizations, how are you facilitating governance of AI? Al, maybe we can start with you.
Yeah.
For my team: well, I've got a team that's all about tech for good, and so they're very guarded about the technology they use. They actually spend cycles reviewing how we build things and what exactly we build. We have data scientists who really take a hard look at what we're building and think about how it could be used in a negative way, and what we have to do to make sure that it's safeguarded.
What do we do to make sure there's security enabled, hardware roots of trust and things like that, to make sure it can't be modified? There's a lot of work our team does to make sure that any tech we build, any AI we build, can't be engineered or used in a negative way.
So we actually do spend cycles thinking through that piece.
Yeah, interesting. And so is this integrated into your development team? Is there another responsible party? How does the team look, then?
Well, I've got a global team, so it's helpful because, depending on where they're from, they have a different slant on how they look at things. I've got a team in Europe; they're very GDPR- and privacy-focused. I've got some folks in Israel; they're very security-focused. I've got a security team that spends cycles just on the security pieces alone.
So we've kind of built that in, because we've got such a broad team from a lot of different locations, and they bring all of those perspectives to bear on each product that we're working on.
Really interesting. Thanks for sharing that insight.
Armin, how does it look in your organization?
Yeah, so we are a highly regulated company, because we are working for a lot of regulated industries. So we already have a clear policy in place today, where we are cataloging and monitoring all the AI algorithms that we have in place. As a second step, we have defined performance criteria for all of those AI algorithms, and we are continuously monitoring them. And in some areas, this already has to be reported and audited externally today.
And of course, where we do see that we are not meeting the performance criteria for certain groups, as explained earlier for biometric systems, we are either working on our own to improve them, or working with the vendors of the technology to improve them. And this is really an ongoing process.
There will always be certain downsides and certain areas where the algorithms do not perform. I think it's about iteratively monitoring, finding what those areas are, and then working on making it better.
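A hedged sketch of what cataloging AI algorithms with defined performance criteria and continuous monitoring might look like; the entry fields, names, and thresholds here are assumptions for illustration, not IDnow's actual policy.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    """One entry in a hypothetical internal catalog of AI algorithms."""
    name: str
    purpose: str
    min_accuracy: float    # agreed performance criterion
    max_group_gap: float   # largest allowed accuracy gap between groups

def check(entry: ModelEntry, accuracy: float, group_accuracies: dict) -> list:
    """Return violations to escalate (internally, or to an external auditor)."""
    issues = []
    if accuracy < entry.min_accuracy:
        issues.append(f"{entry.name}: accuracy {accuracy:.3f} < {entry.min_accuracy}")
    gap = max(group_accuracies.values()) - min(group_accuracies.values())
    if gap > entry.max_group_gap:
        issues.append(f"{entry.name}: group gap {gap:.3f} > {entry.max_group_gap}")
    return issues

entry = ModelEntry("face-match-v3", "identity verification", 0.98, 0.02)
print(check(entry, 0.985, {"group A": 0.99, "group B": 0.95}))
```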
Yeah, thanks for sharing that. It's a really interesting look into how companies and projects are actually managing some of these problems today. So thank you. I have a question from the audience, which I'll read out: how do you govern transfer-learned models when the base training data doesn't exist anymore? Does either one of you want to take that first?
Maybe Armin? Yeah.
Yeah.
It's an interesting question. At least how I understood it:
you trained the model, but the data you trained it on is not available anymore. Do I understand the question correctly?
Yes.
Okay. It's an interesting question, because if that happens, then even if you detect bias, what do you do about it?
If your training set is gone, then it would be hard to even fix the issues that you have detected. And I think this really comes back to the other point that I made before: it's very often not a question of the algorithms we're using, but really about the data set. And this is very interesting here in Europe, since we have very strict data privacy laws; we have the GDPR, which is in a sense very limiting in what can be done.
And so quite often that leads to the question:
even if you detect a bias, do you have the capability to change the underlying data set to even fix the underlying bias? But there's a question that cannot be answered here: how do you then balance data privacy against such topics? What do you value higher, fixing biases in AI algorithms or data privacy? That's an interesting question.
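Neither panelist offers a concrete fix here. One common mitigation, offered purely as an assumption rather than as the panel's recommendation, is to keep the transfer-learned base frozen and retrain only the task head on newly collected, better-balanced data, since the original training set cannot be repaired. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone standing in for a transfer-learned model whose
# original training data is no longer available.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the base: we cannot retrain it without the lost data.
for p in model.parameters():
    p.requires_grad = False

# Replace and retrain only the head on fresh, better-balanced data.
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., match / no-match
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One training step on newly collected data; the base stays frozen."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```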
I love the unanswerable question. It's beautiful.
You know, short of, I guess, reverse engineering, I'm not sure how you would get to the data set. Something flawed would have to come out, something you could see, that would let you know there was a need to change that data set. So I don't know if there's a good answer to that question, but it was a good question.
We have a question from the audience as well. So just a moment.
Thank you. Just a quick word: there is a way to use surrogate models, like the SHAP or LIME approaches we were talking about before, and then use some synthetic data.
But my question is: in your organizations, are you using any AI governance frameworks? Do you have a strategy that you're following to make sure that you are taking care of the data sets, the biases, and things like that? Thank you very much.
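For context on the surrogate models the questioner mentions: attribution tools such as SHAP need only the trained model plus some background data (fresh or synthetic samples), not the original training set. A minimal sketch with an assumed stand-in model:

```python
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Hypothetical stand-in for a production model under audit, whose
# base training data may be gone.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The explainer needs only the prediction function and some
# background samples (freshly collected or synthetic).
explainer = shap.Explainer(model.predict, X[:100])
explanation = explainer(X[:5])   # per-feature attributions
print(explanation.values[0])     # contributions for the first sample
```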
Any volunteers who want to take that? Armin is nodding again. Yeah?
Okay. So actually it's quite interesting, because there is some regulatory movement in Europe regarding exactly that, with certain requirements regarding the data set and how representative the data set has to be.
It remains to be seen how practical this is in reality, because quite often the data set is simply what you have and what you have to work with. Most of the time it's not that you have too much data and cannot decide which data to use; most of the time the problem is simply that you do not have enough data.
And then the question is: how do you fix that? If certain groups are simply underrepresented in your data set, how do you fix that?
I think it's a long process to get that done, and there's no quick and easy solution for it, unfortunately.
Al what's your take here?
You know, I agree that to really get a good data set, you need a lot of data from a lot of places around the world. So that's a tall order.
To get it right early on, the best you can do is hire really good professional data scientists who can take a hard look at what you've got, so that you can hedge your bets on understanding where you need to grow your data, and then figure out how to go about doing that in a way that makes sense. It's a hard problem.
And at least what we are doing is using techniques like data augmentation and synthetic data to try to improve the areas where there is simply not enough data.
But to be honest, those approaches will only take you so far, and in the end nothing can replace actual real data. You can make some tweaks, but in the end you need a good data set.
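A minimal sketch of the data-augmentation idea just mentioned, using torchvision as an illustrative choice; it multiplies variations of scarce real samples but, as noted, cannot substitute for genuinely new data. The specific transforms and parameters are assumptions, not tuned values.

```python
from PIL import Image
from torchvision import transforms

# Illustrative augmentation pipeline for images from an
# underrepresented group in the training data.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(degrees=10),
])

def expand(image: Image.Image, copies: int = 5) -> list:
    """Generate several augmented variants of one real sample."""
    return [augment(image) for _ in range(copies)]
```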
Yeah, interesting. Did this answer your question? Yes, we got a thumbs up. So thank you both for adding some commentary here on a really challenging topic. I have another question, from a virtual audience member, asking about the skill sets that an auditor needs, especially to deal with hyperparameters and very technical aspects. And do these people exist?
What are your takes? Al, maybe you first?
Well, yeah, that's the beauty of new technology: there are jobs that don't exist yet. And I don't know of any jobs today that audit these kinds of data sets and data fields that are being used. So I don't know that that job exists right now, but I think it could be part of the future for AI.
Armin, do you have any comments?
Yeah, it's a very interesting question, because we just had this situation. We provide AI-based identification, and we are now certified for this technology. So we had to go through exactly that question, and we discussed it with the auditor: how can we prove to the auditor that this technology works correctly? And in the end, this technology is also completely new for the auditor.
So they did not have experience with it yet. What they're doing, of course, is also talking to technical experts and getting external input, to better understand how this technology works and to then be able to really audit it.
But I think it's really something that has to be developed over time, because at the moment this is new technology for auditors too, and so they have to work on and understand and learn about this technology to then be able to audit it. Like I said, we went through the process, and it was quite interesting.
Yeah, fantastic, thanks for sharing that. What was that, Al?
I was just gonna say, that's part of your governance model
we were talking about earlier, where you have a group that's checking the technology to make sure it's properly designed and built. So maybe that is the new job that comes out of this, where you're specifically trained to do that kind of audit.
And we also think that, in the end, there will be new guidelines and new regulations that will help with that in the future. You will then have certain technical standards that will basically help all the auditors define the guardrails, and then I think it will become easier: you can simply work along existing regulation and existing technical standards. But at the moment this does not exist yet; it's really, really new,
and it's currently being developed.
Fantastic. Then, perhaps to wrap up this session: we've gone in many different directions, but what recommendations do the two of you have for organizations that need to set up governance programs? Maybe we can boil it down to 30 seconds, just the essence.
Armin, do you want to go first?
Yeah. My suggestion would be, number one, to look at both the benefits of this technology and the downsides, and then to clearly define internal guidelines and policies for how this is handled, and then work along those to improve your metrics over time. It's not going to be a single solution that is then fixed; it's really a process that will take time. And I think it's important that we keep moving in the right direction in reducing the downsides.
As long as that is achieved, then we are at least moving in the right direction.
Fantastic. Al?
Yeah, actually I thought that was perfect. The only thing I would add is to really deep-dive on the data set that you're using, to make sure it's the best it can possibly be. That would be the only thing I would add; otherwise, I thought that was a brilliant answer to that question.
Well, a big thank you to both of you for being here today, for sharing your insights, and for helping us navigate some of these muddy waters and unanswered questions about governance, while also looking towards the positive solutions we can build and the iterative process of always making this better. So thank you to both of you.