In a follow-up to an earlier episode, Matthias Reinwarth and Anne Bailey discuss practical approaches and recommendations for applying AI governance in your organization.
Welcome to the KuppingerCole Analyst Chat. I'm your host, Matthias Reinwarth, an analyst and advisor at KuppingerCole Analysts. My guest today, and I'm happy to have her back, is Annie Bailey. She's an analyst covering emerging technologies such as blockchain and artificial intelligence here at KuppingerCole, and we will once again talk a bit about AI governance. Hi, Annie. Hello, thanks for having me back. Great to have you.
This is actually a follow-up to an earlier episode, where we looked at the more philosophical, society-oriented aspects of AI governance. Today we want to build on that, but also get to more specific recommendations and advice for companies and businesses thinking of applying AI within their solutions. What would be a good starting point for making sure that governance is applied adequately within a commercial organization that wants to apply AI to its solutions?
Yeah, so a really good place to start is Singapore's Model AI Governance Framework. It isn't legally binding, and it isn't even specific to Singapore: it was designed, as the title says, as a model framework that could be used anywhere in the world to guide an organization through the development process for AI, or the acquisition if you're simply purchasing it from a developer, the implementation, and also the end of the life cycle, how to retire it.
It's really the most practical framework out there, so it's a fantastic starting point, and there are some interesting things about it. In the AI governance world there is talk of how to keep humans involved: something called human in the loop or human out of the loop, which refers to how involved a person actually is in a decision being made. You can imagine that with no AI involved, this is usually a human-in-the-loop situation: the human is the key decision maker, because there is no AI there to support them.
As AI comes in with recommendation systems, with scenarios, with different types of predictions and analysis, the question becomes: if the system already came up with the answer, with the recommendation, do we really need a human to approve it, or should it just be implemented? The Singapore framework actually keeps a more open view on this.
You typically hear that there should always be a human in the loop with AI, for the reason that you want a human to be able to say: no, this is not an appropriate decision, we should not go forward with that. You want some sort of control over halting a decision that is not good or correct. But a lot of the applications we have for AI are for non-mission-critical things, such as a recommendation on what other products you could buy: if you buy those shoes or a t-shirt or a book, what other things would you be interested in?
It could be annoying if it's not very accurate, or it might recommend something offensive; there can be lots of hiccups and problems with that. But it's not necessarily a mission-critical thing where somebody's life or health or rights are going to be compromised.
So this is a more realistic approach to using governance for companies whose end goal is providing some sort of efficiency, rather than staying simply on the philosophical level where humans always need to have veto power. I see that as one of the key concepts of machine learning as well: always being capable of adjusting the rules that apply, which machine learning then codifies afterwards.
I assume that in the beginning you cannot provide a complete set of guidelines to such a machine learning system just by having it learn from examples. So you need this second line of defense to understand: from what the system has learned already, this decision looks straightforward, but it's wrong anyway. Then we have to provide more guidance, as you said, more rules, more real-life experience, to improve the machine learning model and the system behind it. Yeah.
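A minimal sketch of what such a second line of defense could look like in practice: explicit business rules that can veto a learned model's output. The model, the rule, and the data here are hypothetical placeholders, not anything discussed in the episode.

```python
# A "second line of defense": hard business rules that can veto or
# correct a learned model's output. All names here are illustrative.

def model_predict(application: dict) -> str:
    # Stand-in for a trained ML model; imagine a loan recommender.
    return "approve" if application.get("score", 0) > 600 else "reject"

def apply_guardrails(application: dict, prediction: str) -> str:
    # Explicit rules codify constraints the model may not have learned
    # from its training examples, e.g. a regulatory age requirement.
    if application.get("age", 0) < 18:
        return "reject"  # the rule overrides whatever the model learned
    return prediction

application = {"age": 17, "score": 720}
raw = model_predict(application)
final = apply_guardrails(application, raw)
print(raw, "->", final)  # approve -> reject: the rule layer catches it
```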
And you bring up a really important point there: machine learning sits underneath the umbrella term of AI, and AI is really a collection of technologies that all operate in very different ways. That is an added level of complication for governing this well, because AI is such a broad term, and there are going to be different processes for governing different types of machine learning, or different types of robotics, or computer vision, things like that.
So yes, it is completely true that companies need guidance on how to approach this, and there are some good standard questions to be asking.
Such as the human-in-the-loop question: as a company, you really need to decide what your organization's policy on that is. What is your stance?
What is your AI project really going to be doing? Is it mission critical or not? And where do you sit on human in the loop, out of the loop, or even something called human over the loop, which concerns having the ability to oversee, or to be flagged when something is amiss, while otherwise, when things are progressing normally and recommendations are being made without any hiccups, they are just implemented automatically.
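As an illustration, here is a minimal sketch of how such a policy decision could be encoded when routing a model's output. The policy names mirror the episode; the confidence threshold and wiring are illustrative assumptions.

```python
# Routing a model's recommendation according to an organization's
# human-in/over/out-of-the-loop policy. Thresholds are illustrative.

from enum import Enum

class LoopPolicy(Enum):
    HUMAN_IN_THE_LOOP = "in"       # a person approves every decision
    HUMAN_OVER_THE_LOOP = "over"   # auto-apply, but flag anomalies for review
    HUMAN_OUT_OF_THE_LOOP = "out"  # auto-apply, e.g. product recommendations

def route(recommendation: str, confidence: float, policy: LoopPolicy) -> str:
    if policy is LoopPolicy.HUMAN_IN_THE_LOOP:
        return f"queued for human approval: {recommendation}"
    if policy is LoopPolicy.HUMAN_OVER_THE_LOOP and confidence < 0.8:
        return f"auto-applied, flagged for oversight: {recommendation}"
    return f"auto-applied: {recommendation}"

# Mission-critical use cases keep a person in the loop; a shopping
# recommender can run fully automated.
print(route("increase credit limit", 0.95, LoopPolicy.HUMAN_IN_THE_LOOP))
print(route("suggest this book", 0.65, LoopPolicy.HUMAN_OVER_THE_LOOP))
```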
Another recommendation, or question organizations should be asking, is how the model can be explained. A good explanation model can allow more human-out-of-the-loop situations, because you have a foundation that can be audited, tracked, and explained to individual customers if there is an issue. With a good foundation of explanation, you can have different degrees of human in or out of the loop. On this explainability: I assume there is an issue precisely when AI really does its job well, when it runs on a high volume of interactions.
When there are many interactions at the same time, providing an explanation for an individual use case might be difficult to impossible, at exactly the point when the system has fulfilled its promise and is really running at high volume, and the model is constantly evolving as well. Isn't that a challenge for the organization too, so that it's simply no longer possible from the mere volume?
Yeah, absolutely. The question of explainability is one that is being answered, but it's not perfect yet, and it's really one thing holding back the maturity of a lot of AI solutions: whether a local explanation, an explanation that applies to one single decision, one single customer interaction, one single data point, can be given. This local explanation is really challenging for a lot of the quote-unquote black-box algorithms: machine learning algorithms that are unreadable from the outside, whose processes are masked somehow.
To a normal layperson, they are not going to be understandable. And although a data scientist may be able to understand the process, that is very specific information that an end user will not have, will not be able to work with, will not be able to understand. But there are options: there are algorithms that can be attached retrospectively to your model.
So if you have, say, a recommendation model or a classification model in use, you can add an algorithm on afterwards that helps to identify which attributes most impacted the decision. If you're classifying images, for example, you can have algorithms that go on afterwards and assess: was this a tree or was this an apple? They will highlight the areas of the picture that most contributed to the decision of either tree or apple, which is a more human-readable way of explaining why that decision was what it was for that particular picture.
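A minimal sketch of one such post-hoc, local explanation technique, occlusion: mask patches of the input and measure how much the model's score for the predicted class drops. The classifier below is a hypothetical stand-in; real tools such as LIME or SHAP implement more refined variants of the same idea.

```python
# Occlusion-based attribution: regions whose masking hurts the score
# most are the regions that drove the decision. The "classifier" is a
# toy stand-in for a real model.

import numpy as np

def classify(image: np.ndarray) -> float:
    # Stand-in "apple vs. tree" scorer: pretend bright pixels in the
    # centre indicate "apple". A real trained model would go here.
    h, w = image.shape
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    base = classify(image)
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0  # occlude one patch
            heat[y:y + patch, x:x + patch] = base - classify(masked)
    return heat  # high values = areas that most contributed to the score

image = np.zeros((16, 16))
image[6:10, 6:10] = 1.0  # a toy "apple" in the centre
print(occlusion_map(image).round(2))
```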
That's an important point you mention, because when you say you can go back afterwards to the actual source behind the decision that was made, then we should also apply governance to the initial training data, the data that is prepared for actually making the machine learning system learn. I think that is also an important area of governance. Absolutely, because a model is only as good as its training data,
and it's only as good as its validation data. There are a few phases to training a model. You first start out with training, which, for an image classification system for example, is the first round of feeding it images of trees and apples, trees and apples, and it learns over and over again:
okay, this is a tree, this is an apple. Then you have to validate that: you send in another set of data that it hasn't seen before, and you check how well it actually performs. Can it accurately recognize this or that? And then you have it actually in use, doing the job it was created to do, working on live data, or data from your organization's own sources. All this data had to come from somewhere, and it's really important that it's a representative set for your end goal.
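To make the phases concrete, here is a minimal sketch of the train/validate split using scikit-learn on synthetic stand-in data; a real project would of course use its own labelled images or records.

```python
# Train on one portion of the data, validate on a held-out portion the
# model has never seen, as described in the episode.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "tree vs. apple" features; 20% is held back as validation
# data the model never sees during training.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Validation answers: how well does it actually perform on unseen data?
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```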
If you're only identifying apple trees and Red Delicious apples, it's important that you have enough examples of those tree and apple types in your data. But if you want to identify all sorts of trees and all sorts of apples, then that needs to be accurately and proportionately represented in your training set and your validation set. And you have to keep checking the live data, the data the model processes in operation, against your goal: that you still actually need to look at all sorts of trees.
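A minimal sketch of such an ongoing check, comparing the class proportions the model was trained on against what it sees in operation; the labels and the alert threshold are illustrative assumptions.

```python
# Compare training-time class proportions against live proportions and
# alert when they have drifted apart.

from collections import Counter

def proportions(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

train_labels = ["apple"] * 700 + ["tree"] * 300
live_labels = ["apple"] * 350 + ["tree"] * 650  # the world has shifted

train_p, live_p = proportions(train_labels), proportions(live_labels)
# Total variation distance between the two label distributions.
drift = sum(abs(train_p.get(k, 0) - live_p.get(k, 0))
            for k in set(train_p) | set(live_p)) / 2

if drift > 0.1:  # illustrative alert threshold
    print(f"distribution drift {drift:.2f}: revisit training data or retrain")
```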
So you would recommend that this is not only an IT problem, but that you really need to involve all types of experts from different angles. To understand the semantics behind, for example, the training data, as you've mentioned: it's easy with apples, but it's much more difficult when it comes to real-life recommendation systems. Judging the quality of the data, adjusting it, and making sure it is actually fit for training the model is really a problem that lies far outside the IT perspective.
So it's really a cross-department team that is required to apply governance as well. Yeah, it's really a conversation, always keeping in mind the end goal: what is this model going to be used for? And this is something ongoing. It's really rare to find somebody who is both the data scientist able to prepare and train an algorithm and the same person who will be overseeing it in operations, who is actually going to be working with the results it produces and looking for specific outcomes.
So this really has to be a conversation between these multiple sides, to make sure that the data comes from a place where there is appropriate permission to use it for training, that it's in the correct proportions, and that it's achieving the correct goal, and of course that it still meets that goal three or five years in the future, because goals change and input data changes. A model needs to be robust enough to reflect those changes over time. So it's an ongoing conversation. Right, understood.
So, as a kind of summary: this puts a high level of responsibility onto each organization that is using such technology in general, and AI especially. It really needs to be well justified for which purpose you use this technology, which data you build it with, and where you apply it afterwards. If our audience is interested in learning more, I assume there is also research available at KuppingerCole, and that they can get in touch with you as well to discuss individual issues.
Yeah, absolutely. There is definitely research on explainability and the options that are out there, and on parsing out this governance question further: taking a look at the different frameworks that are available and how to approach them for your different needs. Your needs as a company acquiring AI capabilities are going to be different than if you're developing them in house. So these are all resources that are available. Great, thank you very much.
And I'm really looking forward to continuing this discussion with you in an upcoming episode of this podcast. For now, our time is over. Thank you very much, Annie, again for joining me, and thank you to the audience for listening. Thanks again. Bye-bye. Thank you. Bye.