Hello everyone. Thank you for attending today's webinar, titled AI Governance From a Practical Perspective. Thanks for your patience; I had some technical difficulties on my side, so I appreciate you waiting around for me to get logged back in. Today's webinar is going to give you a practical look at what governance could look like in your organization, whether you develop or procure AI technology. So first, a little background information on KuppingerCole. We are an independent analyst firm and also a deliverer of digital advisory products.
So here you get expert knowledge in IAM, cybersecurity, and beyond: technical evaluations, portfolio analysis, roadmap definition, tool selection, strategy definition, and more. I encourage you to check it out on our website.
And we also deliver masterclasses on timely and relevant technical topics like incident response management, business resilience, access management, and more.
These have interactive webinars, up-to-date research, certification possibilities, an all-day virtual classroom in a workshop format with a final exam, and the opportunity to discuss your individual challenges with an analyst. Now some housekeeping items: please note that you are muted centrally, and we are controlling these features, so there's no need to mute or unmute yourself. We are recording this webinar and it will be made available to you shortly. We will also provide this slide deck for download, and I welcome questions.
So if you have a question during the session, please use the GoToWebinar panel and enter your question there; we'll be able to see it and I'll address it at the end of the webinar. In this webinar, we're going to go through the following topics. First of all, what do we mean when we say AI governance? We'll take a look at some things to avoid and examples from the wild, then at some key questions to ask regarding governance, and finally at practical governance in an IAM scenario.
So governance is the process of providing appropriate boundaries for execution. AI governance is then the task of using such boundaries to guide the development and/or procurement of AI solutions. I'll make my caveat here about the use of the term AI. This is a collection of technologies, and they don't all fall in a single field, but unfortunately, many of the proposed governance frameworks around the world do not differentiate between machine learning and the rest of the AI technologies that often get lumped together.
So AI governance is typically about governing machine learning models. This isn't ideal, it's limited, but that's what we're working with at the moment. It's helpful to think of governance as a combination of ethics, compliance, and risk. Ethics is probably one of the hottest topics when discussing AI, especially when it comes to allowing an AI system to make decisions.
Is it ethical to allow a machine learning model to determine whether you are approved for a loan, or whether one of your users gained unauthorized access to your system and should be locked out?
Compliance guides us on a couple of fronts. First, there are very few legally binding regulations out there that specifically have to do with AI technologies. There must be compliance on issues like data privacy, but this is still within the limited scope of personally identifiable information; governance of data sets used for machine learning models involves different concerns than just PII data. And even though there is little regulation at present, that doesn't mean it will stay that way.
In fact, it's very likely that more and more regulation to guide AI development will come out in the future, and it's best to anticipate the types of requirements that will be expected. And lastly there is risk, which is about protecting the organization as much as the stakeholders, including end users. Mitigating risk when developing and implementing AI systems is a key part of governance. So to wrap this all up, why is the governance of AI necessary?
Because AI has no common sense, does not explain itself, and is not responsible for its actions, our processes for developing and implementing such AI technologies must serve as guidance for its lack of self-governance.
So governance is typically something communally agreed upon. Documents like guidelines, best practices, or frameworks are usually in the first wave of official documents that can inform companies that want to develop or procure AI capabilities.
So I took a sampling of the global frameworks that are out there from politically driven task forces, academia, and the private sector. And this is by no means a comprehensive list, but it does identify some of the main players and what values are being consistently included. So we start with a relatively early paper from Harvard in 2017 that provided a very general framework for AI governance. And this focuses on ethical alignment, technical robustness, legal legitimacy, and mitigating negative social and environmental impact.
Then from China's Ministry of Science and Technology, we have their New Generation AI Governance Principles, which cover a huge range of topics. Notably, they have a preference for the term agile governance, which contains ideas like iterative improvement of governance structures so that AI always develops in a human-friendly way. This doesn't explicitly require human oversight, popularly called human in the loop. In 2020, the EU Commission came out with their guidelines on ethics and AI, which do explicitly call for human oversight of decisions made by AI systems.
Then to touch briefly on what is coming out of the private sector: Google published its perspectives on the issues in AI governance, calling for action on particular topics so that AI governance can be better carried out. These perspectives focus pretty narrowly on technical robustness and mitigating the internal risk of the corporation. And to close out this high-level comparison, take a look at the Singaporean Model AI Governance Framework.
It, in my opinion, offers the most practical guidance for companies wanting to develop or procure AI capabilities, by translating ethical principles into something actionable that organizations can readily adopt to deploy AI. It also falls more on the side of calling for agile oversight rather than being uncompromising on the need for human oversight, but it does offer support in deciding how involved a human should be in monitoring the decisions of an AI system, depending on the risk and severity of potential outcomes.
This is a pretty reasonable assessment, given that AI systems are employed in mission-critical situations as well as completely banal tasks. We'll get more into how this is set up as a framework later. It's also worth noticing that similar concepts keep popping up in these governance frameworks from many different perspectives. One topic worth mentioning is that legal legitimacy is featured less in a few of these frameworks. This is problematic, but mainly for the future; at the moment, there's very little legally binding legislation out there.
And what there is is only applicable to a few jurisdictions. So practically, it is very difficult for a company to align their AI governance with something that is as of yet non-existent. But as a tip: if these frameworks are any indication of the legislation to come, you should have a pretty good idea of the general topics that would be the foundation of potential regulations regarding AI.
So best practices also exist for procurement in particular. There are ten steps composed by the UK Office for Artificial Intelligence to consider when procuring an AI solution.
These have been slightly adapted here and make a good base for approaching a vendor who could provide AI capabilities. First, make a choice based on your challenge, not their solution; don't get led into the trap of fitting a really impressive solution to a problem you don't have. Second, do a risk-benefit analysis, including for the public. This is part of the ambiguous ethical responsibility that goes with using AI: if the technology you are considering procuring may cause harm to the general public, that could eventually be traced back to you. Make decisions early about what your risk tolerance is, and if there's a high risk associated with the technology, start putting risk mitigation measures into place immediately. Third, include relevant legislation in a contract agreement or invitation to tender.
If and when there is relevant legislation in your jurisdiction or that of your end users, be upfront about the requirements you may have of your AI vendor in order to be compliant. Fourth, specify how relevant data will be obtained. Regardless of whether you are procuring an AI system that is pre-trained, one that learns from your own industry-specific data, or one that learns from data gathered by your organization, a discussion on how relevant data is obtained, prepared, used, and stored must happen.
Fifth, it's important to acknowledge and address the limitations of using training data. Inherent bias is always present, be it selection bias or measurement bias, and your team's engagement with and actions to mitigate the effects of that bias should be well documented.
Sixth, use a multidisciplinary team. Your industry experts paired with your business decision makers will see different potential issues and may have different criteria to fulfill. It is certainly not easier to reach an agreement from diverse perspectives, but your decision will be stronger for it. Seventh, create mechanisms for accountability and transparency.
Remember that this is looking at procurement, but while you are spending such dedicated time considering which vendor could best meet your needs, signing, and implementing, put time and mechanisms in place for accountability and transparency. Recommendation number five, acknowledging the limitations of your data, is just one of many examples of how you can build transparency into your implementation process. And eighth, consider the lifecycle management of your AI system. This, like number seven, spans beyond just the procurement process, but it must be considered before you commit yourself.
Are you choosing a technology that does not yet have feasible means of monitoring, assessing, retraining, or retiring it? Have some preliminary plans as you make your procurement choice.
AI governance is still the Wild West. And although these recommendations are very rosy, painting the scene with fully cooperating stakeholders accepting all appropriate responsibility, this is not the norm yet. It is highly likely that you, as a company developing or procuring AI capabilities, will encounter other stakeholders that deflect responsibility.
It's very possible that vendors of AI products and services may admit that their solutions do not have governance mechanisms built in, but assure you that if deployed correctly, the organization is able to fulfill all governance requirements. This is both an evasion of duties and shows an unwillingness to engage with the fact that there are no standard governance requirements, and that it requires extra effort from all sides.
So for example, a vendor I spoke with recently provides a platform to automate machine learning for enterprise use, primarily as a support for data scientists.
However, instead of integrating governance mechanisms into the product for accountability on data lineage or model explainability, the vendor explained that these governance steps should be done in pre-processing, which is before the vendor's product is relevant, and that explainability could be added retrospectively, which is outside of the vendor's product's reach. So although the product itself appears to have great AI capabilities, the vendor avoided responsibility, placing it instead on the customer's shoulders. This probably isn't the type of vendor you want to work with.
So here is an outlet for my pessimistic side: don't assume risk that should be shared among a partner ecosystem. The AI industry has the potential to turn into an exploitative chain, trading on the relative lack of knowledge and choice of different actors along this pipeline.
If care isn't taken for data provenance by the AI developer, that may impact the end user, who is being treated based on a model trained on mishandled or ill-fitting data. The way to ensure a positive partner ecosystem is to expect and demand reliability from your AI supply chain, so that you are best able to hold up your end. Don't assume risk that doesn't originate with your actions.
And remember that AI frameworks often lump all AI technologies into one. Is it clear what the intelligent part of a solution even is?
Is it an NLP based chat bot that will interface with end customers and learn from their conversation patterns and your industry knowledge, or is it an internal anomaly detection system that sends all potential issues directly to a human agent? AI is not a one size fits all bundle of technologies and your governance tactics should reflect the actual technology that you are using. So take the effort to understand what it is that is being called AI.
I also promised that this would be a practical look at governance, so let's dive a bit deeper.
What does governance look like in the context of your organization? This is an example visualization of governance for an organization that is developing its own in-house machine learning solution. When we read it from left to right, we can see the major areas that an organization should pay attention to: things like internal governance structures, human involvement, operations, and transparency. On the left side, moving down, you can see the areas that governance actions are relevant to.
So to development, to operations, the protection of internal stakeholders, and the protection of external stakeholders.
This visualization can also translate to the major questions that organizations should ask to get started with strong AI governance. Thinking about the internal governance structures, you should be asking who is responsible for overseeing the deployment, model maintenance and review, internal risk management, legal compliance, ethical implications, and social and environmental impact of the AI model.
Then, thinking about the human involvement question: what is your organization's policy for human in the loop, out of the loop, or over the loop regarding decisions made by the model? We'll come back to this terminology a bit later. What fail-safe or emergency interventions by humans are in place during the model's deployment? And what is your organization's definition of human oversight? In thinking about the operations, you need to have structures in place to govern data preparation, model development, the application of that model, and the lifecycle of that model,
and of course the explanation of that model. Then, considering transparency: is there an opt-out option for end users to not use the AI portion of your product or service? Is that feasible? Is that relevant? And how are your end users' options to interact with the AI system communicated to them?
Also, you should consider a general disclosure to all your stakeholders about the use of AI. Is your system, and your process for developing it, able to generate audit trails? Do you have an explanation policy so that your staff and users know when to issue an explanation? And finally, do you have channels that are open to accept multi-stakeholder feedback?
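To make the audit trail question concrete, here is a minimal sketch in Python of what logging each model decision could look like. The names here, like `log_decision` and the record fields, are illustrative assumptions rather than part of any specific product; the idea is simply that every decision is recorded with the model version, a hash of the input rather than raw PII, and a chained hash so tampering is detectable.

```python
import hashlib
import json
import time

def log_decision(audit_log, model_version, features, decision, confidence):
    """Append one tamper-evident audit record per model decision (illustrative)."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,      # which model made the call
        "input_hash": hashlib.sha256(        # hash of inputs, not raw PII
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,                # e.g. "authorized" / "denied"
        "confidence": confidence,            # model score, useful for review
    }
    # Chain each record to the previous one so later edits are detectable.
    prev_hash = audit_log[-1]["record_hash"] if audit_log else ""
    record["record_hash"] = hashlib.sha256(
        (prev_hash + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    audit_log.append(record)
    return record

audit_log = []
log_decision(audit_log, "access-model-v3",
             {"user": "u123", "ip": "203.0.113.7"}, "authorized", 0.97)
```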
So again, this is an example visualization for a hypothetical company; yours may look a little bit different, and you may have other things which you could add here. This is just a starting point. So let's take a scenario that's relevant to one of KuppingerCole's domains, which is identity and access management.
So imagine a scenario where you've been hired to help manage the oversight of an AI system that should proactively detect and prevent unauthorized access to a software-as-a-service offering. The proposed architecture looks like this, where, simply put, you want your users, not external adversaries, to be authorized for the service. You could use a machine learning algorithm trained on data curated from your authentication and session logs, used in the form of an inference engine to support the authentication process. A few questions come to mind immediately. What is the data provenance for this model?
Like where did the training set come from? Was there a validation set? How is the data coming from your organization used?
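As a small sketch of how those provenance questions could be handled in practice, assuming a Python and scikit-learn workflow and a hypothetical curated export called `session_log_features.csv`, you might record basic provenance metadata alongside an explicit train/validation split:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical curated export of authentication and session log features.
logs = pd.read_csv("session_log_features.csv")

# Keep the answers to the provenance questions next to the data itself.
provenance = {
    "source": "internal authentication and session logs",
    "extraction_date": "2021-03-01",                         # example value
    "row_count": len(logs),
    "preprocessing": ["deduplication", "pseudonymization"],  # documented steps
}

X = logs.drop(columns=["label"])
y = logs["label"]  # e.g. 1 = legitimate login, 0 = unauthorized attempt

# Hold out a validation set so the model is checked on data it never saw.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```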
So it is possible that information taken from the authentication and session logs may contain personally identifiable information, which is subject to the GDPR.
For any of you who may have forgotten: personal data is any information relating to an identified or identifiable natural person, like a name, an identification number, location data, an online identifier, or a multitude of other factors that are specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that person. The processing purpose of this data is also a strong consideration. Is there a legitimate interest to use this data in a machine learning model? Probably, but communication with your legal department would be prudent here.
If it is lawful, then you do need to take appropriate security measures and support data subject access rights, and if outsourcing, be certain that your contracts are also compliant. Anonymization or pseudonymization are possible options here for protecting the rights of individuals while still gaining realistic value from data sets. Pseudonymization refers to the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information.
The data controller can take account of the existence of appropriate safeguards, which could include encryption or pseudonymization, when considering whether processing for another purpose is compatible with the original purpose.
So here we can see an example of pseudonymization, where each field is replaced by a random but realistic value. It's still possible to retain the relationships between these tables, and it is optionally reversible. This can be a potential option for training input, for a validation set, or for use when the model is operating.
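Here is a minimal sketch of what reversible pseudonymization could look like in Python. The field names are hypothetical, and unlike the slide's example, which substitutes realistic-looking values, this sketch uses opaque random tokens for simplicity. The key points are that the mapping is stable, so relationships between tables are preserved, and that it is stored separately under strict access control, which is exactly the "additional information" the GDPR definition refers to.

```python
import secrets

mapping = {}  # original value -> pseudonym; store separately, tightly controlled

def pseudonymize(value):
    """Replace an identifying value with a stable random token."""
    if value not in mapping:
        mapping[value] = "pseud_" + secrets.token_hex(8)
    return mapping[value]

def reverse(pseudonym):
    """Reversal is possible, but only with access to the mapping."""
    for original, token in mapping.items():
        if token == pseudonym:
            return original
    raise KeyError("unknown pseudonym")

record = {"username": "j.smith", "ip": "203.0.113.7", "login_ok": True}
safe_record = {
    "username": pseudonymize(record["username"]),
    "ip": pseudonymize(record["ip"]),
    "login_ok": record["login_ok"],  # non-identifying fields stay as-is
}
```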
Another topic you should consider is bias. How is inherent bias minimized? Selection bias is when the data is not fully representative. For example, if the model was trained with data from users that have a narrow range of login habits, this may not be representative of your actual users, who have a wide range of login behaviors. There may also be measurement bias, where the data collection device causes data to be systematically skewed, for example, if you're working with image classification and the images were originally captured using a device that produced skewed output.
So here is a fairly well-known example of inherent bias in the training data, where selection bias occurred and there were not enough examples of darker-skinned faces included in the training sets of facial recognition models. This is from a paper titled Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.
It stated that the substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent, and accountable facial analysis algorithms. So bias is a critical part of governance if models are ever to be dependable and also trusted by end users.
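One simple, concrete check for selection bias in our login scenario is to compare how categories are distributed in the training data versus among actual production users. This is a hedged sketch assuming pandas; the column name `login_time_bucket` and the ten-point threshold are illustrative, not a standard.

```python
import pandas as pd

def representation_gap(train_df, prod_df, column):
    """Per-category share in training vs. production data, plus the gap."""
    shares = pd.DataFrame({
        "train_share": train_df[column].value_counts(normalize=True),
        "prod_share": prod_df[column].value_counts(normalize=True),
    }).fillna(0.0)
    shares["gap"] = shares["train_share"] - shares["prod_share"]
    # Largest absolute gaps first: the least representative categories.
    return shares.sort_values("gap", key=abs, ascending=False)

# Example usage: flag login-time buckets over- or under-represented
# by more than 10 percentage points.
# report = representation_gap(train_logs, prod_logs, "login_time_bucket")
# flagged = report[report["gap"].abs() > 0.10]
```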
So coming back to our scenario, we have a classification task between legitimate attempts to log in and unauthorized attempts to log in. The system is trained and then implemented with real-time information. The actual behavior of users, as well as the occasional input from an actual adversary, is categorized either as correct, meaning legitimate, or incorrect, meaning unauthorized. But included in all of this is the potential for false positives and false negatives, where data is incorrectly labeled by the model. So what mechanisms would you have here to monitor this?
And how often do you validate your model with a fitting validation set? This is another question to consider when building governance into your processes.
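Here is a sketch of what that periodic validation could look like, using scikit-learn's confusion matrix. The threshold values are illustrative assumptions; in our scenario, a false positive, meaning an unauthorized attempt classified as legitimate, is the severe error, so it gets the tighter budget, while blocking a legitimate user is merely disruptive.

```python
from sklearn.metrics import confusion_matrix

def validate(model, X_val, y_val, fp_budget=0.01, fn_budget=0.05):
    """Check misclassification rates against agreed governance thresholds."""
    preds = model.predict(X_val)
    # labels: 0 = unauthorized attempt, 1 = legitimate login
    tn, fp, fn, tp = confusion_matrix(y_val, preds, labels=[0, 1]).ravel()
    fp_rate = fp / max(fp + tn, 1)  # unauthorized attempts wrongly allowed in
    fn_rate = fn / max(fn + tp, 1)  # legitimate users wrongly blocked
    alerts = []
    if fp_rate > fp_budget:
        alerts.append(f"false positive rate {fp_rate:.3f} exceeds budget")
    if fn_rate > fn_budget:
        alerts.append(f"false negative rate {fn_rate:.3f} exceeds budget")
    return {"fp_rate": fp_rate, "fn_rate": fn_rate, "alerts": alerts}
```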
A key question to ask is how to include human oversight in the development and implementation of your model. A simple exercise to start out with is to assess the severity of harm that could come to pass if the model produced a wrong decision, ceased to function, or were otherwise compromised. Then you should also consider the probability of that harm coming to pass.
This can be viewed as a simple matrix to indicate the approach that is most fitting for your use case. Human in the loop is the most popular approach, but it's not always necessary. Remember that AI is often used for repetitive tasks that may not carry critical weight, things like recommending the next purchase. If the system were to malfunction and start suggesting irrelevant items to purchase, it would be annoying, but the severity of harm is quite low. The probability could depend on everything from the robustness of the model to the security of the implementing company.
And if we tried to implement human in the loop approval for each decision, it's really not feasible; the number of recommendations produced would be beyond human capacity to keep up with. So here, a human out of the loop approach could be appropriate.
If your company's risk tolerance is lower, you could consider a human over the loop approach, which allows a human to override a decision and to adjust the parameters of the model during operation.
As an example, think of a navigation system, which might dynamically suggest routes to a driver, but where the driver can choose to override that suggested route while driving, to avoid something like construction. A human in the loop approach would be necessary for decisions that really require human oversight, with an AI system supporting with recommendations. Medical decisions, for instance, could be augmented by the insights of an AI system, but the ultimate decision should come from a medical professional.
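The severity-versus-probability matrix described here can be written down as a tiny decision helper. This is only a sketch of the idea; the categories and the mapping are illustrative, and a real policy would be agreed upon by your governance body.

```python
def oversight_approach(severity: str, probability: str) -> str:
    """Map harm severity and probability ('low'/'high') to an oversight approach."""
    if severity == "high":
        # e.g. medical decisions: a human must make the final call
        return "human in the loop"
    if probability == "high":
        # e.g. navigation routing: a human can override or tune parameters
        return "human over the loop"
    # e.g. product recommendations: full automation is acceptable
    return "human out of the loop"

# oversight_approach("low", "low")   -> "human out of the loop"
# oversight_approach("low", "high")  -> "human over the loop"
# oversight_approach("high", "low")  -> "human in the loop"
```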
So we only had time for a few of the relevant governance questions, like data preparation, model maintenance and review, and the questions of human in, out of, or over the loop. When these questions are addressed appropriately, they also fit well into a risk management approach, where risks are identified and reduced or mitigated. And that should be the ultimate goal of AI governance: to manage AI development or procurement all the way from development through retirement in a compliant, low-risk, and ethical manner.
So I hope this webinar provided a helpful start on your AI journey, and I do welcome any questions. If there are no other questions coming in, then I will close the webinar. Thank you again for your patience at the beginning, given the technical difficulties, and thank you for being here.