Thank you so much for inviting me; I'm really happy and glad to interact with such a nice audience. I wish I were there in person, but due to logistical issues I have to do this online. Since we have less than 15 minutes, we will move quickly and be very precise.
We are talking about whether AI 2.0 will be a good neighbor. What we mean by "good neighbor" is: will it be something that helps us, or something that troubles us? Just as with real neighbors, are we in a situation where we want it around or not? Let me start with a couple of examples. This painting, for instance, was generated by AI.
It won second or third prize in an international photographic competition. But we understand that AI does not generate anything truly new; it draws inspiration from many people and many datasets.
For this image, we do not know what the inspirations behind it were, who the original creators were, or whether those creators were informed. The same goes for the images of Barack Obama: we cannot tell which one is real and which are not. Out of the four, only one is real and three are AI generated.
This brings us to the question: when we interact with something, are we interacting with a machine or with a human? Bias is another very important point. We have seen single mothers denied financial support, people of color quoted higher prices, and people of a certain race or region penalized simply because they belong to that race or region.
Same goes for stereotypical behavior.
If you ask one of the generative AI tools to "give me images of flight attendants" or "give me images of nurses," you will see that the output is highly stereotypical: only female flight attendants and female nurses are presented, which is far from the truth, because these are not gender-specific jobs, right? That brings me to why AI should have responsibility. We break this into three parts.
We have principles, behaviors, and enablers. Principles are the things we cannot compromise on, such as safety, privacy, fairness, accountability, transparency, and explainability. Behaviors are things we need to imbibe in ourselves, like contestability, adaptability, attribution, and upskilling. Enablers are the tools that allow you to adopt these principles: things like codes, canvases, maturity assessments, synthetic data, and so on. This brings me to the architecture diagram, where we understand that everything should start with the human at the center.
Human centricity, human wellbeing, and compliance and regulations are the pillars of what we are trying to do. When we design any kind of AI governance or responsible AI, the first thing we should do is a maturity assessment, followed by thought leadership, stakeholder interviews, and identifying the gaps. Once we do that, we can start the development process, which entails things like strategy, an advisory committee, governance formulation, toolkits, APIs, blueprints, certifications, canvases, and so on and so forth.
Before we deploy any AI project, we need the right governance check, which is also human in the loop. That governance check could be an AI or responsible AI certification: analyze the risk, understand the errors, and ensure that the right audit and documentation are in place. Moving forward, we have seen how responsible AI has evolved over the years.
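To make that pre-deployment gate a little more concrete, here is a minimal sketch of what a human-in-the-loop governance check might look like in code; the check names, the reviewer role, and the approval logic are illustrative assumptions, not a specific product or standard.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceCheck:
    """One pre-deployment check, e.g. risk analysis or audit documentation."""
    name: str
    passed: bool
    notes: str = ""

@dataclass
class DeploymentGate:
    """Human-in-the-loop gate: every check must pass AND a named reviewer must sign off."""
    checks: list[GovernanceCheck] = field(default_factory=list)
    reviewer: str | None = None  # the human in the loop

    def approve(self) -> bool:
        all_passed = all(c.passed for c in self.checks)
        return all_passed and self.reviewer is not None

# Illustrative usage: the check names below are assumptions, not a fixed standard.
gate = DeploymentGate(
    checks=[
        GovernanceCheck("responsible-AI certification", passed=True),
        GovernanceCheck("risk analysis", passed=True),
        GovernanceCheck("error analysis", passed=True),
        GovernanceCheck("audit and documentation", passed=False, notes="model card missing"),
    ],
    reviewer="AI council delegate",
)
print(gate.approve())  # False until the documentation check also passes
```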
A few years back there was very limited awareness, there were very few regulatory rules, and people were not talking about it.
Today even ordinary people, the common man, are talking about AI; even kids are aware of the harms and problems with AI. We have seen many ethical AI frameworks established and released by multiple institutions like the OECD, the World Economic Forum, and the UN, along with the EU AI Act, and companies have integrated responsible AI principles into their AI workflows. Along with that, we are also seeing the rise of AI ethics boards, councils, and regulators, which is basically a human in the loop.
Anything that goes wrong with AI carries three kinds of risk: regulatory risk, reputational risk, and revenue risk. Regulatory and revenue risk are simple to understand. The most critical is reputational risk: if you do something wrong, people today will not keep quiet; they will go to Instagram or Twitter and start writing about your company, which leads to problems that are irreversible and irreparable.
There are a number of AI acts across the world: the EU AI Act, and Singapore, the OECD, Australia, India, and China have come out with their own AI act policies and have also implemented heavy fines. One thing we do in our company is a responsible AI scorecard, where we try to score almost all use cases and see where you falter: red, amber, or green, where you are good, bad, or ugly. It acts as a mirror.
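As a rough illustration of how such a red/amber/green scorecard could be computed, here is a minimal sketch; the dimensions, scores, and thresholds are hypothetical and would differ from the actual scorecard described in the talk.

```python
def rag_status(score: float) -> str:
    """Map a 0-1 dimension score to red / amber / green (thresholds are assumptions)."""
    if score >= 0.8:
        return "green"
    if score >= 0.5:
        return "amber"
    return "red"

def score_use_case(name: str, scores: dict[str, float]) -> dict[str, str]:
    """Score one AI use case across responsible-AI dimensions."""
    return {dim: rag_status(value) for dim, value in scores.items()}

# Hypothetical dimensions and scores for one use case.
report = score_use_case(
    "loan-approval model",
    {"fairness": 0.45, "transparency": 0.70, "privacy": 0.90, "accountability": 0.85},
)
print(report)  # e.g. {'fairness': 'red', 'transparency': 'amber', 'privacy': 'green', ...}
```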
It covers almost all countries' regulations as well as policy rules laid down by multiple organizations. Moving on to AI governance: AI governance is very critical. AI governance here means having the right kind of body. In the pink box we see the AI council, which talks to the AI center of excellence, to the advisory board, and to the AI core team.
We also need the right roles and responsibilities to ensure that everybody knows what to do and what not to do.
All the roles are listed as per the workflow, or data science lifecycle. When we talk about responsible AI, the frameworks need to be organization-wide, not case-study-wide; they should be scalable and visible to everybody across the organization, at all levels. That would be all from my side. We do have a few minutes, so I welcome any questions before we wrap up.
Thank you.
Thank you so much for the insightful presentation. We have a question from our online audience: we are dealing with many changes related to AI, but regulations take a long time to be put in place. Do you think something could be done to avoid this problem?
Yeah, the thing is you need to think ahead and plan ahead. Of course it takes a long time; it cannot be done overnight. So it's always better to hire a consultant or a consulting company with experience in all of this, who can bring in accelerators and the right policies and implement them. Rather than reinventing the wheel, use the wheel that has already been invented and build your vehicle on it.
Thank you for the great presentation. I have a couple of other questions.
You know, there are existing authorities trying to create reliability out in the world. You have sources of authority from computational science, from ethics, legal, business, and social domains. How might they be brought together to create new duties of care, new standards of behavior, in the AI context? Are there ways that can be readily hybridized among those existing sources of authority, or do we need some entirely new constructions?
When we talk about a multidisciplinary approach, I think that's good, because you will have multiple brains coming together.
How to bring them to a consensus is the major hurdle. For that, we need to understand that every field has something to contribute to a responsible AI or AI governance team. When you bring in different sets of people, you need to hear them out: understand what the legal team brings, which regulations they bring, what the computational team brings. So you need something called a technical assurance format, which covers everything that everybody wants and then funnels it down to the few things you need to implement.
The technical assurance sheet would have multiple tabs: for example, a tab from legal, a tab from risk, a tab from the computational team, a tab from, say, the AI researchers. Then you take all the points, put them in the right bucket, and work bucket-wise: fairness, accountability, auditability, monitoring, explainability. You will see that most of them are saying the same thing, just in different language.
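As a small illustration of that funneling step, here is a sketch of grouping requirements from different tabs into shared buckets; the individual rows and bucket names are made up for the example, not taken from an actual assurance sheet.

```python
from collections import defaultdict

# Each (tab, requirement, bucket) row is illustrative; in practice the tabs come
# from the legal, risk, computational, and research teams' own worksheets.
rows = [
    ("legal",         "log every automated decision",        "auditability"),
    ("risk",          "record model decisions for review",   "auditability"),
    ("computational", "report subgroup error rates",         "fairness"),
    ("research",      "publish feature attributions",        "explainability"),
    ("legal",         "name an accountable owner per model", "accountability"),
]

buckets: dict[str, list[str]] = defaultdict(list)
for tab, requirement, bucket in rows:
    buckets[bucket].append(f"{requirement} (from {tab})")

for bucket, items in buckets.items():
    print(bucket, "->", items)
# Note how 'auditability' collects near-identical asks from legal and risk:
# different tabs, same underlying requirement in different language.
```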
Another question: we often talk about the steps that should be taken by organizations and institutions. I wanted to ask about the steps that need to be taken by users, by people. It's harder for individuals to embrace new patterns of behavior. A simple use case would be someone communicating with someone else, peer to peer, person to person. Should people have a responsibility to say, "I used AI in this expression"? And are there other contexts in which we need to start thinking about the responsibility of individual users of AI, and how might we accomplish that?
How might we include them among the folks who are embracing new standards of care?
I always believe this is a proactive measure. You need to do it yourself; you cannot wait for somebody to come and sit on your head and tell you to do this or that. It has to come from inside. It is as simple as this: if you want to drive a car safely, you have to drive it safely. You cannot have a drone over your head all the time to make sure you drive safely. That is what this comes down to.
You also need to ensure that you yourself are committed to this regulation, to this entire concept, process, or movement. You cannot have somebody judging you, watching you, or scrutinizing you every moment to ensure that things are in the right shape and form.
And on the other side, in terms of protecting human users: should people be notified anytime they are receiving a communication that involved AI?
Oh yes, absolutely. They should be aware whether they are receiving a communication from a machine or from a human. That much transparency is essential. For example, you should know that you are talking to me as a person made of flesh and blood, and not to an avatar of me generated by some AI tool.
That is the human's right to know.
Another question, again on protection for individuals. I mentioned in a previous panel that I'm involved with the IEEE now on standards for agentic AI. The idea being, you say to an AI, "Hey, go plan a wedding for me," you give it your credit card number, and it goes and does multiple steps. What kind of protections or limitations should be put on those systems? Because you'll have an individual at risk with multiple decisions being made.
That is where you need to ensure you have the right kind of privacy measures, encryption, and checks on toxicity, and almost everything else we have talked about. It cannot keep billing you; it can bill once, and it should respect the right to erasure. The moment things are done, the data has to be erased from memory. You cannot keep it in memory for long without the right kind of checks and balances.
So the right to erasure and proper privacy encryption are very, very important.
When you give it your credit card, the moment the transaction is done it should erase the number from memory or encrypt the entire thing, so that it does not get leaked and does not fall into the hands of people it should not go to. That is the major problem with agent-based LLM systems today.
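As a minimal sketch of that idea, here is one way an agent could hold payment details only for the duration of a single step and drop them afterwards; the charge_once function is a hypothetical placeholder, not a real payment API, and a production system would also need encryption at rest and proper secret handling.

```python
from contextlib import contextmanager

def charge_once(card_number: str, amount: float) -> str:
    """Hypothetical placeholder for the single payment call made by the agent."""
    return f"charged {amount:.2f}"

@contextmanager
def ephemeral_secret(value: str):
    """Hold a secret only for the duration of one agent step, then drop the reference."""
    holder = {"secret": value}
    try:
        yield holder
    finally:
        holder.pop("secret", None)  # right to erasure: the secret does not persist after the step

# The agent uses the card for exactly one step and never stores it in its memory.
with ephemeral_secret("4111 1111 1111 1111") as card:
    receipt = charge_once(card["secret"], 250.00)

print(receipt)  # the outcome is kept; the card number itself is no longer referenced
```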
One other question: some folks, including myself, assert that the human mind resides in language and culture. Is it intrinsically unethical that these current systems are being trained just on English?
Of course it's an issue. For example, if it's trained only on English and not on other regional languages like Polish or German, you are not able to bring in the culture, and a few things which are okay in English may not be okay in some other language.
So it is very important for AI to be trained in multiple languages, including regional European languages, and especially languages which are drastically different from each other in terms of culture: German, French, Spanish, Polish, English for sure, Chinese, Hindi, and so on and so forth.
So please join me in thanking Shari for the fantastic presentation. Thank you.