So, just to introduce myself briefly: I have been in the industry for a good 25-plus years, predominantly as a technologist and a product owner, with a long time spent in Microsoft's data and AI research, and as CTO at KPMG before founding Cognino AI.
The idea of Cognino came when I was at Microsoft. Along with my co-founder, I arrived at a conviction we have been obsessing about for more than a decade: the artificial intelligence we know, machine learning and deep learning, needs two things before we can use it at the forefront of business decision making. The first is that it needs to be intelligent enough; by that I mean, can the engine create contextual understanding? The second thing we wanted to work on is explainability. Without explainability, we cannot use AI in highly regulated industries such as health, education, and, predominantly, financial services. So we wanted to make sure that the AI we create at Cognino is one hundred percent explainable and trustworthy, and that whatever engine we innovate has contextual understanding. The machine learning we have known until now is all about doing things cheaper, faster, better.
We wanted to add another dimension to that: any AI used in the mainstream of an organization needs real contextual understanding; it needs to understand real-world data. It is not just about taking numerical data and applying machine learning models; it is the ability to understand the vast amount of data being generated in the real world and bring it together. We had a breakthrough eighteen months back as an organization: our engine can understand real-world data, find context, and build a large understanding, an ontology, of real-world information.
We could also explain every decision the engine was taking. So those are the two innovations, and on that basis we have been working with a range of organizations, predominantly in financial services and healthcare. Without further ado, I want to explore four key subjects. The first is post-COVID: why the AI we now apply across organizations needs to be different. The second is contextual understanding and explainability.
The third is how all of this comes together to innovate with AI in a new way, and the fourth is how an organization or enterprise needs to become intelligent going forward. Those are the four parts of this talk. First, post-COVID. If you look at what is happening in the world of data, what I have been observing for the last ten months is that ten years' worth of digital transformation has happened in the last ten months.
What does that mean? While we say data is the new experience, we have an explosion of data that is unprecedented when ten years' worth of digital transformation is compressed like this.
Every organization, from governments to education to retail, and even charities and frontline health services, has become a digital-first organization; they have transformed and moved their data ecosystems to the cloud. That means we are not going to create a petabyte of data; we are going to create possibly multiple petabytes of data per organization, because we have digitized so many processes across industries.
The challenge this brings is that data growth is now exponential, and exponential data growth requires different thinking. You cannot use the same level of innovation we have relied on until now and expect the same ability to understand the data and translate it into intelligence or knowledge.
So COVID has had a huge impact on us. The positive impact I see is a great deal of digital transformation; the negative impact, or the challenge for the industry, is how to really reason through this information and make meaningful decisions.
You used to have one group of people working digitally; now entire organizations have been transformed into a digital space. So how exactly we find information, reason through it, and take decisions is the big challenge. I have been in this space for a long time; we were talking about deep learning fifteen years back, in white papers and as a concept, and about how to move the industry toward more real, contextual AI. If you think of the research landscape as a whole, hyperscale vendors like Amazon, Google, and Microsoft are all working on delivering cognitive services.
You saw a great image-recognition presentation from Google earlier, and so on. These cognitive services represent brilliant progress in innovation, but they work within their own context. Then you have machine learning as a broad subject: transfer learning, generalized learning, distributed learning, and so forth. These are hotly debated areas, with a great deal of research happening and progressing very fast.
What we have been unable to get to, the core of the real-world requirement right now, is contextual understanding, together with the black-box problem of AI. These are the two things the industry has not had a breakthrough on. I am quite confident, and proud, to say that we have those innovations at Cognino, but it is important for the whole industry to move in these directions: any information we reason through using artificial intelligence needs contextual understanding.
We also need to move beyond black-box AI. So what do I mean by contextual understanding? It is not just one kind of AI or innovation that is going to bring context. It is about sensing information, comprehending it, and acting the way humans do; that is contextual understanding. Think of a child: when a baby or toddler asks 'what is this?' and sees a car, the child comes to understand that a car has four wheels and a body.
Then, when the child is first exposed to a truck, it is highly likely the toddler will say it is a car. At that point we, as parents, reinforce their understanding and say: it is a truck.
So we introduce a new word, a new concept.
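As an aside, the toddler analogy can be sketched as a toy program: a learner that knows only one concept will label everything with it until a human correction introduces a new one. This is a minimal, illustrative sketch of incremental learning with correction; the class, features, and numbers are my own assumptions, not the speaker's actual engine.

```python
import math

class IncrementalLearner:
    """Toy nearest-centroid classifier that learns one example at a time."""

    def __init__(self):
        self.centroids = {}  # concept label -> feature vector

    def observe(self, label, features):
        # Learn a new concept, or refine a known one, from a single example.
        if label not in self.centroids:
            self.centroids[label] = list(features)
        else:
            old = self.centroids[label]
            self.centroids[label] = [(a + b) / 2 for a, b in zip(old, features)]

    def guess(self, features):
        # Answer with the closest concept seen so far.
        return min(self.centroids,
                   key=lambda label: math.dist(self.centroids[label], features))

kid = IncrementalLearner()
kid.observe("car", [4, 2.0])       # features: wheel count, length in metres
print(kid.guess([6, 8.0]))         # truck-like object, but "car" is all it knows
kid.observe("truck", [6, 8.0])     # the parent's correction adds a new concept
print(kid.guess([6, 7.5]))         # now correctly "truck"
```

The point of the sketch is the learning pattern, not the model: no retraining on every vehicle in the world, just one corrected example.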
Now, we do not train a child on every possible car or every possible truck in the world, yet that is what the AI we have right now requires. Think of GPT-3, a brilliant marvel of human innovation: what we have done is train it on a huge, global set of data and make sure the engine has a crazy amount of computational power, so that it can memorize, in a probabilistic model, every car in the world in order to say 'this is a car'. That is not contextual understanding, and that is where the industry has gone wrong.
What I mean by contextual understanding is that we do not need to be trained on every car to detect a car. As humans we carry our learning in short-term and long-term memory; we sense through vision and our other senses, and we can translate data into knowledge, reason through that information, and act on it.
That is the core of what I mean by contextual understanding, and AI needs a similar ability: to abstract intelligence from raw data, create multiple abstract intelligence scenarios, and then truly reason through them and act on our behalf. That is the first wave of innovation required.
The second wave of innovation, to me, is explainability, which comes down to trustable AI. Will we, as an industry and as a society, give AI the power to take decisions on our behalf without really knowing why and how it took a certain decision? We cannot. This is not a nice-to-have: explainability in AI cannot be just an option; it needs to be a must.
If we want to use AI at the forefront of our decision making, be it governmental decision making, business, or societal problem solving, and predominantly for health and financial decisions, we must have trustable AI.
What that means is that there are surrounding concepts and policies we need to think about. I see this as four elements. The first is valid AI. The second is privacy-preserving AI. The third is responsible AI. And the fourth, which is constituted of all of these, is explainable AI.
When I say valid AI, we need to ask: should we use artificial intelligence at all in a scenario where we can reasonably expect a human to do better? That is simple common sense. Privacy-preserving AI must be part of every organization's building blocks: we cannot ever contemplate introducing biases or personally identifiable information into an AI building block, and we must respect human rights for AI to succeed. The third element, responsible AI, matters because post-COVID the world has changed.
Right now, in every country and every organization, you cannot think about deploying AI simply because you want to save cost. This is not the time for that; the time right now is for thinking about growth, not cost saving. So ask yourself: if we deploy this piece of AI, is it going to displace hundreds of thousands of people out of jobs, or is it going to re-skill them and give them a new set of armory to participate in the wider economy? I think that is where we need to obsess.
That applies whenever we think of deploying AI in any organizational situation. And that brings us to the fourth element, explainable AI: what is the rationale behind the decision the engine takes? Take a simple example, a driverless car. If the car is faced with an individual in front of it, the engine needs to take a decision: is it going to save the driver, or the individual in front of the car?
Each case will be different, and for every decision we should be able to see how it was reached, log it, and go through the entire value chain of decision making. That mirrors how we as humans take decisions.
If we ran our own brain in slow motion, it would not always be explainable; but if you really obsess about it, you can go through each step of why you took a particular decision, using your short-term and long-term memory and so on. In the same way, an AI is trained with some base data, with real-world data, and with real-time data, and making visible how it takes decisions must be part of any AI deployment. Those are the two things I wanted to share.
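To make the idea of logging the "entire value chain" of a decision concrete, here is a minimal sketch of an auditable decision pipeline: every intermediate step is appended to a log so the decision can be replayed later. The engine name, the credit-scoring rules, and the thresholds are all invented for illustration.

```python
import json
import time

class AuditedDecisionEngine:
    """Records every step of a decision so the full chain can be replayed."""

    def __init__(self):
        self.audit_log = []

    def _record(self, step, detail):
        self.audit_log.append({"time": time.time(), "step": step, "detail": detail})

    def decide_credit(self, applicant):
        # Log the input, each intermediate judgement, and the outcome.
        self._record("input", applicant)
        if applicant["income"] <= 0:
            self._record("rule", "non-positive income, so reject")
            return "reject"
        ratio = applicant["debt"] / applicant["income"]
        self._record("feature", {"debt_to_income": round(ratio, 2)})
        decision = "approve" if ratio < 0.4 else "refer to human"
        self._record("decision", decision)
        return decision

engine = AuditedDecisionEngine()
print(engine.decide_credit({"income": 50000, "debt": 10000}))  # approve
print(json.dumps(engine.audit_log, indent=2))  # the replayable decision trail
```

The design choice worth noting is that the log captures intermediate judgements, not just the final answer; that is what lets an auditor, or a regulator, walk back through the value chain the speaker describes.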
Now, how does this become a core part of any organization? I think of it as a new operating system. If you are building AI into your organization, you need to think of contextual understanding and auditable, trustworthy AI; only then can you deploy AI throughout your value chain and your processes.
What we at Cognino have obsessed over in our innovation is truly real learning and contextual creation: a human-like ability to learn from data, to continuously learn from real-world data, and to create connections in multi-tier relationships across real-world data. How do you do that? You need to bring causality into the deep learning, so the engine can reason that an earthquake in Chile has a real impact on a Siberian oil rig, or that rain here could mean a flood downstream.
Those real-world causalities need to be embedded in the engine, and that is what our innovation brings to the table. You cannot deal with the amount of data we now have in the world, especially post-COVID, as I mentioned, with today's approaches alone. So how would you bring unsupervised learning, transfer learning, generalized learning, and distributed learning together, not in silos?
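The causal chaining the speaker describes ("rain, therefore flood downstream") can be illustrated with a tiny hand-written causal graph. A real engine would learn these links from data; the graph and event names below are purely illustrative.

```python
from collections import deque

# Hand-written cause -> effect links; a real engine would learn these.
CAUSAL_LINKS = {
    "heavy rain": ["river level rises"],
    "river level rises": ["flood downstream"],
    "earthquake in Chile": ["copper supply disruption"],
    "copper supply disruption": ["equipment delays at remote rigs"],
}

def downstream_effects(event):
    """Breadth-first walk of the causal graph from a triggering event."""
    effects, queue = [], deque([event])
    while queue:
        current = queue.popleft()
        for effect in CAUSAL_LINKS.get(current, []):
            if effect not in effects:
                effects.append(effect)
                queue.append(effect)
    return effects

print(downstream_effects("heavy rain"))
# ['river level rises', 'flood downstream']
```

Even this toy version shows why embedding causality changes what an engine can do: it can chain consequences across domains rather than only correlating features within one dataset.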
Am I audible?
Yes, you are. We lost you briefly.
Sorry. So: unsupervised learning, transfer learning, distributed learning, and generalized learning, along with deep learning, bringing them together.
I mean, deep learning is part of that. Can you hear me?
Yes, we can.
Yes, thank you. So the core of this is how we bring these things together and then add human judgment: because we are talking about explainability, we need a human in the loop so that we can have auditable, explainable AI. This is the new operating system I am describing for the new wave of AI transformation going forward. An unprecedented challenge requires completely reimagined innovation.
And I think this is where every organization can really benefit: treat AI as a holistic challenge and deploy the right kind of AI, not just cheaper, faster, better, not just cost saving, but growth and societal impact. I call this building your intelligent core, where a new engine can lift the information you have, from consumer data to social media data to operational data to sensor and IoT data.
The engine lifts that information and creates a contextual construct, what I call the intelligent core, for every organization. From customer onboarding to early-warning systems, to enterprise knowledge management, to predictive maintenance and capital asset planning: think of the many workflows you have right now and infuse AI into them as an intelligent agent, using this new type of AI, which is intelligent enough yet contextually explainable, so that you can take the right decisions.
I think that can only happen with end-to-end process thinking, an end-to-end new wave of deploying AI as I explained earlier. Just to leave some to-dos for the audience: if you are a leader in your organization, or you are working on AI innovation, think holistically, because AI is not just another thing; it cannot reside in one department of your enterprise or organization. You must treat this as a reassess, reinvent, and reimagine opportunity, especially post-COVID.
Think about how you deploy explainable AI so that you can trust its decision making. So: think holistically, then think of an AI readiness index, and demystify what AI truly is.
What is truly AI is not necessarily machine learning in the traditional format, but machine learning in a futuristic format, where you have contextual understanding of information and of decision making, and where the algorithms you deploy are truly...
Explainable. Yes. Thank you so much for bringing us these really tangible concepts and recommendations for people to think about as they move toward their AI projects and conceptualize how they should proceed with them. For that, I thank you very much.
If the audience has questions, you can bring them directly to our speaker, or perhaps meet him in the networking lounge for a longer discussion there.