Welcome to our KuppingerCole Analysts webinar, IAM Meets Data Management, a Recipe for Peak Performance. This webinar is supported by IndyKite, and the speakers today are Måns Håkansson, who is a solution architect at IndyKite, and me, Martin Kuppinger, principal analyst at KuppingerCole Analysts. Before we go into the subject of today's webinar, I'd like to give you some quick housekeeping information. There's not much to say. We are controlling audio. We will run two polls. There is a Q&A session at the end of the webinar.
This is probably the most important aspect, so don't hesitate to enter your questions at any time so that we have a lot of questions for our Q&A session. And last but not least, we are recording the webinar, and the recording and the slide decks will be made available soon after the webinar. Having said this, I'd like to start with a poll. This poll is closely related to the topic of today's webinar, because we will touch on it. It's not directly on the list, but it's part of it.
We'll also talk a lot about one element, one area where a lot of challenges in projects arise. So I'd like to ask you right now: what is, from your perspective, the number one reason for IAM projects stalling or failing? Is it insufficient stakeholder management? Is it a lack of proper and comprehensive requirements gathering? That projects are run too technology-focused? That expectation management is wrong? That the wrong tools are chosen? Or a lack of skilled resources?
There are more reasons, and one more, identity information quality, will be one of the main themes of today's webinar. The data part goes well beyond that, but it's one element here. So please participate in the poll. We will leave the poll open for a moment. It's on the right side of the event application, so you can continue responding to the poll, and at some point we will close it. In the meanwhile, I'd like to have a look at the agenda of today's webinar. I will look a bit at the role of data in IAM, in decision-making, and beyond.
Then Måns will look at data management in IAM with a real-world showcase. This will be, I believe, a very insightful presentation, because he looks at quite a number of different areas where data plays a role and why we need to handle data well for IAM. And then we have the Q&A session. So this is the flow for today. My presentation will take some 15 minutes or so, then I hand over to Måns, and then we will do the Q&A. As I've said, the more questions we have for the Q&A, the better. So, data and information on identities: not an entirely new topic.
When I started preparing for this webinar, I found this one, a webinar we did back in 2013 about identity information quality and why it is so important to have good data for succeeding in IAM. And I think those of us who have gone through IAM projects have experienced that identity information quality can be one of the major challenges, major issues in IAM. Because when data is not good, things don't work well. We don't grant the right entitlements. People complain about incomplete data and other things.
So data is very essential here, and that is something I feel is really important to understand. It's not a new topic, but it's still there. In the next few minutes, I'd like to look at five reasons, five fundamentals for why we need to handle identity data well. One is to provide a strong identity for authentication; then to deliver context for authorization, to enforce reliable identity data, to unify identity information, and finally, to cover all types of identities. So five aspects I want to look at. And the first one is about the strong identity for authentication.
This is where we look at Zero Trust. I'd like to bring up this term here because I still believe it's a super essential concept for security. And Zero Trust starts with identity. It is Martin using a device to authenticate: Martin, his identity, and the authentication. This is the starting point. Before we access the network, before we do anything else, it is there. So we need a strong identity. We need a reliable identity, and we need a lot of information around that identity.
When we look at context- and risk-based authentication, we need a lot of information around that identity that is reliable, that we can use for decision-making. If you don't have it, you're in trouble. Surely you can argue that the advantage of using a lot of signals is that even if one or another signal is not as good, you will still most likely arrive at a good decision. But at the end of the day, we need to know it's Martin, we need to have a proven identity, and we need to have proven context and proven attributes around it.
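To make the idea concrete, here is a minimal sketch, in Python, of how such risk signals might be combined into an authentication decision. The signal names, weights, and thresholds are purely illustrative assumptions, not any specific product's logic; the point is that the quality of the decision depends entirely on the quality of the signals fed in.

```python
# Hypothetical sketch of risk-based authentication: several context signals
# are weighted into one score. Signal names and weights are invented for
# illustration; unreliable identity data skews every downstream decision.
SIGNAL_WEIGHTS = {
    "known_device": -2.0,
    "expected_location": -1.0,
    "impossible_travel": 4.0,
    "new_ip_range": 1.5,
}

def risk_score(signals: dict[str, bool]) -> float:
    # Sum the weights of all signals that fired for this login attempt.
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))

def decide(signals: dict[str, bool]) -> str:
    score = risk_score(signals)
    if score >= 3.0:
        return "deny"
    if score >= 1.0:
        return "step-up"   # e.g. require a second factor
    return "allow"

print(decide({"known_device": True, "expected_location": True}))  # allow
print(decide({"impossible_travel": True}))                        # deny
```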
The second aspect is about authorization. Authorization is what follows authentication. It is, at some level, a repeated verification for every access. And that's where we also need data, where we need information about, for instance, roles and attributes, when you look at attribute- or policy-based access controls.
And again, they must be reliable. The more flexible we want to be, and the better we want to be in identity management, the more we need data. I have been talking a lot about policy-based access management for a couple of years, and I think there's a huge advantage in using policy-based access controls because policies are easy to describe. But what we need to be clear about is that this is where data really comes in. When we have a policy, we say, okay, Martin can access this file under these conditions.
And then there are certain attributes around it: if Martin has this role, if Martin is in this location, if Martin is doing that. There's a lot of information which is used, and the data we use for decision-making must be correct. So the correctness of a decision on a policy depends on the correctness of the data, and we need to ensure the data is good. We need some level of data governance, we need data quality, we need proper data handling. Data is super essential for what we do here.
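As an illustration of why correctness of the data matters, here is a minimal attribute-based policy check. The names (Subject, Policy) and the attributes are hypothetical; the sketch simply shows that if the role or location data is stale or wrong, the policy decision is wrong too.

```python
# Minimal sketch of an attribute-based policy decision, with invented names.
# The decision is only as good as the attribute data fed into it.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    roles: set[str]
    location: str

@dataclass
class Policy:
    required_role: str
    allowed_locations: set[str]

def can_access(subject: Subject, policy: Policy) -> bool:
    # If 'roles' or 'location' are stale or wrong, this decision is wrong too.
    return (policy.required_role in subject.roles
            and subject.location in policy.allowed_locations)

martin = Subject(name="Martin", roles={"analyst"}, location="DE")
file_policy = Policy(required_role="analyst", allowed_locations={"DE", "CH"})
print(can_access(martin, file_policy))  # True only if the underlying data is correct
```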
Everything we don't do well limits the level of assurance in authentication and the reliability of the authorization decision. So we must enforce reliable data. We must also ensure that we have access to the data, that we can get to the data, be it more static data or more dynamic data. We need to understand all the complexity of the data, like in this picture, so achieving this is not easy. We need the right tools, the right technology for that. We need to bring data together and put data into context. The data relates to me, and the data sometimes relates to others.
I am part of an organization; there's data related to the organization, and there's data related to Martin. How can we ensure that all this data is linked in the right way and that we can access it without getting overly complex in how we need to connect to systems? Because the target must be that we can consume a lot of data around identities. And by the way, not only from a cybersecurity perspective: when you want to provide better services, then the more data we have, the more personalized and targeted the services we can deliver.
So depending on the type of use case, it can be not just security; it can be much more. We need to be able to consume data, to ensure the quality of the data, to link data, to combine data, and to potentially limit the sources. I think it's always a bit tricky, because we also know that if we have redundant data, some data might not be as current as it should be. But we need to look at how we can access a lot of data that is also linked and pertinent. This will be very interesting, I think, later on when Måns is talking about graph-based approaches.
He will go into detail, but graphs allow us to navigate through linked data in a very smart and efficient manner, so that makes a ton of sense here. We also need to cover all types of identities, and we need to understand there's a lot around them. When we look at relationships, then as part of Zero Trust, Martin uses a device and there's a binding between Martin's identity and the device. So we also need to look at these things and at different types of identities, organizational identities, et cetera, and bring these things together. And this is also very important.
So data is really key to a lot of the things we are doing, maybe all the things we are doing in identity. Based on this, what do we want to do? What are the fundamentals, so to speak? I try to define four tenets here for identity data. Identity data must be comprehensive, so we need to get access to the data we need in a simple, feasible way. It must be correct and current, which in some ways are linked to each other: data can have been correct, but if it's not current anymore, it's not correct anymore. And it must be consistent.
If we look at more than one source, we shouldn't end up with different states of data, which again has to do with being current and correct. So we need to look at data, and we need to understand that data is essential. My experience is that quite a number of challenges in identity management projects really derive from the lack of access to data and the lack of quality, correctness, and currency of data. We need to get better, because data is essential, and we need good data management to get better at IAM. This is my point on that.
Before I hand over to Måns, I'd like to quickly look at the second poll, which is a very simple one. It just asks: when we look at identity management, do you already have a comprehensive blueprint for your architecture covering all of IAM in one place — what I talked about: authentication, authorization, and in fact the IGA processes, onboarding, all that stuff which depends on data? So do you have an architecture, a blueprint for all of identity management, like we talk about a lot, like an identity fabric — yes or no?
Again, you have a bit of time to respond to that poll. And with that, I'd like to hand over to Måns, who will talk about data management in IAM with a real-world showcase. So Måns, the stage is yours.
Okay, thank you, Martin, for those words. I will try and continue on them. The last time IndyKite was on the KuppingerCole webinar series, we talked generically about our platform, but today I want to be more specific about IAM and how IAM can benefit from such an approach — meaning combining it with strong data management practices — and talk about which problems and challenges we actually solve for. And I want to show the differences of this approach quite clearly.
I also want to picture IndyKite in some sort of identity ecosystem, or identity fabric if you like, and how we can together make IAM even better and more relevant than it is today. Lastly, I will talk a little bit about one of our pioneering customers, a global automotive manufacturer who wants to create a data marketplace and monetize it, and how IndyKite is helping in that service delivery. So before I get into the topic, I just want to step outside of the IAM realm a little bit.
I want to talk briefly about how large enterprises and organizations categorize or group their large set of applications. They often do this in three categories. First, the systems of record: the infrastructure applications such as CRM, ERP, production, and billing systems. They score very high, obviously, on business value, whereas they perhaps lack innovation power.
And here is where the systems of differentiation fit in, and how you can deliver unique business value to your customers by digitalizing your products and exposing web applications, web portals, APIs, and mobile applications in order to stay ahead of your competition. There is obviously also a lot of pressure today to innovate: being able to open up marketplaces, create new digital products, and things like that. And it is fair to say that most organizations currently want to focus their investments and time on the two upper categories.
Obviously, they are finding ways of creating new markets and creating more revenue. And it is not only the focus of the customer, it is also my focus in the presentation today. So I will try and focus on these two types of systems, which need strong IAM. And if you can picture that in your head as well — looking at projects, innovations, or even programs in your organizations that focus on these two categories — I think you will benefit a lot from this conversation.
Obviously, IAM ties things together alongside shared data, business processes, and security, which are also key components or resources in this setup. And when looking into those initiatives, those new applications that customers are primarily interested in, you often find a good deal of focus on creating a rich user experience or customer experience.
There is often a lot of talk about user journeys, meaning that it shouldn't matter too much where in the life cycle of a customer or user you enter the organization — whether that is for buying a product, using a product, servicing a product, recommending a product, or even buying a new set of products. Either way, you interact with the organization in this sort of infinity model, making sure that you have a delightful user experience. And we as identity specialists, generally speaking, often claim that identity is the first touch point.
You touched on that already, Martin. But in this setup, I think that we can go so much beyond the basics of doing authentication and identity verification, where a user is moved from being an unknown user to a known user. It doesn't stop there. It is crucial that we engage in creating those consistent and contextual experiences. And I believe, and we at IndyKite believe, that we can do so much more. But this crosses a bunch of boundaries. It goes across applications, services, and APIs. It goes across your business models and your processes, and it goes across user communities.
And that is a challenge for most. That type of discussion is actually an application development discussion, and I believe that we as specialists need to be more like architects, integrators, and even developers, going in with an identity mindset and helping these projects fulfill their value of a delightful user journey — elevating that discussion, talking about the digital identity, and, as you rightfully talked about, Martin, how we can embrace and deliver upon the identity of everything.
What I mean is, examples could be the application, the car, the device, and even the organization of a user — all can be identities. But more important than just talking about a multitude of new identity types is that it is a matter of identity types in combination. What I mean by that is: it's about me and my car. It is about me, the organization that I'm currently representing, and the application I'm using. That type of discussion is essential.
And the reason for bringing this up is that context is the primary and most essential part of this: being able to stitch and tie identities of different types together in a context. And how do you derive context?
For that, we need real data. What do I mean by real data? Especially when we talk about IAM data or customer IAM data, we are not so much considering roles, privileges, entitlements, and things like that. Those are constructs that have been created artificially, and nobody but the IAM team understands what they mean. Real data — and there are a multitude of examples we can use for knowing who an identity is — means the properties or attributes, the preferences, the locations, the behaviors, and we can continue.
But more importantly, outside the IAM sphere, we really touch upon CRM data and HR data in particular, where we start figuring out relationships, responsibilities, and organizations. It can be addresses, concepts, and products that we are assigned to use. And when we continue on this, we start thinking about the things: just like identities, they have properties, they have locations, they have behaviors, just like human beings. And I also want to add, and that might be a little bit controversial to say, that we also need a notion of business data. Business data — what is this?
The only thing we need to know is what these users, what these identities, actually interact with. So in many ways, we need some control of that too. And when we enter this discussion about data, we are obviously caught in a conversation about clear and present challenges.
Obviously, fragmented data and multiple data silos — these are, you know, a given. We use multiple repositories for different things. But as you were also pointing out, Martin, there are the data quality aspects. And instead of saying that we have poor data quality, I think it's fairer to say that we have variations in data quality. I want to touch a little bit upon that, because there are so many organizations that have started to take data quality and data analytics seriously, building data lakes, starting to create these golden records of customers, of products, and what have you.
And I think it's important that we tie into those processes, both from a business perspective and from a technical perspective. There is so much we can learn from those types of projects. But I also want to add that data is often not operationally available, meaning that the data can only be used in its silo. And that is a key issue for creating user journeys. There is no visibility into this data, there is a lack of trust in the data, and it's impossible to derive the context. And to add to that, some data is actually missing from this process.
And again, for user journeys — I will come back shortly with an example of what that can be. So when we start talking about this and trying to solve the challenge with data, we focus on what's known as knowledge graphs, connected data, or graph databases. These are systems that are designed to collect and connect these various data points. They are useful for aggregating and unifying this data from multiple sources. But please do not think that this is a data lake, that we want to gather all types of data.
No, we need the required data to fulfill specific use cases, and we will talk later about how we actually can make use of this data in our daily life. Okay, at IndyKite, we want to create what we call a system of knowledge. It's like the brain depicted in the middle here. We want to enable feeding of systems-of-record data into the brain, connect the dots, enrich the relationships, and use that data for your innovation, differentiation, and customer journeys.
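As a rough sketch of what "connecting the dots" could mean in practice, the snippet below merges a CRM record and an HR record about the same person into one graph node while keeping the provenance of every property. The source systems, field names, and matching key are assumptions for illustration only.

```python
# Rough sketch of "connect the dots": records about the same person arrive
# from two systems of record and are merged into one graph node, keeping the
# provenance of every property. Source systems and matching key are assumed.
crm_record = {"email": "jdoe@example.com", "name": "John Doe", "dealer": "dealer:north"}
hr_record  = {"email": "jdoe@example.com", "employee_id": "E-1001", "manager": "E-0007"}

def ingest(graph: dict, source: str, record: dict, key: str = "email") -> None:
    node_id = f"person:{record[key]}"
    node = graph.setdefault(node_id, {"properties": {}})
    for prop, value in record.items():
        # Keep the value together with where it came from, so trust can be assessed later.
        node["properties"][prop] = {"value": value, "source": source}

graph: dict = {}
ingest(graph, "CRM", crm_record)
ingest(graph, "HR", hr_record)
print(graph["person:jdoe@example.com"]["properties"]["manager"])
# {'value': 'E-0007', 'source': 'HR'}
```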
With this system of knowledge, we want to create an operational data layer that can be used not only to read and query from, but that we can actually also write back to, meaning that we can deal with the missing data I mentioned earlier. That could be, for instance, building up a relationship of my household. Only I know who is part of that household, and many organizations struggle with that when building services for a household or a family. It can also be my consents, a power of attorney delegation, or a preference.
So we also need to be able to write this information back. When it comes to IAM and how IAM fits into this, I have tried to derive capabilities of our platform and how those can be applied to your custom applications. On the right here of the screen, we see three custom applications, and you can use this for the authorization decision-making process. Martin talked about this also: it's important to have a policy-based approach utilizing very dynamic and fine-grained authorization, essential for Zero Trust, for example, but we can also use it for consent controls.
We can actually update, create, review, and grant consents through this process. And if consents exist, we might want to be an attribute provider for those applications, so you can extract attributes to the applications from the system of knowledge. We provide the context. An example would be: which organizations can I directly or indirectly represent? This is something that the knowledge-based system can hold and present to an application. It can also derive signals or pass events.
If data changes in the system, we can pass the information to other systems, like in an event-based architecture. And lastly, we do data insight queries to be able to derive data of various kinds. If we think of identity and access management tooling, I think primarily of the two top ones here to the right, meaning identity providers — modern IdPs like ForgeRock, Okta, Ping, and Azure — that can utilize this for authentication flows or OAuth and OpenID Connect flows, to authorize, provide attributes, provide context, or even get signals. That is a given.
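Coming back to the signal capability mentioned a moment ago, here is a rough, hypothetical sketch of the event idea: when a property changes, subscribed systems receive an event they can react to. The event shape and publish mechanism are assumptions, not a description of an actual API.

```python
# Hedged sketch of passing signals in an event-based way when graph data changes.
# Event structure and fan-out mechanism are invented for illustration.
import json
from datetime import datetime, timezone

subscribers = []  # e.g. an IdP, an IGA tool, or a custom application

def subscribe(handler):
    subscribers.append(handler)

def update_property(entity: str, prop: str, value, store: dict) -> None:
    old = store.get((entity, prop))
    store[(entity, prop)] = value
    event = {
        "type": "property.changed",
        "entity": entity,
        "property": prop,
        "old": old,
        "new": value,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    for handler in subscribers:   # fan the signal out to other systems
        handler(event)

store: dict = {}
subscribe(lambda e: print("signal:", json.dumps(e)))
update_property("user:jdoe", "works_for", "dealer:south", store)
```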
That IdP integration is actually one of the things that we can clearly demonstrate. But I also want to talk a little bit about — and maybe, Martin, we can talk later in the Q&A — how I would think that even identity verification tooling, credential verification, and wallet initiatives would benefit from a powerful backend that can store the data that is collected, but also tie it to existing data and even future data, becoming the sort of backend that these services can rely on. This is what IndyKite is all about.
Unify the data, make it available for applications, and build upon an essential identity knowledge graph. This thing is designed to hold the data, build up a data catalog, and bring visibility to this data — not only the nodes that we see here, but we can actually drill down and see metadata of the information to derive trust in those data points. We also expose APIs that can be consumed by various applications.
And I want to spend a few moments digging a little bit deeper into the knowledge graph, because I think here comes the answer to what is different. And I'm going to give an example. It's not directly tied to the case study that I will present later, but it is a manufacturing example. It is a business-to-business example where we capture information about users and dealerships. A dealership, in this case, would be an organization.
And we would know both of these, because they come from the CRM system: the dealer, as an organization, lives in CRM, and so does the first and most interesting user — we have one contact in CRM who is the administrator of the dealership. We can already see how this data model starts to build out. One user works for, and can be admin of, a dealership. There is also the notion of an arrow pointing to itself.
In the identity sphere, that can be me being the father of my daughter; but in this case, it is a dealer that is a partner of, or part of, another dealer — meaning a dealer network, or the locations of one single dealership represented in different markets. And you can build it out, nested and hierarchical, and it is very powerful for creating those clusters and hierarchies in order to derive context. We can now start adding: okay, who works where? And we can see one user works for three of these dealers.
But even if it's possible to stop here, there is some friction in terms of making a context separation. So what we can do is create an artificial node in this model for each of the works-for relationships: we create three profiles. So here we have one single identity, we have three profiles, we know that they work for these three dealerships, and we can start adding the administrator-of relationship. And please note that the relationships do not have to follow each other, so you can definitely be an administrator of a dealer that you don't work for.
There is full flexibility in this case. This scenario would fit nicely into a doctor-hospital example, or a corporate banking customer, where we can represent various companies over power-of-attorney setups.
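To illustrate the model just described, here is a small sketch of the user/dealer/profile graph as plain nodes and labelled edges. Node names and relationship labels are invented; note the one profile per works-for relationship, the admin-of edge that does not follow a works-for edge, and a dealer that is part of another dealer.

```python
# Illustrative sketch (not an actual product API) of the dealer/user model:
# one User, three Dealers, one Profile per works-for relationship, an admin-of
# edge that need not follow works-for, and a nested dealer network.
nodes = {
    "user:jdoe":    {"type": "User", "name": "John Doe"},
    "dealer:north": {"type": "Dealer"},
    "dealer:south": {"type": "Dealer"},
    "dealer:west":  {"type": "Dealer"},
    "profile:1":    {"type": "Profile"},
    "profile:2":    {"type": "Profile"},
    "profile:3":    {"type": "Profile"},
}

edges = [
    ("profile:1", "PROFILE_OF", "user:jdoe"),
    ("profile:2", "PROFILE_OF", "user:jdoe"),
    ("profile:3", "PROFILE_OF", "user:jdoe"),
    ("profile:1", "WORKS_FOR", "dealer:north"),
    ("profile:2", "WORKS_FOR", "dealer:south"),
    ("profile:3", "WORKS_FOR", "dealer:west"),
    ("user:jdoe", "ADMIN_OF", "dealer:north"),    # admin of a dealer you work for, or not
    ("dealer:south", "PART_OF", "dealer:north"),  # nested dealer network
]

def related(source: str, relation: str) -> list[str]:
    """Return all targets reachable from `source` over one edge of `relation`."""
    return [dst for src, rel, dst in edges if src == source and rel == relation]

print(related("user:jdoe", "ADMIN_OF"))   # ['dealer:north']
print(related("profile:2", "WORKS_FOR"))  # ['dealer:south']
```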
Still, we are in the IAM realm, and I want to continue there, because I'm soon going to expand this model quite a bit. But before that, I want to jump into something that is extremely important: what we call properties in the connected data space, also known as attributes. We would see, for instance, that we have a user, John Doe, with an email address; we would have a dealer with an invoice address, a ship-to address, and a dealer type. We could know that the profile type is a chief financial officer, and that this person actually started working for this company back in 2018.
This is still not anything magic, but I want to take you even further into these properties and talk about what we can capture in terms of metadata, and this is truly important for getting trust into this discussion. Looking at the first one on top here, we see that, for instance, this address is a self-registered address. It is not verified, hence we give it a score of level of assurance one. It could have been called something else, because these are customer-specific definitions, but in this case I call it level of assurance.
For the invoice address, we track information from the CRM system, which we trust but haven't verified, so the level of assurance on this property can be two. And lastly, looking at the email address, where we use an external governmental identity provider or verifier in the onboarding process for this user, we know from both the source and the verification that this is a true identity, hence giving it a level of assurance of three. This means that we can now start differentiating services across our service delivery.
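A small sketch of what such assurance-aware differentiation could look like: each property carries metadata (source, verification state, level of assurance), and a service decides per action which level it requires. The values and thresholds are illustrative only.

```python
# Sketch, not a product API: properties carry metadata, and services branch on it.
# LoA labels mirror the example: 1 = self-registered, 2 = trusted source but
# unverified (CRM), 3 = verified via an external identity provider.
properties = {
    "user.address":        {"value": "Main Street 1",    "source": "self-registered", "verified": False, "loa": 1},
    "dealer.invoice_addr": {"value": "Billing Rd 7",     "source": "CRM",             "verified": False, "loa": 2},
    "user.email":          {"value": "jdoe@example.com", "source": "gov-idp",         "verified": True,  "loa": 3},
}

def allowed(action: str, prop: dict) -> bool:
    # Differentiate the service level by the assurance of the underlying data.
    required_loa = {"show_marketing_offer": 1, "send_invoice": 2, "sign_contract": 3}
    return prop["loa"] >= required_loa[action]

print(allowed("send_invoice", properties["dealer.invoice_addr"]))  # True
print(allowed("sign_contract", properties["user.address"]))        # False
```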
We can start feeding this information into the processes and make decisions that are different. Continuing a little bit, expanding the model, we are now going to go outside the identity realm. We're going to add cars, which are sold by and in the inventory of dealers. We're going to add warranties to the cars. We're going to add records that describe, for instance, sales orders, service orders, invoices, and things like that. And we add the notion of an application that consists of modules and is tied to an access role. Please do not get bogged down in understanding the model at the moment.
I just want you to see how the graph can expand for various purposes once you get more and more data into it. I'm not going to stop here, because we also need to bring in the consumer, the owner of the car who buys from the dealership, and the manufacturer, which provides multiple brands and multiple product categories. Those are also part of this differentiating story about who can get access to what and who can represent certain brands, for instance. These are things that are contractually bound between the manufacturer and the dealerships.
We also have the notion of geographies and sales organizations covering countries, continents, cities, or what have you. There might be different regions and locations, and there might also be different sales organizations that support the dealers.
Lastly, I'm going to add a subscription. A subscription is one of the things that the customer can own. For instance, I may subscribe to a certain feature of the car that I haven't paid for before: I own the car, but I add a subscription to be able to access certain data or functionality in the car. So subscriptions are also key to providing access, not only from the consumer side of things but also from the dealer side of things. Please note in this model the various pointers to themselves; several of these things are like clusters. The car consists of parts.
The categories can contain more categories, the regions can be divided into locations and cities, and things like that. Also note that several of these nodes are considered digital twins. The most important thing now is to ask ourselves: why do we capture this data? And the answer is that these are constructs that are similar to what an application developer would need to implement as business logic. The graph can be helpful for the developer to actually construct these user journeys. And when you put this to work, it looks like chaos, but it's controlled chaos.
It is something that you can use and query to derive certain decisions. You can derive attributes, you can derive context, you can derive signals out of this. And this is the promise of the systems of knowledge: being able to query this information and find these paths across the multitude of nodes.
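As a toy illustration of such a query, the sketch below checks whether a path exists from a profile to a record through the graph (profile to dealer to car to record). The node names, relationship types, and the simple search are assumptions for illustration, not the actual query language.

```python
# Illustrative path query over an expanded graph: can this profile reach a
# given record through a chain of relationships? Names and edge types are assumed.
edges = [
    ("profile:1", "WORKS_FOR", "dealer:north"),
    ("dealer:north", "SELLS", "car:VIN123"),
    ("car:VIN123", "HAS_RECORD", "record:service-42"),
]

def reachable(start: str, goal: str) -> bool:
    """Simple iterative search over directed edges, ignoring relationship types."""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for src, _rel, dst in edges:
            if src == node and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return False

print(reachable("profile:1", "record:service-42"))  # True: a path exists
```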
And just to complete — I know I'm running a little bit out of time, but Martin has given me permission to do so. The dealership is one business model. Another business model is the consumer. A third one is the field service, and a fourth one is electric charging. There might be reasons why these are separated — practically, legally, functionally, business-wise — but there might also be reasons to actually connect these things. And what if the end user, by the click of a text message, can add a consent that ties the dealership to the records that are controlled in a different business domain? That is also a clear possibility. It's not just vision.
Many organizations struggle with this, but you could also think of removing all the barriers and saying that the car, which is in service at the dealership, may need to be supported by the field service, or the dealership may need to get access to the charging network because they need to charge the cars that are in service. These types of use cases — and I know they are somewhat futuristic to think about — are the art of the possible in using a knowledge graph to derive and connect this data. So this is the art of the possible.
It would be interesting, Martin, to hear if you think this is IAM or not. Many people may think it's perhaps something else, but to me, it's definitely where IAM needs to go. We need to be able to help connect the dots to create those user journeys.
Lastly — and I will just take a sip of my drink — the case study. In this case, it is a global manufacturer that manufactures cars. They collect a lot of data, obviously, tied to the history of the car, from when it is born through being shipped and sold. There are also telemetry aspects of this, from how and where the car is being used. They want to tie this data into a data marketplace — and to be clear, this is not IndyKite. This is a data marketplace consisting of a lot of technology. That technology includes data APIs to be able to access the data.
It includes applications and portals for usage, but it also includes identity and access management. And in this identity and access management fabric, this customer is using Auth0, IndyKite, and Open Policy Agent together to handle everything from authentication, federation, and session management to authorization decisions. And the idea is that the customer — the manufacturer — wants to sell data access subscriptions to its customers, anything from a B2C customer to a B2B customer, a partner, a dealer, or even a supplier.
And in order to do so, they have strong and very complex authorization rules, taking care of contracts, brands, product categories, and such. This is where the knowledge graph comes in — the identity knowledge graph with policies on top of it — in order to answer who can get access to what. There is even a futuristic use case here where they want to allow their business partners to resell the data even further. The example would be that the manufacturer sells to a fleet manager, and the fleet manager sells this data to the drivers, the service company, or the fuel company.
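A toy sketch of the kind of rule such a setup answers for the data marketplace — "may this subscriber read this data category for this car?" — based on contract terms covering brands and product categories. Entity names and contract shapes are invented for illustration.

```python
# Sketch only: a simplified marketplace authorization rule driven by contract
# terms (brands and data categories). Not the customer's real policy model.
contracts = {
    # subscriber -> what the contract covers
    "fleet-manager-1": {"brands": {"BrandA"}, "categories": {"telemetry"}},
    "dealer:north":    {"brands": {"BrandA", "BrandB"}, "categories": {"service-history"}},
}

cars = {
    "car:VIN123": {"brand": "BrandA"},
}

def may_access(subscriber: str, car: str, category: str) -> bool:
    terms = contracts.get(subscriber)
    if terms is None:
        return False
    return cars[car]["brand"] in terms["brands"] and category in terms["categories"]

print(may_access("fleet-manager-1", "car:VIN123", "telemetry"))  # True
print(may_access("dealer:north",    "car:VIN123", "telemetry"))  # False: not in contract
```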
That reselling scenario is possible with this setup as well. I can't go into details since I am so much over time. But that concludes, presentation-wise, what I wanted to talk about.
Martin, maybe over to you. Thank you, Måns. We already have quite a number of questions here, and I see a couple of the participants actively voting for their questions, which is good — that helps us concentrate on the most interesting questions. We already have a couple with multiple votes. So, to get started, the one which is currently on top is: where do you see the primary use case for data in the context of identity? Is it really the business process — you talked a lot about business processes? Is it access control and security? Is it privacy?
Or is it something else? Or maybe, to add from my end, is it all of that? From what we see, one of the key drivers is obviously authorization, but I believe strongly that it goes much beyond that. It needs to cover other aspects also. Obviously, authorization can be used for data protection, and it can be used for automating processes. And I think that is still perhaps what most organizations are struggling with: what happens after authorization, and removing the tension of the roles and the groups and all that.
That is one of the aspects, but I do believe there will also be, for instance, this thing about being able to provide context to other applications, signals to other applications, based on this data. That is for us to explore, but that is my answer to that. Yeah. So what you're saying is we should really think beyond the personal identity management use cases and look at what else we need the data of these relationships for. Those relationships are key — finding out, especially, you know, you talked about policy-based access control, how it ties back to the data.
It ties to what the actual records are that users could get access to. In the telemetry world, for instance, which parts of your driver history or engine history you want to be able to share across a network and an ecosystem is really of importance. And that is also privacy-related.
Okay, got it. So the next question, and I think it's a bit related to that, but also goes maybe a bit more back into the core identity management world: do you suggest that IAM should go beyond traditional IGA and CIEM, et cetera, across all these areas, to one central system? And how could these worlds live together? And maybe there's a comment or a question that is related to that, which is: I always have concerns that an identity backbone leads to maybe too tightly integrated systems.
And as systems become more tightly integrated, it becomes more difficult to manage the whole ecosystem. I think that is a concern. So the comment says: therefore I prefer the identity fabric with less integration.
Yes, and I mean, I didn't really catch the IGA aspects of this, but if you think of the message I was trying to convey: we want to operationalize and unify a lot of data. That data will still need to reside in most of these systems. We want to add an abstraction layer that multiple applications can use for further use cases. I think it is dead serious and dead important that we can do that, because otherwise we will tie ourselves into one single system that tries to hold all data, and I don't really see that happening. We need to abstract this.
Which relates to another question — we have so many questions here, we will probably not be able to answer all of them. How can the model you described work without loading all data into one graph? Can it build on dynamic access to data sources, which would also avoid duplication of data?
Yeah, we definitely want to avoid that, for all good reasons. We often only want what we know as references to data. We might want to capture the VIN number of a car; we might want to capture a doctor's ID, or perhaps a patient's ID. We don't want to have a duplication of this. As I was trying to explain, we only want the data that makes sense to connect in order to support a certain use case.
And the use case would be, for example: which patients can you access, either directly because you are treating them, or because there is some other rule around why you should get access to that data. That is the important thing. We do not want to build a data lake — please do not misunderstand or think that that is what we are into. We want to take the dots and connect them. Yeah. So you're saying Martin owns that vehicle from that manufacturer, but the details about the vehicle are somewhere else, and the details about the manufacturer are somewhere else.
And depending on what exactly you want to achieve with your request. Exactly. And that can actually also be part of the reference thing. Either we talk about this as a reference ID, or we store a reference to where you, as an application, can go and get the data. It is also of interest to be able to use the graph to capture even that. So a reference means two different things in this case: either just the ID — the least information we need to know in order to make some decision — or where the data is actually located. Especially the telemetry data, for instance: we don't want to get that into the graph, no way.
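A hedged sketch of the "reference, not duplication" idea: the graph keeps only an identifier and a pointer to the system that owns the full data, here a car's VIN and a telemetry locator, both invented for illustration.

```python
# Sketch of storing references instead of duplicating data: the graph holds a
# minimal identifier plus a locator; the heavy data never enters the graph.
graph_node = {
    "type": "Car",
    "vin": "WVWZZZ1JZXW000001",  # the minimal identifier we keep
    "telemetry_ref": {
        "system": "telemetry-platform",
        "locator": "https://telemetry.example.com/cars/WVWZZZ1JZXW000001",
    },
}

def resolve_telemetry(node: dict) -> str:
    # The application follows the reference to the owning system.
    return node["telemetry_ref"]["locator"]

print(resolve_telemetry(graph_node))
```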
Okay. Next one: how would you envision decentralized identity and wallets benefiting from such central stores, and maybe graphs providing high-quality identity data? You touched on this a number of times during your presentation. So the question is, can you dive a bit deeper into that?
So I'm not an expert in how you define the relying parties, service providers, issuers, and verifiers — it's really not my expertise — but I understand that we are going to collect a lot of things. And if you are a service provider, meaning that you would be the one authenticating the user with the wallet, you may want to store that data — if you legally can, obviously, but if you legally can, you will want to store that information.
And you may want to say: okay, now that I know that this person has a driver's license for driving trucks, I might want to give him access to certain things over a period of time, until the next verification. So that's why I see a service provider actually storing that data and then connecting it with the existing data they know, to grant access and derive other information. That's how I see it fitting in. We don't fit into the actual verification or the wallet-to-service-provider interaction.
But I do believe that the actual verifiers, the issuers, and the service providers may all want to use this for control, and especially the metadata: knowing which data they can release and what quality the data has. Yeah, maybe to expand a bit on that: basically, what you also have in there is a decentralized identifier, which in fact helps us have a link between the information in the wallet and what we do.
And for all the decentralized identity stuff, at the end of the day, it will be: I come with the wallet, I go to whatever organization, to the verifier, and they still have some backend systems behind them. Those will not go away.
In fact, we need to integrate it. And that means that in the end, for that graph of data around, okay, this is Martin Kuppinger, and Martin Kuppinger is the owner of this vehicle, and so on, it might then be the decentralized identifier that links, so to speak, to Martin.
So again, it would be just one point which helps us connect the internal graph, the internal systems of record, and all the relationships with the way Martin comes in. It might be that, historically, Martin came in with username and password. Exactly. Then Martin says, okay, I switch to that. And then you say, okay, he's linked to that and that. To that point, it's one of the key things that we really want to use.
I don't know exactly what this will be called, but we call it identity stitching. We obviously know that if you are a customer of, for instance, this bank, you may already have been identified using a weak identifier or stronger authentication, and now you come with a wallet. You are now represented again as a sort of third identity. How can we stitch these things together? That is obviously one of the use cases we are really interested in as well.
When we get data about these things, how can we apply some rules or algorithms to actually stitch these things together? I may even come in with more than one wallet, because I surely would have a personal and a business wallet, but I may still have a relationship to the same company — and different relationships, which need to be treated correctly, because in my business persona I'm different, I'm acting differently. On the other hand, I might want to have some overlap between these. And this is clearly what graph and this technology enable in a much smarter way than we can do without them.
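A loose sketch of the identity stitching idea: several login identities (a legacy username, a federated account, a wallet identifier) get linked to one person node when a matching rule fires. The matching rule used here, a verified matching email address, is an assumption for illustration; real stitching would use richer rules.

```python
# Hedged sketch of identity stitching: link multiple identities to one person
# node when a matching rule fires. Identity names and the rule are assumed.
from collections import defaultdict

person_links = defaultdict(set)   # person id -> linked identity ids
identities = {
    "legacy:martin":   {"email": "martin@example.com", "email_verified": False},
    "oidc:mkuppinger": {"email": "martin@example.com", "email_verified": True},
    "did:example:abc": {"email": "martin@example.com", "email_verified": True},
}

def stitch(person: str, identity_id: str) -> None:
    candidate = identities[identity_id]
    already_linked = [identities[i] for i in person_links[person]]
    # Link if nothing is linked yet, or a verified email matches a linked identity.
    if not already_linked or any(
        candidate["email"] == known["email"] and candidate["email_verified"]
        for known in already_linked
    ):
        person_links[person].add(identity_id)

for ident in identities:
    stitch("person:martin", ident)

print(person_links["person:martin"])  # all three identities linked to one person
```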
So at the end of the day, I think what I take as a learning is that a lot of the things you're doing really enable organizations to better deal with the identities they have to serve. And these are not only humans; these are different types of identities, also the, so to speak, silicon identities of things and so on. They potentially also help to deal better with complex things, because a vehicle is not one thing: a vehicle is an amalgamation of a lot of things with different identities and a lot of complex relationships.
And I think you can do a ton of smart things with it, but in the interest of time — we have only a few minutes left — I want to pick at least one or two more questions. You mentioned AI on one of your slides. So what are you doing with AI? How does AI play into this? So what we are doing currently, soon to be delivered: the first aspect is that we're going to tie user behavior risk scoring to the authorization engine that we provide.
In that case, we more or less generate fingerprints out of user behavior, and we can compare the fingerprints of those sessions and derive: here is a user starting to behave differently. That is one of the aspects where we are using it. We also want to apply it to the ingestion of data, because that is crucial to what we do. In order to build this knowledge graph, we might want to know what happens if the ingestion, the feeding, the streaming of data actually starts to deviate. That is another aspect of this.
The third one is identity stitching, and there is more coming, because, as we know, the semantic description of this database helps greatly to build AI on top of it in a quite simple way. But this is where we are focusing our efforts right now. Yeah. Okay. A final question we can take: there was a connected data slide, and the question is, is the schema ontology you showed your point of view on identity, or just a representation of what one sample schema could look like? If I get the question right: it was a sample. Yes.
The whole identity knowledge graph is what we call schema-less. It is up to the customers to define any model they want. There is no predefined representation of a profile, an identity, a car, or anything like that. It is completely empty, which gives a lot of flexibility to actually build out use cases. And I can only give the recommendation to everyone: play around a bit and learn a bit about graph databases. This is really a smart thing, because it allows you to combine data in a very elegant, very smart, very flexible manner.
So this is really something which is worth at least gaining a bit of a basic understanding of. Now that we are done with the time, we still have quite a number of questions left. So thank you to the audience — we will probably do this again. Thank you to the audience for asking all these questions.
Super, super helpful. Makes it very interesting.
Thank you, Måns, for all your insights, and thank you to IndyKite for supporting this KuppingerCole Analysts webinar. And I hope to have you back soon at one of our other webinars, or see you in Berlin in June at our European Identity Conference, which is definitely the place to be in identity. Thank you. Thank you.