KuppingerCole Webinar recording
Good afternoon, ladies and gentlemen, welcome to this KuppingerCole webinar, Solving the Million Record Challenge with XACML. This webinar is supported by Axiomatics. The speakers today are me, Martin Kuppinger of KuppingerCole, and Gerry Gebel of Axiomatics. Before we start, some information about KuppingerCole, some upcoming events, and some housekeeping information. First of all, KuppingerCole is an analyst company providing enterprise IT research, advisory, decision support, and networking for IT professionals through our subscription services, our advisory services, and our events.
Among these events, the next one will be a German-speaking event, a so-called industry round table. The topic this time is cloud computing security and data protection, and it will be held in German; we will do the same thing in English soon as well. There are a lot of pretty interesting topics, so if you are German-speaking you shouldn't miss the opportunity to attend this event. Then the big thing, for sure, is again the European Identity and Cloud Conference, which will be held next time in April 2012
in Munich. You shouldn't miss this conference; it is the leading event around these topics in Europe, with a lot of delegates, and all the information is available at our event website, id-conf.com. So that's it from the events. Now some guidelines for the webinar: you are muted centrally, so you don't have to mute or unmute yourself; we are controlling these mute and unmute features. We are recording the webinar, and the podcast recording will be available tomorrow at our website. By the way, we will also provide the PDF versions of our presentations,
so they will also be available for download. Q&A will be at the end, but you can ask questions at any time using the questions tool of GoToWebinar. We will usually pick up the questions at the end, or in some cases, if appropriate, we might also pick questions during the webinar. That leads directly to our agenda, which consists, as always, of three parts. The first part is my presentation. I will talk about the need to share, and about the fact that it's millions, not thousands, of records, users, and other things we are dealing with.
We have to look at how our IT is really growing significantly, and we have to think about how we can handle these things in a lot of scenarios, and about the need to standardize a lot of things, especially authorization, which is where XACML then comes into play. The second part will be given by Gerry Gebel of Axiomatics, who will talk about XACML in big data scenarios and how to solve the million record challenge. And as I said before, the third part will be the Q&A. Okay, let's start with this: the need to share.
If you look at just a little bit of the history of computing, it's not that long ago. I even experienced the time, going back to my school days, when we had a midrange system, an IBM System/3x I think, or something like that: a classical centralized infrastructure. These things were mainly for internal use. Then came PCs and networking, and a lot more users.
Over time we had the emerging internet adding some things, where we have our websites with users; we started to build bigger directories, but these were very purpose-built things, always focused on a specific use case. So we had standard directory technology, but we didn't fully integrate it with all the applications we have internally. Sometimes we did, sometimes not. We integrated business partners, and we are moving more and more towards a tighter integration of customers, really opening up more systems and extending what we've started to do.
And so the days where we said, okay, we are only looking at the internal IT, at our employees, in many areas around identity, access, and authorization, I think these days are past. We really have to think about everyone, everyone who can access our information and our systems. And that is, I think, a very important point, because what we really have is a lot of corporate information, and the target we really have is to protect that corporate information.
We have to provide services, and we have to protect information when it is accessed through services. In fact, you could imagine it as one big set of information, for sure distributed in a lot of chunks of very different types, from Word documents to very large databases and all that type of stuff. And this information is accessed through different services, by different users of different classes: employees, partners, and customers.
But I think a very central part of this is that we shouldn't start by thinking, okay, we have this information accessed by that class of persons, and then say, okay, for the externals we do it differently. It's the same information, or subsets of it, potentially accessed by different partners and different groups in different ways, but it's always the same information.
And if we want to really protect that information in a consistent and, let's say, well-governed way, then we need to understand that we have to manage information access for everyone in a consistent way. That means that for this one set of information we should apply one set of policies. We shouldn't think, okay, we have our employee policies and our partner policies and our customer policies and whatever; we have information, and we have rules which regulate access to that information. So we need one approach to manage access, and, as a consequence of that,
one approach to govern that access, to ensure that we really have governance. If you have a lot of different approaches, everything separated, with a little bit built here for external users and there a new approach for accessing and securing that information, then it's virtually impossible to really govern and manage access to that information. So the target has to be that we say, okay, this is our information, these are our services.
Then we start thinking about which policies apply, and about moving towards a framework which really helps us to deal with this and to address these situations. I think this is really a change, because still today the typical approach is that we do a sort of per-system security.
We say, okay, we secure that system; then we build a new system and secure that, and so on. But many of these systems and services access the same information, and we don't really have control over, or an understanding of, whether they follow the same policies. Do they follow the correct policies? Frequently we just don't know. So we have more complexity and changing requirements, which means more applications and more services. That's another aspect of this: the number of applications we are dealing with is growing continuously.
We have new services for new types of users. We say, okay, we have to open up that system because we want to allow this business partner, this group of business partners, or these customers access to the information in specific cases, or we want to build new business processes. So the number of applications and services is growing, and I think that's one of the things which is really affecting what we are doing. We have more users, and I think that's the other point: we might have thousands, tens of thousands, or even maybe a few hundred thousand employees.
But when we look at how many customers we have, in most organizations, not in all, but in most, the number of customers is far bigger, not to speak of the potential customers, the leads, the people who are interested in our organization or have some other form of contact. So things are really getting bigger: we have more applications, we have more users, and in consequence we also have more rules. And with respect to these rules, we might have more things to consider, which is not only the user and the attributes around the user.
There's also the concept of context coming in, which we need to take into account. So if someone accesses the application with a smartphone, we might not grant him as much as when he's accessing it from his corporate network on the well-managed notebook he has. So there are a lot of other things which come in. And it's not only that we have more rules because we have a complex environment; we also have more business cases, and we also have more regulations.
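To make the idea of context as an additional input to an authorization decision a bit more concrete, here is a minimal sketch in Python. The attribute names (role, device_managed, network) and the tiered result are purely illustrative assumptions; they are not defined by the webinar or by XACML itself.

```python
# Minimal ABAC-style sketch: the same subject and resource can yield a
# different decision depending on context attributes (device, network).
# All attribute names here are hypothetical, for illustration only.

def decide(subject: dict, resource: dict, context: dict) -> str:
    if subject.get("role") == "employee" and resource.get("type") == "customer_record":
        # Full access only from a managed notebook on the corporate network.
        if context.get("device_managed") and context.get("network") == "corporate":
            return "Permit"
        # From a smartphone or an unmanaged device, grant less.
        return "Permit (restricted view)"
    return "Deny"

# Same user, same record, different context, different outcome:
print(decide({"role": "employee"}, {"type": "customer_record"},
             {"device_managed": True, "network": "corporate"}))   # Permit
print(decide({"role": "employee"}, {"type": "customer_record"},
             {"device_managed": False, "network": "mobile"}))     # Permit (restricted view)
```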
So things are also getting bigger because we have to implement new rules, adapt rules, and strengthen rules due to regulatory compliance. If you look at what has happened during the last few years, there are so many new rules which we have to enforce across a lot of systems. And a lot of these rules are not related to a specific system; they are related to information, and they have to be enforced very consistently. If you look at PII, for example, and you have PII stored somewhere, the regulation
usually doesn't care about the system which accesses the PII; it requires you to protect the PII. That's really the thing.
So again, it's about having policies which focus on the information and which work consistently towards this end. The result is that decisions become more complex. We have to take more things into account; we have more users, more attributes, a lot more complexity in there. And on the other hand, all these decisions still have to be made very quickly, especially when we think about relying on a central set of policies.
When you think about relying on centralized information, when you think about really using a dynamic authorization management system, externalizing the security out of applications, which from my perspective definitely makes sense. We had a lot of webinars around closely related topics in the past few months, and you can look at the podcast recordings of these webinars. I think there is no doubt that we have to do it, but we have to do it in a way which is still very performant.
From that perspective, we also have different scenarios, but all of them are somewhat complex. They have different complexities in different organizations, but you shouldn't underestimate any of them.
We might have situations with few users and few apps, but then frequently, or at least sometimes, in these smaller scenarios we have a lot of very specific policies, where we say, okay, these applications have very specific requirements. Then we have the situation which is one of the most interesting: many users, few applications. These are scenarios like an eCommerce example, where you have very complex queries on identity attributes and you have to deal with an immense number of attributes.
On the other hand, you might have the situation where you say, okay, I don't have that many users. "Few users" is a relative term: in a bank you might still have some 40,000, 50,000 or 100,000 employees. But the problem is that you have hundreds of applications to protect there, in contrast to an eCommerce environment with very few applications. So this is really about building a stable set of central policies which work for all these applications and protect all these things.
And again, this is a complex thing. And for sure, if you combine these things, many users and many applications, you have no chance for governance without a scalable central policy approach. You really need these things there.
And that again goes back to the need to share. A flexible environment which allows you to quickly support new use cases and which can adapt to changing policies and requirements can't rely on hardcoded, application-specific policies and attribute stores.
If you hardcode security, it won't work. We have to externalize; we have to move forward to dynamic authorization management systems. So policies and attributes have to be shared, but it has to be done in a flexible and efficient way. That's where XACML comes into play, and that's also where this million record challenge comes into play.
When we have a lot of users with a lot of attributes from different services, and a lot of policies, and we have to bring all these different things together and make decisions very quickly, make things understandable, and allow for quick forensics and quick analysis, that's where these things come into play. And that's also where I want to hand over to Gerry, who will now start talking about how we do this in reality. So, how does XACML help in big data scenarios? Gerry, it's your turn. Thank you very much, Martin.
That was, I think, a great introduction for this portion of the webinar. What you said earlier about the need to share is definitely one of the driving factors behind Axiomatics addressing what we call the million record challenge. Because, as you say, there is the need to share with customers, partners, and other outside parties, and the most sensitive and valuable data you have is typically in some kind of database or big data repository.
So that's why we're here today: to talk about how XACML-based access control can be applied to these kinds of business and security scenarios where the resources to be protected exist in very large quantities. We think this is a realm that was previously underserved by the XACML community, because we see enterprises that want to implement externalized and dynamic authorization not just for a part of their applications, but for as many as they can.
In the past, we've run up against scalability or integration challenges when dealing with these kinds of big data scenarios. But today we want to talk about how XACML policies can actually be applied to a broader range of applications now, so you can expand an enterprise's use of an externalized and dynamic authorization program.
We do use this catchphrase, the million record challenge, because it's catchy, and you can think about it as you described, Martin: millions of records, millions of users, millions of services. And it's not just the quantity of them; what makes it even more challenging are some of the multidimensional or multi-level authorization requirements that IT organizations are facing.
How do they implement these very complex rules? It could be a business rule that says who can view what kind of data. Of course it could also be a variety of regulations, depending on what industry you're in, that require protection of data and mandate how personal or sensitive data can be shared with other parties. And so where is this an issue?
Obviously databases; even the relational databases of a lot of organizations are very large. Content management systems are another logical place where this is an issue. Even something like a search engine: if you're searching through your repositories, say of unstructured data, you need to be concerned about what gets returned to the user, because you want to be sure that you return only results they are authorized to see. Big data is an interesting term; maybe some in the audience are familiar with it.
If you look at Wikipedia, they call big data a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process within a tolerable elapsed time. So, just huge repositories of data that require grid computing and the like to process.
That could be another place where access control over data is an issue. Even complex portals, customer portals or internally used portals that have lots of different widgets with many different options, can be another case where this application of XACML can be utilized. It's really any scenario with a search or filter kind of mechanism. And why is this a challenge?
Well, because the protocol for making access requests is really designed to answer permit/deny kinds of questions for a single access query. As I have here in a couple of examples: can Bob view the account balance for this client? Or can Dr. Alice update this particular patient record? Those kinds of queries are fine, right?
They result in a yes-or-no, permit-or-deny kind of response. But what if the application needs to ask another kind of question, things like: of the 50 million credit card records in this database, which ones can account managers view or update? And you can think of other kinds of combinations that might apply to these kinds of data sources. Or, if you have a customer service representative: what are all of their privileges or entitlements, so that I can render this complex multi-level portal page in one go, rather than asking 500 or a thousand different questions before rendering the portal page?
We commonly refer to these kinds of questions as reverse queries, hence the name of the product that we introduced earlier this year, the Axiomatics Reverse Query product. What it does is evaluate these reverse access control queries: in essence, under the hood, it finds the conditions under which an access can be permitted. It's also symmetrical; you can ask for the conditions under which an access would be denied. And just quickly, again, let's look at what we would call a normal or forward query.
Again, it's "can user Bob view document D?", and in the exchange between the policy enforcement point and the policy decision point we would return Permit, Deny, NotApplicable, or Indeterminate. Essentially it's a yes-or-no kind of questioning. The reverse of this is: how can I get a permit in a given constrained situation, such as which documents in region US can Alice view? And again it's symmetrical; you can turn it around and do it for a resource and ask, who can view confidential documents if that user is outside of the originating country?
What happens is that the reverse query engine returns something like this to the policy enforcement point: a permit is possible if the classification of the document is restricted, and the state is Arizona, and the case number is X92-33. These filter expressions are then used by the policy enforcement points to enforce the access rules.
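As a rough illustration of what such a reverse-query result might look like from the PEP's point of view, here is a hypothetical sketch. The data structure, attribute names, and rendering are assumptions made for illustration; they do not reflect the actual Axiomatics ARQ interface.

```python
# Hypothetical representation of the conditions under which "Permit"
# is possible, as a PEP might receive them after a reverse query.
permit_filter = {
    "decision": "Permit possible if",
    "conditions": [                              # all must hold (AND)
        ("classification", "=", "restricted"),
        ("state",          "=", "Arizona"),
        ("case_number",    "=", "X92-33"),
    ],
}

def render_filter(filter_result: dict) -> str:
    """Render the condition set as a human-readable filter expression."""
    parts = [f"{attr} {op} '{val}'" for attr, op, val in filter_result["conditions"]]
    return " AND ".join(parts)

print(render_filter(permit_filter))
# classification = 'restricted' AND state = 'Arizona' AND case_number = 'X92-33'
```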
Now, these PEPs are built for different environments. We have some in the works already, and the approach can be applied to all of the different kinds of application scenarios I described earlier. The PEP now enforces the access rules, so if it's retrieving data from a database, the result set that gets returned to the user will only contain the records that the user is actually authorized to view. You're not selecting all of the records and then trying to redact or filter them after the fact; you adjust the query that gets sent to the database.
In this example, it might be a SELECT statement where a WHERE clause is added to restrict the data that comes back to the application and the user. What's especially interesting here, in the discussion about performance and scalability over large quantities of data, is that the operation I just described happens on the query or the search or the filter. So there really is no restriction on the number of records that might happen to be in the source repository.
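The following sketch shows the kind of query rewriting described here: a PEP takes the application's original SELECT and appends a WHERE clause built from the returned filter conditions. It is a simplified illustration under assumed names, not the actual Axiomatics PEP implementation; a real PEP would also have to handle existing WHERE clauses, parameter binding, and escaping.

```python
# Simplified sketch of a database PEP: the original query is narrowed
# before it ever reaches the database, so only authorized rows are read.

def rewrite_query(original_sql: str, filter_conditions: list[str]) -> str:
    """Append authorization filter conditions to a SELECT statement.
    Assumes the original statement has no WHERE clause, for brevity."""
    if not filter_conditions:
        return original_sql
    where_clause = " AND ".join(filter_conditions)
    return f"{original_sql} WHERE {where_clause}"

original = "SELECT account_id, balance FROM credit_card_records"
conditions = ["region = 'US'", "classification = 'restricted'"]

print(rewrite_query(original, conditions))
# SELECT account_id, balance FROM credit_card_records WHERE region = 'US' AND classification = 'restricted'
```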
In fact, we think in some cases it would make the system more efficient and faster, because you are potentially returning a smaller set of data back to the application or user. This is where the power of this approach really comes into play: it happens at the filter level, not at each record in the database. If you thought about how you would do this with a regular XACML approach, you would ask, well, can I view record one? Can I view record two?
And if you had 50 million records, well, maybe in five minutes or so you could come back and it would still be processing, but here it's an immediate response, and the proper data is returned. What's interesting to consider is how people are doing this today, or before there was something like the Axiomatics Reverse Query capability.
Of course, you see a lot of proprietary authorization systems or policy languages used to do similar kinds of things. You might use obligations in XACML policies, because this could be one way you try to deal with the situation. On the next slide,
I have some issues with these kinds of approaches listed as well. The third one here is somewhat interesting, because we've seen customers that actually store entitlement data within the database. So in fact you've polluted your system or your database with security data, instead of keeping it clean and only containing the application or customer data. We've seen other customers that actually pre-calculate all the possible permissions and then store those results for use by applications.
And then there are various other kinds of workarounds, which I'm sure many of you have seen, and which we've seen. And of course we also see customers that live with the risk of coarse-grained access; that is, they've got joint venture scenarios set up where they might be working with competitors, and they're actually sharing more data with those competitors than they would prefer to, but they're living with that risk because they don't necessarily have a means of working around it.
So, as I said, there are lots of issues with some of those proposed solutions. They're not standards-based; we're introducing an approach where you can actually use XACML policies to secure this kind of data. These systems can definitely be difficult and complex to manage, and of course complex and difficult to certify and to make sure they're adhering to regulatory requirements. And we see a lot of situations where the security is actually inadequate, right?
As I was just mentioning, the security level is really too coarse for the situation. In some cases we've seen that a customer cannot generate revenue because they cannot securely share data with customers, so they cannot introduce a new service, because they don't have adequate and dynamic security controls built around it. We've also seen such systems be very error prone, because changing the system is very difficult to do, and it's very difficult to keep track of changes in the system.
And finally, these older methods or mechanisms just don't adapt quickly or easily enough to changing business scenarios and changing business requirements. So, to summarize some of the things we've been talking about here: we think that using something like the Axiomatics Reverse Query, you can now apply XACML access policies to this broader range of business use cases. I'd like to say we break through a glass ceiling here, the old permit/deny paradigm, so we're no longer restricted by that in an XACML-based authorization system.
Now we can use XACML for things like database access control and use your XACML policies to protect data at the row, column, and cell level. This also means that you now have a broader set of business applications, application platforms, and data types where you can apply a consistent level of access control across all of these systems.
Auditors definitely appreciate this, and compliance officers also appreciate this ability, because it makes their life easier, but it also gives them better proof that there are proper security controls in place to meet these very specific regulatory requirements. And next up here: you can now apply access control at the filter mechanism.
Again, you're doing this at the filter or query point in an application, so you get the benefits of standardized access control without performance limitations. And this is the key to dealing with these incredibly large data sets.
We think this is going to enable a lot of new business opportunities, because it is going to permit the secure sharing of sensitive data with partners and customers. And it's this need-to-share model that we see so prevalent when we're talking to customers. Now, I've not gone into a lot of detail about how the system works,
all the magic behind the scenes; we're happy to set up some one-on-one time to discuss that in more detail. But for now, we're happy to take some questions here that Martin can moderate, and see what your reaction might be. So, Martin, for now we will turn it back to you to address some of the questions from the audience. Okay.
Thank you, Gerry. We have the first questions here. If anyone else has questions, please enter them in the questions area of the GoToWebinar control panel. I'll just leave this slide up for two or three minutes so that you can write down the email address if you want to contact Axiomatics directly. So, I think the first question I have here is: what are the problems with XACML obligations? They are standards-based. Yes, they are part of the standard.
But if you think about the database access example, if you used obligations to send the WHERE clause back to the policy enforcement point, these obligations and policies have to be written in such a form that they are more complex to manage. Now, I'm not adequately equipped to describe all of those details, and we could have a follow-up call to discuss that further, but it does mean you have to write additional policies.
You have to pre-calculate all the possible conditions that could occur, and it's just not as dynamic, and it's less efficient to process than the reverse query approach. Okay. The second question is, I think, a very interesting one: is there a best practice for how to move to XACML from old access control mechanisms?
Well, there are a number of best practices depending on the nature of the application, but we do work with customers to do a smooth migration from the existing to the new, and you can do that in a transition phase where you're actually doing both. You leave the current authorization in place, and then you start to install the XACML-based method, until ultimately you turn off the old and turn on the XACML-based method.
There's a variety of ways to do that, again depending on what kind of system you're talking about, but there are definitely best practices, so you make it a smooth transition, you don't incur downtime, and you ensure that security is maintained throughout that transition phase.
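As an illustration of the transition phase described here, running the old and the new authorization side by side before cutting over, here is a hedged sketch. The functions legacy_decision and xacml_pdp_decision are hypothetical placeholders, not real APIs; the point is only to show a shadow-mode comparison that keeps enforcing the old decision while logging disagreements.

```python
import logging

# Hypothetical stubs standing in for the two authorization paths.
def legacy_decision(user: str, action: str, resource: str) -> bool:
    # Placeholder for the existing, application-specific check.
    return True

def xacml_pdp_decision(user: str, action: str, resource: str) -> bool:
    # Placeholder for a call to the externalized XACML PDP.
    return True

def authorize(user: str, action: str, resource: str) -> bool:
    """Shadow mode: enforce the legacy result, evaluate the new path in
    parallel, and log any mismatch so policies can be tuned before cutover."""
    old = legacy_decision(user, action, resource)
    try:
        new = xacml_pdp_decision(user, action, resource)
        if new != old:
            logging.warning("Authorization mismatch for %s %s %s: legacy=%s xacml=%s",
                            user, action, resource, old, new)
    except Exception:
        logging.exception("XACML evaluation failed; legacy decision still applies")
    return old  # switch to returning `new` once confidence is established
```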
However, it definitely also depends on the type of systems you have to migrate. The better the architecture of these systems is, the easier it is to migrate.
Yes, there are definitely a lot of variables involved. It's hard to come up with a single recipe that is going to work in every case, but there are many techniques to ensure a smooth transition. Another question here: if ARQ is a pre-processor, does it have to be run each time there is a significant change to the underlying data, to address latency in the access control filters? What do you mean by changing underlying data?
If the data format changes? I think the point is: is ARQ acting as a pre-processor, or is it acting in real time? Oh, it's in real time.
Yeah, definitely not a pre-processor. Okay. So it really acts at the point when you query and it goes to the target systems, which means you don't have to pre-process anything and you don't have to look at changes in the underlying data. That's right.
That's one of the problems with pre-calculating or pre-processing results: if the user data changes, then maybe you're granting access to someone who should not have it, or you have a new person who is not in the result set, so they can't get access until tomorrow or next week when you do the calculations again. And, as you say, if the underlying data changes, then that also changes the results. There are lots of issues with pre-calculating. So, any additional questions from the audience?
If yes, then please enter them. So, from an application point of view, could you just very quickly explain what the flow is? Yes. Take the example of a search engine: say you are working in law enforcement and you want to do a search for a certain kind of individual across a number of databases. That search point is where a policy enforcement point, in the XACML vernacular, sits. You would have a PEP at that location that would intercept the request.
Then it would send this query to the ARQ engine, saying, okay, under these conditions, how can this user get a permit for these search results? The filter conditions would be returned to that PEP, which would modify the search string so that only the authorized data would come back to this user, to this investigator.
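To illustrate that flow end to end, here is a minimal, hypothetical sketch of such a search PEP. The reverse_query function and the filter syntax are assumptions made for illustration, not the real ARQ API; the point is simply that the user's search string is narrowed with the returned conditions before the search is executed.

```python
# Illustrative search PEP: intercept the query, obtain filter conditions
# for the current user, and narrow the search string before executing it.

def reverse_query(subject: dict, action: str, resource_type: str) -> list[str]:
    """Hypothetical stand-in for a call to the reverse query engine.
    Returns the conditions under which Permit is possible."""
    return ["jurisdiction:US", "classification:restricted"]

def intercept_search(user: dict, search_string: str) -> str:
    conditions = reverse_query(user, "search", "case_file")
    # Append the authorization filter to the user's own search terms.
    return search_string + " AND " + " AND ".join(conditions)

authorized_query = intercept_search({"id": "investigator42"}, "name:J* Smith")
print(authorized_query)
# name:J* Smith AND jurisdiction:US AND classification:restricted
# results = search_backend.execute(authorized_query)  # only authorized data comes back
```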
Okay, perfect. Are there any restrictions on defining the policies which are used by ARQ? So can you in fact take any XACML policy and turn it around for ARQ? Yes.
It just uses standard policies. So you define policies that, if it's a database, protect the particular columns in the database; they're defined as resources. It's similar for a content management system.
You would define certain kinds of documents and create rules based on the metadata attributes of those documents, for example their classification, or whether they're listed as customer-sensitive data or restricted by export control.
So you apply these regular XACML policies and rules to the content, and yes, you can use any kind of XACML policy in the ARQ system. Okay. There's one other question: is XACML used only for the policies, or do you actually use the XACML request/response at runtime as well, between the PDP and the PEP? So yes to the first part, it's standard XACML policies, but the request/response is different from the currently defined XACML request/response protocol, because you're sending different kinds of elements to the reverse query engine. Okay.
In your opinion, what should be the key selling points of XACML for business? Well, taking that as a general question, I think there are a number of them. One is to enable this kind of data sharing that you talked about earlier, Martin; a huge business driver is being able to share the right data with the right recipients.
If that's your customers, it's making sure they can only see their balances or their shipping information or their invoices, and not those of another customer. So being able to apply that kind of context-aware authorization is a big value proposition for XACML. Okay. If you look at cloud computing models, the XACML architecture is ready for the cloud.
So if you want to move workloads from your data center to a place where you have more cost-effective or efficient compute power or storage, or what have you, then you can protect the data using the XACML architecture regardless of where it's hosted, and you get a consistent access control level across applications. Right. Okay. I'll pick the last question I have here: the focus of XACML here is on SQL databases. What happens if attributes and conditions cannot be applied to the underlying data model? Is the policy wrong, or the data model?
Well, can that even happen? Could you please repeat it? I'm not sure I quite understood it. What happens if attributes and conditions in your reverse queries cannot be applied to the underlying data model in the database?
I don't see why they wouldn't be able to be applied. Okay, so maybe that's something you can answer offline in more detail with the person who has been asking it. Sure.
But I can just make a couple of comments: the conditions are expressed in the XACML policy. Yeah. And I've got feedback from the person who asked the question that it would be fine for him to take this offline. Okay, fair enough, no problem. There's another question about whether you observe performance issues, in the sense that it can delay the PEP returning results to the application. What is your experience so far?
No. In the processing performance benchmarks we've seen so far, some relatively complex reverse queries can be processed in about a millisecond or less at the server. And once the results are sent back to, say, the database engine itself, it just carries on as it normally does.
So it does not negatively impact the performance there. As I said during the talk, it could in fact improve it if you are returning a smaller set of data back to the application or user, because you've restricted what content is going to be displayed. Okay. I think it's a pretty cool thing you've been talking about. We had a lot of questions, and for sure everyone should feel free to forward questions directly to Gerry or me.
It's now up to me to thank Gerry and the attendees for participating in this KuppingerCole webinar, and I hope to have you again soon in one of the next webinars we are doing. Thank you. Thanks very much, Martin.