All right, ladies and gentlemen, welcome to another KuppingerCole webinar. My name is Alexei Balaganski, Lead Analyst at KuppingerCole, and today I am joined by Matthew Yeh, who is the Director of Product Marketing at Delphix. The topic for today is security versus innovation: why not choose both? It's a slightly vague topic, I have to confess, so let me just remind you: we are going to talk about securing your data, whether it's on premises, in the cloud, or somewhere in between. Let me start by saying a few words about KuppingerCole.
We are an independent analyst company, headquartered in Germany but with a pretty global footprint, from the US all the way down to Singapore and Australia, founded 14 years ago and focusing since then on information and cybersecurity as well as identity and access management, governance, risk management and compliance.
One thing we do is organize different events, ranging from free online ones like this webinar to pretty substantial physical events, and I would like to draw your attention to a couple of the upcoming ones.
Next year, our flagship conference, the European Identity and Cloud Conference, will be held for the 13th time in Munich next May, and we have already pre-announced our next cybersecurity-oriented summit in Washington, DC next October. Stay tuned for more information; you will find more on our website. A few guidelines for the webinar today: you are all muted centrally, so you don't have to worry about that. We are the only people who will be talking today, and we will record everything and publish the recording on our website by tomorrow
at the latest, and everyone will get a link to the recording. We will have a Q&A session at the end of the webinar, but I strongly encourage you to submit your questions
the moment you have them. Please use the Questions box in the GoToWebinar control panel you have on your screen, probably on the right side. The agenda for today's webinar is pretty standard.
First, I will do a general analyst-focused introduction to the topic we are discussing today. Then I will hand over to Matthew, who will do a deeper dive into the technological aspects and present the actual platform Delphix is offering. And at the end, we will have the questions and answers session.
And again, please submit your questions any time you have them. I will start our webinar with KuppingerCole's favorite picture: everything is connected. We are living in a hyper-connected world, and the proverbial, notorious digital transformation has profoundly changed the way we as a society communicate. Our businesses, partners, customers, leads, even robots and smart things
and devices all communicate, all exchange digital information, which has to be collected somewhere, processed, managed, stored, secured, you name it. And this digital data is everywhere.
It's no longer protected in a single silo behind a castle wall and a moat, or rather behind a massive digital firewall. It's everywhere: in your on-premises data center, in the cloud, or maybe multiple clouds, somewhere on the move along with your mobile workers, your partners or your connected devices, maybe even in your plant or manufacturing facility, or anywhere else. And the problem, of course, is that the more data you collect, the more important this data becomes for your daily business.
The more valuable it becomes to you, the more valuable it becomes to your competitors, hackers or nation states, basically all the bad guys out there. And unfortunately, the statistics show that there is no positive trend in this development: data breaches are becoming more frequent, more costly and bigger in scale.
On the slide, I've collected a few key statistics according to one very nice website which tracks real-time worldwide breach statistics: every day, 6 million personal records are stolen on average.
And unfortunately, for a typical company it takes almost 200 days to even notice that something is going on. And again, these attacks are only increasing; there is no downward trend in any vertical, and unfortunately only a small fraction of those breach attempts were one way or another prevented, or the data was saved by some kind of mitigation solution. A more interesting statistic is about the so-called mega breaches, which the Ponemon Institute defines as breaches involving at least 1 million records stolen; for those, the costs are dramatically higher on average.
And the detection times are also dramatically longer. The latest one you've probably heard about is the Marriott hotel chain, which was breached four years ago, actually.
And only in September, I think, did they actually notice that something was going on, and only last week it was made public: 500 million customers affected, and lots of financial and other sensitive data stolen.
I'm really curious how that aligns with the GDPR requirements, but more on that later. With all this crazy stuff happening in the world, what is the reason behind it? Why do we have no possibility to fight this rising tide? First of all, because our traditional way of dealing with these problems, that is, letting our IT departments deal with them, is no longer efficient enough. There are way too many heterogeneous assets spread across a typical corporate infrastructure,
be it again on premises, in the cloud, or rather multiple clouds, or somewhere else in the supply chain within your business partners' infrastructures. It has just grown completely out of control, the number of interactions between those assets is even higher, and the overall attack surface is growing rapidly.
Just managing the daily routine stuff is way out of hand. It's too complicated, it's too time consuming, and for many companies it's still actually manual.
The whole situation is not really improved by the fact that control over those infrastructure systems is still split between different teams: security teams, ops teams, development teams, database administrators, application owners, you name it, who have different requirements, different budgets and different reporting chains, if you will.
And there are few opportunities to communicate directly with each other, and even fewer opportunities to make the problems visible to management by means of some kind of clear KPIs, key performance indicators, showing that the situation is not just bad, it's actually rapidly becoming even worse. This leads to massive problems: IT departments have insufficient visibility into what's going on in the infrastructure.
There are definitely not enough people to fill the skills gap, and not enough people to react to those breaches and other security incidents in time to prevent catastrophic outcomes. Or, even worse, the money that is available is spent inefficiently.
Again, it's split between different teams, and it just leads to IT departments being seen as a cost center rather than a business-level enabler for the actual lines of business. The way it's supposed to work now, in the modern agile, connected business,
if you will (a term that our founder Martin Kuppinger coined a few years ago), is that IT is no longer in control; it's the business that decides what's supposed to be done first, and IT must follow. Security is no longer an IT discipline; it's something which has to involve all the people in every enterprise, in every line of business, operations, development and any other department. And of course, with all the recent developments in the compliance and regulation field, namely the dreadful GDPR regulation which we have since May 2018, security breaches are no longer just hugely problematic.
They are also hugely more costly, with up to 4% of your global turnover that you will have to hand over to the authorities for a data breach not handled properly.
You really have to rethink your priorities massively. Again, a data leak or a data breach is no longer an IT problem. It's purely a business risk, and it has to be dealt with using appropriate business risk controls. And let's talk about that a little bit, just for those who still think in terms of: well, I am not in the EU,
I am not affected by GDPR, so why should I bother? First of all, you may well be affected even if you are not in the EU. And there are also some 20 different global and regional compliance regulations which affect companies dealing with the cloud and with data in the cloud. Some of them are probably only relevant for American companies working in the government contracting field, others for Asian or European companies. But anyway, compliance is complicated; you have to deal with this hugely complicated compliance landscape, and it won't get easier.
And finally, as we are talking about data, about digital information, one has to remember that the whole scope of data protection has dramatically expanded as well. Maybe even 10 years ago we would only talk about a handful of relational databases, probably Oracle instances, somewhere in your on-premises data center behind the firewall, where only a few people had direct access to sensitive information, the database administrators. Nowadays it has changed dramatically.
A typical company would probably run not one, not even 10, but 20 or even more different database types just to accommodate all the different kinds of data they have to manage, both structured and unstructured. Remember that files and records stored in your enterprise business applications are sensitive information just as well, and you still have to protect them. There is also a widely expanded range of actual business reasons for which you have to manage that data.
So besides the usual production database, which actually holds the original of the data, you have to manage multiple copies used for development and testing of applications, backup copies and archives, analytics data for your data scientists doing business intelligence, as well as training data for machine learning,
if you are that advanced already, and maybe a copy for legal reasons, you name it. The problem is that for each sensitive data record you have in your control as an original, you may end up having 10, 20, 50, even a hundred copies of the data somewhere totally outside of your direct control. And you are still responsible for securing that data. Why?
Because... well, because reasons. Whenever people are thinking about moving to the cloud, they obviously assume that the cloud provider is supposed to take care of it, right?
The provider has so many different security features to offer. Well, yes and no. The typical shared responsibility model for your data in the cloud differs depending on how exactly you're using that cloud, but in any case, you as a customer are still responsible for protecting the data.
You are legally still the data owner, the data controller, and it's up to you to face all the consequences of breaking that compliance regulation. All this, of course, leads to the situation which companies like Delphix and analysts call data friction. Basically, the more data you have, the more difficult it becomes to manage, and the more parties, so-called data consumers, want to access the data, the more complicated it becomes to not just secure the original, but to provision safe and compliant copies of the data for various business purposes.
And seemingly, this is an either-or problem, right?
If breaking a security or compliance regulation is so expensive, can you even afford to think about making the life of your data consumers any easier? Maybe the fact that your data scientist has to wait a whole week to get a properly cleaned, filtered and vetted copy of your production database for some business intelligence research is okay. Or maybe not. The whole reason we are doing this webinar today is to show that this is actually a completely false economy.
As you are probably already aware, the same or a similar dichotomy has been a hot topic for traditional developers for quite a few years, and it eventually led the whole DevOps methodology to appear, which tries, through intelligent automation and organizational restructuring of your business and development processes, to ensure that all these processes are made as easy, as convenient, as agile and as fast as possible, which dramatically reduces complexity and improves the security and compliance of your development process.
Well, you know what, the same developments have now arisen for data-related operations. The whole DataOps methodology was originally created by traditional data scientists who were, as I mentioned earlier, tired of waiting a whole week to get a copy of sensitive data for doing some analytics.
They decided that they wanted to apply the same approach as developers did with their DevOps methodology, and also to incorporate some very well established practices from agile development and lean manufacturing, to ensure that the friction between owners of data and data consumers is reduced as much as possible at each step of the workflow. So all those steps listed on the slide have to be taken care of individually, and every step of that data lifecycle, that data workflow if you will, is refactored to be as lean, as agile and as automated as possible.
My question is: can this actually be translated into practical applications? Yes, it can. We could probably spend another half an hour talking about different approaches, but today we are focusing on one, namely data virtualization. The idea behind it is actually as simple as possible: instead of making a copy of your database for every data consumer, you just give them a way to access it directly.
So any data scientist, any developer, any tester who earlier would have to wait and somehow obtain a standalone copy of your production database would instead be able to make a virtual copy, which to them, for all intents and purposes, appears as a proper, locally hosted physical database. At the same time, the technology behind it ensures that this data is automatically compressed, optimized, cleaned, vetted and made compliant with all applicable security and compliance policies, to ensure that no sensitive data would ever leave your actual premises, even though your business intelligence software is probably running in the cloud, your developer is doing his work on his laptop in a Starbucks cafe, and your archive copy is hosted somewhere on a tape drive in your on-premises data center.
All of those copies are actually virtual: they have very little footprint of their own, and all the magic of synchronizing the data (the exchange works in both directions, by the way) is taken over by a very specific and very tangible data virtualization platform. Probably the biggest advantage of such a platform is that it not only reduces friction in a way that makes your data safer and more compliant, it also makes it much more accessible through self-service and intelligent automation.
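To make the "very little footprint of their own" point a bit more tangible, here is a minimal copy-on-write sketch of the idea behind such virtual copies. It is purely an illustration under my own assumptions, not Delphix's actual implementation: each clone shares the blocks of one synchronized base copy and only stores the blocks it changes.

```python
# Minimal copy-on-write sketch of "virtual" data copies (an illustration of
# the general idea only, not the actual Delphix implementation).

class BaseCopy:
    """The single synchronized copy of the production data, stored once."""
    def __init__(self, blocks):
        self.blocks = blocks          # e.g. {block_id: bytes}

class VirtualCopy:
    """A clone that shares the base blocks and stores only its own changes."""
    def __init__(self, base):
        self.base = base
        self.delta = {}               # only the blocks this copy has modified

    def read(self, block_id):
        # Reads fall through to the shared base unless this clone changed the block.
        return self.delta.get(block_id, self.base.blocks[block_id])

    def write(self, block_id, data):
        # Writes never touch the shared base, so every clone stays independent.
        self.delta[block_id] = data

    @property
    def footprint(self):
        # The clone's own storage cost is just its delta, never a full copy.
        return len(self.delta)

base = BaseCopy({i: f"block-{i}".encode() for i in range(100_000)})
dev_copy = VirtualCopy(base)          # "provisioned in minutes": nothing is copied
dev_copy.write(42, b"test data")
print(dev_copy.read(42), dev_copy.footprint)   # b'test data' 1
```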
Basically, you no longer have to wait: you just click a button and a copy will be provisioned for you automatically, and you are in complete control of this copy of the data, of course within the limits of a corporate-wide security policy, in minutes instead of weeks. And on that note, I believe it's time to hand over to Matthew to provide a more technical explanation of how such a platform works and which advantages it brings to your data operations. Matthew, it's your turn now.
So thanks Alexei.
I think you did a great job of describing the challenges, concerns and priorities that we also see here at Delphix. We've arrived at this very interesting place where our customers, pretty much without exception, tell us that they absolutely need to go faster. They're worried about getting left behind, but security is such a concern that they become paralyzed, even when they can clearly see the competition moving ahead.
For example, we have customers that know they need to take advantage of the cloud for the agility benefits, but they're also scared about operating in that new environment. We have customers that know they need to bring DevOps into more parts of their organization, bring in more agile methods for software development, but they get hamstrung by an inability to make those processes secure. So for us at Delphix, the one thing that really sits at the center of the dilemma between speed and security is data.
All of the key transformation initiatives that your organization must undertake, the things you see here in the triangle on the slide, tie back to being able to efficiently secure, manage and deliver the data critical to driving business innovation.
So ensuring that data is available to fuel your most important projects requires enterprises to have the proper controls in place, right? Data needs to flow across the enterprise, but it needs to do so in a way that's also very safe and secure.
And I'd argue that this responsibility rests with multiple stakeholders from different parts of the business. It's not simply a CISO or CSO concern; it's a concern that touches the entire organization, including the CIO, risk and compliance organizations, cloud teams, application owners, et cetera. And given the breadth of stakeholders involved here, each with their own individual requirements, it's clear that data and data security challenges can't really be addressed with a point solution, or even really a set of point solutions.
Instead, we really believe modern enterprises need to adopt a holistic approach to data security.
Unfortunately, I would argue that many current approaches to data security are anything but holistic. When we talk to security teams, and perhaps application teams, one of their number one complaints about the solutions they evaluate and perhaps adopt is not that they don't work.
In fact, they say most things they adopt work quite well. The problem is that they operate well only within a very specific context, and the customer has to go out and adopt a series of solutions to really make progress. They might need to buy one set of solutions for a specific region or line of business, another set of solutions for an application that supports a particular part of their business. And this ends up increasing complexity and costs to the point where the solution loses its value.
And I think a byproduct of this overhead, this complexity is that it slows down your people and processes, and it doesn't align well with your goals around speed of innovation.
So this complexity is really exacerbated in an environment like the one that Alexei just described, where data is growing at a tremendous rate. Much of this sprawl is due to the habit that organizations have of copying and distributing versions of data from all the different sources that generate new data.
And as Alexei mentioned, for every original production data source, organizations are making 10 or 20 quote-unquote non-production copies: for development, for reporting, for analytics, for testing, for backup. We do need this data to drive innovation, but the key is really finding a way to handle all of this data in a way that's manageable and very responsible.
So at Delphix we think that businesses are really looking for a different approach to managing data. One that works throughout the entire enterprise, instead of just parts of the enterprise.
One that really puts security at the heart of how you build software that drives innovation. And one that speeds you up instead of slowing you down. At Delphix, the recipe that we see working is an approach to security that achieves the things here on the slide. The solution must help businesses understand their data and provide an enterprise view of where data resides, who has access to it, who needs access to it, and how risky or sensitive it is. This is really important given the level of sprawl that I just alluded to. The solution also needs to protect and control your data.
So this means reducing privileged user risk, securing personal data from breach, and building control measures into how your data is distributed, both internally and externally. And finally, your solution needs to be embedded in how you work, right? Data security should be designed into the key processes that drive your business rather than bolted on after the fact. This means the solution should easily integrate with the other technologies your business depends on and ensure that your controls are really adopted and effective.
At this point, I want to transition and talk a little bit about our solution that we call the Delphix Dynamic Data Platform, something that provides many of the capabilities that I just underscored. Ultimately, the outcome that we want to achieve with our platform is ensuring that data flows freely yet safely; we help you mitigate security and privacy risks while also allowing you to easily move the data that fuels your most important projects.
So let's go into how the platform works. Starting on the left and moving to the right, we'll walk through sort of a day in the life, a workflow of how Delphix provisions data. Delphix is a software solution. It installs as a virtual appliance, either in your on-prem data center, or it's cloud ready: you can put it into a cloud environment like Amazon Web Services or Microsoft Azure. It starts by collecting data.
Oftentimes this is production data that resides in a relational database like Oracle or SQL server. We can also bring in data from file systems.
So we will non-destructively ingest that data, and we'll stay synchronized with the production sources as they change over time. Next, we'll apply the data virtualization that Alexei talked about: we can spin up very space-efficient, lightweight virtual data copies that are fully readable and writable. These copies can be provisioned in a matter of minutes, they can be spun up or down automatically, and they can be provisioned to a specific point in time.
Remember that we're keeping track of production data changes on a very granular basis, and that allows us to quickly provision virtual copies as of any point in time. Once we've provisioned copies, those data copies can be secured with a masking solution. We'll go into a little bit more about what masking does, but essentially it allows you to eliminate sensitive information
that's in the copies you provision with Delphix. Once we've secured those copies, we give you functionality to manage them.
Again, you can spin them up, you can deprovision them when you want to, and you can audit usage of those copies. This gives you great governance over the data and allows you to determine who has access to what data, when and for how long. And you can see on the right, those copies will be automatically provisioned to end users: it could be a developer, it could be a tester, it could be a data scientist, and those data consumers have self-service control over the data.
So the data copies that we provision are packaged and handed off to end users in the form of what we call a data pod, a self-service data pod. What a pod is, essentially, is a collection of virtual data copies.
They could originate from different data sources, and the copies can include a mix of unmasked and masked copies, but essentially the pod allows the end user to access data very quickly. The pod is also associated with self-service data controls. So an end user can do things like click a button and instantly refresh a data copy, so that the copy reflects the latest state of production data.
We can even do things like bookmark, or create save points within the data as the data changes, and then go and do some destructive testing and quickly rewind a data
copy back to the bookmarked state. We have functionality that enhances collaboration: we allow end users to branch data as they might branch versions of code. For example, say you are a tester and you're testing against a virtual data copy and you find a bug in the application.
What you could do is take your copy, split that copy into two different copies, and then pass one virtual data copy to a QA engineer, or rather a developer, for the bug fix. So all of these capabilities can be driven via a self-service model. The end user has a GUI that they interact with to manipulate the data, and all of these features can be accomplished without intervention from an administrator.
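For readers who prefer an example, the same self-service operations could also be driven programmatically. The sketch below is purely illustrative: the engine URL, endpoint paths and parameter names are my own assumptions rather than the documented Delphix API, but it shows the refresh, bookmark, rewind and branch pattern just described.

```python
# Hypothetical self-service data-pod workflow. The engine URL, endpoint paths
# and parameter names are assumptions made for illustration, not the
# documented Delphix API.
import requests

BASE = "https://data-platform.example.com/api"        # placeholder engine URL
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"   # placeholder credentials

pod = "qa-team-pod"

# Refresh the pod's virtual copies to the latest synchronized production state.
session.post(f"{BASE}/pods/{pod}/refresh")

# Bookmark the current data state before running destructive tests.
bookmark = session.post(f"{BASE}/pods/{pod}/bookmarks",
                        json={"name": "before-destructive-test"}).json()

# ... destructive tests modify or corrupt the data here ...

# Rewind the copies back to the saved state with one call.
session.post(f"{BASE}/pods/{pod}/rewind", json={"bookmark": bookmark["id"]})

# Branch the pod so a developer can investigate a bug on an identical copy
# while the tester keeps working on the original branch.
session.post(f"{BASE}/pods/{pod}/branches", json={"name": "bugfix-1234"})
```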
So I want to come back and elaborate a little bit on masking, because it's a key part of making sure that the data that Delphix delivers is secure.
What a masking solution does is transform data by changing sensitive data values into realistic but fictitious versions. So if you're, say, a developer or a tester, you don't necessarily need the real information resident in the data, but you do need that data to look and feel and operate like the real thing. That's essentially what a masking solution does, and this capability is integrated into our platform. So Delphix provides an end-to-end solution for first pinpointing the sensitive data that you might have in your data copies.
Then we very efficiently apply masking to the data to ensure that sensitive information such as names, email addresses, social security numbers and the like is eliminated. This essentially neutralizes risk in the copies of data that Delphix deploys. And I think the key differentiator for the Delphix solution is that masking is integrated with a data delivery solution. Many masking projects fail, not because the masking solution itself doesn't work;
it's because organizations fail to operationalize a process for moving that masked data. Delphix, on the other hand, is very powerful because it lets you easily distribute that masked data on-prem or in the cloud, it works across all the different data sources you depend on, and it preserves referential integrity across the masked data for all those different sources.
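To illustrate what preserving referential integrity can mean in practice, here is a minimal sketch of deterministic masking, under my own assumption that a keyed hash picks the replacement values: the same real value always maps to the same fictitious one, so a customer masked in one database still joins correctly to their masked records in another. It illustrates the general technique, not Delphix's actual masking algorithms.

```python
# Minimal deterministic masking sketch (the general technique, not the actual
# Delphix masking algorithms): the same real value always maps to the same
# fictitious value, so joins across masked sources still line up.
import hashlib
import hmac

SECRET = b"rotate-me"                  # masking key, kept by the data owner
FIRST_NAMES = ["Alice", "Bram", "Chen", "Dara", "Elif", "Femi"]

def _bucket(value: str, size: int) -> int:
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % size

def mask_name(name: str) -> str:
    # Realistic but fictitious, and deterministic for a given masking key.
    return FIRST_NAMES[_bucket(name, len(FIRST_NAMES))]

def mask_email(email: str) -> str:
    local, _, _ = email.partition("@")
    return f"{mask_name(local).lower()}{_bucket(email, 10_000)}@example.com"

# The same customer appearing in two different databases masks identically,
# so referential integrity between the masked copies is preserved.
crm_row     = {"name": mask_name("Maria Jones"), "email": mask_email("maria@corp.com")}
billing_row = {"name": mask_name("Maria Jones"), "email": mask_email("maria@corp.com")}
assert crm_row == billing_row
```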
So let's go through a couple of scenarios to make these capabilities more concrete.
In this example, the Delphix Dynamic Data Platform is going to efficiently deliver masked production data to development and testing teams. Starting on the left here, we ingest data from a production source. This could be a database underneath a custom-built application or a packaged application, say SAP or Guidewire or Oracle EBS. We ingest the data, we virtualize it, and we can provision a virtual copy. Then we'll apply our masking to that copy to eliminate personal or sensitive information.
And then, rather than making and moving new physical copies of that masked data, what Delphix can do is create child copies from that masked version and distribute those copies: package them into data pods and hand them to developers and testers, who can then control and access those copies via self-service. This allows you to distribute as many masked copies as you need, very quickly and very efficiently.
We can go through this next scenario.
This next scenario illustrates how a customer might use Delphix within the context of a hybrid cloud environment, in which the customer wants to, say, keep their production data in their on-prem data center, but they'd like to take advantage of a cloud environment for development and testing. So what happens here is you would have one instance of Delphix in your on-prem data center. It synchronizes with the on-prem production source.
It will take the data, it will mask the data, and then it can replicate only the masked data to a second instance of Delphix that resides in, say, AWS. From there, from that second instance of Delphix, data operators can provision data pods, again, to developers and testers within the cloud, and those copies will have masked data. Now, this process of moving data from on-prem to the cloud is something that happens over and over and over again.
What Delphix allows you to do is, rather than having to do a full migration, a full upload of all your on-prem data to the cloud, the replication between the first instance of Delphix and the second instance stays synchronized, such that on a continuous basis incremental change data is brought from on-prem to the cloud. So if the downstream copies that developers and testers are accessing need to be refreshed, they can be refreshed instantaneously, without requiring that full process of migrating the data from on-prem to AWS to be repeated.
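As a rough, self-contained illustration of that mask-then-replicate loop (my own toy model, not Delphix code), the sketch below masks rows on-prem and ships only the rows that changed since the last cycle to a cloud-side engine.

```python
# Self-contained toy model of the hybrid flow described above: mask rows
# on-prem, then ship only the changed rows to the cloud-side engine on each
# cycle. Illustrative only; this is not Delphix code.

def mask(row):
    # Stand-in for the real masking step: replace the sensitive fields.
    return {**row, "patient_name": "MASKED", "ssn": "***-**-****"}

class CloudEngine:
    def __init__(self):
        self.rows = {}                          # masked data held in the cloud

    def apply_changes(self, changes):
        self.rows.update(changes)               # only deltas cross the wire

class OnPremEngine:
    def __init__(self, production):
        self.production = production
        self.last_synced = {}

    def replicate_to(self, cloud):
        # Find rows added or changed since the last cycle, mask them, ship them.
        delta = {key: mask(row) for key, row in self.production.items()
                 if self.last_synced.get(key) != row}
        cloud.apply_changes(delta)
        self.last_synced = dict(self.production)
        return len(delta)

prod = {1: {"patient_name": "Maria", "ssn": "123-45-6789", "claim": 250}}
onprem, cloud = OnPremEngine(prod), CloudEngine()
print(onprem.replicate_to(cloud))               # first cycle ships 1 masked row
prod[2] = {"patient_name": "Omar", "ssn": "987-65-4321", "claim": 90}
print(onprem.replicate_to(cloud))               # next cycle ships only the new row
```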
So some key benefits of our solution.
The first piece really relates to accelerating application releases: your data consumers, developers and testers don't have to wait for data. In many large enterprises we see that provisioning data copies takes days or weeks or even months; with Delphix, developers and testers have instant access to data that they can control via self-service. So this helps compress development and testing cycles and eliminates wait times.
We've talked about the security of the data: with masking, Delphix allows enterprises to pinpoint their sensitive data, effectively mask that data, and then distribute masked data very efficiently to any environment. And then we also enable organizations to more easily adopt the cloud.
So Delphix is helpful in scenarios where companies want to lift and shift applications or, as we just discussed, in helping organizations take advantage of hybrid cloud models, where they'd like to, for example, keep some of their environments on-prem but also take advantage of the scalability, agility and cost benefits of a cloud environment.
A little bit about who we are as a company: over 30% of the Fortune 100 have adopted Delphix for different use cases. And these are all companies that have really invested in Delphix; they're not using us for small science experiments.
They're really standardizing on us as a solution to help them move their data very efficiently and securely. I'll go through one customer example here. The customer is a large insurance company, actually the largest provider of dental benefits in the US. It's a company that wanted to take advantage of AWS. They wanted to use hybrid cloud, but what they found was that the most challenging barrier was actually moving the data from their claims processing applications from on-prem to the cloud.
And when they investigated how long it would take to migrate all that data, terabytes of data, they found it would take up to eight weeks to move all the data they needed.
And that process involved actually loading their data into a physical appliance, moving that data by truck to a cloud data center, and then having that data loaded, which took far too long for them, particularly when that movement of data from on-prem to the cloud had to be something that was very repeatable. They knew it would happen many times.
They were also concerned about moving PHI data into the cloud: as a healthcare company, they needed to ensure that patient information was protected, and they needed to stay compliant with the HIPAA regulation in the US. So this company brought in Delphix, and we helped them not only with the movement of data from on-prem to the cloud, but also used our masking capability to secure that data before it was provisioned.
So they went from taking eight weeks to upload their data to AWS to being able to do that very quickly, in literally minutes, and to being in a position where they could provision development and testing environments in the cloud very quickly for a team of hundreds of developers. With that, I would like to pass it back over to Alexei.
I believe we're going to open up the session for Q&A.
All right.
Well, thanks a lot, Matthew. That was a really interesting presentation. Let me quickly switch back to my screen.
So yeah, I remember I actually was a developer myself quite a few years ago. It was way before the cloud times, but we had the very same problems 20 years ago: how to ensure that a database can be handed over to a developer not just securely and compliantly, but also as quickly and as easily as possible, because developers are lazy, after all.
And by the way, here's a short question from myself: you talked about self-service, but self-service is still a manual process, right? Do you have any automation capabilities as well? Can you include Delphix in a CI/CD tool chain, for example?
Yeah, so we have many customers who have integrated us with their CI/CD tool chain. Our platform comes with a full, robust API set, so anything that you can do through a command line with Delphix, or through the user interface with Delphix, you can also do programmatically through our API set.
So we do have quite a few companies who have integrated us within their automation tool chain, their SDLC pipeline, with tools like Jenkins, for example. And for companies that want to move towards CI/CD, what we see is that with automation at different parts of the application tier, they're able to stand up infrastructure very quickly, provision cloud environments and compute very quickly, and deliver the right code builds.
They're able to automate that piece within their pipeline.
But the piece that is not fully automated for many people is the data, the data layer, right? Provisioning databases is still a bottleneck. What they can do with Delphix is integrate us within their tool chain to provision data, which allows them to essentially provision complete environments: the application environments, the code layer, and with Delphix also the data, so they can stand up complete environments to help them do their automated tests and essentially speed up the process flow through their CI/CD pipeline.
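As a hedged illustration of what such an integration might look like at the test stage, here is a sketch of a pytest fixture that provisions a masked virtual copy before the automated tests run and deprovisions it afterwards. The engine URL, endpoints and payload fields are placeholders invented for illustration; a real integration would go through whatever the platform's API set actually exposes.

```python
# Hypothetical CI/CD integration sketch: a pytest fixture that provisions a
# masked virtual database copy before the test run and tears it down after.
# The endpoints and payload fields are assumptions, not the real Delphix API.
import pytest
import requests

ENGINE = "https://data-platform.example.com/api"    # placeholder engine URL

@pytest.fixture(scope="session")
def test_database():
    # Provision a fresh masked virtual copy from the latest production state.
    copy = requests.post(
        f"{ENGINE}/virtual-copies",
        json={"source": "orders-db", "masked": True, "point_in_time": "latest"},
    ).json()
    yield copy["jdbc_url"]                 # tests connect to this throwaway copy
    # Deprovision the copy when the pipeline stage finishes, pass or fail.
    requests.delete(f"{ENGINE}/virtual-copies/{copy['id']}")

def test_order_totals(test_database):
    # The automated tests run against realistic, masked, production-like data.
    assert test_database.startswith("jdbc:")
```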
All right, great. So we had infrastructure as code with DevOps, we have security as code with DevOps, but now we have data as code?
That's right.
And one thing I'd like to point out too is that Delphix allows you to provision exactly the right kind of data that you want to run your tests against in a CI/CD environment. You can use Delphix to provision full copies of masked data to your testing environment, you can ingest subsets, you can ingest synthetic data. You can essentially create a catalog of different test data sets and ensure that the right data set is delivered to the right environment in a very efficient way.
Okay. Right.
So let me remind you again: please submit your questions through the GoToWebinar control panel. We already have a few, so let me just read them aloud for you. The first question is: do developers actually need to change anything in their applications to use Delphix instead of directly connecting to a database?
Yeah, so that's a great question. Nothing needs to be configured at the application layer to work with virtual data copies from Delphix. The virtual data copies are fully functional: they're fully readable, they're fully writable.
It should be seamless to an end user using, testing or building the application. In many cases, they will not even know that they're working against a virtual data copy that is provisioned from Delphix.
So no changes are needed there.
Does it extend to access controls, user rights, stuff like that?
It can, yes.
Okay, next question: how do you keep the data in sync between replications, if you have production data replicated?
Yeah. So the way that Delphix collects data is that you install it as a software platform, you point it to your production data source, and then Delphix will first do a one-time copy of everything that you have in prod; Delphix will ingest that data, the full copy. And then from that point forward, Delphix will stay synchronized with the production source.
What that means is that Delphix will bring in incremental changes, just the delta. That could be snapshots, it could be logs from the data source. That allows Delphix to build a very granular, continuous timeline of data changes. So when data is provisioned, we provision a point-in-time copy, and at any time that copy can be refreshed to the latest point in time, because Delphix is continuously keeping track of the changes that happen in production. I hope that answers the question.
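A toy model of that "baseline plus continuous timeline" idea is sketched below. It is only meant to show why a full initial ingest plus incremental deltas lets you materialize a copy as of any point in time; it makes no claim about how Delphix actually stores the data.

```python
# Toy sketch of the "continuous timeline" idea: one full initial copy plus a
# log of incremental changes lets you reconstruct the data as of any point in
# time. Purely illustrative; not how Delphix is actually implemented.
from bisect import bisect_right

class Timeline:
    def __init__(self, initial_copy):
        self.baseline = dict(initial_copy)      # the one-time full ingest
        self.changes = []                       # [(timestamp, key, value), ...]

    def record_change(self, ts, key, value):
        # Incremental deltas, e.g. from snapshots or transaction logs.
        self.changes.append((ts, key, value))

    def provision(self, as_of):
        # Materialize a point-in-time copy by replaying changes up to `as_of`.
        state = dict(self.baseline)
        upto = bisect_right([ts for ts, _, _ in self.changes], as_of)
        for _, key, value in self.changes[:upto]:
            state[key] = value
        return state

tl = Timeline({"row1": "v0"})
tl.record_change(10, "row1", "v1")
tl.record_change(20, "row2", "v2")
print(tl.provision(as_of=15))   # {'row1': 'v1'}               - historical copy
print(tl.provision(as_of=99))   # {'row1': 'v1', 'row2': 'v2'} - latest refresh
```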
Can you make these refreshes real time or near real time?
Yes. So the copy, if you would like, can be refreshed on demand.
The end user could click a button, or it could be refreshed by policy. So you could create a policy to say: I'd like this copy to be refreshed every day, or every hour, or even every 20 minutes. So you can refresh at quite a granular level.
And again, the timeline that we maintain within the Delphix platform of the changes in production is kept at a log-file, very granular level, so the timeline can be down to the second, or even to the transaction.
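Purely for illustration, a policy-driven refresh could be as simple as the following sketch; the policy shape and the refresh call are my own stand-ins, not Delphix configuration syntax.

```python
# Hypothetical refresh-policy sketch: a simple scheduler that refreshes a
# virtual copy on a fixed interval. The policy shape and the refresh call are
# stand-ins for illustration, not Delphix configuration syntax.
import time

policy = {"copy": "reporting-copy", "refresh_every_seconds": 20 * 60}

def refresh(copy_name):
    # Stand-in for the platform call that rolls the copy forward to the
    # latest point on the production timeline.
    print(f"refreshing {copy_name} to the latest production state")

def run_policy(policy, cycles=3):
    for _ in range(cycles):
        refresh(policy["copy"])
        time.sleep(policy["refresh_every_seconds"])

# run_policy(policy)   # would refresh the copy every 20 minutes
```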
Okay, next question, and now the most interesting one: what's the pricing model for your solution?
So we basically price based on the raw size of the amount of data in the physical, oftentimes production, database that is ingested into the platform.
So it's an ingestion-based model, and we license based on a subscription model. Typically customers will purchase a subscription; it's essentially an annual subscription.
And a follow-up to that: how does it stack up for Delphix when development is outsourced or done by many parties?
Yeah. So we actually have quite a few companies who use us with, say, offshore or outsourced development, and oftentimes those teams reside in different locations.
The typical model looks like this: you install Delphix in one location, we ingest the production data into that first Delphix instance, and we can mask the data there, in case you're concerned about your offshore teams seeing sensitive information. From that first instance of Delphix, you'll replicate the masked data to another instance of Delphix that resides in the cloud, or wherever that offshore team sits. And then copies of the data can be provisioned that those offshore developers and testers can access.
And you can be sure that the data will be masked for any privacy or compliance concerns that you might have. You can control, essentially put permissions around, what controls that team has over the data copies: role-based access controls, et cetera. All of that can be applied within your model for how you want to work with those types of teams.
Well, I imagine that another follow-up question to that would be: can you split the bill for those outsourced teams? For example, can you say, okay, 50% of this month's subscription has to go to team A and 20% to team B, and so on. Do you track that type of information?
We can potentially track that information within the software, within the platform; how we would license that would be more of a business conversation, though.
Okay, yeah, makes sense. Right, so please keep the questions coming.
The next one is: can you explain a little bit which data types, sources and cloud environments the platform actually supports?
Right. So Delphix, again, is software; it's a virtual appliance, so you can put it into your on-prem data center, you can put it into a private cloud environment, and we're cloud ready for Amazon Web Services or Microsoft Azure.
You can download us in the marketplaces, for example, for those cloud environments. In terms of data sources that we support, you can use us to ingest and virtualize file system data; oftentimes customers are using us with a relational database.
So we support the most popular DBMS systems: Oracle, SQL Server, Postgres, SAP ASE, DB2 both on the mainframe and on systems like AS/400, SAP HANA. And for more exotic data sources for which we do not have native support, we do have toolkits that allow customers to build integrations into those sort of quote-unquote more exotic data sources. So most anything that you have in your enterprise, Delphix can support in one way or another.
Okay.
And do I understand you correctly that the same masking technology applies to all of them, even to unstructured files as well?
That's correct. Yes.
Okay.
Okay, cool. I think we have some time left for a couple more questions. So how exactly does the self-service data control work? Do people have to click buttons, or what? You already addressed the automation, but can you maybe still elaborate on the self-service workflow?
Yeah, that's a great question, and it prompts me to describe a little bit how the data pods are provisioned and how people use them. You can think of it as two steps.
First, an administrator will essentially design a data pod: decide what copies are in the pod, whether the copies are masked or unmasked, and what permissions are associated with the pod. Then the pod is handed to the developer or the tester or the data scientist to use and manipulate the copies via self-service. Essentially, what that user would use to control the pod is their own user interface, where they will see a timeline of their data.
They can manipulate their copies from a temporal perspective, create bookmarks along the timeline for the copies, branch those copies, refresh the copies by clicking a button. All of the controls are through a self-service UI, and the UI is designed for someone who is not necessarily a programmer or a tester; it's usable by essentially a business user or an analyst.
So if I am a developer, who decides which data sources I can control and which I cannot? And if I don't see a source which I would very much like to access, can I send a request for approval?
So the administrator would essentially determine what the downstream user sees or has access to within the pod. There is a sort of initial configuration process of deciding what downstream people see.
So, yeah.
Okay, great. I think we are reaching the end of our allocated time for this webinar, and I don't see any open questions. So it only remains for me to say: thank you very much, Matthew, for doing this webinar with us today. Thanks to all the attendees for spending your time learning about this fascinating technology. I hope to see you in the future in one of our next webinars, or maybe even at our conferences and other events. Please check our website for all the latest research and new webinars, and have a nice day.