Modern businesses depend on digital information. For quite a few years now, it has been considered the world’s most valuable commodity, worth more than oil, gold, or even printer ink. Some of the world’s largest companies have turned data into their primary source of income, generating billions in profits every year. For the rest of us, there is always data that is critical to our business’s success or even its basic survival: information such as intellectual property, customer data, or financial records is rightfully called the “crown jewels.”
Unfortunately, unlike gold, which you can simply lock in a safe, data only generates value when it is actively used: accessed, transformed, visualized, or exchanged with partners. Otherwise it amounts to deadweight with recurring storage costs. Even worse, unless access to the data is properly regulated and secured, it can lead to a data breach and a hefty compliance fine. In other words, it is usually not the data itself but the underlying data management platform that unlocks the true value of your crown jewels: that is, a database.
There is an ongoing, almost religious debate about which kind of database is most suitable for working with different types of data. Should it be a single universal system handling all your data, or a combination of specialized engines, each optimized for financial transactions, analytics, full-text search, or time-series data? Personally, I believe that from the business perspective this dichotomy is completely irrelevant.
For a customer, the ideal database is the one that is essentially invisible, providing all the necessary functions without the overhead of managing the underlying hardware and software infrastructure. The closest thing to this ideal today is a managed database on a cloud platform; these DBaaS solutions are very popular with customers since they come with a promise of fully elastic scalability, high performance, and extremely low maintenance. And since such services are usually based on popular open-source engines improved and managed by the cloud service providers, the initial learning curve is gentle as well.
In reality, however, many companies working with databases in the cloud eventually face multiple reality checks. First, not all data can be moved to the cloud, so it becomes necessary to maintain separate database technology stacks in hybrid environments. Second, open-source databases running on commodity hardware often do not scale as well as promised, leading to performance issues and other bottlenecks. Finally, when data sprawls across multiple disparate sources, running analytics across them becomes increasingly difficult without bringing in yet another specialized database just for that purpose (to say nothing of moving data between databases and the security and compliance implications of such designs).
Does this mean that every company with a cloud-first strategy will face these challenges? Not at all; many are perfectly happy with their highly distributed and loosely coupled applications. Unfortunately, not all business use cases can be so easily adapted to modern architectures like microservices, and even those that can will eventually feel the burden of consolidating their data for analytical purposes. Perhaps, at that moment, all they will be dreaming of is an alternate universe in which they decided to keep all their data in a single unified database.
Do such solutions even exist today? Yes, and one notable example is Oracle Exadata, a hardware platform optimized for Oracle Database OLTP and OLAP workloads, combined with the database management software that runs on it. Introduced back in 2008 as a high-performance platform for running Oracle Database on-premises, Exadata has also powered the company’s cloud services since 2015. It offers identical capabilities on-prem, in the Oracle public cloud, and as the Exadata Cloud@Customer private database cloud service in customers’ data centers. It is also the foundation of one of the company’s flagship database products, the Oracle Autonomous Database.
Yesterday, the company introduced a new generation of the platform, Exadata X9M. From the hardware perspective, it is built on a shared Persistent Memory architecture and Remote Direct Memory Access (RDMA), which together deliver massive improvements in IOPS and latency compared to commodity hardware. Exadata Cloud@Customer X9M allows organizations to have Oracle manage a private database cloud infrastructure deployed in their own data center, with a single-rack system providing up to 992 vCPUs, more than 11 TB of memory, and intelligent storage servers with up to 576 CPU cores for SQL processing, 18 TB of Intel Optane persistent memory (PMem), and 769 TB of usable storage capacity. This translates into unprecedented performance for OLTP workloads: up to 22.4M read IOPS and ultra-low I/O latency of less than 19 microseconds, or up to 87% better performance at the same price, effectively lowering the cost of running databases by nearly 50%.
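In case the jump from “87% better performance” to “nearly 50% lower cost” looks like marketing sleight of hand, the arithmetic is straightforward (this is my own back-of-the-envelope check, not Oracle’s published methodology): getting 1.87 times the performance for the same price means each unit of work costs roughly half of what it did before.

\[
\frac{\text{cost per unit of work (X9M)}}{\text{cost per unit of work (baseline)}} = \frac{1}{1 + 0.87} \approx 0.53,
\qquad \text{implied saving} \approx 1 - 0.53 \approx 47\%.
\]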
Running the Autonomous Database on Exadata Cloud@Customer X9M in customer data centers or on Exadata in Oracle Cloud Infrastructure enables fully elastic scaling of database compute consumption and supports the full range of automated maintenance capabilities (patching, backup, performance tuning, etc.), regardless of whether it is deployed on-prem or in the cloud. New Operator Access Control capabilities on Exadata Cloud@Customer ensure that all access to customer databases is tightly regulated and based on strict principles of Privileged Access Management, so that even Oracle cloud operators cannot access the data or make unapproved changes to the system. Most crucially, however, all customer data always resides in the same single database, accessible for any kind of transactional, analytical, or document workload, and is always encrypted to maintain its security and compliance posture.
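To make the “one database, many workloads” point more concrete, here is a minimal sketch of what it looks like from an application’s perspective. The table, columns, and connection details below are hypothetical placeholders of my own, but the python-oracledb driver and the SQL/JSON functions shown are standard Oracle Database features, and the same code runs unchanged whether the database sits on Exadata on-prem, on Cloud@Customer, or in OCI.

```python
# Minimal sketch: one Oracle Database connection serving transactional,
# analytical, and document-style (JSON) access to the same data in place.
# The "orders" table and all connection details are hypothetical placeholders.
import oracledb

conn = oracledb.connect(
    user="app_user",
    password="app_password",
    dsn="dbhost.example.com:1521/mypdb",  # same code on-prem, Cloud@Customer, or OCI
)

with conn.cursor() as cur:
    # Transactional write: a normal OLTP insert with bind variables.
    cur.execute(
        "INSERT INTO orders (id, customer_id, details) VALUES (:1, :2, :3)",
        [1001, 42, '{"items": [{"sku": "A-7", "qty": 3}], "channel": "web"}'],
    )
    conn.commit()

    # Analytical read: an aggregate over the same table, no ETL step required.
    cur.execute("SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id")
    print(cur.fetchall())

    # Document-style read: reach into the JSON payload with SQL/JSON.
    cur.execute(
        "SELECT JSON_VALUE(details, '$.channel') FROM orders WHERE id = :1",
        [1001],
    )
    print(cur.fetchone())

conn.close()
```

The point is not the particular syntax, but that the transactional insert, the analytical aggregate, and the document-style lookup all touch the same data where it lives, with no export, copy, or synchronization step in between.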
With this news, Oracle has clearly raised customer expectations for a database cloud service and is on a trajectory of rapid, demonstrable innovation. Simply put, other database cloud services, built on separate code bases and engines, don’t come close to matching the capabilities of Oracle Autonomous Database on the Exadata platform.
Still, for quite a few companies, especially those focused on open-source and cloud-native development, this approach to data management might look controversial at first glance. What about freedom from vendor lock-in? Isn’t Oracle software always very expensive? To answer these questions, one has to consider the benefits of Oracle’s approach: an architecture where all data and multiple types of workloads reside in a single, strongly secured data platform without ever needing to be migrated to a different one (and remember, moving between on-prem and the cloud is completely transparent, since both run the same database software on the same Exadata infrastructure).
Combine that with the promise of up to 50x better performance than Oracle’s competitors, and the total cost of running critical business applications on Oracle often turns out to be lower than relying on an assortment of tools and multiple cloud databases, once you include the time and money needed to move data among them. Also, let’s not forget that Oracle Database can run virtually anywhere you want: on a Dell server, in AWS, or on Exadata; it’s the customer’s choice. The same cannot be said of “cloud-native” database services like AWS Aurora and Redshift, which can only run in the public cloud; not even AWS Outposts supports them on-premises.
Customers who run their Oracle Databases on Exadata simply gain capabilities that don’t exist anywhere else. In this light, one should carefully consider what “vendor lock-in” even means nowadays and which option actually gives customers more freedom. In the end, only you can decide what kind of setting your crown jewels deserve. Just keep your mind open to all available alternatives and focus on business results, not philosophical arguments.