Amazon Web Services (AWS) is updating its Amazon Aurora database to make the serverless configuration more practical and economical. The second version of the serverless capability, Amazon Aurora Serverless v2, will significantly reduce latency for scale-up and scale-down while making compute sizing far more granular, and more affordable, according to AWS.
Putting this in perspective, Amazon Aurora is a fully managed MySQL- and PostgreSQL-compatible database service. AWS mounts Aurora databases on its own optimized storage for higher scale and performance compared to the vanilla MySQL and PostgreSQL databases with conventional attached storage that it offers on its RDS service. AWS claims that its Aurora implementations are up to 5x faster than standard MySQL and 3x faster than standard PostgreSQL implementations.
The intelligent side of Aurora
Aurora storage is designed as a distributed storage system, with a minimum of six copies maintained across three availability zones (AZs) within a region. The key to Aurora’s performance is the intelligence baked into the storage layer.
Aurora Serverless v2 is designed to reduce latency for scale-ups from minutes to under a second, and for scale-downs by up to 15x, while allowing more flexibility by supporting more granular compute sizing (expressed as Aurora Capacity Units, or ACUs). Like its provisioned counterpart, Aurora Serverless v2 supports the same replica footprint across three AZs within a region, with one to 15 replicas, depending on customer choice.
The finer granularity and related enhancements make autoscaling more efficient and nearly instantaneous; the service can ramp up to handle hundreds of thousands of transactions without disrupting the workload.
Specifically, v2 of Aurora Serverless can scale up or down in increments as small as 0.5 ACU and 1 GB of memory, up to a maximum footprint of 128 ACUs with 256 GB of memory. There is one subtle distinction in the way that AWS implements serverless in Aurora: it will not scale down to zero, because transaction databases are assumed to operate in always-on mode. That is a much different use case from analytics, where it is not unusual for activity to flatline. Instead, Amazon Aurora Serverless v2 keeps the lights on with a minimum of 0.5 ACU and 1 GB of memory, though customers can still choose to manually stop instances when they are not in use.
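As a rough illustration of how those capacity bounds are expressed, here is a minimal sketch in Python with boto3 that creates an Aurora MySQL cluster with the 0.5-to-128 ACU range described above. The identifiers and password are placeholders, credentials and region configuration are assumed to be set up elsewhere, and the engine version may need to be pinned to a release that supports Serverless v2.

```python
# Minimal sketch: Aurora Serverless v2 capacity bounds via boto3.
# All names and the password below are placeholders.
import boto3

rds = boto3.client("rds")

# The cluster-level scaling configuration sets the ACU floor and ceiling;
# 0.5 ACU (roughly 1 GB of memory) is the smallest step, 128 ACUs the largest.
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-sv2",          # placeholder name
    Engine="aurora-mysql",                          # EngineVersion may need to be
                                                    # set to a Serverless v2-capable release
    MasterUsername="admin",
    MasterUserPassword="change-me-please",          # placeholder secret
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,
        "MaxCapacity": 128.0,
    },
)

# Instances in a Serverless v2 cluster use the special "db.serverless" class
# rather than a fixed instance size.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-sv2-writer",  # placeholder name
    DBClusterIdentifier="demo-aurora-sv2",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```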
Meanwhile, customers can mix and match serverless and provisioned Aurora instances on the same cluster. For instance, customers who shard their databases, such as by line organization or region, can designate some shards to run on firm provisioned capacity and others on serverless. This is especially useful for organizations where workload patterns vary by workgroup or region. It even works within a single Aurora cluster, where read replicas can be a mix of serverless and provisioned resources.
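A similarly hedged sketch of that mix-and-match idea: the same boto3 call can attach both a fixed-size instance and a db.serverless instance to one cluster. The cluster and instance names are placeholders, and the cluster is assumed to already exist on an engine version that supports Serverless v2.

```python
# Minimal sketch: mixing provisioned and Serverless v2 instances in one Aurora cluster.
import boto3

rds = boto3.client("rds")

# Provisioned writer with fixed capacity for the steady baseline workload.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-writer",        # placeholder name
    DBClusterIdentifier="demo-aurora-cluster",        # placeholder, existing cluster
    DBInstanceClass="db.r6g.large",                   # fixed-size instance class
    Engine="aurora-mysql",
)

# Serverless v2 read replica that scales with spikier read traffic.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-reader-1",      # placeholder name
    DBClusterIdentifier="demo-aurora-cluster",
    DBInstanceClass="db.serverless",                  # autoscaling instance class
    Engine="aurora-mysql",
)
```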
So, why use serverless databases? Two triggers stand out. The first is developer simplicity: just focus on developing the application and don't worry about capacity planning or provisioning. The other is traffic that is spiky or unpredictable by nature. The cloud makes serverless possible because, unlike on-premises, you don't have to buy just-in-case capacity. For all the benefits of serverless, provisioned is typically a better deal if traffic is relatively stable and predictable, since cloud providers always give price breaks when you commit to fixed capacity up front.
The appeal of going serverless
Serverless was originally associated with NoSQL operational databases that have few requirements for strong ACID transaction capability, and where traffic levels have often been less predictable than in traditional transaction processing scenarios. AWS has always offered Amazon DynamoDB as serverless; significantly, many of AWS's newer database offerings, including Amazon Timestream for time series, Amazon Keyspaces (for Apache Cassandra), and Amazon Quantum Ledger Database (QLDB) for immutable (blockchain-like) workloads, are also offered serverless by default. Serverless is also offered either by default or as an option for NoSQL databases from Microsoft Azure, Google Cloud, and independents such as DataStax.
But we are now seeing serverless expand outside its traditional NoSQL operational niche. Google Cloud broke ground when it introduced BigQuery for analytics, originally only as a serverless service where you pay by query. Since then, it has added slot-based commitments back in for organizations demanding more predictable monthly costs. And with Amazon Aurora, AWS has become one of the first to bring serverless to transaction processing. But it is no longer alone; a few weeks ago, Cockroach Labs joined the crowd by adding a serverless option to its distributed transaction database, initially within a single region.
Going forward, we expect serverless to become a checkbox item, especially for cloud databases that support distributed deployment. While traffic to transaction databases was traditionally considered more predictable than, say, traffic to live mobile apps, a retailer or content provider that adds a new product or service addressing a different market or demographic may find that some patterns of demand become less predictable. Being able to mix and match serverless with provisioned instances, as Amazon Aurora Serverless v2 will support, should fit the bill.
Disclosure: AWS is a dbInsight client.