Amazon DynamoDB features

Amazon DynamoDB is a NoSQL database that supports key-value and document data models. Developers can use DynamoDB to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second. DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases.

Performance at scale

DynamoDB is a key-value and document database that can support tables of virtually any size with horizontal scaling. This enables DynamoDB to scale to more than 10 trillion requests per day with peaks greater than 20 million requests per second, over petabytes of storage.

Key-value and document data models

DynamoDB supports both key-value and document data models, which gives it a flexible schema: each item can have any number of attributes at any point in time. This allows you to easily adapt tables as your business requirements change, without having to redefine the table schema as you would in a relational database.
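As an illustration, two items in the same table can carry entirely different attribute sets; the only requirement is the key attributes. A minimal sketch with boto3 (the table, key, and attribute names here are hypothetical):

```python
# Two items for the same (hypothetical) "Products" table with different
# shapes -- DynamoDB only requires the key attribute(s) on every item.
book = {
    "ProductId": {"S": "book-101"},   # partition key (required)
    "Title":     {"S": "NoSQL Distilled"},
    "Authors":   {"L": [{"S": "P. Sadalage"}, {"S": "M. Fowler"}]},
}
bicycle = {
    "ProductId": {"S": "bike-202"},   # same key attribute, otherwise
    "Color":     {"S": "red"},        # a completely different shape
    "Gears":     {"N": "21"},
}

def put_items(table_name, items):
    """Write the items with boto3; no schema change is needed."""
    import boto3  # requires AWS credentials and network access
    client = boto3.client("dynamodb")
    for item in items:
        client.put_item(TableName=table_name, Item=item)
```

Because the schema is per-item, adding `Gears` to bicycles never requires altering how books are stored.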

Microsecond latency with DynamoDB Accelerator

DynamoDB Accelerator (DAX) is a fully managed in-memory cache that delivers fast read performance for your tables at scale. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times, taking the time required for reads from milliseconds to microseconds, even at millions of requests per second.

Automated global replication with global tables

DynamoDB global tables replicate your data automatically across your choice of Amazon Web Services China Regions and automatically scale capacity to accommodate your workloads. With global tables, your globally distributed applications can access data locally in the selected regions to get single-digit millisecond read and write performance.
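As a sketch, adding a replica turns an existing table into a global table. The snippet below assumes boto3 and a table already created in one Region; the table and Region names are examples:

```python
# Add a replica in a second (example) Region, turning the existing table
# into a global table (version 2019.11.21).
replica_update = [{"Create": {"RegionName": "cn-northwest-1"}}]

def add_replica(table_name):
    """Create a replica of a (hypothetical) table via UpdateTable."""
    import boto3  # requires AWS credentials and network access
    client = boto3.client("dynamodb", region_name="cn-north-1")
    client.update_table(TableName=table_name, ReplicaUpdates=replica_update)
```

Once the replica is active, applications in each Region read and write the local copy with single-digit millisecond latency.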

Advanced streaming applications with Kinesis Data Streams for DynamoDB

Amazon Kinesis Data Streams for DynamoDB captures item-level changes in your DynamoDB tables as a Kinesis data stream. This feature enables you to build advanced streaming applications such as real-time log aggregation, real-time business analytics, and Internet of Things data capture. Through Kinesis Data Streams, you also can use Amazon Kinesis Data Firehose to deliver DynamoDB data automatically to other Amazon Web Services services.
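A minimal sketch of turning this on with boto3; the table name and stream ARN are placeholders supplied by the caller:

```python
def enable_kinesis_destination(table_name, stream_arn):
    """Start delivering item-level changes from a DynamoDB table to an
    existing Kinesis data stream identified by its ARN."""
    import boto3  # requires AWS credentials and network access
    client = boto3.client("dynamodb")
    client.enable_kinesis_streaming_destination(
        TableName=table_name,
        StreamArn=stream_arn,
    )
```

After this call, consumers of the Kinesis data stream (including Kinesis Data Firehose) receive a record for each item-level change.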

Serverless

With DynamoDB, there are no servers to provision, patch, or manage, and no software to install, maintain, or operate. DynamoDB automatically scales tables to adjust for capacity and maintains performance with zero administration. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities.

Read/write capacity modes

DynamoDB provides two capacity modes for each table: on-demand and provisioned. For less predictable workloads, where you are unsure whether you will sustain high utilization, on-demand capacity mode manages capacity for you, and you pay only for what you consume. Tables using provisioned capacity mode require you to set read and write capacity. Provisioned capacity mode is more cost effective when you are confident your application will make consistent, high use of the capacity you specify.
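As a sketch, the capacity mode is chosen via the `BillingMode` parameter when creating a table with boto3; the table name, key schema, and capacity numbers below are hypothetical:

```python
# The two capacity modes, expressed as CreateTable parameters.
on_demand = {"BillingMode": "PAY_PER_REQUEST"}  # pay per request
provisioned = {
    "BillingMode": "PROVISIONED",               # pay for set capacity
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 50,
    },
}

def create_table(table_name, capacity):
    """Create a single-key (hypothetical) table in the given mode."""
    import boto3  # requires AWS credentials and network access
    client = boto3.client("dynamodb")
    client.create_table(
        TableName=table_name,
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        **capacity,
    )
```

For example, `create_table("Orders", on_demand)` would create an on-demand table, and you can later switch modes with `UpdateTable`.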

On-demand mode

For tables using on-demand capacity mode, Amazon DynamoDB instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak, Amazon DynamoDB adapts rapidly to accommodate the workload. You can optionally configure maximum read or write (or both) throughput for individual on-demand tables and associated secondary indexes, making it easy to balance costs and performance. You can use on-demand capacity mode for both new and existing tables, and you can continue using the existing Amazon DynamoDB APIs without changing code.
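A sketch of capping an on-demand table's throughput via the `OnDemandThroughput` setting in boto3; the table name and limits are hypothetical, and this parameter is only available in recent SDK versions:

```python
# Optional ceilings for an on-demand table, in request units.
throughput_cap = {
    "MaxReadRequestUnits": 1000,
    "MaxWriteRequestUnits": 500,
}

def cap_on_demand_table(table_name):
    """Apply maximum read/write throughput to a (hypothetical)
    on-demand table to bound its cost."""
    import boto3  # requires AWS credentials and network access
    client = boto3.client("dynamodb")
    client.update_table(TableName=table_name, OnDemandThroughput=throughput_cap)
```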

Auto scaling

For tables using provisioned capacity, DynamoDB auto scaling adjusts provisioned throughput within the bounds you set by monitoring your application's actual usage. If your application traffic grows, DynamoDB increases throughput to accommodate the load; if your application traffic shrinks, DynamoDB scales down so that you pay less for unused capacity.
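Auto scaling for provisioned tables is configured through Application Auto Scaling. A sketch with boto3, using hypothetical minimum, maximum, and target-utilization values:

```python
def enable_read_autoscaling(table_name):
    """Track ~70% read-capacity utilization for a (hypothetical)
    provisioned table, scaling between 5 and 500 read capacity units."""
    import boto3  # requires AWS credentials and network access
    aas = boto3.client("application-autoscaling")
    resource_id = f"table/{table_name}"
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=500,
    )
    aas.put_scaling_policy(
        PolicyName="read-tracking",
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # aim for ~70% consumed capacity
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )
```

A matching policy on `dynamodb:table:WriteCapacityUnits` covers the write side.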

Change tracking with triggers

DynamoDB integrates with Amazon Lambda to provide triggers. Using triggers, you can automatically invoke a custom function when item-level changes in a DynamoDB table are detected, so you can build applications that react to data modifications. The Lambda function can perform any actions you specify, such as sending a notification or initiating a workflow.
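A minimal sketch of the Lambda side of a trigger: a handler that walks the DynamoDB stream records in the event it receives. The event shape follows the documented DynamoDB stream record format; what you do with each change (notify, start a workflow) is up to you:

```python
def handler(event, context=None):
    """Lambda handler for a DynamoDB stream event: collect the keys of
    every inserted, modified, or removed item."""
    changed = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY", "REMOVE"):
            changed.append(record["dynamodb"]["Keys"])
    # A real trigger would act on each change here, e.g. publish a
    # notification or start a workflow.
    return {"changed": changed}
```

Wiring the table's stream to this function as an event source is done once; Lambda then invokes it with batches of records as items change.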

Enterprise ready

DynamoDB is built for mission-critical workloads, including support for ACID transactions for a broad set of applications that require complex business logic. DynamoDB helps secure your data with encryption and continuously backs up your data for protection.

ACID transactions

DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables. With support for transactions, developers can extend the scale, performance, and enterprise benefits of DynamoDB to a broader set of mission-critical workloads. 
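As an illustration of all-or-nothing semantics, the classic funds transfer: debit one item only if it has sufficient balance, and credit another, in a single transaction. The table, key, and attribute names below are hypothetical; the call assumes boto3:

```python
# Either both updates succeed, or neither does. The condition on the
# first update also cancels the whole transaction on overdraft.
transfer = [
    {"Update": {
        "TableName": "Accounts",
        "Key": {"AccountId": {"S": "alice"}},
        "UpdateExpression": "SET Balance = Balance - :amt",
        "ConditionExpression": "Balance >= :amt",  # no overdraft
        "ExpressionAttributeValues": {":amt": {"N": "100"}},
    }},
    {"Update": {
        "TableName": "Accounts",
        "Key": {"AccountId": {"S": "bob"}},
        "UpdateExpression": "SET Balance = Balance + :amt",
        "ExpressionAttributeValues": {":amt": {"N": "100"}},
    }},
]

def run_transfer():
    """Execute the transfer atomically, server-side."""
    import boto3  # requires AWS credentials and network access
    boto3.client("dynamodb").transact_write_items(TransactItems=transfer)
```

Items in a transaction can span multiple tables; the same atomicity applies.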

Encryption at rest

DynamoDB encrypts all customer data at rest by default. Encryption at rest enhances the security of your data by using encryption keys stored in Amazon Key Management Service. With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. The default encryption using Amazon Web Services owned customer master keys is provided at no additional charge.
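A sketch of switching a table from the default Amazon Web Services owned key to a customer managed key in Amazon Key Management Service; the table name and key alias are hypothetical, and boto3 is assumed:

```python
# Server-side encryption settings pointing at a (hypothetical)
# customer managed KMS key.
sse = {
    "Enabled": True,
    "SSEType": "KMS",
    "KMSMasterKeyId": "alias/my-app-key",
}

def set_table_key(table_name):
    """Re-encrypt a table under the customer managed key above."""
    import boto3  # requires AWS credentials and network access
    boto3.client("dynamodb").update_table(
        TableName=table_name, SSESpecification=sse
    )
```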

Point-in-time recovery

Point-in-time recovery (PITR) helps protect your DynamoDB tables from accidental write or delete operations. PITR provides continuous backups of your DynamoDB table data, and you can restore that table to any point in time up to the second during the preceding 35 days. You can enable PITR or initiate backup and restore operations with a single click in the Amazon Web Services Management Console or a single API call.
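A sketch of both operations with boto3; the table names are hypothetical, and the restore time can be any second within the 35-day window:

```python
def enable_pitr(table_name):
    """Turn on continuous backups for a (hypothetical) table."""
    import boto3  # requires AWS credentials and network access
    boto3.client("dynamodb").update_continuous_backups(
        TableName=table_name,
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )

def restore_to_point_in_time(source_table, target_table, when):
    """Restore the source table's state at `when` (a datetime within
    the last 35 days) into a new table."""
    import boto3
    boto3.client("dynamodb").restore_table_to_point_in_time(
        SourceTableName=source_table,
        TargetTableName=target_table,
        RestoreDateTime=when,
    )
```

The restore always creates a new table, so the accidental write or delete on the original never has to be undone in place.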

On-demand backup and restore

On-demand backup and restore allows you to create full backups of your DynamoDB tables' data for archiving, which can help you meet corporate and governmental regulatory requirements. You can back up tables ranging from a few megabytes to hundreds of terabytes of data without affecting the performance or availability of your production applications.
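A sketch of a backup-then-restore round trip with boto3; the table and backup names are hypothetical:

```python
def backup_and_restore(table_name):
    """Create a full on-demand backup of a (hypothetical) table and
    restore it into a new table. Returns the backup's ARN."""
    import boto3  # requires AWS credentials and network access
    client = boto3.client("dynamodb")
    resp = client.create_backup(
        TableName=table_name, BackupName=f"{table_name}-archive"
    )
    arn = resp["BackupDetails"]["BackupArn"]
    client.restore_table_from_backup(
        TargetTableName=f"{table_name}-restored", BackupArn=arn
    )
    return arn
```

Backups complete in seconds regardless of table size and consume no provisioned capacity.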

Private network connectivity

Amazon DynamoDB supports Gateway Virtual Private Cloud (VPC) endpoints and Interface VPC endpoints for connections within a VPC or from on-premises data centers. You can configure private network connectivity from your on-premises applications to DynamoDB through interface VPC endpoints enabled with Amazon PrivateLink. This enables customers to simplify private connectivity to DynamoDB and maintain compliance. 

Fine-grained access control

DynamoDB uses Amazon Identity and Access Management (IAM) to authenticate and authorize access to resources. You can specify identity-based and resource-based policies, define attribute-based access control (ABAC) using tags in those policies, and add conditions for fine-grained access that restrict read or write access down to specific items and attributes in a table, based on identity.
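As an illustration, a policy of this kind can pin each federated user to the items whose partition key matches their own user ID and to a subset of attributes. The account ID, table name, and attribute names below are hypothetical; `dynamodb:LeadingKeys` and `dynamodb:Attributes` are the condition keys DynamoDB evaluates:

```python
# A (hypothetical) fine-grained access policy, expressed as the JSON
# document you would attach to a role via IAM.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws-cn:dynamodb:cn-north-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Partition key must equal the caller's federated user ID.
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
                # Only these attributes may be read.
                "dynamodb:Attributes": ["UserId", "Preferences"],
            },
            # Reads must explicitly request specific attributes.
            "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}
```

With this policy attached, a `Query` for another user's partition key is denied before any data is read.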

Amazon DynamoDB zero-ETL integration with Amazon Redshift

Amazon DynamoDB zero-ETL integration with Amazon Redshift provides a no-code, fully managed pipeline that replicates data from DynamoDB to Amazon Redshift. This integration lets customers seamlessly synchronize their data from DynamoDB to Redshift, eliminating the need to write custom code to build and maintain complex extract, transform, and load (ETL) pipelines. It reduces the operational burden and cost of keeping the transactional database and the data warehouse in sync, so customers can focus on their core business problems.

You can quickly create your first pipeline from the Integrations tab in the DynamoDB console. The integration leverages Amazon DynamoDB exports and Redshift’s compute to process the DynamoDB data and map it to your Redshift database table.