Services or capabilities described in this page might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China Regions. Only “Region Availability” and “Feature Availability and Implementation Differences” sections for specific services (in each case exclusive of content referenced via hyperlink) in Getting Started with Amazon Web Services in China Regions form part of the Documentation under the agreement between you and Sinnet or NWCD governing your use of services of Amazon Web Services China (Beijing) Region or Amazon Web Services China (Ningxia) Region (the “Agreement”). Any other content contained in the Getting Started pages does not form any part of the Agreement.

Amazon Kinesis Data Firehose Documentation

Amazon Kinesis Data Firehose is designed to load streaming data into data stores and analytics tools. It is designed to be a fully managed service that allows you to capture, transform, and load massive volumes of streaming data from hundreds of thousands of sources into Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service (successor to Amazon Elasticsearch Service), Amazon Kinesis Data Analytics, generic HTTP endpoints, and third-party service providers, enabling near real-time analytics and insights.

Delivery streams

A delivery stream is the underlying entity of Kinesis Data Firehose. You can use Kinesis Data Firehose by creating a delivery stream and then sending data to it.
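For example, here is a minimal sketch of sending a single record to an existing delivery stream using the Python SDK (boto3); the stream name and payload are placeholders:

    import boto3

    firehose = boto3.client("firehose")

    # "my-delivery-stream" is a placeholder for an existing delivery stream.
    response = firehose.put_record(
        DeliveryStreamName="my-delivery-stream",
        Record={"Data": b'{"event": "page_view", "user": "alice"}\n'},
    )
    print(response["RecordId"])  # identifier Firehose assigned to the record

For higher throughput, you can also send up to 500 records per call with put_record_batch.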

Key features

Launch and configuration

You can use the Amazon Web Services Management Console to launch Amazon Kinesis Data Firehose and create a delivery stream that loads data into Amazon S3, Amazon Redshift, Amazon OpenSearch Service, HTTP endpoints, or third-party service providers. You can send data to the delivery stream by calling the Firehose API or by running the Linux agent we provide on the data source. Kinesis Data Firehose is designed to then load the data into the specified destinations.
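As an illustration, here is a sketch of creating an Amazon S3 delivery stream through the API with the Python SDK; the stream name, IAM role ARN, and bucket ARN are placeholders for your own resources:

    import boto3

    firehose = boto3.client("firehose")

    # All names and ARNs below are placeholders ("aws-cn" is the partition
    # used by the China Regions).
    firehose.create_delivery_stream(
        DeliveryStreamName="my-delivery-stream",
        DeliveryStreamType="DirectPut",  # producers write directly to the stream
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws-cn:s3:::my-destination-bucket",
        },
    )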

Load new data

You can specify a batch size or batch interval to control how quickly data is uploaded to destinations. For example, you can set the batch interval to 60 seconds if you want to receive new data within 60 seconds of sending it to your delivery stream. Additionally, you can specify whether data should be compressed. The service is designed to support common compression algorithms. Batching and compressing data before upload lets you control how quickly you receive new data at the destinations.
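For example, the batch size, batch interval, and compression format are set on the destination configuration when the stream is created; the values below are illustrative, and the names and ARNs are placeholders:

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="my-delivery-stream",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws-cn:s3:::my-destination-bucket",
            # Deliver a batch once 5 MB accumulate or 60 seconds elapse,
            # whichever comes first, and GZIP-compress each object.
            "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
            "CompressionFormat": "GZIP",
        },
    )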

Elastic scaling to handle varying data throughput

The service is designed so that, once launched, your delivery streams can scale up and down to handle varying volumes of input data and maintain data latency at the levels you specify for the stream, within the service limits.

Apache Parquet or ORC format conversion

Kinesis Data Firehose supports columnar data formats such as Apache Parquet and Apache ORC, which can be used for storage and analytics with other Amazon Web Services or third-party services. Kinesis Data Firehose is designed so that it can convert the format of incoming data from JSON to Parquet or ORC before storing the data in Amazon S3, which can help you save on storage and analytics costs.
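Here is a sketch of the conversion settings, assuming the target schema is defined in an existing Amazon Glue Data Catalog table; the role, database, and table names are placeholders. This dictionary is passed inside ExtendedS3DestinationConfiguration when the stream is created:

    # Convert incoming JSON records to Parquet before delivery to Amazon S3.
    data_format_conversion = {
        "Enabled": True,
        "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
        "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
        # Firehose reads the target schema from a Glue Data Catalog table.
        "SchemaConfiguration": {
            "RoleARN": "arn:aws-cn:iam::111122223333:role/firehose-delivery-role",
            "DatabaseName": "my_glue_database",
            "TableName": "my_table",
        },
    }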

Deliver partitioned data to S3

You can dynamically partition your streaming data before delivery to Amazon S3 using static or dynamically defined keys such as “customer_id” or “transaction_id”. Kinesis Data Firehose is designed to group data by these keys and deliver it into key-unique Amazon S3 prefixes, helping you perform high-performance, cost-efficient analytics in Amazon S3.
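As a sketch, the S3 destination settings below enable dynamic partitioning and extract a customer_id key from each JSON record with a JQ expression; the key and prefix names are placeholders. These entries also go inside ExtendedS3DestinationConfiguration:

    dynamic_partitioning_settings = {
        "DynamicPartitioningConfiguration": {"Enabled": True},
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                # Pull customer_id out of each JSON record with JQ 1.6.
                "Type": "MetadataExtraction",
                "Parameters": [
                    {"ParameterName": "MetadataExtractionQuery",
                     "ParameterValue": "{customer_id: .customer_id}"},
                    {"ParameterName": "JsonParsingEngine",
                     "ParameterValue": "JQ-1.6"},
                ],
            }],
        },
        # Each group of records lands under a key-unique S3 prefix.
        "Prefix": "data/customer_id=!{partitionKeyFromQuery:customer_id}/",
        "ErrorOutputPrefix": "errors/",
    }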

Integrated data transformations

You can configure Amazon Kinesis Data Firehose to prepare your streaming data before it is loaded to data stores. Select an Amazon Lambda function from the Amazon Kinesis Data Firehose delivery stream configuration tab in the Amazon Web Services Management Console. Amazon Kinesis Data Firehose is designed to apply that function to every input data record and load the transformed data to destinations. Amazon Kinesis Data Firehose is designed to provide pre-built Lambda blueprints for converting common data sources such as Apache logs and system logs to JSON and CSV formats. You can use these pre-built blueprints without change, customize them further, or write your own custom functions. You can also configure Amazon Kinesis Data Firehose to automatically retry failed jobs and back up the raw streaming data.
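For illustration, here is a minimal transformation function in the shape the blueprints follow: records arrive base64-encoded, and each must be returned with its recordId, a result status, and the re-encoded data. The upper-casing transform is only a placeholder:

    import base64

    def lambda_handler(event, context):
        """Transform each record in a Kinesis Data Firehose batch."""
        output = []
        for record in event["records"]:
            payload = base64.b64decode(record["data"])
            transformed = payload.upper()  # placeholder transformation
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",  # or "Dropped" / "ProcessingFailed"
                "data": base64.b64encode(transformed).decode("utf-8"),
            })
        return {"records": output}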

Support for multiple data destinations

Amazon Kinesis Data Firehose currently supports destinations including Amazon S3, Amazon Redshift, Amazon OpenSearch Service, HTTP endpoints, and certain third-party providers. You can specify the destination Amazon S3 bucket, the Amazon Redshift table, the Amazon OpenSearch Service domain, generic HTTP endpoints, or a service provider where the data should be loaded.

Optional encryption

Amazon Kinesis Data Firehose provides you the option to have your data encrypted after it is uploaded to the destination. As part of the delivery stream configuration, you can specify an Amazon Key Management Service (KMS) encryption key.
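Here is a sketch of that setting on the S3 destination configuration; the key ARN is a placeholder:

    # Goes inside ExtendedS3DestinationConfiguration; Firehose then has S3
    # encrypt delivered objects with this KMS key.
    encryption_settings = {
        "EncryptionConfiguration": {
            "KMSEncryptionConfig": {
                "AWSKMSKeyARN": (
                    "arn:aws-cn:kms:cn-north-1:111122223333:"
                    "key/1234abcd-12ab-34cd-56ef-1234567890ab"
                )
            }
        }
    }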

Metrics for monitoring performance

Amazon Kinesis Data Firehose is designed to expose several metrics through the console, as well as through Amazon CloudWatch, including the volume of data submitted, the volume of data uploaded to the destination, the time from source to destination, the delivery stream limits, the number of throttled records, and the upload success rate. You can use these metrics to monitor the health of your delivery streams, take necessary actions such as modifying destinations, set alarms when you approach the limits, and confirm that the service is ingesting data and loading it to destinations.
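For example, here is a sketch of reading one of these metrics (DeliveryToS3.DataFreshness, the age in seconds of the oldest record not yet delivered to Amazon S3) from Amazon CloudWatch; the stream name is a placeholder:

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")

    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Firehose",
        MetricName="DeliveryToS3.DataFreshness",  # seconds, per stream
        Dimensions=[{"Name": "DeliveryStreamName",
                     "Value": "my-delivery-stream"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Maximum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Maximum"])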

Additional Information

For additional information about service controls, security features and functionalities, including, as applicable, information about storing, retrieving, modifying, restricting, and deleting data, please see https://docs.amazonaws.cn/en_us/. This additional information does not form part of the Documentation for purposes of the Sinnet Customer Agreement for Amazon Web Services (Beijing Region), Western Cloud Data Customer Agreement for Amazon Web Services (Ningxia Region) or other agreement between you and Sinnet or NWCD governing your use of services of Amazon Web Services China Regions.