Services or capabilities described in this page might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China Regions. Only “Region Availability” and “Feature Availability and Implementation Differences” sections for specific services (in each case exclusive of content referenced via hyperlink) in Getting Started with Amazon Web Services in China Regions form part of the Documentation under the agreement between you and Sinnet or NWCD governing your use of services of Amazon Web Services China (Beijing) Region or Amazon Web Services China (Ningxia) Region (the “Agreement”). Any other content contained in the Getting Started pages does not form any part of the Agreement.

Amazon Redshift Documentation

Amazon Redshift is designed to accelerate your time to insights with cloud data warehousing at scale.

Analytics

Focus on getting from data to insights quickly and delivering on your business outcomes, without worrying about managing your data warehouse.

Amazon Redshift Serverless

Amazon Redshift Serverless is a serverless option of Amazon Redshift that is designed to make it easier to run and scale analytics without the need to set up and manage data warehouse infrastructure. With Redshift Serverless, users—including data analysts, developers, business professionals, and data scientists—can get insights from data by loading and querying data in the data warehouse.

Query Editor v2

Use SQL to make your Amazon Redshift data and data lake more accessible to data analysts, data engineers, and other SQL users with a web-based analyst workbench for data exploration and analysis. Query Editor v2 lets you visualize query results in a single click, create schemas and tables, load data visually, and browse database objects. It also provides an editor for authoring SQL queries, analyses, visualizations, and annotations, and for sharing them with your team.

Table Design

Amazon Redshift is designed to monitor user workloads and uses sophisticated algorithms to help you find ways to improve the physical layout of data to optimize query speeds. Automatic Table Optimization is designed to select the best sort and distribution keys to optimize performance for the cluster’s workload. If Amazon Redshift determines that applying a key will improve cluster performance, tables will be altered without requiring administrator intervention. Additional features Automatic Vacuum Delete, Automatic Table Sort, and Automatic Analyze are designed to eliminate the need for manual maintenance and tuning of Redshift clusters to get the best performance for new clusters and production workloads. 

Query using your own tools

Amazon Redshift gives you the ability to run queries within the console or connect SQL client tools, libraries, or data science tools, including Amazon QuickSight, Tableau, Power BI, Querybook, and Jupyter notebooks.

API to interact with Amazon Redshift

Amazon Redshift is designed to enable you to access data with many types of traditional, cloud-native, containerized, serverless web services-based, and event-driven applications. The Amazon Redshift Data API can help simplify data access, ingestion, and egress from programming languages and platforms supported by the Amazon SDK such as Python, Go, Java, Node.js, PHP, Ruby, and C++. The Data API helps eliminate the need for configuring drivers and managing database connections. Instead, you can run SQL commands against an Amazon Redshift cluster by calling a secured API endpoint provided by the Data API. The Data API takes care of managing database connections and buffering data. The Data API is asynchronous, so you can retrieve your results later. Your query results are stored for 24 hours.

Fault tolerant

There are multiple features that can enhance the reliability of your data warehouse cluster. For example, Amazon Redshift is designed to continuously monitor the health of the cluster, re-replicate data from failed drives, and replace nodes as necessary for fault tolerance. Clusters can also be relocated to alternative Availability Zones (AZs).

Analyze your data

Get integrated insights running real-time and predictive analytics on data across your operational databases, data lake, data warehouse and third-party data sets. 

Federated Query

With the federated query capability in Redshift, you can reach into your operational relational databases. You can query live data across one or more Amazon RDS and Aurora PostgreSQL databases to get visibility into end-to-end business operations without requiring data movement. You can join data from your Redshift data warehouse, data in your data lake, and data in your operational stores, which can help you make better data-driven decisions. Redshift is designed to provide optimizations that reduce data moved over the network, and complements them with parallel data processing for high-performance queries.
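As a sketch, a federated setup maps an operational database into Redshift as an external schema, then joins it with local tables (the schema, endpoint, role, and secret names below are placeholders):

```sql
-- Map an Aurora PostgreSQL database as an external schema (names are placeholders)
CREATE EXTERNAL SCHEMA apg_sales
FROM POSTGRES
DATABASE 'ordersdb' SCHEMA 'public'
URI 'my-aurora-cluster.cluster-abc123.cn-north-1.rds.amazonaws.com.cn'
IAM_ROLE 'arn:aws-cn:iam::123456789012:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws-cn:secretsmanager:cn-north-1:123456789012:secret:apg-creds';

-- Join live operational data with warehouse data, with no data movement required
SELECT w.customer_id, w.lifetime_spend, o.order_status
FROM analytics.customer_summary w
JOIN apg_sales.orders o ON o.customer_id = w.customer_id
WHERE o.order_date >= CURRENT_DATE - 7;
```

Predicates such as the date filter above can be pushed down to the remote database, which is how Redshift reduces the data moved over the network.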

Query and export data to and from your data lake

You can query open file formats such as Parquet, ORC, JSON, Avro, CSV, and more directly in S3 using familiar ANSI SQL. To export data to your data lake, use the Redshift UNLOAD command in your SQL code and specify Parquet as the file format; Redshift takes care of data formatting and data movement into S3. This is designed to give you the flexibility to store highly structured, frequently accessed data in a Redshift data warehouse, while also keeping structured, semi-structured, and unstructured data in S3. Exporting data from Redshift back to your data lake helps you analyze the data further with services of Amazon Web Services like Amazon Athena, Amazon EMR, and Amazon SageMaker.
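A minimal sketch of both directions, assuming an external (Spectrum) schema already exists and using placeholder table, bucket, and role names:

```sql
-- Query open-format data in S3 through an external table
SELECT event_date, COUNT(*)
FROM spectrum_schema.clickstream   -- external table over data in S3
GROUP BY event_date;

-- Export warehouse data back to the data lake as partitioned Parquet
UNLOAD ('SELECT * FROM sales WHERE sale_date >= ''2022-01-01''')
TO 's3://my-data-lake/sales/2022/'
IAM_ROLE 'arn:aws-cn:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS PARQUET
PARTITION BY (region);
```

The unloaded Parquet files can then be read by Athena, EMR, or SageMaker without further conversion.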

Services of Amazon Web Services integration

Native integration with the Amazon Web Services analytics, database, and machine learning services makes it easier to handle end-to-end analytics workflows. For example, Amazon Lake Formation is a service that helps set up a secure data lake in days. Amazon Glue can extract, transform, and load (ETL) data into Redshift. Amazon Kinesis Data Firehose can help you capture, transform, and load streaming data into Redshift for analytics. Amazon EMR is designed to process data using Hadoop/Spark and load the output into Amazon Redshift for BI and analytics. Amazon QuickSight is the BI service that you can use to create reports, visualizations, and dashboards on Redshift data. You can use Amazon Redshift to prepare your data to run machine learning (ML) workloads with Amazon SageMaker. To accelerate migrations to Amazon Redshift, you can use the Amazon Schema Conversion Tool and the Amazon Database Migration Service (DMS). Amazon Redshift is also integrated with Amazon Key Management Service (KMS) and Amazon CloudWatch for security, monitoring, and compliance. You can also use Lambda user-defined functions (UDFs) to invoke a Lambda function from your SQL queries as if you were invoking a user-defined function in Redshift. You can write Lambda UDFs to integrate with Amazon Partner services and to access other services of Amazon Web Services such as Amazon DynamoDB and Amazon SageMaker.

Partner console integration

You can accelerate data onboarding and create business insights by integrating with select Partner solutions in the Amazon Redshift console. With these solutions, you can bring data from applications into your Redshift data warehouse, join these datasets, and analyze them together to produce insights.

Data sharing

Amazon Redshift data sharing can help you scale by sharing live data across Redshift clusters. Data Sharing is designed to improve the agility of organizations by giving fast, granular and high-performance access to data inside any Redshift cluster without the need to copy or move it. Data sharing is designed to provide live access to data so your users can see information as it’s updated in the data warehouse. You can share live data with Redshift clusters in the same or different accounts of Amazon Web Services and across Regions.

Amazon Data Exchange for Amazon Redshift

Query Amazon Redshift datasets from your own Redshift cluster without extracting, transforming, and loading (ETL) the data. You can subscribe to Redshift cloud data warehouse products in Amazon Data Exchange. As soon as a provider makes an update, the change is visible to subscribers. If you are a data provider, access is granted when a subscription starts and revoked when it ends, invoices are generated when payments are due, and payments are collected through Amazon Web Services. You can license access to flat files, data in Amazon Redshift, and data delivered through APIs, all with a single subscription. 

Redshift ML

Amazon Redshift ML is designed to enable customers to use SQL statements to create and train Amazon SageMaker models on their data in Amazon Redshift and then use those models for predictions such as churn detection, financial forecasting, personalization, and risk scoring directly in their queries and reports.

Native support for advanced analytics

Amazon Redshift supports standard scalar data types such as NUMERIC, VARCHAR, and TIMESTAMP, and provides native support for the following advanced analytics processing:

  • Spatial data processing: Amazon Redshift provides a polymorphic data type, GEOMETRY, which supports multiple geometric shapes such as Point, Linestring, and Polygon. Redshift also provides spatial SQL functions to construct geometric shapes and to import, export, access, and process spatial data. You can add GEOMETRY columns to Redshift tables and write SQL queries that span spatial and non-spatial data. This capability enables you to store, retrieve, and process spatial data, and can enhance your business insights by integrating spatial data into your analytical queries. With Redshift’s ability to query data lakes, you can also extend spatial processing to data lakes by integrating external tables in spatial queries.
  • HyperLogLog sketches: HyperLogLog is an algorithm that estimates the approximate number of distinct values in a data set. An HLL sketch is a construct that encapsulates information about the distinct values in the data set. Redshift provides the HLLSKETCH data type and associated SQL functions to generate, persist, and combine HyperLogLog sketches. Amazon Redshift’s HyperLogLog capability uses bias-correction techniques and is designed to provide high accuracy with a low memory footprint.
  • DATE & TIME data types: Amazon Redshift is designed to provide the DATE, TIME, TIMETZ, TIMESTAMP, and TIMESTAMPTZ data types to natively store and process date/time data. The TIME and TIMESTAMP types store time values without time zone information, whereas the TIMETZ and TIMESTAMPTZ types store time values including time zone information. You can use various date/time SQL functions to process date and time values in Redshift queries.
  • Semi-structured data processing: The Amazon Redshift SUPER data type is designed to natively store JSON and other semi-structured data in Redshift tables, and to use the PartiQL query language to process it. The SUPER data type is schemaless in nature and allows storage of nested values that may contain Redshift scalar values, nested arrays, and nested structures. PartiQL is an extension of SQL and provides querying capabilities such as object and array navigation, unnesting of arrays, dynamic typing, and schemaless semantics. This can help you achieve advanced analytics that combine classic structured SQL data with semi-structured SUPER data.
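The advanced types above can be sketched in one table; the table, column, and coordinate values below are illustrative placeholders:

```sql
-- GEOMETRY, SUPER, and date/time columns side by side
CREATE TABLE store_events (
    store_id      INT,
    location      GEOMETRY,      -- spatial data
    event_payload SUPER,         -- semi-structured JSON
    created_at    TIMESTAMPTZ    -- time zone aware timestamp
);

-- A spatial predicate plus PartiQL navigation into the SUPER column
SELECT store_id, event_payload.device.id
FROM store_events
WHERE ST_DistanceSphere(location, ST_Point(116.4, 39.9)) < 5000;

-- Approximate distinct counts via HyperLogLog
SELECT HLL(store_id) AS approx_distinct_stores FROM store_events;
```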

Integration with third-party tools

You can enhance Amazon Redshift by working with industry-leading tools and experts for loading, transforming, and visualizing data. Amazon Partners have certified their solutions to work with Amazon Redshift. 

Performance at Scale

Gain better price performance than other cloud data warehouses with optimizations to improve query speed. 

RA3 instances

RA3 instances are designed to improve speed for performance-intensive workloads that require large amounts of compute capacity, with the flexibility to pay for compute independently of storage by specifying the number of instances you need.

AQUA (Advanced Query Accelerator) for Amazon Redshift

AQUA is a new distributed and hardware-accelerated cache that enables Redshift to run faster by boosting certain types of queries. AQUA uses solid state storage, field-programmable gate arrays (FPGAs) and Amazon Nitro to speed queries that scan, filter, and aggregate large data sets. AQUA is included with the Redshift RA3 instance type.

Storage and query processing

Amazon Redshift is designed to provide fast query performance on datasets of varying sizes. Columnar storage, data compression, and zone maps are designed to reduce the amount of I/O needed to perform queries. Along with encodings such as LZO and Zstandard, Amazon Redshift also offers the AZ64 compression encoding for numeric and date/time types, which can help you achieve both storage savings and optimized query performance.
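As a sketch, encodings can be chosen per column to match its type (table and column names are placeholders; Redshift can also pick encodings automatically if none are specified):

```sql
-- AZ64 suits numeric and date/time columns; Zstandard and LZO suit text
CREATE TABLE page_views (
    view_id    BIGINT       ENCODE az64,
    viewed_at  TIMESTAMP    ENCODE az64,
    page_url   VARCHAR(256) ENCODE zstd,
    referrer   VARCHAR(256) ENCODE lzo
);

-- Inspect the encodings applied to the table
SELECT "column", type, encoding
FROM pg_table_def
WHERE tablename = 'page_views';
```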

Concurrency

Amazon Redshift is designed to provide consistently fast performance, even with thousands of concurrent queries, whether they query data in your Amazon Redshift data warehouse or directly in your Amazon S3 data lake. Amazon Redshift Concurrency Scaling supports many concurrent users and concurrent queries by adding transient capacity as concurrency increases.

Materialized views

Amazon Redshift materialized views are designed to help you achieve faster query performance for iterative or predictable analytical workloads such as dashboarding, queries from Business Intelligence (BI) tools, and Extract, Load, Transform (ELT) data processing jobs. You can use materialized views to store and manage pre-computed results of a SELECT statement that may reference one or more tables, including external tables. Subsequent queries referencing the materialized views can run faster by reusing the precomputed results. Amazon Redshift is designed to maintain the materialized views incrementally to continue to provide the low latency performance benefits.
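A minimal sketch of the lifecycle, using placeholder table and view names:

```sql
-- Precompute an aggregate once, then serve dashboard queries from it
CREATE MATERIALIZED VIEW daily_revenue AS
SELECT sale_date, SUM(amount) AS revenue
FROM sales
GROUP BY sale_date;

-- Incrementally refresh as base-table data changes
REFRESH MATERIALIZED VIEW daily_revenue;

-- Subsequent queries reuse the precomputed results
SELECT * FROM daily_revenue WHERE sale_date >= CURRENT_DATE - 30;
```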

Automated Materialized Views

Automated Materialized Views (AutoMVs) improve query throughput, lower query latency, and shorten execution time through automatic refresh, automatic query rewrite, incremental refresh, and continuous monitoring of Amazon Redshift clusters. Amazon Redshift balances the creation and management of AutoMVs with resource utilization.

Machine learning to enhance throughput and performance

Advanced machine learning capabilities in Amazon Redshift can help deliver high throughput and performance, even with varying workloads or concurrent user activity. Amazon Redshift uses algorithms to predict and classify incoming queries based on their run times and resource requirements to dynamically manage performance and concurrency. Short query acceleration (SQA) sends short queries from applications such as dashboards to an express queue for processing, so that they are not starved behind large queries. Automatic workload management (WLM) uses machine learning to help dynamically manage memory and concurrency, helping improve query throughput. In addition, you can set the priority of your most important queries. Amazon Redshift is also designed to be a self-learning system: it observes the user workload, determines opportunities to improve performance as usage grows, applies optimizations seamlessly, and makes recommendations through Redshift Advisor when an explicit user action is needed to further enhance performance.

Result caching

Amazon Redshift uses result caching to deliver fast response times for repeat queries. Dashboard, visualization, and business intelligence tools that run repeat queries can experience a significant performance boost. When a query runs, Amazon Redshift searches the cache to see if there is a cached result from a prior run. If a cached result is found and the data has not changed, the cached result is returned quickly instead of re-running the query.
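Result caching is on by default; as a sketch, it can be disabled per session, for example when benchmarking raw query performance (`enable_result_cache_for_session` is the relevant session parameter):

```sql
-- Disable the result cache for this session only, so repeat queries
-- re-run instead of returning cached results
SET enable_result_cache_for_session TO off;
```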

Data warehousing at scale

Amazon Redshift is designed to be simple to use and to scale quickly as your needs change. Through the console or with a simple API call, you can change the number or type of nodes in your data warehouse, and scale up or down as your needs change. You can also run queries against large amounts of data in Amazon S3 without having to load or transform any data with the Redshift Spectrum feature. You can use S3 as a highly available, secure, and effective data lake to store data in open data formats. Amazon Redshift Spectrum is designed to run queries across thousands of parallelized nodes to help deliver fast results.

Flexible pricing options

Amazon Redshift is a cost-effective data warehouse, and you can optimize how you pay. You can start small for just cents per hour with no commitments, and scale out as your data and query volumes grow. Amazon Redshift offers on-demand pricing with no upfront costs, Reserved Instance pricing that can save you money by committing to a fixed term, and per-query pricing based on the amount of data scanned in your Amazon S3 data lake. Amazon Redshift’s pricing includes security, data compression, backup storage, and data transfer features. As the size of your data grows, you can use managed storage in RA3 instances to store data cost-effectively.

Predictable cost, even with unpredictable workloads

Amazon Redshift allows you to scale with minimal cost impact, as each cluster earns Concurrency Scaling credits. This provides you with the ability to predict your month-to-month cost, even during periods of fluctuating analytical demand.

Choose your node type to get the best value for your workloads

You can select from three instance types to optimize Amazon Redshift for your data warehousing needs: RA3 nodes, Dense Compute nodes, and Dense Storage nodes.

RA3 nodes let you scale storage independently of compute. With RA3, you get a data warehouse that stores data in a separate storage layer. You only need to size the data warehouse for the query performance that you need.

Dense Compute (DC) nodes allow you to create data warehouses using fast CPUs, large amounts of RAM, and solid-state disks (SSDs), and are a recommended choice for less than 500 GB of data.

Dense Storage (DS2) nodes let you create data warehouses using hard disk drives (HDDs). 

Scaling your cluster or switching between node types can be done with an API call or in the Amazon Web Services Management Console.

Security and compliance

End-to-end encryption

With the implementation of parameter settings, you can set up Amazon Redshift to use SSL to secure data in transit, and hardware-accelerated AES-256 encryption for data at rest. Amazon Redshift takes care of key management by default.

Network isolation

Amazon Redshift enables you to configure firewall rules to control network access to your data warehouse cluster. You can run Redshift inside Amazon Virtual Private Cloud (VPC) to isolate your data warehouse cluster in your own virtual network and connect it to your existing IT infrastructure using encrypted IPsec VPN.

Audit and compliance

Amazon Redshift integrates with Amazon CloudTrail to enable you to audit your Redshift API calls. Redshift logs all SQL operations, including connection attempts, queries, and changes to your data warehouse. You can access these logs using SQL queries against system tables, or choose to save the logs to a secure location in Amazon S3.
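As a sketch, the logged SQL activity can be inspected from the system tables (the `stl_query` and `stl_connection_log` tables hold recent query and connection history):

```sql
-- Recent queries, most recent first
SELECT userid, starttime, TRIM(querytxt) AS query_text
FROM stl_query
ORDER BY starttime DESC
LIMIT 20;

-- Recent connection attempts
SELECT event, recordtime, remotehost
FROM stl_connection_log
ORDER BY recordtime DESC
LIMIT 20;
```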

Tokenization

Amazon Lambda user-defined functions (UDFs) enable you to use an Amazon Lambda function as a UDF in Amazon Redshift and invoke it from Redshift SQL queries. This functionality can help you to write custom extensions for your SQL query to achieve tighter integration with other services or third-party products. You can write Lambda UDFs to enable external tokenization, data masking, identification or de-identification of data by integrating with vendors like Protegrity. 
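A minimal sketch of registering a Lambda function as a scalar UDF for tokenization; the function name, role ARN, and table are placeholders, and the Lambda function itself (which would call the tokenization vendor) is assumed to exist:

```sql
-- Register an existing Lambda function as a SQL-callable UDF
CREATE EXTERNAL FUNCTION tokenize_ssn(VARCHAR)
RETURNS VARCHAR
STABLE
LAMBDA 'my-tokenization-function'
IAM_ROLE 'arn:aws-cn:iam::123456789012:role/RedshiftLambdaRole';

-- Invoke it like any other SQL function
SELECT customer_id, tokenize_ssn(ssn) AS token
FROM customers;
```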

Granular access controls

Granular row and column level security controls are designed so that users see only the data they should have access to. Amazon Redshift is integrated with Amazon Lake Formation so that Lake Formation’s column level access controls are also enforced for Redshift queries on the data in the data lake.

Amazon Redshift Concurrency Scaling

Analytics workloads can be highly unpredictable, resulting in slower query performance and users competing for resources.

The Concurrency Scaling feature is designed to support thousands of concurrent users and concurrent queries, with consistently fast query performance. As concurrency increases, Amazon Redshift adds query processing power to process queries. Once the workload demand subsides, this extra processing power is removed.

Concurrency Scaling is designed to help you:

  1. Get consistently fast performance for thousands of concurrent queries and users.
  2. Allocate the clusters to specific user groups and workloads, and control the number of clusters that can be used.
  3. Continue to use your existing applications and Business Intelligence tools.

To enable Concurrency Scaling, set the Concurrency Scaling Mode to Auto in the Redshift Console. 

Amazon Redshift Data Sharing

Amazon Redshift data sharing is designed to extend the benefits of Amazon Redshift to multi-cluster deployments while being able to share data. Data sharing enables granular and fast data access across Amazon Redshift clusters without the need to copy or move it. Data sharing is designed to provide live access to data so that your users can see information as it’s updated in the data warehouse. You can share live data with Amazon Redshift clusters in the same or different accounts of Amazon Web Services and across Regions.

Amazon Redshift data sharing is designed to provide:

1. A simple and direct way to share data across Amazon Redshift data warehouses.

2. Fast, granular, and high-performance access without data copies and data movement.

3. Live and transactionally consistent views of data across all consumers.

4. Secure and governed collaboration within and across organizations and external parties.

Data sharing builds on Amazon Redshift RA3 managed storage, which is designed to decouple storage and compute, allowing either of them to scale independently. With data sharing, workloads accessing shared data are isolated from each other. Queries accessing shared data run on the consumer cluster and read data from the Amazon Redshift managed storage layer directly without impacting the performance of the producer cluster. Workloads accessing shared data can be provisioned with flexible compute resources that meet their workload-specific requirements and be scaled independently as needed in a self-service fashion.
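The producer/consumer flow above can be sketched as follows; the share, schema, table, database, and namespace GUID values are placeholders:

```sql
-- On the producer cluster: create a datashare and grant it to a consumer namespace
CREATE DATASHARE sales_share;
ALTER DATASHARE sales_share ADD SCHEMA public;
ALTER DATASHARE sales_share ADD TABLE public.sales;
GRANT USAGE ON DATASHARE sales_share
TO NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';

-- On the consumer cluster: surface the share as a database and query it live
CREATE DATABASE sales_db FROM DATASHARE sales_share
OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';

SELECT COUNT(*) FROM sales_db.public.sales;
```

The consumer's queries read directly from managed storage, so they do not consume compute on the producer cluster.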

Amazon Redshift Serverless

Amazon Redshift Serverless is designed to make it easier to run and scale analytics without having to manage data warehouse infrastructure. Users such as developers, data scientists, and analysts can work across databases, data warehouses, and data lakes to build reporting and dashboarding applications, perform analytics, share and collaborate on data, and build and train machine learning (ML) models. Amazon Redshift Serverless is designed to provision and scale data warehouse capacity to deliver fast performance for all workloads.  

Get insights from data

Amazon Redshift Serverless is designed to help you focus on obtaining insights by getting started quickly and running real-time or predictive analytics on all your data without managing data warehouse infrastructure.

Performance

Amazon Redshift Serverless is designed to scale data warehouse capacity up or down to deliver fast performance for all workloads.

Manage costs and budget

You can pay on a per-second basis. You can set your spend limit and manage your budget with granular spend controls. 

Get started quickly

Amazon Redshift Serverless is designed to allow you to load data and get started with your favorite BI tool.  

Amazon Redshift Security & Governance

Amazon Redshift supports industry-leading security with built-in identity management and federation for single sign-on (SSO), multi-factor authentication, column-level access control, row-level security, role-based access control, Amazon Virtual Private Cloud (Amazon VPC), and faster cluster resize. You can configure Amazon Redshift to protect data in transit and at rest.

Infrastructure security

You can control network access to your data warehouse cluster through firewall rules. Using Amazon Virtual Private Cloud (VPC), you can isolate your Redshift data warehouse cluster in your own virtual network, and connect to your existing IT infrastructure using industry-standard encrypted IPSec VPN without using public IPs or requiring traffic to traverse the Internet. You can keep your data encrypted at rest and in transit.  

Audit and compliance

Amazon Redshift integrates with Amazon CloudTrail to enable you to audit Redshift API calls. Redshift logs all SQL operations, including connection attempts, queries, and changes to your data warehouse. It is designed to deliver audit logs for analysis with minimal latency and supports Amazon CloudWatch as a log destination, so you can stream audit logs directly to Amazon CloudWatch for real-time monitoring. Amazon Redshift offers tools and security measures that customers can use to evaluate, meet, and demonstrate compliance with applicable legal and regulatory requirements.

Identity Management

Access to Amazon Redshift requires credentials that Amazon Web Services can use to authenticate your requests. Those credentials must have permissions to access Amazon Web Services resources, such as an Amazon Redshift cluster. You can use Amazon Identity and Access Management (IAM) and Amazon Redshift to help secure your resources by controlling who can access them. 

Authorization controls

Role-based Access Control (RBAC) helps you simplify the management of security privileges in Amazon Redshift and control end-user access to data at a broad or granular level based on job role, permission rights, and level of data sensitivity. You can also map database users to IAM roles for federated access. Column-level Access Control helps you manage data access at the column level. Row-level Security (RLS) allows you to restrict row access based on roles.
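A minimal sketch combining RBAC and an RLS policy; the role, user, table, and column names are placeholders, and the policy predicate is purely illustrative:

```sql
-- Role-based access: create a role, grant it privileges, assign it to a user
CREATE ROLE sales_analyst;
GRANT SELECT ON TABLE sales TO ROLE sales_analyst;
GRANT ROLE sales_analyst TO alice;

-- Row-level security: restrict which rows members of the role can see
CREATE RLS POLICY region_policy
WITH (region VARCHAR(16))
USING (region = 'cn-north-1');

ATTACH RLS POLICY region_policy ON sales TO ROLE sales_analyst;
ALTER TABLE sales ROW LEVEL SECURITY ON;
```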

Amazon Redshift Query Editor v2.0

Amazon Redshift Query Editor v2.0 is a web-based analyst workbench designed to help you explore, share, and collaborate on data in SQL through a common interface.

Amazon Redshift Query Editor v2.0 allows you to query your data using SQL and visualize your results using charts and graphs. With Amazon Redshift Query Editor v2.0, you can collaborate by sharing saved queries, results, and analyses.

Amazon Redshift is designed to help simplify organizing, documenting, and sharing multiple SQL queries with support for SQL Notebooks (preview) in Amazon Redshift Query Editor v2.0. The new Notebook interface is designed to enable users to author queries more easily, organizing multiple SQL queries and annotations on a single document. They can also share Notebooks. 

Access

Amazon Redshift Query Editor v2.0 is a web-based tool that allows you to query and analyze data without requiring permissions to access the Amazon Redshift console. 

Browsing and visualization

Use the Amazon Redshift Query Editor v2.0 navigator to browse database objects, including tables, views, and stored procedures. Use visual wizards to create tables and functions, and to load and unload data.

Query editor

Amazon Redshift Query Editor v2.0’s query editor can auto-complete commands, run multiple queries, and execute multi-statement queries with multiple results. 

Exporting and building charts

Amazon Redshift Query Editor v2.0 is designed to help you analyze and sort data without having to re-run queries, then export results as JSON/CSV, and build charts for visual analysis. 

Collaboration

You can use Amazon Redshift Query Editor v2.0’s version management for saved queries to collaborate with other SQL users using a common interface. You can collaborate and share different versions of queries, results, and charts. 

Amazon Redshift RA3 instances with managed storage

With Amazon Redshift RA3 instances with managed storage, you can choose the number of nodes based on your performance requirements. Built on the Amazon Nitro System, RA3 instances with managed storage use high performance SSDs for your hot data and Amazon S3 for your cold data.

The new RA3 instances with managed storage are designed to:

1. Allow you to pay per hour for compute and separately scale data warehouse storage capacity without adding compute resources, paying only for what you use.

2. Include AQUA, the distributed and hardware-accelerated cache that enables Redshift to run faster by boosting certain types of queries.

3. Use fine-grained data eviction and intelligent data pre-fetching to deliver fast performance while scaling storage to S3.

4. Feature high-bandwidth networking that can help reduce the time for data to be offloaded to and retrieved from Amazon S3.

Amazon Redshift ML

Amazon Redshift ML can help data analysts and database developers to create, train, and apply machine learning models using familiar SQL commands in Amazon Redshift data warehouses. With Redshift ML, you can take advantage of Amazon SageMaker, a managed machine learning service, without learning new tools or languages. Simply use SQL statements to create and train Amazon SageMaker machine learning models using your Redshift data and then use these models to make predictions. 

Because Redshift ML allows you to use standard SQL, this can help you to be productive with new use cases for your analytics data. Redshift ML provides integration between Redshift and Amazon SageMaker and enables inference within the Redshift cluster, so you can use predictions generated by ML-based models in queries and applications. There is no need to manage a separate inference model end point, and the training data is secured end-to-end with encryption.

Use ML on your Redshift data using standard SQL

To get started, use the CREATE MODEL SQL command in Redshift and specify training data either as a table or a SELECT statement. Redshift ML is designed to then compile and import the trained model inside the Redshift data warehouse and prepare a SQL inference function that can be immediately used in SQL queries. Redshift ML handles all the steps needed to train and deploy a model.
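As a sketch, training and inference might look like the following; the table, column, function, role, and bucket names are placeholders:

```sql
-- Train a churn model from warehouse data; SageMaker does the training
CREATE MODEL customer_churn
FROM (SELECT age, tenure_months, monthly_spend, churned
      FROM customer_activity
      WHERE signup_date < '2022-01-01')
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws-cn:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');

-- Use the generated inference function directly in SQL
SELECT customer_id, predict_churn(age, tenure_months, monthly_spend)
FROM customer_activity
WHERE signup_date >= '2022-01-01';
```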

Predictive analytics with Amazon Redshift

With Redshift ML, you can embed predictions like fraud detection, risk scoring, and churn prediction directly in queries and reports. Use the SQL function to apply the ML model to your data in queries, reports, and dashboards. 

Bring your own model (BYOM)

Redshift ML supports using BYOM for local or remote inference. You can use a model trained outside of Redshift with Amazon SageMaker for in-database inference locally in Amazon Redshift. You can import SageMaker Autopilot and directly trained Amazon SageMaker models for local inference. Alternatively, you can invoke remote custom ML models deployed in remote SageMaker endpoints. You can use any SageMaker ML model that accepts and returns text or CSV for remote inference.
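For the remote-inference case, a sketch might point Redshift at an existing SageMaker endpoint; the model, function, endpoint, role, and column names below are placeholders:

```sql
-- Bring your own model: bind a SQL function to a deployed SageMaker endpoint
CREATE MODEL remote_churn_model
FUNCTION remote_predict_churn(INT, INT, FLOAT)
RETURNS FLOAT
SAGEMAKER 'my-sagemaker-endpoint'
IAM_ROLE 'arn:aws-cn:iam::123456789012:role/RedshiftMLRole';

-- Each call sends the inputs to the endpoint and returns its prediction
SELECT remote_predict_churn(age, tenure_months, monthly_spend)
FROM customer_activity;
```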

Additional Information

For additional information about service controls, security features and functionalities, including, as applicable, information about storing, retrieving, modifying, restricting, and deleting data, please see https://docs.amazonaws.cn/en_us. This additional information does not form part of the Documentation for purposes of the Sinnet Customer Agreement for Amazon Web Services (Beijing Region), Western Cloud Data Customer Agreement for Amazon Web Services (Ningxia Region) or other agreement between you and Sinnet or NWCD governing your use of services of Amazon Web Services China Regions.