General
Q: What is Amazon DataSync?
A: Amazon DataSync is an online data movement and discovery service that simplifies and accelerates data migrations to Amazon Web Services and helps you move data to and from on-premises storage, edge locations, other cloud providers, and Amazon Web Services Storage services.
For online data transfers, Amazon DataSync simplifies, automates, and accelerates copying large amounts of data between on-premises storage, edge locations, or other clouds, and Amazon Web Services Storage services, as well as between Amazon Web Services Storage services. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, Azure Files, Azure Blob Storage including Azure Data Lake Storage Gen2, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Windows File Server file systems, and Amazon FSx for Lustre file systems.
Q: Why should I use Amazon DataSync?
A: Amazon DataSync enables you to discover and move your data securely and quickly. Using DataSync Discovery (Preview), you can better understand your on-premises storage utilization and receive recommendations to inform your cost estimates and plans for migrating to Amazon Web Services. For data movement, you can use DataSync to copy large datasets with millions of files, without having to build custom solutions with open-source tools or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to Amazon Web Services, archive data to free up on-premises storage capacity, replicate data to Amazon Web Services for business continuity, or transfer data to the cloud for analysis and processing.
Q: What problem does Amazon DataSync solve for me?
A: Amazon DataSync reduces the complexity and cost of online data transfer, making it simple to transfer datasets between on-premises, edge, or other cloud storage and Amazon Storage services, as well as between Amazon Storage services. DataSync connects to existing storage systems and data sources with standard storage protocols (NFS, SMB), as an HDFS client, or using the Amazon S3 API. It uses a purpose-built network protocol and scale-out architecture to accelerate data transfer between on-premises storage systems and Amazon Storage services. DataSync automatically scales and handles moving files and objects, scheduling data transfers, monitoring the progress of transfers, encryption, verification of data transfers, and notifying customers of any issues. With DataSync you pay only for the amount of data copied, with no minimum commitments or upfront fees.
Data movement
Q: Where can I move data to and from?
A: DataSync supports the following storage location types: Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, Azure Files, Azure Blob Storage including Azure Data Lake Storage Gen2, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems.
Q: How do I use Amazon DataSync to migrate data to Amazon Web Services?
A: You can use Amazon DataSync to migrate data located on premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, and Amazon FSx for Lustre. Configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data until the final cut-over from on-premises to Amazon Storage services. DataSync includes encryption and integrity validation to help make sure your data arrives securely, intact, and ready to use. To minimize impact on workloads that rely on your network connection, you can schedule your migration to run during off-hours, or limit the amount of network bandwidth that DataSync uses by configuring the built-in bandwidth throttle. DataSync preserves metadata between storage systems that have similar metadata structures, enabling a smooth transition of end users and applications to using your target Amazon Web Services Storage service. Read the storage blog, "Migrating storage with Amazon DataSync," to learn more about migration best practices and tips.
Q: How do I use Amazon DataSync to archive cold data?
A: You can use Amazon DataSync to move cold data from on-premises storage systems directly to durable and secure long-term storage, such as Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier) or Amazon S3 Glacier Deep Archive. Use DataSync’s exclude filters to exclude copying temporary files and folders or use include filters or manifests to copy only a subset of files from your source location. You can select the most cost-effective storage service for your needs: transfer data to any S3 storage class, or use DataSync with EFS Lifecycle Management to store data in Amazon EFS Infrequent Access storage class (EFS IA). Use the built-in task scheduling functionality to regularly archive data that should be retained for compliance or auditing purposes, such as logs, raw footage, or electronic medical records.
Q: How do I use Amazon DataSync to replicate data to Amazon Web Services for business continuity?
A: With Amazon DataSync, you can periodically replicate files into any Amazon S3 storage classes, or send the data to Amazon EFS, Amazon FSx for Windows File Server, and Amazon FSx for Lustre for a standby file system. Use the built-in task scheduling functionality to ensure that changes to your dataset are regularly copied to your destination storage. Read this Amazon Storage blog to learn more about data protection using Amazon DataSync.
Q: How do I use Amazon DataSync for recurring transfers between on-premises and Amazon Storage services for ongoing workflows?
A: You can use Amazon DataSync for ongoing transfers from on-premises systems into or out of Amazon Storage services for processing. DataSync can help speed up your critical hybrid cloud storage workflows in industries that need to move active files into Amazon Storage quickly. This includes machine learning in life sciences, video production in media and entertainment, big data analytics in financial services, and seismic research in oil and gas. DataSync provides timely delivery to ensure dependent processes are not delayed. You can specify include and exclude filters or manifests to specify which files or objects should be transferred each time your task runs.
Q: Can I use Amazon DataSync to copy data from other clouds to Amazon Web Services?
A: Yes. Using Amazon DataSync, you can copy data from Azure Files using the SMB protocol or from Azure Blob Storage including Azure Data Lake Storage Gen2. Deploy the DataSync agent in your cloud environment or on Amazon EC2, create your source and destination locations, and then start your task to begin copying data.
Q: How do I use Amazon DataSync to transfer data between Amazon Storage services?
A: You can use DataSync to transfer files or objects between Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, and Amazon FSx for Lustre within the same Amazon Web Services account. You can transfer data between Amazon Web Services Storage services in the same Amazon Web Services Region. This does not require deploying a DataSync agent, and can be configured end to end using the Amazon DataSync console, Command Line Interface (CLI), or Software Development Kit (SDK).
Usage
Q: How do I get started moving my data with Amazon DataSync?
A: You can transfer data using Amazon DataSync with a few clicks in the Amazon Web Services Management Console or through the Amazon Command Line Interface (CLI). To get started, follow these three steps:
1. To transfer data between on-premises, edge, or other cloud storage systems and Amazon Storage services, deploy an agent - Deploy a DataSync agent and associate it with your Amazon Web Services account via the Management Console or API. The agent will be used to access your NFS server, SMB file share, Hadoop cluster, or self-managed or cloud object storage to read data from it or write data to it. Deploying an agent is not required to transfer data between Amazon Storage services within the same Amazon Web Services account.
2. Create a data transfer task - Create a task by specifying the location of your data source and destination, and any options you want to use to configure the transfer, such as scheduling the task and enabling task reports.
3. Start the transfer - Start the task, monitor data movement in the console or with Amazon CloudWatch, and audit transfer tasks using task reports.
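The three steps above can be sketched as the underlying API calls. The following is a minimal illustration using the parameter shapes from the boto3 SDK; all ARNs, hostnames, and bucket names are placeholders, not real resources, and the dictionaries are shown without actually calling the service:

```python
# Sketch of the DataSync API calls behind the three getting-started steps.
# Placeholder ARNs and names throughout; shapes follow the boto3 DataSync client.

# Step 1: an agent (deployed separately) is referenced by its ARN when
# creating an on-premises location, here an NFS share.
create_location_nfs_params = {
    "ServerHostname": "nfs.example.com",  # placeholder hostname
    "Subdirectory": "/exports/data",
    "OnPremConfig": {
        "AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"]
    },
}

# An S3 destination needs the bucket ARN and an IAM role DataSync assumes.
create_location_s3_params = {
    "S3BucketArn": "arn:aws:s3:::example-bucket",  # placeholder bucket
    "S3Config": {"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
}

# Step 2: a task ties a source location to a destination location;
# Options (such as the verification mode) are optional.
create_task_params = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-0src",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-0dst",
    "Options": {"VerifyMode": "ONLY_FILES_TRANSFERRED"},
}

# Step 3: start the task; monitoring then happens in the console or CloudWatch.
# e.g. boto3.client("datasync").start_task_execution(**start_params)
start_params = {"TaskArn": "arn:aws:datasync:us-east-1:111122223333:task/task-0example"}
```

In practice each `create_*` call returns the ARN that the next call consumes.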
Q: How do I deploy an Amazon DataSync agent?
A: You deploy an Amazon DataSync agent to your on-premises hypervisor, in your public cloud environment, or in Amazon EC2. To copy data to or from an on-premises file server, you download the agent virtual machine image from the Amazon Web Services Console and deploy it to your on-premises VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. When a DataSync agent is used, it must be deployed so that it can access your file server using the NFS or SMB protocol, access the NameNodes and DataNodes in your Hadoop cluster, or access your self-managed object storage using the Amazon S3 API. Deploying an agent is not required to transfer data between Amazon Web Services Storage services within the same Amazon Web Services account.
Q: How do I start an Amazon DataSync data transfer task?
A: Amazon DataSync copies data when you initiate a task via the Amazon Web Services Management Console or Amazon Web Services Command Line Interface (CLI). Each time a task runs, it scans the source and destination for changes, and performs a copy of any data and metadata differences between the source and the destination. You can configure which characteristics of the source are used to determine what changed, define include and exclude filters or manifests to transfer specific file and object data, and control whether files or objects in the destination should be overwritten when changed in the source or deleted when not found in the source.
Q: How does Amazon DataSync ensure my data is copied correctly?
A: As Amazon DataSync transfers and stores data, it performs integrity checks to ensure the data written to the destination matches the data read from the source. Additionally, an optional verification check can be performed to compare source and destination at the end of the transfer. DataSync will calculate and compare full-file checksums of the data stored in the source and in the destination. You can check either the entire dataset or just the files or objects that DataSync transferred.
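DataSync performs this verification internally, but the idea of a full-file checksum comparison can be illustrated with a small standalone sketch (this is not DataSync code; the hash algorithm and file names here are illustrative assumptions):

```python
import hashlib
import os
import tempfile

def file_checksum(path: str) -> str:
    """Full-file SHA-256, read in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(source_path: str, dest_path: str) -> bool:
    # A transfer verifies when the source and destination checksums match.
    return file_checksum(source_path) == file_checksum(dest_path)

# Tiny demonstration with two local files standing in for source and destination.
src = os.path.join(tempfile.mkdtemp(), "src.bin")
dst = os.path.join(tempfile.mkdtemp(), "dst.bin")
for p in (src, dst):
    with open(p, "wb") as f:
        f.write(b"hello datasync")
ok = verify(src, dst)  # identical contents, so the check passes
```

Checking only transferred files, as the FAQ notes, trades completeness for speed and lower read traffic on both ends.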
Q: How can I audit and monitor the status of data being transferred by Amazon DataSync?
A: You can use task reports to audit your data transfer processes by verifying the transfer operations across all of your task executions. Using task reports, you can get a summary report along with detailed reports for all files transferred, skipped, verified, and deleted, for each task execution. Task reports give you the total number of files and bytes transferred, and include file attributes such as size, path, timestamps, file checksums, and object version IDs where applicable. You can also leverage Amazon Glue and Amazon Athena to automatically catalog and query task reports to gain critical insights into your data transfer processes.
You can use the Amazon Web Services Management Console or CLI to monitor the status and progress of data being transferred. Using Amazon CloudWatch Metrics, you can see the number of files and amount of data which has been copied. You can also enable logging of individual files to CloudWatch Logs, to identify what was transferred at a given time, as well as the results of the content integrity verification performed by DataSync.
These solutions together simplify auditing, monitoring, reporting, and troubleshooting, and enable you to provide timely updates to stakeholders.
Q: Can I filter the files and folders that Amazon DataSync transfers?
A: Yes. You can specify an exclude filter, an include filter, or both to limit which files, folders, or objects are transferred each time a task runs. Alternatively, you can use manifests to specify a subset of files or objects that should be transferred from your source location.
Include filters specify the file and folder paths or object keys that should be included when the task runs and limits the scope of what is scanned by DataSync on the source and destination. Exclude filters specify the file and folder paths or object keys that should be excluded from being copied. When creating or updating a task, you can configure both exclude and include filters. When starting a task, you can override and update the filters configured on the task. Read this Amazon Web Services Storage blog to learn more about using common filters with DataSync.
A manifest is a CSV-formatted file that lists the file paths or object keys that should be included when the task runs and limits the scope of what is scanned by DataSync on the source and destination. When creating or updating a task, you can provide a manifest file with millions of source files or objects, and DataSync will only compare and transfer the files listed in the manifest. When starting a task, you can override and update the manifest file. When copying data from Amazon S3, you can also specify an optional S3 version ID of each object to transfer. Read this blog for more details.
Note that filters and manifests cannot be used together.
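The filter semantics described above can be illustrated with a small standalone sketch. This is not DataSync code: DataSync evaluates filters internally, and its exact wildcard rules may differ in edge cases; the pipe-separated multi-pattern form and the paths below are illustrative assumptions:

```python
from fnmatch import fnmatch

def matches_filter(path: str, filter_string: str) -> bool:
    """Simplified illustration of DataSync-style filters: multiple patterns
    separated by '|', each an entire path or a pattern with a '*' wildcard."""
    return any(fnmatch(path, pattern.strip()) for pattern in filter_string.split("|"))

# Exclude temporary files and everything under a scratch folder:
exclude = "*.tmp|/scratch/*"
paths = ["/data/report.csv", "/data/build.tmp", "/scratch/run1/out.bin"]
kept = [p for p in paths if not matches_filter(p, exclude)]
# kept == ["/data/report.csv"]
```

An include filter works the same way in reverse: only paths matching some pattern are considered for transfer.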
Q: How is using a manifest file different from using include filters?
A: Whereas a manifest is an explicit list of files or objects to be transferred from the source location, an include filter is a string specifying patterns of files and folders to be transferred from the source. Only files and folders that match the patterns in the filter are copied. A pattern can be an entire file or folder path, or a prefix ending with a wildcard (*) character, indicating that all files or objects that match the prefix should be copied. Include filters are ideal for customers that only want to copy a small set of files or objects, or a few specific folders. Customers with well-known datasets, such as those moved as part of an automated workflow, can use manifests to avoid scanning their entire file or object storage systems to determine changes. Using a manifest file, customers can specify millions of source files or objects to be transferred, and DataSync will only compare the files listed in the manifest. Customers can also use manifests to copy specific versions of objects from their Amazon S3 bucket.
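Since a manifest is a CSV file of paths or object keys, producing one is straightforward. The sketch below writes a manifest for two hypothetical S3 objects, one pinned to a specific version ID (the keys and version ID are placeholders; confirm the exact column format DataSync expects in its documentation):

```python
import csv
import os
import tempfile

# Hypothetical S3 object keys to transfer, with an optional version ID.
objects = [
    ("photos/2023/img001.jpg", "3HL4kqtJlcpXroDTDmJ"),  # placeholder version ID
    ("photos/2023/img002.jpg", None),                    # latest version
]

manifest_path = os.path.join(tempfile.mkdtemp(), "manifest.csv")
with open(manifest_path, "w", newline="") as f:
    writer = csv.writer(f)
    for key, version_id in objects:
        # One object per row; the version ID column is optional.
        writer.writerow([key, version_id] if version_id else [key])

with open(manifest_path) as f:
    lines = f.read().splitlines()
```

The finished file is uploaded to S3 and referenced when creating, updating, or starting the task.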
Q: Can I configure Amazon DataSync to transfer on a schedule?
A: Yes. You can schedule your tasks using the Amazon DataSync Console or Amazon Web Services Command Line Interface (CLI), without needing to write and run scripts to manage repeated transfers. Task scheduling automatically runs tasks on the schedule you configure, with hourly, daily, or weekly options provided directly in the Console. This enables you to ensure that changes to your dataset are automatically detected and copied to your destination storage.
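In the API, a schedule is attached to the task as a schedule expression. A minimal parameter sketch, assuming the boto3 shape and a placeholder task ARN:

```python
# Parameter sketch for attaching a schedule to an existing task
# (boto3 update_task shape); the task ARN is a placeholder.
update_task_params = {
    "TaskArn": "arn:aws:datasync:us-east-1:111122223333:task/task-0example",
    # Run every Sunday at 12:00 UTC, using a cron-style schedule expression.
    "Schedule": {"ScheduleExpression": "cron(0 12 ? * SUN *)"},
}
```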
Q: Does Amazon DataSync preserve the directory structure when copying files?
A: Yes. When transferring files, Amazon DataSync creates the same directory structure on the destination as exists on the source location.
Q: What happens if an Amazon DataSync task is interrupted?
A: If a task is interrupted, for instance, if the network connection goes down or the Amazon DataSync agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run. Each time a task is started it performs an incremental copy, transferring only the changes from the source to the destination.
Q: Can I use Amazon DataSync with Amazon Direct Connect?
A: You can use Amazon DataSync with your Direct Connect link to access public service endpoints or private VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and Amazon Web Services does not traverse the public internet or need public IP addresses, increasing the security of data as it is copied over the network.
Q: Does Amazon DataSync support VPC endpoints or Amazon PrivateLink?
A: Yes, VPC endpoints are supported for data movement use cases. You can use VPC endpoints to ensure data transferred between your Amazon DataSync agent, either deployed on-premises or in-cloud, doesn't traverse the public internet or need public IP addresses. Using VPC endpoints increases the security of your data by keeping network traffic within your Amazon Virtual Private Cloud (Amazon VPC). VPC endpoints for DataSync are powered by Amazon PrivateLink, a highly available, scalable technology that enables you to privately connect your VPC to supported Amazon Web Services services.
Q: How do I configure Amazon DataSync to use VPC endpoints?
A: To use VPC endpoints with Amazon DataSync, you create an Amazon PrivateLink interface VPC endpoint for the DataSync service in your chosen VPC, and then choose this endpoint elastic network interface (ENI) when creating your DataSync agent. Your agent will connect to this ENI to activate, and subsequently all data transferred by the agent will remain within your configured VPC. You can use either the Amazon DataSync Console, Amazon Command Line Interface (CLI), or Amazon SDK, to configure VPC endpoints. To learn more, see Using Amazon DataSync in a Virtual Private Cloud.
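The endpoint itself is a standard interface VPC endpoint for the DataSync service. A parameter sketch, assuming the boto3 EC2 shape; the VPC, subnet, and security group IDs are placeholders:

```python
# Parameter sketch (boto3 ec2 create_vpc_endpoint shape) for the interface
# endpoint DataSync uses with PrivateLink. All IDs below are placeholders.
create_endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0example",
    # The service name is per-Region: com.amazonaws.<region>.datasync
    "ServiceName": "com.amazonaws.us-east-1.datasync",
    "SubnetIds": ["subnet-0example"],
    "SecurityGroupIds": ["sg-0example"],
}
```

The ENI that this endpoint creates is what you select when activating the DataSync agent.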
Moving to and from Amazon Storage
Q: Which Amazon Web Services Storage services are supported by Amazon DataSync?
A: Amazon DataSync supports moving data to, from, or between Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), Amazon FSx for Windows File Server, and Amazon FSx for Lustre.
Amazon S3
Q: Can I copy my data into Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier), Amazon S3 Glacier Deep Archive, or other S3 storage classes?
A: Yes. When configuring an S3 bucket for use with Amazon DataSync, you can select the S3 storage class that DataSync uses to store objects. DataSync supports storing data directly into S3 Standard, S3 Intelligent-Tiering, S3 Standard-Infrequent Access (S3 Standard-IA), S3 One Zone-Infrequent Access (S3 One Zone-IA), Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive. More information on Amazon S3 storage classes can be found in the Amazon Simple Storage Service Developer Guide.
Objects smaller than the minimum charge capacity per object will be stored in S3 Standard. For example, folder objects, which are zero-bytes in size and hold only metadata, will be stored in S3 Standard. Read about considerations when working with Amazon S3 storage classes in our documentation, and for more information on minimum charge capacities see Amazon S3 Pricing.
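The storage class is set on the S3 location. A parameter sketch, assuming the boto3 shape with placeholder ARNs:

```python
# Parameter sketch (boto3 create_location_s3 shape) for archiving directly
# into S3 Glacier Deep Archive; bucket and role ARNs are placeholders.
archive_location_params = {
    "S3BucketArn": "arn:aws:s3:::example-archive-bucket",
    "S3Config": {"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
    # Storage class applied to objects DataSync writes to this location.
    "S3StorageClass": "DEEP_ARCHIVE",
}
```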
Q: Can I copy data out of S3 Standard-IA and S3 One Zone-IA storage classes?
A: Yes. When using S3 as the source location for an Amazon DataSync task, the service will retrieve all objects from the bucket which need to be copied to the destination. Retrieving objects from S3 Standard-IA and S3 One Zone-IA storage will incur a retrieval fee based on the size of the objects. Read about considerations when working with Amazon S3 storage classes in our documentation.
Q: Can I copy data out of Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier) and Amazon S3 Glacier Deep Archive?
A: When using S3 as the source location for an Amazon DataSync task, the service will attempt to retrieve all objects from the bucket which need to be copied to the destination. Retrieving objects which are archived in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class results in an error. Any errors retrieving archived objects will be logged by DataSync and will result in a failed task completion status. Read about considerations when working with Amazon S3 storage classes in our documentation.
Q: How does Amazon DataSync access my Amazon S3 bucket?
A: Amazon DataSync assumes an IAM role that you provide. The policy you attach to the role determines which actions the role can perform. DataSync can auto generate this role on your behalf or you can manually configure a role.
Q: How does Amazon DataSync convert files and folders to or from objects in Amazon S3?
A: When files or folders are copied to Amazon S3, there is a one-to-one relationship between a file or folder and an object. File and folder timestamps and POSIX permissions, including user ID, group ID, and permissions, are stored in S3 user metadata. For NFS shares, file metadata stored in S3 user metadata is fully interoperable with File Gateway, providing on-premises file-based access to data stored in Amazon S3 by Amazon DataSync.
When DataSync copies objects that contain this user metadata back to an NFS server, the file metadata is restored. Symbolic links and hard links are also restored when copying from S3 back to NFS.
When copying from an SMB file share, default POSIX permissions are stored in S3 user metadata. When copying back to an SMB file share, ownership is set based on the user that was configured in DataSync to access that file share, and default permissions are assigned.
When copying from HDFS, file and folder timestamps, user and group ownership, and POSIX permissions are stored in S3 user metadata. When copying from Amazon S3 back to HDFS, file and folder metadata are restored.
Learn more about how DataSync stores files and metadata in our documentation.
Q: What object metadata is preserved when transferring objects between self-managed object storage or Azure Blob Storage and Amazon S3?
A: When transferring objects between self-managed object storage or Azure Blob Storage and Amazon S3, DataSync copies objects together with object metadata and tags.
Q: What object metadata is preserved when transferring objects between Amazon S3 buckets?
A: When transferring objects between Amazon S3 buckets, DataSync copies objects together with object metadata and tags. DataSync does not copy other object information such as object ACLs or prior object versions.
Q: Which Amazon S3 request and storage costs apply when using S3 storage classes with Amazon DataSync?
A: Some S3 storage classes have behaviors that can affect your cost, such as data retrieval, minimum storage capacities, and minimum storage durations. DataSync automates management of data to address these factors, and provides settings to minimize data retrieval.
To avoid minimum capacity charge per object, Amazon DataSync automatically stores small objects in S3 Standard. To minimize data retrieval fees, you can configure DataSync to verify only files that were transferred by a given task. To avoid minimum storage duration charges, DataSync has controls for overwriting and deleting objects. Read about considerations when working with Amazon S3 storage classes in our documentation.
Amazon EFS
Q: How does Amazon DataSync access my Amazon EFS file system?
A: Amazon DataSync accesses your Amazon EFS file system using the NFS protocol. The DataSync service mounts your file system from within your VPC using Elastic Network Interfaces (ENIs) managed by the DataSync service. DataSync fully manages the creation, use, and deletion of these ENIs on your behalf. You can choose to mount your EFS file system using a mount target or an EFS Access Point.
Q: Can I use Amazon DataSync with all Amazon EFS storage classes?
A: Yes. You can use Amazon DataSync to copy files into Amazon EFS and configure EFS Lifecycle Management to migrate files that have not been accessed for a set period of time to the Infrequent Access (IA) storage class.
Q: How do I use Amazon DataSync with Amazon EFS file system resource policies?
A: You can use both IAM identity policies and resource policies to control client access to Amazon EFS resources in a way that is scalable and optimized for cloud environments. When you create a DataSync location for your EFS file system, you can specify an IAM role that DataSync will assume when accessing EFS. You can then use EFS file system policies to configure access for the IAM role. Because DataSync mounts EFS file systems as the root user, your IAM policy must allow the following action: elasticfilesystem:ClientRootAccess.
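A file system policy statement granting that access might look like the following sketch. The role and file system ARNs are placeholders, and the ClientMount and ClientWrite actions are shown alongside the required ClientRootAccess as a common pairing, not a mandated set:

```python
import json

# Sketch of an EFS file system policy statement for a DataSync IAM role.
# All ARNs are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/DataSyncEfsRole"},
        "Action": [
            "elasticfilesystem:ClientRootAccess",  # required: DataSync mounts as root
            "elasticfilesystem:ClientMount",
            "elasticfilesystem:ClientWrite",
        ],
        "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0example",
    }],
}
policy_json = json.dumps(policy, indent=2)
```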
Q: Can I use Amazon DataSync to replicate my Amazon EFS file system to a different Amazon Web Services China Region?
A: Yes. In addition to the built-in replication provided by Amazon EFS, you can also use Amazon DataSync to schedule periodic replication of your Amazon EFS file system to a second Amazon EFS file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.
Q: What metadata is preserved when copying data between an NFS share and Amazon EFS, or between two Amazon EFS file systems?
A: Amazon DataSync copies file and folder timestamps and POSIX permissions, including user ID, group ID, and permissions. You can learn more and see the complete list of copied metadata in our documentation.
Q: What metadata is preserved when copying data between HDFS and Amazon EFS?
A: Amazon DataSync copies file and folder timestamps and POSIX permissions and applies default values for user ID and group ID. You can learn more and see the complete list of copied metadata in our documentation.
Amazon FSx for Windows File Server
Q: How does Amazon DataSync access my Amazon FSx for Windows File Server file system?
A: Amazon DataSync accesses your Amazon FSx for Windows File Server file system using the SMB protocol, authenticating with the username and password you configure in the Amazon Web Services Console or CLI. The DataSync service mounts your file system from within your VPC using Elastic Network Interfaces (ENIs) managed by the DataSync service. DataSync fully manages the creation, use, and deletion of these ENIs on your behalf.
Q: What Windows metadata is transferred when copying between an SMB share to Amazon FSx for Windows File Server file system, or between two Amazon FSx file systems?
A: Amazon DataSync copies Windows metadata, including file timestamps, file owner, standard file attributes, NTFS discretionary access lists (DACLs), and NTFS system access control lists (SACLs). You can learn more and see the complete list of copied metadata in our documentation.
Q: Can I use Amazon DataSync to replicate my Amazon FSx for Windows File Server file system to a different Amazon Web Services China Region?
A: Yes. You can use Amazon DataSync to schedule periodic replication of your Amazon FSx for Windows File Server file system to a second file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.
Amazon FSx for Lustre
Q: How does Amazon DataSync access my Amazon FSx for Lustre file system?
A: When you create a DataSync task to copy to or from your FSx for Lustre file system, the DataSync service will create Elastic Network Interfaces (ENIs) in the same VPC and subnet where your file system is located. DataSync uses these ENIs to access your FSx for Lustre file system using the Lustre protocol as the root user. When you create a DataSync location resource for your FSx for Lustre file system, you can specify up to five security groups to apply to the ENIs and configure outbound access from the DataSync service. The security groups must be configured to allow outbound traffic on the network ports required by FSx for Lustre. The security groups on your FSx for Lustre file system should be configured to allow inbound access from the security groups you assigned to the DataSync location resource for your FSx for Lustre file system.
Q: What metadata is preserved when either copying data between an NFS share or Amazon EFS file system and Amazon FSx for Lustre, or between two Amazon FSx for Lustre file systems?
A: Amazon DataSync copies file and folder timestamps and POSIX permissions, including user ID, group ID, and permissions. You can learn more and see the complete list of copied metadata in our documentation.
Q: Can I use Amazon DataSync to migrate data from one FSx for Lustre file system to another?
A: Yes. You can use Amazon DataSync to copy from your FSx for Lustre file system to a second file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.
Q: Can I use Amazon DataSync to replicate my Amazon FSx for Lustre file system to a different Amazon Web Services China Region?
A: Yes. You can use Amazon DataSync to schedule periodic replication of your Amazon FSx for Lustre file system to a second file system within the same Amazon Web Services account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent.
Q: Will DataSync copy the striping or layout settings when copying from one Amazon FSx for Lustre file system to another?
A: No. Files are written using the file layout and striping configuration on the destination’s file system.
Performance
Q: How fast can Amazon DataSync copy my file system to Amazon Storage services?
A: The rate at which Amazon DataSync can copy a given dataset is a function of the amount of data, the I/O bandwidth achievable from the source and destination storage, the available network bandwidth, and network conditions. For data transfer between on premises and Amazon Web Services Storage services, a single DataSync task is capable of fully utilizing a 10 Gbps network link.
Q: Can I control the amount of network bandwidth that an Amazon DataSync task uses?
A: Yes. You can control the amount of network bandwidth that Amazon DataSync will use by configuring the built-in bandwidth throttle. You can increase or decrease this limit while your data transfer task is running. This enables you to minimize impact on other users or applications who rely on the same network connection.
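Because the limit can change while a task is running, it is applied to the task execution. A parameter sketch, assuming the boto3 shape with a placeholder ARN:

```python
# Parameter sketch (boto3 update_task_execution shape) for throttling a
# running transfer; the execution ARN is a placeholder.
throttle_params = {
    "TaskExecutionArn": (
        "arn:aws:datasync:us-east-1:111122223333:"
        "task/task-0example/execution/exec-0example"
    ),
    # Cap throughput at roughly 10 MiB/s; a value of -1 removes the limit.
    "Options": {"BytesPerSecond": 10 * 1024 * 1024},
}
```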
Q: How can I monitor the performance of Amazon DataSync?
A: Amazon DataSync generates Amazon CloudWatch Metrics to provide granular visibility into the transfer process. Using these metrics, you can see the number of files and amount of data which has been copied, as well as file discovery and verification progress. You can see CloudWatch Graphs with these metrics directly in the DataSync Console.
Q: Will Amazon DataSync affect the performance of my source file system?
A: Depending on the capacity of your on-premises file store, and the quantity and size of files to be transferred, Amazon DataSync may affect the response time of other clients when accessing the same source data store, because the agent reads or writes data from that storage system. Configuring a bandwidth limit for a task will reduce this impact by limiting the I/O against your storage system.
Security and compliance
Q: Is my data encrypted while being transferred and stored?
A: Yes. All data transferred between the source and destination is encrypted via Transport Layer Security (TLS), which replaced Secure Sockets Layer (SSL). Data is never persisted in Amazon DataSync itself. The service supports using default encryption for S3 buckets, Amazon EFS file system encryption of data at rest, and Amazon FSx encryption at rest and in transit.
Q: How does Amazon DataSync access my NFS server or SMB file share?
A: Amazon DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your files through the NFS or SMB protocol. This agent connects to DataSync service endpoints within Amazon Web Services, and is securely managed from the Amazon Web Services Management Console or CLI.
Q: How does Amazon DataSync access HDFS on my Hadoop cluster?
A: Amazon DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your Hadoop cluster. The DataSync agent acts as an HDFS client and communicates with the NameNodes and DataNodes in your clusters. When you start a task, DataSync queries the primary NameNode to determine the locations of files and folders on the cluster. DataSync then communicates with the DataNodes in the cluster to copy files and folders to, or from, HDFS.
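To make the HDFS setup above concrete, the sketch below builds the request shape for DataSync's `CreateLocationHdfs` API using simple authentication (the API also supports Kerberos). The hostname and agent ARN are placeholders for illustration.

```python
# Hedged sketch: request parameters for a DataSync HDFS location.
# DataSync connects to the NameNode(s) listed here, then copies data
# directly to or from the cluster's DataNodes.

def hdfs_location_params(namenode_host, agent_arn, user="hdfs", port=8020):
    """Request shape for CreateLocationHdfs with SIMPLE authentication;
    the hostname and ARN values are placeholders."""
    return {
        "NameNodes": [{"Hostname": namenode_host, "Port": port}],
        "AuthenticationType": "SIMPLE",
        "SimpleUser": user,
        "AgentArns": [agent_arn],
    }

# With AWS credentials configured:
# import boto3
# datasync = boto3.client("datasync")
# loc = datasync.create_location_hdfs(
#     **hdfs_location_params("namenode.example.internal",
#                            "arn:aws:datasync:region:account:agent/agent-id"))
```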
Q: How does Amazon DataSync access my self-managed or cloud object storage?
A: Amazon DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your objects using the Amazon S3 API. This agent connects to DataSync service endpoints within Amazon Web Services, and is securely managed from the Amazon Web Services Management Console or CLI.
Q: How does Amazon DataSync access my Azure Blob Storage containers?
A: Amazon DataSync uses an agent that you deploy into your Azure environment or into Amazon EC2 to access objects in your Azure Blob Storage containers. The agent connects to DataSync service endpoints within Amazon Web Services, and is securely managed from the Amazon Web Services Management Console or CLI. The agent authenticates to your Azure container using a SAS token that you specify when creating a DataSync Azure Blob location.
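The SAS-token flow described above corresponds to DataSync's `CreateLocationAzureBlob` API. The sketch below builds that request shape; the container URL, token, and agent ARN are all placeholders.

```python
# Hedged sketch: request parameters for a DataSync Azure Blob location
# authenticated with a shared access signature (SAS) token.

def azure_blob_location_params(container_url, sas_token, agent_arn,
                               subdirectory="/"):
    """Request shape for CreateLocationAzureBlob with SAS authentication;
    all argument values are placeholders for illustration."""
    return {
        "ContainerUrl": container_url,
        "AuthenticationType": "SAS",
        "SasConfiguration": {"Token": sas_token},
        "AgentArns": [agent_arn],
        "Subdirectory": subdirectory,
    }

# With AWS credentials configured:
# import boto3
# datasync = boto3.client("datasync")
# loc = datasync.create_location_azure_blob(
#     **azure_blob_location_params(
#         "https://myaccount.blob.core.windows.net/my-container",
#         "sv=...",  # SAS token (placeholder)
#         "arn:aws:datasync:region:account:agent/agent-id"))
```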
Q: Does Amazon DataSync require setting up a VPN to connect to my destination storage?
A: No. When copying data to or from your premises, there is no need to set up a VPN or tunnel, or to allow inbound connections. Your Amazon DataSync agent can be configured to route through a firewall using standard network ports. You can also deploy DataSync within your Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and Amazon Web Services does not need to traverse the public internet or require public IP addresses.
Q: How is my Amazon DataSync agent patched and updated?
A: Updates to the agent VM, including both the underlying operating system and the Amazon DataSync software packages, are automatically applied by DataSync once the agent is activated. Updates are applied non-disruptively when the agent is idle and not executing a data transfer task.
When to choose Amazon DataSync
Q: How is Amazon DataSync different from using command line tools such as rsync or the Amazon S3 command line interface?
A: Amazon DataSync fully automates and accelerates moving large active datasets to Amazon Storage services. It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx, Amazon CloudWatch, and Amazon CloudTrail, which provides seamless and secure access to your storage services, as well as detailed monitoring of the transfer.
DataSync uses a purpose-built network protocol and scale-out architecture to transfer data. For data transfer between on premises and Amazon Web Services Storage services, a single DataSync task is capable of fully utilizing a 10 Gbps network link.
DataSync fully automates the data transfer. It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, auditing via task reports, monitoring via the DataSync API and Console, and CloudWatch metrics, events, and logs that provide granular visibility into the transfer process. DataSync performs data integrity verification both during the transfer and at the end of the transfer.
DataSync provides end-to-end security, and integrates directly with Amazon Web Services Storage services. All data transferred between the source and destination is encrypted via TLS, and access to your Amazon Web Services Storage is enabled via built-in Amazon Web Services security mechanisms such as IAM roles. DataSync with VPC endpoints are enabled to ensure that data transferred between an organization and Amazon Web Services does not traverse the public internet, further increasing the security of data as it is copied over the network.
Q: To transfer objects between my buckets, when do I use Amazon DataSync, when do I use S3 Replication, and when do I use S3 Batch Operations?
A: Amazon Web Services provides multiple tools to copy objects between your buckets.
Use Amazon DataSync for ongoing data distribution, data pipelines, and data lake ingest, as well as for consolidating or splitting data between multiple buckets.
Use S3 Replication for continuous replication of data to a specific destination bucket.
Use S3 Batch Operations for large-scale batch operations on S3 objects, such as to copy objects, set object tags or access control lists (ACLs), initiate object restores from Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier), invoke an Amazon Lambda function to perform custom actions using your objects, manage S3 Object Lock legal hold, or manage S3 Object Lock retention dates.
Q: When do I use Amazon DataSync and when do I use Amazon Snowball?
A: Amazon DataSync is ideal for online data transfers. You can use DataSync to migrate active data to Amazon Web Services Storage services, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to Amazon Storage services for business continuity.
Amazon Snowball is ideal for offline data transfers, for customers who are bandwidth constrained, or transferring data from remote, disconnected, or austere environments.
Q: When do I use Amazon DataSync and when do I use Amazon Transfer Family?
A: If you currently use SFTP to exchange data with third parties, Amazon Transfer Family provides fully managed SFTP, FTPS, and FTP transfers directly into and out of Amazon S3, while reducing your operational burden.
If you want an accelerated and automated data transfer between NFS servers, SMB file shares, Hadoop clusters, self-managed or cloud object storage, Amazon S3, Amazon EFS, and Amazon FSx, you can use Amazon DataSync. DataSync is ideal for customers who need online migrations for active data sets, timely transfers for continuously generated data, or replication for business continuity.