General

Q. What is Amazon Elastic File System?

Amazon EFS is a fully-managed service that makes it easy to set up, scale, and cost-optimize file storage in the Amazon Cloud. With a few clicks in the Amazon Management Console, you can create file systems that are accessible to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) and support full file system access semantics (such as strong consistency and file locking).

Amazon EFS file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousands of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provides consistent performance to each Amazon EC2 instance. Amazon EFS is designed to be highly durable and highly available. With Amazon EFS, there is no minimum fee or setup costs, and you pay only for what you use.

Q. What use cases does Amazon EFS support?

Amazon EFS is designed to provide performance for a broad spectrum of workloads and applications, including Big Data and analytics, media processing workflows, content management, web serving, and home directories.

Q. When should I use Amazon EFS vs. Amazon S3 vs. Amazon Elastic Block Store (EBS)?

Amazon Web Services offers cloud storage services to support a wide range of storage workloads.

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances.

Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.

Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.

Q. How do I get started using Amazon EFS?

To use Amazon EFS, you must have an Amazon Web Services account. If you do not already have an Amazon Web Services account, you can sign up for an Amazon Web Services account and instantly get access to the Amazon Web Services Free Tier in the Amazon Web Services China Region.

Once you have created an Amazon Web Services account, please refer to the Amazon EFS Getting Started guide to begin using Amazon EFS. You can create a file system via the Amazon Management Console, the Amazon Command Line Interface (Amazon CLI), or the Amazon EFS API (and various language-specific SDKs).

Q. How do I access a file system from an Amazon EC2 instance?

To access your file system, you mount the file system on an Amazon EC2 Linux-based instance using the standard Linux mount command and the file system’s DNS name. To simplify accessing your EFS file systems, we recommend using the EFS mount helper utility. Once mounted, you can work with the files and directories in your file system just like you would with a local file system.

Amazon EFS uses the Network File System version 4 (NFS v4) protocol. For a step-by-step example of how to access a file system from an Amazon EC2 instance, please see the guide here.
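
As a hedged illustration of the commands described above, the sketch below mounts a file system from an Amazon Linux instance. The file system ID, mount point, and the Region in the DNS name are placeholders; use the values shown for your file system in the console, and note that the package name may differ on other distributions.

    # Install the EFS mount helper (package name on Amazon Linux; may differ elsewhere)
    sudo yum install -y amazon-efs-utils

    # Mount using the EFS mount helper (recommended); fs-12345678 and /mnt/efs are placeholders
    sudo mkdir -p /mnt/efs
    sudo mount -t efs fs-12345678:/ /mnt/efs

    # Or mount directly over NFSv4.1 using the file system's DNS name (shown here illustratively)
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
        fs-12345678.efs.cn-north-1.amazonaws.com.cn:/ /mnt/efs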

Q. What Amazon EC2 instance types and AMIs work with Amazon EFS?

Amazon EFS is compatible with all Linux-based AMIs for Amazon EC2. You can mix and match the instance types connected to a single file system. For a step-by-step example of how to access a file system from an Amazon EC2 instance, please see the instance type guide here.

Q. How do I manage a file system?

Amazon EFS is a fully-managed service, so all of the file storage infrastructure is managed for you. When you use Amazon EFS, you avoid the complexity of deploying and maintaining complex file system infrastructure. An Amazon EFS file system grows and shrinks automatically as you add and remove files, so you do not need to manage storage procurement or provisioning.

You can administer a file system via the Amazon Management Console, the Amazon command-line interface (CLI), or the Amazon EFS API (and various language-specific SDKs). The Console, API, and SDK provide the ability to create and delete file systems, configure how file systems are accessed, create and edit file system tags, enable features like Provisioned Throughput and Lifecycle Management, and display detailed information about file systems.
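
For illustration, a minimal Amazon CLI sketch of common administrative calls follows; the creation token, tag value, and file system ID are placeholders.

    # Create a file system with a Name tag, inspect it, and (when no longer needed) delete it
    aws efs create-file-system --creation-token my-app-fs --tags Key=Name,Value=my-app-fs
    aws efs describe-file-systems --file-system-id fs-12345678
    aws efs delete-file-system --file-system-id fs-12345678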

Q. How do I load data into a file system?

You can use standard Linux copy tools to move data files to Amazon EFS.
For more information about accessing a file system from an on-premises server, please see the On-premises Access section of this FAQ.
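
For example, once the file system is mounted (as sketched earlier), a copy with standard Linux tools might look like the following; the source and destination paths are placeholders.

    # Copy a local directory tree into a mounted EFS file system with standard Linux tools
    rsync -a /data/ /mnt/efs/data/
    # or simply
    cp -r /data /mnt/efs/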

Storage classes and lifecycle management

Q. What storage classes does Amazon EFS offer?

Amazon EFS offers four storage classes: two regional storage classes, Amazon EFS Standard (EFS Standard) and Amazon EFS Standard-Infrequent Access (EFS Standard-IA), and two One Zone storage classes, Amazon EFS One Zone (EFS One Zone) and Amazon EFS One Zone-Infrequent Access (EFS One Zone-IA). EFS Standard-IA and EFS One Zone-IA provide price/performance that is cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS Standard-IA or EFS One Zone-IA, depending on whether your file system uses regional or One Zone storage classes.

Q. How do I move files to EFS IA?

Moving files to EFS Standard-IA and EFS One Zone-IA starts by enabling Amazon EFS Lifecycle Management and choosing an age-off policy for your files. Lifecycle Management automatically moves your data to the EFS Standard-IA storage class or the EFS One Zone-IA storage class according to the lifecycle policy you choose. For example, you can automatically move files into EFS Standard-IA after your files have not been accessed for fourteen days.
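
As a minimal sketch of the fourteen-day example above using the Amazon CLI, where the file system ID is a placeholder:

    # Transition files to the IA storage class after 14 days without access
    aws efs put-lifecycle-configuration --file-system-id fs-12345678 \
        --lifecycle-policies TransitionToIA=AFTER_14_DAYS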

Q. When should I enable Lifecycle Management?

Enable Lifecycle Management when your file system contains files that are not accessed every day to reduce your storage costs. Both EFS Standard-IA and EFS One Zone-IA storage classes are ideal if you need your full data set to be readily accessible and want to automatically save on storage costs as your files become less frequently accessed. Examples include satisfying audits, performing historical analysis, or backup and recovery. 

Q. What happens when I disable Amazon EFS Lifecycle Management?

When you disable Lifecycle Management, files will no longer be moved to either the EFS Standard-Infrequent Access or EFS One Zone-Infrequent Access storage classes (depending on whether your file system uses regional or One Zone storage classes), and any files that have already moved to an Infrequent Access storage class will remain there.

Q. What Amazon EFS features are supported when using EFS IA and EFS One Zone-IA storage classes?

All Amazon EFS features are supported when using the EFS Standard-IA and EFS One Zone-IA storage classes. Files smaller than 128 KiB are not eligible for Lifecycle Management and will always be stored on the EFS Standard or EFS One Zone storage class, depending on whether your file system uses regional or One Zone storage classes.

Q. Is there a latency difference between the EFS Standard and EFS One Zone storage classes and the EFS Standard-Infrequent Access and EFS One Zone-Infrequent Access storage classes?

When reading from or writing to EFS Standard-IA or EFS One Zone-IA, your first-byte latency is higher than that of EFS Standard or EFS One Zone. EFS Standard and EFS One Zone are designed to provide single-digit millisecond latencies on average, and EFS Standard-IA and EFS One Zone-IA are designed to provide double-digit millisecond latencies on average.

Q. What throughput can I drive against files stored in the EFS Standard-Infrequent Access storage class?

You can drive the same amount of throughput against data stored in the Infrequent Access storage classes (EFS Standard-IA and EFS One Zone-IA) as you can with the Standard storage classes (EFS Standard and EFS One Zone). Throughput of bursting mode file systems scales linearly with the amount of data stored. If you need more throughput than you can achieve with your amount of data stored, you can configure Provisioned Throughput. For more information, see the Scale and performance section below.

Q. What is EFS Intelligent-Tiering?

EFS Intelligent-Tiering delivers automatic cost savings for workloads with changing access patterns. EFS Intelligent-Tiering uses EFS Lifecycle Management to monitor the access patterns of your workload and is designed to automatically move files that are not accessed for the duration of the Lifecycle policy (e.g. 30 days) from performance-optimized storage classes (EFS Standard or EFS One Zone), to their corresponding cost-optimized Infrequent Access (IA) storage class (EFS Standard-Infrequent Access or EFS One Zone-Infrequent Access), helping you take advantage of IA storage pricing that is up to 91% lower than EFS Standard or EFS One Zone file storage pricing. If access patterns change and that data is accessed again, Lifecycle Management automatically moves the files back to EFS Standard or EFS One Zone, eliminating the risk of unbounded access charges. If the files become infrequently accessed again, Lifecycle Management will transition the files back to the appropriate IA storage class based on your Lifecycle policy.

Q. When should I use Lifecycle Management to move files to the IA storage classes without a policy to move files back to EFS Standard or EFS One Zone, if accessed?

Use EFS Lifecycle Management to automatically move files to EFS Standard-IA or EFS One Zone-IA if your file system contains files that you are certain will be accessed infrequently or not at all. Enable Lifecycle Management by choosing a policy to move files to EFS Standard-IA or EFS One Zone-IA, depending on whether your file system uses EFS Standard or EFS One Zone storage classes. Both EFS Standard-IA and EFS One Zone-IA storage classes are ideal for you if you need your full data set readily accessible and want to automatically save on storage costs as your files are accessed less frequently. Examples include satisfying audits, performing historical analysis, or backup and recovery.

Q. When should I use EFS Intelligent-Tiering?

Use EFS Intelligent-Tiering to automatically move files between performance-optimized and cost-optimized storage classes when data access patterns are unknown. Enable EFS Lifecycle Management by choosing a policy to automatically move files to EFS Standard-IA or EFS One Zone-IA. Additionally, choose a policy to automatically move files back to EFS Standard or EFS One Zone when they are accessed. With EFS Intelligent-Tiering, you can save on storage costs even if your application access patterns are unknown or change over time. With these two Lifecycle Management policies set, you pay only for data transition charges between storage classes, and not for repeated data access. Examples of workloads that may have unknown access patterns include web assets and blogs stored by content management systems.

Q. What happens when I disable EFS Intelligent-Tiering?

When you disable EFS Intelligent-Tiering, files will remain in the storage classes they resided in when you disabled the lifecycle policies. To disable EFS Intelligent-Tiering, you must disable both the policy that moves files to the EFS Standard-IA or EFS One Zone-IA storage classes, and the policy that moves files to the EFS Standard or EFS One Zone storage class on first access.
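
To illustrate, here is a minimal Amazon CLI sketch of enabling and then disabling these two lifecycle policies; the file system ID is a placeholder and the policy values are examples.

    # Enable EFS Intelligent-Tiering: transition files to IA after 30 days without access,
    # and transition them back to the primary storage class on first access
    aws efs put-lifecycle-configuration --file-system-id fs-12345678 \
        --lifecycle-policies TransitionToIA=AFTER_30_DAYS TransitionToPrimaryStorageClass=AFTER_1_ACCESS

    # Disable both policies by submitting an empty policy list
    aws efs put-lifecycle-configuration --file-system-id fs-12345678 --lifecycle-policies "[]"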

Q. What happens if I enable the policy to move files to EFS Standard or EFS One Zone on first access and disable the policy to move files to EFS Standard-IA or EFS One Zone-IA?

Any remaining files in the IA storage classes will move to EFS Standard or EFS One Zone if accessed.

Data protection and availability

Q: How is Amazon EFS designed to provide high durability and availability?

By default, every Amazon EFS file system object (i.e., directory, file, and link) is redundantly stored across multiple AZs for all regional storage classes. If you select Amazon EFS One Zone storage classes, your data is redundantly stored within a single AZ. Amazon EFS is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy. In addition, a file system can be accessed concurrently from all Availability Zones in the Region where it is located, which means that you can architect your application to fail over from one AZ to other AZs in the Region to ensure the highest level of application availability. Mount targets are designed to be highly available within an AZ for all Amazon EFS storage classes.

Q: How durable is Amazon EFS?
Amazon EFS is designed to provide 99.999999999% (11 9’s) of durability over a given year. In addition, the EFS Standard and EFS Standard-IA storage classes are designed to withstand the loss of an entire Availability Zone. Because EFS One Zone storage classes store data in a single Amazon Web Services Availability Zone, data stored in these storage classes may be lost in the event of a disaster or other fault within the Availability Zone that affects all copies of the data, or in the event of Availability Zone destruction. As with any environment, the best practice is to have a backup and to put in place safeguards against accidental deletion. For Amazon EFS data, that best practice includes replicating your file system across Regions using Amazon EFS Replication and maintaining a functioning, regularly tested backup using Amazon Backup. File systems using EFS One Zone storage classes are configured to automatically back up files by default at file system creation, unless you choose to disable this functionality.

Q: What failure modes do I have to consider when using Amazon EFS One Zone compared to Standard storage classes?
File systems using Amazon EFS One Zone storage classes are not resilient to a complete AZ outage. In the event of an AZ outage, you will experience a loss of availability, because your file system data is not replicated to a different AZ. In the event of a disaster or fault within an AZ affecting all copies of your data, or a permanent AZ loss, you may lose data that has not been replicated. To mitigate this, you can use Amazon EFS Replication to keep an up-to-date copy of your file system in a second Amazon Web Services Region or in another AZ. EFS Replication is designed to meet a recovery point objective (RPO) and recovery time objective (RTO) of minutes. You can also use Amazon Backup to store additional copies of your file system data and restore them to a new file system in an AZ or Region of your choice. Amazon EFS file system backup data created and managed by Amazon Backup is replicated to three AZs and is designed for 99.999999999% (11 9’s) durability.

Q. How can I guard my EFS One Zone file system against the loss of an AZ?
You can use Amazon EFS Replication or Amazon Backup to guard your EFS One Zone file system against the loss of an AZ. Amazon EFS Replication replicates your file system data to another Amazon Web Services Region or within the same Region in a few clicks, without requiring additional infrastructure or a custom process to monitor and synchronize data changes. EFS replication is continuous and designed to provide a recovery point objective (RPO) and a recovery time objective (RTO) of minutes for most file systems. 

Backups are enabled by default for all file systems using Amazon EFS One Zone storage classes. You can disable this setting when creating file systems. You are able to restore your file data from a recent backup to a newly created file system in any operating AZ in the event of an AZ loss. If Amazon EFS is impacted by an AZ loss, and your data is stored in One Zone storage classes, you may experience data loss for files that have changed since the last automatic backup.

Q: What is Amazon EFS Replication?
EFS Replication allows you to replicate your file system data to another Amazon Web Services Region or within the same Region in a few clicks, without requiring additional infrastructure or a custom process to monitor and synchronize data changes. Amazon EFS Replication automatically and transparently replicates your data to a second file system in a Region or AZ of your choice. You can use the Amazon EFS console, Amazon Web Services CLI, and APIs to enable replication on an existing file system. EFS Replication is continuous and designed to provide a recovery point objective (RPO) and a recovery time objective (RTO) of minutes, enabling you to meet your compliance and business continuity goals.

Q: Why should I use EFS Replication?
If you have requirements to maintain a copy of your file system hundreds of miles apart for purposes of disaster recovery, compliance, or business continuity planning, EFS Replication can help you meet those requirements. For applications that require low-latency cross-Region read access, Amazon EFS Replication provides a read-only copy in the Region of your choice. With Amazon EFS Replication, you can cost-optimize and save up to 75% on your disaster recovery storage costs by using low-cost EFS One Zone storage classes and a 7-day age-off lifecycle management policy for your destination file system. There is no need to build and maintain a custom process for data replication. EFS Replication also makes it easy to monitor and alarm on your RPO status using Amazon CloudWatch.

Q: How do I get started with EFS Replication?
Using the Amazon EFS console, simply enable Replication on the file system you want to replicate (source file system) and choose the Region or AZ where you want to store the replica (destination). You can also use the CreateReplicationConfiguration API from the Amazon Web Services CLI or SDK to enable EFS Replication. As part of configuring EFS Replication, you’ll choose the Region in which to create your replica. If you choose to use EFS One Zone storage classes for your replica, you must also select your file system’s AZ. Once EFS Replication is enabled, Amazon EFS will automatically create a new destination file system in the destination Region or AZ you’ve selected. You can select the destination file system’s lifecycle management policy, backup policies, provisioned throughput, mount targets, and access points independent of the source file system. For example, you can optimize the destination file system storage costs by enabling EFS Lifecycle Management with a shorter age-off policy (such as 7 days) when compared to the source file system’s age-off policy (such as 7, 14, 30, 60, or 90 days). EFS Replication configurations such as the replication pair (source and destination), replication status, and last completed replication timestamp can be accessed using the DescribeReplicationConfigurations API.
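
As a hedged sketch of these calls, the Amazon CLI commands below enable replication and inspect its status; the file system ID, Region, and Availability Zone names are placeholders to replace with your own values.

    # Replicate an existing file system to another Region
    aws efs create-replication-configuration \
        --source-file-system-id fs-12345678 \
        --destinations Region=cn-northwest-1

    # Alternatively, for a One Zone replica, also name the destination Availability Zone
    aws efs create-replication-configuration \
        --source-file-system-id fs-12345678 \
        --destinations Region=cn-northwest-1,AvailabilityZoneName=cn-northwest-1a

    # Inspect the replication pair, status, and last completed replication timestamp
    aws efs describe-replication-configurations --file-system-id fs-12345678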

Q: How does EFS Replication work?
When you enable EFS Replication on a file system, Amazon EFS automatically creates a new file system in the destination Region and begins copying your data into it. Once the initial copy is completed, EFS Replication copies changes incrementally to deliver an RPO of minutes for most file systems. EFS Replication preserves all metadata, such as owners and permissions, when copying changes to files and folders. While EFS Replication is enabled, your destination file system is in read-only mode and can be updated only by EFS Replication. In the event that your source file system is unavailable, you can fail over to the destination file system by deleting the replication configuration. Deleting the replication configuration makes the destination file system writeable for your applications to use.

Q: Can I change my destination file system’s settings when EFS Replication is enabled?
Yes. When EFS Replication is enabled, you can modify your destination file system configuration settings, such as its lifecycle management policy including intelligent-tiering, backup policy, mount targets, access points, and provisioned throughput. All destination file systems are created with encryption of data at rest enabled irrespective of the source file system setting. You cannot change the performance mode of the destination file system. It always matches that of the source file system, except when you create a One Zone replica. In that case, General Purpose performance mode is used because Max I/O performance mode is not supported by EFS One Zone storage classes.

Q: Can I change which Region I’m replicating data to?

No. In order to change the Region of your destination, you first have to delete the replication configuration between your source and destination file system. You can then create a new replication configuration from the source by selecting the desired Region. Amazon EFS will create a new destination file system in the selected Region and begin to replicate the source file system's contents.

Q: Can I delete my source or destination file system if they’re part of a replication pair?
You cannot delete either your source or your destination file system if it’s part of a replication pair. In order to delete one of the file systems in the pair, you first need to delete the replication configuration.

Q: Is my replica file system point-in-time consistent?
No. EFS Replication doesn’t provide point-in-time consistent replication. EFS Replication publishes a timestamp metric on Amazon CloudWatch called TimeSinceLastSync. All changes made to your source file system before the published time have been copied to the destination; changes made after that time may not have been replicated yet. You can monitor the health of your EFS Replication using Amazon CloudWatch. If you interrupt the replication process due to a disaster recovery event, some files from the source file system may have been transferred but not yet copied to their final locations on your destination file system. These files and their contents can be found on your destination file system in a lost+found directory created by EFS Replication under the root directory.
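
To illustrate monitoring, the CloudWatch CLI sketch below queries the TimeSinceLastSync metric; the file system ID, time window, and the dimension name are assumptions to adapt to your environment.

    # Discover the exact dimensions published for the replication metric
    aws cloudwatch list-metrics --namespace AWS/EFS --metric-name TimeSinceLastSync

    # Query the metric for a one-hour window (dimension, ID, and times are placeholders)
    aws cloudwatch get-metric-statistics --namespace AWS/EFS --metric-name TimeSinceLastSync \
        --dimensions Name=FileSystemId,Value=fs-12345678 \
        --statistics Maximum --period 300 \
        --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z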

Q: Can I select the VPC in which my mount targets are created?
Yes. When you enable EFS Replication for the first time, the replica file system will be automatically created for you. It’s created in the Region of your choosing without mount targets. You can then create mount targets for your replica file system in the VPC of your choosing. You can also change the VPC for your replica file system by deleting any existing mount targets and creating new ones in a VPC of your choosing.
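
For example, a mount target for the replica can be created in a subnet of the VPC you choose; the file system, subnet, and security group IDs below are placeholders.

    # Add a mount target for the replica file system in a subnet of your chosen VPC
    aws efs create-mount-target --file-system-id fs-87654321 \
        --subnet-id subnet-0abc1234567890def --security-groups sg-0abc1234567890def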

Q: How can I utilize my destination file system while replication is enabled and when replication is deleted?
When your replication is in the Enabled state, only EFS Replication is allowed to make changes to your destination file system, and you can access your replica in read-only mode during this time. In the event of a disaster, you can fail over to your destination file system by deleting your replication configuration from the Amazon EFS console or by using the DeleteReplicationConfiguration API. When you delete the replication configuration, Amazon EFS stops replicating additional changes and makes the destination file system writeable. You can then point your application to your destination file system to continue your operations. You can use the Amazon EFS console or the DescribeReplicationConfigurations API call to check your destination file system status after you’ve failed over.
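
A minimal failover sketch, assuming fs-12345678 is the source and fs-87654321 the destination of the replication pair:

    # Fail over by deleting the replication configuration; the destination becomes writeable
    aws efs delete-replication-configuration --source-file-system-id fs-12345678

    # Confirm that the destination is no longer part of a replication pair
    aws efs describe-replication-configurations --file-system-id fs-87654321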

Q: Is the data for my file system replica encrypted in transit and at rest?
For all file systems, Amazon EFS automatically and transparently encrypts all Amazon EFS network traffic using Transport Layer Security (TLS) version 1.2. Your destination file system is created with encryption at rest enabled. You can select an encryption key from those available in the Amazon Key Management Service (KMS) in the destination Region, or use the default service key ("aws/elasticfilesystem") in the Region where your destination file system is located.

Q: What permissions do I need to use EFS Replication?
To create and delete a replication, your Amazon IAM or resource-based policy must have permission for the Amazon EFS API calls CreateFileSystem, CreateReplicationConfiguration, and DescribeReplicationConfigurations.

Q: Does my replication traffic go over the public internet?
No. EFS Replication traffic always stays on the China Amazon Web Services backbone.

Q: Can I use EFS Replication to replicate my file system to more than one Amazon Web Services Region or to multiple file systems within a second Region?
No. EFS Replication supports replication between exactly two file systems.

Q: Can I replicate Amazon EFS file systems across Amazon Web Services accounts?
No. Amazon EFS does not support replicating file systems to a different Amazon Web Services account.

Q: Does EFS Replication consume my file system burst credits, IOPS limit, and throughput limits?
No. EFS Replication activity does not consume burst credits or count against the file system IOPS and throughput limits for either file system in a replication pair.

Q: Can I expect my destination file system to be available as soon as I enable EFS Replication?
Yes. When you first enable EFS Replication, your replica file system will be created in read-only mode and your entire source file system will be copied to the destination you selected. The time to complete this operation depends on the size of your source file system. Although you can fail over to your destination file system at any time, it is recommended that you wait until the copy is complete to minimize data loss. You can monitor the progress of your replication from the Amazon EFS console, which displays a timestamp that indicates the last time your source file system and destination file system were synchronized.

Scale and performance

Q. How much data can I store?

Amazon EFS file systems can store petabytes of data. Amazon EFS file systems are elastic, and automatically grow and shrink as you add and remove files. You do not provision file system size up front, and you pay only for what you use.

Q. How many Amazon EC2 instances can connect to a file system?

Amazon EFS supports one to thousands of Amazon EC2 instances connecting to a file system concurrently.

Q. How many file systems can I create?

You can create up to 1,000 file systems per region. For information on Amazon EFS limits, please visit the Amazon EFS Limits page.

Q. What’s the difference between “General Purpose” and “Max I/O” performance modes? Which one should I choose?

“General Purpose” performance mode is appropriate for most file systems, and is the mode selected by default when you create a file system. “Max I/O” performance mode is optimized for applications where tens, hundreds, or thousands of EC2 instances are accessing the file system — it scales to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for file operations. For more information, please see the documentation on File System Performance.

Q. What latency can I expect for my Amazon EFS file system?

The expected latency for your Amazon EFS file system depends on the storage class, the performance mode (General Purpose or Max I/O), and the file system operation type (read or write). The table that follows displays the average expected latency for General Purpose file systems.

 

Storage class       Reads                        Writes
EFS One Zone        As low as 600 microseconds   Low single-digit milliseconds
EFS One Zone-IA     Double-digit milliseconds    Double-digit milliseconds
EFS Standard        As low as 600 microseconds   Low single-digit milliseconds
EFS Standard-IA     Double-digit milliseconds    Double-digit milliseconds

Latency on Max I/O file systems is single-digit to double-digit milliseconds.

Q. How much throughput can a file system support?

With bursting mode, the default throughput mode for Amazon EFS file systems, the throughput available to a file system scales as the file system grows. Because file-based workloads are typically spiky, requiring high levels of throughput for short periods and lower levels of throughput the rest of the time, Amazon EFS is designed to burst to high throughput levels for periods of time. Also, because many workloads are read-heavy, read operations are metered at a 1:3 ratio to other NFS operations (like write). All file systems deliver a consistent baseline performance of 50 MB/s per TB of Standard class storage, all file systems (regardless of size) can burst to 100 MB/s, and file systems with more than 1 TB of Standard class storage can burst to 100 MB/s per TB. Because reads are metered at a 1:3 ratio, you can drive up to 300 MiB/s per TiB of read throughput. As you add data to your file system, the maximum throughput available to the file system scales linearly and automatically with your storage in the Amazon EFS Standard or Amazon EFS One Zone storage class. If you need more throughput than you can achieve with your amount of data stored, you can configure Provisioned Throughput to the specific amount your workload requires.

File system throughput is shared across all Amazon EC2 instances connected to a file system. For example, a 1 TB file system that can burst to 100 MB/s of throughput can drive 100 MB/s from a single Amazon EC2 instance, or 10 Amazon EC2 instances can each drive 10 MB/s (100 MB/s collectively). For more information, please see the documentation on File System Performance.

Q. What is Provisioned Throughput and when should I use it?

Provisioned Throughput enables Amazon EFS customers to provision their file system’s throughput independent of the amount of data stored, optimizing their file system throughput performance to match their application’s needs.

Amazon EFS Provisioned Throughput is available for applications with a high throughput to storage (MB/s per TB) ratio. For example, customers using Amazon EFS for development tools, web serving or content management applications, where the amount of data in their file system is low relative to throughput demands, are able to instantly get the high levels of throughput their applications require.

You can select your file system’s throughput mode via the Amazon Web Services Console, Amazon CLI, or Amazon API. For more details, see the documentation on Provisioned Throughput.
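
For illustration, the CLI calls below switch throughput modes; the file system ID and the provisioned rate are placeholders.

    # Switch to Provisioned Throughput at 100 MiB/s
    aws efs update-file-system --file-system-id fs-12345678 \
        --throughput-mode provisioned --provisioned-throughput-in-mibps 100

    # Switch back to the default bursting mode
    aws efs update-file-system --file-system-id fs-12345678 --throughput-mode bursting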

Q. How does Amazon EFS Provisioned Throughput work?

When you select Provisioned Throughput for your file system, you can provision the throughput of your file system independently from the amount of data stored and pay for the storage and Provisioned Throughput separately (for example, $0.30 per GB-Month for EFS Standard storage and $6.00 per MB/s-Month for Provisioned Throughput in US East (N. Virginia)). Read operations are metered at a 1:3 ratio, so you can drive up to 3 MiB/s of read throughput for each 1 MiB/s of throughput provisioned.

Provisioned Throughput also includes 50 KB/s per GB (or 1 MB/s per 20 GB) of throughput in the price of Standard storage. For example, if you store 20 GB for a month on Amazon EFS Standard and configure a throughput of 5 MB/s for a month, you will be billed for 20 GB-Month of storage and 4 MB/s-Month of throughput (the 5 MB/s provisioned minus the 1 MB/s included with your storage).

Q: How do I monitor my read and write throughput usage?

You can monitor your throughput using Amazon CloudWatch. The TotalIOBytes, ReadIOBytes, WriteIOBytes, and MetadataIOBytes metrics reflect the actual throughput your applications are driving. PermittedThroughput and MeteredIOBytes reflect your metered throughput limit and usage, respectively, after metering read requests at a 1:3 ratio to other requests. Using the Amazon EFS console, you can use the Percent Throughput Limit graph to monitor your throughput utilization. If you use custom CloudWatch dashboards or another monitoring tool, you can also create a CloudWatch metric math expression that compares MeteredIOBytes to PermittedThroughput. If these values are equal, you are consuming your entire amount of throughput, and should consider configuring Provisioned Throughput or increasing the amount of throughput configured. For bursting mode file systems, you should monitor the BurstCreditBalance metric and alert on a balance that is approaching 0 to ensure your file system is operating at its burst rate rather than its base rate.
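
As an illustration of these metrics, the CloudWatch CLI queries below pull PermittedThroughput and BurstCreditBalance for a one-hour window; the file system ID and times are placeholders.

    # Metered throughput limit for the file system
    aws cloudwatch get-metric-statistics --namespace AWS/EFS --metric-name PermittedThroughput \
        --dimensions Name=FileSystemId,Value=fs-12345678 \
        --statistics Average --period 300 \
        --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z

    # Burst credit balance (bursting mode file systems); alert as it approaches 0
    aws cloudwatch get-metric-statistics --namespace AWS/EFS --metric-name BurstCreditBalance \
        --dimensions Name=FileSystemId,Value=fs-12345678 \
        --statistics Minimum --period 300 \
        --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z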

Q. How will I be billed in Provisioned Throughput mode?

In the Provisioned Throughput mode, you are billed for storage you use and throughput you provisioned independently. You are billed hourly in the following dimensions:

  • Storage (per GB-Month) - You are billed for the amount of storage you use in GB-Month.
  • Throughput (per MB/s-Month) – You are billed for throughput you provision in MB/s-Month.

Q. How often can I change my file system's Provisioned Throughput?

If your file system is in Provisioned Throughput mode, you can increase the provisioned throughput of your file system as often as you want. You can decrease your file system throughput in Provisioned Throughput mode or change between Provisioned Throughput and the default Bursting Throughput modes as long as it’s been more than 24 hours since the last decrease or throughput mode change.

Q. What is the throughput of my file system if the Provisioned Throughput is set lower than the baseline throughput I am entitled to in the Bursting Throughput mode?

In the default Bursting Throughput mode, the throughput of your file system scales with the amount of data stored. If your file system in the Provisioned Throughput mode grows in size after the initial configuration, it is possible that your file system has a higher baseline rate in the Bursting Throughput mode than the Provisioned Throughput mode.

In such cases, your file system throughput will be the throughput it is entitled to in the default Bursting Throughput mode, and you will not incur any additional charge for that throughput beyond your storage cost. You will also be able to burst according to the Amazon EFS throughput bursting model.

Access Control

Q. How do I control which Amazon EC2 instances can access my file system?

You control which EC2 instances can access your file system using VPC security group rules and Amazon Identity and Access Management (IAM) policies. Use VPC security groups to control the network traffic to and from your file system. Attach an IAM policy to your file system to control which clients can mount your file system and with what permissions, and use EFS Access Points to manage application access. Control access to files and directories with POSIX-compliant user and group-level permissions.
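
For example, a security group rule like the following allows NFS traffic (TCP port 2049) from the security group attached to your EC2 instances to the security group on your mount targets; the group IDs are placeholders.

    # Allow inbound NFS from the instances' security group to the mount targets' security group
    aws ec2 authorize-security-group-ingress --group-id sg-0efs000000000000a \
        --protocol tcp --port 2049 --source-group sg-0app000000000000b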

Q. How can I use IAM policies to manage file system access?

Using the EFS console, you can apply common policies to your file system such as disabling root access, enforcing read-only access, or enforcing that all connections to your file system are encrypted. You can also apply more advanced policies such as granting access to specific IAM roles, including those in other Amazon Web Services accounts.
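
As a hedged sketch (not a complete policy), the call below applies a file system policy that denies connections that are not encrypted with TLS; the file system ID is a placeholder, and you should adapt the policy statements to your own requirements.

    # Enforce in-transit encryption by denying access when the connection does not use TLS
    aws efs put-file-system-policy --file-system-id fs-12345678 --policy '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Action": "*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}}
      }]
    }'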

Access Points

Q. What is an EFS Access Point?

EFS Access Points simplify providing applications access to shared data sets in an EFS file system. EFS Access Points work together with Amazon IAM and enforce an operating system user and group, and a directory for every file system request made through the access point. You can create multiple access points per file system and use them to provide access to specific applications.

Q. Why should I use EFS Access Points?

EFS Access Points represent a flexible way to manage application access in NFS environments with increased scalability, security, and ease of use. Use cases that can benefit from EFS Access Points include container-based environments where developers build and deploy their own containers, data science applications that require access to production data, and sharing a specific directory in your file system with other Amazon Web Services accounts.

Q. How do EFS Access Points work?

When you create an EFS Access Point, you can configure an operating system user and group, and a root directory for all connections that use it. If you specify the root directory’s owner, EFS will automatically create it with the permissions you provide the first time a client connects to the access point. You can also update your file system’s IAM policy to apply to your access points. For example, you can apply a policy that requires a specific IAM identity in order to connect to a given access point. For more information, see the EFS user guide.
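
To illustrate, the sketch below creates an access point that enforces a POSIX user and a root directory, then mounts through it with the EFS mount helper; the IDs, UID/GID, path, and mount point are placeholders.

    # Create an access point that maps all requests to user 1000:1000 under /app1
    aws efs create-access-point --file-system-id fs-12345678 \
        --posix-user Uid=1000,Gid=1000 \
        --root-directory 'Path=/app1,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'

    # Mount through the access point (access point ID is a placeholder)
    sudo mount -t efs -o tls,accesspoint=fsap-0123456789abcdef0 fs-12345678:/ /mnt/app1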

Encryption

Q: What is Amazon EFS Encryption?

Amazon EFS offers the ability to encrypt data at rest and in transit.

Data encrypted at rest is transparently encrypted while being written, and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by the Amazon Key Management Service (KMS), eliminating the need to build and maintain a secure key management infrastructure.

Data encryption in transit uses industry standard Transport Layer Security (TLS) 1.2 to encrypt data sent between your clients and EFS file systems.

Encryption of data at rest and of data in transit can be configured together or separately to help meet your unique security requirements.

For more details, see the user documentation on Encryption.

Q: What is the Amazon Key Management Service (KMS)?

Amazon KMS manages the encryption keys for encrypted data at rest on EFS file systems. Amazon KMS is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. Amazon Key Management Service is integrated with Amazon Web Services services including Amazon EFS, Amazon EBS, and Amazon S3, to make it simple to encrypt your data with encryption keys that you manage. Amazon Key Management Service is also integrated with Amazon CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.

Q: How do I enable encryption for my Amazon EFS file system?

You can enable encryption at rest in the EFS console or by using the Amazon CLI or SDKs. When creating a new file system in the EFS console, click “Create File System” and click the checkbox to enable encryption.

Data can be encrypted in transit between your Amazon EFS file system and its clients by using the EFS mount helper.

Encryption of data at rest and of data in transit can be configured together or separately to help meet your unique security requirements.

For more details, see the user documentation on Encryption.
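
For illustration, the commands below sketch creating a file system encrypted at rest and mounting with encryption in transit; the creation tokens, KMS key alias, file system ID, and mount point are placeholders.

    # Encrypt at rest with the default service key, or name a KMS key of your own
    aws efs create-file-system --creation-token encrypted-fs --encrypted
    aws efs create-file-system --creation-token encrypted-fs-cmk --encrypted --kms-key-id alias/my-efs-key

    # Encrypt data in transit by mounting with the EFS mount helper's TLS option
    sudo mount -t efs -o tls fs-12345678:/ /mnt/efs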

Q: Does encryption impact Amazon EFS performance?

Encrypting your data has a minimal effect on I/O latency and throughput.

On-premises access

Q: How do I access an EFS file system from servers in my on-premises datacenter?

To access EFS file systems from on-premises, you must have an Amazon Direct Connect connection between your on-premises datacenter and your Amazon VPC.

You mount an EFS file system on your on-premises Linux server using the standard Linux mount command for mounting a file system via the NFSv4.1 protocol.

For more information about accessing EFS file systems from on-premises servers, please see the documentation.
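
As a hedged example, the mount below is run on an on-premises Linux server over the Direct Connect link; the IP address shown stands in for one of your file system's mount target IP addresses, which are typically used in place of the DNS name when mounting from outside the VPC.

    # Mount the file system over NFSv4.1 from an on-premises server
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
        10.0.1.25:/ /mnt/efs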

Q: What can I do by enabling access to my EFS file systems from my on-premises servers?

You can mount your Amazon EFS file systems on your on-premises servers, and move file data to and from Amazon EFS using standard Linux tools and scripts. The ability to move file data to and from Amazon EFS file systems enables three use cases.

First, you can migrate data from on-premises datacenters to permanently reside in Amazon EFS file systems.

Second, you can support cloud bursting workloads to offload your application processing to the cloud. You can move data from your on-premises servers into your EFS file systems, analyze it on a cluster of EC2 instances in your Amazon VPC, and store the results permanently in your EFS file systems or move the results back to your on-premises servers.

Third, you can periodically copy your on-premises file data to EFS to support backup and disaster recovery scenarios.

Q: Can I access my Amazon EFS file system concurrently from my on-premises datacenter servers as well as Amazon EC2 instances?

Yes, you can access your Amazon EFS file system concurrently from servers in your on-premises datacenter as well as Amazon EC2 instances in your Amazon VPC. Amazon EFS provides the same file system access semantics, such as strong data consistency and file locking, across all EC2 instances and on-premises servers accessing a file system.

Q: What is the recommended best practice when moving file data to and from on-premises servers?

Because of the propagation delay tied to data traveling over long distances, the network latency of the network connection between your on-premises datacenter and your Amazon VPC can be tens of milliseconds. If your file operations are serialized, the latency of the network connection directly impacts your read and write throughput; in essence, the volume of data you can read or write during a period of time is bounded by the amount of time it takes for each read and write operation to complete. To maximize your throughput, parallelize your file operations so that multiple reads and writes are processed by EFS concurrently. Standard tools like GNU parallel enable you to parallelize the copying of file data. For more information, see the online documentation.
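
For example, a parallelized copy with GNU parallel might look like the sketch below; the paths and the job count are placeholders to tune for your network connection.

    # Copy a local tree into a mounted EFS file system, 32 file transfers at a time
    mkdir -p /mnt/efs/data
    cd /data && find . -type f -print0 | parallel -0 -j 32 cp --parents {} /mnt/efs/data/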

Compatibility

Q. What interoperability and compatibility is there between existing Amazon Web Services services and Amazon EFS?

Amazon EFS is integrated with a number of other Amazon Web Services services, including Amazon CloudWatch, Amazon CloudFormation, Amazon CloudTrail, Amazon IAM, and Amazon Tagging services.

Amazon CloudWatch allows you to monitor file system activity using metrics. Amazon CloudFormation allows you to create and manage file systems using templates.

Amazon CloudTrail allows you to record all Amazon EFS API calls in log files.

Amazon Identity and Access Management (IAM) allows you to control who can administer your file system. Amazon Web Services Tagging allows you to label your file systems with metadata that you define.

Q. What type of locking does Amazon EFS support?

Locking in Amazon EFS follows the NFSv4.1 protocol for advisory locking, and enables your applications to use both whole file and byte range locks.

Q. Are file system names global (like Amazon S3 bucket names)?

Every file system has an automatically generated ID number that is globally unique. You can tag your file system with a name, and these names do not need to be unique.

Pricing and billing

Q. How much does Amazon EFS cost?

With Amazon EFS, you pay only for what you use per month.

When using the Provisioned Throughput mode, you pay for the throughput you provision per month. There is no minimum fee and there are no set-up charges.

EFS Standard-IA and EFS One Zone-IA are priced based on the amount of storage used and the amount of data accessed. Until Lifecycle Management fully moves your file to EFS Standard-IA or EFS One Zone-IA, it is stored on EFS Standard or EFS One Zone and billed at the EFS Standard or EFS One Zone rate, respectively, depending on where your data is stored.

For more Amazon EFS pricing information, please visit the Amazon EFS Pricing page.
