Amazon S3 (AWS S3) is an object storage service. You can think of it as a cloud file system. This article shows you how to download files from Amazon S3 to your local machine.

Sometimes it is necessary to download files from Amazon S3 to your local machine, for example when you want to examine them more closely with the tools you love (grep, …).

AWS CLI

In order to do so you can use the AWS CLI. It is available for Linux, macOS and Windows.
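If you do not have it installed yet, one of several installation options (assuming Python and pip are available on your machine) is to install it from PyPI:

$ pip install awscli

Once the CLI is installed you can download a folder to your local workstation as follows: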

$ aws s3 cp s3://<bucket>/<folder> <local destination folder> --recursive
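For example, assuming a hypothetical bucket named my-bucket that contains a reports/ folder, downloading it into ./reports in the current directory could look like this:

$ aws s3 cp s3://my-bucket/reports/ ./reports --recursive

If you repeat the download later, aws s3 sync s3://my-bucket/reports/ ./reports achieves the same result but only transfers files that have changed.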

Of course you have to provide proper credentials to the AWS CLI. This can be done in a number of ways. If you want to provide the credentials explicitly for a single invocation of the AWS CLI, you can do so by setting a few environment variables:

$ AWS_ACCESS_KEY_ID="<...>" \
AWS_SECRET_ACCESS_KEY="<...>" \
AWS_SESSION_TOKEN="<...>" \
AWS_DEFAULT_REGION="<...>" \
aws s3 cp s3://<bucket>/<folder> <local destination folder> --recursive
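Alternatively, you can store the credentials in a named profile (created once with aws configure --profile <profile name>) and reference it via the --profile flag; the profile name here is just a placeholder:

$ aws s3 cp s3://<bucket>/<folder> <local destination folder> --recursive --profile <profile name>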

Other options

Cyberduck

Another option is to use Cyberduck. Cyberduck is only available for macOS and Windows. It is a graphical user interface for browsing various cloud file systems (Amazon S3, Google Drive, …). Besides browsing, it also supports downloading folders and files to your local workstation.

Mountain Duck

Mountain Duck allows you to mount cloud file systems on your local operating system. Again, it is only available for macOS and Windows.

A note about Netcup (advertisement)

Netcup is a German hosting company. Netcup offers inexpensive yet powerful web hosting packages, KVM-based root servers and dedicated servers, for example. Using a coupon code from my Netcup coupon code web app you can save even more money ($6 off your first purchase, 30% off any KVM-based root server, ...).