Tutorial: Setting up s3cmd on Linux

s3cmd is a popular cross-platform command-line tool for managing S3-compatible object stores. In this tutorial, we will see how to configure and use s3cmd with E2E Object Storage.

Prerequisites

  1. Bucket in E2E Object Store. If you have not created a bucket yet, please refer to the Getting Started Guide

  2. Access and Secret keys with permissions for the target bucket

  3. Linux system for installing s3cmd CLI

Step 1 : Installing s3cmd on Linux

s3cmd is available in the default package repositories for CentOS, RHEL, and Ubuntu/Debian systems. You can install it by simply executing the following commands on your system.

On CentOS/RHEL

yum install s3cmd

On Ubuntu/Debian

sudo apt-get install s3cmd
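
To confirm that s3cmd was installed correctly, you can check the installed version (the exact version reported will depend on your distribution's repositories):

s3cmd --version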

Step 2 : (Option 1) Configure s3cmd in Interactive Mode

Let us first use interactive mode. Please keep the following information handy, as we will need it during the process.

Access Key  : xxxxxxxxxxxxxxxxxxxx
Secret Key  : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
S3 Endpoint : objectstore.e2enetworks.net
Bucket URL : %(bucket)s.objectstore.e2enetworks.net
Default Region: Leave blank (skip)

To start configuration in interactive mode, enter the following command:

s3cmd --configure

Below is a snapshot of the configuration wizard. Follow the process in a similar way to configure EOS with the s3cmd CLI.

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: xxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: objectstore.e2enetworks.net

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.objectstore.e2enetworks.net

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: xxxxxxxxxxxxxxxxxxxxx
  Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  Default Region: US
  S3 Endpoint: objectstore.e2enetworks.net
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.objectstore.e2enetworks.net
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Note

If you test the access with the supplied credentials, you will get an error ("Test failed. Are you sure your keys have s3:ListAllMyBuckets permission?"). You can ignore this error, since you are using s3cmd with an EOS bucket.
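
The saved settings can be reviewed at any time by inspecting the configuration file that the wizard wrote (it contains your secret key, so treat the output as sensitive):

cat ~/.s3cfg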

Step 2 : (Option 2) Set up s3cmd with a Configuration File

You can also manually edit the file ~/.s3cfg and set the endpoint and credentials in the following format.

# Setup endpoint
host_base = objectstore.e2enetworks.net
host_bucket = %(bucket)s.objectstore.e2enetworks.net


# Setup access keys
access_key = <<enter your access key here>>
secret_key = <<enter your secret key here>>
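
If you create ~/.s3cfg from scratch instead of editing a file generated by s3cmd --configure, you may also want to enable HTTPS explicitly. A minimal addition, using the standard s3cmd option for this:

# Use HTTPS for all requests
use_https = True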

Step 3 : Test access with configured credentials

List the contents of the target bucket (e.g., e2e-test) using a command like the one below. Please note that the access and secret keys you supplied during the interactive/manual setup must have access to this bucket.

s3cmd ls s3://e2e-test

Note

You will not be able to list all the buckets (i.e., with just s3cmd ls s3://). You need to specify the bucket name.

You may also test by uploading a local file to your target bucket (e.g., e2e-test) using the commands below:

touch testingfile
s3cmd sync testingfile s3://e2e-test/
upload: 'testingfile' -> 's3://e2e-test/testingfile'  [1 of 1]
 0 of 0     0% in    0s     0.00 B/s  done

s3cmd ls s3://e2e-test
2019-11-22 12:51         0   s3://e2e-test/testingfile
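
To complete the round trip, you can also download the object back from the bucket. A quick check, using the same example bucket and file names (replace them with your own):

s3cmd get s3://e2e-test/testingfile testingfile-copy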

Conclusion

We have now successfully configured s3cmd to work with E2E Object Storage. The complete user guide on the usage of s3cmd is available here.