Configuration files for s3cmd command on CREODIAS
s3cmd can access remote data using the S3 protocol. This includes EODATA repository and object storage on the CREODIAS cloud.
To connect to S3 storage, s3cmd uses several parameters, such as an access key, secret key, S3 endpoint, and others. During configuration, you can enter this data interactively, and the command saves it into a configuration file. This file can then be passed to s3cmd when issuing commands using the connection described within.
If you want to use multiple connections from a single virtual machine (such as connecting both to the EODATA repository and to object storage on CREODIAS cloud), you can create and store multiple configuration files — one per connection.
This article provides examples of how to create and save these configuration files under various circumstances and describes some potential problems you may encounter.
The examples are not intended to be executed sequentially as part of a workflow; instead, they illustrate different use cases of s3cmd operations.
Prerequisites
No. 1 s3cmd installed
To use s3cmd, it must first be installed. Here is the necessary information:
How to install s3cmd on Linux on CREODIAS
No. 2 Knowledge of using s3cmd
To run the examples later in this article, you will need s3cmd to be set up properly using this article:
Initializing the configuration process
Saving an s3cmd configuration file is a two-part process:
answering a series of interactive questions and then
saving the answers to a configuration file.
Execute this command
s3cmd -c eodata-config --configure
to start an interactive session. You will enter data for:
Access Key – your access key from Prerequisite No. 2
Secret Key – your secret key from Prerequisite No. 2
Default Region – US
S3 Endpoint – the actual value depends on the cloud you are using – see below.
For all other questions, keep pressing Enter to accept the defaults.
The whole procedure looks like this on the screen:
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: <your EC2 access key>
Secret Key: <your EC2 secret key>
Default Region [US]: US
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: s3.waw4-1.cloudferro.com
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]:
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: <your EC2 access key>
Secret Key: <your EC2 secret key>
Default Region: US
S3 Endpoint: s3.waw4-1.cloudferro.com
DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n]
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to 'eodata-config'
The procedure is identical for the other CREODIAS clouds; only the S3 Endpoint answer changes – s3.waw3-1.cloudferro.com, s3.waw3-2.cloudferro.com, or s3.fra1-2.cloudferro.com, depending on the cloud you are using.
If this is the first time you are issuing this command for the file eodata-config, there will be no default values in the interactive session.
To cancel the configuration process, press CTRL+C.
Explanation of parameters
The most often used s3cmd parameters are:
- -c
Specifies the name and/or location of the configuration file (note: single dash).
- --config
Alternative to -c (note: two dashes).
- --configure
Initiates the on-screen question session and saves the answers to the file.
You can use -c or --config alongside --configure.
Ensure that you pass the file path correctly to the shell, paying attention to spaces, quotation marks, or escape characters.
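For instance, a path containing spaces must be quoted (or the spaces escaped); otherwise the shell splits it into separate arguments and s3cmd receives the wrong path. The path below is purely illustrative:

```shell
# quoted: the whole path, including the space, reaches s3cmd as one argument
s3cmd -c "/home/eouser/my configs/eodata-config" ls
# equivalent, with the space escaped instead of quoted
s3cmd -c /home/eouser/my\ configs/eodata-config ls
```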
Destination: default file
The default location for the s3cmd configuration file is a hidden file named .s3cfg located in your Home directory.
To check your Home directory, use:
echo $HOME
On Linux, the Home directory is usually /home/<username>.
On a VM hosted by CREODIAS, it will typically be /home/eouser. Thus, the configuration file will be /home/eouser/.s3cfg.
To initialize the configuration process using the default location:
s3cmd --configure
Securing the configuration file
After the configuration file is created, it is highly recommended to protect it by setting appropriate permissions. This ensures that your access and secret keys are not readable by unauthorized users.
Example command for securing the default configuration file:
chmod 600 ~/.s3cfg
This command makes the file readable and writable only by your user.
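To confirm that the permissions were applied, you can print the file's octal mode (stat -c is the GNU coreutils form, available on typical Linux images):

```shell
# print the octal permission mode and file name; after chmod 600
# this shows 600, i.e. read/write for the owner only
stat -c '%a %n' ~/.s3cfg
```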
Destination: custom file
If your destination of choice is a custom file, pass its name and/or location to the command using the -c parameter. Finish the command with --configure to instruct s3cmd to create the file.
Examples:
File named object-storage-access in your current working directory
s3cmd -c object-storage-access --configure
File named eodata-access in /home/eouser/ directory
s3cmd -c /home/eouser/eodata-access --configure
File named object-storage-access located in the parent directory of your current working directory
s3cmd -c ../object-storage-access --configure
Again, if you save the configuration file outside the default location (e.g., as /home/eouser/eodata-access), you should also set proper file permissions to protect it.
Example command:
chmod 600 /home/eouser/eodata-access
This ensures your access and secret keys remain secure, just like with the default .s3cfg file.
Using --configure on an existing file
When you use --configure, s3cmd will operate on a file:
If -c or --config are omitted, it uses the default location.
If -c or --config are specified, it uses the given file.
If the configuration file (such as eodata-config) already exists, s3cmd will offer its current values as defaults and accept them when you just press Enter.
Existing and valid s3cmd configuration file
If you:
pass an existing valid s3cmd configuration file,
use --configure, and
approve saving after finishing the session,
then the answers will update the existing configuration.
If you cancel before saving, the original configuration remains unchanged.
Existing file but not a valid s3cmd configuration file
If you:
pass an existing file that is not a valid s3cmd configuration file, and
use --configure,
it may lead to unexpected results.
Double-check that the correct file path is specified.
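A quick way to sanity-check whether an existing file looks like an s3cmd configuration is to inspect its first line, which should be the [default] section header. This is a heuristic check, not a full validation:

```shell
# a valid s3cmd configuration file begins with the [default] section header
head -n 1 eodata-config
```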
Executing S3 commands
Once you have a valid configuration file, you can use s3cmd commands with it. Get EC2 credentials first by using Prerequisite No. 2.
In this article, we focus only on the ls command (listing available buckets).
Existing and valid configuration file — non-default location
To execute S3 commands using a non-default config file:
s3cmd -c eodata-config ls
Example output:
2017-11-15 10:40 s3://DIAS
2017-11-15 10:40 s3://EODATA
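The ls command is not limited to listing buckets: given a bucket URI, it lists that bucket's top-level contents. The bucket name below is taken from the example output above:

```shell
# list the top-level contents of the EODATA repository
s3cmd -c eodata-config ls s3://EODATA/
```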
Existing and valid configuration file — default location
If your configuration file is saved at the default location:
s3cmd ls
No -c parameter is needed.
Non-existent configuration file
If you:
pass a non-existent file path, and
do not use --configure,
you will get an error.
Example:
s3cmd -c /home/eouser/nonexistentfile ls
Error output:
ERROR: /home/eouser/nonexistentfile: None
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
Existing file that is not a valid s3cmd configuration file
If you:
pass a file that exists but
is not a valid s3cmd configuration file,
and do not use --configure,
then unexpected results may occur.
This warning also applies if the default configuration file is invalid.
Creating a minimal configuration file manually
Instead of using the interactive --configure process, you can create a minimal s3cmd configuration file manually.
This is useful when you are
scripting or working in automated environments, or
want to quickly set up access using an editor.
Minimal content required
Below is the minimum content required for a valid configuration file that connects to object storage on the CREODIAS cloud:
[default]
access_key = <your EC2 access key>
secret_key = <your EC2 secret key>
host_base = s3.waw4-1.cloudferro.com
host_bucket = %(bucket)s.s3.waw4-1.cloudferro.com
signature_v2 = False
use_https = True
For the other CREODIAS clouds, replace the endpoint in host_base and host_bucket with s3.waw3-1.cloudferro.com, s3.waw3-2.cloudferro.com, or s3.fra1-2.cloudferro.com, as appropriate.
Use Prerequisite No. 2 to obtain <your EC2 access key> and <your EC2 secret key> and use them as your actual credentials.
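The manual steps above can also be scripted. A minimal sketch (assuming the waw4-1 endpoint) that writes the configuration non-interactively and locks down its permissions in one go:

```shell
# write a minimal s3cmd configuration file without the interactive session;
# the placeholder keys must be replaced with your actual EC2 credentials
cat > ~/.s3cfg <<'EOF'
[default]
access_key = <your EC2 access key>
secret_key = <your EC2 secret key>
host_base = s3.waw4-1.cloudferro.com
host_bucket = %(bucket)s.s3.waw4-1.cloudferro.com
signature_v2 = False
use_https = True
EOF
# restrict the file to the owner, as with any credentials file
chmod 600 ~/.s3cfg
```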
Creating the file using nano
To create a configuration file manually using the nano text editor, run:
nano ~/.s3cfg
Paste the configuration content into the editor.
Save and exit the file with:
CTRL+O to write the file
ENTER to confirm the filename
CTRL+X to exit the editor
To protect the file, set secure permissions:
chmod 600 ~/.s3cfg
Using a custom configuration path
You can also save the configuration file to a custom location:
nano ~/my-s3cfg-file
Once saved, you can use it with the -c option:
s3cmd -c ~/my-s3cfg-file ls
The [default] section header is required. s3cmd will not recognize the file as valid without it, and commands may fail silently or with cryptic errors.
Maintaining separate s3cmd configuration files
When working with s3cmd (for example, from Python scripts), it is best practice to maintain a separate configuration file for each environment:
production,
testing and
development.
By way of example, let us concentrate only on production and testing environments.
The benefits of separating s3cmd config files are that
you don’t accidentally upload or delete data in production while testing;
each environment can use a different set of credentials, endpoints, or permissions;
you retain clarity and control over which connection is active.
Example setup for production and testing environments
Create two separate files in your home directory:
nano ~/s3cfg-prod
nano ~/s3cfg-test
Each file should contain the required configuration, with the correct credentials for its environment.
You can then run s3cmd with the appropriate file using the -c flag:
# For production
s3cmd -c ~/s3cfg-prod ls
# For testing
s3cmd -c ~/s3cfg-test ls
To prevent accidental edits or exposure, restrict permissions on each file:
chmod 600 ~/s3cfg-prod ~/s3cfg-test
Tip
Use meaningful names such as s3cfg-prod and s3cfg-test to distinguish environments clearly in scripts and commands.
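To further reduce the chance of passing the wrong file, you can wrap the two invocations in small shell functions (the names s3prod and s3test are hypothetical; add the definitions to ~/.bashrc to make them permanent):

```shell
# wrappers that always pass the matching configuration file;
# any s3cmd arguments are forwarded unchanged via "$@"
s3prod() { s3cmd -c ~/s3cfg-prod "$@"; }
s3test() { s3cmd -c ~/s3cfg-test "$@"; }

# list buckets using the production credentials
s3prod ls
```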
What To Do Next
You can use s3cmd for several common tasks: