Create a bucket using the S3 API (with s3curl)

You can use the S3 API to create a bucket in a replication group. Because ECS uses custom headers (x-emc), the string to sign must be constructed to include these headers. In this procedure the s3curl tool is used. There are also several programmatic clients you can use, for example, the S3 Java client.


Prerequisites

  • To create a bucket, ECS must have at least one replication group configured.
  • Ensure that Perl is installed on the Linux machine on which you run s3curl.
  • Ensure that curl tool and the s3curl tool are installed. The s3curl tool acts as a wrapper around curl.
  • To use s3curl with x-emc headers, minor modifications must be made to the s3curl script. You can obtain the modified, ECS-specific version of s3curl from the EMCECS Git Repository.
  • Ensure that you have obtained a secret key for the user who will create the bucket. For more information, see the ECS Data Access Guide.
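The tool prerequisites above can be checked from the shell before you begin; the following is a minimal sketch (the status message format is illustrative, not part of the procedure):

```shell
# Verify that the tools s3curl depends on are available on the PATH.
status=""
for tool in perl curl; do
    if command -v "$tool" >/dev/null 2>&1; then
        status="$status $tool=ok"
    else
        status="$status $tool=missing"
    fi
done
echo "prerequisite check:$status"
```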

About this task

The EMC headers that can be used with buckets are described in Bucket HTTP headers.


Steps

  1. Obtain the identity of the replication group in which you want the bucket to be created, by typing the following command:
    GET https://<ECS IP Address>:4443/vdc/data-service/vpools
    The response provides the name and identity of all data service virtual pools. In the following example, the ID is urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global.
  2. Set up s3curl by creating a .s3curl file in which to enter the user credentials.
    The .s3curl file must have permissions 0600 (rw-------) when s3curl is run.
    In the following example, the profile my_profile references the user credentials for the account, and root_profile references the credentials for the root account.
    %awsSecretAccessKeys = (
        my_profile => {
            id  => '',
            key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN',
        },
        root_profile => {
            id  => 'root',
            key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN',
        },
    );
  3. Add the endpoint that you want to use s3curl against to the .s3curl file.
    The endpoint is the address of your data node or the load balancer that sits in front of your data nodes.
    push @endpoints, (
        '', '',
    );
  4. Create the bucket using s3curl, specifying the following parameters:
    • Profile of the user
    • Identity of the replication group in which to create the bucket (<vpool_id>), which is set using the x-emc-dataservice-vpool header
    • Any custom x-emc headers
    • Name of the bucket (<BucketName>).
    The following example shows a fully specified command:
    ./ --debug --id=my_profile --acl public-read-write \
    --createBucket -- -H 'x-emc-file-system-access-enabled:true' \
    -H 'x-emc-dataservice-vpool:<vpool_id>' http://<DataNodeIP>:9020/<BucketName>
    The example uses the x-emc-dataservice-vpool header to specify the replication group in which the bucket is created, and the x-emc-file-system-access-enabled header to enable the bucket for file system access, such as NFS or HDFS.
    NOTE: The --acl public-read-write argument is optional, but can be used to set permissions that enable access to the bucket, for example, if you intend to access the bucket over NFS from an environment that is not secured using Kerberos.
    If successful, (with --debug on) output similar to the following is displayed:
    s3curl: Found the url: host=; port=9020; uri=/S3B4; query=;
    s3curl: ordinary endpoint signing case
    s3curl: StringToSign='PUT\n\n\nThu, 12 Dec 2013 07:58:39 +0000\nx-amz-acl:public-read-write
    s3curl: exec curl -H Date: Thu, 12 Dec 2013 07:58:39 +0000 -H Authorization: AWS 
    root:AiTcfMDhsi6iSq2rIbHEZon0WNo= -H x-amz-acl: public-read-write -L -H content-type:  
    --data-binary  -X PUT -H x-emc-file-system-access-enabled:true 
    -H x-emc-dataservice-vpool:urn:storageos:ObjectStore:e0506a04-340b-4e78-a694-4c389ce14dc8:
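The Authorization header in the debug output above is an AWS Signature Version 2 signature: the StringToSign (which, on ECS, canonicalizes the x-emc-* headers alongside the x-amz-* headers) is HMAC-SHA1 signed with the user's secret key and Base64 encoded. The following is a simplified sketch using openssl; the header set and resource path shown here are illustrative, and a real request must include every canonicalized header in sorted order for the signature to verify:

```shell
# Example secret key taken from the .s3curl profile above (not real credentials).
secret='sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'

# AWS v2 StringToSign: method, (empty) Content-MD5 and Content-Type, date,
# canonicalized x-amz-/x-emc- headers (one per line, sorted), then the resource.
string_to_sign="$(printf 'PUT\n\n\n%s\n%s\n%s\n%s' \
    'Thu, 12 Dec 2013 07:58:39 +0000' \
    'x-amz-acl:public-read-write' \
    'x-emc-file-system-access-enabled:true' \
    '/S3B4')"

# HMAC-SHA1 over the string, Base64 encoded, forms "AWS <user>:<signature>".
signature=$(printf '%s' "$string_to_sign" | \
    openssl dgst -sha1 -hmac "$secret" -binary | base64)
echo "Authorization: AWS root:$signature"
```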

Next steps

You can list the buckets using the S3 interface:
./ --debug --id=my_profile http://<DataNodeIP>:9020/
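The list request returns a ListAllMyBucketsResult XML document containing one <Bucket> element per bucket. A quick way to pull the bucket names out of the response is shown below against a canned sample payload (the bucket names and dates are examples; the structure follows the S3 API):

```shell
# Sample ListAllMyBucketsResult payload; a real response comes from the GET above.
response='<ListAllMyBucketsResult>
  <Owner><ID>root</ID></Owner>
  <Buckets>
    <Bucket><Name>S3B4</Name><CreationDate>2013-12-12T07:58:39.000Z</CreationDate></Bucket>
    <Bucket><Name>mybucket</Name><CreationDate>2013-12-13T10:00:00.000Z</CreationDate></Bucket>
  </Buckets>
</ListAllMyBucketsResult>'

# Extract the <Name> elements and strip the surrounding tags.
printf '%s\n' "$response" | grep -o '<Name>[^<]*</Name>' | sed 's/<[^>]*>//g'
```

For a production script, an XML-aware tool is more robust than grep/sed, but this is enough for a quick check.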