Create a bucket using the S3 API (with s3curl)

You can use the S3 API to create a bucket in a replication group. Because ECS uses custom headers (x-emc), the string to sign must be constructed to include these headers. This procedure uses the s3curl tool; there are also a number of programmatic clients you can use, for example, the S3 Java client.
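
Because the x-emc headers are signed, the string to sign for the bucket-creation request in this procedure has roughly the following shape (AWS Signature Version 2; the two empty lines are the Content-MD5 and Content-Type values, and the date and header values are illustrative):

    PUT\n
    \n
    \n
    Thu, 12 Dec 2013 07:58:39 +0000\n
    x-amz-acl:public-read-write\n
    x-emc-file-system-access-enabled:true\n
    x-emc-dataservice-vpool:<vpool_id>\n
    /<BucketName>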

Before you begin

  • To create a bucket, ECS must have at least one replication group configured.
  • Ensure that Perl is installed on the Linux machine on which you will run s3curl.
  • Ensure that the curl tool and the s3curl tool are installed. The s3curl tool acts as a wrapper around curl.
  • To use s3curl with x-emc headers, minor modifications must be made to the s3curl script. You can obtain the modified, ECS-specific version of s3curl from the EMCECS Git Repository.
  • Ensure that you have obtained a secret key for the user who will create the bucket. For more information, see the ECS Data Access Guide, available from the ECS Product Documentation page.

About this task

The EMC headers that can be used with buckets are described in Bucket HTTP headers.

Procedure

  1. Obtain the identity of the replication group in which you want to create the bucket, by issuing the following request.
    
    GET https://<ECS IP Address>:4443/vdc/data-service/vpools
    
    The response provides the name and identity of all data services virtual pools. In the following example, the ID is urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global.
    <data_service_vpools>
        <data_service_vpool>
            <creation_time>1403519186936</creation_time>
            <id>urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global</id>
            <inactive>false</inactive>
            <tags/>
            <description>IsilonVPool1</description>
            <name>IsilonVPool1</name>
            <varrayMappings>
                <name>urn:storageos:VirtualDataCenter:1de0bbc2-907c-4ede-b133-f5331e03e6fa:vdc1</name>
                <value>urn:storageos:VirtualArray:793757ab-ad51-4038-b80a-682e124eb25e:vdc1</value>
            </varrayMappings>
        </data_service_vpool>
    </data_service_vpools>
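    The management API requires an authentication token. If you issue this request with curl, a minimal sketch looks like the following; it assumes you have already obtained a session token (shown here as <auth_token>) by logging in to the ECS management API:
    
    curl -ks -H "X-SDS-AUTH-TOKEN: <auth_token>" \
        https://<ECS IP Address>:4443/vdc/data-service/vpools
    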
  2. Set up s3curl by creating a .s3curl file in which to enter the user credentials.
    The .s3curl file must have permissions 0600 (rw-------) when s3curl.pl is run.
    In the following example, the profile my_profile references the user credentials for the user@yourco.com account, and root_profile references the credentials for the root account.
    
    %awsSecretAccessKeys = (
        my_profile => {
            id  => 'user@yourco.com',
            key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'
        },
       root_profile => {
            id  => 'root',
            key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'
        },
    );
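    
    For example, to apply the required permissions to the file (assuming it is in your home directory):
    
    chmod 0600 ~/.s3curl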
    
  3. Add the endpoint against which you want to use s3curl to the .s3curl file.
    The endpoint is the address of your data node or of the load balancer that sits in front of your data nodes. s3curl uses this list to determine how to construct the string to sign for a request.
    
    push @endpoints, (
        '203.0.113.10', 'lglw3183.lss.dell.com',
    );
    
  4. Create the bucket using s3curl.pl and specify the following parameters:
    • Profile of the user
    • Identity of the replication group in which to create the bucket (<vpool_id>), which is set using the x-emc-dataservice-vpool header
    • Any custom x-emc headers
    • Name of the bucket (<BucketName>)
    The following example shows a fully specified command.
    
    ./s3curl.pl --debug --id=my_profile --acl public-read-write \
    --createBucket -- -H 'x-emc-file-system-access-enabled:true' \
    -H 'x-emc-dataservice-vpool:<vpool_id>' http://<DataNodeIP>:9020/<BucketName>
    
    
    The example uses the x-emc-dataservice-vpool header to specify the replication group in which the bucket is created and the x-emc-file-system-access-enabled header to enable the bucket for file system access, such as for NFS or HDFS.
    The --acl public-read-write argument is optional, but can be used to set permissions that enable access to the bucket (for example, if you intend to access the bucket as NFS from an environment that is not secured using Kerberos).
    If the command is successful (with --debug on), output similar to the following appears:
    
    s3curl: Found the url: host=203.0.113.10; port=9020; uri=/S3B4; query=;
    s3curl: ordinary endpoint signing case
    s3curl: StringToSign='PUT\n\n\nThu, 12 Dec 2013 07:58:39 +0000\nx-amz-acl:public-read-write
    \nx-emc-file-system-access-enabled:true\nx-emc-dataservice-vpool:
    urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global:\n/S3B4'
    s3curl: exec curl -H Date: Thu, 12 Dec 2013 07:58:39 +0000 -H Authorization: AWS 
    root:AiTcfMDhsi6iSq2rIbHEZon0WNo= -H x-amz-acl: public-read-write -L -H content-type:  
    --data-binary  -X PUT -H x-emc-file-system-access-enabled:true 
    -H x-emc-dataservice-vpool:urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global: http://203.0.113.10:9020/S3B4
    

What to do next

You can list the buckets using the S3 interface:
./s3curl.pl --debug --id=my_profile http://<DataNodeIP>:9020/
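
You can also list the contents of the new bucket by issuing a GET on the bucket itself, for example:

./s3curl.pl --debug --id=my_profile http://<DataNodeIP>:9020/<BucketName>/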