Object Storage Overview

Object Storage is a data storage architecture that manages data as objects, as opposed to traditional block or file storage architectures. Instead of accessing blocks or files on a hierarchically organized remote filesystem, the client application stores and retrieves data using standard HTTPS communication. Object storage is a technology focused on delivering storage capacity to applications, and hence its management is done essentially via HTTPS RESTful API calls, including the creation of buckets, user management, metadata, storage policies and, of course, object upload and retrieval.
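
For illustration, the sketch below uses the s3cmd command-line client to create a bucket, push an object, and retrieve it again; the bucket name and file names are just examples, and any S3-compatible client can be used instead.

    $ s3cmd mb s3://bucket1                                # create a bucket (example name)
    $ s3cmd put report.pdf s3://bucket1/                   # push (upload) an object
    $ s3cmd get s3://bucket1/report.pdf report-copy.pdf    # retrieve (download) it again
    $ s3cmd ls s3://bucket1                                # list the objects in the bucket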

Redundancy and availability of the data are provided by Leaseweb: your data is stored across three availability zones.

The pool to which an application can push, query, and retrieve objects is usually called a bucket or space, and is accessible via a unique FQDN (..) in which the space name identifies the pool.

For the available regions, the following FQDNs allow you to access the S3 API:

You can get your API Key Id and Secret by following the steps described in the following article.
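
As an illustration, a minimal s3cmd configuration using those credentials could look like the sketch below; the access key, secret key, and endpoint FQDN are placeholders that must be replaced with your own values.

    $ cat ~/.s3cfg
    [default]
    access_key = <API Key Id>
    secret_key = <Secret>
    host_base = <region endpoint FQDN>
    host_bucket = %(bucket)s.<region endpoint FQDN>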

Object Storage Overview - Frequently Asked Questions

  • How do I make a bucket public?

    By default, the bucket policy does not allow you to access files directly.
    You can change the default policy by applying the following:

    {
      "Statement": [
        {
          "Effect""Allow",
          "Principal""*",
          "Action": [
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "urn:sgws:s3:::bucket1",
            "urn:sgws:s3:::bucket1/*"
          ]
        }
      ]
    }

    In this example the bucket name is “bucket1”; change it to your own bucket name.
    You can do this using S3 Browser as follows:

      • Right-click your bucket and select ‘Edit Bucket Policy’.

      • Paste the example from above (do not forget to change the bucket name to your own).

      • Click ‘Apply’ to apply the new policy.

    You can do the same using s3cmd as follows:

    • Use your favorite text editor to create a file named, for this example, bucket1.json
    • Paste in the policy shown above, which should result in a file like this:

    $ cat bucket1.json
    {
      "Statement": [
        {
          "Effect""Allow",
          "Principal""*",
          "Action": [
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "urn:sgws:s3:::bucket1",
            "urn:sgws:s3:::bucket1/*"
          ]
        }
      ]
    }

    • Now you can run the following command to apply the policy:

    $ s3cmd setpolicy bucket1.json s3://bucket1
    s3://bucket1/: Policy updated
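
    To verify that the new policy is in effect, you can, for example, inspect the bucket with s3cmd or fetch an object anonymously over HTTPS; the endpoint and object name below are placeholders.

    $ s3cmd info s3://bucket1                                    # the output includes the applied Policy
    $ curl https://<region endpoint FQDN>/bucket1/example.txt    # anonymous read should now succeed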


  • IP-restricted access

    It is also possible to restrict access to your files based on the source IP address, using a policy like the following example:

    {
      "Statement": [
        {
          "Sid""AllowEveryoneReadWriteAccessIfInSourceIpRange",
          "Effect""Allow",
          "Principal""*",
          "Action": [
            "s3:*Object",
            "s3:ListBucket"
          ],
          "Resource": [
            "urn:sgws:s3:::bucket1",
            "urn:sgws:s3:::bucket1/*"
          ],
          "Condition": {
            "IpAddress": {"sgws:SourceIp""ip1.ip1.ip1.ip1"},
            "NotIpAddress": {"sgws:SourceIp""ip2.ip2.ip2.ip2"}
          }
        }
      ]
    }

    Again, do not forget to change the bucket name and the ip1.ip1.ip1.ip1 and ip2.ip2.ip2.ip2 addresses, and then apply the policy using one of the methods described above (S3 Browser or s3cmd).
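
    For example, to apply it with s3cmd you would follow the same steps as before: save the policy above to a file (the file name here is just an example) and run setpolicy:

    $ s3cmd setpolicy ip-restricted.json s3://bucket1
    s3://bucket1/: Policy updated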