Quickstart for Cloud Files#

Rackspace offers a flexible and scalable solution to object storage through its Cloud Files service.

The core storage system is designed to provide a secure, network-accessible way to store an unlimited number of files. You can store as much as you want and pay only for storage space that you actually use.

Cloud Files also provides a simple yet powerful way to publish and distribute content behind a content delivery network (CDN). As a Cloud Files user, you get access to this network automatically.

Concepts#

To use this service effectively, you should understand the following key concepts:

CDN (Content Delivery Network)

A highly available network for delivering content to users. Cloud Files uses the Akamai CDN.

container

A storage compartment that provides a way for you to organize your data.

object

The basic storage entity in Cloud Files. An object represents a single file and its optional metadata that you upload to the system.

Authentication#

To use this service, you must authenticate first. To do so, you need your Rackspace username and API key. Your username is the one you use to log in to the Cloud Control Panel at http://mycloud.rackspace.com/.

To find your API key, use the instructions in View and reset your API key.

You can specify a default region. Here is a list of available regions:

  • DFW (Dallas-Fort Worth, TX, US)

  • HKG (Hong Kong, China)

  • IAD (Blacksburg, VA, US)

  • LON (London, England)

  • SYD (Sydney, Australia)

Some existing users also have access to the ORD (Chicago, IL) region; it is not available to new users.

Once you have these pieces of information, pass them into the SDK by replacing {username}, {apiKey}, and {region} with your values:

CloudIdentity cloudIdentity = new CloudIdentity()
{
   APIKey = "{apikey}",
   Username = "{username}"
};
CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
import (
  "github.com/rackspace/gophercloud"
  "github.com/rackspace/gophercloud/rackspace"
  osObjects "github.com/rackspace/gophercloud/openstack/objectstorage/v1/objects"
  "github.com/rackspace/gophercloud/rackspace/objectstorage/v1/containers"
  "github.com/rackspace/gophercloud/rackspace/objectstorage/v1/objects"
  "github.com/rackspace/gophercloud/rackspace/objectstorage/v1/cdncontainers"
  "github.com/rackspace/gophercloud/rackspace/objectstorage/v1/cdnobjects"
)

ao := gophercloud.AuthOptions{
  Username: "{username}",
  APIKey: "{apiKey}",
}
provider, err := rackspace.AuthenticatedClient(ao)

serviceClient, err := rackspace.NewObjectStorageV1(provider, gophercloud.EndpointOpts{
  Region: "{region}",
})

cdnClient, err := rackspace.NewObjectCDNV1(provider, gophercloud.EndpointOpts{
  Region: "{region}",
})
// Authentication in jclouds is lazy and happens on the first call to the cloud.
CloudFilesApi cloudFilesApi = ContextBuilder.newBuilder("rackspace-cloudfiles-us")
    .credentials("{username}", "{apiKey}")
    .buildApi(CloudFilesApi.class);
pkgcloud = require('pkgcloud');

// each client is bound to a specific service and provider
var client = pkgcloud.storage.createClient({
  provider: 'rackspace',
  username: '{username}',
  apiKey: '{apiKey}',
  region: '{region}'
});
require 'vendor/autoload.php';

use OpenCloud\Rackspace;

// Instantiate a Rackspace client.
$client = new Rackspace(Rackspace::US_IDENTITY_ENDPOINT, array(
    'username' => '{username}',
    'apiKey'   => '{apiKey}'
));
import pyrax

pyrax.set_setting("identity_type", "rackspace")
pyrax.set_default_region('{region}')
pyrax.set_credentials('{username}', '{apiKey}')
require 'fog'

@client = Fog::Storage.new(
  :provider => 'rackspace',
  :rackspace_username => '{username}',
  :rackspace_api_key => '{apiKey}',
  :rackspace_region => '{region}'
)
# {username} and {apiKey} below are placeholders; replace them with your actual credentials, without the braces.

curl -s -X POST https://identity.api.rackspacecloud.com/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{
    "auth": {
      "RAX-KSKEY:apiKeyCredentials": {
        "username": "{username}",
        "apiKey": "{apiKey}"
      }
    }
  }' | python -m json.tool

# From the resulting json, set three environment variables: TOKEN, ENDPOINT, and CDN_ENDPOINT.

export TOKEN="{tokenId}"
export ENDPOINT="{publicUrl}" # For the Cloud Files service
export CDN_ENDPOINT="{cdnEndpoint}" # For the Cloud Files CDN service
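If you script this flow, the values you need sit at predictable paths in the Identity v2.0 response: the token at `access.token.id`, and the storage and CDN URLs in the `cloudFiles` and `cloudFilesCDN` entries of `access.serviceCatalog`. A minimal sketch in Python; the sample catalog below is an illustrative fragment, not a real response:

```python
import json

# Illustrative fragment of an Identity v2.0 response; a real response
# contains many more services, endpoints, and fields.
sample_response = json.dumps({
    "access": {
        "token": {"id": "abc123tokenid"},
        "serviceCatalog": [
            {"name": "cloudFiles",
             "endpoints": [{"region": "IAD",
                            "publicURL": "https://storage101.example.com/v1/MossoCloudFS_xxxx"}]},
            {"name": "cloudFilesCDN",
             "endpoints": [{"region": "IAD",
                            "publicURL": "https://cdn5.example.com/v1/MossoCloudFS_xxxx"}]},
        ]
    }
})

access = json.loads(sample_response)["access"]
token = access["token"]["id"]

def public_url(catalog, service_name, region):
    """Return the publicURL for the named service in the given region."""
    for service in catalog:
        if service["name"] == service_name:
            for endpoint in service["endpoints"]:
                if endpoint["region"] == region:
                    return endpoint["publicURL"]
    raise KeyError("no endpoint for %s in %s" % (service_name, region))

endpoint = public_url(access["serviceCatalog"], "cloudFiles", "IAD")
cdn_endpoint = public_url(access["serviceCatalog"], "cloudFilesCDN", "IAD")
```

The same lookup works for any other service in the catalog; only the service name changes.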

Use the API#

Some of the basic operations you can perform with this API are described below.

For information about endpoints and authentication tokens, see Authentication Token.

Create container#

Before you can upload any objects to Cloud Files, you must create a container to receive the objects. To create a container:

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
cloudFilesProvider.CreateContainer("{container_name}", region: "{region}");
_, err := containers.Create(serviceClient, "{containerName}", nil)
ContainerApi containerApi = cloudFilesApi.getContainerApiForRegion("{region}");

containerApi.create("{containerName}");
client.createContainer({
  name: 'gallery'
}, function (err, container) {
  if (err) {
    // TODO handle as appropriate
    return;
  }

  // TODO use your container
});
// Obtain an Object Store service object from the client.
$objectStoreService = $client->objectStoreService(null, '{region}');

// Create a container for your objects (also referred to as files).
$container = $objectStoreService->createContainer('gallery');
container = pyrax.cloudfiles.create_container("gallery")
# Fog calls containers "directories."

directory = @client.directories.create(:key => 'gallery')
curl -i -X PUT $ENDPOINT/{containerName} \
  -H "X-Auth-Token: $TOKEN"

Get container#

After a container has been created, or whenever you want to inspect an existing container's objects or metadata, you can retrieve it as shown below:

Dictionary<string,string> container =
      cloudFilesProvider.GetContainerHeader("{container_name}", "{region}");
_, err := containers.Get(serviceClient, "{containerName}").ExtractMetadata()
ContainerApi containerApi = cloudFilesApi.getContainerApiForRegion("{region}");

Container container = containerApi.get("{containerName}");
client.getContainer('{containerName}', function(err, container) {
  if (err) {
    // TODO handle as appropriate
  }

  // TODO use your container
});
$container = $objectStoreService->getContainer('{containerName}');
container = pyrax.cloudfiles.get_container("gallery")
directory = @client.directories.get('{containerName}')
curl -i -X GET $ENDPOINT/{containerName} \
  -H "X-Auth-Token: $TOKEN" \
  -H "Accept: application/json"

CDN-enable container#

To make any objects within a container publicly readable, enable the container for access on the CDN (Content Delivery Network):

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
long timeToLive = 604800;
Dictionary<string, string> header =
      cloudFilesProvider.EnableCDNOnContainer("{container_name}", timeToLive);
opts := cdncontainers.EnableOpts{
  CDNEnabled: true,
  TTL:        300,
}
_, err := cdncontainers.Enable(cdnClient, "{containerName}", opts).ExtractHeader()
CDNApi cdnApi = cloudFilesApi.getCDNApiForRegion("{region}");

URI cdnUri = cdnApi.enable("{containerName}");
container.enableCdn(function(err) {
  if (err) {
    // TODO handle as appropriate
  }
});
$container->enableCdn();
container.make_public()
directory.public = true
directory.save
curl -i -X PUT $CDN_ENDPOINT/{containerName} \
  -H "X-Auth-Token: $TOKEN" \
  -H "X-CDN-Enabled: True" \
  -H "X-TTL: 604800"

Disable CDN for container#

If you no longer wish to have your objects publicly readable, disable CDN access for the container:

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
cloudFilesProvider.DisableCDNOnContainer("{container_name}");
opts := cdncontainers.EnableOpts{CDNEnabled: false}
_, err := cdncontainers.Enable(cdnClient, "{containerName}", opts).ExtractHeader()
CDNApi cdnApi = cloudFilesApi.getCDNApiForRegion("{region}");

cdnApi.disable("{containerName}");
container.disableCdn(function(err) {
  if (err) {
    // TODO handle as appropriate
  }
});
$container->disableCdn();
container.make_private()
directory.public = false
directory.save
curl -i -X POST $CDN_ENDPOINT/{containerName} \
  -H "X-Auth-Token: $TOKEN" \
  -H "X-CDN-Enabled: False"

Delete container#

To delete a container:

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
cloudFilesProvider.DeleteContainer("{container_name}");
_, err := containers.Delete(serviceClient, "{containerName}").ExtractErr()
ContainerApi containerApi = cloudFilesApi.getContainerApiForRegion("{region}");

containerApi.deleteIfEmpty("{containerName}");
client.destroyContainer(container, function(err) {
  if (err) {
    // TODO handle as appropriate
  }
});
// Delete an empty container.
$container->delete();

// Delete all the objects in the container and delete the container.
$container->delete(true);
container.delete()

# Delete all the objects in the container and delete the container
container_deleted = pyrax.cloudfiles.delete_container("gallery",
                                                      del_objects=True)
directory.destroy
curl -i -X DELETE $ENDPOINT/{containerName} -H "X-Auth-Token: $TOKEN"

As a data safety measure, you cannot delete a container until all objects within it have been deleted.

Upload objects to container#

To upload objects into a container:

// Option 1: Upload an object using a Stream
CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
using (FileStream fileStream = File.OpenRead("{path_to_file}"))
{
    cloudFilesProvider.CreateObject("{container_name}", fileStream, "{object_name}");
}

// Option 2: Upload a file directly using its filename
cloudFilesProvider.CreateObjectFromFile("{container_name}", "{path_to_file}", "{object_name}");
f, err := os.Open("{pathToFile}")
defer f.Close()
reader := bufio.NewReader(f)

_, err = objects.Create(
  serviceClient,
  "{containerName}",
  "{objectName}",
  reader,
  nil,
).ExtractHeader()
ObjectApi objectApi =
    cloudFilesApi.getObjectApiForRegionAndContainer("{region}", "{containerName}");

// Upload a String
Payload stringPayload = Payloads.newByteSourcePayload(ByteSource.wrap("sample-data".getBytes()));
objectApi.put("{objectName}", stringPayload);

// Upload a File
ByteSource byteSource = Files.asByteSource(new File("{filePath}"));
Payload filePayload = Payloads.newByteSourcePayload(byteSource);
objectApi.put("{objectName}", filePayload);
// we need to use the fs module to access the local disk
var fs = require('fs');

// TODO use a real file here
var filePath = '/tmp/somefile.txt';

// create a read stream for our source file
var source = fs.createReadStream(filePath);

// create a writeable stream for our destination
var dest = client.upload({
  container: 'sample-container-test',
  remote: 'somefile.txt'
});

dest.on('error', function(err) {
  // TODO handle err as appropriate
});

dest.on('success', function(file) {
  // TODO handle successful upload case
});

// pipe the source to the destination
source.pipe(dest);
// Upload an object to the container.
$localFileName  = __DIR__ . '/php-elephant.jpg';
$remoteFileName = 'php-elephant.jpg';

$handle = fopen($localFileName, 'r');
$object = $container->uploadObject($remoteFileName, $handle);

// Note that while we call fopen to open the file resource, we do not call fclose at the end.
// The file resource is automatically closed inside the uploadObject call.
container = pyrax.cloudfiles.create_container("gallery")
obj = container.store_object("thumbnail", data)
# :body can also be an open IO object like a File, to stream content instead
# of providing it all at once.

file = directory.files.create(
  :key => 'somefile.txt',
  :body => 'Rackspace is awesome!'
)
curl -i -X PUT $ENDPOINT/{containerName}/{objectName} \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: image/jpeg" \
  -T "{pathToFile}"

Upload objects to a subdirectory#

While you cannot create nested containers, Cloud Files does support subdirectories (pseudo-folders). You place an object in a subdirectory through a naming convention: include the subdirectory path in the object name, separating path segments with the forward slash character (/).

For example, if you want the relative URL of the object to be /images/kittens/thumbnails/kitty.png, upload the object to a container using that relative path as the object name.
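The convention is plain string manipulation: join the path segments with / to form the object name, and percent-encode each segment (but not the separators) when the name is embedded in a request URL. A quick illustrative sketch in Python:

```python
from urllib.parse import quote

def object_name(*segments):
    """Join path segments into a pseudo-directory object name.

    Cloud Files has no real directories; the '/' characters in the
    object name are the only thing that creates the hierarchy.
    """
    return "/".join(segments)

name = object_name("images", "kittens", "thumbnails", "kitty.png")

# When the name is embedded in a request URL, percent-encode each
# segment but keep the '/' separators intact.
url_path = quote(name, safe="/")

# Segments with spaces or other special characters get encoded:
spaced = quote(object_name("images", "cute kittens", "kitty.png"), safe="/")
```

The SDK examples below all follow the same pattern: the object name passed to the upload call simply contains the slashes.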

cloudFilesProvider.CreateObjectFromFile("{container_name}", "{path_to_file}", "{subdirectories}/{object_name}");
_, err := objects.Create(
  serviceClient,
  "{containerName}",
  "{subdirectories}/{objectName}",
  reader,
  nil,
).ExtractHeader()
objectApi.put("{subdirectories}/{objectName}", filePayload);
var dest = client.upload({
  container: 'sample-container-test',
  remote: '{subdirectories}/{objectName}'
});
$object = $container->uploadObject('{subdirectories}/{object_name}', $handle);
obj = container.store_object("{subdirectories}/{object_name}", data)
file = directory.files.create(
  :key => '{subdirectories}/{object_name}',
  :body => 'Rackspace is awesome!'
)
curl -i -X PUT $ENDPOINT/{containerName}/{subdirectories}/{objectName} \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: image/jpeg" \
  -T "{pathToFile}"

Change object metadata#

To change object metadata:

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
Dictionary<string, string> metadata = new Dictionary<string,string>();
metadata.Add("{key}","{value}");
cloudFilesProvider.UpdateObjectMetadata("{container_name}", "{object_name}", metadata, "{region}");
metadata := map[string]string{"some-key": "some-data"}
_, err := objects.Update(
  serviceClient,
  "{containerName}",
  "{objectName}",
  objects.UpdateOpts{Metadata: metadata},
).ExtractHeader()
ObjectApi objectApi =
    cloudFilesApi.getObjectApiForRegionAndContainer("{region}", "{containerName}");

objectApi.updateMetadata("{objectName}", ImmutableMap.of("some-key", "some-value"));
file.metadata = {
  'some-key': 'some-value'
};

file.updateMetadata(function(err) {
  if (err) {
    // TODO handle as appropriate
  }
});
// Update object metadata.
$object->saveMetadata(array(
    'some-key' => 'some-value'
));
obj.change_content_type("application/json")

# Generic metadata can be set with:
obj.set_metadata({"some-key": "some-value"})
file.content_type = 'application/json'
file.save

# Generic metadata can be set with:
file.metadata['some-key'] = 'some-value'
file.save
curl -i -X POST $ENDPOINT/{containerName}/{objectName} \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Object-Meta-Some-Key: some-value"

After an object has been uploaded to a container, you can change its metadata in place. For instance, you can change its content type so that requesting clients can treat it accordingly.

Get object#

You and your clients can retrieve objects from Cloud Files in several ways. The most common are described below.

Get object via temporary URL#

To retrieve an object via temporary URL:

var cloudFilesProvider = new CloudFilesProvider(cloudIdentity);

// Create or initialize your account's key used to generate temp urls
// This is one-time setup and only needs to be performed once.
const string accountTempUrlHeader = "Temp-Url-Key";
var accountMetadata = cloudFilesProvider.GetAccountMetaData("{region}");
string tempUrlKey;
if (!accountMetadata.ContainsKey(accountTempUrlHeader))
{
    tempUrlKey = Guid.NewGuid().ToString();
    var tempUrlMetadata = new Dictionary<string, string> { {accountTempUrlHeader, tempUrlKey} };
    cloudFilesProvider.UpdateAccountMetadata(tempUrlMetadata, "{region}");
}
else
{
    tempUrlKey = accountMetadata[accountTempUrlHeader];
}

// Generate a public URL for a cloud file which is good for 1 hour
DateTimeOffset expiration = DateTimeOffset.UtcNow + TimeSpan.FromHours(1);
Uri tempUrl = cloudFilesProvider.CreateTemporaryPublicUri(HttpMethod.GET, "{container-name}", "{object-name}", tempUrlKey, expiration, "{region}");
// Set the temp URL secret key
accountOpts := accounts.UpdateOpts{
  TempURLKey: "jnRB6#1sduo8YGUF&%7r7guf6f",
}
_, err := accounts.Update(serviceClient, accountOpts)

// Create the temp URL
createTempURLOpts := osObjects.CreateTempURLOpts{
  Method: osObjects.GET,
  TTL:    3600,
}
tempURL, err := objects.CreateTempURL(serviceClient, "example_container", "someobject", createTempURLOpts)
// Create a new ContextBuilder
ContextBuilder builder = ContextBuilder.newBuilder("rackspace-cloudfiles-us")
        .credentials("{username}", "{apiKey}");

// Access the RegionScopedBlobStore and get the Cloud Files API
BlobStore blobStore = builder.buildView(RegionScopedBlobStoreContext.class)
        .blobStoreInRegion("{region}");
CloudFilesApi cloudFilesApi = blobStore.getContext().unwrapApi(CloudFilesApi.class);

// Get the AccountApi and update the temporary URL key if not set
AccountApi accountApi = cloudFilesApi.getAccountApiForRegion("{region}");
accountApi.updateTemporaryUrlKey("jnRB6#1sduo8YGUF&%7r7guf6f");

// Get the temporary URL
BlobRequestSigner signer = blobStore.getContext().signerInRegion("{region}");
HttpRequest request = signer.signGetBlob("example_container", "someobject");
URI tempUrl = request.getEndpoint();
// This is not supported through the pkgcloud SDK at this time
// First, you'll need to set the "temp url key" on your Account. This is an
// arbitrary secret shared between Cloud Files and your application that's
// used to validate temp url requests. You only need to do this once.
$account = $service->getAccount();
$account->setTempUrlSecret();

// Get a temporary URL that will expire in 3600 seconds (1 hour) from now
// and only allow GET HTTP requests to it.
$tempUrl = $object->getTemporaryUrl(3600, 'GET');
# First, you'll need to set the "temp url key" on your Account. This is an
# arbitrary secret shared between Cloud Files and your application that's
# used to validate temp url requests. You only need to do this once.

# Let pyrax set the temp URL key for you.
pyrax.cloudfiles.set_temp_url_key()

# Or, you can set your own.
# pyrax.cloudfiles.set_temp_url_key("jnRB6#1sduo8YGUF&%7r7guf6f")

# Get a temporary URL that will expire in 3600 seconds (1 hour) from now.
temp_url = obj.get_temp_url(3600)
# First, you'll need to set the "temp url key" on your Account. This is an
# arbitrary secret shared between Cloud Files and your application that's
# used to validate temp url requests. You only need to do this once.

account = @client.account
account.meta_temp_url_key = 'jnRB6#1sduo8YGUF&%7r7guf6f'
account.save

# Then, when you want to generate temp urls, pass it to the Fog::Storage
# constructor as ":rackspace_temp_url_key":

@client = Fog::Storage.new(
  :provider => 'rackspace',
  :rackspace_username => '{username}',
  :rackspace_api_key => '{apiKey}',
  :rackspace_region => '{region}',
  :rackspace_temp_url_key => 'jnRB6#1sduo8YGUF&%7r7guf6f'
)

# Now, you can create a temporary url for any file you access from that
# @client with the #url method. Its argument is the expiration time for
# the generated URL, expressed as seconds since the epoch (1970-01-01 00:00).

directory = @client.directories.get('example_container')
file = directory.files.get('someobject')
temp_url = file.url(Time.now.to_i + 600)
# To create a TempURL, first set the X-Account-Meta-Temp-Url-Key metadata
# header on your Cloud Files account to a key that only you know.

curl -i -X POST $ENDPOINT \
  -H "X-Auth-Token: $TOKEN" \
  -H "X-Account-Meta-Temp-Url-Key: {arbitraryKey}"

# Create the temp_url_sig and temp_url query parameter values. OpenStack
# Object Storage provides the swift-temp-url script that auto-generates
# the temp_url_sig and temp_url_expires query parameters. For example,
# you might run this command:

bin/swift-temp-url GET 3600 $ENDPOINT/{containerName}/{objectName} {arbitraryKey}

# To create the temporary URL, prefix this path that is returned by the swift-temp-url
# command with the storage host name. For example, prefix the path with
# https://swift-cluster.example.com, as follows:

$ENDPOINT/{containerName}/{objectName}\
  ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91\
  &temp_url_expires=1374497657
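The signature that swift-temp-url emits can also be computed directly: it is an HMAC-SHA1 over the request method, the expiry timestamp, and the object path, keyed with the temp URL key you set on the account. A minimal sketch in Python; the host name, key, and timestamp below are illustrative placeholders:

```python
import hmac
from hashlib import sha1
from time import time

def temp_url(storage_url, container, obj, key, method="GET", ttl=3600, now=None):
    """Build a Swift-style TempURL.

    The signature is HMAC-SHA1 over "METHOD\nexpires\npath", keyed with
    the account's temp URL key.
    """
    base, account = storage_url.split("/v1/", 1)
    path = "/v1/" + account + "/" + container + "/" + obj
    expires = int((now if now is not None else time()) + ttl)
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s%s?temp_url_sig=%s&temp_url_expires=%d" % (base, path, sig, expires)

url = temp_url("https://swift-cluster.example.com/v1/AUTH_account",
               "{containerName}", "{objectName}", "{arbitraryKey}",
               ttl=3600, now=1374494057)
```

Because the signature covers the method and the path, a URL signed for GET on one object cannot be reused for another method or another object.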

Get object via SDK#

To download an object directly to your local storage via the SDK:

cloudFilesProvider.GetObjectSaveToFile(
      "{container_name}",
      "{output_folder}",
      "{object_name}",
      "{output_filename}");
result := objects.Download(serviceClient, "{containerName}", "{objectName}", nil)
content, err := result.ExtractContent()
// result.Body is also an io.ReadCloser of the file content that may be consumed as a stream.

err = ioutil.WriteFile("/tmp/somefile.txt", []byte(content), 0644)
ObjectApi objectApi =
    cloudFilesApi.getObjectApiForRegionAndContainer("{region}", "{containerName}");

SwiftObject object = objectApi.get("{objectName}");

// Write the object to a file
InputStream inputStream = object.getPayload().openStream();
File file = File.createTempFile("{objectName}", ".txt");
BufferedOutputStream outputStream = new BufferedOutputStream(new FileOutputStream(file));

try {
    ByteStreams.copy(inputStream, outputStream);
}
finally {
    inputStream.close();
    outputStream.close();
}
// We need to use the fs module to access the local disk.
var fs = require('fs');

// TODO use a real file here
var filePath = '/tmp/somefile.txt';

// create a writable stream for our destination file
var dest = fs.createWriteStream(filePath);

// create a readable stream for the remote object
var source = client.download({
  container: 'sample-container-test',
  remote: 'somefile.txt'
}, function(err) {
  if (err) {
    // TODO handle as appropriate
  }
});

// pipe the source to the destination
source.pipe(dest);
// Get the object content (data) as a stream.
$stream = $object->getContent();

// Cast to string
$content = (string) $stream;

// Write object content to file on local filesystem.
$stream->rewind();
$localFilename = tempnam("/tmp", 'php-opencloud-');
file_put_contents($localFilename, $stream->getStream());
# Get the data as a string
data = obj.get()

# Download the object locally to a file
obj.download("/tmp")
file.body
curl -X GET $ENDPOINT/{containerName}/{objectName} \
  -H "X-Auth-Token: $TOKEN"

Get object via CDN URL#

Prerequisite: the object's container must be CDN-enabled.

A CDN URL, unlike a temporary URL, never expires and can be treated as a publicly accessible permalink. To retrieve an object through its CDN URL:

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
ContainerCDN container = cloudFilesProvider.GetContainerCDNHeader(container: "{container_name}");
string urlForHTTP = container.CDNUri;
string urlForHTTPS = container.CDNSslUri;
string urlForiOSStreaming = container.CDNIosUri;
string urlForStreaming = container.CDNStreamingUri;
cdnURL, err := cdnobjects.CDNURL(cdnClient, "{containerName}", "{objectName}")
CDNApi cdnApi = cloudFilesApi.getCDNApiForRegion("{region}");

CDNContainer cdnContainer = cdnApi.get("{containerName}");

URI uri = cdnContainer.getUri();
URI sslUri = cdnContainer.getSslUri();
URI streamingUri = cdnContainer.getStreamingUri();
URI iosUri = cdnContainer.getIosUri();
var cdnUrl = container.cdnUri + '/' + encodeURIComponent(file.name);
$cdnUrl = $object->getPublicUrl();
import urllib
import urlparse

encoded_name = urllib.quote(obj.name)
cdn_url = urlparse.urljoin(container.cdn_uri, encoded_name)
file.public_url
curl -i -X HEAD $CDN_ENDPOINT/{containerName}/{objectName} \
  -H "X-Auth-Token: $TOKEN"

Delete object#

To delete an object from its container:

CloudFilesProvider cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
cloudFilesProvider.DeleteObject("{container_name}", "{object_name}");
err := objects.Delete(serviceClient, "{containerName}", "{objectName}", nil).ExtractErr()
ObjectApi objectApi =
    cloudFilesApi.getObjectApiForRegionAndContainer("{region}", "{containerName}");

objectApi.delete("{objectName}");
client.removeFile('gallery', 'somefile.txt', function(err) {
  if (err) {
    // TODO handle as appropriate
  }
});
$object->delete();
obj.delete()
file.destroy
curl -i -X DELETE $ENDPOINT/{containerName}/{objectName} \
  -H "X-Auth-Token: $TOKEN"

More information#

This quickstart is intentionally brief, demonstrating only a few basic operations. To learn more about interacting with Rackspace cloud services, explore the following sites: