Support is available by email at firstname.lastname@example.org, Monday through Friday, 9AM to 5PM CST. Tech support will be happy to answer any questions you have about Cloud Files, except for code-related issues. If you have questions about design or coding, please try our forums or documentation.
Cloud Files: FAQs
- Getting Started -
Rackspace Cloud Files uses Akamai Technologies, Inc., a leading tier-one global Content Delivery Network (CDN) provider, to offer CDN benefits to all Cloud Files users. Today, Akamai handles tens of billions of daily web interactions for customers such as Audi, NBC, Fujitsu, the U.S. Department of Defense, and NASDAQ.
The Rackspace Cloud Files/Akamai relationship brings full-fledged, robust CDN capabilities and unlimited file storage to developers and corporate IT departments alike. The CDN capability greatly enhances the end-user experience by speeding the delivery of bandwidth-heavy rich content, including audio and video. For literally pennies per gigabyte of bandwidth and storage, and with no upfront commitments, the CDN advantage is now available to all, not just to the giants of the internet. This partnership brings unlimited online storage, scalable content delivery, and application acceleration services, allowing businesses to more easily and affordably distribute content to millions of end users around the world. Together with Akamai, Rackspace has democratized content delivery.
With Akamai’s service, Cloud Files brings a powerful and easy way to publish content over a world-class, industry-leading CDN. Every Cloud Files user automatically gets access to this network. Users simply mark Containers for publishing to the CDN, and those Containers become instantly accessible through the Akamai CDN. Propagation of content to the edge locations happens automatically behind the scenes. The Rackspace Cloud/Akamai offering is not a one-off solution; content published through it is distributed across Akamai's entire infrastructure, just as it is for Akamai's other customers.
In the Rackspace Cloud control panel, it is a matter of creating a Container (the storage compartment for data), uploading Objects (the files to serve over CDN), and marking the Container as “public”. The Container is then assigned a unique URL which can be combined with Object names to embed in web pages, email messages, blog posts, etc. For example, a user could upload a photo to a Container called “images”. When this Container is published, it will be assigned a unique URL like
http://c0000532.cdn.cloudfiles.rackspace.com. The user could then share a link to the photo with a URL like
http://c0000532.cdn.cloudfiles.rackspace.com/IMG_3432.jpg. When that link is accessed, the photo is served from the CDN; it’s that simple!
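The URL construction described above can be sketched in a few lines of Python. This is an illustrative helper, not part of any official SDK; it simply joins the Container's CDN URI with a percent-encoded Object name:

```python
from urllib.parse import quote

def cdn_object_url(container_uri: str, object_name: str) -> str:
    """Build the public CDN URL for an Object in a published Container."""
    # Percent-encode the object name so spaces and other special
    # characters survive inside a URL.
    return container_uri.rstrip("/") + "/" + quote(object_name)

print(cdn_object_url("http://c0000532.cdn.cloudfiles.rackspace.com",
                     "IMG_3432.jpg"))
# → http://c0000532.cdn.cloudfiles.rackspace.com/IMG_3432.jpg
```

Any HTML page, email, or blog post can then embed the resulting URL directly.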
There is a growing list of tools and extensions. A sampling is as follows.
Rackspace CDN is a WordPress plug-in that allows any media in your WordPress uploads folder to be uploaded to Rackspace Cloud Files, powered by CDN. Once the file is uploaded, it is deleted from the local server. Link: https://wordpress.org/plugins/rackspace-cloud-files-cdn/
Cyberduck is a full-featured browser to publish your content on Cloud Files and manage your CDN distributions with a click of a button. Access your Rackspace Cloud Files remote storage using the familiar Cyberduck browser interface to create containers and upload content using drag and drop. Create a distribution to register that container with the Akamai's Content Distribution Network through using the Info panel of the browser that displays the CDN URL. Distribution can be toggled on or off at any time. Supporting the latest and greatest version of the Cloud Files protocol, you can also browse hierarchical pseudo-folder structures. Link: http://cyberduck.ch/
Fireuploader is a nifty tool for managing your files in the cloud. It provides a user-friendly interface for downloading and uploading files, and it also lets users set properties of uploaded files, such as CDN status and headers. It currently supports all of the features offered by Rackspace Cloud Files. Link: http://www.fireuploader.com
Django-Cloudfiles is an extension to the management system of Django, the popular website framework. Django-Cloudfiles lets you synchronize the static content directory of your Django-powered website to your Cloud Files account effortlessly. Highlights:
- It only uploads files that have been modified (but can be forced to upload everything).
- It can create a new container for you.
- It preserves your file hierarchy by naming remote files so that they emulate nested directories: no need to flatten your existing structure.
- It lets you store credentials in your site’s configuration file (for easy use) or specify them on the command line (for greater security).
- It ignores files you probably don’t want to upload, such as .DS_Store, .git, and Thumbs.db.
- Plug-ins for the Django management system do exist (e.g., django-extensions), but none yet integrate with Cloud Files (or any CDN, to our knowledge).
- It is a dead-simple drop-in (no coding necessary) with no external dependencies required.
- Django is all about reusability, and Django developers always look for an existing solution first; Django is also gaining steam, with support from Google App Engine and growing traction.
Link: http://github.com/rossdakin/django-cloudfiles/
The Media Manager plug-in mirrors your media library to your Cloud Files CDN. All URLs to this content use the Cloud Files path when you insert them via the media manager. You can import all of your media to the CDN.
New tools are added constantly - Check out Cloud Tools
Cloud Files™ is not a “file system” in the traditional sense. You will not be able to map or mount virtual disk drives as you can with other forms of storage such as a SAN or NAS. Because Cloud Files represents a different way of thinking about storage, the following is a review of its key concepts.
The Cloud Files system is designed to be used by many different customers. Your user account is your slice of the Cloud Files system. A user must identify themselves with a valid Username and their API Access Key. Once authenticated, that user has full read/write access to the Objects (files) stored under that user account.
A Container is a “storage compartment” for your data and provides a way for you to organize that data. You can think of a Container as a folder in Windows(R) or a directory in UNIX(R). The primary difference between a Container and these other “file system” concepts is that Containers cannot be nested. You can, however, create up to 500,000 Containers under your account.
An “Object” is the basic storage entity; it represents the “files” you store in Cloud Files, along with their metadata. When you upload data to Cloud Files, it is stored as-is (no compression or encryption) and consists of a location (Container), a name, and optional metadata in the form of key/value pairs. For instance, you might choose to store a backup of your digital photos and add a metadata key/value pair such as “PhotoAlbum: CaribbeanCruise”. Objects are grouped into Containers, and you can have any number of Objects within a Container.
Operations are the actions you perform against your account in Cloud Files; examples include creating or deleting Containers and uploading or downloading Objects. The full list of operations is documented under the ReST API section of the Developer Guide. Operations are performed via the ReST web service API or a language-specific API (currently we support Python, PHP, Java, and C#/.NET).
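As a rough sketch of what these operations look like at the HTTP level, the helpers below build the ReST requests for authenticating and for uploading an Object with metadata. The header names follow the classic Cloud Files v1 API, and the auth endpoint URL is an assumption for illustration; send the resulting request dictionaries with any HTTP client:

```python
import hashlib

def auth_request(username: str, api_key: str) -> dict:
    """GET request that exchanges credentials for a storage token."""
    return {
        "method": "GET",
        "url": "https://auth.api.rackspacecloud.com/v1.0",  # assumed endpoint
        "headers": {"X-Auth-User": username, "X-Auth-Key": api_key},
    }

def upload_request(storage_url: str, token: str, container: str,
                   name: str, data: bytes, metadata: dict) -> dict:
    """PUT request that uploads an Object with key/value metadata."""
    headers = {
        "X-Auth-Token": token,
        # The ETag header (MD5 of the body) lets the server verify
        # the upload arrived intact.
        "ETag": hashlib.md5(data).hexdigest(),
    }
    for key, value in metadata.items():
        headers[f"X-Object-Meta-{key}"] = value
    return {"method": "PUT",
            "url": f"{storage_url}/{container}/{name}",
            "headers": headers,
            "body": data}

req = upload_request("https://storage.example/v1/acct", "token123",
                     "photos", "cruise.jpg", b"...jpeg bytes...",
                     {"PhotoAlbum": "CaribbeanCruise"})
```

The same pattern (a method, a URL built from the storage URL, and a small set of `X-`-prefixed headers) applies to every Cloud Files operation.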
There are no permissions or access-controls around Containers or Objects other than being split into separate accounts. Users must authenticate with a valid API Access Key, but once authenticated they can create/delete Containers and Objects only within that Account.
At this time, there is no way to make an individual storage Object publicly accessible on its own; public access is granted by publishing a Container to the CDN.
To publish your data so that it can be served by Akamai's Content Distribution Network (CDN), you need to “publish to CDN” the Container that houses that data. When a Container is published any files will be publicly accessible and not require an authentication token for read access. Uploading content into a CDN-enabled Container is a secure operation and will require a valid authentication token.
Each published Container has a unique Uniform Resource Locator (URL) that can be combined with its Object names and openly distributed in web pages, emails, or other applications. For example, a published Container named “photos” might be referenced as “http://c0344252.cdn.cloudfiles.rackspace.com”. If that Container houses an image called “cute_kids.jpg”, then that image can be served by Akamai's CDN with the full URL of “http://c0344252.cdn.cloudfiles.rackspace.com/cute_kids.jpg”. This URL can be embedded in HTML pages, email messages, blog posts, etc. When that URL is accessed, a copy of the image is fetched from the Cloud Files storage system, cached in Akamai's CDN, and served from there for all subsequent requests.
This guide provides an overview of Rackspace Cloud Files and its features to help you get started quickly and serve content over Akamai's CDN service.
Cloud Files allows you to store data on the Rackspace infrastructure. The Cloud Control Panel allows customers to accomplish most tasks for managing data, but the Cloud Files API and some third-party tools are also available.
The basics of Cloud Files include:
- Cloud Files is a “cloud” storage system and not a traditional file system.
- Customers need to create “Containers” in the storage system to store data.
- You cannot create Containers within other Containers.
- Customers can have any number of top level Containers.
- Your data is stored in “Objects” within those Containers.
- You can have any number of Objects within a Container.
- Objects can vary in size from a few bytes to very large.
- Customers can interact with Cloud Files through the Rackspace Cloud Control Panel or language-specific programming interfaces.
More detail can be found in Cloud Files Key Concepts.
Cloud Files Usage Scenarios
There are a number of basic usage scenarios for Cloud Files. We have classified them based on user objectives.
For Application Developers
- A development guide is available at Cloud Files Dev Guide
- Cloud Files has several Software Development Kits (SDKs) for popular programming languages
- An API Getting Started Guide is available for the Cloud Files API
For Content Providers and Website Designers
- Use Cloud Files to serve static content for websites
- Check out some Cloud Files tools which can improve website performance and scalability.
For IT Managers
- Check out some Cloud Files tools which can ease day to day operations such as data backup and archival.
For all users
Additional Cloud Files Resources
- Rackspace Official SDKs and Tools for Java, .NET, node.js, PHP, Python, and Ruby
A Container is a “storage compartment” for your data and provides a way for you to organize that data. You can think of a Container as analogous to a folder in Windows® or a directory in UNIX®. The primary difference between a Container and these other “file system” constructs is that Containers cannot be nested. You can have up to 500,000 Containers in your account, but they only exist at the “top level” of your account and Containers cannot reside within other Containers.
Note: Containers scale to about one million objects before performance degrades. Containers can only be removed from Cloud Files if they do NOT contain any storage Objects. In other words, make sure the Container is empty before attempting to delete it.
First you must make sure you have generated a valid API Access Key. Then you can use either the Cloud Files user interface in the Rackspace Cloud Control Panel or one of our programming interfaces.
Please see the How do I use Cloud Files and CDN? Knowledge Center article for more details.
Please click here to view The Rackspace Cloud Terms of Service.
No. The Cloud Files CDN does not support exposing a custom crossdomain.xml file, because this file is required by the OpenStack Swift project. OpenStack Swift uses it as a global configuration file for the installation, and it cannot be modified for individual tenants in a multi-tenant environment such as our Public Cloud.
There are no permissions or access controls around containers or objects other than being split into separate accounts. Users must authenticate with a valid user name and API Access Key, but once authenticated, they can create or delete containers and objects only within that account.
At this time, there is no way to publicly access the objects stored in Cloud Files unless their container is published to the CDN. Each request to Cloud Files must include a valid "storage token" in an HTTP header transmitted over an HTTPS connection.
When you upload a file in the Cloud Control Panel, an Allow-Origin header is set on the container to support cross-origin resource sharing (CORS). Browsers prevent AJAX requests between different domains by default. Because the Cloud Files API and the Control Panel reside on different domains, CORS must be enabled to support uploads directly to a container. When the upload succeeds, the CORS headers are removed.
By allowing the browser to upload directly to the Cloud Files API, maximum upload performance can be achieved.
Read more about CORS at http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.
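Conceptually, enabling CORS on a container amounts to posting container metadata with an allow-origin value, as in this sketch. The metadata header name follows OpenStack Swift's convention; treat the URL and token values as placeholders:

```python
def cors_request(storage_url: str, token: str, container: str,
                 allowed_origin: str) -> dict:
    """POST request that sets the Allow-Origin metadata on a container."""
    return {
        "method": "POST",
        "url": f"{storage_url}/{container}",
        "headers": {
            "X-Auth-Token": token,
            # Browsers check this origin before allowing a cross-domain
            # AJAX upload to proceed.
            "X-Container-Meta-Access-Control-Allow-Origin": allowed_origin,
        },
    }

req = cors_request("https://storage.example/v1/acct", "token123",
                   "uploads", "https://mycloud.example")
```

The Control Panel performs an equivalent step automatically before a direct-to-container upload, then removes the header when the upload succeeds.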
Streaming content through Cloud Files lets you deliver video content quickly and easily, without making your users download the content first. They can begin viewing your content immediately and can jump around the video stream without needing to buffer.
Because Rackspace uses HTTP delivery for streaming content, you can use the Akamai CDN network to deliver your content. This means the performance should be identical to CDN speeds customers are used to. Before you begin, you must CDN-enable the container that holds your streaming content. In the Cloud Control Panel, click the gear icon of the container and select "Make Public (Enable CDN)".
Once your container is CDN-enabled, you will need its Streaming URLs. In the Cloud Control Panel, click the gear icon for the container and select "View All Links...". Below is an example of the CDN links that display:
- HTTP: http://cdc4c16471588d4846bf-cc339a649709710bbecd3db1e126ec2b.r3.cf1.rackcdn.com
- HTTPS: https://ac3c779acb946eaf4819-cc339a649709710bbecd3db1e126ec2b.ssl.cf1.rackcdn.com
- Streaming: http://b0c42c537095921be66c-cc339a649709710bbecd3db1e126ec2b.r3.stream.cf1.rackcdn.com
- iOS Streaming: http://09ac235af93af07922d6-cc339a649709710bbecd3db1e126ec2b.iosr.cf1.rackcdn.com
There are several different ways to stream your content with Cloud Files. Click a link below to find out more for each approach.
- JW Player
- OSMF (Open Source Media Framework), which allows you to build your own player
- iOS Device Streaming (link goes to API Developer Guide)
Why have we chosen to support specific players?
Many Rackspace customers are not Flash developers but still want to use a streaming offering. A few players dominate the market, and we plan to support each of them. Custom plug-ins are required for streaming delivery to work properly over the Akamai network. As Akamai adds support for more players, our customers will gain access to them.
Why are we not using RTMP?
- RTMP is probably the most popular delivery format today, but the market is quickly moving towards HTTP delivery for Streaming content. Here are just a few reasons the market is moving towards HTTP.
- Accessibility - Many firewalls block the RTMP and RTSP streaming protocols because corporations don't want users watching video at work. HTTP appears to be normal web traffic, meaning that videos served over HTTP are usually left open.
- Startup times - Akamai sees a significant reduction in average stream startup times for traffic served via HTTP.
- Throughput (image quality) - Because Akamai's HTTP network is larger than many other networks, Akamai is closer to the end user and achieves better data throughput. That means customers experience higher bit rates uninterrupted (and without buffering), improving the end user's overall experience.
Is this available internationally?
Yes, this is available to both US and UK Cloud customers.
Content Delivery Network
This article describes how to create a container within Cloud Files and manage files in it through the Cloud Files interface.
What is the CDN?
Using the Akamai content delivery network (CDN) service, Cloud Files brings you a powerful and easy way to publish content over a world-class industry leading CDN. Customers automatically get access to this network as part of using the Cloud Files service. The way it works is by distributing the content that you upload to Cloud Files across a global network of edge servers. What this means is that when someone is viewing content from your site, your CDN-enabled content will be served to them from the closest geographic edge server to their location. This feature dramatically increases the speed at which websites can load, no matter where your viewer is located.
This can be a great advantage when hosting content for an international audience. Though you can also use the API to upload content to Cloud Files, another way is to use the File Manager interface in the Cloud Control Panel. To store content on Cloud Files, you start by creating a container for your content. The container name should have no breaks, spaces, or special characters. Unlike a folder or directory, a container cannot have subdirectories. All your content will be at one level below the container name.
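A simple client-side check for the naming rule above might look like the following sketch. The exact rules the service enforces may differ; this version just rejects spaces, slashes, and other special characters:

```python
import re

# Letters, digits, dots, hyphens, and underscores only --
# no spaces, slashes, or other special characters.
_NAME_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def is_valid_container_name(name: str) -> bool:
    """Return True if the name looks safe to use as a container name."""
    return bool(name) and bool(_NAME_RE.match(name))

print(is_valid_container_name("images"))     # True
print(is_valid_container_name("my photos"))  # False (contains a space)
print(is_valid_container_name("a/b"))        # False (contains a slash)
```

Validating names before the upload step avoids a round trip to the API just to learn that a name was rejected.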
Create a container
- Log in to the Cloud Control Panel.
- In the top navigation bar, select Storage > Files.
- Click Create Container.
- Name the container, and then click Create Container.
- Click the gear icon next to the container to be made public and select Make Public (Enable CDN).
- Click Publish to CDN.
You can now share the files within the container.
Upload files to the container
- Click on the name of the container to which you want to upload files.
- Click Upload Files and select the files to upload.
- Click Open.
- After the file is uploaded, click Close Window.
The file appears in the list of available files within the container.
- Click the gear icon next to your file and select View All Links.
Options for sharing your file are displayed.
Who is Akamai?
Akamai Technologies, Inc. is a publicly traded company (NASDAQ: AKAM) founded in 1998. Akamai operates a pervasive, highly distributed cloud optimization platform with over 73,000 servers across nearly 1,000 networks in 70 countries.
What will I experience when Akamai is implemented as my new CDN provider?
Rackspace expects no customer impact during your transition to Akamai. Once we flip the switch to have a customer’s content served by Akamai, Akamai will begin supporting both new URLs and all existing CDN provider URLs.
This means that CDN customers who currently have Limelight URLs coded into their websites will continue to serve content using those URLs when they are transitioned to Akamai, but they will be distributed over the Akamai network. At this time, we do not have any plans to discontinue the legacy URLs.
If a customer requests their URL (either in the Control Panel or via API) for an object, they will be presented with a new Akamai URL. This does not mean that old URLs are invalid. However, as Rackspace releases new features like CNAME and SSL, customers will need to reference their new Akamai URL instead of their legacy URL.
Will customers have to change their code to use Akamai?
No. Existing URLs will continue to work after the transition. However, to take advantage of new features such as CNAME and SSL support, customers will need to reference their new Akamai URLs.
Will customers have to do anything different in the control panel?
No, as we add new features, we will educate our customers.
Should customers anticipate any downtime during this implementation?
No downtime is expected during the implementation of the Akamai platform.
Will the Cloud Files API be different?
No, all customer-facing API calls will remain the same.
Benefits of a CDN
- Higher capacity and scale- Strategically placed servers increase the network backbone capacity and number of concurrent users handled. For instance, when there is a 10 Mbit/s network backbone and 100 Mbit/s central server capacity, only 10 Mbit/s can be delivered. But when 10 servers are moved to 10 edge locations, total capacity can be 10*10 Mbit/s.
- Lower delivery costs - Strategically placed edge servers decrease the load on interconnects, public peers, private peers and backbones, freeing up capacity and lowering delivery costs. A CDN offloads traffic on a backbone network by redirecting traffic to edge servers.
- Lower network latency and packet loss - End users experience less jitter and improved stream quality. CDN users can therefore deliver high definition content with high Quality of Service, low costs and low network load.
- Higher Availability - CDNs dynamically distribute assets to strategically placed core, fallback and edge servers. CDNs may have automatic server availability sensing with instant user redirection. CDNs can thus offer 100% availability, even with large power, network or hardware outages.
- Better usage analytics - CDNs can give more control of asset delivery and network load. They can optimize capacity per customer, provide views of real-time load and statistics, reveal which assets are popular, show active regions, and report exact viewing details to customers.
The Wikipedia entry for CDN states: “A content delivery network or content distribution network (CDN) is a system of computers networked together across the Internet that cooperate transparently to deliver content to end users, most often for the purpose of improving performance, scalability, and cost efficiency.”
To elaborate, we have to start with the fact that the Internet is a network of networks. To get content from a server on the other side of the planet, IP packets have to travel through a whole series of backbone servers and public network cables.
Content Delivery Networks (CDNs) augment the transport network by employing various techniques to optimize content delivery. It is fairly easy to see how CDNs help by digging a little deeper on how the Internet works. A trace route to an internet address tells us how many network hops a simple request takes. Following is one to Yahoo.com.
>tracert www.yahoo.com

Tracing route to www-real.wa1.b.yahoo.com [126.96.36.199] over a maximum of 30 hops:

  1     1 ms    1 ms    1 ms  192.168.1.1
  2    11 ms    9 ms    9 ms  188.8.131.52
  3    11 ms    9 ms    9 ms  184.108.40.206
  4    11 ms    9 ms    9 ms  bb1-10g0-0.aus2tx.sbcglobal.net [220.127.116.11]
  5    16 ms   19 ms   16 ms  ex2-p14-1.eqdltx.sbcglobal.net [18.104.22.168]
  6    16 ms   16 ms   18 ms  asn10310-10-yahoo.eqdltx.sbcglobal.net [22.214.171.124]
  7    19 ms   17 ms  106 ms  ae2-p101.msr1.mud.yahoo.com [126.96.36.199]
  8    18 ms   18 ms   17 ms  te-8-1.bas-c1.mud.yahoo.com [188.8.131.52]
  9    18 ms   18 ms   17 ms  f1.www.vip.mud.yahoo.com [184.108.40.206]

Trace complete.
Additional hops mean more time to render data from a request on the user’s browser. The speed of delivery is also constrained by the slowest network in the chain. The solution is a CDN that places servers around the world and, depending on where the end user is located, serves them with the closest or most appropriate server. CDNs cut down on the hops back and forth to handle a request. This is shown in the figures below.
CDNs focus on improving the performance of web page delivery. CDNs like Akamai's support progressive downloads, which optimize the delivery of digital assets such as web page images. CDN nodes and servers are deployed in multiple locations around the globe over multiple internet backbones. These nodes cooperate with each other to satisfy data requests from end users, transparently moving content to optimize the delivery process. The larger the size and scale of the Edge Network deployment, the better the CDN.
They generally push the Edge Network closer to end users. The Edge Network is grown outward from the origins by purchasing co-location facilities, bandwidth, and servers. CDNs choose the best location for serving content while optimizing for performance. They may choose locations that are the fewest hops or fewest number of network seconds away from the requesting client. CDNs choose the least expensive locations while optimizing for cost. CDNs use various techniques such as web caching, server-load balancing, and request routing to achieve the optimization goals.
Because closer is better, web caches store popular content closer to the user. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache.
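A web cache's core behavior (keep recently requested content close at hand and evict the least recently used item when full) can be sketched in a few lines. This is a conceptual illustration, not how any particular CDN implements caching:

```python
from collections import OrderedDict

class EdgeCache:
    """Tiny LRU cache, a sketch of what an edge node does for hot content."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, url):
        if url not in self._items:
            return None                      # cache miss: fetch from origin
        self._items.move_to_end(url)         # mark as most recently used
        return self._items[url]

    def put(self, url, content):
        if url in self._items:
            self._items.move_to_end(url)
        self._items[url] = content
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used

cache = EdgeCache(capacity=2)
cache.put("/a.jpg", b"A")
cache.put("/b.jpg", b"B")
cache.get("/a.jpg")          # touch /a.jpg so it is most recently used
cache.put("/c.jpg", b"C")    # evicts /b.jpg, the least recently used
```

Popular content stays resident at the edge, while unpopular content falls back to the origin on the next request.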
Server-load balancing uses a web switch, content switch, or multilayer switch to share traffic among a number of servers or web caches. Here, the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantages of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks.
Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms for Global Server Load Balancing (shown in diagram) are used to route the request. Choosing the closest service node is done using a variety of techniques including proactive probing and connection monitoring.
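The "closest service node" decision described above can be sketched as picking the node with the lowest measured round-trip time from proactive probes. This is a simplification: a real global load balancer would also weigh capacity and node health:

```python
def pick_service_node(probe_results: dict) -> str:
    """Pick the node with the lowest measured latency.

    probe_results maps node name -> most recent round-trip time in ms,
    as gathered by proactive probing or connection monitoring.
    """
    return min(probe_results, key=probe_results.get)

probes = {"dallas": 19.0, "london": 110.0, "tokyo": 160.0}
print(pick_service_node(probes))   # → dallas
```

Swapping the latency metric for hop count, cost, or spare capacity yields the other routing policies mentioned above.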
For more information, see What are the benefits of using a CDN?
This article describes the use of the Time To Live (TTL) attribute and how it works.
When you create a container in Cloud Files and you make that container public, the files within that container have a designated TTL. The TTL is the time interval after which the CDN will reread the contents of the container. This attribute and its value can be modified in the CDN through the Cloud Files user interface.
New values take effect after the current TTL cycle is completed. The TTL can be any value between 15 minutes and 50 years. Use higher numbers for static content that doesn't change often, and use smaller numbers for content that changes more often. If you require a longer TTL, see the following blog post about using the API to set TTL: Extending TTL for Cloud Files CDN Users.
Use the following steps to modify a container’s TTL within the Cloud Control Panel:
- Log in to the Cloud Control Panel.
- In the top navigation bar, select Storage > Files.
- If the container is not already public, click the gear icon next to the container and select Make Public (Enable CDN). In the popup box, click Publish to CDN.
- Click the gear icon next to the container again and select Modify Time To Live (TTL).
- Enter the TTL for the container in seconds, and then click Save TTL.
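Via the API, the steps above amount to posting the container's CDN resource with a TTL header, roughly as sketched below. The `X-TTL` header name and the URL shape follow the classic Cloud Files CDN management API; treat them as assumptions rather than a definitive reference:

```python
MIN_TTL = 15 * 60                  # 15 minutes, in seconds
MAX_TTL = 50 * 365 * 24 * 60 * 60  # roughly 50 years, in seconds

def ttl_request(cdn_mgmt_url: str, token: str, container: str,
                ttl_seconds: int) -> dict:
    """POST request that updates a CDN-enabled container's TTL."""
    if not MIN_TTL <= ttl_seconds <= MAX_TTL:
        raise ValueError(
            f"TTL must be between {MIN_TTL} and {MAX_TTL} seconds")
    return {
        "method": "POST",
        "url": f"{cdn_mgmt_url}/{container}",
        "headers": {"X-Auth-Token": token, "X-TTL": str(ttl_seconds)},
    }

req = ttl_request("https://cdn.example/v1/acct", "token123",
                  "photos", 86400)   # cache objects for one day
```

Validating the 15-minute lower bound client-side avoids submitting a value the service would reject.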
There are a number of reasons to choose the Cloud Files service over similar services available in the market.
- Simplicity of CDN functionality.
- Utility pricing - paying only for what you use, after you use it.
- Multiple APIs for developers to choose from.
- Fast and reliable performance.
- World-class support.
Cloud Files is simple to use for developers and non-developers alike. Users can get started in as little as five minutes. Users do not have to know how to code to use Cloud Files and the CDN. Within minutes, users can sign up for Cloud Files, create a Container, upload a file, and publish that Container’s content through the CDN (refer to the “Cloud Files with CDN QuickStart” guide). The Cloud Files GUI is easy to navigate and use. Rackspace's browser-based control panel lets users easily upload files and distribute them over a CDN without writing any code.
All content can be backed by a CDN without complex negotiations or the details of tuning content for optimized delivery. With a traditional CDN, users may have to make a series of choices, such as the number of servers to use, before obtaining the service, and afterwards they must verify that usage bills are accurate. Cloud Files handles all of these issues. CDNs have historically been the preserve of companies with deep pockets, but Cloud Files has changed that.
Cloud Files is a user- and developer-friendly service. The security mechanisms, described in more detail in the Security section, are simple to use. Third-party tools that further simplify use of the storage service are available. Developers can use a language-specific application programming interface (API) to develop utilities or applications. The APIs are easy to use and are documented with examples so you can get started quickly.
Pricing for Cloud Files with CDN consists of the total amount of data you have stored (per GB) and the amount of bandwidth used to serve that data (outbound bandwidth only, charged per GB to any edge location on the CDN around the globe). There are no per-request fees for the CDN. The current pricing for Cloud Files can always be found on our website.
Cloud Files is a dynamically expanding storage system that is very flexible and built on the pay-for-use principle. Customers can use as much or as little storage as necessary, paying only for the storage space used. There is no limit on total space used, and individual files can range in size from a few bytes to extremely large. The GUI control panel lets users check the storage space and bandwidth they have used. There are no upfront fees or contracts; end users pay only for the storage space and outgoing bandwidth they use.
System administrators can use this for simple manual updates as well. Users do not have to know how to code to use Cloud Files and the CDN; no API is required to share files. Non-developers have a simple web interface that can be used to quickly and easily upload data and enable CDN access. There are multiple third-party tools (refer to the Tools and Applications section) that make Cloud Files even more flexible for users of specific environments, such as Mac OS.
Developers can use the ReST web service and language-specific APIs in PHP, Python, Java, Ruby, and C#/.NET. The APIs provide full support for managing content in Cloud Files and publishing content over the CDN, and they allow developers to work in the language they feel most comfortable with.
Rackspace has built networks with superior performance for years. The Akamai CDN capability improves the performance further for distribution of digital content. In short, Akamai can bring data closer to end users and serve popular content faster.
World Class support
With Cloud Files, world-class free technical support is only a click away. Live support, with real people, is available 24/7. Fanatical Support is built into the price. Users get peace of mind knowing that technical support is just a phone call or online chat away, at any time of day.
The Cloud Files storage system is designed to be highly available and fault tolerant. Cloud Files achieves data redundancy by replicating three full copies of each object across different storage nodes. Storage nodes are grouped into logical Zones within Rackspace datacenters. Zones are connected to redundant Internet backbone providers and reside on redundant power supplies and generators. The system is engineered to remain fully functional even in the event of a major service disruption within a datacenter. Content on the CDN provides an additional layer of data redundancy.
Cloud Files uses a number of different measures to ensure that your data is kept safe. First and foremost, all traffic between clients and the Cloud Files system uses SSL to establish a secure, encrypted communication channel. This ensures that data (usernames, passwords, and content) cannot be intercepted and read by a third party. Users authenticate with a valid username and API Access Key and are granted a session authentication token, which is used to validate all subsequent operations against Cloud Files. Users cannot manually terminate a valid session, but session tokens automatically expire after 24 hours, forcing clients to resend their credentials.
The API Access Key is only available from within the Rackspace Cloud’s control panel. Users must enter their valid username and password to gain access to view the API Access Key or to generate a new key.
It is important to note that Cloud Files does not apply any transformations to data before storing it. This means that Cloud Files *will not* store data in encrypted form unless the client encrypts it prior to transmission. This allows users to select the type and level of encryption best suited for their application.
When deleting storage Objects from the Cloud Files system, all copies of data are permanently removed within a matter of minutes. Furthermore, the physical blocks making up the customer’s data are zeroed over before that space is re-used for other customer data. In other words, after a delete request, the data will be unrecoverable.
Cloud Files™ is an affordable, redundant, scalable, and dynamic storage service offering. The core storage system is designed to provide a safe, secure, automatically re-sizing and network accessible way to store data.
You can store an unlimited number of files, ranging in size from a few bytes to extremely large. Users can store as much as they want and pay only for the storage space they actually use.
Cloud Files makes it easy to serve content through a CDN. This allows users to take advantage of a proven world-class content distribution network that is affordable and easy to use.
Cloud Files also allows users to store/retrieve files and CDN-enable content with a simple Web Service (ReST: Representational State Transfer) interface. There are also language-specific APIs that utilize the ReST API but make it much easier for developers to integrate into their applications.
Ideal uses for Cloud Files
There are a number of uses for the Cloud Files service. Cloud Files is an excellent storage solution for a number of scenarios and is well suited for a number of applications such as:
- Backing up or archiving data
- Serving images/videos (streaming data to the user’s browser)
- Serving content with a world-class CDN (Akamai)
- Storing secondary/tertiary, static web-accessible data
- Developing new applications with data storage integration
- Storing data when predicting storage capacity is difficult
- Storing data for applications affordably
Cloud Files™ is not a “file system” in the traditional sense. You will not be able to map or mount virtual disk drives like you can with other forms of storage such as a SAN or NAS. Because Cloud Files represents a different way of thinking about storage, please take a few moments to review the concepts.
Using Cloud Files
There are two ways to use Cloud Files:
- A graphical interface, such as the Rackspace Cloud Control Panel, Cyberduck, or Fireuploader
- Programming interfaces via ReST, Python, PHP, Ruby, Java, or C#/.NET.
Control Panel Interface
The Control Panel provides a browser-based, intuitive, easy-to-use graphical user interface. The interface allows you to manage your Containers and Objects without any programming knowledge. From there, users can CDN-enable a Container by marking it “public”. Any Objects stored in a public, CDN-enabled Container are directly accessible over Akamai’s CDN.
There are several programming interfaces for Cloud Files that will allow you to integrate the storage solution into your applications, or provide automated ways of accessing the system. Currently, we support a ReST web-services API and several programming language APIs (Python, PHP, Java, Ruby, and C#/.NET).
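As a rough illustration of what the language APIs do under the hood, the sketch below builds the request that marks a Container public over the ReST interface. The placeholder CDN management URL is invented for the example, and the X-CDN-Enabled header reflects the historical Cloud Files CDN API; treat both as assumptions and consult the Developer Guide for the authoritative contract.

```python
import urllib.request

def build_cdn_enable_request(cdn_management_url, container, auth_token):
    """Build (but do not send) a request that CDN-enables a container.

    The X-CDN-Enabled header is an assumption based on the historical
    Cloud Files CDN API; see the Developer Guide for current details.
    """
    req = urllib.request.Request(
        f"{cdn_management_url}/{container}", method="PUT")
    req.add_header("X-Auth-Token", auth_token)
    req.add_header("X-CDN-Enabled", "True")
    return req

# The management URL below is illustrative only.
req = build_cdn_enable_request("https://cdn.example/v1/account", "photos", "token123")
print(req.get_method(), req.full_url)
```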
Please refer to the Developer Guide for more details about using these interfaces. You can access the Developer Guide on our API documentation site.
The naming requirements for Cloud Files objects and containers (such as illegal characters and name length limits) include:
- Container names may not exceed 256 bytes and cannot contain a slash (/) character.
- Object names may not exceed 1024 bytes, but they have no character restrictions.
- Object and container names must be URL-encoded and UTF-8 encoded.
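Note that the limits above are measured in bytes of the UTF-8 encoding, not in characters, so names containing multi-byte characters hit them sooner than expected. A small sketch of validating and URL-encoding names, using only the limits from the list above:

```python
from urllib.parse import quote

MAX_CONTAINER_BYTES = 256   # container limit from the list above
MAX_OBJECT_BYTES = 1024     # object limit from the list above

def valid_container_name(name):
    """Containers: no slash, at most 256 bytes of UTF-8."""
    return "/" not in name and len(name.encode("utf-8")) <= MAX_CONTAINER_BYTES

def valid_object_name(name):
    """Objects: at most 1024 bytes of UTF-8, no character restrictions."""
    return len(name.encode("utf-8")) <= MAX_OBJECT_BYTES

# Names must be URL-encoded when placed in the request path.
print(valid_container_name("my/container"))  # False: slash not allowed
print(valid_container_name("photos-2011"))   # True
print(quote("summer photos.jpg"))            # spaces become percent-encoded
```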
File Transfers and File Sharing
The Rackspace Cloud now supports the transfer and storage of larger files. The following is a list of frequently asked questions about our large file support.
How does Rackspace support the upload of large files to Cloud Files?
Although support for uploading content to Cloud Files through the Cloud Control Panel is limited to files smaller than 5 GB, we can accommodate the transfer of files larger than 5 GB by allowing you to split them into multiple segments.
How large should my file segments be?
Rackspace does not enforce any lower limits on the file size. File segments cannot be larger than 5 GB, and we recommend not storing file segments that are smaller than 100 MB.
Can I serve my large files over the CDN?
At this time, you cannot serve files larger than 10 GB from the CDN.
Is there a simpler way to use this process?
We have created a tool called Swift to make this process easier. Swift segments your large file for you, creates a manifest file, and uploads the segments accordingly. After it uploads the segments, Swift manages them for you, deleting and updating them as needed. Information about the Swift tool, including a download, is available in our documentation.
When should I use the API instead of the Swift tool?
If you are interested in developing against the Rackspace Large File Support code to incorporate into your application, you should work directly with the Cloud Files API. Use the following steps:
- Upload the segments:
curl -X PUT -H 'X-Auth-Token: <token>' \
  http://<storage_url>/container/myobject/1 --data-binary '1'
curl -X PUT -H 'X-Auth-Token: <token>' \
  http://<storage_url>/container/myobject/2 --data-binary '2'
curl -X PUT -H 'X-Auth-Token: <token>' \
  http://<storage_url>/container/myobject/3 --data-binary '3'
- Create the manifest file:
curl -X PUT -H 'X-Auth-Token: <token>' \
  -H 'X-Object-Manifest: container/myobject/' \
  http://<storage_url>/container/myobject --data-binary ''
- Download the segments as a single object:
curl -H 'X-Auth-Token: <token>' \
  http://<storage_url>/container/myobject
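The curl steps above can be sketched in Python as well. The snippet only constructs the requests (it does not send them), the storage URL is a placeholder just like in the curl examples, and the X-Object-Manifest header is the one shown in the manifest step:

```python
import urllib.request

def put_segment(storage_url, container, obj, index, data, token):
    """Build a PUT request for one segment, named <obj>/<index>."""
    req = urllib.request.Request(
        f"{storage_url}/{container}/{obj}/{index}", data=data, method="PUT")
    req.add_header("X-Auth-Token", token)
    return req

def put_manifest(storage_url, container, obj, token):
    """Build the zero-byte manifest PUT whose header names the segment prefix."""
    req = urllib.request.Request(
        f"{storage_url}/{container}/{obj}", data=b"", method="PUT")
    req.add_header("X-Auth-Token", token)
    req.add_header("X-Object-Manifest", f"{container}/{obj}/")
    return req

# Placeholder storage URL, mirroring <storage_url> in the curl examples.
seg = put_segment("http://storage.example/v1/acct", "container", "myobject", 1, b"1", "t")
man = put_manifest("http://storage.example/v1/acct", "container", "myobject", "t")
print(seg.full_url)
print(man.get_header("X-object-manifest"))
```

A plain GET on the manifest object's URL then returns the concatenated segments as a single download.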
When should I use the Swift tool instead of the API, and what is the process?
If you want to upload large files but do not want to incorporate our code into an application, you might find it easier to use the Swift tool for your uploads and management. If you are using the tool, the process looks as follows:
The following code uploads largefile.iso to test_container in 10 segments (in this case, 10 MB per segment for a total of 100 MB).
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 upload -S 10485760 test_container largefile.iso
largefile.iso segment01
largefile.iso segment06
largefile.iso segment09
largefile.iso segment04
largefile.iso segment05
largefile.iso segment03
largefile.iso segment07
largefile.iso segment08
largefile.iso segment02
largefile.iso segment00
largefile.iso
The following code downloads the large file as a single object:
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 download test_container largefile.iso
largefile.iso
Alternatively, you can list the segments and download them separately:
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 list test_container_segments
largefile.iso/1300906481.0/10485760/00000000
largefile.iso/1300906481.0/10485760/00000001
largefile.iso/1300906481.0/10485760/00000002
largefile.iso/1300906481.0/10485760/00000003
largefile.iso/1300906481.0/10485760/00000004
largefile.iso/1300906481.0/10485760/00000005
largefile.iso/1300906481.0/10485760/00000006
largefile.iso/1300906481.0/10485760/00000007
largefile.iso/1300906481.0/10485760/00000008
largefile.iso/1300906481.0/10485760/00000009
You can list the main large object itself:
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 list test_container
largefile.iso
When you use the object name to list large object segments, the segment names are returned in alphabetical order. Plan segment names accordingly. For example, name10 would come before name9 alphabetically, so to preserve the correct order, the latter segment should be named name09.
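The ordering pitfall is easy to demonstrate: Python's sorted() applies the same lexicographic comparison that the listing uses.

```python
# Unpadded numeric suffixes sort out of order lexicographically.
unpadded = ["name1", "name10", "name2", "name9"]
# Zero-padded suffixes sort in the intended numeric order.
padded = ["name01", "name10", "name02", "name09"]

print(sorted(unpadded))  # ['name1', 'name10', 'name2', 'name9'] -- name10 before name2
print(sorted(padded))    # ['name01', 'name02', 'name09', 'name10'] -- correct order
```

Choosing a fixed-width, zero-padded suffix when you create the segments avoids the problem entirely.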
What will the download experience be like?
After files are segmented and uploaded with a manifest file, your large file is served as a single file, so the download experience mimics that of any other object retrieval.
Do I have access to my file segments?
Yes, you can edit your file segments just like any other object within Cloud Files.
How do I ensure that my files are linked correctly?
Include your manifest file in your upload. You can change your file name by editing this manifest file as well. We recommend using prefixing in your file segments to easily map your manifest file to the portions of your large file. For example, you could name your segments as follows:
Myfavoritemovie-01
Myfavoritemovie-02
Myfavoritemovie-03
..etc..
In this case, you would point your manifest file's X-Object-Manifest header at the shared prefix (Myfavoritemovie-).
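Generating prefixed, zero-padded segment names like these is a one-liner; the counter width keeps the segments in listing order. The "movies" container name below is illustrative only.

```python
prefix = "Myfavoritemovie-"

# Zero-padded counters keep the segments in alphabetical (and therefore correct) order.
segments = [f"{prefix}{i:02d}" for i in range(1, 4)]
print(segments)  # ['Myfavoritemovie-01', 'Myfavoritemovie-02', 'Myfavoritemovie-03']

# The manifest's X-Object-Manifest value points at <container>/<prefix>;
# the container name here is a made-up example.
manifest_value = "movies/" + prefix
print(manifest_value)  # movies/Myfavoritemovie-
```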
Can I use this feature from the Cloud Control Panel?
At this time, Rackspace has not implemented this functionality into the Rackspace Cloud Control Panel.
Developer guides are available on the Rackspace API documentation site, with documentation for both the raw API and the language-specific SDKs.