Support is available by email at email@example.com Monday through Friday, 9AM to 5PM CST. Tech support will be happy to answer any questions you have about Cloud Files, except for code-related issues. If you have questions about design or coding, please try our forums or documentation.
Cloud Files: FAQs
- Getting Started -
Rackspace Cloud Files uses Akamai Technologies, Inc., a leading tier-one global Content Delivery Network (CDN) provider, to bring CDN benefits to all Cloud Files users. Today Akamai handles tens of billions of daily web interactions for customers such as Audi, NBC, Fujitsu, the U.S. Department of Defense, and NASDAQ.
The Rackspace Cloud Files/Akamai relationship brings full-fledged, robust CDN capabilities and unlimited file storage to developers and corporate IT departments alike. The CDN capability greatly enhances the quality of the end-user experience by speeding the delivery of bandwidth-heavy rich content, including audio and video. For literally pennies per gigabyte of bandwidth and storage, and with no upfront commitments, the CDN advantage is now available to everyone, not just to the giants of the internet. This partnership brings unlimited online storage, scalable content delivery, and application acceleration services, allowing businesses to more easily and affordably distribute content to millions of end users around the world. Together with Akamai, Rackspace has democratized content delivery.
With Akamai’s service, Cloud Files offers a powerful and easy way to publish content over a world-class, industry-leading CDN. A Cloud Files user automatically gets access to this network. Users simply mark Containers for publishing to the CDN, and their contents are then instantly accessible through the Akamai CDN. Propagation of content to the edge locations happens automatically behind the scenes. The Rackspace Cloud/Akamai offering is not a one-off solution; content published through it is distributed across Akamai's entire infrastructure just as it is for Akamai's other customers.
In the Rackspace Cloud control panel, it is a matter of creating a Container (the storage compartment for data), uploading Objects (the files to serve over CDN), and marking the Container as “public”. The Container is then assigned a unique URL which can be combined with Object names to embed in web pages, email messages, blog posts, etc. For example, a user could upload a photo to a Container called “images”. When this Container is published, it will be assigned a unique URL like
http://c0000532.cdn.cloudfiles.rackspace.com. The user could then share a link to the photo with a link like
http://c0000532.cdn.cloudfiles.rackspace.com/IMG_3432.jpg. When that link is accessed, the photo is served from the CDN; it’s that simple!
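The URL scheme above is just the Container's CDN URI joined with the Object name. A minimal sketch in Python, using the illustrative container URI from this example (not a real endpoint):

```python
def cdn_url(container_cdn_uri: str, object_name: str) -> str:
    """Combine a published Container's CDN URI with an Object name."""
    return container_cdn_uri.rstrip("/") + "/" + object_name.lstrip("/")

# Example container URI from above (illustrative only)
print(cdn_url("http://c0000532.cdn.cloudfiles.rackspace.com", "IMG_3432.jpg"))
# http://c0000532.cdn.cloudfiles.rackspace.com/IMG_3432.jpg
```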
There is a growing list of tools and extensions. A sampling is as follows.
Rackspace CDN is a WordPress plug-in that allows any media in your WordPress uploads folder to be uploaded to Rackspace Cloud Files, powered by CDN. Once the file is uploaded, it is deleted from the local server. Link: https://wordpress.org/plugins/rackspace-cloud-files-cdn/
Cyberduck is a full-featured browser for publishing your content on Cloud Files and managing your CDN distributions with the click of a button. Access your Rackspace Cloud Files remote storage using the familiar Cyberduck browser interface to create containers and upload content using drag and drop. Create a distribution to register a container with Akamai's Content Distribution Network using the Info panel of the browser, which displays the CDN URL. Distribution can be toggled on or off at any time. Supporting the latest version of the Cloud Files protocol, Cyberduck also lets you browse hierarchical pseudo-folder structures. Link: http://cyberduck.ch/
Fireuploader is a nifty tool for managing your files in the cloud. It provides a user-friendly interface to download and upload files, and it also lets users set properties of uploaded files, such as CDN publication and headers. It currently supports all the features offered by Rackspace Cloud Files. Link: http://www.fireuploader.com
Django-Cloudfiles is an extension to the management system of Django, the popular website framework. Django-Cloudfiles lets you synchronize the static content directory of your Django-powered website to your Cloudfiles account effortlessly. Django-Cloudfiles is…
Cool!
1. It only uploads files that have been modified (but can force upload-all).
2. It can create a new container for you.
3. It preserves your file hierarchy by naming your remote files such that they emulate nested directories: no need to flatten your existing structure!
4. It lets you store credentials in your site’s configuration file (for easy use) or specify them on the command line (for greater security).
5. It ignores files you probably don’t want to upload, like .DS_Store, .git, Thumbs.db, etc.
Original!
6. Plug-ins for the Django management system do exist (e.g. django-extensions), but none integrate with Cloudfiles yet (or any CDN to my knowledge).
Easy!
7. A dead simple drop-in (no coding necessary).
8. No external dependencies required.
Going to be Popular!
9. Django is all about reusability; Django developers always look for an existing solution first (like this one)!
10. Django is gaining steam: it’s supported by Google App Engine, and it is gaining traction.
Link: http://github.com/rossdakin/django-cloudfiles/
The Media Manager plug-in mirrors your media library to your Cloud Files CDN. All URLs to this content will use the Cloud Files path when you insert them via the media manager. You can import all of your media to the CDN.
New tools are added constantly; check out Cloud Tools.
Cloud Files™ is not a “file system” in the traditional sense. You will not be able to map or mount virtual disk drives like you can with other forms of storage such as a SAN or NAS. Since Cloud Files is a different way of thinking when it comes to storage, following is a review of key concepts.
The Cloud Files system is designed to be used by many different customers. Your user account is your slice of the Cloud Files system. A user must identify themselves with a valid Username and their API Access Key. Once authenticated, that user has full read/write access to the Objects (files) stored under that user account.
A Container is a “storage compartment” for your data and provides a way for you to organize that data. You can think of a Container as a folder in Windows(R) or a directory in UNIX(R). The primary difference between a Container and these other “file system” concepts is that Containers cannot be nested. You can, however, create up to 500,000 Containers under your account.
An “Object” is the basic storage entity and, together with its optional metadata, represents the “files” you store in Cloud Files. When you upload data to Cloud Files, the data is stored as-is (no compression or encryption); an Object consists of a location (Container), a name, and optional metadata in the form of key/value pairs. For instance, you might choose to store a backup of your digital photos and add a metadata key/value pair such as “PhotoAlbum: CaribbeanCruise”. Objects are grouped into Containers, and you can have any number of Objects within a Container.
Operations are the actions you perform against your account in Cloud Files, such as creating or deleting Containers and uploading or downloading Objects. The full list of operations is documented under the ReST API section of the Developer Guide. Operations are performed via the ReST web service API or a language-specific API (currently we support Python, PHP, Java, and C#/.NET).
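As a rough sketch of how an operation maps onto the ReST API, an Object upload is a PUT whose metadata key/value pairs travel as X-Object-Meta-* headers. The storage URL, container, and token below are placeholders, not real values:

```python
def object_put(storage_url: str, container: str, name: str,
               token: str, metadata: dict) -> tuple:
    """Build the method, URL, and headers for an Object upload (PUT).
    Each metadata key/value pair becomes an X-Object-Meta-* header."""
    headers = {"X-Auth-Token": token}
    for key, value in metadata.items():
        headers["X-Object-Meta-" + key] = value
    return "PUT", f"{storage_url}/{container}/{name}", headers

method, url, headers = object_put("http://<storage_url>", "photos",
                                  "cruise.jpg", "<token>",
                                  {"PhotoAlbum": "CaribbeanCruise"})
print(headers["X-Object-Meta-PhotoAlbum"])  # CaribbeanCruise
```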
There are no permissions or access-controls around Containers or Objects other than being split into separate accounts. Users must authenticate with a valid API Access Key, but once authenticated they can create/delete Containers and Objects only within that Account.
At this time, there is no way to make an individual storage Object publicly accessible on its own; public access is granted at the Container level, as described next.
To publish your data so that it can be served by Akamai's Content Distribution Network (CDN), you need to “publish to CDN” the Container that houses that data. When a Container is published, any files in it are publicly accessible and do not require an authentication token for read access. Uploading content into a CDN-enabled Container remains a secure operation and requires a valid authentication token.
Each published Container has a unique Uniform Resource Locator (URL) that can be combined with its Object names and openly distributed in web pages, emails, or other applications. For example, a published Container named “photos” can be referenced as “
http://c0344252.cdn.cloudfiles.rackspace.com ”. If that Container houses an image called “cute_kids.jpg”, then that image can be served by Akamai's CDN with the full URL of “
http://c0344252.cdn.cloudfiles.rackspace.com/cute_kids.jpg”. This URL can be embedded in HTML pages, email messages, blog posts, etc. When that URL is accessed, a copy of that image is fetched from the Cloud Files storage system and cached in Akamai's CDN and served from there for all subsequent requests.
This guide is intended to provide a high-level conceptual overview and general instructions so that you can get started quickly and serve content over Akamai's CDN service. Cloud Files from The Rackspace Cloud is an offering which allows you to store data on the Rackspace infrastructure. A control panel allows customers to accomplish most tasks for managing data. Key concepts are documented in a separate article, but in a nutshell they are as follows.
- Cloud Files is a “cloud” storage system and not a traditional file system.
- Customers need to create “Containers” in the storage system to store data.
- You cannot create Containers within other Containers.
- Customers can have any number of top level Containers.
- Your data is stored in “Objects” within those Containers.
- You can have any number of Objects within a Container.
- Objects can vary in size from a few bytes up to 5 GB each (larger files can be stored as segments).
- Customers can interact with Cloud Files through the Rackspace Cloud Control Panel or language specific programming interfaces.
Cloud Files Usage Scenarios
There are a number of basic usage scenarios for Cloud Files. We have classified them based on user objectives.
For Application Developers
For Content Providers and Website Designers
- Use Cloud Files to serve static content for websites
- Check out some Cloud Files tools which can improve website performance and scalability.
For IT Managers
- Check out some Cloud Files tools which can ease day to day operations such as data backup and archival.
For all users
Additional Cloud Files Resources
- Rackspace Official SDKs and Tools: (Java, .NET, node.js, PHP, Python, Ruby)
A Container is a “storage compartment” for your data and provides a way for you to organize that data. You can think of a Container as analogous to a folder in Windows® or a directory in UNIX®. The primary difference between a Container and these other “file system” constructs is that Containers cannot be nested. You can have up to 500,000 Containers in your account, but they only exist at the “top level” of your account and Containers cannot reside within other Containers.
Note: Containers scale to about one million objects before performance degrades. Containers can only be removed from Cloud Files if they do NOT contain any storage Objects. In other words, make sure the Container is empty before attempting to delete it.
First you must make sure you have generated a valid API Access Key. Then you can use either the Cloud Files user interface in the Rackspace Cloud Control Panel or one of our programming interfaces.
Please see the How do I use Cloud Files and CDN? Knowledge Center article for more details.
The Rackspace Cloud Terms of Service are available on the Rackspace website.
The API documentation is available from the Rackspace API documentation site.
Client code samples can be accessed via links from the Rackspace Cloud Control Panel. To reach them, log into Rackspace Cloud Control Panel, then click the "Support" section on the left, then "Developer Resources". You will then see the download links under "Client Code". There are links to the documentation on that page as well.
Developer guides are available on the Rackspace API documentation site. Documentation is available for the raw API and for language-specific SDKs.
Bulk Import to Cloud Files is a product that enables customers to safely and securely send Rackspace physical media to be uploaded directly at the datacenters. This means customers can avoid the time and bandwidth charges that go along with uploading large amounts of data from a standard office connection.
When the import is complete, you have the option to have your drive shipped back to you or destroyed.
When destroying the drive, we use a degaussing machine: a powerful electromagnet that destroys the data on the platters by exposing the device to a strong magnetic field, scrambling the magnetic alignment. This leaves the drive with random patterns and no preferred orientation, making the data unrecoverable. This service carries an additional fee of $15 per device.
Your Rackspace Migration Specialist will provide you with continuous updates along the way. You will be notified of the following:
- When Rackspace receives the device
- When we begin the data import
- When the import is complete
- When the device has been shipped back or destroyed
Simply contact your Cloud Files support technician, and they can get the process started for you. You will be assigned a Migration Specialist who can answer any questions you have about your import along the way.
Currently, files cannot be encrypted for shipping and decrypted before upload. We plan to offer this feature in the future.
We accept data in the most common devices and file systems:
Accepted Devices: USB, SATA, eSATA
Accepted File Systems: NTFS, ext2, ext3, ext4, FAT32
For more detailed information about the Bulk Import process, please read the section for Bulk Importing Data in the Cloud Files API Developer Guide.
Just follow these simple steps:
- If you do not have a Cloud Files account, create one and activate it
- Contact your Cloud Files support technician using the Control Panel. He or she will get the process started for you. You will be assigned a Migration Specialist who can answer any questions you have about your import and who will provide you with instructions, the shipping location of our data center, and other information required to ship your device.
- Fill out a form with information about your data and physical device and prepare it for shipping.
- Copy the data to your device. Do not encrypt your data unless you want it stored and retrieved in encrypted form, as Rackspace will not decrypt it for you.
- Your Migration Specialist will review all the logistics with respect to shipping your device to Rackspace (e.g. datacenter address, method of shipment, security guidelines, etc.).
When we receive your data, your Migration Specialist will hook up your device to a workstation that has a direct link to our Cloud Files infrastructure. This ensures the process of loading the data happens as quickly as possible. Once your data is imported, the drive will either be shipped back to you or destroyed via degaussing, depending on your instructions.
Rackspace charges $90 per drive for bulk imports. If you would like your drive destroyed instead of shipped back to you, there is an additional $15 fee per drive. If you choose to have your device shipped back, return shipping is free within North America.
Content Delivery Network
Note: UK Cloud customers should use https://mycloud.rackspace.co.uk/ to access our New Cloud Control Panel.
The following article will demonstrate how to create a Container within Cloud Files and also manage files within this Container through the Cloud Files interface. Let's take a look:
What is the CDN?
Using Akamai’s service, Cloud Files brings you a powerful and easy way to publish content over a world-class, industry-leading CDN. Customers automatically get access to this network, included in the cost of using the Cloud Files service. It works by distributing the content that you upload to Cloud Files across a global network of edge servers. This means that when someone views content from your site, your CDN-enabled content is served to them from the geographically closest edge server. This feature dramatically increases the speed at which websites load, no matter where your viewer is located.
This can be a great advantage when hosting content for an international audience. Though you can also use the API to upload content to Cloud Files, another way is to use the File Manager interface in the Control Panel. To store content on Cloud Files, you start by creating a Container for your content. The container name should have no breaks, spaces, or special characters; and unlike a folder or directory, a Container cannot have sub-directories. All your content will sit one level below the Container name. Let's take a look at how to set up and manage a container:
- First, let's log in to the Cloud Control Panel and, once you're logged in, select Files at the top.
- Next, we'll select the Create Container option, name our container, and then select Create Container once more. Next, we'll locate our container in the list, select the gear icon next to it, choose the Make Public (Enable CDN) option, and then select Make Public once more. Making your Container public will allow you to share any files that reside within that container.
- Next, we'll upload a file to the container through the UI (user interface). For this exercise we'll select a .png file. Select Upload Files at the top, Choose File from the pop-up window, select your desired file, and then select Upload File.
- After you've uploaded your file, you'll see the file available in the Cloud Files list. Select the gear icon next to your file and then select View All Links; you'll then be presented with three different options to share your file.
Who is Akamai?
Akamai Technologies, Inc. is a publicly traded company (NASDAQ: AKAM) founded in 1998. Akamai has a pervasive, highly distributed cloud optimization platform with over 73,000 servers in 70 countries within nearly 1,000 networks.
What will I experience when Akamai is implemented as my new CDN provider?
Rackspace expects no customer impact during your transition to Akamai. Once we flip the switch to have a customer’s content served by Akamai, Akamai will begin supporting both new URLs and all other existing CDN provider URLs.
This means that CDN customers who currently have Limelight URLs coded into their websites will continue to serve content using those URLs when they are transitioned to Akamai, but they will be distributed over the Akamai network. At this time, we do not have any plans to discontinue the legacy URLs.
If a customer requests their URL (either in the Control Panel or via API) for an object, they will be presented with a new Akamai URL. This does not mean that old URLs are invalid. However, as Rackspace releases new features like CNAME and SSL, customers will need to reference their new Akamai URL instead of their legacy URL.
Will customers have to change their code to use Akamai?
No. As noted above, legacy URLs continue to work after the transition, so no code changes are required.
Will customers have to do anything different in the control panel?
No, as we add new features, we will educate our customers.
Should customers anticipate any downtime during this implementation?
No downtime is expected during the implementation of the Akamai platform.
Will the Cloud Files API be different?
No, all customer-facing API calls will remain the same.
Benefits of a CDN
- Higher capacity and scale - Strategically placed servers increase the network backbone capacity and the number of concurrent users handled. For instance, with a 10 Mbit/s network backbone and 100 Mbit/s of central server capacity, only 10 Mbit/s can be delivered. But when 10 servers are moved to 10 edge locations, each with its own 10 Mbit/s path, total capacity becomes 10 × 10 Mbit/s (100 Mbit/s).
- Lower delivery costs - Strategically placed edge servers decrease the load on interconnects, public peers, private peers and backbones, freeing up capacity and lowering delivery costs. A CDN offloads traffic on a backbone network by redirecting traffic to edge servers.
- Lower network latency and packet loss - End users experience less jitter and improved stream quality. CDN users can therefore deliver high definition content with high Quality of Service, low costs and low network load.
- Higher Availability - CDNs dynamically distribute assets to strategically placed core, fallback and edge servers. CDNs may have automatic server availability sensing with instant user redirection. CDNs can thus offer 100% availability, even with large power, network or hardware outages.
- Better usage analytics - CDNs can give more control of asset delivery and network load. They can optimize capacity per customer, provide views of real-time load and statistics, reveal which assets are popular, show active regions, and report exact viewing details to customers.
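The capacity arithmetic from the first bullet, spelled out (the figures are the hypothetical ones used above, not measurements):

```python
# Without a CDN, delivery is capped by the slowest link in the chain:
backbone_mbits = 10          # network backbone capacity
central_server_mbits = 100   # central server capacity
no_cdn_capacity = min(backbone_mbits, central_server_mbits)
print(no_cdn_capacity)       # 10 (Mbit/s deliverable)

# Move 10 servers to 10 edge locations, each with its own 10 Mbit/s path:
edge_servers, per_edge_mbits = 10, 10
cdn_capacity = edge_servers * per_edge_mbits
print(cdn_capacity)          # 100 (Mbit/s total)
```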
The Wikipedia entry for CDN states: “A content delivery network or content distribution network (CDN) is a system of computers networked together across the Internet that cooperate transparently to deliver content to end users, most often for the purpose of improving performance, scalability, and cost efficiency.”
To elaborate, we have to start with the fact that the Internet is a network of networks. To get content from a server on the other side of the planet, IP packets have to travel through a whole series of backbone servers and public network cables.
Content Delivery Networks (CDNs) augment the transport network by employing various techniques to optimize content delivery. It is fairly easy to see how CDNs help by digging a little deeper on how the Internet works. A trace route to an internet address tells us how many network hops a simple request takes. Following is one to Yahoo.com.
>tracert www.yahoo.com

Tracing route to www-real.wa1.b.yahoo.com [188.8.131.52]
over a maximum of 30 hops:

  1     1 ms     1 ms     1 ms  192.168.1.1
  2    11 ms     9 ms     9 ms  184.108.40.206
  3    11 ms     9 ms     9 ms  220.127.116.11
  4    11 ms     9 ms     9 ms  bb1-10g0-0.aus2tx.sbcglobal.net [18.104.22.168]
  5    16 ms    19 ms    16 ms  ex2-p14-1.eqdltx.sbcglobal.net [22.214.171.124]
  6    16 ms    16 ms    18 ms  asn10310-10-yahoo.eqdltx.sbcglobal.net [126.96.36.199]
  7    19 ms    17 ms   106 ms  ae2-p101.msr1.mud.yahoo.com [188.8.131.52]
  8    18 ms    18 ms    17 ms  te-8-1.bas-c1.mud.yahoo.com [184.108.40.206]
  9    18 ms    18 ms    17 ms  f1.www.vip.mud.yahoo.com [220.127.116.11]

Trace complete.
Additional hops mean more time to render data from a request on the user’s browser. The speed of delivery is also constrained by the slowest network in the chain. The solution is a CDN that places servers around the world and, depending on where the end user is located, serves them with the closest or most appropriate server. CDNs cut down on the hops back and forth to handle a request. This is shown in the figures below.
CDNs focus on improving performance of web page delivery. CDNs like Akamai's support progressive downloads, which optimizes delivery of digital assets such as web page images. CDN nodes and servers are deployed in multiple locations around the globe over multiple internet backbones. These nodes cooperate with each other to satisfy data requests by end users, transparently moving content to optimize the delivery process. The larger the size and scale of their Edge Network deployments, the better the CDN.
They generally push the Edge Network closer to end users. The Edge Network is grown outward from the origins by purchasing co-location facilities, bandwidth, and servers. When optimizing for performance, CDNs choose locations that are the fewest hops or the lowest latency away from the requesting client; when optimizing for cost, they choose the least expensive locations. CDNs use various techniques such as web caching, server-load balancing, and request routing to achieve these optimization goals.
Because closer is better, web caches store popular content closer to the user. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache.
Server-load balancing uses a web switch, content switch, or multilayer switch to share traffic among a number of servers or web caches. Here, the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantages of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks.
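As a toy illustration of the virtual-IP idea (not the actual algorithm any particular switch or CDN uses), a dispatcher can hand successive requests to the real servers behind one advertised address in round-robin order:

```python
import itertools

class VirtualIP:
    """One advertised address, many real servers behind it (round-robin)."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self):
        # A real switch would also health-check servers and skip dead ones.
        return next(self._cycle)

vip = VirtualIP(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([vip.route() for _ in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```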
Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms for Global Server Load Balancing (shown in diagram) are used to route the request. Choosing the closest service node is done using a variety of techniques including proactive probing and connection monitoring.
For more information, see What are the benefits of using a CDN?
What is a CNAME?
A CNAME record is a way to link your Cloud Files container to a branded URL that you display instead of a CDN URL. For example, you might want to CNAME your CDN URL
http://c186397.r00.cf1.rackcdn.com to a shorter or branded URL (images.mycompany.com).
How do I set up my CNAMEs?
You can set up your CNAME simply by managing your DNS. Within your DNS settings, request a new record. You will need to ensure that your CNAME record points to your container CDN URL and not your object CDN URL. You can add the record through whatever DNS tool you use, including the DNS manager in Rackspace Cloud Sites.
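In a BIND-style zone file, the record would look roughly like this. The hostname and CDN hostname are the illustrative values from above; substitute your own:

```
; point the branded hostname at the container's CDN hostname
images.mycompany.com.    IN    CNAME    c186397.r00.cf1.rackcdn.com.
```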
If you wish to edit or delete your CNAME, you can also do that by managing your DNS in your existing tool.
Will I be charged extra for using CNAMEs?
Do CNAMEs work with SSL (or https) delivery?
At this time, CNAMEs will not work with SSL delivery. While we are investigating this opportunity, there are significant challenges with this technology.
Where do I find my CDN URL?
You can find your CDN URL by clicking the gear menu for your container in the Cloud Files section of the Cloud Control Panel or by requesting your container information via our Cloud Files API. If you are using the Control Panel, the display will look something like the following.
The following article will discuss the use of a TTL (Time To Live) and how it works. Let's take a look below:
When you create a Container in Cloud Files and make it public, each file within that Container has a TTL (Time To Live). The attribute and its value can be viewed and modified in the CDN & Metadata pane of the Next Generation Cloud Files user interface. The TTL is the time interval after which the CDN will re-read the contents of the container. The interval is displayed as a number of days and can be changed with the up/down arrows; after modifying the number of days, press the "Apply TTL" button.
The new value will take effect after the current TTL cycle is completed. The TTL can take values between 15 minutes and 50 years. Higher numbers should be used for static content which doesn't change often. Smaller numbers should be used if the content changes more often. If you believe you require a longer TTL, see the following blog post on using the API to set TTL. Take a look at the steps below on how to modify your TTL within the Cloud Control Panel:
- First, log in to the Cloud Control Panel and select Cloud Files at the top.
- Next, select a container that you've just created and then select the gear icon next to that container. Next, select Make Public (Enable CDN) followed by Publish To CDN.
- Next, select the gear icon once more and then select Modify Time To Live (TTL). Enter your desired TTL and, when you're finished, select Save TTL.
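Under the hood, a day-based TTL maps onto a number of seconds; a minimal sketch of the conversion, clamped to the documented bounds (15 minutes to 50 years, the 50-year figure approximated ignoring leap days):

```python
MIN_TTL_SECONDS = 15 * 60                  # 15 minutes
MAX_TTL_SECONDS = 50 * 365 * 24 * 60 * 60  # ~50 years

def ttl_seconds(days: float) -> int:
    """Convert a TTL given in days to seconds, clamped to the allowed range."""
    return max(MIN_TTL_SECONDS, min(MAX_TTL_SECONDS, int(days * 86400)))

print(ttl_seconds(3))      # 259200 (three days)
print(ttl_seconds(0.001))  # 900 (clamped up to the 15-minute minimum)
```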
File Transfers and File Sharing
Yes, the Rackspace Cloud now supports transfer and storage of larger files! Here is a list of frequently asked questions about our large file support:
How does Rackspace support upload of large files to Cloud Files?
Though support for uploading content to Cloud Files through the Control Panel is limited to files below 5GB, we can accommodate the transfer of files larger than 5GB by allowing you to segment your files into multiple file segments.
How large should my file segments be?
This is a matter of preference, and Rackspace does not enforce any lower limit on segment size. Segments may not exceed 5 GB each, and Rackspace recommends segments of at least 100-200 MB.
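Segmenting is just fixed-size chunking of the source file. A sketch, demonstrated with tiny sizes so it runs anywhere; in practice the segment size would be between roughly 100 MB and the 5 GB ceiling:

```python
import io

def segments(stream, segment_size):
    """Yield fixed-size chunks from a file-like object; the last may be shorter."""
    while True:
        chunk = stream.read(segment_size)
        if not chunk:
            return
        yield chunk

# Demo: a 25-byte "file" split into 10-byte segments
parts = list(segments(io.BytesIO(b"x" * 25), segment_size=10))
print([len(p) for p in parts])  # [10, 10, 5]
```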
Can I serve my large files over the CDN?
The CDN does come with a few restrictions on file size. At this time, you may not serve files larger than 10GB from the CDN.
Is there a simpler way to use this code?
We have created a tool called the Swift Tool (st) to make this process easier; information about the tool and its download are available in our documentation. The tool will segment your large file for you, create a manifest file, and upload the segments accordingly. After the file has been uploaded, the tool will manage your file segments for you, deleting and updating segments as needed.
When should I use the API vs the ST Tool?
If you are interested in developing against the Rackspace Large File Support code to incorporate into your application, you will want to work directly with the Cloud Files API. You will be following the steps outlined below:
# First, upload the segments
curl -X PUT -H 'X-Auth-Token: <token>' \
    http://<storage_url>/container/myobject/1 --data-binary '1'
curl -X PUT -H 'X-Auth-Token: <token>' \
    http://<storage_url>/container/myobject/2 --data-binary '2'
curl -X PUT -H 'X-Auth-Token: <token>' \
    http://<storage_url>/container/myobject/3 --data-binary '3'

# Next, create the manifest file
curl -X PUT -H 'X-Auth-Token: <token>' \
    -H 'X-Object-Manifest: container/myobject/' \
    http://<storage_url>/container/myobject --data-binary ''

# And now we can download the segments as a single object
curl -H 'X-Auth-Token: <token>' \
    http://<storage_url>/container/myobject
If you are interested in uploading large files, but not interested in incorporating our code into an application, you may find it easier to use the ST (Swift Tool) for your uploads and management. The tool is available for download as described above. If you are using the tool, your process will look like this:
This will upload largefile.iso to test_container in 10 segments (in this case 10 MB per segment for a total of 100 MB).
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 \
    upload -S 10485760 test_container largefile.iso
largefile.iso segment01
largefile.iso segment06
largefile.iso segment09
largefile.iso segment04
largefile.iso segment05
largefile.iso segment03
largefile.iso segment07
largefile.iso segment08
largefile.iso segment02
largefile.iso segment00
largefile.iso
Now, to download the large file as a single object:
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 \
    download test_container largefile.iso
largefile.iso
You can list the segments, and download them separately if you want:
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 \
    list test_container_segments
largefile.iso/1300906481.0/10485760/00000000
largefile.iso/1300906481.0/10485760/00000001
largefile.iso/1300906481.0/10485760/00000002
largefile.iso/1300906481.0/10485760/00000003
largefile.iso/1300906481.0/10485760/00000004
largefile.iso/1300906481.0/10485760/00000005
largefile.iso/1300906481.0/10485760/00000006
largefile.iso/1300906481.0/10485760/00000007
largefile.iso/1300906481.0/10485760/00000008
largefile.iso/1300906481.0/10485760/00000009
You can list the main large object itself:
$ st -A https://auth.api.rackspacecloud.com/v1.0 -U glh -K 3a25c2dc74f24c3122407a26566093c8 list test_container
largefile.iso
When using the object name to list large object segments, the segment names will be returned in alphabetical order. Plan segment names accordingly. For example, "name10" would come before "name9" alphabetically, so to preserve proper order the latter segment should be named "name09".
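As a quick illustration (plain POSIX shell, nothing Cloud Files-specific, and the names are hypothetical), zero-padded names sort in the intended order while unpadded names do not:

```shell
# Unpadded names sort out of order: "name10" comes before "name9"
printf 'name9\nname10\n' | sort

# Zero-padding restores the intended order
printf 'name09\nname10\n' | sort

# Generating zero-padded segment names up front avoids the problem
for i in 0 1 2 9 10; do
  printf 'myobject-%02d\n' "$i"
done
```

Pad to enough digits for the largest segment number you expect; two digits covers up to 100 segments.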
What will the download experience be like?
Once files are segmented and uploaded with a manifest file, your large file is served as a single file, so the experience mimics the download of any other object.
Do I keep access to my file segments?
Yes, you may edit your file segments just like any other object within Cloud Files.
How do I ensure my files are linked correctly?
This is done by including your manifest file in your upload. You can change your file name by editing this manifest file as well. Rackspace recommends using a common prefix in your segment names so you can easily map your manifest file to the portions of your large file. For example, you could name your segments like so:
Myfavoritemovie-01
Myfavoritemovie-02
Myfavoritemovie-03
..etc..
In this case, you would simply point your manifest file at the Myfavoritemovie- prefix.
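Following the pattern of the earlier curl examples, the manifest upload might look like this (the container name "movies" and the placeholders are illustrative, not prescribed):

```shell
# Illustrative sketch: point the manifest at the "Myfavoritemovie-" prefix
# (<token> and <storage_url> are placeholders, as in the earlier examples)
curl -X PUT -H 'X-Auth-Token: <token>' \
  -H 'X-Object-Manifest: movies/Myfavoritemovie-' \
  http://<storage_url>/movies/Myfavoritemovie --data-binary ''
```

Downloading the manifest object then streams all segments whose names begin with that prefix, in alphabetical order.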
Can I use this feature from the Control Panel?
At this time, Rackspace has not implemented this functionality in the Rackspace Cloud Control Panel.
There are many tools available for use with Cloud Files, including the Rackspace Cloud Control Panel and third-party tools like Cyberduck and FireUploader. For more information, see What can I do with Cloud Files tools?.
When you upload a file in the Cloud Control Panel, an Allow-Origin header is set on the container to support cross-origin resource sharing (CORS). Browsers prevent AJAX requests between different domains by default. Because the Cloud Files API and the Control Panel reside on different domains, CORS must be enabled to support uploads directly to a container. When the upload succeeds, the CORS headers are removed.
Allowing the browser to upload directly to the Cloud Files API achieves maximum upload performance.
Read more about CORS at http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.
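For API users who want similar behavior outside the Control Panel, the allowed origin can be set as container metadata. A hedged sketch (placeholders as in the earlier examples, and the origin value is hypothetical):

```shell
# Sketch: allow cross-origin browser requests from one hypothetical site
curl -X POST -H 'X-Auth-Token: <token>' \
  -H 'X-Container-Meta-Access-Control-Allow-Origin: https://example.com' \
  http://<storage_url>/container
```

Removing the metadata again disables cross-origin access, which is what the Control Panel does once its upload succeeds.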
There are no permissions or access controls on containers or objects beyond the separation between accounts. Users must authenticate with a valid user name and API Access Key; once authenticated, they can create or delete containers and objects only within their own account.
At this time, there is no way to publicly access objects stored in Cloud Files unless their container is published to the CDN. Each request to Cloud Files must include a valid "storage token" in an HTTP header transmitted over an HTTPS connection.
The naming requirements for Cloud Files objects and containers (such as illegal characters and name length limits) include:
- Container names may not exceed 256 bytes and cannot contain a slash (/) character.
- Object names may not exceed 1024 bytes, but they have no character restrictions.
- Object and container names must be UTF-8 encoded and then URL-encoded when used in API requests.
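A rough client-side check of these limits might look like the following (a sketch only; the service itself is the authority on what it accepts):

```shell
# Validate a container name: at most 256 bytes, no "/" characters
name="my_container"
bytes=$(printf '%s' "$name" | wc -c | tr -d ' ')
if [ "$bytes" -le 256 ] && [ "${name#*/}" = "$name" ]; then
  echo "valid container name"
else
  echo "invalid container name"
fi
```

Counting bytes with `wc -c` (rather than characters) matters because the limits are byte limits and names may contain multi-byte UTF-8 characters.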
There are a number of reasons to choose the Cloud Files service over similar services available in the market.
- Simplicity of CDN functionality.
- Utility pricing - paying only for what you use, after you use it.
- Multiple APIs for developers to choose from.
- Fast and reliable performance.
- World-class support.
Cloud Files is simple to use for developers and non-developers alike; users can get started in as little as five minutes, and no coding knowledge is required. Within minutes, users can sign up for Cloud Files, create a Container, upload a file, and publish that Container's content through the CDN (refer to the "Cloud Files with CDN QuickStart" guide). The Cloud Files GUI is easy to navigate and use, and Rackspace's browser-based control panel lets users upload files and distribute them over a CDN without writing any code.
All content can be backed by a CDN without complex negotiations or the details of updating content to optimize delivery. With a traditional CDN, users may have to make a series of choices, such as how many servers to use, before obtaining the service, and must then verify that their usage bills are accurate. Cloud Files handles all of these issues. CDNs have historically been the prerogative of companies with deep pockets; Cloud Files has changed that.
Cloud Files is a user- and developer-friendly service. The security mechanisms, described in more detail in the Security section, are simple to use. Third-party tools that further simplify use of the storage service are available, and developers can use language-specific application programming interfaces (APIs) to develop utilities or applications. The APIs are easy to use and are documented with examples so you can get started quickly.
Pricing for Cloud Files with CDN comprises the amount of total data you have stored (per GB) and the amount of bandwidth used to serve the data (outbound bandwidth only, charged per GB to any edge location on the CDN around the globe). There are no 'per request' fees for CDN. The current pricing for Cloud Files can always be found on our website.
Cloud Files is a dynamically expanding, very flexible storage system built on the pay-for-use principle. Customers can use as much or as little storage as necessary, paying only for the storage space used. There is no limit on total space used, and individual files can range in size from a few bytes to extremely large. The GUI control panel lets users check the storage space and bandwidth they have used. There are no upfront fees or contracts; end users pay only for the storage space and outgoing bandwidth they use.
System administrators can use this for simple manual updates as well. Users do not have to know how to code to use Cloud Files and CDN, and no API is required to share files. Non-developers have a simple web interface that can be used to quickly and easily upload data and enable CDN access. There are multiple third-party tools (refer to the Tools and Applications section) that make it even more flexible for users of specific environments such as Mac OS.
Developers can use the ReST web service and language-specific APIs in PHP, Python, Java, Ruby, and C#/.NET. The APIs provide full support for managing content in Cloud Files and publishing content over the CDN, and they allow developers to work in the language they feel most comfortable with.
Rackspace has built networks with superior performance for years. The Akamai CDN capability improves the performance further for distribution of digital content. In short, Akamai can bring data closer to end users and serve popular content faster.
World-class support
With Cloud Files world-class free technical support is only a click away. Live support, with real people is available 24/7. Fanatical Support is built into the price. Users can get peace of mind knowing that technical support is just a phone call or online chat away – at any time of the day.
The Cloud Files storage system is designed to be highly available and fault tolerant. Cloud Files achieves client data redundancy by replicating three full copies on different storage nodes. Storage nodes are grouped in logical Zones within Rackspace datacenters. Zones are connected to redundant Internet backbone providers and reside on redundant power supplies and generators. The system has been engineered in such a way as to continue to be fully functional even in the event of a major service disruption within a datacenter. Content on the CDN provides an additional layer of data redundancy.
Cloud Files uses a number of different measures to ensure that your data is kept safe. First and foremost, all traffic between clients and the Cloud Files system uses SSL to establish a secure, encrypted communication channel. This ensures that any data (usernames, passwords, and content) cannot be intercepted and read by a third party. Users authenticate with a valid username and API Access Key and are granted a session authentication token. These authentication tokens are used to validate all operations against Cloud Files. Users cannot terminate a valid session themselves, but session tokens automatically expire after 24 hours, forcing clients to resend their credentials.
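The token exchange is the same v1.0 authentication call the st examples above use; a sketch of it with curl (credentials are placeholders):

```shell
# Sketch of the v1.0 token exchange; the response headers include
# X-Auth-Token and X-Storage-Url for use in subsequent requests
curl -i \
  -H 'X-Auth-User: <username>' \
  -H 'X-Auth-Key: <api_access_key>' \
  https://auth.api.rackspacecloud.com/v1.0
```

The returned X-Auth-Token is what the storage examples elsewhere in this FAQ pass in their X-Auth-Token header.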
The API Access Key is only available from within the Rackspace Cloud’s control panel. Users must enter their valid username and password to gain access to view the API Access Key or to generate a new key.
It is important to note that Cloud Files does not apply any transformations to data before storing it. This means that Cloud Files *will not* store data in encrypted form unless the client encrypts it prior to transmission. This allows users to select the type and level of encryption best suited for their application.
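For example, a client could encrypt locally with OpenSSL before uploading. This is a sketch with a throwaway passphrase; real deployments need proper key management, and OpenSSL is just one of many suitable tools:

```shell
# Encrypt before upload; Cloud Files stores exactly the bytes it receives
printf 'secret data\n' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:example \
  -in plain.txt -out cipher.bin
# Decrypt after download to verify the round trip
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example \
  -in cipher.bin -out roundtrip.txt
```

You would then upload cipher.bin rather than plain.txt, and decrypt after download.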
When deleting storage Objects from the Cloud Files system, all copies of the data are permanently removed within a matter of minutes. Furthermore, the physical blocks making up the customer's data are zeroed out before that space is reused for other customer data. In other words, after a delete request, the data is unrecoverable.
Cloud Files™ is an affordable, redundant, scalable, and dynamic storage service offering. The core storage system is designed to provide a safe, secure, automatically re-sizing and network accessible way to store data.
You can store an unlimited quantity of files ranging in size from a few bytes to extremely large. Users can store as much as they want and pay only for storage space they actually use.
Cloud Files makes it easy to serve content through a CDN. This allows users to take advantage of a proven world-class content distribution network that is affordable and easy to use.
Cloud Files also allows users to store/retrieve files and CDN-enable content with a simple Web Service (ReST: Representational State Transfer) interface. There are also language-specific APIs that utilize the ReST API but make it much easier for developers to integrate into their applications.
Ideal uses for Cloud Files
There are many uses for the Cloud Files service. Cloud Files is an excellent storage solution for a variety of scenarios and is well suited to applications such as:
- Backing up or archiving data
- Serving images/videos (streaming data to the user’s browser)
- Serving content with a world-class CDN (Akamai)
- Storing secondary/tertiary, static web-accessible data
- Developing new applications with data storage integration
- Storing data when predicting storage capacity is difficult
- Storing data for applications affordably
Cloud Files™ is not a “file system” in the traditional sense. You will not be able to map or mount virtual disk drives like you can with other forms of storage such as a SAN or NAS. Since Cloud Files is a different way of thinking when it comes to storage, please take a few moments to review the concepts.
Using Cloud Files
There are two ways to use Cloud Files:
- A GUI such as the Rackspace Cloud Control Panel, Cyberduck, or FireUploader
- Programming interfaces via ReST, Python, PHP, Ruby, Java, or C#/.NET.
Control Panel Interface
The Control Panel provides a browser-based, intuitive, easy-to-use graphical user interface that allows you to manage your Containers and Objects without any programming knowledge. From there, users can CDN-enable a Container by marking it "public". Any Objects stored in a public, CDN-enabled Container are directly accessible over Akamai's CDN.
There are several programming interfaces for Cloud Files that will allow you to integrate the storage solution into your applications or provide automated ways of accessing the system. Currently, we support a ReST web-services API and several programming-language APIs (Python, PHP, Java, Ruby, and C#/.NET).
Please refer to the Developer Guide for more details about using these interfaces. You can access the Developer Guide on our API documentation site.
You may stream video content through Cloud Files. Streaming lets you deliver video content quickly and easily, without making your users download the content first. They can begin viewing your content immediately and can jump around the video stream without needing to buffer.
Because Rackspace uses HTTP delivery for streaming content, you can use the Akamai CDN network to deliver your content. This means the performance should be identical to CDN speeds customers are used to. Before you begin, you must CDN-enable the container that holds your streaming content. In the New Cloud Control Panel, click the gear icon of the container and select "Make Public (Enable CDN)".
Once your container is CDN-enabled, you will need its Streaming URLs. In the New Cloud Control Panel, click the gear icon for the container and select "View All Links...". Below is an example of the CDN links that display:
iOS Streaming: http://09ac235af93af07922d6-cc339a649709710bbecd3db1e126ec2b.iosr.cf1.rackcdn.com
There are several different ways to stream your content with Cloud Files. Click a link below to find out more for each approach.
- JW Player
- OSMF (Open Source Media Framework), which allows you to build your own player
- iOS Device Streaming (link goes to API Developer Guide)
Why have we chosen to support specific players?
Many Rackspace customers are not Flash developers but still want to use a streaming offering. A few players dominate the market, and we plan to support each of them. Custom plugins are required for streaming delivery to work properly over the Akamai network. As Akamai adds support for more players, our customers will have access to them.
Why are we not using RTMP?
RTMP is probably the most popular delivery format today, but the market is quickly moving toward HTTP delivery for streaming content. Here are just a few reasons why:
- Accessibility: Many firewalls block the RTMP and RTSP streaming protocols because corporations don't want users watching video at work. HTTP appears to be normal web traffic, meaning that videos served over HTTP are usually left accessible.
- Startup times: Akamai sees a significant reduction in average stream startup times for traffic served via HTTP.
- Throughput (image quality): Because Akamai's HTTP network is larger than many other networks, its servers are closer to the end user, yielding better data throughput. That means customers experience higher bit rates without interruption or buffering, improving the end user's overall experience.
Is this available internationally?
Yes, this is available to both US and UK Cloud customers.