Building a Rackspace Cloud Server from Cloud Files Manually

This article will cover how to manually take image files in Cloud Files and build a new Cloud Server from them. This will ONLY work for Linux. I don’t have a clue how to make this work on Windows :-p

There can be several reasons why you would want to do this. Maybe you want to manually move from a US datacenter to a UK datacenter. Maybe you have an account and your boss/co-worker/friend has an account and you want to share images. Whatever the reason, these are the steps to make it work.

Credit where credit is due: The idea for this was originally published at (Thanks Jordan and Dewey). My article will just cover doing it using curl instead of wget, and some of the potential pitfalls not covered in their article.

So here goes! First things first, you will need to start up a new stock server in the receiving account that is the EXACT same image as the server that the image was built from. For example, if the image in Cloud Files was originally taken from a server running CentOS 5.5, you will build a stock CentOS 5.5 server in the receiving account. Log in to the new server you built.

Make a backup of the new server’s /etc directory. You will need this later:
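A minimal way to do that, assuming you are root and have enough free space:

```shell
# Copy /etc wholesale, preserving permissions, ownership and symlinks
cp -a /etc /etc.bak
```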

If necessary, install curl (some Linux distros come with it, others don’t).
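For example, pick the line that matches your distro:

```shell
yum -y install curl        # RHEL / CentOS
apt-get -y install curl    # Debian / Ubuntu
```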

Authenticate to the Cloud Files Account where the image is stored:
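With the legacy v1.0 auth endpoint this looked something like the following; the username and API key are placeholders, and UK accounts used a different auth hostname:

```shell
# -I fetches headers only; everything we need comes back as response headers
curl -s -I \
  -H "X-Auth-User: yourusername" \
  -H "X-Auth-Key: yourapikey" \
  https://auth.api.rackspacecloud.com/v1.0
```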

After you run that, it will spit out a list of names and values, like this:
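The exact values differ per account; a made-up but representative response:

```
HTTP/1.1 204 No Content
X-Storage-Url: https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_aaaa-bbbb
X-CDN-Management-Url: https://cdn1.clouddrive.com/v1/MossoCloudFS_aaaa-bbbb
X-Storage-Token: 0123456789abcdef0123456789abcdef
X-Auth-Token: 0123456789abcdef0123456789abcdef
```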

We care about X-Storage-URL (That is where the image files are stored) and X-Storage-Token (This is your authentication token that lets you actually download stuff). Now let’s see a list of all of the image files in the account. Replace your Storage Token and your URL below. Don’t forget the /cloudservers at the end of the URL.
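Substituting your own token and URL, it is something like this (X-Auth-Token carries the same value as X-Storage-Token in the auth response, and is the header Swift expects):

```shell
# List the objects in the cloudservers container, where images live
curl -H "X-Auth-Token: 0123456789abcdef0123456789abcdef" \
  https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_aaaa-bbbb/cloudservers
```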

As you can see above, there are several files associated with each image. All of the data is stored in the .tar.gz files. The .yml file is a configuration file that we don’t care about for this article. You will see that some of the images have more than one .tar.gz file. This happens when the image is larger than 5GB and it gets chunked into multiple objects in Cloud Files. We will assume that we are working with a chunked image because that makes it just a little bit harder.

Let’s grab the delweb1ssl image. Grab the first chunk like this:
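Something like this; the `.0`, `.1`, … chunk naming matches what I saw on my account, but check your own listing:

```shell
# -o saves the object to a local file of the same name
curl -o delweb1ssl.tar.gz.0 \
  -H "X-Auth-Token: 0123456789abcdef0123456789abcdef" \
  https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_aaaa-bbbb/cloudservers/delweb1ssl.tar.gz.0
```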

This can be up to 5GB, so it may take a little while. Next up, download the 2nd chunk (and then third, fourth, etc)
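Same command, next chunk:

```shell
curl -o delweb1ssl.tar.gz.1 \
  -H "X-Auth-Token: 0123456789abcdef0123456789abcdef" \
  https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_aaaa-bbbb/cloudservers/delweb1ssl.tar.gz.1
```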

Note that we are just changing what file we are getting and what we are calling it locally. Do this for as many .tar.gz files as there are in the account.

Now that we have all of the chunks downloaded, cat them together to make one big image:
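List the chunks explicitly rather than globbing: once there are more than ten, `.10` sorts before `.2` in a shell glob and the image gets silently corrupted.

```shell
cat delweb1ssl.tar.gz.0 delweb1ssl.tar.gz.1 > delweb1ssl.tar.gz
```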

If the new server isn’t large enough to hold the stock image, the downloaded images from Cloud Files, AND the concatenated image, you may run out of disk space. For this reason, you might want to just start with a huge 8GB (320GB Hard Drive) or 16GB (640GB Hard Drive) server and downsize after you are done with this.

Now that we have the one big image, we need to extract it out onto the filesystem. More than likely, you will need the newest version of tar to have the --hard-dereference option available. Your choices are to either download tar and install it from source, or grab a fully compiled version of tar here. (Thanks again Jordan). We’ll use the compiled version because it is just easier.
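Roughly, using the compiled binary from wherever you unpacked it. The exact flags are an assumption; the important parts are extracting relative to / while preserving permissions and numeric ownership:

```shell
# Extract the image over the root filesystem
./tar -xzpf delweb1ssl.tar.gz -C / --numeric-owner
```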

This can take awhile.

Remember when we backed up /etc above? (You did that, right?) Now we will want to bring that back in. However, if we just completely overwrite the /etc directory that we just extracted, we will lose things like our users, groups, iptables, etc. because they will be overwritten with the default values. To make sure we always have the /etc directory from the tar available, save that as another backup directory:
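Something along these lines; the second command is the "replace /etc with stock data" step, and calling /bin/cp directly sidesteps the interactive `cp -i` alias that RHEL-type distros give root:

```shell
# Keep the tarball's /etc (currently live on disk) as its own copy,
# then put the stock /etc from the backup back in place
cp -a /etc /etc.tar
/bin/cp -af /etc.bak/* /etc/
```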

Ok, now we have 3 etc directories:
/etc = The stock /etc, restored from the backup you made at the start
/etc.bak = The stock image /etc directory with all defaults
/etc.tar = A backup copy of the etc directory from the tarball

From here you can manually bring over your network config files and anything else necessary from the default image, but I prefer to just replace the entire /etc directory with stock data and bring over what I need from the /etc.tar directory later.

Depending on what distro you are running, you will also want to grab your iptables rules from /etc.tar. On anything RHEL-based it would be:
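Assuming the stock RHEL layout for saved rules:

```shell
cp -a /etc.tar/sysconfig/iptables /etc/sysconfig/iptables
service iptables restart
```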

That’s pretty much it. Cross your fingers, reboot and see if it comes back up!

  1. This article is exactly what I was searching for. If I understand this, I should be able to pull down a server image from the cloud files and run that image on a local server. When ready, I could make an image of my local server and restore it on a cloud server.

    Have you tried doing that?

    • Hi Carl!
      The major flaw with your plan would be that Rackspace Cloud Servers (currently) run as Para-Virtualization instead of Hardware Virtualization. This means that the kernel and bootloader are provided by the hypervisor and not by the server itself. (You’ll notice the lack of a grub.conf file on a stock server)

      In theory, I think this could work if you ran your own kernel and had support change you to a server running PV-Grub instead of the kernel pushed down from the hypervisor. You could then download it, make changes, upload it, and try to build from it. Let me know if it works!

  2. This worked so well! Great Post/Tutorial

  3. Great article. Had to tweak it slightly for my UK account (as the API URI is different).

    When I copy my /etc.bak back over like this:

    cp -a /etc.bak/* /etc/

    …I have to confirm each file individually and there are literally hundreds. I’ve tried to use -f to force, with -a (-fa) but this doesn’t work.

    Any ideas?


  4. Reynaldo Zabala

    Do you know if any of these steps might be tweaked to allow for the following:

    1) Create a new VMWare image locally and upload it to Rackspace so it can be used as a base for creating new cloud servers?

  5. Great article.

    Anyone else doing this for UK cloud should use this API URL:

    instead of:

  6. Or script it, where $f will be each filename (the curl -o option saves to a file):

    for f in $(curl .. | grep _20131213_); do
      curl -o "$f" --url "../container/${f}"
    done
