Monthly Archives: March 2011

Building a Rackspace Cloud Server from Cloud Files Manually

This article will cover how to manually take image files in Cloud Files and build them into a new Cloud Server. This will ONLY work for Linux. I don’t have a clue how to make this work on Windows :-p

There can be several reasons why you would want to do this. Maybe you want to manually move from a US datacenter to a UK datacenter. Maybe you have an account and your boss/co-worker/friend has an account and you want to share images. Whatever the reason, these are the steps to make it work.

Credit where credit is due: The idea for this was originally published at (Thanks Jordan and Dewey). My article will just cover doing it using curl instead of wget, and some of the potential pitfalls not covered in their article.

So here goes! First things first, you will need to start up a new stock server in the receiving account that is the EXACT same image as the server that the image was built from. For example, if the image in Cloud Files was originally taken from a server running CentOS 5.5, you will build a stock image that is running CentOS 5.5 in the receiving account. Login to the new server you built.

Make a backup of the new server’s /etc directory. You will need this later:
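The backup itself is a one-liner. Demonstrated below on a scratch directory so the sketch is safe to run anywhere; on the real server, as root, it is just the commented command:

```shell
# On the real server, as root, this is simply:
#
#   cp -a /etc /etc.bak
#
# -a preserves ownership, permissions, and symlinks. Demonstrated here on
# a scratch directory:
mkdir -p scratch/etc
echo "keep me" > scratch/etc/demo.conf
cp -a scratch/etc scratch/etc.bak
```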

If necessary, install curl (some Linux distros come with it, others don’t).

Authenticate to the Cloud Files Account where the image is stored:
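The authentication command itself didn’t survive here; with curl it looked something like the commented command below (the auth endpoint is from memory, so treat it as an assumption). The parsing shown runs against a fake header dump modeled on the output in this article:

```shell
# The real call, credentials redacted (endpoint URL is an assumption):
#
#   curl -s -D headers.txt -o /dev/null \
#     -H "X-Auth-User: YourUsername" \
#     -H "X-Auth-Key: YourAPIKey" \
#     https://auth.api.rackspacecloud.com/v1.0
#
# A fake header dump standing in for what curl writes to headers.txt:
cat > headers.txt <<'EOF'
HTTP/1.1 204 No Content
X-Storage-Url: https://storage.clouddrive.com/v1/MossoCloudFS_abc123
X-Storage-Token: 63ea9670-c80f-402d-9657-1234567890
X-Auth-Token: 63ea9670-c80f-402d-9657-c59bdb123456
EOF

# Pull out the two values we care about (strip the trailing CR curl leaves):
STORAGE_URL=$(awk '/^X-Storage-Url:/ {print $2}' headers.txt | tr -d '\r')
STORAGE_TOKEN=$(awk '/^X-Storage-Token:/ {print $2}' headers.txt | tr -d '\r')
echo "URL:   $STORAGE_URL"
echo "Token: $STORAGE_TOKEN"
```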

After you run that, it will spit out a list of names and values, like this:

We care about X-Storage-URL (That is where the image files are stored) and X-Storage-Token (This is your authentication token that lets you actually download stuff). Now let’s see a list of all of the image files in the account. Replace your Storage Token and your URL below. Don’t forget the /cloudservers at the end of the URL.
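The listing request might look like this sketch (the URL and token are made-up example values; substitute the ones your auth call returned):

```shell
# Example values -- substitute the ones your auth call returned:
STORAGE_URL="https://storage.clouddrive.com/v1/MossoCloudFS_abc123"
STORAGE_TOKEN="63ea9670-c80f-402d-9657-1234567890"

# The trailing /cloudservers names the container that holds the images:
LIST_CMD="curl -H \"X-Storage-Token: $STORAGE_TOKEN\" $STORAGE_URL/cloudservers"
echo "$LIST_CMD" | tee list-cmd.txt
```

Copy and run the printed command to get the object listing.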

As you can see above, there are several files associated with each image. All of the data is stored in the .tar.gz files. The .yml file is a configuration file that we don’t care about for this article. You will see that some of the images have more than one .tar.gz file. This happens when the image is larger than 5GB and it gets chunked into multiple objects in Cloud Files. We will assume that we are working with a chunked image because that makes it just a little bit harder.

Let’s grab the delweb1ssl image. Grab the first chunk like this:

This can be up to 5GB, so it may take a little while. Next up, download the 2nd chunk (and then third, fourth, etc)

Note that we are just changing what file we are getting and what we are calling it locally. Do this for as many .tar.gz files as there are in the account.
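A loop saves retyping the command for each chunk. This sketch just writes the curl commands to a file so you can review them before running anything (the chunk names, count, and URL/token values are examples; match whatever the container listing showed):

```shell
STORAGE_URL="https://storage.clouddrive.com/v1/MossoCloudFS_abc123"   # example value
STORAGE_TOKEN="63ea9670-c80f-402d-9657-1234567890"                    # example value

# Three chunks assumed here; adjust "0 1 2" to match the listing:
for i in 0 1 2; do
  printf 'curl -s -o delweb1ssl.tar.gz.%s -H "X-Storage-Token: %s" %s/cloudservers/delweb1ssl.tar.gz.%s\n' \
    "$i" "$STORAGE_TOKEN" "$STORAGE_URL" "$i"
done > download-chunks.sh
cat download-chunks.sh
```

Review the file, then run it with `sh download-chunks.sh` to start the downloads.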

Now that we have all of the images downloaded, cat them together to make one big image
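The concatenation is a single cat, and order matters. The chunk files below are tiny stand-ins so the sketch is self-contained; on the real server these are the pieces you just downloaded:

```shell
# Stand-ins for the chunks you just downloaded:
echo "chunk zero" > delweb1ssl.tar.gz.0
echo "chunk one"  > delweb1ssl.tar.gz.1

# Stitch them back together, lowest-numbered chunk first:
cat delweb1ssl.tar.gz.0 delweb1ssl.tar.gz.1 > delweb1ssl.tar.gz
```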

If the new server isn’t large enough to hold the stock image, the downloaded images from Cloud Files, AND the concatenated image, you may run out of disk space. For this reason, you might want to just start with a huge 8GB (320GB Hard Drive) or 16GB (640GB Hard Drive) server and downsize after you are done with this.

Now that we have the one big image, we need to extract it out onto the filesystem. More than likely, you will need the newest version of tar to have the --hard-dereference option available. Your choices are to either download tar and install it from source, or grab a fully compiled version of tar here. (Thanks again Jordan). We’ll use the compiled version because it is just easier.
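The extract step, sketched against a scratch target root so it can be run safely anywhere. On the real server the archive is the one you just assembled, the target is /, and you would invoke the freshly downloaded tar binary (e.g. ./tar); the exact flags the original article used didn’t survive, so this is an assumption:

```shell
# Build a tiny archive standing in for the image, so this is self-contained:
mkdir -p stage/etc
echo "hello" > stage/etc/demo.conf
tar -czf delweb1ssl.tar.gz -C stage .

# Extract it over a target root; -p preserves permissions. On the real
# server this would be something like:  ./tar -xzpf delweb1ssl.tar.gz -C /
mkdir -p newroot
tar -xzpf delweb1ssl.tar.gz -C newroot
```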

This can take a while.

Remember when we backed up /etc above? (You did that, right?) Now we will want to bring that back in. However, if we just completely overwrite the /etc directory that we just extracted, we will lose things like our users, groups, and iptables rules, because they will be overwritten with the default values. To make sure we always have the /etc directory from the tar available, save that as another backup directory:

Ok, now we have 3 etc directories:
/etc = The version from the backup image (the one the tarball just extracted)
/etc.bak = The stock image /etc directory with all defaults
/etc.tar = A backup copy of the etc directory from the tarball

From here you can manually bring over your network config files and anything else necessary from the default image, but I prefer to just replace the entire /etc directory with stock data and bring over what I need from the /etc.tar directory later.
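The shuffle described above can be sketched like this. It is sandboxed under a demo directory so it is safe to run; on the real server the ROOT prefix goes away and you operate on /etc, /etc.bak, and /etc.tar directly, as root:

```shell
ROOT=demo   # stand-in for the real root; drop this on the actual server

# Simulate the state after the extract: /etc is from the tarball and
# /etc.bak is the stock copy we made earlier.
mkdir -p $ROOT/etc $ROOT/etc.bak
echo "from tarball" > $ROOT/etc/marker
echo "stock"        > $ROOT/etc.bak/marker

cp -a $ROOT/etc $ROOT/etc.tar    # keep the tarball's /etc around for later
rm -rf $ROOT/etc
cp -a $ROOT/etc.bak $ROOT/etc    # put the stock /etc back in place
```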

Depending on what distro you are running, you will also want to grab your iptables rules from /etc.tar. On anything RHEL-based it would be:
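On a RHEL-based distro the saved rules live in /etc/sysconfig/iptables, so the restore is a straight copy (sketched in a sandbox below; on the real server drop the ROOT prefix, run as root, and finish with `service iptables restart`):

```shell
ROOT=demo48   # sandbox prefix; drop this on the real server
mkdir -p $ROOT/etc.tar/sysconfig $ROOT/etc/sysconfig
echo "-A INPUT -p tcp --dport 22 -j ACCEPT" > $ROOT/etc.tar/sysconfig/iptables

# Bring the old rules back from the tarball copy:
cp $ROOT/etc.tar/sysconfig/iptables $ROOT/etc/sysconfig/iptables
```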

That’s pretty much it. Cross your fingers, reboot and see if it comes back up!

Rackspace Cloud Load Balancer as a Service Cheater Script

Rackspace Cloud Load Balancer as a Service is awesome. It is an amazing product that makes load balancing sites really easy and abstracts away having to setup and configure one on your own. As of right now, it is only available via the API while a full blown GUI is being developed for the control panel. The API docs are very good and can be found at

UPDATE: Forget this exists. Caleb Groom has an awesome project on github that uses python and will let you manage your Load Balancers.

Creating a Load Balancer requires you to authenticate with your Username and API key, and then send an XML request that has all of your settings in it. I made a very, very simple bash script that will write the XML for you. I’m not a programmer. This can be improved immensely, and there is no error catching or validation of what you type in.

Anyway, here is the script:
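The original script didn’t survive here, so below is a minimal sketch in the same spirit: ask a few questions, write the XML body, and print the curl command to run. The XML namespace and the endpoint URL are from memory of the v1.0 API (assumptions; double-check them against the API docs), and there is still zero validation, just like the original:

```shell
cat > mklb.sh <<'SCRIPT'
#!/bin/bash
# Cheater-script sketch: prompts, then emits lb.xml and the curl to run.
read -p "Load Balancer name: " NAME
read -p "Protocol (HTTP or HTTPS): " PROTO
read -p "Port: " PORT
read -p "Node IP: " NODE
read -p "Auth token: " TOKEN
read -p "Account number: " ACCT

cat > lb.xml <<EOF
<loadBalancer xmlns="http://docs.openstack.org/loadbalancers/api/v1.0"
    name="$NAME" port="$PORT" protocol="$PROTO">
    <virtualIps>
        <virtualIp type="PUBLIC"/>
    </virtualIps>
    <nodes>
        <node address="$NODE" port="$PORT" condition="ENABLED"/>
    </nodes>
</loadBalancer>
EOF

echo "Wrote lb.xml. Now run:"
echo "curl -X POST -H \"X-Auth-Token: $TOKEN\" -H \"Content-Type: application/xml\" \\"
echo "     -d @lb.xml https://ord.loadbalancers.api.rackspacecloud.com/v1.0/$ACCT/loadbalancers"
SCRIPT
```

Invoke it with `bash mklb.sh`, answer the prompts, and it leaves lb.xml on disk and prints the curl command to copy and paste.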

So before you run that you will need to authenticate and get your Auth token. To do that, run the following curl:

curl -D - -H "X-Auth-User: YourUsername" -H "X-Auth-Key: YourAPIKey"

After you run that, it will spit out a list of names and values, like this:

HTTP/1.1 204 No Content
Date: Wed, 30 Mar 2011 04:15:28 GMT
Server: Apache/2.2.3 (Mosso Engineering)
X-Storage-Token: 63ea9670-c80f-402d-9657-1234567890
X-Auth-Token: 63ea9670-c80f-402d-9657-c59bdb123456
Content-Length: 0
Connection: close
Content-Type: application/octet-stream

You will need the Auth Token. In the made up example above that would be 63ea9670-c80f-402d-9657-c59bdb123456. You will also need your account number. In the example above that is listed under X-Server-Management. In the fake example that is 123456.

Once you have those, invoke the bash script above with something like


It will ask you some questions, most of them give you a list of available options. Once it is done asking questions it will spit out the curl command for you to run. Here is an example:

That’s it, copy and paste the curl command that it spits out and that will create the Load Balancer for you. Like I said, this is a VERY simple script that I primarily use just for setting up test load balancers. If you improve on it and make it totally awesome drop me a link!

Using Rackspace Cloud Load Balancers as a Service to host multiple SSL Sites

A few days ago I wrote an article about using SNI to host multiple SSL sites on a single IP. This method is excellent, and it works with Rackspace Cloud Load Balancers as a Service very well. The major drawback is that if your users are using a browser that does not support SNI, they won’t get the desired results. I want to first say that I am a huge fan of SNI, and as available IPv4 addresses become fewer and fewer, I think that SNI will be the preferred method of doing this. That being said, a lot of websites simply can not afford to write off anyone with an unsupported browser. For that reason, here are instructions on using Rackspace Cloud Load Balancer as a Service to host multiple SSL sites from the same pool of Web Servers.

High Level Overview

A high level overview is that you will have 2 Load Balancers for each site. The 2 Load Balancers will share a single public IP address; one will listen on port 80 for standard HTTP traffic and the other on port 443 for HTTPS traffic. In my proof of concept below I will have two sites, and therefore four Load Balancers.

Create the Load Balancers

Let’s create the Load Balancers. Since Rackspace Cloud LoadBalancers as a Service is only available via the API at the time of this writing, that is what we will use.

First we authenticate (obviously change out your username and API key for the made-up values):

This will return a few headers, the one we care about is X-Auth-Token.

Now using that token we will build a Load Balancer in the datacenter of our choice. For my example, I will build into the DFW datacenter. If you want to build into ORD, just change out ‘dfw’ for ‘ord’ below.

First up, create an xml file for test1-http. Let’s call it createtest1-http.xml

Those values are pretty self explanatory, but you are giving the Load Balancer a name, telling it to listen on port 80 for HTTP traffic, requesting a public IP, and assigning it two nodes that it should send traffic to on port 80 as well.
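The contents of createtest1-http.xml didn’t survive here, so this is a reconstruction from the description above. The namespace and exact element names are from memory of the v1.0 API, and the node IPs are placeholders for your web servers’ private addresses:

```shell
cat > createtest1-http.xml <<'EOF'
<loadBalancer xmlns="http://docs.openstack.org/loadbalancers/api/v1.0"
    name="test1-http" port="80" protocol="HTTP">
    <virtualIps>
        <virtualIp type="PUBLIC"/>
    </virtualIps>
    <nodes>
        <node address="10.180.1.1" port="80" condition="ENABLED"/>
        <node address="10.180.1.2" port="80" condition="ENABLED"/>
    </nodes>
</loadBalancer>
EOF
```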

Now that we have the xml file, let’s create the Load Balancer. Change out your Auth Code and your Account number in the example below:
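The create call was a POST of that XML file; here is a sketch that just prints the command (the token, account number, and DFW endpoint host are example/assumed values to swap for your own):

```shell
TOKEN="63ea9670-c80f-402d-9657-c59bdb123456"   # example value
ACCOUNT="123456"                               # example value

CREATE_CMD="curl -X POST -H \"X-Auth-Token: $TOKEN\" -H \"Content-Type: application/xml\" -d @createtest1-http.xml https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/$ACCOUNT/loadbalancers"
echo "$CREATE_CMD" | tee create-cmd.txt
```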

The important takeaway from above is the new public IP address and IP address ID. We will use the ID when we build the HTTPS load balancer so that they share the same IP.

Now, let’s build the test1-https Load Balancer. First, the xml file:

The changes here are that we are giving it a different name, telling the Load Balancer to listen on port 443 for HTTPS traffic, and, instead of requesting a new public IP, asking it to use the IP that we created above. In my case, that was IP ID 88. Also, note that we are asking it to send all traffic to the nodes on port 444. That’s not a typo: in order for the web nodes to distinguish one site’s HTTPS traffic from the other’s, we are going to send the traffic on different ports.
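Reconstructed the same way (again, schema details are from memory and node IPs are placeholders), createtest1-https.xml differs only in the name, the listener port/protocol, the shared virtual IP referenced by ID (88 in my case), and the node port:

```shell
cat > createtest1-https.xml <<'EOF'
<loadBalancer xmlns="http://docs.openstack.org/loadbalancers/api/v1.0"
    name="test1-https" port="443" protocol="HTTPS">
    <virtualIps>
        <virtualIp id="88"/>
    </virtualIps>
    <nodes>
        <node address="10.180.1.1" port="444" condition="ENABLED"/>
        <node address="10.180.1.2" port="444" condition="ENABLED"/>
    </nodes>
</loadBalancer>
EOF
```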

Now, the command to create this Load Balancer is just like above:

test2-http and test2-https will be just like above, but give them different names and have test2-https send traffic to the nodes on port 445.

Configure Apache on the Individual Nodes

Ok, we now have 4 Load Balancers, now to take a look at the apache config for one of the web nodes. You will want to add the following:
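The Apache additions didn’t survive here either. Roughly, each node keeps serving plain HTTP on 80 as a normal name-based vhost and serves each site’s SSL on its dedicated port; since the HTTPS Load Balancers pass the traffic through rather than terminating SSL, the nodes hold the certificates. Domains, document roots, and cert paths below are placeholders:

```apache
Listen 444
Listen 445

# test1 HTTPS arrives from its Load Balancer on 444:
<VirtualHost *:444>
    ServerName test1.example.com
    DocumentRoot /var/www/test1
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/test1.crt
    SSLCertificateKeyFile /etc/pki/tls/private/test1.key
</VirtualHost>

# test2 HTTPS arrives on 445:
<VirtualHost *:445>
    ServerName test2.example.com
    DocumentRoot /var/www/test2
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/test2.crt
    SSLCertificateKeyFile /etc/pki/tls/private/test2.key
</VirtualHost>
```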

That’s it – apply those settings to all of the web nodes, open up iptables on the nodes for ports 80, 444 and 445, start Apache, and you will be good to go. (Obviously, don’t forget to point DNS for each site at the IP shared by its pair of Load Balancers.)

Related resource: The API guide for Load Balancers as a Service:

I hope this helps! If anything doesn’t make sense or you have any comments leave a message below.

Using SNI to host multiple SSL Sites on a single IP in apache using Rackspace Cloud

1/15/14 Update: Fr0X from the Rackspace Cloud Community solved the issue of Browser and OS limitations by leveraging CloudFlare with this solution. His post is at:


Common hosting knowledge has always been that if you want to host multiple SSL Sites on a single server you need to assign each website its own unique IP address. This makes sense. The whole purpose of SSL is that the request and response are encrypted. So when the request gets to Apache, Apache can not use standard name-based hosting because it can not read the name of the site being requested since it is encrypted. To get around this, you can put sites on separate IP addresses and Apache will look at the request and say “I don’t know what site you are requesting, but I know you are requesting it on X IP address, so I will send you to the default site I have for X IP address.”

In a world with unlimited IP addresses this works just fine. The problem is that the world is quickly running out of IPv4 Addresses and that we might be stuck limping around on IPv4 for awhile waiting for ISPs to catch up with IPv6.

Enter SNI (Server Name Indication). SNI allows browsers to send the hostname (domain) being requested separately, unencrypted, so that the web server can understand the request and serve the right virtual host. It is not without its drawbacks though; let’s look at what those are:

Server Pre-Requisites

You must be running Apache 2.2.12 or higher, and you must be running OpenSSL 0.9.8f or higher. RHEL/CentOS 5.5 does not have both of these versions available in the standard repositories or the extended EPEL repos, so yum install is out the window on those distros. You will be stuck building from source. This is a game changer, since your package manager is no longer aware of the installation of that software, and that will cause all sorts of headaches. Your options are to search for a repo that does include these later versions and install from it (I haven’t looked too hard yet), install from source, pray that 5.6 has them and wait, or go with Fedora 14.

For this example, I am going to go with Fedora 14 because it is the easiest way to demonstrate SNI since the necessary versions are in the yum repos.

Browser Limitations

Oh yeah, you don’t just have to worry about your server. You also have to worry about your users’ browsers. Not all of them support SNI, but most do. The following browsers work:

  • Mozilla Firefox V2 and up.
  • Chrome
  • Opera 8.0 or higher
  • IE 7 or higher on Vista or higher. (Sorry, IE 7 on XP won’t work)
  • Safari 3.2.1 on OS X 10.5.6 or higher

Ok, so that’s all the bad news. Is that enough to scare you away from it? Maybe. Only you can decide that, and as IPv4 addresses become more scarce and supply and demand kicks in, prices for them will go up. Only you can determine if this is the right solution for your business and website. Now let’s dive into a server!

Proof of Concept

Let’s see this in action! I am going to do these steps using a Fedora 14 Rackspace Cloud Server. I will do this using self-signed certs, and it will be a minimal install because it is simply proof of concept. I will be using two test domains and modifying my hosts file to point them at the IP of my server. Start up the server and login as root.

Install what you will need

Add in Firewall Rules
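The rules that belonged here boil down to openings for ports 80 and 443. This sketch writes them to a small script so you can review it before running it as root (the original may have used slightly different rule placement):

```shell
cat > firewall.sh <<'EOF'
#!/bin/bash
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
service iptables save
EOF
```

Run it as root with `bash firewall.sh`.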

Create Some Directories

Create Some Index Files

Put something in them so we can see if it works

Create the self-signed Certs

I am using the same two test domains as above. Several of these commands will prompt you for input, just roll with it.
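A non-interactive sketch of the cert generation for the first test domain. The -subj flag supplies the answers the prompts would normally ask for, the CN is a placeholder domain, and on the real server you would move the resulting files somewhere like /etc/pki/tls:

```shell
# Generate a key, a signing request, and a one-year self-signed cert:
openssl genrsa -out site1.key 2048
openssl req -new -key site1.key -out site1.csr -subj "/CN=site1.example.com"
openssl x509 -req -days 365 -in site1.csr -signkey site1.key -out site1.crt
```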

Repeat those steps above to make a self-signed cert for the second domain.

Edit your Apache Config File

add into /etc/httpd/conf/httpd.conf
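The block that went here boiled down to name-based SSL vhosts sharing a single IP and port; a sketch with placeholder domains and cert paths (Apache 2.2 still wants the NameVirtualHost directive):

```apache
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/site1.crt
    SSLCertificateKeyFile /etc/pki/tls/private/site1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName site2.example.com
    DocumentRoot /var/www/site2
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/site2.crt
    SSLCertificateKeyFile /etc/pki/tls/private/site2.key
</VirtualHost>
```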

Start Apache

That’s it – after you edit your local hosts file you should be able to go to both test domains in a browser and see your 2 test files. Note that you WILL get SSL errors in your browser with the above, but that is only because they are self-signed certs. If you look at the error, you will see that it is NOT due to a host name mismatch, but because the signer is not trusted. If you actually buy the certs you won’t get an error.

Leave me a comment and let me know what you think!

Install PhpMyAdmin on CentOS 5.5 using Epel Repo

My last post on installing phpMyAdmin became pretty popular, so I wanted to let everyone know that there is another way to do it too.

You can use the EPEL repository and do a yum install phpmyadmin and it will work. Warning: the EPEL repo is built from Fedora packages and is pretty much the latest and not-always-greatest version of software. If you choose to do this, be sure to disable the EPEL repo when done.

First up, install MySQL.

Install MySQL

To get started we need to install MySQL

Now start it up

Now we need to secure it:
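The install/start/secure sequence above, collected into a reviewable script (CentOS 5 era commands; mysql_secure_installation is the interactive part the following sections walk through):

```shell
cat > setup-mysql.sh <<'EOF'
#!/bin/bash
yum install -y mysql-server
service mysqld start
chkconfig mysqld on          # come back up after a reboot
mysql_secure_installation    # interactive -- see the prompts below
EOF
```

Run it as root with `bash setup-mysql.sh`.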

It is going to ask you a handful of questions:

Current Root Password

You will be asked for your current root password. Because this is a new installation it is set to none. Press enter.

Set Root Password

If the above step worked correctly you should be prompted with a question asking you if you would like to set your root password. Please press Y and press Enter.

You will be asked for your root password twice. If it works you will see Success!

Removing Anonymous Users

You will be prompted to remove the MySQL anonymous users. For security reasons we want to do this. The text above the question explains this topic in more detail. Press Y and then Enter.

Disallow Root Login

You will be asked if you would like to disallow remote login for the root user and only allow connections from the server itself. To keep our server secure you want to say Y and press Enter.

Delete test Database

MySQL ships with a default database called test. This is not needed and can be deleted. Press Y and then Enter to delete the test database and its associated users.

Reload Privilege Tables

This step will reload the user settings (called privilege tables) so all user changes will take effect. Press Y and then Enter to continue.

This post won’t go into setting up additional users besides root and assigning them privileges. For information on that, check out the Cloud Servers Knowledge Base:

Install EPEL Repo

Now that MySQL is installed, let’s install the EPEL Repo. To do that, as root run

Now, if you ls -al /etc/yum.repos.d/ you should see epel.repo and epel-testing.repo. Nice.

Install phpMyAdmin

Next up, just do a yum install of phpmyadmin:

This will gather all of the dependencies and install apache and php for you if they aren’t already there.

Next, we need to modify the phpMyAdmin apache conf file because by default it restricts access to only localhost. Personally, I think you are better off getting your IP and only allowing that IP instead of opening up phpMyAdmin to the world. Head over to and grab your IP address (thanks to Major at for the link). Once you have your IP, edit the file /etc/httpd/conf.d/phpMyAdmin.conf to allow that IP. You would change the part of the file that looks like this:

To instead look like this:
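The before/after snippets didn’t survive here. From memory of the EPEL package’s default stanza (Apache 2.2 access-control syntax; 203.0.113.50 below is a placeholder for your own IP), the change is roughly:

```apache
# Before -- phpMyAdmin only answers to the server itself:
<Directory /usr/share/phpMyAdmin/>
    Order Deny,Allow
    Deny from All
    Allow from 127.0.0.1
</Directory>

# After -- also allow your workstation's IP (placeholder shown):
<Directory /usr/share/phpMyAdmin/>
    Order Deny,Allow
    Deny from All
    Allow from 127.0.0.1
    Allow from 203.0.113.50
</Directory>
```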

Save the file, and then start apache:

You may also need to add an iptables rule by running the following commands:

That’s it. Visit http://IpOfTheServer/phpmyadmin in a browser and login as the root user and whatever mysql password you setup.

House Cleaning

Disable the EPEL repo. This is generally a good idea because if you wanted Fedora you would have installed Fedora instead of CentOS. Edit the /etc/yum.repos.d/epel.repo file and epel-testing.repo file and change anywhere that it says ‘enabled=1’ to be ‘enabled=0’
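That edit is a one-line sed per file, demonstrated here on a scratch copy (point it at /etc/yum.repos.d/epel.repo and epel-testing.repo on the real server):

```shell
# Scratch copy standing in for /etc/yum.repos.d/epel.repo:
printf '[epel]\nname=EPEL\nenabled=1\n' > epel.repo

sed -i 's/enabled=1/enabled=0/' epel.repo
```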

If you found this useful or have anything to add please post a comment!