Category Archives: Website Related

Fixing the Add Coupon button in Flatsome theme for Woocommerce

As I was building out Texas Forever (shameless plug: it has the greatest Texas Forever Shirts in the world), I ran into an issue with the Flatsome theme. When you were in the shopping cart and added a coupon code and then pressed "enter", the page would refresh without actually applying the coupon code. The only way to actually apply the coupon for your Texas Forever shirt was to click the button.

That wouldn’t do at all, so I went code diving. The file we need to update is buried in /wp-content/themes/flatsome/woocommerce/cart/cart.php

Ideally, you are running a child theme of Flatsome. Assuming you are, just copy the file to your child theme to duplicate it, and then work on it there. If you are NOT, you can still edit the Flatsome theme file, but know that when you update the theme your change will be gone.

The solution itself is really simple: the theme author has a </form> tag out of place, so when you press "enter" in the shopping cart, it interprets the keypress as submitting the "Update Cart" form rather than the form that applies the coupon code.

By moving the </form> tag from line 170 to line 148, the problem is solved and everything works as expected!

dsh (Dancer’s Shell / Distributed Shell) and you

dsh is an awesome tool for administering pools of servers where you just want to run the same few commands on each one. I run Mac OS X locally, so I'll write the article from that perspective:

Install DSH on a Mac

First and foremost, you need to install dsh. The downloads page for the project is a nightmare, but you basically want the latest versions of libdshconfig and dsh. At the time of this writing, those would be 0.20.13 and 0.25.9 respectively.

I just dropped them into /tmp for the time being:

Then go through the normal install from source process, starting with libdshconfig
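The source build is the usual dance; assuming the tarballs landed in /tmp as above, it looks something like this:

```shell
# build and install libdshconfig first, then dsh (adjust version numbers as needed)
cd /tmp
tar xzf libdshconfig-0.20.13.tar.gz
cd libdshconfig-0.20.13
./configure && make && sudo make install

cd /tmp
tar xzf dsh-0.25.9.tar.gz
cd dsh-0.25.9
./configure && make && sudo make install
```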

Now you should be able to run dsh and have it return an error that no machine was specified:

Configuring DSH

You will want to set up RSA keys for your user on each of the machines that you want to log in to remotely so that you are not prompted for a password. (This is outside the scope of this article; there are about a gazillion articles online that will teach it.) Once the keys are in place, you will want to create group files. You will need to mkdir -p ~/.dsh/group and then create a text file in the group directory that lists the machines you want to connect to. Here is an example:
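For example, assuming a group named 'web' (the usernames and hostnames here are made up):

```shell
# create the group directory and a 'web' group file; one user@host per line
mkdir -p ~/.dsh/group
cat > ~/.dsh/group/web <<'EOF'
admin@web1.example.com
admin@web2.example.com
EOF
```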

This sets the user and the host that you want in the “web” group.

Next up is a very important configuration change. dsh wants to use rsh by default instead of ssh. You will need to edit /usr/local/etc/dsh.conf as an Administrator to change that. Just change the line:

to read:
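If memory serves, the stock line and the replacement look like this:

```
# in /usr/local/etc/dsh.conf, change this line...
remoteshell =rsh
# ...to read:
remoteshell =ssh
```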

Save the file, and you are ready to go.

Actually using DSH

Ok, now for the magic. Assuming you have a group named ‘web’, you could run:
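Something along these lines (the 'web' group file from earlier is assumed):

```shell
# run uname -a on every machine in the 'web' group, concurrently, with machine names shown
dsh -M -c -g web uname -a
```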

This will return the results of uname -a for each server. The -c flag does it concurrently instead of going to each machine one at a time. The -M flag tells it to list the machine name by the response.

Other stuff

I prefer to always see the machine name, so instead of always specifying -M, I created a new file at ~/.dsh/dsh.conf and included the line "showmachinenames=1". You can set other options here too. For example, say you use a non-standard ssh port. You could specify it on the command line with -o:

OR, you can set dsh to always use a different port by adding the line “remoteshellopt=-p 2222” to your configuration file.

Other sources if my article didn’t make sense:

Check out Racker Hacker’s post:

Upgrading to PHP 5.3 on Centos 5.5

The latest version of WordPress requires that you be running at least version 5.3 of PHP. This poses a problem for a lot of people who are still running 5.1 or 5.2, since those were the latest versions available in the CentOS or EPEL repositories for a long time.

Fortunately, php 5.3 is now available in the CentOS Base repo, so upgrading shouldn’t be too much of a nightmare. Here is what I did:

First and foremost, hopefully you are using a Cloud Hosting provider like Rackspace that will allow you to take a quick image of the server before you go messing with it. I strongly encourage you to have a recent backup of the server available, just in case. Once you have your image, move on:

First, you want to know what modules you currently have installed. The easiest way to do that would be to query rpm:

This will query all packages for php and output to a file in your home directory named php52. For example, on one of my old servers, that list looked like this:
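The query would look something like this; the sample output in the comments is hypothetical, and yours will differ:

```shell
rpm -qa | grep ^php > ~/php52
# ~/php52 might then contain lines like:
#   php-5.2.10-1.el5
#   php-common-5.2.10-1.el5
#   php-mysql-5.2.10-1.el5
```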

Now, you will want to make a copy of that list, and modify the names to be php53.

Using a text editor, open up php53 and remove from the major version to the end of the line, then replace ‘php’ with ‘php53’. For example, the above list became this:
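That copy-and-edit step can also be scripted with sed (a sketch; double-check the resulting list before using it):

```shell
cp ~/php52 ~/php53
# strip everything from the version number on, then rename the php prefix to php53
sed -i 's/-[0-9].*$//; s/^php/php53/' ~/php53
```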

Now you have your list of what was installed (php52) and what you want installed (php53). Remove the old version of php:
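The removal command (reconstructed; this is the backtick trick the note below describes):

```shell
yum remove `cat ~/php52`
```

The install step works the same way: yum install `cat ~/php53`.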

(Note that those are backticks before cat and after php52. The backtick is the weird looking character next to the number 1 on your keyboard.)

Now that all of those packages are removed, install the php53 ones.

Expect some of the packages to fail. Some modules are now built into php53-common (mhash, for one, I believe) and others simply don't have a php53 package available yet (pear). Make note of the ones that yum complained were not available.

Any packages that were listed as not available will need to be examined one at a time to determine if you can use the old version, if it is deprecated, etc.

Once you are done, just restart apache and you should be good to go.

The one gotcha I ran into is that on some old custom sites I wrote sloppy code and used shorthand to open php code blocks (I used <? instead of <?php). In the php.ini that comes with php53 from CentOS, this shorthand tag is disabled. To resolve this, just open up php.ini in a text editor, find the line for short_open_tag, and turn it on. You can also use sed to make the change:
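With sed, that one-liner might look like this (check your php.ini first; the exact spacing of the line can vary):

```shell
# flips short_open_tag from Off to On, keeping a backup at php.ini.bak
sed -i.bak 's/short_open_tag = Off/short_open_tag = On/' /etc/php.ini
```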

That should be it! The nice part about doing it this way is that if you screw something up, you can just run yum install `cat ~/php52` to get all of your old packages back.

Setting up MemCache to handle PHP Sessions for a Web Cluster

A really common issue when people start to look towards scaling horizontally (adding on additional web/app servers) is session persistence.

Rackspace Cloud Load Balancers as a Service offers session persistence for HTTP (Port 80) traffic. This is done by the LB injecting a cookie into the response that specifies a node. The next time the user requests a page, they send the cookie and the load balancer reads it then directs traffic to the correct node.

This Session Persistence does NOT work on HTTPS (Port 443) because the LB is not able to terminate SSL. This means that the LB has no way to read the cookie being sent by the browser to achieve persistent sessions (and for that matter, no way to inject the cookie either).

Even if you are just load balancing port 80 traffic, what happens if you want to change or modify some code on a node? If you pull it out of rotation, it will go into a draining state where existing sessions can still connect. Not exactly a fast solution.

The solution to Load Balancing HTTPS or simply to load balancing without having to worry about session persistence at the LB is to store your sessions somewhere else. But where? You can store them in a Database if you want, but more than likely your database is busy enough as it is. A better solution would be to store the sessions on a separate memcache server.

For the uninitiated, memcache was originally created by LiveJournal. What it does is fairly simple: it gives you control over a certain amount of memory on the server so that you can store anything you want in there. This allows you to retrieve it much faster than if you had to read from disk. You can store DB query results, pages, or practically anything. We are going to store sessions.

This is assuming a brand new install of CentOS 5.5 from Rackspace Cloud Servers. First, let’s setup the memcache server.

MemCache Server

You will need the EPEL repo, so run this to install it:
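If I remember right, grabbing the EPEL release RPM for CentOS 5 looked something like this (the URL and release version may well have changed, so check the EPEL site first):

```shell
rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
```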

Now that the EPEL repo is available, you can use yum to install memcached (while we are at it, we might as well install vim and tcpdump so we can watch it working):
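The install is a one-liner:

```shell
yum install memcached vim tcpdump
```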

Next up, we need to set up the very simple config file that memcache uses. It will be at /etc/sysconfig/memcached. Here is mine:
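The original file contents were lost from this post; a config along these lines should be close (10.180.0.2 is a made-up private IP; use your memcache server's own):

```ini
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="512"
OPTIONS="-l 10.180.0.2"
```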

The variables are pretty straightforward: Port (the memcache default is 11211), User, Maximum Connections, and Cache Size (how much memory in MB you are allowing memcache to use). Under Options, I specified -l to tell it to listen on the private IP address only. This would be the IP of the memcache server, NOT the web server(s).

Next up, start the daemon and make it start on boot.
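On CentOS that would be:

```shell
service memcached start
chkconfig memcached on
```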

Next up, secure the memcache server. We don't want to allow just anyone on the private network access to memcache. A rule set like this should do; the source IPs in the rules would be my web server private IPs. If you don't know the difference between -A and -I, read up on it Here
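A sketch, with 10.180.0.3 and 10.180.0.4 standing in for the web servers' private IPs:

```shell
# allow the web servers in, then drop everyone else on the memcache port
iptables -I INPUT -s 10.180.0.3 -p tcp --dport 11211 -j ACCEPT
iptables -I INPUT -s 10.180.0.4 -p tcp --dport 11211 -j ACCEPT
iptables -A INPUT -p tcp --dport 11211 -j DROP
service iptables save
```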

That’s it for the memcache server. Now all you have to do is setup the web server to write the sessions to the correct place.

The Web Server

Again, this is assuming a stock CentOS 5.5 server from Rackspace, so we have to install what we need.
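The install probably looked something like this (package names are per the CentOS 5 / EPEL era):

```shell
yum install httpd php php-pecl-memcache
```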

Now you can test that php has the memcache module loaded in.
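A quick way to check:

```shell
php -m | grep -i memcache
```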

Look for memcache; it should be there.

Start apache and make it start on boot

Open up iptables on the web server:

Now we need to edit the php configuration file to tell it that we want to save sessions to memcache, and where our memcache server is. This file will be at /etc/php.ini.

Look for these 2 lines:

You will change these to the following, where the IP address is your memcache server's:
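Assuming stock php.ini defaults, the before and after look roughly like this (10.180.0.2 is the made-up memcache server IP from earlier):

```ini
; before:
session.save_handler = files
;session.save_path = "/var/lib/php/session"

; after:
session.save_handler = memcache
session.save_path = "tcp://10.180.0.2:11211"
```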

This simply tells php to write sessions to memcache and gives the address and port of the memcache server.

Now restart apache so that this will take effect:

All that is left to do now is test it. I created a file named sessiontest.php in /var/www/html that contains:
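The original script is gone from this post, so here is a minimal stand-in that starts a session, stores a value, and echoes the session ID (write it locally, then place it in /var/www/html):

```shell
cat > sessiontest.php <<'EOF'
<?php
session_start();
$_SESSION['test'] = 'Hello from web server one';
echo 'Session ID: ' . session_id();
?>
EOF
```

On the 2nd web server, swap the body for something like <?php session_start(); print_r($_SESSION); ?> so that it reads the session back instead of writing it.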

Start up a tcpdump on the memcache server listening for port 11211:
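Something like this (eth1 being the private interface):

```shell
tcpdump -i eth1 port 11211
```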

Then access your test page at http://YourIpAddress/sessiontest.php. You will see that there is activity on the memcache server when the page is accessed. If you really want to see the test in action, start up a 2nd web server with the exact same configuration, but change the script to:

You will see it echo out the session that was created on the 1st web server.

That about covers it. Leave a message if you have any questions!

Building a Rackspace Cloud Server from Cloud Files Manually

This article will cover how to manually take image files in Cloud Files and build them to a new Cloud Server. This will ONLY work for Linux. I don’t have a clue how to make this work on Windows :-p

There can be several reasons why you would want to do this. Maybe you want to manually move from a US datacenter to a UK datacenter. Maybe you have an account and your boss/co-worker/friend has an account and you want to share images. Whatever the reason, these are the steps to make it work.

Credit where credit is due: the idea for this was originally published by Jordan and Dewey (thanks, guys). My article will just cover doing it using curl instead of wget, and some of the potential pitfalls not covered in their article.

So here goes! First things first, you will need to start up a new stock server in the receiving account that is the EXACT same image as the server that the image was built from. For example, if the image in Cloud Files was originally taken from a server running CentOS 5.5, you will build a stock image that is running CentOS 5.5 in the receiving account. Login to the new server you built.

Make a backup of the new server’s /etc directory. You will need this later:
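For example:

```shell
# preserve permissions and ownership; this becomes /etc.bak referenced later
cp -pr /etc /etc.bak
```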

If necessary, install curl (Some distros of linux come with it, others don’t).

Authenticate to the Cloud Files Account where the image is stored:
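That curl is the same shape used elsewhere in these posts (the v1.0 auth endpoint is from memory; verify it against current docs):

```shell
curl -D - -H "X-Auth-User: YourUsername" -H "X-Auth-Key: YourAPIKey" https://auth.api.rackspacecloud.com/v1.0
```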

After you run that, it will spit out a list of names and values, like this:

We care about X-Storage-URL (That is where the image files are stored) and X-Storage-Token (This is your authentication token that lets you actually download stuff). Now let’s see a list of all of the image files in the account. Replace your Storage Token and your URL below. Don’t forget the /cloudservers at the end of the URL.
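With those two values, the listing request is roughly:

```shell
# substitute the X-Storage-Token and X-Storage-URL values from the auth response
curl -H "X-Auth-Token: YourStorageToken" YourStorageURL/cloudservers
```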

As you can see above, there are several files associated with each image. All of the data is stored in the .tar.gz files. The .yml file is a configuration file that we don't care about for this article. You will see that some of the images have more than one .tar.gz file. This happens when the image is larger than 5GB and it gets chunked into multiple objects in Cloud Files. We will assume that we are working with a chunked image because that will make it just a little bit harder.

Let’s grab the delweb1ssl image. Grab the first chunk like this:

This can be up to 5GB, so it may take a little while. Next up, download the 2nd chunk (and then third, fourth, etc)
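The downloads, sketched with made-up object names (the chunk suffixes here are assumptions; use whatever names the listing showed):

```shell
# first chunk
curl -o delweb1ssl.tar.gz.0 -H "X-Auth-Token: YourStorageToken" YourStorageURL/cloudservers/delweb1ssl.tar.gz.0
# second chunk; repeat for .2, .3, and so on
curl -o delweb1ssl.tar.gz.1 -H "X-Auth-Token: YourStorageToken" YourStorageURL/cloudservers/delweb1ssl.tar.gz.1
```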

Note that we are just changing what file we are getting and what we are calling it locally. Do this for as many .tar.gz files as there are in the account.

Now that we have all of the images downloaded, cat them together to make one big image
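Order matters here, so keep the chunks in sequence:

```shell
cat delweb1ssl.tar.gz.0 delweb1ssl.tar.gz.1 delweb1ssl.tar.gz.2 > delweb1ssl.tar.gz
```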

If the new server isn’t large enough to hold the stock image, the downloaded images from Cloud Files, AND the concatenated image, you may run out of disk space. For this reason, you might want to just start with a huge 8GB (320GB Hard Drive) or 16GB (640GB Hard Drive) server and downsize after you are done with this.

Now that we have the one big image, we need to extract that out onto the filesystem. More than likely, you will need the newest version of tar to have the –hard-dereference option available. Your choices are to either download tar and install it from source, or grab a fully compiled version of tar here. (Thanks again Jordan). We’ll use the compiled version because it is just easier.
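With the compiled tar sitting in the current directory, the extraction is roughly this (a sketch; it overwrites the running system's files, so make sure you took the /etc backup first):

```shell
./tar -xzpf delweb1ssl.tar.gz -C / --numeric-owner
```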

This can take awhile.

Remember when we backed up /etc above? (You did that, right?) Now we will want to bring that back in. However, if we just completely overwrite the /etc directory that we just extracted, we will lose things like our users, groups, iptables rules, etc., because they will be overwritten with the default values. To make sure we always have the /etc directory from the tar available, save that as another backup directory:
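For example:

```shell
cp -pr /etc /etc.tar
```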

Ok, now we have 3 etc directories:
/etc = The version from the backup image (the tarball we just extracted)
/etc.bak = The stock image /etc directory with all defaults
/etc.tar = A backup copy of the etc directory from the tarball

From here you can manually bring over your network config files and anything else necessary from the default image, but I prefer to just replace the entire /etc directory with stock data and bring over what I need from the /etc.tar directory later.
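Swapping in the stock /etc wholesale would look something like this (a sketch, and a dangerous step if interrupted, so take it slowly):

```shell
rm -rf /etc
cp -pr /etc.bak /etc
```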

Depending on what distro you are running, you will also want to grab your iptables rules from /etc.tar. In anything RHEL-based it would be:
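For example:

```shell
cp /etc.tar/sysconfig/iptables /etc/sysconfig/iptables
service iptables restart
```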

That’s pretty much it. Cross your fingers, reboot and see if it comes back up!

Rackspace Cloud Load Balancer as a Service Cheater Script

Rackspace Cloud Load Balancer as a Service is awesome. It is an amazing product that makes load balancing sites really easy and abstracts away having to setup and configure one on your own. As of right now, it is only available via the API while a full blown GUI is being developed for the control panel. The API docs are very good and can be found at

UPDATE: Forget this exists. Caleb Groom has an awesome project on github that uses python and will let you manage your Load Balancers.

Creating a Load Balancer requires you to authenticate with your Username and API key, and then create an XML request with all of your settings in it that you send up. I made a very, very simple bash script that will write the XML for you. I'm not a programmer; this can be improved immensely, and there is no error catching or validation that what you type in is right.

Anyway, here is the script:
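The original script was lost from this post, so here is a minimal re-sketch of the idea: ask a few questions, write the XML, and print the curl command to run. The endpoint URL and XML namespace are from memory of the v1.0 API, so verify them against the API guide.

```shell
#!/bin/bash
# cheater-lb.sh: a tiny sketch with no validation, just like the original disclaimed
read -p "Load Balancer name: " NAME
read -p "Protocol (HTTP or HTTPS): " PROTOCOL
read -p "Port: " PORT
read -p "Node private IP: " NODEIP
read -p "Auth token: " TOKEN
read -p "Account number: " ACCOUNT

cat > /tmp/createlb.xml <<XML
<loadBalancer xmlns="http://docs.openstack.org/loadbalancers/api/v1.0"
    name="$NAME" port="$PORT" protocol="$PROTOCOL">
    <virtualIps><virtualIp type="PUBLIC"/></virtualIps>
    <nodes><node address="$NODEIP" port="$PORT" condition="ENABLED"/></nodes>
</loadBalancer>
XML

echo "curl -X POST -d @/tmp/createlb.xml \\"
echo "  -H \"X-Auth-Token: $TOKEN\" -H \"Content-Type: application/xml\" \\"
echo "  https://ord.loadbalancers.api.rackspacecloud.com/v1.0/$ACCOUNT/loadbalancers"
```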

So before you run that, you will need to authenticate and get your Auth token. To do that, run the following curl:

curl -D - -H "X-Auth-User: YourUsername" -H "X-Auth-Key: YourAPIKey"

After you run that, it will spit out a list of names and values, like this:

HTTP/1.1 204 No Content
Date: Wed, 30 Mar 2011 04:15:28 GMT
Server: Apache/2.2.3 (Mosso Engineering)
X-Storage-Token: 63ea9670-c80f-402d-9657-1234567890
X-Auth-Token: 63ea9670-c80f-402d-9657-c59bdb123456
Content-Length: 0
Connection: close
Content-Type: application/octet-stream

You will need the Auth Token. In the made up example above that would be 63ea9670-c80f-402d-9657-c59bdb123456. You will also need your account number. In the example above that is listed under X-Server-Management. In the fake example that is 123456.

Once you have those, invoke the bash script above with something like


It will ask you some questions, most of them give you a list of available options. Once it is done asking questions it will spit out the curl command for you to run. Here is an example:

That’s it, copy and paste the curl command that it spits out and that will create the Load Balancer for you. Like I said, this is a VERY simple script that I primarily use just for setting up test load balancers. If you improve on it and make it totally awesome drop me a link!

Using Rackspace Cloud Load Balancers as a Service to host multiple SSL Sites

A few days ago I wrote an article about using SNI to host multiple SSL sites on a single IP. This method is excellent, and it works with Rackspace Cloud Load Balancers as a Service very well. The major drawback is that if your users are using a browser that does not support SNI, they won’t get the desired results. I want to first say that I am a huge fan of SNI, and as available IPv4 addresses become fewer and fewer, I think that SNI will be the preferred method of doing this. That being said, a lot of websites simply can not afford to write off anyone with an unsupported browser. For that reason, here are instructions on using Rackspace Cloud Load Balancer as a Service to host multiple SSL sites from the same pool of Web Servers.

High Level Overview

A high level overview is that you will have 2 Load Balancers for each site. The 2 Load Balancers will share a single Public IP address. One will listen on port 80 for standard HTTP traffic and the other will listen on port 443 for HTTPS traffic. In my proof of concept below, I will have two sites, ergo I will have four Load Balancers.

Create the Load Balancers

Let’s create the Load Balancers. Since Rackspace Cloud LoadBalancers as a Service is only available via the API at the time of this writing, that is what we will use.

First we authenticate (obviously, change out your username and API key for the made-up values):

This will return a few headers, the one we care about is X-Auth-Token.

Now using that token we will build a Load Balancer in the datacenter of our choice. For my example, I will build into the DFW datacenter. If you want to build into ORD, just change out ‘dfw’ for ‘ord’ below.

First up, create an xml file for test1-http. Let’s call it createtest1-http.xml
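Reconstructed from memory of the v1.0 API (the namespace and attribute names may need checking against the API guide; the node IPs are made up):

```xml
<loadBalancer xmlns="http://docs.openstack.org/loadbalancers/api/v1.0"
    name="test1-http" port="80" protocol="HTTP">
    <virtualIps>
        <virtualIp type="PUBLIC"/>
    </virtualIps>
    <nodes>
        <node address="10.180.0.10" port="80" condition="ENABLED"/>
        <node address="10.180.0.11" port="80" condition="ENABLED"/>
    </nodes>
</loadBalancer>
```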

Those values are pretty self explanatory, but you are giving the Load Balancer a name, telling it to listen on port 80 for HTTP traffic, requesting a public IP, and assigning it two nodes that it should send traffic to on port 80 as well.

Now that we have the xml file, let’s create the Load Balancer. Change out your Auth Code and your Account number in the example below:
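The create call would be something like this (endpoint from memory; swap in your own token and account number):

```shell
curl -X POST -d @createtest1-http.xml \
  -H "X-Auth-Token: YourAuthToken" \
  -H "Content-Type: application/xml" \
  https://dfw.loadbalancers.api.rackspacecloud.com/v1.0/YourAccountNumber/loadbalancers
```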

The important take away from above is the new Public IP address and IP address ID. We will use the ID when we build the https load balancer so that they share the same IP.

Now, let’s build the test1-https Load Balancer. First, the xml file:

The changes here are that we are giving it a different name, telling the Load Balancer to listen on port 443 for HTTPS traffic, and instead of requesting a new public IP, we are asking it to use the IP that we created above. In my case, that was IP ID 88. Also, note that we are asking it to send all traffic to the nodes on port 444. That's not a typo. In order for the web nodes to distinguish test1 HTTPS traffic from test2 HTTPS traffic, we are going to send each site's traffic on a different port.

Now, the command to create this Load Balancer is just like above:

test2-http and test2-https will be just like above, but give them different names and have test2-https send traffic to the nodes on port 445.

Configure Apache on the Individual Nodes

Ok, we now have 4 Load Balancers, now to take a look at the apache config for one of the web nodes. You will want to add the following:
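A sketch of the vhost layout (the domains, paths, and cert locations are placeholders). Because each HTTPS site arrives on its own port, Apache can tell them apart without SNI:

```apache
Listen 80
Listen 444
Listen 445

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName test1.example.com
    DocumentRoot /var/www/test1
</VirtualHost>

<VirtualHost *:80>
    ServerName test2.example.com
    DocumentRoot /var/www/test2
</VirtualHost>

# test1 HTTPS arrives from its load balancer on port 444
<VirtualHost *:444>
    ServerName test1.example.com
    DocumentRoot /var/www/test1
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/test1.crt
    SSLCertificateKeyFile /etc/pki/tls/private/test1.key
</VirtualHost>

# test2 HTTPS arrives from its load balancer on port 445
<VirtualHost *:445>
    ServerName test2.example.com
    DocumentRoot /var/www/test2
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/test2.crt
    SSLCertificateKeyFile /etc/pki/tls/private/test2.key
</VirtualHost>
```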

That’s it – Apply those settings to all of the web nodes, open up iptables on the nodes for ports 80, 444 and 445, and start apache and you will be good to go. (Obviously, don’t forget to point DNS for to the IP of the loadbalancer for test1 and the same for test2.)

Related resource: The API guide for Load Balancers as a Service:

I hope this helps! If anything doesn’t make sense or you have any comments leave a message below.

Using SNI to host multiple SSL Sites on a single IP in apache using Rackspace Cloud

1/15/14 Update: Fr0X from the Rackspace Cloud Community solved the issue of Browser and OS limitations by leveraging CloudFlare with this solution. His post is at:

Common hosting knowledge has always been that if you want to host multiple SSL sites on a single server you need to assign each website its own unique IP address. This makes sense. The whole purpose of SSL is that the request and response are encrypted. So when the request gets to Apache, Apache can not use standard name-based hosting because it can not read the name of the site being requested, since it is encrypted. To get around this, you can put sites on separate IP addresses and Apache will look at the request and say "I don't know what site you are requesting, but I know you are requesting it on X IP address, so I will send you to the default site I have for X IP address."

In a world with unlimited IP addresses this works just fine. The problem is that the world is quickly running out of IPv4 Addresses and that we might be stuck limping around on IPv4 for awhile waiting for ISPs to catch up with IPv6.

Enter SNI (Server Name Indication). SNI allows browsers to send the hostname (domain) being requested separately, un-encrypted, so that the web server can understand the request and serve the right virtual host. It is not without its drawbacks though; let's look at what those are:

Server Pre-Requisites

You must be running Apache 2.2.12 or higher, and you must be running OpenSSL 0.9.8f or higher. RHEL/CentOS 5.5 do not have both of these versions available in the standard repositories or the extended EPEL repos, so yum install is out the window on those distros. You will be stuck building from source. This is a game changer since your package manager is no longer aware of the installation of that software, which will cause all sorts of headaches. Your options would be to search for a repo that does include these later versions and install it (I haven't looked too hard yet), install from source, pray that 5.6 has them and wait, or go with Fedora 14.

For this example, I am going to go with Fedora 14 because it is the easiest way to demonstrate SNI since the necessary versions are in the yum repos.

Browser Limitations

Oh yeah, you don’t just have to worry about your server. You also have to worry about your user’s browsers. Not all of them support SNI, but most do. The following browsers work:

  • Mozilla Firefox V2 and up.
  • Chrome
  • Opera 8.0 or higher
  • IE 7 or higher on Vista or higher. (Sorry, IE 7 on XP won’t work)
  • Safari 3.2.1 on OS X 10.5.6 or higher

Ok, so that’s all the bad news. Is that enough to scare you away from it? Maybe. Only you can decide that, and as IPv4 addresses become more scarce and supply and demand kicks in prices for IPv4 addresses will go up. Only you can determine if this is the right solution for your business and website. Now let’s dive into a server!

Proof of Concept

Let’s see this in action! I am going to do these steps using a Fedora 14 Rackspace Cloud Server. I will do this using self-signed certs, and it will be a minimum install because it is simply proof of concept. I will be using domains and and modifying my hosts file to point to the IP of my server. Start up the server and login as root.

Install what you will need
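On Fedora 14, roughly:

```shell
yum install httpd mod_ssl openssl
```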

Add in Firewall Rules
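Something like:

```shell
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
service iptables save
```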

Create Some Directories

Create Some Index Files

Put something in them so we can see if it works
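The two steps above, sketched with placeholder domains:

```shell
mkdir -p /var/www/test1.example.com /var/www/test2.example.com
echo "This is test1" > /var/www/test1.example.com/index.html
echo "This is test2" > /var/www/test2.example.com/index.html
```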

Create the self-signed Certs

Again, I am using my two test domains. Several of these commands will prompt you for input; just roll with it.
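For the first domain, a self-signed cert can be generated roughly like this (the filenames and target directory are placeholders):

```shell
# key, signing request, then a self-signed cert good for a year
openssl genrsa -out test1.example.com.key 2048
openssl req -new -key test1.example.com.key -out test1.example.com.csr
openssl x509 -req -days 365 -in test1.example.com.csr \
    -signkey test1.example.com.key -out test1.example.com.crt
mkdir -p /etc/httpd/ssl
mv test1.example.com.key test1.example.com.crt /etc/httpd/ssl/
```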

Repeat those steps above to create a self-signed cert for the second domain.

Edit your Apache Config File

add into /etc/httpd/conf/httpd.conf
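A sketch of the SNI vhosts (the paths and domains are placeholders; with SNI, Apache can pick the vhost by ServerName even on port 443):

```apache
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName test1.example.com
    DocumentRoot /var/www/test1.example.com
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/test1.example.com.crt
    SSLCertificateKeyFile /etc/httpd/ssl/test1.example.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName test2.example.com
    DocumentRoot /var/www/test2.example.com
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/test2.example.com.crt
    SSLCertificateKeyFile /etc/httpd/ssl/test2.example.com.key
</VirtualHost>
```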

Start Apache

That’s it – after you edit your local hosts file you should be able to go to and in a browser and see your 2 test files. Note that you WILL get SSL Errors in your browser with the above, but that is only because they are self signed certs. If you look at the error, you will see that it is NOT due to a host name mismatch, but because the signer is not trusted. If you actually buy the certs you won’t get an error.

Leave me a comment and let me know what you think!

Install PhpMyAdmin on CentOS 5.5 using Epel Repo

My last post on installing phpMyAdmin became pretty popular, so I wanted to let everyone know that there is another way to do it too.

You can use the EPEL repository and do a yum install phpmyadmin and it will work. Warning: the EPEL repo is maintained by the Fedora project and is pretty much the latest and not-always-greatest versions of software. If you choose to do this, be sure to disable the EPEL repo when done.

First up, install MySQL.

Install MySQL

To get started we need to install MySQL

Now start it up

Now we need to secure it:
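The install, start, and secure steps above, roughly:

```shell
yum install mysql-server
service mysqld start
chkconfig mysqld on
mysql_secure_installation
```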

It is going to ask you a handful of questions:

Current Root Password

You will be asked for your current root password. Because this is a new installation it is set to none. Press enter.

Set Root Password

If the above step worked correctly you should be prompted with a question asking you if you would like to set your root password. Please press Y and press Enter.

You will be asked for your root password twice. If it works you will see Success!

Removing Anonymous Users

You will be prompted to remove the MySQL anonymous users. For security reasons we want to do this. The text above the question explains this topic in more detail. Press Y and then Enter.

Disallow Root Login

You will be asked if you would like to disallow remote login for the root user and only allow connections from the server itself. To keep our server secure you want to say Y and press Enter.

Delete test Database

MySQL ships with a default database called test. This is not needed and can be deleted. Press Y and then Enter to delete the test database and its associated users.

Reload Privilege Tables

This step will reload the user settings (called privilege tables) so all user changes will take effect. Press Y and then Enter to continue.

This post won’t go into setting up additional users besides root and assigning them privileges. For information on that, check out the Cloud Servers Knowledge Base:

Install EPEL Repo

Now that MySQL is installed, let’s install the EPEL Repo. To do that, as root run
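If memory serves, that was something like this (the URL and release version may have changed; check the EPEL site):

```shell
rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
```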

Now, if you ls -al /etc/yum.repos.d/ you should see epel.repo and epel-testing.repo. Nice.

Install phpMyAdmin

Next up, just do a yum install of phpmyadmin:

This will gather all of the dependencies and install apache and php for you if they aren’t already there.

Next, we need to modify the phpMyAdmin apache conf file because by default it restricts access to localhost only. Personally, I think you are better off looking up your public IP and allowing only that IP instead of opening up phpMyAdmin to the world (thanks to Major for the link to an IP-checking site). Once you have your IP, edit the file /etc/httpd/conf.d/phpMyAdmin.conf to allow that IP. You would change the part of the file that looks like this:

To instead look like this:
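The stock file and the edited version look roughly like this (1.2.3.4 stands in for your own IP; directive details may differ between phpMyAdmin versions):

```apache
# before:
<Directory /usr/share/phpMyAdmin/>
    Order Deny,Allow
    Deny from All
    Allow from 127.0.0.1
</Directory>

# after:
<Directory /usr/share/phpMyAdmin/>
    Order Deny,Allow
    Deny from All
    Allow from 127.0.0.1
    Allow from 1.2.3.4
</Directory>
```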

Save the file, and then start apache:

You may also need to add an iptables rule by running the following commands:
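For example:

```shell
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save
```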

That’s it. Visit http://IpOfTheServer/phpmyadmin in a browser and login as the root user and whatever mysql password you setup.

House Cleaning

Disable the EPEL repo. This is generally a good idea because if you wanted Fedora you would have installed Fedora instead of CentOS. Edit the /etc/yum.repos.d/epel.repo file and epel-testing.repo file and change anywhere that it says ‘enabled=1’ to be ‘enabled=0’
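sed can do both files in one shot:

```shell
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo
```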

If you found this useful or have anything to add please post a comment!

Setting up DNS For Cloud Servers With Rackspace

DNS is one of those things that seems like magic until you understand it. This post will first give a brief overview of how DNS works, and then walk through the steps of setting up DNS for a Cloud Server with Rackspace.

What is DNS?
In short, Domain Name Service (DNS) translates hostnames to IP Addresses. This is what allows us to simply type in a domain name instead of trying to remember an IP address.

A high level overview of DNS:
When a visitor requests a website, a query is made for where the Start of Authority (SOA) is. This is simply saying "Hey, where can I find some information about this domain?" A response is made with the Name Servers. The Name Servers contain additional information on exactly what services are available on a domain. Common Name Server setups are:
Rackspace Cloud: a pair of Rackspace name servers
Rackspace Dedicated: a pair of Rackspace name servers
GoDaddy: a whole list of them (this many is just plain silly, but to each their own).

So now the requester goes to the Name Server and says "Hey! I'm looking for a website under this name". The Name Server looks for a Zone for the domain. If it finds it, it then looks for a record for the exact hostname requested, to tell the requester where to go next.

So to summarize:
NameServers contain Zones
Zones contain Individual Records

What are some common Record Types?
A: This is the biggie. An A record stands for address record and points a hostname (e.g. www.example.com) to an IP address (e.g. 123.45.67.89).
CNAME: Stands for Canonical Name. This is best described as an alias. It will point a hostname (e.g. www.example.com) to another hostname (e.g. example.com).
MX: Mail Exchanger record. This guy handles all email inquiries. Per the RFC (think of it as Internet Law), this must point to a hostname.

So, here are some examples to show you how DNS can chain together records to get where it is going.

Simple A Record: joshprewitt.com –> 123.45.67.89 (an example IP)

CNAME record of www.joshprewitt.com to the A record of joshprewitt.com: www.joshprewitt.com –> joshprewitt.com –> 123.45.67.89

MX Record Example: mail for joshprewitt.com –> MX record pointing at joshprewitt.com –> 123.45.67.89
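If you want to watch these lookups happen, dig will walk each step for you (example.com stands in for a real domain):

```shell
dig NS example.com      # which name servers are authoritative
dig A www.example.com   # the A/CNAME chain down to an IP
dig MX example.com      # where mail for the domain goes
```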

Ok, so now we have a basic understanding of DNS, and we know how the records can link together. How do we put this into practice?

Making it work

Let’s add a zone and some example records for a made up site:

First, add the zone:
1) Log in to the Rackspace Cloud control panel
2) Click Hosting > Servers > Choose any Server > Click the DNS tab
3) Click “Add”
4) You will be prompted for a domain name; enter the domain without a www (for example, joshprewitt.com rather than www.joshprewitt.com)
5) Press “OK”

That’s it, the zone is added! Now, click the zone and you can manage the actual records. Rackspace Cloud does not add any records by default, so you must add the ones that you need!

Before we talk about the records we are going to add, let’s look at the DNS options that we are going to run into.

DNS Options

Name: This is what the record will be known as, or what the user will type in to the address bar. Examples would be joshprewitt.com, www.joshprewitt.com, mail.joshprewitt.com, etc.

Content: This is where the Record will point. Think of it like this: A request comes in with a NAME and is directed to the CONTENT. For an A record, this will always be an IP address, usually the IP address of your server. For a CNAME or MX Record, this will always be a hostname.

TTL is Time To Live. This is the value in seconds for how long you want this record to be cached. The higher it is, the longer it will be cached, so performance will slightly improve. We usually suggest 3600 as TTL because this is 1 hour. This will allow you to get the benefits of the record being cached, but you can also make a DNS change later on and it will only take 1 hour to be effective.

Priority: This is an option when adding an MX record. A request will try the lowest priority first, and then the next lowest and so on. Common entries are 10, 20, etc. The number is arbitrary. If the lowest priority for one zone is 1 and the lowest priority for another zone is 100, there will be no performance difference.

Now that we know what everything means, let’s look at some typical records:


You will almost always want an A record for your bare domain and for its www version. Using joshprewitt.com as the example domain, let's see how these records would be added.

1) Click Hosting > Cloud Servers > Any Server Name > DNS Tab > The Zone you created above
2) Click ‘Add’ to create a new Record.
3) Input the Record like this:
Name: your domain name
Type: A
Content: Your IP Address goes here
TTL: 3600

For my example, this would be:
Name: joshprewitt.com
Type: A
Content: my server's IP address
TTL: 3600

You will want to add at least one more record for the 'www' version of your domain name. This will be very similar to the one above:

Input the Record like this:
Name: the www version of your domain name
Type: A
Content: Your IP Address goes here
TTL: 3600

For my example, this would be:
Name: www.joshprewitt.com
Type: A
Content: my server's IP address
TTL: 3600

You can think of it as when a request comes in with NAME, send it over to CONTENT.

This should get you started with setting up DNS in Cloud Servers.