Category Archives: Non-Website Related

Fix Clipboard History for Mac Crashing

Everyone needs a clipboard history app. I just don’t understand how people are able to be efficient when their clipboard can only hold one item!

For the past few years, I have used Clipboard History, and while I am sure that newer/better options have popped up, it has worked for me and I had no reason to change. That is, until tonight.

I apparently copied WAY too much out of Excel, multiple times, which caused Clipboard History to freak out and required me to kill the process from terminal. Every time I restarted the app, it immediately crashed again because it was trying to load my old history. This persisted even after a reboot, so I couldn’t do anything with the app at all.

In case anyone else runs into this, I was able to restore the app by manually tracking down the history and deleting it.

Fire up terminal, and navigate to ~/Library/Application Support/com.agileroute.clipboardhistory

In this directory, there is a directory for each of your clips, each with its own ID number. Inside those directories are the actual clip itself (in HTML, txt, etc.) and a properties file. It is important to note that your “Favorite” clips, which never go away, are also stored in here.

To fix Clipboard History, I ran “ls -alt | head” to see the IDs of my oldest clips. These are your “Favorites”, and you likely want to save them. Once you have identified what you want to keep, move it somewhere safe, delete everything else, and then move it back. I did the following (assume that 322 and 2608 are my “Favorites”):
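Roughly like this; it is simulated below in a scratch directory so it is safe to try (swap APPDIR for the real path when you do it for real):

```shell
# Real path: APPDIR="$HOME/Library/Application Support/com.agileroute.clipboardhistory"
APPDIR="$(mktemp -d)"                 # scratch stand-in for this demo
mkdir "$APPDIR/322" "$APPDIR/2608" "$APPDIR/4100" "$APPDIR/4101"

SAFE="$(mktemp -d)"                   # holding area outside the app dir
cd "$APPDIR"
mv 322 2608 "$SAFE"/                  # stash the Favorites first
rm -rf -- */                          # delete every remaining clip directory
mv "$SAFE"/* .                        # bring the Favorites back
ls
```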

Close terminal, and then fire up Clipboard History again, and all should be right with the world.

TabIt – Open Tabs in Chrome with a Variable

I constantly find myself needing to open a lot of similar tabs to research something at work. For example, maybe I want to review account notes on a few dozen accounts, or I want to audit 30-40 support tickets.

Most web apps just pass unique identifiers in the URL itself. For example, if you use Zendesk, a ticket URL might be https://subdomain.zendesk.com/tickets/123456 where 123456 is the Ticket ID number. You might have an internal CRM tool that has a URL like https://internal.company.com/accounts/95438.

It doesn’t take a genius to figure out that by just changing the 95438 in the URL to a different number, you pull up a different account. So instead of going through the trouble of searching in the web app, most people just change the URL to the account they are looking for, or create a custom search in their browser.

When I have more than 4 or 5 of the same type of item I want to review (Tickets, Accounts, whatever), it becomes tedious to copy each item, open a new tab manually, paste the variable, and wait for the page to load.

For cases like this, I wrote a simple bash utility that I call tabit. Tabit has one job: Read input from a file and open a bunch of tabs with the specific variables. Here is a quick example using Youtube (since I obviously won’t share the links to Rackspace’s internal tools).

Say you have a list of the 10 most popular Youtube Video IDs.
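For illustration, here are a few well-known video IDs (an arbitrary selection, one per line):

```
dQw4w9WgXcQ
9bZkp7q19f0
kJQP7kiw5Fk
```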

We know that Youtube’s URL format is https://www.youtube.com/watch?v=<Unique ID>. Instead of manually copying/pasting that 10 times, here is how you would do it with Tabit.

First, grab the source from github: https://github.com/joshprewitt/tabit or copy the script below.

Place it somewhere in your default path. If you don’t know what that means, just put it at /usr/local/bin/tabit

Make it executable:
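Presumably just:

```shell
chmod +x /usr/local/bin/tabit
```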

By default, Tabit looks for a file in your user’s home directory called ‘.tabit’. Paste the list of Youtube IDs into that file. If you are on a Mac and just copied the list above, the following will do the trick:
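On a Mac, pbpaste writes the clipboard to stdout, so presumably:

```shell
pbpaste > ~/.tabit
```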

All that is left to do now is run the script. Using the defaults, the syntax is really simple: Just run
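Judging from the help text below, the base URL is the only required argument; for YouTube, each video ID from ~/.tabit simply gets appended to it:

```shell
tabit https://www.youtube.com/watch?v=
```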

I added a few options like the ability to specify a file instead of the default ~/.tabit and also the option to have the tabs open in your current Chrome window. Lastly, if you happen to have a URL where the variable is NOT the last part of the URL, you can add on a suffix. You get the following help documentation if you run tabit with no arguments:

Help documentation for tabit

Basic usage: tabit [options] <start of URL with trailing slash> [Remainder of URL with leading slash]

Options:
-f <path to file> — File to read variables from. Default is ~/.tabit
-c — Use Current Window.

That’s it! After you get used to using this, you will find that it is much easier to just paste the variables into ~/.tabit and run the script instead of manually opening a bunch of tabs.

Keyboard Shortcuts

When working with dozens (to hundreds) of tabs, don’t forget these shortcuts

Shortcut Keys                                    Action
Cmd+Shift+{                                      Go to Left Tab
Cmd+Shift+}                                      Go to Right Tab
Cmd+W                                            Close Tab
Cmd+Shift+T                                      Re-Open Most Recently Closed Tab
Click a tab, then Shift+Click a different tab    Select Range of Tabs (Cmd+W will close all selected tabs)

Gotchas

The script assumes a standard installation of Google Chrome. If you installed it somewhere else, you will need to edit lines 58 and 62 with the correct path to Chrome on your system.

References

Optargs help came from http://tuxtweaks.com/2014/05/bash-getopts/

Picture: Browsers by Eightemdi from the Noun Project

Using an Amazon Dash button to control Philips Hue lights

It’s been about 3.5 years since I wrote a post – better late than never 😀

All of my old content was in some way inspired by customers that I was supporting as an admin for Rackspace Cloud. This one is different and just something geeky and fun.

The Dash button is a nifty little piece of technology that Amazon put out last year. A lot of people thought it was an April Fools joke when announced, but it was no joke and is meant to let Amazon customers purchase consumables at the push of a button anywhere in their house. I received one when my son was born in February that would let me purchase Huggies Diapers.


I thought instead of buying diapers with it, I could probably figure out a way to hijack the URL that it was calling to make the purchase and instead have it call my Philips Hue hub. It turns out, it’s actually even easier than that. See, Amazon was so deliberate about preserving battery life that they built these little guys to be completely turned off and when the button is pressed it connects to your router to get an IP address before making the call to Amazon for the purchase. A few seconds later, it turns off and the IP is released.

This is important because after the Dash button is issued an IP, it sends an ARP probe broadcast out to the rest of the network (an ARP probe asks whether any other device is already using the address it was just assigned).

So all we need to do is have some other device on the network constantly waiting to see the ARP probe and, when it sees it, kick off a script to interact with the lights. This can be some other computer in the house. For me, I used a Raspberry Pi.

Getting Started – Setting up the Dash Button

The first thing you need to do is connect the dash button to your wireless network. The dash button uses the speaker on your phone to send a magically encrypted sound to the dash button, which it interprets as the wifi settings to be able to connect to your network. Can we just take a moment to appreciate how freakin’ cool that is?

Following the steps below will connect the dash button to your network, but not actually configure it to buy a product. Otherwise, you will have a box of diapers (or Ziploc bags, or razor refills) showing up on your doorstep every time you turn on the lights. Not cool.

  1. Use your phone to download the Amazon app
  2. In the Amazon app, log in to your Amazon account
  3. In the app, go to “Your Account -> Dash Devices -> Set up a new device”
  4. Go through the steps to enter your wifi credentials and connect the dash button. But…
  5. Do NOT actually pick a product for the dash button to buy. After it is connected to your network and you are on the “Choose a Product” page, just close the app, never actually completing setup.

Collecting all of the information

To make this work, you will need to know the following

  1. IP address of your Hue Bridge
  2. A valid user for the bridge
  3. The ID number of the light(s) you want to control
  4. The MAC address of the Dash button

IP address of your Hue Bridge

If you have never before tinkered with the API for your Philips Hue system, then you have probably never set up a new user. No worries, we will make it pretty easy. First, you need to find the IP of your bridge. Philips makes this pretty painless in most cases. Just go to https://www.meethue.com/api/nupnp while on your network and it will look for the bridge. Tada!

Note: If that didn’t work, you can use the official Hue app: go to the Settings menu, then My Bridge, then Network settings. Switch off the DHCP toggle and the bridge’s IP address will be shown. Note the IP address, then switch DHCP back on.

A valid user for the bridge

Now to register a new user. Craft a POST to http://<IP-of-Your-Bridge>/api with the payload ‘{“devicetype”:”my_dash_button_user”}’. The following cURL command should do the trick. Note: You MUST push the button on your bridge FIRST and run the following command while the light is flashing. Otherwise, you will get a response that says “link button not pressed”
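A cURL sketch of that request (press the link button on the bridge first, then run this within the flashing window):

```shell
curl -s -X POST "http://<IP-of-Your-Bridge>/api" \
     -d '{"devicetype":"my_dash_button_user"}'
```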

In the response, you are looking for the randomly generated username. It is 32 characters long and will be between quotation marks. Copy that and save it somewhere.

The ID number of the light(s) you want to control

Now that we have the IP of the bridge and a valid user, we need to figure out the ID number of the light we will be controlling. The script below can be run to list all of your Hue bulbs and their IDs.
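A standard-library sketch of such a script (the bridge IP and username below are placeholders to edit):

```python
#!/usr/bin/env python3
# showLights.py -- list every bulb the bridge knows about, with its ID
import json
import urllib.request

BRIDGE_IP = "192.168.1.2"                      # your bridge's IP address
USERNAME = "your-32-character-username-here"   # the API user created above

def format_lights(lights):
    """Render the bridge's lights JSON as 'id: name' lines."""
    return [f"{lid}: {info['name']}"
            for lid, info in sorted(lights.items(), key=lambda kv: int(kv[0]))]

def show_lights(bridge_ip, username):
    url = f"http://{bridge_ip}/api/{username}/lights"
    with urllib.request.urlopen(url, timeout=5) as resp:
        lights = json.load(resp)
    print("\n".join(format_lights(lights)))

if __name__ == "__main__":
    try:
        show_lights(BRIDGE_IP, USERNAME)
    except OSError as exc:
        print(f"could not reach the bridge: {exc}")
```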

Edit it to include the correct variables for your bridge IP and user, save it as showLights.py, and run it!

The MAC address of the Dash Button

Now we need to capture the MAC address of the Dash button. The easiest way to do this is with scapy. Scapy is a packet manipulation library for Python. I installed it via pip:
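Presumably:

```shell
sudo pip install scapy
```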

With it installed, you can use a simple script to find the MAC address. I created a file called findMac.py
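A sketch of such a script using scapy's standard sniff/ARP pieces (run it with sudo, since sniffing needs root):

```python
#!/usr/bin/env python3
# findMac.py -- print the MAC address of anything sending an ARP probe
from scapy.all import ARP, sniff

def handle(pkt):
    # An ARP probe is a who-has request (op == 1) sent from source IP 0.0.0.0
    if ARP in pkt and pkt[ARP].op == 1 and pkt[ARP].psrc == "0.0.0.0":
        print("ARP probe from MAC:", pkt[ARP].hwsrc)

sniff(filter="arp", prn=handle, store=0)
```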

Paste this into a text document, and run it from terminal.

While the script is running, press your dash button. Within 1-5 seconds, you should see a MAC address show up. Press ctrl+c to stop the findMac.py tool. Copy the MAC address and save it somewhere.

You’ve come a long way – now we just need to put it all together

You can edit the script below to toggle a single light on/off using your dash button
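A sketch of such a script (every variable at the top is a placeholder to edit; like findMac.py, it needs scapy and root):

```python
#!/usr/bin/env python3
# toggle.py -- flip one Hue light each time a specific Dash button is pressed
import json
import urllib.request

from scapy.all import ARP, sniff

BRIDGE_IP = "192.168.1.2"                      # your bridge's IP
USERNAME = "your-32-character-username-here"   # the API user created earlier
DASH_MAC = "ac:63:be:00:00:00"                 # MAC found with findMac.py
LIGHT_ID = "1"                                 # ID found with showLights.py

def hue_request(path, body=None):
    """GET (or PUT, when a body is given) against the bridge's API."""
    url = f"http://{BRIDGE_IP}/api/{USERNAME}/{path}"
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data,
                                 method="PUT" if body is not None else "GET")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def toggle():
    is_on = hue_request(f"lights/{LIGHT_ID}")["state"]["on"]
    hue_request(f"lights/{LIGHT_ID}/state", {"on": not is_on})

def handle(pkt):
    if ARP in pkt and pkt[ARP].hwsrc.lower() == DASH_MAC.lower():
        toggle()

sniff(filter="arp", prn=handle, store=0)
```

Make it executable with chmod +x toggle.py and run it with sudo.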

Copy/Paste the script, modify the variables at the top and then save it as toggle.py. Make the file executable and then run it!

With the script running, push the Dash button. With any luck, you will get a few blinks on the dash button and in about 3 seconds your hue light will turn on. Push it again and the light will turn off!

Three Second Delay?!? What the heck!

Yes, this isn’t a light switch. Think about everything that has to happen:

  1. Your Dash button needs to power on
  2. Dash connects to your wireless network and requests an IP address
  3. IP is assigned to Dash
  4. Dash sends a probe out to the rest of the network
  5. Your RaspberryPi (or other computer) sees the probe
  6. Your RaspberryPi calls to your hue bridge to get the current status of the light
  7. Your RaspberryPi calls to your hue bridge to request the light turn off/on

So yeah, it takes 3 seconds. Deal with it. :-p

Making it Persist

The script above is a neat party trick, but if you actually intend to use the dash button regularly to turn on/off your lights, you need to make sure that the python script is always running, that multiple copies of the script aren’t running simultaneously, and that it starts on boot automatically in case your Pi loses power. This is where an init script comes in handy. Below is a modified script from http://blog.scphillips.com/posts/2013/07/getting-a-python-script-to-run-in-the-background-as-a-service-on-boot/

Create this script as /etc/init.d/dash-lights and edit lines 14-16 and line 20 based on where your files are stored, then change the permissions so it is executable:
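Presumably:

```shell
sudo chmod 755 /etc/init.d/dash-lights
```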

Next, make sure that it starts on boot:
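On Raspbian (Debian), that is:

```shell
sudo update-rc.d dash-lights defaults
```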

You can now start/stop this service just like you would any other daemon:
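For example:

```shell
sudo /etc/init.d/dash-lights start
sudo /etc/init.d/dash-lights stop
```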

Extending the toggle script

The script above can get you started, but you may want to extend it to control multiple lights with the push of a button, or maybe have multiple dash buttons around the house for different lights. Here is an example of my toggle.py script. I have one dash button that controls two lamps in the living room, and a different dash button that controls the light in the nursery.
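A sketch of how that might look: replace the single-light handler in toggle.py with a MAC-to-lights lookup. All MACs and light IDs below are made up, and hue_request/ARP are the same helpers the toggle script uses:

```python
# Map each Dash button's MAC to the Hue light IDs it should toggle.
# (MACs and IDs are made up -- use your own from findMac.py / showLights.py.)
BUTTONS = {
    "ac:63:be:00:00:01": ["1", "2"],   # living room button: both lamps
    "ac:63:be:00:00:02": ["3"],        # nursery button
}

def handle(pkt):
    # hue_request() is the bridge helper from toggle.py; ARP comes from scapy
    lights = BUTTONS.get(pkt[ARP].hwsrc.lower()) if ARP in pkt else None
    for light_id in lights or []:
        is_on = hue_request(f"lights/{light_id}")["state"]["on"]
        hue_request(f"lights/{light_id}/state", {"on": not is_on})
```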


References

Here were some of the resources that helped along the way

Creating an init script for a python script http://blog.scphillips.com/posts/2013/07/getting-a-python-script-to-run-in-the-background-as-a-service-on-boot/

Using scapy to catch ARP probes from a dash button: https://medium.com/@edwardbenson/how-i-hacked-amazon-s-5-wifi-button-to-track-baby-data-794214b0bdd8#.spdcsx9ou

Philips Hue API Docs: http://www.developers.meethue.com/philips-hue-api

I put all the scripts I referenced above in github at https://github.com/joshprewitt/dash-lights

Using Sed to add a new line in Mac Terminal

This was driving me crazy until I hunted down the correct syntax. The following will replace a semicolon with a new line in Terminal for Mac

sed -e 's/;/\'$'\n/g' /tmp/blah
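To see it in action, put some semicolon-separated text in /tmp/blah first:

```shell
printf 'one;two;three\n' > /tmp/blah
sed -e 's/;/\'$'\n/g' /tmp/blah
```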

dsh (Dancer’s Shell / Distributed Shell) and you

dsh is an awesome tool for administering pools of servers where you would just want to run the same few commands on each one. I run Mac OSX locally, so I’ll write the article from that perspective:

Install DSH on a Mac

First and foremost, you need to install dsh. The downloads page for the project is a nightmare (http://www.netfort.gr.jp/~dancer/software/downloads/list.cgi), but you basically want the latest version of libdshconfig and dsh. At the time of this writing, that would be 0.20.13 and 0.25.9 respectively.

I just dropped them into /tmp for the time being:

Then go through the normal install from source process, starting with libdshconfig
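Presumably the usual ./configure dance for each tarball, libdshconfig first (version numbers as of this post):

```shell
cd /tmp
tar xzf libdshconfig-0.20.13.tar.gz
cd libdshconfig-0.20.13
./configure && make && sudo make install

cd /tmp
tar xzf dsh-0.25.9.tar.gz
cd dsh-0.25.9
./configure && make && sudo make install
```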

Now you should be able to run dsh and have it return an error that no machine was specified.

Configuring DSH

You will want to set up RSA keys for your user on each of the machines that you want to log in to remotely so that you are not prompted for a password (this is outside the scope of this article; there are about a gazillion articles online that will teach that). Once the keys are in place, you will want to create group files. You will need to mkdir -p ~/.dsh/group and then create a text file in the group directory that lists the machines you want to connect to. Here is an example:
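A hypothetical ~/.dsh/group/web file, one user@host per line (hostnames made up):

```
admin@web01.example.com
admin@web02.example.com
```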

This sets the user and the host that you want in the “web” group.

Next up is a very important configuration change. dsh wants to use rsh by default instead of ssh. You will need to edit /usr/local/etc/dsh.conf as an Administrator to change that. Just change the line:

to read:
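In a stock dsh.conf the line in question presumably looks like this (exact spacing may vary):

```
# change:
remoteshell =rsh
# to:
remoteshell =ssh
```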

Save the file, and you are ready to go.

Actually using DSH

Ok, now for the magic. Assuming you have a group named ‘web’, you could run:
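Given the flag descriptions that follow, presumably:

```shell
dsh -g web -c -M -- uname -a
```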

This will return the results of uname -a for each server. The -c flag does it concurrently instead of going to each machine one at a time. The -M flag tells it to list the machine name by the response.

Other stuff

I prefer to always see the machine name, so instead of always specifying -M, I created a new file at ~/.dsh/dsh.conf and included the line “showmachinenames=1”. You can set other options here too. For example, say you use a non-standard ssh port. You could specify it on the command line with -o:
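Presumably:

```shell
dsh -g web -M -o "-p 2222" -- uname -a
```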

OR, you can set dsh to always use a different port by adding the line “remoteshellopt=-p 2222” to your configuration file.
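So a hypothetical ~/.dsh/dsh.conf might read:

```
showmachinenames=1
remoteshellopt=-p 2222
```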

Other sources if my article didn’t make sense:

Check out Racker Hacker’s post: http://rackerhacker.com/2010/01/20/crash-course-in-dsh/

Bash One Liner that uses multiple variables

I’m not sure if this particular post will come in handy to a lot of people, but my bash scripting is still pretty weak and I think the best way to commit this to memory will be to write a short post on it.

I find myself doing one line “for” statements from the command line all the time to make quick loops. Maybe I need to loop through a list of servers and delete them all, maybe I need to ping a group of servers and see if they all reply, maybe I need to build a lot of servers of different flavors at once, etc.

Whatever the reason, I have occasionally come across the need for multiple variables in my loop. For example, let’s say that I have a list of data like this:

That format may look familiar if you use the Python Command Line tool for Rackspace Cloud Servers (http://pypi.python.org/pypi/python-cloudservers/1.2)

Now let’s say that I want to create an image of the 8 servers listed above. I would usually put the above table in a tmp file, and just call out the Server ID. Something like:
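Something in this spirit (made-up data; only the ID column makes it into the loop):

```shell
printf '11114 Cherry ACTIVE 10.1.1.4\n' > /tmp/servers   # the pasted table

for id in $(awk '{print $1}' /tmp/servers); do
  echo "creating image Image-of-${id}"   # the actual imaging command would go here
done
```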

This gets the job done, but it is pretty ugly. I end up with image names like “Image-of-11114”, which can be more difficult to read than “Image-of-Cherry”

Using something like the following, I can allow multiple variables in my for loop:
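A runnable sketch with made-up server data (the awk calls pull whichever columns you need out of x):

```shell
# fake server list in the same format as the table above:
cat > /tmp/servers <<'EOF'
11111 Apple  ACTIVE 10.1.1.1
11112 Banana ACTIVE 10.1.1.2
11113 Cherry ACTIVE 10.1.1.3
EOF

while read -r x; do
  id=$(echo "$x" | awk '{print $1}')
  name=$(echo "$x" | awk '{print $2}')
  echo "creating Image-of-${name} from server ${id}"
done < /tmp/servers
```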

The whole idea is to read the full line into a variable (x) and then loop through the line, assigning a specific variable to whichever fields I need.

Building a Rackspace Cloud Server from Cloud Files Manually

This article will cover how to manually take image files in Cloud Files and build them to a new Cloud Server. This will ONLY work for Linux. I don’t have a clue how to make this work on Windows :-p

There can be several reasons why you would want to do this. Maybe you want to manually move from a US datacenter to a UK datacenter. Maybe you have an account and your boss/co-worker/friend has an account and you want to share images. Whatever the reason, these are the steps to make it work.

Credit where credit is due: The idea for this was originally published at http://failverse.com/manually-creating-a-cloud-server-from-a-cloud-files-image/ (Thanks Jordan and Dewey). My article will just cover doing it using curl instead of wget, and some of the potential pitfalls not covered in their article.

So here goes! First things first, you will need to start up a new stock server in the receiving account that is the EXACT same image as the server that the image was built from. For example, if the image in Cloud Files was originally taken from a server running CentOS 5.5, you will build a stock image that is running CentOS 5.5 in the receiving account. Login to the new server you built.

Make a backup of the new server’s /etc directory. You will need this later:
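Presumably (the list later in this post calls the copy /etc.bak):

```shell
cp -a /etc /etc.bak
```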

If necessary, install curl (Some distros of linux come with it, others don’t).

Authenticate to the Cloud Files Account where the image is stored:
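This is the same auth call used elsewhere on this blog; substitute your own credentials:

```shell
curl -D - -H "X-Auth-User: YourUsername" -H "X-Auth-Key: YourAPIKey" \
     https://auth.api.rackspacecloud.com/v1.0
```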

After you run that, it will spit out a list of names and values, like this:

We care about X-Storage-URL (That is where the image files are stored) and X-Storage-Token (This is your authentication token that lets you actually download stuff). Now let’s see a list of all of the image files in the account. Replace your Storage Token and your URL below. Don’t forget the /cloudservers at the end of the URL.
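Presumably, with angle-bracket placeholders for your own values:

```shell
curl -H "X-Auth-Token: <your-storage-token>" "<your-storage-url>/cloudservers"
```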

As you can see above, there are several files associated with each image. All of the data is stored in the .tar.gz files. The .yml file is a configuration file that for this article we don’t care about. You will see that some of the images have more than one .tar.gz file. This happens when the image is larger than 5GB and it gets chunked into multiple objects in Cloud Files. We will assume that we are working with a chunked image because that will make it just a little bit harder.

Let’s grab the delweb1ssl image. Grab the first chunk like this:
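Presumably something like this (assuming the chunks are named imagename.tar.gz.0, .1, and so on):

```shell
curl -H "X-Auth-Token: <your-storage-token>" \
     -o delweb1ssl.tar.gz.0 \
     "<your-storage-url>/cloudservers/delweb1ssl.tar.gz.0"
```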

This can be up to 5GB, so it may take a little while. Next up, download the 2nd chunk (and then third, fourth, etc)
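Same command, next chunk:

```shell
curl -H "X-Auth-Token: <your-storage-token>" \
     -o delweb1ssl.tar.gz.1 \
     "<your-storage-url>/cloudservers/delweb1ssl.tar.gz.1"
```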

Note that we are just changing what file we are getting and what we are calling it locally. Do this for as many .tar.gz files as there are in the account.

Now that we have all of the images downloaded, cat them together to make one big image
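Presumably cat in numeric order. The mechanics are simulated here on a scratch file (split stands in for Cloud Files' 5GB chunking) so the round trip can actually be verified:

```shell
cd "$(mktemp -d)"
printf 'pretend this is a multi-gigabyte image\n' > original.tar.gz

split -b 10 original.tar.gz chunk.   # stand-in for the 5GB chunking
cat chunk.* > delweb1ssl.tar.gz      # real files: cat delweb1ssl.tar.gz.0 delweb1ssl.tar.gz.1 > delweb1ssl.tar.gz
cmp delweb1ssl.tar.gz original.tar.gz && echo "chunks reassembled correctly"
```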

***POTENTIAL PITFALL***
If the new server isn’t large enough to hold the stock image, the downloaded images from Cloud Files, AND the concatenated image, you may run out of disk space. For this reason, you might want to just start with a huge 8GB (320GB Hard Drive) or 16GB (640GB Hard Drive) server and downsize after you are done with this.

Now that we have the one big image, we need to extract it onto the filesystem. More than likely, you will need the newest version of tar to have the --hard-dereference option available. Your choices are to either download tar and install it from source, or grab a fully compiled version of tar here. (Thanks again Jordan). We’ll use the compiled version because it is just easier.

This can take awhile.

Remember when we backed up /etc above? (You did that, right?) Now we will want to bring that back in. However, if we just completely overwrite the /etc directory that we just extracted, we will lose things like our users, groups, iptables, etc., because they will be overwritten with the default values. To make sure we always have the /etc directory from the tar available, save that as another backup directory:
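Presumably (destructive, so double-check the paths; this also swaps the stock /etc back in, which is the state described just below):

```shell
cp -a /etc /etc.tar     # keep the tarball's /etc for cherry-picking later
rm -rf /etc
cp -a /etc.bak /etc     # put the stock /etc back
```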

Ok, now we have 3 etc directories:
/etc = The version off of the backup
/etc.bak = The stock image /etc directory with all defaults
/etc.tar = A backup copy of the etc directory from the tarball

From here you can manually bring over your network config files and anything else necessary from the default image, but I prefer to just replace the entire /etc directory with stock data and bring over what I need from the /etc.tar directory later.

Depending on what distro you are running, you will also want to grab your iptables rules from /etc.tar. In anything RHEL-based it would be:
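Presumably:

```shell
cp /etc.tar/sysconfig/iptables /etc/sysconfig/iptables
service iptables restart
```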

That’s pretty much it. Cross your fingers, reboot and see if it comes back up!

Rackspace Cloud Load Balancer as a Service Cheater Script

Rackspace Cloud Load Balancer as a Service is awesome. It is an amazing product that makes load balancing sites really easy and abstracts away having to setup and configure one on your own. As of right now, it is only available via the API while a full blown GUI is being developed for the control panel. The API docs are very good and can be found at http://docs.rackspacecloud.com/loadbalancers/api/clb-devguide-latest.pdf

UPDATE: Forget this exists. Caleb Groom has an awesome project on github that uses python and will let you manage your Load Balancers. https://github.com/calebgroom/clb

Creating a Load Balancer requires you to authenticate with your Username and API key, and then create an XML request with all of your settings in it that you send to the API. I made a very, very simple bash script that will write the XML for you. I’m not a programmer. This can be improved immensely, and there is no error catching or validation of what you type in.

Anyway, here is the script:
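A minimal sketch of the idea: prompt for the handful of required values, then print the curl command to run. Everything here (prompts, the ORD endpoint, the XML shape) is illustrative; check it against the API docs linked above before relying on it.

```shell
#!/bin/bash
# makelb.sh (sketch) -- ask a few questions, emit the curl that creates the LB.
# No validation whatsoever, just like the original.

read -p "Load balancer name: " NAME
read -p "Protocol (HTTP, HTTPS, ...): " PROTO
read -p "Port: " PORT
read -p "Back-end node IP: " NODEIP
read -p "Account number: " ACCT
read -p "Auth token: " TOKEN

XML="<loadBalancer xmlns=\"http://docs.openstack.org/loadbalancers/api/v1.0\" name=\"$NAME\" port=\"$PORT\" protocol=\"$PROTO\"><virtualIps><virtualIp type=\"PUBLIC\"/></virtualIps><nodes><node address=\"$NODEIP\" port=\"$PORT\" condition=\"ENABLED\"/></nodes></loadBalancer>"

cat <<EOF

Run the following to create the load balancer:

curl -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/xml" \\
  -d '$XML' \\
  https://ord.loadbalancers.api.rackspacecloud.com/v1.0/$ACCT/loadbalancers
EOF
```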

So before you run that, you will need to authenticate and get your Auth token. To do that, run the following curl:

curl -D - -H "X-Auth-User: YourUsername" -H "X-Auth-Key: YourAPIKey" https://auth.api.rackspacecloud.com/v1.0

After you run that, it will spit out a list of names and values, like this:

HTTP/1.1 204 No Content
Date: Wed, 30 Mar 2011 04:15:28 GMT
Server: Apache/2.2.3 (Mosso Engineering)
X-Storage-Url: https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_6f597497-4986-44ea-9081-1234567890
X-Storage-Token: 63ea9670-c80f-402d-9657-1234567890
X-CDN-Management-Url: https://cdn1.clouddrive.com/v1/MossoCloudFS_6f597497-4986-44ea-9081-68b8ee123456
X-Auth-Token: 63ea9670-c80f-402d-9657-c59bdb123456
X-Server-Management-Url: https://servers.api.rackspacecloud.com/v1.0/123456
Content-Length: 0
Connection: close
Content-Type: application/octet-stream

You will need the Auth Token. In the made-up example above, that would be 63ea9670-c80f-402d-9657-c59bdb123456. You will also need your account number, listed above at the end of X-Server-Management-Url. In the fake example that is 123456.

Once you have those, invoke the bash script above with something like

sh makelb.sh

It will ask you some questions, most of them give you a list of available options. Once it is done asking questions it will spit out the curl command for you to run. Here is an example:

That’s it, copy and paste the curl command that it spits out and that will create the Load Balancer for you. Like I said, this is a VERY simple script that I primarily use just for setting up test load balancers. If you improve on it and make it totally awesome drop me a link!

Troubleshooting iptables on Rackspace Cloud Servers

A common issue when setting up iptables on a new cloud server is that users may append the record to the existing chain, without looking at the ruleset first.

iptables is read top to bottom. With a default installation of CentOS 5.5, the command iptables -L --line-numbers yields the following:

Looking at this, you can see that the INPUT chain has a single rule: to go read the RH-Firewall-1-INPUT chain. That chain then has 10 rules, with the last one being to reject all traffic. This means that if it isn’t explicitly allowed in rules 1-9, it ain’t gonna happen.

The problem comes in when you try to add a new rule using the -A flag, which appends the rule, meaning that the new rule goes to the bottom. Here is an example of that, and it is what you do NOT want to do:
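Presumably something like this, appending an allow rule for port 80:

```shell
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```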

Let’s assume that we did run this command. The new output of iptables -L –line-number would be:

See anything wrong here? Let’s look at the INPUT chain. The first rule is to read the RH-Firewall-1-INPUT chain, which has 10 rules. After it reads through that chain, the next rule from the INPUT chain would be read, the rule that we just added for opening port 80.

Problem is, RH-Firewall-1-INPUT said in line 10 to reject anything that didn’t match. That means that your rule for opening port 80 will never even be looked at, requests will just be rejected.

Ok, so we need to remove the bad rule and do it right. First, let’s get rid of the bad rule by removing it based off of the line number
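Based on the breakdown that follows, the command is:

```shell
iptables -D INPUT 2
```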

To break this command down for you:
iptables: should be pretty obvious…
-D: This option is for DELETE
INPUT: Specify the chain we want to delete from
2: Specify the line number of the rule to remove.

After running that, my bad rule from above will be gone. Now I need to do it the RIGHT way:
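Presumably:

```shell
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
```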

This rule looks an awful lot like the one above that I told you not to use, but look closely and you will see that instead of -A for append, this rule uses -I for insert, which will put the rule at the TOP of the list. Running iptables -L --line-numbers now yields the following:

Nice – Now the rule about allowing port 80 will be read FIRST, and then it will read the RH-Firewall-1-INPUT chain.

Always remember to save! If you do not save your ruleset, when the box reboots all of your rules will be lost!

For Redhat, CentOS, and Fedora:
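Presumably the classic:

```shell
service iptables save
```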

For Ubuntu:
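Ubuntu has no save service; one common approach is to dump the rules to a file and restore them at boot (e.g. from /etc/network/if-pre-up.d):

```shell
iptables-save > /etc/iptables.rules
```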

For all other distros:
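iptables-save and iptables-restore ship with iptables itself, so on anything else:

```shell
iptables-save > /etc/iptables.rules      # save
iptables-restore < /etc/iptables.rules   # restore (run this at boot)
```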

Figured out how to get rid of the (!) Exclamation point in iTunes

I finally sat down and installed iTunes on the new computer and was pleased to find that it found my library from my dead computer. The only problem was that all my music on my old computer was in E:/users/Josh/Music/whateverFolder, and on my new computer I just have one partition (because let’s face it – Windows is so jacked up that using multiple partitions to preserve data in a catastrophe is a joke), so the files are in C:/users/Josh/Music/SameDirectoryStructure.

Surely this wouldn’t be too difficult, I mean, obviously Apple saw this coming and would give me some super easy way to fix it like a search button. Nope. Ok, maybe if I change one song it will be smart enough to look in the same directory structure for any other song that it can’t locate. Nope again.

I found a few freeware and shareware programs that said they could do the job, no dice there either so I decided to take things into my own hands.

In your iTunes directory you should have an iTunes Music Library.xml file. Make a copy (always a good idea) and work with the live one. A file-wide find-and-replace from E:/users/ to C:/users did the trick for me. Save it, open up iTunes, and… it failed miserably.

Turns out that iTunes first looks in the iTunes Library.itl file and if it finds something in there it just overwrites the xml file and never even reads it. To fix that little feature, move iTunes Library.itl to a bak directory, make the changes to the xml file, and then kick up iTunes. This will confuse the hell out of iTunes because now it can’t find anything. Go to File>Library>Import a playlist, navigate to the xml file, load it up and everything will work.