This site got ‘hacked’

For the last few days I’ve been trying to figure out how this site became compromised. This is not the first time it’s happened and probably not the last. It’s always fun to get an email from the good folks at Bluehost saying that you’ve violated their TOS and your site has been shut down. Every time it has happened it seems like the same type of attack: somehow there is remote file inclusion, which leads to code execution, which turns into full compromise. Last time it happened there were just files littered about in every directory. Easy enough to clean up. This time it randomly patched the WordPress application PHP files, completely destroying my WordPress install. The WordPress backup also duplicated these files into the backup, making restoring impossible.

I chose the easy way out: I took the wp-config.php file and moved it to a completely new set of WordPress files. After some minor configuration changes I was back up and running. Of course I did the required things as well: password changes, SSH key changes, and permission resetting.

Since this is not the first compromise this year, I started thinking about how the attack may have happened. It seems most likely that a plugin was to blame for the remote file inclusion, since Bluehost automatically updates WordPress for me. Being more curious about how it happened, I’ve decided to put my site in a Git repo so that I can quickly track changes to the files and roll back quickly if I get compromised again. Since I’m on Bluehost’s shared hosting it will be difficult to get file monitoring, so I think running a Git repo is the next best alternative. I’ll let you know how it turns out.

For the record, do not put your site in a public Git repo like GitHub without excluding confidential files like wp-config.php via .gitignore.

If you want to run your WordPress install in a Git repo, do the following:

#> cd
#> git init
#> git add .
#> git commit -am "Git repo for my WordPress"
#> cd .git
#> echo "Deny from all" > .htaccess #If you don't want the world to view your .git repo
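
If the site does get hit again, the repo makes it quick to see and undo what changed. A minimal sketch using standard Git commands (my addition, assuming the repo created above):

#> echo "wp-config.php" >> .gitignore #Per the note above; use git rm --cached wp-config.php if it's already tracked
#> git status --short #List files that were added or modified since the last commit
#> git diff #Inspect the actual changes to tracked files
#> git checkout -- . #Revert tracked files back to the last good commit
#> git clean -nd #Preview untracked (dropped-in) files; rerun with -fd to delete them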

The missing SSH command reference

I referenced the following website a lot over the years, but it has now gone offline :( So I grabbed a copy of the info and am posting it here, for me and for you! Careful, some of these commands may overpower you.

Network File Copy using SSH
Updated February 20, 2003
Created April 23, 2001

Please note that &&, ||, and - are documented at the bottom of this page.

tar cvf - . | gzip -c -1 | ssh user@host cat ">" remotefile.gz
ssh target_address cat ">" remotefile < localfile
cat localfile | ssh target_address cat ">" remotefile
cat localfile | ssh target_address cat - ">" remotefile
dd if=localfile | ssh target_address dd of=remotefile
ssh target_address cat ">" remotefile.tar < localfile.tar
( cd SOURCEDIR && tar cvf - . ) | ssh target_address "( cd DESTDIR && cat - > remotefile.tar )"
( cd SOURCEDIR && tar czvf - . ) | ssh target_address "(cd DESTDIR && cat - > remotefile.tgz )"
( cd SOURCEDIR && tar cvf - . | gzip -1 -) | ssh target_address "(cd DESTDIR && cat - > remotefile.tgz )"
ssh target_address "( nc -l -p 9210 > remotefile & )" && cat source-file | gzip -1 - | nc target_address 9210
cat localfile | gzip -1 - | ssh target_address cat ">" remotefile.gz

ssh target_address cat remotefile > localfile
ssh target_address dd if=remotefile | dd of=localfile
ssh target_address cat "<" remotefile >localfile
ssh target_address cat "<" remotefile.gz | gunzip >localfile

###This one uses CPU cycles on the remote server to compare the files:
cat localfile | ssh target_address diff - remotefile

###This one uses CPU cycles on the local server to compare the files:
ssh target_address cat remotefile | diff - localfile

ftp> get file.gif "| xv -"
ftp> get README "| more"

ftp> put "| tar cvf - ." myfile.tar
ftp> put "| tar cvf - . | gzip " myfile.tar.gz

ftp> get myfile.tar "| tar xvf -"

Pipes and Redirects:
zcat | gv -
gunzip -c | gv -
tar xvf mydir.tar
tar xvf - < mydir.tar
cat mydir.tar | tar xvf -
tar cvf mydir.tar .
tar cvf - . > mydir.tar
tar cf - . | (cd ~/newdir; tar xf -)
gunzip -c foo.gz > bar
cat foo.gz | gunzip > bar
zcat foo.gz > bar
gzip -c foo > bar.gz
cat foo | gzip > bar.gz


Explanation of &&, ||, and -
&& is shorthand for "if true then do"
|| is shorthand for "if false then do"
These can be used separately or together as needed. The following examples will attempt
to change directory to "/tmp/mydir"; you will get different results based on whether 
"/tmp/mydir" exists or not.
cd /tmp/mydir && echo was able to change directory
cd /tmp/mydir || echo was not able to change directory
cd /tmp/mydir && echo was able to change directory || echo was not able to change to directory
cd /tmp/mydir && echo success || echo failure
cd /tmp/mydir && echo success || { echo failure; exit; }

The dash "-" is used to reference either standard input or standard output. The context in which the dash is used is what determines whether it references standard input or standard output.
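
For example (my own illustration, not from the original page), the same dash means standard output on the left side of this pipe and standard input on the right:

tar cvf - . | ssh target_address "cd DESTDIR && tar xvf -"

The first dash tells tar to write the archive to stdout, which ssh carries across; the second tells the remote tar to read the archive from stdin.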


True Path of the Command Line Ninja – UT Code Camp 2014

Here is my write-up for UT Code Camp. More will be added here when I get a few minutes.

CLI History
In 1979 the Bourne Shell was included with Unix Version 7, known on the system as sh. It has been the standard command line for Unix ever since.

The C Shell was one of the first alternatives; it was written by Bill Joy at UC Berkeley and was included in the Berkeley Software Distribution (BSD).

The Bourne Again Shell, or Bash, is a rising favorite since it is part of the GNU project and ships with most modern-day *nix systems and even Mac OS X.

Windows has DOS. In the early years it was the entire operating system, and when Microsoft released Windows it initially ran on top of DOS until they created the NT kernel. The DOS-like command line tool still allows access to many low-level applications and functions. PowerShell, in my opinion, is the best command line implementation Microsoft has made.

cd, ls (not dir), mv, cp, cat, touch, pwd, which, find, df
head, tail, uniq, sort
man bash


ifconfig, iwconfig, ping, wget, netcat, curl, nmap
traceroute, ssh
curl '' -H 'accept-encoding: gzip,deflate,sdch' -H 'accept-language: en-US,en;q=0.8' -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.146 Safari/537.36' -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' -H 'cache-control: max-age=0' -H 'cookie: PREF=ID=af8ee0d69bafb80a:FF=0:TM=1394877984:LM=1394877984:S=wZX38BMQRFKqC2Cp; NID=67=cBtItqV2FibJcSyFnBCqTs5fPY6r9AX7c7UdEeMIKdgxOlz7-KVML_kntMNmewG8QYaPh8EeaQ6yhPMOnPt4iCTVfx07hyHD7DUVC_D6JZ5MVjbA2lkXCdhiN04lq6qa' --compressed

perl -e 'print "Perl via CLI is fun"x10'
python -c "print unichr(190)"
awk -F ":" '{print $1 | "sort" }' /etc/passwd

|, >, <, >>, <<, 2>&1
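
A few illustrative combinations of these (my own examples, not from the talk):

ls /etc > listing.txt
sort < listing.txt | uniq
make >> build.log 2>&1
cat << EOF >> notes.txt
a here-document feeds this block to stdin
EOF

The first redirects stdout to a file, the second reads stdin from a file and pipes the result onward, the third appends stdout to a file while sending stderr to the same place, and the last feeds an inline block of text to a command's stdin.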

tar vxfz helloworld.tgz && gcc *.c -o helloworld && ./helloworld


One liners
cd -
curl -> IP Address
curl -> Remote Host
curl -> User Agent
curl -> Port
curl -A "Mozilla" "" > hello.mp3 && afplay hello.mp3
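
For a concrete example of this style of one-liner (ifconfig.me is an assumption here, any similar service works):

curl ifconfig.me
curl ifconfig.me/ua

The first prints the public IP address your request came from; the second prints the User-Agent header curl sent.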


You should come to SAINTCON!

Utah’s annual information security conference is coming up October 15-18. This is an excellent opportunity for you to get outside of your comfort zone and learn something new. If you’re looking to hone some skills, I bet you can find an expert here.

I am grateful to be one of the speakers this year. My talk is titled, “Hack yourself before someone else does.” I will discuss the mindset and tools that can help you proactively defend your systems. I will be covering Recon-ng by Tim Tomes, the Social-Engineer Toolkit by TrustedSec, and the classics like Nmap, Wireshark, etc. Should be lots of fun!!

There will be lots of amazing speakers with a breadth of experience! Check out the website and I hope to see you there!

Feel free to contact me if you have questions.


Wakemate Revisited

After months of deliberation, I am finally back to write about my experience with the Wakemate. Let me start by saying this incredible device has helped me track and analyze my sleep. It has also helped me feel more rested when I wake up in the mornings. Overall this system has been pretty good.

Hardware – The wrist band was a little bit snug for me, but has relaxed more as I wear it. It’s a bit bulky when you’re not used to sleeping with something attached to your wrist. I also have not had any issues with it connecting to my Droid X phone. The main computer in the wristband comes out easily for washing. It has a standard mini USB plug which worked with a USB charger I had kicking around.

Software – The software is extremely easy to use. The App installed without issue on my Droid X. I quickly set up the alarm and was able to start my sleep cycle. The only beef I have with the App is that if you go to another program or switch to the home screen, when you come back to the App it is as if you opened it for the first time. It should be able to run in the background like other alarm applications. The included ring tones are quite pleasant.

Success – This product works by measuring your body’s movement through the accelerometer and transmitting it via Bluetooth to your phone. Once you wake up, it uploads the entire night’s data to the cloud for analysis. I have had a number of nights where I needed to stay up late working on a project and then wake up only a few hours later. After a few times of using the Wakemate, I was able to get nearly the same quality of sleep, but with a lot less time actually asleep.

One night I went to sleep at around 3:30 am and had a meeting at 7:00 am. I set the Wakemate and it woke me up at 6:45. I didn’t feel the same grogginess that I would normally feel because it woke me up from a higher level of sleep and not from within REM. The coolest part of this system is that it attaches a number to the result of your sleep: 100 being the best, 1 being the worst. It calculates the amount of time you were in REM and in each level of sleep to determine the quality of sleep you had. My average is 80, which isn’t too bad. You can also add tags to describe your sleep so that you can further analyze your sleep pattern given certain tags. For example, I have the tag “Working Late” which I attach to any of my scores where I go to bed right after I finish work. I can go back and see if I sleep as restfully when I fall asleep directly after work as when I have some down time.

Final Thoughts – This is a great system if you are consistent with it. My biggest problem is that I forget to charge the device when I leave for work and then am unable to use it the next night because the battery is low or dead. The next problem I have is leaving enough time each night to set it up and connect the Bluetooth. It is literally 2 steps, so this is pure laziness. I recommend this product to anyone looking to get more from their sleep, or to any nerd who loves stats.



WakeMate Sleep System

I am always fascinated by sleep systems and people’s inherent desire to defeat sleep. I started in college with all-night LAN parties where we would have contests to see who could consume the most caffeine and stay up the longest. Then homework replaced my parties, and long drawn-out projects were my games. Once I graduated and joined the real world of work and worry, I still find it useful to conquer sleep. I have read many statistics about how the human body doesn’t really need 8 hours, or that you can break up your sleep cycle, or that you can sleep less and accomplish more.

I initially started following a system where the theory was that getting up is a conditioned behavior. The idea was to train yourself to jump up at the sound of your alarm. Like any good habit, after a few days of sleeping in, my ability to jump up on subsequent days was completely gone.

After some frustration, my research led me to a scientific way of waking yourself up: the WakeMate. This system is based on an accelerometer worn around your wrist that communicates via Bluetooth with your Android or iOS device. I researched it, read the reviews, and finally bought it. The 30-day money back guarantee finally sold me on the deal, because if there were any problems I’d ship it back and be right where I was when I got it.

Tonight is my first night of using it. I pulled it out of the package and charged the battery. I synced it to my Droid X, which people in the reviews said was not possible. It is possible, and it worked seamlessly. I was able to log in to my account and set all the settings. I am ready to go to sleep now. Everything looks like it is working properly. Now all that is left to do is push the sleep button once I finish this write-up.

I am going to try and log my results with this system since my goals are to sleep less while having more fulfilling sleep. About 80% of my mornings are a fight with myself and my determination is starting to falter. Hopefully this system gives me the edge I need to avoid waking up during quality sleep. Wish me luck!!!



Locked SVN Repo

Have you ever been working on an SVN server and had to ask yourself, “Why in tar-nation is this file not committing?! And who is Joey to be so important as to lock a file?” So you gather all your rage and ask Joey why he had the audacity to lock the files you needed to commit, and he tells you he didn’t lock them, nor is he working in the same area of the code. Then you discover that he somehow accidentally locked a quarter of all the files scattered throughout the repository. Since you are very smart you went to the server command line and did:
svnadmin lslocks [path to repo]
And it told you all the locks in the system.

Now you should be thinking, how can I take that list and then unlock the whole server? I too faced this same problem, and this is how I dealt with it. First, don’t mess with svnadmin rmlock… I couldn’t make it work to save my life. The setup is like this: we are going to grep out some keywords from the svnadmin lslocks output to get only paths, then we are going to use awk to help us build fully qualified paths, then we are going to use the svn client to finish the job. And it all fits on one line.
*Important: Make sure your local copy is updated to the latest revision.
svnadmin lslocks /usr/local/svn/repos/[repo-name]/ |grep Path | awk '{print "file:///usr/local/svn/repos/[repo-name]" $2}' | xargs svn unlock --force

Thanks to some nifty piping, you have just unlocked your whole SVN repository. Let me explain some key points here: grepping for “Path” will pull out lines that look like this:

Path: [Path to locked file]
Here is a really quick rundown of awk: basically it takes any whitespace and separates the information into variables, starting with $0 being the whole line, $1 being the first field, $2 the second, etc. Here $1 would be the word “Path:” and $2 is our path. But that is not enough, because we have to make it a fully qualified repo path, and since I am doing it on my local server I can use the “file:///” prefix. After that we send it to xargs, which runs svn unlock and appends each path to the end of the command line. --force is also important because it makes sure you steal the lock in the unlock process.
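
To make that concrete, here is roughly what a single lock looks like at each stage of the pipeline (hypothetical path and repo name):

Path: /trunk/src/main.c
file:///usr/local/svn/repos/myrepo/trunk/src/main.c
svn unlock --force file:///usr/local/svn/repos/myrepo/trunk/src/main.c

The first line is what grep keeps from svnadmin lslocks, the second is the URL awk builds, and the third is what xargs ends up running.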

FINE PRINT aka Caution: SVN locks were designed to protect files while a person is working on them so no one else can overwrite them during development. The idea is that someone can get exclusive access to the files, change them, commit them, and then release the lock when the work is completed. If you are using locks this way on your system, please send the list of locks to all of your developers and have them manually release the locks they set; otherwise you can ruin and destroy work in progress if you are not careful. Now if you do not care about locks and know what is being developed, by all means use the one-liner above.
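
If you do care about the locks and want to go the polite route, a small variation on the same idea (a sketch that assumes the standard Path:/Owner: lines svnadmin lslocks prints) gives you a per-developer list you can send out:

svnadmin lslocks /usr/local/svn/repos/[repo-name]/ | awk '/^Path:/ {path=$2} /^Owner:/ {print $2 "\t" path}' | sort

Each output line is an owner followed by a path they hold a lock on, sorted so each developer's locks are grouped together.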

If you have a better way to unlock your SVN repository I’d love to hear it; I created this method from my own knowledge of the shell. If you need some help or have questions, post a comment and I’ll get back with you.



Cisco VPN Multiple or Overlapping L2L Tunnels Using NAT

This post will have the details on how to configure multiple or overlapping tunnels which use NAT while an existing one is already in place. It will effectively show you how to create multiple L2L tunnels to completely different networks and how to set up the access-list rules to make sure your traffic gets to where it needs to go. This has been one of the more difficult VPN configurations that I have seen so far.

Begin by first setting up an access-list for interesting traffic. The word interesting in the Cisco context means any traffic that is bound for the VPN. Then you will configure any other ACL rules that you want. After that you must define your crypto maps. Crypto maps are the instructions for how the VPN works. They include encryption, hashing, who they are talking to, and which access-list rules to use. After that you define the tunnel-group attributes, which include the pre-shared key if one is used. This first code block will feature the use of a static NAT example. Static NAT is used in the situation of a single host mapping to a single outside IP. For example, if you had a local server and you wanted outside-to-inside access to it from its outside IP address on all ports, then you would define a static NAT rule in the ASA to accommodate this. If this is foreign to you and you want a blog entry on it specifically, drop me a line in the comments. In the following example, here is the breakdown of IPs and locations:
Site A is a pre-configured L2L VPN that you need to connect to. At Site A they use a local subnet of [Site A subnet]. They have given you a NAT IP address for your outside interface of [your NAT IP]. The outside IP address of their ASA is [Site A peer IP]. Your internal network is [your internal subnet], and your server that needs to talk to the other side is [your server IP]. What you are trying to accomplish is to get to Site A’s resources by NATing through [your NAT IP] while giving Site A access back to your server at [your server IP].

! Access list for our interesting traffic. This is traffic that goes from the NAT to the other side of the VPN.
access-list vpn1 extended permit ip host [your NAT IP] [Site A subnet] [Site A netmask]

! Access list to match traffic between the local server and the other side of the VPN (used by the static NAT below)
access-list static-vpn1 extended permit ip host [your server IP] [Site A subnet] [Site A netmask]

! Setup the encryption transform-set
crypto ipsec transform-set newset esp-3des esp-md5-hmac

! Crypto map configuration that will match Site A, starting with match address to match interesting traffic
crypto map newmap 1 match address vpn1
crypto map newmap 1 set peer [Site A peer IP]
! Set the crypto map to use the transform set
crypto map newmap 1 set transform-set newset
crypto map newmap interface outside
crypto isakmp enable outside
! These settings will come from your Site A configuration, match that.
crypto isakmp policy 1
authentication pre-share
encryption 3des
hash sha
group 1
lifetime 86400

! Static will configure your Static NAT from your access list (local subnet to remote VPN subnet) through the outside interfaces NAT address
static (inside,outside) [your NAT IP] access-list static-vpn1

! And now for the tunnel-group configuration
tunnel-group [Site A peer IP] type ipsec-l2l
tunnel-group [Site A peer IP] ipsec-attributes
pre-shared-key [Match Pre-shared Key]

Now if everything went well, you have a functioning tunnel to Site A. Test it by pinging. I am going to write an article on advanced VPN troubleshooting one of these days, because Cisco is quite cryptic and difficult to troubleshoot when the VPN doesn’t come up. For now I’ll assume all went well.
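
Until that troubleshooting article exists, two standard ASA show commands (not specific to this config) will at least tell you whether the tunnel negotiated:

show crypto isakmp sa
show crypto ipsec sa

The first shows Phase 1; you want your peer listed in an established state. The second shows Phase 2; the encaps/decaps packet counters should climb while you ping across the tunnel.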

Now it is time to program the second VPN. This VPN will be much the same as the last, but instead of granting access to only one local machine, we want the whole subnet to access resources across the VPN. But we do not want the other side of the VPN talking back to our local machines. This is made possible by a global NAT or dynamic NAT policy. It is much the same as the policy on your generic home wireless router: it provides you NAT to the internet, but it is difficult for traffic to come back into your network unless you allow it. The IPs for the local network will stay the same, but the remote and NAT addresses are different. The remote side of this VPN (call it Site B) has a local subnet of [Site B subnet]. They have provided you a NAT address of [your NAT IP for Site B], and their peer address is [Site B peer IP].

There are a few caveats here that will save you a lot of the trouble I found out the hard way. The first deals with crypto maps. You cannot define a new crypto map name for the new VPN; you must use the same map name as you used previously, but you must change the priority (the number next to the map name). The second is creating a NAT rule to stop this traffic from going out to the internet across the ASA. When doing this, if you already have existing NAT rules (besides the default rule) then you must use that ACL to define it. Only one NAT rule ACL will work; all others drop. If you need clarification on this, type show run | in nat and if you see more than 2 lines of NAT listed and you are not sure what you are doing, then you are doing it wrong. Remove one of the NAT lines and combine access-lists. The rules are very similar to our previous ones; I’ll give the play-by-play again:

! Access list for our interesting traffic. This is traffic that goes from the NAT to the other side of the VPN.
access-list new-vpn extended permit ip host [your NAT IP for Site B] [Site B subnet] [Site B netmask]

! Access List to allow traffic from the local subnet to the remote subnet
access-list new-nat extended permit ip [your internal subnet] [internal netmask] [Site B subnet] [Site B netmask]

! Setup access-list to stop traffic from going over the primary NAT to the internet
access-list inside_nat_outbound extended permit ip [your internal subnet] [internal netmask] [Site B subnet] [Site B netmask]

! Setup the encryption transform-set
crypto ipsec transform-set another-set esp-3des esp-sha-hmac

! Crypto map configuration for the new remote site, reusing the same map name but with a new priority number
crypto map newmap 2 match address new-vpn
crypto map newmap 2 set peer [Site B peer IP]
! Set the crypto map to use the transform set
crypto map newmap 2 set transform-set another-set
! These settings will come from your next remote sites configuration, match that.
crypto isakmp policy 5
authentication pre-share
encryption 3des
hash sha
group 2
lifetime 86400

! Global will setup the interface to do a Dynamic NAT of all local traffic to remote NAT
global (outside) 2 [your NAT IP for Site B] netmask 255.255.255.255

! NAT rule to stop traffic destined for the VPN from going out over the primary outside NAT. Make sure it goes over the VPN instead
nat (inside) 2 access-list inside_nat_outbound

! And now for the tunnel-group configuration
tunnel-group [Site B peer IP] type ipsec-l2l
tunnel-group [Site B peer IP] ipsec-attributes
pre-shared-key [Match Pre-shared Key]

Now if everything has gone well, you have two functioning L2L tunnels to two separate networks. If things didn’t go well, then you’ll have to wait for my troubleshooting guide, which will be coming shortly. Or you could leave me a message in the comments and I’ll get back with you.

I hope you have enjoyed my guide to setting up multiple L2L VPNs and have found this useful. Good LUCK!



Ubuntu 10.10 and Grub 2 Fun

I am a few distros behind the current Ubuntu. I have been using 9.04 and 8.10 since I have found them very stable and familiar. For kicks, I installed 10.10 Server on a new project and thought I would find the same things I found in previous distros. For the most part I did. However, I stumbled across a stupid change that crippled my server. First off, the server is a headless and keyboardless setup. I know, I should run an IP KVM for complete control, but my other servers haven’t warranted it yet.

The issue is that when the server loses power, and then loses power again during boot, it throws a “recordfail” flag that can be used to change the way Grub 2 boots. In the default configuration of Ubuntu 10.10, they have chosen to display the boot menu without a timer when there is a recordfail. It is reminiscent of Windows when it fails to boot properly and gives you the boot menu for different modes.

In a desktop environment this would be fine, because I could choose to run recovery, boot normally, or whatever. However, in my configuration I want this thing to boot even if it is on fire. If it doesn’t boot, it should be because some hardware needs to be replaced. After some digging, I found the solution. Edit /etc/grub.d/10_linux and comment out the following lines like so:

# recordfail=1
# save_env recordfail

Save the file and then run sudo update-grub to generate a new grub.cfg file. Voila, no more stalled boots.



Boot Script (Startup Script) with Ubiquiti AirOS

I have been hitting my head against the wall trying to set the bridge priority of a few radios involved in spanning tree. It is easy to log in to the radio and change the priority with the brctl setbridgeprio br0 command. But what if you want to change it on boot, automatically? I dug through the Ubiquiti forums only to find bits and pieces that led me to the final solution.

There are a number of key files that don’t exist by default, but can be set up to provide the scripting functionality you may need.

If you have read any of my other posts you will see that I am a huge supporter of VIM, and will assume you are using it. You can simply create your script under /etc/persistent/ with vi and start writing. For my example, I will give you a copy of my bridge priority script:

brctl setbridgeprio br0 7000
brctl setpathcost ath0 10
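
Two things worth doing before you rely on the script (assuming it lives at /etc/persistent/rc.poststart, the post-boot script AirOS looks for):

chmod +x /etc/persistent/rc.poststart
/etc/persistent/rc.poststart
brctl showstp br0

Make it executable, run it once by hand to check for errors, and confirm with brctl showstp that the bridge id reflects the new priority (shown in hex).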

In a previous post I talked about the command that makes your /etc/ directory and changes to it persistent.

cfgmtd -w -p /etc/

This command will write the changes in /etc/ to flash; then you can reboot the system and the new script will take effect. Hopefully you found this helpful.