Scanning for confidential information on external web servers

One of my clients wanted us to scan their web servers for confidential information. This was going to be done both from the Internet and from an internal intranet location (between cooperative but separate organizations). In particular, they were concerned about social security numbers and credit card numbers being exposed, and wanted us to double-check their servers. These were large Class B networks.

I wanted to do something like the Unix “grep”, and search for regular expressions on their web pages. It would have been easier if I could log onto the server and get direct access to the file system, but that’s not what the customer wanted.

I looked at a lot of utilities that I could run on my Kali machine, and at first it didn’t look hopeful. This is what I came up with, using Kali and shell scripts. I hope it helps others. And if someone finds a better way, please let me know.


Start with Nmap

As I had an entire network to scan, I started with nmap to discover hosts.
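
Something like this ping sweep would do it; the 172.16.0.0/16 range and the live_hosts.txt filename here are just placeholders, not values from the engagement:

# find live hosts on the (example) class B range and save their addresses
nmap -sn 172.16.0.0/16 -oG - | awk '/Up$/ {print $2}' >live_hosts.txt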

NMAP-GREP to the rescue

By chance, nmap 7.0 was released that day, and I was using it to map out the network I was testing. I downloaded the new version, and noticed it had the http-grep script. This looked perfect, as it had social security numbers and credit card numbers built in! When I first tried it there was a bug. I tweeted about it and within hours Daniel “bonsaiviking” Miller fixed it. He’s just an awesome guy.

Anyhow, here is the command I used to check the web servers:

nmap -vv -p T:80,443 "$NETWORK" --script \
http-grep --script-args \
'http-grep.builtins,http-grep.maxpagecount=-1,http-grep.maxdepth=-1'

By using ‘http-grep.builtins’, I could search for all of the types of confidential information http-grep understood. And by setting maxpagecount and maxdepth to -1, I turned off the limits. It outputs something like:

Nmap scan report for (
Host is up, received syn-ack ttl 45 (0.047s latency).
Scanned at 2015-10-25 10:21:56 EST for 741s
80/tcp open http syn-ack ttl 45
| http-grep:
| (1)
|   (1) email:
|     +
|   (2) phone:
|     + 555-1212

Excellent! Just what I needed. A simple grep of the output for ‘ssn:’ would show me any social security numbers. (I had tested it on another web server to make sure it worked.) It’s always a good idea to not put too much faith in your tools.
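
For example, something as simple as this, where httpgrep.out is a placeholder for wherever the nmap output was saved:

# show me any social security numbers the scan reported
grep 'ssn:' httpgrep.out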

I first used nmap to identify the hosts, and then I iterated through each host, doing a separate scan for each one and storing the outputs in separate files. So my script was a little different. I ended up with a file that contained the URLs of the top web page of each server. So the basic loop would be something like

while IFS= read -r url
do
    nmap [arguments....] "$url"
done <list_of_urls.txt
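
Filled in with the http-grep arguments from above, the loop might look something like this; the crude hostname extraction and the per-host -oN output file are my own additions, not the original script:

while IFS= read -r url
do
    host=${url#*://}     # strip the http:// or https:// prefix
    host=${host%%/*}     # strip any path, leaving just the host name
    nmap -vv -p T:80,443 --script http-grep --script-args \
        'http-grep.builtins,http-grep.maxpagecount=-1,http-grep.maxdepth=-1' \
        -oN "httpgrep-$host.txt" "$host"
done <list_of_urls.txt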

Later on, I used wget instead of nmap, but I’m getting ahead of myself.

Problem #1:  limiting scanning to a specific time of day

We had to perform all actions during a specific time window, so I wanted to be able to break this into smaller steps, allowing me to quit and restart. I first identified the hosts, and then scanned each one separately, in a loop. I also added a double-check to ensure that I didn’t scan past 3 PM (as per our client’s request), and that I didn’t fill up the disk. So I added this check in the middle of my loop:

LIMIT=5 # always keep 5% of the disk free
HOUR=$(date "+%H") # Get hour in 00..23 format
USED=$(df . | awk '/dev/ {print $5}' | tr -d '%') # percentage of the disk in use
AVAIL=$((100 - USED)) # percentage of the disk still free
if [ "$AVAIL" -lt "$LIMIT" ]
then
    echo "Out of space. I have $AVAIL% free and I need $LIMIT%"
    exit 1
fi
if [ "$HOUR" -ge 15 ] # 3 PM, i.e. 12 + 3 == 15
then
    echo "After 3 PM - Abort"
    exit 1
fi

Problem #2:  Scanning non-text files.

The second problem I had was that a lot of the files on the server were PDF files, Excel spreadsheets, etc. Using http-grep would not help me there, as it doesn’t know how to examine non-ASCII files. I therefore needed to mirror the servers.

Creating a mirror of a web site

I needed to find and download all of the files on a list of web servers. After searching for some tools to use, I decided on wget. To be honest, I wasn’t happy with the choice, but it seemed to be the best one available.

I used wget’s mirror (-m) option. I also disabled certificate checking (some servers on an internal network were using internal certificates). I also used the --continue option in case I had to redo the scan. I disabled the normal spider behavior of ignoring directories specified in the robots.txt file, and I also changed my user agent to be “Mozilla”:

wget -m --no-check-certificate --continue --convert-links -p --no-clobber -e robots=off -U mozilla "$URL"

Some servers may not like this fast and furious download. You can slow it down by using these options: “--limit-rate=200k --random-wait --wait=2”

I sent the output to a log file. Let’s call it wget.out. I was watching the output, using

tail -f wget.out

I watched the output for errors. I noticed there was a delay in host name lookups, so I did a name service lookup and added the hostname/IP address to my machine’s /etc/hosts file (see the sketch below). This made the mirroring faster. I was also counting the number of files being created, using

find . -type f | wc
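
As for the slow name lookups: the fix was just a manual lookup and one extra line in /etc/hosts. Something along these lines, where the host name and address are made-up placeholders:

host webserver.example.com                             # look up the address once, by hand
echo '192.0.2.10 webserver.example.com' >>/etc/hosts   # then pin the answer locally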

Problem #3:  Self-referential links cause slow site mirroring.

I noticed that an hour had passed, and only 10 new files were being downloaded. This was a problem. I also noticed that some of the files being downloaded had several consecutive “/” characters in the path name. That’s not good.

I first grepped for the string ‘///’ and then I spotted the problem. To make sure, I typed

grep /dir1/dir2/webpage.php wget.out | awk '{print $3}' | sort | uniq -c | sort -nr 
         15 `webserver/dir1/dir2/webpage.php' 
          2 http://webserver/dir1/dir2/webpage.php 
          2 http://webserver//dir1/dir2/webpage.php 
          2 http://webserver///dir1/dir2/webpage.php 
          2 http://webserver////dir1/dir2/webpage.php 
          2 http://webserver/////dir1/dir2/webpage.php 
          2 http://webserver//////dir1/dir2/webpage.php 
          2 http://webserver///////dir1/dir2/webpage.php 
          2 http://webserver////////dir1/dir2/webpage.php 
          2 http://webserver/////////dir1/dir2/webpage.php 
          2 http://webserver//////////dir1/dir2/webpage.php 

Not a good thing to see. Time for plan B.

Mirroring a web site with wget --spider

I used a method I had tried before: the wget --spider function. This does not download the files; it just gets their names. As it turns out, this is better in many ways. It doesn’t go “recursive” on you, and it also allows you to scan the results and obtain a list of URLs. You can edit this list and not download certain files.

Method 2 was done using the following command:

wget --spider --no-check-certificate --continue --convert-links -r -p --no-clobber -e robots=off -U mozilla "$URL"

I sent the output to a file. But it contains filenames, error messages, and a lot of other information. To get the URLs from this file, I extracted them using

cat wget.out | grep '^--' | \
    grep -v '(try:' | awk '{ print $3 }' | \
    grep -v '\.\(png\|gif\|jpg\)$' | sed 's:?.*$::' | \
    grep -v '/$' | sort | uniq >urls.out

This parses the wget output file. It removes all *.png *.gif and *.jpg files. It also strips out any parameters on a URL (i.e. index.html?parm=1&parm=2&parm3=3 becomes index.html). It also removes any URL that ends with a “/”. I then eliminate any duplicate URL’s using sort and uniq.

Now I have a list of URLS. Wget has a way for you to download multiple files using the -i option:

wget -i urls.out --no-check-certificate --continue \
--convert-links -p --no-clobber -e robots=off -U Mozilla

Problem #4:   Using a customer’s search engine

A scan of the network revealed a search engine that searched files in its domain. I wanted to make sure that I had included these files in the audit.

I tried to search for meta-characters like ‘.’, but the web server complained. Instead, I searched for ‘e’, the most common letter, which gave me the largest number of hits: 20 pages’ worth. I examined the URLs for page 1, page 2, etc. and noticed that they were identical except for the value “jump=10”, “jump=20”, etc. I wrote a script that would extract all of the URLs the search engine reported:


for i in $(seq 0 10 200)
do
    # $URL is the search URL; only the jump=$i offset changes on each pass
    wget --force-html -r -l2 "$URL&jump=$i" 2>&1 | grep '^--' | \
    grep -v '(try:' | awk '{ print $3 }' | \
    grep -v '\.\(png\|gif\|jpg\)$' | sed 's:?.*$::'
done

It’s ugly, and calls extra processes. I could write a sed or awk script that replaces five processes with one, but the script would be more complicated and harder for my readers to understand. Also, this was a “throw-away” script. It took me 30 seconds to write, and the limiting factor was network bandwidth. There is always a proper balance between readability, maintainability, time to develop, and time to execute. Is this code consuming excessive CPU cycles? No. Did it allow me to get it working quickly so I could spend time doing something else more productive? Yes.
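
For the curious, here is a rough sketch (not from the original script) of what that replacement could look like; it folds those five filters into one awk program:

wget --force-html -r -l2 "$URL&jump=$i" 2>&1 | awk '
    /^--/ && !/\(try:/ {
        url = $3
        sub(/\?.*$/, "", url)                       # strip URL parameters
        if (url !~ /\.(png|gif|jpg)$/) print url    # skip image files
    }'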

Problem #5:  wget isn’t consistent

Earlier I mentioned that I wasn’t happy with wget. That’s because I was not getting consistent results. I repeated the scan of the same server from a different network, and I got different URLs. I checked, and the second scan found URLs that the first one missed. I did the best I could to get as many files as possible, and ended up writing some scripts to keep track of the files I had scanned before. But that’s another post.
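
Those bookkeeping scripts are a topic for that other post, but as a minimal sketch of the idea, comm can show what one run found that another missed; urls-run1.out and urls-run2.out are hypothetical names for two sorted URL lists:

# URLs that showed up in the second run but not the first (both files must be sorted)
comm -13 urls-run1.out urls-run2.out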

Scanning PDF’s, Word and Excel files.

Now that I had a clone of several websites, I had to scan them for sensitive information. But first I had to convert the binary files into ASCII.


Scanning Excel files

I installed gnumeric, and used its ssconvert program to convert the Excel files into text files. I used:

find . -name '*.xls' -o -name '*.xlsx' | \
while IFS= read -r file; do ssconvert -S "$file" "$file.%s.csv"; done

Converting Microsoft Word files into ASCII

I used the following script to convert Word files into ASCII:

find . -name '*.do[ct]x' | \
while IFS= read -r file; do unzip -p "$file" word/document.xml | \
sed -e 's/<[^>]\{1,\}>//g; s/[^[:print:]]\{1,\}//g' >"$file.txt"; done

Potential Problems with converting PDF files

Here are some of the potential problems I expected to face

  1. I didn’t really trust any of the tools. If I knew they were perfect, and I had a lot of experience, I could just pick the best one. But I wasn’t confident, so I did not rely on a single tool.
  2. Some of the tools crashed when I used them. See #1 above.
  3. The PDF to text tools generated different results. Also see #1 above.
  4. PDF files are large. Some were more than 1000 pages long.
  5. It takes a lot of time to convert some of the PDF’s into text files. I really needed a server-class machine, and I was limited to a laptop. If the conversion program crashed when it was 90% through, people would notice my vocabulary in the office.
  6. Some of the PDF files were created by scanning paper documents. A PDF-to-text converter would not see patterns unless it had some sort of OCR built in.

Having said that, this is what I did.

How to Convert Acrobat/PDF files into ASCII

This process is not something that can be automated easily. Some of the times when I converted PDF files into text files, the process either aborted, or went into a CPU frenzy, and I had to abort the file conversion.

Also, there are several different ways to convert a PDF file into text. Because I wanted to minimize the risk of missing some information, I used multiple programs to convert PDF files. If one program broke, the other one might catch something.

The tools I used included

  • pdftotext – part of poppler-utils
  • pdf2txt – part of python-pdfminer

Other useful programs were exiftool, peepdf, and Didier Stevens’ pdf-tools. I also used pdfgrep, but I had to download the latest source, and then compile it with the PCRE library.
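
As a hedged example of the kind of pdfgrep call I mean (the --pcre option needs a build compiled with PCRE support, and file.pdf is a placeholder):

# search a PDF directly for an SSN-shaped pattern, without converting it first
pdfgrep --pcre '\b\d{3}-\d{2}-\d{4}\b' file.pdf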


ConvertPDF – a script to convert PDF into text

I wrote a script that takes each of the PDF files and converts them into text. I decided to use the following convention:

  • *.pdf.txt – output of the pdf2txt program
  • *.pdf.text – output of the pdftotext program

As the conversion of each file takes time, I used a mechanism to see if the output file exists. If it does, I can skip this step.

I also created some additional file naming conventions:

  • *.pdf.txt.err – errors from the pdf2txt program
  • *.pdf.txt.time – output of time(1) when running the pdf2txt program
  • *.pdf.text.err – errors from the pdftotext program
  • *.pdf.text.time – output of time(1) when running the pdftotext program

This is useful because if any of the files generate an error, I can use ‘ls -s *.err|sort -nr’ to identify both the program and the input file that had the problem.

The *.time files could be used to see how long it took to run each conversion. The first time I tried this, my script ran all night, and did not complete. I didn’t know if one of the programs was stuck in an infinite loop or not. This file allows me to keep track of that information.

I used three helper functions in this script. The “X” function lets me easily change the script to show me what it would do, without doing anything. Also – it made it easier to capture STDERR and the timing information. I called it ConvertPDF

# Usage
#    ConvertPDF filename
FNAME="${1?'Missing filename'}"
IN="$FNAME"

if [ ! -f "$FNAME" ]
then
    echo missing input file "$FNAME"
    exit 1
fi

echo "$FNAME" >&2 # Output filename to STDERR

# Debug command - do I echo it, execute it, or both?
X() {
#   echo "$@" >&2
    /usr/bin/time -o "$OUT.time" "$@" 2> "$OUT.err"
}

# First converter: pdf2txt (python-pdfminer) writes *.pdf.txt
OUT="$FNAME.txt"
if [ ! -f "$OUT" ]
then
    X pdf2txt -o "$OUT" "$IN"
fi

# Second converter: pdftotext (poppler-utils) writes *.pdf.text
OUT="$FNAME.text"
if [ ! -f "$OUT" ]
then
    X pdftotext "$IN" "$OUT"
fi

Once this script is created, I called it using

find . -name '*.[pP][dD][fF]' | while IFS= read -r file; do ConvertPDF "$file"; done

Please note that this script  can be repeated. If the conversion previously occurred, it would not repeat it. That is, if the output files already existed, it would skip that conversion.

As I’ve done it often in the past, I used a handy function above called “X” for eXecute. It just executes a command, but it captures any error message, and it also captures the elapsed time. If I move/add/replace the “#” character at the beginning of the line, I can make it just echo, and not execute anything. This makes it easy to debug without it executing anything.   This is Very Useful.


Some of the file conversions took hours. I could kill those processes, and because I captured the error messages, I could also search them to identify bad conversions, delete the output files, and try again. And again.
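
One hedged way to automate that cleanup, assuming a clean conversion leaves an empty .err file (which may not always hold):

# find non-empty error files, and delete the matching output so the
# next run of ConvertPDF will retry just those conversions
find . -name '*.err' -size +0c | while IFS= read -r err
do
    echo "bad conversion: $err"
    rm -f "${err%.err}"      # e.g. file.pdf.txt.err -> file.pdf.txt
done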

Optimizing the process

Because some of the PDF files were so large, and the process wasn’t refined, I wanted to be more productive and work on the smallest files first, where I defined “smallest” as having the fewest number of pages. Finding scripting bugs quickly was desirable.

I used exiftool to examine the PDF metadata.  A snippet of the  output of “exiftool file.pdf” might contain:

ExifTool Version Number : 9.74
File Name : file.pdf
Producer : Adobe PDF Library 9.0
Page Layout : OneColumn
Page Count : 84

As you can see, the page count is available in the meta-data. We can extract this and use it.

Sorting PDF files by page count

I sorted the PDF files by page count using

for i in *.pdf
do
    NumPages=$(exiftool "$i" | sed -n '/Page Count/ s/Page Count *: *//p')
    printf "%d %s\n" "$NumPages" "$i"
done | sort -n | awk '{print $2}' >pdfSmallestFirst

I used sed to search for ‘Page Count’ and then only print the number after the colon. I then output two columns of information: page count and filename. I sorted by the first column (number of pages) and then printed out the filenames only. I could use that file as input to the next steps.
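
Feeding that list back into the converter is then a one-liner; here is a hedged sketch using the ConvertPDF script from above:

# convert the PDFs smallest-first, so problems show up quickly
while IFS= read -r file; do ConvertPDF "$file"; done <pdfSmallestFirst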

Searching for credit card numbers, social security numbers, and bank accounts.

If you have been following me, at this point I have directories that contain

  • ASCII-based files (*.htm, *.html, *.css, *.js, etc.)
  • Excel files converted into ASCII
  • Microsoft Word files converted into ASCII
  • PDF files converted into ASCII.

So it’s a simple matter of using grep to find the files. (My tutorial on Regular Expressions is here if you have some questions.) Here is what I used to search the files:

find dir1 dir2...  -type f -print0| \
xargs -0 grep -i -P '\b\d\d\d-\d\d-\d\d\d\d\b|\b\d\d\d\d-\d\d\d\d-\d\d\d\d-\d\d\d\d\b|\b\d\d\d\d-\d\d\d\d\d\d-\d\d\d\d\d\b|account number|account #'

The regular expressions I used are Perl-compatible. See the pcre(3) and pcrepattern(3) manual pages. The special characters are:
\d – a digit
\b – a word boundary – the transition between a word character (such as a digit) and a non-word character, or the start or end of a line. This prevents 1111-11-1111 from matching as a SSN.

This matches the following patterns
\d\d\d-\d\d-\d\d\d\d – SSN
\d\d\d\d-\d\d\d\d-\d\d\d\d-\d\d\d\d – Credit card number
\d\d\d\d-\d\d\d\d\d\d-\d\d\d\d\d – AMEX credit card
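
A quick sanity check of that word-boundary behavior, using throwaway test strings (not real numbers):

echo '123-45-6789'  | grep -P '\b\d\d\d-\d\d-\d\d\d\d\b'   # matches - shaped like a SSN
echo '1111-11-1111' | grep -P '\b\d\d\d-\d\d-\d\d\d\d\b'   # no match - the \b anchors fail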

There were some more things I did, but this is a summary. It should be enough to allow someone to replicate the task.

Lessons learned

  • pdf2txt is sloooow
  • Your tools aren’t perfect. You can’t assume a single tool will find everything. Plan for failures and have backup plans.
  • Look for ways to make your work more productive, e.g. find errors faster. You don’t want to wait 30 minutes to discover a coding error that will cause you to redo the operation. If you can find the error in 5 minutes you have saved 25 minutes.
  • Keep your shell scripts out of the directory containing the files. I downloaded more than 20000 files, and it became difficult to keep track of the names and jobs of the small scripts I was using, and the temporary files they created.
  • Consider using a Makefile to keep track of your actions. It’s a great way to document and reuse various scripts. I’ll write a blog on that later.
  • Watch out for duplicate names/URLs.
  • You have to remember that when you find a match in a file, you have to find the URL that corresponds to it. So consider your naming conventions.
  • Be careful of assumptions. Not all credit cards use the xxxx-xxxx-xxxx-xxxx format. Amex uses xxxx-xxxxxx-xxxxx


Have fun



How to custom fit the Hammond 1455J1201 case to the HackRF

I purchased a HackRF device from Kickstarter, and some people recommend that shielding will help improve the reception. Nooelec sells an optional shield, but I thought a metal case would provide better shielding, for a few more dollars. Mike Ossmann says the HackRF is made to work with the Hammond  1455J1201 so I searched Element14’s site and bought a black case. At the time the case was $20.68, but as I write this it seems to have jumped up to $35. Mouser sells this case for $18.70.

Here is the Hammond case shown next to the original plastic case

Hammond Case

And here is the case taken apart

Case disassembled

I wanted to drill the holes carefully, and make it look nice. I asked on the HackRF mailing list for some information on the location of the holes, and Stefano Probst gave me a link to the output from KiCad, which he’s made available here.

However, I wasn’t going to use any sort of CNC-controlled mill/drill. I was going to drill the holes by hand.  How was I going to accurately drill the holes from the above file?

First I examined the specifications of the Hammond case, which say the end plates are 3.071 inches long. So I just needed a way to print out the SVG file at the same size. I used the free Sure Cuts-a-lot 4 software, which has a ruler tool that allowed me to measure the size of the end plate before I printed it. Then I cut out the paper.

Checking the size

I checked that the paper was the same size as the end plate. I then used rubber cement to glue the paper onto the end plate, carefully lining up the edges.

Then I used a punch to mark the exact center of the holes.

Using the punch

To be more precise, I first used a scriber or prick punch to mark the center of the hole. Then I examined the mark carefully to make sure it was in the exact center (this is important). Then I used an automatic center punch (or you can use a simple metal punch and a hammer) to make the hole deeper.

If you didn’t get the holes in the proper place, you can place the punch in the correct position, which may be a little on the other side of the center, and try again. By overshooting the center a little, the new hole will “fill in” towards the old hole, and end up between the new and old positions.

Once the holes are correctly marked, and made deep enough for a drill bit to go through the metal without wandering, you should fasten the end plate to a wooden block. I used 6×3/4″ wood screws to do this:

Fastening the cap to the board

Make sure you have the plates on the correct side, as the screw holes are countersunk, and you want that side to be upward.

This wooden block is a safety precaution, because drilling metal plates can be dangerous when the drill jams in the metal and the entire plate starts revolving around the drill bit. The wooden block also provides a backing board for the through holes.

Now we have to drill the holes. I used a drill gauge to measure the hole diameters. The drill bits you need are 5/64″, 5/32″ and 1/4″.  You should use the 5/64″ to drill pilot holes for the larger holes. I used a table-top drill press, but a hand drill should also work.

The odd-shaped USB connector cut-out is made from smaller holes used as pilots, and then a small flat needle file is used to smooth out the cut-out:

Cleaning up the USB cutout

The holes may have a rough edge, so you probably want to remove these edges. You can use sandpaper, or a deburring tool. To use a deburring tool, insert the blade and spin it around the hole. You can also use a counter-sink bit. The little holes for the LED’s were too small to allow this. I used a file to eliminate the burrs in that case.

I then tested the fit by hand:

Testing the fit

I did have to use a round needle file to make one of the holes a little wider. But everything fit together very nicely.

I plan to add a shielding strap to the case, and test the changes in RF sensitivity to the plastic case vs the aluminum case.

I did have a little problem with the screws into the case. I may have to re-tap the threads in the holes.


Setting up Kali 1.1.0 on the new Raspberry Pi 2

My new Raspberry Pi 2 arrived, and I wanted to install Kali on it. I was preparing to follow the steps of Richard Brain,  but before I started, the folks at Kali tweeted that there was now a download available. 

I downloaded the image, checked the hash, and burned it onto a 32GB SD card using

sudo dd if=kali-1.1.0-rpi2.img of=/dev/mmcblk0  bs=4M

I placed the SD card into my RPi2 and booted it up. Of course I generated new ssh keys

rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
service ssh restart

I changed the root password with the passwd command.


I updated the software

apt-get update
apt-get upgrade

Extending the root partition on Kali Raspberry Pi 2

Normally you would execute raspi-config to extend the root file system. However, Kali didn’t include it. Following the lead from rageweb, I used the following commands to install the necessary files:

dpkg -i triggerhappy_0.3.4-2_armhf.deb
dpkg -i lua5.1_5.1.5-7.1_armhf.deb
dpkg -i raspi-config_20150131-1_all.deb

The information from the above link was out of date. So if these files don’t exist, go to the parent directory and search for the appropriate file with the correct revision number. Also note that with an RPi 2, you need the armhf files instead of the armel files.

Once I started raspi-config,  I selected the resize root partition option, and rebooted, and that problem was solved.

Improving the security of remote access on Kali

The next steps are obvious to experts, but I can’t tell how experienced the readers are. So feel free to skip this part if you are experienced. Advanced users should look into this post on setting up an encrypted filesystem (LUKS) on a Raspberry Pi.

I wanted to make sure that password-based root access was not allowed. Instead, to gain access, the user has to access the device physically (an attached monitor and keyboard, a serial interface, etc.), or else place a public key into the account.

I copied my account’s public key onto the device

cd ~/.ssh
scp root@rpi2kali:/tmp

I had to type the password of course. Then I logged onto the machine

ssh -l root rpi2kali
Password: XXXXXXXX

Setting up a non-root account on Kali

It’s generally a bad idea to allow someone to get root access directly. I recommend that you create a new user (I used the user ID of ‘kali’), grant them sudo access, and set their password:

useradd -m -s /bin/bash -d /home/kali kali
adduser kali sudo
passwd kali

Now we have to set up this account to allow ssh key-based remote access, using the public key that was copied to this device previously. (I prefer this, because copying and pasting text can modify the string).

su - kali
mkdir ~/.ssh
chmod 700 ~/.ssh
cp /tmp/ ~/.ssh/authorized_keys

Now test all this out. Make sure you can remotely log into the system, and execute the sudo command. It’s a good idea to have several remote windows open, so you can correct errors in one window, and test things out in the other.

Disabling remote root access and preventing password-based remote access on Kali

Once this is done, you can disable remote root access by changing yes to no in the PermitRootLogin line in /etc/ssh/sshd_config. You can also disable remote access that uses passwords, via the PasswordAuthentication line.

Change the lines to be the following

#PermitRootLogin yes
PermitRootLogin no
#PasswordAuthentication yes
PasswordAuthentication no

Then restart ssh

service ssh restart

Then make sure this all works. Try to log onto the root account remotely and you should see something like

ssh -l root rpi2kali 
Permission denied (publickey).

Then make sure that password-based remote access to the kali account is not allowed. You should get a similar error when trying to log onto the kali account from an account whose key isn’t in the authorized_keys file.

Just remember to keep a window logged onto the machine while you test this, and to experiment by renaming the authorized_keys file. Also – you can use the ssh -vvv option to debug your remote ssh connection.

There’s a lot more you can do, like

  • Move the ssh service to a different port
  • add the ufw firewall package (see the sketch after this list)
  • Limit the remote access to specific IP ranges
  • Limit access to the built-in Ethernet port only, and prevent WiFI access
  • etc.
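
As a hedged illustration of the ufw and IP-range items above (not part of my original setup), something like this would restrict ssh to one local network; the 192.168.1.0/24 range is just a placeholder:

apt-get install ufw
ufw default deny incoming
ufw allow from 192.168.1.0/24 to any port 22 proto tcp   # placeholder range
ufw enable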

I’ll fill in more later.  But that’s enough to get you started.



Extracting shell commands from Kali’s application menu

I use the Linux command line whenever I can.  Using the mouse to execute something when my fingers are on the keyboard irritates me.

I was using the Kali linux distribution to do some pentesting. And I was getting frustrated.

  • Some menu commands I wanted to execute on every reboot
  • Some menu commands had to be navigated down 4 or 5 menus to select
  • Some menu commands had to be executed multiple times in a row (like openvas check setup)
  • Some menu commands had a description that didn’t match the command line at all
  • I wanted the list of tools that were available, and paste this into a report. But there was no easy way to copy and paste the text from the menu into a file.

In general, I wanted to find out what exactly was executed when I used the menu to select an option from the Kali software.

So I wrote a script.

I wanted the script to generate this information for me. It didn’t take long, and it’s not very elegant; this blog post took much longer to write than the script did.

Yes, I could write a single program that does this by reading each file once, and generating the information in whatever format I wanted. But I just wanted something quick, and I thought others might like this simple script.


# parsemenu - Bruce Barnett 2015
# this script will parse the kali gnome menu and get the
# name of the tools that are in the menu

TOP=/usr/share/applications   # where the *.desktop menu entries live

# First find all of the menu names
menus=`cat $TOP/*.desktop | sed -n 's/Categories=//p' | \
tr ';' '\n' | sort -n | uniq`
cd $TOP
for m in $menus
do
    echo $m
    # Which apps are in this menu?
    files=`grep -l $m *.desktop`
    for file in $files
    do
        # get the name of the menu entry
        name=`sed -n 's/Name=//p' <$file`
        # get the shell command that is executed
        exec=`sed -n 's/Exec=//p' <$file | sed 's/^sh -c "\(.*\)"/\1/'`
        # Print it out
        echo "\t$name : $exec"
    done
done

Kali Programs Available from the menu

And here is the output:

    aircrack-ng : aircrack-ng --help;${SHELL:-bash}
    burpsuite : java -jar /usr/bin/burpsuite
    hydra : hydra -h;${SHELL:-bash}
    john : john;${SHELL:-bash}
    maltego : maltego
    metasploit framework : msfconsole;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    sqlmap : sqlmap -h;${SHELL:-bash}
    wireshark : wireshark
    owasp-zap : zap
    dnsdict6 : dnsdict6;${SHELL:-bash}
    dnsenum : dnsenum -h;${SHELL:-bash}
    dnsmap : dnsmap;${SHELL:-bash}
    dnsrecon : dnsrecon -h;${SHELL:-bash}
    dnsrevenum6 : dnsrevenum6;${SHELL:-bash}
    dnstracer : dnstracer;${SHELL:-bash}
    dnswalk : dnswalk --help;${SHELL:-bash}
    fierce : fierce -h;${SHELL:-bash}
    maltego : maltego
    nmap : nmap;${SHELL:-bash}
    urlcrazy : urlcrazy -h;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    alive6 : alive6;${SHELL:-bash}
    arping : arping;${SHELL:-bash}
    cdpsnarf : cdpsnarf -h;${SHELL:-bash}
    detect-new-ip6 : detect-new-ip6;${SHELL:-bash}
    detect_sniffer6 : detect_sniffer6;${SHELL:-bash}
    dmitry : dmitry;${SHELL:-bash}
    dnmap-client : dnmap_client;${SHELL:-bash}
    dnmap-server : dnmap_server;${SHELL:-bash}
    firewalk : firewalk;${SHELL:-bash}
    fping : fping -h;${SHELL:-bash}
    hping3 : hping3 -h;${SHELL:-bash}
    inverse_lookup6 : inverse_lookup6;${SHELL:-bash}
    masscan : masscan --help;${SHELL:-bash}
    miranda : miranda -h;${SHELL:-bash}
    ncat : ncat -h;${SHELL:-bash}
    netdiscover : netdiscover -h;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    passive_discovery6 : passive_discovery6;${SHELL:-bash}
    thcping6 : thcping6;${SHELL:-bash}
    unicornscan : us -h;${SHELL:-bash}
    wol-e : wol-e -h;${SHELL:-bash}
    xprobe2 : xprobe2 -h;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    firewalk : firewalk;${SHELL:-bash}
    fragroute : fragroute -h;${SHELL:-bash}
    fragrouter : fragrouter -h;${SHELL:-bash}
    ftest : ftest;${SHELL:-bash}
    lbd : lbd;${SHELL:-bash}
    wafw00f : wafw00f -h;${SHELL:-bash}
    dmitry : dmitry;${SHELL:-bash}
    dnmap-client : dnmap_client;${SHELL:-bash}
    dnmap-server : dnmap_server;${SHELL:-bash}
    masscan : masscan --help;${SHELL:-bash}
    netdiscover : netdiscover -h;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    unicornscan : us -h;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    0trace :;${SHELL:-bash}
    cdpsnarf : cdpsnarf -h;${SHELL:-bash}
    ftest : ftest;${SHELL:-bash}
    intrace : intrace;${SHELL:-bash}
    irpas-ass : ass -h;${SHELL:-bash}
    irpass-cdp : cdp;${SHELL:-bash}
    p0f : p0f -h;${SHELL:-bash}
    tcpflow : tcpflow -h;${SHELL:-bash}
    wireshark : wireshark
    xplico start : service xplico start;${SHELL:-bash}
    xplico stop : service xplico stop;${SHELL:-bash}
    xplico : xdg-open http://localhost:9876
    dnmap-client : dnmap_client;${SHELL:-bash}
    dnmap-server : dnmap_server;${SHELL:-bash}
    masscan : masscan --help;${SHELL:-bash}
    miranda : miranda -h;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    unicornscan : us -h;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    casefile : casefile
    creepy : creepy
    dmitry : dmitry;${SHELL:-bash}
    jigsaw : jigsaw -h;${SHELL:-bash}
    maltego : maltego
    metagoofil : metagoofil;${SHELL:-bash}
    recon-ng : recon-ng
    theharvester : theharvester;${SHELL:-bash}
    twofi : twofi -h;${SHELL:-bash}
    urlcrazy : urlcrazy -h;${SHELL:-bash}
    0trace :;${SHELL:-bash}
    dnmap-client : dnmap_client;${SHELL:-bash}
    dnmap-server : dnmap_server;${SHELL:-bash}
    intrace : intrace;${SHELL:-bash}
    netmask : netmask -h;${SHELL:-bash}
    trace6 : trace6;${SHELL:-bash}
    dnmap-client : dnmap_client;${SHELL:-bash}
    dnmap-server : dnmap_server;${SHELL:-bash}
    implementation6 : implementation6;${SHELL:-bash}
    implementation6d : implementation6d;${SHELL:-bash}
    ncat : ncat -h;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    sslscan : sslscan;${SHELL:-bash}
    sslyze : sslyze -h;${SHELL:-bash}
    tlssled : tlssled;${SHELL:-bash}
    unicornscan : us -h;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    acccheck : acccheck;${SHELL:-bash}
    nbtscan : nbtscan -h;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    smtp-user-enum : smtp-user-enum -h;${SHELL:-bash}
    swaks : swaks --help;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    braa : braa -h;${SHELL:-bash}
    cisco-auditing-tool : CAT;${SHELL:-bash}
    cisco-torch : cisco-torch;${SHELL:-bash}
    copy-router-config :;${SHELL:-bash}
    merge-router-config :;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    onesixtyone : onesixtyone;${SHELL:-bash}
    snmpcheck : snmpcheck -h;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    sslcaudit : sslcaudit -h;${SHELL:-bash}
    ssldump : ssldump -h;${SHELL:-bash}
    sslh : sslh -h;${SHELL:-bash}
    sslscan : sslscan;${SHELL:-bash}
    sslsniff : sslsniff;${SHELL:-bash}
    sslsplit : sslsplit -h;${SHELL:-bash}
    sslstrip : sslstrip -h;${SHELL:-bash}
    sslyze : sslyze -h;${SHELL:-bash}
    stunnel4 : stunnel4 -h;${SHELL:-bash}
    tlssled : tlssled;${SHELL:-bash}
    ace : ace;${SHELL:-bash}
    ace : ace;${SHELL:-bash}
    enumiax : enumiax -h;${SHELL:-bash}
    ike-scan : ike-scan -h;${SHELL:-bash}
    0trace :;${SHELL:-bash}
    dnmap-client : dnmap_client;${SHELL:-bash}
    dnmap-server : dnmap_server;${SHELL:-bash}
    intrace : intrace;${SHELL:-bash}
    netdiscover : netdiscover -h;${SHELL:-bash}
    netmask : netmask -h;${SHELL:-bash}
    trace6 : trace6;${SHELL:-bash}
    cisco-auditing-tool : CAT;${SHELL:-bash}
    cisco-global-exploiter :;${SHELL:-bash}
    cisco-ocs : cisco-ocs;${SHELL:-bash}
    cisco-torch : cisco-torch;${SHELL:-bash}
    copy-router-config :;${SHELL:-bash}
    merge-router-config :;${SHELL:-bash}
    yersinia : yersinia --help;${SHELL:-bash}
    copy-router-config :;${SHELL:-bash}
    merge-router-config :;${SHELL:-bash}
    bed : bed;${SHELL:-bash}
    fuzz_ip6 : fuzz_ip6;${SHELL:-bash}
    ohrwurm : ohrwurm;${SHELL:-bash}
    powerfuzzer : powerfuzzer;${SHELL:-bash}
    sfuzz : sfuzz -h;${SHELL:-bash}
    siparmyknife : siparmyknife;${SHELL:-bash}
    spike-generic_chunked : generic_chunked;${SHELL:-bash}
    spike-generic_listen_tcp : generic_listen_tcp;${SHELL:-bash}
    spike-generic_send_tcp : generic_send_tcp;${SHELL:-bash}
    spike-generic_send_udp : generic_send_udp;${SHELL:-bash}
    clusterd : clusterd -h;${SHELL:-bash}
    golismero : golismero -h;${SHELL:-bash}
    lynis : lynis -h;${SHELL:-bash}
    nikto : nikto -h;${SHELL:-bash}
    nmap : nmap;${SHELL:-bash}
    unix-privesc-check : unix-privesc-check;${SHELL:-bash}
    zenmap : zenmap;${SHELL:-bash}
    casefile : casefile
    maltego : maltego
    recon-ng : recon-ng
    bbqsql : bbqsql;${SHELL:-bash}
    dbpwaudit : dbpwaudit;${SHELL:-bash}
    hexorbase : hexorbase
    jsql : jsql
    mdb-export : mdb-export;${SHELL:-bash}
    mdb-hexdump : mdb-hexdump;${SHELL:-bash}
    mdb-parsecsv : mdb-parsecsv;${SHELL:-bash}
    mdb-sql : mdb-sql -h;${SHELL:-bash}
    mdb-tables : mdb-tables;${SHELL:-bash}
    oscanner : oscanner;${SHELL:-bash}
    sidguesser : sidguess;${SHELL:-bash}
    sqldict : sqldict
    sqlmap : sqlmap -h;${SHELL:-bash}
    sqlninja : sqlninja;${SHELL:-bash}
    sqlsus : sqlsus -h;${SHELL:-bash}
    tnscmd10g : tnscmd10g;${SHELL:-bash}
    openvas check setup : openvas-check-setup;${SHELL:-bash}
    openvas feed update : openvas-feed-update;${SHELL:-bash}
    openvas initial setup : openvas-setup;${SHELL:-bash}
    openvas start : openvas-start;${SHELL:-bash}
    openvas stop : openvas-stop;${SHELL:-bash}
    openvas-gsd : gsd
    blindelephant : -h;${SHELL:-bash}
    plecost : plecost -h;${SHELL:-bash}
    wpscan : wpscan --help;${SHELL:-bash}
    ua-tester : ua-tester;${SHELL:-bash}
    apache-users : apache-users;${SHELL:-bash}
    burpsuite : java -jar /usr/bin/burpsuite
    cutycapt : cutycapt --help;${SHELL:-bash}
    dirb : dirb;${SHELL:-bash}
    dirbuster : dirbuster;${SHELL:-bash}
    owasp-mantra-ff : owasp-mantra-ff
    vega : vega
    webscarab : webscarab
    webslayer : webslayer;${SHELL:-bash}
    owasp-zap : zap
    arachni_web : arachni_web;${SHELL:-bash}
    burpsuite : java -jar /usr/bin/burpsuite
    cadaver : cadaver;${SHELL:-bash}
    clusterd : clusterd -h;${SHELL:-bash}
    davtest : davtest;${SHELL:-bash}
    deblaze : -h;${SHELL:-bash}
    fimap : fimap -h;${SHELL:-bash}
    golismero : golismero -h;${SHELL:-bash}
    grabber : grabber -h;${SHELL:-bash}
    joomscan : joomscan;${SHELL:-bash}
    jsql : jsql
    nikto : nikto -h;${SHELL:-bash}
    owasp-mantra-ff : owasp-mantra-ff
    padbuster : padbuster;${SHELL:-bash}
    proxystrike : proxystrike
    skipfish : skipfish -h;${SHELL:-bash}
    sqlmap : sqlmap -h;${SHELL:-bash}
    uniscan-gui : uniscan-gui
    vega : vega
    wapiti : wapiti -h;${SHELL:-bash}
    webscarab : webscarab
    webshag-gui : webshag-gui;${SHELL:-bash}
    websploit : websploit;${SHELL:-bash}
    whatweb : whatweb -h;${SHELL:-bash}
    wpscan : wpscan --help;${SHELL:-bash}
    xsser : xsser -h;${SHELL:-bash}
    owasp-zap : zap
    w3af : w3af
    burpsuite : java -jar /usr/bin/burpsuite
    owasp-mantra-ff : owasp-mantra-ff
    paros : paros
    proxystrike : proxystrike
    vega : vega
    webscarab : webscarab
    owasp-zap : zap
    burpsuite : java -jar /usr/bin/burpsuite
    owasp-mantra-ff : owasp-mantra-ff
    powerfuzzer : powerfuzzer;${SHELL:-bash}
    webscarab : webscarab
    webslayer : webslayer;${SHELL:-bash}
    websploit : websploit;${SHELL:-bash}
    wfuzz : wfuzz;${SHELL:-bash}
    xsser : xsser -h;${SHELL:-bash}
    owasp-zap : zap
    bbqsql : bbqsql;${SHELL:-bash}
    sqlninja : sqlninja;${SHELL:-bash}
    sqlsus : sqlsus -h;${SHELL:-bash}
    hydra-gtk : xhydra
    acccheck : acccheck;${SHELL:-bash}
    burpsuite : java -jar /usr/bin/burpsuite
    cewl : cewl --help;${SHELL:-bash}
    cisco-auditing-tool : CAT;${SHELL:-bash}
    dbpwaudit : dbpwaudit;${SHELL:-bash}
    findmyhash : findmyhash;${SHELL:-bash}
    hydra : hydra -h;${SHELL:-bash}
    keimpx : keimpx -h;${SHELL:-bash}
    medusa : medusa -h;${SHELL:-bash}
    ncrack : ncrack -h;${SHELL:-bash}
    onesixtyone : onesixtyone;${SHELL:-bash}
    owasp-mantra-ff : owasp-mantra-ff
    patator : patator -h;${SHELL:-bash}
    phrasendrescher : pd -h;${SHELL:-bash}
    thc-pptp-bruter : thc-pptp-bruter;${SHELL:-bash}
    webscarab : webscarab
    owasp-zap : zap
    cachedump : cachedump -h;${SHELL:-bash}
    chntpw : chntpw -h;${SHELL:-bash}
    cmospwd : cmospwd;${SHELL:-bash}
    crackle : crackle;${SHELL:-bash}
    crunch : crunch;${SHELL:-bash}
    dictstat : dictstat -h;${SHELL:-bash}
    fcrackzip : fcrackzip --help;${SHELL:-bash}
    hash-identifier : hash-identifier;${SHELL:-bash}
    hashcat : hashcat --help;${SHELL:-bash}
    hashid : hashid -h;${SHELL:-bash}
    john : john;${SHELL:-bash}
    johnny : johnny;${SHELL:-bash}
    lsadump : lsadump -h;${SHELL:-bash}
    maskgen : maskgen -h;${SHELL:-bash}
    multiforcer : multiforcer --help;${SHELL:-bash}
    oclhashcat : oclhashcat;${SHELL:-bash}
    ophcrack-cli : ophcrack-cli;${SHELL:-bash}
    ophcrack : ophcrack
    policygen : policygen -h;${SHELL:-bash}
    pwdump : pwdump -h;${SHELL:-bash}
    pyrit : pyrit -h;${SHELL:-bash}
    rainbowcrack : rcrack;${SHELL:-bash}
    rcracki_mt : rcracki_mt;${SHELL:-bash}
    rsmangler : rsmangler -h;${SHELL:-bash}
    samdump2 : samdump2 -h;${SHELL:-bash}
    sipcrack : sipcrack -h;${SHELL:-bash}
    sucrack : man sucrack;${SHELL:-bash}
    truecrack : truecrack -h;${SHELL:-bash}
    oclhashcat : oclhashcat;${SHELL:-bash}
    pyrit : pyrit -h;${SHELL:-bash}
    pth-curl : pth-curl -h;${SHELL:-bash}
    pth-net : pth-net help;${SHELL:-bash}
    pth-openchangeclient : pth-openchangeclient --help;${SHELL:-bash}
    pth-rpcclient : pth-rpcclient -h;${SHELL:-bash}
    pth-smbclient : pth-smbclient -h;${SHELL:-bash}
    pth-smbget : pth-smbget --help;${SHELL:-bash}
    pth-sqsh : pth-sqsh --help;${SHELL:-bash}
    pth-winexe : pth-winexe -h;${SHELL:-bash}
    pth-wmic : pth-wmic -h;${SHELL:-bash}
    pth-wmis : pth-wmis -h;${SHELL:-bash}
    pth-xfreerdp : xfreerdp;${SHELL:-bash}
    aircrack-ng : aircrack-ng --help;${SHELL:-bash}
    asleap : asleap -h;${SHELL:-bash}
    bully : bully;${SHELL:-bash}
    cowpatty : cowpatty;${SHELL:-bash}
    eapmd5pass : eapmd5pass -h;${SHELL:-bash}
    fern-wifi-cracker : fern-wifi-cracker
    freeradius-wpe : freeradius -h;${SHELL:-bash}
    genkeys : genkeys;${SHELL:-bash}
    genpmk : genpmk;${SHELL:-bash}
    giskismet : giskismet -h;${SHELL:-bash}
    kismet : kismet -h;${SHELL:-bash}
    mdk3 : mdk3 --help;${SHELL:-bash}
    wash : wash -h;${SHELL:-bash}
    wifi-honey : wifi-honey -h;${SHELL:-bash}
    wifiarp : wifiarp -h;${SHELL:-bash}
    wifidns : wifidns -h;${SHELL:-bash}
    wifiping : wifiping -h;${SHELL:-bash}
    wifitap : wifitap -h;${SHELL:-bash}
    wifite : wifite --help;${SHELL:-bash}
    bluelog : bluelog -h;${SHELL:-bash}
    bluemaho :;${SHELL:-bash}
    blueranger :;${SHELL:-bash}
    bluesnarfer : bluesnarfer;${SHELL:-bash}
    btscanner : btscanner -h;${SHELL:-bash}
    crackle : crackle;${SHELL:-bash}
    redfang : fang -h;${SHELL:-bash}
    spooftooph : spooftooph -h;${SHELL:-bash}
    mfcuk : mfcuk -h;${SHELL:-bash}
    mfoc : mfoc -h;${SHELL:-bash}
    mfterm : mfterm -h;${SHELL:-bash}
    mifare-classic-format : mifare-classic-format -h;${SHELL:-bash}
    nfc-list : nfc-list -h;${SHELL:-bash}
    nfc-mfclassic : nfc-mfclassic -h;${SHELL:-bash}
    select tag : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    continuous select tag : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    chip & pin info :;${SHELL:-bash}
    jcop mifare read/write : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    jcop info : -R RFIDIOt.rfidiot.READER_PCSC INFO;${SHELL:-bash}
    jcop set atr historical bytes : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    bruteforce mifare : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    calculate jcop mifare keys : ;${SHELL:-bash}
    epassport read/write/clone : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    read mifare : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    read tag : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    identify hf tag type : -R RFIDIOt.rfidiot.READER_PCSC;${SHELL:-bash}
    test acg lahf :;${SHELL:-bash}
    select tag : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    continuous select tag : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    copy iso15693 tag : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    read acg reader eeprom : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    set fdx-b id : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    format mifare 1k value blocks : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    brute force hitag2 : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    jcop mifare read write : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    jcop info : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600 INFO;${SHELL:-bash}
    jcop set atr historical bytes : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    identify lf tag type : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    bruteforce mifare : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    calculate jcop mifare keys : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    epassport read write clone : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    reset q5 tag : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600 CONTROL ID;${SHELL:-bash}
    read lf tag : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    read mifare : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    read tag : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    read write clone unique (em4x02) : -R RFIDIOt.rfidiot.READER_ACG -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    identify hf tag type : -R RFIDIOt.rfidiot.READER_ACG -s 9600 -l /dev/ttyUSB0; ${SHELL:-bash}
    test frosch reader : -R RFIDIOt.rfidiot.READER_FROSCH -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    set fdx-b id : -R RFIDIOt.rfidiot.READER_FROSCH -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    read write clone unique (em4x02) : -R RFIDIOt.rfidiot.READER_FROSCH -l /dev/ttyUSB0 -s 9600;${SHELL:-bash}
    reset hitag2 tag : -R RFIDIOt.rfidiot.READER_FROSCH -l /dev/ttyUSB0 -s 9600 CONTROL;${SHELL:-bash}
    ubertooth util : ubertooth-util -h;${SHELL:-bash}
    zbassocflood : zbassocflood -h;${SHELL:-bash}
    zbdsniff : zbdsniff;${SHELL:-bash}
    zbdump : zbdump -h;${SHELL:-bash}
    zbfind : zbfind
    zbgoodfind : zbgoodfind -h;${SHELL:-bash}
    zbreplay : zbreplay -h;${SHELL:-bash}
    zbstumbler : zbstumbler -h;${SHELL:-bash}
    gnuradio-companion : gnuradio-companion
    gqrx : gqrx
    gr-scan : gr-scan --help;${SHELL:-bash}
    modes_gui : modes_gui
    rfcat : rfcat -h;${SHELL:-bash}
    rtl_adsb : rtl_adsb -h;${SHELL:-bash}
    rtl_fm : rtl_fm -h;${SHELL:-bash}
    rtl_sdr : rtl_sdr;${SHELL:-bash}
    rtl_tcp : rtl_tcp -h;${SHELL:-bash}
    rtl_test : rtl_test -h;${SHELL:-bash}
    rtlsdr-scanner : rtlsdr-scanner
    copy-router-config :;${SHELL:-bash}
    merge-router-config :;${SHELL:-bash}
    cisco-auditing-tool : CAT;${SHELL:-bash}
    cisco-global-exploiter :;${SHELL:-bash}
    cisco-ocs : cisco-ocs;${SHELL:-bash}
    cisco-torch : cisco-torch;${SHELL:-bash}
    yersinia : yersinia --help;${SHELL:-bash}
    metasploit framework : msfconsole;${SHELL:-bash}
    metasploit diagnostic logs  : /opt/metasploit/;${SHELL:-bash}
    metasploit diagnostic shell : /opt/metasploit/diagnostic_shell;${SHELL:-bash}
    metasploit community / pro : /opt/metasploit/scripts/
    update metasploit : msfupdate;${SHELL:-bash}
    creepy : creepy
    armitage : armitage;${SHELL:-bash}
    exploit6 : exploit6;${SHELL:-bash}
    ikat : ikat;${SHELL:-bash}
    jboss-autopwn-linux : jboss-linux;${SHELL:-bash}
    jboss-autopwn-win : jboss-win;${SHELL:-bash}
    termineter : termineter -h;${SHELL:-bash}
    beef : beef-xss;${SHELL:-bash}
    setoolkit : setoolkit;${SHELL:-bash}
    sandi-gui : sandi-gui
    searchsploit : searchsploit;${SHELL:-bash}
    ginguma : ginguma
    inguma : inguma;${SHELL:-bash}
    edb-debugger : edb;${SHELL:-bash}
    NASM shell : cd /usr/share/metasploit-framework/tools && ./nasm_shell.rb;${SHELL:-bash}
    ollydbg : ollydbg
    pattern create : cd /usr/share/metasploit-framework/tools && ./pattern_create.rb;${SHELL:-bash}
    pattern offset : cd /usr/share/metasploit-framework/tools && ./pattern_offset.rb;${SHELL:-bash}
    shellnoob : shellnoob;${SHELL:-bash}
    ace : ace;${SHELL:-bash}
    msgsnarf : msgsnarf -h;${SHELL:-bash}
    iaxflood : iaxflood;${SHELL:-bash}
    inviteflood : inviteflood -h;${SHELL:-bash}
    ohrwurm : ohrwurm;${SHELL:-bash}
    protos-sip : protos-sip -help;${SHELL:-bash}
    rtpbreak : rtpbreak -h;${SHELL:-bash}
    rtpflood : rtpflood;${SHELL:-bash}
    rtpinsertsound : rtpinsertsound -h;${SHELL:-bash}
    rtpmixsound : rtpmixsound -h;${SHELL:-bash}
    sctpscan : sctpscan;${SHELL:-bash}
    siparmyknife : siparmyknife;${SHELL:-bash}
    sipp : sipp -h;${SHELL:-bash}
    sipsak : sipsak -h;${SHELL:-bash}
    svcrack : svcrack -h;${SHELL:-bash}
    svcrash : svcrash -h;${SHELL:-bash}
    svmap : svmap -h;${SHELL:-bash}
    svreport : svreport -h;${SHELL:-bash}
    svwar : svwar -h;${SHELL:-bash}
    voiphopper : voiphopper;${SHELL:-bash}
    darkstat : darkstat;${SHELL:-bash}
    dnschef : dnschef -h;${SHELL:-bash}
    dnsspoof : dnsspoof -h;${SHELL:-bash}
    dsniff : dsniff -h;${SHELL:-bash}
    ettercap-graphical : ettercap -G
    ettercap-text : ettercap -h
    hexinject : hexinject -h;${SHELL:-bash}
    mailsnarf : mailsnarf -h;${SHELL:-bash}
    msgsnarf : msgsnarf -h;${SHELL:-bash}
    netsniff-ng : netsniff-ng -h;${SHELL:-bash}
    passive_discovery6 : passive_discovery6;${SHELL:-bash}
    responder : responder -h;${SHELL:-bash}
    sslsniff : sslsniff;${SHELL:-bash}
    tcpflow : tcpflow -h;${SHELL:-bash}
    urlsnarf : urlsnarf -h;${SHELL:-bash}
    webmitm : webmitm -h;${SHELL:-bash}
    webspy : webspy -h;${SHELL:-bash}
    wireshark : wireshark
    dnschef : dnschef -h;${SHELL:-bash}
    ettercap-graphical : ettercap -G
    ettercap-text : ettercap -h
    fake_advertise6 : fake_advertise6;${SHELL:-bash}
    fake_dhcps6 : fake_dhcps6;${SHELL:-bash}
    fake_dns6d : fake_dns6d;${SHELL:-bash}
    fake_dnsupdate6 : fake_dnsupdate6;${SHELL:-bash}
    fake_mipv6 : fake_mipv6;${SHELL:-bash}
    fake_mld26 : fake_mld26;${SHELL:-bash}
    fake_mld6 : fake_mld6;${SHELL:-bash}
    fake_mldrouter6 : fake_mldrouter6;${SHELL:-bash}
    fake_router26 : fake_router26;${SHELL:-bash}
    fake_router6 : fake_router6;${SHELL:-bash}
    fake_solicitate6 : fake_solicitate6;${SHELL:-bash}
    fiked : fiked -h;${SHELL:-bash}
    evilgrade : evilgrade;${SHELL:-bash}
    macchanger : macchanger -h;${SHELL:-bash}
    parasite6 : parasite6;${SHELL:-bash}
    randicmp6 : randicmp6;${SHELL:-bash}
    rebind : rebind;${SHELL:-bash}
    redir6 : redir6;${SHELL:-bash}
    responder : responder -h;${SHELL:-bash}
    sniffjoke : sniffjoke --help;${SHELL:-bash}
    sslsplit : sslsplit -h;${SHELL:-bash}
    sslstrip : sslstrip -h;${SHELL:-bash}
    tcpreplay : tcpreplay -h;${SHELL:-bash}
    wifi-honey : wifi-honey -h;${SHELL:-bash}
    yersinia : yersinia --help;${SHELL:-bash}
    driftnet : driftnet -h;${SHELL:-bash}
    burpsuite : java -jar /usr/bin/burpsuite
    dnsspoof : dnsspoof -h;${SHELL:-bash}
    ferret : ferret;${SHELL:-bash}
    hamster : hamster;${SHELL:-bash}
    mitmproxy : mitmproxy -h;${SHELL:-bash}
    owasp-mantra-ff : owasp-mantra-ff
    urlsnarf : urlsnarf -h;${SHELL:-bash}
    webmitm : webmitm -h;${SHELL:-bash}
    webscarab : webscarab
    webspy : webspy -h;${SHELL:-bash}
    owasp-zap : zap
    cymothoa : cymothoa -h;${SHELL:-bash}
    dbd : dbd -h;${SHELL:-bash}
    intersect : intersect;${SHELL:-bash}
    powersploit : cd /usr/share/powersploit/ && ls;${SHELL:-bash}
    sbd : sbd -h;${SHELL:-bash}
    u3-pwn : u3-pwn;${SHELL:-bash}
    cryptcat : cryptcat -h;${SHELL:-bash}
    dbd : dbd -h;${SHELL:-bash}
    dns2tcpc : dns2tcpc;${SHELL:-bash}
    dns2tcpd : dns2tcpd;${SHELL:-bash}
    iodine : iodine-client-start -h;${SHELL:-bash}
    miredo : miredo -h;${SHELL:-bash}
    ncat : ncat -h;${SHELL:-bash}
    proxychains : proxychains;${SHELL:-bash}
    proxytunnel : proxytunnel -h;${SHELL:-bash}
    ptunnel : ptunnel -h;${SHELL:-bash}
    pwnat : pwnat -h;${SHELL:-bash}
    sbd : sbd -h;${SHELL:-bash}
    socat : socat -h;${SHELL:-bash}
    sslh : sslh -h;${SHELL:-bash}
    stunnel4 : stunnel4 -h;${SHELL:-bash}
    udptunnel : udptunnel -h;${SHELL:-bash}
    webacoo : webacoo -h;${SHELL:-bash}
    weevely : weevely;${SHELL:-bash}
    edb-debugger : edb;${SHELL:-bash}
    ollydbg : ollydbg
    jad : jad;${SHELL:-bash}
    jd-gui : jd-gui
    rabin2 : rabin2 -h;${SHELL:-bash}
    radiff2 : radiff2;${SHELL:-bash}
    rasm2 : rasm2;${SHELL:-bash}
    recstudio-cli : recstudio-cli;${SHELL:-bash}
    recstudio : recstudio
    apktool : apktool;${SHELL:-bash}
    clang++ : clang++ --help;${SHELL:-bash}
    clang : clang --help;${SHELL:-bash}
    dex2jar : d2j-dex2jar -h;${SHELL:-bash}
    flasm : flasm;${SHELL:-bash}
    javasnoop : javasnoop
    radare2 : radare2 -h;${SHELL:-bash}
    rafind2 : rafind2 -h;${SHELL:-bash}
    ragg2-cc : ragg2-cc;${SHELL:-bash}
    ragg2 : ragg2 -h;${SHELL:-bash}
    rahash2 : rahash2 -h;${SHELL:-bash}
    rarun2 : rarun2;${SHELL:-bash}
    rax2 : rax2 -h;${SHELL:-bash}
    denial6 : denial6;${SHELL:-bash}
    dhcpig : -h;${SHELL:-bash}
    dos-new-ip6 : dos-new-ip6;${SHELL:-bash}
    flood_advertise6 : flood_advertise6;${SHELL:-bash}
    flood_dhcpc6 : flood_dhcpc6;${SHELL:-bash}
    flood_mld26 : flood_mld26;${SHELL:-bash}
    flood_mld6 : flood_mld6;${SHELL:-bash}
    flood_mldrouter6 : flood_mldrouter6;${SHELL:-bash}
    flood_router26 : flood_router26;${SHELL:-bash}
    flood_router6 : flood_router6;${SHELL:-bash}
    flood_solicitate6 : flood_solicitate6;${SHELL:-bash}
    fragmentation6 : fragmentation6;${SHELL:-bash}
    inundator : inundator;${SHELL:-bash}
    kill_router6 : kill_router6;${SHELL:-bash}
    macof : macof -h;${SHELL:-bash}
    rsmurf6 : rsmurf6;${SHELL:-bash}
    siege : siege -h;${SHELL:-bash}
    smurf6 : smurf6;${SHELL:-bash}
    t50 : t50 --help;${SHELL:-bash}
    mdk3 : mdk3 --help;${SHELL:-bash}
    reaver : reaver -h;${SHELL:-bash}
    iaxflood : iaxflood;${SHELL:-bash}
    inviteflood : inviteflood -h;${SHELL:-bash}
    slowhttptest : slowhttptest -h;${SHELL:-bash}
    thc-ssl-dos : thc-ssl-dos -h;${SHELL:-bash}
    android-sdk : android;${SHELL:-bash}
    apktool : apktool;${SHELL:-bash}
    baksmali : baksmali --help;${SHELL:-bash}
    dex2jar : d2j-dex2jar -h;${SHELL:-bash}
    smali : smali --help;${SHELL:-bash}
    arduino : arduino
    fcrackzip : fcrackzip --help;${SHELL:-bash}
    chkrootkit : chkrootkit -h;${SHELL:-bash}
    rkhunter : rkhunter -h;${SHELL:-bash}
    chkrootkit : chkrootkit -h;${SHELL:-bash}
    autopsy : autopsy;${SHELL:-bash}
    binwalk : binwalk -h;${SHELL:-bash}
    bulk_extractor : bulk_extractor -h;${SHELL:-bash}
    chkrootkit : chkrootkit -h;${SHELL:-bash}
    dc3dd : dc3dd --help;${SHELL:-bash}
    dcfldd : dcfldd --help;${SHELL:-bash}
    extundelete : extundelete --help;${SHELL:-bash}
    foremost : foremost -h;${SHELL:-bash}
    fsstat : fsstat;${SHELL:-bash}
    galleta : galleta;${SHELL:-bash}
    tsk_comparedir : tsk_comparedir;${SHELL:-bash}
    tsk_loaddb : tsk_loaddb;${SHELL:-bash}
    affcompare : affcompare -h;${SHELL:-bash}
    affcopy : affcopy -h;${SHELL:-bash}
    affcrypto : affcrypto -h;${SHELL:-bash}
    affdiskprint : affdiskprint -h;${SHELL:-bash}
    affinfo : affinfo -h;${SHELL:-bash}
    affsign : affsign -h;${SHELL:-bash}
    affstats : affstats -h;${SHELL:-bash}
    affuse : affuse -h;${SHELL:-bash}
    affverify : affverify -h;${SHELL:-bash}
    affxml : affxml -h;${SHELL:-bash}
    autopsy : autopsy;${SHELL:-bash}
    binwalk : binwalk -h;${SHELL:-bash}
    blkcalc : blkcalc;${SHELL:-bash}
    blkcat : blkcat;${SHELL:-bash}
    blkstat : blkstat;${SHELL:-bash}
    bulk_extractor : bulk_extractor -h;${SHELL:-bash}
    ffind : ffind;${SHELL:-bash}
    fls : fls;${SHELL:-bash}
    foremost : foremost -h;${SHELL:-bash}
    galleta : galleta;${SHELL:-bash}
    hfind : hfind;${SHELL:-bash}
    icat-sleuthkit : icat-sleuthkit;${SHELL:-bash}
    ifind : ifind;${SHELL:-bash}
    ils-sleuthkit : ils-sleuthkit;${SHELL:-bash}
    istat : istat;${SHELL:-bash}
    jcat : jcat;${SHELL:-bash}
    mactime-sleuthkit : mactime-sleuthkit;${SHELL:-bash}
    missidentify : missidentify -h;${SHELL:-bash}
    mmcat : mmcat;${SHELL:-bash}
    pdfbook : pd-fbook -h;${SHELL:-bash}
    pdgmail : pdgmail -h;${SHELL:-bash}
    readpst : readpst -h;${SHELL:-bash}
    reglookup : reglookup;${SHELL:-bash}
    regripper : regripper
    sigfind : sigfind;${SHELL:-bash}
    sorter : sorter;${SHELL:-bash}
    srch_strings : srch_strings -h;${SHELL:-bash}
    tsk_recover : tsk_recover;${SHELL:-bash}
    vinetto : vinetto -h;${SHELL:-bash}
    binwalk : binwalk -h;${SHELL:-bash}
    bulk_extractor : bulk_extractor -h;${SHELL:-bash}
    foremost : foremost -h;${SHELL:-bash}
    jls : jls;${SHELL:-bash}
    magicrescue : magicrescue;${SHELL:-bash}
    pasco : pasco;${SHELL:-bash}
    pev : pev -h;${SHELL:-bash}
    recoverjpeg : recoverjpeg -h;${SHELL:-bash}
    rifiuti : rifiuti;${SHELL:-bash}
    rifiuti2 : rifiuti2 -h;${SHELL:-bash}
    safecopy : safecopy -h;${SHELL:-bash}
    scalpel : scalpel -h;${SHELL:-bash}
    scrounge-ntfs : scrounge-ntfs -h;${SHELL:-bash}
    md5deep : md5deep -h;${SHELL:-bash}
    rahash2 : rahash2 -h;${SHELL:-bash}
    affcat : affcat -h;${SHELL:-bash}
    affconvert : affconvert -h;${SHELL:-bash}
    blkls : blkls;${SHELL:-bash}
    dc3dd : dc3dd --help;${SHELL:-bash}
    dcfldd : dcfldd --help;${SHELL:-bash}
    ddrescue : dd_rescue -h;${SHELL:-bash}
    ewfacquire : ewfacquire -h;${SHELL:-bash}
    ewfacquirestream : ewfacquirestream -h;${SHELL:-bash}
    ewfexport : ewfexport -h;${SHELL:-bash}
    ewfinfo : ewfinfo -h;${SHELL:-bash}
    ewfverify : ewfverify -h;${SHELL:-bash}
    fsstat : fsstat;${SHELL:-bash}
    guymager : guymager
    img_cat : img_cat;${SHELL:-bash}
    img_stat : img_stat;${SHELL:-bash}
    mmls : mmls;${SHELL:-bash}
    mmstat : mmstat;${SHELL:-bash}
    tsk_gettimes : tsk_gettimes -h;${SHELL:-bash}
    autopsy : autopsy;${SHELL:-bash}
    dff gui : dff -g;${SHELL:-bash}
    dff : dff;${SHELL:-bash}
    p0f : p0f -h;${SHELL:-bash}
    xplico start : service xplico start;${SHELL:-bash}
    xplico stop : service xplico stop;${SHELL:-bash}
    xplico : xdg-open http://localhost:9876
    chntpw : chntpw -h;${SHELL:-bash}
    pdf-parser : pdf-parser -h;${SHELL:-bash}
    peepdf : peepdf -h;${SHELL:-bash}
    volafox : volafox;${SHELL:-bash}
    volatility : vol -h;${SHELL:-bash}
    casefile : casefile
    magictree : magictree
    maltego : maltego
    metagoofil : metagoofil;${SHELL:-bash}
    pipal : pipal -h;${SHELL:-bash}
    truecrypt : truecrypt -h;${SHELL:-bash}
    cutycapt : cutycapt --help;${SHELL:-bash}
    recordmydesktop : recordmydesktop -h;${SHELL:-bash}
    dradis : service dradis start; xdg-open
    keepnote : keepnote
    apache2 restart : service apache2 restart;${SHELL:-bash}
    apache2 start : service apache2 start;${SHELL:-bash}
    apache2 stop : service apache2 stop;${SHELL:-bash}
    mysql restart : service mysql restart;${SHELL:-bash}
    mysql start : service mysql start;${SHELL:-bash}
    mysql stop : service mysql stop;${SHELL:-bash}
    sshd restart : service ssh restart;${SHELL:-bash}
    sshd start : service ssh start;${SHELL:-bash}
    sshd stop : service ssh stop;${SHELL:-bash}
    beef start : service beef-xss start;${SHELL:-bash}
    beef stop : service beef-xss stop;${SHELL:-bash}
    community / pro start : /opt/metasploit/scripts/;${SHELL:-bash}
    community / pro stop : /opt/metasploit/scripts/;${SHELL:-bash}
    dradis start : service dradis start;${SHELL:-bash}
    dradis stop : service dradis stop;${SHELL:-bash}
    openvas check setup : openvas-check-setup;${SHELL:-bash}
    openvas feed update : openvas-feed-update;${SHELL:-bash}
    openvas initial setup : openvas-setup;${SHELL:-bash}
    openvas start : openvas-start;${SHELL:-bash}
    openvas stop : openvas-stop;${SHELL:-bash}
    xplico start : service xplico start;${SHELL:-bash}
    xplico stop : service xplico stop;${SHELL:-bash}

CBC Padding Oracle Attacks Simplified – Key concepts and pitfalls

There are hundreds of web sites that describe the Padding Oracle attack, but many people find the concept confusing. I am going to try to explain everything you need to know. I am not going to write a bunch of equations to explain it. I’m not going to throw a big complicated diagram in front of you. You don’t have to understand encryption. I’ll just teach it to you one step at a time.  It won’t make your head hurt. I promise.

This will help you write your own tool, or use an existing tool, and I’ll make it as simple as possible, while pointing out all of the tricky bits…

If you remember that A - B == C means you can add B to both sides of the equation to get A == C + B, and if you understand hex values and the exclusive-or (XOR) function, you have all of the deep knowledge needed to understand the attack.

Why are Padding Oracle Attacks important?

This type of attack is well known, and a lot of sites are vulnerable to it. It's a common error. And it's devastating because it works so fast. To understand the attack, you have to understand a few simple things about cryptography.

What is Cryptographic Padding?

There are two types of encryption – stream-based and block-based. A stream-based encryption system can encrypt any number of bytes. A block-based encryption algorithm encrypts text in blocks. AES-128 uses 16-byte (i.e. 16*8, or 128-bit) blocks. If you want to encrypt a single character with AES, you get 16 bytes out. 17 characters take 2 blocks, or 32 bytes total.

What happens if your cleartext is shorter than a block? Since you have to encrypt a full block, the cleartext is padded with extra characters before it's encrypted. These could be null characters, or random characters, but then how do you tell the difference between the important characters and the extra ones?

There are several ways to do this, and PKCS#7 is a typical example. If you have 15 bytes and need one more byte to fill up the block, you append hex(01). If you need to add 2 bytes, you append hex(02 02). 3 bytes requires the 3-byte pad hex(03 03 03). Note that this allows a form of error checking, because there is some redundancy when more than a single byte is added. If the last byte has the hex value 04, then the previous 3 bytes must have the same value. If not, that is a padding error.

If the text fills the 16-byte block exactly, you add another block containing 16 bytes of hex(10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10). Each of these bytes has the value 16 (hex 10).

The important point is that the last block always has padding, from 1 to 16 bytes.

Here is a table that shows how cleartext (in blue) is padded (in black) before the block of 16 bytes is encrypted, just in case this isn’t clear.


Table 1 – PKCS#7 padding on a 16-byte block
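To make the rule concrete, here is a minimal bash sketch that appends PKCS#7 padding to a short ASCII string and hex-dumps the result. The pkcs7_pad function is my own illustrative helper, not a standard tool, and it assumes plain ASCII input:

# pkcs7_pad: read ASCII text on stdin, append PKCS#7 padding,
# and write the padded block(s) to stdout.
pkcs7_pad() {
    local data pad i
    data=$(cat)                      # assumes plain ASCII text
    pad=$(( 16 - ${#data} % 16 ))    # always 1..16 bytes of padding
    printf '%s' "$data"
    for (( i = 0; i < pad; i++ )); do
        printf "\\x$(printf '%02x' "$pad")"
    done
}

printf 'AAAAAAA Pay ' | pkcs7_pad | xxd   # 12 bytes of text + 4 bytes of hex(04)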

What is a Padding Oracle?

An Oracle is a system that reveals information. Normally, a system that uses encryption either works or doesn't. However, if the system reveals extra information, such as whether the padding was valid, then it is called a Padding Oracle.

Hey Oracle! Does this block have proper padding? Tell me!

While the technical term is Padding Oracle, think of it as a blabbermouth.

What is Cipher Block Chaining (CBC)?

Classic block encryption, such as AES, has a problem. Plain AES is called ECB, or Electronic Codebook mode. When you encrypt the same data with the same key, the output, or ciphertext, is identical. If I encrypt 16 "A" characters, the output will be the same every time.

Or to put it another way, every time I encrypt my "Beans and Franks" recipe, the result is identical. If I send it to Alice, and I send it to Bob, someone watching can see that I sent the same recipe to both people. It's a codebook: the recipes don't change as long as we keep the same book.

One way to fix this problem is to combine or “chain” the results of the previous encryption block with the next block, so if the input is repeated, the results is different. One of these methods is called Cipher Block Chaining or CBC.

How does Cipher Block Chaining (CBC) work?

A better explanation of CBC can be found on Wikipedia, but the most important point to understand is this: when decoding block N, block N is first decrypted, and the result is then XOR'ed with the previous ciphertext block (block N-1).

The cleartext of Block N-1 does not matter when decoding Block N.

This detail is critical. Let me elaborate with an example. When CBC-mode is used, you need an Initialization Vector (IV) before you decrypt the encrypted block. In simple terms, this is a block of data that is usually sent as the first block of the data.

  • Block 1 is used as the IV for Block 2.
  • Block 2 is used as the IV for Block 3, etc.

Let's say a system transmits the encrypted block LCB0byB0aGUgcG9p with an IV of all zeros. The IV should be random, but for the sake of this explanation, let's assume it's all zeros. So the decryption system is sent the following two blocks (the IV, then the ciphertext):

 0000000000000000 LCB0byB0aGUgcG9p

Now let's suppose the second 16-byte block decrypts to "AAAAAAA Pay $100", or hex(41 41 41 41 41 41 41 20 50 61 79 20 24 31 30 30)

The ASCII character for “1” is hex(31), shown in bold above. Suppose I want to change the $100 to $500.   The ASCII value for “5” is just one bit different, or hex(35) instead of hex(31). In other words, suppose I wanted to modify the encrypted message and change 100 to 500, i.e.:

“AAAAAAA Pay $500″ is hex(41 41 41 41 41 41 41 20 50 61 79 20 24 35 30 30)

All I have to do to change the cleartext from $100 to $500 is to flip the matching bit in the Initialization Vector and send:

0000000000040000 LCB0byB0aGUgcG9p

In other words:

When I invert a single bit in the previous block,  the matching bit in the decrypted cleartext will be inverted.

It may seem unbelievable, but that's all it takes. I don't have to know the secret key used to encrypt the message in order to modify it. All I have to do is change the previous block, and that changes the decrypted value of the block that follows. CBC mode does nothing to check the integrity of the message, so it's vulnerable to this sort of attack. Scary, huh? If I know the cleartext of one block, I can change it to anything I want by manipulating the previous block, without knowing anything about the encryption key.
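You can watch this happen on your own machine with openssl. This is only a sketch with made-up values: the key is an arbitrary test key, and the 16-byte IV is written as 32 hex digits.

# Encrypt a known 16-byte message with AES-128-CBC. -nopad is used
# because the message is exactly one block. The key is a made-up test value.
KEY=00112233445566778899aabbccddeeff
IV=00000000000000000000000000000000
printf 'AAAAAAA Pay $100' |
    openssl enc -aes-128-cbc -K "$KEY" -iv "$IV" -nopad -out msg.bin

# The attacker never touches msg.bin or the key. Flipping one bit of
# byte 14 of the IV (0x00 becomes 0x04) flips the same bit in the
# decrypted cleartext, turning $100 into $500.
EVIL_IV=00000000000000000000000000040000
openssl enc -d -aes-128-cbc -K "$KEY" -iv "$EVIL_IV" -nopad -in msg.bin
# prints: AAAAAAA Pay $500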

What’s required to perform a Padding Oracle Attack

There are two things you need to attack a Padding Oracle:

  • You need a Padding Oracle.
  • You need a way to capture and replay an encrypted message to that oracle.  In other words, you need to be able to send and modify the message to the Oracle.

Once you have that, you can decode the decrypted message.

What is a Padding Oracle Attack?

In the previous example, I just modified a single bit. But I could modify any byte I want, trying each of its 256 possible values.

Normally, modifying the message does not give me a way to guess its contents, because it's encrypted and I don't know what the decrypted message is. However, if we can send the modified message to the Padding Oracle, and it answers "Good Padding" or "Bad Padding", then we can defeat the system.

The value hex(01) is valid padding when a single byte of padding is added. So if we try all 256 possible values of the last byte, and one of them returns "Valid Padding", then we know what the cleartext of that byte is, thanks to the blabbermouth oracle.

How do you crack the byte once you find it has valid padding?

Let's assume the byte we are attacking has the unknown value Q (for Question mark), and we are trying all 256 values of R (for Random) until the result is valid padding, i.e.

Q XOR R == hex(01)

and if you want the mathematical form, that would be

Q ⊕ R == hex(01)

Now it's time for a little bit of math. Not too much, though. We can XOR the same value onto both sides of the equation and the equation will still be true. If we XOR a number with itself, the result is all zeros, and Q ⊕ hex(00) == Q. Therefore we now know the following:

Q ==  R  ⊕ hex(01)

Or in other words, when we XOR the expected padding to the random guess, we get  the cleartext value of the corresponding byte.

Okay – we are done with the math. You can relax now. The rest is easy.
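Here is a self-contained bash sketch of that last-byte attack. It uses openssl locally as a stand-in for the vulnerable server, because openssl's exit status reveals whether the PKCS#7 padding was valid. The key, IV, and message are made-up test values, and oracle() is my own illustrative helper:

#!/bin/bash
# The "server" side: it knows the key, and only ever reveals whether
# the padding of what we send it was valid.
KEY=00112233445566778899aabbccddeeff      # secret; the attacker never reads it
IV=000102030405060708090a0b0c0d0e0f       # the block that precedes the target

# Create one encrypted block to attack. 'Pay Bob $5' is 10 bytes,
# so openssl appends 6 bytes of PKCS#7 padding, hex(06).
printf 'Pay Bob $5' |
    openssl enc -aes-128-cbc -K "$KEY" -iv "$IV" -out secret.bin
TARGET=$(xxd -p secret.bin | tr -d '\n')

# The padding oracle: decrypt what we are given and report (via the
# exit status) only whether the padding was valid.
oracle() {   # $1 = previous block (hex), $2 = target block (hex)
    printf '%s' "$2" | xxd -r -p |
        openssl enc -d -aes-128-cbc -K "$KEY" -iv "$1" >/dev/null 2>&1
}

# The attack: recover the last cleartext byte without using the key.
# r=0 is skipped: leaving the byte unchanged just reports the original
# padding as valid (the special case described later in this post).
orig=$(( 16#${IV:30:2} ))                  # last byte of the previous block
for r in $(seq 1 255); do
    guess=$(printf '%02x' $(( orig ^ r )))
    if oracle "${IV:0:30}${guess}" "$TARGET"; then
        # valid padding, so the cleartext byte is R XOR hex(01)
        printf 'cleartext of byte 16 = hex(%02x)\n' $(( r ^ 0x01 ))
        break                              # prints hex(06), the padding byte
    fi
done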

Once you crack the last byte, how do you crack the previous byte?

Once we know the value for the last byte (Byte 16) of the block N, we XOR the desired padding for a 2-byte pad into block N-1. The 2-byte padding is hex(02 02). In reality, we need to XOR three values together:

  • The original 16-byte block that precedes the block we are attacking
  • The padding (the number of bytes depends on which pad we are using)
  • The guessed cleartext (initially all nulls)

If, for example, we learn that byte 16 is ‘A’ hex(41), then to guess byte 15, we modify the previous block by XORing the bytes

OriginalBlock XOR hex(02 02) XOR hex (00 41)

But I should make it clear that we are working with 16-byte blocks

  • OriginalBlock
  • hex(00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 02)
  • hex (00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 41)

and we XOR all possible 256 values into the  byte we are randomly guessing, which is byte 15 in this case:

  • hex (00 00 00 00 00 00 00 00 00 00 00 00 00 00 XX 00 )

When one of the values returns valid padding, we know that the cleartext for the 15th byte is R ⊕ hex(02)

And for byte 14, we XOR the newly learned value of byte 15 into the guessed cleartext, use hex(03 03 03) as the padding we XOR in, and then try all values for byte 14. That is, we shift one byte to the left, and change the padding string to the proper length and repeated value, as shown in Table 1 above.

If we continue this, we can guess all 16 bytes of the last block.

Padding Oracle Attacks do not use encryption

Note that I just use the XOR function. It doesn’t matter what the encryption is. We never need to perform any encryption functions to do the attack. This attack does not require a powerful CPU or crypto accelerator. We just toggle bits, one byte at a time.

Padding Oracle Attacks decode the cleartext without knowing the key

Also – this attack does not use or reveal the encryption key. I can’t determine the key, and can’t use it to decode other messages. I can only attack a message I have captured and can replay.

Padding Oracle Attacks only  decode the last block sent

Since we are trying to fake the padding, this only works on the last block of the chain. If more blocks are sent, the oracle only checks the last block for proper padding.

You don’t have to send every block to the Oracle.

The attack works on the last block I send the oracle, but I don't have to send all of the blocks. If I have captured 10 blocks, I can break the 10th block. But after I learn the cleartext of block 10, I can just send 9 blocks. Or 8. Or 7. In other words, once I have cracked the last block, I can simply truncate it from my test, making the previous block (e.g. block 9) the new last block.

Because of the way CBC-mode works, all I need to do is send 2 blocks. If I wanted to attack block 7, there is only one requirement – it’s the last block I send. I can send blocks 1 through 7. Or I can just send blocks 6 and 7.

I only need to send 2 blocks;  any 2 consecutive blocks.

In other words, I can crack the blocks in any order I want. I can send just the IV and the first block to decode block 1. Then I can send blocks 1 and 2 to crack block 2.

Padding Oracle Attacks can be completed in less than 256 * Number of Encrypted Bytes attempts

This is one of the reasons the attack is so dangerous. If each test takes 1 ms, then cracking 16,000 characters takes 256*16000*1 milliseconds, which is about 4096 seconds, or a little more than an hour.

Instead of trying all 256 values, we can stop once we find a valid pad, so the average number of guesses is 128. A more realistic estimate is therefore half the above, or roughly 34 minutes.
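As a quick sanity check of that arithmetic (assuming one request per millisecond):

echo $(( 256 * 16000 / 1000 / 60 ))   # worst case: 68 minutes
echo $(( 128 * 16000 / 1000 / 60 ))   # average case: 34 minutes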

There is a special case when guessing the last padded block

If you implement your own version of this attack, you may want to check to see how many times you get a valid pad for each byte. You should get exactly one correct answer. If you get  zero, or more than 1, you have a bug.

There is a special case you should be aware of.

Let’s say the block you are guessing is the last block, that is, the block with proper padding. Let’s assume the block has, for example, 7 bytes of padding, ending in

hex(07 07 07 07 07 07 07)

If we try every combination, one of the combinations will modify the last 7 bytes to be

hex(07 07 07 07 07 07 01)

In other words, there are two values of byte 16 that give valid padding. One way to handle this is to check whether the block already has valid padding before you start. You can also watch for the guess that leaves the byte unchanged (an XOR value of zero); if that "guess" reports valid padding, it is just the original padding showing through, so ignore it.

Congratulations – you are now an expert in Padding Oracle Attacks


Cataloging SDHC Cards on Ubuntu using a bash script

I have a lot of SD memory cards. I use them for my Camera, my Raspberry Pi, and for my laptop. They are cheap enough, I get several spares. And it makes it easy to convert my Raspberry Pi into different types of systems, allowing me to switch between different projects by just swapping the cards.

But when I grab one, I have to figure out what I last used that card for, whether there is anything worth saving on it, and how much room is left on it. Did I use it for a camera? For transferring files? For an RPi system? I could keep a piece of paper with each card, label them, or store them in different places, but like all Unix script writers, I am lazy. I'd rather spend hours debugging a shell script than spend an extra minute every time I remove an SD card.

So I wrote a Bash shell script. To use it, I simply put the SD card into my laptop, and execute

SDCatalog
I might prefer to add a note to indicate something about the card, such as the manufacturer. This is an additional argument that is added to my "catalog":

SDCatalog SanDisk

And when I am done, I eject the card, put in a new one, and repeat. The "data collection" is simple: my script just uses find to list the file names and stores the results in a file, one file for each SD card. All of the files are stored in a single directory; I use ~/SD. So what do I collect besides the filenames? I collect (a) the card capacity, (b) the size of the individual partition, (c) how much of the partition is used, and (d) the unique ID of the card. I store this information in the name of the file. And because each catalog file lists what is stored on the card, I can use grep to figure out which card contains which file. For instance, if I am looking for cards formatted for the Raspberry Pi, I can search for /home/pi in the "catalog" using grep:

grep '/home/pi$' ~/SD/*

and the output I get is:

/home/bruce/SD/SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a:./home/pi
I used the “_” character as a field separator in the filename.   Looking at the first line above, the identifier is “SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a”. There are 5 “fields” separated by “_” which in my script is decoded as follows:

SD04G - The string seen when the card is mounted - see dmesg(1)
p2 - which partition
4 - 4GB partition
45 - 45 % used of the 4GB partition
b7b5ddff-ddb4-48dd-84d2-dd47bf00694a - Device ID

The “:” and “./home/pi” are added by grep.
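If you only want to see which cards matched, and not the matching path inside each card, grep -l prints just the catalog file names:

grep -l '/home/pi$' ~/SD/*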

It would be trivial to use a tool like awk to parse this and pretty-print the results. I'll show my awk script later in this post. But first, let me explain parts of the main script.

When I mounted an SDHC card and typed df, the following was the output:

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda6       47930932  22572060  22917420  50% /
none                   4         0         4   0% /sys/fs/cgroup
udev             3076304         4   3076300   1% /dev
tmpfs             617180      1420    615760   1% /run
none                5120         4      5116   1% /run/lock
none             3085884       240   3085644   1% /run/shm
none              102400        32    102368   1% /run/user
/dev/sda8      252922196 229762840  10304948  96% /home
/dev/mmcblk0p1  31689728   9316640  22373088  30% /media/bruce/9016-4EF8

To catalog this information, we have to parse the last line. The two critical parts are the strings “mmcblk0” and “media” – these may have to be tweaked depending on your operating system. So I made them easy to change with the lines:

SD=mmcblk0     # the device name df reports for the SD card
MEDIA=media    # the mount point prefix
I also store the results in a directory called ~/SD – but someone may wish to store them in a different location. So my script defines the location of the log with

LOG=${SDLOG-"$HOME/SD"}
If the environment variable SDLOG is defined, use that value, otherwise use the default.

Sometimes it’s easier to describe a card using an optional comment, such as the color, manufacturer, or device where you got the card from. So I define the value of EXTRA with an optional command line argument.

EXTRA=$1
The script then checks if the directory for the log exists, and if not, it creates it.

The script tries to get a manufacturer’s ID for the device using dmesg(1). It does some checking to make sure the card is mounted, that the output file is writable, etc. It also looks for all of the partitions on the card.

But before I show you the script, let me show you a simple awk script called SDCparse that pretty-prints the names of the files used to store the catalog. For instance, the command

ls ~/SD | SDCparse

prints the following on my system – one line for each partition of each card:

      Note   Man.  Partition  Size   Usage ID                            
            00000         p1     2       1 E0FD-1813                     
   SanDisk  SL08G         p1     8       1 6463-3162                     
            SD04G         p1     0      30 3312-932F                     
            SD04G         p1     4      67 957ef3e9-c7cc-4b32-9c77-9ab1cea45a34
            SD04G         p2     4      45 b7b5ddff-ddb4-48dd-84d2-dd47bf00564a
            SD08G         p1     0      18 boot                          
            SD08G         p1     8       1 ecaf3faa-ecb7-41e1-8433-9c7a5d7098cd
            SD08G         p2     3      80 3d81d9e2-7d1b-4015-8c2c-29ec0875f762
              SDC         p1     0      34 boot                          
              SDC         p1    32      30 9016-4EF8                     
              SDC         p2    30      20 af599925-1134-4b6e-8883-fb6a99cd58f1
            SL08G         p1     8       1 6463-3162 

So this lets you see at a glance a lot of information about each SD card. I can easily see that I have 3 8GB cards whose p1 partition only uses 1% of the space. The awk script SDCparse is:

awk -F_ '
BEGIN {
    ST="%10s %6s %10s %5s %7s %-30s\n"
    printf(ST,"Note", "Man.", "Partition", "Size", "Usage", "ID")
}
{
    if (NF == 5) {
        printf(ST,"", $1, $2, $3, $4, $5)
    } else if ( NF == 6 ) {
        printf(ST,$1, $2, $3, $4, $5, $6)
    } else {
#        print ("Strange - line has ", NF, "fields: ", $0)
    }
}'

I should mention something I use often: the formatting of both the table header and the data rows uses the same format string, ST. If I want to change the spacing of the table, I only need to change that one line.

The SDCatalog script, which generates the catalog, is not quite so simple, but it seems to be robust.

#!/bin/bash
# Bruce Barnett Tue Oct  7 09:19:41 EDT 2014

# This program examines an SD card, and 
# creates a "log" of the files on the card
# The log file is created in the directory ~/SD 
#    (unless the environment variable SDLOG is defined)
# Usage
#    SDCatalog [text] 
#          where text is an optional field
# Example:
#    SDCatalog
#    SDCatalog SanDisk

# Configuration options
# These may have to be tweaked. They work for Ubuntu 14
# The output of df on Ubuntu looks like this
#/dev/mmcblk0p1   7525000 4179840   2956248  59% /media/barnett/cd3fe458-fc22-48e4-8
#     ^^^^^^^                                     ^^^^^      
#     SD                                          MEDIA
# therefore the parameters I search for are
SD=mmcblk0
MEDIA=media

# Command line arguments
EXTRA=$1 # is there any extra field/comment to be added as part of the filename

# Environment variables

# Note that '~/SD' will NOT work because the shell won't expand '~'. 
#  "~/SD" will work, but "$HOME/SD" will always work
LOG=${SDLOG-"$HOME/SD"} # use the $SDLOG environment variable, if defined, else use ~/SD

# Make sure the initial conditions are correct - we have a directory?
if [ -d "$LOG" ] # If I have a directory 
then
    : okay we are all set
else
    if [ -e "$LOG" ]
    then
        echo "$0: I want to use the directory $LOG, 
but another file exists with that name - ABORT!" 
        exit 1
    fi
    echo "mkdir $LOG"
    mkdir "$LOG" # the directory does not exist
fi

# Now I will execute df and parse the results
# sample results may look like this
#/dev/mmcblk0p1   752000 417940   295248  59% /media/barnett/cd3fe458-fc22-48e4-8

# For efficiency, let's just execute df once and save it
# create a temporary filename based on script name, i.e. 
#       Example temporary name
#            SDCatalog..tmp
DFOUT=/tmp/$(basename "$0").$$.tmp
trap "/bin/rm $DFOUT" 0 1 15 # delete this temp file on exit
df >"$DFOUT"


# is there anything mounted?
grep -q "$SD" <$DFOUT || { echo "no SD card mounted" ; exit 1; }
# get the manufacturer of the card
# dmesg will report something like
#      [ 8762.029937] mmcblk0: mmc0:b368 SDC   30.2 GiB 
# and we want to get this part           ^^^^
# New version that uses the $SD variable
MANID=$(dmesg | sed -n 's/^.*'"$SD"':.mmc0:\(....\) \([a-zA-Z0-9]*\) .*$/\2/p' | tail -1)
# Get the current working directory
CWD=$(pwd) # Remember the current location so we can return to it

for p in p1 p2 p3 p4 p5 p6 p7 p8
do
    # get the mount point(s) of the card
    MOUNT=$(awk "/$SD$p"'/ {print $6}' <$DFOUT )
    # get the ID of the card
    if [ -n "$MOUNT" ]
    then
        # Get the size of the disk
        SIZE=$(awk "/$SD$p"'/ {printf("%1.0f\n",$2/1000000)}' < $DFOUT)
        # Get the usage of the partition
        USE=$(awk "/$SD$p"'/ {print $5}' < $DFOUT | tr -d '%')
        # Get the ID of the partition from the mount point
#        ID=$(echo $MOUNT| sed 's:^.*/::')
        ID=$(echo $MOUNT| sed 's:/'"$MEDIA"'/[a-z0-9]*/::')
        # I am going to store the results in this file
        if [ -z "$EXTRA" ]
        then
            X=
        else
            X="${EXTRA}_"
        fi
        OFILE="$LOG/$X${MANID}_${p}_${SIZE}_${USE}_${ID}"
        echo Log file name is $OFILE
        cd "$MOUNT"
        touch $OFILE  || (echo cannot write to $OFILE - ABORT)
        echo "sudo find .  >$OFILE"
        sudo find . >"$OFILE"
        cd "$CWD"
    fi
done

I hope you find this useful.


Setting up your Linux environment to support multiple versions of Java

Four ways to change your version of Java

Most people just define their JAVA_HOME variable, and rarely change it. If you do want to change it, or perhaps switch between different versions, you have some choices:

  1.  Use update-alternatives (Debian systems)
  2. Define/change your preferred version of Java in your ~/.bash_profile and log out, and re-login to make the change.
  3. Define/change your preferred version of Java in your ~/.bashrc file, and open a new terminal window to make the change.
  4. Define your preferred version of Java on the command line, and switch back and forth with simple commands.

When I use Java, I often have to switch between different versions. I may need to do this for compatibility reasons, testing reasons, etc. I may want to switch between OpenJDK and Oracle Java. Perhaps I have some programs that only work with particular versions of Java. I prefer method #4 – on the command line. But by using the scripts below, you can use any of the last three methods and control your Java version explicitly.

As I mentioned in method #1, you could, with some versions of Linux, use the command

sudo update-alternatives --config java

Debian systems use update-alternatives to relink some symbolic links in the file system, changing which program the "java" command executes. I'll show you a way to get full control from the command line, one that makes no changes to the file system and allows you to simultaneously run different versions of Java.

Downloading multiple versions of Java

Let's assume I've just downloaded Oracle jdk7u71 for a 64-bit machine as a tar file, and that I've already created the directory /opt/java. I unpack it using

md5sum ~/Downloads/jdk-7u71-linux-x64.tar.gz 
# Now verify the file integrity by eyeball
cd /opt/java
tar xfz ~/Downloads/jdk-7u71-linux-x64.tar.gz

So now I have the directory /opt/java/jdk1.7.0_71

Let me also download JDK8u25, and store it in /opt/java/jdk1.8.0_25

To make things easier, I’m going to create the symbolic link latest – which will be my “favorite” or preferred version of java in the /opt/java directory.

cd /opt/java
ln -s jdk1.7.0_71 latest

Creating the Java Setup script

Now I am going to create a file called ~/bin/SetupJava which contains the following

#!/bin/echo sourceThisFile
# Bruce Barnett - Thu Nov 20 09:37:45 EST 2014
# This script sets up the java environment for Unix/Linux systems
# Usage (Bash):
# . ~/bin/SetupJava
# Results:
# Modified environment variables JAVA_HOME, PATH, and MANPATH
# If this file is saved as ~/bin/SetupJava, 
# then add this line to ~/.bashrc or ~/.bash_profile
# . ~/bin/SetupJava

# Pick the version of Java. If the variable VERSION is not defined,
# use the "latest" symbolic link in /opt/java
VERSION=${VERSION:-latest}
JHOME=/opt/java/$VERSION

# In case VERSION is a full path name, I delete everything 
# up to the '//'
JHOME=$(echo $JHOME | sed 's:^.*//:/:' )

# This next line is optional - 
# it will abort if JAVA_HOME already exists
[ "$JAVA_HOME" ] && { echo JAVA_HOME defined - Abort ; exit 1; }

# Place the new directories first in the searchpath 
#  - in case they already exist
# also in case the above line is commented out
export JAVA_HOME=$JHOME
export PATH=$JAVA_HOME/bin:$PATH
export MANPATH=$JAVA_HOME/man:$MANPATH

There is a lot going on here, but first let's explain the simplest way to use it: simply add the following line to your ~/.bashrc or ~/.bash_profile file:

. ~/bin/SetupJava

And you are done.

BUT we can do much more. First of all, note that this sets up your environment to use the “latest” version of Java, which is defined to be /opt/java/latest. But what if you don’t want to use that version? Note that I use the shell feature:

VERSION=${VERSION:-latest}
If the variable VERSION is not defined, the shell script uses the value “latest“. If you want to use a particular version of java, add a new line before you source the file:

VERSION=jdk1.7.0_71
. ~/bin/SetupJava

If you would rather use version 8, this could be changed to

VERSION=jdk1.8.0_25
. ~/bin/SetupJava

Note that you can have several different versions in your ~/.bashrc file, and have all but one commented out. Then, you can open a new terminal window and this window will use a different version of java. But what if you don’t want to exit the existing  window?

Switching between java versions on the command line.

But there is another approach that I like to use. I have created several different shell scripts. Here's one called JAVA:

. ~/bin/SetupJava
exec "${@:-bash}"

Here is another shell script called JAVA7u71 which explicitly executes Java version 7u71

VERSION=jdk1.7.0_71
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one called JAVA8u25

VERSION=jdk1.8.0_25
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one that executes the OpenJDK version of Java

VERSION=/usr/local/java/jdk1.7.0_67
. ~/bin/SetupJava
exec "${@:-bash}"

Note that I specified a version of java that was not in /opt/java – This is why I used the sed command

sed 's:^.*//:/:'

This deletes everything from the beginning of the line to the double ‘//’ changing /opt/java//usr/local/java/jdk1.7.0_67 to /usr/local/java/jdk1.7.0_67
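You can try that sed expression by itself; the path below is just the example from above:

echo /opt/java//usr/local/java/jdk1.7.0_67 | sed 's:^.*//:/:'
# prints /usr/local/java/jdk1.7.0_67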

Using the above commands to dynamically switch Java versions

You are probably wondering why I created these scripts, and what exactly does the following line do?

exec "${@:-bash}"

Please note that the script, by default, executes the command "exec bash" at the end. That is, the script executes an interactive shell instead of terminating, so my shell prompt is really a continuation of the script, which is still running. I also place double quotation marks around the variable in case the arguments contain spaces, etc.

There are two ways to use these scripts. The first way simply temporarily changes your environment to use a specific version of Java. In the dialog below I execute OpenJDK, Oracle Java 7, and Oracle Java 8,  in that order and type “java -version” each time to verify that all is working properly. I then press Control-D (end-of-file) to terminate the JAVA script, and to return to my normal environment.  The shell prints “exit” when I press Control-D.  So I execute three different shell sessions, type the same command in each one, and then terminate the script: (the $ is the shell prompt)
$ OPENJDK
$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)
$ exit
$ JAVA7u71
$ java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) Server VM (build 24.71-b01, mixed mode)
$ exit
$ JAVA8u25
$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) Server VM (build 25.25-b02, mixed mode)
$ exit

In other words, when I execute OPENJDK, JAVA7u71, or JAVA8u25, I temporarily change my environment to use that particular version of Java. This change remains as long as that session is running. Since the script only really changes environment variables, the changes are inherited by all new shell processes. Any time a child process executes a Java program, it will use the specific version of Java I specified.

If I want to, I can start up a specific version of Java and then launch several terminals and sessions in that environment

$ JAVA7u71
$ emacs &
$ gnome-terminal &
$ gnome-terminal &
$ ^D

However, there is one more useful tip. My script has the command

exec "${@:-bash}"

This by default executes bash. If, however, I wanted to execute just one program instead of bash, I could. I just preface the command with the version of Java I want to run:

$ OPENJDK java -version
$ JAVA7u71 java -version 
$ JAVA8u25 java -version

I can execute specific Java programs and test them with different versions of Java this way. I can also use this in shell scripts.

JAVA java program1
OPENJDK program2

And if program2 is a shell script that executes some java programs, they will use the OpenJDK version.

Using bash tab completion to select which Java version

Also note that you can use tab completion, and if you have 5 different versions of Java 7, in scripts called JAVA7u71, JAVA7u67, JAVA7u72, etc. you could type

$ JAVA7<tab>

Press <tab> twice and the shell will show you which versions of Java 7 are available (assuming you created the matching scripts).

The one thing that dynamic switching does not let you do is to save “transient” information like shell history, shell variables, etc. You need another approach to handle that.

Hope you find this useful!
