CBC Padding Oracle Attacks Simplified – Key concepts and pitfalls

There are hundreds of web sites that describe the Padding Oracle attack, but many people find the concept confusing. I am going to try to explain everything you need to know. I am not going to write a bunch of equations to explain it. I’m not going to throw a big complicated diagram in front of you. You don’t have to understand encryption. I’ll just teach it to you one step at a time.  It won’t make your head hurt. I promise.

This will help you write your own tool, or use an existing tool, and I’ll make it as simple as possible, while pointing out all of the tricky bits…

If you remember that A - B == C means you can add B to both sides of the equation to get A == B + C, and if you understand hex values and the Exclusive Or (XOR) function, you have all of the deep knowledge needed to understand the attack.
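If you want to check the XOR version of that rule yourself, bash arithmetic will do it – the ^ operator is XOR (this is just my own illustration):

printf '%02x\n' $(( 0x41 ^ 0x01 ))   # prints 40
printf '%02x\n' $(( 0x40 ^ 0x01 ))   # prints 41 - XORing the same value again undoes it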

Why are Padding Oracle Attacks important?

This type of attack is well known, and a lot of sites are vulnerable to it. It’s a common error. And it’s devastating because it works so fast. To understand the attack, you have to understand a few simple things about cryptography.

What is Cryptographic Padding?

There are two types of encryption – stream based and block based. A stream-based encryption system can be used to encrypt any number of bytes. A block-based encryption algorithm encrypts text in blocks. AES-128 uses 16-byte (i.e. 16*8 or 128-bit) blocks. If you want to encrypt a single character with AES, you get 16 bytes out. Seventeen characters take 2 blocks, or 32 bytes total.

What happens if your cleartext is shorter than a block? It’s padded with extra characters to fill up the block before it’s encrypted because you have to encrypt a block.  These could be null characters, or random characters, but how do you tell the difference between important characters and extra characters?

There are several ways to do this. PKCS#7 is a typical example. If you have 15 bytes and need to add one more byte to fill up the block, you append hex(01). If you need to add 2 bytes, you append hex(02 02). Three bytes requires the 3-byte pad of hex(03 03 03). Note that this allows a form of error checking, because there is some redundancy when more than a single byte is added. If the last byte has the hex value of 04, then the previous 3 bytes must have the same value. If not, then that is a padding error.

If the text fills up the 16-byte block exactly, you add another block that contains the 16 bytes hex(10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10). Each of these bytes has the value 16 (hex 10).

The important point is that you always have padding, from 1 to 16 bytes, added to the last block.

Here is a table that shows how the cleartext is padded before the 16-byte block is encrypted, just in case this isn’t clear.

Table 1 – PKCS#7 padding on a 16-byte block

Cleartext bytes in last block    Padding appended
15                               01
14                               02 02
13                               03 03 03
...                              ...
2                                0e 0e 0e 0e 0e 0e 0e 0e 0e 0e 0e 0e 0e 0e
1                                0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f
0 (exact multiple of 16)         10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
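If you’d rather see the rule as code, here is a minimal bash sketch of PKCS#7 padding. It’s my own illustration (real implementations work on raw bytes; this one only handles printable ASCII input), not production code:

pkcs7_pad() {
    local s=$1
    local pad=$(( 16 - ${#s} % 16 ))   # always 1..16 bytes of padding, never 0
    local hexval ch i
    hexval=$(printf '%02x' "$pad")
    for (( i = 0; i < pad; i++ )); do
        printf -v ch "\\x$hexval"      # one byte whose value equals the pad length
        s+=$ch
    done
    printf '%s' "$s"
}

pkcs7_pad "Hello" | xxd               # 5 bytes of text followed by 11 bytes of 0b
pkcs7_pad "AAAAAAA Pay \$100" | xxd   # a full block, so a whole extra block of 10s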

What is a Padding Oracle?

An Oracle is a system that reveals information. Normally, a system that uses encryption either works or doesn’t. However, if the system reveals extra information – such as whether the padding was valid – then it is called a Padding Oracle.

Hey Oracle! Does this block have proper padding? Tell me!

While the technical term is Padding Oracle, think of it as a blabbermouth.

What is Cipher Block Chaining (CBC)?

Classic block encryption, such as AES, has a problem. AES can be used several ways, but the simplest is called ECB or Electronic Codebook Mode. In this mode, each block of data is transformed independently; no information from a previous block affects the transformation. When you encrypt the same data, the output or ciphertext is identical. If I encrypt 16 “A” characters, the output will be the same every time.

Or to put it another way, every time I encrypt my “Beans and Franks” recipe, the result is identical. If I send it to Alice, and I send to Bob, someone can see that I sent the same recipe to both people. It’s a cookbook. The recipes don’t change as long as we keep the same book.

One way to fix this problem is to combine or “chain” the results of the previous encryption block with the next block, so if the input is repeated, the result is different. One of these methods is called Cipher Block Chaining or CBC.

How does Cipher Block Chaining (CBC) work?

A better explanation of CBC is found on Wikipedia, but the most important point to understand is that when decoding block N, block N is first decrypted, then XOR’ed with the previous ciphertext block N-1.

The cleartext of Block N-1 does not matter when decoding Block N.

This detail is critical. Let me elaborate with an example. When CBC mode is used, you need an Initialization Vector (IV) before you decrypt the first block. In simple terms, this is a block of data that is usually sent as the first block of the data.

  • Block 1 is used as the IV for Block 2.
  • Block 2 is used as the IV for Block 3, etc.

Let’s say a system transmits the encrypted block LCB0byB0aGUgcG9p with an IV of all zeros. The IV should be random, but for the sake of this explanation, let’s assume it’s all zeros. So the decryption system is sent the following 2 blocks of 16 bytes (shown with one character per byte):

 0000000000000000 LCB0byB0aGUgcG9p

Now let’s suppose the second 16-byte block decrypts to “AAAAAAA Pay $100” or hex(41 41 41 41 41 41 41 20 50 61 79 20 24 31 30 30)

The ASCII character for “1” is hex(31) – the 14th byte above. Suppose I want to change the $100 to $500. The ASCII value for “5” is just one bit different, or hex(35) instead of hex(31). In other words, suppose I wanted to modify the encrypted message and change 100 to 500, i.e.:

“AAAAAAA Pay $500” is hex(41 41 41 41 41 41 41 20 50 61 79 20 24 35 30 30)

All I have to do to change the cleartext from $100 to $500 is to flip the matching bit in the Initialization Vector and send:

0000000000000400 LCB0byB0aGUgcG9p

In other words:

When I invert a single bit in the previous block,  the matching bit in the decrypted cleartext will be inverted.

It may seem unbelievable, but that’s all it takes. I don’t have to know the secret key used to encrypt the message to modify it. All I have to do is change the previous block, and this changes the decrypted value of the message. CBC-mode does nothing to check the integrity of the message. It’s vulnerable to this sort of attack. Scary, huh? If I know the cleartext of one block, I can modify it to be anything I want  by manipulating the previous block, without knowing anything about the encryption.
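You can check the arithmetic from the example above yourself – again, it’s nothing but XOR:

printf '%02x\n' $(( 0x31 ^ 0x35 ))   # 04 - the difference between "1" and "5"
printf '%02x\n' $(( 0x31 ^ 0x04 ))   # 35 - XOR 04 into the IV byte and the "1" decrypts as "5"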

What’s required to perform a Padding Oracle Attack

There are two things you need to attack a Padding Oracle:

  • You need a Padding Oracle.
  • You need a way to capture and replay an encrypted message to that oracle.  In other words, you need to be able to send and modify the message to the Oracle.

Once you have that, you can decode the encrypted message.

What is a Padding Oracle Attack?

In the previous attack, I just modified a single bit. But I could modify any byte I want to, trying any of the 256 different values.

Normally, modifying the message does not provide a way to guess the contents, because it’s encrypted and I don’t know what the decrypted message is. However, if we can send the message to the Padding Oracle, and it returns “Good Padding” or “Bad Padding” – then we can defeat the system.

The value hex(01) is valid padding when we have to add a single-byte pad, so if we try all 256 values of the last byte, and one of these returns “Valid Padding”, then we now know what the cleartext of that byte is, thanks to the blabbermouth oracle.

How do you crack the byte once you find it has valid padding?

Let’s assume the byte we are modifying has the value Q (for Question mark), and we are trying all 256 values of R (for Random), so that the result is valid padding, i.e.

Q XOR R == hex(01)

and if you want the mathematical form, that would be

Q ⊕ R == hex(01)

Now it’s time for a little bit of math. Not too much, though. We can XOR the same value into both sides of the equation and the equation will still be true. If we XOR a number with itself, it becomes all zeros, and Q ⊕ hex(00) == Q. Therefore we now know the following:

Q ==  R  ⊕ hex(01)

Or in other words, when we XOR the expected padding to the random guess, we get  the cleartext value of the corresponding byte.

Okay – we are done with the math. You can relax now. The rest is easy.

Once you crack the last byte, how do you crack the previous byte?

Once we know the value of the last byte (byte 16) of block N, we XOR the desired padding for a 2-byte pad into block N-1. The 2-byte padding is hex(02 02). In reality, we need to XOR three values together:

  • The original 16-byte block that precedes the block we are attacking
  • The padding (the number of bytes depends on which pad we are using)
  • The guessed cleartext (initially all nulls)

If, for example, we learn that byte 16 is ‘A’ hex(41), then to guess byte 15, we modify the previous block by XORing the bytes

OriginalBlock XOR hex(02 02) XOR hex(00 41)

But I should make it clear that we are working with full 16-byte blocks:

  • OriginalBlock
  • hex(00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 02)
  • hex(00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 41)

and we XOR all 256 possible values into the byte we are randomly guessing, which is byte 15 in this case:

  • hex(00 00 00 00 00 00 00 00 00 00 00 00 00 00 XX 00)

When one of the values returns valid padding, we know that the cleartext for the 15th byte is R ⊕ hex(02)

And for byte 14, we XOR the newly learned value for byte 15 into the guessed cleartext, use hex(03 03 03) as the padding we XOR, and then we try all values for byte 14. That is, we shift one byte to the left, and change the padding string to the proper length and repeated value, shown in Table 1 above.

If we continue this, we can guess all 16 bytes of the last block.
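To tie the steps together, here is a rough bash-flavored sketch of the loop for one byte. Treat it as pseudocode: ask_oracle is a hypothetical stand-in for your replay mechanism (an HTTP request, a socket write, whatever the target accepts), and the blocks are held as arrays of integer byte values:

# prev[0..15]   = ciphertext block before the one we are attacking
# target[0..15] = ciphertext block we are decoding
# clear[0..15]  = cleartext bytes learned so far
# pos           = byte we are attacking (15 = last byte, then 14, 13, ...)

pad=$(( 16 - pos ))                  # the pad value we are forging
for guess in $(seq 0 255); do
    tweaked=("${prev[@]}")
    # force the bytes we already know to decrypt to the pad value...
    for (( i = pos + 1; i < 16; i++ )); do
        tweaked[i]=$(( prev[i] ^ pad ^ clear[i] ))
    done
    # ...and try the current guess on the byte we don't know
    tweaked[pos]=$(( prev[pos] ^ guess ))
    if ask_oracle "${tweaked[@]}" "${target[@]}"; then   # hypothetical oracle call
        clear[pos]=$(( guess ^ pad ))                    # Q == R XOR pad
        break
    fi
done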

Padding Oracle Attacks do not use encryption

Note that I just use the XOR function. It doesn’t matter what the encryption is. We never need to perform any encryption functions to do the attack. This attack does not require a powerful CPU or crypto accelerator. We just toggle bits, one byte at a time.

Padding Oracle Attacks decode the cleartext without knowing the key

Also – this attack does not use or reveal the encryption key. I can’t determine the key, and can’t use it to decode other messages. I can only attack a message I have captured and can replay.

Padding Oracle Attacks only decode the last block sent

Since we are trying to fake the padding, this only works on the last block of the chain. If more blocks are sent, the oracle only checks the last block for proper padding.

You don’t have to send every block to the Oracle.

The attack works on the last block I sent the oracle, but I don’t have to send all of the blocks. If I have captured 10 blocks, I can break the 10th block. But after I learn what the cleartext of block 10 is, I could just send 9 blocks. Or 8. Or 7. In other words, once I have cracked the last block, I can simply truncate that block from my test, making the previous block (e.g. 9) the new last block.

Because of the way CBC-mode works, all I need to do is send 2 blocks. If I wanted to attack block 7, there is only one requirement – it’s the last block I send. I can send blocks 1 through 7. Or I can just send blocks 6 and 7.

I only need to send 2 blocks;  any 2 consecutive blocks.

In other words, I can crack the blocks in any order I want to. I can just send the IV and the first block and decode block 1. Then I can send blocks 1 and 2 to crack 2.

Padding Oracle Attacks can be completed in less than 256 * Number of Encrypted Bytes attempts

This is one of the reasons the attack is so dangerous. If each test takes 1 ms, then to crack 16,000 characters takes 256 * 16,000 * 1 milliseconds, which is 4,096 seconds, or a little more than an hour.

Instead of trying all 256 values, we can stop once we find a valid pad. Therefore the average number of guesses would be 128, so a more realistic estimate is the above number divided by 2 – roughly half an hour.

There is a special case when guessing the last padded block

If you implement your own version of this attack, you may want to check to see how many times you get a valid pad for each byte. You should get exactly one correct answer. If you get  zero, or more than 1, you have a bug.

There is a special case you should be aware of.

Let’s say the block you are guessing is the last block, that is, the block with proper padding. Let’s assume the block has, for example, 7 bytes of padding, ending in

hex(07 07 07 07 07 07 07)

If we try every combination, one of the combinations will change the last byte so that the block ends in

hex(07 07 07 07 07 07 01)

In other words, there are two values for byte 16 that give you valid padding: the guess that turns the last byte into hex(01), and the zero guess, which leaves the original (already valid) padding untouched. One way to detect this is to check whether the block has valid padding before you start. Another is to ignore the zero guess, since XORing zero simply resends the original message. Either way, when you get the extra hit, ignore it.
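In the loop sketched earlier, the check is easy to add (ask_oracle is the same hypothetical replay call):

# If the unmodified message already has valid padding, we are attacking the
# genuine last block, and the zero guess will produce a false hit - ignore it.
if ask_oracle "${prev[@]}" "${target[@]}"; then
    echo "block already ends in valid padding - expect an extra hit at guess 0"
fi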

Congratulations – you are now an expert in Padding Oracle Attacks


Cataloging SDHC Cards on Ubuntu using a bash script

I have a lot of SD memory cards. I use them for my Camera, my Raspberry Pi, and for my laptop. They are cheap enough, I get several spares. And it makes it easy to convert my Raspberry Pi into different types of systems, allowing me to switch between different projects by just swapping the cards.

But when I grab one, I have to figure out what I last used that card for, whether there is anything worth saving on the card, and how much room is left on the card. Did I use it for a camera? For transferring files? For an RPi system? I could keep a piece of paper with each card, label them, or store them in different places, but like all Unix script writers, I am lazy. I’d rather spend hours debugging a shell script than spend an extra minute every time I remove an SD card.

So I wrote a Bash shell script. To use it, I simply put the SD card into my laptop, and execute

SDCatalog

I might prefer to add a note to indicate something about the card, such as the manufacturer. This is an additional argument that is added to my “catalog”:

SDCatalog SanDisk

And when I am done, I eject the card, put in a new one, and repeat. The “data collection” is simple – my script just uses find to list the file names and stores the results in a file – one file for each SD card. All of the files are stored in a single directory; I use ~/SD to store them.

So what do I collect besides the filenames? I collect (a) the card capacity, (b) the size of the individual partition, (c) how much of the partition is used, and (d) the unique ID of the card. I store this information in the name of the file. And because each catalog file lists the files stored on the card, I can use grep to figure out which card contains which file. For instance, if I am looking for cards formatted for the Raspberry Pi, I can search for /home/pi in the “catalog” using grep:

grep '/home/pi$' ~/SD/*

and the output I get is:

SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a:./home/pi
SD08G_p2_3_80_3d81d9e2-7d1b-4015-8c2c-29ec0675f162:./home/pi
SDC_p2_30_20_af599925-1134-4b6e-8883-fb6a69ad57f2:./home/pi

I used the “_” character as a field separator in the filename. Looking at the first line above, the identifier is “SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a”. There are 5 “fields” separated by “_”, which my script decodes as follows:

SD04G - The string seen when the card is mounted - see dmesg(1)
p2 - which partition
4 - 4GB partition
45 - 45% of the 4GB partition is used
b7b5ddff-ddb4-48dd-84d2-dd47bf00694a - Device ID

The “:” and “./home/pi” are added by grep.
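For instance, a throwaway awk one-liner (my own quick illustration) can decode one of these names:

echo 'SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a' |
awk -F_ '{ printf "card %s, partition %s: %s GB, %s%% used, id %s\n", $1, $2, $3, $4, $5 }'
# card SD04G, partition p2: 4 GB, 45% used, id b7b5ddff-ddb4-48dd-84d2-dd47bf00694a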

It would be trivial to use a language like AWK to parse this and pretty-print the results. I’ll print my awk script at the end of this post. But let me explain part of this script.

When I mounted an SDHC card and typed df, the following was the output:

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda6       47930932  22572060  22917420  50% /
none                   4         0         4   0% /sys/fs/cgroup
udev             3076304         4   3076300   1% /dev
tmpfs             617180      1420    615760   1% /run
none                5120         4      5116   1% /run/lock
none             3085884       240   3085644   1% /run/shm
none              102400        32    102368   1% /run/user
/dev/sda8      252922196 229762840  10304948  96% /home
/dev/mmcblk0p1  31689728   9316640  22373088  30% /media/bruce/9016-4EF8

To catalog this information, we have to parse the last line. The two critical parts are the strings “mmcblk0” and “media” – these may have to be tweaked depending on your operating system. So I made them easy to change with the lines:

SD=mmcblk0
MEDIA=media

I also store the results in a directory called ~/SD – but someone may wish to store them in a different location. So my script defines the location of the log with

LOG=${SDLOG-"$HOME/SD"}

If the environment variable SDLOG is defined, use that value, otherwise use the default.
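You can see the idiom in action at a prompt:

unset SDLOG
LOG=${SDLOG-"$HOME/SD"}; echo "$LOG"   # the default, e.g. /home/bruce/SD
SDLOG=/tmp/cards
LOG=${SDLOG-"$HOME/SD"}; echo "$LOG"   # /tmp/cards - the override wins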

Sometimes it’s easier to describe a card using an optional comment, such as the color, manufacturer, or device where you got the card from. So I define the value of EXTRA with an optional command line argument.

EXTRA=$1

The script then checks if the directory for the log exists, and if not, it creates it.

The script tries to get a manufacturer’s ID for the device using dmesg(1). It does some checking to make sure the card is mounted, that the output file is writable, etc. It also looks for all of the partitions on the card.

But before I show you the script, let me show you a simple awk script called SDCparse that pretty-prints the names of the files used to store the catalog. For instance, the command

ls ~/SD | SDCparse

prints the following on my system – one line for each partition of each card:

      Note   Man.  Partition  Size   Usage ID                            
            00000         p1     2       1 E0FD-1813                     
   SanDisk  SL08G         p1     8       1 6463-3162                     
            SD04G         p1     0      30 3312-932F                     
            SD04G         p1     4      67 957ef3e9-c7cc-4b32-9c77-9ab1cea45a34
            SD04G         p2     4      45 b7b5ddff-ddb4-48dd-84d2-dd47bf00564a
            SD08G         p1     0      18 boot                          
            SD08G         p1     8       1 ecaf3faa-ecb7-41e1-8433-9c7a5d7098cd
            SD08G         p2     3      80 3d81d9e2-7d1b-4015-8c2c-29ec0875f762
              SDC         p1     0      34 boot                          
              SDC         p1    32      30 9016-4EF8                     
              SDC         p2    30      20 af599925-1134-4b6e-8883-fb6a99cd58f1
            SL08G         p1     8       1 6463-3162 

So this lets you see at a glance a lot of information about each SD card. I can easily see that I have 3 8GB cards whose p1 partition only uses 1% of the space. The awk script SDCparse is

#!/bin/sh 
awk -F_ '
BEGIN {
    ST="%10s %6s %10s %5s %7s %-30s\n"
    printf(ST,"Note", "Man.", "Partition", "Size", "Usage", "ID")
}
{ 
    if (NF == 5) {
        printf(ST,"", $1, $2, $3, $4, $5)
    } else if ( NF == 6 ) {
        printf(ST,$1, $2, $3, $4, $5, $6)
    } else {
#        print ("Strange - line has ", NF, "fields: ", $0)
    }
}'

I should mention something I use often. The formatting of both the table headers, and the data, uses the same formatting string “ST”. If I want to change the spacing of the table, I only need to change it in the one line.

The SDCatalog script, which generates the catalog, is not quite so simple, but it seems to be robust.

#!/bin/bash

# Bruce Barnett Tue Oct  7 09:19:41 EDT 2014

# This program examines an SD card, and 
# creates a "log" of the files on the card
# The log file is created in the directory ~/SD 
#    (unless the environment variable SDLOG is defined)
# Usage
#    SDCatalog [text] 
#          where text is an optional field
# Example:
#    SDCatalog
#    SDCatalog SanDisk

# Configuration options
# These may have to be tweaked. They work for Ubuntu 14
# The output of df on Ubuntu looks like this
#/dev/mmcblk0p1   7525000 4179840   2956248  59% /media/barnett/cd3fe458-fc22-48e4-8
#     ^^^^^^^                                     ^^^^^      
#     SD                                          MEDIA
# therefore the parameters I search for are
SD=mmcblk0
MEDIA=media


# Command line arguments
EXTRA=$1 # is there any extra field/comment to be added as part of the filename

# Environment variables

# Note that '~/SD' will NOT work because the shell won't expand '~'
# inside quotes; "$HOME/SD" will always work
LOG=${SDLOG-"$HOME/SD"} # use the $SDLOG environment variable, if defined, else use ~/SD


# Make sure the initial conditions are correct - we have a directory?
if [ -d "$LOG" ] # If I have a directory 
then
    : okay we are all set
else
    if [ -e "$LOG"  ]
    then
        echo "$0: I want to use the directory $LOG, 
but another file exists with that name - ABORT!" 
        exit 1
    fi
    echo "mkdir $LOG"
    mkdir "$LOG" # the directory does not exist
fi


# Now I will execute df and parse the results
# sample results may look like this
#/dev/mmcblk0p1   752000 417940   295248  59% /media/barnett/cd3fe458-fc22-48e4-8

# For efficiency, let's just execute df once and save it
DFOUT=/tmp/${0##*/}.$$.tmp 
# create a temporary filename based on the script name and process ID, e.g.
#       SDCatalog.12345.tmp
trap "/bin/rm $DFOUT" 0 1 15 # delete this temp file on exit

df>$DFOUT

# is there anything mounted?
grep -q "$SD"  <$DFOUT || { echo no SD card mounted ; exit 1 ; } # brace group, not a subshell, so the exit works
# get the manufacturer of the card
# dmesg will report something like
#      [ 8762.029937] mmcblk0: mmc0:b368 SDC   30.2 GiB 
# and we want to get this part           ^^^^
# New version that uses the $SD variable
MANID=$(dmesg | sed -n 's/^.*'"$SD"':.mmc0:\(....\) \([a-zA-Z0-9]*\) .*$/\2/p' | tail -1)
# Get the current working directory
CWD=$(pwd) # Remember the current location so we can return to it



for p in p1 p2 p3 p4 p5 p6 p7 p8
do


    # get the mount point(s) of the card
    MOUNT=$(awk "/$SD$p"'/ {print $6}' <$DFOUT )
    # get the ID of the card
    if [ -n "$MOUNT" ]
    then
        # Get the size of the disk
        SIZE=$(awk "/$SD$p"'/ {printf("%1.0f\n",$2/1000000)}' < $DFOUT)
        # Get the usage of the partition
        USE=$(awk "/$SD$p"'/ {print $5}' < $DFOUT | tr -d '%')
#        ID=$(echo $MOUNT| sed 's:^.*/::')
        ID=$(echo $MOUNT| sed 's:/'"$MEDIA"'/[a-z0-9]*/::')
        # I am going to store the results in this file
        if [ -z "$EXTRA" ]
        then
            X=
        else
            X="${EXTRA}_"
        fi
        OFILE="$LOG/$X${MANID}_${p}_${SIZE}_${USE}_${ID}"
        echo Log file name is $OFILE
        cd "$MOUNT"
        touch $OFILE || { echo cannot write to $OFILE - ABORT ; exit 1 ; }
        echo "sudo find .  >$OFILE"
        sudo find .  >$OFILE
        cd "$CWD"
    fi
done


I hope you find this useful.


Setting up your Linux environment to support multiple versions of Java

Four ways to change your version of Java

Most people just define their JAVA_HOME variable, and rarely change it. If you do want to change it, or perhaps switch between different versions, you have some choices:

  1.  Use update-alternatives (Debian systems)
  2. Define/change your preferred version of Java in your ~/.bash_profile and log out, and re-login to make the change.
  3. Define/change your preferred version of Java in your ~/.bashrc file, and open a new terminal window to make the change.
  4. Define your preferred version of Java on the command line, and switch back and forth with simple commands.

When I use java, I often have to switch between different versions. I may need to do this for compatibility reasons, testing reasons, etc. I may want to switch between OpenJDK and Oracle Java. Perhaps I have some programs that only work with particular versions of Java. I prefer method #4 – doing it on the command line. But by using the scripts below, you can use any of methods 2 through 4 and control your Java version explicitly.

As I mention in method 1, you could, with some versions of Linux, use the command

sudo update-alternatives --config java

Debian systems use update-alternatives to relink some symbolic links in the file system, changing which binary the command “java” executes. I’ll show you a way to get full control using the command line which makes no changes to the file system, allowing you to simultaneously run different versions of Java.

Downloading multiple versions of Java

Let’s assume I’ve just downloaded Oracle jdk7u71 for a 64-bit machine into a tar file. Assume I’ve already created the directory /opt/java. I unpack it using

md5sum ~/Downloads/jdk-7u71-linux-x64.tar.gz 
# Now verify the file integrity by eyeball
cd /opt/java
tar xfz ~/Downloads/jdk-7u71-linux-x64.tar.gz

So now I have the directory /opt/java/jdk1.7.0_71

Let me also download JDK8u25 as well, and store it in /opt/java/jdk1.8.0_25

To make things easier, I’m going to create the symbolic link latest – which will be my “favorite” or preferred version of java in the /opt/java directory.

cd /opt/java
ln -s jdk1.7.0_71 latest

Creating the Java Setup script

Now I am going to create a file called ~/bin/SetupJava which contains the following

#!/bin/echo sourceThisFile
# Bruce Barnett - Thu Nov 20 09:37:45 EST 2014
# This script sets up the java environment for Unix/Linux systems
# Usage (Bash):
# . ~/bin/SetupJava
# Results:
# Modified environment variables JAVA_HOME, PATH, and MANPATH
#
# If this file is saved as ~/bin/SetupJava, 
# then add this line to ~/.bashrc or ~/.bash_profile
# . ~/bin/SetupJava
#

JHOME=/opt/java/${VERSION:=latest}
# In case VERSION is a full path name, I delete everything 
# up to the '//'
JHOME=$(echo $JHOME | sed 's:^.*//:/:' )


# This next line is optional - 
# it will abort if JAVA_HOME already exists
# (return, not exit, since this file is sourced - exit would close your shell)
[ "$JAVA_HOME" ] && { echo JAVA_HOME defined - Abort ; return 1; }

# Place the new directories first in the searchpath 
#  - in case they already exist
# also in case the above line is commented out
#
JAVA_HOME="${JHOME}"   # the JDK root directory, not the java binary
PATH="${JHOME}/bin:$PATH"
MANPATH=":${JHOME}/man:$MANPATH"
export JAVA_HOME PATH MANPATH

There are a lot of things going on here, but first let’s explain the simplest way to use it: simply add the following line to your ~/.bashrc or ~/.bash_profile file:

. ~/bin/SetupJava

And you are done.

BUT we can do much more. First of all, note that this sets up your environment to use the “latest” version of Java, which is defined to be /opt/java/latest. But what if you don’t want to use that version? Note that I use the shell feature:

${variable:=defaultValue}

If the variable VERSION is not defined, the shell script uses the value “latest“. If you want to use a particular version of java, add a new line before you source the file:

VERSION=jdk1.7.0_71
. ~/bin/SetupJava
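As a side note, the := form doesn’t just substitute the default – it also assigns it, so later references see the same value:

unset VERSION
echo "${VERSION:=latest}"   # prints "latest"...
echo "$VERSION"             # ...and VERSION is now actually set to "latest"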

If you’d rather use version 8, then this could be changed to

#VERSION=jdk1.7.0_71
VERSION=jdk1.8.0_25
. ~/bin/SetupJava

Note that you can have several different versions in your ~/.bashrc file, and have all but one commented out. Then, you can open a new terminal window and this window will use a different version of java. But what if you don’t want to exit the existing  window?

Switching between java versions on the command line.

But there is another approach that I like to use. I have created several different shell files. Here’s one called JAVA

#!/bin/sh
VERSION=latest
. ~/bin/SetupJava
exec "${@:-bash}"

Here is another shell script called JAVA7u71 which explicitly executes Java version 7u71

#!/bin/sh
VERSION=jdk1.7.0_71
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one called JAVA8u25

#!/bin/sh
VERSION=jdk1.8.0_25
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one, called OPENJAVA, that executes the OpenJDK version of Java

#!/bin/sh
VERSION=/usr/local/java/jdk1.7.0_67
. ~/bin/SetupJava
exec "${@:-bash}"

Note that I specified a version of java that was not in /opt/java – this is why I used the sed command

sed 's:^.*//:/:'

This deletes everything from the beginning of the line to the double ‘//’ changing /opt/java//usr/local/java/jdk1.7.0_67 to /usr/local/java/jdk1.7.0_67
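You can test that substitution by itself:

echo /opt/java//usr/local/java/jdk1.7.0_67 | sed 's:^.*//:/:'
# /usr/local/java/jdk1.7.0_67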

Using the above commands to dynamically switch Java versions

You are probably wondering why I created these scripts, and what exactly does the following line do?

exec "${@:-bash}"

Note that the script, by default, executes the command “exec bash” at the end. That is, the script executes an interactive shell instead of terminating. So my shell prompt is really a continuation of the script, which is still running. I also place double quotation marks around the variable in case the arguments contain spaces, etc.
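The parameter expansion itself is easy to try at a prompt:

set --                  # simulate no command-line arguments
echo "${@:-bash}"       # bash - the fallback
set -- java -version    # simulate arguments
echo "${@:-bash}"       # java -version - the arguments win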

There are two ways to use these scripts. The first way simply temporarily changes your environment to use a specific version of Java. In the dialog below I execute OpenJDK, Oracle Java 7, and Oracle Java 8,  in that order and type “java -version” each time to verify that all is working properly. I then press Control-D (end-of-file) to terminate the JAVA script, and to return to my normal environment.  The shell prints “exit” when I press Control-D.  So I execute three different shell sessions, type the same command in each one, and then terminate the script: (the $ is the shell prompt)

$ OPENJAVA
$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)
$ exit
$ JAVA7u71
$ java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) Server VM (build 24.71-b01, mixed mode)
$ exit
$ JAVA8u25
$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) Server VM (build 25.25-b02, mixed mode)
$ exit

In other words, when I execute OPENJAVA, JAVA7u71, or JAVA8u25 – I temporarily change my environment to use that particular version of Java. This change remains as long as that session is running. Since the script only really changes your environment variables, these changes are inherited by all new shell processes. Any time a child process executes a Java program, it will use the specific version of Java I specified.

If I want to, I can start up a specific version of Java and then launch several terminals and sessions in that environment

$ JAVA7u71
$ emacs &
$ gnome-terminal &
$ gnome-terminal &
$ ^D

However, there is one more useful tip. My script has the command

exec "${@:-bash}"

This by default executes bash. If, however, I wanted to execute just one program instead of bash, I could. I just preface the command with the version of Java I want to run:

$ OPENJAVA java -version
$ JAVA7u71 java -version 
$ JAVA8u25 java -version

I can execute specific java programs and test them with different versions of Java this way. I can also use this in shell scripts.

#!/bin/sh 
JAVA java program1
OPENJAVA program2

And if program2 is a shell script that executes some java programs, they will use the OpenJDK version.

Using bash tab completion to select which Java version

Also note that you can use tab completion, and if you have 5 different versions of Java 7, in scripts called JAVA7u71, JAVA7u67, JAVA7u72, etc. you could type

$ JAVA7<tab>

Press <tab> twice and the shell will show you which versions of Java 7 are available (assuming you created the matching scripts).

The one thing that dynamic switching does not let you do is to save “transient” information like shell history, shell variables, etc. You need another approach to handle that.

Hope you find this useful!



Setting up the 900 Mhz Freakduino board on Kali Linux

The Freakduino LR is an Arduino board with a built-in 900 MHz radio designed for long range (1 mile). The primary components include

  • CPU: ATMEGA328-QFP32
  • Atmel AT86RF212 900 MHz radio
  • TI CC1190 900 MHz RF front end

This board belongs in the suite of tools you can use to test systems which utilize the 900 MHz RF band. The AT86RF212 radio supports offset quadrature phase-shift keying (O-QPSK) with a fixed chip rate of either 400 kchip/s or 1000 kchip/s. I ordered the Freakduino 900 MHz radio version 2.1a. There are a few steps missing from the installation/usage guide (PDF). Here is how I installed the software on my Kali Linux system. In the process, I also had to install Oracle/Sun’s Java, Apache Ant, and the beta version of the Arduino IDE, without using the various package managers (i.e. from scratch).

Installing Oracle/Sun’s Java on KALI LINUX

I wanted to use the latest version of the Arduino software, which has support for the ARM chip sets (such as Arduino Due and the upcoming Flutter board). I didn’t actually need it for the Freakduino, but if I can use one version of the Arduino IDE for all of the Arduino boards, that’s preferable. In addition, having the latest is always useful. 🙂 My first attempt had some minor problems. The IDE menu bar listing the menu choices “File Edit Sketch Tools Help” was missing! This was caused by using the openjdk version of Java. Apparently there is an incompatibility with giflib 5.1.  The link has a work-around I did not try. I decided to install Sun/Oracle’s Java. There are a few things that had to be done to get this correctly installed.

Installing Oracle/Sun’s Java JDK on Kali Linux

One way to work around the problem is to remove all versions of java. However, there are some disadvantages to this. Another way is to install alternate versions of java, and switch from one to the other as needed. Download the tar.gz version of java from the Oracle/Sun Java download page. Then install it:

tar xfz jdk-7u65-linux-i586.tar.gz
# Decide where to put Sun's java
sudo mkdir /usr/lib/jvm/java-7-sun-i386
sudo mv jdk1.7.0_65/ /usr/lib/jvm/java-7-sun-i386
sudo chown -R root /usr/lib/jvm/java-7-sun-i386 
# you then want to find the current alternatives
sudo update-alternatives --config java

My results said: There are 2 choices for the alternative java (providing /usr/bin/java).

 

Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0          /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java    1061      auto mode
  1          /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java    1061      manual mode
  2          /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java    1051      manual mode

 

Therefore I wanted to use the next higher number – i.e. 3

sudo update-alternatives --install /usr/bin/java java \
   /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/java 3

I typed the update-alternatives command again, and selected version #3. Now when I executed

java -version

I get the right answer!

java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)

We have to repeat this for javac

sudo update-alternatives --config javac # find the next free number
# My system had the highest number of 2, so I used 3
sudo update-alternatives --install /usr/bin/javac javac \
   /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/javac 3 
sudo update-alternatives --config javac # set it to #3

And to test it, I typed

javac -version

And I got as a result:

javac 1.7.0_65

Installing Apache ant on Kali Linux

Now that I have Sun’s Java installed, I wanted to install ant. I didn’t see a package in Kali that had ant, so I downloaded the binary from the ant web site. I then did the following to verify and install the ant binary:

wget https://www.apache.org/dist/ant/KEYS
gpg --import KEYS
wget http://www.trieuvan.com/apache/ant/binaries/apache-ant-1.9.4-bin.tar.gz
wget http://www.apache.org/dist/ant/binaries/apache-ant-1.9.4-bin.tar.gz.asc
gpg --verify apache-ant-1.9.4-bin.tar.gz.asc

I then decided to install ant in the /opt directory

# Unpack and install ant
tar xvfz apache-ant-1.9.4-bin.tar.gz
sudo mkdir -p /opt/ant
sudo mv apache-ant-1.9.4 /opt/ant
sudo chown -R root /opt/ant

To run ant, I needed to define a few new environment variables. You can put these in your shell startup files, or store them in a file and source it into your shell when you need to run/recompile the Arduino program:

# Prepare to run ant
ANT_HOME=/opt/ant/apache-ant-1.9.4/
PATH=$PATH:$ANT_HOME/bin
export ANT_HOME PATH

I called this program ./ant_setup.sh by the way. And to verify that it was installed properly, type:

 ant -diagnostics

Now that we have Java and ant installed, we can now compile the Arduino code from the git source distribution.

Installing the Arduino 1.5 (Beta) software on Kali Linux

# where do you want to build the arduino source?
cd Src
# get the git repository
git clone git://github.com/arduino/Arduino.git # takes a while
cd ./Arduino
# switch over to the beta version
git checkout -t origin/ide-1.5.x 
git pull # just in case
cd build
# You may want to clean the build if you changed anything
ant clean
# and now compile the Arduino code
ant
# to run the IDE, type
ant run

If the Arduino IDE shows up, you are in good shape for the next step!

Installing the Chibi/Freakduino libraries

I followed the Freakduino Installation Guide. I downloaded the v1.04 version (ZIP). Assuming the Arduino source is in ~/Src/Arduino, you can type the following:

cd ~/Src/Arduino/libraries
mkdir Chibi
cd Chibi
wget http://www.freaklabs.org/chibi/2013-10-25_chibiArduino_v1.04.zip
unzip 2013-10-25_chibiArduino_v1.04.zip

When you start up the Arduino IDE, use the following command:

cd ~/Src/Arduino/build
ant run

If successful, you should see a “Chibi” library in the examples. You can select one of them and compile it. But you can’t run it yet because there are a few more things to do. You have to (a) select the proper port, (b) select the proper board and (c) select the proper bootloader. The port is easy. Go to Tools->Port and select “/dev/ttyUSB0”. The board is another issue. The Freakduino board isn’t listed. We have to install the hardware support libraries. Download them using the following steps. This is different from the installation guide.

cd /tmp
wget http://www.freaklabsstore.com/pub/freaklabs_hw.zip
unzip freaklabs_hw.zip
cp -r freakduino freakduino-lr ~/Src/Arduino/hardware/arduino/avr/variants

The issue is that the installation guide is written for Arduino 1.0, not 1.5. Instead of hardware/arduino/variants, the new version supports different types of CPUs, so there is a hardware/arduino/avr/variants and a hardware/arduino/sam/variants. Also, the installation guide says to back up ~/Src/Arduino/hardware/arduino/avr/boards.txt and replace it with the version they provide. DO NOT DO THIS. A “diff” of the two files gives me more than 900 differences. I manually patched the file, adding their changes, and when I ran the program, I got a few errors, including

    Error while uploading: missing 'upload.tool' configuration parameter

and

linux32-run:
     [exec] Board arduino:avr:freakduino doesn't define a 'build.board' preference. Auto-set to: AVR_FREAKDUINO
     [exec] Board arduino:avr:freakduino-lr doesn't define a 'build.board' preference. Auto-set to: AVR_FREAKDUINO-LR

To prevent these errors, do not follow their advice for the installation. Instead, edit the file ~/Src/Arduino/hardware/arduino/avr/boards.txt and add the following lines manually. In particular, note the last three lines of each group. These are the lines I added to eliminate the two errors above

##############################################################

freakduino.name = Freakduino Standard, 5.0V, 8MHz, w/ATMega328P
freakduino.upload.protocol=arduino
freakduino.upload.maximum_size=28672
freakduino.upload.speed=57600
freakduino.bootloader.low_fuses=0xFF
freakduino.bootloader.high_fuses=0xDA
freakduino.bootloader.extended_fuses=0x05
freakduino.bootloader.path=atmega
freakduino.bootloader.file=ATmegaBOOT_168_atmega328_pro_8MHz.hex
freakduino.bootloader.unlock_bits=0x3F
freakduino.bootloader.lock_bits=0x0F
freakduino.build.mcu=atmega328p
freakduino.build.f_cpu=8000000L
freakduino.build.core=arduino
freakduino.build.variant=freakduino
# Bruce Barnett added these lines
freakduino.upload.tool=avrdude
freakduino.bootloader.tool=avrdude
freakduino.build.board=AVR_FREAKDUINO

##############################################################
freakduino-lr.name = Freakduino Long Range, 5.0V, 8MHz, w/ATMega328P
freakduino-lr.upload.protocol=arduino
freakduino-lr.upload.maximum_size=28672
freakduino-lr.upload.speed=57600
freakduino-lr.bootloader.low_fuses=0xFF
freakduino-lr.bootloader.high_fuses=0xDA
freakduino-lr.bootloader.extended_fuses=0x05
freakduino-lr.bootloader.path=atmega
freakduino-lr.bootloader.file=ATmegaBOOT_168_atmega328_pro_8MHz.hex
freakduino-lr.bootloader.unlock_bits=0x3F
freakduino-lr.bootloader.lock_bits=0x0F
freakduino-lr.build.mcu=atmega328p
freakduino-lr.build.f_cpu=8000000L
freakduino-lr.build.core=arduino
freakduino-lr.build.variant=freakduino-lr
# Bruce Barnett added these lines
freakduino-lr.upload.tool=avrdude
freakduino-lr.bootloader.tool=avrdude
freakduino-lr.build.board=AVR_FREAKDUINO-LR

Now start up the Arduino IDE, select the Tools=>Board=>Freakduino Long Range, 5.0V, 8MHz, w/ATMega328P and load the Files=>Example=>Chibi=>chibi_ex01_hello_world1 example. Select verify and upload. You might get the error:

     [exec] Sketch uses 4,084 bytes (14%) of program storage space. Maximum is 28,672 bytes.
     [exec] Global variables use 482 bytes of dynamic memory.
     [exec] avrdude: ser_open(): can't open device "/dev/ttyUSB0": Permission denied
     [exec] ioctl("TIOCMGET"): Inappropriate ioctl for device

This is a permissions problem, which happens the first time you run the software. To fix this, create a file (as superuser) called /etc/udev/rules.d/52-arduino.rules which contains:

SUBSYSTEMS=="usb", KERNEL=="ttyUSB[0-9]*", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="sensors/ftdi_%s{serial}"

You may have to add yourself to the dialout group:

sudo adduser `whoami` dialout

Unplug the device, plug it back in, and you should be good to go!

See https://wiki.archlinux.org/index.php/arduino if you want more info

A sample Freakduino program that transmits on all of the channels

Here is a simple program that just transmits on all of the US channels. I ran “rfcat -s” to watch the spectrum analyzer show me the incoming traffic. You will have to change the baud rate on the serial monitor to match the rate in your program (e.g. 57600). This program prints out the channels, but you can comment out these lines to make it run faster.

I added some sample code to make the program more of a complete program, especially for US users.

#include <chibi.h>
#include <src/chb_drvr.h> // needed for OQPSK_SIN
#include <chibiUsrCfg.h>

#define DEST_ADDR 5 // this will be the address of our receiver

byte channel;

void setup() {
   byte b;
   chibiInit();
// chibiCmdInit(57600);
   Serial.begin(57600);

   // Print the starting channel
   b = chibiGetChannel();
   Serial.println("channel:");
   Serial.println(b);

   chibiSetMode(OQPSK_SIN); // Select the mode for US
   chibiSetChannel(1);
   chibiSetShortAddr(3);
   chibiSetDataRate(0); // 250 kb/s
// chibiSetDataRate(1); // 500 kb/s
// chibiSetDataRate(2); // 1000 kb/s
//
// chibiSetShortAddr(0xAAAA);
   channel = 1;
   chibiSetChannel(channel);
//
// chibiSetChannel(15);
   pinMode(13, OUTPUT);
}

void loop() {
   // transmit on the current channel, then step to the next one
   digitalWrite(13, HIGH);   // turn on the LED while transmitting
   Serial.println("TX on channel");
   Serial.println(channel);
   chibiSetChannel(channel);
   byte dataBuf[100];
   strcpy((char *)dataBuf, "ABCDEFGHIJKLMNOPQRSTUVWXYZ");
   chibiTx(0xBBBB, dataBuf, strlen((char *)dataBuf)+1);
// r = chb_get_rand();
// Serial.println(r);
   digitalWrite(13, LOW);    // turn the LED off by making the voltage LOW
// delay(100);               // wait 100 ms
   channel++;
   if (channel > 10) {
      channel = 1;
   }
}


Creating Table of Contents for static web pages using sed, make, and perl

Earlier, I showed you how I created a multi-page navigation section for static web pages.

But this system has some flaws. I needed better navigation within the web page. I also needed a better way to keep track of my Google ads. And I needed better automation.

Adding a table of contents using hypertoc

I looked around for a program that would do what I wanted, and I installed hypertoc(1) which is part of the perl HTML::GenToc package.  You may have the libhtml-gentoc-perl package available on your system. If not, it’s easy to install:

Installing hypertoc

wget http://search.cpan.org/CPAN/authors/id/R/RU/RUBYKAT/HTML-GenToc-3.20.tar.gz
tar xfz HTML-GenToc-3.20.tar.gz
cd HTML-GenToc-3.20
perl Build.PL
./Build
./Build install

There are a lot of options with hypertoc(1). Here is a section of shell code I used to generate the table of contents. I used hypertoc(1) as a filter, as I don’t like in-line editing of files. I passed the input filename as an argument (the variable $IFILE), and I piped the modified file to standard output.

I used the string ‘<!--toc-->’ in my HTML page to mark where I wanted the table of contents to be inserted.

Here are the key arguments to hypertoc(1) as I used them:

ARGS="--toc_entry 'H1=1' --toc_end 'H1=/H1' --toc_entry 'H2=2' --toc_end 'H2=/H2' \
--toc_entry 'H3=3' --toc_end 'H3=/H3' --toc_entry 'H4=4' --toc_end 'H4=/H4' \
--toc_entry 'H5=5' --toc_end 'H5=/H5'"
# The string !--toc-- is used as a marker to insert the new Table of Contents 
TOC="--toc_tag '!--toc--' --toc_tag_replace"
eval hypertoc $ARGS $TOC --make_anchors --make_toc --inline --outfile - $IFILE

This will look at all of the <h1> to <h5> sections, and create a list of links at the top of the page that point to the sections below. There is a problem with this, but I will address it later.

Inserting Google ads into a web page automatically

So I have a section that makes it easier to navigate to other pages, and a second one that navigates to the sections on the same page. Intra-page and inter-page navigation is done. The next thing I wanted to do was to make it easier and cleaner to add Google Ads to a web page. I store my ads in the files ./Ads/GoogleAd1 and ./Ads/GoogleAd2

So now my static pages have a structure like the one below:

<!-- INCLUDE Navigation -->
<div id="centerDoc">
<h1>Title</h1>
<!-- Insert an ad -->
<!-- INCLUDE GoogleAd1 -->
<!-- Insert my table of contents here -->
<!--toc-->
<h2>More HTML code here</h2>
....
<!-- insert a second ad -->
<!-- INCLUDE GoogleAd2 -->
<p>My blog is <a href="http://BLOG">here</a>

The INCLUDE and toc comment lines are special – they will be modified by my ‘include’ script below. This looks much cleaner, and it’s easier to keep track of which ad is inserted, and where, as a name is used instead of cutting and pasting a blob of text.

Adding a link back to the top of the Table Of Contents

One thing I liked about the troff2html program is that it added a link in each subsection to the top of the page where the Table of Contents is located. I wanted to add this capability.

I used a sed script that modifies the output of hypertoc(1). The key sections are below

# Quick and dirty way to add a way to get back to the Toc from an Entry 
# 1) put a marker in the beginning of the ToC 
 s/<h1>Table of Contents/<h1><a name=\"TOC\">Table Of Contents/ 
# 2) Add a link back to the ToC from each entry 
 s:\(<h[1234]>\)<a name=:\1<a href=\"$OFILENAME#TOC\" name=:g

hypertoc outputs “Table of Contents”, so I search for this and add the <a name=”TOC”> to this section. I also searched for all of the subsections, and when you click on the subsection name, you go back to the top.
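For example, given a heading that hypertoc has already anchored, the second substitution turns it into a link back to the top (page.html stands in for $OFILENAME here):

echo '<h2><a name="intro">Introduction</a></h2>' |
sed 's:\(<h[1234]>\)<a name=:\1<a href="page.html#TOC" name=:'
# <h2><a href="page.html#TOC" name="intro">Introduction</a></h2>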

Here is the improved “include” script

#!/bin/sh 
#This script modifies HTML pages statically, using something similar 
# to the "#INCLUDE" C preprocessor mechanism 
INCLUDE=${1?'Missing include file'}
shift
IFILE=${1?'Missing input file'}

OFILE=`echo $IFILE | sed 's/\.in$//'`
# get the name without the path 
OFILENAME=`echo $OFILE | sed 's:.*/::'`
if [ "$IFILE" = "$OFILE" ]
then
 echo input file $IFILE same as output file $OFILE - exit
 exit
fi

blog=grymoire.wordpress.com
ARGS="--toc_entry 'H1=1' --toc_end 'H1=/H1' --toc_entry 'H2=2' --toc_end 'H2=/H2' \
--toc_entry 'H3=3' --toc_end 'H3=/H3' --toc_entry 'H4=4' --toc_end 'H4=/H4' \
--toc_entry 'H5=5' --toc_end 'H5=/H5'"
# The string !--toc-- is used as a marker to insert the new Table of Contents 
TOC="--toc_tag '!--toc--' --toc_tag_replace"
eval hypertoc $ARGS $TOC --make_anchors --make_toc --inline --outfile - $IFILE| \
sed "/<!-- INCLUDE [Nn]avigation/ r $INCLUDE 
# Change BLOG URL 
 s/BLOG/$blog/g 
# Quick and dirty way to add a way to get back to the Toc from an Entry  
# 1) put a marker in the beginning of the ToC 
 s/<h1>Table of Contents/<h1><a name=\"TOC\">Table Of Contents/ 
# 2) Add a link back to the ToC from each entry 
 s:\(<h[1234]>\)<a name=:\1<a href=\"$OFILENAME#TOC\" name=:g 
# Include ad named 'GoogleAd1' 
 /INCLUDE GoogleAd1/ { 
 r Ads/GoogleAd1 
 } 
# and GoogleAd2
 /INCLUDE GoogleAd2/ {
 r Ads/GoogleAd2
}
" >$OFILE

Automating everything with a Makefile

As before, my web pages have the name Example.html.in, and the output of the include script is Example.html

I created a rule that will automatically make the *.html files. Here is the Makefile I have in each of my subdirectories:

pages = $(wildcard *.html)
all: $(pages) 
$(pages): %.html: %.html.in
    ../include ../navigation.nav $<


And here is the top level Makefile:

pages = $(wildcard *.html)
SUBDIRS = Unix Security Deception Spam EG Postscript Privacy
all: include navigation.nav $(pages) $(SUBDIRS)
# Handle directories recursively 
.PHONY: subdirs $(SUBDIRS)
subdirs: $(SUBDIRS)
$(SUBDIRS):
 $(MAKE) -C $@
# Building a page automatically 
$(pages): %.html: %.html.in
 ./include navigation.nav $<
install:  myCSS.css all
 cp *.html *.css /var/www/html
 cp Unix/*.html *.css /var/www/html/Unix
 cp Security/*.html *.css /var/www/html/Security
navigation.nav: navigation.txt makenav.pl
 ./makenav.pl <navigation.txt > navigation.nav

You can see an example of a page generated using this code here

 


System Development Lifecycle > Security Development Lifecycle

I was asked to list things I consider when creating/designing a world-class application.

Whew. That’s  a complex question, and worthy of a PhD thesis, book, etc. Still, several things jumped out at me. And I thought it would be worth the time to list them. I hope some of you find this interesting.

When I was a research scientist who built prototypes that demonstrated new technology, I developed prototypes that had some of these features. I’ve also had experience building and supporting commercial products. So I’ve had experience at every Technology Readiness Level.

A lot of people discuss Software Development Lifecycle and Security Development Lifecycle.

In my view, security is a subset of the overall system, so the System Development Lifecycle is a bigger problem. It doesn’t get the attention it needs, because so many companies do a poor job of designing system security. The security of the system is critical – don’t get me wrong. But other parts of the system should also be considered, if you want a world-class product.

No product is perfect. And if a product tries to be perfect, it will likely fail because of excessive requirements. However, these are the things I have considered in the past, and you may wish to consider them when planning a project.

What should be considered before starting a project?

  • Identify the market segment, and target audience.
  • Identifying the problem. Spend time with the end user to understand the real issues. Realize that the user may not know what the real solution is, but they do know what problems they have. Capture the problems. Find out why existing technology and competing systems aren’t suitable. Verify that the existing technology can’t meet the requirements (the competition may have a feature the user isn’t aware of). If possible, find out the future directions of existing technology, and determine if the future product will meet the requirements of the end users.
  • Investigate current technology and gaps. Study research reports. Do market surveys.
  • Are there any standards that the product is required to meet? Are the standards adequate or incompatible? Is participation in standards committees required? In some cases, it may be necessary to join standards committees to guide the standard in the right direction.
  • Generate reports on current state of the technology. List advantages and disadvantages of different approaches. Do a competitive analysis.
  • The business model should be documented. What are the expected sales? What value would be added to the new system? What advantages would this offer the end user? Does it provide value the customer would pay for? Which features are most desirable?
  • What are the operational and cost requirements necessary for the product to achieve the business goals?
  • Once the business model is created, what are the threats to the business model? What would be the impact of a compromise? What are the security requirements?
  • Identify new technology that needs to be developed. Describe the approach to be used. Describe the operational concept.
  • Review the preliminary documents with peers and experts, and refine and repeat as needed.
  • Propose the project to the management team. Identify necessary resources (funds, skills, etc.)
  • Reach agreement on project plan, with clear guidelines, requirements and metrics. It is preferable to have hard (measurable) metrics that can be used to review the project (performance, time-lines/deadlines, accuracy, precision, false positives, false negatives, stability, etc.)

What frameworks, conventions, tools and standards should be considered for a project?

There are many  development and operational standards (training, coding/compiling, IDE, security, formatting, portability, logging, debugging, GUI, usability, internationalization, libraries, remote support/debug, diagnostics, etc.)  Which ones should be used?
This is not an easy question. Many companies have some sort of framework in place, and stick with it. Newer projects can experiment with new tools and standards. But time, desire, money, dedication, experience and project maturity all affect this decision.

Under ideal conditions, all of these have already been determined and found adequate, but in reality, these standards evolve. Frankly, no project is ever perfect and few teams are problem-free, so evolution should be expected and planned for.

Here is a list I consider.

o   There should be a documentation standard, ideally one that is based on the source code. Any time documentation is split between source code and external files, there is danger that changes to one are not reflected in the other. It’s preferable to have enforced consistency, if documentation is split. Otherwise, create the documentation from the source code (Javadoc, Doxygen, etc.) But the documentation may need to include more than the user/developer guide. It may need to generate information for tools as well – programs that interface to the system.

o   The development framework should include source code control, and bug tracking, and may include resource tracking, scheduling, collaboration, blogging/social networks, etc. (Atlassian products, etc.)

o   Interface standards will need to be developed, which is more than documentation. These standards discuss how to communicate with a component, and these should be computer language independent. Tools will be needed that will use these standards.

o   The development of each component should include supporting components to self-test the component for full functionality. It is important to verify that the component is properly functioning, and that it correctly interfaces to other components. Some developers build tools they use, but these aren’t documented, and may be discarded. These tools should be built into a robust self-test system.

o   The component self-test framework should include parameter (range/limit) testing, and protocol fuzzing. A system should be in place to measure completeness of these tests, as well as compliance. Earlier I mentioned the need for interface documentation that is independent of the computer language. Other team members may wish to write raw packets using, for example, perl (Net::RawIP), python (scapy), or C (raw sockets), or protocol fuzzers like Sulley (python) or SPIKE (C). There are hundreds of fuzzing tools out there, and integrating the project with fuzzing tools will simplify the testing (see the short scapy sketch after this list). In addition, having packet decoders can also be useful in testing and maintenance, so writing extensions to Wireshark would be useful.

o   The product should have a defined operational lifecycle, where the operational state of the product is defined, and the behavior of the system is based on that state. Typically, products have two operational states – normal and debug. In reality, complex systems should have multiple states, and developers should modify their responses based on the operational state. Some of these states may include design, development, development debug, stress testing, regression testing, fuzz testing, integration testing, operational, heightened awareness, active attack, etc. For example, during development and debugging, error responses may include detailed and verbose information. During normal operation, this information may be unnecessary, and in fact may reveal too much to an attacker who is doing information gathering. Another example involves the response to a "brute-force" attack. If the system is undergoing regression testing or protocol fuzzing, it may respond one way. However, if the system is in operational mode and under active attack, it may introduce time-outs, disconnects, or purposely erroneous results to mislead the attacker. Developers can build this into a system, but only if there is a well-defined operational framework.

o   The development of core components may want to consider special versions or options that can be used as part of system integration. These can control how the component responds in a controlled and predictable fashion, allowing other components to generate test suites where they react to feedback from the component. This would allow other team members to develop their components independently. As an example, if a component is designed to detect unusual events, a variation of that component can report events in a controlled and predictable manner, such as the number and type of events per unit time. This can be convenient during stress and performance testing. Alternatively, if structured data is output, part of the component development could include generating data with specific attributes, which can populate the database with a precise data organization.

o   Core components should also consider having optional instrumentation to allow for diagnostics, timing, performance and anomaly analysis. For instance, it can be convenient to be able to adjust the verbosity and detail of information during operation.

o   A forensics framework integrated into the product may be needed. This can be used to capture information used for legal action, or for tracking down intrusions.

o   If the system is designed to work over a distributed environment, modules could be built that have particular characteristics, such as time delays, time-outs, bandwidth-constrained networks, dropped packets, etc. This can be used to emulate actual networked conditions. Peter Deutsch’s Fallacies of Distributed Computing should be considered when designing a system. As an example, if a system can be instrumented to create typical network failures such as dropped packets, high latency, and limited bandwidth, developers would be more sensitive to the problems users are likely to face in real world situations.

o   It may be desirable to create emulators for systems with hardware interfaces. These models can be used to run the system when the hardware isn't available. It is often desirable that the emulator keeps track of the hardware state, and can detect conditions that could result in real-world damage caused by changes in the hardware state (e.g. industrial systems).

o   If necessary, components should be designed with portability control and verification in mind. Generally, if multiple platforms or interfaces are supported, there should be regular builds to verify that code changes don't violate the standards. There needs to be a tool to help manage this.

o   Quality and security regression testing should also be standardized. If the system must meet specific metrics, then the regression testing should exercise the system to determine whether the specifications can be met. This can provide an early warning if there are performance issues, etc.

o   Components can be instrumented to provide trust measurements during system usage. This becomes more important in the case of multiple-authorities with varying degrees of trust. As an example, systems can be instrumented to provide data provenance and information assurance, allowing trust in the data to be measured.

o   Metrics on the tools and standards should be collected during the project life-cycle, so that problems with the tools and frameworks can be measured and/or corrected. Tools may need to be improved during the development life-cycle. In other words, tools to measure the tools should be developed.

o   The system may need instrumentation so that it can monitor its own health. If resources become limited, or unavailable, the system may want to behave differently, such as doing dynamic resource reallocation, load balancing, etc.

o   In the case of large complex systems with multiple components and/or authorities, the system could be designed to detect compromise, and change behavior in response. Dynamic firewalls could be modified to isolate infected systems. Special instances in honeypots can be created and the system can redirect all traffic from a compromised system into the honeypot. A honeypot system could provide false information to prevent intruders from discovering they have been discovered. This depends on the operational state of the system as described in the system life-cycle. For example, one of the operational states may be “under attack” and the system may respond differently based on this information. This would use the operational framework I mentioned before.

o   Some systems may have complex remote management and diagnostic requirements, where access to operational systems is isolated from the developers. If so, remote diagnostic and management mechanisms may need to be developed that allow remote systems to be instrumented under conditions where there are customer privacy and operational security requirements.

o   Customer privacy may also be a concern, and special monitoring may be necessary. There may be a need to ensure data is isolated, such as in database systems whose clients compete with each other. There may need to be special anonymization mechanisms. Medical systems may need de-anonymization mechanisms.

o   Another problem that should be considered is the need for isolation between instances of the system, especially if multiple instances are in use simultaneously. Consider the problem of someone building the system in a way that changes or potentially corrupts a database which happens to be used by another developer. In distributed systems, this can become complex because there can be multiple databases, servers, etc. Certain team members may need to share components that behave differently. There needs to be a flexible or even dynamic configuration management system.
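
As promised above, here is a minimal sketch (in Python, using scapy) of the kind of raw-packet testing and fuzzing a self-test framework could build on. The target address and port are placeholders, not real values:

from scapy.all import IP, TCP, Raw, fuzz, send

target = "192.0.2.10"  # placeholder lab address (TEST-NET-1); sending raw packets needs root

# Hand-craft a raw packet with a deliberately unusual payload...
send(IP(dst=target)/TCP(dport=8080, flags="S")/Raw(load=b"\x00" * 64))

# ...or let scapy randomize every TCP field that wasn't set explicitly,
# which amounts to a quick-and-dirty protocol fuzzer.
for _ in range(100):
    send(IP(dst=target)/fuzz(TCP(dport=8080)))

A robust self-test system would wrap loops like this with checks that the component under test stays up and keeps responding correctly.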

Once the frameworks and standards have been determined, what steps should be considered when developing the project?

Now that the initial framework has been selected, the project can be implemented. Of course, the above standards should be considered to be evolving, as the project matures.

Begin to assemble the team. Estimate the necessary resources. Create schedules and do resource allocation. Get approval for the proposed schedule. Acquire the available resources. Locate and train team members.

  • The overall architectural design should be documented, reviewed, and approved.
  • Once approved, there can be a team kick-off meeting. The team dynamics, meeting schedules, and initial standards and disciplines need to be discussed.
  • As the project progresses, reviews of the progress are important. The time line, schedule, priorities, budget and outside influences can change the requirements. The project requirements, specifications and standards may need to be modified as the project progresses. Team members may also need additional training, and talent located and developed.
  • Early on, the major interfaces need to be specified and controlled. The security assumptions and requirements must be explicit. They must be reviewed and approved every time the interfaces are modified.
  • As part of the development process, team members trained in security should be reviewing the interfaces, and providing feedback to the developers and to those generating interface test components.
  • Part of the project may include a red team examination and testing of components of the system, considering the attack surface of the interfaces. Major security problems should be identified early and addressed.
  • Keeping the deliverables in mind, the team should consider the end result, be it a demo, working prototype, or production system. The final objectives and goals should be well communicated, and the team should be focused on the goals, which consist of required objectives and optional objectives.
  • If the deliverable is for a production system, a testing, verification and transition plan should be created.
  • A test lab or environment needs to be created and managed. If possible, the environment should be virtualized, and reproducible.
  • The production evaluation environment needs to be specified and arranged. The testing mechanism has to be carefully designed. Would this test interfere with the production environment? How can the new system be tested? Production systems are often not controlled and/or repeatable. Therefore performing testing with and without new technology is difficult. It may be necessary to capture and replay/reproduce the production environment (i.e. replaying packets). Security systems may prevent as well as detect. If the system prevents or modifies the production system, it may be necessary to either duplicate the system(s) being affected (so two systems run in parallel), or instrument it to accept feedback from the new system.
  • End user training and user interfaces need to be verified and monitored as the system is used in production mode. The end user interfaces should be instrumented to keep track of usability. The user interface can be instrumented to measure user productivity, such as determining how long certain tasks take to complete. Keyword searches can be captured, and the final destination within the documentation may provide insight on how well the on-line documentation works.
  • The transition plan needs to be well documented. Once a system has been transitioned into production, the system should be monitored for performance, accuracy, etc. Certain high-value customers should be engaged to evaluate the technology. Alternatively, cloud-based production testing can use live A/B testing, phasing in new technology to small groups at a time.
  • The entire development cycle should be on-going, and repeated for the life of the project.
  • Near the end of the project, a transition plan needs to be created that can assist users in migrating to new systems.

As I said – this requires a book to cover all of the information. But I hope this gives you something to think about.

 


The Top Eleven Reasons why Security Experts get no Respect

Let's face it – being a security expert is difficult. The technology of security is hard, but dealing with people, especially people who don't work in the security field, is far harder. Why is that, you say? I have a list.

With a nod to David Letterman and Rodney Dangerfield, I present my list of reasons security experts get no respect.

#11 – You never have good news.

All you have to do is walk into your manager’s office, and sit down with a serious expression. There’s no need to say anything. Your boss will know. “Oh God. Now what?”

It’s not like you are going to say “We don’t need to buy any new hardware” or “Our people will meet the schedule.” Of course not. That never happens.

It’s no wonder your boss wishes your office was on the far side of the moon.

#10 – Others don't understand you.

As soon as you start talking about the technology of security, like key exchanges, passing the hash, entropy, transport security, padding oracle attacks, and so on, you might as well be talking in Latin. A sure warning sign is the boss asking for a whiteboard diagram, along with an aspirin.

#9 – Any problem costs money

A software engineer can add a new feature to a system, and people will pay for it. But some security protections will remove features – and that’s bad news. No one wants to spend more money and get fewer functions.

Even security patches are a problem. If customers have to pay to fix something that should never have happened in the first place, the customers get upset. And if this disrupts their business – that’s even worse.

Even if the problem is internal, it will likely need time and/or money to fix.

So in short, you bring bad news no one can understand, and it will cost money. It’s no wonder your boss doesn’t want to see you.

#8 – You can’t talk about any hacker activity.

Now suppose you discover someone hacked into your system. This is one of the most interesting things that can happen to a security expert. So naturally you can’t talk about it.  This might affect company sales or stock prices, you see. You have to learn to emulate Sergeant Schultz.

#7- You can’t talk about any vulnerabilities in your systems.

And the same thing is true if you discover a weakness yourself and get it fixed. If it’s in a web service, it’s best to pretend nothing happened. And if it’s in a product, then that’s even worse. You don’t want to be responsible for telling hackers how to break into the old systems. Your customers might get upset. Loose Lips Lose Customers.

#6 – You can’t share your tools with your peers.

Suppose you develop a neat tool that tests the security of your system. While other professionals might gain respect by sharing cool tools, if a security professional publishes a hacking tool, someone might use that tool for evil purposes!! Managers have one word in their minds – “lawsuit!”  So if you develop a cool tool, it’s best if no one knows about it.

#5 – If you do nothing about security – it just gets worse.

Once a technological barrier has been crossed, the job is done. Time to move on.

Unless one deals with security.

To quote the NSA: "Attacks always get better; they never get worse."

A perfectly secure solution for 2004 is a security nightmare for a 2014 system. New tools, new attacks, and clever programming will decimate the security of an old system. In any other field, people can look back at a past success and think "That was a good system." Security is the exception. People with perfect hindsight will gladly point out "You really screwed that one up!"

#4 -You have to run as fast as you can to stay in place.

In most engineering fields, you can learn the basics, and become an expert in a single area. And one can have a nice career getting better in one niche area.

But if you are responsible for security, the rules are different. You have to continuously improve your skills in all areas.

In other words, you are always busy. And your boss wonders why you can’t get your work done.

#3 – A flaw is a flaw

In engineering, you can have trade-offs of functionality and features.  You can ask a manager to decide which feature is more important. And they can wait 6 months before adding new features.

Not so with security. All flaws are a crisis. It's true that some may be actively exploited while others are not. But that can change at a moment's notice, especially if the flaw is discovered publicly. Ever notice how people react when a company claims a security flaw is small?

#2 – You have to be perfect to be acceptable.

In some systems, managers will love you if you can improve performance 25% and reduce cost 20%. Or if you had a goal of 75%, and reached 74%. That's pretty darn close.

Close doesn't count in horseshoes and security. It's not like your boss will be happy that you fixed 99.9% of the security problems. Nope. If you are a security expert, you have to be 100.00% perfect. After you walk on water.

And now – the #1 reason why security experts get no respect:

#1 – When you do an absolutely perfect job, nothing happens and nobody notices.

Yup. If no security problems occur, and nothing happens – you are either lucky or extremely gifted. Or perhaps you are deadwood. Who's to know for sure?

So in summary, we have someone whom no one understands, who provides no clear evidence of their worth, yet is always busy doing obscure activities, and always costing the company more money.

Now imagine how your boss describes you to their boss.

[Note – this is something I wrote nearly 6 years ago. I thought others would enjoy it. It's based on my observation of the industry, and not based on my experience with any particular company. :-]


The need for Public Password Policies

After reading the Dashlane report on “The Illusion of Personal Data Security in E-Commerce”, I kept thinking about how developers replicate common security mistakes and that real progress in security rarely occurs.

The industry’s current password policies are a disaster. It seems every week a new company has been hacked. There are services that will check if your password has been leaked. There are dozens of tools that can take “encrypted” passwords and crack them. DEFCON has a password cracking contest every year, and I believe 90% of the companies don’t know how bad their password policies are. There is a more fundamental problem here. A dozen password managers and a dozen reports won’t fix the problem.

In simple terms, we – the users of the Internet – need all web services to have a Public Password Policy with the following characteristics:

  • Each server/service needs to have a well defined password policy.
  • This policy needs to be able to be parsed (and enforced) by computer programs.
  • This policy needs to be available to other computers on the network.

This will provide immense benefits to end users, companies, business partners, software developers, and security service organizations.

Let me describe why.

What are the advantages of a Public Password Policy to an end user?

Let's say I'm a user who wants to create an account on some web service – a bank, a store, a social network, etc. Perhaps I have a choice of stores or banks. As a security-minded individual, I have lots of questions about security, but let's focus on just the password I need to create. Here are my questions:

  1. How confident can I be that my password will resist attacks?
  2. How can I use an automated tool to automatically generate a very secure password?

First of all, I have the right to know how my password is protected on a web site. Is the password stored in plaintext? Is it encrypted or hashed? What is the algorithm? What is the strength of the algorithm? Is it salted with some randomness? Does the web site truncate my password, and if so, what is the maximum number of characters I can type?

If I know that a site has poor password policies, I might change my mind and pick a different store, bank, service, or product.

In addition, I want to know how I can generate a really strong password. I want to know the minimum and maximum password length. I want to know the characters I can use. And I want to know automatically.

I use automated tools to generate passwords

I currently use LastPass to automatically generate passwords. While it has a lot of features, LastPass and the other password managers like KeePass, Dashlane, etc. could be improved dramatically if the site had a Public Password Policy. Currently I have to manually specify the character set and password length when I generate a new password. I have to look at the web site, and then tweak the settings of the tool to match the requirements of the web site. And do you know what's really sad? I often can't tell the maximum password length I am allowed to use.

For those who don't understand password storage, here's a short summary. Security-sensitive web sites often perform a one-way cryptographic hash on a password before storing it in a database. These functions take a large block of data and convert it into a smaller block of a fixed size. Therefore limiting the length of a password doesn't reduce the amount of data stored – the stored size is constant. And the longer the password, the harder it is to guess. Yet I often don't know what the maximum length is when I create a new password.
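
To make that concrete, here is a small Python sketch of salted, hashed password storage using PBKDF2 (one reasonable algorithm choice; the iteration count is just an illustrative value):

import os
import hashlib

password = "plugging thunderless homicidal jackleg"  # a long passphrase
salt = os.urandom(16)  # random salt, stored alongside the hash

# One-way hash: the output is always 32 bytes, whether the
# password is 8 characters long or 80.
digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100000)

print(len(salt) + len(digest))  # 48 bytes stored, regardless of password length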

For instance, I may want to use 4 randomly chosen English words, like “plugging thunderless homicidal jackleg” instead of a 20-character string  like “mJ4m#ronLP75kGadFRho”. Typing special characters on a smartphone can be awkward, and it’s much easier to remember 4 words for a one-time use than 20 random characters.

If I used the 4 words above, and the password was truncated to 8 characters, then my “strong” password would be “plugging” – which can be discovered easily with a password cracking tool. I’d be upset if I thought I picked a strong password that became trivial to guess because the web site hid their policy from me.
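
Here is the same pitfall in two lines of Python – a sketch of what a site that silently truncates at 8 characters actually stores:

passphrase = "plugging thunderless homicidal jackleg"
stored = passphrase[:8]  # silent truncation to 8 characters
print(stored)            # "plugging" -- a single dictionary word, trivially crackable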

Yes, I can pick a default pattern that works for 95% of the web sites, but then I’d be using the lowest common password policy. And how often would I tweak my rules when I visit a different site?

Therefore we need password generation tools that can (a) get the password policy from a web site, and (b) use this information to automatically pick a really strong password, while (c) making it easy to use.
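
Here is a sketch of what (a) and (b) could look like in Python. Everything about the policy – the well-known URL and every field name – is hypothetical, since no such standard exists yet:

import json
import secrets
import string
from urllib.request import urlopen

# Hypothetical well-known location and schema -- invented for illustration.
policy = json.load(urlopen("https://example.com/.well-known/password-policy.json"))

alphabet = string.ascii_letters + string.digits
if policy.get("special_chars_allowed"):          # hypothetical field
    alphabet += policy.get("special_chars", "")  # hypothetical field
length = policy.get("max_length", 20)            # hypothetical field

# Generate a password as long and as varied as the site allows.
password = "".join(secrets.choice(alphabet) for _ in range(length))
print(password)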

Better software integration between the web site and the end user will promote better password managers, and a seamless experience may actually reduce the dangers of password leakage.

But let’s not stop here. Let’s look at it from the perspective of the asset owner:

But a Public Password Policy will let my web site be hacked!!

Yes. It will. But as Dashlane demonstrated, the password policy can be discovered anyway. Trying to hide the policy is security through obscurity, and I always tell people

“Security through obscurity provides temporary security that weakens over time.”

As long as we hide systems instead of using open systems based on Kerckhoffs's Principle, we will be forever in the Dark Ages of computing. We have to evolve to new secure frameworks, and a Public Password Policy would be a step in that direction.

What are the advantages of a Public Password Policy to web site owner?

There are several:

  1. A Public Password Policy forces the business owner to document their policy. They can no longer claim ignorance. A formal description will force organizations to agree to a real, documented policy. People who do risk assessments often get blank looks in response to questions like "What is your security policy?", so requiring a documented policy is already an improvement.
  2. With the right software, the policy can be used to configure the system. Changes in the policy (like requiring a special character) can cause new behaviors in the system (i.e. rejecting new passwords that don't have a special character). Therefore the security policy can be used to change the security posture of the web site.
  3. The policy becomes a product specification, which can be used to select a suitable system and/or configuration. If a product cannot meet certain minimum requirements, it can be discarded early in the design process. RFPs can be sent out with the desired password policy, and vendors can select components that meet the desired objectives.
  4. Managing multiple policies becomes easier. Dealing with a dozen web servers with different policies becomes easier because the policies can be collected, compared, and managed. It becomes easier to find services that have the weakest policy, or those that lack a certain feature. Security management becomes easier.
  5. Business partnerships become easier. If businesses interact, policy mismatches become obvious. Suppose one division requires special characters, while another does not allow them. If this occurs, then you can't merge the two systems into a new service without requiring one group to reset everyone's passwords.
  6. Audits become easier. Not only can outside agencies examine policies and offer recommendations using automated tools, but outside experts can use tools that validate that the policy is enforced. Better tools will simplify this task.

So end users benefit, and web site owners benefit. But there’s more.

How does a Public Password Policy affect software developers?

  1. Software developers can be given a formal specification that will guide the development and configuration of the software used to approve/reject passwords, along with the storage mechanism, algorithm, etc.
  2. A formal specification will encourage modular and replaceable components for managing passwords. A company can state what their policy is, and then shop around for components that meet the specification.
  3. Using the specification to control the configuration encourages software developers to build flexible password management systems that can be configured for varying degrees of protection.
  4. If the language describes features that the software doesn't support (e.g. a salt used before hashing), the software developer will be encouraged to add new capabilities to keep up with evolving requirements. This would also encourage drop-in replacements for existing systems that need to be compatible with existing usage, but provide the capability to ramp up the protection over time, e.g. replacing MD5-based passwords with SHA-1, SHA-256, or PBKDF2.

What’s next?

The first step is realizing that there is a need. I hope this encourages others to discuss the need for Public Password Policies.

I realize this won’t have a major impact – it’s not going to make a big dent in all of the security problems. But it’s a step to a more secure framework. We need to have a security policy that can control the configuration, which controls the implementation.

How could this be implemented? There are several ways. A simple text file can be stored on a web server at a specific URL, or a special port and service can be used. The information can be stored in YAML, JSON or XML. Prototypes need to be built and proven. The IETF can generate an RFC.
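
For example, a JSON policy file might look something like this (every field name here is invented, purely to show the idea):

{
  "min_length": 12,
  "max_length": 128,
  "truncated": false,
  "allowed_characters": "a-z A-Z 0-9 !@#$%^&*",
  "special_char_required": true,
  "storage": {
    "hashed": true,
    "algorithm": "PBKDF2-SHA256",
    "salted": true,
    "iterations": 100000
  }
}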

Perhaps a better solution is to have a formal language and use the Semantic Web as a mechanism to specify and verify the language. This would not only allow syntactic and semantic errors to be identified, but rules could be created that examine and evaluate policies, which can provide metrics, recommendations, warnings, etc.

But to be really successful, a language has to be developed that can be understood by policy holders (i.e. not geeks) while being understandable by computers. I've used SADL, and perhaps it could be used here.

I’m looking forward to your thoughts.


Improving the HTTPS of Firefox using HowsMySSL.com and about:config

The web site HowsMySSL gives Firefox 26.0 a score of BAD. That’s not good.

Here’s how to fix it.

Type "about:config" in your browser URL bar. This goes to the configuration page for Firefox. When you get a warning, accept it and continue.

Enable TLS 1.2, and disable SSL 3.0

Search for "tls", and you will see the following entries:

security.tls.version.max
security.tls.version.min

Double-click on the "max" value and change it to "3".

Double-click on the "min" value, and change it to "1".

(For these preferences, 0 = SSL 3.0, 1 = TLS 1.0, 2 = TLS 1.1, and 3 = TLS 1.2. So this makes TLS 1.2 the maximum, and drops SSL 3.0 by making TLS 1.0 the minimum.)

That fixes the TLS problem.

Eliminate 3DES from your cryptosuite

Search for "_des_", and you should see this list:

security.ssl3.dhe_dss_des_ede3_sha
security.ssl3.dhe_rsa_des_ede3_sha
security.ssl3.ecdh_ecdsa_des_ede3_sha
security.ssl3.ecdh_rsa_des_ede3_sha
security.ssl3.ecdhe_ecdsa_des_ede3_sha
security.ssl3.ecdhe_rsa_des_ede3_sha
security.ssl3.rsa_des_ede3_sha
security.ssl3.rsa_fips_des_ede3_sha

Double-click each one, setting them to "false".

3DES (Triple DES) is an obsolete encryption algorithm. It should not be used.
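
If you would rather script this than click through about:config, the same preferences can be set in a user.js file in your Firefox profile directory. This is a sketch – the pref names are the ones above, but double-check them against your Firefox version:

user_pref("security.tls.version.min", 1);  // TLS 1.0 minimum (drops SSL 3.0)
user_pref("security.tls.version.max", 3);  // TLS 1.2 maximum
user_pref("security.ssl3.dhe_dss_des_ede3_sha", false);
user_pref("security.ssl3.dhe_rsa_des_ede3_sha", false);
user_pref("security.ssl3.ecdh_ecdsa_des_ede3_sha", false);
user_pref("security.ssl3.ecdh_rsa_des_ede3_sha", false);
user_pref("security.ssl3.ecdhe_ecdsa_des_ede3_sha", false);
user_pref("security.ssl3.ecdhe_rsa_des_ede3_sha", false);
user_pref("security.ssl3.rsa_des_ede3_sha", false);
user_pref("security.ssl3.rsa_fips_des_ede3_sha", false);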

Now go back to https://www.howsmyssl.com/ and you should pass this time.

I'd like to thank the author of this blog post:

https://blog.dbrgn.ch/2014/1/8/improving_firefox_ssl_tls_security/

[Update – Brian Pardy's blog post has some more tips]
