Setting up your Linux environment to support multiple versions of Java

Four ways to change your version of Java

Most people just define their JAVA_HOME variable, and rarely change it. If you do want to change it, or perhaps switch between different versions, you have some choices:

  1. Use update-alternatives (Debian systems)
  2. Define/change your preferred version of Java in your ~/.bash_profile, then log out and log back in to make the change take effect.
  3. Define/change your preferred version of Java in your ~/.bashrc file, then open a new terminal window to make the change take effect.
  4. Define your preferred version of Java on the command line, and switch back and forth with simple commands.

When I use Java, I often have to switch between different versions. I may need to do this for compatibility reasons, testing reasons, etc. I may want to switch between OpenJDK and Oracle Java. Perhaps I have some programs that only work with particular versions of Java. I prefer doing this using method #4 – on the command line. But by using the scripts below, you can use any of the last three methods and control your Java version explicitly.

As I mention in method #1, you could, with some versions of Linux, use the command

sudo update-alternatives --config java

Debian systems use update-alternatives to relink some symbolic links in the file system, changing what command “java” executes. I’ll show you a way to get full control using the command line, one which makes no changes to the file system, allowing you to simultaneously run different versions of Java.
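If you’re curious what “java” currently resolves to, you can follow the chain of symbolic links yourself (a quick check; the output paths vary by distribution):

which java
readlink -f $(which java)

On a Debian-style system the first command typically prints /usr/bin/java, and the second shows the actual binary behind the alternatives links.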

Downloading multiple versions of Java

Let’s assume I’ve just downloaded Oracle JDK 7u71 for a 64-bit machine as a tar file, and that I’ve already created the directory /opt/java. I unpack it using

md5sum ~/Downloads/jdk-7u71-linux-x64.tar.gz 
# Now verify the file integrity by eyeball
cd /opt/java
tar xfz ~/Downloads/jdk-7u71-linux-x64.tar.gz

So now I have the directory /opt/java/jdk1.7.0_71

Let me download JDK 8u25 as well, and store it in /opt/java/jdk1.8.0_25.

To make things easier, I’m going to create a symbolic link named latest in the /opt/java directory – it will point to my “favorite” or preferred version of Java.

cd /opt/java
ln -s jdk1.7.0_71 latest
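Later, when my preferred version changes, I only have to repoint the link (a quick sketch; the -n flag keeps ln from descending into the existing link):

cd /opt/java
ln -sfn jdk1.8.0_25 latest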

Creating the Java Setup script

Now I am going to create a file called ~/bin/SetupJava which contains the following

#!/bin/echo sourceThisFile
# Bruce Barnett - Thu Nov 20 09:37:45 EST 2014
# This script sets up the java environment for Unix/Linux systems
# Usage (Bash):
# . ~/bin/SetupJava
# Results:
# Modified environment variables JAVA_HOME, PATH, and MANPATH
#
# If this file is saved as ~/bin/SetupJava, 
# then add this line to ~/.bashrc or ~/.bash_profile
# . ~/bin/SetupJava
#

JHOME=/opt/java/${VERSION:=latest}
# In case VERSION is a full path name, I delete everything 
# up to the '//'
JHOME=$(echo $JHOME | sed 's:^.*//:/:' )


# This next line is optional -
# it will abort if JAVA_HOME is already defined.
# (If you source this file from ~/.bashrc, consider 'return 1' instead of 'exit 1'.)
[ "$JAVA_HOME" ] && { echo JAVA_HOME defined - Abort ; exit 1; }

# Place the new directories first in the searchpath 
#  - in case they already exist
# also in case the above line is commented out
#
JAVA_HOME="${JHOME}"     # the JDK root directory, not the java binary
PATH="${JHOME}/bin:$PATH"
MANPATH=":${JHOME}/man:$MANPATH"
export JAVA_HOME PATH MANPATH

There are a lot of things going on here, but first let’s explain the simplest way to use it: simply add the following line to your ~/.bashrc or ~/.bash_profile file:

. ~/bin/SetupJava

And you are done.

BUT we can do much more. First of all, note that this sets up your environment to use the “latest” version of Java, which is defined to be /opt/java/latest. But what if you don’t want to use that version? Note that I use the shell feature:

${variable:=defaultValue}

If the variable VERSION is not defined, the shell script uses the value “latest”. If you want to use a particular version of Java, add a new line before you source the file:

VERSION=jdk1.7.0_71
. ~/bin/SetupJava

If you’d rather use version 8, then this could be changed to

#VERSION=jdk1.7.0_71
VERSION=jdk1.8.0_25
. ~/bin/SetupJava
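You can watch this default mechanism at work directly in a shell:

$ unset VERSION
$ echo ${VERSION:=latest}
latest
$ VERSION=jdk1.8.0_25
$ echo ${VERSION:=latest}
jdk1.8.0_25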

Note that you can keep several different VERSION lines in your ~/.bashrc file, with all but one commented out. Change which one is uncommented, open a new terminal window, and that window will use a different version of Java. But what if you don’t want to exit the existing window?

Switching between Java versions on the command line

There is another approach that I like to use. I have created several different shell scripts. Here’s one called JAVA:

#!/bin/sh
VERSION=latest
. ~/bin/SetupJava
exec "${@:-bash}"

Here is another shell script called JAVA7u71 which explicitly executes Java version 7u71

#!/bin/sh
VERSION=jdk1.7.0_71
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one called JAVA8u25

#!/bin/sh
VERSION=jdk1.8.0_25
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one, called OPENJDK, that executes the OpenJDK version of Java:

#!/bin/sh
VERSION=/usr/local/java/jdk1.7.0_67
. ~/bin/SetupJava
exec "${@:-bash}"

Note that I specified a version of Java that was not in /opt/java – this is why I used the sed command

sed 's:^.*//:/:'

This deletes everything from the beginning of the line up to the double slash ‘//’, changing /opt/java//usr/local/java/jdk1.7.0_67 into /usr/local/java/jdk1.7.0_67.
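You can verify this on the command line:

$ echo /opt/java//usr/local/java/jdk1.7.0_67 | sed 's:^.*//:/:'
/usr/local/java/jdk1.7.0_67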

Using the above commands to dynamically switch Java versions

You are probably wondering why I created these scripts, and what exactly the following line does:

exec "${@:-bash}"

Please note that the script, by default, executes the command “exec bash” at the end. That is, the script executes an interactive shell instead of terminating. So my shell prompt is really a continuation of the script, which is still running. I also place double quotation marks around the variable in case the arguments contain spaces, etc.
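You can see what "${@:-bash}" expands to with a quick experiment (sh -c assigns its remaining arguments to $1, $2, and so on; the word demo merely fills the $0 slot):

$ sh -c 'printf "%s\n" "${@:-bash}"' demo
bash
$ sh -c 'printf "%s\n" "${@:-bash}"' demo java -version
java
-version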

There are two ways to use these scripts. The first way simply temporarily changes your environment to use a specific version of Java. In the dialog below I execute OpenJDK, Oracle Java 7, and Oracle Java 8, in that order, and type “java -version” each time to verify that all is working properly. I then press Control-D (end-of-file) to terminate the wrapper script, and to return to my normal environment. The shell prints “exit” when I press Control-D. So I execute three different shell sessions, type the same command in each one, and then terminate the script (the $ is the shell prompt):

$ OPENJDK
$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)
$ exit
$ JAVA7u71
$ java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) Server VM (build 24.71-b01, mixed mode)
$ exit
$ JAVA8u25
$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) Server VM (build 25.25-b02, mixed mode)
$ exit

In other words, when I execute OPENJDK, JAVA7u71, or JAVA8u25, I temporarily change my environment to use that particular version of Java. This change remains as long as that session is running. Since the script only really changes your environment variables, these changes are inherited by all new shell processes. Any time a child process executes a Java program, it will use the specific version of Java I specified.

If I want to, I can start up a specific version of Java and then launch several terminals and sessions in that environment

$ JAVA7u71
$ emacs &
$ gnome-terminal &
$ gnome-terminal &
$ ^D

However, there is one more useful tip. My script has the command

exec "${@:-bash}"

This by default executes bash. If, however, I wanted to execute just one program instead of bash, I could. I just preface the command with the version of Java I want to run:

$ OPENJDK java -version
$ JAVA7u71 java -version 
$ JAVA8u25 java -version

I can execute specific Java programs and test them with different versions of Java this way. I can also use this in shell scripts.

#!/bin/sh
JAVA java program1
OPENJDK program2

And if program2 is a shell script that executes some java programs, they will use the OpenJDK version.

Using bash tab completion to select which Java version

Also note that you can use tab completion. If you have 5 different versions of Java 7, in scripts called JAVA7u71, JAVA7u67, JAVA7u72, etc., you could type

$ JAVA7<tab>

Press <tab> twice, and the shell will show you which versions of Java 7 are available (assuming you created the matching scripts).

The one thing that dynamic switching does not let you do is to save “transient” information like shell history, shell variables, etc. You need another approach to handle that.

Hope you find this useful!


Setting up the 900 Mhz Freakduino board on Kali Linux

The Freakduino LR is an Arduino board with a built-in 900 MHz radio designed for long range (1 mile). The primary components include:

  • CPU: ATMEGA328-QFP32
  • Atmel AT86RF212 900 MHz transceiver
  • TI CC1190 900 MHz RF front end

This board belongs in the suite of tools you can use to test systems which utilize the 900 MHz RF band. The AT86RF212 radio supports offset quadrature phase-shift keying (O-QPSK) with a fixed chip rate of either 400 kchip/s or 1000 kchip/s. I ordered the Freakduino 900 MHz radio, version 2.1a. There are a few steps missing from the installation/usage guide (PDF). Here is how I installed the software on my Kali Linux system. In the process, I also had to install Oracle/Sun’s Java, Apache Ant, and the beta version of the Arduino IDE, without using the various package managers (i.e. from scratch).

Installing Oracle/Sun’s Java on Kali Linux

I wanted to use the latest version of the Arduino software, which has support for the ARM chip sets (such as the Arduino Due and the upcoming Flutter board). I didn’t actually need it for the Freakduino, but if I can use one version of the Arduino IDE for all of the Arduino boards, that’s preferable. In addition, having the latest is always useful. 🙂 My first attempt had some minor problems. The IDE menu bar listing the menu choices “File Edit Sketch Tools Help” was missing! This was caused by using the OpenJDK version of Java. Apparently there is an incompatibility with giflib 5.1. The link has a work-around I did not try. I decided to install Sun/Oracle’s Java. There are a few things that had to be done to get this correctly installed.

Installing Oracle/Sun’s Java JDK on Kali Linux

One way to work around the problem is to remove all versions of Java. However, there are some disadvantages to this. Another way is to install alternate versions of Java, and switch from one to the other as needed. Download the tar.gz version of Java from the Oracle/Sun Java download page. Then install it under /usr/lib/jvm:

tar xfz jdk-7u65-linux-i586.tar.gz
# Decide where to put Sun's java
sudo mkdir /usr/lib/jvm/java-7-sun-i386
sudo mv jdk1.7.0_65/ /usr/lib/jvm/java-7-sun-i386
sudo chown -R root /usr/lib/jvm/java-7-sun-i386 
# you then want to find the current alternatives
sudo update-alternatives --config java

My results said: There are 2 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java   1061       auto mode
  1            /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java   1061       manual mode
  2            /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java   1051       manual mode

Therefore I wanted to use the next higher number – i.e. 3

sudo update-alternatives --install /usr/bin/java java \
   /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/java 3

I typed the update-alternatives command again, and selected version #3. Now when I executed

java -version

I get the right answer!

java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)

We have to repeat this for javac

sudo update-alternatives --config javac # find the next free number
# My system had the highest number of 2, so I used 3
sudo update-alternatives --install /usr/bin/javac javac \
   /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/javac 3 
sudo update-alternatives --config javac # set it to #3

And to test it, I typed

javac -version

And I got as a result:

javac 1.7.0_65

Installing Apache ant on Kali Linux

Now that I have Sun’s Java installed, I wanted to install ant. I didn’t see a package in Kali that had ant, so I downloaded the binary from the ant web site. I then did the following to verify and install the ant binary:

wget https://www.apache.org/dist/ant/KEYS
gpg --import KEYS
wget http://www.trieuvan.com/apache/ant/binaries/apache-ant-1.9.4-bin.tar.gz
wget http://www.apache.org/dist/ant/binaries/apache-ant-1.9.4-bin.tar.gz.asc
gpg --verify apache-ant-1.9.4-bin.tar.gz.asc

I then decided to install ant in the /opt directory

# Unpack and install ant
tar xvfz apache-ant-1.9.4-bin.tar.gz
sudo mkdir -p /opt/ant
sudo mv apache-ant-1.9.4 /opt/ant
sudo chown -R root /opt/ant

To run ant, I needed to modify a few environment variables. You can put these in your shell startup files, or store them in a file and source them into your shell when you need to run/recompile the Arduino program:

# Prepare to run ant
ANT_HOME=/opt/ant/apache-ant-1.9.4/
PATH=$PATH:$ANT_HOME/bin
export ANT_HOME PATH

I called this file ant_setup.sh, by the way. And to verify that ant was installed properly, type:

 ant -diagnostics

Now that we have Java and ant installed, we can compile the Arduino code from the git source distribution.

Installing the Arduino 1.5 (Beta) software on Kali Linux

# where do you want to build the arduino source?
cd Src
# get the git repository
git clone git://github.com/arduino/Arduino.git # takes a while
cd ./Arduino
# switch over to the beta version
git checkout -t origin/ide-1.5.x
git pull # just in case
cd build
# You may want to clean the build if you changed anything
ant clean
# and now compile the Arduino code
ant
# to run the code, type
ant run

If the Arduino IDE shows up, you are in good shape for the next step!

Installing the Chibi/Freakduino libraries

I followed the Freakduino Installation Guide. I downloaded the v1.04 version (ZIP). Assuming the Arduino source is in ~/Src/Arduino, you can type the following:

cd ~/Src/Arduino/libraries
mkdir Chibi
cd Chibi
wget http://www.freaklabs.org/chibi/2013-10-25_chibiArduino_v1.04.zip
unzip 2013-10-25_chibiArduino_v1.04.zip

When you start up the Arduino IDE, use the following command:

cd ~/Src/Arduino/build
ant run

If successful, you should see a “Chibi” library in the examples. You can select one of them and compile it. But you can’t run it yet because there are a few more things to do. You have to (a) select the proper port, (b) select the proper board and (c) select the proper bootloader. The port is easy: go to Tools->Port and select “/dev/ttyUSB0”. The board is another issue. The Freakduino board isn’t listed. We have to install the hardware support libraries. Download them using the following steps. This is different from the installation guide.

cd /tmp
wget http://www.freaklabsstore.com/pub/freaklabs_hw.zip
unzip freaklabs_hw.zip
cp -r freakduino freakduino-lr ~/Src/Arduino/hardware/arduino/avr/variants # -r: the variants are directories

The issue is that the installation guide is written for Arduino 1.0, not 1.5. Instead of hardware/arduino/variants, the new version supports different types of CPUs, so there is a hardware/arduino/avr/variants and a hardware/arduino/sam/variants. Also, the installation guide says to back up ~/Src/Arduino/hardware/arduino/avr/boards.txt and replace it with the version they provide. DO NOT DO THIS. A “diff” of the two files gives me more than 900 differences. I manually patched the file, adding their changes, and when I ran the program, I got a few errors, including

    Error while uploading: missing 'upload.tool' configuration parameter

and

linux32-run:
     [exec] Board arduino:avr:freakduino doesn't define a 'build.board' preference. Auto-set to: AVR_FREAKDUINO
     [exec] Board arduino:avr:freakduino-lr doesn't define a 'build.board' preference. Auto-set to: AVR_FREAKDUINO-LR

To prevent these errors, do not follow their advice for the installation. Instead, edit the file ~/Src/Arduino/hardware/arduino/avr/boards.txt and add the following lines manually. In particular, note the last three lines of each group. These are the lines I added to eliminate the two errors above

##############################################################

freakduino.name = Freakduino Standard, 5.0V, 8MHz, w/ATMega328P
freakduino.upload.protocol=arduino
freakduino.upload.maximum_size=28672
freakduino.upload.speed=57600
freakduino.bootloader.low_fuses=0xFF
freakduino.bootloader.high_fuses=0xDA
freakduino.bootloader.extended_fuses=0x05
freakduino.bootloader.path=atmega
freakduino.bootloader.file=ATmegaBOOT_168_atmega328_pro_8MHz.hex
freakduino.bootloader.unlock_bits=0x3F
freakduino.bootloader.lock_bits=0x0F
freakduino.build.mcu=atmega328p
freakduino.build.f_cpu=8000000L
freakduino.build.core=arduino
freakduino.build.variant=freakduino
# Bruce Barnett added these lines
freakduino.upload.tool=avrdude
freakduino.bootloader.tool=avrdude
freakduino.build.board=AVR_FREAKDUINO

##############################################################
freakduino-lr.name = Freakduino Long Range, 5.0V, 8MHz, w/ATMega328P
freakduino-lr.upload.protocol=arduino
freakduino-lr.upload.maximum_size=28672
freakduino-lr.upload.speed=57600
freakduino-lr.bootloader.low_fuses=0xFF
freakduino-lr.bootloader.high_fuses=0xDA
freakduino-lr.bootloader.extended_fuses=0x05
freakduino-lr.bootloader.path=atmega
freakduino-lr.bootloader.file=ATmegaBOOT_168_atmega328_pro_8MHz.hex
freakduino-lr.bootloader.unlock_bits=0x3F
freakduino-lr.bootloader.lock_bits=0x0F
freakduino-lr.build.mcu=atmega328p
freakduino-lr.build.f_cpu=8000000L
freakduino-lr.build.core=arduino
freakduino-lr.build.variant=freakduino-lr
# Bruce Barnett added these lines
freakduino-lr.upload.tool=avrdude
freakduino-lr.bootloader.tool=avrdude
freakduino-lr.build.board=AVR_FREAKDUINO-LR

Now start up the Arduino IDE, select the Tools=>Board=>Freakduino Long Range, 5.0V, 8MHz, w/ATMega328P and load the Files=>Example=>Chibi=>chibi_ex01_hello_world1 example. Select verify and upload. You might get the error:

     [exec] Sketch uses 4,084 bytes (14%) of program storage space. Maximum is 28,672 bytes.
     [exec] Global variables use 482 bytes of dynamic memory.
     [exec] avrdude: ser_open(): can't open device "/dev/ttyUSB0": Permission denied
     [exec] ioctl("TIOCMGET"): Inappropriate ioctl for device

This is a permissions problem, which happens the first time you run the software. To fix this, create a file (as superuser) called /etc/udev/rules.d/52-arduino.rules which contains:

SUBSYSTEMS=="usb", KERNEL=="ttyUSB[0-9]*", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="sensors/ftdi_%s{serial}"

You may have to add yourself to the dialout group:

sudo adduser `whoami` dialout
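You may also need to reload the udev rules so the new rule takes effect (or simply reboot):

sudo udevadm control --reload-rules
sudo udevadm trigger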

Unplug the device, plug it back in, and you should be good to go!

See https://wiki.archlinux.org/index.php/arduino if you want more info

A sample Freakduino program that transmits on all of the channels

Here is a simple program that just transmits on all of the US channels. I ran “rfcat -s” to watch the spectrum analyzer show me the incoming traffic. You will have to change the baud rate on the serial monitor to match the rate in your program (e.g. 57600). This program prints out the channels, but you can comment out these lines to make it run faster.

I added some sample code to make the program more of a complete program, especially for US users.

#include <chibi.h>
#include <src/chb_drvr.h> // needed for OQPSK_SIN
#include <chibiUsrCfg.h>
#define DEST_ADDR 5 // this will be the address of our receiver
byte channel;

void setup() {
   byte b;
   chibiInit();
// chibiCmdInit(57600);
   Serial.begin(57600);
   // Print the starting channel
   b = chibiGetChannel();
   Serial.println("channel:");
   Serial.println(b);
   chibiSetMode(OQPSK_SIN); // Select the mode for US
   chibiSetChannel(1);
   chibiSetShortAddr(3);
   chibiSetDataRate(0); // 250 kb/s
// chibiSetDataRate(1); // 500 kb/s
// chibiSetDataRate(2); // 1000 kb/s
//
// chibiSetShortAddr(0xAAAA);
   channel = 1;
   chibiSetChannel(channel);
//
// chibiSetChannel(15);
   pinMode(13, OUTPUT);
}

void loop() {
   // Transmit on the current channel, then step to the next one
   digitalWrite(13, HIGH); // turn the LED on
   Serial.println("TX on channel");
   Serial.println(channel);
   chibiSetChannel(channel);
   byte dataBuf[100];
   strcpy((char *)dataBuf, "ABCDEFGHIJKLMNOPQRSTUVWXYZ");
   chibiTx(0xBBBB, dataBuf, strlen((char *)dataBuf)+1);
// r = chb_get_rand();
// Serial.println(r);
   digitalWrite(13, LOW); // turn the LED off by making the voltage LOW
// delay(100); // wait 100 ms
   channel++;
   if (channel > 10) {
      channel = 1;
   }
}


Creating Table of Contents for static web pages using sed, make, and perl

Earlier, I showed you how I created a multi-page navigation section for static web pages.

But this system has some flaws. I needed better navigation within the web page. I also needed a better way to keep track of my Google ads. And I needed better automation.

Adding a table of contents using hypertoc

I looked around for a program that would do what I wanted, and I installed hypertoc(1) which is part of the perl HTML::GenToc package.  You may have the libhtml-gentoc-perl package available on your system. If not, it’s easy to install:

Installing hypertoc

wget http://search.cpan.org/CPAN/authors/id/R/RU/RUBYKAT/HTML-GenToc-3.20.tar.gz
tar xfz HTML-GenToc-3.20.tar.gz
cd HTML-GenToc-3.20
perl Build.PL
./Build
./Build install

There are a lot of options with hypertoc(1). Here is a section of shell code I used to generate the table of contents. I used hypertoc(1) as a filter, as I don’t like in-line editing of files. I passed the input filename as an argument (the variable $IFILE), and I piped the modified file to standard output.

I used the string ‘<!--toc-->’ in my HTML page to mark where I wanted the table of contents to be inserted.

Here are the key arguments to hypertoc(1) as I used them:

ARGS="--toc_entry 'H1=1' --toc_end 'H1=/H1' --toc_entry 'H2=2' --toc_end 'H2=/H2' --toc_entry 'H3=3' --toc_\
end 'H3=/H3' --toc_entry 'H4=4' --toc_end 'H4=/H4' --toc_entry 'H5=5' --toc_end 'H5=/H5'"
# The string !--toc-- is used as a marker to insert the new Table of Contents 
TOC="--toc_tag '!--toc--' --toc_tag_replace"
eval hypertoc $ARGS $TOC --make_anchors --make_toc --inline --outfile - $IFILE

This will look at all of the <h1> to <h5> sections, and create a list of links at the top of the page that points to the sections below. There is a problem with this, but I will address it later.

Inserting Google ads into a web page automatically

So I have a section that makes it easier to navigate to other pages, and a second one that navigates to the sections on the same page. Intra-page and inter-page navigation is done. The next thing I wanted to do was to make it easier and cleaner to add Google Ads to a web page. I store my ads in the files ./Ads/GoogleAd1 and ./Ads/GoogleAd2.

So now my static pages have a structure like the one below:

<!-- INCLUDE Navigation -->
<div id="centerDoc">
<h1>Title</h1>
<!-- Insert an ad -->
<!-- INCLUDE GoogleAd1 -->
<!-- Insert my table of contents here -->
<!--toc-->
<h2>More HTML code here</h2>
....
<!-- insert a second ad -->
<!-- INCLUDE GoogleAd2 -->
<p>My blog is <a href="http://BLOG">here</a>

The comment lines containing INCLUDE and toc are special – they will be modified by my ‘include’ script below. This looks much cleaner, and it’s easier to keep track of which ad is inserted, and where, as a name is used instead of cutting and pasting a block of text.

Adding a link back to the top of the Table Of Contents

One thing I liked about the troff2html program is that it added a link in each subsection to the top of the page where the Table of Contents is located. I wanted to add this capability.

I used a sed script that modifies the output of hypertoc(1). The key sections are below

# Quick and dirty way to add a way to get back to the Toc from an Entry 
# 1) put a marker in the beginning of the ToC 
 s/<h1>Table of Contents/<h1><a name=\"TOC\">Table Of Contents/ 
# 2) Add a link back to the ToC from each entry 
 s:\(<h[1234]>\)<a name=:\1<a href=\"$OFILENAME#TOC\" name=:g

hypertoc outputs “Table of Contents”, so I search for this and add the <a name="TOC"> anchor to this section. I also search for all of the subsections, so that when you click on a subsection name, you go back to the top.
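For example, given a heading anchor that hypertoc generated (the anchor name here is made up for illustration), the substitution turns the heading into a link back to the top:

<!-- before: -->
<h2><a name="Installing_hypertoc">Installing hypertoc</a></h2>
<!-- after (assuming the output file is Example.html): -->
<h2><a href="Example.html#TOC" name="Installing_hypertoc">Installing hypertoc</a></h2>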

Here is the improved “include” script

#!/bin/sh 
#This script modifies HTML pages statically, using something similar 
# to the "#INCLUDE" C preprocessor mechanism 
INCLUDE=${1?'Missing include file'}
shift
IFILE=${1?'Missing input file'}

OFILE=`echo $IFILE | sed 's/\.in$//'`
# get the name without the path 
OFILENAME=`echo $OFILE | sed 's:.*/::'`
if [ "$IFILE" = "$OFILE" ]
then
 echo input file $IFILE same as output file $OFILE - exit
 exit
fi

blog=grymoire.wordpress.com
ARGS="--toc_entry 'H1=1' --toc_end 'H1=/H1' --toc_entry 'H2=2' --toc_end 'H2=/H2' --toc_entry 'H3=3' --toc_\
end 'H3=/H3' --toc_entry 'H4=4' --toc_end 'H4=/H4' --toc_entry 'H5=5' --toc_end 'H5=/H5'"
# The string !--toc-- is used as a marker to insert the new Table of Contents 
TOC="--toc_tag '!--toc--' --toc_tag_replace"
eval hypertoc $ARGS $TOC --make_anchors --make_toc --inline --outfile - $IFILE| \
sed "/<!-- INCLUDE [Nn]avigation/ r $INCLUDE 
# Change BLOG URL 
 s/BLOG/$blog/g 
# Quick and dirty way to add a way to get back to the Toc from an Entry  
# 1) put a marker in the beginning of the ToC 
 s/<h1>Table of Contents/<h1><a name=\"TOC\">Table Of Contents/ 
# 2) Add a link back to the ToC from each entry 
 s:\(<h[1234]>\)<a name=:\1<a href=\"$OFILENAME#TOC\" name=:g 
# Include ad named 'GoogleAd1' 
 /INCLUDE GoogleAd1/ { 
 r Ads/GoogleAd1 
 } 
# and GoogleAd2
 /INCLUDE GoogleAd2/ {
 r Ads/GoogleAd2
}
" >$OFILE

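To regenerate a single page by hand, invoke the script the same way the Makefiles below do:

./include navigation.nav Example.html.in
# the result is written to Example.html
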
Automating everything with a Makefile

As before, my web pages have the name Example.html.in, and the output of the include script is Example.html

I created a rule that will automatically make the *.html files. Here is the Makefile I have in each of my subdirectories:

pages = $(wildcard *.html)
all: $(pages) 
$(pages): %.html: %.html.in
    ../include ../navigation.nav $<


And here is the top level Makefile:

pages = $(wildcard *.html)
SUBDIRS = Unix Security Deception Spam EG Postscript Privacy
all: include navigation.nav $(pages) $(SUBDIRS)
# Handle directories recursively 
.PHONY: subdirs $(SUBDIRS)
subdirs: $(SUBDIRS)
$(SUBDIRS):
 $(MAKE) -C $@
# Building a page automatically 
$(pages): %.html: %.html.in
 ./include navigation.nav $<
install:  myCSS.css all
 cp *.html *.css /var/www/html
 cp Unix/*.html *.css /var/www/html/Unix
 cp Security/*.html *.css /var/www/html/Security
navigation.nav: navigation.txt makenav.pl
 ./makenav.pl <navigation.txt > navigation.nav

You can see an example of a page generated using this code here



System Development Lifecycle > Security Development Lifecycle

I was asked to list things I consider when creating/designing a world-class application.

Whew. That’s  a complex question, and worthy of a PhD thesis, book, etc. Still, several things jumped out at me. And I thought it would be worth the time to list them. I hope some of you find this interesting.

When I was a research scientist who built prototypes that demonstrated new technology, I developed prototypes that had some of these features. I’ve also had experience building and supporting commercial products. So I’ve had experience at all Technology Readiness Levels.

A lot of people discuss Software Development Lifecycle and Security Development Lifecycle.

In my view, security is a subset of the overall system, so the System Development Lifecycle is a bigger problem. It doesn’t get the attention it needs, because so many companies do a poor job of designing system security. The security of the system is critical – don’t get me wrong. But other parts of the system should also be considered, if you want a world-class product.

No product is perfect. And if a product tries to be perfect, it will likely fail because of excessive requirements. However, these are the things I have considered in the past, and you may wish to consider them when planning a project.

What should be considered before starting a project?

  • Identify the market segment, and target audience.
  • Identify the problem. Spend time with the end user to understand the real issues. Realize that the user may not know what the real solution is, but they do know what problems they have. Capture the problems. Find out why existing technology and competing systems aren’t suitable. Verify that the existing technology can’t meet the requirements (the competition may have a feature the user isn’t aware of). If possible, find out the future directions of existing technology, and determine if a future product will meet the requirements of the end users.
  • Investigate current technology and gaps. Study research reports. Do market surveys.
  • Are there any standards that the product is required to meet? Are the standards adequate or incompatible? Is participation in standards committees required? In some cases, it may be necessary to join standards committees to guide the standard in the right direction.
  • Generate reports on current state of the technology. List advantages and disadvantages of different approaches. Do a competitive analysis.
  • The business model should be documented. What are the expected sales? What value would be added to the new system? What advantages would this offer the end user? Does it provide value the customer would pay for? Which features are most desirable?
  • What are the operational and cost requirements necessary for the product to achieve the business goals?
  • Once the business model is created, what are the threats to the business model? What would be the impact of a compromise? What are the security requirements?
  • Identify new technology that needs to be developed. Describe the approach to be used. Describe the operational concept.
  • Review the preliminary documents with peers and experts, and refine and repeat as needed.
  • Propose the project to the management team. Identify necessary resources (funds, skills, etc.)
  • Reach agreement on project plan, with clear guidelines, requirements and metrics. It is preferable to have hard (measurable) metrics that can be used to review the project (performance, time-lines/deadlines, accuracy, precision, false positives, false negatives, stability, etc.)

What frameworks, conventions, tools and standards should be considered for a project?

There are many  development and operational standards (training, coding/compiling, IDE, security, formatting, portability, logging, debugging, GUI, usability, internationalization, libraries, remote support/debug, diagnostics, etc.)  Which ones should be used?
This is not an easy question. Many companies have some sort of framework in place, and stick with it. Newer projects can experiment with new tools and standards. But time, desire, money, dedication, experience and project maturity all affect this decision.

Under ideal conditions, all of these have already been determined and found adequate, but in reality, these standards evolve. Frankly, no project is ever perfect and few teams are problem-free, so evolution should be expected and planned for.

Here is a list I consider.

o   There should be a documentation standard, ideally one that is based on the source code. Any time documentation is split between source code and external files, there is danger that changes to one are not reflected in the other. It’s preferable to have enforced consistency, if documentation is split. Otherwise, create the documentation from the source code (Javadoc, Doxygen, etc.) But the documentation may need to include more than the user/developer guide. It may need to generate information for tools as well – programs that interface to the system.

o   The development framework should include source code control, and bug tracking, and may include resource tracking, scheduling, collaboration, blogging/social networks, etc. (Atlassian products, etc.)

o   Interface standards will need to be developed, which is more than documentation. These standards discuss how to communicate with a component, and these should be computer language independent. Tools will be needed that will use these standards.

o   The development of each component should include supporting components to self-test the component for full functionality. It is important to verify that the component is properly functioning, and that it correctly interfaces to other components. Some developers build tools that they use but don’t document, and these may be discarded. These tools should be built into a robust self-test system.

o   The component self-test framework should include parameter (range/limit) testing, and protocol fuzzing. A system should be in place to measure completeness of these tests, as well as compliance. Earlier I mentioned the need for interface documentation that is independent of the computer language. Other team members may wish to write raw packets using, for example, perl (Net::RawIP), python (scapy), or C (raw sockets), or protocol fuzzers like sulley (python) or SPIKE (C). There are hundreds of fuzzing tools out there, and integrating the project with fuzzing tools will simplify the testing. In addition, having packet decoders can also be useful in testing and maintenance, so writing extensions to Wireshark would be useful.

o   The product should have a defined operational lifecycle, where the operation state of the product is defined, and the behavior of the system is based on the operational state. Typically, products have two operational states – normal and debug. In reality, complex systems should have multiple stages, and developers should modify their responses based on the operational state. Some of these states may include development, design, development debug, stress testing, regression testing, fuzz testing, integration testing, operational, heightened awareness, active attack, etc. For example, during development and debugging, error responses may include detailed and verbose information. During normal operation, this information may be unnecessary,  and in fact may reveal too much information to an attacker that is doing information gathering. Another example involves the response to a “brute-force” attack. If the system is undergoing regression testing, or protocol fuzzing, it may respond one way. However, if the system is in operational mode, and it is under active attack, it may introduce time-outs, disconnects, or else return purposely erroneous results to mislead the attacker. Developers can build this into a system, but only if there is a well-defined operational framework.

o   The development of core components may want to consider special versions or options that can be used as part of the system integration. These can control how the component responds in a controlled and predictable fashion, allowing other components to generate test suites where they react to feedback from the component. This would allow other team members to develop their component independently. As an example, if a component is designed to detect unusual events, a variation of that component can report events in a controlled and predictable manner such as number and type of event per unit time. This can be convenient during stress and performance testing. Alternately, if structured data is output, part of the component development could include generating data with specific attributes of the data that can populate the database with a precise data organization.

o   Core components should also consider having optional instrumentation to allow for diagnostics, timing, performance and anomaly analysis. For instance, it can be convenient to be able to adjust the verbosity and detail of information during operation.

o   A forensics framework integrated into the product may be needed. This can be used to capture information used for legal action, or tracing down intrusions.

o   If the system is designed to work over a distributed environment, modules could be built that have particular characteristics, such as time delays, time-outs, bandwidth-constrained networks, dropped packets, etc. This can be used to emulate actual networked conditions. Peter Deutsch’s Fallacies of Distributed Computing should be considered when designing a system. As an example, if a system can be instrumented to create typical network failures such as dropped packets, high latency, and limited bandwidth, developers would be more sensitive to the problems users are likely to face in real world situations.

o   It may be desirable to create emulators for systems with hardware interfaces. These models can be used to run the system when the hardware isn’t available. It is often desirable that the emulator keeps track of the hardware state, and can detect conditions that can result in real world damage caused by changes in the hardware state (i.e. industrial systems).

o   If necessary, components should be designed for portability control and verification. Generally, if multiple platforms or interfaces are supported, there should be regular builds to verify that code changes don’t violate the standards. There needs to be a tool that will help manage this.

o   Quality and security regression testing should also be standardized. If the system must meet specific metrics, then the regression testing should exercise the system to determine if the specifications can be met. This can provide an early warning if there are performance issues, etc.

o   Components can be instrumented to provide trust measurements during system usage. This becomes more important in the case of multiple-authorities with varying degrees of trust. As an example, systems can be instrumented to provide data provenance and information assurance, allowing trust in the data to be measured.

o   Metrics on the tools and standards should be collected during the project life-cycle, so that problems with the tools and frameworks can be measured and/or corrected. Tools may need to be improved during the life-cycle development. In other words, tools to measure the tools should be developed.

o   The system may need instrumentation so that it can monitor its own health. If resources become limited, or unavailable, the system may want to behave differently, such as doing dynamic resource reallocation, load balancing, etc.

o   In the case of large complex systems with multiple components and/or authorities, the system could be designed to detect compromise, and change behavior in response. Dynamic firewalls could be modified to isolate infected systems. Special instances in honeypots can be created and the system can redirect all traffic from a compromised system into the honeypot. A honeypot system could provide false information to prevent intruders from discovering they have been discovered. This depends on the operational state of the system as described in the system life-cycle. For example, one of the operational states may be “under attack” and the system may respond differently based on this information. This would use the operational framework I mentioned before.

o   Some systems may have complex remote management and diagnostic requirements, where access to operational systems may be isolated from the developers. If so, remote diagnostic and management mechanisms may need to be developed that allow remote systems to be instrumented under conditions where there is customer privacy and operational security requirements.

o   Customer privacy may also be a concern, and special monitoring may be necessary. There may be a need to ensure data is isolated, such as database systems whose clients compete with each other. There may need to be special anonymization mechanisms. Medical systems may need de-anonymization mechanisms.

o   Another problem that should be considered is the need to have isolation of instances of the system, especially if multiple instances are in use simultaneously. Consider the problem of someone building the system that changes or potentially corrupts a database, which happens to be used by  another developer. In distributed systems, this can become complex because there can be multiple databases, servers, etc. Certain team members may need to share components that behave differently. There needs to be a flexible or even dynamic configuration management system.

Once the frameworks and standards have been determined, what steps should be considered when developing the project?

Now that the initial framework has been selected, the project can be implemented. Of course, the above standards should be considered to be evolving, as the project matures.

Begin to assemble the team. Project necessary resources. Create schedules, do resource allocation. Get approval for the proposed schedule. Get available resources. Locate and train team members.

  • The overall architectural design should be documented, reviewed, and approved.
  • Once approved, there can be a team kick-off meeting. The team dynamics, meeting schedules, and initial standards and disciplines need to be discussed.
  • As the project progresses, reviews of the progress are important. The time line, schedule, priorities, budget and outside influences can change the requirements. The project requirements, specifications and standards may need to be modified as the project progresses. Team members may also need additional training, and talent located and developed.
  • Early on, the major interfaces need to be specified and controlled. The security assumptions and requirements must be explicit. They must be reviewed and approved every time the interfaces are modified.
  • As part of the development process, team members trained in security should be reviewing the interfaces, and provide feedback to the developers, and those generating interface test components.
  • Part of the project may include a red team examination and testing of components of the system, considering the attack surface of the interfaces. Major security problems should be identified early and addressed.
  • Keeping the deliverables in mind, the team should consider the end result, be it a demo, working prototype, or production system. The final objectives and goals should be well communicated, and the team should be focused on the goals, which consist of required objectives and optional objectives.
  • If the deliverable is for a production system, a testing, verification and transition plan should be created.
  • A test lab or environment needs to be created and managed. If possible, the environment should be virtualized, and reproducible.
  • The production evaluation environment needs to be specified and arranged. The testing mechanism has to be carefully designed. Would this test interfere with the production environment? How can the new system be tested? Production systems are often not controlled and/or repeatable. Therefore performing testing with and without new technology is difficult. It may be necessary to capture and replay/reproduce the production environment (i.e. replaying packets). Security systems may prevent as well as detect. If the system prevents or modifies the production system, it may be necessary to either duplicate the system(s) being affected (so two systems run in parallel), or instrument it to accept feedback from the new system.
  • End user training and user interfaces need to be verified and monitored as the system is used in production mode. The end user interfaces should be instrumented to keep track of usability. The user interface can be instrumented to measure user productivity, such as determining how long certain tasks take to complete. Keyword searches can be captured, and the final destination within the documentation may provide insight on how well the on-line documentation works.
  • The transition plan needs to be well documented. Once a system has been transitioned into production, the system should be monitored for performance, accuracy, etc. Certain high-value customers should be engaged and used to evaluate the technology. Alternately, cloud-based production testing can use live A-B testing, phasing in new technology to small groups at a time.
  • The entire development cycle should be on-going, and repeated for the life of the project.
  • Near the end of the project, a transition plan needs to be created that can assist users in migrating to new systems.

As I said – this requires a book to cover all of the information. But I hope this gives you something to think about.



The Top Eleven Reasons why Security Experts get no Respect

Let’s face it – being a security expert is difficult. While security technology is very difficult, dealing with people, especially with people who don’t work in the security field, is far more difficult. Why is that, you say?  I have a list.

With respect to David Letterman and Rodney Dangerfield, I present my list of reasons security experts get no respect.

#11 – You never have good news.

All you have to do is walk into your manager’s office, and sit down with a serious expression. There’s no need to say anything. Your boss will know. “Oh God. Now what?”

It’s not like you are going to say “We don’t need to buy any new hardware” or “Our people will meet the schedule.” Of course not. That never happens.

It’s no wonder your boss wishes your office was on the far side of the moon.

#10 – Others don’t understand you.

As soon as you start talking about the technology of security, like key exchanges, passing the hash, entropy, transport security, padding oracle attacks, and so on, you might as well be talking in Latin. A sure warning sign is the boss asking for a whiteboard diagram, along with an aspirin.

#9 – Any problem costs money

A software engineer can add a new feature to a system, and people will pay for it. But some security protections will remove features – and that’s bad news. No one wants to spend more money and get fewer functions.

Even security patches are a problem. If customers have to pay to fix something that should never have happened in the first place, the customers get upset. And if this disrupts their business – that’s even worse.

Even if the problem is internal, it will likely need time and/or money to fix.

So in short, you bring bad news no one can understand, and it will cost money. It’s no wonder your boss doesn’t want to see you.

#8 – You can’t talk about any hacker activity.

Now suppose you discover someone hacked into your system. This is one of the most interesting things that can happen to a security expert. So naturally you can’t talk about it.  This might affect company sales or stock prices, you see. You have to learn to emulate Sergeant Schultz.

#7- You can’t talk about any vulnerabilities in your systems.

And the same thing is true if you discover a weakness yourself and get it fixed. If it’s in a web service, it’s best to pretend nothing happened. And if it’s in a product, then that’s even worse. You don’t want to be responsible for telling hackers how to break into the old systems. Your customers might get upset. Loose Lips Lose Customers.

#6 – You can’t share your tools with your peers.

Suppose you develop a neat tool that tests the security of your system. While other professionals might gain respect by sharing cool tools, if a security professional publishes a hacking tool, someone might use that tool for evil purposes!! Managers have one word in their minds – “lawsuit!”  So if you develop a cool tool, it’s best if no one knows about it.

#5 – If you do nothing about security – it just gets worse.

Once a technological barrier has been crossed, the job is done. Time to move on.

Unless one deals with security.

To quote the NSA, Attacks always get better; they never get worse.

A perfectly secure solution for 2004 is a security nightmare for a 2014 system. New tools, new attacks, and clever programming will decimate the security of an old system. In any other field, people can look back at a past success and think “That was a good system.” Security is the exception. People with perfect hindsight will gladly point out “You really screwed that one up!”

#4 -You have to run as fast as you can to stay in place.

In most engineering fields, you can learn the basics, and become an expert in a single area. And one can have a nice career getting better in one niche area.

But if you are responsible for security, the rules are different. You have to continuously improve your skills in all areas.

In other words, you are always busy. And your boss wonders why you can’t get your work done.

#3 – A flaw is a flaw

In engineering, you can have trade-offs of functionality and features.  You can ask a manager to decide which feature is more important. And they can wait 6 months before adding new features.

Not so with security. All flaws are a crisis. It’s true that some may be actively exploited while others are not. But that can change at a moment’s notice, especially if the flaw is discovered publicly. Ever notice how people react when a company claims a security flaw is small?

#2 – You have to be perfect to be acceptable.

In some systems, managers will love you if you can improve performance 25% and reduce cost 20%. Or if you had a goal of 75%, and reached 74% – that’s pretty darn close.

Close doesn’t count in horseshoes and security. It’s not like your boss will be happy that you fixed  99.9% of the security problems. Nope. If you are a security expert, you have to be 100.00% perfect. After you walk on water.

And now – the #1 reason why security experts get no respect:

#1 – When you do a absolutely perfect job, nothing happens and nobody notices.

Yup. If no security problems occur, and nothing happens – you are either lucky or extremely gifted. Or perhaps you are deadwood. Who’s to know for sure?

So in summary, we have someone whom no one understands, who doesn’t provide any clear evidence of their worth, yet is always busy doing obscure activities, and always costing the company more money.

Now imagine how your boss describes  you to their boss.

[Note – this is something I wrote nearly 6 years ago. I thought others would enjoy it. It’s based on my observation of the industry, and not based on my experience with any particular  company. :-]


The need for Public Password Policies

After reading the Dashlane report on “The Illusion of Personal Data Security in E-Commerce”, I kept thinking about how developers replicate common security mistakes and that real progress in security rarely occurs.

The industry’s current password policies are a disaster. It seems every week a new company has been hacked. There are services that will check if your password has been leaked. There are dozens of tools that can take “encrypted” passwords and crack them. DEFCON has a password cracking contest every year, and I believe 90% of the companies don’t know how bad their password policies are. There is a more fundamental problem here. A dozen password managers and a dozen reports won’t fix the problem.

In simple terms, we – the users of the Internet – need all web services to have a Public Password Policy with the following characteristics:

  • Each server/service needs to have a well defined password policy.
  • This policy needs to be able to be parsed (and enforced) by computer programs.
  • This policy needs to be available to other computers on the network.

This will provide immense benefits to end users, companies, business partners, software developers, and security service organizations.

Let me describe why.

What are the advantages of a Public Password Policy to an end user?

Let’s say I’m a user who wants to create an account to some web service, which could be a bank, a store, or a social network, etc. Perhaps I have a choice of stores or banks. As a security-minded individual, I have lots of questions about security, but let’s focus on just the password I need to create. Here are my questions:

  1. How confident can I be that my password will resist attacks?
  2. How can I use an automated tool to automatically generate a very secure password?

First of all, I have the right to know how my password is protected on a web site. Is the password stored in plaintext? Is it encrypted or hashed? What is the algorithm? What is the strength of the algorithm? Is it salted with some randomness? Does the web site truncate my password, and if so, what is the maximum number of characters I can type?

If I know that a site has poor password policies, I might change my mind, and pick a different store, bank or service, or product.

In addition, I want to know how I can generate a really strong password. I want to know the minimum and maximum password length. I want to know the characters I can use. And I want to know automatically.

I use automated tools to generate passwords

I currently use LastPass to automatically generate passwords. While it has a lot of features, LastPass and the other password managers like KeePass, Dashlane, etc. could be improved dramatically if sites had a Public Password Policy. Currently I have to manually specify the character set and password length when I generate a new password. I have to look at the web site, and then tweak the settings of the tool to match the site’s requirements. And do you know what’s really sad? I often can’t tell the maximum password length I am allowed to use.

For those who don’t understand password storage, here’s a short summary. Security sensitive web sites often perform a one-way cryptographic hash on a password before they store it in a database. These functions can take a large block of data and convert it into a smaller block of a fixed size. Therefore limiting the length of a password doesn’t reduce the amount of data stored – it’s constant. And the longer the password, the harder it is to guess.  Yet I often don’t know what the maximum length is when I create a new password.
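For example, a SHA-256 digest is always 64 hex characters (32 bytes), whether the input is two characters or two hundred – a quick sketch using sha256sum:

$ echo -n "pw" | sha256sum | awk '{ print length($1) }'
64
$ echo -n "a much longer passphrase made of many words" | sha256sum | awk '{ print length($1) }'
64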

For instance, I may want to use 4 randomly chosen English words, like “plugging thunderless homicidal jackleg” instead of a 20-character string  like “mJ4m#ronLP75kGadFRho”. Typing special characters on a smartphone can be awkward, and it’s much easier to remember 4 words for a one-time use than 20 random characters.

If I used the 4 words above, and the password was truncated to 8 characters, then my “strong” password would be “plugging” – which can be discovered easily with a password cracking tool. I’d be upset if I thought I picked a strong password that became trivial to guess because the web site hid their policy from me.
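You can simulate that 8-character truncation in the shell:

$ printf '%.8s\n' "plugging thunderless homicidal jackleg"
plugging

Everything after the eighth character is silently thrown away.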

Yes, I can pick a default pattern that works for 95% of the web sites, but then I’d be using the lowest common password policy. And how often would I tweak my rules when I visit a different site?

Therefore we need password generation tools that can (a) get the password policy from a web site, and (b) use this information to automatically pick a really strong password, while (c) making it easy to use.

Better software integration between the web site and the end user will promote better password managers, and a seamless experience may actually reduce the dangers of password leakage.

But let’s not stop here. Let’s look at it from the perspective of the asset owner:

But a Public Password Policy will let my web site be hacked!!

Yes. It will. But as Dashlane demonstrated, the password policy can be discovered anyway. Trying to hide the policy is security through obscurity, and I always tell people

“Security through obscurity provides temporary security that weakens over time.”

As long as we hide systems instead of using open systems based on Kerckhoffs’s Principle, we will be forever in the Dark Ages of computing. We have to evolve to new secure frameworks, and a Public Password Policy will be a step in that direction.

What are the advantages of a Public Password Policy to web site owner?

There are several:

  1. A Public Password Policy forces the business owner to document their policy. They can no longer claim ignorance. A formal description will force organizations to agree to a real, documented policy. People who do risk assessment often get blank looks to questions like “What is your security policy?” so requiring a documented policy is already an improvement.
  2. With the right software, the policy can be used to configure the system. Changes in the policy (like requiring a special character) can cause new behaviors in the system (i.e. rejecting new passwords that don’t have a special character). Therefore the security policy can be used to change the security posture of the web site.
  3. The policy becomes a product specification, which can be used to select a suitable system  and/or configuration.  If a product cannot meet certain minimum requirements, it can be discarded early in the design process. RFP‘s can be sent out with the desired password policy, and vendors can select components that can meet the desired objectives.
  4. Managing multiple policies becomes easier. Having to deal with a dozen web servers with different policies becomes easier because the policies can be collected, compared, and managed. It becomes easier to find services that have the weakest policy, or those that lack a certain feature. Security management becomes easier.
  5. Business partnerships become easier. If businesses interact, policy mismatches become obvious. Suppose one division requires special characters, while another does not allow them. If this occurs, then you can’t merge the two systems into a new service without requiring one group to reset everyone’s passwords.
  6. Audits become easier. Not only can outside agencies examine policies and offer recommendations using automated tools, outside experts can use tools that validate that the policy is enforced. Better tools will simplify this task.

So end users benefit, and web site owners benefit. But there’s more.

How does a Public Password Policy affect software developers?

  1. Software developers can be given a formal specification that will guide in the development and configuration of the software used to approve/reject passwords, along with the storage mechanism, algorithm etc.
  2. A formal specification will encourage modular and replaceable components used to manage passwords. A company can state what their policy is, and then shop around for components that can meet the specification.
  3. Using the specification to control the configuration encourages software developers  to develop flexible password management systems that can be configured for varying degrees of protection.
  4. If the language describes features that the software developer doesn’t support (i.e. a salt is used before hashing), the software developer will be encouraged to add new capability to keep up with evolving requirements. This would also encourage drop-in replacements for existing systems that need to be compatible with existing usage, but provide the capability to ramp up the protection over time, i.e. replacing MD5-based passwords with SHA-1, SHA-256, or PBKDF2.

What’s next?

The first step is realizing that there is a need. I hope this encourages others to discuss the need for Public Password Policies.

I realize this won’t have a major impact – it’s not going to make a big dent in all of the security problems. But it’s a step to a more secure framework. We need to have a security policy that can control the configuration, which controls the implementation.

How could this be implemented? There are several ways. A simple text file can be stored on a web server at a specific URL, or a special port and service can be used. The information can be stored in YAML, JSON or XML. Prototypes need to be built and proven. The IETF can generate an RFC.
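As a purely hypothetical sketch (the URL, field names, and values here are all invented for illustration – nothing is standardized yet), a password manager might fetch something like this:

$ curl -s https://example.com/.well-known/password-policy.json
{
  "min_length": 12,
  "max_length": 128,
  "allowed_characters": "all printable ASCII",
  "truncated": false,
  "storage": {
    "hash": "PBKDF2-HMAC-SHA256",
    "iterations": 100000,
    "salted": true
  }
}

A tool like LastPass or KeePass could read this and generate the strongest password the site will accept, with no manual tweaking.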

Perhaps a better solution is to have a formal language and use the Semantic Web as a mechanism to specify and verify the language. This would not only allow syntactic and semantic errors to be identified, but rules could be created that examine and evaluate policies, which can provide metrics, recommendations, warnings, etc.

But to be really successful, a language has to be developed that can be understood by policy holders (i.e. not geeks) while being understandable by computers. I’ve used SADL and perhaps this can be used.

I’m looking forward to your thoughts.
