Cataloging SDHC Cards on Ubuntu using a bash script

I have a lot of SD memory cards. I use them for my camera, my Raspberry Pi, and my laptop. They are cheap enough that I keep several spares. That also makes it easy to convert my Raspberry Pi into different types of systems, allowing me to switch between different projects by just swapping the cards.

But when I grab one, I have to figure out what I last used that card for, whether there is anything worth saving on it, and how much room is left on the card. Did I use it for a camera? For transferring files? For a RPi system? I could keep a piece of paper with each card, label them, or store them in different places, but like all Unix script writers, I am lazy. I’d rather spend hours debugging a shell script than spend an extra minute every time I remove an SD card.

So I wrote a Bash shell script. To use it, I simply put the SD card into my laptop, and execute

SDCatalog

I might prefer to add a note to indicate something about the card, such as the manufacturer. This is passed as an additional argument, and it becomes part of my “catalog.”

SDCatalog SanDisk

And when I am done, I eject the card, put in a new one, and repeat. The “data collection” is simple – my script just uses find to list the file names and stores the results in a file – one file for each SD card. All of the files are stored in a single directory; I use ~/SD. So what do I collect besides the filenames? I collect (a) the card capacity, (b) the size of each partition, (c) how much of each partition is used, and (d) the unique ID of the card. I store this information in the name of the file. And because the catalog records which files are on each card, I can use grep to figure out which card contains which file. For instance, if I am looking for cards formatted for the Raspberry Pi, I can search for /home/pi in the “catalog” using grep:

grep '/home/pi$' ~/SD/*

and the output I get is:

SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a:./home/pi
SD08G_p2_3_80_3d81d9e2-7d1b-4015-8c2c-29ec0675f162:./home/pi
SDC_p2_30_20_af599925-1134-4b6e-8883-fb6a69ad57f2:./home/pi

I used the “_” character as a field separator in the filename. Looking at the first line above, the identifier is “SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a”. There are 5 “fields” separated by “_”, which my script decodes as follows:

SD04G - The string seen when the card is mounted - see dmesg(1)
p2 - which partition
4 - 4GB partition
45 - 45 % used of the 4GB partition
b7b5ddff-ddb4-48dd-84d2-dd47bf00694a - Device ID

The “:” and “./home/pi” are added by grep.
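If you just want to split one of these names apart in the shell, bash can read the fields directly. This is a quick illustration, not part of the script:

IFS=_ read MAN PART SIZE USE ID <<< "SD04G_p2_4_45_b7b5ddff-ddb4-48dd-84d2-dd47bf00694a"
echo "$MAN partition $PART: ${SIZE}GB, ${USE}% used"
# prints: SD04G partition p2: 4GB, 45% used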

It would be trivial to use a tool like AWK to parse this and pretty-print the results. I’ll print my awk script at the end of this post. But let me explain part of this script.

When I mounted an SDHC card and typed df, the following was the output:

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda6       47930932  22572060  22917420  50% /
none                   4         0         4   0% /sys/fs/cgroup
udev             3076304         4   3076300   1% /dev
tmpfs             617180      1420    615760   1% /run
none                5120         4      5116   1% /run/lock
none             3085884       240   3085644   1% /run/shm
none              102400        32    102368   1% /run/user
/dev/sda8      252922196 229762840  10304948  96% /home
/dev/mmcblk0p1  31689728   9316640  22373088  30% /media/bruce/9016-4EF8

To catalog this information, we have to parse the last line. The two critical parts are the strings “mmcblk0” and “media” – these may have to be tweaked depending on your operating system. So I made them easy to change with the lines:

SD=mmcblk0
MEDIA=media
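For example, given the df output above, this is how the mount point of the card’s first partition can be pulled out – the same parsing the script performs later with awk:

df | awk '/mmcblk0p1/ {print $6}'
# prints: /media/bruce/9016-4EF8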

I also store the results in a directory called ~/SD – but someone may wish to store them in a different location. So my script defines the location of the log with

LOG=${SDLOG-"$HOME/SD"}

If the environment variable SDLOG is defined, use that value, otherwise use the default.
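A quick demonstration of that shell feature (the home directory here is mine):

$ unset SDLOG; echo ${SDLOG-"$HOME/SD"}
/home/bruce/SD
$ SDLOG=/tmp/catalog; echo ${SDLOG-"$HOME/SD"}
/tmp/catalog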

Sometimes it’s easier to describe a card using an optional comment, such as the color, the manufacturer, or the device the card came from. So I define the value of EXTRA from an optional command line argument.

EXTRA=$1

The script then checks if the directory for the log exists, and if not, it creates it.

The script tries to get a manufacturer’s ID for the device using dmesg(1). It does some checking to make sure the card is mounted, that the output file is writable, etc. It also looks for all of the partitions on the card.

But before I show you the script, let me show you a simple awk script called SDCParse that pretty-prints the names of the files used to store the catalog. For instance, the command

ls ~/SD | SDCParse

prints the following on my system – one line for each partition of each card:

 
      Note   Man.  Partition  Size   Usage ID                            
            00000         p1     2       1 E0FD-1813                     
   SanDisk  SL08G         p1     8       1 6463-3162                     
            SD04G         p1     0      30 3312-932F                     
            SD04G         p1     4      67 957ef3e9-c7cc-4b32-9c77-9ab1cea45a34
            SD04G         p2     4      45 b7b5ddff-ddb4-48dd-84d2-dd47bf00564a
            SD08G         p1     0      18 boot                          
            SD08G         p1     8       1 ecaf3faa-ecb7-41e1-8433-9c7a5d7098cd
            SD08G         p2     3      80 3d81d9e2-7d1b-4015-8c2c-29ec0875f762
              SDC         p1     0      34 boot                          
              SDC         p1    32      30 9016-4EF8                     
              SDC         p2    30      20 af599925-1134-4b6e-8883-fb6a99cd58f1
            SL08G         p1     8       1 6463-3162 

So this lets you see at a glance a lot of information about each SD card. I can easily see that I have 3 8GB cards whose p1 partition only uses 1% of the space. The awk script SDCParse is

#!/bin/sh 
awk -F_ '
BEGIN {
    # One format string controls both the header and every data row
    ST="%10s %6s %10s %5s %7s %-30s\n"
    printf(ST,"Note", "Man.", "Partition", "Size", "Usage", "ID")
}
{
    if (NF == 5) {           # no optional note field
        printf(ST,"", $1, $2, $3, $4, $5)
    } else if ( NF == 6 ) {  # the first field is the optional note
        printf(ST,$1, $2, $3, $4, $5, $6)
    } else {
#        print ("Strange - line has ", NF, "fields: ", $0)
    }
}'

I should mention something I use often: the formatting of both the table header and the data rows uses the same format string, ST. If I want to change the spacing of the table, I only need to change that one line.
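As a quick illustration of the idea (a standalone sketch, not part of the script):

ST="%10s %6s %10s\n"
printf "$ST" "Note" "Man." "Partition"    # the header ...
printf "$ST" "SanDisk" "SL08G" "p1"       # ... and a data row share the same spacing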

The SDCatalog script, which generates the catalog, is not quite so simple, but it seems to be robust.

#!/bin/bash

# Bruce Barnett Tue Oct  7 09:19:41 EDT 2014

# This program examines an SD card, and 
# creates a "log" of the files on the card
# The log file is created in the directory ~/SD 
#    (unless the environment variable SDLOG is defined)
# Usage
#    SDCatalog [text] 
#          where text is an optional field
# Example:
#    SDCatalog
#    SDCatalog SanDisk

# Configuration options
# These may have to be tweaked. They work for Ubuntu 14
# The output of df on Ubuntu looks like this
#/dev/mmcblk0p1   7525000 4179840   2956248  59% /media/barnett/cd3fe458-fc22-48e4-8
#     ^^^^^^^                                     ^^^^^      
#     SD                                          MEDIA
# therefore the parameters I search for are
SD=mmcblk0
MEDIA=media


# Command line arguments
EXTRA=$1 # is there any extra field/comment to be added as part of the filename

# Environment variables

# Note that LOG='~/SD' will NOT work, because the shell won't expand '~'
# inside quotes - "$HOME/SD" will always work
LOG=${SDLOG-"$HOME/SD"} # use the $SDLOG environment variable, if defined, else use ~/SD


# Make sure the initial conditions are correct - we have a directory?
if [ -d "$LOG" ] # If I have a directory 
then
    : okay we are all set
else
    if [ -e "$LOG"  ]
    then
        echo "$0: I want to use the directory $LOG, 
but another file exists with that name - ABORT!" 
        exit 1
    fi
    echo "mkdir $LOG"
    mkdir "$LOG" # the directory does not exist
fi


# Now I will execute df and parse the results
# sample results may look like this
#/dev/mmcblk0p1   752000 417940   295248  59% /media/barnett/cd3fe458-fc22-48e4-8

# For efficiency, let's just execute df once and save it
DFOUT=/tmp/${0##*/}.$$.tmp 
# create a temporary filename based on the script name and process ID, e.g.
#       SDCatalog.12345.tmp
trap "/bin/rm $DFOUT" 0 1 15 # delete this temp file on exit

df>$DFOUT

# is there anything mounted?
grep -q "$SD" <$DFOUT || { echo no SD card mounted ; exit 1; } # braces, not a subshell, so the exit aborts the script
# get the manufacturer of the card
# dmesg will report something like
#      [ 8762.029937] mmcblk0: mmc0:b368 SDC   30.2 GiB 
# and we want to get this part           ^^^^
# New version that uses the $SD variable
MANID=$(dmesg | sed -n 's/^.*'"$SD"':.mmc0:\(....\) \([a-zA-Z0-9]*\) .*$/\2/p' | tail -1)
# Get the current working directory
CWD=$(pwd) # Remember the current location so we can return to it



for p in p1 p2 p3 p4 p5 p6 p7 p8
do


    # get the mount point(s) of the card
    MOUNT=$(awk "/$SD$p"'/ {print $6}' <$DFOUT )
    # get the ID of the card
    if [ -n "$MOUNT" ]
    then
        # Get the size of the disk
        SIZE=$(awk "/$SD$p"'/ {printf("%1.0f\n",$2/1000000)}' < $DFOUT)
        # Get the usage of the partition
        USE=$(awk "/$SD$p"'/ {print $5}' < $DFOUT | tr -d '%')
        # Get the ID of the partition from the mount point
#        ID=$(echo $MOUNT| sed 's:^.*/::')
        ID=$(echo $MOUNT| sed 's:/'"$MEDIA"'/[a-z0-9]*/::')
        # I am going to store the results in this file
        if [ -z "$EXTRA" ]
        then
            X=
        else
            X="${EXTRA}_"
        fi
        OFILE="$LOG/$X${MANID}_${p}_${SIZE}_${USE}_${ID}"
        echo Log file name is $OFILE
        cd "$MOUNT"
        touch $OFILE || { echo cannot write to $OFILE - ABORT; exit 1; }
        echo "sudo find .  >$OFILE"
        sudo find .  >$OFILE
        cd "$CWD"
    fi
done


I hope you find this useful.


Setting up your Linux environment to support multiple versions of Java

Four ways to change your version of Java

Most people just define their JAVA_HOME variable, and rarely change it. If you do want to change it, or perhaps switch between different versions, you have some choices:

  1. Use update-alternatives (Debian systems)
  2. Define/change your preferred version of Java in your ~/.bash_profile, log out, and re-login to make the change.
  3. Define/change your preferred version of Java in your ~/.bashrc file, and open a new terminal window to make the change.
  4. Define your preferred version of Java on the command line, and switch back and forth with simple commands.

When I use java, I often have to switch between different versions. I may need to do this for compatibility reasons, testing reasons, etc. I may want to switch between OpenJDK and Oracle Java. Perhaps I have some programs that only work with particular versions of Java. I prefer method #4 – doing it on the command line. But by using the scripts below, you can use any of the last three methods and control your Java version explicitly.

As I mentioned in method #1, with some versions of Linux you could use the command

sudo update-alternatives --config java

Debian systems use update-alternatives to relink symbolic links in the file system, changing which program the command “java” executes. I’ll show you a way to get full control from the command line that makes no changes to the file system, allowing you to simultaneously run different versions of Java.
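For instance, on a Debian-style system the “java” command is typically a chain of symbolic links, roughly like this (ls -l output trimmed; the paths vary by system):

$ ls -l /usr/bin/java /etc/alternatives/java
/usr/bin/java -> /etc/alternatives/java
/etc/alternatives/java -> /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java

update-alternatives rewrites the middle link; the scripts below leave all of this alone.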

Downloading multiple versions of Java

Let’s assume I’ve just downloaded Oracle jdk7u71 for a 64-bit machine as a tar file. Assume I’ve already created the directory /opt/java. I unpack it using

md5sum ~/Downloads/jdk-7u71-linux-x64.tar.gz 
# Now verify the file integrity by eyeball
cd /opt/java
tar xfz ~/Downloads/jdk-7u71-linux-x64.tar.gz

So now I have the directory /opt/java/jdk1.7.0_71

Let me also download JDK8u25 as well, and store it in /opt/java/jdk1.8.0_25

To make things easier, I’m going to create the symbolic link latest – which will be my “favorite” or preferred version of java in the /opt/java directory.

cd /opt/java
ln -s jdk1.7.0_71 latest

Creating the Java Setup script

Now I am going to create a file called ~/bin/SetupJava which contains the following

#!/bin/echo sourceThisFile
# Bruce Barnett - Thu Nov 20 09:37:45 EST 2014
# This script sets up the java environment for Unix/Linux systems
# Usage (Bash):
# . ~/bin/SetupJava
# Results:
# Modified environment variables JAVA_HOME, PATH, and MANPATH
#
# If this file is saved as ~/bin/SetupJava, 
# then add this line to ~/.bashrc or ~/.bash_profile
# . ~/bin/SetupJava
#

JHOME=/opt/java/${VERSION:=latest}
# In case VERSION is a full path name, I delete everything 
# up to the '//'
JHOME=$(echo $JHOME | sed 's:^.*//:/:' )


# This next line is optional - 
# it will abort if JAVA_HOME already exists
[ "$JAVA_HOME" ] && { echo JAVA_HOME defined - Abort ; exit 1; }

# Place the new directories first in the searchpath 
#  - in case they already exist
# also in case the above line is commented out
#
JAVA_HOME="${JHOME}" # the JDK directory itself - tools expect $JAVA_HOME/bin/java to exist
PATH="${JHOME}/bin:$PATH"
MANPATH=":${JHOME}/man:$MANPATH"
export JAVA_HOME PATH MANPATH

There are a lot of things going on here, but first let’s explain the simplest way to use it: simply add the following line to your ~/.bashrc or ~/.bash_profile file:

. ~/bin/SetupJava

And you are done.

BUT we can do much more. First of all, note that this sets up your environment to use the “latest” version of Java, which is defined to be /opt/java/latest. But what if you don’t want to use that version? Note that I use the shell feature:

${variable:=defaultValue}

If the variable VERSION is not defined, the shell assigns it the value “latest” and uses that. If you want to use a particular version of java, add a new line before you source the file:

VERSION=jdk1.7.0_71
. ~/bin/SetupJava

If you’d rather use version 8, then this could be changed to

#VERSION=jdk1.7.0_71
VERSION=jdk1.8.0_25
. ~/bin/SetupJava

Note that you can keep several different versions in your ~/.bashrc file, with all but one commented out. Then you can open a new terminal window, and that window will use a different version of java. But what if you don’t want to exit the existing window?

Switching between java versions on the command line.

But there is another approach that I like to use. I have created several different shell files. Here’s one called JAVA

#!/bin/sh
VERSION=latest
. ~/bin/SetupJava
exec "${@:-bash}"

Here is another shell script called JAVA7u71 which explicitly executes Java version 7u71

#!/bin/sh
VERSION=jdk1.7.0_71
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one called JAVA8u25

#!/bin/sh
VERSION=jdk1.8.0_25
. ~/bin/SetupJava
exec "${@:-bash}"

Here is one, called OPENJAVA, that executes the OpenJDK version of Java

#!/bin/sh
VERSION=/usr/local/java/jdk1.7.0_67
. ~/bin/SetupJava
exec "${@:-bash}"

Note that I specified a version of java that was not in /opt/java – This is why I used the sed command

sed 's:^.*//:/:'

This deletes everything from the beginning of the line to the double ‘//’ changing /opt/java//usr/local/java/jdk1.7.0_67 to /usr/local/java/jdk1.7.0_67
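You can test that substitution by itself:

$ echo /opt/java//usr/local/java/jdk1.7.0_67 | sed 's:^.*//:/:'
/usr/local/java/jdk1.7.0_67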

Using the above commands to dynamically switch Java versions

You are probably wondering why I created these scripts, and what exactly does the following line do?

exec "${@:-bash}"

Please note that the script, by default, executes the command “exec bash” at the end. That is, the script executes an interactive shell instead of terminating. So my shell prompt is really a continuation of the script, which is still running. I also place double quotation marks around the variable in case the arguments contain spaces, etc.

There are two ways to use these scripts. The first way simply temporarily changes your environment to use a specific version of Java. In the dialog below I execute OpenJDK, Oracle Java 7, and Oracle Java 8, in that order, and type “java -version” each time to verify that all is working properly. I then press Control-D (end-of-file) to terminate the JAVA script and return to my normal environment. The shell prints “exit” when I press Control-D. So I execute three different shell sessions, type the same command in each one, and then terminate the script (the $ is the shell prompt):

$ OPENJAVA
$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)
$ exit
$ JAVA7u71
$ java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) Server VM (build 24.71-b01, mixed mode)
$ exit
$ JAVA8u25
$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) Server VM (build 25.25-b02, mixed mode)
$ exit

In other words, when I execute OPENJAVA, JAVA7u71, or JAVA8u25, I temporarily change my environment to use that particular version of Java. This change remains as long as that session is running. Since the script only changes environment variables, the changes are inherited by all new shell processes. Any time a child process executes a Java program, it will use the specific version of Java I specified.

If I want to, I can start up a specific version of Java and then launch several terminals and sessions in that environment

$ JAVA7u71
$ emacs &
$ gnome-terminal &
$ gnome-terminal &
$ ^D

However, there is one more useful tip. My script has the command

exec "${@:-bash}"

This by default executes bash. If, however, I wanted to execute just one program instead of bash, I could. I just preface the command with the version of Java I want to run:

$ OPENJAVA java -version
$ JAVA7u71 java -version 
$ JAVA8u25 java -version

I can execute specific java programs and test them with different versions of Java this way. I can also use this in shell scripts.

#!/bin/sh 
JAVA java program1
OPENJAVA program2

And if program2 is a shell script that executes some java programs, they will use the OpenJDK version.

Using bash tab completion to select which Java version

Also note that you can use tab completion. If you have 5 different versions of Java 7, in scripts called JAVA7u71, JAVA7u67, JAVA7u72, etc., you could type

$ JAVA7<tab>

Press <tab> twice and the shell will show you which versions of Java 7 are available (assuming you created the matching scripts).

The one thing that dynamic switching does not let you do is to save “transient” information like shell history, shell variables, etc. You need another approach to handle that.

Hope you find this useful!


Remote Input shell scripts for your Android Device, or my screen is cracked

My Android has a cracked screen. Help! I, too, had this happen to me. I had TitaniumBackup Pro on my device, but when the new version of KitKat came out, I lost root access because of the update.  I never … Continue reading


Setting up the 900 Mhz Freakduino board on Kali Linux

The Freakduino LR is an Arduino board with a built-in 900 MHz radio designed for long range (1 mile). The primary components include

  • CPU: ATMEGA328-QFP32
  • Atmel AT86RF212 900 MHz radio
  • TI CC1190 900 MHz RF front end

This board belongs in the suite of tools you can use to test systems which use the 900 MHz RF band. The AT86RF212 radio supports offset quadrature phase-shift keying (O-QPSK) with a fixed chip rate of either 400 kchip/s or 1000 kchip/s. I ordered the Freakduino 900 MHz radio version 2.1a. There are a few steps missing from the installation/usage guide (PDF). Here is how I installed the software on my Kali Linux system. In the process, I also had to install Oracle/Sun’s Java, Apache Ant, and the beta version of the Arduino IDE, without using the various package managers (i.e. from scratch).

Installing Oracle/Sun’s Java on Kali Linux

I wanted to use the latest version of the Arduino software, which has support for the ARM chip sets (such as the Arduino Due and the upcoming Flutter board). I didn’t actually need it for the Freakduino, but if I can use one version of the Arduino IDE for all of my Arduino boards, that’s preferable. In addition, having the latest is always useful. 🙂 My first attempt had some minor problems: the IDE menu bar listing the menu choices “File Edit Sketch Tools Help” was missing! This was caused by using the openjdk version of Java. Apparently there is an incompatibility with giflib 5.1. There is a documented work-around, which I did not try. Instead, I decided to install Sun/Oracle’s Java. There are a few things that had to be done to get this correctly installed.

Installing Oracle/Sun’s Java JDK on Kali Linux

One way to work around the problem is to remove all versions of java. However, there are some disadvantages to this. Another way is to install alternate versions of java, and switch from one to the other as needed. Download the tar.gz version of java from the Oracle/Sun Java download page. Then install it as follows:

tar xfz jdk-7u65-linux-i586.tar.gz
# Decide where to put Sun's java
sudo mkdir /usr/lib/jvm/java-7-sun-i386
sudo mv jdk1.7.0_65/ /usr/lib/jvm/java-7-sun-i386
sudo chown -R root /usr/lib/jvm/java-7-sun-i386 
# you then want to find the current alternatives
sudo update-alternatives --config java

My results said: There are 2 choices for the alternative java (providing /usr/bin/java).

 

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java   1061       auto mode
  1            /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java   1061       manual mode
  2            /usr/lib/jvm/java-7-openjdk-i386/jre/bin/java   1051       manual mode

 

Therefore I wanted to use the next higher number – i.e. 3

sudo update-alternatives --install /usr/bin/java java \
   /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/java 3
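This registers the new JDK as a third alternative. On my system the menu then gained an entry something like this (abbreviated; the exact formatting varies):

There are 3 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                                Priority   Status
  ...
  3            /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/java   3          manual mode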

I typed the update-alternatives command again, and selected version #3. Now when I executed

java -version

I get the right answer!

java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) Server VM (build 24.65-b04, mixed mode)

We have to repeat this for javac

sudo update-alternatives --config javac # find the next free number
# My system had the highest number of 2, so I used 3
sudo update-alternatives --install /usr/bin/javac javac \
   /usr/lib/jvm/java-7-sun-i386/jdk1.7.0_65/bin/javac 3 
sudo update-alternatives --config javac # set it to #3

And to test it, I typed

javac -version

And I got as a result:

javac 1.7.0_65

Installing Apache ant on Kali Linux

Now that I have Sun’s Java installed, I wanted to install ant. I didn’t see a package in Kali that had ant, so I downloaded the binary from the ant web site. I then did the following to verify and install the ant binary

wget https://www.apache.org/dist/ant/KEYS
gpg --import KEYS
wget http://www.trieuvan.com/apache/ant/binaries/apache-ant-1.9.4-bin.tar.gz
wget http://www.apache.org/dist/ant/binaries/apache-ant-1.9.4-bin.tar.gz.asc
gpg --verify apache-ant-1.9.4-bin.tar.gz.asc

I then decided to install ant in the /opt directory

# Unpack and install ant
tar xvfz apache-ant-1.9.4-bin.tar.gz
sudo mv apache-ant-1.9.4 /opt/ant
sudo chown -R root /opt/ant

To run ant, I needed to set a few environment variables. You can put these in your shell startup files, or store them in a file and source it into your shell whenever you need to run/recompile the Arduino program

# Prepare to run ant
ANT_HOME=/opt/ant
PATH=$PATH:$ANT_HOME/bin
export ANT_HOME PATH

By the way, I saved these lines in a file called ./ant_setup.sh. To verify that ant was installed properly, type:

 ant -diagnostics
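Note that since ant_setup.sh only sets environment variables, it must be sourced into your current shell (with “.”) rather than executed:

. ./ant_setup.sh
ant -diagnostics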

Now that we have Java and ant installed, we can now compile the Arduino code from the git source distribution.

Installing the Arduino 1.5 (Beta) software on Kali Linux

# where do you want to build the arduino source?
cd ~/Src
# get the git repository
git clone git://github.com/arduino/Arduino.git # takes a while
cd ./Arduino
# switch over to the beta version
git checkout -t origin/ide-1.5.x 
git pull # just in case
cd build
# You may want to clean the build if you changed anything
ant clean
# and now compile the Arduino code
ant
# to run the code, type
ant run

If the Arduino IDE shows up, you are in good shape for the next step!

Installing the Chibi/Freakduino libraries

I followed the Freakduino Installation Guide. I downloaded the v1.04 version (ZIP). Assuming the Arduino source is in ~/Src/Arduino, you can type the following:

cd ~/Src/Arduino/libraries
mkdir Chibi
cd Chibi
wget http://www.freaklabs.org/chibi/2013-10-25_chibiArduino_v1.04.zip
unzip 2013-10-25_chibiArduino_v1.04.zip

When you start up the Arduino IDE, use the following command:

cd ~/Src/Arduino/build
ant run

If successful, you should see a “Chibi” library in the examples. You can select one of them and compile it. But you can’t run it yet, because there are a few more things to do. You have to (a) select the proper port, (b) select the proper board, and (c) select the proper bootloader. The port is easy: go to Tools->Port and select “/dev/ttyUSB0”. The board is another issue. The Freakduino board isn’t listed; we have to install the hardware support libraries. Download them using the following steps. This is different from the installation guide.

cd /tmp
wget http://www.freaklabsstore.com/pub/freaklabs_hw.zip
unzip freaklabs_hw.zip
cp -r freakduino freakduino-lr ~/Src/Arduino/hardware/arduino/avr/variants

The issue is that the installation guide is written for Arduino 1.0, not 1.5. Instead of hardware/arduino/variants, the new version supports different types of CPUs, so there is a hardware/arduino/avr/variants and a hardware/arduino/sam/variants. Also, the installation guide says to back up ~/Src/Arduino/hardware/arduino/avr/boards.txt and replace it with the version they provide. DO NOT DO THIS. A “diff” of the two files gives me more than 900 differences. I manually patched the file, adding their changes, and when I ran the program, I got a few errors, including

    Error while uploading: missing 'upload.tool' configuration parameter

and

linux32-run:
     [exec] Board arduino:avr:freakduino doesn't define a 'build.board' preference. Auto-set to: AVR_FREAKDUINO
     [exec] Board arduino:avr:freakduino-lr doesn't define a 'build.board' preference. Auto-set to: AVR_FREAKDUINO-LR

To prevent these errors, do not follow their advice for the installation. Instead, edit the file ~/Src/Arduino/hardware/arduino/avr/boards.txt and add the following lines manually. In particular, note the last three lines of each group. These are the lines I added to eliminate the two errors above

##############################################################

freakduino.name = Freakduino Standard, 5.0V, 8MHz, w/ATMega328P
freakduino.upload.protocol=arduino
freakduino.upload.maximum_size=28672
freakduino.upload.speed=57600
freakduino.bootloader.low_fuses=0xFF
freakduino.bootloader.high_fuses=0xDA
freakduino.bootloader.extended_fuses=0x05
freakduino.bootloader.path=atmega
freakduino.bootloader.file=ATmegaBOOT_168_atmega328_pro_8MHz.hex
freakduino.bootloader.unlock_bits=0x3F
freakduino.bootloader.lock_bits=0x0F
freakduino.build.mcu=atmega328p
freakduino.build.f_cpu=8000000L
freakduino.build.core=arduino
freakduino.build.variant=freakduino
# Bruce Barnett added these lines
freakduino.upload.tool=avrdude
freakduino.bootloader.tool=avrdude
freakduino.build.board=AVR_FREAKDUINO

##############################################################
freakduino-lr.name = Freakduino Long Range, 5.0V, 8MHz, w/ATMega328P
freakduino-lr.upload.protocol=arduino
freakduino-lr.upload.maximum_size=28672
freakduino-lr.upload.speed=57600
freakduino-lr.bootloader.low_fuses=0xFF
freakduino-lr.bootloader.high_fuses=0xDA
freakduino-lr.bootloader.extended_fuses=0x05
freakduino-lr.bootloader.path=atmega
freakduino-lr.bootloader.file=ATmegaBOOT_168_atmega328_pro_8MHz.hex
freakduino-lr.bootloader.unlock_bits=0x3F
freakduino-lr.bootloader.lock_bits=0x0F
freakduino-lr.build.mcu=atmega328p
freakduino-lr.build.f_cpu=8000000L
freakduino-lr.build.core=arduino
freakduino-lr.build.variant=freakduino-lr
# Bruce Barnett added these lines
freakduino-lr.upload.tool=avrdude
freakduino-lr.bootloader.tool=avrdude
freakduino-lr.build.board=AVR_FREAKDUINO-LR

Now start up the Arduino IDE, select Tools=>Board=>Freakduino Long Range, 5.0V, 8MHz, w/ATMega328P and load the File=>Examples=>Chibi=>chibi_ex01_hello_world1 example. Select verify and upload. You might get the error:

     [exec] Sketch uses 4,084 bytes (14%) of program storage space. Maximum is 28,672 bytes.
     [exec] Global variables use 482 bytes of dynamic memory.
     [exec] avrdude: ser_open(): can't open device "/dev/ttyUSB0": Permission denied
     [exec] ioctl("TIOCMGET"): Inappropriate ioctl for device

This is a permissions problem, which happens the first time you run the software. To fix this, create a file (as superuser) called /etc/udev/rules.d/52-arduino.rules which contains:

SUBSYSTEMS=="usb", KERNEL=="ttyUSB[0-9]*", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="sensors/ftdi_%s{serial}"

You may have to add yourself to the dialout group:

sudo adduser `whoami` dialout

Unplug the device, plug it back in, and you should be good to go!

See https://wiki.archlinux.org/index.php/arduino if you want more info

A sample Freakduino program that transmits on all of the channels

Here is a simple program that just transmits on all of the US channels. I ran “rfcat -s” to watch the spectrum analyzer show me the incoming traffic. You will have to change the baud rate on the serial monitor to match the rate in your program (e.g. 57600). This program prints out the channels, but you can comment out these lines to make it run faster.

I added some sample code to make the program more of a complete program, especially for US users.

#include <chibi.h>
#include <src/chb_drvr.h> // needed for OQPSK_SIN
#include <chibiUsrCfg.h>
#define DEST_ADDR 5 // this will be address of our receiver
byte channel;

void setup() {
   byte b = 0;              // will hold the starting channel, if read
   chibiInit();
// chibiCmdInit(57600);
   Serial.begin(57600);
   // Print the starting channel
// b = chibiGetChannel();
   Serial.println("channel:");
   Serial.println(b);
   chibiSetMode(OQPSK_SIN); // Select the mode for US
   chibiSetChannel(1);
   chibiSetShortAddr(3);
   chibiSetDataRate(0);     // 250 kb/s
// chibiSetDataRate(1);     // 500 kb/s
// chibiSetDataRate(2);     // 1000 kb/s
//
// chibiSetShortAddr(0xAAAA);
   channel = 1;
   chibiSetChannel(channel);
//
// chibiSetChannel(15);
   pinMode(13, OUTPUT);
}
void loop() {
   // put your main code here, to run repeatedly:
   // turn on the LED
   digitalWrite(13, HIGH);
   Serial.println("TX on channel");
   Serial.println(channel);
   chibiSetChannel(channel);
   byte dataBuf[100];
   strcpy((char *)dataBuf, "ABCDEFGHIJKLMNOPQRSTUVWXYZ");
   chibiTx(0xBBBB, dataBuf, strlen((char *)dataBuf)+1);
// r = chb_get_rand();
// Serial.println(r);
   digitalWrite(13, LOW);   // turn the LED off by making the voltage LOW
// delay(100);              // wait 100 ms
   channel++;
   if (channel > 10) {
      channel = 1;
   }
}


Creating Table of Contents for static web pages using sed, make, and perl

Earlier, I showed you how I created a multi-page navigation section for static web pages.

But this system has some flaws. I needed better navigation within each web page. I also needed a better way to keep track of my Google ads. And I needed better automation.

Adding a table of contents using hypertoc

I looked around for a program that would do what I wanted, and I installed hypertoc(1) which is part of the perl HTML::GenToc package.  You may have the libhtml-gentoc-perl package available on your system. If not, it’s easy to install:

Installing hypertoc

wget http://search.cpan.org/CPAN/authors/id/R/RU/RUBYKAT/HTML-GenToc-3.20.tar.gz
tar xfz HTML-GenToc-3.20.tar.gz
cd HTML-GenToc-3.20
perl Build.PL
./Build
./Build install

There are a lot of options with hypertoc(1). Here is a section of shell code I used to generate the table of contents. I used hypertoc(1) as a filter, as I don’t like in-line editing of files. I passed the input filename as an argument (the variable $IFILE), and I piped the modified file to standard output.

I used the string ‘<!--toc-->’ in my HTML page to mark where I wanted the table of contents to be inserted.

Here are the key arguments to hypertoc(1) as I used them:

ARGS="--toc_entry 'H1=1' --toc_end 'H1=/H1' --toc_entry 'H2=2' --toc_end 'H2=/H2' \
 --toc_entry 'H3=3' --toc_end 'H3=/H3' --toc_entry 'H4=4' --toc_end 'H4=/H4' \
 --toc_entry 'H5=5' --toc_end 'H5=/H5'"
# The string !--toc-- is used as a marker to insert the new Table of Contents 
TOC="--toc_tag '!--toc--' --toc_tag_replace"
eval hypertoc $ARGS $TOC --make_anchors --make_toc --inline --outfile - $IFILE

This will look at all of the <h1> to <h5> sections, and create a list of links at the top of the page that point to the sections below. There is a problem with this, but I will address it later.

Inserting Google ads into a web page automatically

So I have a section that makes it easier to navigate to other pages, and a second one that navigates to the sections on the same page.  Intra-page and inter-page navigation is done. The next thing I wanted to do was to make it easier and cleaner to add Google Ads to a web page.    I store my ads in the folder ./Ads/GoogleAd1 and ./Ads/GoogleAd2

So now my static pages have a structure like the one below:

<!-- INCLUDE Navigation -->
<div id="centerDoc">
<h1>Title</h1>
<!-- Insert an ad -->
<!-- INCLUDE GoogleAd1 -->
<!-- Insert my table of contents here -->
<!--toc-->
<h2>More HTML code here</h2>
....
<!-- insert a second ad -->
<!-- INCLUDE GoogleAd2 -->
<p>My blog is <a href="http://BLOG">here</a>

The comment lines above (INCLUDE and toc) are special – they will be modified by my ‘include’ script below. This looks much cleaner, and it’s easier to keep track of which ad is inserted, and where, since a name is used instead of cutting and pasting a blob of text.

Adding a link back to the top of the Table Of Contents

One thing I liked about the troff2html program is that it added a link in each subsection to the top of the page where the Table of Contents is located. I wanted to add this capability.

I used a sed script that modifies the output of hypertoc(1). The key sections are below

# Quick and dirty way to add a way to get back to the Toc from an Entry 
# 1) put a marker in the beginning of the ToC 
 s/<h1>Table of Contents/<h1><a name=\"TOC\">Table Of Contents/ 
# 2) Add a link back to the ToC from each entry 
 s:\(<h[1234]>\)<a name=:\1<a href=\"$OFILENAME#TOC\" name=:g

hypertoc outputs “Table of Contents”, so I search for that heading and add <a name="TOC"> to it. I also search for each of the subsection headings, so that when you click on a subsection name, you go back to the top.
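For example, given a hypothetical subsection anchor, and assuming OFILENAME is Example.html, the second substitution turns

<h2><a name="settings">Settings</a></h2>

into

<h2><a href="Example.html#TOC" name="settings">Settings</a></h2>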

Here is the improved “include” script

#!/bin/sh 
# This script modifies HTML pages statically, using something similar 
# to the "#INCLUDE" C preprocessor mechanism 
INCLUDE=${1?'Missing include file'}
shift
IFILE=${1?'Missing input file'}

OFILE=`echo $IFILE | sed 's/\.in$//'`
# get the name without the path 
OFILENAME=`echo $OFILE | sed 's:.*/::'`
if [ "$IFILE" = "$OFILE" ]
then
 echo input file $IFILE same as output file $OFILE - exit
 exit
fi

blog=grymoire.wordpress.com
ARGS="--toc_entry 'H1=1' --toc_end 'H1=/H1' --toc_entry 'H2=2' --toc_end 'H2=/H2' \
 --toc_entry 'H3=3' --toc_end 'H3=/H3' --toc_entry 'H4=4' --toc_end 'H4=/H4' \
 --toc_entry 'H5=5' --toc_end 'H5=/H5'"
# The string !--toc-- is used as a marker to insert the new Table of Contents 
TOC="--toc_tag '!--toc--' --toc_tag_replace"
eval hypertoc $ARGS $TOC --make_anchors --make_toc --inline --outfile - $IFILE| \
sed "/<!-- INCLUDE [Nn]avigation/ r $INCLUDE 
# Change BLOG URL 
 s/BLOG/$blog/g 
# Quick and dirty way to add a way to get back to the Toc from an Entry  
# 1) put a marker in the beginning of the ToC 
 s/<h1>Table of Contents/<h1><a name=\"TOC\">Table Of Contents/ 
# 2) Add a link back to the ToC from each entry 
 s:\(<h[1234]>\)<a name=:\1<a href=\"$OFILENAME#TOC\" name=:g 
# Include ad named 'GoogleAd1' 
 /INCLUDE GoogleAd1/ { 
 r Ads/GoogleAd1 
 } 
# and GoogleAd2
 /INCLUDE GoogleAd2/ {
 r Ads/GoogleAd2
}
" >$OFILE

Automating everything with a Makefile

As before, my web pages have the name Example.html.in, and the output of the include script is Example.html

I created a rule that will automatically make the *.html files. Here is the Makefile I have in each of my subdirectories:

pages = $(wildcard *.html)
all: $(pages) 
$(pages): %.html: %.html.in
    ../include ../navigation.nav $<
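For a hypothetical page Example.html, make expands that static pattern rule into:

../include ../navigation.nav Example.html.in

($< is the first prerequisite – the .html.in file.)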

 

And here is the top level Makefile:

pages = $(wildcard *.html)
SUBDIRS = Unix Security Deception Spam EG Postscript Privacy
all: include navigation.nav $(pages) $(SUBDIRS)
# Handle directories recursively 
.PHONY: subdirs $(SUBDIRS)
subdirs: $(SUBDIRS)
$(SUBDIRS):
 $(MAKE) -C $@
# Building a page automatically 
$(pages): %.html: %.html.in
 ./include navigation.nav $<
install:  myCSS.css all
 cp *.html *.css /var/www/html
 cp Unix/*.html *.css /var/www/html/Unix
 cp Security/*.html *.css /var/www/html/Security
navigation.nav: navigation.txt makenav.pl
 ./makenav.pl <navigation.txt > navigation.nav
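With these Makefiles in place, a rebuild and deployment is just (assuming GNU make):

make          # regenerate navigation.nav and all *.html pages, recursing into subdirectories
make install  # copy the generated pages and CSS to /var/www/html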

You can see an example of a page generated using this code here.

 


System Development Lifecycle > Security Development Lifecycle

I was asked to list things I consider when creating/designing a world-class application.

Whew. That’s  a complex question, and worthy of a PhD thesis, book, etc. Still, several things jumped out at me. And I thought it would be worth the time to list them. I hope some of you find this interesting.

When I was a research scientist who built prototypes that demonstrated new technology, I developed prototypes that had some of these features. I’ve also had experience building and supporting commercial products. So I’ve had experience at all Technology Readiness Levels.

A lot of people discuss Software Development Lifecycle and Security Development Lifecycle.

In my view, security is a subset of the overall system, so the System Development Lifecycle is the bigger problem. It doesn’t get the attention it needs, because so many companies do a poor job of designing system security. The security of the system is critical – don’t get me wrong. But other parts of the system should also be considered, if you want a world-class product.

No product is perfect. And if a product tries to be perfect, it will likely fail because of excessive requirements. However, these are the things I have considered in the past, and you may wish to consider them when planning a project.

What should be considered before starting a project?

  • Identify the market segment, and target audience.
  • Identify the problem. Spend time with the end user to understand the real issues. Realize that the user may not know what the real solution is, but they do know what problems they have. Capture the problems. Find out why existing technology and competing systems aren’t suitable. Verify that the existing technology can’t meet the requirements (the competition may have a feature the user isn’t aware of). If possible, find out future directions of existing technology, and determine if the future product will meet the requirements of the end users.
  • Investigate current technology and gaps. Study research reports. Do market surveys.
  • Are there any standards that the product is required to meet? Are the standards adequate or incompatible? Is participation in standards committees required? In some cases, it may be necessary to join standards committees to guide the standard in the right direction.
  • Generate reports on current state of the technology. List advantages and disadvantages of different approaches. Do a competitive analysis.
  • The business model should be documented. What are the expected sales? What value would be added to the new system? What advantages would this offer the end user? Does it provide value the customer would pay for? Which features are most desirable?
  • What are the operational and cost requirements necessary for the product to achieve the business goals?
  • Once the business model is created, what are the threats to the business model? What would be the impact of a compromise? What are the security requirements?
  • Identify new technology that needs to be developed. Describe the approach to be used. Describe the operational concept.
  • Review the preliminary documents with peers and experts, and refine and repeat as needed.
  • Propose the project to the management team. Identify necessary resources (funds, skills, etc.)
  • Reach agreement on project plan, with clear guidelines, requirements and metrics. It is preferable to have hard (measurable) metrics that can be used to review the project (performance, time-lines/deadlines, accuracy, precision, false positives, false negatives, stability, etc.)

What frameworks, conventions, tools and standards should be considered for a project?

There are many development and operational standards (training, coding/compiling, IDE, security, formatting, portability, logging, debugging, GUI, usability, internationalization, libraries, remote support/debug, diagnostics, etc.) Which ones should be used?
This is not an easy question. Many companies have some sort of framework in place, and stick with it. Newer projects can experiment with new tools and standards. But time, desire, money, dedication, experience and project maturity all affect this decision.

Under ideal conditions, all of these have already been determined and found adequate, but in reality, these standards evolve. Frankly, no project is ever perfect and few teams are problem-free, so evolution should be expected and planned for.

Here is a list I consider.

o   There should be a documentation standard, ideally one that is based on the source code. Any time documentation is split between source code and external files, there is danger that changes to one are not reflected in the other. It’s preferable to have enforced consistency, if documentation is split. Otherwise, create the documentation from the source code (Javadoc, Doxygen, etc.) But the documentation may need to include more than the user/developer guide. It may need to generate information for tools as well – programs that interface to the system.

o   The development framework should include source code control, and bug tracking, and may include resource tracking, scheduling, collaboration, blogging/social networks, etc. (Atlassian products, etc.)

o   Interface standards will need to be developed, which is more than documentation. These standards discuss how to communicate with a component, and these should be computer language independent. Tools will be needed that will use these standards.

o   The development of each component should include supporting components to self-test the component for full functionality. It is important to verify that the component is properly functioning, and that it correctly interfaces to other components. Some developers build tools they use, but these aren’t documented, and may be discarded. These tools should be built into a robust self-test system.

o   The component self-test framework should include parameter (range/limit) testing, and protocol fuzzing. A system should be in place to measure completeness of these tests, as well as compliance. Earlier I mentioned the need for interface documentation that is independent of the computer language. Other team members may wish to write raw packets using, for example, perl (Net::RawIP), python (scapy), or C (raw sockets), or protocol fuzzers like sulley (python), or SPIKE (C). There are hundreds of fuzzing tools out there, and integrating the project with fuzzing tools will simplify the testing. In addition, having packet decoders can also be useful in testing and maintenance, so writing extensions to Wireshark would be useful.

o   The product should have a defined operational lifecycle, where the operation state of the product is defined, and the behavior of the system is based on the operational state. Typically, products have two operational states – normal and debug. In reality, complex systems should have multiple stages, and developers should modify their responses based on the operational state. Some of these states may include development, design, development debug, stress testing, regression testing, fuzz testing, integration testing, operational, heightened awareness, active attack, etc. For example, during development and debugging, error responses may include detailed and verbose information. During normal operation, this information may be unnecessary,  and in fact may reveal too much information to an attacker that is doing information gathering. Another example involves the response to a “brute-force” attack. If the system is undergoing regression testing, or protocol fuzzing, it may respond one way. However, if the system is in operational mode, and it is under active attack, it may introduce time-outs, disconnects, or else return purposely erroneous results to mislead the attacker. Developers can build this into a system, but only if there is a well-defined operational framework.

o   The development of core components may want to consider special versions or options that can be used as part of the system integration. These can control how the component responds in a controlled and predictable fashion, allowing other components to generate test suites where they react to feedback from the component. This would allow other team members to develop their component independently. As an example, if a component is designed to detect unusual events, a variation of that component can report events in a controlled and predictable manner such as number and type of event per unit time. This can be convenient during stress and performance testing. Alternately, if structured data is output, part of the component development could include generating data with specific attributes of the data that can populate the database with a precise data organization.

o   Core components should also consider having optional instrumentation to allow for diagnostics, timing, performance and anomaly analysis. For instance, it can be convenient to be able to adjust the verbosity and detail of information during operation.

o   A forensics framework integrated into the product may be needed. This can be used to capture information used for legal action, or tracing down intrusions.

o   If the system is designed to work over a distributed environment, modules could be built that have particular characteristics, such as time delays, time-outs, bandwidth-constrained networks, dropped packets, etc. This can be used to emulate actual networked conditions. Peter Deutsch’s Fallacies of Distributed Computing should be considered when designing a system. As an example, if a system can be instrumented to create typical network failures such as dropped packets, high latency, and limited bandwidth, developers would be more sensitive to the problems users are likely to face in real world situations.

o   It may be desirable to create emulators for systems with hardware interfaces. These models can be used to run the system when the hardware isn’t available. It is often desirable that the emulator keeps track of the hardware state, and can detect conditions that can result in real world damage caused by changes in the hardware state (i.e. industrial systems).

o   If necessary, components should be designed for portability control and verification. Generally, if multiple platforms or interfaces are supported, there should be regular builds to verify that code changes don’t violate the standards. There needs to be a tool that will help manage this.

o   Quality and security regression testing should also be standardized. If the system must meet specific metrics, then the regression testing should exercise the system to determine if the specifications can be met. This can provide an early warning if there are performance issues, etc.

o   Components can be instrumented to provide trust measurements during system usage. This becomes more important in the case of multiple-authorities with varying degrees of trust. As an example, systems can be instrumented to provide data provenance and information assurance, allowing trust in the data to be measured.

o   Metrics on the tools and standards should be collected during the project life-cycle, so that problems with the tools and frameworks can be measured and/or corrected. Tools may need to be improved during the life-cycle development. In other words, tools to measure the tools should be developed.

o   The system may need instrumentation so that it can monitor its own health. If resources become limited, or unavailable, the system may want to behave differently, such as doing dynamic resource reallocation, load balancing, etc.

o   In the case of large complex systems with multiple components and/or authorities, the system could be designed to detect compromise, and change behavior in response. Dynamic firewalls could be modified to isolate infected systems. Special instances in honeypots can be created and the system can redirect all traffic from a compromised system into the honeypot. A honeypot system could provide false information to prevent intruders from discovering they have been discovered. This depends on the operational state of the system as described in the system life-cycle. For example, one of the operational states may be “under attack” and the system may respond differently based on this information. This would use the operational framework I mentioned before.

o   Some systems may have complex remote management and diagnostic requirements, where access to operational systems may be isolated from the developers. If so, remote diagnostic and management mechanisms may need to be developed that allow remote systems to be instrumented under conditions where there is customer privacy and operational security requirements.

o   Customer privacy may also be a concern, and special monitoring may be necessary. There may be a need to ensure data is isolated, such as database systems that have clients that compete with each other. There may need to be special anonymization mechanisms. Medical systems may need de-anonymization mechanisms.

o   Another problem that should be considered is the need to have isolation of instances of the system, especially if multiple instances are in use simultaneously. Consider the problem of someone building the system that changes or potentially corrupts a database, which happens to be used by  another developer. In distributed systems, this can become complex because there can be multiple databases, servers, etc. Certain team members may need to share components that behave differently. There needs to be a flexible or even dynamic configuration management system.

Once the frameworks and standards have been determined, what steps should be considered when developing the project?

Now that the initial framework has been selected, the project can be implemented. Of course, the above standards should be considered to be evolving, as the project matures.

Begin to assemble the team. Project necessary resources. Create schedules, do resource allocation. Get approval for the proposed schedule. Get available resources. Locate and train team members.

  • The overall architectural design should be documented, reviewed, and approved.
  • Once approved, there can be a team kick-off meeting. The team dynamics, meeting schedules, and initial standards and disciplines need to be discussed.
  • As the project progresses, reviews of the progress are important. The time line, schedule, priorities, budget and outside influences can change the requirements. The project requirements, specifications and standards may need to be modified as the project progresses. Team members may also need additional training, and talent located and developed.
  • Early on, the major interfaces need to be specified and controlled. The security assumptions and requirements must be explicit. They must be reviewed and approved every time the interfaces are modified.
  • As part of the development process, team members trained in security should be reviewing the interfaces, and provide feedback to the developers, and those generating interface test components.
  • Part of the project may include a red team examination and testing of components of the system, considering the attack surface of the interfaces. Major security problems should be identified early and addressed.
  • Keeping the deliverables in mind, the team should consider the end result, be it a demo, working prototype, or production system. The final objectives and goals should be well communicated, and the team should be focused on the goals, which consist of required objectives and optional objectives.
  • If the deliverable is for a production system, a testing, verification and transition plan should be created.
  • A test lab or environment needs to be created and managed. If possible, the environment should be virtualized, and reproducible.
  • The production evaluation environment needs to be specified and arranged. The testing mechanism has to be carefully designed. Would this test interfere with the production environment? How can the new system be tested? Production systems are often not controlled and/or repeatable. Therefore performing testing with and without new technology is difficult. It may be necessary to capture and replay/reproduce the production environment (i.e. replaying packets). Security systems may prevent as well as detect. If the system prevents or modifies the production system, it may be necessary to either duplicate the system(s) being affected (so two systems run in parallel), or instrument it to accept feedback from the new system.
  • End user training and user interfaces need to be verified and monitored as the system is used in production. The end user interfaces should be instrumented to keep track of usability. The user interface can be instrumented to measure user productivity, such as determining how long certain tasks take to complete. Keyword searches can be captured, and the final destination within the documentation may provide insight on how well the on-line documentation works.
  • The transition plan needs to be well documented. Once a system has been transitioned into production, the system should be monitored for performance, accuracy, etc. Certain high-value customers should be engaged and used to evaluate the technology. Alternately, cloud-based production testing can use live A-B testing, phasing in new technology to small groups at a time.
  • The entire development cycle should be on-going, and repeated for the life of the project.
  • Near the end of the project, a transition plan needs to be created that can assist users in migrating to new systems.

As I said – this requires a book to cover all of the information. But I hope this gives you something to think about.

 


The Top Eleven Reasons why Security Experts get no Respect

Let’s face it – being a security expert is difficult. While security technology is very difficult, dealing with people, especially with people who don’t work in the security field, is far more difficult. Why is that, you say?  I have a list.

With respect to David Letterman and Rodney Dangerfield, I present my list of reasons security experts get no respect.

#11 – You never have good news.

All you have to do is walk into your manager’s office, and sit down with a serious expression. There’s no need to say anything. Your boss will know. “Oh God. Now what?”

It’s not like you are going to say “We don’t need to buy any new hardware” or “Our people will meet the schedule.” Of course not. That never happens.

It’s no wonder your boss wishes your office was on the far side of the moon.

#10 – Others  don’t understand you.

As soon as you start talking about the technology of security, like key exchanges, passing the hash, entropy, transport security, padding oracle attacks, and so on, you might as well be talking in Latin. A sure warning sign is the boss asking for a whiteboard diagram, along with an aspirin.

#9 – Any problem costs money

A software engineer can add a new feature to a system, and people will pay for it. But some security protections will remove features – and that’s bad news. No one wants to spend more money and get fewer functions.

Even security patches are a problem. If customers have to pay to fix something that should never have happened in the first place, the customers get upset. And if this disrupts their business – that’s even worse.

Even if the problem is internal, it will likely need time and/or money to fix.

So in short, you bring bad news no one can understand, and it will cost money. It’s no wonder your boss doesn’t want to see you.

#8 – You can’t talk about any hacker activity.

Now suppose you discover someone hacked into your system. This is one of the most interesting things that can happen to a security expert. So naturally you can’t talk about it.  This might affect company sales or stock prices, you see. You have to learn to emulate Sergeant Schultz.

#7 – You can’t talk about any vulnerabilities in your systems.

And the same thing is true if you discover a weakness yourself and get it fixed. If it’s in a web service, it’s best to pretend nothing happened. And if it’s in a product, then that’s even worse. You don’t want to be responsible for telling hackers how to break into the old systems. Your customers might get upset. Loose Lips Lose Customers.

#6 – You can’t share your tools with your peers.

Suppose you develop a neat tool that tests the security of your system. While other professionals might gain respect by sharing cool tools, if a security professional publishes a hacking tool, someone might use that tool for evil purposes!! Managers have one word in their minds – “lawsuit!”  So if you develop a cool tool, it’s best if no one knows about it.

#5 – If you do nothing about security – it just gets worse.

Once a technological barrier has been crossed, the job is done. Time to move on.

Unless one deals with security.

To quote the NSA, Attacks always get better; they never get worse.

A perfectly secure solution for 2004 is a security nightmare for a 2014 system. New tools, new attacks, and clever programming will decimate the security of an old system. In any other field, people can look back at a past success and think “That was a good system.” Security is the exception. People with perfect hindsight will gladly point out “You really screwed that one up!”

#4 – You have to run as fast as you can to stay in place.

In most engineering fields, you can learn the basics, and become an expert in a single area. And one can have a nice career getting better in one niche area.

But if you are responsible for security, the rules are different. You have to continuously improve your skills in all areas.

In other words, you are always busy. And your boss wonders why you can’t get your work done.

#3 – A flaw is a flaw

In engineering, you can have trade-offs of functionality and features.  You can ask a manager to decide which feature is more important. And they can wait 6 months before adding new features.

Not so with security. All flaws are a crisis. It’s true that some may be actively exploited while others are not, but that can change at a moment’s notice, especially if the flaw is disclosed publicly. Ever notice how people react when a company claims a security flaw is small?

#2 – You have to be perfect to be acceptable.

In some systems, managers will love you if you can improve performance 25%, and reduce cost 20%. Or if you had a goal of 75%, and reached 74%. That’s pretty darn close.

Close counts in horseshoes, but not in security. It’s not like your boss will be happy that you fixed 99.9% of the security problems. Nope. If you are a security expert, you have to be 100.00% perfect. After you walk on water.

And now – the #1 reason why security experts get no respect:

#1 – When you do an absolutely perfect job, nothing happens and nobody notices.

Yup. If no security problems occur, and nothing happens – you are either lucky or extremely gifted. Or perhaps you are deadwood. Who’s to know for sure?

So in summary, we have someone whom no one understands, who doesn’t provide any clear evidence of their worth, yet is always busy doing obscure activities, and is always costing the company more money.

Now imagine how your boss describes  you to their boss.

[Note – this is something I wrote nearly 6 years ago. I thought others would enjoy it. It’s based on my observation of the industry, and not based on my experience with any particular  company. :-]
