Shell scripting dos and don’ts

Shell scripting is like a room full of power tools: handy but dangerous.

Don’t:
  1. Build complex systems. There are just too many ways that external state can affect any piece of shell code. Do you know what your script will do if you change IFS before running it? What if you give it a file name starting with a dash or containing a newline? How do you recover the state of the system if the script crashes somewhere in the middle? Complex shell script environments invariably end up looking like Rube Goldberg machines of chainsaws and power drills. Use languages and frameworks appropriate for the task.
  2. Expose them to the Internet. Safe input handling is just too damn hard. Unless you’re GreyCat or Stéphane Chazelas.
  3. Use eval. Don’t be evil. There are safer ways to do whatever you’re trying to do.
  4. Write portable code. (By this I mean code which works in multiple shells without change, as opposed to code which can easily be ported to other shells.) Writing portable code means limiting what language features you use and adding complexity to make sure it works the same way in all the supported shells. Because of this, the end result will be more complex and less flexible than the simplest script that supports the shell you have.
  5. Minimise the number of characters. The next maintainer will hate you.
  6. Create interactive menus. Very few shell tools, such as less and top, make sense only when used interactively. Use command-line arguments instead, so that your tool will be usable both standalone and together with other tools.
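The file-name hazards from item 1 are easy to demonstrate. A minimal sketch (the directory and file names here are made up): a name starting with a dash looks like options to most tools, and `--` is the standard way to end option parsing so the name is taken literally.

```shell
# A taste of the hazards from item 1: file names that start with "-"
# look like options. "--" ends option parsing so the name is literal.
dir=$(mktemp -d)
cd "$dir"
touch -- -rf plain.txt   # without --, touch would try to parse "-rf" as options
rm -- -rf                # the same trick lets rm delete the file named "-rf"
ls
```

For the newline case, null-delimited plumbing (`find … -print0 | xargs -0 …`) is the usual answer, since a newline is a perfectly legal character in a file name.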

Do:
  1. Test everything automatically. This gives you and others the confidence that your script actually works. Bonus: Allows you to modify your code without having to test everything manually. Extra bonus: Experiencing how difficult it is to test shell scripts exhaustively will convince you to never use them for anything complex.
  2. Provide --long-names for every -s -h -o -r -t option. And if you can bear the screams of dogmatic developers, don’t support short options at all. As long as the names make sense this allows people to write readable scripts. Bonus: No wondering whether -n5f0 is two, three or four options.
  3. Use guard statements like the POSIX set -o errexit -o noclobber -o nounset. While there are some caveats to how these work, they can save a whole lot of headache. Bonus: Use -o xtrace to see what the script does in detail.
  4. Add an auto-complete script. The users will be grateful. Bonus: Gives an incentive to keep the structure of your options sane.
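Items 2 and 3 fit together in a few lines. A sketch, with invented option names (`--verbose` and `--name` are not from any real tool): the guard options up top, then a plain `case` loop for long-only options.

```shell
#!/bin/sh
# Guard options from item 3, plus a plain `case` loop for the
# long-only options item 2 recommends. --verbose/--name are invented.
set -o errexit -o nounset -o noclobber

verbose=false
name=world
while [ "$#" -gt 0 ]; do
    case $1 in
        --verbose) verbose=true ;;
        --name) shift; name=$1 ;;   # nounset aborts if the value is missing
        *) printf 'Unknown option: %s\n' "$1" >&2; exit 2 ;;
    esac
    shift
done

if "$verbose"; then               # a command in an if condition is exempt from errexit
    printf 'About to greet %s\n' "$name"
fi
printf 'Hello, %s\n' "$name"
```

Called as `./greet --verbose --name Alice` it prints both lines; with no arguments it just prints `Hello, world`.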

Bug #1: Home directory is not version controlled

How to reproduce: Modify dotfiles and scripts in the home directory on multiple machines without keeping track of the changes.

What happens:

  • Lots of manual work to synchronize and merge changes.
  • Uncertainty about which changes exist where.
  • Lost work because of minor mistakes or giving up on complex merges.

What should happen: Changes should be reproducible, visible and simple enough to be merged.

How to fix: Use version control.


  1. Fork an existing version controlled home directory.
  2. git clone --recursive git://
  3. Merge with your existing home directory.
  4. make clean to do miscellaneous cleanup.
  5. Commit and push.
  6. make install to create symbolic links from your home directory to the repository.
  7. clone and pull on any machines which need your changes.

That’s pretty much all there is to this workflow, really. There’s a ton of commands with descriptive tags in .bash_history, configuration for Bash, Vim, Awesome WM, screen layouts, email tools, and much more that you can copy (and criticize) all you want.


Office super-tool: pdftk

If you scan or print a lot of documents, you have probably used PDF files. They are very nice, but it can be tricky to modify and otherwise handle them. Enter pdftk: great (but small), free (but valuable) and powerful (but simple). It’s also open source, which means you can learn it now, and use it the same way in five, ten, or twenty years.

I was recently sending out 28 temp job applications with six attachments each. I printed out the motivation letter for each job and 28 copies of each attachment, so I ended up with seven piles of paper which I then had to collate by hand into 28 applications. Tedious work, and I could have smacked myself when I realized that it would have been much easier to put all the attachments in a single document and print that 28 times: two piles instead of seven. This is really simple with pdftk – just start up a shell (in Windows: Start → Run → cmd; in Ubuntu: Applications → Accessories → Terminal), and replace the file names in the following command with your own to produce a new file with all the documents in sequence:

pdftk cv.pdf "reference letter 1.pdf" [and so on] cat output new.pdf

cat is the magic word: Concatenate all the files before it. pdftk can also do other useful stuff, like rotating pages (if they were scanned the wrong way around), splitting, watermarking, digital signatures and much more (see examples).
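To give a flavour of those other operations, here are a few more invocations (syntax as in pdftk’s own examples; the file names are placeholders, and the whole thing is guarded so it does nothing unless pdftk and the input file actually exist):

```shell
# More pdftk one-liners: rotation, splitting and password protection.
# scanned.pdf / big.pdf / secret.pdf are placeholder names.
if command -v pdftk >/dev/null 2>&1 && [ -e scanned.pdf ]; then
    pdftk scanned.pdf cat 1-endsouth output rotated.pdf    # rotate every page 180 degrees
    pdftk big.pdf burst output page_%02d.pdf               # one output file per page
    pdftk secret.pdf output locked.pdf user_pw mypassword  # require a password to open
fi
```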

Tag cloud shell script

As an interesting challenge, I wanted to output a tag cloud (a.k.a. word cloud) for a text file using standard shell tools. The result is surprisingly fast (2 minutes to create the tag cloud for War and Peace) and surprisingly short: as you can see, fewer than 10 lines do anything more complex than echo. The latest version is much more flexible, but the main work is still just some 20 lines (tr -s … and below), and it’s still fast.
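The core of such a script is a classic word-frequency pipeline. A minimal sketch, not the actual script (which also scales the counts into HTML): split the input into one word per line, normalise case, then count and sort.

```shell
# Minimal word-frequency core of a tag cloud: split into words,
# lowercase, count, and sort by count. The HTML scaling step is left out.
word_counts() {
    tr -cs '[:alpha:]' '\n' \
        | tr '[:upper:]' '[:lower:]' \
        | grep -v '^$' \
        | sort | uniq -c | sort -rn
}

echo 'The cat and the dog and the bird' | word_counts
```

Piping that through head gives the most frequent words, which is already a usable source for sizing the tags.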

If you do anything more fancy with this, I’d be interested to know about it. I’ve got a couple ideas, but I’m not sure if I’ll ever get around to them:

  • Exclude words from another file
  • Multiple word tags from another file

Example usage: < foo.txt > foo.xhtml

Update: The code is now on GitHub. Fork away!