Moved!

This site has moved to paperless.blog! All new content will be posted there, with no ads and a much simpler design. Since the old and new sites have completely different feedback mechanisms (on-page comments vs. email), I’m going to leave this site alone for now to preserve access to legacy feedback. All the articles from this site are already available on paperless.blog.

We need programming mentors

tl;dr Mentoring is probably the best way to advance the art of programming, by keeping hard-earned lessons fresh.

Programming is a new discipline. Mathematics and logic had existed for millennia and even the scientific method had existed for generations before Ada Lovelace wrote the first thing recognizable as a precursor to the for loop. And programming as an activity available to the general public is so recent that some of its earliest participants are still with us today.

In that time the art of programming has advanced enormously¹. But we can only continue advancing if we take a cue from science: the new generation must have the hard-earned knowledge (theories and, ideally, data) of past generations available to build on. Without it, each generation repeats the same mistakes, stumbling for months or years before being able to advance the state of the art². The following are entirely preventable mistakes from my own wandering career:

  • Programming in text editors that know nothing about the code base. This was a big mistake, because it leads to other mistakes, like trivial typos and attempting refactoring with regular expressions. That’s not to say I never tried IDEs early in my career, but the first one I tried that was not awful was IntelliJ IDEA, around 2014. It’s by no means perfect, but knowing what a difference IDEs can make, I could never go back to programming in a plain text editor. One caveat: IDEs are by their nature extremely complex, and as a beginner it can be difficult to treat one as anything other than a text editor. But modern ones like IDEA can be learned piecewise, because the file hierarchy and file contents are front and centre, not something you have to “earn” by first doing a whole bunch of configuration and then learning some obscure interaction patterns with no connection to any other software you’ve ever seen.
  • Not knowing the weaknesses of different technologies. Today it’s basically assumed that if anyone criticises any technology, for any reason, its fans will retaliate, attacking the writer for being entitled or the content for being wrong in at least some technical way. Criticism of a technology shouldn’t be an excuse for the experts to denounce anyone less invested in it than they are. Instead we should be open to the idea that no technology is perfect, and that it’s useful to know which things a technology is bad at. Some projects are even courageous and insightful enough to list their conscious trade-offs. By now I’m one of those naughty people who will occasionally suggest that askers on Stack Overflow use a different technology, because while most programming languages are Turing complete and therefore in a very technical sense “equivalent,” no, it’s not a good idea to parse HTML with regular expressions (see the first sketch after this list).
  • Not knowing the real side effects of having a good test suite. Limiting what your code can do (and thereby limiting the amount of damage any change can introduce) is just a small part of it. Tests act as living documentation and enable fearless refactoring, and test-driven development encourages thinking more productively about a problem before coding (compared to DDT, TDD in reverse, where you write the tests after the code) and splitting each task into the smallest useful changes.
  • Not knowing when to refactor. Refactoring early carries a high risk of not being worth it: premature abstraction has a negative gain, since it usually has to be reverted, and spending a lot of time for a tiny decrease in complexity can be wasteful. Refactoring late carries a high risk of having already wasted a lot of time dealing with messy code. While I think I have a better idea these days of when to refactor, it’s still a challenge every time.
  • Not knowing how to name things. Just a few I’m guilty of:
    • Including the product name in a name within the product, such as class ProjectNameServer.
    • Other redundancy such as putting “tbl” in table names or a class name in one of its method names.
    • On the flip side, not understanding the difference between bad and good Hungarian notation. For example, a URL parameter starts as a sequence of bytes at the HTTP layer, then usually passes through a UTF-8-encoded string in a web framework, and finally becomes the application-level type, which could be arbitrarily complex. When dealing with more than one of these representations in the same context it’s useful to use different variable names to keep them apart, such as start_string and start_datetime (see the second sketch after this list).
    • Single-letter variables, such as i. With modern IDEs there’s not much excuse for them, and they become inexcusable as soon as another index like j inevitably comes along. Now the maintainer is forced to read the full loop definitions to understand what they both do. Add another three loops inside that (with k, kk and no prizes for guessing the next index variable name) and you end up with the ball of mud my boss assured me two others had already tried and failed to refactor.
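
The first sketch, promised above: a minimal example of parsing HTML with Python’s built-in parser rather than regular expressions. The snippet of HTML and the idea of extracting links are invented for illustration.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        # Collects the href attribute of every anchor tag.
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    extractor = LinkExtractor()
    extractor.feed('<p>See <a href="https://example.com">the docs</a>.</p>')
    print(extractor.links)  # ['https://example.com']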
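
And the second sketch: names that keep two representations of the same value apart (good Hungarian notation, by meaningful suffix rather than type-prefix noise), plus descriptive loop indices. The query-parameter shape here is invented.

    from datetime import datetime

    def parse_start(query_params: dict) -> datetime:
        # The same logical value in two representations, kept apart by name.
        start_string = query_params["start"]                   # wire format
        start_datetime = datetime.fromisoformat(start_string)  # domain type
        return start_datetime

    # Descriptive indices instead of i, j (and k, kk, ...):
    grid = [[1, 2], [3, 4]]
    for row_index, row in enumerate(grid):
        for column_index, cell in enumerate(row):
            print(row_index, column_index, cell)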

I was very lucky in my first job as a programmer. My excellent manager Elena let me experiment (a.k.a. bumble about), ask questions, try new technology, and join meetings to see how the software was being used. Stephan, the equally excellent tech lead, patiently reviewed my code and gave feedback. In retrospect, over those three years I probably gained more long-lasting knowledge from that feedback than from any other source.

Some of the next few jobs were fine, but it wasn’t until the next time I worked closely with a much more senior developer that I really felt I was learning quickly. We consciously worked against siloing anyone in a specific part of the code, so we were all intimately familiar with basically every part of the code base. We also paired almost all the time, and changed pairs daily. Because of that, any suggestions I got were highly specific and relevant, which meant that I could apply them immediately and therefore internalize them better. Many of them were also broadly applicable, which became the superpower of this way of working. Heaps of suggestions, imparted at the moment they were applicable, meant that over time they became ingrained, like keyboard shortcuts.

I don’t think this kind of knowledge can be imparted as successfully by anything other than another person. When learning something new which is even slightly out of context,

  • even if you understand the concept you don’t necessarily know how to recognize when it is applicable (see for example the infamous over-use of the singleton pattern),
  • it is unlikely to be applicable to what you are doing right now, when the knowledge is fresh, and
  • unless you have the time to go looking for somewhere it’s applicable or invent some throw-away code where it would be applicable, you may not find a use for it until you’ve forgotten about it.

Basically, a mentor is able to provide suggestions relevant right now to the person right next to them, and to provide in-depth explanation when a quick hint isn’t enough. That is just not possible with any other type of learning.

¹ Some will say the art of programming has regressed, because we now use enormously more resources to accomplish the same things as before. Personally I think this is a combination of survivorship bias and stretching the definition of “same” past breaking point. The first, because only software capable of running on the hardware of the time was actually developed and used by anyone; nobody could have written a fully functional spreadsheet application and simply waited 20 years for the hardware to become capable of running it. The second, because modern applications really are very different, in every way that matters to the end user, from their 20+-year-old “equivalents.” At the same time, a lot of applications have definitely stagnated, becoming less useful every year in which they don’t catch up with what people expect.

² No, I’m not saying I’ve personally advanced the state of the art in any useful way, only that I believe it is vanishingly unlikely for anyone to improve things until they have learned many hard lessons, either by getting through them on their own or by being taught and therefore forewarned.

August bug

A classic “time travel” bug reminded me of a fun time in Bash. How to reproduce, in four handy steps:

  1. Split a date string by hyphens into $year, $month and $day.
  2. Use $month in an arithmetic context. I don’t remember the exact code, but it was probably something trivial like [[ "$month" -eq 1 ]] (a runnable sketch follows this list).
  3. Wait until August, when the script blows up with
    [[: 08: value too great for base (error token is "08")
  4. Learn about octal in Bash, smack forehead, and curse language designers who thought it was a good idea to make numbers ambiguous in order to save one character instead of using something reasonable like a 0o prefix to fit with the already ubiquitous 0x and 0b.
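
A minimal, runnable sketch of the bug and one fix; the date value is invented, and the 10# radix prefix is what forces base-10 interpretation:

    #!/usr/bin/env bash
    date_string='2021-08-15'

    # Step 1: split the date on hyphens.
    IFS=- read -r year month day <<< "$date_string"

    # Step 2, the bug: a leading zero makes an arithmetic context read the
    # value as octal, and 08 is not valid octal, so this fails in August:
    #   [[ "$month" -eq 8 ]]

    # One fix: force base 10 with the 10# prefix.
    if [[ 10#$month -eq 8 ]]; then
        echo "It is August"
    fi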

At no point did anyone profit during this exchange.

As an aside, this is the sort of thing only exhaustive testing will catch: “01” through “07” have the same value whether decoded as octal or decimal, and “10” through “12” have no leading zero and so are decoded as decimal. Only “08” and “09” are actually problematic values.
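
A quick demonstration, looping over all twelve zero-padded months; the error output is silenced so each value just reports whether it survives an arithmetic comparison:

    for month in 01 02 03 04 05 06 07 08 09 10 11 12; do
        if [[ "$month" -eq "$month" ]] 2>/dev/null; then
            echo "$month: parses fine"
        else
            echo "$month: arithmetic error"  # only 08 and 09 land here
        fi
    done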

Test naming tips

No naming scheme is going to guarantee good names, so here are a few intentionally vague test naming guidelines:

Should X or Y is probably testing two branches, which should be two tests. A sure sign of this is the use of if/else in the test, which should just be outlawed. I can’t think of a single case where it would be better to have a branch in a test than two tests, but I’d be interested to hear of any.
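
As a sketch of the split (the parse_port function is invented, and the tests assume pytest): instead of one “should parse or reject port” test with a branch inside, each branch gets its own test:

    import pytest

    def parse_port(value: str) -> int:
        # Hypothetical code under test.
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    # One branch per test; no if/else needed in either of them.

    def test_should_return_number_for_valid_port():
        assert parse_port("8080") == 8080

    def test_should_raise_value_error_for_out_of_range_port():
        with pytest.raises(ValueError):
            parse_port("70000")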

Should X and Y can be caused by many things. Sometimes the test is fine, there just isn’t a single collective term for what the code is doing. Other times the test is verifying more than one thing at a time, such as the return value and a side effect. Unless the test is extremely slow this should be split into two tests.
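
For example, with an invented function that both mutates its argument and returns a value, the “and” test splits naturally in two:

    def record_score(scores: list, score: int) -> float:
        # Hypothetical code under test: appends, then returns the new average.
        scores.append(score)
        return sum(scores) / len(scores)

    def test_should_append_score_to_history():
        scores = [10]
        record_score(scores, 20)
        assert scores == [10, 20]

    def test_should_return_new_average():
        scores = [10]
        assert record_score(scores, 20) == 15.0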

The name should say something about either a side effect or a return value. As a counterexample, should parse input does neither. If the parsed result is stored or returned, it should say so. If the test is actually checking that nothing is thrown when parsing valid input then that should be part of the name.

When testing errors the name should be more descriptive than should fail when …. There are many ways code can fail, and many ways the surrounding code can react to that failure, so a better name would be should return error code 5 when … or should throw not found exception when ….

Context is everything. If the test filename contains “foo” it is probably not useful to also add “foo” to any of the test names. Similarly, if several test names share a context it might be useful to split those tests out into something named after that context and to remove the now-redundant part from the test names. As a simple example, you might want to split the “app” tests into “view” and “model” tests. After doing so you can remove “view” from the view test names and “model” from the model test names.

If it’s really difficult to come up with a good name for a test, I’ve found it’s often because I don’t have a sufficiently clear idea of what the test is meant to assert. At that point it might be useful to step back and see whether it’s possible to split the task up some more to reach an easily testable next step.

Start test names with “should”

The purpose of a test is not just to enforce some behaviour in the code under test. When the test fails it should also provide enough information to understand which behaviour failed, where it failed, and (at least superficially) why it failed. If the only output of a failing test is a binary value like “FAIL”, that test gives the developer just one bit of information. A good test framework will also print the test name and a call stack. We can’t change the call stack much, at least not without changing the semantics of the test, and how to actually write the test itself is a whole other story, but what does a useful test name look like?

A trick I learned from Dave Hounslow, a former colleague, is to start test names with “should”¹. This has a few advantages over test [function name]:

  • It removes redundancy, because the function name should already be in the call stack.
  • It is falsifiable, that is, a person reviewing the test can decide to which degree the name agrees with the actual test. For example, they could point out that should replace children when updating instance verifies that new children are added, but not that old children are removed.
  • It encourages testing one property of the function per test, like should apply discount when total cost exceeds 100 dollars, should create record for valid input, and should return error code 1 for unknown error (see the sketch after this list). test [function name] encourages testing everything the function does (branches, side effects, error conditions, etc.) in one test.
  • It invites the developer to write something human readable. I usually find “test …” names to be clunky to read. This may just be bias after years of using this technique.
  • It is better than a comment explaining what the test does, because the comment will not be shown when the test fails.
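
As a sketch of what that looks like in practice (the discount rule is invented, and the tests assume pytest’s name-based discovery):

    def apply_discount(total_cost: float) -> float:
        # Hypothetical code under test: a flat 10 dollars off orders over 100.
        return total_cost - 10 if total_cost > 100 else total_cost

    def test_should_apply_discount_when_total_cost_exceeds_100_dollars():
        assert apply_discount(200.0) == 190.0

    def test_should_not_apply_discount_when_total_cost_is_100_dollars_or_less():
        assert apply_discount(100.0) == 100.0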

¹ Some frameworks require that test names are formatted in some special way, like starting with “test” or using snake case, camel case or similar. I’m ignoring that part of the naming for brevity and to avoid focusing on a specific language.