
How I Use Tests

2017-06-22 (Last updated 2019-03-23)

In the last post, I said that I would write about my work on the OSH runtime. Before doing that, I'll give you a sense for how I'm working.

In short, I'm using test-driven development, with bespoke test harnesses written in shell and Python. (I wrote about bespoke code generators last December, in a diversion on searching for code that matches a spec.)

This approach isn't unusual. The authors of the now-defunct pdksh wrote their own test framework to clone AT&T's ksh, and two active forks of pdksh, the OpenBSD shell and mksh, use derivatives of those tests.

(I learned about these test cases after starting OSH, and I'd like to eventually use them.)

Table of Contents
Four Types of Test
The Flow
Examples of Gold Tests
Next

Four Types of Test

OSH has four test harnesses for four types of test. They are conveniently named test/{wild,unit,spec,gold}.sh in the oil repo.

  1. Wild tests run the parser on shell scripts found in the wild, and produce a pretty-printed ASDL representation. I check that the parse doesn't fail, but I make no assertion on the output (a sketch of such a check appears after this list). I reported results from this kind of test in posts like Four More Projects Parsed.
  2. Unit tests are written in Python, using the built-in unittest module. They're useful for exhaustively testing tricky code.
  3. Spec tests use the sh_spec.py script to run shell snippets against many shells. It has a little language for making assertions on stdout, stderr, and the exit code. I've shown spec test results as HTML in several recent blog posts. Clicking through lets you see the code for a test case.
  4. Gold tests run a shell script under both bash and OSH, and compare the output. Thus, the assertions are implicit and you don't have to write them by hand.
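
To make the first kind concrete, here is a minimal sketch of a wild-style check. It's an illustration rather than the real test/wild.sh: it assumes that bin/osh -n parses a script without executing it (in the spirit of bash -n), and the directory argument is a placeholder for a corpus of scripts found in the wild.

# A minimal sketch of a wild-style check, not the real test/wild.sh.
# Assumes `bin/osh -n` parses a script without executing it, in the
# spirit of `bash -n`.
parse_wild_corpus() {
  local dir=$1        # placeholder: a directory of scripts found in the wild
  local num_failed=0

  for script in "$dir"/*.sh; do
    if ! bin/osh -n "$script" >/dev/null; then
      echo "PARSE FAILED: $script"
      num_failed=$((num_failed + 1))
    fi
  done

  # Only the success of the parse is checked; the pretty-printed ASDL
  # output is not compared against anything.
  echo "$num_failed script(s) failed to parse"
}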

The test/gold.sh framework looks like this:

_compare() {
  "$@" >_tmp/left.txt  # run with shell in shebang line
  local left_status=$?

  bin/osh "$@" >_tmp/right.txt  # run with OSH
  local right_status=$?

  # ... compare output and status
}

# Test cases: run the command under two shells
_compare ./configure
_compare build/actions.sh gen-module-init
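
The comparison itself is elided above. One plausible way to write it, assuming the implicit assertion is "same stdout and same exit status under both shells" (a sketch, not the code in test/gold.sh):

# One plausible comparison step (an assumption, not the actual test/gold.sh):
# diff the captured outputs, then compare the two exit statuses.
_check() {
  local left_status=$1
  local right_status=$2

  if ! diff _tmp/left.txt _tmp/right.txt; then
    echo 'FAIL: stdout differs'
    return 1
  fi

  if test "$left_status" != "$right_status"; then
    echo "FAIL: exit status $left_status != $right_status"
    return 1
  fi

  echo 'PASS'
}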

One reason I'm writing a Unix shell is that I've found tiny scripts like this to be pleasant and productive. I want my software to work well, and shell helps me achieve that.

The Flow

I pick shell scripts to run as gold tests, which uncovers unimplemented features. The implicit assertions are a rough check for correctness. Then I nail down the exact behavior with explicit assertions using spec tests.

For example, I use set -o errexit in all my scripts, so the gold tests quickly revealed that I needed to implement it. Then I wrote more than a dozen spec test cases for it.
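
For illustration, an errexit case in that little assertion language might look roughly like this. The #### and ## comment syntax is an approximation for the sake of the example, not the literal sh_spec.py format:

#### errexit aborts after a failing command
set -o errexit
false
echo 'not reached'    # never runs, so stdout is empty
## status: 1

sh_spec.py runs each case against several shells and renders the results as HTML, which is where the rows mentioned below come from.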

Scanning across rows reveals differences between shells:

  1. dash doesn't implement the (( )) arithmetic construct, so that case is marked N-I for not implemented.
  2. You can see that bash is the only shell that ignores a failure within a command substitution, e.g. $(echo one; false; echo two).
  3. All shells ignore a failure within a local assignment (but not within a global assignment), because local is a builtin with its own exit code (see the sketch after this list).
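
A quick sketch of the third point; the function names are just for illustration:

# Under `set -o errexit`, a plain assignment takes the exit status of its
# command substitution, but `local` is a builtin whose own (successful)
# status is what errexit sees.
set -o errexit

local_assignment() {
  local x=$(false)   # status of `local` is 0, so the failure is ignored
  echo 'still running after local x=$(false)'
}

global_assignment() {
  x=$(false)         # status of the assignment is 1, so errexit aborts
  echo 'not reached'
}

local_assignment     # prints its message
global_assignment    # the script exits here with status 1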

In keeping with its philosophy of being more strict, OSH fixes the latter two issues.

If the spec tests are too coarse or become too numerous, then I switch to unit tests. (This happened today when implementing flag parsing for shell builtins.)

Examples of Gold Tests

Over the last few weeks, the gold test cases determined which shell features I implemented first.

Confusingly, because the test frameworks are shell scripts themselves, they can be used as gold tests.
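
In the style of the _compare calls above, such a case might look like the hypothetical invocations below; the harness subcommands are made up for illustration, not the project's actual gold test cases.

# Hypothetical gold tests that run the test harnesses themselves under
# their shebang shell and under OSH; the arguments are placeholders.
_compare test/unit.sh all
_compare test/spec.sh smoke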

In other words, the OSH test frameworks run under OSH.

Next

The next post was going to be a log of what I did in the last few weeks, titled The Long Slog Through a Shell.

But in writing this post, I realized I have more thoughts about tests, which are higher level and forward looking. So the next post will be How I Would Like to Use Tests.