blog | oilshell.org
I created the email@example.com mailing list in November, and there are now a few threads. Subscribe if you're interested!
I summarized the state of the project in my review of Roadmap #3, and described further tasks in Roadmap #4.
The most important task is to finish the shell runtime in Python, using the spec test framework as a guide.
However, we have to shave a yak first. Thanks to all the people who ran the spec tests in response to my last post.
The problem is that we need to compare OSH against known versions of bash, dash, zsh, and mksh. Every Linux distro ships different versions of those shells, and all four shells have changed behavior across recent versions.
On the one hand, I'm surprised that shells are this unstable. On the other hand, this gives me confidence that the tests are thorough.
As you can see from the bottom row of the latest results, there are now 485 test cases across 39 files. OSH succeeds on 254 and fails on 167. The remainder aren't run against OSH at all, generally because a feature isn't implemented yet, e.g. brace expansion.
When every test passes on OSH, the row is colored green. There are now six green rows. Yellow rows indicate known failures, and red is for unexpected failures.
I think the most promising solution is to use the Nix package manager, which can pin the version of a package and install multiple versions at once. If that doesn't work, there are other solutions, like providing scripts to build a specific version of each shell from source.
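Whatever the pinning mechanism, the test harness also needs a way to detect version skew on a given machine. Here's a minimal sketch of one approach, comparing a shell's --version banner against a pinned version. The function names and pinned versions are hypothetical, not part of the project:

```python
import re

# Hypothetical pins -- illustrative only; the project would choose exact versions.
PINNED = {'bash': '4.4.12', 'zsh': '5.3.1'}

def extract_version(banner):
    """Pull the first dotted version number out of a --version banner,
    e.g. 'GNU bash, version 4.4.12(1)-release' -> '4.4.12'."""
    m = re.search(r'\d+(?:\.\d+)+', banner)
    return m.group(0) if m else None

def matches_pin(shell, banner):
    """True if the installed shell's banner matches the pinned version."""
    return extract_version(banner) == PINNED.get(shell)
```

In practice you'd feed this the output of `bash --version` or `zsh --version` and color the results row red when a pin doesn't match.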
As mentioned, the goal is to develop an executable "spec" for OSH by writing a complete shell runtime in Python. We are reverse engineering bash and rationalizing its behavior.
However, I suspect that this starting point is too broad. Some basic things like the cd builtin don't work yet!
So I suggested an even tighter focus in this message: get the spec test harness itself running under OSH. The harness is a real program that wasn't written with OSH in mind. As documented on the Contributing page, we invoke spec tests like this:
./spec.sh all # run all spec tests in parallel
This uses xargs -P to run many copies of sh_spec.py, one for each test file. Because spec.sh and spec-runner.sh are shell scripts themselves, we can also run them under OSH:
bin/osh ./spec.sh all # run spec.sh under bin/osh
However, this doesn't work yet. I hadn't even implemented $1, so this line failed:

local path=$1 # in function maybe-show
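Not OSH's actual code, but a toy model of what implementing $1 involves: positional parameters live in the current frame as a list indexed from 1, with $0 handled separately. All names here are illustrative:

```python
class Mem:
    """Toy model of shell state for positional-parameter lookups like $1."""

    def __init__(self, dollar0, argv):
        self.dollar0 = dollar0
        self.argv = argv  # arguments to the current function or script

    def get_positional(self, n):
        """Evaluate $n; unset positionals evaluate to '' here
        (under set -u a real shell would raise an error instead)."""
        if n == 0:
            return self.dollar0
        if n <= len(self.argv):
            return self.argv[n - 1]
        return ''

mem = Mem('spec.sh', ['/etc/debian_version'])
# mem.get_positional(1) -> '/etc/debian_version'
```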
So I implemented it, and encountered another problem with extraneous output:
echo "--- $path ---"

--- /etc: No such file or directory
--- /etc/debian_version ---
Where did the '--- /etc: No such file or directory' line come from? It's because OSH globs every word, and glob() treats the '--- /etc/' prefix as a directory name! Instead, OSH should only pass a word to glob() if it contains glob characters like * or ?.
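A sketch of that check, not OSH's implementation. The unmatched-glob fallback follows bash's default behavior (no nullglob), and the escape handling is deliberately simplistic:

```python
import glob

GLOB_CHARS = set('*?[')

def looks_like_glob(word):
    """True if the word contains an unescaped glob metacharacter."""
    i = 0
    while i < len(word):
        c = word[i]
        if c == '\\':
            i += 2  # skip the escaped character
            continue
        if c in GLOB_CHARS:
            return True
        i += 1
    return False

def maybe_glob(word):
    """Only hand words with metacharacters to glob(); pass others through."""
    if looks_like_glob(word):
        matches = glob.glob(word)
        return matches if matches else [word]  # no match: keep the word as-is
    return [word]
```

With this check, a literal word like '--- /etc' never reaches glob(), so the spurious "No such file or directory" above goes away.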
I wanted to publish small example commits to show how I make the spec tests pass, but both of the changes above impact the architecture significantly.
The $1 issue caused me to change error codes into exceptions. For example, we now use _EvalError in word_eval.py and cmd_exec.py to unwind the stack.
Error codes are also used in the parsers. You will see code like this:
node = self.ParseCommandTerm()
if node is None: return None
I was trying to write Python code that could be automatically translated to C++, and I'm used to writing C++ without exceptions.
But I now think that we'll break the Python dependency by compiling our small subset of Python to OVM, which will have exceptions. More on that later.
In any case, the explicit error codes are tedious, especially for parsers and recursive evaluators. I've settled on the style of using exceptions within a set of mutually recursive functions. I still want to use error codes at API boundaries for "formality". If you have questions about this, please leave a comment.
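To make that style concrete, here's a hypothetical minimal example (not OSH's actual parser): exceptions unwind the stack inside the mutually recursive functions, and the public entry point converts them back to an error-code-like result at the API boundary.

```python
class _ParseError(Exception):
    """Internal control flow, raised anywhere inside the recursive parser."""

def _parse_command_term(tokens, pos):
    # Deep inside the recursion, failure is a raise, not an error-code check.
    if pos >= len(tokens):
        raise _ParseError('unexpected end of input')
    return tokens[pos], pos + 1

def parse(tokens):
    """API boundary: exceptions stay internal; callers see a node or None."""
    try:
        node, _ = _parse_command_term(tokens, 0)
        return node
    except _ParseError:
        return None
```

The payoff is that intermediate functions like _parse_command_term don't need `if node is None: return None` after every call.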
The glob() issue caused me to rethink how word evaluation works.
Right now I have a two-step process of evaluation and globbing, but I'm ready
to implement something very close to the four-stage pipeline shown in the
opening figure of the AOSA Book chapter on Bash.
(One wrinkle: ~ expansion is treated differently than other substitutions.)
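The staged shape can be sketched as a composition of small functions. This is heavily simplified and not OSH's real pipeline: real shells also do brace expansion, quoting, full IFS handling, and more, and each stage below handles only a toy case. All names are illustrative:

```python
import glob
import re

def tilde_expand(word, home):
    """Stage (simplified): a leading bare ~ becomes the home directory."""
    return home + word[1:] if word.startswith('~') else word

def substitute(word, env):
    """Stage (simplified): expand $name variable references."""
    return re.sub(r'\$(\w+)', lambda m: env.get(m.group(1), ''), word)

def split_fields(text):
    """Stage (simplified): split the substituted text on whitespace."""
    return text.split()

def glob_expand(word):
    """Stage: only glob words that contain metacharacters."""
    if any(c in word for c in '*?['):
        matches = sorted(glob.glob(word))
        return matches or [word]  # unmatched glob: keep the word literally
    return [word]

def evaluate_word(word, env, home='/home/user'):
    """Run one word through all the stages, flattening the results."""
    expanded = substitute(tilde_expand(word, home), env)
    return [g for field in split_fields(expanded)
              for g in glob_expand(field)]
```

Note that with staging like this, the earlier bug can't happen: '--- $path ---' with path=/etc splits into three fields, and none of them contains a glob character, so glob() is never consulted.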
At times like this, I'm happy that I decided to prototype OSH in Python. Most of the work is discovering behavior, not writing code. I mentioned in the first post that I found the iterative discovery process slow in C++.
So I plan to rewrite the word evaluation pipeline, then port the tested logic and architecture to C++.
I began by describing the shell version problem with the spec tests.
While that's being addressed, I'm focusing on a tighter use case for OSH: running its own test harness.
There were two architectural changes that came out of that: changing error codes to exceptions, and redoing the word evaluation pipeline.
After the architecture is more stable, diffs to implement features should be smaller. I plan to publish a series of example commits to give people an idea of how the codebase works.
Here's one that shows how to add a new spec test. This one is for the shell option set -o nounset.
If you're interested, please subscribe to oil-dev, and see the Contributing page. Until we can pin versions, the spec tests may have some red rows on your machine, but it should be possible to familiarize yourself with the code.
I'm open to feedback on the development process. Feel free to ask me questions about how the code works, or make fun of it for being sloppy. Leave a comment or send mail to firstname.lastname@example.org!