
Roadmap #5: OSH Release Criteria

2017-06-03

Looking back at Roadmap #4, it's clear the project has totally changed since then.

I tried to solicit contributors, and there was some interest, but not much materialized. In retrospect, the project was too immature at that point. Since then I've done major surgery on the code, e.g. porting it all back to Python 2 for OVM.

I thought a lot about [the riskiest part of the project][] -- the implementation language -- and that led me to an unexpected solution.

I want to get the project done sooner rather than later, so the approach of painstakingly porting various components of the project to C++ was ruled out.

I feel good about the project based on its metrics.

This project is a lot of work, but I feel that we're on track for an initial OSH release sometime this summer.

1. Build System and Release Tarball

(1) I've mostly finished the OVM build work, but I still need to test the configure script and start writing the install script (related thread). Oil won't use autotools.
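
To make that concrete, here's roughly the end-user flow I have in mind for the release tarball, with a small hand-written configure script instead of autoconf output. The exact commands are illustrative, not final:

```
# Hypothetical tarball workflow -- names and targets are illustrative
./configure        # hand-written shell script, not generated by autotools
make               # build the OVM-based bundle
sudo ./install     # install script still to be written
```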

2. Error Handling

(2) I suspect that one motivation for using a bigger, slower clone of bash would be better error messages. I wrote a lot about parse errors last year, and I've been working on runtime errors.

Here's some recent evidence that people are not happy with bash's error handling:

I've noticed a common pattern in bash: it implements incorrect behavior, then can't fix it in the name of compatibility. Because it has such a large user base, it can only fix bugs behind --posix, and those fixes come later. I hope that comprehensive tests will help me avoid this fate for OSH.
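
To make the complaint concrete, here's one well-known example of behavior that can't change now without breaking scripts. This is my own illustration, not one of the reports above: errexit is silently suppressed in certain contexts.

```
# errexit gotcha: 'set -e' is ignored while a function is called as an
# 'if' condition, so the failing command doesn't abort the script.
set -e
f() {
  false              # would normally stop the script under 'set -e'
  echo "still here"  # ...but this line runs anyway
}
if f; then
  echo "condition succeeded"
fi
```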

3. Run a Program Found in the Wild

(3) OSH runs a lot of simple programs, but I want it to run a medium-sized program that I didn't write.

The "gold tests" in test/gold.sh run arbitrary shell programs under bash and [OSH][] and compares their output. This simple strategy has helped me fill in many obvious holes.

I want to run one significant program. I hope that program will be debootstrap, although it's very complex and may be too ambitious.

Running real programs will reveal more holes in the implementation. Here are a few off the top of my head, though there are undoubtedly more:

4. Spec Test Failures

(4) I would like to get the 141 spec test failures mentioned in the last post down to 100 or so.
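
For context, a spec test is essentially a small snippet of shell run under several shells, with the output checked against an expected result. Here's a rough sketch in that spirit; it's my own illustration, not the actual test format or harness, and the list of shells is arbitrary:

```
# Compare one snippet's behavior across shells (shell list is illustrative)
snippet='echo $(( 1 + 2 ))'
for sh in bash dash bin/osh; do
  actual=$("$sh" -c "$snippet")
  if [ "$actual" = "3" ]; then
    echo "PASS $sh"
  else
    echo "FAIL $sh: got '$actual'"
  fi
done
```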

Deferred Work