
How To Write a Test Case

Writing a test case

The easiest way to prepare a new test case is to base it on an existing one for a similar situation. There are two major categories of tests: batch and interactive. Batch oriented tests are usually easier to write.

The GCC tests are a good example of batch oriented tests. All GCC tests consist primarily of a call to a single common procedure, since all the tests either have no output, or only have a few warning messages when successfully compiled. Any non-warning output is a test failure. All the C code needed is kept in the test directory. The test driver, written in expect, need only get a listing of all the C files in the directory, and compile them all using a generic procedure. This procedure, and a few other supporting procedures for these tests, are kept in the library module `lib/c-torture.exp' in the GCC test suite. Most tests of this kind use very few expect features, and are coded almost purely in Tcl.

Writing the complete suite of C tests, then, consisted of these steps:

  1. Copying all the C code into the test directory. These tests were based on the C-torture test created by Torbjorn Granlund (on behalf of the Free Software Foundation) for GCC development.
  2. Writing (and debugging) the generic expect procedures for compilation.
  3. Writing the simple test driver: its main task is to search the directory (using the Tcl procedure glob for filename expansion with wildcards) and call a Tcl procedure with each filename. It also checks for a few errors from the testing procedure.
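
The steps above can be sketched as a minimal batch driver. This is an illustrative sketch, not the actual GCC driver: the procedure name `c-torture-compile' stands in for whatever generic compilation procedure `lib/c-torture.exp' provides.

```tcl
# Hypothetical batch test driver, modeled on the GCC style.
# The generic compilation procedure is assumed to come from
# the library module (here called c-torture-compile).

# Find every C source file in this test directory.
foreach src [lsort [glob -nocomplain $srcdir/$subdir/*.c]] {
    # The generic procedure compiles one file and reports
    # pass or fail itself, based on the compiler's output.
    c-torture-compile $src
}
```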

Testing interactive programs is intrinsically more complex. Tests for most interactive programs require some trial and error before they are complete.

However, some interactive programs can be tested in a simple fashion reminiscent of batch tests. For example, prior to the creation of DejaGnu, the GDB distribution already included a wide-ranging testing procedure. This procedure was very robust, and had already undergone much more debugging and error checking than many recent DejaGnu test cases. Accordingly, the best approach was simply to encapsulate the existing GDB tests, for reporting purposes. Thereafter, new GDB tests built up a family of expect procedures specialized for GDB testing.

`gdb.t10/crossload.exp' is a good example of an interactive test.
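
The same send/expect rhythm underlies most interactive tests. The following sketch is not taken from `crossload.exp'; the command, expected value, and prompt are illustrative:

```tcl
# Send one GDB command, then match all resulting output up to
# and including the next prompt.
send "print 1+2\n"
expect {
    -re ".*= 3.*\\(gdb\\) $" { pass "print 1+2" }
    -re "\\(gdb\\) $"        { fail "print 1+2" }
    timeout                  { fail "print 1+2 (timeout)" }
}
```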

Debugging a test case

These are the kinds of debugging information available from DejaGnu:

  1. Output controlled by test scripts themselves, explicitly allowed for by the test author. This kind of debugging output appears in the detailed output recorded in the `tool.log' file. To do the same for new tests, use the verbose procedure (which in turn uses the variable also called verbose) to control how much output to generate. This will make it easier for other people running the test to debug it if necessary. Whenever possible, if `$verbose' is 0, there should be no output other than the output from pass, fail, error, and warning. Then, to whatever extent is appropriate for the particular test, allow successively higher values of `$verbose' to generate more information. Be kind to other programmers who use your tests: provide for a lot of debugging information.
  2. Output from the internal debugging functions of Tcl and expect. There is a command line option for each; both forms of debugging output are recorded in the file `dbg.log' in the current directory. Use `--debug' for information from the expect level; it generates displays of the expect attempts to match the tool output with the patterns specified (see section Logging expect internal actions). This output can be very helpful while developing test scripts, since it shows precisely the characters received. Iterating between the latest attempt at a new test script and the corresponding `dbg.log' can allow you to create the final patterns by "cut and paste". This is sometimes the best way to write a test case. Use `--strace' to see more detail at the Tcl level; this shows how Tcl procedure definitions expand, as they execute. The associated number controls the depth of definitions expanded; see the discussion of `--strace' in section Using runtest.
  3. Finally, if the value of `verbose' is 3 or greater, runtest turns on the expect command log_user. This command prints all expect actions to the expect standard output, to the detailed log file, and (if `--debug' is on) to `dbg.log'.
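
In a test script, the convention described in item 1 looks like the following sketch (the messages themselves are illustrative):

```tcl
# Silent at verbose level 0; progressively chattier above that.
# The optional second argument to verbose is the minimum level
# at which the message appears (the default is 1).
verbose "setting breakpoint on main" 1
verbose "sending command: b main" 2
send "b main\n"
```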

Adding a test case to a test suite

There are two slightly different ways to add a test case. One is to add the test case to an existing directory. The other is to create a new directory to hold your test. The existing test directories represent several styles of testing, all of which are slightly different; examine the directories for the tool of interest to see which (if any) is most suitable.

Adding a GCC test can be very simple: just add the C code to any directory beginning with `gcc.' and it runs on the next `runtest --tool gcc'.

To add a test to GDB, first add any source code you will need to the test directory. Then you can either create a new expect file, or add your test to an existing one (any file with a `.exp' suffix). Creating a new `.exp' file is probably a better idea if the test is significantly different from existing tests. Adding it as a separate file also makes upgrading easier. If the C code has to be already compiled before the test will run, then you'll have to add it to the `Makefile.in' file for that test directory, then run configure and make.
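
A new `.exp' file for GDB needs very little boilerplate. This sketch assumes a helper like gdb_test (a prompt-matching procedure from the GDB test library) and a hypothetical binary `mytest' already built by the directory's `Makefile.in':

```tcl
# Hypothetical mytest.exp: load a precompiled binary and
# check one command's output.
set prms_id 0
set bug_id 0

# gdb_load starts GDB on the given executable.
gdb_load $objdir/$subdir/mytest

# gdb_test sends the command, matches the pattern against the
# output up to the prompt, and calls pass or fail itself.
gdb_test "break main" "Breakpoint 1 at.*" "set breakpoint on main"
```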

Adding a test by creating a new directory is very similar:

  1. Create the new directory. All subdirectory names begin with the name of the tool to test; e.g. G++ tests might be in a directory called `g++.other'. There can be multiple test directories that start with the same tool name (such as `g++').
  2. Add the new directory name to the `configdirs' definition in the `configure.in' file for the test suite directory. This way when make and configure next run, they include the new directory.
  3. Add the new test case to the directory, as above.
  4. To add support in the new directory for configure and make, you must also create a `Makefile.in' and a `configure.in'. See section `What Configure Does' in Cygnus Configure.
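
The `configdirs' addition mentioned in item 2 is a one-line change; the directory name `g++.other' here is the hypothetical new directory from item 1:

```
# In the test suite's configure.in:
configdirs="$configdirs g++.other"
```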

Hints on writing a test case

There may be useful existing procedures already written for your test in the `lib' directory of the DejaGnu distribution. See section Library procedures.

It is safest to write patterns that match all the output generated by the tested program; this is called closure. If a pattern does not match the entire output, any output that remains will be examined by the next expect command. In this situation, the precise boundary that determines which expect command sees what is very sensitive to timing between the expect task and the task running the tested tool. As a result, the test may sometimes appear to work, but is likely to have unpredictable results. (This problem is particularly likely for interactive tools, but can also affect batch tools--especially for tests that take a long time to finish.) The best way to ensure closure is to use the `-re' option for the expect command to write the pattern as a full regular expression; then you can match the end of output using a `$'. It is also a good idea to write patterns that match all available output by using `.*' after the text of interest; this will also match any intervening blank lines. Sometimes an alternative is to match end of line using `\r' or `\n', but this is usually too dependent on terminal settings.
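
As an example of closure, the pattern below uses `-re', consumes all intervening output with `.*', and anchors on the prompt at the end of the output; the prompt shown is GDB's, so adjust it for the tool under test:

```tcl
# `.*' swallows everything (including blank lines) between the
# interesting text and the prompt; `$' anchors the match at the
# end of the output, so nothing is left over for the next expect.
expect {
    -re "Breakpoint 1 at.*\\(gdb\\) $" { pass "break main" }
    timeout                            { fail "break main (timeout)" }
}
```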

Always escape punctuation, such as `(' or `"', in your patterns; for example, write `\('. If you forget to escape punctuation, you will usually see an error message like `extra characters after close-quote'.
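
For instance, to match the literal prompt `(gdb) ' the parentheses must be escaped; in a double-quoted Tcl string the backslash itself must be doubled:

```tcl
# Unescaped, ( and ) are regular-expression grouping operators;
# escaped, they match the literal characters in the prompt.
expect -re "\\(gdb\\) $"
```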

If you have trouble understanding why a pattern does not match the program output, try using the `--debug' option to runtest, and examine the debug log carefully. See section Logging expect internal actions.

Be careful not to neglect output generated by setup rather than by the interesting parts of a test case. For example, while testing GDB, I send a `set height 0\n' command. The purpose is simply to make sure GDB never calls a paging program. The `set height' command in GDB does not generate any output; but running any command makes GDB issue a new `(gdb) ' prompt. If there were no expect command to match this prompt, the output `(gdb) ' would begin the text seen by the next expect command--which might make that pattern fail to match.
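
A sketch of that setup step (the pattern is the standard GDB prompt):

```tcl
# `set height 0' itself prints nothing, but GDB still issues a
# fresh prompt; match it so the next test starts with a clean
# buffer instead of a leftover `(gdb) '.
send "set height 0\n"
expect -re "\\(gdb\\) $"
```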

To preserve basic sanity, I also recommend that no test ever pass if there is any kind of problem in the test case. To take an extreme case, tests that pass even when the tool will not spawn are misleading. Ideally, a test in this situation should not fail either; instead, print an error message by calling one of the DejaGnu procedures error or warning.
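
A guard of this kind might look like the following sketch (the variable `GDB' naming the tool binary is an assumption):

```tcl
# If the tool cannot even be started, report an error rather
# than letting the remaining tests pass or fail misleadingly.
if {[catch {spawn $GDB} result]} {
    error "could not spawn $GDB: $result"
    return
}
```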

Special variables used by test cases

Your test cases can use these variables, with conventional meanings (as well as the variables saved in `site.exp'; see section Setting runtest defaults):

These variables are available to all test cases.

prms_id
The tracking system (e.g. GNATS) number identifying a corresponding bug report. (`0' if you do not specify it in the test script.)
bug_id
An optional bug id; may reflect a bug identification from another organization. (`0' if you do not specify it.)
subdir
The subdirectory for the current test case.

These variables should never be changed. They appear in most tests.

expect_out(buffer)
The output from the last command. This is an internal variable set by expect.
exec_output
This is the output from a tool_load command. This only applies to tools like GCC and GAS which produce an object file that must in turn be executed to complete a test.
comp_output
This is the output from a tool_start command. This is conventionally used for batch oriented programs, like GCC and GAS, that may produce interesting output (warnings, errors) without further interaction.
