Running Tests

There are two ways to execute a test suite. The most common way is when there is existing support in the Makefile. This support consists of a check target. The other way is to execute the runtest program directly. Running runtest directly from the command line requires either all the correct options, or a correctly set up site.exp file.

Make check

To run tests from an existing collection, first use configure as usual to set up the build directory. Then try typing:

make check

If the check target exists, it usually saves you some trouble. For instance, it can set up any auxiliary programs or other files needed by the tests. The most common file the check target builds is site.exp. The site.exp file contains various variables that DejaGnu uses to determine the configuration of the program being tested. This is mostly for supporting remote testing.

The check target is supported by GNU Automake. To have DejaGnu support added to your generated Makefile.in, just add the keyword dejagnu to the AUTOMAKE_OPTIONS variable in your Makefile.am file.

Once you have run make check to build any auxiliary files, you can invoke the test driver runtest directly to repeat the tests. You will also have to execute runtest directly for test collections with no check target in the Makefile.

Runtest

runtest is the executable test driver for DejaGnu. You can specify two kinds of things on the runtest command line: command line options, and Tcl variables for the test scripts. The options are listed alphabetically below.

runtest returns an exit code of 1 if any test has an unexpected result; otherwise (if all tests pass or fail as expected) it returns 0 as the exit code.

Output States

runtest flags the outcome of each test as one of these cases. The POSIX standard specifies the meanings of these cases.

PASS

The most desirable outcome: the test succeeded, and was expected to succeed.

XPASS

A pleasant kind of failure: a test was expected to fail, but succeeded.
This may indicate progress; inspect the test case to determine whether you should amend it to stop expecting failure.

FAIL

A test failed, although it was expected to succeed. This may indicate regression; inspect the test case and the failing software to locate the bug.

XFAIL

A test failed, but it was expected to fail. This result indicates no change in a known bug. If a test fails because the operating system where the test runs lacks some facility required by the test, the outcome is UNSUPPORTED instead.

UNRESOLVED

Output from a test requires manual inspection; the test suite could not automatically determine the outcome. For example, your tests can report this outcome when a test does not complete as expected.

UNTESTED

A test case is not yet complete, and in particular cannot yet produce a PASS or FAIL. You can also use this outcome in dummy ``tests'' that note explicitly the absence of a real test case for a particular property.

UNSUPPORTED

A test depends on a conditionally available feature that does not exist (in the configured testing environment). For example, you can use this outcome to report on a test case that does not work on a particular target because its operating system support does not include a required subroutine.

runtest may also display the following messages:

ERROR

Indicates a major problem (detected by the test case itself) in running the test. This is usually an unrecoverable error, such as a missing file or loss of communication to the target. (POSIX test suites should not emit this message; use UNSUPPORTED, UNTESTED, or UNRESOLVED instead, as appropriate.)

WARNING

Indicates a possible problem in running the test. Usually warnings correspond to recoverable errors, or display an important message about the following tests.

NOTE

An informational message about the test case.

Invoking Runtest

This is the full set of command line options that runtest recognizes. Arguments may be abbreviated to the shortest unique string.
--all (-a)

Display all test output. By default, runtest shows only the output of tests that produce unexpected results; that is, tests with status FAIL (unexpected failure), XPASS (unexpected success), or ERROR (a severe error in the test case itself). Specify --all to see output for tests with status PASS (success, as expected), XFAIL (failure, as expected), or WARNING (minor error in the test case itself).

--build string

string is a full configuration ``triple'' name as used by configure. This is the type of machine DejaGnu and the tools to be tested are built on. For a normal cross this is the same as the host, but for a canadian cross, they are separate.

--host string

string is a full configuration ``triple'' name as used by configure. Use this option to override the default string recorded by your configuration's choice of host. This choice does not change how anything is actually configured unless --build is also specified; it affects only DejaGnu procedures that compare the host string with particular values. The procedures ishost, istarget, isnative, and setup_xfail are affected by --host. In this usage, host refers to the machine that the tests are to be run on, which may not be the same as the build machine. If --build is also specified, then --host refers to the machine that the tests will be run on, not the machine DejaGnu is run on.

--host_board name

The host board to use. Use this option to override the default setting (running native tests).

--target string

string is a full configuration ``triple'' name of the form cpu-vendor-os as used by configure. This option changes the configuration runtest uses for the default tool names, and other setup information.

--debug (-de)

Turns on the expect internal debugging output. Debugging output is displayed as part of the runtest output, and logged to a file called dbg.log. The extra debugging output does not appear on standard output, unless the verbose level is greater than 2 (for instance, to see debug output immediately, specify --debug -v -v).
The debugging output shows all attempts at matching the test output of the tool with the scripted patterns describing expected output. The output generated with --strace also goes into dbg.log.

--help (-he)

Prints out a short summary of the runtest options, then exits (even if you also specify other options).

--ignore

The names of specific tests to ignore.

--objdir path

Use path as the top directory containing any auxiliary compiled test code. This defaults to ".". Use this option to locate pre-compiled test code. You can normally prepare any auxiliary files needed with make.

--outdir path

Write output logs in directory path. The default is ".", the directory where you start runtest. This option affects only the summary and the detailed log files tool.sum and tool.log. The DejaGnu debug log dbg.log always appears (when requested) in the local directory.

--reboot

Reboot the target board when runtest initializes. Usually, when running tests on a separate target board, it is safer to reboot the target to be certain of its state. However, when developing test scripts, rebooting takes a lot of time.

--srcdir path

Use path as the top directory for test scripts to run. runtest looks in this directory for any subdirectory whose name begins with the toolname (specified with --tool). For instance, with --tool gdb, runtest uses tests in subdirectories gdb.* (with the usual shell-like filename expansion). If you do not use --srcdir, runtest looks for test directories under the current working directory.

--strace n

Turn on internal tracing for expect, to n levels deep. By adjusting the level, you can control the extent to which your output expands multi-level Tcl statements. This allows you to ignore some levels of case or if statements. Each procedure call or control structure counts as one ``level''. The output is recorded in the same file, dbg.log, used for output from --debug.

--connect type

Connect to a target testing environment as specified by type, if the target is not the computer running runtest.
For example, use --connect to change the program used to connect to a ``bare board'' boot monitor. The choices for type in the DejaGnu 1.4 distribution are rlogin, telnet, rsh, tip, kermit, and mondfe. The default for this option depends on the configuration; the most convenient communication method available is chosen, but often other alternatives work as well. You may find it useful to try alternative connect methods if you suspect a communication problem with your testing target.

--baud n

Set the default baud rate to something other than 9600. (Some serial interface programs, like tip, use a separate initialization file instead of this value.)

--target_board boards

The list of target boards to run tests on.

--tool name

Specifies which test suite to run, and what initialization module to use. The tool name is used only for these two purposes. It is not used to name the executable program to test. Executable tool names (and paths) are recorded in site.exp and you can override them by specifying Tcl variables on the command line. For example, including "--tool gcc" on the runtest command line runs tests from all test subdirectories whose names match gcc.*, and uses one of the initialization modules named config/*-gcc.exp. To specify the name of the compiler (perhaps as an alternative path to what runtest would use by default), use GCC=binname on the runtest command line.

--tool_exec path

The path to the tool executable to test.

--tool_opts options

A list of additional options to pass to the tool.

--verbose (-v)

Turns on more output. Repeating this option increases the amount of output displayed. Level one (-v) is simply test output. Level two (-v -v) shows messages on options, configuration, and process control. Verbose messages appear in the detailed (*.log) log file, but not in the summary (*.sum) log file.

--version (-V)

Prints out the version numbers of DejaGnu, expect and Tcl, and exits without running any tests.

-D[0-1]

Start the internal Tcl debugger. The Tcl debugger supports breakpoints, single stepping, and other common debugging activities.
See the document "Debugger for Tcl Applications" by Don Libes. (Distributed in PostScript form with expect as the file expect/tcl-debug.ps.) If you specify -D1, the expect shell stops at a breakpoint as soon as DejaGnu invokes it. If you specify -D0, DejaGnu starts as usual, but you can enter the debugger by sending an interrupt (e.g. by typing C-c).

testfile.exp[=arg(s)]

Specify the names of testsuites to run. By default, runtest runs all tests for the tool, but you can restrict it to particular testsuites by giving the names of the .exp expect scripts that control them. testsuite.exp may not include path information; use plain filenames.

testfile.exp="testfile1 ..."

Specify a subset of tests in a suite to run. For compiler or assembler tests, which often use a single .exp script covering many different source files, this option allows you to further restrict the tests by listing particular source files to compile. Some tools even support wildcards here. The wildcards supported depend upon the tool, but typically they are ?, *, and [chars].

tclvar=value

You can define Tcl variables for use by your test scripts in the same style used with make for environment variables. For example, runtest GDB=gdb.old defines a variable called GDB; when your scripts refer to $GDB in this run, they use the value gdb.old. The default Tcl variables used for most tools are defined in the main DejaGnu Makefile; their values are captured in the site.exp file.

Common Options

Typically, you don't need to use any command-line options. The --tool option is only required when there is more than one test suite in the same directory. The default options are in the local site.exp file, created by "make site.exp".
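As a hypothetical illustration, the testsuite selection and Tcl variable syntax described above can be combined on a single command line; the test file name and compiler path here are placeholders, not part of any particular test suite:

```
eg$ runtest --tool gcc compile.exp="921202-1.c" GCC=/usr/local/bin/gcc
```

This runs only the tests driven by compile.exp, restricted to the named source file, and points the GCC variable at an alternative compiler binary.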
For example, if the directory gdb/testsuite contains a collection of DejaGnu tests for GDB, you can run them like this:

eg$ cd gdb/testsuite
eg$ runtest --tool gdb

Test output follows, ending with:

                === gdb Summary ===

# of expected passes 508
# of expected failures 103
/usr/latest/bin/gdb version 4.14.4 -nx

You can use the option --srcdir to point to some other directory containing a collection of tests:

eg$ runtest --srcdir /devo/gdb/testsuite

By default, runtest prints only the names of the tests it runs, output from any tests that have unexpected results, and a summary showing how many tests passed and how many failed. To display output from all tests (whether or not they behave as expected), use the --all option. For more verbose output about processes being run, communication, and so on, use --verbose. To see even more output, use multiple --verbose options. See Invoking Runtest, above, for a more detailed explanation of each runtest option.

Test output goes into two files in your current directory: summary output in tool.sum, and detailed output in tool.log. (tool refers to the collection of tests; for example, after a run with --tool gdb, look for output files gdb.sum and gdb.log.)

The files DejaGnu produces

DejaGnu always writes two kinds of output files: summary logs and detailed logs. The contents of both of these are determined by your tests. For troubleshooting, a third kind of output file is useful: use --debug to request an output file showing details of what Expect is doing internally.

Summary File

DejaGnu always produces a summary output file tool.sum. This summary shows the names of all test files run; for each test file, one line of output from each pass command (showing status PASS or XPASS) or fail command (status FAIL or XFAIL); trailing summary statistics that count passing and failing tests (expected and unexpected); and the full pathname and version number of the tool tested.
(All possible outcomes, and all errors, are always reflected in the summary output file, regardless of whether or not you specify --all.)

If any of your tests use the procedures unresolved, unsupported, or untested, the summary output also tabulates the corresponding outcomes.

For example, after runtest --tool binutils, look for a summary log in binutils.sum. Normally, DejaGnu writes this file in your current working directory; use the --outdir option to select a different directory.

Here is a short sample summary log:

Test Run By rob on Mon May 25 21:40:57 PDT 1992

                === gdb tests ===

Running ./gdb.t00/echo.exp ...
PASS:   Echo test
Running ./gdb.all/help.exp ...
PASS:   help add-symbol-file
PASS:   help aliases
PASS:   help breakpoint "bre" abbreviation
FAIL:   help run "r" abbreviation
Running ./gdb.t10/crossload.exp ...
PASS:   m68k-elf (elf-big) explicit format; loaded
XFAIL:  mips-ecoff (ecoff-bigmips) "ptype v_signed_char" signed C types

                === gdb Summary ===

# of expected passes 5
# of expected failures 1
# of unexpected failures 1
/usr/latest/bin/gdb version 4.6.5 -q

Log File

DejaGnu also saves a detailed log file tool.log, showing any output generated by tests as well as the summary output. For example, after runtest --tool binutils, look for a detailed log in binutils.log. Normally, DejaGnu writes this file in your current working directory; use the --outdir option to select a different directory.
Here is a brief example showing a detailed log for G++ tests:

Test Run By rob on Mon May 25 21:40:43 PDT 1992

                === g++ tests ===

--- Running ./g++.other/t01-1.exp ---
PASS:   operate delete

--- Running ./g++.other/t01-2.exp ---
FAIL:   i960 bug EOF
p0000646.C: In function `int warn_return_1 ()':
p0000646.C:109: warning: control reaches end of non-void function
p0000646.C: In function `int warn_return_arg (int)':
p0000646.C:117: warning: control reaches end of non-void function
p0000646.C: In function `int warn_return_sum (int, int)':
p0000646.C:125: warning: control reaches end of non-void function
p0000646.C: In function `struct foo warn_return_foo ()':
p0000646.C:132: warning: control reaches end of non-void function

--- Running ./g++.other/t01-4.exp ---
FAIL:   abort
900403_04.C:8: zero width for bit-field `foo'

--- Running ./g++.other/t01-3.exp ---
FAIL:   segment violation
900519_12.C:9: parse error before `;'
900519_12.C:12: Segmentation violation
/usr/latest/bin/gcc: Internal compiler error: program cc1plus got fatal signal

                === g++ Summary ===

# of expected passes 1
# of expected failures 3
/usr/latest/bin/g++ version cygnus-2.0.1

Debug Log File

With the --debug option, you can request a log file showing the output from Expect itself, running in debugging mode. This file (dbg.log, in the directory where you start runtest) shows each pattern Expect considers in analyzing test output.

This file reflects each send command, showing the string sent as input to the tool under test; and each Expect command, showing each pattern it compares with the tool output. The log messages begin with a message of the form:

expect: does {tool output} (spawn_id n) match pattern {expected pattern}?

For every unsuccessful match, Expect issues a no after this message; if other patterns are specified for the same Expect command, they are reflected also, but without the first part of the message (expect... match pattern).
When Expect finds a match, the log for the successful match ends with yes, followed by a record of the Expect variables set to describe a successful match.

Here is an excerpt from the debugging log for a GDB test:

send: sent {break gdbme.c:34\n} to spawn id 6
expect: does {} (spawn_id 6) match pattern {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? no
{.*\(gdb\) $}? no
expect: does {} (spawn_id 0) match pattern {return} ? no
{\(y or n\) }? no
{buffer_full}? no
{virtual}? no
{memory}? no
{exhausted}? no
{Undefined}? no
{command}? no
break gdbme.c:34
Breakpoint 8 at 0x23d8: file gdbme.c, line 34.
(gdb) expect: does {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) } (spawn_id 6) match pattern {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? yes
expect: set expect_out(0,start) {18}
expect: set expect_out(0,end) {71}
expect: set expect_out(0,string) {Breakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
expect: set expect_out(spawn_id) {6}
expect: set expect_out(buffer) {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
PASS:   70 0 breakpoint line number in file

This example exhibits three properties of Expect and DejaGnu that might be surprising at first glance:

Empty output for the first attempted match. The first set of attempted matches shown ran against the output {} --- that is, no output. Expect begins attempting to match the patterns supplied immediately; often, the first pass is against incomplete output (or completely before all output, as in this case).

Interspersed tool output. The beginning of the log entry for the second attempted match may be hard to spot: this is because the prompt {(gdb) } appears on the same line, just before the expect: that marks the beginning of the log entry.

Fail-safe patterns. Many of the patterns tested are fail-safe patterns provided by GDB testing utilities, to reduce possible indeterminacy.
It is useful to anticipate potential variations caused by extreme system conditions (GDB might issue the message virtual memory exhausted in rare circumstances), or by changes in the tested program (Undefined command is the likeliest outcome if the name of a tested command changes). The pattern {return} is a particularly interesting fail-safe to notice; it checks for an unexpected RET prompt. This may happen, for example, if the tested tool can filter output through a pager. These fail-safe patterns (like the debugging log itself) are primarily useful while developing test scripts. Use the error procedure to make the actions for fail-safe patterns produce messages starting with ERROR on standard output, and in the detailed log file.

Customizing DejaGnu

The site configuration file, site.exp, captures configuration-dependent values and propagates them to the DejaGnu test environment using Tcl variables. This ties the DejaGnu test scripts into the configure and make programs. If this file is set up correctly, it is possible to execute a test suite merely by typing runtest.

DejaGnu supports two site.exp files, loaded in a fixed order built into DejaGnu. The first file loaded is the optional global site.exp, pointed to by the DEJAGNU environment variable, followed by the local site.exp file. The optional master site.exp, capturing configuration values that apply to DejaGnu across the board, lives in each configuration-specific subdirectory of the DejaGnu library directory. runtest loads these values first. The master site.exp contains the default values for all targets and hosts supported by DejaGnu. This master file is identified by setting the environment variable DEJAGNU to the name of the file. It is also referred to as the ``global'' config file. Any directory containing a configured test suite also has a local site.exp, capturing configuration values specific to the tool under test.
Since runtest loads these values last, the individual test configuration can either rely on and use, or override, any of the values from the global site.exp file. You can usually generate or update the testsuite's local site.exp by typing make site.exp in the test suite directory, after the test suite is configured.

You can also have a file in your home directory called .dejagnurc. This gets loaded first, before the other config files. Usually this is used for personal stuff, like setting the all_flag so all the output gets printed, or your own verbosity levels. This file is usually restricted to setting command line options.

You can further override the default values in a user-editable section of any site.exp, or by setting variables on the runtest command line.

Local Config File

It is usually more convenient to keep these manual overrides in the site.exp local to each test directory, rather than in the global site.exp in the installed DejaGnu library. This file mostly supplies tool-specific information that is required by the test suite.

All local site.exp files have two sections, separated by comment text. The first section is the part that is generated by make. It is essentially a collection of Tcl variable definitions based on Makefile environment variables. Since they are generated by make, they contain the values as specified by configure. (You can also customize these values by passing appropriate options to configure.) In particular, this section contains the Makefile variables for host and target configuration data. Do not edit this first section; if you do, your changes are replaced next time you run make.

The first section starts with:

## these variables are automatically generated by make ##
# Do not edit here. If you wish to override these values
# add them to the last section

In the second section, you can override any default values (locally to DejaGnu) for all the variables.
The second section can also contain your preferred defaults for all the command line options to runtest. This allows you to easily customize runtest for your preferences in each configured test-suite tree, so that you need not type options repeatedly on the command line. (The second section may also be empty, if you do not wish to override any defaults.)

The first section ends with this line:

## All variables above are generated by configure. Do Not Edit ##

You can make any changes under this line. If you wish to redefine a variable from the top section, just put a duplicate value in this second section. Usually the values defined in this config file are related to the configuration of the test run. This is the ideal place to set the variables host_triplet, build_triplet, and target_triplet. All other variables are tool-dependent; for example, when testing a compiler, the value for CC might be set to a freshly built binary, as opposed to one in the user's path.

Here's an example local site.exp file, as used for GCC/G++ testing.

Local Config File

## these variables are automatically generated by make ##
# Do not edit here. If you wish to override these values
# add them to the last section
set rootme "/build/devo-builds/i586-pc-linux-gnulibc1/gcc"
set host_triplet i586-pc-linux-gnulibc1
set build_triplet i586-pc-linux-gnulibc1
set target_triplet i586-pc-linux-gnulibc1
set target_alias i586-pc-linux-gnulibc1
set CFLAGS ""
set CXXFLAGS "-I/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libio -I$srcdir/../libg++/src -I$srcdir/../libio -I$srcdir/../libstdc++ -I$srcdir/../libstdc++/stl -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libg++ -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libstdc++"
append LDFLAGS " -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../ld"
set tmpdir /build/devo-builds/i586-pc-linux-gnulibc1/gcc/testsuite
set srcdir "${srcdir}/testsuite"
## All variables above are generated by configure.
Do Not Edit ##

This file defines the required fields for a local config file, namely the three config triplets, and the srcdir. It also defines several other Tcl variables that are used exclusively by the GCC test suite. For most test cases, the CXXFLAGS and LDFLAGS are supplied by DejaGnu itself for cross testing, but to test a compiler, GCC needs to manipulate these itself.

Global Config File

The master config file is where all the target-specific config variables for a whole site get set. The idea is to support a centralized testing lab where people have to share a target between multiple developers. There are settings for both remote targets and remote hosts. Here's an example of a master config file (also called the global config file) for a canadian cross. A canadian cross is when you build and test a cross compiler on a machine other than the one it's to be hosted on.

Here we have the config settings for our California office. Note that all config values are site-dependent. Here we have two sets of values that we use for testing m68k-aout cross compilers. As both of these target boards have a different debugging protocol, we test on both of them in sequence.

Global Config file

# Make sure we look in the right place for the board description files.
if ![info exists boards_dir] {
    set boards_dir {}
}
lappend boards_dir "/nfs/cygint/s1/cygnus/dejagnu/boards"

verbose "Global Config File: target_triplet is $target_triplet" 2
global target_list
case "$target_triplet" in {
    { "native" }            { set target_list "unix" }
    { "sparc64-*elf" }      { set target_list "sparc64-sim" }
    { "mips-*elf" }         { set target_list "mips-sim wilma barney" }
    { "mips-lsi-elf" }      { set target_list "mips-lsi-sim{,soft-float,el}" }
    { "sh-*hms" }           { set target_list { "sh-hms-sim" "bloozy" } }
}

In this case, we have support for several cross compilers that all run on this host.
For testing on operating systems that don't support Expect, DejaGnu can be run on the local build machine, and it can connect to the remote host and run all the tests for this cross compiler on that host. All the remote OS requires is a working telnetd.

As you can see, all one does is set the variable target_list to the list of targets and options to test. The simple settings, like for sparc64-elf, only require setting the name of the single board config file. The mips-elf target is more complicated: here it sets the list to three target boards. One is the default mips target, and both wilma and barney are symbolic names for other mips boards. Symbolic names are covered in a later chapter. The more complicated example is the one for mips-lsi-elf. This one runs the tests with multiple iterations using all possible combinations of the soft-float and the el (little endian) options. Needless to say, this last feature is mostly compiler specific.

Board Config File

The board config file is where board-specific config data is stored. A board config file contains all the higher-level configuration settings. There is a rough inheritance scheme, where it is possible to base a new board description file on an existing one. There are also collections of custom procedures for common environments. For more information on adding a new board config file, see the appropriate chapter.

An example board config file for a GNU simulator is as follows. set_board_info is a procedure that sets the field name to the specified value. The procedures in square brackets [] are helper procedures. These are used to find parts of a tool chain required to build an executable image that may reside in various locations. This is mostly of use when the startup code, the standard C libraries, or the tool chain itself is part of your build tree.

Board Config File

# This is a list of toolchains that are supported on this board.
set_board_info target_install {sparc64-elf}

# Load the generic configuration for this board.
This will define any
# routines needed by the tool to communicate with the board.
load_generic_config "sim"

# We need this for find_gcc and *_include_flags/*_link_flags.
load_base_board_description "basic-sim"

# Use long64 by default.
process_multilib_options "long64"

setup_sim sparc64

# We only support newlib on this target. We assume that all multilib
# options have been specified before we get here.
set_board_info compiler "[find_gcc]"
set_board_info cflags "[libgloss_include_flags] [newlib_include_flags]"
set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags]"
# No linker script.
set_board_info ldscript "";

# Used by a few gcc.c-torture testcases to delimit how large the
# stack can be.
set_board_info gcc,stack_size 16384

# The simulator doesn't return exit statuses, and we need to indicate this
# so the standard GCC wrapper will work with this target.
set_board_info needs_status_wrapper 1

# We can't pass arguments to programs.
set_board_info noargs 1

There are five helper procedures used in this example. The first one, find_gcc, looks for a copy of the GNU compiler in your build tree, or it uses the one in your path. This will also return the proper transformed name for a cross compiler if your whole build tree is configured for one. The next helper procedures are libgloss_include_flags and libgloss_link_flags. These return the proper flags to compile and link an executable image using libgloss, the GNU BSP (Board Support Package). The final procedures are newlib_include_flags and newlib_link_flags. These find the Newlib C library, which is a reentrant standard C library for embedded systems comprising non-GPL'd code.

Remote Host Testing

Thanks to DJ Delorie for the original paper that this section is based on.

DejaGnu also supports running the tests on a remote host. To set this up, the remote host needs an ftp server and a telnet server. Currently the foreign operating systems used as remote hosts are VxWorks, VRTX, DOS/Win3.1, MacOS, and win95/win98/NT.
The recommended source for a win95/win98/NT based ftp server is to get IIS (either IIS 1 or Personal Web Server) from http://www.microsoft.com. When you install it, make sure you install the FTP server - it's not selected by default. Go into the IIS manager and change the FTP server so that it does not allow anonymous ftp. Set the home directory to the root directory (i.e. c:\) of a suitable drive. Allow writing via ftp. It will create an account like IUSR_FOOBAR where foobar is the name of your machine. Go into the user editor and give that account a password that you don't mind hanging around in the clear (i.e. not the same as your admin or personal passwords). Also, add it to all the various permission groups. You'll also need a telnet server. For win95/win98/NT, go to the Ataman web site, pick up the Ataman Remote Logon Services for Windows, and install it. You can get started on the eval period anyway. Add IUSR_FOOBAR to the list of allowed users, set the HOME directory to be the same as the FTP default directory. Change the Mode prompt to simple. Ok, now you need to pick a directory name to do all the testing in. For the sake of this example, we'll call it piggy (i.e. c:\piggy). Create this directory. You'll need a unix machine. Create a directory for the scripts you'll need. For this example, we'll use /usr/local/swamp/testing. You'll need to have a source tree somewhere, say /usr/src/devo. Now, copy some files from releng's area in SV to your machine: Remote host setup cd /usr/local/swamp/testing mkdir boards scp darkstar.welcomehome.org:/dejagnu/cst/bin/MkTestDir . scp darkstar.welcomehome.org:/dejagnu/site.exp . scp darkstar.welcomehome.org:/dejagnu/boards/useless98r2.exp boards/foobar.exp export DEJAGNU=/usr/local/swamp/testing/site.exp You must edit the boards/foobar.exp file to reflect your machine; change the hostname (foobar.com), username (iusr_foobar), password, and ftp_directory (c:/piggy) to match what you selected. 
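A minimal sketch of what the edited boards/foobar.exp might look like after those changes; every value below is a placeholder, and the exact set of fields depends on the board description file you copied:

```
# Hypothetical boards/foobar.exp -- all values are placeholders.
load_generic_config "unix"                  ;# generic remote-host routines
set_board_info hostname "foobar.com"        ;# the PC to telnet/ftp to
set_board_info username "iusr_foobar"       ;# the account created by IIS
set_board_info password "s3cr3t"            ;# the cleartext password you set
set_board_info ftp_directory "c:/piggy"     ;# the test directory on the PC
```

The field names shown here correspond to the items the text says to edit (hostname, username, password, ftp_directory); set_board_info and load_generic_config are the board-file procedures described in the Board Config File section above.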
Edit the global site.exp to reflect your boards directory:

Add The Board Directory

lappend boards_dir "/usr/local/swamp/testing/boards"

Now run MkTestDir, which is in the contrib directory. The first parameter is the toolchain prefix, the second is the location of your devo tree. If you are testing a cross compiler (e.g. you have sh-hms-gcc.exe in your PATH on the PC), do something like this:

Setup Cross Remote Testing

./MkTestDir sh-hms /usr/dejagnu/src/devo

If you are testing a native PC compiler (e.g. you have gcc.exe in your PATH on the PC), do this:

Setup Native Remote Testing

./MkTestDir '' /usr/dejagnu/src/devo

To test the setup, ftp to your PC using the username (iusr_foobar) and password you selected. CD to the test directory. Upload a file to the PC. Now telnet to your PC using the same username and password. CD to the test directory. Make sure the file is there. Type "set" and/or "gcc -v" (or sh-hms-gcc -v) and make sure the default PATH contains the installation you want to test.

Run Test Remotely

cd /usr/local/swamp/testing
make -k -w check RUNTESTFLAGS="--host_board foobar --target_board foobar -v -v" > check.out 2>&1

To run a specific test, use a command like this (for this example, you'd run this from the gcc directory that MkTestDir created):

Run a Test Remotely

make check RUNTESTFLAGS="--host_board sloth --target_board sloth -v compile.exp=921202-1.c"

Note: if you are testing a cross-compiler, put in the correct target board. You'll also have to download more .exp files and modify them for your local configuration. The -v's are optional.

Config File Values

DejaGnu uses a named array in Tcl to hold all the info for each machine. In the case of a canadian cross, this means host information as well as target information. The named array is called target_info, and it has two indices. The following fields are part of the array.
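The field list itself is not reproduced here, but the shape of the target_info array can be sketched. The board name is the first index and the field name is the second; the board name, fields, and values below are purely illustrative, not taken from a real configuration:

```tcl
# Hypothetical entries describing a target board called "idp".
# Both indices and all values here are examples only.
set target_info(idp,name)    "idp"
set target_info(idp,ldflags) "-Tidp.ld"
set target_info(idp,config)  m68k-unknown-aout
set target_info(idp,connect) telnet
set target_info(idp,netport) "localhost:23"
```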
Command Line Option Variables

In the user-editable second section of the site.exp file, you can not only override the configuration variables captured in the first section, but also specify default values for the runtest command line options. With a few exceptions, each command line option has an associated Tcl variable. Use the Tcl set command to specify a new default value (as for the configuration variables). The following table describes the correspondence between command line options and the variables you can set in site.exp; see the section on invoking runtest for explanations of the command-line options.

Tcl Variables For Command Line Options

runtest option   Tcl variable     description
--all            all_flag         display all test results if set
--baud           baud             set the default baud rate to something other than 9600
--connect        connectmode      rlogin, telnet, rsh, kermit, tip, or mondfe
--outdir         outdir           directory for tool.sum and tool.log
--objdir         objdir           directory for pre-compiled binaries
--reboot         reboot           reboot the target if set to "1"; do not reboot if set to "0" (the default)
--srcdir         srcdir           directory of test subdirectories
--strace         tracelevel       a number: Tcl trace depth
--tool           tool             name of the tool to test; identifies the init file and test subdirectory
--verbose        verbose          verbosity level; as an option, use it multiple times; as a variable, set a number, 0 or greater
--target         target_triplet   the canonical configuration string for the target
--host           host_triplet     the canonical configuration string for the host
--build          build_triplet    the canonical configuration string for the build host
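For instance, the user-editable section of a site.exp file could set defaults for several of these variables at once; a sketch (the values shown are arbitrary):

```tcl
# Always list every test result in the summary.
set all_flag 1
# Default to telnet connections.
set connectmode "telnet"
# Keep the Tcl trace shallow.
set tracelevel 2
```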
Personal Config File

The personal config file is used to customize runtest's behaviour for each person. It is typically used to set the user's preferred settings for verbosity, and to hold any experimental Tcl procedures. My personal ~/.dejagnurc file looks like:

Personal Config File

set all_flag 1
set RLOGIN /usr/ucb/rlogin
set RSH /usr/local/sbin/ssh

Here I set all_flag so I see all the test cases that PASS along with the ones that FAIL. I also set RLOGIN to the BSD version. I have Kerberos installed, and when I rlogin to a target board, it usually isn't supported. So I use the non-secure version rather than the default that's in my path. I also set RSH to the SSH secure shell, as rsh is mostly used to test unix machines within a local network here.
Extending DejaGnu

Adding A New Test Suite

The testsuite for a new tool should always be located in that tool's source directory. DejaGnu requires the directory to be named testsuite. Under this directory, the test cases go in a subdirectory whose name begins with the tool name. For example, for a tool named flubber, each subdirectory containing testsuites must start with "flubber.".

Adding A New Tool

In general, the best way to learn how to write (code or even prose) is to read something similar. This principle applies to test cases and to test suites. Unfortunately, well-established test suites have a way of developing their own conventions: as test writers become more experienced with DejaGnu and with Tcl, they accumulate more utilities, and take advantage of more and more features of Expect and Tcl in general. Inspecting such established test suites may make the prospect of creating an entirely new test suite appear overwhelming. Nevertheless, it is quite straightforward to get a new test suite going.

There is one test suite that is guaranteed not to grow more elaborate over time: both it and the tool it tests were created expressly to illustrate what it takes to get started with DejaGnu. The example/ directory of the DejaGnu distribution contains both an interactive tool called calc, and a test suite for it. Reading this test suite, and experimenting with it, is a good way to supplement the information in this section. (Thanks to Robert Lupton for creating calc and its test suite---and also the first version of this section of the manual!)

To help orient you further in this task, here is an outline of the steps to begin building a test suite for a program example.

Create or select a directory to contain your new collection of tests. Change into that directory (shown here as testsuite).

Create a configure.in file in this directory, to control configuration-dependent choices for your tests.
So far as DejaGnu is concerned, the important thing is to set a value for the variable target_abbrev; this value is the link to the init file you will write soon. (For simplicity, we assume the environment is Unix, and use unix as the value.) What else is needed in configure.in depends on the requirements of your tool, your intended test environments, and which configure system you use. This example is a minimal configure.in for use with GNU Autoconf.

Create Makefile.in (if you are using Autoconf) or Makefile.am (if you are using Automake), the source file used by configure to build your Makefile. If you are using GNU Automake, just add the keyword dejagnu to the AUTOMAKE_OPTIONS variable in your Makefile.am file. This will add all the Makefile support needed to run DejaGnu and support the check target.

You also need to include two targets important to DejaGnu: check, to run the tests, and site.exp, to set up the Tcl copies of configuration-dependent values. This is called the local site.exp file. The check target must run the runtest program to execute the tests. The site.exp target should usually set up (among other things) the $tool variable for the name of your program. If the local site.exp file is set up correctly, it is possible to execute the tests by merely typing runtest on the command line.

Sample Makefile.in Fragment

# Look for a local version of DejaGnu, otherwise use one in the path
RUNTEST = `if test -f $(top_srcdir)/../dejagnu/runtest; then \
             echo $(top_srcdir)/../dejagnu/runtest; \
           else \
             echo runtest; \
           fi`

# The flags to pass to runtest
RUNTESTFLAGS =

# Execute the tests
check: site.exp all
	$(RUNTEST) $(RUNTESTFLAGS) \
		--tool ${example} --srcdir $(srcdir)

# Make the local config file
site.exp: ./config.status Makefile
	@echo "Making a new config file..."
	-@rm -f ./tmp?
	@touch site.exp
	-@mv site.exp site.bak
	@echo "## these variables are automatically generated by make ##" > ./tmp0
	@echo "# Do not edit here. If you wish to override these values" >> ./tmp0
	@echo "# add them to the last section" >> ./tmp0
	@echo "set host_os ${host_os}" >> ./tmp0
	@echo "set host_alias ${host_alias}" >> ./tmp0
	@echo "set host_cpu ${host_cpu}" >> ./tmp0
	@echo "set host_vendor ${host_vendor}" >> ./tmp0
	@echo "set target_os ${target_os}" >> ./tmp0
	@echo "set target_alias ${target_alias}" >> ./tmp0
	@echo "set target_cpu ${target_cpu}" >> ./tmp0
	@echo "set target_vendor ${target_vendor}" >> ./tmp0
	@echo "set host_triplet ${host_canonical}" >> ./tmp0
	@echo "set target_triplet ${target_canonical}" >> ./tmp0
	@echo "set tool ${example}" >> ./tmp0
	@echo "set srcdir ${srcdir}" >> ./tmp0
	@echo "set objdir `pwd`" >> ./tmp0
	@echo "set ${examplename} ${example}" >> ./tmp0
	@echo "## All variables above are generated by configure. Do Not Edit ##" >> ./tmp0
	@cat ./tmp0 > site.exp
	@sed < site.bak \
		-e '1,/^## All variables above are.*##/ d' \
		>> site.exp
	-@rm -f ./tmp?

Create a directory (in testsuite) called config. Make a tool init file in this directory. Its name must start with the target_abbrev value, or be named default.exp, so call it config/unix.exp for our Unix based example. This is the file that contains the target-dependent procedures. Fortunately, on Unix, most of them do not have to do very much in order for runtest to run.

If the program being tested is not interactive, you can get away with this minimal unix.exp to begin with:

Simple Batch Program Tool Init File

proc foo_exit {} {}
proc foo_version {} {}

If the program being tested is interactive, however, you might as well define a start routine and invoke it by using an init file like this:

Simple Interactive Program Tool Init File

proc foo_exit {} {}
proc foo_version {} {}
proc foo_start {} {
    global ${examplename}
    spawn ${examplename}
    expect {
        -re "" {}
    }
}

# Start the program we want to test
foo_start

Create a directory whose name begins with your tool's name, to contain tests.
For example, if your tool's name is gcc, then the directories all need to start with "gcc.".

Create a sample test file. Its name must end with .exp. You can use first-try.exp. To begin with, just write there a line of Tcl code to issue a message.

Testing A New Tool Config

send_user "Testing: one, two...\n"

Back in the testsuite (top level) directory, run configure. Typically you do this while in the build directory. You may have to specify more of a path, if a suitable configure is not available in your execution path. We are now ready to triumphantly type make check or runtest. You should see something like this:

Example Test Case Run

Test Run By rhl on Fri Jan 29 16:25:44 EST 1993

		=== example tests ===

Running ./example.0/first-try.exp ...
Testing: one, two...

		=== example Summary ===

There is no output in the summary, because so far the example does not call any of the procedures that establish a test outcome.

Write some real tests. For an interactive tool, you should probably write a real exit routine in fairly short order. In any case, you should also write a real version routine soon.

Adding A New Target

DejaGnu has some additional requirements for target support, beyond the general-purpose provisions of configure. DejaGnu must actively communicate with the target, rather than simply generating or managing code for the target architecture. Therefore, each tool requires an initialization module for each target. For new targets, you must supply a few Tcl procedures to adapt DejaGnu to the target. This permits DejaGnu itself to remain target independent.

Usually the best way to write a new initialization module is to edit an existing initialization module; some trial and error will be required. If necessary, you can use the --debug option to see what is really going on. When you code an initialization module, be generous in printing information controlled by the verbose procedure.

For cross targets, most of the work is in getting the communications right.
Communications code (for several situations involving IP networks or serial lines) is available in a DejaGnu library file. If you suspect a communication problem, try running the connection interactively from Expect. (There are three ways of running Expect as an interactive interpreter. You can run Expect with no arguments, and control it completely interactively; or you can use expect -i together with other command-line options and arguments; or you can run the command interpreter from any Expect procedure. Use return to get back to the calling procedure (if any), or return -tcl to make the calling procedure itself return to its caller; use exit or end-of-file to leave Expect altogether.) Run the program whose name is recorded in $connectmode, with the arguments in $targetname, to establish a connection. You should at least be able to get a prompt from any target that is physically connected.

Adding A New Board

Adding a new board consists of creating a new board config file. Examples are in dejagnu/baseboards. Usually the easiest way to make a new board file is to copy an existing one. It is also possible to have your file be based on a baseboard file, with only one or two changes needed. Typically, this can be as simple as just changing the linker script. Once the new baseboard file is done, add it to the boards_DATA list in dejagnu/baseboards/Makefile.am, and regenerate the Makefile.in using automake. Then just rebuild and install DejaGnu. You can test it by:

There is a crude inheritance scheme going on with board files, so you can include one board file in another. The two main procedures used to do this are load_generic_config and load_base_board_description. The generic config file contains other procedures used for a certain class of target. The board description file is where the board specific settings go. Commonly there are similar target environments with just different processors.
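As a sketch of this inheritance, a new board file can load an existing base description and override only what differs; the file name and linker script below are hypothetical:

```tcl
# newboardfile.exp -- hypothetical board that differs from the
# standard simulator setup only in its linker script.
load_generic_config "sim"
load_base_board_description "basic-sim"

# The only board-specific change: use our own linker script.
set_board_info ldscript "-Tmyboard.ld"
```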
Testing a New Board Config File

make check RUNTESTFLAGS="--target_board=newboardfile"

Here's an example of a board config file. There are several helper procedures used in this example. A helper procedure is one that looks for a tool or files in commonly installed locations. These are mostly used when testing in the build tree, because the executables to be tested are in the same tree as the new DejaGnu files. The helper procedures are the ones in square brackets [], which are the Tcl command-execution characters.

Example Board Config File

# Load the generic configuration for this board. This will define a basic
# set of routines needed by the tool to communicate with the board.
load_generic_config "sim"

# basic-sim.exp is a basic description for the standard Cygnus simulator.
load_base_board_description "basic-sim"

# The compiler used to build for this board. This has *nothing* to do
# with what compiler is tested if we're testing gcc.
set_board_info compiler "[find_gcc]"

# We only support newlib on this target.
# However, we include libgloss so we can find the linker scripts.
set_board_info cflags "[newlib_include_flags] [libgloss_include_flags]"
set_board_info ldflags "[newlib_link_flags]"

# The linker script for this board.
set_board_info ldscript "-Tsim.ld";

# The simulator doesn't return exit statuses and we need to indicate this.
set_board_info needs_status_wrapper 1

# Can't pass arguments to this target.
set_board_info noargs 1

# No signals.
set_board_info gdb,nosignals 1

# And it can't call functions.
set_board_info gdb,cannot_call_functions 1

Board Config File Values

These fields are all in the board_info array. They are all set by using the set_board_info procedure. The parameters are the field name, followed by the value to set the field to.

Common Board Info Fields

Field              Sample Value                  Description
compiler           "[find_gcc]"                  The path to the compiler to use.
cflags             "-mca"                        Compilation flags for the compiler.
ldflags            "[libgloss_link_flags] [newlib_link_flags]"   Linking flags for the compiler.
ldscript           "-Wl,-Tidt.ld"                The linker script to use when cross compiling.
libs               "-lgcc"                       Any additional libraries to link in.
shell_prompt       "cygmon>"                     The command prompt of the remote shell.
hex_startaddr      "0xa0020000"                  The starting address as a string.
start_addr         0xa0008000                    The starting address as a value.
startaddr          "a0020000"
exit_statuses_bad  1                             Whether there is an accurate exit status.
reboot_delay       10                            The delay between power off and power on.
unreliable         1                             Whether communication with the board is unreliable.
sim                [find_sim]                    The path to the simulator to use.
objcopy            $tempfil                      The path to the objcopy program.
support_libs       "${prefix_dir}/i386-coff/"    Support libraries needed for cross compiling.
addl_link_flags    "-N"                          Additional link flags, rarely used.
These fields are used by the GCC and GDB tests, and are mostly only useful to someone trying to debug a new board file for one of these tools. Many of these are used only by a few testcases, and their purpose is esoteric. They are listed with sample values as a guide to better guessing if you need to change any of them.

Board Info Fields For GCC & GDB

Field                       Sample Value           Description
strip                       $tempfile              Strip the executable of symbols.
gdb_load_offset             "0x40050000"
gdb_protocol                "remote"               The GDB debugging protocol to use.
gdb_sect_offset             "0x41000000"
gdb_stub_ldscript           "-Wl,-Teva-stub.ld"    The linker script to use with a GDB stub.
gdb_init_command            "set mipsfpu none"
gdb,cannot_call_functions   1                      Whether GDB can call functions on the target.
gdb,noargs                  1                      Whether the target can take command line arguments.
gdb,nosignals               1                      Whether there are signals on the target.
gdb,short_int               1
gdb,start_symbol            "_start"               The starting symbol in the executable.
gdb,target_sim_options      "-sparclite"           Special options to pass to the simulator.
gdb,timeout                 540                    Timeout value to use for remote communication.
gdb_init_command            "print/x \$fsr = 0x0"
gdb_load_offset             "0x12020000"
gdb_opts                    "--command gdbinit"
gdb_prompt                  "\\(gdb960\\)"         The prompt GDB is using.
gdb_run_command             "jump start"
gdb_stub_offset             "0x12010000"
use_gdb_stub                1                      Whether to use a GDB stub.
use_vma_offset              1
wrap_m68k_aout              1
gcc,no_label_values         1
gcc,no_trampolines          1
gcc,no_varargs              1
gcc,stack_size              16384                  Stack size to use with some GCC testcases.
ieee_multilib_flags         "-mieee"
is_simulator                1
needs_status_wrapper        1
no_double                   1
no_long_long                1
noargs                      1
nullstone,lib               "mips-clock.c"
nullstone,ticks_per_sec     3782018
sys_speed_value             200
target_install              {sh-hms}
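Within a test script or init file, these fields can be read back with the board_info procedure; a brief sketch (the field queried here is just an example):

```tcl
# Skip signal tests when the board description says it has no signals.
if {[board_info target exists gdb,nosignals]} {
    unsupported "signal tests (no signals on this board)"
} else {
    # ... run the signal tests here ...
}
```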
Writing A Test Case

The easiest way to prepare a new test case is to base it on an existing one for a similar situation. There are two major categories of tests: batch and interactive. Batch oriented tests are usually easier to write.

The GCC tests are a good example of batch oriented tests. All GCC tests consist primarily of a call to a single common procedure, since all the tests either have no output, or only have a few warning messages when successfully compiled. Any non-warning output is a test failure. All the C code needed is kept in the test directory. The test driver, written in Tcl, need only get a listing of all the C files in the directory, and compile them all using a generic procedure. This procedure and a few others supporting these tests are kept in the library module lib/c-torture.exp in the GCC test suite. Most tests of this kind use very few Expect features, and are coded almost purely in Tcl.

Writing the complete suite of C tests, then, consisted of these steps:

Copying all the C code into the test directory. These tests were based on the C-torture test created by Torbjorn Granlund (on behalf of the Free Software Foundation) for GCC development.

Writing (and debugging) the generic Tcl procedures for compilation.

Writing the simple test driver: its main task is to search the directory (using the Tcl procedure glob for filename expansion with wildcards) and call a Tcl procedure with each filename. It also checks for a few errors from the testing procedure.

Testing interactive programs is intrinsically more complex. Tests for most interactive programs require some trial and error before they are complete. However, some interactive programs can be tested in a simple fashion reminiscent of batch tests. For example, prior to the creation of DejaGnu, the GDB distribution already included a wide-ranging testing procedure.
This procedure was very robust, and had already undergone much more debugging and error checking than many recent DejaGnu test cases. Accordingly, the best approach was simply to encapsulate the existing GDB tests, for reporting purposes. Thereafter, new GDB tests built up a family of Tcl procedures specialized for GDB testing.

Debugging A Test Case

These are the kinds of debugging information available from DejaGnu:

Output controlled by the test scripts themselves, explicitly allowed for by the test author. This kind of debugging output appears in the detailed output recorded in the DejaGnu log file. To do the same for new tests, use the verbose procedure (which in turn uses the variable also called verbose) to control how much output to generate. This will make it easier for other people running the test to debug it if necessary. Whenever possible, if $verbose is 0, there should be no output other than the output from pass, fail, error, and warning. Then, to whatever extent is appropriate for the particular test, allow successively higher values of $verbose to generate more information. Be kind to other programmers who use your tests: provide for a lot of debugging information.

Output from the internal debugging functions of Tcl and Expect. There is a command line option for each; both forms of debugging output are recorded in the file dbg.log in the current directory.

Use --debug for information from the expect level; it generates displays of the expect attempts to match the tool output with the patterns specified. This output can be very helpful while developing test scripts, since it shows precisely the characters received. Iterating between the latest attempt at a new test script and the corresponding dbg.log can allow you to create the final patterns by ``cut and paste''. This is sometimes the best way to write a test case.

Use --strace to see more detail at the Tcl level; this shows how Tcl procedure definitions expand, as they execute.
The associated number controls the depth of definitions expanded. Finally, if the value of verbose is 3 or greater, DejaGnu turns on the expect command log_user. This command prints all expect actions to the expect standard output, to the detailed log file, and (if --debug is on) to dbg.log.

Adding A Test Case To A Test Suite

There are two slightly different ways to add a test case. One is to add the test case to an existing directory. The other is to create a new directory to hold your test. The existing test directories represent several styles of testing, all of which are slightly different; examine the directories for the tool of interest to see which (if any) is most suitable.

Adding a GCC test can be very simple: just add the C code to any directory beginning with gcc., and it runs on the next runtest --tool gcc.

To add a test to GDB, first add any source code you will need to the test directory. Then you can either create a new expect file, or add your test to an existing one (any file with a .exp suffix). Creating a new .exp file is probably a better idea if the test is significantly different from existing tests. Adding it as a separate file also makes upgrading easier. If the C code has to be compiled before the test will run, then you'll have to add it to the Makefile.in file for that test directory, then run configure and make.

Adding a test by creating a new directory is very similar:

Create the new directory. All subdirectory names begin with the name of the tool to test; e.g. G++ tests might be in a directory called g++.other. There can be multiple test directories that start with the same tool name (such as g++).

Add the new directory name to the configdirs definition in the configure.in file for the test suite directory. This way, when configure and make next run, they include the new directory.

Add the new test case to the directory, as above.
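As an illustration of the batch style described earlier, a minimal new test case might look like the following sketch; the file name, message strings, and the command being exercised are all hypothetical:

```tcl
# first-try.exp -- a minimal, illustrative batch-style test.
# Emit debugging output only when $verbose is high enough.
verbose "running the example test" 2

set output [exec echo "one, two"]
if {[string match "*one, two*" $output]} {
    pass "echo prints its arguments"
} else {
    fail "echo prints its arguments"
}
```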
To add support in the new directory for configure and make, you must also create a Makefile.in and a configure.in.

Hints On Writing A Test Case

It is safest to write patterns that match all the output generated by the tested program; this is called closure. If a pattern does not match the entire output, any output that remains will be examined by the next expect command. In this situation, the precise boundary that determines which expect command sees what is very sensitive to timing between the Expect task and the task running the tested tool. As a result, the test may sometimes appear to work, but is likely to have unpredictable results. (This problem is particularly likely for interactive tools, but can also affect batch tools---especially for tests that take a long time to finish.) The best way to ensure closure is to use the -re option for the expect command to write the pattern as a full regular expression; then you can match the end of output using a $. It is also a good idea to write patterns that match all available output by using .* after the text of interest; this will also match any intervening blank lines. Sometimes an alternative is to match end of line using \r or \n, but this is usually too dependent on terminal settings.

Always escape punctuation, such as ( or ", in your patterns; for example, write \(. If you forget to escape punctuation, you will usually see an error message like extra characters after close-quote.

If you have trouble understanding why a pattern does not match the program output, try using the --debug option to runtest, and examine the debug log carefully.

Be careful not to neglect output generated by setup rather than by the interesting parts of a test case. For example, while testing GDB, I issue a send "set height 0\n" command. The purpose is simply to make sure GDB never calls a paging program. The set height command in GDB does not generate any output; but running any command makes GDB issue a new (gdb) prompt.
If there were no expect command to match this prompt, the output (gdb) begins the text seen by the next expect command---which might make that pattern fail to match.

To preserve basic sanity, I also recommend that no test ever pass if there was any kind of problem in the test case. To take an extreme case, tests that pass even when the tool will not spawn are misleading. Ideally, a test in this sort of situation should not fail either. Instead, print an error message by calling one of the DejaGnu procedures error or warning.

Special Variables Used By Test Cases

There are special variables used by test cases. These contain other information from DejaGnu. Your test cases can use these variables, with conventional meanings (as well as the variables saved in site.exp). You can use the value of these variables, but they should never be changed.

$prms_id
The tracking system (e.g. GNATS) number identifying a corresponding bug report. (0 if you do not specify it in the test script.)

$bug_id
An optional bug id; may reflect a bug identification from another organization. (0 if you do not specify it.)

$subdir
The subdirectory for the current test case.

$expect_out(buffer)
The output from the last command. This is an internal variable set by Expect. More information can be found in the Expect manual.

$exec_output
This is the output from a ${tool}_load command. This only applies to tools like GCC and GAS which produce an object file that must in turn be executed to complete a test.

$comp_output
This is the output from a ${tool}_start command. This is conventionally used for batch oriented programs, like GCC and GAS, that may produce interesting output (warnings, errors) without further interaction.
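Drawing these hints together, here is a sketch of an expect command that preserves closure by matching through the next prompt, and then inspects $expect_out(buffer); the prompt pattern and messages are examples only:

```tcl
# Send a setup command, then match everything up to and including the
# next prompt so no stray output is left for later expect commands.
send "set height 0\n"
expect {
    -re ".*\\(gdb\\) $" {
        verbose "setup output was: $expect_out(buffer)" 2
    }
    timeout {
        unresolved "no prompt after set height"
    }
}
```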