Running Tests

There are two ways to run a testsuite. The most common way is to rely on Makefile support for a check target. The other way is to invoke the runtest program directly. Invoking runtest from the command line requires either taking much care to ensure that all of the correct options are given, or a correctly set up local site.exp file. Automake can help to produce a Makefile that does the right things when the user invokes make check, and this is the preferred approach. Both ways of executing a testsuite are covered in more detail below.

make check

To run tests from an existing collection, use configure to configure a build directory and then type:

make check

If the Makefile has a check target, it saves some effort. For instance, it can set up any auxiliary programs or other files needed by the tests. The most common file the check target creates is site.exp. The site.exp file contains various variables that DejaGnu uses to determine the configuration of the program being tested. This is mostly used to support remote testing.

The check target is supported by GNU Automake. To have DejaGnu support added to your generated Makefile.in, just add the keyword dejagnu to the AUTOMAKE_OPTIONS variable in your Makefile.am file.

Once you have run make check to build any auxiliary files, you can invoke the test driver runtest directly to repeat the tests. You will also have to execute runtest directly for test collections with no check target in the Makefile.

Runtest

runtest is the executable test driver for DejaGnu. You can specify two kinds of things on the runtest command line: options and Tcl variable assignments for the test scripts. The options are listed alphabetically below.

runtest returns an exit code of 1 if any test has an unexpected result; otherwise (if all tests pass or fail as expected) it returns 0 as the exit code.

Test Result States

runtest flags the outcome of each test as one of the following cases. See the POSIX standard for a discussion of how it specifies the meanings of these cases.

PASS
The most desirable outcome: the test was expected to succeed and did succeed.

XPASS
A pleasant kind of failure: a test was expected to fail, but succeeded. This may indicate progress; inspect the test case to determine whether you should amend it to no longer expect failure.

FAIL
A test was expected to succeed, but failed. This may indicate a regression; inspect the test case and the failing software to locate the bug.

XFAIL
A test that was expected to fail did fail. This result indicates no change in a known bug. If a test fails because the environment running the test lacks some facility required by the test, the outcome is UNSUPPORTED instead.

UNRESOLVED
Output from an unresolved test requires manual inspection, as the testsuite could not automatically determine the outcome. A test can report this outcome, for instance, when it is not completed as expected.

UNTESTED
A test case is not yet complete, and in particular cannot yet produce a PASS or FAIL. You can also use this outcome for placeholder tests that note explicitly the absence of a real test case for a particular property.

UNSUPPORTED
A test depends on a conditionally available feature that does not exist (in the configured testing environment). For example, you can use this outcome to report on a test case that does not work on a particular target because its operating system support does not include a required subroutine.
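For orientation, here is a minimal sketch of how a test script reports these outcomes. The reporting procedures (pass, fail, setup_xfail, unresolved, untested, unsupported) are the standard DejaGnu procedures just described; the test names, the target pattern, and the has_feature helper are invented purely for illustration.

    set test "feature X works"
    if { [has_feature "X"] } {       ;# has_feature is a made-up helper
        pass $test                    ;# or fail $test on a bad result
    } else {
        unsupported $test             ;# environment lacks a required facility
    }
    # Expect a failure on a hypothetical known-bad target; the fail below
    # is then reported as XFAIL rather than FAIL.
    setup_xfail "*-*-oldos*"
    fail "feature Y works"
    # Placeholder and manual-inspection outcomes.
    untested "feature Z (no test written yet)"
    unresolved "feature W (output needs manual inspection)"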
runtest may also display the following messages:

ERROR
Indicates a major problem (detected by the test case itself) in running the test. This is usually an unrecoverable error, such as a missing file or loss of communication to the target. (POSIX testsuites should not emit this message; use UNSUPPORTED, UNTESTED, or UNRESOLVED instead, as appropriate.)

WARNING
Indicates a possible problem in running the test. Usually warnings correspond to recoverable errors or display an important message.

NOTE
A message about the test case.

Invoking runtest

This is the full set of command line options that runtest recognizes. Arguments may be abbreviated to the shortest unique string.

-a, --all
Display all test output. By default, runtest shows only the output of tests that produce unexpected results, that is, tests with result states of FAIL, XPASS, or ERROR. Specifying --all will include output for tests with result states PASS, XFAIL and WARNING.

--build string
string is a system triplet as used by configure. This is the type of system the program to be tested is built on. For a normal cross-compiler this is the same as the host triplet, but for a Canadian cross-compiler, they are distinct.

--host string
string is a system triplet as used by configure. Use this option to override the default string recorded by your configuration's choice of host. This choice does not change how anything is actually configured unless --build is also specified; it affects only DejaGnu procedures that compare the host string with particular values. The procedures ishost, istarget, isnative, and setup_xfail are affected by --host. In this usage, host refers to the machine that the tests are to be run on, which may not be the same as the build machine. If --build is also specified, then --host refers to the machine that the tests will be run on, not the machine DejaGnu is run on.

--host_board name
The host board to use.

--target string
Use this option to override the default target setting. string is a system triplet as used by configure. This option changes the configuration runtest uses for the default tool names, and other setup information.

--debug
Enables internal Expect debug output. Debug output is displayed as part of the runtest output and is additionally logged to a file called dbg.log. The extra debugging output does not appear on standard output unless the verbose level is greater than 2; for instance, to see debug output immediately, also raise the verbose level to 3 or more with repeated --verbose options. The debugging output shows all attempts at matching the output of the program under test with the scripted patterns describing expected output.

--help
Prints a summary of runtest options and then exits. This option overrides all other options in this regard.

--ignore name(s)
The names of specific tests to ignore.

--objdir path
Use path as the top directory containing any auxiliary pre-compiled test code. This defaults to . (the directory where you invoke runtest); a Makefile there can be used to prepare any auxiliary files that are needed.

--outdir path
Write output logs in directory path. The default is ., the directory where you invoke runtest. This option affects only the summary (.sum) and the detailed log (.log) files. The debug log dbg.log is always written to the current working directory.

--reboot
Reboot the target board when runtest initializes. When running tests on a separate target board, it is generally safer to reboot the target to be certain of its state. However, when developing test scripts, rebooting takes a lot of time and can reduce the life of prototype boards.

--srcdir path
Use path as the top directory for test scripts to run. runtest looks in this directory for any subdirectory whose name begins with the tool name (specified with --tool).
For instance, with --tool gdb, runtest searches subdirectories matching gdb.* for test scripts. If you do not use --srcdir, runtest looks for test directories under the current working directory.

--strace n
Turn on internal tracing for Expect, to n levels deep. By adjusting the level, you can control the extent to which your output expands multi-level Tcl statements. This allows you to ignore some levels of case or if statements. Each procedure call or control structure counts as one level. The output is recorded in the debug log file dbg.log.

--connect program
Connect to a target system using program, if the target system is distinct from the computer running runtest. The possible values for program in the DejaGnu 1.4.4 distribution are rlogin, telnet, rsh, tip, kermit and mondfe.

--baud rate
Set the baud rate to rate bits per second. Some serial interface programs, such as tip, use a separate initialization file and will ignore this option.

--target_board board(s)
The list of target boards to run tests on.

--tool name(s)
Specifies which testsuite to run and which initialization module to use. The tool name is used only for these two purposes. It is not used to name the executable program to test. Executable tool names and paths are recorded in site.exp and you can override them by specifying Tcl variable values on the command line. For example, including --tool gcc on the runtest command line will run tests from all subdirectories whose names match gcc.* and will use one of the initialization modules named config/*-gcc.exp. To specify the path to the compiler, pass a Tcl variable assignment (see the tclvar=value entry below) on the runtest command line.

--tool_exec path
The path to the tool executable to test.

--tool_opts options
A list of additional options to pass to the tool.

-v, --verbose
Raises the level of output from runtest. Repeating this option increases the amount of output displayed. Level one (-v) is simply test output. Level two (-v -v) shows messages on options, configuration, and process control. Verbose messages appear in the detailed log file (*.log), but not in the summary log file (*.sum).

-V, --version
Prints the version numbers of DejaGnu, Expect and Tcl and then terminates without running any tests.

-D0, -D1
Start the internal Tcl debugger. The Tcl debugger supports breakpoints, single stepping, and other common debugging activities. If you specify -D1, the expect shell stops at a breakpoint as soon as DejaGnu invokes it. If you specify -D0, DejaGnu starts as usual, but you can enter the debugger by sending an interrupt with Control-C.

testfile.exp[=arg(s)]
Specify the names of testsuites to run. By default, runtest runs all tests for the tool, but you can restrict it to particular testsuites by giving the names of the .exp Expect scripts that control them. testfile.exp may not include directory names; use base filenames only. By listing filenames in arg(s), it is possible to specify a subset of tests in a suite to run. For compiler or assembler tests, which often use a single Expect script covering many different input files, this option allows you to further restrict the tests by listing particular input files to test. Some tools additionally support wildcards. The wildcards supported depend upon the tool, but typically they are ?, *, and [chars].

tclvar=value
You can define Tcl variables for use by your test scripts in the same style used by make for environment variables. For example, runtest GDB=gdb.old defines a Tcl variable called GDB. When test scripts refer to $GDB, they will receive the value gdb.old. The default Tcl variables used for most tools are defined in the main DejaGnu Makefile; their values are captured in the site.exp file.
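As an illustration of how these pieces combine, the following hypothetical invocation restricts the run to one script, passes it two source files as arguments, and defines a Tcl variable for the test scripts; the file names, the compiler path, and the variable name GCC are made up for this example:

    runtest --tool gcc --srcdir ./testsuite compile.exp="test1.c test2.c" GCC=/opt/build/gcc/xgcc

Only compile.exp is run, it is limited to the two listed input files, and the test scripts see $GCC with the given value.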
Common Options

Typically, no command line options are required. The --tool option is only required when there is more than one testsuite in the same directory. The default options are in the local site.exp file, created by make site.exp.

For example, if the directory gdb/testsuite contains a collection of DejaGnu tests for GDB, you can run them like this:

$ cd gdb/testsuite
$ runtest --tool gdb

Test output will follow, ending with:

=== gdb Summary ===
# of expected passes 508
# of expected failures 103
/usr/latest/bin/gdb version 4.14.4 -nx

You can use the option --srcdir to point to some other directory containing a collection of tests:

$ runtest --srcdir /devo/gdb/testsuite

By default, runtest prints only the names of the tests it runs, output from any tests that have unexpected results, and a summary showing how many tests passed and how many failed. To display output from all tests (whether or not they behave as expected), use the --all option. For more verbose output about processes being run, communication, and so on, use --verbose. To see even more output, use multiple --verbose options.

Test output goes into two files in your current directory: summary output in tool.sum, and detailed output in tool.log. Here, tool refers to the collection of tests. After a run with --tool gdb, the output files will be named gdb.sum and gdb.log.

DejaGnu output files

When runtest is invoked, DejaGnu generates two output files: a summary log and a detailed log. The contents of these are determined by the test scripts. For troubleshooting, a third kind of output file can be requested with the --debug option. This file, called dbg.log, shows what Expect is doing internally.

Summary File

DejaGnu always produces a summary output file called tool.sum, where tool is the name of the tool under test. This summary shows the names of all test scripts run; for each test script, one line of output for each test result; trailing summary statistics that tally the number of passing and failing tests (both expected and unexpected); and the full pathname and version number of the tool tested. All possible test outcomes and errors are included in the summary output file, regardless of whether or not you specify the --all option. If any of your tests use the procedures unresolved, unsupported, or untested, the summary output also tabulates the corresponding outcomes.

For example, after runtest --tool binutils, DejaGnu will produce a summary log file called binutils.sum. Normally, DejaGnu writes this file in your current working directory; use the --outdir option to select a different directory.

Sample summary log

Test Run By rob on Tue Feb 3 23:14:04 2004
=== gdb tests ===
Running ./gdb.t00/echo.exp ...
PASS: Echo test
Running ./gdb.all/help.exp ...
PASS: help add-symbol-file
PASS: help aliases
PASS: help breakpoint "bre" abbreviation
FAIL: help run "r" abbreviation
Running ./gdb.t10/crossload.exp ...
PASS: m68k-elf (elf-big) explicit format; loaded
XFAIL: mips-ecoff (ecoff-bigmips) "ptype v_signed_char" signed C types
=== gdb Summary ===
# of expected passes 5
# of expected failures 1
# of unexpected failures 1
/usr/latest/bin/gdb version 4.6.5 -q

Log File

DejaGnu also produces a detailed log file called tool.log, showing any output generated by tests as well as the summary output. For example, after runtest --tool binutils, DejaGnu will produce a detailed log file named binutils.log. Normally, DejaGnu writes this file in your current working directory; use the --outdir option to select a different directory.
Detailed log for G++ tests

Test Run By rob on Tue Feb 3 23:16:23 2004
=== g++ tests ===
--- Running ./g++.other/t01-1.exp ---
PASS: operate delete
--- Running ./g++.other/t01-2.exp ---
FAIL: i960 bug EOF
p0000646.C: In function `int warn_return_1 ()':
p0000646.C:109: warning: control reaches end of non-void function
p0000646.C: In function `int warn_return_arg (int)':
p0000646.C:117: warning: control reaches end of non-void function
p0000646.C: In function `int warn_return_sum (int, int)':
p0000646.C:125: warning: control reaches end of non-void function
p0000646.C: In function `struct foo warn_return_foo ()':
p0000646.C:132: warning: control reaches end of non-void function
--- Running ./g++.other/t01-4.exp ---
FAIL: abort
900403_04.C:8: zero width for bit-field `foo'
--- Running ./g++.other/t01-3.exp ---
FAIL: segment violation
900519_12.C:9: parse error before `;'
900519_12.C:12: Segmentation violation
/usr/latest/bin/gcc: Internal compiler error: program cc1plus got fatal signal
=== g++ Summary ===
# of expected passes 1
# of expected failures 3
/usr/latest/bin/g++ version cygnus-2.0.1

Debug Log File

The --debug option will generate a debug log file showing the internal output from Expect running in debugging mode. This file, called dbg.log, is created in the directory where runtest is invoked and shows each pattern Expect considers in analyzing program output. This file reflects each send command, showing the string sent as input to the program under test, and each expect command, showing each pattern it compares with the program output.

The log messages begin with a message of the form:

expect: does {tool output} (spawn_id n) match pattern {expected pattern}?

For every unsuccessful match, Expect issues a no after this message; if other patterns are specified for the same expect command, they are reflected also, but without the first part of the message (expect... match pattern). When Expect finds a match, the log for the successful match ends with yes, followed by a record of the Expect variables set to describe a successful match.

Debug log for a GDB test:

send: sent {break gdbme.c:34\n} to spawn id 6
expect: does {} (spawn_id 6) match pattern {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? no
{.*\(gdb\) $}? no
expect: does {} (spawn_id 0) match pattern {return} ? no
{\(y or n\) }? no
{buffer_full}? no
{virtual}? no
{memory}? no
{exhausted}? no
{Undefined}? no
{command}? no
break gdbme.c:34
Breakpoint 8 at 0x23d8: file gdbme.c, line 34.
(gdb) expect: does {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) } (spawn_id 6) match pattern {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? yes
expect: set expect_out(0,start) {18}
expect: set expect_out(0,end) {71}
expect: set expect_out(0,string) {Breakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
expect: set expect_out(spawn_id) {6}
expect: set expect_out(buffer) {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
PASS: 70 0 breakpoint line number in file

This example exhibits three properties of Expect and DejaGnu that might be surprising at first glance:

Empty output for the first attempted match. The first set of attempted matches shown ran against the output {} --- that is, no output. Expect begins attempting to match the patterns supplied immediately; often, the first pass is against incomplete output (or completely before all output, as in this case).

Interspersed tool output.
The beginning of the log entry for the second attempted match may be hard to spot: this is because the prompt {(gdb) } appears on the same line, just before the expect: that marks the beginning of the log entry. Fail-safe patterns. Many of the patterns tested are fail-safe patterns provided by GDB testing utilities, to reduce possible indeterminacy. It is useful to anticipate potential variations caused by extreme system conditions (GDB might issue the message virtual memory exhausted in rare circumstances), or by changes in the tested program (Undefined command is the likeliest outcome if the name of a tested command changes). The pattern {return} is a particularly interesting fail-safe to notice; it checks for an unexpected RET prompt. This may happen, for example, if the tested tool can filter output through a pager. These fail-safe patterns (like the debugging log itself) are primarily useful while developing test scripts. Use the error procedure to make the actions for fail-safe patterns produce messages starting with ERROR on standard output, and in the detailed log file. Tutorial This chapter was originally written by Niklaus Giger (ngiger@mus.ch) because he lost a week to figure out how DejaGnu works and how to write a first test. This tutorial will give a brief, but sound overview into how DejaGnu works. The examples given in this chapter were run on an AMD K6 machine with a Mac Powerbook G3 acting as a remote target. The tests for Windows were run under Cygwin. Its target system was a PowerPC embedded system running vxWorks. Test your installation Create a new user called "dgt" (DejaGnuTest), which uses bash as it login shell. PS1 must be set to '\u:\w\$ ' in its ~/.bashrc. Login as this user, create an empty directory and change the working directory to it. e.g dgt:~$ mkdir ~/dejagnu.test dgt:~$ cd ~/dejagnu.test Now you are ready to test DejaGnu's main program called runtest. The expected output is shown below. Runtest output in a empty directory dgt:~/dejagnu.test$ runtest WARNING: Couldn't find the global config file. WARNING: No tool specified Test Run By dgt on Sun Nov 25 17:07:03 2001 Native configuration is i586-pc-linux-gnu === tests === Schedule of variations: unix Running target unix Using /usr/share/dejagnu/baseboards/unix.exp as board description file for target. Using /usr/share/dejagnu/config/unix.exp as generic interface file for target. ERROR: Couldn't find tool config file for unix. === Summary === Do not be concerned by the WARNING and ERROR messages at this stage. The files testrun.sum and testrun.log will be created. They are of no interest at this point, so they can be removed: :~/dejagnu.test$ rm testrun.sum testrun.log Windows On a Cygwin system, DejaGnu can be installed from the Cygwin packages collection. Cygwin may be downloaded and installed from a mirror of http://www.cygwin.com. Unless mentioned explicitly, you can assume that the output is identical to that of a Unix system. You will need to install the telnet server from the Cygwin inetutils package if you want to use a Cygwin system as a remote target. Getting the source code for the calc example If you are running a Debian distribution you can find the examples under /usr/share/doc/dejagnu/examples. These examples seem to be missing in Red Hat's RPM. In this case download the sources of DejaGnu and adjust the pathes to the DejaGnu examples accordingly. Create a minimal project, e.g. 
calc

In this section you will start a small project, using the sample application calc, which is part of the DejaGnu distribution.

A simple project without the GNU autotools

The runtest program can be run standalone. All the autoconf/automake support is there because those programs are commonly used for other GNU applications. The key to running runtest standalone is having the local site.exp file set up correctly, which automake does. The generated site.exp should look like:

set tool calc
set srcdir .
set objdir /home/dgt/dejagnu.test

Using autoconf/autoheader/automake

We have to prepare some input files in order to run autoconf and automake. There is a book, "GNU autoconf, automake and libtool" by Gary V. Vaughan, et al. (New Riders, ISBN 1-57870-190-2), which describes this process thoroughly. From the calc example distributed with the DejaGnu documentation you should copy the program file itself (calc.c) and some additional files, which you might examine a little more closely to derive their meaning.

dgt:~/dejagnu.test$ cp -r /usr/share/doc/dejagnu/examples/calc/\
{configure.in,Makefile.am,calc.c,testsuite} .

In Makefile.am, note the presence of the line AUTOMAKE_OPTIONS = dejagnu. This option is required.

Run aclocal to generate aclocal.m4, which is a collection of macros needed by Autoconf.

dgt:~/dejagnu.test$ aclocal

autoconf is another part of the auto-tools. Run it to generate the configure script from configure.in.

dgt:~/dejagnu.test$ autoconf

autoheader is another part of the auto-tools. Run it to generate calc.h.in.

dgt:~/dejagnu.test$ autoheader

The Makefile.am of this example was developed as part of the DejaGnu distribution. Adapt Makefile.am for this test: replace the line "#noinst_PROGRAMS = calc" with "bin_PROGRAMS = calc", and change the RUNTESTDEFAULTFLAGS from "$$srcdir/testsuite" to "./testsuite".

Running automake at this point produces a series of warnings in its output, as shown in the following example:

Sample output of automake with missing files

dgt:~/dejagnu.test$ automake --add-missing
automake: configure.in: installing `./install-sh'
automake: configure.in: installing `./mkinstalldirs'
automake: configure.in: installing `./missing'
automake: Makefile.am: installing `./INSTALL'
automake: Makefile.am: required file `./NEWS' not found
automake: Makefile.am: required file `./README' not found
automake: Makefile.am: installing `./COPYING'
automake: Makefile.am: required file `./AUTHORS' not found
automake: Makefile.am: required file `./ChangeLog' not found
configure.in: 4: required file `./calc.h.in' not found
Makefile.am:6: required directory ./doc does not exist

Create an empty directory doc and empty files INSTALL, NEWS, README, AUTHORS, ChangeLog and COPYING. The default COPYING will point to the GNU General Public License (GPL). In a real project it would be time to add some meaningful text in each file.

Adapt calc to your environment by running configure.

Sample output of configure

dgt:~/dejagnu.test$ ./configure creating cache ./config.cache checking whether to enable maintainer-specific portions of Makefiles... no checking for a BSD compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking whether make sets ${MAKE}... yes checking for working aclocal... found checking for working autoconf... found checking for working automake... found checking for working autoheader... found checking for working makeinfo... found checking for gcc... gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler...
no checking whether we are using GNU C... yes checking whether gcc accepts -g... yes checking for a BSD compatible install... /usr/bin/install -c checking how to run the C preprocessor... gcc -E checking for stdlib.h... yes checking for strcmp... yes updating cache ./config.cache creating ./config.status creating Makefile creating calc.h Build the calc executable: Sample output building calc dgt:~/dejagnu.test$ make gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c calc.c gcc -g -O2 -o calc calc.o You prepared a few files and then called some commands. Respecting the right order assures a automatic and correctly compiled calc program. The following example resumes the correct order. Creating the calc program using the GNU autotools dgt:~/dejagnu.test$ aclocal dgt:~/dejagnu.test$ autoconf dgt:~/dejagnu.test$ autoheader dgt:~/dejagnu.test$ automake --add-missing dgt:~/dejagnu.test$ ./configure dgt:~/dejagnu.test$ make Play with calc and verify whether it works correctly. A sample session might look like this: dgt:~/dejagnu.test$ ./calc calc: version Version: 1.1 calc: add 3 4 7 calc: multiply 3 4 12 calc: multiply 2 4 12 calc: quit Look at the intentional bug that 2 times 4 equals 12. The tests run by DejaGnu need a file called site.exp, which is automatically generated if we run make site.exp. This was the purpose of the AUTOMAKE_OPTIONS = dejagnu in Makefile.am. Sample output generating a site.exp dgt: make site.exp dgt:~/dejagnu.test$ make site.exp Making a new site.exp file... Running automated tests Running the calc testsuite This section describes how to run the DejaGnu testsuite for the calc example program. Most packages provide a check Makefile target for this purpose. Sample output of runtest in a configured directory dgt:~/dejagnu.test$ make check make check-DEJAGNU make[1]: Entering directory `/home/dgt/dejagnu.test' srcdir=`cd . && pwd`; export srcdir; \ EXPECT=expect; export EXPECT; \ runtest=runtest; \ if /bin/sh -c "$runtest --version" > /dev/null 2>&1; then \ $runtest --tool calc CALC=`pwd`/calc --srcdir ./testsuite ; \ else echo "WARNING: could not find \`runtest'" 1>&2; :;\ fi WARNING: Couldn't find the global config file. WARNING: Couldn't find tool init file Test Run By dgt on Sun Nov 25 21:42:21 2001 Native configuration is i586-pc-linux-gnu === calc tests === Schedule of variations: unix Running target unix Using /usr/share/dejagnu/baseboards/unix.exp as board description file for target. Using /usr/share/dejagnu/config/unix.exp as generic interface file for target. Using ./testsuite/config/unix.exp as tool-and-target-specific interface file. Running ./testsuite/calc.test/calc.exp ... FAIL: multiply2 (bad match) === calc Summary === # of expected passes 5 # of unexpected failures 1 /home/Dgt/dejagnu.test/calc version Version: 1.1 make[1]: *** [check-DEJAGNU] Fehler 1 make[1]: Leaving directory `/home/Dgt/dejagnu.test' make: *** [check-am] Fehler 2 The “FAIL:“ line shows that test cases for calc catch the bug in calc.c. Examine the output files calc.sum and calc.log. Try to understand the tests in example/calc/testsuite/calc.test/calc.exp. To understand Expect you might take a look at the book "Exploring Expect", which is an excellent resource for learning and using Expect. (Pub: O'Reilly, ISBN 1-56592-090-2) The book contains hundreds of examples and also includes a tutorial on Tcl. The various configuration files (how to avoid warnings) DejaGnu may be customized by each user. It first searches for a file called .dejagnurc in the user's home directory. 
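Outside of the experiments below, a personal ~/.dejagnurc usually holds nothing more than output preferences; a minimal sketch, where which variables you set (if any) is entirely up to you:

    # Possible ~/.dejagnurc contents: personal defaults, loaded before the other config files.
    set all_flag 1     ;# show output for passing tests too, like --all
    set verbose 1      ;# default verbosity, as if -v had been given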
Create a .dejagnurc file and insert the following line: puts "I am ~/.dejagnurc" Re-run make check and note that the test output contains "I am ~/.dejagnurc". Now create ~/my_dejagnu.exp and insert the following line into that file: puts "I am ~/my_dejagnu.exp" In a Bourne shell, enter: export DEJAGNU=~/my_dejagnu.exp Run make check again. The output should not contain a warning that reads “WARNING: Couldn't find the global config file.”. Create a subdirectory called lib and within that directory, create a file called calc.exp. Insert the following line into that file: puts "I am lib/calc.exp" The last warning “WARNING: Couldn't find tool init file” should now be excluded from the output of make check. Create the directory ˜/boards. Create the file ˜/boards/standard.exp and insert the following line: puts "I am boards/standard.exp" If the variable DEJAGNU is still not empty then the (abbreviated) output of “make check” should look like this: Sample output of runtest with the usual configuration files dgt:~/dejagnu.test$ make check <...> fi I am ~/.dejagnurc I am ~/my_dejagnu.exp I am lib/calc.exp Test Run By dgt on Sun Nov 25 22:19:14 2001 Native configuration is i586-pc-linux-gnu === calc tests === Using /home/Dgt/boards/standard.exp as standard board description\ file for build. I am ~/boards/standard.exp Using /home/Dgt/boards/standard.exp as standard board description\ file for host. I am ~/boards/standard.exp Schedule of variations: unix Running target unix Using /home/Dgt/boards/standard.exp as standard board description\ file for target. I am ~/boards/standard.exp Using /usr/share/dejagnu/baseboards/unix.exp as board description file\ for target. <...> When trouble strikes Calling runtest with the '-v'-flag shows you in even more details which files are searched in which order. Passing it several times gives more and more detail. Displaying details about runtest execution runtest -v -v -v --tool calc CALC=`pwd`/calc --srcdir ./testsuite Calling runtest with the '--debug'-flag logs a lot of details to dbg.log where you can analyse it afterwards. In all test cases you can temporary adjust the verbosity of information by adding the following Tcl-command to any tcl file that gets loaded by dejagnu, for instance, ~/.dejagnurc: set verbose 9 Testing “Hello world” locally This test checks, whether the built-in shell command “echo Hello world” will really write “Hello world” on the console. Create the file ~/dejagnu.test/testsuite/calc.test/local_echo.exp. It should contain the following lines A first (local) test case set test "Local Hello World" send "echo Hello World" expect { -re "Hello World" { pass "$test" } } Run runtest again and verify the output “calc.log” A first remote test Testing remote targets is a lot trickier especially if you are using an embedded target which has no built in support for things like a compiler, ftp server or a Bash-shell. Before you can test calc on a remote target you have to acquire a few basics skills. Setup telnet to your own host The easiest remote host is usually the host you are working on. In this example we will use telnet to login in your own workstation. For security reason you should never have a telnet deamon running on machine connected on the internet, as password and usernames are transmitted in clear text. We assume you know how to setup your machine for a telnet daemon. Next try whether you may login in your own host by issuing the command “telnet localhost.1”. 
In order to be able to distinguish between a normal session and a telnet login, add the following lines to /home/dgt/.bashrc:

if [ "$REMOTEHOST" ]
then
PS1='remote:\w\$ '
fi

Now on the machine a "remote" login looks like this:

Sample log of a telnet login to localhost

dgt:~/dejagnu.test$ telnet localhost Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Debian GNU/Linux testing/unstable Linux K6Linux login: dgt Password: Last login: Sun Nov 25 22:46:34 2001 from localhost on pts/4 Linux K6Linux 2.4.14 #1 Fre Nov 16 19:28:25 CET 2001 i586 unknown No mail. remote:~$ exit logout Connection closed by foreign host.

A test case for login via telnet

In order to define a correct setup we have to add a line containing "set target unix" either to ~/.dejagnurc or to ~/my_dejagnu.exp. In ~/boards/standard.exp add the following four lines to define a few patterns for the DejaGnu telnet login procedure.

Defining a remote target board

set_board_info shell_prompt "remote:"
set_board_info telnet_username "dgt"
set_board_info telnet_password "top_secret"
set_board_info hostname "localhost"

As DejaGnu parses the telnet session output for some well-known patterns, there are a lot of things that can go wrong. If you have any problems, verify your setup: Is /etc/motd empty? Is /etc/issue.net empty? Does an empty ~/.hushlogin exist? The LANG environment variable must be either empty or set to "C".

To test the login via telnet, write a sample test case. Create the file ~/dejagnu.test/testsuite/calc.test/remote_echo.exp and add the following few lines:

DejaGnu script for logging in into a remote target

puts "this is remote_echo.exp target for $target "
target_info $target
#set verbose 9
set shell_id [remote_open $target]
set test "Remote login to $target"
#set verbose 0
puts "Spawn id for remote shell is $shell_id"
if { $shell_id > 0 } { pass "$test" } else { fail "Remote open to $target" }

In the runtest output you should find something like:

Running ./testsuite/calc.test/local_echo.exp ...
Running ./testsuite/calc.test/remote_echo.exp ...
this is remote_echo.exp target is unix
Spawn id for remote shell is exp7

Have another look at calc.log to get a feeling for how DejaGnu and Expect parse the input.

Remote testing "Hello world"

Next you will transform the above "Hello world" example to its remote equivalent. This can be done by adding the following lines to our file remote_echo.exp.

A first remote "Hello world" test

set test "Remote_send Hello World"
set status [remote_send $target "echo \"Hello\" \"World\"\n" ]
pass "$test"
set test "Remote_expect Hello World"
remote_expect $target 5 {
    -re "Hello World" { pass "$test" }
}

Call make check. The output should contain "# of expected passes 9" and "# of unexpected failures 1". Have a look at the procedures in /usr/share/dejagnu/remote.exp for an overview of the offered procedures and their features.

Now set up a real target. In the following example we assume as target a PowerBook running Debian. As above, add a test user "dgt", and install telnet and FTP servers. In order to distinguish it from the host, add the line PS1='test:>' to /home/dgt/.bash_profile. Also add a corresponding entry "powerbook" to /etc/hosts and verify that you are able to ping, telnet and ftp to the target "powerbook".
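Independent of which target you use, the generic procedures in /usr/share/dejagnu/remote.exp mentioned above can be exercised in the same style. As a hedged sketch (the command run and the output checked for are arbitrary), a test can execute a single command on whatever board is currently defined and inspect its status and output:

    set test "remote uname"
    # remote_exec returns a list: exit status first, captured output second.
    set result [remote_exec $target "uname -a"]
    set status [lindex $result 0]
    set output [lindex $result 1]
    if { $status == 0 && [string match "*Linux*" $output] } {
        pass $test
    } else {
        fail "$test (status $status)"
    }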
In order to let runtest run its tests on the "powerbook" target, change the following lines in ~/boards/standard.exp:

Board definition for a remote target

set_board_info protocol "telnet"
set_board_info telnet_username "dgt"
set_board_info telnet_password "top_secret"
set_board_info shell_prompt "test:> "
set_board_info hostname "powerbook"

Now call runtest again with the same arguments and verify whether all went okay by taking a close look at calc.log.

Transferring files from/to the target

A simple procedure like this will do the job for you:

Test script to transfer a file to a remote target

set test "Remote_download"
puts "Running Remote_download"
# set verbose 9
set remfile /home/dgt/dejagnu2
set status [remote_download $target /home/dgt/.dejagnurc $remfile]
if { "$status" == "" } {
    fail "Remote download to $remfile on $target"
} else {
    pass "$test"
}
puts "status of remote_download is $status"
# set verbose 0

After running runtest again, check whether the file dejagnu2 exists on the target. This example will only work if the rcp command works with your target. If you have a working FTP server on the target you can use it by adding the following lines to ~/boards/standard.exp:

Defining a board to use FTP as file transport

set_board_info file_transfer "ftp"
set_board_info ftp_username "dgt"
set_board_info ftp_password "1234"

Preparing for cross-compilation

For cross-compiling you need working binutils, gcc and a base library like libc or glibc for your target. It is beyond the scope of this document to describe how to get them working. The following examples assume a cross compiler for PowerPC which is called powerpc-linux-gcc.

Add AC_CANONICAL_TARGET to dejagnu.test/configure.in at the following location, and copy config.guess from /usr/share/automake to dejagnu.test:

AM_CONFIG_HEADER(calc.h)
AC_CANONICAL_TARGET([])
AM_INIT_AUTOMAKE(calc, 1.1)

You need autoconf 2.5 or later. Depending on your installation, the command may be called autoconf2.5 instead of autoconf. The sequence to regenerate all files is:

Using autotools for cross development

$ autoconf2.5
$ autoheader
$ automake
$ ./configure --host=powerpc-linux --target=powerpc-linux
configure: WARNING: If you wanted to set the --build type, don't use --host.
If a cross compiler is detected then cross compile mode will be used.
checking build system type... ./config.guess: ./config.guess: No such file or directory
configure: error: cannot guess build type; you must specify one
$ cp /usr/share/automake/config.guess .
$ ./configure --host=powerpc-linux --target=powerpc-linux
configure: WARNING: If you wanted to set the --build type, don't use --host.
If a cross compiler is detected then cross compile mode will be used.
checking build system type... i586-pc-linux-gnu
checking host system type... powerpc-unknown-linux-gnu
<...>
checking whether we are cross compiling... yes
<...>
Configuration:
Source code location: .
C Compiler: powerpc-linux-gcc
C Compiler flags: -g -O2

Everything should be ready to recompile for the target:

$ make
powerpc-linux-gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -c calc.c
powerpc-linux-gcc -g -O2 -o calc calc.o

Remote testing of calc

Not yet written, as I have problems getting libc6-dev-powerpc to work. Probably I first have to build my cross compiler.

Using Windows as host and vxWorks as target

A more thorough walk-through will be written in a few weeks. In order to test vxWorks as a target I changed boards/standard.exp to reflect my settings (IP, username, password).
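For illustration only, the changed entries might look roughly like this; the IP address, user name, password and prompt below are placeholders, not the settings from the original setup:

    # Hypothetical boards/standard.exp entries for a vxWorks target.
    set_board_info hostname        "192.168.1.50"
    set_board_info telnet_username "target"
    set_board_info telnet_password "secret"
    set_board_info shell_prompt    "-> "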
Then I reconfigured vxWorks to include an FTP and telnet server (using the same username/password combination as in boards/standard.exp). With this setup and some minor modifications (e.g. replacing echo by printf) in my test cases I could test my vxWorks system. It sure does not seem to be a correct setup by DejaGnu standards. For instance, it still loads /usr/share/dejagnu/baseboards/unix.exp instead of a vxWorks board description. In any case, at least under Windows, I did not find out how the command line would let me override settings in my personal config files.

Writing Testsuites

Adding A New Testsuite

The testsuite for a new tool should always be located in that tool's source directory. DejaGnu requires the directory to be named testsuite. Under this directory, the test cases go in a subdirectory whose name begins with the tool name. For example, for a tool named flubber, each subdirectory containing testsuites must start with "flubber.".

Adding A New Tool

In general, the best way to learn how to write (code or even prose) is to read something similar. This principle applies to test cases and to testsuites. Unfortunately, well-established testsuites have a way of developing their own conventions: as test writers become more experienced with DejaGnu and with Tcl, they accumulate more utilities, and take advantage of more and more features of Expect and Tcl in general. Inspecting such established testsuites may make the prospect of creating an entirely new testsuite appear overwhelming. Nevertheless, it is quite straightforward to get a new testsuite going.

There is one testsuite that is guaranteed not to grow more elaborate over time: both it and the tool it tests were created expressly to illustrate what it takes to get started with DejaGnu. The example/ directory of the DejaGnu distribution contains both an interactive tool called calc, and a testsuite for it. Reading this testsuite, and experimenting with it, is a good way to supplement the information in this section. (Thanks to Robert Lupton for creating calc and its testsuite---and also the first version of this section of the manual!)

To help orient you further in this task, here is an outline of the steps to begin building a testsuite for a program example.

Create or select a directory to contain your new collection of tests. Change into that directory (shown here as testsuite).

Create a configure.in file in this directory, to control configuration-dependent choices for your tests. So far as DejaGnu is concerned, the important thing is to set a value for the variable target_abbrev; this value is the link to the init file you will write soon. (For simplicity, we assume the environment is Unix, and use unix as the value.) What else is needed in configure.in depends on the requirements of your tool, your intended test environments, and which configure system you use. A minimal configure.in for use with GNU Autoconf is sufficient; the calc example described in the Tutorial chapter ships one you can start from.

Create Makefile.in (if you are using Autoconf), or Makefile.am (if you are using Automake), the source file used by configure to build your Makefile. If you are using GNU Automake, just add the keyword dejagnu to the AUTOMAKE_OPTIONS variable in your Makefile.am file. This will add all the Makefile support needed to run DejaGnu and support the check target. You also need to include two targets important to DejaGnu: check, to run the tests, and site.exp, to set up the Tcl copies of configuration-dependent values. site.exp is called the local config file. The check target must run the runtest program to execute the tests.
The site.exp target should usually set up (among other things) the $tool variable for the name of your program. If the local site.exp file is setup correctly, it is possible to execute the tests by merely typing runtest on the command line. Sample Makefile.in Fragment # Look for a local version of DejaGnu, otherwise use one in the path RUNTEST = `if test -f $(top_srcdir)/../dejagnu/runtest; then \ echo $(top_srcdir) ../dejagnu/runtest; \ else \ echo runtest; \ fi` # The flags to pass to runtest RUNTESTFLAGS = # Execute the tests check: site.exp all $(RUNTEST) $(RUNTESTFLAGS) \ --tool ${example} --srcdir $(srcdir) # Make the local config file site.exp: ./config.status Makefile @echo "Making a new config file..." -@rm -f ./tmp? @touch site.exp -@mv site.exp site.bak @echo "## these variables are automatically\ generated by make ##" > ./tmp0 @echo "# Do not edit here. If you wish to\ override these values" >> ./tmp0 @echo "# add them to the last section" >> ./tmp0 @echo "set host_os ${host_os}" >> ./tmp0 @echo "set host_alias ${host_alias}" >> ./tmp0 @echo "set host_cpu ${host_cpu}" >> ./tmp0 @echo "set host_vendor ${host_vendor}" >> ./tmp0 @echo "set target_os ${target_os}" >> ./tmp0 @echo "set target_alias ${target_alias}" >> ./tmp0 @echo "set target_cpu ${target_cpu}" >> ./tmp0 @echo "set target_vendor ${target_vendor}" >> ./tmp0 @echo "set host_triplet ${host_canonical}" >> ./tmp0 @echo "set target_triplet ${target_canonical}">>./tmp0 @echo "set tool binutils" >> ./tmp0 @echo "set srcdir ${srcdir}" >> ./tmp0 @echo "set objdir `pwd`" >> ./tmp0 @echo "set ${examplename} ${example}" >> ./tmp0 @echo "## All variables above are generated by\ configure. Do Not Edit ##" >> ./tmp0 @cat ./tmp0 > site.exp @sed < site.bak \ -e '1,/^## All variables above are.*##/ d' \ >> site.exp -@rm -f ./tmp? Create a directory (in testsuite) called config. Make a Tool Init File in this directory. Its name must start with the target_abbrev value, or be named default.exp so call it config/unix.exp for our Unix based example. This is the file that contains the target-dependent procedures. Fortunately, on Unix, most of them do not have to do very much in order for runtest to run. If the program being tested is not interactive, you can get away with this minimal unix.exp to begin with: Simple Batch Program Tool Init File proc foo_exit {} {} proc foo_version {} {} If the program being tested is interactive, however, you might as well define a start routine and invoke it by using an init file like this: Simple Interactive Program Tool Init File proc foo_exit {} {} proc foo_version {} {} proc foo_start {} { global ${examplename} spawn ${examplename} expect { -re "" {} } } # Start the program running we want to test foo_start Create a directory whose name begins with your tool's name, to contain tests. For example, if your tool's name is gcc, then the directories all need to start with "gcc.". Create a sample test file. Its name must end with .exp. You can use first-try.exp. To begin with, just write there a line of Tcl code to issue a message. Testing A New Tool Config send_user "Testing: one, two...\n" Back in the testsuite (top level) directory, run configure. Typically you do this while in the build directory. You may have to specify more of a path, if a suitable configure is not available in your execution path. e now ready to triumphantly type make check or runtest. 
You should see something like this: Example Test Case Run Test Run By rhl on Fri Jan 29 16:25:44 EST 1993 === example tests === Running ./example.0/first-try.exp ... Testing: one, two... === example Summary === There is no output in the summary, because so far the example does not call any of the procedures that establish a test outcome. Write some real tests. For an interactive tool, you should probably write a real exit routine in fairly short order. In any case, you should also write a real version routine soon. Writing A Test Case The easiest way to prepare a new test case is to base it on an existing one for a similar situation. There are two major categories of tests: batch or interactive. Batch oriented tests are usually easier to write. The GCC tests are a good example of batch oriented tests. All GCC tests consist primarily of a call to a single common procedure, Since all the tests either have no output, or only have a few warning messages when successfully compiled. Any non-warning output is a test failure. All the C code needed is kept in the test directory. The test driver, written in Tcl, need only get a listing of all the C files in the directory, and compile them all using a generic procedure. This procedure and a few others supporting for these tests are kept in the library module lib/c-torture.exp in the GCC test suite. Most tests of this kind use very few expect features, and are coded almost purely in Tcl. Writing the complete suite of C tests, then, consisted of these steps: Copying all the C code into the test directory. These tests were based on the C-torture test created by Torbjorn Granlund (on behalf of the Free Software Foundation) for GCC development. Writing (and debugging) the generic Tcl procedures for compilation. Writing the simple test driver: its main task is to search the directory (using the Tcl procedure glob for filename expansion with wildcards) and call a Tcl procedure with each filename. It also checks for a few errors from the testing procedure. Testing interactive programs is intrinsically more complex. Tests for most interactive programs require some trial and error before they are complete. However, some interactive programs can be tested in a simple fashion reminiscent of batch tests. For example, prior to the creation of DejaGnu, the GDB distribution already included a wide-ranging testing procedure. This procedure was very robust, and had already undergone much more debugging and error checking than many recent DejaGnu test cases. Accordingly, the best approach was simply to encapsulate the existing GDB tests, for reporting purposes. Thereafter, new GDB tests built up a family of Tcl procedures specialized for GDB testing. Debugging A Test Case These are the kinds of debugging information available from DejaGnu: Output controlled by test scripts themselves, explicitly allowed for by the test author. This kind of debugging output appears in the detailed output recorded in the DejaGnu log file. To do the same for new tests, use the verbose procedure (which in turn uses the variable also called verbose) to control how much output to generate. This will make it easier for other people running the test to debug it if necessary. Whenever possible, if $verbose is 0, there should be no output other than the output from pass, fail, error, and warning. Then, to whatever extent is appropriate for the particular test, allow successively higher values of $verbose to generate more information. 
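A minimal sketch of this convention, as it might appear inside a test script; the messages, the command and the level thresholds are arbitrary:

    set cmd "gcc -o calc calc.c"               ;# hypothetical compile command
    verbose "compiling the test program" 1     ;# shown at -v and above
    verbose "full command: $cmd" 2             ;# shown at -v -v and above
    if { $verbose > 2 } {
        # arbitrary extra detail for very verbose runs
        verbose "environment: [array get env]" 3
    }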
Be kind to other programmers who use your tests: provide for a lot of debugging information.

Output from the internal debugging functions of Tcl and Expect. There is a command line option for each; both forms of debugging output are recorded in the file dbg.log in the current directory.

Use --debug for information from the Expect level; it generates displays of the Expect attempts to match the tool output with the patterns specified. This output can be very helpful while developing test scripts, since it shows precisely the characters received. Iterating between the latest attempt at a new test script and the corresponding dbg.log can allow you to create the final patterns by ``cut and paste''. This is sometimes the best way to write a test case.

Use --strace to see more detail at the Tcl level; this shows how Tcl procedure definitions expand as they execute. The number given with --strace controls the depth of definitions expanded.

Finally, if the value of verbose is 3 or greater, DejaGnu turns on the Expect command log_user. This command prints all Expect actions to the Expect standard output, to the detailed log file, and (if --debug is on) to dbg.log.

Adding A Test Case To A Testsuite

There are two slightly different ways to add a test case. One is to add the test case to an existing directory. The other is to create a new directory to hold your test. The existing test directories represent several styles of testing, all of which are slightly different; examine the directories for the tool of interest to see which (if any) is most suitable.

Adding a GCC test can be very simple: just add the C code to any directory beginning with gcc. and it runs on the next runtest --tool gcc.

To add a test to GDB, first add any source code you will need to the test directory. Then you can either create a new Expect file, or add your test to an existing one (any file with a .exp suffix). Creating a new .exp file is probably a better idea if the test is significantly different from existing tests. Adding it as a separate file also makes upgrading easier. If the C code has to be already compiled before the test will run, then you'll have to add it to the Makefile.in file for that test directory, then run configure and make.

Adding a test by creating a new directory is very similar:

Create the new directory. All subdirectory names begin with the name of the tool to test; e.g. G++ tests might be in a directory called g++.other. There can be multiple test directories that start with the same tool name (such as g++).

Add the new directory name to the configdirs definition in the configure.in file for the testsuite directory. This way when make and configure next run, they include the new directory.

Add the new test case to the directory, as above.

To add support in the new directory for configure and make, you must also create a Makefile.in and a configure.in.

Hints On Writing A Test Case

It is safest to write patterns that match all the output generated by the tested program; this is called closure. If a pattern does not match the entire output, any output that remains will be examined by the next expect command. In this situation, the precise boundary that determines which expect command sees what is very sensitive to timing between the Expect task and the task running the tested tool. As a result, the test may sometimes appear to work, but is likely to have unpredictable results. (This problem is particularly likely for interactive tools, but can also affect batch tools---especially for tests that take a long time to finish.)
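To make this concrete, here is a hedged sketch of an expect command written in the style the following paragraphs recommend; the GDB-style prompt and the test itself are invented for illustration:

    set test "print the answer"
    send "print 6 * 7\n"
    expect {
        -re "= 42.*\\(gdb\\) $" { pass $test }
        -re "\\(gdb\\) $"       { fail $test }
        timeout                 { unresolved "$test (timeout)" }
    }

Anchoring each pattern on the prompt at the end of the buffer means no stray output is left over for the next expect command.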
The best way to ensure closure is to use the -re option for the expect command to write the pattern as a full regular expression; then you can match the end of output using a $. It is also a good idea to write patterns that match all available output by using .* after the text of interest; this will also match any intervening blank lines. Sometimes an alternative is to match end of line using \r or \n, but this is usually too dependent on terminal settings.

Always escape punctuation, such as ( or ", in your patterns; for example, write \(. If you forget to escape punctuation, you will usually see an error message like extra characters after close-quote.

If you have trouble understanding why a pattern does not match the program output, try using the --debug option to runtest, and examine the debug log carefully.

Be careful not to neglect output generated by setup rather than by the interesting parts of a test case. For example, while testing GDB, I issue a send "set height 0\n" command. The purpose is simply to make sure GDB never calls a paging program. The set height command in GDB does not generate any output; but running any command makes GDB issue a new (gdb) prompt. If there were no expect command to match this prompt, the output (gdb) begins the text seen by the next expect command---which might make that pattern fail to match.

To preserve basic sanity, I also recommend that no test ever pass if there was any kind of problem in the test case. To take an extreme case, tests that pass even when the tool will not spawn are misleading. Ideally, a test in this sort of situation should not fail either. Instead, print an error message by calling one of the DejaGnu procedures error or warning.

Special variables used by test cases

There are special variables used by test cases. These contain other information from DejaGnu. Your test cases can use these variables, with conventional meanings (as well as the variables saved in site.exp). You can use the value of these variables, but they should never be changed.

$prms_id
The tracking system (e.g. GNATS) number identifying a corresponding bug report. (0 if you do not specify it in the test script.)

$bug_id
An optional bug id; may reflect a bug identification from another organization. (0 if you do not specify it.)

$subdir
The subdirectory for the current test case.

$expect_out(buffer)
The output from the last command. This is an internal variable set by Expect. More information can be found in the Expect manual.

$exec_output
This is the output from a ${tool}_load command. This only applies to tools like GCC and GAS which produce an object file that must in turn be executed to complete a test.

$comp_output
This is the output from a ${tool}_start command. This is conventionally used for batch oriented programs, like GCC and GAS, that may produce interesting output (warnings, errors) without further interaction.

Customizing DejaGnu

The site configuration file, site.exp, captures configuration-dependent values and propagates them to the DejaGnu test environment using Tcl variables. This ties the DejaGnu test scripts into the configure and make programs. If this file is set up correctly, it is possible to execute a testsuite merely by typing runtest.

DejaGnu supports two site.exp files. The multiple instances of site.exp are loaded in a fixed order built into DejaGnu. The first file loaded is the local site.exp, and then the optional global site.exp file as pointed to by the DEJAGNU environment variable.
There is an optional master site.exp, capturing configuration values that apply to DejaGnu across the board, in each configuration-specific subdirectory of the DejaGnu library directory. runtest loads these values first. The master site.exp contains the default values for all targets and hosts supported by DejaGnu. This master file is identified by setting the environment variable DEJAGNU to the name of the file. This is also refered to as the ``global'' config file. Any directory containing a configured testsuite also has a local site.exp, capturing configuration values specific to the tool under test. Since runtest loads these values last, the individual test configuration can either rely on and use, or override, any of the global values from the global site.exp file. You can usually generate or update the testsuite's local site.exp by typing make site.exp in the testsuite directory, after the test suite is configured. You can also have a file in your home directory called .dejagnurc. This gets loaded first before the other config files. Usually this is used for personal stuff, like setting the all_flag so all the output gets printed, or your own verbosity levels. This file is usually restricted to setting command line options. You can further override the default values in a user-editable section of any site.exp, or by setting variables on the runtest command line. Local Config File It is usually more convenient to keep these manual overrides in the site.exp local to each test directory, rather than in the global site.exp in the installed DejaGnu library. This file is mostly for supplying tool specific info that is required by the testsuite. All local site.exp files have two sections, separated by comment text. The first section is the part that is generated by make. It is essentially a collection of Tcl variable definitions based on Makefile environment variables. Since they are generated by make, they contain the values as specified by configure. (You can also customize these values by using the option to configure.) In particular, this section contains the Makefile variables for host and target configuration data. Do not edit this first section; if you do, your changes are replaced next time you run make. The first section starts with ## these variables are automatically generated by make ## # Do not edit here. If you wish to override these values # add them to the last section In the second section, you can override any default values (locally to DejaGnu) for all the variables. The second section can also contain your preferred defaults for all the command line options to runtest. This allows you to easily customize runtest for your preferences in each configured test-suite tree, so that you need not type options repeatedly on the command line. (The second section may also be empty, if you do not wish to override any defaults.) The first section ends with this line ## All variables above are generated by configure. Do Not Edit ## You can make any changes under this line. If you wish to redefine a variable in the top section, then just put a duplicate value in this second section. Usually the values defined in this config file are related to the configuration of the test run. This is the ideal place to set the variables host_triplet, build_triplet, target_triplet. All other variables are tool dependant, i.e., for testing a compiler, the value for CC might be set to a freshly built binary, as opposed to one in the user's path. 
Here's an example local site.exp file, as used for GCC/G++ testing.

Local Config File

## these variables are automatically generated by make ##
# Do not edit here. If you wish to override these values
# add them to the last section
set rootme "/build/devo-builds/i586-pc-linux-gnulibc1/gcc"
set host_triplet i586-pc-linux-gnulibc1
set build_triplet i586-pc-linux-gnulibc1
set target_triplet i586-pc-linux-gnulibc1
set target_alias i586-pc-linux-gnulibc1
set CFLAGS ""
set CXXFLAGS "-isystem /build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libio -isystem $srcdir/../libg++/src -isystem $srcdir/../libio -isystem $srcdir/../libstdc++ -isystem $srcdir/../libstdc++/stl -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libg++ -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libstdc++"
append LDFLAGS " -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../ld"
set tmpdir /build/devo-builds/i586-pc-linux-gnulibc1/gcc/testsuite
set srcdir "${srcdir}/testsuite"
## All variables above are generated by configure. Do Not Edit ##

This file defines the required fields for a local config file, namely the three config triplets and the srcdir. It also defines several other Tcl variables that are used exclusively by the GCC testsuite. For most test cases, the CXXFLAGS and LDFLAGS are supplied by DejaGnu itself for cross testing, but to test a compiler, GCC needs to manipulate these itself.

Global Config File

The master config file is where all the target-specific config variables for a whole site get set. The idea is to support a centralized testing lab where multiple developers have to share a target. There are settings for both remote targets and remote hosts. Here's an example of a Master Config File (also called the Global config file) for a Canadian cross. A Canadian cross is when you build and test a cross compiler on a machine other than the one it's to be hosted on.

Here we have the config settings for our California office. Note that all config values are site-dependent. Here we have two sets of values that we use for testing m68k-aout cross compilers. As each of these target boards uses a different debugging protocol, we test on both of them in sequence.

Global Config file

# Make sure we look in the right place for the board description files.
if ![info exists boards_dir] {
    set boards_dir {}
}
lappend boards_dir "/nfs/cygint/s1/cygnus/dejagnu/boards"

verbose "Global Config File: target_triplet is $target_triplet" 2

global target_list
case "$target_triplet" in {
    { "native" }       { set target_list "unix" }
    { "sparc64-*elf" } { set target_list "sparc64-sim" }
    { "mips-*elf" }    { set target_list "mips-sim wilma barney" }
    { "mips-lsi-elf" } { set target_list "mips-lsi-sim{,soft-float,el}" }
    { "sh-*hms" }      { set target_list { "sh-hms-sim" "bloozy" } }
}

In this case, we have support for several cross compilers that all run on this host. For testing on operating systems that don't support Expect, DejaGnu can be run on the local build machine, and it can connect to the remote host and run all the tests for this cross compiler on that host. All the remote OS requires is a working telnetd.

As you can see, all one does is set the variable target_list to the list of targets and options to test. The simple settings, like the one for sparc64-elf, only require setting the name of the single board config file. The mips-elf target is more complicated: it sets the list to three target boards. One is the default mips target, and wilma and barney are symbolic names for other mips boards.
Symbolic names are covered elsewhere in this manual. The more complicated example is the one for mips-lsi-elf. This one runs the tests in multiple iterations, using all possible combinations of the soft-float and el (little endian) options listed in the mips-lsi-sim{,soft-float,el} entry above. Needless to say, this last feature is mostly compiler-specific.

Board Config File

The board config file is where board-specific config data is stored. A board config file contains all the higher-level configuration settings. There is a rough inheritance scheme, where it is possible to base a new board description file on an existing one. There are also collections of custom procedures for common environments. For more information on adding a new board config file, see the Adding A New Board chapter.

An example board config file for a GNU simulator is shown below. set_board_info is a procedure that sets the field name to the specified value. The procedures in square brackets [] are helper procedures. These are used to find parts of a tool chain required to build an executable image, parts that may reside in various locations. This is mostly useful when the startup code, the standard C libraries, or the tool chain itself is part of your build tree.

Board Config File

# This is a list of toolchains that are supported on this board.
set_board_info target_install {sparc64-elf}

# Load the generic configuration for this board. This will define any
# routines needed by the tool to communicate with the board.
load_generic_config "sim"

# We need this for find_gcc and *_include_flags/*_link_flags.
load_base_board_description "basic-sim"

# Use long64 by default.
process_multilib_options "long64"
setup_sim sparc64

# We only support newlib on this target. We assume that all multilib
# options have been specified before we get here.
set_board_info compiler "[find_gcc]"
set_board_info cflags "[libgloss_include_flags] [newlib_include_flags]"
set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags]"

# No linker script.
set_board_info ldscript "";

# Used by a few gcc.c-torture testcases to delimit how large the
# stack can be.
set_board_info gcc,stack_size 16384

# The simulator doesn't return exit statuses and we need to indicate this;
# the standard GCC wrapper will work with this target.
set_board_info needs_status_wrapper 1

# We can't pass arguments to programs.
set_board_info noargs 1

There are five helper procedures used in this example. The first one, find_gcc, looks for a copy of the GNU compiler in your build tree, or else uses the one in your path. It also returns the properly transformed name for a cross compiler if your whole build tree is configured for one. The next helper procedures are libgloss_include_flags and libgloss_link_flags. These return the proper flags to compile and link an executable image using libgloss, the GNU BSP (Board Support Package). The final procedures are newlib_include_flags and newlib_link_flags. These find the Newlib C library, a reentrant standard C library for embedded systems consisting of non-GPL'd code.

Remote Host Testing

Thanks to Dj Delorie for the original paper that this section is based on.

DejaGnu also supports running the tests on a remote host. To set this up, the remote host needs an ftp server and a telnet server. Currently the foreign operating systems that can be used as remote hosts are VxWorks, VRTX, DOS/Windows 3.1, MacOS, and Windows.

The recommended way to get a Windows-based FTP server is to install IIS (either IIS 1 or Personal Web Server) from http://www.microsoft.com. When you install it, make sure you install the FTP server - it's not selected by default.
Go into the IIS manager and change the FTP server so that it does not allow anonymous FTP. Set the home directory to the root directory (i.e. c:\) of a suitable drive. Allow writing via FTP. It will create an account like IUSR_FOOBAR, where foobar is the name of your machine. Go into the user editor and give that account a password that you don't mind hanging around in the clear (i.e. not the same as your admin or personal passwords). Also, add it to all the various permission groups.

You'll also need a telnet server. For Windows, go to the Ataman web site, pick up the Ataman Remote Logon Services for Windows, and install it. You can get started during the eval period anyway. Add IUSR_FOOBAR to the list of allowed users, and set the HOME directory to be the same as the FTP default directory. Change the Mode prompt to simple.

OK, now you need to pick a directory name to do all the testing in. For the sake of this example, we'll call it piggy (i.e. c:\piggy). Create this directory.

You'll need a unix machine. Create a directory for the scripts you'll need. For this example, we'll use /usr/local/swamp/testing. You'll need to have a source tree somewhere, say /usr/src/devo. Now, copy some files from releng's area in SV to your machine:

Remote host setup

cd /usr/local/swamp/testing
mkdir boards
scp darkstar.welcomehome.org:/dejagnu/cst/bin/MkTestDir .
scp darkstar.welcomehome.org:/dejagnu/site.exp .
scp darkstar.welcomehome.org:/dejagnu/boards/useless98r2.exp boards/foobar.exp
export DEJAGNU=/usr/local/swamp/testing/site.exp

You must edit the boards/foobar.exp file to reflect your machine; change the hostname (foobar.com), username (iusr_foobar), password, and ftp_directory (c:/piggy) to match what you selected. Edit the global site.exp to reflect your boards directory:

Add The Board Directory

lappend boards_dir "/usr/local/swamp/testing/boards"

Now run MkTestDir, which is in the contrib directory. The first parameter is the toolchain prefix, the second is the location of your devo tree. If you are testing a cross compiler (e.g. you have sh-hms-gcc.exe in your PATH on the PC), do something like this:

Setup Cross Remote Testing

./MkTestDir sh-hms /usr/dejagnu/src/devo

If you are testing a native PC compiler (e.g. you have gcc.exe in your PATH on the PC), do this:

Setup Native Remote Testing

./MkTestDir '' /usr/dejagnu/src/devo

To test the setup, ftp to your PC using the username (iusr_foobar) and password you selected. CD to the test directory. Upload a file to the PC. Now telnet to your PC using the same username and password. CD to the test directory. Make sure the file is there. Type "set" and/or "gcc -v" (or sh-hms-gcc -v) and make sure the default PATH contains the installation you want to test.

Run Test Remotely

cd /usr/local/swamp/testing
make -k -w check RUNTESTFLAGS="--host_board foobar --target_board foobar -v -v" > check.out 2>&1

To run a specific test, use a command like this (for this example, you'd run this from the gcc directory that MkTestDir created):

Run a Test Remotely

make check RUNTESTFLAGS="--host_board sloth --target_board sloth -v compile.exp=921202-1.c"

Note: if you are testing a cross-compiler, put in the correct target board. You'll also have to download more .exp files and modify them for your local configuration. The -v's are optional.

Config File Values

DejaGnu uses a named array in Tcl to hold all the info for each machine. In the case of a Canadian cross, this means host information as well as target information. The named array is called target_info, and it has two indices.
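As a sketch only (the exact indices and field names are an assumption here, reusing the hostname, username, and ftp_directory settings from the boards/foobar.exp description above), entries in such an array would take a form along these lines:

# hypothetical illustration of the two indices: machine name and field name
set target_info(foobar,hostname)      "foobar.com"
set target_info(foobar,username)      "iusr_foobar"
set target_info(foobar,ftp_directory) "c:/piggy"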
The fields of this array correspond to the configuration values described in the board info tables later in this manual.

Command Line Option Variables

In the user-editable second section of the local site.exp you can not only override the configuration variables captured in the first section, but also specify default values for the runtest command line options. With a few exceptions, each command line option has an associated Tcl variable. Use the Tcl set command to specify a new default value (as for the configuration variables). The following table describes the correspondence between command line options and the variables you can set in site.exp; the options themselves are described earlier in this manual.

Tcl Variables For Command Line Options (runtest option, Tcl variable, description):

--all (all_flag): display all test results if set.
--baud (baud): set the default baud rate to something other than 9600.
--connect (connectmode): rlogin, telnet, rsh, kermit, tip, or mondfe.
--outdir (outdir): directory for tool.sum and tool.log.
--objdir (objdir): directory for pre-compiled binaries.
--reboot (reboot): reboot the target if set to "1"; do not reboot if set to "0" (the default).
--srcdir (srcdir): directory of test subdirectories.
--strace (tracelevel): a number: the Tcl trace depth.
--tool (tool): name of the tool to test; identifies the init file and test subdirectory.
--verbose (verbose): verbosity level. As an option, use it multiple times; as a variable, set a number, 0 or greater.
--target (target_triplet): the canonical configuration string for the target.
--host (host_triplet): the canonical configuration string for the host.
--build (build_triplet): the canonical configuration string for the build host.
--mail (address): email the output log to the specified address.
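For example (a purely hypothetical set of defaults), adding the following to the editable section of a local site.exp has roughly the same effect as passing --all, --reboot, and --tool gcc on every runtest invocation:

set all_flag 1    ;# --all: show expected passes as well as failures
set reboot 1      ;# --reboot: reboot the target board when runtest initializes
set tool gcc      ;# --tool gcc: select the gcc init file and test subdirectories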
Personal Config File

The personal config file is used to customize runtest's behaviour for each person. It is typically used to set the user's preferred verbosity and to define any experimental Tcl procedures. My personal ~/.dejagnurc file looks like:

Personal Config File

set all_flag 1
set RLOGIN /usr/ucb/rlogin
set RSH /usr/local/sbin/ssh

Here I set all_flag so I see all the test cases that PASS along with the ones that FAIL. I also set RLOGIN to the BSD version. I have Kerberos installed, and when I rlogin to a target board, it usually isn't supported, so I use the non-secure version rather than the default that's in my path. I also set RSH to the SSH secure shell, as rsh is mostly used to test unix machines within a local network here.
Unit Testing

What Is Unit Testing?

Most regression testing as done by DejaGnu is system testing: the complete application is tested all at once. Unit testing is for testing single files or small libraries. In this case, each file is linked with a test case in C or C++, and each function, or each class and method, is tested in series, with the test case having to check private data or global variables to see if the function or method worked. This works particularly well for testing APIs, and at a level where it is easier to debug them than by tracing through the entire application. Also, if there is a specification for the API to be tested, the test case can double as a compliance test.

The dejagnu.h Header File

DejaGnu uses a single header file to assist in unit testing. As this file also produces its own test state output, it can be run standalone, which is very useful for testing on embedded systems. This header file has a C and a C++ API for the test states, with simple totals and standardized output. Because the output has been standardized, DejaGnu can be made to work with such a test case without writing almost any Tcl. The library module dejagnu.exp will look for the output messages and then merge them into DejaGnu's own output.

C Unit Testing API

All of the functions that take a msg parameter use a C char * that is the message to be displayed. There currently is no support for variable-length arguments.

pass msg: Prints a message for a successful test completion.

fail msg: Prints a message for an unsuccessful test completion.

untested msg: Prints a message for a test case that isn't run for some technical reason.

unresolved msg: Prints a message for a test case that is run but has no clear result. These output states require a human to look over the results to determine what happened.

totals: Prints out the totals of all the test state outputs.

C++ Unit Testing API

All of the methods that take a msg parameter use a C char * or an STL string that is the message to be displayed. There currently is no support for variable-length arguments.

TestState::pass msg: Prints a message for a successful test completion.

TestState::fail msg: Prints a message for an unsuccessful test completion.

TestState::untested msg: Prints a message for a test case that isn't run for some technical reason.

TestState::unresolved msg: Prints a message for a test case that is run but has no clear result. These output states require a human to look over the results to determine what happened.

TestState::totals: Prints out the totals of all the test state outputs.

Extending DejaGnu

Adding A New Target

DejaGnu has some additional requirements for target support, beyond the general-purpose provisions of configure. DejaGnu must actively communicate with the target, rather than simply generating or managing code for the target architecture. Therefore, each tool requires an initialization module for each target. For new targets, you must supply a few Tcl procedures to adapt DejaGnu to the target. This permits DejaGnu itself to remain target independent.

Usually the best way to write a new initialization module is to edit an existing initialization module; some trial and error will be required. If necessary, you can use the --debug option to see what is really going on. When you code an initialization module, be generous in printing information controlled by the verbose procedure.
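For instance (the procedure name and messages here are hypothetical; verbose itself is the standard DejaGnu procedure, already seen in the global config file example), an initialization module might report its progress like this:

# hypothetical fragment of a tool initialization module
proc mytool_init { args } {
    verbose "mytool_init: setting up the connection to the target" 2
    # ... target-specific setup would go here ...
    verbose "mytool_init: target connection established"
}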
For cross targets, most of the work is in getting the communications right. Communications code (for several situations involving IP networks or serial lines) is available in a DejaGnu library file. If you suspect a communication problem, try running the connection interactively from Expect. (There are three ways of running Expect as an interactive interpreter. You can run Expect with no arguments and control it completely interactively; or you can use expect -i together with other command-line options and arguments; or you can run the command interpreter from any Expect procedure. Use return to get back to the calling procedure (if any), or return -tcl to make the calling procedure itself return to its caller; use exit or end-of-file to leave Expect altogether.) Run the program whose name is recorded in $connectmode, with the arguments in $targetname, to establish a connection. You should at least be able to get a prompt from any target that is physically connected.

Adding A New Board

Adding a new board consists of creating a new board config file. Examples are in dejagnu/baseboards. Usually, to make a new board file, it's easiest to copy an existing one. It is also possible to have your file be based on a baseboard file with only one or two changes needed. Typically, this can be as simple as just changing the linker script. Once the new baseboard file is done, add it to the boards_DATA list in dejagnu/baseboards/Makefile.am, and regenerate the Makefile.in using automake. Then just rebuild and install DejaGnu. You can test the new file as shown below.

There is a crude inheritance scheme going on with board files, so you can include one board file into another. The two main procedures used to do this are load_generic_config and load_base_board_description. The generic config file contains other procedures used for a certain class of target. The board description file is where the board-specific settings go. Commonly there are similar target environments with just different processors.

Testing a New Board Config File

make check RUNTESTFLAGS="--target_board=newboardfile"

Here's an example of a board config file. There are several helper procedures used in this example. A helper procedure is one that looks for a tool or for files in commonly installed locations. These are mostly used when testing in the build tree, because the executables to be tested are in the same tree as the new DejaGnu files. The helper procedures are the ones in square braces [], which are the Tcl execution characters.

Example Board Config File

# Load the generic configuration for this board. This will define a basic
# set of routines needed by the tool to communicate with the board.
load_generic_config "sim"

# basic-sim.exp is a basic description for the standard Cygnus simulator.
load_base_board_description "basic-sim"

# The compiler used to build for this board. This has *nothing* to do
# with what compiler is tested if we're testing gcc.
set_board_info compiler "[find_gcc]"

# We only support newlib on this target.
# However, we include libgloss so we can find the linker scripts.
set_board_info cflags "[newlib_include_flags] [libgloss_include_flags]"
set_board_info ldflags "[newlib_link_flags]"

# No linker script for this board.
set_board_info ldscript "-Tsim.ld";

# The simulator doesn't return exit statuses and we need to indicate this.
set_board_info needs_status_wrapper 1

# Can't pass arguments to this target.
set_board_info noargs 1

# No signals.
set_board_info gdb,nosignals 1

# And it can't call functions.
set_board_info gdb,cannot_call_functions 1

Board Config File Values

These fields are all stored in the board_info array. They are all set by using the set_board_info procedure. The parameters are the field name, followed by the value to set the field to.

Common Board Info Fields (each entry shows the field name, a sample value in parentheses, and a description):

compiler ("[find_gcc]"): The path to the compiler to use.
cflags ("-mca"): Compilation flags for the compiler.
ldflags ("[libgloss_link_flags] [newlib_link_flags]"): Linking flags for the compiler.
ldscript ("-Wl,-Tidt.ld"): The linker script to use when cross compiling.
libs ("-lgcc"): Any additional libraries to link in.
shell_prompt ("cygmon>"): The command prompt of the remote shell.
hex_startaddr ("0xa0020000"): The starting address as a string.
start_addr (0xa0008000): The starting address as a value.
startaddr ("a0020000")
exit_statuses_bad (1): Set if the board cannot return an accurate exit status.
reboot_delay (10): The delay between power off and power on.
unreliable (1): Whether communication with the board is unreliable.
sim ([find_sim]): The path to the simulator to use.
objcopy ($tempfil): The path to the objcopy program.
support_libs ("${prefix_dir}/i386-coff/"): Support libraries needed for cross compiling.
addl_link_flags ("-N"): Additional link flags, rarely used.
These fields are used by the GCC and GDB tests, and are mostly only useful to someone trying to debug a new board file for one of these tools. Many of them are used only by a few testcases, and their purpose is esoteric. They are listed with sample values as a guide to better guessing if you need to change any of them.

Board Info Fields For GCC & GDB (each entry shows the field name, a sample value in parentheses, and a description where the original table gives one):

strip ($tempfile): Strip the executable of symbols.
gdb_load_offset ("0x40050000")
gdb_protocol ("remote"): The GDB debugging protocol to use.
gdb_sect_offset ("0x41000000")
gdb_stub_ldscript ("-Wl,-Teva-stub.ld"): The linker script to use with a GDB stub.
gdb_init_command ("set mipsfpu none")
gdb,cannot_call_functions (1): Whether GDB can call functions on the target.
gdb,noargs (1): Whether the target can take command line arguments.
gdb,nosignals (1): Whether there are signals on the target.
gdb,short_int (1)
gdb,start_symbol ("_start"): The starting symbol in the executable.
gdb,target_sim_options ("-sparclite"): Special options to pass to the simulator.
gdb,timeout (540): Timeout value to use for remote communication.
gdb_init_command ("print/x \$fsr = 0x0")
gdb_load_offset ("0x12020000")
gdb_opts ("--command gdbinit")
gdb_prompt ("\\(gdb960\\)"): The prompt GDB is using.
gdb_run_command ("jump start")
gdb_stub_offset ("0x12010000")
use_gdb_stub (1): Whether to use a GDB stub.
use_vma_offset (1)
wrap_m68k_aout (1)
gcc,no_label_values (1)
gcc,no_trampolines (1)
gcc,no_varargs (1)
gcc,stack_size (16384): Stack size to use with some GCC testcases.
ieee_multilib_flags ("-mieee")
is_simulator (1)
needs_status_wrapper (1)
no_double (1)
no_long_long (1)
noargs (1)
nullstone,lib ("mips-clock.c")
nullstone,ticks_per_sec (3782018)
sys_speed_value (200)
target_install ({sh-hms})
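As an illustration only (the combination of settings is hypothetical; the field names and sample values are taken from the tables above), a board description for a GDB-debuggable target might add entries such as:

# hypothetical additions to a board description file
set_board_info gdb_protocol  "remote"
set_board_info gdb,timeout   540
set_board_info use_gdb_stub  1
set_board_info gdb,noargs    1
set_board_info gdb,nosignals 1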