Contents
- Basic steps
- Increase the verbosity
- Testing more extensively
- Testing multilib ABIs
- Testing a different target from the current host
- Testing with a simulator
- Interpretation of testsuite results
- Documentation on writing testcases
- Running the testsuite on Cygwin
- Compile time and memory utilization testing
GCC has its own testsuite that can be run after you have compiled GCC. The official documentation is available here: https://gcc.gnu.org/install/test.html
See also DavidMalcolm's GCC Newbies Guide which has a Working with the testsuite section.
Basic steps
Install the prerequisites: DejaGnu, Tcl, and Expect. They are usually available in all GNU/Linux distributions.
Run the testsuite
cd objdir
make -j$(nproc) -k check
Analyze the results. Tests may already fail or be unsupported without your patch; you should only worry about new failures introduced by your patch. The simplest approach is to keep two build directories: one with the testsuite results of a pristine copy of GCC (objdir_pristine) and another with the testsuite results of your patched copy (objdir_patched). Then call:
gcc_src/contrib/compare_tests objdir_pristine objdir_patched
New failures need to be investigated and fixed. If you added new tests, make sure that they were run and passed. You can find detailed logs in the objdir_patched/*/testsuite/*/*.log files (for example, gcc/testsuite/gcc/gcc.log).
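The two-tree workflow above can be sketched end to end. Directory names and the configure option are illustrative; adapt them to your setup:

```shell
# Build and test a pristine tree (--disable-bootstrap speeds this up
# but may not suit every change; adjust as needed).
mkdir -p objdir_pristine && cd objdir_pristine
../gcc_src/configure --disable-bootstrap
make -j$(nproc) && make -j$(nproc) -k check
cd ..

# Apply your patch to gcc_src, then build and test a second tree.
mkdir -p objdir_patched && cd objdir_patched
../gcc_src/configure --disable-bootstrap
make -j$(nproc) && make -j$(nproc) -k check
cd ..

# Compare the two sets of *.sum files.
gcc_src/contrib/compare_tests objdir_pristine objdir_patched
```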
Increase the verbosity
Because the testsuite tends to be long-running, you may wish to increase the verbosity level so that you get feedback on progress. You can do this by adding "-v" to RUNTESTFLAGS. Each "-v" added increases the verbosity level by one, so specify it multiple times for more output. Example:
make -k check-gcc RUNTESTFLAGS="-v -v"
This can be combined with the other RUNTESTFLAGS options mentioned in the official documentation. Example:
make -k check-gcc RUNTESTFLAGS="compile.exp=2004* -v -v"
Testing more extensively
C++ frontend developers should set the GXX_TESTSUITE_STDS environment variable (a comma-separated list) to test beyond the default C++ -std= values, e.g. GXX_TESTSUITE_STDS=98,11,14,17,20,23,26. The same can be achieved with the check-c++-all make target.
To enable expensive runtime tests, set GCC_TEST_RUN_EXPENSIVE=1.
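Both knobs are environment variables and can be set on the make command line; a short sketch (the list of standards is just an example):

```shell
# Run the C++ testsuite against an explicit list of -std= versions.
make -k check-c++ GXX_TESTSUITE_STDS=98,11,14,17,20,23,26

# Include runtime tests that are normally skipped as too expensive.
make -k check GCC_TEST_RUN_EXPENSIVE=1
```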
Testing multilib ABIs
To run all tests for both the default ABI as well as -m32, pass --target_board:
make -k check RUNTESTFLAGS="--target_board=unix\{,-m32\}"
Adapt the example as necessary depending on the multilib ABIs configured and available for your target.
Testing a different target from the current host
In some circumstances, it is possible to run tests locally for a different target than the current host (for example, darwin8 target tests on a darwin9 host system). In order to achieve this, you might also need to provide a sysroot (to point at the libraries and headers for the target).
This can be accomplished by a command like this:
make -k check-gcc RUNTESTFLAGS="CFLAGS_FOR_TARGET=--sysroot=/path/to/target/root --target_board=unix/-other/-options"
You might wish to use:
CFLAGS_FOR_TARGET='$CFLAGS_FOR_TARGET --sysroot=/path/to/target/root'
if CFLAGS_FOR_TARGET is already set for your test case(s).
Testing with a simulator
For running the testsuite on a simulator (useful for cross targets) see: https://gcc.gnu.org/simtest-howto.html
and a similar page (with some useful links): Building_Cross_Toolchains_with_gcc
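In outline, a simulator run boils down to selecting the matching DejaGnu board file via --target_board. The board name below (arm-sim) is only an example and must match your cross target and simulator setup:

```shell
# Run the testsuite through a simulator by naming its DejaGnu board file.
# "arm-sim" is an example; use the board that matches your target.
make -k check RUNTESTFLAGS="--target_board=arm-sim"
```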
Interpretation of testsuite results
Normal testsuite results usually contain a few FAILs -- unexpected failures. Thus it might be hard to determine if the changes being tested actually broke something. There are several ways to deal with this situation:
- The most common approach is to compare two testsuite runs, one with the tested changes and one without. diff the two result trees to make sure that you test only the differences you intend to test.
- There are several scripts in contrib/ that can be used to obtain condensed test-result differences, for example dg-cmp-results.sh:
/src/gcc/contrib/dg-cmp-results.sh -v -v '*' ../gcc.orig/gcc/testsuite/gcc/gcc.sum gcc/testsuite/gcc/gcc.sum
or something like
for i in $(find gcc/ -name "*.sum");do ../../src/gcc/contrib/dg-cmp-results.sh -v -v '' ../gcc.orig/$i $i;done
- Look in the gcc-testresults mailing-list archive (https://gcc.gnu.org/pipermail/gcc-testresults/) for mailed results for your platform that might serve as a baseline.
- If you are developing a new port, aim for zero (or as few as possible and practical) FAILs.
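If you prefer to compare runs by hand rather than with the contrib/ scripts, extracting the interesting result lines from the .sum files also works (the paths below are examples):

```shell
# Collect the noteworthy results from each run's .sum file...
grep -E '^(FAIL|XPASS|UNRESOLVED)' objdir_pristine/gcc/testsuite/gcc/gcc.sum | sort > before.txt
grep -E '^(FAIL|XPASS|UNRESOLVED)' objdir_patched/gcc/testsuite/gcc/gcc.sum | sort > after.txt

# ...and diff them; lines prefixed with "+" are new failures.
diff -u before.txt after.txt
```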
Using validate_failures.py
The script <src>/contrib/testsuite-management/validate_failures.py can be used to maintain a list of known/expected failures outside of DejaGnu. This is useful when working on a branch with relatively stable failures that you have determined to be "ignorable".
Since modifying DejaGnu files to mark XFAILs is not always trivial, validate_failures.py offers a lightweight approach that can support lists for multiple targets.
The idea is to create a manifest file that contains the FAIL, XPASS, UNRESOLVED output from make check. You cut and paste that output into the manifest file and then use validate_failures.py to decide whether the failures are ignorable or not.
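A manifest is just plain text holding those pasted result lines; a hypothetical example (the test names are made up):

```
FAIL: gcc.dg/example-1.c (test for excess errors)
XPASS: gcc.dg/example-2.c execution test
UNRESOLVED: gcc.dg/example-3.c compilation failed
```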
$ <src>/contrib/testsuite-management/validate_failures.py --help
Usage: This script provides a coarser XFAILing mechanism that requires no
detailed DejaGNU markings. This is useful in a variety of scenarios:
- Development branches with many known failures waiting to be fixed.
- Release branches with known failures that are not considered
important for the particular release criteria used in that branch.
The script must be executed from the toplevel build directory. When
executed it will:
1- Determine the target built: TARGET
2- Determine the source directory: SRCDIR
3- Look for a failure manifest file in
<SRCDIR>/contrib/testsuite-management/<TARGET>.xfail
4- Collect all the <tool>.sum files from the build tree.
5- Produce a report stating:
a- Failures expected in the manifest but not present in the build.
b- Failures in the build not expected in the manifest.
6- If all the build failures are expected in the manifest, it exits
with exit code 0. Otherwise, it exits with error code 1.
Options:
-h, --help show this help message and exit
--build_dir=BUILD_DIR
Build directory to check (default = .)
--manifest Produce the manifest for the current build (default =
False)
--force When used with --manifest, it will overwrite an
existing manifest file (default = False)
--verbosity=VERBOSITY
Verbosity level (default = 0)
Example
First, create a baseline:
make -k -j$(nproc) check
Instruct validate_failures.py to create a manifest (per the help above, --manifest is a flag; the file is written to <SRCDIR>/contrib/testsuite-management/<TARGET>.xfail, and --force overwrites an existing one):
<src>/contrib/testsuite-management/validate_failures.py --build_dir=/path/to/build --manifest
Then make some changes to GCC:
$EDITOR ...
Then run the testsuite again:
make -k -j$(nproc) check
Finally, ask validate_failures.py to compare the new results against the manifest:
<src>/contrib/testsuite-management/validate_failures.py --build_dir=/path/to/build
Documentation on writing testcases
Running the testsuite on Cygwin
The testsuite runs on Cygwin, albeit slowly, and it is likely to hit a bug in Cygwin's process-information management. To avoid that, anyone who wants to test on Cygwin is advised to build the Cygwin DLL manually with this patch applied.
Compile time and memory utilization testing
This is not necessary in most cases, but it can be crucial if your changes may have a significant impact on compile time or memory use. In any case, the more testing, the better: Compile time and memory utilization testing