Sparse's development tree with unstable git history
http://git.kernel.org/pub/scm/devel/sparse/sparse-dev.git/atom?h=master
Feed updated: 2021-04-17T13:14:26+00:00

2021-04-17T13:14:26+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2021-04-17T13:14:26+00:00 | urn:sha1:bb4239aafe31493a395d7777de5b3c33ea06a98d
* add a symbolic checker

2021-04-13T16:25:43+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2021-04-08T22:07:40+00:00 | urn:sha1:7a9fab6984f7bce3fd52809ecf6e21b79f115be3
Some instruction simplifications can be quite tricky and thus easy to
get wrong. Often, they are also hard to test (for example, you can
test them with a few input values but of course not all combinations).
I'm used to validating some of these with an independent tool
(Alive, cf. [1], [2]) which is quite neat but has some issues:
1) This tool doesn't work with Sparse's IR or C source; it needs the
   tests written in its own language (very close to LLVM's IR).
   So it can be used to test whether the logic of a simplification is
   right but not whether it is implemented correctly.
2) This tool isn't maintained anymore (and has some bugs too) and its
   successor (Alive2 [3]) is not quite as handy to me (I miss the
   pre-conditions a lot).
So, this patch implements the same idea but fully integrated with
Sparse. This means that you can write a test in C, let Sparse process
and simplify it, and then directly validate it, not only for
a few values but symbolically, for all possible values.
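For illustration, here is the kind of property such a test could express
(a hypothetical example, not taken from the patch): the solver would prove
that the function returns 1 for every possible 32-bit value of x, i.e.,
that canonicalizing 'x * 4' into a shift is sound, something a handful of
sampled inputs can never establish:

	int shift_vs_mul(unsigned int x)
	{
		// always 1: shifting left by 2 and multiplying by 4
		// are the same operation on 32-bit values
		return (x << 2) == (x * 4);
	}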
Note: Of course, this is not totally stand-alone and depends on
an external library for the solver (Boolector, see [4], [5]).
Note: Currently, it's just a proof of concept and, apart from the
included tests, it has only been very lightly tested (and is untested
with anything more complex than a few instructions).
[1] https://blog.regehr.org/archives/1170
[2] https://www.cs.utah.edu/~regehr/papers/pldi15.pdf
[3] https://blog.regehr.org/archives/1722
[4] https://boolector.github.io/
[5] https://boolector.github.io/papers/BrummayerBiere-TACAS09.pdf
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2021-03-21T12:45:39+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2021-03-21T12:39:27+00:00 | urn:sha1:8862279754b9c6edf8e3170d30acd54410f6c760
Because laziness is a virtue, add an option '-r' to the 'format'
subcommand of the testsuite to quickly create a test template for
linearized code that should just return 1.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2020-12-11T13:13:38+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-12-11T00:26:47+00:00 | urn:sha1:6c10d780aaea82e888e28dbed259c49de9c15ee2
In testcase tags, if a value contains 'check-', then that value is
wrongly picked up as the tag name instead of being kept as the value.
Fix this by adding a bit more context to the regexp used for parsing
these tags.
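For illustration, a hypothetical tag whose value would confuse a too-greedy
regexp (the filename is made up): the value itself contains 'check-', so a
pattern matching 'check-[a-z-]*' anywhere on the line could pick up the
value instead of the real tag name:

	/*
	 * check-command: sparse -E check-macros.c
	 */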
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2020-10-31T23:55:44+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-10-30T17:51:53+00:00 | urn:sha1:dc27965726cbc506250ccc12c5169be048a01082
The current tags check-output-contains/excludes/pattern are
quite powerful and the new check-output-match is easy to use,
but things can be even simpler. Indeed, a lot of IR simplifications/
canonicalizations can be tested by checking that the expression
under test is equivalent to another one. This is less precise
than some more specific tests but:
* it's a big advantage because it's less sensitive to 'noise'
  like the exact numbers used by the pseudos or to the results
  of some new simplifications or canonicalizations
* very often, this equivalence is what really matters, not
  the exact transformation.
So, add a new ultra-simple-to-use tag: just ask that all functions
of the test return the same specified value (usually 1):
	check-output-returns: <value>
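For illustration, a minimal sketch of a testcase using the new tag (the
function body and the check-command line are illustrative, not taken from
this patch); after simplification every function is expected to just
return 1:

	int xor_self(int x)
	{
		// x ^ x simplifies to 0, so the whole expression folds to 1
		return (x ^ x) + 1;
	}

	/*
	 * check-name: xor-self
	 * check-command: test-linearize $file
	 *
	 * check-output-ignore
	 * check-output-returns: 1
	 */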
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2020-10-31T23:55:09+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-10-30T17:51:53+00:00 | urn:sha1:74ba40fadc42f7b3b2222cc231d2c38959943643
The current tags check-output-contains/excludes/pattern are
quite powerful and universal, but they often need 'complex' regular
expressions with escaping, which makes them not so nice to read.
For testing IR results, a very common pattern is:
	this instruction must have this (kind of) operand.
So, make a new tag for this. It does nothing that can't be done
with the previous ones, on the contrary, but it is much
simpler to use:
	check-output-match(instruction): operand
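For illustration, a minimal sketch of how a testcase could use it (the
function, the instruction name 'shl.32' and the operand '$3' are
illustrative assumptions, not taken from this patch): it checks that the
multiplication by 8 was canonicalized into a left shift by 3:

	int mul8(int x)
	{
		return x * 8;
	}

	/*
	 * check-name: mul-to-shift
	 * check-command: test-linearize $file
	 *
	 * check-output-ignore
	 * check-output-match(shl.32): $3
	 */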
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2020-07-04T10:16:59+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-06-22T23:48:48+00:00 | urn:sha1:709595b2979d7fcbc94b57236a8dca1a5d2dd4cb
This flag facilitates the creation of testcases for preprocessing.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2020-06-22T23:34:11+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-06-22T23:17:57+00:00 | urn:sha1:fa15396204a796135f71b5aef6cbbe3ba1fc0eb3
The subcommand 'format help' is broken because of the way
arguments are parsed without validating the number of arguments.
Fix this by parsing all arguments (even if there is only one)
and validating the number of arguments at the end of the loop.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2020-06-16T21:10:17+00:00 | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-06-14T23:39:44+00:00 | urn:sha1:9cd32cbde900238795ad80c5e3c0adb5975c8f2e
For some testcases, the testsuite uses the command 'timeout'
to ensure that the test finishes after a reasonable amount of
time. This is mainly used for some testcases which, in the past,
were stuck in an infinite loop. Here, the command 'timeout' is
used with an extra option (-k 1s) to issue a second kill signal
in case the first one is ignored.
However, this extra option is not supported by all implementations
(Alpine's, for example) and its use seems a bit paranoid for sparse.
So, remove this extra option.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>

2019-02-07T20:56:50+00:00 | Uwe Kleine-König <uwe@kleine-koenig.org> | 2019-01-31T08:49:59+00:00 | urn:sha1:4ba10f29f54214cfa39c5fc324ca45df55c4fecd
This simplifies finding the offending test when the build ended with:
	KO: out of 584 tests, 527 passed, 57 failed
	    56 of them are known to fail
Signed-off-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>