| author | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-11-06 21:22:56 +0100 |
|---|---|---|
| committer | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-11-22 15:30:16 +0100 |
| commit | a1c7b6f159cfd5137676c6730b3d14ddd411dc57 (patch) | |
| tree | d4729ea0b123b437129d9fa1b2a82270be035f50 /validation/linear/pointer-arith64.c | |
| parent | eda5c718f55ac471d456752ff7138f1249289dc7 (diff) | |
| download | sparse-dev-a1c7b6f159cfd5137676c6730b3d14ddd411dc57.tar.gz | |
canon: put PSEUDO_ARGs in canonical order too
Currently, only binops containing a PSEUDO_VAL or a PSEUDO_SYM are
put in canonical order. This means that binops containing only
PSEUDO_ARGs or PSEUDO_REGs are not ordered. This is not directly
a problem for CSE, because commutativity is taken into account, but:
* more combinations need to be checked during simplification
* 'anti-commutative' operations like (a > b) & (b < a) are not
recognized as such.
So, as a first step, also take PSEUDO_ARGs into account when checking
whether operands are in canonical order.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
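
For illustration, here is a minimal sketch of what such an ordering check can look like. This is not the actual sparse patch: the rank values and the helper names (`kind_rank`, `in_canonical_order`) are assumptions, chosen only so that "more constant" operand kinds sort to the right, which is consistent with the operand swaps in the diff below (each `%argN` moves after the `%rN` register).

```c
/* A minimal sketch, NOT the actual sparse patch: the ranks and helper
 * names are illustrative assumptions.  Every pseudo kind gets a rank,
 * "more constant" kinds rank higher, and an operand pair counts as
 * canonical when the ranks are non-decreasing.  Ranking PSEUDO_ARG
 * above PSEUDO_REG means an (%arg, %reg) pair gets swapped to
 * (%reg, %arg), matching the test diff below. */
#include <stdio.h>

enum pseudo_kind {
	PSEUDO_REG,	/* SSA register, e.g. %r5        */
	PSEUDO_ARG,	/* function argument, e.g. %arg1 */
	PSEUDO_SYM,	/* address of a symbol           */
	PSEUDO_VAL,	/* integer constant, e.g. $4     */
};

static int kind_rank(enum pseudo_kind kind)
{
	switch (kind) {
	case PSEUDO_VAL: return 3;	/* constants always rightmost */
	case PSEUDO_SYM: return 2;
	case PSEUDO_ARG: return 1;	/* newly taken into account   */
	default:         return 0;	/* PSEUDO_REG and the rest    */
	}
}

/* True when the operand pair (src1, src2) is already canonical.
 * A real implementation would also need a tie-break between two
 * PSEUDO_ARGs (e.g. by argument number); omitted here. */
static int in_canonical_order(enum pseudo_kind src1, enum pseudo_kind src2)
{
	return kind_rank(src1) <= kind_rank(src2);
}

int main(void)
{
	/* (%arg1, %r2) is not canonical: a commutative add gets swapped. */
	printf("%d\n", in_canonical_order(PSEUDO_ARG, PSEUDO_REG));	/* 0 */
	printf("%d\n", in_canonical_order(PSEUDO_REG, PSEUDO_ARG));	/* 1 */
	return 0;
}
```

Once both operands of a comparison are ordered the same way, `b < a` can be rewritten as `a > b` by also reversing the opcode, so the two halves of `(a > b) & (b < a)` become structurally identical and CSE can fold them.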
Diffstat (limited to 'validation/linear/pointer-arith64.c')
| -rw-r--r-- | validation/linear/pointer-arith64.c | 10 |
1 file changed, 5 insertions, 5 deletions
```diff
diff --git a/validation/linear/pointer-arith64.c b/validation/linear/pointer-arith64.c
index dad10331..7f1aac56 100644
--- a/validation/linear/pointer-arith64.c
+++ b/validation/linear/pointer-arith64.c
@@ -35,7 +35,7 @@ cps:
 .L0:
 	<entry-point>
 	sext.64     %r2 <- (16) %arg2
-	add.64      %r5 <- %arg1, %r2
+	add.64      %r5 <- %r2, %arg1
 	ret.64      %r5
@@ -44,7 +44,7 @@ ipss:
 	<entry-point>
 	sext.64     %r10 <- (16) %arg2
 	mul.64      %r11 <- %r10, $4
-	add.64      %r14 <- %arg1, %r11
+	add.64      %r14 <- %r11, %arg1
 	ret.64      %r14
@@ -53,7 +53,7 @@ ipus:
 	<entry-point>
 	zext.64     %r19 <- (16) %arg2
 	mul.64      %r20 <- %r19, $4
-	add.64      %r23 <- %arg1, %r20
+	add.64      %r23 <- %r20, %arg1
 	ret.64      %r23
@@ -62,7 +62,7 @@ ipsi:
 	<entry-point>
 	sext.64     %r28 <- (32) %arg2
 	mul.64      %r29 <- %r28, $4
-	add.64      %r32 <- %arg1, %r29
+	add.64      %r32 <- %r29, %arg1
 	ret.64      %r32
@@ -71,7 +71,7 @@ ipui:
 	<entry-point>
 	zext.64     %r37 <- (32) %arg2
 	mul.64      %r38 <- %r37, $4
-	add.64      %r41 <- %arg1, %r38
+	add.64      %r41 <- %r38, %arg1
 	ret.64      %r41
```
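
The test source itself is not shown on this page, but its shape can be inferred from the IR; the snippet below is a reconstruction, not the verbatim `validation/linear/pointer-arith64.c`. Assuming a 64-bit target with 32-bit `int`, pointer arithmetic on an `int *` lowers to a sign- or zero-extension of the index, a multiplication by `$4` (the element size), and the `add` whose operand order this patch canonicalizes:

```c
/* Reconstructed from the IR above; not the verbatim test file. */
static char *cps(char *p, short o)          { return p + o; }	/* sext.64, add.64           */
static int  *ipss(int *p, short o)          { return p + o; }	/* sext.64, mul.64 $4, add   */
static int  *ipus(int *p, unsigned short o) { return p + o; }	/* zext.64, mul.64 $4, add   */
static int  *ipsi(int *p, int o)            { return p + o; }	/* sext.64, mul.64 $4, add   */
static int  *ipui(int *p, unsigned int o)   { return p + o; }	/* zext.64, mul.64 $4, add   */
```

In every function the IR previously emitted `add.64 %rN <- %arg1, %rM`; with PSEUDO_ARGs now participating in the canonical order, the register operand comes first, hence exactly one swapped line per function and the matching 5 insertions / 5 deletions in the diffstat.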
