| author | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-06-20 22:49:13 +0200 |
|---|---|---|
| committer | Luc Van Oostenryck <luc.vanoostenryck@gmail.com> | 2020-08-06 18:30:41 +0200 |
| commit | a9e0d28f76699e6fc2c1795801cad75ece790841 (patch) | |
| tree | 4e97da95b060f2032451ef75eb6ef5de09c36f16 /validation/linear/shift-assign2.c | |
| parent | 4c6cbe557c48205f9b3d2aae4c166cd66446b240 (diff) | |
| download | sparse-dev-a9e0d28f76699e6fc2c1795801cad75ece790841.tar.gz | |
shift-assign: add more testcases for bogus linearization
The usual arithmetic conversions must not be applied to shifts, but this is
mishandled for shift-assigns and results in bogus linearization.
So, add testcases for all combinations of operand size and signedness.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Diffstat (limited to 'validation/linear/shift-assign2.c')
| -rw-r--r-- | validation/linear/shift-assign2.c | 54 |
1 file changed, 54 insertions, 0 deletions
```diff
diff --git a/validation/linear/shift-assign2.c b/validation/linear/shift-assign2.c
new file mode 100644
index 00000000..30d74376
--- /dev/null
+++ b/validation/linear/shift-assign2.c
@@ -0,0 +1,54 @@
+typedef __INT16_TYPE__ s16;
+typedef __INT32_TYPE__ s32;
+typedef __INT64_TYPE__ s64;
+typedef __UINT16_TYPE__ u16;
+typedef __UINT32_TYPE__ u32;
+typedef __UINT64_TYPE__ u64;
+
+s64 s64s16(s64 a, s16 b) { a >>= b; return a; }
+s64 s64s32(s64 a, s32 b) { a >>= b; return a; }
+u64 u64s16(u64 a, s16 b) { a >>= b; return a; }
+u64 u64s32(u64 a, s32 b) { a >>= b; return a; }
+
+/*
+ * check-name: shift-assign2
+ * check-command: test-linearize -Wno-decl $file
+ * check-known-to-fail
+ *
+ * check-output-start
+s64s16:
+.L0:
+	<entry-point>
+	sext.32 %r2 <- (16) %arg2
+	zext.64 %r3 <- (32) %r2
+	asr.64 %r5 <- %arg1, %r3
+	ret.64 %r5
+
+
+s64s32:
+.L2:
+	<entry-point>
+	zext.64 %r9 <- (32) %arg2
+	asr.64 %r11 <- %arg1, %r9
+	ret.64 %r11
+
+
+u64s16:
+.L4:
+	<entry-point>
+	sext.32 %r15 <- (16) %arg2
+	zext.64 %r16 <- (32) %r15
+	lsr.64 %r18 <- %arg1, %r16
+	ret.64 %r18
+
+
+u64s32:
+.L6:
+	<entry-point>
+	zext.64 %r22 <- (32) %arg2
+	lsr.64 %r24 <- %arg1, %r22
+	ret.64 %r24
+
+
+ * check-output-end
+ */
```
