authorLuc Van Oostenryck <luc.vanoostenryck@gmail.com>2018-06-18 10:15:58 +0200
committerLuc Van Oostenryck <luc.vanoostenryck@gmail.com>2018-06-23 16:54:50 +0200
commit4d851248702bebe6c8ecdd1cef54e7782c72b8a2 (patch)
tree0e0fe54d70765a08e945df62b8861314d2ed7af3 /validation/optim
parentc64d1972a5b775b9d7169dc8db96ee0556af7b26 (diff)
downloadsparse-dev-4d851248702bebe6c8ecdd1cef54e7782c72b8a2.tar.gz
cast: keep instruction sizes consistent
The last instruction of linearize_load_gen() ensures that loading a bitfield of size N results in an object of size N. Also, the usual binops & unops are required to use the same type for their operands and result. This means that before anything can be done with the loaded bitfield, it must first be sign- or zero-extended to match the other operand's size.

The same situation exists when storing a bitfield, but there the extension isn't done. We can thus end up with some weird code like:
	trunc.9    %r2 <- (32) %r1
	shl.32     %r3 <- %r2, ...
where a bitfield of size 9 is mixed with a 32-bit shift.

Avoid such mixing of sizes and always zero-extend the bitfield before storing it (since this was the implicitly desired semantic). The combination TRUNC + ZEXT can then be optimized later into a simple masking operation.

Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Diffstat (limited to 'validation/optim')
0 files changed, 0 insertions, 0 deletions