Last episode, a question about the (value + 1) & -1 expression confused me, and I turned to my debugger for comfort, but without success.
Why does ((value + 15) & -15) align the value to a 16-byte boundary? I thought Casey used this macro in HH and mentioned it last night, or am I wrong?
But -15 in hexadecimal is 0xFFFFFFFFFFFFFFF1, and I'd think it should be 0xFF...0 to work properly.
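To double-check that, here is a tiny C program I wrote myself (not anything from the stream) that prints the masks as 64-bit hex:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    /* Two's complement: -x == ~x + 1, so -15 ends in binary 0001, */
    /* while ~15 and -16 both end in binary 0000. */
    printf("-15 = 0x%016" PRIX64 "\n", (uint64_t)(int64_t)-15); /* 0xFFFFFFFFFFFFFFF1 */
    printf("~15 = 0x%016" PRIX64 "\n", (uint64_t)~(int64_t)15); /* 0xFFFFFFFFFFFFFFF0 */
    printf("-16 = 0x%016" PRIX64 "\n", (uint64_t)(int64_t)-16); /* 0xFFFFFFFFFFFFFFF0 */
    return 0;
}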
The results I get for the following values are not aligned to 16:
((0 + 15) & -15) = 1
((16 + 15) & -15) = 17
((14 + 15) & -15) = 17
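For anyone who wants to reproduce those numbers, here is a minimal standalone sketch of the test I ran (the values array just holds the three inputs above):

#include <stdio.h>

int main(void)
{
    int values[] = { 0, 16, 14 };
    for (int i = 0; i < 3; ++i)
    {
        int v = values[i];
        /* The -15 mask ends in binary 0001, so bit 0 can survive the AND */
        /* and the result is not forced onto a multiple of 16. */
        printf("((%d + 15) & -15) = %d\n", v, (v + 15) & -15);
    }
    return 0;
}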
Obviously I'm doing something wrong, or I don't remember the macro correctly. What's wrong with my reasoning?