On 20/03/2008, Andreas Raab andreas.raab@gmx.de wrote:
Hi -
I just noticed that Interpreter>>signed32BitValueOf: and signed64BitValueOf: are broken for edge cases. The following examples illustrate the problem:
array := IntegerArray new: 1.
array at: 1 put: 16rFFFFFFFF. "should fail but doesn't"
array at: 1. "answers -1 incorrectly"

array := IntegerArray new: 1.
array at: 1 put: -16rFFFFFFFF. "should fail but doesn't"
array at: 1. "answers 1 incorrectly"
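A minimal C sketch (illustrative only, not the VM's actual code) of why the first case reads back as -1: the stored 32-bit magnitude 16rFFFFFFFF is later reinterpreted as a two's-complement signed value, and that bit pattern is -1.

```c
#include <stdint.h>

/* Sketch: reading a 32-bit slot back as signed reinterprets the
   stored bit pattern as two's complement. A magnitude of 0xFFFFFFFF,
   accepted without a range check, therefore comes back as -1.
   (The conversion below is implementation-defined in C, but yields
   -1 on all two's-complement machines.) */
int32_t read_slot_as_signed(uint32_t slot)
{
    return (int32_t)slot;
}
```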
The problem is that both signed32BitValueOf: and signed64BitValueOf: fail to test whether the high bit of the magnitude is set (which it must not be if the value is to fit into a signed integer). The fix is trivial in both cases - basically all that's needed at the end of both functions is this:
"Filter out values out of range for the signed interpretation, such as 16rFFFFFFFF (positive w/ bit 32 set) and -16rFFFFFFFF (negative w/ bit 32 set). Since the sign is implicit in the class we require that the high bit of the magnitude is not set, which is a simple test here"
value < 0 ifTrue: [^self primitiveFail].
negative
    ifTrue: [^0 - value]
    ifFalse: [^value]
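For illustration, here is a hedged C sketch of the same check (function and parameter names are mine, not the VM's): a 32-bit magnitude with its high bit set cannot be represented as a signed 32-bit value, so it is rejected; otherwise the sign carried implicitly by the class is applied.

```c
#include <stdint.h>

/* Sketch of the proposed range check, translated from the Smalltalk
   above. Returns 1 on success, storing the result in *out; returns 0
   to signal the equivalent of primitiveFail. */
int signed32_of(uint32_t magnitude, int negative, int32_t *out)
{
    /* High bit of the magnitude set => out of range for the signed
       interpretation (equivalent to the Smalltalk test "value < 0",
       where value holds the reinterpreted magnitude). */
    if (magnitude > 0x7FFFFFFFu)
        return 0;                       /* primitiveFail */
    *out = negative ? -(int32_t)magnitude : (int32_t)magnitude;
    return 1;
}
```

Note that, like the Smalltalk above, this also rejects -16r80000000, the one negative value whose magnitude has bit 32 set even though the value itself fits in a signed 32-bit integer; requiring the high bit of the magnitude to be clear is the simple, conservative test.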
C's weak typing is always a source of uncertainty for me. What I fear is that if you change the implementation of #signed32BitValueOf:, it could break more things than it fixes, for exactly the same reasons.
Cheers,
- Andreas