Compilers, Integers and Optimizations

I've had a good bit of fun (for some value of fun) hardening SafeInt against what I consider to be some nasty compiler tricks. The problem is that as soon as the compiler hits something that's technically undefined by the C++ standard, it's allowed to do anything it wants, short of uploading your source to someone else's server. The first round of this was something I wrote about in Evil compiler tricks ...: a check that validated pointer math by adding the offset to the pointer and testing whether the result got smaller is foiled, because pointer addition isn't defined outside of the buffer, and wrapping around is certainly outside of the buffer.

The problem is solved by casting the pointers to size_t, which forces unsigned addition, and wrap-around is defined for unsigned types. The next round is that some compilers (gcc and Intel are two of the most notable) get really aggressive about some of this, and will start tossing out things like:

if( x + y < x )

return false;

 

What's truly obnoxious about this is that it really isn't undefined. A compiler _has_ to know what architecture it is generating code for, and at the assembly level, this is quite well defined. Let's look at why the standard thinks that this isn't defined – the problem is that there are several ways to represent negative integer numbers – if you're interested in a refresher on this, a good write up is Signed number representations. Just about every processor that's in common, current use uses two's complement. The standard basically has two options – either define a signed addition rollover behavior for every signed number representation that's allowed, or take the easy way out and punt it as undefined. Unfortunately, even the most recent standard takes the easy way out.

But is it really undefined? I don't think so – let's look at what the compiler does with both signed and unsigned addition on x86:

Here's signed:

add eax,dword ptr [argc]

 

And here's what happens when we change it to unsigned:

add eax,dword ptr [argc]

 

See the difference? Nope, I don't see it, either. Looks like the same thing. A compiler absolutely has to know that on this architecture, signed and unsigned addition are really the same thing at processor level, and the only difference is how you interpret the result. This makes for a handy work-around – if you do have an overly ambitious compiler, or as my friend Mike puts it, an "impractical compiler", and it is tossing out your int overflow checks, what you can do is cast to unsigned. While the compiler may toss out our example above, it won't toss out:

if( (unsigned int)x + (unsigned int)y < (unsigned int)x )

return false;
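Wrapped into a helper for the unsigned case (pointer offsets, buffer sizes), the work-around looks like this; the function name is mine, and this is a sketch rather than SafeInt's actual code:

```cpp
// Sketch only: the wrap-around check done in unsigned arithmetic,
// where overflow behavior is defined by the standard, so a
// conforming compiler cannot prove the test always false and
// discard it.
bool AddWraps( unsigned int x, unsigned int y )
{
    return x + y < x;   // true iff the addition wrapped past UINT_MAX
}
```

Because both operands are unsigned, the comparison is against well-defined modular arithmetic, and the optimizer has no license to throw the check away.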

 

Thanks to Jeffrey Walton of the OWASP project helping me flesh out the SafeInt test harness, we were able to catch several places where the gcc compiler was throwing out validation code; the Intel compiler was even more aggressive about it. The fixes were all along the same lines as the above. One thing I'd like to fess up to: a co-worker, Ahmed Charles, warned me about this a while ago. I argued with him, he was right, and I should have listened.

Just when we thought we'd gotten it all taken care of, along comes John Regehr and his grad students with Overflows In SafeInt. Yikes! Not what I want to wake up to. So what's the problem? As it turns out, unary negation of min int is also undefined by the standard. This is yet another example of a place where it isn't really undefined – it's (~x + 1), which happens to return the same result.

How does this get to be a problem? If the impractical compiler doesn't know at compile time what the number is, then it just has to emit the same assembly, and if something hands the function min int, it does what we expect at the processor level – the right thing may or may not happen, depending on whether you expect -MIN_INT == MIN_INT. Ah, but what John's team did was to hand SafeInt a bunch of cases where one of the values was a compile-time constant of MIN_INT, and then we had various malfunctions when compiling with impractical compilers. So yes, this is a bug, but it's a very small corner case.

I'm also going to argue strongly that any compiler that does such a thing is doing something very dangerous. If you have an inline function, and you pass it a value known at runtime, then it emits a certain set of assembly, and if you pass it something known at compile time, then it behaves differently (as in functionally differently – you're welcome to optimize to make it do the same thing faster). This is an absolutely awful thing to do to a dev – hand them something that will only repro in a ship build, and which can only be debugged in assembly. If you're trying to make things really difficult for a developer, that's a great approach.

Now let's look at what we do about it – if we can't count on unary negation to function reliably, what's the solution? If we drop back to how we negate things using two's complement manually, we have:

(~x + 1)

But wait – the '~' operator will emit a signed int (for the types we're interested in – unary negation of unsigned numbers is usually a bug). I just explained above that signed addition wrap-around is undefined, and that nasty, impractical, fiendish compiler already knows that x == MIN_INT, and they'll toss out my code. So the solution is (for ints):

(int)(~(unsigned int)x + 1)

This does the right thing, and forces the compiler to do the right thing, because now everything is defined by the standard. As a developer, it's monstrously inconvenient, and the best thing to do is to make a couple of template specializations to cover 32 and 64-bit incarnations of the problem. Something that's nice is that even though the code is guaranteed to baffle most new devs, the compiler does emit the same assembly as it would otherwise, so it isn't a performance hit.
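A sketch of what those specializations might look like; the names and structure here are illustrative, not SafeInt's real internals, and the final cast back to the signed type is implementation-defined (it does what you want on two's complement machines):

```cpp
#include <cstdint>

// Sketch: negate via complement-and-add-one, done entirely in
// unsigned arithmetic so every intermediate step is defined by
// the standard. Names are mine, not SafeInt's.
template <typename T> struct Negation;

template <> struct Negation<std::int32_t>
{
    static std::int32_t Negate( std::int32_t x )
    {
        return (std::int32_t)( ~(std::uint32_t)x + 1 );
    }
};

template <> struct Negation<std::int64_t>
{
    static std::int64_t Negate( std::int64_t x )
    {
        return (std::int64_t)( ~(std::uint64_t)x + 1 );
    }
};
```

Note that Negate( INT32_MIN ) still yields INT32_MIN, as two's complement requires, but now without tripping over undefined behavior on the way there.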

Oh and one last thing – look out for code like so (where x is positive and 32-bit):

if( x + sizeof(Y) < x )

return false;

 

On a 32-bit system, this will work as expected, but on a 64-bit system, sizeof(Y) is an unsigned __int64, the addition is promoted to 64 bits, and the sum is exceedingly unlikely to roll over. The right fix here is to use SafeInt, and let us worry about all this weird integer stuff, or you can ensure things work like you want by writing it this way:

 

if( x + (unsigned int)sizeof(Y) < x )

return false;

Comments

  • Anonymous
    December 23, 2011
Sometimes I wonder if it would be easier to just use inline assembly for some of SafeInt.  You could then leverage the "JC" (Jump if Carry) instruction, and possibly avoid a ton of compiler woes. Of course, I haven't tried it myself, and you would still need a bunch of C++ code to deal with mixed types. [dcl] We tried that, and as soon as you start inlining assembly, the compiler figures you know what you're doing, and quits optimizing. Turns out the compiler is smarter than we are, and perf was horrible. You also end up non-portable.

    What I'd like is for the compilers to give me intrinsics for more of this. I was able to use intrinsics for the 64-bit multiplication, and it resulted in much smaller code. Or over the very, very long haul, it would be great if the processors had instructions for some of this, but at the moment, I see more different processors and compilers in use for most of us, so portability wins.

  • Anonymous
    December 24, 2011
> (int)(~(unsigned int)x + 1)
    > This does the right thing, and forces the compiler to do the right thing,
    > because now everything is defined by the standard.

    Almost. The cast to int is implementation-defined, not defined by the standard. Everything else in the expression is defined by the standard. [dcl] Well, yes, but that's only because the representation of negative numbers is a matter of choice, and I've already established that this only works for two's complement. The number of bits used for the int is also implementation-defined, but that doesn't matter to the expression.

  • Anonymous
    February 10, 2012
    Seems like it would be easier for users to extend the Visual C++ compiler with an intrinsic magic safeint_t type that says "hey compiler, you keep track of the ranges" which acts as if you used a bignum but has restrictions on how it can be used (no use across loop iterations, no declaring safeint_t to be struct members or parameters to functions), etc. so the compiler can deduce the range of integers. Every time the safeint_t gets converted to a non-safeint_t, you have to explicitly specify how out of range is handled (wrap around, saturated arithmetic, out of range is undefined, C++ throw, SEH throw). Further improvements would be to also add bignums, rationals, and fixed size integer types with user specified ranges.