Integer Handling with the C++ SafeInt Class


David LeBlanc
Microsoft Office

January 7, 2004

Summary: David LeBlanc introduces a C++ template class that helps reduce the chances of incurring integer arithmetic errors in your code. The code is fast, flexible, and easy to use. (16 printed pages)

A note from Michael Howard   This month's article is written by my good friend and coauthor, David LeBlanc. Over the last few months, David has researched numerous security issues and his focus has been on integer arithmetic attacks. As you know, I've written two articles on the subject, Reviewing Code for Integer Manipulation Vulnerabilities and An Overlooked Construct and an Integer Overflow Redux but David's article is, by far, the best document I've seen on the subject. His insights into the problem are fascinating and I'm sure you'll find the article as helpful as I did. See you next month. -Michael


Integer overflows have been implicated in a number of severe vulnerabilities. For example, Microsoft bulletin MS03-007: Unchecked Buffer In Windows Component Could Cause Server Compromise fixes a buffer overrun that stemmed directly from an integer overflow problem. As we reduce our dependency on unsafe string handling calls, the arithmetic used to determine buffer lengths becomes the weak link, and thus the area attackers will attempt to exploit. It is also possible for integer overflows to result in logic errors not related to string handling—the effects of the logic errors have historically ranged from crashes to escalation of privilege.

A C++ SafeInt class has been developed that can make avoiding integer overflows much easier. SafeInt forces robust handling of all integer operations, and nearly every predefined operator has been overridden so that few code changes are needed to replace an ordinary integer with a SafeInt.

Although integer overflows are covered in Writing Secure Code, 2nd Edition, the problem is actually more complex and extensive. In the course of preparing the SafeInt class, a number of interesting situations showed up that were not covered in the book.

What Is an Integer Overflow?

Integer overflows occur when a larger integer is needed to accurately represent the results of an arithmetic operation. Let's look at a simple example using an unsigned char (8-bit) integer type:

unsigned char a = 255;
unsigned char b = 2;
unsigned char result = a + b;

When compiled and run, we get the following binary representation:

 11111111 (a)
+00000010 (b)
100000001 (result)

If the result were an integer type larger than 8 bits, we'd happily store the result of 257, but because we only have 8 bits to store the result into, the leading '1' is truncated, and we now end up with 255 + 2 = 1. This is obviously not the math we learned in school, and logic based on such calculations is bound to fail. To illustrate the problem, let's take a look at a little code:

bool DoSomething(const char* server)
{
   unsigned char namelen = strlen(server) + 2;

   if(namelen < 255)
   {
      char* UncName = (char*)malloc(namelen);

      if(UncName != 0)
         sprintf(UncName, "\\\\%s", server);

      //do more things here
   }
}
You're looking at a heap overrun waiting to happen. I've used an 8-bit integer type to keep the math simple, but larger integers still run into the same problems. If the length of the input string is 254 or 255, we have a problem. We'll either be putting 255 bytes into a 1-byte buffer, or if the length is 254, we'll end up calling malloc(0), which then returns a valid heap pointer, and again you write data to the wrong location.

An underflow occurs when a subtraction operation yields a result that isn't accurate. For example, if you have unsigned 8-bit values, 0 - 1 = 255! For the purposes of this article, I'm going to refer to both conditions as an overflow.

Let's look at the various risky integer operations available to you.

Signed vs. Unsigned Integers

In general, signed integers should never be used unless you truly require negative numbers. I completely understand that "int" is 3 characters, and "unsigned int" is 12 characters, and is four times as hard to type. However, as you'll see, signed integers are much harder to validate. This makes it harder to write correct code, and if you do manage to correctly check for problems, you'll find that you've created a substantial performance hit. Using signed integers can also lead to logic problems, such as the following:

void Foo(char* input, char len)
{
   char max_len = 63;
   char buf[64];

   if(len < max_len)
   {
      //who says strcpy can't be used safely!
      strcpy(buf, input);
   }
}

What happens when input is "AAAAA[128 A's omitted]AAAAA"? Input will be 138 characters long, and assigning 138 to a signed character value results in a length of negative 118. Is 63 > -118? You bet. The length of a string can never be negative, so why use signed numbers to represent it? You may rightfully point out that an unsigned character value can also be abused, but it takes a string of more than 255 characters to hit the overflow.

A note on whether base types are signed or unsigned: unless declared as unsigned, all integer types are signed, with the notable exception of char. The Microsoft Visual C++® compiler has a /J option that changes the default char type to unsigned. Unless you're certain what the build environment is doing, it is usually best to explicitly declare a char as either signed or unsigned.


Addition

The unsigned case is relatively easy. In mathematical terms, all we need to do is check to see if:

A + B > MAX_INT  Error!

In order to keep your test from depending on an overflow, we can rewrite it as follows:

A > MAX_INT - B  Error!

This is simple and straightforward. Some of you might notice that:

MAX_INT - B == ~B

This is true if and only if B is unsigned and is 32 bits or larger. It turns out that the one's complement (~) operator converts any integer smaller than 32 bits to a 32-bit signed integer. We could then rewrite our check as:

if(A > ~B)
   return error;

The problem with this code, like most clever code, is that even a good programmer has to look at it for a minute or three to determine what it does. From the assembly code point of view, the subtraction and the one's complement generate the same number of instructions. I strongly prefer code that people can look at and immediately see what it is doing.
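Written out as a standalone helper, the straightforward form of the check looks like the sketch below. The function name and the exception are my own choices for illustration; SafeInt wraps equivalent logic inside its overloaded operators:

```cpp
#include <limits>
#include <stdexcept>

//Returns a + b, or throws if the sum cannot be represented.
//Sketch only - the name SafeAddUint is illustrative, not SafeInt's API.
unsigned int SafeAddUint(unsigned int a, unsigned int b)
{
    //MAX - b cannot itself wrap, because b <= MAX
    if(a > std::numeric_limits<unsigned int>::max() - b)
        throw std::overflow_error("integer overflow in addition");
    return a + b;
}
```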

Testing a signed addition operation is considerably more difficult. We actually have four cases—both numbers are positive, both numbers are negative, and two mixed-sign cases. Here's the test I used in SafeInt:

if(!((rhs ^ lhs) < 0)) //test for +/- combo
{
   //either two negatives, or 2 positives
   if(rhs < 0)
   {
      //two negatives
      if(lhs < MinInt() - rhs) //remember rhs < 0
         throw ERROR_ARITHMETIC_OVERFLOW;
   }
   else
   {
      //two positives
      if(MaxInt() - lhs < rhs)
         throw ERROR_ARITHMETIC_OVERFLOW;
   }
}
//else overflow not possible
return lhs + rhs;

The first conditional bears some explaining. For a signed integer, the uppermost bit declares the sign. If both inputs are positive, 0^0 = 0, and likewise if both inputs are negative, 1^1 = 0. Taking the result and checking to see if the high bit is set by testing for the result < 0 gives me a test for whether I have mixed signs or two of the same sign. It isn't an especially easy test to understand, but the performance should be very good.

If we have two negatives, we have to check for an overflow against MinInt(). If there are two positives, we check against MaxInt(), just like in the unsigned case, except that MaxInt() for a signed number is half that of the unsigned type's MaxInt(). If you happen to have the mixed case, then an overflow isn't possible. You can check this for yourself by considering:

-128 + 0 = -128
127 + -128 = -1

Both of the extremes are represented without an overflow. As you can see, the signed case is considerably more complex and you probably don't want to cut and paste this much code into every spot where you add two numbers together.


Subtraction

Subtraction is much the same problem as addition, and the unsigned case is very easy to check:

//easy unsigned case
if(lhs < rhs)
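The same one-line test, wrapped as a helper (again, the name and error handling are illustrative, not SafeInt's actual code):

```cpp
#include <stdexcept>

//Returns lhs - rhs, or throws if the result would wrap below zero.
//Sketch only - SafeInt performs this check inside operator-.
unsigned int SafeSubUint(unsigned int lhs, unsigned int rhs)
{
    if(lhs < rhs)
        throw std::underflow_error("integer underflow in subtraction");
    return lhs - rhs;
}
```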

Signed integer subtraction, like signed addition, is much more trouble to correctly check. Let's look at the code:

if((rhs ^ lhs) < 0) //test for +/- combo
{
   //mixed positive and negative
   //two cases - +X - -Y => X + Y - check for overflow against MaxInt()
   //            -X - +Y - check for overflow against MinInt()

   if(lhs >= 0) //first case
   {
      //test is X - -Y > MaxInt()
      //equivalent to X > MaxInt() - +Y
      //Y == MinInt() creates special case
      //Even 0 - MinInt() can't be done
      //note that the special case collapses into the general case,
      //due to the fact MaxInt() - MinInt() == -1, and lhs is non-negative
      if(lhs > MaxInt() + rhs) //remember that rhs is negative
         throw ERROR_ARITHMETIC_OVERFLOW;
      //fall through to return value
   }
   else
   {
      //second case
      //test is -X - Y < MinInt()
      //or      -X < MinInt() + Y
      //we do not have the same issues because abs(MinInt()) > MaxInt()
      if(lhs < MinInt() + rhs)
         throw ERROR_ARITHMETIC_OVERFLOW;
      //fall through to return value
   }
}
//else both negative, or both positive - no possible overflow
return (lhs - rhs);

As can be seen from my copious comments, this case was more difficult to think through. The case where both numbers are positive or negative provides an easy check, as there is no possibility of overflow:

-128 - -128 = 0
127 - 127 = 0

If you have mixed signs, then the situation becomes a little trickier. For example, 0 – MinInt() is an overflow. As in the case of addition, I don't want to end up with three conditionals to test every subtraction I conduct. The SafeInt class helps by hiding this logic from the main logic of your code, but the performance hit still applies. Again, don't use signed numbers unless you have no other choice. A SafeInt class declared with an unsigned integer significantly outperforms a signed SafeInt class, especially if addition and subtraction operations are conducted often.


Multiplication

For multiplication, the general problem is:

A * B > MAX_INT  Error!

Which can then be converted to:

A > MAX_INT/B  Error!

This all looks easy enough, but as it turns out, a division operation is one of the more expensive instructions for an x86 processor. For a 64-bit integer, you have no choice but to perform the division, but for a 32-bit integer, casting both sides to a 64-bit integer and performing the multiplication is about 2.5 times faster than a test using a division. Note that the method shown to check a multiplication operation in Writing Secure Code, 2nd Edition is a division operation. Casting 8- or 16-bit integers to the next larger integer should show an even larger performance gain over a division than the 32-bit case does, because 64-bit multiplication is expensive on a 32-bit processor.

Yet again, the unsigned case is easy. We check to see if any of the uppermost bits in the next larger integer are set, and throw an error if they are. The signed case involves two checks—one for a 32-bit signed integer (cast to 64 bits), and one for the upper 33 bits that could be either all zeros or all ones if the operation doesn't overflow.
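For the unsigned 32-bit case, the 64-bit trick can be sketched as follows (the function name and error handling are mine, for illustration):

```cpp
#include <cstdint>
#include <stdexcept>

//32-bit unsigned multiply, checked via a 64-bit product - typically
//cheaper than the division-based test. Sketch only, not SafeInt's API.
uint32_t SafeMulUint32(uint32_t a, uint32_t b)
{
    uint64_t product = (uint64_t)a * b;

    //if any of the upper 32 bits are set, the result doesn't fit
    if(product >> 32)
        throw std::overflow_error("integer overflow in multiplication");

    return (uint32_t)product;
}
```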


Division

So, how does a division operation overflow? Signed integers turn out to be annoying yet again! Consider the following 8-bit case:

-128/-1 = 128

128 won't fit into a signed 8-bit integer, and if you try it, you end up with a result of:

-128/-1 = -128 

So, for each and every signed division operation, if you really want to be safe, you have to check to see if the numerator is MinInt() and the denominator is -1. Quite a lot of trouble for a corner case—unsigned integers pose no problems in this area. A non-overflow problem with division is of course dividing by zero. The SafeInt class also catches this and allows you to catch a C++ exception instead of having to deal with a processor-level exception.
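Putting both checks together for a 32-bit signed divide might look like this sketch (the names and exception types are illustrative; SafeInt throws its own exception class):

```cpp
#include <limits>
#include <stdexcept>

//Signed division has exactly two failure modes: divide by zero,
//and MinInt / -1. Sketch only - not SafeInt's actual code.
int SafeDivInt(int numerator, int denominator)
{
    if(denominator == 0)
        throw std::domain_error("divide by zero");

    if(numerator == std::numeric_limits<int>::min() && denominator == -1)
        throw std::overflow_error("integer overflow in division");

    return numerator / denominator;
}
```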

Division is also prone to problems when mixed sign and type integers are involved. For example, one would think that -1/4294967295 is zero. Think again. If the denominator is an unsigned int, and the numerator is signed and 32 bits or less, both inputs get cast to an unsigned int, and then the division is performed. Your computer then tells you that the result is actually 1. I'll go into this problem in more detail in the sections on casting and comparison operators. The same problem occurs with the modulus operator.


Assignment

Assignment seems like it should be easy. You want to check whether you're trying to stuff too large or too small an integer into another integer. A cast makes the compiler warning go away, but that's not what you want, and you now have an unsafe operation. As usual, the unsigned case is easy. If the integer you're assigning to is smaller than the input, you need to check if the value of the input is greater than MaxInt(). No problem. Two signed integers are nearly as easy, but you have two checks—one to see if it is too large, another to see if it is too small.

When you start mixing signed and unsigned integers, then you start running into more complex logic. If the input is signed, and the integer you're assigning to is smaller than the input integer type, you check for size on both ends. If the integer you're assigning to is the same size or larger than the signed input, you only have to check to see if you're trying to assign a negative number to an unsigned type.

If the input is unsigned, then we have two cases. If the integer you're assigning to is signed and smaller or equal to the size of the input, then you have to check size against MaxInt() because for the same number of bits, an unsigned number can be larger than a signed number. If both integers are signed, then you only have to check for size when the input is of a larger type.

In the SafeInt class, assignment is accomplished by providing a templatized constructor. The code has a number of conditional statements, but the majority of these are compile-time constants. For example, IsSigned() evaluates to true or false at compile time. Once the code is optimized by the compiler in a release build, only the blocks that actually apply to what you're using will end up in the code.
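The idea can be sketched with a small helper that works for any integer type no wider than 32 bits, by doing the range check in a 64-bit signed type, where every such value is exactly representable. This is illustrative only; the real SafeInt constructor avoids the width restriction:

```cpp
#include <limits>
#include <stdexcept>

//Checked assignment sketch for integer types no wider than 32 bits:
//every value of both To and From fits exactly in a long long, so the
//range comparison is free of signed/unsigned conversion surprises.
template <typename To, typename From>
To CheckedAssign(From value)
{
    long long v  = (long long)value;
    long long lo = (long long)std::numeric_limits<To>::min();
    long long hi = (long long)std::numeric_limits<To>::max();

    if(v < lo || v > hi)
        throw std::overflow_error("value does not fit in destination type");

    return (To)v;
}
```

The limit comparisons are against compile-time constants, so an optimizing compiler removes the branches that cannot apply to a given To/From pair, much as described above.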


Casting

The most obvious problem with a cast is information loss. It was a cast operation that caused the MS03-007 problem because the length of a string was assigned to an unsigned short. The saved length of the string and the actual length of the string were then very different due to the truncation.

Casts can cause other types of problems as well. In general, the compiler tries to maintain the value of a cast, but a cast from a signed number to an unsigned number can cause interesting results. For example, let's look at what happens when you cast a signed char value to an unsigned short. The spec says that it should first sign-extend the char to a short, and then convert the short to an unsigned short. Let's take a look:

char c = 0x80; //-128
//now cast to short
short s = (short)c;    
//now s = 0xff80 still -128 
unsigned short us = (unsigned short)s;
//us = 0xff80, which is 65408!

Not quite what you expected? It gets worse. If we'd first cast the signed char to an unsigned char, and then either assigned it or upcast it to an unsigned short, we'd get 128. Again, we see that mixing signed and unsigned numbers is perilous, and that you shouldn't cast integers from one type to another without thinking carefully about what you're doing. With the SafeInt class, casts are generally not needed because of a templatized constructor, which maintains your actual value and checks whether it can safely be assigned to the type with which the SafeInt is declared.

Casts also occur in some interesting places. Certain unary operators cause a promotion to a signed int in some cases. A '~', '+', or '-' operator, when applied to any value less than 32 bits will result in a signed int. If the value is 32 bits or larger, the type is unchanged. Here's a bit of code that you can use to explore this behavior:

#include <stdio.h>

template <typename T> void WhatIsIt(T i)
{
  if((T)-1 < 0)
    printf("signed - %d bits\n", (int)(sizeof(T) * 8));
  else
    printf("unsigned - %d bits\n", (int)(sizeof(T) * 8));
}

int main(int argc, char* argv[])
{
  unsigned short c = 0;

  printf("Testing ~ operator - \n\t");
  WhatIsIt(~c);

  printf("Testing ! operator - \n\t");
  WhatIsIt(!c);

  printf("Testing + operator - \n\t");
  WhatIsIt(+c);

  printf("Testing - operator - \n\t");
  WhatIsIt(-c);

  printf("Testing ++ operator - \n\t");
  WhatIsIt(++c);

  return 0;
}

Running this code produces the following output:


Testing ~ operator -
        signed - 32 bits
Testing ! operator -
        unsigned - 8 bits
Testing + operator -
        signed - 32 bits
Testing - operator -
        signed - 32 bits
Testing ++ operator -
        unsigned - 16 bits

The templatized function makes it easy to test the types that result from certain operations. In order to maintain the actual type you started with, you have to cast the result back to the type you wanted, like so:

char b = 10;
char c = (char)~b;

Most binary operators display the same behavior, but you're less likely to run into situations where it becomes a problem.

Comparison Operators

You may be wondering what comparison operators have to do with integer overflows. The problem has to do with the fact that the compiler uses a fairly complicated set of rules to promote a pair of integers prior to comparison. Let's take a look at an example of the problem:

unsigned int i = 0xffffffff; //~4 billion
char c = -1;

if(c == i)
   //this test succeeds!

What happened? As it turns out, the compiler first promotes the signed char to a signed int, using sign extension. The signed char representation of -1 is 0xff, and once it is promoted to an int, you now have 0xffffffff. The next step is to promote to an unsigned int, which retains the bit pattern, but not the value. We then end up with an incorrect result. If the compiler uses the 64-bit integers now available, or if you manually cast both values up to an __int64, then you end up with a correct result. You won't end up with the same problem between two integers that are both less than 32 bits because the compiler casts them both to an int before comparing, and it always takes a negative number to demonstrate the problem.

You can run into the same issue when comparing any signed integer to an unsigned __int64. The SafeInt class does what I believe the compiler should do, and casts up as long as the value is still preserved. In the case of an unsigned __int64, the behavior is the same as if a signed 128-bit integer were available. If for some reason your code depends on the standard behavior, you can define ANSI_CONVERSIONS.

Miscellaneous Operations

Unary negation

Signed integers are, as usual, more trouble. Negating the minimum signed integer results in an overflow, unless you're also upcasting to a larger integer type. Negating an unsigned integer results in compiler warnings. Note that the SafeInt class preserves type instead of performing an implicit cast.

Increment and decrement

Just like addition and subtraction, these operations should be checked for overflow. Fortunately, the test is simplified and incurs less overhead.
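For an unsigned type, the check collapses to a single comparison against the maximum value (a sketch with illustrative naming, not SafeInt's actual operator++):

```cpp
#include <limits>
#include <stdexcept>

//Checked increment: only one value can overflow, so one test suffices.
unsigned int SafeIncrement(unsigned int value)
{
    if(value == std::numeric_limits<unsigned int>::max())
        throw std::overflow_error("integer overflow in increment");
    return value + 1;
}
```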


Modulus

A modulus operation with zero as the right operand causes a divide by zero error.

Shift operators

A right or left shift by a number of bits greater than or equal to the number of bits in the type is an undefined operation. The SafeInt class calls assert() when this happens.

Using SafeInt

The SafeInt class is designed to be as close to a drop-in replacement for integer operations as possible. The class is declared as a template, so it can be used with any integer type. Nearly every relevant operator has been overridden. One major exception is that there is no default casting operator, so the following would be illegal:

SafeInt<int> s = 5;
int j = (int)s;

The reason for this design decision is that if it were present, certain overloaded functions would become ambiguous, and the class would no longer compile. In order to retrieve the value, use the SafeInt::Value method:

SafeInt<int> s = 5;
int j = s.Value();

Another design decision that bears some explanation is that every overloaded operator returns either type bool, or a SafeInt. The reason for this is to ensure that multiple operations do not end up being unsafe. Let's assume that multiplication and division both returned an integer, and consider the following code:

SafeInt<unsigned char> a(64);
SafeInt<unsigned char> b(2);
unsigned char c;

c = (a * b) * 2;

This would be evaluated first by calling the overloaded operator for *, which would perform the safe multiplication, and return an unsigned character with a value of 128. We would then do a normal multiplication, which would be unsafe and result in an overflow. Although it would certainly be convenient to pass a SafeInt class, or the result of an operation between two SafeInt classes directly into a function looking for an integer, there isn't any way to do this without encountering situations where unsafe operations could be performed, often in unexpected places. The snippet below shows how you can accomplish the same thing. Consider original code that looks like this:

void foo(int i, int j)
{
  if((i ^ j) == 2)
  {
    //do something
  }
}

The code above can be replaced with:

void foo(SafeInt<int> i, SafeInt<int> j)
{
  if((i ^ j).Value() == 2)
  {
    //do something
  }
}

This is just an example. In reality, the XOR operator and the equality operator are both overloaded, and the code would work correctly without calling Value(). I'm just demonstrating how you can conveniently extract the result.

I've gone to considerable effort to make the performance of the SafeInt class as high as possible. If it is compiled with /Ox, you will find by examining the resulting assembly that the class almost always becomes inline code, incurring no extra overhead in terms of function call setup. However, it still won't perform as well as unsafe, ordinary integer operations, though the performance hit for most unsigned operations is less than 10 percent compared to doing exactly the same checks inline. For example, the following common construct isn't a good candidate for a SafeInt:

for(j = 0; j < MAX_LOOPS; j++)

However, the following is a great place to use a SafeInt class:

void* AllocateMemForStructs(int StructSize, int HowMany)
{
    SafeInt<unsigned long> s(StructSize);
    s *= HowMany;
    return malloc(s.Value());
}

SafeInt throws exceptions when an error occurs. In order to make catching errors easier, a SafeIntException class is the type thrown. The exception codes that can be thrown are currently ERROR_ARITHMETIC_OVERFLOW and EXCEPTION_INT_DIVIDE_BY_ZERO. Although exception handling versus error returns can be a point of serious debate, in this case, it seems wiser to use exception handling. Let's take a look at one possible function:

char CouldBlowUp(char a, char b, char c)
{
   SafeInt<char> sa(a), sb(b), sc(c);

   try
   {
      return (sa * sb + sc).Value();
   }
   catch(SafeIntException err)
   {
      //handle the error here
   }

   return 0;
}

Take a look at the code that gets generated by the SafeInt class to completely cover all the possible error paths for the simple example shown above. It's a lot of code. If you take a careful look at all the code to cover all the various error conditions in the SafeInt class, you'll see that the problem is very complicated—much more complicated than I originally anticipated. If you attempt to maintain performance as well as correctness, the problem becomes even more difficult.

Binary Operators

All of the binary operators have certain assumptions built into the class design. This is to ensure correctness. Notes on each class of operator follow.

Arithmetic operators (*,/,+,-,%)

There are three possible variants:

  • SafeInt<T> op SafeInt<T>
  • SafeInt<T> op U
  • U op SafeInt<T>

The SafeInt<T> op SafeInt<U> variant is explicitly not supported. If you try to put this construct into your code, the compiler will throw the following error:

error C2593: 'operator *' is ambiguous

This behavior is because the arithmetic operators are required to return a SafeInt of some type. The compiler cannot know whether you'd prefer to get a type T or a type U returned. If you need to do this, you need to extract the value contained within one of the two types using the SafeInt::Value() method. For example:

SafeInt<T> t, result;
SafeInt<U> u;

result = t * u.Value();

Comparison operators (==, !=, <, <=, >, >=)

Because each of these operators returns type bool, mixing SafeInts of differing types is allowed.

Shift operators (<<, >>)

Shift operators always return the type on the left-hand side of the operator. Mixed type operations are allowed because the return type is always known.

Boolean operators (&&, ||)

Like comparison operators, these overloads always return type bool, and mixed-type SafeInts are allowed. Additionally, specific overloads exist for type bool on both sides of the operator.

Binary operators (&, |, ^)

Mixed-type operations are discouraged; however, some provision has been made to enable things like:

SafeInt<char> c = 2;
if(c & 0x02)

The "0x02" is actually an int, and it needs to work. The rule is that if the non-SafeInt type can be cast to the type of the SafeInt, and back to the original type without losing any significant bits, then the operation is allowed.
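That round-trip rule can be expressed as a tiny predicate (illustrative only; SafeInt performs the equivalent check internally, and using an unsigned destination type keeps the narrowing conversion well-defined):

```cpp
//A value of type U may take part in a bitwise operation with
//SafeInt<T> if converting it to T and back preserves the value.
template <typename T, typename U>
bool RoundTripsSafely(U value)
{
    return (U)(T)value == value;
}
```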


Conclusions

Integer handling turns out to be quite a bit trickier than most people would suspect. I know that I've learned a lot about the tricks the compiler can play on you while writing this class, and it has only reinforced my belief that few programmers will put all of the correct tests into their code. As an example of just how difficult the problem can sometimes be, here's an early version of the code that produced the minimum signed integer:

return (T)(1 << (BitCount()-1));

What's the problem here? First, remember that literals are signed ints. If T is an __int64, we're then going to left shift a 32-bit value by 63 bits, which is undefined, but the Microsoft compiler happens to emit 0x80000000. We then cast that to an __int64 and get 0xffffffff80000000, which is decidedly not the minimum possible __int64. The correct code is:

return (T)((T)1 << (BitCount()-1));

Although I'd asked some excellent programmers to review the code, and they did find some bugs and design problems, no one found this problem, nor did they find several others. What found the problem was function-level testing, and stepping through the code line by line. It was tedious and hard work, but it paid off. It doesn't matter how many eyes look at a piece of code. Even the best reviewers will miss things, though code that has been reviewed will be more solid than code that has not. What matters is how many good eyes look at the code, and how many detailed tests are written to verify the code, preferably at function level.

I hope that this class will prove useful in your quest to write correct code. I'd like to thank Raymond Fowkes, Mike Marcelais, Bruce Dawson, Michael Howard, Hannes Ruescher, and Vikram Harinau for their excellent input.

David LeBlanc is a Security Architect in the Office group at Microsoft, and is the coauthor of Writing Secure Code, now in its second edition. He has worked on software and network security throughout his professional career.