Security Tips

Defend Your Code with Top Ten Security Tips Every Developer Must Know

Michael Howard and Keith Brown

This article assumes you're familiar with C++, C#, and SQL

Level of Difficulty: 1 2 3

SUMMARY

There are many ways to get into trouble when it comes to security. You can trust all code that runs on your network, give any user access to important files, and never bother to check that code on your machine has not changed. You can run without virus protection software, not build security into your own code, and give too many privileges to too many accounts. You can even use a number of built-in functions carelessly enough to allow break-ins, and you can leave server ports open and unmonitored. Obviously, the list continues to grow. What are some of the really important issues, the biggest mistakes you should watch out for right now so that you don't compromise your data or your system? Security experts Michael Howard and Keith Brown present 10 tips to keep you out of hot water.

Contents

1. Trust User Input at Your Own Peril
2. Protect Against Buffer Overruns
3. Prevent Cross-site Scripting
4. Don't Require sa Permissions
5. Watch that Crypto Code!
6. Reduce Your Attack Profile
7. Employ the Principle of Least Privilege
8. Pay Attention to Failure Modes
9. Impersonation is Fragile
10. Write Apps that Non-admins Can Actually Use

Security is a multidimensional issue. Security risks can come from anywhere. You could write bad error handling code or be too generous with permissions. You could forget what services are running on your server. You could accept all user input. And the list goes on. To give you a head start on protecting your machines, your network, and your code, here are 10 tips to follow for a safer network strategy.

1. Trust User Input at Your Own Peril

Even if you don't read the rest of this article, remember one thing, "don't trust user input." If you always assume that data is well formed and good, then your troubles are about to begin. Most security vulnerabilities revolve around the attacker providing malformed data to the server machine.

Trusting that input is well formed can lead to buffer overruns, cross-site scripting attacks, SQL injection attacks, and more.

Let's look at each of these potential attacks in more detail.

2. Protect Against Buffer Overruns

A buffer overrun occurs when data provided by the attacker is bigger than the application expects and spills into internal memory space. Buffer overruns are primarily a C/C++ issue. They're a menace, but generally easy to fix; we've seen only a couple that were not obvious and were hard to fix. The root cause is nearly always the same: the developer did not anticipate externally provided data that was larger than the internal buffer. The overflow corrupts other data structures in memory, and that corruption can often lead to the attacker running malicious code. There are also buffer underflows and buffer overruns caused by array indexing mistakes, but they are less common.

Take a look at the following C++ code snippet:

void DoSomething(char *cBuffSrc, DWORD cbBuffSrc) {
    char cBuffDest[32];
    // copies cbBuffSrc bytes into a fixed 32-byte buffer; if the caller
    // cannot be trusted, this is a classic buffer overrun
    memcpy(cBuffDest, cBuffSrc, cbBuffSrc);
}

What's wrong with it? Actually, there's nothing wrong with this code if cBuffSrc and cbBuffSrc come from a trusted source, such as code that did not trust the data and therefore validated it to be well formed and of the correct size. However, if the data comes from an untrusted source and has not been validated, the attacker (the untrusted source) can easily provide data in cBuffSrc that is larger than cBuffDest and set cbBuffSrc to a value larger than the size of cBuffDest. When memcpy copies the data into cBuffDest, the return address from DoSomething is clobbered because cBuffDest sits next to the return address on the function's stack frame, allowing the attacker to make the code perform malicious operations.

The way to fix this is to distrust user input and not to believe any data held in cBuffSrc and cbBuffSrc:

void DoSomething(char *cBuffSrc, DWORD cbBuffSrc) {
    const DWORD cbBuffDest = 32;
    char cBuffDest[cbBuffDest];
#ifdef _DEBUG
    // debug-only probe: writing cbBuffSrc bytes will likely cause an access
    // violation if the caller's size exceeds the destination buffer
    memset(cBuffDest, 0x33, cbBuffSrc);
#endif
    // never copy more data than the destination buffer can hold
    memcpy(cBuffDest, cBuffSrc, min(cbBuffDest, cbBuffSrc));
}

This function exhibits three properties of a well-written function which mitigates buffer overruns. First, it requires the caller to provide the length of the buffer. Of course, you should not blindly trust this value! Next, in a debug build, the code will probe the buffer to check that it is indeed large enough to hold the source buffer, and if not, it will probably cause an access violation and throw the code into a debugger. It's surprising how many bugs you can find when doing this. Last, and most important, the call to memcpy is defensive; it copies no more data than the destination buffer can hold.

During the Windows® Security Push at Microsoft, we created a list of safe string handling functions for C programmers. You can check them out at Strsafe.h: Safer String Handling in C.

3. Prevent Cross-site Scripting

Cross-site scripting vulnerabilities are Web-specific issues and can compromise a client's data through a flaw in a single Web page. Imagine the following ASP.NET code fragment:

<script language=c#>
    Response.Write("Hello, " + Request.QueryString["name"]);
</script>

How many of you have seen code like this? You may be surprised to learn it's buggy! Normally, a user would access this code using a URL that looks like this:

https://explorationair.com/welcome.aspx?name=Michael

The C# code assumes that the data is always well formed and contains nothing more than a name. Attackers, however, abuse this code and provide script and HTML as the name. If you typed the following URL

https://explorationair.com/welcome.aspx?name=<script>alert('hi!');</script>

you'd get a Web page that displays a dialog box, saying "hi!" "So what?" you say. Imagine that the attacker convinces a user to click on a link like this, but the querystring contains some really nasty script and HTML to get your cookie and post it to a site that the attacker owns; the attacker now has your private cookie information or worse.

There are two ways to avoid this. The first is not to trust the input and be strict about what comprises a user's name. For example, you could use regular expressions to check that the name contains only a common subset of characters and is not too big. The following C# code snippet shows the way that you can accomplish this:

Regex r = new Regex(@"^[\w]{1,40}$");
        
if (r.Match(strName).Success) {
    // Cool! The string is ok
} else {
    // Not cool! Invalid string
}

This code uses a regular expression to verify that a string contains between 1 and 40 word characters (letters, digits, or underscores) and nothing else. This is the only safe way to determine whether a value is correct.

You cannot squeak HTML or script through this regular expression! Don't use a regular expression to look for invalid characters and reject the request when you find them; there is always a case that will slip by you. Validate for what is allowed, not for what is known to be bad.

The second defense is to HTML-encode all input when it is used as output. This converts dangerous characters such as < and > into harmless HTML entities such as &lt; and &gt;. You can escape any strings that might be a problem in ASP.NET with HttpServerUtility.HtmlEncode, or in ASP with Server.HTMLEncode.
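For example, the vulnerable page shown earlier could be rewritten along these lines (a sketch; Server is the page's built-in HttpServerUtility instance):

<script language=c#>
    Response.Write("Hello, " +
        Server.HtmlEncode(Request.QueryString["name"]));
</script>

With this change, any markup in the name value is rendered as literal text rather than executed by the browser.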

4. Don't Require sa Permissions

The last kind of input trust attack we want to discuss is SQL injection. Many developers write code that takes input and uses that input to build SQL queries to communicate with a back-end data store, such as Microsoft® SQL Server™ or Oracle.

Take a look at the following code snippet:

void DoQuery(string Id) {
    SqlConnection sql = new SqlConnection(@"data source=localhost;" +
            "user id=sa;password=password;");
    sql.Open();
    string sqlstring = "SELECT hasshipped" +
            " FROM shipping WHERE id='" + Id + "'";
    SqlCommand cmd = new SqlCommand(sqlstring, sql);
    // ...
}

This code is seriously flawed for three reasons. First, the connection is made from the Web Service to SQL Server as the system administrator account, sa. You'll see why this is bad shortly. Second, notice the hardcoded and laughably weak password, "password", for the sa account!

However, the real cause for concern is the string concatenation that builds the SQL statement. If a user enters an ID of 1001, then you get the following SQL statement, which is perfectly valid and well formed.

SELECT hasshipped FROM shipping WHERE id = '1001'

However, attackers are more creative than this. They would enter an ID of 1001' DROP table shipping --, which causes the code to build and execute the following query:

SELECT hasshipped FROM shipping WHERE id = '1001'
DROP table shipping --'

This changes the way the query works. Not only does the code attempt to determine whether something has shipped, it goes on to drop (delete) the shipping table! The -- sequence starts a comment in Transact-SQL; it swallows the trailing quote from the original query and makes it easy for an attacker to append a valid, yet dangerous, series of SQL statements.

At this point you're probably wondering how any user could delete a table in the SQL Server database. Surely only admins can do a task like that. You're right. But here you're connecting to the database as sa, and sa can do anything it wants to do on a SQL Server database. You should never connect as sa from any application to SQL Server; rather, you should either use Windows Integrated authentication, if appropriate, or connect as a predefined account with appropriately restricted rights.
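For example, a sketch of a safer connection might use Windows Integrated authentication instead of a hardcoded sa password (the server and database names here are placeholders):

// Connect with the caller's Windows credentials instead of sa, and grant
// that Windows account only the database rights it actually needs.
SqlConnection sql = new SqlConnection(
    "data source=localhost;Integrated Security=SSPI;" +
    "Initial Catalog=shipping");
sql.Open();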

Fixing the SQL injection issue is easy. Using SQL stored procedures and parameters, the following code shows how to build a query like this—and how to use a regular expression to make sure that the input is valid because our business dictates that a shipping ID can only be numeric and between four and ten digits in length:

Regex r = new Regex(@"^\d{4,10}$");
if (!r.Match(Id).Success)
    throw new Exception("Invalid ID");

SqlConnection sqlConn= new SqlConnection(strConn);
string str="sp_HasShipped";
SqlCommand cmd = new SqlCommand(str,sqlConn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("@ID",Id);
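Pulling the pieces together, a self-contained version of the safer query might look like the following sketch. It assumes that strConn names a low-privilege login rather than sa and that sp_HasShipped returns a single value convertible to a Boolean:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

class ShippingQuery {
    public static bool HasShipped(string id, string strConn) {
        // validate the input first: 4 to 10 digits, nothing else
        if (!Regex.IsMatch(id, @"^\d{4,10}$"))
            throw new ArgumentException("Invalid ID");

        using (SqlConnection sqlConn = new SqlConnection(strConn)) {
            SqlCommand cmd = new SqlCommand("sp_HasShipped", sqlConn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@ID", SqlDbType.VarChar, 10).Value = id;
            sqlConn.Open();
            // the parameter is passed as data, never spliced into SQL text
            return Convert.ToBoolean(cmd.ExecuteScalar());
        }
    }
}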

Buffer overruns, cross-site scripting, and SQL injection attacks are all examples of trusting input. All these attacks can be mitigated by believing that all input is evil, until proven otherwise.

5. Watch that Crypto Code!

Now let's look at something near and dear to our hearts. In our experience, more than 30 percent of the security code we review contains security mistakes. Probably the most common mistake is homegrown encryption code, which is typically quite fragile and easy to break. Never create your own encryption code; you won't get it right. Don't assume that just because you've created your own cryptographic algorithm people won't figure it out. Attackers have access to debuggers, and they have both the time and the knowledge to determine exactly how these systems work, and they often break them in a matter of hours. Instead, use the CryptoAPI for Win32® applications; for managed code, the System.Security.Cryptography namespace offers a wealth of well-written and well-tested cryptographic algorithms.
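To illustrate, here is a minimal sketch of symmetric encryption built entirely from framework classes (shown with the Aes class; the older RijndaelManaged and TripleDES classes follow the same pattern). Key and IV management is deliberately left out:

using System;
using System.IO;
using System.Security.Cryptography;

class CryptoSketch {
    // Encrypts plaintext with a framework-provided algorithm rather than
    // homegrown code; key and IV generation, storage, and rotation are
    // separate problems not covered here.
    public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv) {
        using (Aes aes = Aes.Create()) {
            aes.Key = key;
            aes.IV = iv;
            using (MemoryStream ms = new MemoryStream())
            using (CryptoStream cs = new CryptoStream(
                       ms, aes.CreateEncryptor(), CryptoStreamMode.Write)) {
                cs.Write(plaintext, 0, plaintext.Length);
                cs.FlushFinalBlock();
                return ms.ToArray();
            }
        }
    }
}

The point is not this particular algorithm; it is that the primitives come from a reviewed, well-tested library instead of code you rolled yourself.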

6. Reduce Your Attack Profile

If a feature is not required by 90 percent of clients, it should not be installed by default. Internet Information Services (IIS) 6.0 follows this plan of installation, and you can read about it in Wayne Berry's article, "Innovations in Internet Information Services Let You Tightly Guard Secure Data and Server Processes," in this month's issue. The idea behind this approach is that services you never use run unnoticed, and unnoticed services are ripe for exploitation. If a feature is installed by default, it should operate under the principle of least privilege: don't require the application to run with administrative rights unless it truly needs them. Follow this advice in your own applications as well.

7. Employ the Principle of Least Privilege

The operating system and the common language runtime (CLR) have a security policy for several reasons. Many people think that the main reason the security policy exists is to prevent users from intentionally doing bad things: accessing files they shouldn't be allowed to see, reconfiguring the network to suit their needs, and other dastardly deeds. While it's certainly true that insider attacks are common and need to be guarded against, there's another reason for keeping this security policy tight. The security policy is there to put walls around code so that intentional or (just as frequently) unintentional actions by users don't wreak havoc on the network. For instance, an attachment downloaded via e-mail and executed on Alice's machine is restricted to only accessing resources that Alice can access. If the attachment contains a Trojan horse, a good security policy will limit the damage it can do.

When you design, build, and deploy server applications, you cannot assume that every request will come from a legitimate user. If a bad guy manages to send you a malformed request that (heaven forbid) causes your code to behave badly, you want every possible wall around your application to limit the damage. Our point is that the reason your company has a security policy isn't just because it doesn't trust you or your code. It's also there to protect against well-intentioned code that's been exploited by outsiders.

The principle of least privilege says that any given privilege should be granted to the least amount of code necessary, for the least amount of time necessary. In other words, at any given time, try to erect as many walls around your code as possible. When something bad happens—as Murphy's Law guarantees it will—you'll be glad these walls were in place. So here are some concrete ideas for running code with the least privilege possible.

Choose a security context for your server code that grants access only to the resources it needs to get its work done. If certain parts of your code require significantly higher privileges, consider factoring the code out and running just that code with the higher privileges. To safely separate code that runs with different operating system credentials, your best bet is to run this code in a separate process that runs in a more privileged security context. This means you'll need interprocess communication such as COM or Microsoft .NET remoting, and you'll need to design the interface to that code to keep round-trips to a minimum.

If you're using the .NET Framework when factoring your code into assemblies, consider the required level of privilege of each piece of code. You may find it easy to isolate code that requires high privilege into separate assemblies that can be granted more permissions, allowing the majority of your assemblies to run with fewer privileges, thus adding more walls around your code. An easy way to restrict the privileges on a particular assembly is via assembly-level permission requests, as shown in Figure 1. Figure 2 shows how to create the XML files used by these permission requests. If you do this, don't forget that you're limiting not only the permissions of your own assembly, but those of any assemblies you call, due to the Code Access Security (CAS) stack walk.

Figure 1 Assembly Permission Requests

using System;
using System.Security.Permissions;

// declare the minimum permissions that this assembly requires to
// even be loaded
[assembly: PermissionSetAttribute(
             SecurityAction.RequestMinimum,
             File="min_perm.xml")]

// optional permissions are permissions that you don't always need in
// order to be functional but that some of your features may rely upon
[assembly: PermissionSetAttribute(
             SecurityAction.RequestOptional,
             File="opt_perm.xml")]

// by specifying the permissions you need, the runtime knows not to grant
// you any other permissions, even new ones that are defined after your
// code ships

Figure 2 Serializing Permission Requests

using System;
using System.Security;
using System.Security.Permissions;

class App {
    static void Main(string[] args) {
        // serializing a permission set
        IPermission a = new EnvironmentPermission(
            EnvironmentPermissionAccess.Read,
            "MyEnvironmentVar");
        IPermission b = new FileDialogPermission(
            FileDialogPermissionAccess.Open);
        PermissionSet ps = new PermissionSet(
            PermissionState.None);
        ps.AddPermission(a);
        ps.AddPermission(b);
        Console.WriteLine(ps.ToXml());
    }
}

Many people build their applications so that new components can be plugged in after their product has been tested and shipped. It's very difficult to secure these types of applications because there's no way you can test every possible code path for bugs and security holes. If your application is managed, however, there's a nifty feature provided by the CLR that you can use to lock down these extensibility points. By declaring a permission object or a permission set and calling PermitOnly or Deny, you add a marker on your stack that chokes down the permissions granted to any code you call. By doing this before calling to some plug-in, you can restrict what the plug-in can do. For instance, a plug-in that's supposed to do amortization calculations shouldn't need any access to the file system. This is just another example of least privilege, where you can protect yourself ahead of time. Be sure to document these restrictions and be aware that highly privileged plug-ins will be able to get around these restrictions with the Assert statement.
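As a sketch of that technique under the .NET Framework's code access security model, the host below grants nothing but execution rights to a hypothetical plug-in before calling it (ICalculator and the amortization call are made-up names for illustration):

using System;
using System.Security;
using System.Security.Permissions;

public interface ICalculator {
    decimal Amortize(decimal principal, double rate, int months);
}

class PluginHost {
    public static decimal RunPlugin(ICalculator plugin) {
        // choke down the grant for everything called from this point on:
        // execution only, no file, registry, or network access
        PermissionSet minimal = new PermissionSet(PermissionState.None);
        minimal.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        minimal.PermitOnly();
        try {
            return plugin.Amortize(100000m, 0.06, 360);
        }
        finally {
            // remove the stack marker once the untrusted call returns
            CodeAccessPermission.RevertPermitOnly();
        }
    }
}

Remember that, as noted above, a fully trusted plug-in can still escape this restriction with an Assert.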

8. Pay Attention to Failure Modes

Admit it. You hate writing error handling code just as much as the next guy. There are so many ways a piece of code can fail; it's just depressing thinking about it. Most programmers, ourselves included, would much rather focus on the normal path of execution. That's where the real work gets done. Let's get that error handling done as quickly and painlessly as possible and move on to the next line of real code.

Sadly, this is not a safe frame of mind. We need to pay much closer attention to failure modes in code. These bits of code are often written with little attention to detail and often go completely untested. When was the last time you made absolutely sure you stepped your debugger through every single line of code in a function, including every single one of those little error handlers?

Untested code often leads to security vulnerabilities. There are three things you can do to help alleviate this problem. First of all, pay just as much attention to those little error handlers as you do to your normal code. Think about the state of the system when your error-handling code is executing. Are you leaving the system in a valid and secure state? Second, once you write a function, step your debugger through it several times, ensuring that you hit every error handler. Note that even this technique may not uncover subtle timing errors. You may need to pass bad arguments to your function or adjust the state of the system in some way that causes your error handlers to execute. By taking the time to step through the code, you are slowing yourself down long enough to take at least a second look at the code and the state of the system at the time it runs. We've discovered many flaws in our programming logic by carefully stepping through code in a debugger; it's a proven technique. Use it. Finally, make sure your test suites force your functions to fail. Try to have test suites that exercise every line of code in your function. They can help you discover regressions, especially if you automate your tests and run them after every build.
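As a small sketch of what forcing failure can look like, the following self-contained test driver (the ValidateShippingId helper is hypothetical, mirroring the rule from tip 4) feeds deliberately bad input and checks that every case is rejected:

using System;
using System.Text.RegularExpressions;

class FailureModeTests {
    // hypothetical validator: a shipping ID is 4 to 10 digits, nothing else
    static bool ValidateShippingId(string id) {
        return Regex.IsMatch(id, @"^\d{4,10}$");
    }

    static void Main() {
        // every entry here must be rejected; an accepted entry means the
        // failure path has a hole in it
        string[] badIds = { "", "abc", "1001' DROP table shipping --",
                            new string('9', 11) };
        foreach (string id in badIds) {
            Console.WriteLine(ValidateShippingId(id)
                ? "FAIL: accepted \"" + id + "\""
                : "ok: rejected \"" + id + "\"");
        }
    }
}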

There's one more very important thing to say about failure modes. Be sure that if your code fails, it leaves the system in the most secure state possible. Here's some bad code:

bool accessGranted = true; // optimistic!
try {
    // see if we have access to c:\test.txt
    new FileStream(@"c:\test.txt",
                    FileMode.Open,
                    FileAccess.Read).Close();
}
catch (SecurityException) {
    // the CLR denied access
    accessGranted = false;
}
catch {
    // something else happened, but accessGranted is still true!
}

Let's say that as far as the CLR is concerned, we're granted access to the file. In this case, a SecurityException won't be thrown. But what if, for instance, the discretionary access control list (DACL) on the file doesn't grant us access? In that case, a different type of exception (UnauthorizedAccessException) is thrown and lands in the second catch block, and because of our optimistic assumption in the first line of code, we'll never know it.

A better way to write this code is to be pessimistic:

bool accessGranted = false; // pessimistic!
try {
    // see if we have access to c:\test.txt
    new FileStream(@"c:\test.txt",
                    FileMode.Open,
                    FileAccess.Read).Close();
    // if we're still here, we're good!
    accessGranted = true;
}
catch {}

This is much more robust, because no matter how we fail, we'll fall back to the most secure mode.

9. Impersonation is Fragile

When writing server applications, you'll often find yourself using, directly or indirectly, a convenient feature of Windows called impersonation. Impersonation allows each thread in a process to run in a distinct security context, typically the client's security context. For instance, when the file system redirector receives a request for a file via the network, it authenticates the remote client, checks that the client's request doesn't violate the DACL on the share, then attaches the client's token to the thread handling the request, thus impersonating the client. This thread can then access the local file system on the server using the security context of the client. This is convenient since the local file system is already secure; it will do an access check that considers the type of access being requested, the DACL on the file, and the impersonation token on the thread. If the access check fails, the local file system reports this to the file system redirector, which can then send a fault back to the remote client. This is incredibly convenient for the file system redirector because it simply passes the buck to the local file system and lets the local file system do its own access checking, just as if the client were local.

This is all well and good for simple gateways like the file system redirector. However, impersonation is often used in other, more complex applications. Take a Web application for instance. If you're writing a classic unmanaged ASP application, ISAPI extension, or an ASP.NET application which specifies

<identity impersonate='true' />

in its Web.config file, you are running in an environment with two different security contexts: you have a process token and a thread token, and generally speaking, the thread token will be used for access checks (see Figure 3). Say you are writing an ISAPI application that runs inside the Web server process. Your thread token is likely IUSR_MACHINE, given that most requests are unauthenticated. But your process token is SYSTEM! Say your code is compromised by a bad guy via a buffer overflow exploit. Do you think the bad guy will be content with running as IUSR_MACHINE? No way. It's very likely that his attack code will call RevertToSelf to remove the impersonation token, hoping to elevate his privilege level. In this case, he'll succeed quite nicely. Another thing he can do is call CreateProcess. The token for that new process will be copied not from the impersonation token, but from the process token, so the new process runs as SYSTEM.

Figure 3 Checking

What's the solution to this little problem? Well, besides making sure you don't have any buffer overflows to begin with, remember the principle of least privilege. If your code doesn't need the god-like privileges afforded to SYSTEM, don't configure your Web application to run inside the Web server process. If you simply configure your Web application to run with medium or high isolation, your process token will be IWAM_MACHINE. You'll have virtually no privileges at all, and this sort of attack won't be nearly as effective. Note that in IIS 6.0, which will be a component of Windows .NET Server, no user-written code runs as SYSTEM by default. This is based on the realization that developers do make mistakes, and any assistance the Web server can provide to reduce the privileges given to code is a good thing, just in case there is a security bug in the code.

Here's another gotcha that COM programmers can run into. COM has a nasty tendency to play games with threads. If you make a call to an in-process COM server whose threading model doesn't match that of the calling thread, COM will execute the call on a different thread. COM will not propagate the impersonation token on the caller's thread, so the result is that the call executes in the security context of the process, not of the calling thread. What a surprise!

Here's another scenario where impersonation can bite you. Say you have a server that accepts requests via named pipes, DCOM, or RPC. You authenticate your clients and impersonate them, opening kernel objects on their behalf while impersonating. Let's say you forget to close one of these objects (for instance, a file) when the client disconnects. When the next client comes along, you authenticate and impersonate that client, and guess what? You can still access the file that was "leaked" from the previous client, even if the new client isn't granted access to the file. For performance reasons, the kernel only performs access checks on objects when you first open them. Even if your security context changes later on because you're impersonating somebody else, you will still be able to access this file.

Each of the scenarios we've mentioned so far is a reminder that impersonation is a convenience for server developers, and it's a fragile convenience at that. Pay close attention to your code when you're running with an impersonation token.

10. Write Apps that Non-admins Can Actually Use

This really is a corollary of the principle of least privilege. If programmers continue to produce code that doesn't run well on Windows unless the user is an administrator, how the heck can we ever expect to shake free of the stigma of targeting an "insecure" system? Windows has a very robust set of security features, but if users are forced to run as administrators to get anything done, they aren't getting much benefit from these features.

How can you help? Well, first of all, eat your own dogfood. Quit running as an administrator yourself. You will learn very quickly the pain of using programs that were not designed with security in mind. The other day, I (Keith) installed some software provided by the maker of my handheld device that was designed to synchronize data between my desktop and the device. So, just as I usually do, I logged off my normal user account, logged back in using the built-in administrator account, installed the software, then logged back in to my normal account and tried to run the software. Well, the application promptly popped up a dialog saying it could not access some data file it needed, then proceeded to blow up with an access violation. Folks, this was a piece of software from a major vendor in the handheld space. There is no excuse for this!

After running FILEMON from https://sysinternals.com, I quickly discovered that the application was trying to open a data file for write access that it had installed in the same directory as its executables. When applications are installed into the Program Files directory, as they should be, they should never, ever try to write data to that directory. There's a reason Program Files has a restricted access control policy. We don't want users writing to those directories because that could easily allow one user to leave a Trojan horse behind for another user to execute. In fact, this stipulation is part of the basic logo requirements for Windows XP (see https://www.microsoft.com/winlogo).
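Per-user data belongs under the user profile instead. A minimal sketch, assuming hypothetical vendor and application names, might look like this:

using System;
using System.IO;

class SettingsStore {
    // store per-user data under Application Data, which the user can write
    // to, rather than under Program Files, which should stay read-only
    public static string GetSettingsPath() {
        string dir = Path.Combine(
            Environment.GetFolderPath(
                Environment.SpecialFolder.ApplicationData),
            @"MyCompany\MyApp");                  // hypothetical names
        Directory.CreateDirectory(dir);           // no-op if it already exists
        return Path.Combine(dir, "settings.xml");
    }
}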

We hear way too many programmers give excuses for why they choose to run as administrators when developing code. If we all keep ignoring the problem, it's only going to get worse. Folks, it doesn't take admin privileges to edit a text file. It doesn't take admin privileges to compile or debug a program that you started. When you need admin privileges, run individual programs with elevated privileges using the RunAs feature of the operating system (see the November 2001 Security Briefs column). If you are writing tools for developers to use, you have an extra responsibility to the community. We need to stop this vicious circle of folks writing code that only administrators can run, and the only way it's going to happen is if we do it at the grassroots level.

Check out Keith's Web site for more info on how developers can easily run as non-admins at https://www.develop.com/kbrown. Also be sure to pick up a copy of Michael's book, Writing Secure Code (Microsoft Press, 2001), which has tips on how to write apps that run well in a non-admin environment.

For related articles see:
SQL Server Security Modes
Avoiding Buffer Overruns

Michael Howard is a Security Program Manager in the Secure Windows Initiative group at Microsoft. He is the coauthor of Writing Secure Code and the author of Design Secure Web-based Applications for Microsoft Windows 2000, both published by Microsoft Press.

Keith Brown works at DevelopMentor researching, writing, teaching, and promoting an awareness of security among programmers. Keith authored Programming Windows Security (Addison-Wesley, 2000). He coauthored Effective COM, and is currently working on a .NET security book. He can be reached at https://www.develop.com/kbrown.