Secure Habits
8 Simple Rules For Developing More Secure Code
Michael Howard
Contents
Habit #1: Take Responsibility
Habit #2: Never Trust Data
Habit #3: Model Threats against Your Code
Habit #4: Stay One Step Ahead
Habit #5: Fuzz!
Habit #6: Don't Write Insecure Code
Habit #7: Recognize the Strategic Asymmetry
Habit #8: Use the Best Tools You Can
I have been lucky enough to work with thousands of good developers over the years who wanted to learn how to write more secure software. During this time, I've also learned a great deal from people who are very good at building secure systems, and this got me thinking. I wondered if there were common skills or habits the "secure developers" share. It turns out the answer is a resounding yes! This article presents that list of habits shared by developers of secure code.
Now one thing I know for sure is that anyone reviewing this list will immediately see missing habits. That's fine. I know there are other great ideas out there. This is my list! So, with that out of the way, here are the exemplary habits I've noticed over the years.
Habit #1: Take Responsibility
This is a variation of the classic "There is no silver bullet" comment made over 25 years ago by Fred Brooks in The Mythical Man-Month. Getting security right in your product is totally up to you. No one else—and certainly no magic tool or programming language—solves all security ills. Don't get me wrong, I like source code analysis tools, but they will not magically fix all your security vulnerabilities. Only you can do that.
Secure products are built by developers who create secure designs and write secure code. Ultimately, writing code is an individual endeavor. You are an individual, and you cannot be replaced by a tool. Therefore, the security of your product is your responsibility! The Blaster and CodeRed worms exploited code that was written by individuals (see Figure 1).
Figure 1 Vulnerable Code Is Written by Individuals
Remember that all code will be scrutinized and possibly attacked. That's totally okay. Being attacked is fine. The big question is, will your code be compromised? Only you can determine that outcome. So take pride in your code. You must be happy with the quality of your code and be able to sleep at night knowing that if it's attacked, you've done everything possible to prevent the code from being whacked.
If at all possible, have your code peer-reviewed by a security expert. Don't have it examined by someone who knows nothing about security and expect that person to find security bugs and vulnerabilities. Go out of your way to have someone who really knows what he is doing look at the code.
And don't be so bigheaded that you can't ask for help when you need it. I mentioned you shouldn't rely solely on tools, but you should definitely take advantage of any that are readily available to you. Do run all available source code analysis tools on your code, and run them often. Take advantage of every possible defensive language construct and library trick available to you. For example, in C#, wrap network-facing code that performs array access, where the array index is derived from a network request, in checked operators to detect possible integer arithmetic errors.
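To make that concrete, here is a minimal sketch, with hypothetical method and parameter names, of a checked block around index arithmetic derived from an untrusted request. If the arithmetic overflows, an OverflowException is thrown instead of silently wrapping into a value that looks like a valid index.

    using System;

    static class RequestHandler
    {
        // offsetFromRequest and lengthFromRequest stand in for values read from
        // an untrusted network request; payload is the buffer being indexed.
        public static byte[] ExtractRecord(byte[] payload, int offsetFromRequest, int lengthFromRequest)
        {
            checked
            {
                // If the addition overflows Int32, OverflowException is thrown here
                // rather than producing a small, bogus value that passes the bounds test.
                int end = offsetFromRequest + lengthFromRequest;

                if (offsetFromRequest < 0 || lengthFromRequest < 0 || end > payload.Length)
                    throw new ArgumentException("Malformed request.");

                var record = new byte[lengthFromRequest];
                Array.Copy(payload, offsetFromRequest, record, 0, lengthFromRequest);
                return record;
            }
        }
    }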
Habit #2: Never Trust Data
I've said this a billion times and I'll say it again: all input is evil until proven otherwise. If you look at the most heinous security vulnerabilities, they all share the common trait that the developer trusted the incoming data. The problem is if your code assumes the data is well formed, what happens if your assumption is incorrect? On a good day, your application will probably crash. On a bad day the attacker could inject malicious code into your process and wreak havoc.
The somewhat whimsical definition of a secure system is one that performs the tasks it is supposed to, and no more. But when there are input trust issues, you can usually get the system to perform other tasks. A cursory analysis of Common Vulnerabilities and Exposures (CVE) data (cve.mitre.org) shows that from 2001-2004, 47 percent of all security vulnerabilities tracked by CVE were input trust issues. The most well-known issues are buffer overruns, integer arithmetic bugs, cross-site scripting, and SQL injection bugs. We are starting to see new variations on this theme, such as XPath injection and Lightweight Directory Access Protocol (LDAP) injection vulnerabilities.
You can remedy input trust issues by following a few simple rules. First, don't look only for things you know are bad; that approach assumes you know every bad input today and can predict every bad input that will ever arrive. Looking for bad things is fine as long as it's not your only defense. A better strategy is to constrain the input to what you know is good. For high-level languages such as C# and Perl, I like to use regular expressions to achieve this.
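Here is a minimal sketch of that approach, assuming a hypothetical product ID format of 4 to 12 letters or digits. Note the ^ and $ anchors; without them the expression matches a substring and lets bad data through.

    using System.Text.RegularExpressions;

    static class InputValidator
    {
        // Constrain input to known-good values: 4-12 ASCII letters or digits only.
        private static readonly Regex ProductId =
            new Regex(@"^[A-Za-z0-9]{4,12}$", RegexOptions.Compiled);

        public static bool IsValidProductId(string input) =>
            input != null && ProductId.IsMatch(input);
    }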
Next, reject what you know to be bad. For example, if someone remotely requests a file through your code and the file name includes a dodgy character (such as : or \), reject the request. And don't tell the attacker why; just say "file not found."
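A minimal sketch of that rule, with hypothetical names, might look like the following; the same vague message is returned whether the name was rejected or genuinely not found.

    using System.IO;

    static class FileRequestHandler
    {
        public static string HandleRequest(string requestedName, string contentRoot)
        {
            // Reject path separators, drive/stream separators, and parent references.
            if (string.IsNullOrEmpty(requestedName) ||
                requestedName.IndexOfAny(new[] { ':', '\\', '/' }) >= 0 ||
                requestedName.Contains(".."))
            {
                return "File not found."; // deliberately uninformative
            }

            string fullPath = Path.Combine(contentRoot, requestedName);
            return File.Exists(fullPath) ? File.ReadAllText(fullPath) : "File not found.";
        }
    }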
Finally, and this does not work for all scenarios, sanitize the data. For example, in the case of a Web server, you should HTML-encode output that came from potentially untrusted input.
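For example, here is a minimal sketch of HTML-encoding a comment that originated from an untrusted user before writing it into a page (the surrounding method is hypothetical; WebUtility.HtmlEncode does the encoding):

    using System.Net;

    static class CommentRenderer
    {
        // Encode < > & " so untrusted text renders as text instead of markup.
        public static string Render(string untrustedComment) =>
            "<p>" + WebUtility.HtmlEncode(untrustedComment) + "</p>";
    }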
Habit #3: Model Threats against Your Code
You do have threat models, right? Threat models allow you to understand the potential risks to your software and to make sure you have the appropriate mitigations in place. But the benefits of threat modeling extend beyond secure design. Threat models can help with your code quality, too. Threat models tell you where the data came from. Is the data remote or local? Is the data from anonymous users or is it from more trusted (authenticated) users, perhaps administrators?
With this information at hand, you can determine whether your defenses are appropriate. For example, code accessible to anonymous and remote users had better be very secure code. I'm not saying that code accessible only to local admins should not be secure, but I am saying that remotely accessible code, especially code running by default, has to be bulletproof and that means more defenses, more review, and more attention to detail. Moreover, the threat model can tell you the nature of the data being protected. High value business data and personally identifiable information, for example, must be protected very well. Are your defenses appropriate?
Make sure your threat models are accurate and up to date, then identify all the entry points into your code and rank them by accessibility—remote versus local and high-privilege versus low-privilege (or no-privilege) users. The most accessible code should be reviewed the deepest and the earliest. Finally, review all code along anonymous data paths; in other words, start at each anonymously accessible entry point and trace the data along that path, checking for code correctness.
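As an illustration (the types and scoring are hypothetical, not part of any threat modeling tool), a sketch of ranking entry points so the most accessible code is reviewed first might look like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    enum Reach { Local, Remote }
    enum Caller { Admin, Authenticated, Anonymous }

    record EntryPoint(string Name, Reach Reach, Caller Caller);

    static class ReviewPlanner
    {
        // Higher score means more accessible, so review it deeper and earlier.
        private static int Score(EntryPoint e) =>
            (e.Reach == Reach.Remote ? 2 : 0) +
            (e.Caller == Caller.Anonymous ? 2 : e.Caller == Caller.Authenticated ? 1 : 0);

        public static IEnumerable<EntryPoint> InReviewOrder(IEnumerable<EntryPoint> entryPoints) =>
            entryPoints.OrderByDescending(Score);
    }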
Habit #4: Stay One Step Ahead
The security landscape evolves constantly. It seems that every week there are new variations of security issues. This means you must evolve and learn about new threats and defenses or you'll suffer the consequences.
Some simple strategies to stay ahead of the curve include reading a few good books on the subject of software security every now and then. Also, learn from your past mistakes and, better still, the mistakes of others. You can do this by reading bugtraq—go to securityfocus.com and sign up to receive bugtraq posts in your inbox. But trust me on this next bit of advice: create an inbox rule to move the posts to a special folder so you can handle the volume. It's significant.
Habit #5: Fuzz!
Fuzzing is a testing technique that was invented to find reliability bugs. It turns out that a percentage of reliability bugs are security vulnerabilities waiting for the right exploit! Sure, a buffer overrun might crash an application, but given a well-crafted malicious payload, the crash might not happen, and the attacker could run code to do his bidding instead. Our motto around here is "today's denial of service is tomorrow's code execution."
Just about every file-parsing bug/vulnerability was found by dumb luck or fuzzing. Microsoft has found security vulnerabilities in parsers for a number of file formats, including XLS, PPT, DOC, and BMP. Most vendors have had similar vulnerabilities because parsing complex data structures is a complex task, complex code will have bugs, and some of those bugs will reveal security vulnerabilities.
You must fuzz all code that parses files and network traffic. The Security Development Lifecycle (SDL) at Microsoft is very specific about what this means for file formats. You must fuzz all parsers with 100,000 iterations of malformed files using a file fuzzer. There are some reasonable fuzzers available, and we included a file fuzzer, as well as C++ source code, in the book The Security Development Lifecycle, which I coauthored with Steve Lipner (microsoft.com/learningbook.aspx?ID=8753).
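If you don't have a fuzzer handy, even a naive one is better than nothing. The following is a minimal sketch of a "dumb" file fuzzer (it is not the fuzzer from the SDL book): it takes a well-formed sample file, flips a few random bytes, and hands the result to whatever parser you pass in, logging every failure.

    using System;
    using System.IO;

    static class DumbFuzzer
    {
        public static void Run(string sampleFile, int iterations, Action<byte[]> parseFile)
        {
            byte[] original = File.ReadAllBytes(sampleFile);
            var rng = new Random();

            for (int i = 0; i < iterations; i++)
            {
                // Copy the known-good file and corrupt a handful of random bytes.
                byte[] mutated = (byte[])original.Clone();
                int flips = rng.Next(1, 16);
                for (int f = 0; f < flips; f++)
                    mutated[rng.Next(mutated.Length)] = (byte)rng.Next(256);

                try
                {
                    parseFile(mutated); // the parser under test
                }
                catch (Exception ex)
                {
                    // Per Habit #5, each of these is a potential exploit, not "just a crash."
                    Console.WriteLine($"Iteration {i}: {ex.GetType().Name}: {ex.Message}");
                }
            }
        }
    }

In practice you would run the parser out of process so hard crashes (access violations, for instance) are caught as well, and you would save each mutated file that triggers a failure so the bug can be reproduced.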
One last note about fuzzing: if you get a crash, don't assume it is only a crash. A good percentage of these so-called crashes are begging for someone to write an exploit. So don't simply dismiss a crash as "just a crash."
Habit #6: Don't Write Insecure Code
At Microsoft, we use the concept of quality gates to help reduce the chance a developer will check vulnerable code into the product. The gates run a battery of source code analysis tools on the code prior to check-in and flag any issues, and any identified issues must be fixed before the check-in can be completed. You can also enforce strict code rules, such as banning calls to strcpy or strncat and disallowing weak cryptography. (Microsoft has banned over 100 C runtime functions for new code!) For example, with regard to cryptography, we don't allow DES (the key length is too small), MD4, or MD5 (both are now broken) in new code unless an industry standard dictates their use.
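As a small sketch of what honoring a "no weak crypto" gate looks like in new code, hash with SHA-256 rather than MD4 or MD5:

    using System.Security.Cryptography;

    static class IntegrityCheck
    {
        public static byte[] ComputeDigest(byte[] data)
        {
            using (var sha256 = SHA256.Create())      // allowed
            {
                return sha256.ComputeHash(data);
            }
            // using (var md5 = MD5.Create()) { ... } // banned in new code: MD5 is broken
        }
    }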
Don't reinvent functionality. If you have code that parses a specific file format, you don't need two or three sets of parsing code; stick with the one set, make it robust, and wrap it up in a form that can be used across multiple projects.
Finally, remember that tools are no replacement for knowing how to write secure code. That's why security and privacy education are so important. You need a solid understanding of the concepts to make the judgment calls and insights your tools aren't capable of making.
Habit #7: Recognize the Strategic Asymmetry
This is one of my favorites. Remember that as a software developer, the security odds are stacked against you. I like to call this the "Attacker's Advantage and the Defender's Dilemma." You need to get the code and the designs 100 percent correct 100 percent of the time, and that is impossible. To make matters worse, you must reach this insurmountable goal on a fixed budget and on time, while also meeting requirements for supportability, compatibility, accessibility, and other "-ilities." An attacker can spend as long as he wants to find one bug, and then announce to the world that your application is insecure.
In Habit #6, I mentioned that you should stop writing new insecure code. For Habit #7, you should focus on all code because attackers attack all code regardless of age. Spend time reviewing old code for security vulnerabilities, and seriously consider deprecating old, insecure functionality. If you use agile development methods, you should think about dedicating one or more sprints to fixing old code to bring it up to the quality of newer code.
Habit #8: Use the Best Tools You Can
Finally, use the best tools you possibly can. I love source code analysis tools, and I love any technology that helps me write more secure code. As I mentioned, tools are no panacea, but they help. A lot! Tools also help with the scale of source code analysis: they can scan huge quantities of code rapidly, much faster than a human could, and that helps give you a feel for how "bad" some code might be.
A favorite trick of mine is to compile code using the highest possible warning levels, for example /W4 when using Visual C++® or -Wall when using gcc. If you see a large number of warnings in the code, perhaps the code has other bugs that were not found by the compiler or other tools. Such code should be subject to a greater degree of scrutiny before it ships (see Habit #3).
These are eight good habits I have seen employed by developers I deeply respect, both inside and outside of Microsoft. By themselves, the habits won't make you a stellar, secure developer, but they surely will help!
Michael Howard is a senior Security Program Manager at Microsoft focusing on secure process improvement and best practices. He is the coauthor of five security books, including The Security Development Lifecycle, Writing Secure Code, and 19 Deadly Sins of Software Security.