Ten Tips for Designing, Building, and Deploying More Secure Web Applications

By Matt Clapham

See other Security Tip of the Month columns.

On This Page

Abstract
Introduction
1. Never Directly Trust User Input
2. Services Should Have Neither System nor Administrator Access
3. Follow SQL Server Best Practices
4. Protect the Assets
5. Include Auditing, Logging, and Reporting Features
6. Analyze the Source Code
7. Deploy Components Using Defense in Depth
8. Turn Off In-Depth Error Messages for End Users
9. Know the 10 Laws of Security Administration
10. Have a Security Incident Response Plan
Resources
Disclaimer


Abstract

This paper provides an introductory set of guidelines for designing, building, and deploying Web applications and services in a more secure manner. The intended audience is those who are new to Web applications or IT security. It is by no means a complete list, but it serves as a starting point for Web applications and Web services.


Introduction

This paper provides best-practice tips collected from various other sources. It is intended for use when deploying Web applications on the Microsoft Windows family of operating systems, but it can be applied generically elsewhere. The topics are divided into 10 guidelines (tips) that are then subdivided and clarified.

1. Never Directly Trust User Input

The plain truth is that someone, somewhere out there will try to attack your Web application. All user input must be carefully validated. At a bare minimum it needs to be checked for length, type, format, and range.

Design Security into the Product
Consider security from the beginning of the design process; don't try to add it later. Think like an attacker and create a threat model from the first design revision onward. Threat modeling is an integral part of the design process. Analyzing potential threats before writing code reduces both the need for mitigation later and the cost to fix. Careful design can decrease the attack surface area exposed to malicious users.

Don't Solely Validate User Input at the Client
Does your Web application do all of its input validation through client-side JavaScript? How will the business logic layer respond when an attacker uses a direct form submission, without the client-side checks? User input must be validated at the Web application itself, because an attacker will likely bypass the client-side checks once they've been discovered. A regular expression that permits only well-formed data and rejects any malformed requests is a handy, efficient way to validate the input, and it can be readily reused for the next point as well.
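As an illustrative sketch (Python here for brevity; the field and pattern are hypothetical examples, not from this article), a server-side whitelist check might look like this:

```python
import re

# Hypothetical rule: a US ZIP code field must be 5 digits,
# optionally followed by a hyphen and 4 more digits.
ZIP_PATTERN = re.compile(r"\d{5}(-\d{4})?")

def is_valid_zip(value: str) -> bool:
    """Server-side revalidation: accept only well-formed input, reject the rest."""
    # fullmatch requires the whole string to match; no partial matches slip by.
    return ZIP_PATTERN.fullmatch(value) is not None
```

The same check can run again at the business logic layer; any client-side JavaScript validation then remains purely a usability convenience.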

Revalidate Data at Each Layer
Rechecking that the data is of a valid format and type at the business logic and data storage layers will cause a slight performance degradation, but it's an excellent defense-in-depth measure. Additionally, it helps keep a flaw or compromise in one layer from affecting the others. If revalidation costs too much performance, consider authenticating the source of the data so that it can be traced back to another trusted component of the overall system.

Don't Assume Insider Attacks Won't Occur
All your employees are completely trustworthy, right? Don't give up the game to insiders. Trust, but verify. This is especially important in large organizations or for internal Web applications that present personally identifiable information (PII) such as social security or credit card numbers. Keep sensitive information on a need-to-know basis.

2. Services Should Have Neither System nor Administrator Access

Far too often, developers and testers run as the local administrator because it's the path of least resistance. This practice is carried forward to Web development, where services are allowed to run as Local System because "things break if we don't." Allowing services to run with high privilege is a fatal flaw, because it allows a small error in the Web application to turn into a completely compromised system. Be sure none of the Web applications you create or deploy require administrative rights to the servers they run on.

Use Least-Privileged User Accounts
Services and Web application pools should run as a local user, a domain user, or one of the predefined accounts (Network Service and Local Service). MSDN has a great article that describes the differences (see the Resources section of this document). Running with least privilege reduces the magnitude of what a compromised service can do: configuring the system so that service accounts have only the bare minimum access required will minimize the impact should a compromise occur.

Use a Domain User Account for Intermachine Access
Create a user in the domain of the Web server and use it for running the Web application. Configure the Internet Information Services (IIS) 6.0 application pool to run as the selected account and to use it for anonymous Web server authentication requests. The user account can also be used to handily control authentication and authorization between the different servers that make up the service. For example, the account can be used to allow the front-end Web servers to communicate with the business logic layer in the middle tier. If multiple front-end Web servers communicate with a common middle tier, consider using a unique account for each server or configuring the service to run as Network Service if possible. There is a caveat here, though: if an intruder has access to the Web server's domain and knows the user name(s), a denial-of-service attack can be perpetrated by attempting to log on as the service account repeatedly until it gets locked out.

Not Allowed to Log On Locally
Service accounts should not be allowed to log on to the console or through a remote desktop session under normal circumstances. The account should only be permitted to log on as a service.

Change the Password Periodically
There really are attackers out there attempting to guess passwords. Change the password of the service accounts at least quarterly to reduce the likelihood that an attacker will successfully guess the current password. It's harder to hit a moving target. To further complicate brute-force guessing, make sure the password is long (15 characters or more) and includes some complexity (numbers, symbols, and a mixture of cases).
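To illustrate that policy (a Python sketch; the length and symbol set shown are examples, not requirements from this article), a service-account password generator might look like this:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def make_service_password(length: int = 20) -> str:
    """Generate a long password containing all four character classes."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every required character class is present.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SYMBOLS for c in pw)):
            return pw
```

The actual rotation of the account password would still be performed through the usual domain administration tooling; the sketch only shows how to satisfy the length and complexity guidance.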

3. Follow SQL Server Best Practices

Many complex Web applications use a SQL Server database on the back end. These are some handy tips for making the security features of SQL Server work for you.

Don't Use SA
Don't use the system administrator (SA) login when connecting a Web application to its associated database. Instead, configure specific users and roles in the database with only the privileges required. Using the SA login is as bad as running a service as Administrator: SA can do just about anything to the database in SQL Server, including dropping tables and all the data contained therein.

Use Roles and Logins to Control SQL Server Access
Start restrictive and only add permissions for the users as necessary. Services running as specific users should only be allowed to have certain interactions with the database. This can be readily accomplished through the use of SQL Server logins that are only granted to operate with a predefined role for the Web application.

No Direct Table Access for Service Account Users
As a defense-in-depth measure, service accounts shouldn't be permitted to access data tables directly. This helps to minimize the impact should a compromise occur, because the attacker will have only a limited subset of things that can be done with the database. If the database contains sensitive customer data, consider extending this restriction to all staff, with the caveat that any standard reports based on the data will need to be created using stored procedures.

Use Stored Procedures with Parameterized Queries
All database interaction between the Web application and SQL Server should be through parameterized queries that are contained within stored procedures. This way, all interactions with the database tables are strictly controlled, and input parameters are carefully type checked before use. This method of SQL Server interaction coupled with the above restriction on direct table access makes query injection nearly impossible. For example:

CREATE PROC sp_GetUserEmailAddress
	@UserID uniqueidentifier
AS
	SELECT EmailAddress FROM Users WHERE uid_UserID = @UserID
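The application side of this pattern binds parameters rather than concatenating strings. As a minimal sketch (Python with SQLite standing in for SQL Server; the table and column names are illustrative), note how the bound value cannot alter the query text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (uid_UserID TEXT, EmailAddress TEXT)")
conn.execute("INSERT INTO Users VALUES (?, ?)", ("42", "user@example.com"))

def get_user_email(user_id):
    # The driver passes user_id as data, never as SQL text, so input
    # such as "42' OR '1'='1" cannot change the statement's meaning.
    row = conn.execute(
        "SELECT EmailAddress FROM Users WHERE uid_UserID = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None
```

An injection attempt simply becomes a user ID that matches no row, rather than a rewritten query.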

Use the Microsoft SQL Server Best Practices Analyzer
A handy way to double-check your Web application's use of Microsoft SQL Server is to grab a copy of the free Best Practices Analyzer tool. Run it against your Web application's database and consider implementing the suggestions for each item in the report.

4. Protect the Assets

All Web applications have assets to protect, from CPU resources to network bandwidth to private user data. Depending on the value of the asset, consider going to greater lengths to protect each appropriately. Once digital information has been stolen, it is virtually impossible to repossess it.

Cryptographic Keys
Does your Web application do any sort of cryptographic operations? How is the encryption key protected? Consider whether the private and/or symmetric keys should be stored in some type of hardware security module (HSM) to prevent theft by attackers or insiders. An HSM won't totally stop a determined intruder, but using one certainly adds an extra layer of defense. Additionally, auditing the use of said keys can provide a trail that points to misuse by someone with access. Alternatively, the Data Protection API (DPAPI) that's part of Microsoft Windows Server 2003 can be readily leveraged for reasonable protection of less sensitive, shorter-lifespan key material. At a bare minimum, the Web application or service should be using the Windows Crypto APIs. Don't try to reimplement well-known cryptographic algorithms or create new and interesting ones. You'll likely get it wrong, so leave the cryptography to the professionals and focus on other first lines of defense.

Service Account Credentials
Are the user name and password for the domain user the Web application runs as written on a piece of paper that's stuck to the server? Service account credentials should be known only by a select few staff members, and any written or stored copies of said credentials should be access-restricted (for example, kept in a safe or an encrypted file).

User Data
Is user data sitting in a database that any inside user can access? Does that data contain PII? Is that information financially sensitive, such as credit card numbers? Keep access to user data on a need-to-know basis. Additionally, management of certain types of data is regulated, so check with your legal department as to the levels of protection required by law. For example, storing the data in an encrypted format and carefully monitoring who accesses the unencrypted versions might be sufficient for your particular Web service.

5. Include Auditing, Logging, and Reporting Features

Reporting isn't just for bean counters. Robust logging and reporting will be useful not only for those needing to analyze the Web application's metrics, but also for the operations staff and incident response teams. It's an important part of any Web application or service.

Keep Logging Data Safe
The data collected about transactions and various other details of the Web application should be kept in a safe, access-restricted location. Also consider moving the raw logging data offline to a more secure location on a regular basis (daily or weekly). Think carefully about the retention and rotation of logging records. Does your logging system automatically delete older entries when it gets full? Should it? A common way for attackers to cover their tracks is to spam the logging system with benign, legitimate entries that force the excising of the older entries that showed evidence of misuse.

Keep a Historical Record
Your lawyer should be able to recommend how long the historical logging data should be kept to satisfy any legal guidelines or requirements. But consider keeping monthly, quarterly, and yearly summaries of data that no longer needs to be kept in full. That way, trend data from long ago is preserved for posterity and possible use in identifying the date of compromise during an incident response.

Sign the Data
Consider digitally signing the logging data so that any tampering can be detected; a provably untampered log may help if it's ever presented as evidence in a court case.
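One lightweight way to make tampering detectable is a keyed MAC over each entry (a Python sketch using an HMAC rather than full public-key signatures; the key shown is a placeholder that would in practice be protected, for example by an HSM or DPAPI as discussed earlier):

```python
import hashlib
import hmac

# Placeholder key: real key material belongs in an HSM or a protected key store.
LOG_SIGNING_KEY = b"replace-with-protected-key-material"

def sign_entry(entry: str) -> str:
    """Return a hex MAC covering the log entry."""
    return hmac.new(LOG_SIGNING_KEY, entry.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_entry(entry: str, mac: str) -> bool:
    """True only if the entry is byte-for-byte what was originally signed."""
    return hmac.compare_digest(sign_entry(entry), mac)
```

Verification fails for any altered entry, which gives the incident response team evidence that the log itself is trustworthy.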

Collect and Centralize
Some details may only need to be logged on a per server basis, but by collecting and centralizing the data on the back end, a better picture of the current health and security of the system can be had.

Consider Using the Microsoft Enterprise Library
Microsoft has created a wonderful set of extensible application widgets for reuse in the Enterprise Library. Many common Web application functions are already created and ready for use. For more information, see patterns & practices Enterprise Library in the Resources section of this document.

6. Analyze the Source Code

Careful reviews of source code with an eye for security can reveal potential security holes and risks. Writing Secure Code, Second Edition, has some excellent recommendations on what to watch for.

Hold Code Reviews
Gather the pertinent individuals working on a particular feature and have them review the source code line by line. Any issues found should be verified as changed in the manner agreed upon. Code reviews are useless if you don't fix the problems identified, so plan not only to analyze but also to fix any issues.

Watch Out for Buffer Overflows
Buffer overflows (BOs) happen when some part of the system attempts to write more data into a buffer than it can hold. It's like trying to squeeze one gallon of juice into a one-quart container: the spillage can do anything from creating a Denial of Service (DoS) condition to executing some arbitrary snippet of code. The best way to defend against BOs is not to have them present in the first place. Additionally, the Data Execution Prevention that's part of Windows Server 2003 Service Pack 1 and 64-bit processors provides some extra defense. Native code is especially susceptible to buffer overflow conditions, so carefully review all input and output buffer code. The following example (adapted from Defend Your Code with Top Ten Security Tips Every Developer Must Know) shows a classic function that could lead to a buffer overflow if the source data is not properly length limited:

void DoSomething(char *cBuffSrc, DWORD cbBuffSrc) {
    char cBuffDest[32];
    // Nothing verifies that cbBuffSrc <= sizeof(cBuffDest), so a longer
    // source buffer overruns the 32-byte stack buffer.
    memcpy(cBuffDest, cBuffSrc, cbBuffSrc);
}

Watch Out for Integer Overflows
Integer overflows are similar to BOs in that a numerical value is pushed beyond the range its type can represent. Even managed code is susceptible to integer overflow conditions, and, like BOs, the results can be anything from DoS to running some bit of code the attacker desires. Carefully review all arithmetic performed on values that arrive as input. In the example below, if i is equal to 0 it passes the range check, but the division then throws a DIVIDE BY ZERO exception:

Int16 i = getFromNetwork();
if (i <= MAX) {
    Int32 j = 8192 / i;  // i == 0 passes the check but divides by zero
}

Furthermore, when the variable req is incremented past 32767 in the example below, it wraps around and becomes a negative number, and an array bounds exception will be thrown:

Int16 req = 0;
while (true) {
    arr[req] = DateTime.Now;
    req++;  // wraps from 32767 to -32768
}

Like buffer overflows, the best defense against integer overflows is not to have them in code. Carefully review all dynamic memory requests for the possibility of integer overflow and test it accordingly.
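The Int16 wraparound above can be demonstrated outside managed code as well. This Python sketch reinterprets values as 16-bit signed integers, mirroring what Int16 arithmetic does silently:

```python
import ctypes

def as_int16(value: int) -> int:
    """Truncate an integer to 16 bits and interpret it as signed, like Int16."""
    return ctypes.c_int16(value).value
```

Incrementing past 32767 therefore yields -32768, which is why the array index in the loop above eventually goes negative.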

Watch Out for Cross-site Scripting
Most complex Web applications echo user-supplied data back into the pages they generate or use it to call into other Web applications, and both actions carry risk. If a URL or any output written to the page comes from user input at all, there's a high risk that it could be manipulated and then used to execute an attack from one user against another. Review the code that handles user interaction so that a cross-site scripting attack is unlikely. Consider the following script snippet:

<script language="C#" runat="server">
  Response.Write("Hello, " + Request.QueryString["name"]);
</script>

The source of name isn't trusted; it could contain virtually anything, including something like a Web site redirection script or an attempt to steal a site cookie.
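The standard mitigation is to encode the untrusted value before writing it into the page (in ASP.NET, Server.HtmlEncode does this; the Python sketch below uses html.escape to show the same idea):

```python
import html

def greet(name: str) -> str:
    """Encode untrusted query-string data before echoing it into a page."""
    return "Hello, " + html.escape(name)
```

Any markup in the input is rendered as inert text rather than executed by the victim's browser.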

Watch Out for Canonicalization Issues
File paths and URLs can often be represented in several different ways. Carefully review any functions that handle them, looking for assumptions about format that could lead to unexpected results. For example, a URL can be represented not only as ASCII text, but also encoded in UTF-8.
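A sketch of the hazard (Python; the root path and policy here are hypothetical): decode until the value stops changing, then apply the security check to the canonical form, since a single decoding pass misses double-encoded input:

```python
from urllib.parse import unquote

def is_within_root(requested, root="/wwwroot/"):
    # Fully decode first: "%252e" -> "%2e" -> "." would slip past a single pass.
    decoded = requested
    while True:
        step = unquote(decoded)
        if step == decoded:
            break
        decoded = step
    # Check the canonical form, not the raw request.
    return ".." not in decoded and decoded.startswith(root)
```

Checking the raw request string instead of the canonical form is exactly the class of assumption this tip warns about.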

Search for Risky Functions
Writing Secure Code, Second Edition, has an excellent set of appendixes that list standard functions that are commonly used incorrectly and in such a way as to leave a potential security hole. Consider replacing all references to unsafe functions with the recommended alternatives.

Use Automated Scanning
Using automated source code analysis tools can speed up the code review process and provide consistent feedback. The time required to work these tools into the regular build process will pay for itself later. Although some tools may occasionally give false positives, be sure to review any warnings carefully. Also, turn on "warnings as errors" in the development environment; it may produce an occasional false positive, but it's an excellent first-level code scan. Future versions of Microsoft Visual Studio will include tools such as FxCop and PREfast that search for common problems and then generate concise reports. Afterward, the summary can be reviewed on a case-by-case basis to check each occurrence of the problem and fix the source code appropriately.

7. Deploy Components Using Defense in Depth

Just because a security hole is possible doesn't mean the game is over. Define ways to mitigate the direct risks, as well as the secondary risks if a particular security technology fails. Once the mitigation methods are defined, designed, and built, test them. Remember, well-designed software fails gracefully and to a secure mode; when in doubt, deny access. Verify that the setup and deployment process doesn't configure anything more than the least privileges required for the Web application to operate. A great tool to help with such verification is the VeriTest-Rational Installation Analyzer (see the Resources list in this document). Use the Installation Analyzer both before and after setup to see everything that's changed in between.

Follow Deployment Best Practices for All Dependencies
Does the Web application depend on Microsoft Exchange Server? Does it depend on Microsoft SQL Server 2000? Does the Web site work with IIS 5.0? Regardless of what list of dependent technologies is required, deploy them using the security best practices for each. For some links to common deployment guides, see the Resources list in this document.

Use a Two-Tier or Three-Tier Architecture
The data store for the Web application (typically SQL Server) should not be directly accessible from the same public network that customer traffic arrives on. Keep the public and private networks for your Web application deployment separated by the front-end Web servers. Also, consider separating the business logic from the presentation layer (Web server), because doing so will provide greater flexibility and allow for better deployment security if necessary.

Use SSL on the Web Servers
Does your Web application transmit or receive any sensitive customer data such as personally identifiable information (PII)? If yes, consider encrypting the Web server's session with the client by using either Secure Sockets Layer (SSL) or Internet Protocol Security (IPSec). This will greatly reduce the possibility of an intruder casually observing or actively changing the data in transit. For ease of use, be sure to acquire certificates from a root certification authority your customers' systems will trust. SSL has its limitations, though: it only provides encryption of the session between two end points and only loosely guarantees that the presenting parties are who they say they are. Nevertheless, it's an excellent starting point for any Web application.

Firewalls, Network Segments, and ACLs Everywhere
Use a combination of firewalls, network segmentation, and router access control lists (ACLs) to isolate the various parts of the Web application as much as possible. Such a combination functions as a set of roadblocks that slow down or halt a compromise from spreading, as well as providing the alerts and warnings that allow for an appropriate incident response. For example, the business logic layer (BLL) servers should be the only ones talking to the computers running SQL Server, so make sure the router ACLs enforce that. A stateful packet inspection firewall such as Microsoft Internet Security and Acceleration Server 2004 can also analyze traffic to make sure it indeed matches the format and type for the particular destination port. For example, the stateful packet inspection firewall could help guarantee that the traffic between an instance of SQL Server and the business logic layer looks like well-formed SQL Server traffic and not HTTP or DNS messages. Include appropriate minimal-privilege ACLs on all files used by the Web application or service. In a nutshell, if it's something created for or used by your Web application, it ought to have some sort of ACL on it.

Consider Encrypting Back-End Network Traffic
Does your live deployment share a data center with various other Web sites? If one of those other servers were compromised and listening on the private network, would it see any PII? If so, consider using SSL or IPSec to encrypt network traffic on the back end. That will help prevent snooping and can also be used as a form of intermachine network authentication.

8. Turn Off In-Depth Error Messages for End Users

One of the ways an attacker will case the joint is by looking for subtle details revealed in error messages. After a Web application has been deployed into production (that is, gone live), in-depth error responses and debugging messages to end users should be turned off.

Local Errors Only
If a problem is encountered with the live Web application, send the debugging messages such as stack traces to the Application Event Log. That way, the problem can be diagnosed further without exposing too much detail.
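In ASP.NET, this behavior is controlled by the customErrors element in web.config. A typical production setting (the redirect page name here is just an example) keeps rich errors visible when browsing from the server itself while remote users see only a generic page:

```xml
<configuration>
  <system.web>
    <!-- Detailed errors are shown only when browsing from the server itself;
         remote users are redirected to a generic page. -->
    <customErrors mode="RemoteOnly" defaultRedirect="GenericError.htm" />
  </system.web>
</configuration>
```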

Report Errors to the Back-End
Report in-depth error details like full stack trace, variables, and system information to the auditing, logging, and reporting portion of the Web application. Such data may be useful in analyzing how an intrusion happened, should it occur.

Monitor for Signs of Attack
Is a single client rapidly and repeatedly encountering variations of the same error? Is that not really a valid customer, but an attacker looking for holes? If some of the signs of attack (for example, repeated authentication failures) are encountered, have the system warn operations staff to consider whether a security incident response is necessary. Also, consider deploying a third-party Intrusion Detection System to aid in monitoring for the telltale signs of compromise.

9. Know the 10 Laws of Security Administration

Everyone involved in the creation and operation of Web applications should acquaint themselves with The 10 Immutable Laws of Security Administration. These excellent guidelines are as follows:

Secure All Systems
It's of little value to deploy Web applications on an insecure system! Protect all systems with patches, antivirus software, and firewalls as appropriate. This is especially important for Web applications that communicate over the Internet.

10. Have a Security Incident Response Plan

How will the team respond if or when a compromise occurs? If you've thought through things ahead of time, it can make a major difference in the response time and the return to normal operations. The CERT Coordination Center has an excellent article on what should be in a typical Security Incident Response Plan (SIRP). For more information, see Responding to Intrusions in the Resources section of this document.

Know Whom to Call
Plan a call tree to alert the necessary staff. Someone from upper management, development, program management, testing, operations, human resources, and the legal department should be listed. Additionally, the list should include contact information for third parties that may or may not need to be involved such as police, Internet service providers, or banking institutions.

Hope for the Best, Prepare for the Worst
Ideally, a SIRP will never have to be used, but it should be created, just in case. Think through some scenarios ahead of time. For example, what would happen if a Web server were compromised in some way? Outline a response plan for each scenario and solicit feedback. Then document everything and place the plans in a well-known location that only the appropriate incident response personnel can get to.

Follow the Plan
If a security incident is detected, follow the general plan and the specific guidelines for that type of situation. The worst possible situation is having people attempting to re-create a response under pressure after an incident has been detected.

Know When Not to Follow the Plan
No two security incidents are alike. The general response plan will help in all situations, but there may be times when a slightly different approach is required. Don't be too rigidly locked into the process. Be flexible enough to respond as appropriate, provided the desired outcome is achieved.

Complete a Postmortem Analysis
Once a security incident has been resolved, analyze how and why it occurred. Retrace the timeline of the incident and its response and evaluate the response's effectiveness. How could the process work more efficiently? How could the incident have been prevented in the first place? Be honest, but also be careful not to engage in finger-pointing.

Incorporate the Feedback and Lessons Learned
After analyzing the response plan's effectiveness, incorporate the feedback. A good SIRP is a living document that can change appropriately over time. Be sure to keep historical records of the plan as it changes.


Resources

See a full list of resources for this article.


Disclaimer

This is a preliminary document and may be changed substantially prior to final commercial release of the software described herein.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.


Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place or event is intended or should be inferred.

© 2005 Microsoft Corporation. All rights reserved.

Microsoft, MSDN, Visual Studio, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.