
November 2017

Volume 32 Number 11


Securing Data and Apps from Unauthorized Disclosure and Use

By Joe Sewell

“Data breach.” These two words are the scariest words in software development. Software is all about the data—processing it, managing it and securing it. If an attacker can breach that data, your business could forfeit confidential information critical to your success, be subject to liability nightmares, and lose valuable brand respect and customer loyalty.

To manage this risk and comply with regulations like HIPAA and GDPR, wise developers and operations teams will implement security controls on databases and Web services. These controls include end-to-end encryption, rigorous identity management, and network behavior anomaly detection. In the event of a security incident, many such controls will react actively by recording incident details, sending alerts and blocking suspicious requests.

While these and other best practices can secure your server components, they don’t do as much for client software. And for your data to be useful, it must be exposed in some form to privileged users and software. How can you also be sure that your client software doesn’t cause a data breach?

For instance, many companies create software specifically for use by their employees, often designed to access sensitive corporate data. And while the Microsoft .NET Framework makes it easy to develop line-of-business (LOB) apps in languages like C# or Visual Basic .NET, those compiled apps still contain high-level metadata and intermediate code. This makes it easy for a bad actor to manipulate the app with unauthorized use of a debugger, or to reverse engineer the app and create a compromised version. Both scenarios could lead to a data breach, even if the server components are completely secure.

While there are some measures you can take to guard against these attacks—Authenticode signing and code obfuscation, to name two—most of them are passive in that they merely deter attacks, rather than detect, report and respond to them. But, recently, new features in Visual Studio allow you to inject threat detection, reporting and response capabilities into your .NET apps, with little-to-no additional coding required. These Runtime Checks are an active protection measure, able to change your app’s behavior in response to a security threat, thus protecting your sensitive data.

In this article, I present a methodology for using Runtime Checks effectively, using a typical LOB app as an example. I’ll explore how an attacker can use a debugger to cause a data breach, detail how Runtime Checks can protect against this breach and discuss the role of these controls in a layered-protection approach.

Sample Solution

To help explain how a data breach can happen in a client, I’ve prepared a sample LOB solution, available as a download accompanying this article. Instructions for building, running and using the various parts of the solution are available in the sample code’s README.

The solution has four components:

AdventureWorks2014 Database: This is a Microsoft SQL Server database containing the Adventure Works 2014 OLTP database sample, which Microsoft distributes separately.

Adventure Works Sales Service: This is an ASP.NET Web service that exposes customer data from the database, including sensitive data like credit cards. In order to make this component easier to set up, the sample code omits most security controls, but for the purposes of this article I’ll assume the service implements the following:

  • Two factors of authentication—a user password and an SMS sent to the user’s phone on login
  • Time-limited sessions
  • SSL encryption for all requests and responses

Adventure Works Sales Client: This is a Windows Presentation Foundation (WPF) desktop client that connects to the Sales Service to manipulate the customer data. This is the component with which the article will be most concerned.

When a Sales employee runs the Client, they log in through the LoginDialog, which starts the authenticated session and opens the CustomerWindow. From this window, the employee can view and edit customer names, or open EmailWindow, PhoneWindow, or CreditCardWindow to edit a customer’s sensitive data. Some common functions are also provided in a static class named Utilities.

Application Insights: While not required to run the sample, both the Sales Service and Client can send usage and error telemetry to Application Insights. With the Runtime Checks discussed in this article, the Client’s telemetry also includes security incident reporting.

For this article, I’ll be focusing on securing the Sales Client. I’ll assume the database and Sales Service are already secured. Of course, that’s not a safe assumption to make in a real scenario, but it helps demonstrate a point: even if you “do everything right” with server security, data breaches are still possible through client software.

I will also treat customer names as non-­sensitive data and instead focus on securing e-mail addresses, phone numbers and credit cards. In a real scenario, customer names would also be considered sensitive, and the non-sensitive data could include things such as retail store addresses.

Data Breach with a Debugger

Debuggers are wonderful development tools. They allow you to discover critical logic errors, step through tricky control-flow scenarios and diagnose crash dumps. However, like any tool, debuggers can also be used for evil.

Let’s say that the Adventure Works intranet has a bad actor—perhaps a vindictive employee, an outside contractor, or even an external attacker who’s gained unauthorized access to the intranet. This attacker doesn’t have access to the database or Sales Service, but they can access a Sales employee’s laptop. Certainly, this is a security problem, but because the Sales Service implements two-factor authentication and the attacker doesn’t have access to the employee’s phone, the customer data should be safe, right?

Actually, no. The attacker can wait for the Sales employee to log in through the Sales Client and then, either manually or through a script, attach a debugger to the Client process. Because the Client is a .NET app, the debugger will reveal a lot of high-level information, including the session token, even if no debugging symbols (PDB files, for example) are present.

Figure 1 demonstrates this scenario. Using the WinDbg debugger with the Psscor4 extension, I dumped various .NET objects in the memory of a running Sales Client process. I eventually found the AuthToken object and dumped the value of its HashField property.


Figure 1 WinDbg Revealing a Session Token in the Sales Client

With this session token, the attacker can make authenticated requests to the Sales Service in the employee’s name. The attacker need not continue debugging or manipulating the Client; with the session token in hand, they can go directly to the Web service and use the token to cause a data breach.
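To make the impact concrete, here’s a minimal sketch of how a stolen token might be replayed outside the Client. The endpoint URL, the “Bearer” scheme and the token value are all hypothetical; the sample’s actual service contract may differ.

```csharp
// Hypothetical sketch: replaying a stolen session token outside the Client.
// The endpoint, "Bearer" scheme and token value are illustrative only.
using System;
using System.Net.Http;
using System.Net.Http.Headers;

public static class TokenReplaySketch
{
    public static HttpRequestMessage BuildRequest(string stolenToken)
    {
        // The attacker never touches the Client again; an ordinary HTTP
        // request carrying the token impersonates the employee.
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            "https://sales.example.com/api/customers/creditcards");
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", stolenToken);
        return request;
    }

    public static void Main()
    {
        var request = BuildRequest("token-dumped-from-windbg");
        Console.WriteLine(request.Headers.Authorization);
        // Prints: Bearer token-dumped-from-windbg
    }
}
```

Note that nothing in this request distinguishes the attacker from the legitimate employee, which is why server-side controls alone can’t catch it.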

There are other scenarios where bad actors could use the Client in an unauthorized way:

Manipulating Sensitive Data Directly: While the preceding scenario was a session hijacking attack, because the Sales Client accesses sensitive data (such as credit cards) as part of its normal operation, that data can also be seen with a debugger. Attackers could even cause the app to behave unusually or modify the data in the database.

Reverse Engineering: Attackers could also run the Client themselves and attach a debugger to discover how the Client works. Combined with the ease of decompiling .NET apps, the attackers might be able to discover exploits or other important details about the Client or Service that would help them plan an attack.

Tampering: If attackers can reverse engineer the app and access the employee’s file system, they can replace the legitimate Client with a modified one, secretly extracting or manipulating data when the employee logs in.

Other apps might be vulnerable to debuggers in different ways. For instance, an app that reports an employee’s location for purposes of tracking fieldwork could be manipulated to provide inaccurate data. Or, a game might reveal key strategic information in a debugger.

About Runtime Checks

Runtime Checks is a new feature in PreEmptive Protection - Dotfuscator Community Edition (CE), a protection tool that has been included with Visual Studio since 2003. You may know that Dotfuscator CE can obfuscate the intermediate code of .NET assemblies, but obfuscation isn’t the subject of this article. Instead, I’ll be showing how I used Runtime Checks—hereafter, just Checks—to allow the Sales Client to protect itself while it runs.

Checks are prebuilt validations Dotfuscator can inject into your .NET apps. Your apps will then be able to detect unauthorized use, like debugging or tampering. Despite the name, Checks do more than just detect these states; they can also react in pre-specified ways, such as by exiting the app. Checks can also call into application code, allowing for custom behavior based on the Check’s result. These reporting and response features are configurable per-Check, so all your apps can detect unauthorized uses in the same way, but each app can respond to that detection differently.
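Conceptually, an injected Check follows a detect, notify, act pattern. The hand-written sketch below is only an approximation: Dotfuscator injects its own, more robust detection logic, and `Debugger.IsAttached` here merely stands in for it.

```csharp
// Conceptual sketch only: Dotfuscator's injected detection is more robust
// than Debugger.IsAttached; this shows the detect -> notify -> act pattern.
using System;
using System.Diagnostics;

public static class CheckPatternSketch
{
    // Stands in for the detection logic a Debugging Check injects.
    public static bool DetectDebugger() => Debugger.IsAttached;

    // Stands in for the Application Notification Sink, which receives
    // the Check's result so the app can report it.
    public static void ReportDebugging(bool isDebugging)
    {
        if (isDebugging)
            Console.WriteLine("telemetry: Debugger Detected");
    }

    public static void Main()
    {
        bool detected = DetectDebugger();  // detect
        ReportDebugging(detected);         // notify the sink
        if (detected)
            Environment.Exit(1);           // act (e.g., the Exit Action)
        Console.WriteLine("no debugger; continuing normally");
    }
}
```

The value of injection over hand-writing this pattern is that the detection code never appears in your source, and each of the three steps can be reconfigured per app without recompiling.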

The code sample includes a Dotfuscator.xml config file that can instruct Dotfuscator to protect the Sales Client from unauthorized debugging and tampering. In the rest of this article, I’ll explain how I created this configuration, what choices I made, and how you can similarly configure Dotfuscator to protect your own apps.

Setting Up the Tooling

The easiest way to get started with Dotfuscator CE is to use the Visual Studio Quick Launch (Ctrl+Q) to search for “dotfuscator.” If Dotfuscator isn’t installed, an option to install PreEmptive Protection - Dotfuscator will appear; select that option and confirm the appropriate dialogs.

Once Dotfuscator is installed, repeating this search will provide an option to launch Tools | PreEmptive Protection - Dotfuscator; select that option to begin using Dotfuscator. After some typical first-time use dialogs, the Dotfuscator CE UI opens.

Important Note: The protection described here and included in the sample requires at least version 5.32 of Dotfuscator CE. You can see what version you have installed by choosing Help | About. If you’re using an earlier version, please download the latest version of Community Edition from the PreEmptive Solutions Web site.

Dotfuscator operates on specialized config files that specify what assemblies it should protect and how to apply protection. Dotfuscator starts with a new config file loaded; I adjusted it for the Sales Client using the following steps:

  1. First, I saved the new config file as AdventureWorksSalesClient\Dotfuscator.xml.
  2. Next, I told Dotfuscator where to find the Client’s assemblies. I switched to the Dotfuscator Inputs screen and clicked the green plus-sign icon. From the Select Input browse dialog, I navigated to the AdventureWorksSalesClient\bin\Release directory and then clicked Open without selecting a file.
  3. Dotfuscator added the whole directory as an input named Release. I expanded the tree node to verify that the AdventureWorksSalesClient.exe assembly was present.
  4. Then, I made the config file portable, instead of specific to absolute paths in my environment. I selected the Release node, clicked the pencil icon and replaced the absolute path with ${configdir}\bin\Release. ${configdir} is a Dotfuscator macro that represents the directory holding the config file.
  5. Finally, as this article isn’t concerned with the Dotfuscator code obfuscation features, I disabled them by right-clicking on the Renaming item in the Dotfuscator navigation list and unchecking Enable.

Configuring Check Injection

Dotfuscator allows you to configure Checks from the Injection screen, on the Checks tab. The choices you make for the configuration, however, vary depending on the kind of app you’re protecting. Rather than list all the features and settings, I’ll walk through the choices and configuration I made for the Sales Client sample.

For the sample, I configured three Checks:

  • Two Debugging Checks, which detect unauthorized use of a debugger:
    • A “Login” Debugging Check, to detect session hijacking scenarios as described earlier
    • A “Query” Debugging Check, to detect a debugger being used to read/write sensitive data in the Client
  • One Tamper Check, which detects use of a modified application binary

Figure 2 shows a bird’s-eye view of the Checks and the app code with which they interact. In the next three sections, I’ll explain the purpose and configuration of each of these Checks.


Figure 2 The Sales Client with Injected Runtime Checks

Configuring the Login Debugging Check

This first Debugging Check addresses the session hijacking scenario. It detects whether a debugger is present during the authentication process and notifies the app if so. The app will report the incident to Application Insights and then, later, fail in unintuitive ways.

I added this Check by clicking on the Add Debugging Check button, which brought up a new configuration window. I configured this Check as seen in Figure 3.


Figure 3 Configuration for the Login Debugging Check

Location: I first picked where in the app code the Check should run. As this Check should detect debugging when the user logs in, I checked the LoginDialog.ConfirmLogin method from the Locations tree.

Note that the Check only runs when its location is called. If an attacker attaches a debugger later, this Check won’t detect it; but I’ll address this issue later with the Query Debugging Check.

Application Notification: After a Check runs, it can notify the app code so the app can report and react in a customized way. The code element that receives this notification is known as the Application Notification Sink, and is configured using the following Check Properties:

  • ApplicationNotificationSinkElement: The kind of code element (field, method and so on)
  • ApplicationNotificationSinkName: The name of the code element
  • ApplicationNotificationSinkOwner: The type that defines the code element

For all three Checks, I used this feature to report incidents to Application Insights. Additionally, for this Check, I decided that I wanted a custom response rather than a default response injected by Dotfuscator (which the other Checks will use). My response allows the login to succeed, but then crashes the app a few moments later. By separating the detection and response, I make it harder for an attacker to discover and work around the control.

To accomplish this response, I added a Boolean field, isDebugged, to the LoginDialog class and configured it as the Check’s Sink. When the Check runs (that is, when the app calls LoginDialog.ConfirmLogin), the result of the debugger detection is stored in this field: true for a debugger detected, and false otherwise.

Note that the Sink must be accessible and writable from the Check’s location. As both the location and Sink are instance members of the LoginDialog class, this rule is satisfied.

Next, I modified LoginDialog.RunUserSession to pass this field to the CustomerWindow constructor:

// In LoginDialog class
private void RunUserSession(AuthToken authToken)
{
  // ...
  var customerWindow = new Windows.CustomerWindow(clients, isDebugged);
  // ...
}

Then, I made the CustomerWindow constructor set its own field, CustomerWindow.isDebugged, and then report the incident to Application Insights:

// In CustomerWindow class
public CustomerWindow(Clients clients, bool isDebugged)
{
  // ...
  this.isDebugged = isDebugged;
  if (isDebugged)
  {
    // ClientAppInsights is a static class holding the Application
    // Insights telemetry client
    ClientAppInsights.TelemetryClient.TrackEvent(
      "Debugger Detected at Login");
  }
  // ...
}

Finally, I added code that reads this field to various event handlers. For instance:

// In CustomerWindow class
private void FilterButton_OnClick(object sender, RoutedEventArgs e)
{
  // ...
  if (isDebugged) { throw new InvalidCastException(); }
  // ...
}

I’ll address the obviousness of the field name isDebugged later in this article.

Configuring the Query Debugging Check

Because a debugger can be attached to the Client at any time during its execution, the Login Debugging Check alone is insufficient. The Query Debugging Check fills this gap by checking for a debugger when the app is about to query sensitive data, such as credit card numbers. The sensitivity of this data also means I can’t afford to separate the detection, reporting and response as with the Login Debugging Check, because that would let an attacker see the data. Instead, the Query Debugging Check will report the incident and then immediately exit the app when a debugger is detected.

I added the second Debugging Check the same way I added the first one, but this time, I configured the Check as seen in Figure 4.


Figure 4 Configuration for the Query Debugging Check Showing the CreditCardWindow.UpdateData Location—Other Locations Aren’t Shown

Locations: There are three kinds of sensitive data in my scenario: e-mail addresses, phone numbers and credit cards. Luckily, you can select multiple locations for a single Check. In this case, those locations are EmailWindow.UpdateData, PhoneWindow.UpdatePhones and CreditCardWindow.UpdateData. The Check runs whenever any of these are called, which means I have to configure only one set of Check Properties for all three kinds of sensitive data.

Application Notification: Having multiple locations modifies how the Application Notification Sink works. In the Login Debugging Check, I could specify the LoginDialog.isDebugged field as the Sink, because that field was accessible from the Check’s only location, LoginDialog.ConfirmLogin. This time, each location must be able to access a Sink.

Notably, if the ApplicationNotificationSinkOwner property is blank, the Sink defaults to using the type that defines the Check’s location. Because this Check has multiple locations, the Sink will thus vary depending on the location that triggered the Check. In this case, I left this property blank and set the other ApplicationNotificationSink properties to a method named ReportDebugging.

Consider the EmailWindow.ReportDebugging method:

// In EmailWindow class
private void ReportDebugging(bool isDebugging)
{
  if (isDebugging)
  {
    ClientAppInsights.TelemetryClient.TrackEvent(
      "Debugger Detected when Querying Sensitive Data",
      new Dictionary<string, string> { { "Query", "Email Addresses" } });
  }
}

When the app calls the EmailWindow.UpdateData method, the Check runs and then calls this ReportDebugging method with the argument true if debugging was detected and false otherwise.

The same thing happens when the app code calls PhoneWindow.UpdatePhones or CreditCardWindow.UpdateData, except that the method called by the Check is defined by PhoneWindow or CreditCardWindow, respectively. These methods are implemented slightly differently, but they’re all named ReportDebugging, take a single Boolean argument, and return no value.
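For illustration, the CreditCardWindow variant might look like the sketch below. The stub TrackEvent helper stands in for the sample’s Application Insights client, and the “Credit Cards” property value is my assumption rather than code taken from the sample.

```csharp
// Hypothetical sketch of CreditCardWindow.ReportDebugging. The stub
// TrackEvent helper stands in for ClientAppInsights.TelemetryClient;
// the "Credit Cards" property value is assumed, not from the sample.
using System;
using System.Collections.Generic;

public class CreditCardWindowSketch
{
    // Stub standing in for the Application Insights telemetry client.
    public static string TrackEvent(string name, Dictionary<string, string> props)
    {
        string line = $"{name} [Query: {props["Query"]}]";
        Console.WriteLine(line);
        return line;
    }

    // Same shape as the sample's Sinks: one bool parameter, no return value.
    public void ReportDebugging(bool isDebugging)
    {
        if (isDebugging)
            TrackEvent(
                "Debugger Detected when Querying Sensitive Data",
                new Dictionary<string, string> { { "Query", "Credit Cards" } });
    }

    public static void Main()
    {
        new CreditCardWindowSketch().ReportDebugging(true);
    }
}
```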

Action: To make the app close if a debugger is attached, I set the Action property to Exit. This tells Dotfuscator to inject code that closes the app when the Check detects an unauthorized state. Note that the Check performs this Action after notifying the application, so in this scenario the incident report would be sent before the app closes.

Configuring the Tamper Check

Finally, I added a Tamper Check to address the reverse-engineering scenario. I clicked the Add Tamper Check button to configure a new Tamper Check, as shown in Figure 5.


Figure 5 Configuration for the Tamper Check Showing the LoginDialog.ConfirmLogin Location—Other Locations Aren’t Shown

Locations: Just as with the Query Debugging Check, I chose to give the Tamper Check multiple locations: LoginDialog.ConfirmLogin, CustomerWindow.UpdateData and Utilities.ShowAndHandleDialog.

With a Debugging Check, having multiple locations is important because the debugger could be attached at any time during execution. But a Tamper Check has only one result over a run of the app—the runtime either loaded a modified assembly or it didn’t. Shouldn’t one location be sufficient? Actually, because this Check is meant to deter tampered binaries, I have to consider a scenario where an attacker manages to remove the Tamper Check itself from a location. Having multiple locations makes the application more resilient to such tampering.

You might notice that one of the locations, LoginDialog.ConfirmLogin, is the same as the one for the Login Debugging Check. Dotfuscator allows multiple Checks of different types to be injected at the same location. In this case, after the user logs in, both debugging and tampering will be checked.

Application Notification: For the Application Notification Sink, I decided it would be better in this case to have just one Sink for all the locations. Unlike with the Query Debugging Check, I don’t care on the reporting side which location triggered the Check.

I chose to define the Utilities.ReportTampering method as the Sink. As the context of each location varies, I had to declare the Sink static and ensure it was accessible from each location. The method is defined as follows:

// In Utilities static class
internal static void ReportTampering(bool isTampered)
{
  if (isTampered)
    ClientAppInsights.TelemetryClient.TrackEvent("Tampering Detected");
}

Whenever any of the Check’s locations are called, the Check determines whether it has been modified since Dotfuscator processed it, and then calls the ReportTampering method with a parameter of true if modification was detected and false otherwise.

Action: If the app is modified, it’s dangerous to continue. I configured this Check’s Action to Exit, so that the app closes when tampering is discovered.
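The mechanics of the injected tamper detection aren’t documented here, but the general idea can be sketched as comparing a binary’s current hash against a value recorded when the protected build was produced. Everything below (the temp file, the SHA-256 scheme) is illustrative and is not Dotfuscator’s actual implementation.

```csharp
// Conceptual sketch only: compare a file's current hash to a value
// recorded earlier. Dotfuscator's real tamper detection is injected and
// differs; this just illustrates the idea behind a Tamper Check.
using System;
using System.IO;
using System.Security.Cryptography;

public static class TamperSketch
{
    public static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(sha.ComputeHash(stream));
    }

    public static void Main()
    {
        string path = Path.GetTempFileName();
        File.WriteAllText(path, "pretend this is the assembly");
        string expected = HashFile(path);   // recorded at "build" time

        File.AppendAllText(path, "!");      // an attacker modifies the file
        bool tampered = HashFile(path) != expected;
        Console.WriteLine(tampered ? "Tampering Detected" : "OK");
        // Prints: Tampering Detected
        File.Delete(path);
    }
}
```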

Injecting the Checks

With the Checks configured, Dotfuscator can now inject them into the app. To do this, from Dotfuscator CE, open the AdventureWorksSalesClient\Dotfuscator.xml config file and then click the Build button.

Dotfuscator will process the Client’s assembly, inject the Checks, and write the protected Client out to AdventureWorksSalesClient\Dotfuscated\Release. The unprotected app remains in AdventureWorksSalesClient\bin\Release.

Testing the Checks

As with any security control, it’s important to test the app’s behavior when the control is introduced.

Normal Scenarios: The Checks shouldn’t have any impact on legitimate users of the app. I ran the protected Client normally and saw no unexpected crashes, application exits or Application Insights events.

Unauthorized Scenarios: You should also verify that Checks do what you expect when the app is used in an unauthorized way. The sample code’s README lists detailed instructions for testing the Debugging Checks and the Tamper Check. For instance, to test the Query Debugging Check, I ran the protected Client several times, attaching WinDbg to the process at various points. The app correctly reported and responded to the presence of a debugger per the Check’s configuration.

Layered Protection Strategy

Using just one protection measure is insufficient for most practical cases, and Checks are no exception. Checks should be just one layer of your protection strategy, along with techniques like end-to-end encryption, Authenticode assembly signing, and so on. When you use multiple protection layers, the strengths of one layer can offset a weakness of another layer.

In this article’s example, some Application Notification Sinks used by the Checks have names like isDebugged or ReportTampering. These names remain in compiled .NET assemblies, and an attacker could easily understand the intent of these code elements and work around them. To mitigate this, Dotfuscator can, in addition to injecting Checks, also perform renaming obfuscation on your assemblies. For details, see the PreEmptive Protection - Dotfuscator Community Edition documentation.

Wrapping Up

This article introduced Runtime Checks and some of the problems they solve. Using a LOB app, I demonstrated how a data breach can occur in client software and how Runtime Checks can be configured to detect, report and respond to such a breach.

While this article covered the free Dotfuscator Community Edition, the same concepts transfer to the commercially licensed Professional Edition, which has additional features for Checks. You can also inject Checks into Java and Android apps with Dotfuscator’s sibling product, PreEmptive Protection - DashO.

Joe Sewell is a software engineer and technical writer on the Dotfuscator team at PreEmptive Solutions.

Thanks to the following Microsoft technical expert for reviewing this article: Dustin Campbell
