It’s Me, and Here’s My Proof: Why Identity and Authentication Must Remain Distinct


By Steve Riley
Senior Security Strategist
Security Technology Unit
Microsoft Corporation

No matter what kinds of technological or procedural advancements occur, certain principles of computer science will remain constant -- especially those concerning information security. I’ve noticed lately, amid the competing claims of security vendors that their latest shiny box will solve all your security woes, that a basic understanding of computer science fundamentals is missing. Because good computer science never loses importance, and because knowing the science can help you choose products and develop processes, from time to time I will cover such topics in this column. This month I’d like to explore the concepts of identity, authentication, and authorization, to help you understand their important distinctions and to guard against the increasingly common tendency to conflate the first two.

The Concepts

Let’s start by defining the concepts.

Identity. A security principal (typically you or a computer) wants to access a system. Because the system doesn’t yet know you, you must declare who you are. Your answer to the question “Who are you?” is the first thing you present to a system when you want to use it. Common examples of identity are user IDs, digital certificates (which include public keys), and ATM cards. A notable characteristic of identity is that it is public, and it has to be this way: identity is your claim about yourself, and you make that claim with something that’s publicly available.

Authentication. This is the answer to the question “OK, how can you prove it?” When you present your identity to a system, the system wants you to prove that it is indeed you and not someone else. The system will challenge you, and you must respond in some way. Common authenticators include passwords, private keys, and PINs. Whereas identity is public, authentication is private: it’s a secret known (presumably) only by you. In some cases, like passwords, the system also knows the secret. In other cases, like PKI, the system doesn’t need to possess the secret, but can validate its authenticity (this is one of many reasons why PKI is superior). Your possession of this secret is what proves that you are who you claim to be.
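The split between a public identity and a private authenticator can be sketched in a few lines of code. This is a minimal illustration, not any particular product’s implementation: the user ID is stored in the clear, while the system keeps only a salted hash of the password, so even the system never needs to retain the secret itself.

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    # The secret is hashed with a random salt; only salt + digest are stored.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the digest from the offered secret and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("p4ssw0rd")
print(authenticate("p4ssw0rd", salt, digest))  # True
print(authenticate("letmein", salt, digest))   # False
```

Note that knowing the identity ("jsmith") gets an attacker nothing here; only possession of the secret completes the proof.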

Authorization. Once you’ve successfully authenticated yourself to a system, the system controls which resources you’re allowed to access. Typically this is through the use of a token or ticket mechanism. The token or ticket constrains your ability to roam freely throughout the system. By “caching” your authenticated identity for subsequent access control decisions, it allows you to access only that which the administrators have determined is necessary, thus enforcing the principle of least privilege.
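The token idea can be shown with a hypothetical sketch (the class and field names are mine, not any real system’s): after authentication succeeds, the system issues a token that caches the identity alongside administrator-assigned permissions, and every later access decision consults only that token.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    # The authenticated identity, "cached" for later access-control decisions.
    user: str
    # Assigned by administrators, not chosen by the user: least privilege.
    permissions: frozenset

def access_allowed(token: Token, resource: str) -> bool:
    # The token constrains roaming: only listed resources are reachable.
    return resource in token.permissions

token = Token("jsmith", frozenset({"schedule", "patient-notes"}))
print(access_allowed(token, "schedule"))  # True
print(access_allowed(token, "payroll"))   # False
```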

To summarize:

Identity answers “Who are you?” and is public.
Authentication answers “Can you prove it?” and is private -- a secret.
Authorization answers “What are you allowed to do?” and is granted only after successful authentication.

Authorization is well understood. It’s the trend of merging identity and authentication that worries me, and this is what I want to discuss next.

Why Identity and Authentication Must Remain Distinct

Consider a system that has no passwords. You log on by entering only your user ID. This works fine, I suppose, if you’re the only user of the system and if no one else can get to it. But what about a multiuser system or a network? Someone else could simply enter your user ID and get access to your information. Generally, user IDs are also e-mail addresses, so you can’t count on user IDs being secret. Also, what happens if two people have the same name? How will you create unique environments for each person?

Consider a system that requires entering only a password -- no user ID -- to log on. Passwords are secret and they’re not acting as e-mail addresses, so this should work, right? Well, if your password now serves double duty -- identifying you and authenticating you -- then problems arise. Say you’re changing your password to “p4ssw0rd” and, unknown to you, someone else has already decided to use that password. You can’t use it! Indeed, the system will probably raise an error: “That password is already in use. Please try another.” What have you just learned? The password to someone else’s account of course! Now you can be a bad guy. (Actually, I don’t know of any real-world systems that attempt to use passwords as identifiers; however I’ve read presumably serious papers describing how a system without user IDs is a really great idea. Obviously, I disagree.)
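The leak in a password-only scheme is easy to demonstrate with a toy model (again, no real system I know of works this way): because the password is also the identifier, it must be unique, and the uniqueness check itself becomes an oracle for other users’ secrets.

```python
# Toy password-only "directory": password doubles as the identifier,
# so it must be unique across all accounts.
accounts = {"p4ssw0rd": "mjones"}  # password -> account name

def set_password(user: str, password: str) -> str:
    if password in accounts:
        # Rejecting the duplicate reveals that this exact string
        # opens someone else's account.
        return "That password is already in use. Please try another."
    accounts[password] = user
    return "Password set."

print(set_password("jsmith", "p4ssw0rd"))
# The error message just leaked mjones's password to jsmith.
```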

A system must maintain distinct mechanisms for identity and authentication. Identity must be unique: there can be only one “jsmith” in the system or domain (but not necessarily in the world). Authenticators, however, don’t have to be unique -- only secret. Both “jsmith” and “mjones” could be using the same password, but neither of them knows this. Having such a public/private pair (hmm, “public/private,” sounds familiar, doesn’t it?) also makes it easier to address theft. In this system, if a bad guy learns your password, you just change it. You don’t need to go through the hassle of getting a brand new account. You can revoke and reassign passwords as often as you wish. How would an ID-only or password-only system handle that situation? So we can add another attribute to the summary: identity must be unique but is public; authentication must be secret but need not be unique.

Now consider biometrics. Given the definitions and characteristics of identity and authentication, which is biometrics: identity or authentication?

Before we answer the question, think about the attributes of biometrics. Are they public or private? Public, of course. You leave various biometrics everywhere you go: your fingerprints remain on anything you touch, your face is stored in countless surveillance systems, and your retina patterns are known at least to your optometrist. It’s also believed, although there is no actual evidence to support the claim, that biometrics are unique. (How would one prove it, other than by examining the fingerprints and retinas of every single individual on the planet?) Given this, it follows that biometrics are identity, not authentication -- despite the claims of some vendors.

Problems arise when systems begin using biometrics for authentication. Say that all you need to do is swipe your finger to log on, with no additional factors. Your fingerprint is now serving both to identify you and to prove that you are you. How can such a system be compromised? Very easily, it turns out, because no secret accompanies your fingerprint. Numerous research reports have shown that biometric systems can be spoofed -- the most notorious demonstration involved the assistance of a Gummi Bear.

Another sobering example: “Police in Malaysia are hunting for the members of a violent gang who chopped off a car owner’s finger to get round the vehicle’s hi-tech security system.” Again, because no secret accompanies the finger, all you need is the finger and you can possess the car. Here the security countermeasure moves the risk from the car to the driver! This is when security becomes unsafe.

Revocation presents another challenge. If a system relies only on a biometric for both identity and authentication, how do you revoke that factor? Forgotten passwords can be changed; lost smartcards can be revoked and replaced. How do you revoke a finger?

Sure, it can be fun to crack jokes about how many chances you get if your biometric authenticator gets stolen. But it reflects a serious misunderstanding of computer science when manufacturers make claims that biometrics can simplify security. Smartcard manufacturers understand this: it’s never enough just to insert your card into the reader (thus presenting something you have); you also must supply a PIN (something you know) to unlock the card. A stolen card (a public thing) is useless without the PIN (the accompanying secret). Unfortunately for the gentleman in Malaysia, the manufacturer of the car’s security system misunderstood this important principle.

My general rule for biometrics is this: biometrics (something you are) will be effective only when we remember to combine them with a second factor. Now a colleague of mine recently proposed what might be one of the few possible exceptions to the general rule:

Imagine a doctor’s office or hospital where a few dozen people share a common PC. The PC has a camera, and each user has logged in and left a session running. The PC watches its surroundings and switches to the logged-in session of whichever user it recognizes in front of it. The medical personnel want to do this without touching the machine because they don’t want to spread germs. The PC shows the selected person his schedule for that time (and perhaps the coming hour).

It’s an interesting idea, one that I would support only if the initial login followed my general rule: the face is your identifier, and the private key on your smartcard is your authenticator. Once you present your face and smartcard, the system creates a session for you and keeps it displayed so long as your face remains in front of the camera. Once you walk away, your session is locked and the desktop is cleared. When you reappear in front of the camera, your face unlocks your session and your desktop reappears. If you’ve been away from the computer beyond some time-out period, your session is terminated and your face-based access “token” is revoked. To use the computer now requires that you perform another complete login.
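The session logic just described can be sketched as a small state machine. This is a hypothetical illustration of the rule, not a real product: it assumes the initial login already combined the face (identity) with a smartcard and PIN (authenticator), after which the recognized face only locks and unlocks the existing session.

```python
import time

LOCKED, ACTIVE, TERMINATED = "locked", "active", "terminated"
TIMEOUT_SECONDS = 600  # away longer than this -> full re-login required

class Session:
    def __init__(self) -> None:
        # Created only after a full face + smartcard/PIN login.
        self.state = ACTIVE
        self.last_seen = time.monotonic()

    def camera_tick(self, face_recognized: bool) -> None:
        now = time.monotonic()
        if face_recognized:
            if self.state == LOCKED and now - self.last_seen <= TIMEOUT_SECONDS:
                self.state = ACTIVE      # face alone resumes an existing session
            self.last_seen = now
        else:
            if self.state == ACTIVE:
                self.state = LOCKED      # user walked away: clear the desktop
            elif self.state == LOCKED and now - self.last_seen > TIMEOUT_SECONDS:
                self.state = TERMINATED  # face "token" revoked: log in again

s = Session()
s.camera_tick(False)
print(s.state)  # locked
s.camera_tick(True)
print(s.state)  # active
```

Note that a terminated session never reactivates on face recognition alone; reaching ACTIVE again requires constructing a new Session -- that is, another complete login.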

Identity and authentication are distinct components of the steps necessary to use a secure computer system. Identity without authentication lacks proof; authentication without identity invalidates auditing and eliminates multi-user capability (consider Windows 95/98, which supported a password as an authenticator but no user ID). If biometrics become important to you as you begin considering how to strengthen identity and authentication in your security strategy, remember to evaluate how a particular biometric implementation views itself. Proper biometrics are identity only and will be accompanied, like all good identifiers, by a secret of some kind -- a PIN, a private key on a smart card, or, yes, even a password.