
Mitigating OAuth 2.0 Security Issues with Good Profiling

While any alternative to the cross-service password-sharing anti-pattern is a good thing, OAuth 2.0 also introduces some insecure flows to accommodate a broad range of use cases and to be as developer-friendly as possible. A previous post explores these assurance issues, while my first post on OAuth 2.0 explains why some folks in the identity management community have dismissed those issues a bit too easily. In fact, providers of services that leverage the technology and handle personally identifying information (PII) in the process will need to reckon with these issues eventually. Why not do that the easy way, with risk mitigation up front?
 
My initial model for OAuth 2.0 risk management comprises a three-pronged strategy: profile to minimize attack surface, test implementations to raise assurance in them, and operate solutions using a set of good security practices in production. Let’s begin with the first prong: profiling.
 
A central insight on mitigating OAuth 2.0 security issues came to me at a Cloud Identity Summit 2013 dinner reception from Nat Sakimura, Chairman of the OpenID Foundation. Nat said: “OAuth 2.0 is an authorization framework, not a protocol. Security depends on the profile of the application or specification using it.”
 
In other words, the framework has a number of insecure flows (detailed in the post “OAuth 2.0 Assurance Issues”), but individual profiles of the framework, such as OpenID Connect, don’t have to use all of them. When an insecure flow is required, such as the implicit authorization grant, the profile must specify mitigations. These mitigations could come from some of the security features specified in the framework, such as limited token lifetimes, refresh tokens, the “state” parameter and more.
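As one concrete example of such a mitigation, here is a minimal sketch of how a client might use the “state” parameter to bind an authorization request to the callback it later receives. The session dictionary and function names are my own illustration, not anything mandated by the specifications.

```python
import secrets

def begin_authorization(session: dict) -> str:
    """Generate an unguessable state value and remember it in the client's
    session before redirecting the user to the authorization server."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state

def validate_callback(session: dict, returned_state: str) -> bool:
    """On the redirect back, the state echoed by the authorization server
    must match the one we stored; the stored value is consumed so it
    cannot be reused. This mitigates CSRF against the redirect URI."""
    expected = session.pop("oauth_state", None)
    return expected is not None and secrets.compare_digest(expected, returned_state)
```

A forged or replayed callback fails the comparison, and because the stored value is popped, even a legitimate state can only be accepted once.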
 
Let’s take a look at how OpenID Connect and other specifications apply profiling as the first line of defense. In OpenID Connect Basic Client Profile 1.0 we soon see:
 
“This specification defines features used by Relying Parties using the OAuth authorization_code grant type.”
 
Thus, once prompted for authorization, the user (and the browser) only obtain a temporary code, not the full ID token or access token that will be used in the subsequent security exchanges. The end user is redirected back to the requesting application, or client, which must exchange the code for the token.
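To make the back-channel step concrete, here is a sketch of the request body a client would POST to the token endpoint to trade the one-time code for tokens. The endpoint URL, client credentials and redirect URI are hypothetical placeholders; a real client would send this over TLS.

```python
from urllib.parse import urlencode

# Hypothetical token endpoint, for illustration only.
TOKEN_ENDPOINT = "https://idp.example.com/token"

def build_token_request(code: str, client_id: str, client_secret: str,
                        redirect_uri: str) -> tuple[str, dict]:
    """Build the form-encoded body for the authorization-code exchange.
    The browser never sees this request: the client POSTs the code
    directly to the token endpoint and receives the tokens in response."""
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        # Must match the redirect_uri used in the authorization request.
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return body, headers
```

The point of the pattern is visible in the code: the code is short-lived and useless without the client’s own credentials, so a leak from the browser channel alone does not yield tokens.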
 
“[Client] Communication with the Authorization Endpoint MUST utilize TLS”; “If interaction with the End-User occurs over an HTTP channel, it MUST use TLS” and “Communication with the UserInfo Endpoint MUST utilize TLS.”
 
If a profile can do only one thing for mitigation, making Transport Layer Security (TLS) mandatory is a great place to start, as it provides confidentiality and integrity for the communication.
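A client library can enforce those MUSTs mechanically before it ever talks to an endpoint. This helper is a sketch of my own, not part of any specification; a real client would also verify the server’s certificate chain.

```python
from urllib.parse import urlparse

def require_tls(endpoint_url: str) -> str:
    """Refuse any authorization, token or UserInfo endpoint that is not
    reachable over TLS, matching the profile's MUST-use-TLS language."""
    if urlparse(endpoint_url).scheme != "https":
        raise ValueError(f"non-TLS endpoint refused: {endpoint_url}")
    return endpoint_url
```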
 
However, I soon notice something worrisome:
 
“Nonce – OPTIONAL [in request parameters]. String value used to associate a Client session with an ID Token, and to mitigate replay attacks.”
 
Because TLS only provides transport security, its coverage is broken every time OpenID or OAuth have to redirect web flows in their ping-pong interactions between the end user, the client application, the resource server and the authorization server. In general, the end user’s device isn’t trusted, and some client applications aren’t trusted either. If the client is compromised, request parameters could be recovered outside the TLS session by attackers and, because the nonce is optional and may not always be used, replay attacks become possible.
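When a client does use the nonce, the check is simple: the value in the ID Token must match the one the client sent, and it must never be accepted twice. The in-memory set below is a sketch of mine; a production service would use an expiring shared store.

```python
import secrets

_seen_nonces: set[str] = set()

def new_nonce() -> str:
    """Fresh nonce sent in the authentication request; the provider
    binds it into the ID Token it issues."""
    return secrets.token_urlsafe(16)

def accept_id_token_nonce(nonce_claim: str, expected: str) -> bool:
    """Accept an ID Token's nonce only if it matches the one we sent
    and has not been seen before. A replayed token fails either the
    match (wrong session) or the seen-before check (same session)."""
    if nonce_claim != expected or nonce_claim in _seen_nonces:
        return False
    _seen_nonces.add(nonce_claim)
    return True
```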
 
“The ID Token is represented as a JSON Web Token (JWT).”
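To illustrate what that means structurally, here is a stdlib-only sketch of a symmetrically signed (HS256) JWT showing the three dot-separated parts an ID Token uses: header, payload and signature. This is for illustration only; real deployments should use a vetted JOSE library, and providers commonly sign ID Tokens with asymmetric keys (RS256) rather than a shared secret.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    """Produce header.payload.signature over the JSON-encoded claims."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> dict:
    """Recompute the signature over header.payload; reject on mismatch."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The signature is what lets a relying party detect tampering with the claims, even when the token has passed through untrusted hands.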
 
 
Hopefully this gives you some ideas on OAuth 2.0 mitigation. To complete the process, of course, we’d have to look a lot deeper into the possibility of replay attacks and how they are mitigated. We’d also need to analyze the OpenID Implicit Client specification (which discloses tokens to the client) and OpenID Dynamic Registration, which will clearly introduce new risks beyond those of core OAuth 2.0. You can check out these OpenID Connect specifications at http://openid.net/connect/. Bottom line: OpenID Connect looks pretty good from the profiling perspective so far.
 
I did another post on testing OAuth 2.0 implementations as well. A whole separate thread I want to take up as well concerns levels of assurance (LOA). The fact is that OpenID Connect and OAuth 2.0 implementations almost universally use the “bearer” token rather than a token that’s cryptographically bound to the end user. As long as this remains the case, these protocols by themselves may often only be sufficient for the lower LOA 1 and LOA 2 levels and not the higher LOA 3 or 4 that might clear them for more sensitive use cases. We shall see what can be done about that, too.

Question and request for comments: Are you aware of any other OAuth 2.0 profiles we should be looking at? Am I missing anything else?  The last thought I’d like to leave you with is that profiling is just part of the security story. As with anything else, we have to apply a systematic, comprehensive approach to security.

Speaking very generally, mitigating the risks of OAuth 2.0 falls into three main stages – profiling (specification stage), testing (development or service assessment stage) and operations (production stage). The more holes you leave during the early stages, the more work for the later ones. And any potential vulnerability left uncovered in operations is an accident waiting to happen. But in the end, I hope we can come up with a list of good practices that will ensure properly profiled and well-implemented OAuth 2.0 deployments can effectively protect the personal data we entrust them with.


2 Responses to Mitigating OAuth 2.0 Security Issues with Good Profiling

  • Nice post, just like the others. Looking forward to the XDI profile of OAuth 2.0!

  • Hi Dan,
    I’ve been enjoying this series of posts, and I’m really looking forward to more about LOA and OAuth security in general.
    It’s a truism among security professionals that you shouldn’t roll your own, but stick with the tried and true. Yet the OAuth2 spec leaves security, the only really hard part, to the discretion of the developers. That is imho anything but developer friendly!

    Regards,
    =vg
