

Sunday, October 27, 2013

Invalid credentials : OmniAuth + oAuth2 + Rails 4 encrypted cookie store + simultaneous requests

OmniAuth is a very well known gem in the Ruby/Rails world. Almost every Rails application out there is probably using it to authenticate with one of the various mechanisms it supports. OmniAuth is just awesome!

I have been using OmniAuth for about 3 years now in various Rails projects and it has worked very well, although I have had to monkey patch it once or twice to allow me to exploit some of the Facebook features (like authenticated referrals). But all in all, it just works as advertised and you will have an authentication system in pretty much no time at all.

Yesterday, however, I was having quite a hard time getting OmniAuth to do a simple "Login with Facebook" oAuth2 authorization. It was something that had worked seamlessly on innumerable previous occasions. But yesterday it just kept failing repeatedly, succeeding only once in a while. And it always failed with the same obscure error "Invalid Credentials" during the callback phase. (OmniAuth operates in three phases : Setup, Request and Callback. Apart from the OmniAuth wiki on Github, this is a good place to read about it : http://www.slideshare.net/mbleigh/omniauth-from-the-ground-up). The fact that the error message was not very helpful made the whole process a lot more frustrating. After some hunting on the inter-webs I found that the culprit could be a bad "state" parameter.

Wait, what is this "state" parameter?

Background : oAuth2 CSRF protection


oAuth2 specifies the use of a non-guessable, cryptographically random string as a "state" parameter to prevent CSRF attacks on oAuth. More details here : http://tools.ietf.org/html/rfc6749#section-10.12. This came out almost a year back and many oAuth providers, including Facebook, implement it already. OmniAuth also implemented it last year. Although not written with the best grammar, this article will tell you why this "state" parameter is needed and what happens without it.

To sum it up, the oAuth client (our web application) creates a random string, stores it in an accessible place and also sends it to the oAuth provider (Ex : Facebook) as "state" during the request phase. The provider keeps it, authenticates the end user, asks for permissions and, when granted, sends a callback to our web application by redirecting the user back to our website with the "state" as a query parameter. The client (our web application) compares the "state" sent by the provider with the one it stored previously and proceeds only if they match. If they don't, there is no proof that the callback our web application received is actually from the provider. It could be from an attacker trying to trick our web application into thinking (s)he is someone else.
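The round trip can be sketched in plain Ruby (the provider interaction is elided, and all names here are illustrative, not OmniAuth internals):

```ruby
require 'securerandom'

session = {}

# Request phase: the client generates a random state and remembers it
state = SecureRandom.hex(16)
session['omniauth.state'] = state

# ...redirect to the provider, user authenticates and authorizes...

# Callback phase: the provider echoes the state back as a query parameter
callback_params = { 'state' => state }

# Proceed only if the stored and returned values match
valid = callback_params['state'] == session.delete('omniauth.state')
```

If an attacker forges the callback, the 'state' query parameter won't match what the session holds, and the flow is aborted.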

In the case of OmniAuth oAuth2, this state parameter is stored as a property in the session with the key 'omniauth.state' during the request phase. The result of the request phase is a redirect to the provider's URL. The new session with the "state" stored in it is set on the client's browser when it receives this redirect (302) response for the request to /auth/:provider (the default OmniAuth route to initiate the request phase). After the provider (Facebook) authenticates the user and the user authorizes our application, the provider makes a callback to our application by redirecting the user back to our web application at the callback URL /auth/:provider/callback, along with the "state" as a query parameter. When this callback URL is requested by the browser, the previously stored session cookie containing the 'omniauth.state' property is also sent to our web application.

OmniAuth checks both of these and proceeds only if they match. If they don't match it raises the above mentioned "Invalid Credentials" error. (Yeah, I know, not really a helpful error message..!).

Ok, that is good to hear, but why will there be a mismatch?


A mismatch is possible only if the session cookie stored on the user's browser is changed such that the 'omniauth.state' property is removed or altered after the request phase has set it. This can happen if a second request to our web application is initiated while the request phase of oAuth is running, and it completes after the request phase completes but before the callback phase starts. Sounds complex? The diagram below illustrates it.





The diagram makes it clear when and how the 'omniauth.state' gets removed from the session, leading to the error. However, apart from the timeline requirements (i.e. when requests start and end), there is another essential criterion for this error to occur :
The response of the "other simultaneous request" must set a new session cookie, overriding the existing one. If it does not explicitly specify a session cookie in the response headers, the client's browser will retain the existing cookie and 'omniauth.state' will be preserved in the session.
Now, from what I have observed, Rails (or one of its Rack middlewares) has this nifty feature of not serializing the session and not setting the session cookie in the response headers if the session has not changed in the course of processing a request. So, in our case, if the intermediate simultaneous request does not make any changes to the session, Rails will not explicitly set the session cookie, thereby preventing the loss of the 'omniauth.state' property in the session.

Ok, then why will the session cookie change and lose the 'omniauth.state' property?


One obvious way is that the "other simultaneous request" might change the session - add, remove or edit any of the properties. There is, however, another player involved.

This is where the "Encrypted Cookie Store" of Rails 4 comes into the picture. Prior to Rails 4, Rails did not encrypt its session cookie. It merely signed it and verified the signature when it had to de-serialize the session from the request cookie. Read how Rails 3 handles cookies for a detailed breakdown. Rails 4 goes one step further and encrypts the session data with AES-256 (along with the old signing mechanism. More details on that coming up in a new post). The implementation used is AES-256-CBC from OpenSSL. I am not a cryptography expert, but AES-CBC by itself is deterministic : the same key, IV and plain text always produce the same cipher text. The varying output comes from the Rails encryption scheme initializing the encryptor with a random initialization vector every time it encrypts a session (Implementation here). Either way, the session cookie contents are new for every request, even when the actual session object or session contents remain unchanged. As a result Rails will always set the session cookie in the response header for every request, and the browser will update that cookie in its cookie store.
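The effect is easy to see with Ruby's OpenSSL bindings alone, no Rails involved (the key here is generated on the spot just for the demo):

```ruby
require 'openssl'
require 'securerandom'

key = SecureRandom.random_bytes(32)      # AES-256 needs a 32-byte key
plaintext = 'same session data every time'

def encrypt_once(key, plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.encrypt
  cipher.key = key
  iv = cipher.random_iv                  # fresh random IV for every encryption
  ciphertext = cipher.update(plaintext) + cipher.final
  [iv, ciphertext]
end

iv1, c1 = encrypt_once(key, plaintext)
iv2, c2 = encrypt_once(key, plaintext)

c1 == c2   # almost certainly false, because the IVs differ
```

Same key, same plain text, yet two different cipher texts; that is exactly why the Set-Cookie header changes on every response.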

In our case, this results in the session cookie being clobbered at the end of the "other simultaneous request", and we end up losing the 'omniauth.state' property, so oAuth fails.

Umm.. ok, but when and how does this happen in real world, if at all it can?!


All these requirements/constraints described above, especially the timing constraints, make one wonder if this can really happen in the real world. Well, for starters, it happened to me (hence this blog post..!!). I also tried to think of scenarios other than mine where this could happen. Here are a couple that I could think of :

Scenario - 1) FB Login is in a popup window and the "Simultaneous request" is a XHR - Ex : an analytics or tracking request.

Here is the flow :
  1. User clicks on a "Login with FB" button on your website.
  2. You popup the FB Login page in a new popup window. Request phase is initiated. But there is a small window of time before the redirect response for '/auth/facebook' is received and 'omniauth.state' is set.
  3. During that small window of time, in the main window, you send an XHR to your web app to, maybe, track the click on the "Login with FB" button. You might do this to just track usage or for some A/B testing or to build a funnel, etc. This request sends the session without the 'omniauth.state'.
  4. While the XHR is in progress, the redirect from the request phase is complete and the session with 'omniauth.state' is set. The user now sees FB Login page loading and proceeds to login once it is loaded.
  5. While the user is logging in to FB and approving our app, the XHR has completed and has come back with a session without 'omniauth.state'. This is stored by the browser now.
  6. Once the user logs in and approves your app, the callback phase starts. But the session sent to your web app is now missing the 'omniauth.state'.
  7. oAuth fails.
How big a deal is this scenario?

If you are indeed making an XHR in the background, then this scenario needs to be taken care of. Since the "other simultaneous request" is automatically triggered every time, it is very likely that the session will get clobbered.

How to solve this?

You can first send the XHR and then, in its response handler, open the FB Login page in the popup. Also have a timeout, just to make sure you don't wait too long (or forever) for a response to that XHR.

Alternatively, if you can push your tracking events in a queue stored in a cookie, you can do that and then open the FB Login page. Once the FB Login completes, you can pull that event out of the queue and send it. As a backup have a code that runs on every new page load to look for pending events from the queue in the cookie and send those events.

With HTML5 in place, it's probably better to use localStorage for the queue than the cookie. But again, that needs the user's permission. Your call.

Scenario - 2) FB Login is in the same window/tab but User has the website opened in two tabs.

Here is the flow :
  1. User has your website opened in a browser tab - Tab-1
  2. User opens a link on your website in a second tab - Tab-2 (Ctrl + Click or 'Open in a new tab' menu item). This request sends the session without 'omniauth.state'.
  3. While that Tab-2 is loading, user clicks on "Login with FB" in Tab-1 initiating the request phase.
  4. If the request loading in Tab-2 is a little time consuming, the redirect of the request phase of oAuth in Tab-1 completes before the request in Tab-2, setting the session with 'omniauth.state'. After that the FB Login page is shown and the user proceeds to login and authorize.
  5. While the user is logging in, the request in Tab-2 completes, but with a session that is missing 'omniauth.state'.
  6. After logging in to FB, the callback phase is initiated with a redirect to your web app, but with a session that doesn't have 'omniauth.state'. 
  7. oAuth fails. 
How big a deal is this scenario?

Not a big deal actually. In your web app, in the oAuth failure handler, you can just redirect the user back to /auth/facebook, redoing the whole process again and guess what - this time it will succeed, and without the user having to do anything, because the user is already logged in to FB and has already authorized your app. But just to be on the safer side, you would want to make sure this loop doesn't go infinite (i.e. you start FB auth, it fails and the failure handler restarts the FB auth). Setting a cookie (different from the session cookie) with the attempt count should be good enough. If the attempt count crosses a certain limit, send the user back to the homepage or show an error page or show a lolcats video, c'mon be creative.
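That failure-handler logic can be sketched as plain Ruby (the cookie name 'fb_auth_attempts', the limit of 3, the routes and the Hash standing in for the real cookie jar are all made-up illustrations):

```ruby
MAX_AUTH_ATTEMPTS = 3

# cookies: a plain Hash standing in for the browser cookie jar.
# Returns the path to redirect the user to.
def oauth_failure_redirect(cookies)
  attempts = cookies.fetch('fb_auth_attempts', '0').to_i + 1
  if attempts <= MAX_AUTH_ATTEMPTS
    cookies['fb_auth_attempts'] = attempts.to_s
    '/auth/facebook'            # silently retry the whole flow
  else
    cookies.delete('fb_auth_attempts')
    '/'                         # give up: back to the homepage (or lolcats)
  end
end
```

Three silent retries, then the counter cookie is cleared and the user is sent somewhere safe instead of looping forever.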

Ok, those are two scenarios that I could think of. I am not sure if there are more.

Can OmniAuth change something to solve this?


I believe so. If OmniAuth used a separate signed and/or encrypted cookie to store the state value, instead of the session cookie, none of this session clobbering would result in loss of the state value. OmniAuth is a Rack based app and relies on the Session middleware. I am not entirely sure, but it could probably use the Cookie middleware instead : set its own '_oa_state' cookie and use that during the callback for verification.
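A sketch of what that could look like, using a standalone HMAC-signed value in place of the session (the '_oa_state' idea, the helper names and the secret handling here are all hypothetical, not actual OmniAuth code):

```ruby
require 'openssl'
require 'securerandom'

SECRET = SecureRandom.hex(32)   # in a real app this would be a configured secret

# Request phase: sign the state and put it in its own cookie
def build_state_cookie(state)
  sig = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, SECRET, state)
  "#{state}--#{sig}"
end

# Callback phase: verify the signature, then compare with the returned state
def state_cookie_valid?(cookie_value, returned_state)
  state, sig = cookie_value.split('--', 2)
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, SECRET, state)
  sig == expected && state == returned_state
end
```

Because this cookie is independent of the session cookie, another request re-serializing the session could never wipe the state out.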

Will you send a pull request making this change?


I am not sure. I will first hit the OmniAuth mailing list and find out what the wise folks there have to say about this. If it makes sense and nobody in the awesome Ruby community provides an instant patch, I will try and send a patch myself.

THE END
 
Ok, so that was the awesome ride through the oAuth workings inside the OmniAuth gem. Along the way I got to know quite a bit about Rails and also Ruby internals. Looking forward to writing posts about those too. Okay, okay.. fine. I will try and keep those posts short and not make them this long..!!

Till then, happy oAuthing. :-/ !

P.S : Security experts, excuse me if I have used "authentication" and "authorization" in the wrong places. I guess I have used them interchangeably, as web applications typically do both with oAuth2.

Sunday, March 18, 2012

Rails cookie handling -- serialization and format

A typical Rails cookie has this format : cookie-value--signature (the two dashes are literal). The "cookie-value" part is a URL-encoded, Base64-encoded string of the binary dump (via Marshal.dump) of whatever was set in the session. The signature part is an HMAC-SHA1 digest, created using the cookie-value as the data and a secret key. This secret key is typically defined in [app-root]/config/initializers/secret_token.rb.

Let us try and reverse engineer a session cookie for a local app that I am running. I am using Devise for authentication, which in turn uses Warden. I use the Firecookie extension to Firebug to keep track of cookies. It is pretty handy.

Here is the session cookie set by Rails:

# Cookie as seen in Firebug
BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm--51f90f7176326f61636b89ee9a1fce2a4972d24f


As mentioned at the beginning it has two parts separated by two dashes (--).

The cookie value in this case is :

# The cookie-value part
BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm


The signature is :
51f90f7176326f61636b89ee9a1fce2a4972d24f

Whenever Rails receives a cookie it verifies that the cookie has not been tampered with, by checking that the HMAC-SHA1 signature of the cookie-value matches the signature sent along with it. We can do the verification ourselves here. Fire up irb and try the following :
$ irb

irb(main):003:0> cookie_str = "BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm"
=> "BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm"


# This cookie_secret comes from [app-root]/config/initializers/secret_token.rb. Obviously you need to keep this secret for your production apps.
irb(main):005:0> cookie_secret = '392cacbaac74af104375eb91324e254ba232424130e69022690aa98c1d0dfade159260588677e2859204298181385a83b923e58c4ef24bb3a40bdad9a41431b4'
=> "392cacbaac74af104375eb91324e254ba232424130e69022690aa98c1d0dfade159260588677e2859204298181385a83b923e58c4ef24bb3a40bdad9a41431b4"

irb(main):006:0> OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, cookie_secret, cookie_str)
=> "51f90f7176326f61636b89ee9a1fce2a4972d24f"

As can be seen, the HMAC-SHA1 hexdigest generated from the cookie-value matches the signature part of the cookie. Hence the cookie has not been tampered with.

Now that the cookie authenticity is validated, let us see what information it holds.

Let us retrace the steps taken by Rails to generate this cookie value to get the value stored in the cookie. The steps taken by Rails are :
  1. session_dump = Marshal.dump(session)
  2. b64_encoded_session = Base64.encode64(session_dump)
  3. final_cookie_value = url_encode(b64_encoded_session)
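Those three steps map directly onto stdlib calls (the session contents below are made up for the demo; a real session would hold Warden/Devise data like the one above):

```ruby
require 'base64'
require 'cgi'

session = { 'session_id' => '5047d93043d4a3908a907e60698df97f' }

session_dump        = Marshal.dump(session)           # 1. binary dump
b64_encoded_session = Base64.encode64(session_dump)   # 2. Base64 encode
final_cookie_value  = CGI.escape(b64_encoded_session) # 3. URL encode
```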

The reverse process would be :
  1. url_decoded_cookie = CGI::unescape(cookie_value)
  2. b64_decoded_session = Base64.decode64(url_decoded_cookie)
  3. session = Marshal.load(b64_decoded_session)

And with a beautiful language like Ruby all these 3 steps can be done in one single line of code. Here it is :
(Btw, I need to require 'mongo' because one of the values contained here is of type BSON::ObjectId, which is defined in the mongo gem. Without it, Marshal.load will error out.)

irb(main):001:0> require 'mongo'
=> true
irb(main):002:0> require 'cgi'
=> true
irb(main):003:0> cookie_str = "BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm"
=> "BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm"

# Reverse engineering the cookie to get the session object
irb(main):004:0> session = Marshal.load(Base64.decode64(CGI.unescape(cookie_str)))
=> {"warden.user.user.key"=>["User", [BSON::ObjectId('4f2aacb00bd10338ed000001')], "$2a$10$FlxxwsowCoKpxgyc184voO"], "session_id"=>"5047d93043d4a3908a907e60698df97f"}

This is the session data that the session cookie was holding. This data is subsequently used by Warden and Devise to fetch the user from the DB and do the authentication.

And that is how Rails handles cookies (at least how Rails 3.0.11 does; I am not sure if things have changed in later versions).

Saturday, January 21, 2012

Shortcomings of aliased field or attribute names in Mongoid - Part 1

NOTE:
  • The behavior and shortcomings explained below apply to Mongoid versions 2.4.0 (released on 5th Jan, 2012) and releases previous to that. A recent commit made on 10 Jan, 2012 fixes all these shortcomings.
  • For those using the affected versions (all Rails 3.0 developers), this monkey patch will address the shortcomings.

In my previous post I wrote about getting a list of aliased field names. From that post it might be evident that dealing with aliased field names is not that straightforward in Mongoid. I am using Mongoid v2.2.4, which is the latest version working with Rails 3.0. Mongoid v2.3 and later require ActiveModel 3.1 and hence Rails 3.1.

Anyways, aliased field names have these shortcomings :
  1. Accessor methods are defined only with the aliased names and not the actual field names.
  2. Dirty attribute tracking methods are not defined for the aliased names.
  3. attr_protected, if used, should be used with both short and long forms of field names.
Writing about all three in a single post would result in an awfully long post. So I will put the details about each of these in their own posts, starting with the first one in this post.

Accessor methods are defined only with the aliased names and not the actual field names.


Consider the following model definition:
class User
  include Mongoid::Document

  field :fn, as: :first_name
  field :ln, as: :last_name
end
I would have expected the additional accessor methods named 'first_name', 'first_name=', 'last_name' and 'last_name=' to be simple wrapper methods which just forward the calls to the original accessor methods : 'fn', 'fn=', 'ln' and 'ln='. But Mongoid just doesn't create the shorter form of the accessor methods at all.
user = User.new
user.respond_to?(:fn)         # Returns false
user.respond_to?(:ln)         # Returns false
user.respond_to?(:first_name) # Returns true
user.respond_to?(:last_name)  # Returns true
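The forwarding I had expected can be sketched without Mongoid at all (everything here - the module, the helper name alias_field and the use of attr_accessor in place of real Mongoid fields - is illustrative):

```ruby
module FieldAliasing
  # Defines long-form accessors that simply forward to the short-form ones
  def alias_field(actual, aliased)
    define_method(aliased) { send(actual) }
    define_method("#{aliased}=") { |value| send("#{actual}=", value) }
  end
end

class User
  extend FieldAliasing
  attr_accessor :fn, :ln          # the actual (short) field accessors
  alias_field :fn, :first_name
  alias_field :ln, :last_name
end

user = User.new
user.first_name = 'Ada'
user.fn                 # both forms now respond and stay in sync
```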
This doesn't appear to be a problem at first sight, because an application developer would use the long form of the methods in the application code. The trouble begins with the dirty tracking methods, which use the actual attribute name and consequently the shorter form of the field names. Take a look at these parts of Mongoid and ActiveModel:
  • Definition of setter method for any attribute - Github link for v2.2.4
    define_method("#{meth}=") do |value|
      write_attribute(name, value)
    end
    
    Notice that the field name (i.e. the short form) is being passed to write_attribute, which eventually gets passed to ActiveModel's dirty attribute tracking method attribute_will_change!

  • Definition of the ActiveModel method : attribute_will_change! -- Github link for v3.0.11
    def attribute_will_change!(attr)
      begin
        value = __send__(attr)
        value = value.duplicable? ? value.clone : value
      rescue TypeError, NoMethodError
      end
    
      changed_attributes[attr] = value
    end
    
On line 3, a method with the same name as the attribute's short name is invoked with __send__. Since Mongoid doesn't define such methods, this mostly results in a NoMethodError, which is caught and swallowed, and nothing happens. This is comparatively harmless. But if a method with that name already exists, then that method gets called and a lot of unwanted things can happen. In the case of the User model above, 'fn' just results in a NoMethodError, whereas the 'ln' field could end up invoking any of the following methods :

Object.ln
FileUtils.ln
Rake::DSL.ln

That could result in pretty nasty errors about these ln methods and you wouldn't even know why they are being called! Now, whether it is a good practice to name your attributes in a way that clashes with already defined methods is a totally different thing. But just remember that the cause of a weird error is probably aliasing.
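The swallowing behavior is easy to reproduce standalone (this is a stripped-down imitation of attribute_will_change!, not the real ActiveModel code):

```ruby
class TrackedModel
  def initialize
    @changed_attributes = {}
  end
  attr_reader :changed_attributes

  def attribute_will_change!(attr)
    begin
      value = __send__(attr)     # invokes the short field name, e.g. 'fn'
      value = value.dup
    rescue TypeError, NoMethodError
      # swallowed: tracking proceeds with value left as nil
    end
    @changed_attributes[attr] = value
  end
end

m = TrackedModel.new
m.attribute_will_change!('fn')   # no 'fn' method defined anywhere -> swallowed
m.changed_attributes             # the old value is recorded as nil
```

Had a method named 'fn' happened to exist (the way FileUtils.ln does for 'ln'), it would have been called here instead of raising.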

Wednesday, January 18, 2012

Getting the list of aliased key/attribute names from a Mongoid model

At some point today, when I was writing some model specs for one of my Mongoid models, I needed the list of all of the attribute/key names. Mongoid provides a handy "fields" method for this, which returns a hash of key name and Mongoid::Fields::Serializable object pairs. Getting the list of names from that was easy : Model.fields.keys.

This gives the list of the actual key names. The actual key names, in my case, are very short strings (1 to 3 characters) and I have long aliases for them in my models. What I eventually realized was that I wanted the list of the longer aliased names. Looking around the Mongoid code did not turn up any direct method. It turns out that the aliased names result in nothing more than a few additional 'wrapper' methods (like the accessors, dirty methods, etc.) and there is no table/hash maintained anywhere to give the mapping between the actual key names and the aliased ones. So my current guess is that the list of these aliased names is not directly available anywhere.

So I came up with this hackish way of getting that list of aliased names.

p = Post.new
actual_field_names = p.fields.keys
all_field_names = p.methods.collect { |m|
  m.to_s.match(/_changed\?$/).try(:pre_match)
}.compact
aliased_field_names = all_field_names - actual_field_names

As mentioned earlier, this is pretty hackish. If you know of a straight forward way, do let me know.

Note : I eventually found out that I did not actually need this list of aliased names. I did not use this in my project. Nevertheless it works just fine.

Monday, October 3, 2011

Watch points for variables in Ruby - Object#freeze

Almost every programmer knows about watch points, especially the ones doing native development with C/C++. Watch points were really helpful to me when I was working with C/C++. They were, sort of, my go-to weapons whenever I wanted to understand how some third party code worked. It was something that I dearly missed when I started with Ruby. I am fairly new to Ruby and I have never used the ruby-debug (or ruby-debug19) gem, because until today simple print statements were sufficient most of the time.

Today I was at a loss, as I was unable to figure out where a particular hash variable was getting two new key-value pairs. It was an instance variable with just an attr_reader defined. So obviously a reference to the instance variable was being passed around to the place where it was being modified, and my initial idea of writing a custom write accessor method was probably not going to work (I did not try it). That is when I came across http://ruby-doc.org/docs/ProgrammingRuby/html/trouble.html#S3. The last bullet point in that section has the answer.

You just freeze the object/variable that you want to watch by calling the "freeze" instance method on it, and anyone modifying that object after it's frozen will cause an exception to be raised, giving you the precise location of where that modification is happening. This probably isn't as elegant as running a debugger and setting a watch point, but it gets the work done nevertheless. RTFM after all..!! This tool is definitely going into my belt. :)
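A minimal demonstration (the hash here just stands in for the instance variable I was chasing):

```ruby
settings = { 'host' => 'localhost' }
settings.freeze

begin
  settings['port'] = 3000   # the "mystery" modification we want to locate
rescue => e
  # Raises a RuntimeError ("can't modify frozen Hash"); on modern Rubies
  # it is a FrozenError subclass. e.backtrace points straight at the
  # offending line -- that is your watch point.
  e.backtrace
end
```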

Tuesday, October 12, 2010

Building Ruby 1.9.2 and installing rails 3.0 on it -- On Ubuntu 10.04 - Lucid Lynx

Issues that I faced while building Ruby 1.9.2, installing Rails 3.0 on it, and finally running the example in the "Getting started with Rails" guide.

Make sure the following development libraries are installed before you start building ruby:
(The ruby configure, make and make install steps (i.e. building and installing) will not tell you anything about these missing libraries)

1) zlib-dev (I think the package name is zlib1g-dev) -- Needed when you try to install the rails gem. If it is not available, you will get the following error when you try to install rails with the command :
gem install rails

ERROR: Loading command: install (LoadError) no such file to load -- zlib
2) libssl-dev -- Needed when you try to run the built-in rails WEBrick server and load the first example app in the getting started guide. You will get an error of the form:
"LoadError: no such file to load -- openssl"
In my case I did not have this library the first time I built ruby. So I followed the instructions given here to build the openssl-ruby module/binding.
After this I ran `make` and `make install` from the top ruby source directory. Maybe that was not necessary, but I did it anyway.

Also, I am guessing that if this package had been available when I first built ruby, the openssl-ruby module would have been built by default. If not, there should be a configure option to enable this `feature`. The configure help output does not provide any info on this (not even with the --help=recursive option).

==== Upgrading from older ruby versions ====

Older ruby versions used the folder /usr/local/lib/ruby/site_ruby//rubygems . Now apparently this directory is replaced by /usr/local/lib/ruby//rubygems .

So you will have to get rid of the site_ruby folder (i.e. delete it) so that the gems are not searched for and used from a stale folder.

Not doing this might result in you not being able to run gem at all.