
Tuesday, December 25, 2012

Presumed Innocent - by Scott Turow - A review

Last week my friend Ananth Prasad returned a bunch of my books that had been with him for quite some time. Amongst them was a book that I did not remember having read. The age of the paperback was apparent from the brown color that its once-white pages now donned. It was a novel named "Presumed Innocent" by Scott Turow.

Neither the title nor the author's name rang any bells when I first saw it, and that made me a little skeptical about the book. Nevertheless, I decided to try it out, and I am very glad I did. Although very old, it turns out this book was somewhat of a hit back in its day and was even made into a movie starring Harrison Ford. Additionally, the author is a pretty successful one, with his books translated into 20 languages!

It was very clear from the cover design and the title that this is about a murder. The author doesn't try to hide this fact and starts the story with the murder of a woman. There is no back story, no suspense, nothing. A woman is murdered, that's it. The protagonist is introduced as the guy investigating the murder, on a special request from his boss. The pace of the storytelling was also OK. It was not super fast-paced like the modern military/CIA spy thrillers, but it was not boring either. I was hooked for the initial few pages, while the characters were introduced and the initial scene/plot was laid out. But after that the pace just drops dead, and I felt like I was reading the script of one of the Hindi daily soaps that my mother, sisters and aunts watch (in case you did not know, the story barely moves ahead in a week's duration in those soaps). IMHO, the initial 150 pages could probably have been dealt with in just about 40 to 50 pages. Personally, I am not really a big fan of overly descriptive narration of mundane things, like a walk in the woods where the walker notices the rustling of the leaves and imagines something in those sounds, or the collection of vague thoughts the protagonist has when he hits a low point in his life. If it is descriptive narration, it has to be superbly imaginative and way beyond my own imagination. That's why I find descriptive narration appealing only when I am reading a fantasy novel. Nobody does it better than J. R. R. Tolkien. :)

Anyway, coming back to "Presumed Innocent": at the end of that boring, slow-paced section it is revealed that the protagonist, a character named Rusty Sabich, who was investigating the murder, is now actually the accused. And then things start to get interesting. As expected, Rusty goes to the most famous defense lawyer, who, naturally, was his arch enemy while he worked as a public prosecutor. Once the courtroom drama starts and things start to unravel, it gets really gripping. At the very beginning you are given a bunch of data - some well-known facts, some speculation and some extrapolations made at that point in time. With that, and having read a bunch of John Grisham legal thrillers, I decided upon one of the characters as the murderer. As the investigation and trial proceeded and more things surfaced, I couldn't help but suspect some other character. This went on for most of the rest of the book. But as I proceeded I noticed that I never suspected any character a second time. It was almost like an elimination process, bringing the reader closer to the actual murderer. After all this running around, it is revealed that my initial suspect is indeed the murderer. So that kind of robbed me of the "Holy shit!" moment and instead left me with an "Aah, damn it, I was right initially" moment. Although the ending wasn't very spectacular, the journey was pretty awesome. So I guess in this case the means justify the end. :P

There are two things that I specifically liked about this book:
1) The crisp and clear explanations of the legal procedures and jargon. It was interesting to learn, and it kind of helped me imagine myself in the story as a part of the legal system. Before reading this I did not know that the judge plays such an important role in a trial in a US court. I always thought the jury was the most important part and the judge was only there to oversee the trial.

2) The whole story - plot, dialogues, character presentation, everything - appeared very close to reality and not at all flamboyant. Although this has been made into a full-length motion picture, I would prefer to see it as an episode of "Law and Order" (which currently airs on FOX Crime).

The storytelling is not continuous. The protagonist is not active 24x7. When it's the weekend and the court is closed, the protagonist spends most of his time at home, and his lawyer also takes a break. The story just resumes on Monday, pretty much where it left off on the previous Friday evening. In fact, after the first hearing, nothing much happens before the trial date. This is pretty much how every episode of "Law and Order" is presented, and I like it that way. It makes the story appear realistic.

All in all, it was a good read - in fact a very good one if I discount the initial slow-paced part. Now I am very eager to watch the movie, which apparently was also very well received. Next weekend, I guess. :)

Friday, June 8, 2012

Help from religious teachings for ordinary worldly life

There are many texts that proclaim the principles, ideas, and codes of conduct of our Sanatana Dharma. Beyond the Vedas and Upanishads, which form its foundation, we all know there are numerous other texts, mantras, hymns, and smritis. I am interested in learning and understanding the ideas put forth in these great texts. In recent days I have had some discussions with a few friends who have read about and understood these subjects well, read a few books, and attended a couple of lectures. Where possible, I am also trying to look at Hinduism through a historical lens.

As far as I have understood so far, our teachings can be divided as follows:
(perhaps the teachings of every religion can be divided the same way)
1. Practices - the daily rites of worship, what to eat, which festivals to celebrate and how, and so on
2. Life morals - always speak the truth, treat others with respect, help those in difficulty, and so on
3. Spiritual study - who am "I", what is the fundamental purpose of our life, what are the beginning and the end of this universe, and so on

As I see it, these three parts are in increasing order of difficulty of practice. That is why, among those who practice the religion, those who follow the rituals are the most numerous; those who know and follow the life morals are somewhat fewer; and those who know spiritual matters and study them are rarer still. I believe each part is a step toward the next. But most people, not realizing this, have largely stopped at the rituals. Moreover, the majority do not know the reasons behind those rituals. For example, memorizing mantras without knowing their meaning and reciting them daily. I do this myself; I do not know the meaning of many of the mantras I have memorized. When I asked for the meaning, or for the reason behind some ritual, the answer was usually one of these:
- Some scripture or other says it must be done.
- Chanting this mantra washes away all your sins and grants you moksha.
- This pleases God, and he will bless you.

But what I feel is that these teachings are useful not just for attaining moksha but also in this world, in our daily life. After reading similar ideas in a few books, this became a conviction. So I have now decided to learn the reasons and meanings behind these rituals and morals. Along with that, I will try to understand what use they have for ordinary worldly pursuits, and how they improve our life in this world.

The usefulness of some practices, like yogasana and Ayurveda, is quite easy to see. For some other practices, the reason or use is a little harder to figure out, because they may have been useful in their time but may now be mere habit, and hence may not be useful in today's world. Still others may simply be a lifestyle. For example, eating non-vegetarian food. It cannot be said categorically that this is bad. Instead, one can say, "It does not suit us, and therefore we do not eat it." That becomes our lifestyle. In my view, it has nothing to do with attaining moksha (if such a thing exists).

As for the morals, we have all heard and read about their significance and importance from many quarters since childhood. Many morals are taught irrespective of religion, so it does not take much effort to see their importance. But their material benefits are not so directly apparent. Religious discourses usually bring in the ideas of "karma" and "rebirth" and explain the importance and usefulness of the morals through them. But I, not having much belief in rebirth and karma, believe that they have benefits in this very life. I plan to strengthen this belief of mine by learning about real incidents where following these morals did good.

Finally, there is this spiritual study, which is a bit of a hard nut to crack. My knowledge of it is minuscule - barely there at all. Whatever I learn about it will be new to me. So I have decided to simply write about whatever I come to understand.

Let us see how far this effort goes, and where it finally leaves me.

Sunday, March 18, 2012

Rails cookie handling -- serialization and format

A typical Rails session cookie has this format: cookie-value--signature (the two dashes are literal). The "cookie-value" part is a URL-encoded, base64-encoded string of the binary dump (via Marshal.dump) of whatever was set in the session. The signature part is an HMAC-SHA1 digest, created using the cookie-value as the data and a secret key. This secret key is typically defined in [app-root]/config/initializers/secret_token.rb.

Let us try to reverse-engineer a session cookie for a local app that I am running. I am using Devise for authentication, which in turn uses Warden. I use the Firecookie extension for Firebug to keep track of cookies; it is pretty handy.

Here is the session cookie set by Rails:

# Cookie as seen in Firebug

As mentioned at the beginning, it has two parts separated by two dashes (--).

The cookie value in this case is :

# The cookie-value part

The signature is :

Whenever Rails gets a cookie, it verifies that the cookie has not been tampered with, by checking that the HMAC-SHA1 signature of the cookie-value matches the signature sent along with it. We can do the verification ourselves. Fire up irb and try the following:
$ irb

irb(main):003:0> cookie_str = "BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm"

# This cookie_secret comes from [app-root]/config/initializers/secret_token.rb. Obviously you need to keep this secret for your production apps.
irb(main):005:0> cookie_secret = '392cacbaac74af104375eb91324e254ba232424130e69022690aa98c1d0dfade159260588677e2859204298181385a83b923e58c4ef24bb3a40bdad9a41431b4'
=> "392cacbaac74af104375eb91324e254ba232424130e69022690aa98c1d0dfade159260588677e2859204298181385a83b923e58c4ef24bb3a40bdad9a41431b4"

irb(main):006:0> OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, cookie_secret, cookie_str)
=> "51f90f7176326f61636b89ee9a1fce2a4972d24f"

As can be seen, the HMAC-SHA1 hexdigest generated from the cookie-value matches the signature part of the cookie. Hence the cookie has not been tampered with.
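The check above can be wrapped into a small helper method (a sketch of my own; `verified_cookie_value` is a made-up name, not a Rails API):

```ruby
require 'openssl'

# Returns the cookie-value part if the signature checks out, nil otherwise.
def verified_cookie_value(cookie, secret)
  value, signature = cookie.split('--', 2)
  return nil if value.nil? || signature.nil?
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, secret, value)
  value if signature == expected
end
```

One note on the comparison: Rails itself uses a constant-time comparison here to avoid timing attacks; a plain `==` is fine for poking around in irb.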

Now that the cookie authenticity is validated, let us see what information it holds.

Let us retrace the steps Rails takes to generate this cookie value, so as to recover the value stored in it. The steps taken by Rails are:
  1. session_dump = Marshal.dump(session)
  2. b64_encoded_session = Base64.encode64(session_dump)
  3. final_cookie_value = url_encode(b64_encoded_session)
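These steps can be tried out with a plain hash standing in for the session (no Rails required):

```ruby
require 'base64'
require 'cgi'

session = { 'session_id' => 'abc123' }       # a stand-in for the real session

session_dump        = Marshal.dump(session)            # step 1
b64_encoded_session = Base64.encode64(session_dump)    # step 2
final_cookie_value  = CGI.escape(b64_encoded_session)  # step 3
```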

The reverse process would be :
  1. url_decoded_cookie = CGI::unescape(cookie_value)
  2. b64_decoded_session = Base64.decode64(url_decoded_cookie)
  3. session = Marshal.load(b64_decoded_session)

And with a beautiful language like Ruby, all three steps can be done in a single line of code. Here it is:
(Btw, I need to require 'mongo' because one of the values contained here is of type BSON::ObjectId, which is defined in the mongo gem. Without this, Marshal.load will error out.)

irb(main):001:0> require 'mongo'
=> true
irb(main):002:0> require 'cgi'
=> true
irb(main):003:0> cookie_str = "BAh7B0kiGXdhcmRlbi51c2VyLnVzZXIua2V5BjoGRVRbCEkiCVVzZXIGOwBGWwZvOhNCU09OOjpPYmplY3RJZAY6CkBkYXRhWxFpVGkvaQGsaQGwaRBpAdFpCGk9aQHtaQBpAGkGSSIiJDJhJDEwJEZseHh3c293Q29LcHhneWMxODR2b08GOwBUSSIPc2Vzc2lvbl9pZAY7AEYiJTUwNDdkOTMwNDNkNGEzOTA4YTkwN2U2MDY5OGRmOTdm"

# Reverse engineering the cookie to get the session object
irb(main):004:0> session = Marshal.load(Base64.decode64(CGI.unescape(cookie_str)))
=> {"warden.user.user.key"=>["User", [BSON::ObjectId('4f2aacb00bd10338ed000001')], "$2a$10$FlxxwsowCoKpxgyc184voO"], "session_id"=>"5047d93043d4a3908a907e60698df97f"}

This is the session data that the session cookie was holding. This data is subsequently used by Warden and Devise to fetch the user from the DB and do the authentication.

And that is how Rails handles cookies (at least how Rails 3.0.11 does; I am not sure whether things have changed in later versions).

Thursday, March 15, 2012

NAS and SAN explained -- with technical differences.

Acronyms and fancy buzzwords (specifically computer science ones) have always troubled me, at times making me very angry at the person using them and in many cases eventually leaving me confused. So whenever I come across such acronyms/buzzwords I try to dissect them and prepare a mental visual map that I can use every time the acronym comes up in the future. The acronyms for this write-up are NAS (Network Attached Storage) and SAN (Storage Area Network).

These might be very simple and obvious things for many people, but I am sure I have lost quite a bit of my hair whenever someone mentioned these acronyms to me. So here is my attempt to decipher them.

First, the basics. Both of these consist of two building blocks: storage and network. Or, to put it less naively, both SAN and NAS allow applications on one machine to access data present on another machine. Okay, so why two names, why two acronyms? To answer that, let me take up these two building blocks separately.

In the simplest sense, "storage" means dealing with files stored on the hard disk attached to the system. We do that with the APIs (or "methods", if you want to avoid the acronym) made available by the filesystem, and with libraries built on those methods. As application programmers we almost never worry about how the files are actually stored on the disk; that is the responsibility of the filesystem, the kernel and the disk driver. The application always views the data stored on the disk in terms of files (used in a generic sense to refer to both files and directories) - more so as a stream of bytes. If we dig a little deeper we find that these disks are actually made available to the filesystem by the disk drivers as block devices - i.e. whenever they accept or return data they do it in quanta of blocks. A disk doesn't return a single byte of data when you read from it; it always returns one or more blocks. From what I understand, a typical block size these days is 4KB. The amount of data transferred to or from the disk is a multiple of this block size. Allocation of space for files is also made in terms of blocks, which sometimes leads to a file utilizing its last block only partially (and that is why we see a difference between a file's actual size and its size on disk).

That's about storage. To summarize: data is made available as files by the filesystem software, but the device actually makes it available as blocks.

Network, in the simplest sense, is communication between two processes - running either on the same machine or on different machines. To simplify further, let's limit ourselves to the case of two processes on two different machines. Typically one of these will be a server process and the other a client process. The server process listens on a specified port, to which the client can connect. The client can then send requests over the connection, which the server will "serve" by sending back a suitable response. The formats of the request and the response are specified beforehand, and the client and the server agree to conform to that specification. This conformance is what is called the "protocol" that the two processes (or in this case the two machines) use for their communication. The client typically asks for some data, and the server fetches it from some place and sends the requested data back as the response. The client doesn't know where the server fetches the data from, and the server doesn't know what the client does with the data. The protocol is all that matters to them.
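The request/response dance can be illustrated with a toy client and server in Ruby (purely an illustration; real NAS/SAN protocols are vastly more involved):

```ruby
require 'socket'

server = TCPServer.new('127.0.0.1', 0)  # port 0 asks the OS for a free port
port = server.addr[1]

server_thread = Thread.new do
  conn = server.accept
  request = conn.gets.chomp                 # read one line - the "request"
  conn.puts("you asked for: #{request}")    # send one line back - the "response"
  conn.close
end

client = TCPSocket.new('127.0.0.1', port)
client.puts('block-42')                     # the client asks for some data
response = client.gets.chomp                # ...and receives the response
client.close
server_thread.join
puts response                               # prints: you asked for: block-42
```

The line-in/line-out agreement between the two processes is the "protocol" here, trivial as it is.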

That's network. No summary here.

Okay, so how do storage and network come together now?

In the storage example, the data on the hard disk (referred to as "our hard disk" henceforth) was being accessed by applications running on the same machine (referred to as the "host machine" henceforth). Now what if applications running on a different machine (referred to as the "new machine" henceforth) want to access the data on our hard disk? Let us call this requirement "remote data access".

The traditional filesystem software is designed to interact with a disk made available to it on the local system by the disk driver, and the driver is designed to handle a disk attached to that local system. For our "remote data access", either the filesystem software has to get smarter and start talking to the device available on our host machine, or the disk driver has to become smarter and make the disk on our host machine available as a local device on the new machine. These two options are what the two acronyms stand for. One means a smarter filesystem with the same old driver, and the other means a smarter driver with the same old filesystem. That's the difference between the two, and the reason there are two names and two acronyms!

NAS - Network Attached Storage -- This one has a smarter filesystem and the same old driver. In our setup, the filesystem on the "new machine" knows that the disk is on the "host machine" and every time an application requests a file (either for reading or writing) it has to contact the "host machine" over network and retrieve the file. In other words the filesystem on the "new machine" makes a request to the "host machine" - making it a client process. To accept and respond to that request there must be a server process running on the "host machine". This server process fetches the requested file from the disk (using the old driver) and sends it back to the client. The client process, which is the filesystem software, in turn makes that file available to the application that requested it. We can see that the data on the server is made available to the client as a file. This is what defines NAS.

So for the filesystem software to get smart, it needs two components - a client part used by the applications, and a server part that handles the disk. There is quite a bit of such "smart filesystem software" out there. The most common in the Unix/Linux world is NFS - Network File System. The server part of NFS is named "nfsd". On the client side, the standard "mount" command is smart enough to mount drives of filesystem type "nfs".

Note that here the filesystem software is aware that the disk (and hence the data) is on a remote machine. This is another defining trait of NAS.

More details are available here : http://nfs.sourceforge.net/ and here : https://help.ubuntu.com/8.04/serverguide/C/network-file-system.html

SAN - Storage Area Network -- This one has a smarter disk driver and the same old filesystem. The disk driver on the "new machine" lies to the OS and the filesystem software that there is a disk attached to the system locally. The OS and the filesystem software believe the driver and happily use the fake disk it provides. Whenever the disk driver is asked to fetch a block (not a file - a block), it in turn sends a request to the "host machine" and retrieves that block of data, thereby becoming the client process in the setup. Accordingly, there is a server process running on the "host machine" which accepts this request, fetches the corresponding block from the actual disk and sends it back to the client. The client, which is the smart disk driver in this case, in turn passes that data to the filesystem software and eventually to the application that requested the file data. It is evident here that the data on the server is made available to the client as blocks and not as files. This is what defines SAN.

Note that here the filesystem (and every other component apart from disk driver) is not aware that the disk (and the data) is on a remote machine. This is another defining trait of SAN.

A very common and popular appearance of SAN these days is in the various cloud offerings. For instance, Amazon's cloud offering has a service named EBS - Elastic Block Store - which makes network storage available as a locally attached disk. We can have all the regular filesystems, like ext4 or xfs, on top of an EBS drive.

That's it. The two acronyms have been conquered... !

Saturday, March 10, 2012

Analysis of the Duqu Trojan worm by Kaspersky Labs

I happened to come across the discovery and research around the Duqu trojan, which apparently is the successor of the notorious Stuxnet worm. There are a lot of articles to read, and I am feeling a little sleepy now; I may not finish all of them and stay awake to write a summary of my understanding. So instead of bookmarking all those tabs, I am documenting them here with a little metadata identifying what each link talks about.

(Note: Yesterday night I did doze off in the course of writing this post. :P)

  1. The FAQ link - http://www.securelist.com/en/blog/208193178/Duqu_FAQ

A standard FAQ page; a good starting point if you are totally new to Duqu or Stuxnet. Also answers some noob questions. Btw, it mentions that one of the Command & Control servers was hosted in India!

  2. The mystery of Duqu - Part one - http://www.securelist.com/en/blog/208193182/The_Mystery_of_Duqu_Part_One

This one provides a bird's-eye view of the worm - its components, the files involved and how they play together, and a comparison with Stuxnet (with a missile analogy). It also gives a chronological view of the discovery and detection of this worm. More importantly, it talks about the various device drivers - signed and unsigned - that were used as a disguise.

  3. The Mystery of Duqu: Part Two - http://www.securelist.com/en/blog/208193197/The_Mystery_of_Duqu_Part_Two

This one talks about the first detected real-world infections, which these guys found using their cloud-based Kaspersky Security Network. These were in Sudan and Iran, but with no direct link to Iran's nuclear program yet. One thing stands out, though - the worm was totally different in each infection: a different driver name and a different checksum, and in one case a different size too. So the mystery continues.

  4. The Mystery of Duqu: Part Three - http://www.securelist.com/en/blog/208193206/The_Mystery_of_Duqu_Part_Three

A short entry which corrects a mistake made in the previous post about a network attack. What's more interesting is that this reveals the starting point of the infection - a.k.a. the dropper. It turns out to have been a 0-day exploit in Microsoft Word, related to the file win32k.sys (CVE-2011-3402). The infected Word file was sent to specific people via email. Also, each infected file was different, which means the file was crafted individually for each target.

  5. The Duqu Saga Continues: Enter Mr. B. Jason and TV’s Dexter - http://www.securelist.com/en/blog/208193243/The_Duqu_Saga_Continues_Enter_Mr_B_Jason_and_TVs_Dexter

This one gets a little technical and walks us through the modus operandi, taking one of the infections mentioned in the previous post. It reveals a bunch of things and confirms most of the assumptions made previously, viz: a very targeted attack, dynamic modules with little to no trace on the target machine, different C&C servers for different targets, etc. It also tells us how the worm authors got creative, creating a font named Dexter Regular and naming its creator Showtime Inc.

What is even more interesting is the way the comments get creative. One comment offers a new interpretation of a hex string found in the trojan code - 0xAE790409. Earlier it was thought to relate to the death of Habib Elghanian (http://en.wikipedia.org/wiki/Habib_Elghanian), as in the Stuxnet case. The new interpretation is that AE means "Atomic Energy" and (19)79-05-09 is the date on which the USA and USSR signed the SALT II treaty limiting nuclear weapons. This is wrong, though, because SALT II was signed on June 18, 1979 - http://en.wikipedia.org/wiki/Strategic_Arms_Limitation_Talks#SALT_II

    Another comment interprets the sender email bjasonxxxx@xxx.com as "Bourne Jason", the ultimate spy/operative from the famous Bourne novel/movie series.

  6. The Mystery of Duqu: Part Five - http://www.securelist.com/en/blog/606/The_Mystery_of_Duqu_Part_Five

This one dives deep into the structure and layout of the trojan's DLL and PNF files, the registry entries, the config files, the processes it affects, etc. It gets very technical, and requires knowledge of binary file formats and the DLL loading mechanism to understand fully. The loader part is fully dissected here; the payload, however, is still unknown. They say it is some C++ code with heavy use of the STL, and probably a custom framework.

  7. The Mystery of Duqu: Part Six (The Command and Control servers) - http://www.securelist.com/en/blog/625/The_Mystery_of_Duqu_Part_Six_The_Command_and_Control_servers

This one analyzes the command and control servers used by the Duqu trojan. This is the first post where the details of the Indian C&C server are mentioned. It belonged to a web hosting company named Webwerks - http://www.web-werks.com/ and http://www.webwerks.in/. The Kaspersky guys say this was the most interesting of all the C&C servers - probably because it was the first one and also the longest serving. Unfortunately they were not able to analyze it, as it was wiped clean just hours before the hosting company agreed to make an image of the server. Nevertheless they analyzed two other servers - one in Vietnam and one in Germany - and dug up a boatload of information. Their final stand is that either OpenSSH 4.3 has a 0-day vulnerability, or the server admins had a very bad password and the hackers cracked it by brute force.

  8. The Mystery of the Duqu Framework - http://www.securelist.com/en/blog/667/The_Mystery_of_the_Duqu_Framework

This post details the code structure of the payload and tries to decipher the programming language and framework used. Although many parts appear to be standard C++ with heavy use of the STL, a significant portion of the main payload code appears to have no link to the standard C runtime and does not appear to have been compiled with the Microsoft Visual C++ compiler. The code uses the native Win32 API directly, bypassing the runtime. This means the trojan authors either used a very obscure programming language and compiler or came up with their own. The comments discuss various possibilities, but few actually make sense. One commenter is very sure it is one of the big US software companies, and pinpoints IBM as the prime suspect, along with his own myriad set of proofs.

The bottom line is that the sponsors of the Duqu worm have deep pockets, are very organized, and have very specific targets. Also, different parts were probably developed by different teams, with no team knowing the full picture. This very likely means it is state sponsored. My guess is that that information will never come out.

Monday, February 27, 2012

How to type Kannada in Windows 7 - a tutorial

When I talk (chat) online, I talk in Kannada as much as possible - that is, I type in Kannada. Quite a few friends who have seen this ask me how I do it and whether they can do it too. This article is the answer.

From Windows Vista onwards, everything needed to type Kannada is built into Windows itself. There is nothing else to install; you only need to enable the Kannada keyboard. I made a short video showing how to do that. The irony is that the narration in the video is in English. :( Never mind - it is enough if people learn from it and start typing in Kannada.

For those who would rather skip the video, or find it unclear, there are screenshots below. You can enable the Kannada keyboard by following them along with the accompanying explanation. I had written the explanation in English for the video, and being lazy, I have copy-pasted it here as-is - please adjust. :)

I made the video, with no effort or expense at all, on the Stupeflix website (http://studio.stupeflix.com/). Many thanks to them.

ಸಿರಿಗನ್ನಡಂ ಗೆಲ್ಗೆ (May glorious Kannada prevail!)


Screenshots

Open Control Panel and click on "Change keyboard and other input methods".
If you can't see it, make sure the view is set to "Category" at the top right.

A new dialog pops up. Click on the "Change keyboards" button in the new dialog box.

Another dialog pops up, showing the keyboards currently in use. The Kannada keyboard will not be listed there yet; you will see it after adding it. Click the "Add" button.

A new dialog pops up with a list of all available keyboards, arranged in alphabetical order. You will find Kannada about halfway down the list. Scroll down and choose "Kannada" under the "Kannada, India" entry by checking the box.

Click "OK" in this dialog box, and in every other dialog opened in the previous steps.

Now you should see a small language bar at the bottom right of your screen. Clicking on it will let you change the language.

Alternatively, you can change the keyboard language by pressing the "Alt + Shift" keys together.

You can start typing kannada whenever you want after changing the language.

Initially, use the "On Screen Keyboard" to understand how the keys are laid out.

To launch the "On Screen Keyboard", open the Windows menu and type the letters "o s k".

osk.exe will be listed under "Programs". Click it to launch the "On Screen Keyboard".

The "On Screen Keyboard" program launches. It starts as an English keyboard, by default.

Change the language using the language bar at the bottom right, or by pressing the "Alt + Shift" keys together.

Once you change the language, the "On Screen Keyboard" will show the "Kannada" letters.

Take a closer look at the keyboard layout.

To the left are the vowels, also known as "swaraa", and to the right are the consonants, also known as "vyanjanaa".

Observe the keyboard for a little while to understand the layout.

Open Notepad and start typing Kannada in it with the "On Screen Keyboard". A letter is formed by the combination of a "vyanjanaa" and a "swaraa". Try out the various combinations.

Once you have a little familiarity with the layout, minimize the "On Screen Keyboard" and type using the real keyboard.

Please be patient and practice regularly to be able to type easily. It will be difficult initially, but it will be a JOY later.

Good luck!

Thursday, February 2, 2012

Network does not work in Ubuntu after Hibernate and Resume

I run Ubuntu 10.04 (Lucid Lynx) in VMware Player, and Ubuntu has this habit of silently hibernating when it is told that the battery level is low. VMware Player doesn't do a good job of reporting the right battery status, which leads to the virtual machine just hibernating, without asking me anything, while I am in the middle of something. When I restarted the virtual machine and resumed the system, there would be no network! I would then close all my open applications - editors, DB, Rails, etc. - and reboot the VM. This was a pain.

Today I finally found a solution for this. It turns out that the problem lies with the networking module being used. In my VM I use VMware Player's vmxnet module. I just removed the module and re-added it, and that worked. Just two simple commands:

sudo modprobe -r vmxnet
sudo modprobe vmxnet

If you are not running Ubuntu as a VM in VMware Player, your network module name will be different. lsmod might help you find out which one you are using.

Saturday, January 21, 2012

Shortcomings of aliased field or attribute names in Mongoid - Part 1

  • The behavior and shortcomings explained below apply to Mongoid version 2.4.0 (released on 5th Jan, 2012) and releases prior to it. A recent commit made on 10 Jan, 2012 fixes all these shortcomings.
  • For those using the affected versions (all Rails 3.0 developers), this monkey patch will address the shortcomings.

In my previous post I wrote about getting a list of aliased field names. From that post it might be evident that dealing with aliased field names is not that straightforward in Mongoid. I am using Mongoid v2.2.4, which is the latest version that works with Rails 3.0. Mongoid v2.3 and later require ActiveModel 3.1 and hence Rails 3.1.

Anyways, aliased field names have these shortcomings:
  1. Accessor methods are defined only with the aliased names and not the actual field names.
  2. Dirty attribute tracking methods are not defined for the aliased names.
  3. attr_protected, if used, should be used with both short and long forms of field names.
Writing about all three in a single post would result in an awfully long post, so I will put the details about each of these in its own post, starting with the first one here.

Accessor methods are defined only with the aliased names and not the actual field names.

Consider the following model definition:
class User
  include Mongoid::Document

  field :fn, as: :first_name
  field :ln, as: :last_name
end
I would have expected the additional accessor methods named 'first_name', 'first_name=', 'last_name' and 'last_name=' to be simple wrapper methods which just forward the calls to the original accessor methods 'fn', 'fn=', 'ln' and 'ln='. But Mongoid just doesn't create the shorter form of the accessor methods at all.
user = User.new
user.respond_to?(:fn)         # Returns false
user.respond_to?(:ln)         # Returns false
user.respond_to?(:first_name) # Returns true
user.respond_to?(:last_name)  # Returns true
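Since the aliasing machinery boils down to a few wrapper methods, the missing short forms can be added by hand with plain aliases. Here is a minimal plain-Ruby sketch of the idea - no Mongoid involved; PlainUser and its attr_accessors are stand-ins I made up for the long-form accessors that Mongoid generates:

```ruby
# Stand-in for the Mongoid model: plain attr_accessors play the role of
# the generated long-form accessors.
class PlainUser
  attr_accessor :first_name, :last_name

  # Define the short forms as thin wrappers that forward to the long
  # forms - what one might have expected Mongoid to generate.
  alias_method :fn,  :first_name
  alias_method :fn=, :first_name=
  alias_method :ln,  :last_name
  alias_method :ln=, :last_name=
end

user = PlainUser.new
user.fn = "Scott"
user.fn               # => "Scott"
user.first_name       # => "Scott"
user.respond_to?(:fn) # => true
```

Because alias_method copies the method, both names always hit the same underlying instance variable, so the two forms can never go out of sync.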
This doesn't appear to be a problem at first sight, because an application developer would use the long form of the methods in the application code. The trouble begins in the dirty tracking methods, which use the actual attribute name and consequently the shorter form of the field names. Take a look at these parts of Mongoid and ActiveModel:
  • Definition of setter method for any attribute - Github link for v2.2.4
    define_method("#{meth}=") do |value|
      write_attribute(name, value)
    end
    Notice that the field name (i.e. the short form) is being passed to write_attribute, which eventually gets passed to ActiveModel's dirty attribute tracking method attribute_will_change!

  • Definition of the ActiveModel method attribute_will_change! -- Github link for v3.0.11
    def attribute_will_change!(attr)
      begin
        value = __send__(attr)
        value = value.duplicable? ? value.clone : value
      rescue TypeError, NoMethodError
      end

      changed_attributes[attr] = value
    end
On line no. 3, the method with the same name as the attribute's short name is invoked with __send__. Since Mongoid doesn't define such methods, this mostly results in a NoMethodError, which is caught and swallowed, and nothing happens. This is comparatively harmless. But if a method with the same name already exists, then that method gets called and a lot of unwanted things can happen. In the case of the User model above, 'fn' just results in a NoMethodError, whereas 'ln' could end up invoking some other already-defined method named 'ln'.

That could result in pretty nasty errors about these 'ln' methods, and you wouldn't even know why they are being called! Now, whether it is a good practice to name your attributes in a way that clashes with already-defined methods is a totally different thing. But just remember that the cause of a weird error like this is probably aliasing.
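This swallowing behaviour is easy to reproduce outside of Rails. Here is a self-contained sketch - a stripped-down imitation of attribute_will_change!, not the real ActiveModel code; FakeModel and its 'ln' method are made up for illustration:

```ruby
# Stripped-down imitation of ActiveModel 3.0's attribute_will_change!.
class FakeModel
  def initialize
    @changed_attributes = {}
  end
  attr_reader :changed_attributes

  def attribute_will_change!(attr)
    begin
      value = __send__(attr)          # calls a method named after the short field name
      value = value.dup rescue value  # simplified from the duplicable?/clone dance
    rescue TypeError, NoMethodError
      # swallowed: 'value' stays nil and no error surfaces
    end
    @changed_attributes[attr] = value
  end

  # Suppose a method named 'ln' already exists for some unrelated purpose...
  def ln
    raise "unexpected side effect!"
  end
end

m = FakeModel.new
m.attribute_will_change!("fn")     # no 'fn' method: NoMethodError swallowed, nil recorded
m.changed_attributes               # => {"fn"=>nil}

begin
  m.attribute_will_change!("ln")   # collides with the existing 'ln' method
rescue RuntimeError => e
  e.message                        # => "unexpected side effect!"
end
```

Note that the RuntimeError from 'ln' is not in the rescued list, so it escapes all the way up - exactly the kind of weird, seemingly unrelated failure described above.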

Wednesday, January 18, 2012

Getting the list of aliased key/attribute names from a Mongoid model

At some point today, when I was writing some model specs for one of my Mongoid models, I required the list of all of the attribute/key names. Mongoid provides a handy "fields" method for this, which returns a hash of key name and Mongoid::Fields::Serializable object pairs. Getting the list of names from that was easy: Model.fields.keys.

This gives the list of the actual key names. The actual key names, in my case, are very short strings (1 to 3 characters) and I have long aliases for those in my models. What I eventually realized was that I wanted the list of the longer aliased names. Looking around the Mongoid code did not turn up any direct method. It turns out that the aliased names result in nothing more than a few additional 'wrapper' methods (like the accessors, dirty methods, etc.) and there is no table/hash maintained anywhere to give the mapping between the actual key names and the aliased ones. So my current guess is that the list of these aliased names is not available directly anywhere.

So I came up with this hackish way of getting that list of aliased names.

p = Post.new
actual_field_names = p.fields.keys
all_field_names = p.methods.collect { |m| m.to_s.match(/_changed\?$/).try(:pre_match) }.compact
aliased_field_names = all_field_names - actual_field_names

As mentioned earlier, this is pretty hackish. If you know of a more straightforward way, do let me know.

Note: I eventually found out that I did not actually need this list of aliased names, and I did not use this in my project. Nevertheless, it works just fine.

Sunday, January 1, 2012

MongoDB concurrency - Global write lock and yielding locks

There has been a lot of hue and cry about MongoDB's global write lock. Quite a few people have said (in blog posts, mailing lists, etc.) that this design ties down MongoDB to a great extent in terms of performance. I too was surprised (actually shocked) when I first read that the whole DB is locked whenever a write happens - i.e. a create or an update. I can't even read a different document during this time. It did not make any sense to me initially. Prior to this revelation I was very pleased to see MongoDB not having transactions, and always thought of that as a feature which avoided locking the DB while running expensive transactions. However, this global lock sent me wondering whether MongoDB is worth using at all! I was under the assumption that the art of "record level locking" had been mastered by database developers. This made MongoDB look like a tool of the stone age.

Well, I was wrong. It turns out that record level locking is not that easy (the reasons for that warrant a different post altogether) and, from what I understand, MongoDB has no plans of implementing such a thing in the near future. However, this doesn't mean the DB will be tied up for long durations (long on some scale) for every write operation. The reason is that MongoDB is designed and implemented differently from other databases, and there are mechanisms in place to avoid delays to a large extent. Here are a couple of things to keep in mind:

MongoDB uses memory mapped files to access its DB files. So a considerable chunk of your data resides in RAM, which results in fast access - fast reads all the time, very fast writes without journaling and pretty fast writes with journaling. This means that for several regular operations, including writes, MongoDB will not hit the disk at all before sending the response. So the global lock is held only for the duration needed to update the record in RAM, which is orders of magnitude faster than writing to disk. The DB is thus locked for a very tiny amount of time, and the global lock is after all not as bad as it sounds at first.

But then the entire database cannot be in RAM. Only a part of it (often referred to as the working set) is in RAM. When a record not present in RAM is requested or updated, MongoDB hits the disk. Oh no, wait.. so does that mean the DB is locked while Mongo reads from or writes to that (slow) disk? Definitely not. This is where the "yield" feature comes in. Since version 2.0, MongoDB will yield the lock if it is going to hit the disk. That is, once Mongo realizes it is going to the disk, it temporarily releases the lock until the data from the disk is loaded and available in RAM.
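The idea can be sketched in a few lines of Ruby. This is a toy illustration of "drop the global lock around the page fault" - nothing like MongoDB's actual internals; the hash standing in for RAM, the sleep standing in for the disk, and all the names are my own:

```ruby
require "thread"

LOCK = Mutex.new
RAM  = { "a" => { "n" => 1 } }  # the in-memory "working set"

# Simulated slow disk read - the part worth yielding around.
def fetch_from_disk(id)
  sleep 0.01
  { "n" => 0 }
end

# Toy write with "yield on page fault": the global lock is held only
# while touching RAM, and released around the disk access.
def update(id)
  LOCK.lock
  unless RAM.key?(id)
    LOCK.unlock             # yield: let other readers/writers in while we fault the page
    doc = fetch_from_disk(id)
    LOCK.lock               # reacquire before installing the page and writing
    RAM[id] = doc
  end
  RAM[id]["n"] += 1         # the actual write: fast, purely in RAM
  LOCK.unlock
end

update("b")
RAM["b"]  # => {"n"=>1}
```

In the unlocked window another writer could race in, which is why the real implementation has to recheck state after reacquiring the lock; the sketch glosses over that to keep the core idea visible.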

Although I would still prefer record level locking in MongoDB, the two features mentioned above are sufficient to reinstate my respect and love for MongoDB. :)