Friday, November 21, 2008

Editing remote files smoothly in Vim on Windows

I have a laptop and a desktop. The desktop runs Ubuntu, by choice, and the laptop has to run Windows, by force. Most of my work happens on the Linux desktop itself, so when I am away from it I log in to my desktop over SSH using PuTTY. That works fine while I am on the corporate LAN, but the trouble starts when I go home and get on the VPN. The sluggish response simply demotivates me and I tend to waste a lot of time, especially when I am editing files with vim, because every keystroke has to travel across the network to my desktop and the response has to be sent back to my laptop. Coding really becomes hell with this.

Recently I got to know that Vim identified this problem quite some time back and has a solution in store: you can open a remote file over SCP. Vim brings the file down to the local system and stores it in some temp location. You edit that temp copy with Vim running on your own machine, so you do not have to wait for keystrokes to be processed by the remote machine. When you write the file, Vim updates the remote copy using SCP.

[Note]: If the remote file is read-only, :w! will not work. The user account used for SCP must have write permission on the file you are editing. Otherwise, obviously, the remote write fails and Vim will promptly report it.

Look at this page for the syntax and more details.
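In short, the syntax looks like this (the user, host, and path here are of course placeholders):

```shell
# Open a remote file over scp; note the double slash before an absolute path.
vim scp://user@remotehost//home/user/project/main.c

# Editing happens on a local temp copy; :w pushes it back over scp.
```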

This is straightforward on a Linux box, as both vim and scp come packed with the OS, sit on the shell's execution path, and everything is set up by default. Things need some extra work on Windows.

The first obvious thing is to install Vim. Then you need an SCP program, and once again PuTTY comes to the rescue: its pscp.exe makes you feel at home even on Windows. Get it here.

To improve this a little more, rename pscp.exe to scp.exe and place it in "C:\Windows\System32\" so that it is picked up from everywhere at the command prompt. Also note that you can use your PuTTY saved sessions directly with pscp.
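Once pscp.exe is renamed to scp.exe and sitting on the PATH, the same syntax works from a Windows command prompt. As I understand it, pscp also resolves PuTTY saved session names in place of user@host, so something like this should work ("mydesktop" being a hypothetical saved session):

```shell
vim scp://mydesktop//home/user/project/main.c
```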

Happy remote VIMming. :-)

Hari Om

Tuesday, October 21, 2008

The ISP cat-and-mouse game, and how CDNs finally benefit from it

Politics and policies are everywhere. They say policies are made to govern us; I say many of them exist out of inertia - resistance to change. The big and powerful want to stay that way, always, and do not want others to get there. This is well known, very much a cliche even. But what has it got to do with ISPs specifically? Here we go:

ISPs are the people who, in a sense, physically own the "Internet network". It is they who actually connect the various computers with physical cables, and yes, that is why we pay them: for getting us connected to the rest of the world. Now, no single ISP has its cables connected to all the computers in the world; in fact, none of them can even boast a majority stake in the market. So when data travels through the internet, it obviously passes through infrastructure laid and maintained by different ISPs. A simple example will illustrate this:

Let's say a user on a Bharti Airtel connection is trying to access, say, the Indian Railways website, which, for the sake of illustration, is hosted on a machine connected to the internet via BSNL. The path of the request from client to server then involves both the Airtel network and the BSNL network. The client sends the request into the Airtel network, which routes the data within its own network to the extent possible. At some point it needs to hand the data over to the BSNL network, which carries it to the destination server. At this cross-over point, Airtel is requesting a service from BSNL: essentially, Airtel is using BSNL's network infrastructure to carry its data. There is no reason for BSNL to do this for free, so obviously it charges Airtel some amount of money. Airtel does not mind paying, as the cost mostly gets translated into user charges. That is not really the issue. The problem arises when BSNL refuses to take the request and Airtel has to find some alternate path, which generally ends up being very, very long. Consider this:

Client -> last Airtel machine (router): m hops
Last Airtel machine -> destination machine in the BSNL network (direct path): 4 hops
Last Airtel machine -> destination machine in the BSNL network (indirect path, via some other ISP or some other cross-over point): 20 hops

So in total the data makes m+4 hops if BSNL takes the request from the last Airtel machine. But when BSNL is experiencing heavy traffic in the region where the Airtel-BSNL crossover happens, it is not willing to accept more data, least of all from a different ISP. So it resorts to one of two techniques:

1. Simply drop the data packets, which results in a bad experience for the end user.
2. Since routing here is based on the least number of hops, the first BSNL server at the crossover point tells the last Airtel machine that the number of hops to the destination machine is actually 25, even though that is totally wrong. As a result the last Airtel machine chooses the indirect path with 20 hops instead. This obviously slows down the internet and again results in a bad experience for the end user.
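The second technique can be sketched as a toy model (this is deliberately simplified - real inter-ISP routing involves policy, not just hop counts - and all names and numbers are the illustrative ones from above):

```python
# Toy model: the last Airtel router picks whichever next hop advertises
# the fewest hops to the destination.
def choose_route(routes):
    """routes: list of (name, advertised_hop_count); pick the shortest."""
    return min(routes, key=lambda r: r[1])

# Honest advertisement: the direct BSNL path wins (4 hops vs 20).
honest = [("bsnl-direct", 4), ("via-other-isp", 20)]
print(choose_route(honest))    # ('bsnl-direct', 4)

# BSNL inflates its metric to 25 to shed traffic: the long path now "wins".
inflated = [("bsnl-direct", 25), ("via-other-isp", 20)]
print(choose_route(inflated))  # ('via-other-isp', 20)
```

The router is not lied to in any detectable way; it simply trusts the advertised metric, which is exactly why the trick works.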

Now you see how policies and profits affect technology. This is, as an electronics professor at my college SJCE put it, TECHNO-POLITICS.

The solution is to make the data available inside each ISP's own network, and that is precisely what CDN (Content Delivery Network) companies do. These companies have a huge number of servers placed in various parts of the world, in most cases inside the data centers of these very ISPs. It is symbiosis: with CDN servers in its data centers, an ISP has a lot of content available in its own network, even though the original website (or content owner) might be using a different ISP. This avoids a lot of requests to other ISPs and thereby reduces costs significantly. In return, the CDN companies get a very sweet deal on rack space for their machines.

Tuesday, October 7, 2008

How to avoid seeing your XP in the old pale Win 98 way

How to Fix Windows XP Theme Problems - My .NET Nuke Blog

For some reason, when I got my laptop, my XP looked like the good old Win 98, with the default grey color theme. I thought it had been optimized for performance rather than for visual effects. Later, when I actually wanted my machine to look beautiful (especially after decorating my FF with the Chromifox theme), I realized that the XP theme was not present in the dropdown at all. My first thought was that it was some corporate limitation, and I cursed the rules and restrictions and all that. But I was sure I would not be the first person with this problem, and hence hit our friend in need - Google. After wading through several pages telling me to download the default XP theme, Luna, or a modified version of it, I finally landed on the page linked at the top of this post. It gives clear steps to get the charm back on your XP machine and enjoy the beauty of today's computers instead of brooding over the past decade's sober ones.

The particular thing I had to do was enable the Themes service. It was turned off and set to manual activation. I changed it to automatic start and voila!! - my machine became beautiful, thereby making my FF a lot more pleasing.
Not just that: almost every application now appears beautiful, including MS-Outlook - yeah, MS-Outlook!!

Friday, September 12, 2008

Chrome multi-process architecture does have heavy costs

Chromium Blog: Multi-process Architecture

The day the Google Chrome comic was released, quite a few people pinged and called me to talk about it and ask what I thought - not that I am an expert on browsers or on evaluating software products; it is just for my slight association with the Mozilla community that I was contacted. When I talked with my roommate about it, I told him what I felt the very moment I read about the multi-process architecture: right from the first reading I was skeptical about the resource utilization if I were to open a hell of a lot of tabs in this browser, as the comic mentioned that each tab is a process. Though the phrase "process per tab" was more for software laymen - the actual scheme is "process per domain", up to a maximum of about 20 processes, after which they are reused. More details are in the blog post linked above.

The reason for my skepticism is the very basic wisdom among computer users that more processes slow down the system. Any system analyst or sysadmin will tell you the same thing when you complain about your computer being slow. From what I know, this is mainly because of increased memory consumption and the paging that may happen when moving processes between main memory and virtual memory. Scheduling also possibly takes a hit as there are more processes. Apart from that, as I understand it, there is always the overhead of maintaining process info for every process.

Considering all this, for a user like me who always has close to 20 tabs open and often goes up to 30, the overhead will be considerable. Creating and killing processes is an overhead too.
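The "process per domain, capped at about 20" scheme from the comic can be sketched in a couple of lines (a rough model only - Chrome's actual process assignment is more involved than this):

```python
# Rough model: each distinct domain gets its own process until a cap is hit,
# after which existing processes are reused for new domains.
def process_count(tab_domains, cap=20):
    return min(len(set(tab_domains)), cap)

# 30 tabs spread over 25 distinct domains: the cap kicks in at 20 processes.
tabs = ["site%d.example" % i for i in range(25)] + ["site0.example"] * 5
print(process_count(tabs))  # 20
```

So even my 30-tab habit tops out at the cap, which is exactly the trade-off the Chromium post describes: bounded process overhead in exchange for giving up full per-tab isolation.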

Also, my task manager will list so many chrome.exe processes..!!!! That is so very irritating for me.

When I told my roommate about my skepticism, he said the Google folks would have thought about that. I agreed with him, and now they have put it on their blog. The post linked at the top mentions that the system might slow down with a lot of processes, hence the upper limit and the later resort to reuse. They call these small caveats, but I am not sure they are small enough. Let's see how things evolve.

Until this is proven as small enough: Happy Single-Process browsing ;-)

Saturday, August 30, 2008

offline cache discussion with campd

[ 2:20 am] <brahmana> hi all,
[ 2:20 am] <brahmana> Looks like my earlier question about offline cache got lost here...
[ 2:21 am] <brahmana> I read the HTML 5 spec and understood that the cache will be versioned and hence multiple versions of the same cached element will be present on the client's disk. Is that so?
[ 2:22 am] <campd> yeah
[ 2:22 am] <campd> as of 3.1
[ 2:22 am] <campd> in 3.0, there's only one version
[ 2:23 am] <brahmana> ok..
[ 2:23 am] <campd> brahmana: though they won't stick around long
[ 2:23 am] <brahmana> ok..
[ 2:23 am] <campd> brahmana: when you visit a page, it'll download a new version. Once any page using he old version is navigated away from, it is cleaned up
[ 2:25 am] <brahmana> campd, So everytime the user goes online and visits a website which has offline cache, the cache is refreshed provided no page is using the old cache.
[ 2:26 am] <campd> brahmana: sorta
[ 2:26 am] <campd> brahmana: every time they visit an offline cached website
[ 2:26 am] <campd> brahmana: it will check for a new version of the cache manifest
[ 2:26 am] <brahmana> ok..
[ 2:27 am] <campd> brahmana: if there's a new manifest, a new version of the cache will be created and fetched
[ 2:27 am] <brahmana> campd, ok.. answers my question fully..
[ 2:27 am] <campd> cool
[ 2:27 am] <brahmana> campd, However I have another question..
[ 2:27 am] <campd> ok
[ 2:28 am] <brahmana> Now is this cache application specific? As in if a image with the same src is referenced by two websites, the image will be cached separately for each webapp?
[ 2:28 am] <campd> yes.
[ 2:29 am] <brahmana> ok..
[ 2:30 am] <brahmana> campd, Will this offline cache in anyway affect the regular browsing when the user is online?
[ 2:30 am] <campd> if they're online, browsing an offline app, it will be loaded from the cache first
[ 2:30 am] <campd> it won't affect online browsing of non-offline pages
[ 2:31 am] <brahmana> ok..
[ 2:31 am] <campd> so if http://www.foo.com/offline.html is an offline app that references http://www.bar.com/another.html
[ 2:31 am] <campd> going to http://www.bar.com/another.html will NOT load it from the offline cache
[ 2:31 am] <campd> but going to http://www.foo.com/offline.html WILL be loaded from the offline cache
[ 2:32 am] <brahmana> okay..
[ 2:33 am] <brahmana> campd, Regarding the local storage, Can it be looked at as an extended form of what currently is cookie?
[ 2:34 am] <campd> kinda, yeah
[ 2:34 am] <brahmana> Is there any limit on the amount of data that each web-app gets on this local storage?
[ 2:35 am] <campd> yep
[ 2:35 am] <brahmana> Because the spec says that the web-app can use this to store user created _documents_
[ 2:35 am] <campd> 5 megs for the etld+1
[ 2:35 am] <campd> if the domain has the offline-app permission it gets more, but I forget the exact number
[ 2:35 am] <mconnor> campd: is that right? I thought I remembered some wacky combination thing
[ 2:36 am] <brahmana> oh.. ok.. thats pretty big space..
[ 2:36 am] <campd> (which I assume is the wacky combination mconnor's referring to ;))
[ 2:36 am] <mconnor> no
[ 2:36 am] <mconnor> it was something like "foo.bar.com can have 3 MB, and bar.com can have 2 MB" or something
[ 2:36 am] <mconnor> in whatever combination
[ 2:37 am] <mconnor> maybe that was the spec that got deprecated?
[ 2:37 am] <campd> I think right now it's just "5 for the whole etld"
[ 2:37 am] <campd> err, etld+1
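For reference, the versioned cache campd describes above is driven by a cache manifest file: the page opts in with <html manifest="offline.manifest">, and it is a change to the manifest (even to a comment) that triggers a new cache version. A minimal sketch per the HTML 5 draft of the time, with illustrative file names:

```
CACHE MANIFEST
# v2 -- editing this comment is enough to make clients fetch a new cache version

index.html
app.js
logo.png

NETWORK:
/live/
```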

Tuesday, August 26, 2008

Building and embedding SpiderMonkey

If you do not already know, SpiderMonkey is the JavaScript engine of the Mozilla web browser. The interesting part about this JS engine is that you can use it in your own application and execute JS from your application. I could talk a lot about the uses and advantages of such an embeddable JS engine, but that's for another post. Here I will tell you how to build SpiderMonkey, how to embed it in your application, and finally how to get your application running with the JS engine.

Actually the first two steps are very well explained in these MDC pages:


  • The first one tells you how to get SpiderMonkey and build it. It involves downloading the source and running "make -f Makefile.ref", and that's the end of the build story. You will have the engine binary, both in the form of a reusable library (dynamic and static) and as a ready-to-use JS execution shell. The reusable library is named libjs.so (dynamic) or libjs.a (static). The interactive JS shell is an executable named js. More about using the shell here.

  • The second one tells you how to write a simple application using this reusable JS engine library. It also explains the basic types used in the engine and the essential terminology. There is also a boilerplate program which you can use to test your first embedding attempt.

With all this great documentation, I was still stuck at the point of compiling my application against the JS library. The build does not produce any directory named "sdk" or anything similar. It just gives you a build directory with a static library, a dynamic library, and an executable. There are also a bunch of object files (.o files), but they are not of much use. Only two .h files are copied to the object directory, and those do not include the main "jsapi.h" file.

As a result, you will end up with a hell of a lot of compilation errors if you just try to build your application. So there are a couple of steps, probably very evident ones, that you need to take before you build your app.

  1. Put all the header (.h) files from the source (src) directory on the include path when building your app. The best way is to create a spidermonkey folder under the includes directory of your app and provide that directory as an include path to the compiler at build time.
  2. Copy libjs.so to the lib directory of your application and pass it as a linker option (-ljs).
  3. The various JS types map to system types, depending on the OS being used, and hence sit under #ifdefs. If you do not define an OS, you will not get definitions for several types and you will end up with compilation errors. To avoid this, manually define the OS before you include the first SpiderMonkey header (typically jsapi.h). Defining is in the usual way: #define XP_UNIX // -- for Linux systems.
With this you should be able to build your application and run it. You now have a program of yours whose input can be JavaScript; it will compile and run that JS and give you the output.
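The three steps above boil down to a compile line roughly like this (the paths are illustrative - adjust them to wherever you put the headers and the library):

```shell
# assumes headers were copied to ./includes/spidermonkey and libjs.so to ./lib
g++ -I./includes/spidermonkey -DXP_UNIX \
    -o myapp myapp.cpp \
    -L./lib -ljs

# at run time the dynamic linker must also be able to find libjs.so:
LD_LIBRARY_PATH=./lib ./myapp
```

Passing -DXP_UNIX on the command line is equivalent to the #define in step 3, as long as it takes effect before jsapi.h is included.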

Happy embedding. :-)

Amazing pitfall, you will laugh

This is one of those funny videos at which I really laughed... Watch it and let me know if you did not laugh...

Especially, the lungi is the added attraction.. :D

Sunday, August 17, 2008

How does libffi actually work?

I came across libffi when I thought of working on js-ctypes. Though my contribution remained very close to nil, I got to know about this fantastic thing called libffi. But I did not understand how it actually worked when I first read about it. I just got a high-level overview and assumed that the library abstracts away several things I need not worry about when making calls across different programming languages. I simply followed the usage instructions and started looking at the js-ctypes code, and that was sufficient to understand it. But today, when I was once again going through the README, I came across one line that made things a little clearer. The first paragraph of "What is libffi?" tells it all: it is calling conventions that this library exploits. I had written a post about calling conventions some time back. As the README says, the calling convention is the guiding light for the compiler: when generating binary code, the compiler assumes that the arguments being passed to a function will be available in some agreed place, and it likewise knows the place where the return value will be kept, so that the code in the calling function knows where to look for it.

Now, we know that the ultimate binary code we get after compilation is machine code - a set of machine-supported instructions. These instructions are certainly not as sophisticated as C constructs like functions. So what does a function translate to? Nothing but a jump of the IP (instruction pointer) to the address where the binary code of that function lives. Inside that code, the arguments passed in are accessed by referencing memory locations. During compilation the compiler obviously cannot know the precise runtime memory locations, yet it has to emit some addressing scheme when generating the code. How does it decide? This is where calling conventions come into the picture: the compiler follows these standard steps. Since it is the compiler that generates both the code for calling a function (where arguments are sent) and the code of the called function (where those arguments are received), it knows where it has put the arguments and can generate code to use the data in those locations. From this point of view, the main (or is it the only?) condition for the binary code of a function to work properly is to have its arguments in the memory locations it expects them in, and, at the end, to place the return value back in the right location. From what I have understood so far, it is exactly this point that libffi exploits.

When we forward calls from an interpreted language, like JS, to binary code generated from a compiled language, like C, there are two things to be done:

1. As discussed above, whatever arguments are passed from JS must be placed in the right locations, and later the return value must be taken and given back to JS.
2. Type conversions - the types used by JS are not understood by C, so someone has to rise to the occasion and map the JS types to their C counterparts and vice versa.

The first of these is taken care of by libffi. The second is very context specific and depends entirely on the interpreted language; it makes no sense for one library to carry a catalog of type converters for every interpreted language on earth. So the two steps were separated into two interacting layers between the interpreted language and the binary code: libffi takes care of the calling-convention part, making sure the binary code runs and hands back its result, while type conversion is the job of another layer specific to the interpreted language that wants to call into the binary code. Hence the different type-converting layers for different interpreted languages: ctypes for Python, js-ctypes for JS (and probably more exist). With this renewed and clearer understanding, I hope to actually contribute to js-ctypes.
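As a concrete illustration of that layering, here is Python's ctypes (mentioned above) calling into compiled C code. The argtypes/restype declarations are the type-conversion layer; underneath, libffi handles the calling convention. Note that "libm.so.6" is the glibc math library, so the name is Linux-specific:

```python
from ctypes import CDLL, c_double

# Load the C math library (glibc soname; platform-specific).
libm = CDLL("libm.so.6")

# Declare the C types so ctypes can convert Python floats <-> C doubles.
# This is exactly the language-specific "type conversion" layer; libffi
# then places the converted arguments where the binary code expects them.
libm.pow.argtypes = [c_double, c_double]
libm.pow.restype = c_double

print(libm.pow(2.0, 10.0))  # 1024.0
```

Without the argtypes/restype declarations the call would pass the wrong machine types, which is a nice demonstration that both layers are needed, not just the calling-convention one.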

Happy calling (with conventions) ;-)

Thursday, August 14, 2008

Something good even during these rough patches in the market

Value Research: The Complete Guide to Mutual Funds

This website, valueresearchonline, comes up with analysis that is new to me almost every time. This article particularly struck me because the tool it mentions, and the message it drives home, go beyond the markets.

If we can have a method to take advantage of the current volatility, irrespective of whether the markets go down or up, then we should remind ourselves of the good old statement: "Where there is a will there is a way".

Just read this up and you will feel better after the big downfall of yesterday and today. :-)

Kaminsky DNS Vulnerability

An Illustrated Guide to the Kaminsky DNS Vulnerability

With twitter, I have almost stopped blogging. But this was something that needed more than 140 chars. So here is another blog post.
This thing has been making the rounds in the web security world for quite some time now; it even appeared in the Times of India recently. The article there mentioned the sudden patching that nameservers were going through. This security hole is indeed that serious. With this exploit, an attacker can take over a complete domain and become the authoritative nameserver for it, so any request to resolve a name in the hijacked domain can be directed to the attacker's machine.

Now imagine this happening to the most popular online banking website. Yes, things can be very creepy on the internet.

To clearly understand how this attack happens and how it can be prevented, read the illustrated guide linked at the beginning of this post. It doesn't matter if you don't know how DNS works; the guide explains everything satisfactorily.

Happy Resolving ;-)

Monday, July 28, 2008

Basic sciences coming alive in applets

Math, Physics, and Engineering Applets

As children we always loved the science experiments with funky, colorful stuff, not the ones that involved a lot of thinking and imagination and ultimately just gave us a number or a colorless, water-like liquid. Likewise, mathematics with plain numbers and formulae has always been just for geeks, yet almost everyone loves an activity built on a math concept - something that does a simple trick to create wonder. Visual things like this always appeal to children, and they can drive in the actual concepts much better than the regular blackboard with white letters. I was searching for a website with a Java applet and came across this interesting page, which has some nice applets for basic science concepts. Look at them and I am sure you will feel like a high-school - or at most an engineering - student again.

Happy appleting, ;-)

And I became the nomad...!!!

Bachelors -- a very bad title for people with a career line like mine. It is that state where they are doomed to all sorts of miseries, the only good parts being the freedom - in every aspect of life - and the feel-good factor of this being the last stage in life where we stay with friends. But seriously, apart from that it is all crap, totally. And just for the record, by "my career line" I mean a typical average student, scoring some OK-level marks, getting a job in some software company in BENGALOORU, and starting this doomed life by first looking for a place to stay. It is all good in the beginning, when we go out for treats and parties often and don't really lead a REGULAR life. But once things cool down, once we are no longer FRESHERS, that's when the trouble starts. We no longer have friends calling us for parties on the occasion of joining their first job, and sometime later even the first-salary treats get over. Then we are just the NORMAL SOFTWARE ENGINEER. And don't even get me started on what that means. In short, as mentioned before, it is this doomed life.

I personally escaped this for nearly a year. Luckily my doddappa (uncle) was working in Bangalore and I got a chance to stay with him. But this was supposed to be a temporary arrangement; I was supposed to stay with a few of my closest friends from college in a rented house in Indiranagar. That location was not good for me, though, as there was no direct transportation to my office: I had to take two city buses, even though the distance was just 7 km. Also, I got so used to the easy life at my uncle's place that I was not really willing to move out and start staying on my own. Man, it really was heaven when I compared myself with so many other colleagues of mine who were the "Bachelors". But this obviously had to change. I could not continue to stay there forever; at one point or another I had to move out and face this partial hell.

There were a few triggers for this, in the form of Doddappa's transfer, or the family moving to a different house at the north end of the city (FYI, my office is in southern Bangalore), and some more. Somehow those just passed by and I stayed there for one full year. But I recently joined Akamai, and these people are moving further south; the new office is even farther away, and I might need close to 2 hours for a one-way commute, which is certainly insane. Totally insane. So I had to move to a place nearer my new office.

There were different plans and different ideas and, as usual, only one worked out. I decided to stay with my college friend Abhijeet aka Kolya. As he was in a hurry to find a house (and I am lazy), we could not roam around a lot, check out lots of houses, and find an awesome deal; we had to settle for one of the initial houses. It is pretty good, but I have this feeling that the rent we are paying is pretty high. Anyway, I was sort of under constraints. The rent is not the point here; the point is that I finally moved out and plunged into this "DOOMED LIFE". Though the shifting - and just the first phase of it at that - happened only today, I am already feeling like a NOMAD. At the end of the day, when I see the office getting empty, I get the thought of going home. But then again, there is a sort of reluctance. I don't know why, but I become averse to going home. For me it is still a friend's place, not yet my home. I try to reason it out and find a valid reason to go home, and find none. As of now my new home is like some 'yet another place'. Earlier I had this pull: I am going home, where there are people waiting for me, and we will have food together, and probably watch TV or just chat or have some gyaan transfer later. All of that can happen at the new place too, relatives replaced by friend(s). But that is yet to sink in. It will take some time, probably a little more in my case.

Whatever it may be, as of now, I am a NOMAD --- I have become the nomad.

Wednesday, July 23, 2008

Discussion with biesi, bz and gavin about channel and tabId in #developers

brahmana wonders if biesi was able to examine the UML diagrams

[ 8:24 pm] <biesi> brahmana, sorry not yet
[ 8:24 pm] <brahmana> thought so..
[ 8:24 pm] <brahmana> biesi, anyways another quick question.
[ 8:25 pm] <biesi> brahmana, yes?
[ 8:26 pm] <brahmana> biesi, Can you please have a look at this one: http://wiki.mozilla.org/images/a/a0/Brahmana_URI_Loading_DocShell_Code.jpg -- Which would be the ideal point in the sequence to associate the tabId to the channel created.
[ 8:26 pm] <brahmana> ?

[ 8:26 pm] <brahmana> biesi, I want to know where I will have access to both of them.
[ 8:27 pm] <timeless> brahmana: pretty

[ 8:27 pm] <biesi> brahmana, um

[ 8:28 pm] <biesi> brahmana, nothing in that diagram has access to both :-)
[ 8:28 pm] <brahmana> timeless, thank you.. more here: http://wiki.mozilla.org/User:Brahmana/Netwerk_Docs (just in case)

[ 8:29 pm] <brahmana> biesi, ok.. so how much prior to webBrowser will I have to go to get a tabID ?
[ 8:29 pm] <biesi> brahmana, tabbrowser.xml

[ 8:29 pm] <biesi> its loadURI function or something like that

[ 8:29 pm] <biesi> brahmana, of course some things call loadURI on the web navigation directly...

[ 8:29 pm] <brahmana> biesi, yeah.. thats how I created that sequence diagram..

[ 8:30 pm] <brahmana> biesi, Can't I get hold of the tabID in C++ ?
[ 8:30 pm] <biesi> C++ has no concept of "tab"
[ 8:31 pm] <brahmana> But the <browser> present in each tab corresponds to one nsIWebBrowser, isn't it?
[ 8:31 pm] <brahmana> <browser> == the xul browser element

[ 8:35 pm] <biesi> brahmana, no
[ 8:35 pm] <biesi> there is no nsIWebBrowser in firefox
[ 8:36 pm] <biesi> that's only used for embedding
[ 8:36 pm] <timeless> unfortunately we don't use the same apis everywhere :(
[ 8:37 pm] <brahmana> oh man..
[ 8:37 pm] <biesi> brahmana, there is one docshell per tab
[ 8:37 pm] <brahmana> then the starting of my sequence diagram is wrong..
[ 8:37 pm] <biesi> if that helps you
[ 8:37 pm] <brahmana> yeah.. I am aware of that..
[ 8:38 pm] <brahmana> and I thought it was nsWebBrowser that held a reference to a docShell and made calls on the docShell
[ 8:38 pm] <biesi> ah, no
[ 8:38 pm] <brahmana> But as it appears firefox does not use nsWebBrowser itself...
[ 8:38 pm] <biesi> the browser holds the docshell directly, I believe
[ 8:39 pm] <brahmana> you mean the xul browser ?
[ 8:39 pm] <biesi> yeah
[ 8:39 pm] <brahmana> What C++ object does that map to?
[ 8:39 pm] <brahmana> something under widgets?
[ 8:39 pm] <biesi> the xul browser?
[ 8:39 pm] <brahmana> yeah
[ 8:40 pm] <biesi> um
[ 8:40 pm] <biesi> some xul magic
[ 8:40 pm] <biesi> nsXULElement.cpp perhaps
[ 8:40 pm] <brahmana> oh.. let me see
[ 8:40 pm] <biesi> via the boxObject maybe?
[ 8:41 pm] <biesi> but note that the <browser> is mostly an XBL thingy
[ 8:41 pm] <brahmana> oh man.. this is getting heavily complex..
[ 8:41 pm] <timeless> it really is

[ 8:43 pm] <brahmana> Now the JS call: browser.loadURI() will be a call on the corresponding nsXULElement object, which actually holds a reference to the docShell. Is that right?
[ 8:45 pm] * brahmana requests to put aside the XPCOM stuff that happens in the above sequence..
[ 8:45 pm] <brahmana> sorry, the XPConnect stuff..
[ 8:46 pm] <gavin|> yes
[ 8:46 pm] <gavin|> though the nsXULElement isn't really involved
[ 8:46 pm] <gavin|> apart from being associated with the JS object that implements the XBL methods
[ 8:48 pm] <brahmana> gavin|, Can you please elaborate a little on your last statement..
[ 8:48 pm] <brahmana> ?
[ 8:49 pm] <brahmana> Or is there a doc that I can read up to orient myself a little before asking lots of questions here?
[ 8:49 pm] <gavin|> nsXULElement itself doesn't have anything to do with the XBL implemented methods
[ 8:50 pm] <gavin|> it's just a "container"
[ 8:51 pm] <gavin|> it's not really useful to say that you're interacting with a nsXULElement, because you're really interacting with an XBL bound node
[ 8:51 pm] <gavin|> and the XBL <browser> methods implemented in JS are what matters
[ 8:51 pm] <gavin|> not the nsXULElement class methods
[ 8:52 pm] <brahmana> oh.. ok. so the browser.loadURI() is (most probably) implemented in the JS itself. This JS object holds a reference to the docShell directly and thats how the calls are routed -- makes sense?
[ 8:53 pm] <gavin|> that's about right
[ 8:55 pm] <brahmana> now this XBL/JS implementation is present in tabbrowser.xml?
[ 8:56 pm] <gavin|> and browser.xml, yeah
[ 8:56 pm] <gavin|> the tabbrowser contains <browser>s

[ 8:57 pm] <brahmana> This is the one I should be looking at: http://mxr.mozilla.org/mozilla-central/source/toolkit/content/widgets/browser.xml , right?

[ 8:58 pm] <gavin|> yes

[ 9:04 pm] <brahmana> gavin|, ok.. i figured the exit point to docShell: http://mxr.mozilla.org/mozilla-central/source/toolkit/content/widgets/browser.xml#186 --
[ 9:04 pm] <brahmana> Now I want to get the tabId in which this browser object is, how would I achieve it?

[ 9:04 pm] <gavin|> brahmana: not sure what you mean by "tabId"
[ 9:05 pm] <brahmana> tabIndex, the index of the tab in the tabContainer
[ 9:05 pm] <gavin|> you have a reference to a <browser>, and want to find which tab it's in?
[ 9:06 pm] <brahmana> yes..
[ 9:06 pm] <brahmana> well actually I am inside the browser's definition itself..
[ 9:06 pm] <gavin|> I guess you need to loop through tabs and compare against their .linkedBrowser
[ 9:06 pm] <gavin|> don't think there's a utility method to do that
[ 9:07 pm] <brahmana> ok.. let me see how I can accomplish that..
[ 9:08 pm] <brahmana> I am thinking this... for(i=0; i < this.parentNode.browsers.length; ++i) if(this.parentNode.browsers[i] == this) return i
[ 9:09 pm] <brahmana> that must work, isn't it?
[ 9:09 pm] <gavin|> probably
[ 9:09 pm] <gavin|> assuming this.parentNode is the tabbrowser
[ 9:09 pm] <brahmana> yeah.. I verified that..
[ 9:09 pm] <gavin|> though the "browsers" getter builds an array by looping through tabs
[ 9:09 pm] <gavin|> so it would best be avoided
[ 9:09 pm] <gavin|> to avoid having to loop twice
[ 9:10 pm] <brahmana> oh.. instead we directly loop through the tabs..
[ 9:10 pm] <gavin|> yeah
[ 9:10 pm] <gavin|> http://mxr.mozilla.org/seamonkey/source/browser/base/content/tabbrowser.xml#1693
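(Aside: the lookup gavin describes boils down to an identity scan over the tabs. Here is a plain-JS sketch of it; the tab and browser objects are mocked stand-ins, since the real XUL `<tab>`/`<browser>` nodes only exist inside Firefox.)

```javascript
// Find the index of a <browser> by scanning the tabs directly,
// rather than through the `browsers` getter (which itself builds
// an array by looping over the tabs, so using it would loop twice).
function indexOfBrowser(tabbrowser, browser) {
  const tabs = tabbrowser.tabContainer.childNodes;
  for (let i = 0; i < tabs.length; ++i) {
    if (tabs[i].linkedBrowser === browser) {
      return i;
    }
  }
  return -1; // not hosted in this tabbrowser
}

// Mock stand-ins for the XUL objects:
const b0 = {}, b1 = {};
const gBrowser = {
  tabContainer: {
    childNodes: [{ linkedBrowser: b0 }, { linkedBrowser: b1 }],
  },
};

console.log(indexOfBrowser(gBrowser, b1)); // prints 1
```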

[ 9:11 pm] <brahmana> And if i change the XBL now, is there anything that I need to do during the build?
[ 9:11 pm] <gavin|> you just need to rebuild browser/
[ 9:11 pm] <gavin|> (or toolkit/ if you're touching browser.xml
[ 9:12 pm] <gavin|> why are you changing them, though?
[ 9:12 pm] <brahmana> ok.. thats great.. i can even go for a full build.. :-)

[ 9:12 pm] <brahmana> I want to associate every channel with the tab it is working for..

[ 9:14 pm] <gavin|> brahmana: is there a bug # for this?
[ 9:15 pm] <brahmana> gavin|, oh no.. there isn't.. I saw similar thing in Firebug, Cookie Manager and wanted to explore on that..
[ 9:15 pm] <brahmana> if by any means this is a desirable thing we can have a bug...

[ 9:24 pm] <brahmana> Is the code under xpfe/ still used?
[ 9:25 pm] <gavin|> some of it is
[ 9:25 pm] <brahmana> the window mediator?
[ 9:25 pm] <gavin|> yes

[ 9:26 pm] <brahmana> Along with the tabIndex I would also require some sort of window Id i guess, as tabIndices are not unique across browser windows, isn't it?
[ 9:26 pm] <gavin|> right
[ 9:27 pm] <bz> tabindices need not be unique within a single window either
[ 9:28 pm] <gavin|> er, why wouldn't they be?
[ 9:28 pm] <brahmana> oh.. well if we do not open and close stuff, they should be right?
[ 9:28 pm] <bz> because they're under the control of the page author?
[ 9:28 pm] <bz> And nothing prevents an HTML author from sticking tabindex="2" on every single node in the document
[ 9:28 pm] <gavin|> we're talking browser tabs
[ 9:28 pm] <bz> oh
[ 9:29 pm] <bz> nevermind, then

[ 9:31 pm] <brahmana> gavin|, And about the bug for the stuff I am asking, I assume this isn't really a desired feature, is it?

[ 9:31 pm] <gavin|> brahmana: I still don't really know what the feature is

[ 9:35 pm] <brahmana> gavin|, To observe requests for one tab in observerservice ... there is requirement for coupling tab index with the http channel....

[ 9:35 pm] <brahmana> And that is what I am trying to accomplish, associate the tabIndex with the channel..

[ 9:37 pm] <bz> brahmana: er.... you know the docshell involved in both places, right?

[ 9:39 pm] <brahmana> bz, yeah.. I was talking to gavin and others about the way browser interacts with docShell. But I did not fully understand your question.
[ 9:39 pm] <brahmana> browser as in the xul browser element.

[ 9:40 pm] <bz> brahmana: docshell is the guts of a browser
[ 9:40 pm] <bz> brahmana: the part that actually holds the web page, etc

[ 9:41 pm] <brahmana> bz, yeah.. that was evident from this one: http://www.mozilla.org/projects/embedding/docshell.html and also from the length of nsDocShell.cpp file..

[ 9:43 pm] <bz> ok
[ 9:43 pm] <bz> so you can get from a tab to a docshell
[ 9:43 pm] <bz> you can get from the channel to a docshell (usually)
[ 9:43 pm] <bz> then compare the two
[ 9:43 pm] <bz> for images you're out of luck
[ 9:45 pm] <biesi> brahmana, there can be a <browser> that's not part of a <tabbrowser>
[ 9:46 pm] <brahmana> bz, Whats special about the images?
[ 9:46 pm] <biesi> what's NOT special about images
[ 9:46 pm] <bz> brahmana: they don't so much follow necko rules
[ 9:46 pm] <brahmana> biesi, Are you referring to a situation with single tab?

[ 9:47 pm] <brahmana> bz, biesi yeah I had got the same statement when discussing about the request end notifications..
[ 9:47 pm] <biesi> brahmana, no, I'm referring to extensions or mailnews or whatever that doesn't support tabs
[ 9:47 pm] <brahmana> biesi, oh well.. that is probably not a problem. I don't think we would go beyond firefox.
[ 9:48 pm] <biesi> ok
[ 9:48 pm] <biesi> I have no idea what you're trying to do
[ 9:48 pm] * bz points to "extensions"
[ 9:48 pm] <bz> anyway
[ 9:48 pm] <biesi> I'm just saying, if you want to change browser.xml in mozilla.org
[ 9:48 pm] <biesi> 's repository, you can't assume that there's a tabbrowser
[ 9:49 pm] <brahmana> point fully accepted.. :-)
[ 9:50 pm] <brahmana> bz, I am not sure about getting to a docShell from a channel. Is there is straight forward way?
[ 9:51 pm] <brahmana> s/there is/there a

[ 9:54 pm] <brahmana> moreover the docShell will have reference only to one channel and I assume that is channel corresponding to the base document, i.e the main document request. Is that so?
[ 9:55 pm] <bz> brahmana: the docshell is the channel's loadgroup's notification callbacks
[ 9:55 pm] <bz> brahmana: generally
[ 9:55 pm] <bz> brahmana: we're talking about a channel having a reference to the docshell, not the other way around

[ 9:57 pm] <biesi> the load group has the channels/requests for all the loads
[ 9:57 pm] <biesi> except for iframes of course

[10:00 pm] <brahmana> biesi, bz.. a little out of the current discussion.. When the http-on-modify request is fired, will the connection be already set up? Is there a possibility to change the request URL in that event's listener?
[10:01 pm] <bz> "change" in what sense?
[10:01 pm] <biesi> the connection is not set up yet
[10:01 pm] <biesi> but you can't change the URL
[10:01 pm] * bz really wishes URIs were immutable and that channels' URI were readonly so people wouldn't ask questions like this
[10:01 pm] <biesi> the channel's URI IS readonly
[10:01 pm] <brahmana> ok.. :-)
[10:01 pm] <bz> ah, good

[10:01 pm] <bz> well, that URIs were immutable, then
[10:02 pm] <bz> I thought we were gonna do that for http sometime
[10:02 pm] <bz> flip the bit
[10:02 pm] * brahmana decides not to ask such questions to be compliant with bz's wishes..
[10:02 pm] <bz> (and see what breaks)
[10:02 pm] <biesi> oh we aren't? that sucks :/
[10:02 pm] <bz> well
[10:02 pm] <bz> we're not _yet_
[10:02 pm] <bz> I also wish URIs were immutable by default instead of the nsIMutable mess..
[10:02 pm] <bz> but then again, I could use a pony too

[10:02 pm] <bz> Or better yet, a kayak
[10:02 pm] <bz> it is?
[10:03 pm] * bz looks in his "to check in" folder
[10:03 pm] <biesi> there's so much stuff that I'd like to change if I could...
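(Aside: the chain bz describes above, from channel to loadGroup to notificationCallbacks to docshell, sketched with mocked plain objects. Real code would QueryInterface the callbacks and null-check at every step; this only shows the shape of the walk.)

```javascript
// Mock stand-ins for the necko/docshell objects:
const docShell = { name: "docshell-for-tab-0" };
const loadGroup = { notificationCallbacks: docShell, requests: [] };
const channel = { URI: "http://example.org/", loadGroup: loadGroup };

// "the docshell is the channel's loadgroup's notification callbacks"
// (generally -- images and some other loads don't follow this rule).
function docShellForChannel(ch) {
  return ch.loadGroup ? ch.loadGroup.notificationCallbacks : null;
}

console.log(docShellForChannel(channel).name); // prints docshell-for-tab-0
```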

Tuesday, July 22, 2008

AwesomeBar for awesome browser

SmartBar to AwesomeBar | edilee

Being a regular on moznet, I had heard a lot about the glamorous new location bar coming to FF3 well before the Beta came out. Now that it is out, it has indeed made waves among the users I have interacted with. The most obvious feature, and the one people notice first, is that it matches any part of the URL, or even the title of the page, instead of the plain old way of matching only the first part of the URL, which would typically be the hostname. This is cool, of course, since remembering titles is a more viable idea than remembering full URLs.

This is what I knew. Or rather, this is all I knew. But the AwesomeBar has more to offer. First of all, as expected, it ranks pages based on how frequently you visit them and lists the more frequent ones before the less frequent ones. So we are now a little more intelligent than plain pattern matching. Isn't that cool? Of course.

But wait, there is more. The AwesomeBar gets "awesomer" by being literally "intelligent". Mardak (Edward Lee, who helped me with my resumable downloads) has taught this bar to learn your usage patterns all by itself. The AwesomeBar now has something called "adaptive learning": it identifies which URL you select for which keyword you entered and ranks it higher, instead of relying on visit frequency alone. Mardak's post (linked at the top) talks about this in detail. Go ahead and read it. The best part is that it has real examples and pictures... Pictures man, pictures.. :-)

Happy Awesoming.. :-)


Monday, July 21, 2008

A simple and practical green way of life

Read this: greenbook_public

We have all been hearing about global warming and the ill effects of pollution, and how we are depleting our energy sources so fast that we will soon find ourselves in a very grave situation, hunting for energy resources. At the same time there are people who are not just cribbing about the problems but coming up with solutions. People like Al Gore have taken a lot of trouble to drive home the message that we are "creating" a beast which will ultimately eat us. They have suggested alternatives and big schemes to be implemented by governments to bring the situation under control. By their very nature, government policies and schemes take their own time to come into place, and a little more time to actually deliver fruit. We need to be patient about that.

In the meantime there is also a need for some quick action. And again, almost by definition, quick actions are generally smaller ones and mostly happen at the individual level. You probably already know a lot of them, and may even practice some. The link at the beginning of this post takes you to a green book with simple practices that everyone can perform. Even if not all of them apply to you, you will surely find a number that you can relate to.

So do read all the pages. The book is nicely crafted and beautiful. Read it at least for the sake of looking at a beautiful creation, if not for the valuable suggestions inside.


Aren't you proud to be a programmer?!

Read this: Lucky to be a Programmer : Gustavo Duarte

This post was circulated in my organization and it is really awesome. It is something every programmer/software engineer must read, and they should really be proud of the work they do. Of course everyone should be proud of their work, if they are really doing what they love. That is a totally different argument, about following one's passion versus doing something else for money; maybe someday I will write about it. As it is, the post linked here is pretty long and I do not want readers to be tired by the time they finish my "introduction" ;-).

Go ahead and read that one.


Tuesday, July 15, 2008

Doxygen - Yet another technical wonder

I am sure every programmer will agree with me when I say every line of code deserves proper documentation. I have felt this innumerable times when trying to understand different code-bases, either at work or in open source projects.

Recently I needed this documentation very badly. More than documentation, I needed an overview of the classes in the particular code I was looking at. I was lacking an IDE, so I was looking for a tool that would give me a list of public members, private members, data and methods, and several such things for a class, presented in an organized and categorized manner. I tried light IDEs and fake IDEs (like Notepad++, which just lists the functions defined), but none fit my needs. It was at this point that timelyx (in #foxymonkies) suggested doxygen.

I had heard my tech lead at NI talk about using doxygen to generate documentation for several code bases there. At that time I thought it was a tool that generates huge amounts of data, of which only a small part is useful or actually read by others (users, mainly). I was totally wrong. Doxygen can really do wonders. The best part is that it comes with a neat installer which puts the necessary components in the required locations. Then all you need is your source (in my case, the C++ header file containing the class definition). Run the doxygen wizard, select what needs to be generated, and provide the source file. In less than a minute you have a huge set of files generated in a folder (huge considering that the input is just one C++ header file with a class definition). To simplify things, an index.html is generated. Open that up and you will be amazed at how things are presented: there are tabs for the different classes involved, the different types of members, and so on. Everything is fabulously linked and presented. You get almost every detail about the class: what types of members it has, what the functions return and what they expect. Everything, all in one single webpage.

I am now a big fan of doxygen. It is indeed "oxygen for docs". Go ahead and try it out. It is available here.

Happy doxygening. :-)


A web-wizard to create XPCOM components in JS

JavaScript XPCOM Component Wizard

As I might have mentioned in my earlier posts, creating XPCOM components can sometimes be hell. (So wait for js-ctypes to come on board.) Mozilla folks know about this pain and have published sample code snippets to help you figure out what is necessary and what is optional. But even building your component from these snippets can be cumbersome. So one Mozillian, ted, went a step further and created a web based wizard that generates a JS component for you. You just enter a few parameters and a skeletal component is ready for use. It saves a lot of time.
Just visit the link at the top and try your hands at it.

Happy Componenting. ;-)


Tuesday, July 8, 2008

A search engine nearing extinction

Ask.com turns over its online mapping business to Microsoft - BloggingStocks

I am not sure how many companies have gone out of business since Google started creating waves in the internet industry, specifically in search. Today I happened to come across this news. Ask.com was a pretty big player, but it has been losing market share ever since biggies like Google and Microsoft entered the space. This kind of domination is not good for any of us, except the company promoters. Dominance reduces growth and quality improvement. As the saying goes, "It's only competition that brings out the best": the fear of being outdone by a competitor keeps companies on their toes and forces them to innovate and come up with newer and better things. With dominance, I feel, creeps in a sense of complacency. The so-called "market leaders" start setting the rules, and any error or undesirable thing they come up with becomes the standard. We will not be able to think beyond it. These dominating companies become our horizon, beyond which we do not even care to think. Of course there always has to be a winner, and I am fine with that. It's just that I want the winner to change often.

The best thing that could happen is another company coming up, overthrowing Google and Microsoft, and becoming the market leader. Then, after a few years, yet another company comes up and overthrows that one, and the saga goes on, making the world a better and better place to live. Because, as our teachers said: "There is always scope for improvement".

I wish good luck to all those budding researchers and entrepreneurs in the several universities of the world waiting to take over the mighty "leaders".

-- Brahmana.

Friday, July 4, 2008

Getting bluetooth working on your Thinkpad

I recently started using a Thinkpad T60p and it has been good so far. Today I wanted to transfer some files from my cellphone to my laptop via bluetooth. It took me quite some time to figure it out (with the help of people around me, of course). So I sat down to write this blog post.

Technically the thing is very simple, but not apparent. The problem is that the default Thinkpad driver, called the enhanced data transfer driver, does not provide any UI to use the device for any sort of communication. Microsoft have therefore come up with a driver which gives you a nice system tray icon from which you can launch settings, send or receive files, and all such Microsoft goodies. To use it, visit this page to download the driver (download the one named Microsoft bluetooth support). Run that executable to have the contents extracted to a standard location like C:\Windows\Drivers\.... . Now go to Device Manager -> Bluetooth Devices and click "Update Driver", then select the location where the extracted files were kept. This installs the new driver. Ideally you should now be able to use bluetooth. If it still does not work, go to the bluetooth settings (using the system tray icon), select the "Hardware" tab at the end, and make sure "Microsoft Bluetooth Emulator" is selected. After this your Thinkpad bluetooth should work.

Happy toothing.

Thursday, July 3, 2008

Optimized resource utilization by Firefox

I have now started working very closely with Firefox, and in the course of it I have discovered a few things. Firefox does a really good job of optimizing its use of resources, mainly network resources. I specifically mean the connections established to remote servers and DNS resolution. The second is handled by caching, whereas the first is handled by connection sharing (at the network layer).

For example, say there are two tabs open in Firefox. In the first, some page is loading, say http://sribrahmana.blogspot.com. As it loads, the DNS resolution is done, a TCP/IP connection is established, and application layer connections are made over it. Now if you load the same URL in the second tab, a lot of redundant work is avoided. The DNS resolution result that is already available is reused instead of resolving the name again. This is possible because of the DNS cache. (There are, of course, ways to disable this cache, in which case the redundancy comes back.) Also, since there is already a network layer connection established for the first tab, and in all likelihood it is still alive, the same connection is used for the second tab as well. This results in lower network resource utilization and also less memory consumption, since no new sockets have to be created.

This is cool, but not precisely what I want. In fact I do not want it to share connections, and I do not yet know how to tell FF that. I will put it here once I figure out how.

And just as an FYI, here is how you can override the cache: MDC doc
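For reference, a few of the about:config knobs from that era that control this behavior. I am quoting the pref names from memory, so treat this as a sketch and verify each one in about:config on your build before relying on it:

```javascript
// prefs.js fragment (Firefox 3 era; names unverified -- check about:config):
user_pref("network.dnsCacheEntries", 0);         // 0 disables the in-memory DNS cache
user_pref("network.dnsCacheExpiration", 60);     // seconds a cached lookup stays valid
user_pref("network.http.keep-alive", false);     // disable persistent (shared) connections
user_pref("network.http.max-persistent-connections-per-server", 2);
```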

Friday, June 6, 2008

Embedding Mozilla -- Post 0 - Prologue

I was recently pulled in to help with a project embedding Mozilla in an application. Though I know a bit of the Mozilla code-base, I was not much aware of the embedding steps. I, along with others involved in the project, broke my head heavily on this embedding thing, which frustrated me a lot. Hence I decided to read every article on MDC or wikimo related to embedding and learn how to get a browser into a custom app. In such a journey documentation is the most important thing, and hence this series of blog posts. I hope these posts will help someone trying to embed Mozilla, or at the least serve me as a reference sometime in the future. So here we go.

First let me start with some links to documents which you had better read before venturing into the Mozilla world.

1. The whole of Mozilla is based on a technology called XPCOM, which stands for Cross (X) Platform Component Object Model and is pretty similar to Microsoft COM. If you know COM you can draw similarities. You *must* know this. If not, these are the places to go:
  • http://www.mozilla.org/projects/xpcom/ -- This is a page of links. The more you read, the better. (Obviously).
  • http://www.mozilla.org/catalog/architecture/xpcom/ -- This has lesser links and makes more sense to read first. Some essentials are presented here. The IBM Developer works articles are good and are almost like hands on manual.
  • http://www.mozilla.org/projects/xpcom/book/cxc/ -- This is linked from the above pages, but I am putting it here as it is quite important to read this one. The PDF can be handy.
Now you know what XPCOM is and how to handle it. Along the way you will also have come across certain Mozilla tools, for instance "mxr", the Mozilla Cross Reference (derived from the Linux Cross Reference, lxr). The whole of the Mozilla code base is available on the web and you can search through it: specific searches for identifiers, macros, or files, or even free-text search. There are several such tools, and you will get to know them as you get on with the Mozilla community. Be sure to visit irc.mozilla.org; the channels are #embedding, #xulrunner and #developers. (For information on IRC look here; for an index of XPCOM/Mozilla terms look here.)

Tuesday, May 20, 2008

Firefox extensions can behave like stand alone applications.

XUL Solutions: Creating an uninstall script for an extension
XUL Solutions: Creating a post-install script for an extension

This is one interesting thing I found today. One of the most common things done when an application is installed is to verify that the installation went properly and that things are where they should be. Likewise, the very basic requirement for any "clean" application is to leave no trace of its presence when it is uninstalled. This is easy for stand-alone applications, which are either straightforward executables or else have standard installers and uninstallers. They can set registry values, hook up with Program Files (or package managers on Linux), and finally do the cleaning when the time comes. But things are not as simple with Firefox extensions, because the platform, i.e. Firefox, is not as extensive as a full-fledged operating system. So I was always under the impression that these extensions are installed and run in a controlled environment constantly monitored by FF. But as it turns out, I was wrong. Firefox does provide install and uninstall events which you can hook on to and do the necessary cleaning up.

The technical details are in those blog posts. Read them up and have some clean extensions. :-)


Thursday, May 1, 2008

Learning from others project --- One big opportunity at VTU

VTU students are well aware of the way final year projects are done. I, having completed my bachelors, am fully aware of the practices for the 8th sem project. But it appears the same thing happens with our post-graduate students also. This realization was made possible by none other than loafer Sethji. It was his brother's (elder, of course) M.Tech final sem project. Sethji arranged for (bought, actually) a network based project, about "Frame Relay Networks". The people who sold the project are not exactly scientists, so their document had a lot of similarity with Wikipedia (no wonder). The code wasn't really that complex either, being Java (managed programming). But it is not these technicalities that make me write this. It is what the project taught me in the course of trying to stuff the concepts into Setu's head, so he could eventually pass the knowledge on to his brother.

Not that Setu is a bad or slow learner, but the process took more time than I initially expected. His initial explanations made me think things were pretty simple. Some of my initial assumptions about a few things being hard-coded did turn out to be true, but I never understood why anything like this was required. After some 3 hours of googling, reading and other eye/brain straining exercises I finally got some things stuffed into my own head first, and they were subsequently passed on to Setu. As of now I assume he has got hold of the concept, but I am not entirely sure what he will explain to his brother, as he already seems to be pissed off with working on his brother's project (which is totally fair considering the fact that he spent only 3 days on his own final sem project).

Whatever the story, the background or the future might be, I must be thankful to Setu's brother for making Setu get a project, without which I would probably never have known what a "Frame Relay" is. As for the technicalities of the concept/project, I have another post on another blog with a set of links to articles/documents I read, and a bit of my own documentation for the layman (aka me).

Now what is interesting about such practices is the opportunity they provide for opportunists. In this case I happened to learn about this technology when I was nowhere near it from any angle. There will be many such opportunities; it is just a matter of cashing in on them. This dialog from a Hindi movie makes perfect sense here: "Har minute ek bakra paida hota, aur do usko halaal karne ke liye", which essentially means that for every dumb guy out there, there will be two others ready to exploit him.

If there are guys waiting for someone to help them, there will be someone ready to make them a cash cow. So decide quickly which one you want to be.

Frame Relay --- An old but interesting cost-cutting network innovation

[Yet to write this up]

Wednesday, April 30, 2008

Web protocol handler in FF3

This is one awesome thing to happen. Haven't you scrunched up your face in disappointment and frustration whenever you clicked a "mailto" link on a webpage, only to have your desktop mail application (mostly unused and unconfigured) pop up unnecessarily, hogging quite a bit of resources? The worst case is all of that happening even when you are logged into one of your web based mail services, which is exactly where you wanted to send the email from.

Now all that goes away. Your Gmail might just give you a new mail compose page when you click on a mailto link. And this is not just for "mailto": there can be web based handlers for any sort of protocol. So all in all FF3 will rock like anything, along with making web applications rock harder. Be ready to dance to the beat, folks, for the beat will be pretty fast paced. :-)

More details here: mfinkle's blog.

Thursday, April 24, 2008


Mark Finkles Weblog » JS-CTYPES Status

This is what mfinkle recently said about js-ctypes. As he says, because of the hectic FF3 schedule the devs at Mozilla were pretty busy and hence nothing could be done. I, being a lazy bot, also did not do anything. In the post above, Mark talks about struct support (which I had said I would be doing). I exhibited my initial enthusiasm by writing up a doc, which was not really useful (it's here). After that nothing much happened, except for a small bug fix.

Now that mfinkle has rekindled the interest, I will also try to contribute. The struct support is still pending, and I will start by getting the XPCOM code to exchange struct information with libffi. After that we can look at the other interface, JS exchanging structs with XPCOM, to finally complete the loop. In the meantime I am also looking at embedding Mozilla. Will put up a post about that very soon.

Again: get ready to use deadly binaries in your JS code. JS-ctypes is coming soon.. ;-)

Wednesday, April 9, 2008

#pragma comment(lib, "libfilename") -- A cool way to indicate dependency.

.NET programmers are well aware of the References section of a .NET project, with which we specify dependencies. Things are pretty straightforward in the managed world, and with an IDE like Visual Studio it couldn't be easier: you use a type from a particular assembly, you add that assembly to the project's references, and the rest is taken care of by the IDE and the framework. When we move to the native/unmanaged world, we get slightly more power, with of course increased responsibility.

Any external symbol that we use must be approved at two stages, by two tools: the compiler and the linker. The compiler can be tackled by having the appropriate header file (.h file), which has just the declaration and no definition/implementation. Using it is simple: #include the header file in whichever source file uses the symbol, and make sure the compiler knows where to find that header file, that is, maintain an includes directory and tell the compiler about it.

The linker is a less tame beast, or rather a more difficult beast to tame. One reason is that since it handles object code, its error messages cannot be traced back to a line in the source code. Another is that a library comes in two flavors: static and dynamic.

To satisfy the linker, we have to provide each of the libs containing the symbols we use as input to it. In the VS IDE, we have to list all the required libs in the project settings and also specify the directory containing them. I say this is not as simple as the references mechanism in .NET because, to change anything, you have to visit the Project Settings page. And if you are developing without an IDE, your compiler invocation command line becomes really, really long (though you can shorten it with some gmake variables and system variables). But apart from these geeky build methods there is another simple way to tell the linker which .lib files it should search for symbols when linking the current code, and that is where the #pragma directive comes into the picture.

#pragma comment(lib, "requiredLibrary.lib") --- This line of code makes the linker look for the library file requiredLibrary.lib. Isn't that cool? Don't you get the feeling that you have, in a way, tamed the linker from your C/C++ code?

All you need to do is put this line in one of your source files and the compiler knows what to do. When this code is encountered, the compiler adds a "search directive" to the object file (.obj) it generates. When the linker reads that search directive, it knows which lib file to look in for any external symbol in the object file it is currently linking.

So, suppose you are providing a library and you want to enforce this method on anyone who consumes it: you can provide a header file that consumers must include to use your symbols, and in that header file you specify this pre-processor directive. That way, anyone using your lib implicitly tells the linker which lib file to look in.

This #pragma comment, in a broader sense, is used to inject extra information into the object code generated by the compiler, which the linker can then use. Here is the msdn article on everything #pragma comment can do: This page.

Thursday, April 3, 2008

Sensible policeman --- An excellent showcase of presence of mind

India ranks pretty "good" in corruption ratings, and I am sure you know which "good" I am talking about. Things can get really overwhelming sometimes in some places of the country. And one of the departments that's deep in this competition is probably the one that should protect the people -- "The Police Department". Or at least that is what the public (including me) thinks, and we might as well be partially wrong. But whatever it may be, one thing is sure: the Bangalore traffic police are really doing a good job. Now do not blame them for the bad traffic and long jams. That's the work of the infrastructure people, which the police are not. The police manage the traffic within the currently available infrastructure.

Now, what makes me write this specific post is a set of two incidents that I recently came across. Here they are:

We have a very small but extremely busy junction in Malleshwaram near Nataraja theatre, and we pass through it almost every day. Very recently, when I was travelling with one of my colleagues a little late in the night, we got stuck at that junction. The reason was that there was no policeman manning it, and hence everyone was trying to find a way for himself without considering anyone around him, except for the ones who might hit him. The result: an obvious chaos, with all vehicles at a standstill because of a dead-lock. After waiting for a few minutes my colleague said, in a totally frustrated way, "Couldn't that policeman be here at this time??! If he was here we would be home by now. Damn it!!". So it was clear that the presence of a policeman would have brought in a lot of discipline. All of us might blame them, curse them or say anything, but it is ultimately true that 80-90% of the discipline on the roads is because of policemen.

Now the second incident:

Again, on another day on my way back home from office, I was on a bike with my colleague, waiting at the signal in front of Chinnaswamy Stadium. We were in the right half of the road and there were two lanes of traffic to our left. The traffic was all piled up, and I suddenly saw the policeman manning the signal let go the two lanes of traffic to our left. I was like: "What the hell is this man doing??!!!". It was only a few seconds before I realized the motive, when I heard the ambulance siren. The ambulance was stuck in the traffic, in the lane to our left. So the policeman moved the traffic to make way for it. And of course, after the ambulance passed the signal, those two lanes were stopped again. See, policemen do a lot of good and necessary jobs.

So all tax payers, don't think that ALL the tax paid is going to waste. In fact a large amount is being used very well. More about our government and the reality of government workers in another post very soon.

Till then, just respect the traffic police, you can respect the others later after my further posts. :-)


Tuesday, April 1, 2008

.NET metadata available in 2 ways - Reflection and TypeDescriptor

Ever since I was introduced to Java during my internship at IBM, I have been fascinated by the concept of Reflection. The only way to get something like it in C/C++ was to put a function in a DLL and use GetProcAddress(), which we all know can get real dirty. But with Reflection things are real smooth and way more powerful. Without it, do you think the IDEs would have been this awesome?

Anyways, this fascination continued at my first workplace, National Instruments, where I started working with .NET. But it was not until today that I became aware of a competitor to Reflection in .NET: the TypeDescriptor. Things just keep getting more and more awesome with these managed frameworks. First they came up with Reflection, which exposed any type from any assembly. This probably made the Type designers think that their types were lying around bare in front of their users. They saw the types struggling to cover themselves, hiding and crying. The scene was like the regime of an evil master/demon. But God protects the weak, and the prayers of the Types and Type designers were answered. The framework developers finally decided to shield these poor and shy Types from their consumers, and that is when they came up with this "TypeDescriptor" funda.

With this TypeDescriptor leading the troops from the front, Type and Type designers could combat the evil. If Reflection was like the nuclear energy in the hands of the evil, TypeDescriptor was like the lead container used to put the nuclear radiations under control. I mean, with TypeDescriptor we can decide what the type will look like to other people when looked through the TypeDescriptor. So if we have to protect our types we have to con the users into looking through this Descriptor and not the old weapon.

If a TypeDescriptor is in use with a type, it expects that type to throw open some methods which the TypeDescriptor can use to gather information about the type. That information is then passed on to our users. The obvious way of providing these methods is through an interface, an example being ICustomTypeDescriptor. And if I remember correctly, just like assigning a TypeConverter to a type, we can also assign a custom type descriptor that takes care of presenting the type to the outside world.
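A hedged sketch of that shielding in C#: hide one property from TypeDescriptor-based consumers (a PropertyGrid, say) while plain Reflection still sees everything. The type and property names (Person, Secret) are entirely made up:

```csharp
using System;
using System.ComponentModel;
using System.Linq;

// Attach our custom provider to the type we want to shield.
[TypeDescriptionProvider(typeof(PersonDescriptionProvider))]
class Person
{
    public string Name { get; set; }
    public string Secret { get; set; }   // we don't want tooling to show this
}

class PersonDescriptionProvider : TypeDescriptionProvider
{
    // Chain to the default provider so everything else behaves normally.
    public PersonDescriptionProvider()
        : base(TypeDescriptor.GetProvider(typeof(object))) { }

    public override ICustomTypeDescriptor GetTypeDescriptor(Type t, object instance)
        => new PersonDescriptor(base.GetTypeDescriptor(t, instance));
}

class PersonDescriptor : CustomTypeDescriptor
{
    public PersonDescriptor(ICustomTypeDescriptor parent) : base(parent) { }

    // Filter "Secret" out of the property list the descriptor reports.
    public override PropertyDescriptorCollection GetProperties()
        => new PropertyDescriptorCollection(
               base.GetProperties().Cast<PropertyDescriptor>()
                   .Where(p => p.Name != "Secret").ToArray());
}

// TypeDescriptor.GetProperties(typeof(Person)) now lists only Name,
// while typeof(Person).GetProperties() (Reflection) still lists both.
```

The conning part from the post is exactly this: the shield only works on consumers polite enough to go through TypeDescriptor instead of raw Reflection.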

But you know what, if your type does not have a custom type descriptor, or has a very lazy one - meaning one that just forwards the calls to the default one - then the shield no longer works. Because when the TypeDescriptor does not find any openings to get information about the type, it goes for the inevitable --- Reflection.

So go protect your types, and don't forget the conning part.

In another post I will be more technical and less a story teller and write about the implementation of this TypeDescriptor.

--- Hari Om.

Friday, March 14, 2008

Styling Chatzilla userlist and XUL Trees

In my previous post I mentioned how I was fiddling with the Chatzilla "css" files to get a custom style, and failed utterly with the userlist. It was later that I found this website: How to style your Chatzilla userlist to fit with a dark motif. - Just Imagine... a weblog

This webpage (yet another blog post) tells how the userlist can be customized. Since the userlist is a "XUL Tree", styling it is a little complicated. In XUL, trees are dynamically populated UI components, so you do not actually know in advance what a treecell will contain when it is displayed. Instead, we use css with a few predefined pseudo-elements and a couple of properties to decide what style is applied to anything contained in a treecell, or for that matter to the whole tree that makes up the userlist.
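A small motif fragment along these lines could look like the sketch below. The `#user-list` id and the `op` row property are my assumptions about how Chatzilla tags the tree (check the default motif for the real names); the `::-moz-tree-*` pseudo-elements are the standard Mozilla hooks for styling tree internals:

```css
/* Darken the whole userlist tree (assuming its id is "user-list"). */
#user-list {
  background-color: #1e1e1e;
}

/* Text of every cell in any tree. */
treechildren::-moz-tree-cell-text {
  color: #cccccc;
}

/* Cells whose row carries the "op" property -- assuming Chatzilla marks
   channel-operator rows that way -- get highlighted. */
treechildren::-moz-tree-cell-text(op) {
  color: #ff8080;
  font-weight: bold;
}
```

The parenthesized selector is the trick: since the cells are built dynamically, the tree exposes per-row property names, and the css matches on those instead of on concrete elements.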

This styling thing is really cool and fun. Just try it out.

Thursday, March 13, 2008

Faces of mozillians in Chatzilla

Today I discovered this cool page on the chatzilla website that provides motifs (themes) for Chatzilla which will show faces of the mozillians. Check out this page: moznet faces

Though not all of them are listed, there are quite a lot that I know. This is a really awesome idea. And the best part was the way these motifs can be applied. I just had to drag and drop the link to the css files in the Chatzilla message window and that motif is just picked up. This amazed me more than anything.

While trying to use these motifs I also ended up reading about how I can style my Chatzilla with some custom (read weird) colors and images by simply writing some css rules. It is in fact just overriding any of the already defined css style class definitions, so it's really simple. I will write about that separately. But you know what, this whole styling thing is real fun and a must-try for anyone into the web and IRC.


Tuesday, March 4, 2008

Special chars and Escape sequences in .NET strings

Having come from C/C++ programming, and that too mainly console programming, escape sequences were very dear to me as they were my sole friends when it came to formatting output. Now at work I use C# .NET and things are not the same. I wrote an XSLT processor, as part of my job, and I wanted to log every successful XSL transformation. The file I/O was a lot easier with .NET types, but introducing a newline at the end of every log entry was a big pain. As I used to do earlier, I simply put a "\n" at the end of the log message. This resulted in an empty square box being placed there instead of a newline. This was totally weird and I started to wonder whether I had been writing Japanese?!

Then a little bit of googling told me that .NET has encapsulated these special chars in a type called "Environment". This makes sense: the newline can be different in different environments, and with this encapsulation we get the correct newline for any environment.

So in C#, if you want to add a newline to your string, use "Environment.NewLine". VB .NET developers are, as usual, lucky with an easier encapsulation. They have a type called "ControlChars", used like this: "ControlChars.CrLf", which of course is more intuitive than something named Environment.
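A tiny C# sketch of the fix for the logging case described above (the file name and log messages are made up):

```csharp
using System;
using System.IO;
using System.Text;

class LogDemo
{
    static void Main()
    {
        var log = new StringBuilder();
        // Environment.NewLine expands to "\r\n" on Windows and "\n" on
        // Unix, so the log renders correctly wherever it is opened --
        // no more square boxes from a hard-coded "\n".
        log.Append("Transform OK: invoice.xsl").Append(Environment.NewLine);
        log.Append("Transform OK: report.xsl").Append(Environment.NewLine);
        File.AppendAllText("transform.log", log.ToString());
    }
}
```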

Monday, January 28, 2008

Face the worst first and get rid of it asap.

This is yet another discussion with Doddappa (Uncle). It’s a pretty simple thing, yet a lot of us do not seem to remember and practice it. I do not remember how this one came up, but here it is. This is one of his philosophies which has been appreciated by others. The idea is about prioritizing the things/to-dos of the day. There will be a lot of days in every person’s life with a few “good” to-dos and probably a couple of “not so good” or “difficult” ones too. The difficult ones will generally be such that we hesitate to carry them out, and as a result we keep postponing them because we are afraid to face them. Also, nobody likes to start the day with something bad, because of a strange belief that the rest of the day would then also be bad.

But what most of us forget is that these unwanted responsibilities keep eating our resources – time, thought and energy – until we get rid of them. This directly results in a reduction in our efficiency, even when we are doing something that we love the most, because the fear of having to do the unwanted keeps eating us from the inside. We do not get rid of the unwanted even when we know it is something we will ultimately have to face. So, to avoid all this loss of resources, the best thing is to finish off the most unwanted and most feared task of the day as early as possible. This ensures a more successful and productive day ahead of us. Just get rid of the unwanted and feared things first, and that will allow us to enjoy the things we actually like to do.