

Saturday, February 7, 2015

AJAX for beginners - what it is and how it evolved

Today's websites or web pages are very complex structures involving several components. It can be daunting for a beginner to understand it all at once. To ease the process let's break things down and see how the various pieces fit together.

At the core of web pages is the HTML. It is the language that browsers understand and it is what web developers use to showcase rich content. Without HTML all our web pages would have been simple text files. Now I assume that you are aware of HTML and how it is used in web pages. Another critical piece that works closely with HTML is CSS, but we can skip that one for now.

Plain HTML web pages are pretty dull. They are not really interactive. What they mainly have apart from text is:
  1. Some formatting information which tells the browser how to show the contents.
  2. Some media content like images or videos.
  3. A bunch of hyperlinks which lead to other similar web pages.
  4. In slightly more evolved pages there would be forms which allowed a user to send data back to the server.

It is the third one which helped form this "web" of pages which we call the Internet. How do these links work? Pretty simple actually. Each link typically points to a web page on a server. Whenever a user clicked on a link, the browser would fetch that page, remove the existing page and show the newly downloaded one. Sometimes the link could point back to the same page, in which case clicking on it would just "refresh" the existing page. We all know about the refresh/reload button in the browsers, don't we?

There are a few things to note here.
  1. These basic web pages were dull in the sense that they were not interactive. They did not respond to user actions except for clicks on links, in which case the existing web page goes away and a fresh page is loaded.
  2. When a link was clicked the browser would decide to send a request to the server. The web page itself did not have much control on when the request should be made.
  3. When the browser was making a request in response to a link click, there was pretty much nothing you could do with the web page until the request completed. Every request to the server blocked the entire page and if there were multiple requests that had to be made it had to be sequential, one after the other. A fancy technical word to describe this would be "synchronous".
  4. An obvious observation is that the data that came from the server in response to a request from the browser was HTML, i.e. the data and the information on how to display that data were sent back from the server every time.

While the fourth point does not appear to be such a big problem, the first three are definite limitations.

To work around the first problem browser developers came up with JavaScript. Browser folks told web developers that they could make their web pages interactive and responsive to user actions by writing code to handle those interactions in this new language called JavaScript and including it in the web pages. Web developers could now write different pieces of code to handle different user actions. The browser provided a set of functions (aka APIs) in JavaScript which the web developers could use to make changes to the web page. This API is called the DOM (Document Object Model). Thus web pages became interactive and responsive.
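
As a tiny illustration of that idea (the element id and the message here are made up for the example):

// Run some code when the user clicks a button, and change the page through the DOM.
document.getElementById("greet-button").onclick = function () {
  var message = document.createElement("p");
  message.textContent = "Hello! You clicked the button.";
  document.body.appendChild(message);
};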

The second and third limitations still existed, however. The only way to request data from the server was through links or, in the case of evolved pages, through forms. Also, the web page was one single unit. Whenever something had to change, the whole page had to be reloaded. Imagine something like that with your Gmail page. How would it be if the page refreshed every time you received a chat message, or worse, every time one of your contacts logged in or out of chat? That would be disastrous. The fate would be similar for something like a sports website, in which the small corner showing the score needs to be updated a lot more frequently than the rest.

To solve this some people thought of iframes as a solution. An iframe is an HTML tag inside which you can put another HTML page. With this arrangement the inner page could be reloaded without affecting the outer page. So the sports website could now put the score part in an iframe and refresh it without affecting the rest of the page. Similarly, Gmail could break up its page into multiple iframes (it actually does this in reality, but it also does a lot more than this).

With iframes the "synchronous" problem was somewhat solved. Multiple requests could now be made in parallel from a single web page. However, the method of making the request largely remained the same, using links or forms. It was either not seamless or almost impossible to make requests in any other manner, say whenever the user typed something (like Google search). Now let's see what capabilities we have on top of the basic web pages.
  1. Web pages can respond to user actions. Some code can be executed on various user actions made on different parts of the page.
  2. This code can modify the web page using DOM.
  3. Browsers are now capable of making multiple requests in parallel without blocking the web page.

The browser folks now combined these abilities, allowing web developers to make requests to servers in response to user actions using JavaScript. These requests could be made in parallel without blocking the page, i.e. asynchronously. When the response came back, the same JavaScript code could add the HTML received in the response to the existing page using the DOM.

The developers went a step further and said: "Why receive HTML every time? In most cases the formatting or presentation information is already there in the web page. What is needed is only new data or content (for example, in the sports website case, all that is needed is the updated score; information on how to show that score is already present in the web page). So instead of fetching HTML every time, let's just fetch data to make things much faster and more efficient." Back then the most popular way of representing data in a structured manner was XML. So developers started receiving data in XML format from the server and using that to update the web page. XML was so popular that browsers added support for automatically parsing XML data received in response to these requests.

This approach of making requests and updating parts of web pages became very popular and very widely used. Since it was predominantly used to make requests over HTTP (the protocol of the web) to fetch XML data, this technology was called "XMLHttpRequest" (Internet Explorer called it something different; it probably still does).

So what does this XHR offer?
  1. Making requests "Asynchronously"
  2. Making requests from "JavaScript"
  3. Making requests for "XML"

Put together it forms "AJAX - Asynchronous JavaScript and XML".
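
To make this concrete, here is a small sketch of what an AJAX request for the sports-score example could look like (the URL, element id and XML structure are made up for illustration):

// Ask the server for just the latest score as XML, without reloading the page.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/live/score.xml", true);   // true = asynchronous
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // The browser has already parsed the XML response for us.
    var score = xhr.responseXML.getElementsByTagName("score")[0];
    // Update only the small corner of the page that shows the score.
    document.getElementById("score-box").textContent = score.textContent;
  }
};
xhr.send();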

Notes :
  1. Several things have been simplified here as this is aimed at people new to web development and Ajax in particular. This should nevertheless give a good starting point.
  2. References to XML should be read as "something other than HTML". Nowadays XML has been largely replaced by JSON because of its several advantages.
  3. XMLHttpRequest does not necessarily mean making an asynchronous request. It supports making synchronous requests as well, but I don't know of a good reason to do that. Making a sync XHR will block the entire page until that request is complete. Really, don't do that (see the small example below).
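
For completeness, this is where the synchronous/asynchronous choice is made (the URL is just a placeholder):

var xhr = new XMLHttpRequest();
// The third argument to open() picks the mode.
xhr.open("GET", "/some/data", true);     // true  = asynchronous (the normal, recommended way)
// xhr.open("GET", "/some/data", false); // false = synchronous - freezes the page until the response arrives
xhr.send();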

Monday, December 13, 2010

Backup contacts from a Nokia phone to an Excel spreadsheet -- Convert VCF to CSV with nodejs!

Yesterday my uncle asked me to take a backup of his cellphone contacts. He has a Nokia 3600 Slide phone. Now this phone is a piece of work.. ! It doesn't provide a way to copy/move contacts to the external memory card. I found a couple of file manager applications in .jar and .jad formats and tried to see if they would show me the phone's memory (not the external memory card, but the memory embedded in the phone) as that is where the contacts are stored. But no luck. Later when I found one such application, whose description said it gives you access to contacts, it was a .sis file and the phone simply said "File format not supported"... !! It's an S40 phone and hence cannot install applications from .sis files, and most of the cool and usable stuff seems to be available in .sis format only. :(

A little searching led me to the Nokia Europe support page for "Backing up your phone", and the preferred methods for backing up contacts were copying to a computer via Bluetooth (mentioned as the simpler solution) and backup via PC Suite (mentioned as the harder way). Sadly the simpler way was not possible for me as my current laptop (Lenovo G550) is bluetooth challenged (a.k.a no bluetooth module) and I did not have the CD that comes with the phone to install the PC Suite (I wouldn't have installed it even if I had the CD.. !!)

Some more searching led me to a few applications (free and paid, in .sis and .jar/.jad formats) which would "sync" your phone to a "remote location" via GPRS and make it available to us globally and also (here comes the best part) via "Social Networking Websites".... (Geez... as if we love spam calls and SMS and are short of them..!!).

Having found nothing really useful I started fiddling with the phone settings to see if I could find something there. Luckily there was a Backup option. The default backup tool available in the phone can apparently dump your phone's current state --- including contacts, calendar, settings, applications, etc.. etc.., all into a backup file (a .NBF file, which I presume stands for Nokia Backup File/Format). Thankfully this file is placed in the external memory card, in a folder named Backups. Then I connected my phone to my laptop and chose to use it as an external storage device. Copied the .NBF to my computer. And I was happy that I could do it without any additional installation.

Sadly the happiness did not last long. I tried to open the NBF file in a few text editors (hoping to see a huge list of contacts) but was instead presented with some gibberish.. Damn it... it's a binary file.. ! And I could not find any reader/converter for this file format on the internet. But thanks to the linux "file" utility, I got to know that NBF is just a zip archive. Fantastic... !! I unzipped it and found all my contacts in a folder at a depth of 4 (i.e. a folder in a folder.. 4 times). Again I did not see a huge list of contacts. I saw a huge list of VCF files, one for every entry in my contacts. And this is how each file looked (in case you are wondering why one file per entry is a problem):

BEGIN:VCARD
VERSION:2.1
N;CHARSET=UTF-8;ENCODING=8BIT:
TEL;PREF;WORK;VOICE;ENCODING=8BIT:
END:VCARD
Well clearly this is not really a usable form.. !!

Now, I thought I would just import all these VCF files into my contacts in Outlook and then export them as a CSV. That would be a pain in two ways -- first, I would need to set up my Outlook if it was not already set up, which was the case with me (a small pain). Secondly, if Outlook was already set up then the two sets of contacts would be merged. Not something that my uncle wanted. So the next idea was to convert all these VCF files into one CSV.

Then the hunt started for a VCF to CSV converter. I found a couple of online ones, but was apprehensive about uploading the files to a remote server. I also had the option of downloading those server-side scripts and running them in a webserver locally or with a PHP engine from the command line. I did not have PHP installed and did not think it was worth the effort as I wasn't sure how well those scripts worked. Finally I found this python script - http://code.google.com/p/vcf-to-csv-converter/ . Happily I fired up my Linux VM and ran the script with the nearly 1000 VCF files I got from the phone. Bang.. !!! The script erred out with an exception while processing the 332nd file or so. :(

At this point it appeared to me that the easiest way out was to write a simple importer myself. I was in no mood to write a C/C++ program and do all that string manipulation with raw character buffers and pointers. And the only scripting language that I knew well (well enough to write something useful on my own from scratch) was Javascript... !! So how do I process these VCF files with Javascript?

Enter Nodejs. I had been fiddling around with node for some time but had not done anything useful with it. So I fired up vim and started to write the importer script, and in about 30 mins (much less than the time spent on the internet searching for an existing tool... !) I had the contacts list in a CSV..!! Wohooo... And here is the script which did the job : https://gist.github.com/738325

The script is very crude, to the extent that I have hardcoded the path of the output file, and it's very likely not efficient either. Also note that this script just extracts names and phone numbers and no other details. It is not a generic VCF parser (I don't even know the VCF spec. I just looked at a couple of VCF files and figured out the format of the lines for names and numbers). If anyone needs it, they are free to use it. No guarantees though.. !! :)
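
For anyone curious about the general shape of such a script (this is not the gist above, just a minimal sketch of the same idea; the folder and output file names are placeholders):

// Convert a folder full of single-entry VCF files into one CSV of name,number pairs.
var fs = require('fs');
var path = require('path');

var vcfDir = './contacts';          // folder containing the .vcf files
var lines = ['Name,Number'];

fs.readdirSync(vcfDir).forEach(function (file) {
  if (path.extname(file) !== '.vcf') return;
  var name = '', number = '';
  fs.readFileSync(path.join(vcfDir, file), 'utf8').split(/\r?\n/).forEach(function (line) {
    // The value comes after the last ':' on the N and TEL lines.
    if (line.indexOf('N;') === 0 || line.indexOf('N:') === 0) {
      name = line.substring(line.lastIndexOf(':') + 1).replace(/;/g, ' ').trim();
    } else if (line.indexOf('TEL') === 0) {
      number = line.substring(line.lastIndexOf(':') + 1).trim();
    }
  });
  lines.push('"' + name + '",' + number);
});

fs.writeFileSync('contacts.csv', lines.join('\n'));
console.log('Wrote ' + (lines.length - 1) + ' contacts to contacts.csv');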

It was fun playing with JS, especially NodeJS. :)

Tuesday, March 30, 2010

Mozilla @ SJCE : Static Analysis projects

It has been a very long time since I posted anything about the Mozilla related activities going on at my college, SJCE, Mysore. That in no way means an absence of activity. In a previous post I mentioned that along with the attempt to introduce Mozilla as a standard course, I was working to get the current final year students (who are enrolled under the larger VTU University) to start working on Mozilla, using their final semester project as a means. Well, I am happy to say that this has materialized. 8 final semester students from CS expressed interest in working with the Mozilla community as part of their final year project and it has been nearly a month since they started their work. Here is a brief write-up about that.

As with most Computer Science students in India approaching the Mozilla community, these 8 students also wanted to do something related to compilers. The JS engine and static analysis are two projects in Mozilla which come under the compiler banner. These 8 students wanted to work on something substantial which could be presented by two teams of 4 students each as their final semester project. So the bugs that they would be working on had to be related. This was possible only with static analysis as there are a lot of related tasks available. Also, static analysis would be something new to the students and it would give them an opportunity to understand the internals of the compiler (GCC here), like the AST (Abstract Syntax Tree) and other representations of the code. Moreover, the static analysis APIs are exposed in JS and hence the analysis scripts would be written in JS. That way the students would learn JS also. Above all, these students would be doing something genuinely new.

The students could not be asked to start working on the bugs directly. They were new to open source development and the tools used there, like Bugzilla, using email as a formal medium of communication, the source control system (to add to the complexity, Mozilla now uses a distributed RCS - Mercurial [hg]), using IRC, the linux development environment, etc. It is these things that the students have been learning for the past month or so. This learning has been in the form of accomplishing the tasks which form the prerequisites for the actual static analysis work. These are things like downloading the gcc and mozilla sources from the ftp hosts and from the mercurial repository respectively, applying the mozilla specific patches to gcc for plugin support, etc. These are all listed here. Note that some things, like installing all the dependency packages for building these applications from sources and learning to use the linux command line itself, are not on that page but were new to these students nonetheless.

All the students have been putting in substantial effort and have picked up the traits of an open source hacker pretty quickly. We have had a few IRC meetings and a lot of formal communication over email. In parallel we were also working towards shortlisting 8 static analysis bugs. Based on the feasibility of a bug being completed by an amateur developer within a span of 2.5 months, and based on the students' interest, we finally decided on these 8 bugs:
  1. Bug 525063 - Analysis to produce an error on uninitialized class members
  2. Bug 500874 - Static analysis to find heap allocations that could be stack allocations
  3. Bug 500866 - Warn about base classes with non-virtual destructors
  4. Bug 500864 - Warn on passing large objects by value
  5. Bug 528206 - Warn on unnecessary float->double conversion
  6. Bug 526309 - Basic check for memory leaks
  7. Bug 542364 - Create a static analysis script for detecting reentrancy on a function
  8. Bug 500875 - Find literal strings/arrays/data that should be const static
These tasks are good, challenging and provide an opportunity to understand compilers very closely.

Currently the students have downloaded gcc, applied the patches, built it along with the dehydra plugin support and are ready to run static analysis on the mozilla code. They are now trying to run simple analysis scripts, like listing all classes in the mozilla code and all classes with their corresponding member functions. There is still quite a long way to go, but it has been a really good start. Let's wait and watch what great feats are in the pipeline.
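
To give an idea of what those scripts look like, here is a rough sketch of a Dehydra-style analysis (written from memory, so treat the exact hook and property names as assumptions):

// Dehydra calls process_type() for every type the compiler sees.
function process_type(t) {
  if (t.kind == "class" || t.kind == "struct") {
    print("class " + t.name);
    if (t.members) {
      for (var i = 0; i < t.members.length; i++) {
        print("  member: " + t.members[i].name);
      }
    }
  }
}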

I hope to keep this blog updated at the same pace at which the students are working.

Good luck to the students. :-)

Thursday, January 28, 2010

Script to get nsIWebProgressListener state names from state codes

This is totally Mozilla specific and probably will not make any sense to anyone not involved with Mozilla code.

So in Mozilla there is an interface named nsIWebProgressListener which can be used to get notifications about any web progress -- a page load in simple terms. These notifications are sent to us by calling our onStateChange methods. One of the parameters passed is the state of the request. This is a bitmask of flags, usually written as hex codes. Memorizing all the hex codes is insane. So to log the states I wrote a small, dumb script.

I wanted to put it somewhere on the internet, instead of in a file on my disk, and hence this blog post. Here is the script to convert nsIWebProgressListener state flag values to state names. A simple lookup function, but handy for logging.

var flagNames = [
  "STATE_START",
  "STATE_REDIRECTING",
  "STATE_TRANSFERRING",
  "STATE_NEGOTIATING",
  "STATE_STOP",
  "STATE_IS_REQUEST",
  "STATE_IS_DOCUMENT",
  "STATE_IS_NETWORK",
  "STATE_IS_WINDOW",
  "STATE_RESTORING"
];

var flagValues = [
  0x00000001,
  0x00000002,
  0x00000004,
  0x00000008,
  0x00000010,
  0x00010000,
  0x00020000,
  0x00040000,
  0x00080000,
  0x01000000
];

function splitFlags(aFlag) {
  var states = "";
  for (var i = 0; i < flagValues.length; i++) {
    if (aFlag & flagValues[i]) {
      states += flagNames[i] + "\n";
    }
  }
  return states;
}
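
For example, the flags for a typical STATE_START notification on a request might combine two of these bits (the value below is purely illustrative):

// 0x00010001 = STATE_START | STATE_IS_REQUEST
dump(splitFlags(0x00010001));   // prints "STATE_START" and "STATE_IS_REQUEST", one per line
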
That's it.

Wednesday, October 28, 2009

Incrementally building Mozilla/Firefox

The Mozilla code base is really huge and has a variety of files which are built in a variety of ways. It was always sort of confusing for me to figure out where I should run make after I change any of the files. I generally asked on the IRC and someone just told me where to run make.

Today it was the same thing. But this time I thought I might as well learn the logic so I can decide for myself the next time. Here is the chat transcript of NeilAway answering these questions.

The MDC page (https://developer.mozilla.org/en/Incremental_Build) has almost the same content for the native code. Neil here explains it for all the types of files involved.

Also, to add to the list below: running "make check" from the objdir will run the automated tests.

  • For xul/js/css/xbl it usually suffices to find the jar.mn (it may be in an ancestor folder) and make realchrome in the corresponding objdir
  • For idl you're often looking at a full rebuild, depending on how widely it's used
  • For .cpp and .h you obviously have to make in the folder itself, and then look for a corresponding build folder
  • Except for uriloader where you use docshell/build and content, dom, editor and view use layout/build
  • If you're building libxul or static then this is all wrong
  • You don't look for a build folder, I think for libxul you build in toolkit/library and for static you build in browser/app

Wednesday, July 22, 2009

Getting the size of an already loaded page (from cache) in a Firefox extension.

Today this question came up in the IRC (moznet, #extdev). One of the add-on developers wanted to get the size of the page, either in bytes or number of characters. The most obvious things that came to my mind were progress listeners for a definitive answer, or the content length from the channel for a not-so-critical scenario. But then he said he wanted it for an already loaded page. And he further said that the information is already there somewhere, as it is shown by the Page Info dialog (right click on a web page and select View Page Info). He was indeed right. Somebody in the code is already going through the trouble of calculating the data size and we can just re-use that. And I immediately started the quest to find that out.

As usual, to figure out any browser component I opened up DOM Inspector. That tool is improving, which was against my earlier perception (sorry Shawn Wilsher), though the highlighting part is still screwed up. Nevertheless, locating that particular label "Size" and the textbox in front of it containing the value was not difficult at all. I got the "id" of the textbox containing the size value. (It's "sizetext" :) ).

Next it was MXR (http://mxr.mozilla.org/) in action. I did a text search for the id and got a bunch of results, one of which was pageInfo.js with this entry: line 489 -- setItemValue("sizetext", sizeText); . It is here. That very line made it apparent that this is the place where the value is being set, and hence the place from where I could learn how the value is calculated.

Once I saw the code it was very clear, straightforward and pretty simple too. We have the URL. From the URL we get the cache entry for that URL. (Every cache entry has a key and that key is the URL - so neat.) We try to get the cache entry from the HTTP session first, and if that fails we try the FTP session. The cache entry has the size as an attribute on itself, so it's just a matter of getting that attribute value. DONE.
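
From memory, the relevant bit of pageInfo.js boiled down to something like this (the exact interface names and argument details are my recollection of the pre-Firefox 4 cache API, so treat them as approximate):

var nsICache = Components.interfaces.nsICache;
var cacheService = Components.classes["@mozilla.org/network/cache-service;1"]
                             .getService(Components.interfaces.nsICacheService);
// A cache "session" per protocol; the page URL itself is the cache key.
var httpSession = cacheService.createSession("HTTP", 0, true);
var ftpSession  = cacheService.createSession("FTP", 0, true);

function getPageSize(url) {
  var entry;
  try {
    entry = httpSession.openCacheEntry(url, nsICache.ACCESS_READ, false);
  } catch (e) {
    entry = ftpSession.openCacheEntry(url, nsICache.ACCESS_READ, false);
  }
  return entry.dataSize;   // the size shown in the Page Info dialog
}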

I am not sure how this will behave if we have disabled every type of cache. AFAIK, there will still be some in-memory cache as long as the page is still loaded. Probably good enough.

That was the end of a small but interesting quest. :-)

Friday, April 3, 2009

Making firefox use a little less memory

A lot of people tell me that firefox uses a lot of memory and that it grows like anything when used for a long duration. From developer discussions and some blog posts/forums it appears that prior to FF3 memory leaks formed a major part of this. Apart from that, one of the important features, "bfcache - Back/Forward Cache", which arguably makes back and forward navigation very fast, uses a lot of memory. On one of the pages I read, it can apparently be up to 4MB per page. This cache essentially keeps the parsed HTML of the markup in memory along with the JavaScript state. Sometimes, or more like most of the time, this can be very taxing. You might be the kind of person who does not really use Back-Forward navigation, and in case you do use it you are OK with a slight delay (which may not be perceivable with a good connection). If indeed you are that kind, then disabling this "bfcache" will probably make your firefox eat less memory.

The setting/preference that controls this behavior is : browser.sessionhistory.max_total_viewers

By default it is -1, which means Firefox decides the limit by itself (based on available memory, as far as I know). Setting this to 0 (zero) will disable this cache completely. Essentially you are telling Firefox not to store the state of any document in memory. If you want to keep this feature with a saner limit, you can set it to an integer representing the number of pages you want to be saved in memory.

Like any other pref not exposed through the Options dialog this has to be edited by visiting the page about:config. Open this in a tab and key in (or copy and paste) the preference string. Double click it to edit.
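
If you prefer a file-based approach, the same preference can also be set in a user.js file in your profile directory (the value here is just the example from above):

// 0 disables the back/forward cache completely; use a small positive number for a saner limit.
user_pref("browser.sessionhistory.max_total_viewers", 0);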

Maybe you can now lose less hair as your other applications will run more smoothly. :P

Hari Om.

Wednesday, March 18, 2009

An official full-fledged Firefox Add-ons dev guide

Having done a little work on Mozilla Firefox and extensions for it, I have always felt the lack of an official, step-by-step document for extension development. I do accept that there are a lot of resources available on the internet -- innumerable blogs and tutorials explaining the process step by step. But most of them become outdated with newer versions of Firefox and the authors are not really keen on updating the info. That is why an official tutorial or guide from Mozilla itself would be the right thing. It would make sure the contents of the guide are up-to-date and worth reading for developing an extension for the currently available version of Firefox.

There are documents for extension development on MDC, like the various links present at https://developer.mozilla.org/en/Extensions, but the material was either scattered or pretty brief in most places, though it did touch on most parts of extension development. Nevertheless a comprehensive "official" guide was needed, and here it is now. I went through the guide and, as the blog post says, it is still in BETA with a lot of "TO-DO" tags in there. In spite of that it is very much usable. Do visit it and post back your feedback to make it a much better guide.

Add-ons Blog » Blog Archive » Firefox Add-ons Developer Guide (beta release) - Calling all Add-on Developers!

Let's hope we will have more and more quality extensions now. :-)


Tuesday, March 3, 2009

Browsers are undergoing continuous innovation.

It has been some time since I blogged about anything. I have had several things on my mind and many of them are present as drafts. But this one really caught my attention and I felt I should put down my thoughts about it.

Until a couple of years ago, people rarely looked beyond Internet Explorer for their browsing experience, though they kept cursing it a lot. Then came the Mozilla Firefox web browser with its pack of addons allowing users to actually customize the browser for their needs. They could actually make the browser do what they wanted it to do and not just set some options. The browser wars had started again. At least I started reading about browsers and started following up on things happening with these browsers. Opera and Safari and a few other, actually used, browsers were not really "news" as such.
This was dying off. Firefox had created a sizable chunk of user base and was pretty stable with it. Except for a few traditional enhancements like memory optimizations, bug fixes, etc., nothing big was happening. It was all pretty quiet.

Then came the next wave with Google announcing the release of its browser with a nice, easy to understand "comic" book. It came with a whole new paradigm for building browsers with the "process-per-tab" concept. It was really innovative. Though discussions about this had happened in other browser communities also, Google Chrome was the first one to implement it. It also boasted of its super-fast V8 JavaScript engine and of a browser UI which gave more screen real estate to content than to chrome. This was a really big thing and made the browser wave go much higher than it had ever been. With the Google branding, a vast majority of internet users rushed to have a sneak peek at Google Chrome, or maybe try it, or even keep it as their regular browser. There was a lot of noise about this and people indeed listened. The Mozilla and IE people were not silent and responded very well. Mozilla came up with its ultra-fast "Tracemonkey" JS engine, implementing trace trees and thereby making it much faster than V8. IE8 also has the "process-per-tab" and "private browsing" features first presented by Chrome. But like any other sound this too dampened a little after it was created, and browsers were back in the silent phase doing traditional improvements and bug fixes. At least that's how I perceived the situation.

[Edit 06-Mar-09 : Shawn Wilsher suggests that Tracemonkey had appeared before Chrome made its public appearance. He certainly knows these things better than me and I believe he is right. But still I wanted to keep my original post as it is, so instead I have put in this separate edit note. :) ]

But I was clearly, totally wrong. People have realized that the internet is the place to be in the future and that's where a large part of our lives will be. With the browser being the main and central interface for people to use that internet, it makes really good sense to make this browser as robust and reliable as possible. New things keep coming on the internet, both good and bad, the latter more often, and the browser has to keep up with all of it. What we thought yesterday was a good design approach might just look senseless tomorrow. There just seems to be no end to it, and researchers appear all ready for it. I am coming across so many innovations happening in the browser domain. This article: Researchers Say Gazelle Browser Offers Better Security -- Campus Technology -- gives us an idea about how much effort the scientists are putting into making our internet lives better. The article is about Gazelle, but it also mentions another experimental browser, OP.

This browser, Gazelle, uses the Trident rendering engine (used by IE) but builds upon an OS-like process architecture where websites run as separate processes and communicate by passing messages, much like IPC (inter-process communication).

I am not yet sure how this will all work out. Probably the websites need to be built with some intelligence so that they can do the so-called IPC when they actually have to. But the evolution in browsers is for sure. They will not stay the same as they are now. Several things are also going on on the UI front. When these things mature and come together you could have scenes from your favourite sci-fi movies right in front of you every day.

Let's just hope these come soon enough for people like me to enjoy, not when I am all grey-haired and toothless.

Hari Om.

Wednesday, January 14, 2009

Documentation appears by itself -- from the code.

Beautifully Documented Code at Toolness

I am slightly into web and web related development, and as part of that I come across various JavaScript (JS) libraries which claim to make my life a lot easier, and many of them actually do that. But several times there is a big hurdle that I have to cross before things actually become easy: understanding how to use that library. This sometimes turns out to be a pain, often bigger than if I had done everything by myself without any external library.

This is probably all going to change. The lack of documentation might just vanish and all the necessary documentation might start appearing from the code itself. And the steps for building this, collating things and publishing it on a website are taken care of. You just need to annotate your code with the right tags and the JavaScript linked at the top (written by Atul) will generate the documentation and display it alongside the code.

This is particularly useful for libraries providing various APIs, as the users can see a function in the raw code and understand its meaning and purpose, the arguments it expects and its return value. All in one place at one time.

This is really cool. Go check out the tool and give the world a better library to use :-)

Happy (auto)Documenting

Sunday, August 17, 2008

How libffi actually works?

I came across libffi when I thought of working on js-ctypes. Though my contribution remained very close to nil, I got to know about this fantastic new thing, libffi. But I did not understand how it actually worked when I first read about it. I just got a high-level overview and assumed that the library abstracts out several things that I need not worry about when making calls across different programming languages. I just followed the usage instructions and started looking at the js-ctypes code. That was sufficient to understand the js-ctypes code. But today, when I was once again going through the README, I came across one line that made things a little clearer. The first paragraph in "What is libffi?" tells it all. It's the calling conventions that this library exploits. I had written a post about calling conventions some time back. As mentioned in the readme file, it's the calling convention which is the guiding light for the compiler. So when generating the binary code, the compiler will assume that the arguments being passed to a function will be available in some known place, and it also knows of a place where the return value will be kept, so that the code in the calling function knows where to look for it.

Now we know that the ultimate binary code that we get after compilation is machine code. So it is basically a set of machine-supported instructions. These instructions are certainly not as sophisticated as those available in C -- like functions. So what do these functions translate to? Nothing but a jump of the IP (instruction pointer) to the address where the binary code of that function is. Inside this code, the arguments passed are accessed by referencing some memory locations. During compilation the compiler will not know the precise memory locations (obviously). But the compiler has to put in some address there when generating the code. How does it decide on the memory location? This is where the calling conventions come into the picture. The compiler follows these standard steps. Since it is the compiler that generates both the code calling a function (where arguments are sent) and the code of the called function (where those args are received), the compiler knows where it has put the arguments and hence generates code to use the data in those locations. From this point of view, the main (or is it the only?) condition for the binary code of a function to work properly is to have its arguments in the memory locations it thinks they are in, and at the end to place the return value back in the right location. And from what I have understood till now, it is this point that libffi exploits.

When we are forwarding calls from an interpreted language, like JS, to some binary code generated from a compiled language, like C, there are two things to be done:

0. As discussed earlier, whatever arguments are passed from JS are to be placed in the right locations and later take the return value and give it back to JS.
1. Type conversions -- The types used by JS are not understood by C. So someone should rise to the occasion and map these JS types to their C counterparts and vice-versa.

The first of these is taken care of by libffi. But the second one is very context specific and totally depends on the interpreted language. It does not make sense to just have a catalog of type converters for every interpreted language on this earth. So these two steps were separated out as two interacting layers between the binary code and the interpreted language code. libffi now takes care of the calling convention part, making sure the binary code runs and gives back the results. And the type conversion is the job of another layer which is specific to the interpreted language that wants to call into the binary code. Hence we have different type-converting layers for different interpreted languages: ctypes for python, js-ctypes for JS (and probably some more exist). Well, with this renewed and clearer understanding I hope to actually contribute to js-ctypes.
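
For the curious, this is roughly how the js-ctypes layer ended up being used from JS in later Firefox versions (written from memory; the library name and the exact module path are assumptions on my part):

Components.utils.import("resource://gre/modules/ctypes.jsm");

// Open a compiled C library and describe one of its functions:
// declare(name, calling convention, return type, argument types...).
var libm = ctypes.open("libm.so.6");
var sqrt = libm.declare("sqrt", ctypes.default_abi, ctypes.double, ctypes.double);

// libffi takes care of the calling convention; js-ctypes maps the JS number
// to a C double and the C double result back to a JS number.
var result = sqrt(16.0);   // 4
libm.close();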

Happy calling (with conventions) ;-)

Tuesday, July 15, 2008

A web-wizard to create XPCOM components in JS

JavaScript XPCOM Component Wizard

As I might have mentioned in my earlier posts, creating XPCOM components can sometimes be hell. (So wait for js-ctypes to get on board.) Mozilla people know about this pain and hence have published sample code snippets to help you figure out what is necessary and what is optional. Even the process of creating your component from these snippets can be cumbersome sometimes. So one mozillian, ted, went a step ahead and created a web-based wizard to create a component in JS. You just enter a few parameters and a skeletal component is ready for use. It simply saves a lot of time.
Just visit the link at the top and try your hand at it.
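
To give an idea of what comes out of it, the generated skeleton looks roughly like this (the names, contract ID and UUID below are placeholders I made up; the XPCOMUtils pattern is the Firefox 3-era one):

const Ci = Components.interfaces;
Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

function HelloComponent() {}
HelloComponent.prototype = {
  classDescription: "Sample JS XPCOM component",
  classID: Components.ID("{c0ffee00-1111-2222-3333-444455556666}"),  // placeholder UUID
  contractID: "@example.com/hello-component;1",                      // placeholder contract ID
  QueryInterface: XPCOMUtils.generateQI([Ci.nsISupports]),

  // Your component's actual functionality goes here.
  sayHello: function () {
    return "Hello from a JS XPCOM component";
  }
};

// Firefox 3-era registration boilerplate.
var NSGetModule = XPCOMUtils.generateNSGetModule([HelloComponent]);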

Happy Componenting. ;-)

--Brahmana.