Friday, September 12, 2008

Chrome multi-process architecture does have heavy costs

Chromium Blog: Multi-process Architecture

The day the Google Chrome comic was released, quite a few people pinged and called me to talk about it and ask what I thought -- not that I am an expert on browsers or on evaluating software products; it's just my slight association with the Mozilla community that got me contacted. When I was talking with my roommate about this, I told him what I felt the very moment I read about the multi-process architecture. Right from the first reading I was skeptical about the resource utilization if I were to open a whole lot of tabs in this browser, since the comic says that each tab is a process. Though the phrase "process per tab" was more for software laymen; the actual behavior is "process per domain", up to a maximum of about 20 processes, after which existing processes are reused. More details are in the blog post linked above.
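To make the "process per domain, capped, then reuse" idea concrete, here is a minimal sketch of what such a policy could look like. This is not Chromium's actual code; the class, the cap constant, and the round-robin reuse strategy are all assumptions for illustration.

```cpp
// Hypothetical sketch of a "process per domain, capped" policy.
// Names (RendererPool, kMaxRenderers) and the round-robin reuse
// strategy are assumptions, not Chromium's real implementation.
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

constexpr std::size_t kMaxRenderers = 20;  // the ~20 cap mentioned above

using ProcessId = int;

class RendererPool {
 public:
  // Return the renderer process for a domain, spawning a new one
  // until the cap is hit, then reusing an existing process.
  ProcessId ProcessForDomain(const std::string& domain) {
    auto it = domain_to_process_.find(domain);
    if (it != domain_to_process_.end()) return it->second;

    ProcessId pid;
    if (processes_.size() < kMaxRenderers) {
      pid = SpawnRenderer();  // below the cap: create a fresh process
      processes_.push_back(pid);
    } else {
      pid = processes_[next_reuse_++ % processes_.size()];  // reuse
    }
    domain_to_process_[domain] = pid;
    return pid;
  }

 private:
  // Stand-in for an actual fork/exec of a renderer process.
  ProcessId SpawnRenderer() { return next_fake_pid_++; }

  std::unordered_map<std::string, ProcessId> domain_to_process_;
  std::vector<ProcessId> processes_;
  std::size_t next_reuse_ = 0;
  ProcessId next_fake_pid_ = 100;
};

int main() {
  RendererPool pool;
  std::cout << pool.ProcessForDomain("example.com") << "\n";  // new process
  std::cout << pool.ProcessForDomain("example.com") << "\n";  // same process
  std::cout << pool.ProcessForDomain("another.org") << "\n";  // new process
}
```

The point of the cap is visible right in the structure: once the vector of processes is full, new domains no longer cost a new process, only a map entry.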

The reason for my skepticism is the very basic observation, familiar to most computer users, that more processes slow down the system. Any system analyst or sys-admin will tell you the same thing when you complain about your computer being slow. From what I know, this is mainly because of increased memory consumption and the paging that may happen as processes are moved between main memory and swap. Scheduling may also take a hit as there are more processes. Apart from this, as I understand it, the kernel always carries the overhead of maintaining bookkeeping information for every process.

Considering these things, for a user like me who routinely keeps close to 20 tabs open and often goes up to 30, the overhead will be considerable. Creating and killing processes is an additional cost in itself, which the rough sketch below tries to give a feel for.
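Here is a minimal, hypothetical way to measure just the creation-and-teardown part of that cost on a POSIX system, using plain fork()/waitpid(). The tab count is an arbitrary number taken from my own usage above, and real renderer processes would of course do far more work than exiting immediately.

```cpp
// Rough, hypothetical measurement of process creation/teardown cost
// on a POSIX system. Children exit immediately, so this times only
// the fork/reap overhead, not any real renderer work.
#include <sys/wait.h>
#include <unistd.h>

#include <chrono>
#include <iostream>

int main() {
  constexpr int kTabs = 30;  // arbitrary: the "30 tabs" figure above
  auto start = std::chrono::steady_clock::now();

  for (int i = 0; i < kTabs; ++i) {
    pid_t pid = fork();
    if (pid == 0) {
      _exit(0);  // child: exit at once; we only time create/destroy
    }
    waitpid(pid, nullptr, 0);  // parent: reap the child
  }

  auto elapsed = std::chrono::steady_clock::now() - start;
  std::cout << "Created and reaped " << kTabs << " processes in "
            << std::chrono::duration_cast<std::chrono::microseconds>(elapsed)
                   .count()
            << " us\n";
}
```

Even if each fork/exit round trip is cheap in isolation, a browser that opens and closes processes as tabs come and go pays this repeatedly, on top of the per-process memory footprint.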

Also, my task manager will list so many chrome.exe processes!! That's so very irritating for me.

When I told him about my skepticism, he said that the Google folks would surely have thought about it. I agreed with him, and now they have addressed it on their blog. In the post linked at the top, it's mentioned that the system might slow down with a lot of processes, which is why they had to put an upper limit on the process count and later resort to reuse. They call these small caveats, but I am not sure they are small enough. Let's see how things evolve.

Until this is proven small enough: Happy Single-Process browsing ;-)