Performance is generally measured by two parameters:
1) Time taken by the application
2) Memory consumed by the application
Typically, on present-day desktop systems the memory criterion is almost ruled out, unless the application is really memory-hungry, like 3D games and other high-end stuff.
But time has always been a concern. Many algorithms are developed to solve the same problem, each being a good choice under some set of conditions. Those are the high-funda things involving math: Big-O notation, exponential complexity and so on.
Apart from these there are a lot of very small things which appear trivial and insignificant but eventually make a lot of difference. I learnt about two such things at my workplace, NI, yesterday while working on my first task here. I of course cannot speak about the nature of my work and what exactly I was doing. Here are the generic things that I learnt:
1) Avoid calling a function in a loop; instead, pass the data you are iterating over to the function being called.
Suppose you have a large amount of data to be processed, and you have a helper function and an actual processing function. The most common way of accomplishing such a task is to iterate through the data in the helper function and, for every single instance of the data, call the actual processing function. In a lot of cases this can be optimized to a great extent. By iterating in the helper function and calling the actual function once per instance, we introduce a lot of function-call overhead, which eats up a significant chunk of time. If possible, and it often is, we should instead pass the whole data chunk to the actual processing function and iterate over it there, doing the necessary processing then and there as we fetch each instance of the data.
PS: By a data instance here I mean one data unit: an element, if the data chunk is an array; if it is an array of structures, then one data instance would be a structure.
This way we reduce the overhead caused by the extensive function calls, thereby improving performance.
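Here is a minimal sketch of the two approaches in C. The names (process_one, helper_per_element, process_all) and the "processing" itself are just stand-ins I made up for illustration:

#include <stdio.h>
#include <stddef.h>

/* Per-element version: the helper iterates and calls
   process_one() for every element, paying the function-call
   overhead n times. */
double process_one(double x) {
    return x * x + 1.0;
}

double helper_per_element(const double *data, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += process_one(data[i]);  /* one call per element */
    return sum;
}

/* Bulk version: the whole array is passed once and the
   processing function iterates internally -- a single call. */
double process_all(const double *data, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += data[i] * data[i] + 1.0;  /* same work, no per-element call */
    return sum;
}

int main(void) {
    double data[] = {1.0, 2.0, 3.0, 4.0};
    size_t n = sizeof data / sizeof data[0];
    printf("%f %f\n", helper_per_element(data, n), process_all(data, n));
    return 0;
}

The work done per element is identical; the only difference is whether the call overhead is paid n times or once. A compiler can sometimes inline the per-element call away, but when it cannot (say, across library boundaries), the bulk version wins.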
2) If we have an if/else block which is hit many times in a row, the code for the most probable case should go in the if block.
This one is particularly interesting. Yesterday, when my tech lead told me about it, I was really surprised that optimization can be done at this level. I liked it very much.
Consider the following code fragment:
for (some number of iterations) {
    if (aCondition) {
        .....
        .....
    } else {
        ......
        ......
    }
}
Something like the above, when compiled into machine language, would look like this:
loopLabel: check aCondition
           JMP to elseLabel if false
           code for true case (if block)
           some more code for true case
           check loop condition
           JMP to loopLabel if true
           JMP to exitLabel
elseLabel: code for false case (else block)
           some more code for false case
           check loop condition
           JMP to loopLabel if true
exitLabel: code after the loop
This is roughly how the assembly code would look.
Here the if/else block is hit repeatedly, since it is the main body of the loop. In the assembly, when the if condition turns out to be true, the code for the true case is executed and then a single JMP is made back to the loop beginning. BUT, but, in the case of else there are "2" JMP instructions: one to get to the else block and another to go back to the loop beginning. This double jump is obviously an overhead and takes more time to execute. Hence we should design the if/else in such a way that the code for the most common or most probable case sits in the if block, so that the bulk of the execution happens with only a single jump.
This might seem so trivial or so subtle that you would think the performance gain from this trick is negligible. It's not so. Believe me, the gain is really good in most cases where the if/else block is hit repeatedly.
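As a small sketch of the idea in C (the 90/10 split, the data and the function name are all made up for illustration): if roughly 90% of the elements are non-negative, testing for the non-negative case first keeps the hot path on the fall-through side of the branch.

#include <stdio.h>

#define N 1000000

/* Most elements (about 90% here) are non-negative, so that
   test goes in the if block: the common case falls through
   and pays only the single back-jump of the loop. */
long process(const int *data, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++) {
        if (data[i] >= 0) {   /* common case: single jump */
            sum += data[i];
        } else {              /* rare case: pays the double jump */
            sum -= data[i];
        }
    }
    return sum;
}

int main(void) {
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = (i % 10 == 0) ? -1 : i;  /* ~10% negative */
    printf("%ld\n", process(data, N));
    return 0;
}

Writing it the other way round, with if (data[i] < 0) first, computes the same sum but routes 90% of the iterations through the double jump.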
So stop jumping around and start improving your code. ;-)