Performance Tuning
Yesterday and today I spent a lot of time improving the performance of specific methods in our application. Tomorrow morning (I have to be at work around 5 o'clock!) we will hopefully go into production with a new release of our application, and for the first time we will use .NET Remoting to connect to other sub-applications in different AppDomains. We already used remoting for the communication between our client application and our server application.
I simplified the situation in the following images. At first, we planned to deploy our application on Server 1 and the other application on Server 2.
Remoting is used between the client and Server 1, and between Server 1 and Server 2. Because of the bad performance, we also tried deploying the other application on Server 1, and this turned out to be the best solution in terms of performance for our application.
Remoting is still used between the two different AppDomains. We were quite surprised that the penalty of the network connection between Server 1 and Server 2 (situation 1) was so significant. The other application will still be deployed on another server, but for other purposes; we only need it for data collection, which is why it can be duplicated on Server 1.
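To make the two situations concrete: because the other application sits behind a remoting endpoint, the calling code is identical in both deployments and only the channel URL changes. A minimal client-side sketch, with a hypothetical interface, port and URLs (not our actual names):

    using System;

    // Hypothetical contract exposed by the data-collection application.
    public interface IDataCollector
    {
        string GetData(int messageId);
    }

    public class DataCollectorClient
    {
        public static IDataCollector Connect(bool coLocated)
        {
            // Situation 1: the other application runs on Server 2 (cross-machine).
            string remoteUrl = "tcp://server2:8085/DataCollector";
            // Situation 2: the other application runs in a second AppDomain on Server 1.
            string localUrl = "tcp://localhost:8085/DataCollector";

            string url = coLocated ? localUrl : remoteUrl;

            // Activator.GetObject returns a transparent proxy; the remoting
            // infrastructure takes care of the channel and formatter underneath.
            return (IDataCollector)Activator.GetObject(typeof(IDataCollector), url);
        }
    }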
In terms of performance, the .NET Remoting plumbing normally provides the fastest communication between different application servers when you use the TCP channel and the binary formatter, but in situation 1 it really decreased performance: the network traffic to the other machine increased a lot, and the network became our bottleneck.
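For reference, hosting such a service over the TCP channel looks roughly like this; again a minimal sketch with hypothetical names (DataCollector, port 8085). The TcpChannel uses the binary formatter by default, so no extra formatter configuration is needed:

    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Tcp;

    // Hypothetical remotable service; remotable types derive from MarshalByRefObject.
    public class DataCollector : MarshalByRefObject
    {
        public string GetData(int messageId)
        {
            return "data for message " + messageId;
        }
    }

    public class RemotingHost
    {
        public static void Main()
        {
            // Register the TCP channel (binary formatter by default).
            ChannelServices.RegisterChannel(new TcpChannel(8085));

            // Publish the service at tcp://<host>:8085/DataCollector.
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(DataCollector),
                "DataCollector",
                WellKnownObjectMode.Singleton);

            Console.WriteLine("Remoting host running; press Enter to stop.");
            Console.ReadLine();
        }
    }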
The performance of certain method calls was also reviewed, and luckily DevPartner came to the rescue: with this excellent performance-profiling tool I was able to pinpoint some bottlenecks. DevPartner lets you really dig into the source code, where you can see how many times each line of code is executed and how long it takes.
I was able to reduce the execution time (on my local development machine) of a certain procedure from almost 400 seconds to less than 200 seconds [almost a 50% performance gain] by applying custom caching of data objects in hashtables and by sending bulk SQL statements (instead of separate SQL statements) to our database. The procedure is primarily responsible for converting data into XML messages (with a lot of data lookup) and for finally sending them to a message-handling process. In the scenario above, almost 400 messages were selected for sending. This message sending is done in a loop and becomes very data-intensive when a lot of messages are selected (everything works pretty fast when the selected message group is small). Before, all needed data was fetched inside the loop on every iteration; now some data is pre-fetched with a bulk SQL statement, and inside the loop only data that is not yet cached is fetched. This performance boost was absolutely what we needed to avoid jeopardizing our future release(s). We cannot afford a long execution time (time-out) for these important operations.
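Combined, the two optimizations look roughly like this; a minimal sketch assuming SQL Server and hypothetical table and column names (MessageData, MessageId, Payload), not our actual schema:

    using System;
    using System.Collections;
    using System.Data;
    using System.Data.SqlClient;
    using System.Text;

    public class MessageDataCache
    {
        private readonly Hashtable cache = new Hashtable();
        private readonly string connectionString;

        public MessageDataCache(string connectionString)
        {
            this.connectionString = connectionString;
        }

        // Bulk pre-fetch: one statement for all selected messages
        // instead of one statement per message. Keys are ints, so the
        // concatenated IN clause is safe in this sketch.
        public void Prefetch(int[] messageIds)
        {
            StringBuilder sql = new StringBuilder(
                "SELECT MessageId, Payload FROM MessageData WHERE MessageId IN (");
            for (int i = 0; i < messageIds.Length; i++)
            {
                if (i > 0) sql.Append(",");
                sql.Append(messageIds[i]);
            }
            sql.Append(")");

            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(sql.ToString(), conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        cache[reader.GetInt32(0)] = reader.GetString(1);
                }
            }
        }

        // Called inside the send loop: hit the cache first,
        // go to the database only for keys that are not cached yet.
        public string GetPayload(int messageId)
        {
            object cached = cache[messageId];
            if (cached != null)
                return (string)cached;

            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT Payload FROM MessageData WHERE MessageId = @id", conn))
            {
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = messageId;
                conn.Open();
                string payload = (string)cmd.ExecuteScalar();
                cache[messageId] = payload;
                return payload;
            }
        }
    }

The bulk statement turns hundreds of single-row round trips into one, and the Hashtable makes repeated lookups within the loop free; the per-key fallback keeps the behavior correct for anything the pre-fetch missed.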