Windows Azure Storage Performance Best Practices
Windows Azure storage is a core part of the Windows Azure platform, and most applications leverage it in one way or another: storing files in blobs, data in tables, or messages in queues. Those services work well out of the box, but there are some performance best practices you can apply to make your solutions even better.
To help you get there faster, I decided to share the best practices I use to achieve this.
Here is the list:
1. Turn off Nagling and Expect100 on the ServicePoint Manager
By now you might be wondering: what exactly are Nagling and Expect100? Let me help you understand them.
“The Nagle algorithm is used to reduce network traffic by buffering small packets of data and transmitting them as a single packet. This process is also referred to as "nagling"; it is widely used because it reduces the number of packets transmitted and lowers the overhead per packet.”
So, now that we understand the Nagle algorithm, should we turn it off?
Nagle is great for large messages, when you don’t care about latency but about optimizing the protocol and what is sent over the wire. For small messages, or when you want something sent immediately, the Nagle algorithm creates overhead because it delays the sending of the data. Windows Azure table and queue operations typically involve small payloads, so disabling Nagling generally reduces their latency.
As for Expect100Continue:
“When this property is set to true, 100-Continue behavior is used. Client requests that use the PUT and POST methods will add an Expect header to the request if the Expect100Continue property is true and ContentLength property is greater than zero or the SendChunked property is true. The client will expect to receive a 100-Continue response from the server to indicate that the client should send the data to be posted. This mechanism allows clients to avoid sending large amounts of data over the network when the server, based on the request headers, intends to reject the request.” from MSDN
In practice, waiting for that 100-Continue response adds an extra round trip to every PUT and POST, which hurts latency for the small requests that are typical of Azure storage.
There are two ways to do this:
// Disables Nagling and Expect100 for all endpoints (Table/Blob/Queue)
ServicePointManager.Expect100Continue = false;
ServicePointManager.UseNagleAlgorithm = false;
// Disables them for the Table endpoint only
// (account is your CloudStorageAccount instance)
var tableServicePoint = ServicePointManager.FindServicePoint(account.TableEndpoint);
tableServicePoint.UseNagleAlgorithm = false;
tableServicePoint.Expect100Continue = false;
Make sure this is done before the client opens any connection, otherwise it will have no effect on performance. In other words, apply these settings before you create and use the client objects.
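To make the ordering concrete, here is a minimal sketch of how these settings can be applied at startup, before the table client is created. It assumes the classic Windows Azure storage client library (CloudStorageAccount, CreateCloudTableClient); the class and method names around it are mine, for illustration only:

```csharp
using System.Net;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class StorageBootstrap
{
    public static CloudTableClient CreateTableClient(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);

        // 1. Tune the ServicePoint FIRST, while no connection exists yet.
        var tableServicePoint = ServicePointManager.FindServicePoint(account.TableEndpoint);
        tableServicePoint.UseNagleAlgorithm = false;
        tableServicePoint.Expect100Continue = false;

        // 2. Only now create the client; its requests pick up the tuned settings.
        return account.CreateCloudTableClient();
    }
}
```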
2. Turn off the Proxy Auto Detection
By default, proxy auto detection is on, which means every connection takes a bit longer because the runtime still needs to resolve the proxy for each request. For that reason it is important to turn it off.
To do so, make the following change in the web.config / app.config file of your solution.
<system.net>
  <defaultProxy>
    <proxy bypassonlocal="True" usesystemdefault="False" />
  </defaultProxy>
</system.net>
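If you prefer code over configuration, the same effect can be achieved by clearing the default proxy at startup — a sketch, assuming it runs before any storage request is issued:

```csharp
using System.Net;

// Clearing the default proxy skips proxy resolution entirely,
// so requests go straight to the storage endpoints.
WebRequest.DefaultWebProxy = null;
```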
3. Adjust the DefaultConnectionLimit value of the ServicePointManager class
“The DefaultConnectionLimit property sets the default maximum number of concurrent connections that the ServicePointManager object assigns to the ConnectionLimit property when creating ServicePoint objects.” from MSDN
To choose a good connection limit you first need to understand the conditions under which your application actually runs. The best way to do this is to run performance tests with several different values and then analyze the results. Keep in mind that the default is quite low (2 concurrent connections per endpoint for client applications), which is rarely enough for a busy Azure storage consumer.
ServicePointManager.DefaultConnectionLimit = 100;
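As with Nagling, the limit can also be raised for a single endpoint instead of globally — a sketch assuming the same CloudStorageAccount instance as in the earlier snippets:

```csharp
using System.Net;

// account is your CloudStorageAccount; raise the limit only for table traffic.
var tableServicePoint = ServicePointManager.FindServicePoint(account.TableEndpoint);
tableServicePoint.ConnectionLimit = 100;
```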
Hope this helps you the way it helped me.