Optimizing Proxies – Protocol Performance

The data transport protocol is crucial to a global information network like the World Wide Web.  Unfortunately HTTP/1.0 has some inherent performance issues, most of which have been addressed in version 1.1 of the protocol, and future developments are expected to improve performance further.

One issue relates to the three way handshake that TCP requires before it can establish a connection.  It is important to remember that no application data is transferred at all during this handshake phase; from the user's perspective the delay simply appears as latency in getting the initial connection established.  The three way handshake involves considerable overhead preceding data transfer and has a noticeable effect on performance, particularly on busy networks.
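You can see this cost directly by timing how long connect() takes, since it only returns once the SYN, SYN-ACK, ACK exchange has completed.  A minimal Python sketch follows; the host is an illustrative placeholder, and the address is resolved first so DNS lookup time doesn't pollute the measurement:

```python
import socket
import time

HOST = "example.com"  # illustrative placeholder host
PORT = 80

# Resolve first so the timing below covers only the TCP handshake.
addr = socket.getaddrinfo(HOST, PORT, socket.AF_INET, socket.SOCK_STREAM)[0][4]

start = time.perf_counter()
sock = socket.create_connection(addr)  # returns only after SYN, SYN-ACK, ACK
handshake_ms = (time.perf_counter() - start) * 1000

print(f"TCP handshake took {handshake_ms:.1f} ms before any HTTP data moved")
sock.close()
```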

This problem is made worse by HTTP/1.0, which makes extensive use of new connections: every request requires a new TCP connection to be established, complete with a fresh three way handshake.  This was originally intended as a performance measure, the idea being to avoid long lived connections being left dormant.  The reasoning was that, since transfers would come as small and frequent bursts, it was more efficient to establish a new connection whenever one was required.
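The pattern looks something like the following sketch, a hand-rolled HTTP/1.0 fetch over a raw socket; the host and paths are stand-ins.  Note that every call pays for a full handshake, and the server signals the end of the response by closing the connection:

```python
import socket

def fetch_http10(host: str, path: str) -> bytes:
    sock = socket.create_connection((host, 80))   # new handshake on every call
    request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    chunks = []
    while True:                                   # read until the server closes
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks)

# Three objects on one page means three connections and three handshakes.
for path in ("/", "/style.css", "/logo.png"):
    fetch_http10("example.com", path)
```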

However the web has not developed like this, and it is much more than a series of short HTML files quickly downloaded.  Instead the web is full of large documents and pages embedded with videos and images.  Add the multitude of applets, scripts and other embedded objects and it soon adds up.  What's more, each of these objects usually has its own URL and so requires a separate HTTP request.  Even if you invest in a high quality US proxy you'll find some impact on speed using HTTP/1.0, simply due to the huge number of connection requests it generates.

Modifications were made to increase perceived performance from the user's perspective.  For one, multiple simultaneous connections were allowed, letting client software such as browsers download and render several components of a page at once, so the user wasn't left waiting while each component loaded in turn.  However, although parallel connections improve performance for the individual user, they generally have a very negative impact on the network as a whole.  The underlying process is still inefficient, and allowing parallel connections does little to mitigate that.
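The browser-style workaround can be sketched as below, with illustrative URLs: several connections are opened in parallel so page components download concurrently.  Each worker still pays the full per-connection handshake cost, which is exactly the load this approach pushes onto the network:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = [
    "http://example.com/",
    "http://example.com/style.css",
    "http://example.com/logo.png",
    "http://example.com/script.js",
]

def fetch(url: str) -> int:
    with urlopen(url) as resp:        # each call uses its own TCP connection
        return len(resp.read())

# Four simultaneous connections: quicker for this one user, but four
# handshakes' worth of extra work imposed on the network.
with ThreadPoolExecutor(max_workers=4) as pool:
    sizes = list(pool.map(fetch, urls))
```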

As any network administrator knows, focussing on a single aspect of network performance is rarely a good idea and will almost never improve overall network performance.  The persistent connection feature was introduced to help solve this; it was added as a non-standard extension to HTTP/1.0 and enabled by default in HTTP/1.1.
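With a persistent connection, one TCP handshake is amortised across several requests.  A minimal sketch using Python's standard http.client, which speaks HTTP/1.1 and so keeps the connection open by default (host and paths again illustrative):

```python
import http.client

# The TCP connection is established on the first request and then reused.
conn = http.client.HTTPConnection("example.com")

for path in ("/", "/style.css", "/logo.png"):
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()   # drain the body before reusing the same socket
    print(path, resp.status, len(body))

conn.close()             # one handshake served all three requests
```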

Further Reading: Proxies Blocked by BBC Abroad
