At the moment, web pages are transferred uncompressed, as plain text, requiring little work from either the server side or the client side to display them.
In order to introduce compression into the HTTP protocol, a number of issues would have to be resolved.
First and foremost is the issue of backwards compatibility: the web has spread so far across the world that switching to compression would take a long time. Browsers would need to be programmed to handle compressed web pages, and web servers would need to be configured to compress the requested information before sending it on to the user. It would be a straightforward task for the IETF (Internet Engineering Task Force) to draw up a compression standard; it would then be up to the vendors and application writers to modify browsers and servers to support it.
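In fact, HTTP/1.1 already defines the negotiation machinery such a standard could build on: a client advertises the codings it understands in an Accept-Encoding request header, and the server labels a compressed body with a Content-Encoding header. The minimal sketch below illustrates the idea (the handler shape and names are illustrative, not taken from any particular server):

\begin{verbatim}
import gzip

def respond(request_headers: dict, body: bytes) -> tuple[dict, bytes]:
    """Compress the response body only when the client has advertised
    gzip support in its Accept-Encoding header."""
    accept = request_headers.get("Accept-Encoding", "")
    if "gzip" in accept:
        compressed = gzip.compress(body)
        return ({"Content-Encoding": "gzip",
                 "Content-Length": str(len(compressed))}, compressed)
    # Older clients simply receive the uncompressed body.
    return ({"Content-Length": str(len(body))}, body)
\end{verbatim}

Because the server falls back to the uncompressed body whenever the header is absent, old browsers continue to work unmodified, which is what makes a gradual transition possible.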
Another issue would be the load placed on the server when it is asked to compress the information. Many busy servers would not have the spare processing power to handle the extra workload. The concern applies to a much lesser extent on the client side, where decompressing a few pages at a time involves only minimal overhead.
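One rough way to gauge this trade-off is to time the compression of a single page against the bytes it saves. The sketch below is illustrative only; ``page.html'' is a hypothetical document, not a measured workload:

\begin{verbatim}
import gzip
import time

# 'page.html' is a hypothetical document standing in for a real page.
with open("page.html", "rb") as f:
    body = f.read()

start = time.perf_counter()
compressed = gzip.compress(body)
elapsed = time.perf_counter() - start

saving = 100 * (1 - len(compressed) / len(body))
print(f"original:   {len(body)} bytes")
print(f"compressed: {len(compressed)} bytes ({saving:.0f}% smaller)")
print(f"CPU time:   {elapsed * 1000:.2f} ms")
\end{verbatim}

On a busy server this per-response cost is multiplied across every concurrent request, which is the heart of the concern; each client pays only the far cheaper decompression cost for its own pages.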
In their paper ``Network Performance Effects of HTTP/1.1, CSS1 and PNG'' [17], the authors investigated the effect of introducing compression into the HTTP protocol. They found that compression yielded a 64% saving in download time and a 68% decrease in the number of packets required. Over normal TCP/IP, this brings the number of packet exchanges and the amount of data down to the level where T/TCP becomes beneficial. A strategy combining compression with T/TCP can therefore produce enormous savings in both time and bandwidth.
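A back-of-envelope calculation shows why the two techniques compose well; all figures below are assumptions chosen for illustration, not measurements from [17]. With a typical TCP payload of 1460 bytes per segment, shrinking a page by roughly two-thirds cuts it from around a dozen data segments to a handful, which is exactly the short-transaction regime in which T/TCP's elimination of the three-way handshake pays off:

\begin{verbatim}
# Back-of-envelope arithmetic (all figures assumed, not from [17]).
MSS = 1460                      # typical TCP payload per segment, bytes

page = 15_000                   # hypothetical uncompressed page size
compressed = int(page * 0.32)   # assuming a two-thirds size reduction

for label, size in (("uncompressed", page), ("compressed", compressed)):
    segments = -(-size // MSS)  # ceiling division
    print(f"{label:>12}: {size:6d} bytes -> {segments} data segments")
\end{verbatim}

Under these assumptions the page drops from 11 data segments to 4, small enough that the saved handshake round trip becomes a significant fraction of the total transfer time.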