Monday, August 6, 2012

The Road to HTTP/2

I have been working on HTTP/2.0 standardization efforts in the IETF. A lot has been happening lately, much is still in flux, and many things haven't even been started yet. I thought an update for Planet Mozilla might be a useful complement to what you will read about it in other parts of the Internet. The bar for changing a deeply entrenched architecture like HTTP/1 is very high - but the time has come to make that change.

Motivation for Change - More Than Page Load Time

HTTP has long been an abuser of TCP and the Internet. It uses mountains of separate TCP sessions that are often only a few packets long. That creates a lot of overhead and frequent stalling, because HTTP's traffic patterns interact badly with the way TCP was envisioned to be deployed. The classic TCP model pits very large FTP flows against the keystrokes of a telnet session - HTTP doesn't map well to either of those. The situation is so bad that over 2/3rds of all TCP packet losses are repaired via the slowest possible mechanism (retransmission timer expiration), and more than 1 in 20 transactions experience a loss event. That's a sign that TCP interactions are the bottleneck in web scalability.
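
To make the pattern concrete, here is a minimal sketch (in Python, with a hypothetical host and resource list) of the connection-per-transaction behavior described above - each fetch pays for its own TCP handshake, and the flow ends after just a few packets:

    # Illustrative sketch only: a naive HTTP/1.x fetch pattern.
    # The host and paths are hypothetical stand-ins.
    import socket

    HOST = "www.example.com"
    PATHS = ["/", "/style.css", "/app.js", "/logo.png"]

    for path in PATHS:
        s = socket.create_connection((HOST, 80))  # fresh 3-way handshake per resource
        s.sendall(("GET %s HTTP/1.1\r\n"
                   "Host: %s\r\n"
                   "Connection: close\r\n\r\n" % (path, HOST)).encode())
        while s.recv(4096):
            pass  # drain the response
        s.close()  # the whole flow was only a few packets long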

Indeed, beyond a certain modest amount of bandwidth, additional bandwidth barely moves the needle on HTTP performance at all - the only thing that matters is connection latency. And latency isn't getting better - if anything it is getting worse as traffic shifts to more and more mobile networks. This is the quagmire we are in - we can keep adding more bandwidth, clearer radios, and faster processors in everyone's pocket, as we've been doing, but that won't help much any more.
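
A back-of-the-envelope model makes the point - all the numbers below are hypothetical, chosen only to show the trend. Once objects are small, load time is roughly the number of serialized request rounds times the RTT, and the bandwidth term all but drops out:

    # Rough, hypothetical model: small objects over parallel HTTP/1.x connections.
    def page_load_estimate(resources, conns, rtt_s, obj_kb, mbps):
        rounds = -(-resources // conns)       # ceil(): serialized request rounds
        latency_cost = (1 + rounds) * rtt_s   # handshake plus one RTT per round
        transfer_cost = resources * obj_kb * 8 / (mbps * 1000.0)
        return latency_cost + transfer_cost

    for mbps in (2, 10, 50, 100):
        t = page_load_estimate(resources=60, conns=6, rtt_s=0.1, obj_kb=20, mbps=mbps)
        print("%5d Mbps -> ~%.2f s" % (mbps, t))
    # Past roughly 10 Mbps the total barely changes; the RTT term dominates.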

These problems have all been understood for almost 20 years, and many parties have tried to address them over time. However, "good enough" has generally carried the day, due to legitimate concerns over backward compatibility and the presence of other lower hanging fruit. We've been in an era of transport protocol stagnation for a while now. Only recently has the problem become severe enough to drive real movement at scale, with the deployment of SPDY by google.com, Chrome, Firefox, Twitter, and F5, among others. Facebook has indicated it will join the party soon, as will nginx. Many smaller sites also participate in the effort, and Opera has another browser implementation available for preview.

SPDY uses a set of time-proven techniques: primarily multiplexing many transactions onto the same socket, some compression, prioritization, and good integration with TLS. The results are strong: page load times improve, TCP interaction improves (fewer connections to manage, less dependency on RTT as a scaling factor, and better "buffer bloat" properties), and incentives are created to give users the more secure browsing experience they deserve.
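
The heart of that multiplexing is the framing layer: every frame is tagged with a stream ID, so frames from many in-flight transactions can interleave on a single TCP connection. Here is a sketch of packing a SPDY-style data frame header (the stream IDs and payloads are hypothetical; the SPDY drafts are the authoritative reference for the exact layout):

    # Sketch of SPDY-style data framing: a control bit (0 for data) plus a
    # 31-bit stream ID, then 8 bits of flags and a 24-bit payload length.
    import struct

    FLAG_FIN = 0x01  # last frame on this stream

    def data_frame(stream_id, payload, fin=False):
        flags = FLAG_FIN if fin else 0
        header = struct.pack(">II",
                             stream_id & 0x7FFFFFFF,
                             (flags << 24) | (len(payload) & 0xFFFFFF))
        return header + payload

    # Two hypothetical transactions share one socket: their frames are
    # simply written back to back instead of requiring two TCP connections.
    wire = data_frame(1, b"<html>...</html>", fin=True) + \
           data_frame(3, b"body { color: #000 }", fin=True)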

Next Steps

Based on that experience, the IETF HTTPbis working group has reached consensus to start working on HTTP/2.0 based on the SPDY work of Mike Belshe and Roberto Peon. Mike and Roberto did the initial work at Google; Mike has since left Google and is now the founder of Twist. While the standards effort will be SPDY based, that does not mean HTTP/2.0 will be backwards compatible with SPDY (it almost certainly won't be), nor that it will have the exact same feature set. But for it to be a success, the basic properties must survive - and we'll end up with a faster, more scalable, and more secure Web.

Effectively, taking this work into an open standardization forum means ceding control of the protocol revision process to the IETF and agreeing to implement the final result as long as it doesn't get screwed up too badly. That is a best-effort process, and we'll just have to participate and then see what becomes of it. I am optimistic - all the right players are in the room and want to do the right things. Just as we implemented SPDY as an experiment in the past to get data to inform its evolution, you can expect Mozilla to vigorously fight for the people of the web to have the best protocol possible, and it seems likely we will experiment with implementations of some Internet-draft level revisions of HTTP/2.0 along the way, to validate those ideas before a full standard is reached. (We'll also be deprecating each of these interim versions as they are replaced with newer ideas - we don't want to create de facto protocols by mistake.) The period of stagnant protocols is ending.

Servers should come along too - use (and update) mod_spdy, or nginx with SPDY support; or get hosting from a provider with integrated SPDY services, like CloudFlare or Akamai.
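
For nginx, enabling it is roughly a one-directive change - this is a sketch assuming the SPDY-patched nginx builds of this era, so check the syntax against the patch you actually apply:

    # Hypothetical minimal server block for a SPDY-patched nginx build.
    server {
        listen 443 ssl spdy;    # advertise SPDY alongside HTTPS via TLS NPN
        server_name www.example.com;

        ssl_certificate     /path/to/cert.pem;
        ssl_certificate_key /path/to/key.pem;
    }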

Exciting times.