First of all, I want to clarify that I will not debate the implications of HTTP/2 (at least not in this post). It is what it is, with the good and the bad… Nor do I want to debate the pros and cons of having many small resources instead of bundling them…
Instead I just want to put out there some interesting visual representations of how the two major browsers, Chromium and Firefox, handle HTTP/2 as opposed to HTTP/1.1, in the context of sites with lots of small resources, and under real-world latency.
The latency is a key “ingredient” in this comparison: with near-zero latency (achievable only on the loopback adapter, or within a small and quiet wired LAN), there are almost no discernible differences between HTTP/1.1 and HTTP/2.
For the last few days I’ve been working on a pet project of mine, kawipiko, and I was wondering if it is worth adding support for HTTP/2.
For this project I’m using Go with fasthttp; although Go has out-of-the-box support for HTTP/2, fasthttp is a far more efficient HTTP implementation, which unfortunately lacks support for HTTP/2.
In the end I’ve implemented three listeners: HTTP/1.1 (with and without TLS) leveraging fasthttp, and HTTP/2 (always over TLS) leveraging Go’s own HTTP implementation.
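To illustrate the Go side of this setup: the standard library negotiates HTTP/2 automatically (via ALPN) whenever it serves over TLS, which is why the HTTP/2 listener needs no special code. The sketch below is not kawipiko’s actual code — it uses an `httptest` server with `EnableHTTP2` as a stand-in for a real `http.ListenAndServeTLS` listener, so it can run without certificates — but it demonstrates the same auto-negotiation:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// negotiatedProto starts a TLS test server with HTTP/2 enabled (the same
// automatic negotiation a real http.ListenAndServeTLS server performs)
// and reports which protocol version the client actually ended up using.
func negotiatedProto() string {
	srv := httptest.NewUnstartedServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			// r.Proto is "HTTP/2.0" when h2 was negotiated via ALPN.
			fmt.Fprint(w, r.Proto)
		}))
	srv.EnableHTTP2 = true
	srv.StartTLS()
	defer srv.Close()

	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(negotiatedProto()) // prints "HTTP/2.0"
}
```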
I was sure that HTTP/2 would be better than HTTP/1.1 when it comes to delivering many small resources; however, I wanted to know “how much better”, especially when using a real browser as opposed to some benchmark tool.
Also, in order to be closer to real network conditions, where for such small resources latency is the main issue (as opposed to bandwidth), I’ve configured my webserver to introduce a 20ms delay for each response. (20ms is the response-first-byte-latency I experienced by using WiFi and connecting to this site, which is served through CloudFlare; i.e. usual conditions.)
Additionally, because with HTTP/1.1 (with or without TLS) a frequent performance trick was to use domain sharding, I’ve also tested this scenario with 8 shards. (Note that in the case of HTTP/2, in order to actually make sharding work, the shard domains also have to resolve to different IPs; otherwise the browser coalesces them onto a single connection.)
Just play the videos below, one for each combination of browser and protocol, and watch the patterns and completion times.
All have the same length, and start just before reloading the page. The playback speed of the videos is “real”, i.e. this is how the browser rendered the loaded images in real time.
Chromium with HTTP/1.1
Chromium with HTTP/2
Firefox with HTTP/1.1
Firefox with HTTP/2
Chromium with HTTP/1.1 (with 8 shards)
Chromium with HTTP/2 (with 8 shards)
Side-by-side (with 8 shards)
Makes you ponder…
Looking at the above videos, it is clear that using HTTP/2 has a real impact on the total loading time; however, more so for Chromium than for Firefox…
Also, when increasing the latency from 20ms to 50ms, the difference between the two protocols becomes even more evident: for HTTP/1.1 the total time increases linearly with the latency, whereas for HTTP/2 it grows only by a small constant multiple of the latency.
However, looking at the Chromium video, especially the one for HTTP/2, I wonder what they are trying to achieve with that loading pattern. It seems to be deterministic, as across many repeated reloads it keeps exhibiting the same behavior…
In fact, CloudFlare has a nice article that tries to analyze this behavior in depth, though from another perspective. Perhaps a more focused article is the one by Patrick Meenan.
As many others have observed before, HTTP/2 has clear benefits with regard to scenarios with many small resources, and non-LAN latencies.
However, given the domain sharding trick, I think one can still push the HTTP/1.1 limits without having to resort to HTTP/2… (Granted, there are other related tricks that have to be leveraged – like DNS pre-fetching and HTTP pre-connecting – in order to eliminate the first-load latencies, but it is doable…)
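Those pre-connecting hints can be delivered either as HTML `<link rel="preconnect">` tags or as `Link` response headers. A minimal stdlib sketch of the header variant — the shard origins are illustrative, not the ones from my tests:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// addPreconnectHints decorates a handler with one
// `Link: <origin>; rel=preconnect` response header per shard origin, so the
// browser can resolve, connect, and perform the TLS handshake to each shard
// while it is still parsing the page.
func addPreconnectHints(next http.Handler, origins ...string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for _, o := range origins {
			w.Header().Add("Link", fmt.Sprintf("<%s>; rel=preconnect", o))
		}
		next.ServeHTTP(w, r)
	})
}

// preconnectHeaders performs one request against a test server and returns
// the emitted Link headers.
func preconnectHeaders(origins ...string) []string {
	srv := httptest.NewServer(addPreconnectHints(
		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}),
		origins...))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	return resp.Header.Values("Link")
}

func main() {
	for _, l := range preconnectHeaders("https://shard-0.example.com", "https://shard-1.example.com") {
		fmt.Println(l)
	}
}
```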