A gentle introduction to HTTP/2

Let’s start with HTTP/0.9

The original HTTP, as defined in 1991, was developed by Tim Berners-Lee, the inventor of the web. It was a text-based request and response protocol: you could only do an HTTP GET, and the only response type was HTML. It was used solely for sharing documents, mostly about physics, and the connection was closed after each request.

Later on, HTTP/0.9 was extended into HTTP/1.0: request and response headers were added, and you could now request images, text files, CSS and other content types.

In 1999, HTTP/1.1 showed up. Persistent connections (keep-alive) were introduced, and chunked transfer encoding and the Host header were added. With the Host header, it became possible to host multiple sites on a single IP address. It was a huge SUCCESS!

Problems with HTTP/1.1:

  • It wasn’t designed for today’s web pages
    • 100+ HTTP requests, and 2MB+ page size.
  • Requires multiple connections
  • Lack of prioritization
  • Verbose headers
  • Head of line blocking

Let’s break it down.

Multiple connections:

When a page requires 100+ HTTP requests, the limit on the number of connections a browser can open per host becomes a problem; most browsers support only 6 simultaneous connections per host. Connections take time to establish and to become efficient, and there is a three-way handshake every time a new connection is needed.

Before HTTP/1.1, each resource required its own three-way handshake, which was very wasteful.

HTTP/1.1 introduced the Connection header, with Connection: keep-alive as the default behavior. With this, the repeated three-way handshakes were eliminated and everything could be done over a single TCP connection.
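To make this concrete, here is a minimal Go sketch of the same idea: a client whose transport keeps connections alive and, like most browsers, allows at most 6 connections per host. The URL and the limits are only illustrative.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// A transport that keeps connections alive (the HTTP/1.1 default) and,
	// like a browser, opens at most 6 connections to a single host.
	transport := &http.Transport{
		MaxConnsPerHost:     6,
		MaxIdleConnsPerHost: 6,
		IdleConnTimeout:     90 * time.Second,
	}
	client := &http.Client{Transport: transport}

	// Two sequential requests to the same host: the second one reuses the
	// TCP connection opened for the first, so there is no extra handshake.
	for i := 0; i < 2; i++ {
		resp, err := client.Get("https://example.com/")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
		resp.Body.Close()
		fmt.Println("request", i+1, "status:", resp.Status)
	}
}
```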

TCP is a reliable protocol: for each packet sent, we receive an acknowledgement.

This introduces the head-of-line blocking problem: if packet 2 is lost, all subsequent packets are held up until it is retransmitted.

There are other TCP mechanisms at play too, such as slow start and the sliding window, which adjust the transfer rate to the conditions of the network. This is called flow control.

Lack of prioritization:

Priority is decided by the browser, not by the developer or the application. There is no way to specify the order of responses; browsers have to decide how to best use their connections and in what order to fetch resources. There is no prioritization built into HTTP/1.1.

Verbose headers:

There is no header compression. You can use GZIP to compress the content, but headers such as Cookie, User-Agent, Referer and others are not compressed. HTTP is stateless by default, so Cookies, the User-Agent and other headers are sent to the server with every request, which is inefficient, especially for very high volume sites.
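A rough back-of-the-envelope calculation in Go shows why this hurts. The per-request header size and request count below are assumptions, not measurements:

```go
package main

import "fmt"

func main() {
	// Illustrative numbers: a typical cookie plus user agent and other
	// request headers can easily reach several hundred bytes per request.
	const headerBytesPerRequest = 800 // assumption, varies per site
	const requestsPerPage = 100       // the "100+ requests" figure from above

	uncompressed := headerBytesPerRequest * requestsPerPage
	fmt.Printf("Header bytes sent for one page load: ~%d KB\n", uncompressed/1024)
	// With HTTP/1.1 every one of those bytes crosses the wire on every
	// request, because only the body can be gzip-compressed, not the headers.
}
```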

Head of line blocking:

Once you create a connection and send a request for a resource, that connection is dedicated to that request; until the response comes back, you can’t use that connection for anything else. For example, if you need to get 30 resources from a host, you can only fetch 6 at a time: once you request the first 6 resources, the others must wait for those requests to finish. So HTTP/1.1 works in a serial way (serial request and response). This is called head-of-line blocking.
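Here is a small Go sketch that models this serial behavior with a counting semaphore: 30 simulated requests, at most 6 in flight at a time. The 100ms per request is a made-up round trip, not a real measurement.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	// Model of the HTTP/1.1 behavior described above: 30 resources, but only
	// 6 connections per host, so requests go out in waves of 6.
	const resources = 30
	const maxConnections = 6

	sem := make(chan struct{}, maxConnections) // counting semaphore with 6 slots
	var wg sync.WaitGroup

	start := time.Now()
	for i := 0; i < resources; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}                  // wait for a free "connection"
			defer func() { <-sem }()           // release it when the response is in
			time.Sleep(100 * time.Millisecond) // simulated request/response
		}()
	}
	wg.Wait()

	// With 30 resources and 6 connections this takes about 5 round trips,
	// roughly 500ms, even though a single request only needs 100ms.
	fmt.Println("total:", time.Since(start).Round(10*time.Millisecond))
}
```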

These are the problems with HTTP/1.1. Beyond the protocol itself, there are two other factors to consider: bandwidth and latency.

Bandwidth is measured in bits per second, and it is relatively easy to add more of it to a system. It is usually referred to as network bandwidth, data bandwidth or digital bandwidth.

Latency is the time interval between cause and effect in a system. On the internet, latency is typically the interval between request and response. It is measured in milliseconds and is bounded by distance and the speed of light, so there is not much you can do about it. You can use a CDN to reduce latency, but that comes with a price.

For more information about latency and its impact, read “It’s the Latency, Stupid” by Stuart Cheshire.

Increasing bandwidth helps improve web performance and page load times, but only up to a point. Fixing latency problems, on the other hand, improves performance almost linearly. You can read this post for more information about bottlenecks; the tests there indicate that improving latency is more effective than improving bandwidth when it comes to web page optimization.

If we compare the internet to a highway, bandwidth is the number of lanes, and latency is the time it takes to travel a given distance, which depends on traffic, the speed limit and other factors.
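The following Go sketch puts rough numbers on that comparison. Every figure here is an assumption (page size, round-trip time, request count), but it shows how transfer time shrinks as bandwidth grows while the latency cost stays the same:

```go
package main

import "fmt"

func main() {
	// A very rough model of one page load; every number here is an assumption.
	const (
		requests    = 100     // resources on the page
		pageBytes   = 2 << 20 // ~2 MB page size
		rttMs       = 50.0    // round-trip time in milliseconds
		connections = 6       // HTTP/1.1 connections per host
	)

	for _, mbps := range []float64{5, 50, 500} {
		transferMs := float64(pageBytes*8) / (mbps * 1e6) * 1000
		// Requests go out in waves of 6, so latency is paid ceil(100/6) times.
		latencyMs := float64((requests+connections-1)/connections) * rttMs
		fmt.Printf("%5.0f Mbps: transfer ~%4.0f ms, latency cost ~%4.0f ms\n",
			mbps, transferMs, latencyMs)
	}
	// Past a certain point more bandwidth barely changes the total, while the
	// latency cost stays exactly the same; cutting round trips is what helps.
}
```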

Goals of HTTP/2

  • Minimize impact of Latency
  • Avoid head of line blocking
  • Use single connection per host
  • Backwards compatible
    • Fall back to HTTP/1.1
    • HTTP/1.1 methods, status, headers still work.

The biggest win of HTTP/2 is reducing latency by introducing full request and response multiplexing.

HTTP/2 Major Features

  • Binary framing layer
    • Not a text-based protocol anymore; binary protocols are easier to parse and more robust.
  • Resource prioritization
  • Single TCP connection
    • Fully multiplexed
    • Able to send multiple requests in parallel over a single TCP connection.
  • Header compression
    • It uses HPACK to reduce overhead.
  • Server push

HTTP/2 introduces some more improvements; for more details, see the HTTP/2 spec, RFC 7540.

The binary framing layer doesn’t use text at all. Today we can trace and debug HTTP/1.1 with plain-text tools, so this change will require new tooling for debugging.
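For the curious, the framing itself is simple: every HTTP/2 frame starts with a 9-byte header (a 24-bit length, 8-bit type, 8-bit flags and a 31-bit stream identifier, per RFC 7540). A small Go sketch of parsing it, using a made-up HEADERS frame:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameHeader mirrors the 9-byte HTTP/2 frame header from RFC 7540.
type frameHeader struct {
	Length   uint32
	Type     uint8
	Flags    uint8
	StreamID uint32
}

func parseFrameHeader(b [9]byte) frameHeader {
	return frameHeader{
		Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
		Type:     b[3],
		Flags:    b[4],
		StreamID: binary.BigEndian.Uint32(b[5:9]) & 0x7fffffff, // clear the reserved bit
	}
}

func main() {
	// A made-up HEADERS frame header: 13-byte payload, type 0x1 (HEADERS),
	// END_HEADERS flag (0x4), stream 1.
	raw := [9]byte{0x00, 0x00, 0x0d, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01}
	fmt.Printf("%+v\n", parseFrameHeader(raw))
}
```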

Resource prioritization allows browsers and developers to prioritize the resources being requested, and priorities can be changed at any time based on the resources or the application. So far so good. If there is a problem with a high-priority resource, the browser can intervene and request the lower-priority resources in the meantime.
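At the protocol level, prioritization is expressed with PRIORITY frames carrying a stream dependency and a weight. Here is a sketch using the golang.org/x/net/http2 package; the stream numbers and the weight are made up.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	// Write a PRIORITY frame for stream 5, making it depend on stream 3
	// with a weight of 219 (weights are encoded zero-indexed on the wire).
	var buf bytes.Buffer
	framer := http2.NewFramer(&buf, nil)

	err := framer.WritePriority(5, http2.PriorityParam{
		StreamDep: 3,
		Weight:    219,
		Exclusive: false,
	})
	if err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Printf("PRIORITY frame on the wire: % x\n", buf.Bytes())
}
```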

The most important change in HTTP/2 is the single TCP connection per host, which solves a lot of problems. HTTP/2 multiplexes request and response frames from various streams over that one connection. Far fewer resources are used: there are no repeated three-way handshakes, no repeated TCP slow start, and no HTTP-level head-of-line blocking.
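A quick way to see multiplexing from Go: the standard client negotiates HTTP/2 for https:// URLs, so concurrent requests to the same host share a single connection. The URL below is a placeholder for any HTTP/2-capable host.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	// Ten concurrent requests to one host: with HTTP/2 they become streams
	// multiplexed over a single TCP connection instead of six separate ones.
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get("https://example.com/")
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body)
			fmt.Println("protocol:", resp.Proto) // prints "HTTP/2.0" if the server negotiated it
		}()
	}
	wg.Wait()
}
```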

 

When a user requests a web page, headers such as Cookie, User-Agent and others are sent with every single request. It doesn’t make much sense to resend the User-Agent every time. To solve this problem, HPACK’s dynamic table was introduced.

When you send a request to a host, the headers below are sent along. On subsequent requests, the full values are not sent again; instead, short compressed index references are transmitted. If a value hasn’t changed, for example the User-Agent, nothing new needs to be sent at all.

HTTP headers

Header       Original value                     Compression value
Method       GET                                2
Scheme       HTTP                               6
User-agent   Mozilla Firefox/Windows/10/Blah    34
Host         Yahoo                              20
Cookie       Blah                               89
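You can see this effect with the golang.org/x/net/http2/hpack package: encode the same (made-up) headers twice with one encoder, and the second pass comes out dramatically smaller thanks to the dynamic table.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	// The header values are made up; the point is the size difference
	// between the first and the second encoding of the same header set.
	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":scheme", Value: "https"},
		{Name: "user-agent", Value: "Mozilla Firefox/Windows/10/Blah"},
		{Name: "host", Value: "yahoo.com"},
		{Name: "cookie", Value: "Blah"},
	}

	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	for round := 1; round <= 2; round++ {
		buf.Reset()
		for _, h := range headers {
			if err := enc.WriteField(h); err != nil {
				fmt.Println("encode failed:", err)
				return
			}
		}
		fmt.Printf("request %d: headers encode to %d bytes\n", round, buf.Len())
	}
	// The second request is far smaller: the dynamic table built during the
	// first request lets the encoder send short index references instead of
	// the full user-agent, host and cookie values.
}
```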

 

Server push is still experimental today: servers try to predict what will be requested next and push it to the client before it is asked for.
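In Go’s net/http, push is exposed through the http.Pusher interface. A minimal handler sketch; the paths and certificate files are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if pusher, ok := w.(http.Pusher); ok {
			// Push is best effort: it only works over HTTP/2, and the
			// client may refuse or already have the resource cached.
			if err := pusher.Push("/static/style.css", nil); err != nil {
				log.Println("push failed:", err)
			}
		}
		fmt.Fprintln(w, `<link rel="stylesheet" href="/static/style.css"><p>hello</p>`)
	})

	// ListenAndServeTLS enables HTTP/2 automatically in net/http.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```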

ALPN (Application-Layer Protocol Negotiation) is needed for HTTP/2.

As part of the spec, HTTP/2 doesn’t require HTTPS; however, if you do use HTTPS, you need TLS 1.2+ and must avoid certain blacklisted cipher suites.
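In Go this negotiation is mostly automatic, but spelling out the server setup shows where the pieces fit: ALPN advertises "h2" via NextProtos, and TLS 1.2 is set as the minimum version. Certificate paths are placeholders.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,           // HTTP/2 over TLS requires TLS 1.2+
			NextProtos: []string{"h2", "http/1.1"}, // ALPN: offer HTTP/2, fall back to 1.1
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "negotiated protocol:", r.Proto)
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```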

Can you use HTTP/2?

With HTTP/2 we have a single multiplexed connection, which means some performance optimization techniques, such as CSS sprites, JS bundling and domain sharding, become more or less obsolete. Since connections are cheaper and more efficient with HTTP/2, resources can be cached, modified and requested independently; there are of course trade-offs, and you need to decide how to handle them.

However, I think web performance basics like making fewer HTTP requests and sending as little data as infrequently as possible still apply.

While many major internet companies are already using HTTP/2, it is still not widely adopted. I assume adoption will take a while, and maturity of this exciting new protocol will come along with it.

Here are some demos showing the difference between HTTP/1.1 and HTTP/2.

Akamai demo

CDN 77 demo

 
