HTTP/2 Multiplexing vs Old School Domain Sharding - Free Page Speed Improvements

Today I am going to teach you why Domain Sharding is an outdated web performance technique.

The best part is this technique won't cost you a dime, is simple to turn on, and can have a big impact on your page's load time.

Recently Brian Dean posted a nice article on performing an SEO audit. He covered many good points, including a token section on performance.

Like most online marketers, Brian pointed out optimizing images, using a fast server... simple things that are tangible.

The sad thing is these common points have minimal impact on your page rendering profile.

Seriously, we're talking less than 10%.

Brian admits he is not a technical person and 'wisely' avoided going too deep in this area.

Well, I am technical and love going deep into web performance optimization, so I am about to dive into the topic of HTTP/1.1 domain sharding vs HTTP/2.

I chose this topic because Manish Dhiman asked in the article comments about sharding images across sub-domains to improve performance, a technique known as domain sharding.


OMG 'Domain Sharding'!

This sounds technical... but it's not that complicated.

Let's learn about this obsolete technique and what you should do today instead.

What is Domain Sharding?

In 2007 I grabbed a book that changed my developer trajectory forever, 'High Performance Web Sites' by Steve Souders.

I reviewed my signed (yeah, I am that cool) copy while writing this article. The book's content still holds up today, so it remains a great reference on how to architect fast websites.

At the time Steve worked at Yahoo and had a team examine the top 10 web sites to analyze what they did to render fast. From that work came the old Firefox YSlow plugin and eventually the HTTP Archive.

At the time a best practice was to segment page/site assets across different domains. This was a hack to 'trick' browsers into opening more parallel connections.

When you dive into the HTTP specification you learn user agents/clients (that would be the browser to normal folks) should only open 1 or 2 connections to the server. This causes a problem because most pages require numerous assets to load.

Backlinko needs well over 300...I'll address that issue in another post.

This begs the question, 'how many resources will a browser download from a given domain at a time?'

Because most pages are, and I am going to be brutally honest here, poorly designed, browser vendors (Microsoft, Google, Mozilla, Opera, etc.) decided to break the spec and open up to 6 (I think one might have even gone to 8) simultaneous connections per domain. This was not always true, though; many mobile browsers stuck to the specification because cellular connections are weak and unreliable.

One reason the HTTP/1.1 specification limited connections to 2 was to limit server congestion. Remember, servers were not as sophisticated back then and did not have the beefy hardware we enjoy today.

What I did back in the day was create multiple sites, one for HTML and API calls and another for images and static files (CSS, JavaScript, etc). This gave me 12 connections and helped browsers load assets a little faster.
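
To make that concrete, here is a minimal sketch of how those shard URLs were typically generated (the static1/static2 hostnames are made up for illustration). Hashing the path means the same file always maps to the same sub-domain, so browser caches stay warm:

```ts
// Hypothetical shard hostnames - the classic HTTP/1.1 hack.
const shards = ['static1.example.com', 'static2.example.com'];

// Deterministically map an asset path to a shard so every page
// references the same URL for the same file (keeps caches warm).
function shardUrl(path: string): string {
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return `https://${shards[hash % shards.length]}${path}`;
}

console.log(shardUrl('/img/hero.png')); // always the same shard for this file
```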

This technique does not improve actual rendering times (that is another blog post, so much to cover here 😋), just how fast responses can be delivered across the network.

Yeah, problem solved...sort of.

Remember this is a hack perpetuated by both browser vendors and webmasters.

Head of Line Blocking

Caution: Scary Geek Term Ahead πŸ‘¨πŸΌβ€πŸ’»

Head of line blocking happens because HTTP messages are broken down into small packets, and in HTTP/1.1 each response must arrive complete and in order before the connection can deliver the next one. HTTP/2 interleaves packets from multiple responses and reassembles them on arrival, eliminating this blocking effect.

When you make a network request over TCP the communication is broken into small chunks of data, called packets. This allows large files to travel around networks, even when a file is far bigger than the pipe can carry at once.

Video provides a good example. These files are large.

When I record a 5 minute video on my phone I can easily create a file over 1GB. Even good broadband connections cap out at a few hundred megabits per second. Instead these files are sliced apart into 'byte sized' chunks, as small as 16KB.

I won't go into the details of TCP slow start here, but just trust the process.

HTTP was originally designed to require these bytes be received in order to make client-side assembly easier. This meant the client had to wait for an entire response to arrive before making the next request on that connection.

HTTP/2 changes this behavior because it uses a binary framing format that lets packets from one response be reassembled whenever they all arrive, without impeding other requests from starting.

To the consumer this means your web pages can assemble and render faster, which is what we all want.

Hacks are not permanent solutions!

HTTP/2 To the Rescue

Anytime geeks get together and draft specifications we write rules to cover the known universe. Inevitably the known universe changes, so we update the specs and make new versions.

Google released a new protocol called SPDY back in 2009, and by 2012 it had become the basis for an effort to update the HTTP specification.

SPDY was a prototype of what evolved into HTTP/2, the latest version of HTTP. To sum up the new version: it fixed much of what was 'wrong' with HTTP/1.1, including the limitations that made domain sharding necessary.

Let's focus on request multiplexing (caution big word in play).

A major HTTP/2 feature is the ability to multiplex responses over a single connection.

Unlike HTTP/1.1, a single connection is created between the client and server. This reduces the overhead associated with creating new connections.

Trust me, the new connection process is expensive.
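
You can watch multiplexing happen with a few lines of code. This is a minimal sketch using Node's built-in http2 module, with example.com standing in for any HTTP/2-enabled server; one connection is opened and several request streams ride on it concurrently:

```ts
import { connect } from 'node:http2';

// One TCP + TLS connection to the server...
// (example.com is a placeholder - point this at any HTTP/2 server)
const session = connect('https://example.com');

// ...carrying several concurrent request streams.
const paths = ['/styles.css', '/app.js', '/logo.png'];

let remaining = paths.length;
for (const path of paths) {
  const stream = session.request({ ':path': path });
  stream.on('response', (headers) => {
    console.log(path, '->', headers[':status']);
  });
  stream.resume(); // drain the response body
  stream.on('end', () => {
    if (--remaining === 0) session.close(); // close once all streams finish
  });
}
```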

In a 2013 blog post, Souders revisits the need to domain shard. If you read the article, remember this is 2013, before HTTP/2 was specified and SPDY was the emerging idea.

In his summary he concluded the following:

"There’s no need for domain sharding in the world of HTTP 2.0 across all popular browsers."

That's because HTTP/2 can return multiple files at the same time using a single connection.

Let's look at Backlinko's waterfall:

Backlinko Network Waterfall

OK, let's get past the number of requests (don't make that many requests, by the way) and look at how vertical the request timeline is.

Each request to Backlinko's CDN starts at the same time. Optin Monster's requests are also multiplexed. You can also see several requests to other domains that do not have HTTP/2 turned on; those requests are staggered (displayed as a slant).

These staggered requests are serial, which means each request has to wait for the previous request to complete before it can be started. This takes time, lots of time.
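
You can run this kind of audit on your own page from the browser console using the Resource Timing API. Each entry's nextHopProtocol reports 'h2' for multiplexed HTTP/2 requests and 'http/1.1' for the stragglers:

```ts
// Group this page's requests by the protocol they were fetched over.
const byProtocol: Record<string, number> = {};
for (const entry of performance.getEntriesByType('resource') as PerformanceResourceTiming[]) {
  // Cross-origin entries may report '' unless the server sends Timing-Allow-Origin.
  const proto = entry.nextHopProtocol || 'unknown';
  byProtocol[proto] = (byProtocol[proto] ?? 0) + 1;
}
console.table(byProtocol); // e.g. { h2: 120, 'http/1.1': 14 }
```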

There is a tradeoff to sharding: creating connections.

Creating a new HTTP connection is 'expensive'. It takes time to resolve DNS, complete the TCP handshake, negotiate TLS, etc. Each of these steps costs at least one RTT (round trip time) before the first byte arrives.

This is why research back in the pre-HTTP/2 days recommended sharding across no more than 2 domains. Beyond that, the benefits were erased by the extra overhead.
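
Those same Resource Timing entries let you put a number on that overhead. This quick console snippet totals the DNS, TCP, and TLS setup time each request paid; on a sharded site you pay that price again for every extra domain:

```ts
// Log the connection setup cost per resource on the current page.
for (const e of performance.getEntriesByType('resource') as PerformanceResourceTiming[]) {
  const dns = e.domainLookupEnd - e.domainLookupStart;
  const tcpAndTls = e.connectEnd - e.connectStart; // includes the TLS handshake on HTTPS
  if (dns + tcpAndTls > 0) {
    // Only resources that opened a fresh connection report a non-zero cost.
    console.log(e.name, `setup: ${(dns + tcpAndTls).toFixed(1)}ms`);
  }
}
```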

There are many other benefits of HTTP/2, but to me multiplexing is the best feature.

So how do you utilize HTTP/2?

Using HTTP/2

Most web servers now support HTTP/2. But honestly, you should not need to configure the server yourself; you should use a CDN for your site.

"HTTP/2 is a protocol designed for low-latency transport of content over the World Wide Web"

Ilya Grigorik

Any Content Delivery Network provider should have HTTP/2 as an option. I use AWS CloudFront and it is literally a checkbox to toggle on or off (on is always the right choice in my opinion).

I can't emphasize using a CDN enough. When High Performance Web Sites was first published, a CDN was recommended. I investigated the option and quickly backed away because CDNs were expensive. Today CDN services are a commodity; I pay a few cents each month for CloudFront.

Setting up a CDN is a bit technical, so maybe another article??

In general, your existing server should serve as an origin and the CDN distributes your assets around the globe for you.

FYI, Brian does not need to spend $200/month on a server. I could not believe he spends that much; I was not even spending that much 10 years ago, when I still had physical servers in a data center #oldschool.

Don't worry, legacy browsers that no one should be using in the first place can still fetch from your server. When a client does not support HTTP/2, the connection falls back to HTTP/1.1. So, you are safe.
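
If you do run your own HTTP/2 origin instead of flipping a CDN checkbox, here is a minimal sketch using Node's built-in http2 module (the certificate file names are placeholders). The allowHTTP1 option is what provides that graceful fallback, negotiated during the TLS handshake:

```ts
import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const server = createSecureServer({
  // Placeholder certificate files - swap in your real TLS key and cert.
  key: readFileSync('server-key.pem'),
  cert: readFileSync('server-cert.pem'),
  allowHTTP1: true, // legacy clients fall back to HTTP/1.1 on the same port
});

server.on('request', (req, res) => {
  // req.httpVersion is '2.0' for HTTP/2 clients and '1.1' for fallbacks.
  res.end(`Hello over HTTP/${req.httpVersion}\n`);
});

server.listen(8443);
```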

HTTP/2 does require HTTPS in practice (browsers only support it over TLS), which any serious site uses anyway, so this should not be an issue either.

Summary

I am glad Manish asked Brian the domain sharding question. I had been meaning to share how to use HTTP/2 multiplexing for a while now.

To answer Manish's question: no, don't use sub-domains or multiple domains anymore. Instead use a server, really a CDN, with HTTP/2 support on a single domain. Your page assets will load much faster, and the time and cost required to maintain multiple servers and domains are eliminated.

There are 1000s of small items you can address to improve your site’s page rendering profile. Improving each one will improve your user experience, which leads to better search rankings and more conversions.

HTTP/2 is a simple web performance optimization technique that all web sites should implement today. Now you can consolidate all your progressive web application features under a single domain, making it easier to manage your website.

I hope this knowledge helps you make your site faster.

Now, it’s time to look to the next article inspired by Brian's post: how to fix Backlinko's 18+ second load time!!!
