What seems like decades ago, Steve Souders defined the web performance golden rule: 80% of a web page's performance issues are due to client-side architecture, and only 20% to server-side concerns. This was a key tenet of High Performance Web Sites, a book I consider one of the most important web development books ever.
However, I still see the vast majority of developers fail to understand why this rule is true; they still focus their efforts on server-side optimizations. Don't get me wrong, server-side performance is important, but it is a different environment than the client, and a much more controllable one. And don't worry, I have some content on optimizing the server side coming soon.
Let me quote Steve from the High Performance Web Sites book:
"there is more potential for improvement in focusing on the frontend. If we were able to cut backend response times in half, the end user response time would decrease only 5-10% overall. If, instead, we reduce the frontend performance by half, we would reduce overall response times by 40-45%."
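A quick back-of-the-envelope calculation shows where Steve's percentages come from. The numbers below are illustrative values I picked to match the 80/20 split, not measurements from any real site:

```javascript
// Illustrative totals: a 10-second page load split per the 80/20 rule.
const total = 10;    // seconds, overall load time
const backend = 2;   // 20% spent on the server
const frontend = 8;  // 80% spent in the client

// Cutting backend time in half saves 1 second: a 10% overall win.
const backendWin = (backend / 2) / total;

// Cutting frontend time in half saves 4 seconds: a 40% overall win.
const frontendWin = (frontend / 2) / total;

console.log(backendWin, frontendWin); // 0.1 0.4
```

Same effort, four times the payoff, which is exactly the point of the quote above.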
So the real money is in improving client-side architecture. Again, we control the backend; heck, just throw more servers at it if that helps. If nothing else, you can scale faster while you find ways to optimize the backend. Plus, as Steve points out in his book, backend projects take months where frontend optimizations take hours. A much bigger return on investment.
A few years ago Steve also revisited the Golden Rule, measuring not only the top 10 sites but also sites around the 10,000th most popular position. Here is a breakdown of the top 10 sites and how much time they spent on backend versus frontend work:
He was not satisfied with the top 10 because, as he says, those sites are highly optimized. So he dropped down to the 10,000th position to get a more typical sample. I think this is important because I routinely evaluate sites, and they are rarely close to the averages represented in Steve's HTTP Archive. Evaluating these more average sites, you see the ratio move heavily toward the client side:
Since Steve published his first book the world has shifted from a desktop-centric client experience to a mobile-first experience. Today close to 60% of all web interactions happen on mobile devices. Thanks to mobile, the golden rule ratio has shifted from 80/20 to more like 95/5. Why? Cellular connections are slower, and mobile devices do not possess the CPU and memory offered by our i7-powered development boxes. Simply put, any client-side latency is magnified in today's mobile-first world. Develop with this as your primary lens and your overall application is improved.
What is the Web Performance Golden Rule?
While doing research for Yahoo, Steve analyzed the top sites across the web; we are talking hundreds of thousands of sites. He noticed a trend: the majority of the time spent rendering a page came after the server, in the frontend. The average URL's render time could be broken into frontend and backend times, and on average roughly 80% of the time was spent on frontend rendering tasks, while the remaining 20% belonged to the server's processes. This is the Web Performance Golden Rule: 80% of your performance issues are on the client machine, not your server. This means you should architect and optimize your frontend more than the backend. OK, that did not sound right; you should still optimize your backend code, but your bigger return is investing in frontend optimizations.
To see how the ratio affects your site you need to profile it with a tool like WebPageTest, Fiddler, or your browser's network tool. If you want to get extra fancy you can use the Navigation Timing API. Basically, you should use a tool that produces a waterfall. If you have read my blog or watched some of my perf audits, you know what a waterfall is.
To determine how much time your site needs on the server, focus on the first request, the one requesting the page's markup. The faster the first request completes, the faster your server processes are. The most accurate measure of server time is known as Time to First Byte (TTFB).
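If you want to measure TTFB yourself, the Navigation Timing API exposes the raw timestamps. Here is a minimal sketch; the `ttfb` helper is my own naming, not a browser API, while `requestStart` and `responseStart` are standard fields on a navigation timing entry:

```javascript
// TTFB is simply the gap between sending the request and
// receiving the first byte of the response.
function ttfb(entry) {
  return entry.responseStart - entry.requestStart;
}

// In a browser console you would feed it the real navigation entry:
// const [nav] = performance.getEntriesByType('navigation');
// console.log(`TTFB: ${ttfb(nav)}ms`);

// The same math works on any timing-shaped object:
console.log(ttfb({ requestStart: 120, responseStart: 156 })); // 36
```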
A typical waterfall row includes values for the different loading phases. The first time an asset is requested from a domain, the client needs to resolve that domain's DNS. This lookup does not involve the server's response, so you want the time after this point; if the domain has already been resolved, the step is skipped entirely. You may also see SSL negotiation. I do not count that time either, because it does not invoke the web server.
What does matter is the time it takes to open a connection, for the server to start its response, and for the data to cross the wire. Beyond this point is the time required for the browser to evaluate the response. This is particularly important for scripts and CSS because they are blocking operations. As Etsy and Tim Kadlec showed us last year, the larger a script is, the longer it takes to evaluate, and thus the longer it takes to render the content.
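You can reconstruct these waterfall phases from a Resource Timing entry. A sketch, with standard timing fields from the Resource Timing spec; the `breakdown` helper name and the sample numbers are mine:

```javascript
// Split a (navigation or resource) timing entry into the
// waterfall phases discussed above.
function breakdown(e) {
  return {
    dns:      e.domainLookupEnd - e.domainLookupStart, // 0 when already resolved
    connect:  e.connectEnd - e.connectStart,           // includes SSL when present
    ttfb:     e.responseStart - e.requestStart,        // server "think" time
    download: e.responseEnd - e.responseStart,         // bytes over the wire
  };
}

// In a browser:
// performance.getEntriesByType('resource')
//   .forEach(e => console.log(e.name, breakdown(e)));

// With sample millisecond offsets:
console.log(breakdown({
  domainLookupStart: 0, domainLookupEnd: 20,
  connectStart: 20, connectEnd: 50,
  requestStart: 50, responseStart: 86, responseEnd: 100,
}));
```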
TTFB is not limited to the initial request; it matters for additional resource requests too. While writing this post I decided to review some recent WebPageTest runs I did over some random sites. The one for the Water Fowl Forum (no clue why I examined this site) recorded a reasonable TTFB for the initial request, but horrible times for other assets. http://www.webpagetest.org/result/150825_2Z_Z5Q/1/details/
If you look at the page's waterfall you see the initial request is not that bad. The TTFB is 36ms, awesome! But if you look at the other requests to the same server you will see several that are very slow, over 500ms slow. The first slow request is a style sheet. Instead of being a static resource, it is rendered via a PHP script. I cannot think of a real reason to use a PHP script, or any server-side rendering engine, to create a style sheet. I am sure there is one, but it escapes me. So eliminate server-side rendering wherever possible, which is 99% of the time.
The second slow request is for a small image. I don't know why it takes over 700ms. The other images were served within a reasonable amount of time, so it could just be an anomaly; it would be a good idea to run several more tests to see what the trend is. In fact, you should always run multiple tests over any URL to see the average as well as where anomalies like this image exist.
What Can You Do?
Have confidence when I, Steve Souders, and hundreds of other web performance experts say frontend optimizations are cheap with big value returns. Let's take the Water Fowl Forum example. Moving those static resources to a cheap CDN like Azure or AWS would surely reduce the download times significantly. I bet that with about 30 minutes of work the site could reduce the 8-second render time to under 2. Add image optimizations and expires headers and you have a site that will scream. Then they can optimize the server side to reduce the 476ms TTFB of the main HTML to less than 100ms. I bet they could implement the PHP equivalent of ASP.NET output caching and have a lightning fast site.
I evaluate sites every day and consult with clients every week about performance. 90% of the recommendations take less time to implement than it takes to author a proper report, and because of the web performance golden rule they have a big impact. I call this low-hanging fruit that packs a big monetary return. Go grab your site's waterfalls and see where you can apply some of the core web performance optimization rules, and make your site or application a place customers want to visit, not dread.