Great Job WalMart - But You Still Have Work To Do

Last month I posted about WalMart.com doing really well online and why performance is a competitive advantage. In fact, their Thanksgiving week sales were up, and 70% of those sales came from a mobile device. WalMart has come a long way in making performance a first-class feature of their web site and development workflow. However, they still have some low-hanging fruit to tackle. Today I want to highlight some of that fruit and how it might be addressed. Full disclosure: I do not, nor have I ever, worked for WalMart. I do shop there and even ordered Christmas gifts online over the holiday weekend.

To evaluate the site I chose a product page, the Toshiba Satin Gold 11.6 Satellite, and ran it through WebPageTest.org: http://www.webpagetest.org/result/141208Q8HW9/. The page took 8.6 seconds to load the 197 assets and almost 2MB required to compose it. That WebPageTest run took place on December 8th, 2014, over a month ago. Later in the article I reference a run in Chrome on January 19th, 2015, where the waterfall revealed 257 requests and 2.4MB. The two runs highlight areas WalMart.com still needs to address to ensure its visitors have an optimal online shopping experience. These areas are easy to fix and plague most web sites and applications.

Time to First Byte - F

WebPageTest expects the WalMart URL, just the initial markup, to load in 93ms. Wow! The test I am referencing needed 1134ms, or 1.1 seconds. While that may not sound like a bad time, it is. Think about it: if your server does not return markup to the browser for over a second, no images, CSS or JavaScript are requested until that markup is loaded and evaluated. Without those scripts, styles and images, the rest of the page cannot even start the render process.

When I see a poor time to first byte for the initial markup, I know there are poorly performing server-side processes involved in hydrating the markup. You see this in dynamic sites like the ones we build with ASP.NET and Java. Looking at the WalMart.com BuiltWith profile, we see they use Ruby, Java and PHP. My guess is the consumer web site uses Java. I have also been told this by different friends, so my educated guess is probably right.

Unfortunately I am not familiar with tuning Java applications, but I can offer some advice based on my .NET experience. The primary advice is to find the bottleneck and fix it. That could mean moving data to a front-end NoSQL data store like Redis or Memcached. It could also mean pulling the slow process out of the rendering pipeline and placing it in an API endpoint: get the data from the API and append the merged markup to the DOM. This allows the page to start requesting other resources faster and thus start the render process earlier.
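To make the second idea concrete, here is a minimal client-side sketch. The endpoint and element names are hypothetical, purely for illustration; this is not WalMart's code.

```javascript
// Hypothetical sketch: the slow lookup now lives behind an API endpoint
// instead of blocking the initial markup, so the page can render while it runs.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/recommendations?productId=12345", true);

xhr.onload = function () {
    if (xhr.status === 200) {
        // Append the merged markup returned by the API to the DOM.
        document.getElementById("recommendations").innerHTML = xhr.responseText;
    }
};

xhr.send();
```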

In other tests I ran on WebPageTest and in my local browsers, the initial markup's time to first byte ranged from 1 to 3.5 seconds. Lots of room to improve.

Cache Static Content

How long should you allow content to be cached on the client? It all depends. WebPageTest dings WalMart.com for not caching static content long enough. WalMart gets dinged here because they load so many resources, and many of them have very short cache times defined. This is an example of where a synthetic test might not accurately reflect an application's nature. However, in the case of WalMart.com I think it highlights another smell, too many HTTP requests, which I will address in the next section.

So how long should you allow content to be cached? Again, it depends on the nature of your data. For most static assets I like to set at least a month's TTL, and often a full year. If I am really concerned about cache busting, I may design my application to request the assets with some sort of version parameter. You should try to avoid a queryString parameter if you can, but sometimes you have no other choice.
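Here is a rough sketch of that caching policy, using Node and Express purely because it is quick to illustrate; WalMart's stack appears to be Java, so treat this as the concept, not their configuration.

```javascript
var express = require("express");
var app = express();

// Static assets get a one-year TTL. Cache busting is handled by versioning
// the file name (e.g. app.v42.js) rather than a queryString parameter.
app.use("/static", express.static("public", { maxAge: "365d" }));

app.listen(3000);
```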

Too Many HTTP Requests

The product page evaluated requires 197 HTTP requests to compose the page. I don't care what you are trying to render; that is way too many requests. According to HttpArchive.org, the average web page uses 93 requests to compose its content. Again, I believe even that number is too large to be successful.

Why are all these requests bad? Browsers are limited to a small number of parallel connections per domain, typically 6. You can split requests across domains, but you incur a hit doing DNS resolution for each additional domain. DNS resolution times vary; my observations put them between 30 and 400ms, with a median around 150ms. So while sharding can reduce your load times, at a certain point DNS resolution, as well as available bandwidth, causes it to increase them.

Latency is another issue. The nature of HTTP means fewer, larger files are more efficient than many small ones. When I see a page with hundreds of requests, most of them are small. Each request must bounce through various hops across the Internet before it reaches your device. Fewer files mean fewer round trips overall, which in the end means the content loads and renders faster.

There are two key things you can do to reduce the number of HTTP requests: bundle and minify text assets, and use image sprites for small images. For most sites this will eliminate around 75% of these unnecessary requests; your mileage may, and will, vary.

Not Bundling and Minifying

Excess requests are a symptom that scripts and CSS are not being optimized by bundling and minifying. Bundling means files are concatenated into a single source. Minifying strips these files of line breaks, unnecessary spaces and comments. For JavaScript, good minifiers also shorten variable names down to one or two letters. Often these processes reduce overall file sizes by 50% or more.
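A build step makes this painless. Here is a minimal gulp sketch using gulp-concat and gulp-uglify; the paths are placeholders, not WalMart's actual structure.

```javascript
var gulp = require("gulp");
var concat = require("gulp-concat");
var uglify = require("gulp-uglify");

gulp.task("scripts", function () {
    return gulp.src("src/js/**/*.js")   // every application script
        .pipe(concat("bundle.js"))      // bundle: concatenate into one file
        .pipe(uglify())                 // minify: strip whitespace, shorten names
        .pipe(gulp.dest("dist/js"));    // one request instead of dozens
});
```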

The WalMart.com page analyzed for this post has 25 JavaScript files weighing a whopping 718kb. A more current load in Chrome shows 85 scripts for 851kb! A quick glance at the waterfall shows most of the scripts are small. I decided to filter the list to those with walmart in the URL and found only 18 scripts. Still a large number, but much better. The majority of the scripts are 3rd party scripts. This is a common problem in enterprises: relying on 3rd party services to inject content and track usage.

For example, I use Disqus to manage comments on this blog. The last time I checked, they load 17 or more scripts to manage the comments. For the life of me I cannot imagine why more than one script is needed, but it is what it is. Because the Disqus scripts are inefficient, I lazy load them once a blog post view is fully loaded. This way their poor coding does not affect my readers' (your) experience. Unfortunately most sites do not practice the lazy load technique. Often marketing and advertising departments blindly add these 3rd parties to the site without evaluating the damage they do to performance, which of course negatively affects the bottom line.
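The technique itself is simple. Roughly, I wait for the page's load event and only then inject the third-party script; the Disqus shortname below is a placeholder.

```javascript
window.addEventListener("load", function () {
    // Inject the Disqus embed script only after the page has finished loading,
    // so the third-party payload never delays the article itself.
    var script = document.createElement("script");
    script.async = true;
    script.src = "//your-shortname.disqus.com/embed.js";
    document.body.appendChild(script);
});
```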

I did run through all the scripts, and all but one was minified, so I will give them credit for minifying the code. I can see they use RequireJS to manage dependencies; they should really consider using Browserify to help them reduce the number of requests. The minification did a good job obfuscating which libraries they use, and my Chrome tool for detecting libraries was broken today. I did identify jQuery and Underscore in one of the larger files. It was loaded rather late, which tells me the two libraries are not critical to the site functioning. This means they could possibly eliminate them altogether and replace them with the few functions they actually use from each library (see the sketch below). I also found one script that contained just a single semi-colon. That, of course, can be completely eliminated, though it came from a 3rd party service, so it will take some work.
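To give a feel for what replacing them might look like, here are native equivalents of the sorts of calls a page typically leans on jQuery and Underscore for; the selectors are hypothetical.

```javascript
// jQuery's $(".price") becomes:
var prices = document.querySelectorAll(".price");

// Underscore's _.map(prices, fn) becomes:
var labels = Array.prototype.map.call(prices, function (el) {
    return el.textContent;
});
```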

The 85 script files are loaded across numerous domains and add several seconds to the load time. After the scripts are loaded you then need to consider evaluation time, which is another killer. The load time for these scripts was just under 4 seconds; evaluation time was probably another 1-2 seconds. Remember, every time the browser encounters a script file it stops everything it is doing to evaluate the script and then restarts the rendering cycle. WalMart.com stops 85 times for external scripts, not to mention the inline scripts, which I have not counted.

By bundling scripts and reducing its reliance on large 3rd party libraries and services, WalMart.com could improve load time dramatically.

The site requests 7 CSS files. Again, most are minified, but they could bundle them together for additional savings. I did notice the largest CSS file, at 88.9kb, was requested twice. They should remove the extra request. This is more common than you would think, referencing the same file or library more than once. Even if it is in the browser cache, it is a measurable delay that can be eliminated. The offending file contains a base64-encoded font file from Adobe, hence the large size.

As for the remaining CSS, I am fairly certain they could reduce its size by running it through an optimizer like UNCSS to remove unused rules. I find it often eliminates 50-80% of a site's CSS.
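UNCSS has a simple Node API; a quick sketch looks something like this (the URL and output file are placeholders).

```javascript
var fs = require("fs");
var uncss = require("uncss");

// Analyze a rendered page and keep only the CSS rules it actually uses.
uncss(["http://example.com/product-page"], function (error, output) {
    if (error) { throw error; }
    fs.writeFileSync("cleaned.css", output);
});
```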

Inline Styles and Scripts

This seems to plague just about every web site I ever examine. Just because you can inline script and CSS does not mean you should. In fact, it takes a lot for me to use inlining.

In a SPA the effect is not as pronounced as it is for a classic web site, because there is only a single page. A classic web site loads a series of pages over and over as the user consumes content and features. By inlining you forgo the rewards of proper file caching, which means the browser keeps downloading the same content over and over, delaying your page loads.

Inlining scripts carries an extra hit because, as I mentioned earlier, the browser must stop everything to evaluate the script. Once that evaluation is done, the rendering process starts over. So find a way to make your inline scripts part of the main bundled and minified script.

Summary

By moving slow server-side processing to a front-end data store, reducing excess HTTP requests and avoiding inline styles and scripts, WalMart.com could easily achieve the 1 second load time goal over broadband. With 70% of their holiday sales originating from mobile devices, the improved performance would be compounded, increasing conversions even further. WalMart is close, but still has some work to do. I challenge you to analyze your own sites and applications, find some of this low-hanging fruit, and apply these simple changes to reap the rewards.
