How Microsoft's Poor Web Performance Ruined an Important Event Registration
Last week Microsoft opened its MVP Summit registration at 3:00 PM EST on Wednesday. A few thousand MVPs all over the world rushed to register because we all want to book the same hotel! We’re always cautious, since we never know what to expect from the event registration site. This year was no exception: the site and its user experience were completely unacceptable. When registration officially opened, a large influx of requests, or at least what the server treated as one, hit the web server.
If you’ve ever launched a website where demand is immediately high, you know what I’m talking about. Let’s think critically about the supposed volume: there are approximately 3,000-4,000 MVPs worldwide, and about three quarters of them attend the annual MVP Summit. Probably half of those register as soon as the site opens. That means only a thousand or two simultaneous requests hit the MVP Summit registration home page, which is almost nothing compared to the average large enterprise or a popular consumer website. Love2Dev has hosted many small business websites that routinely receive this level of traffic, so we know a little about what it takes, behind the scenes, to launch a website at a specific time.
Microsoft MVPs were greeted with a website that barely responded; the home page took minutes to fully load. That meant many users refreshed the page, creating even more requests to the server. Once the home page finally loaded, the registration experience was less than ideal. The latency delays continued, leading only to more frustration. In addition, the application’s data was not configured properly, causing more problems for many, myself included.
Since we build high-performance websites and routinely perform website performance audits, I thought a deeper analysis would be helpful. Hopefully you can learn from their mistakes.
Time to First Byte
The web performance golden rule is that 5% of your performance issues are server side and 95% are on the client side. That 5% on the server is still significant. The MVP Summit site suffered from poor latency in that 5%, and it also demonstrated front-end issues we’ll cover later.
During the registration rush I decided to run a WebPageTest analysis of the MVP site home page. The site was so slow to respond that WebPageTest timed out; I had never seen that happen before. More determined, I ran a new evaluation once I finally saw a page render. The response time was still completely unacceptable.
As you can see from the WebPageTest screenshot, the problems start server side, with a terrible time to first byte of 8.8 seconds. The server failed to deliver the initial markup in a reasonable amount of time; the goal for an initial document request should be in the hundreds of milliseconds, not seconds. When your server fails to deliver the initial markup, the browser cannot start rendering the page, and rendering is also what triggers requests for the other assets that compose the page: images, JavaScript, and CSS files.
When you have a bad time to first byte like the MVP site’s, you know the server is not configured correctly. There are many possible causes; bad database settings and server-side caching issues are common culprits. Coding issues can also be a problem, but are less common. Looking at the response headers for the MVP site, we see it is an IIS/ASP.NET website; at least they use Microsoft technology.
There’s no reason this site should take 8 seconds to load, because the home page has very little content that needs dynamic composition. The only dynamic element I could identify is the user’s name in the top right corner. Personalization like that can be added asynchronously via an AJAX request. The page itself should be static and avoid ASP.NET processing entirely. Even when using ASP.NET MVC or another flavor of ASP.NET, output caching would have eliminated the need to compose this page on every single request.
Output caching allows the server to render a page once and persist the rendered copy in memory. Subsequent requests pull the rendered page from memory rather than hitting the file system or running through the entire rendering pipeline again. Since the page’s core content does not change very frequently, the output cache should be set for a very long duration. Even if you’re not using ASP.NET, your platform should let you configure server-side caching so you can avoid back-end delays. This makes your time to first byte almost negligible.
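To make the idea concrete, here’s a minimal conceptual sketch in plain JavaScript. It’s illustration only, assuming a hypothetical renderPage function that does the expensive composition; on ASP.NET the OutputCache attribute gives you this behavior without writing any of it yourself.

```javascript
// Conceptual sketch of output caching. renderPage(url) is a hypothetical,
// expensive render function; on ASP.NET the OutputCache attribute handles
// all of this for you.
const renderedPages = new Map();
const ONE_HOUR = 60 * 60 * 1000;

function getPage(url, renderPage) {
  const cached = renderedPages.get(url);
  if (cached && Date.now() - cached.created < ONE_HOUR) {
    return cached.html;                          // served straight from memory
  }
  const html = renderPage(url);                  // expensive render runs at most once per hour
  renderedPages.set(url, { html, created: Date.now() });
  return html;
}
```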
Heavy and Unnecessary JavaScript Libraries and Frameworks
Once the markup is loaded the browser starts downloading the remaining assets. This creates the application’s waterfall, a diagram showing how the page is composed across the network. Before I look at a page’s waterfall, I often look at what the page is actually doing.
As I analyzed the MVP home page, the only dynamic components I found were a drop-down menu and the profile area. The drop-down menu could be accomplished with a few lines of JavaScript and CSS. The profile area, as mentioned earlier, should be composed from a single AJAX call; you can make a simple AJAX call in about 20 lines of JavaScript, and injecting the user’s name requires maybe three or four more. So all of the page’s ‘dynamic’ aspects could be handled with 50-60 lines of JavaScript and one or two CSS rules, as the sketch below shows.
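For example, here’s a rough sketch of that profile call. The /api/profile endpoint and the .profile-name element are hypothetical names for illustration, not part of the actual MVP site.

```javascript
// Rough sketch only: /api/profile and .profile-name are hypothetical.
// The rest of the page stays static HTML; only the name is personalized.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/profile', true);
xhr.onload = function () {
  if (xhr.status === 200) {
    var profile = JSON.parse(xhr.responseText);
    document.querySelector('.profile-name').textContent = profile.name;
  }
};
xhr.send();
```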
Looking at the waterfall, however, we see requests for many JavaScript libraries and their corresponding style sheets. Let’s look at the list:
- jQuery
- jQueryUI
- Knockout
- Ensighten
- Kendo
Now, I’m often accused of being the anti-jQuery guy, but that’s not entirely true. I’m anti-jQuery when it’s not needed, and here it’s clearly not needed. But let’s look deeper at how jQuery is used. First there’s a request for jQuery version 1.8.3, which is very outdated. If you’re using a library or framework, you should always use the latest version; it’s the most bug-free and standards-compliant.
You may also notice a request for jQuery Migrate, a supplemental library that lets you move to a newer version of jQuery while keeping code written against legacy versions working. The real reason for doing this is to support obsolete browsers, like Internet Explorer 8, which browser vendors no longer even support. It also keeps obsolete plugins working, which you should avoid (a topic for another day).
Next, another version of jQuery is requested: not the latest, but newer than the 1.8.3 copy already on the page. I see this a lot when evaluating websites; multiple copies of the same library, often at different versions, are more common than you might think. Releasing a site like this reflects poorly on your development and DevOps teams.
When you load a library more than once, the browser has to download and evaluate every copy. Besides the obvious rendering delays from the excess data and repeated script evaluation, conflicting code bases eventually cause all sorts of client-side issues to rear their ugly heads. Always standardize on a single version and make sure your site ships clean.
jQueryUI is the next library. Nowhere on this page is any jQueryUI widget used, which means they’re wasting several hundred kilobytes of bandwidth for no apparent reason. Dropping jQueryUI would remove a lot of client-side delay, because it is composed of very large JavaScript and CSS files. You can build a custom download with just the parts of jQueryUI you need, but almost no one ever does. Ultimately, try to avoid libraries like jQueryUI; instead, use smaller libraries that address the specific needs of your application or page.
The next library in question is Knockout, a client-side MVVM rendering library. Again, the only client-side rendered content I could identify was the MVP’s name, which could be handled with a simple string replacement, not a large library. Knockout suffers from many memory leak issues, on top of the client-side latency common to any large library. Since there’s no need for Knockout in this application, there’s no need to include it. Ultimately, we do not recommend Knockout.
Speaking of libraries that do not need to be loaded, Kendo is also requested. Kendo is a very rich web UI library, and there can be advantages to using something like it, but nothing on this page actually uses it.
Kendo alone weighs in at 2.436 MB, more than this entire page should weigh. That is a very large library, consuming a lot of bandwidth and delaying rendering while the browser evaluates it. Unless you truly need a UI library, don’t use one. When you do need one, find the specific component or components you actually use and include only that code, not the full 2.5 MB.
Finally, I want to point out Ensighten. Many enterprises use Ensighten for analytics tracking and as a tag manager. I have evaluated this library for a few clients to determine its impact on their user experience. Ensighten is very poorly written and causes many client-side issues (code that does not run as expected and throws exceptions). We do not recommend Ensighten, and it’s unfortunate to see it added here.
Unoptimized Images
Most sites don’t optimize their images. This page makes many image requests, most of them small PNGs. The one real photograph is the site’s hero image, and unfortunately it is not optimized, weighing over 500 KB.
I ran the same image through an image optimizer and drastically reduced the file size. In addition, I generated a set of responsive images; all of the responsive variants combined weighed less than the original image. I’ll talk more about responsive images in the coming weeks, but in short they let smaller devices load an image sized for their viewport, which is much smaller and more efficient. This pays big dividends on mobile.
Failure to Cache Static Assets
Less than half of the web has proper cache headers configured. Without them, browsers do not store files locally and must request the same assets from the server over and over. While caching may not always work exactly the way you want, it does help. Setting proper cache headers is very simple and should be part of your normal deployment process.
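As an illustration only, here’s how a far-future caching policy might look in a small Node.js/Express setup. The MVP site runs on IIS/ASP.NET, where the equivalent policy is typically configured in web.config, but the idea is the same on any platform.

```javascript
// Illustration only: the MVP site runs IIS/ASP.NET, where this policy lives
// in web.config. Static assets get a far-future Cache-Control header and are
// renamed when they change (e.g. app.a1b2c3.js) so the long lifetime never
// serves stale code.
const express = require('express');
const app = express();

// Serve everything under /static from the public folder with a one-year cache lifetime.
app.use('/static', express.static('public', { maxAge: '365d' }));

app.listen(3000);
```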
Failure to Use a CDN
Most sites still fail to use a content delivery network; according to HTTP Archive, only about one in five sites uses a CDN. Microsoft has no excuse for not using one, since it offers its own CDN service. By skipping a content delivery network, the MVP site fails to serve its global audience well, something a CDN would fix at relatively little expense.
In the past, CDNs were cost prohibitive for most sites. With cloud offerings like Azure’s CDN and Amazon’s CloudFront, however, content delivery networks are now accessible to websites of all sizes. For a few pennies a month you can have your content distributed around the globe with almost no effort on your part.
Web sites should use a CDN because assets are hosted closer to potential customers. This means there is less latency, and customers are much happier because the site loads faster.
Summary
It’s unfortunate Microsoft botched the MVP Summit registration process this year. Microsoft’s MVPs are its most loyal and influential fan base, and the customer base for this site is highly skilled in technical areas. We know when things have been done half-heartedly. The MVP site is a glaring example of poor craftsmanship and reflects poorly on Microsoft’s brand, and the MVPs carry that impression far and wide to the broader audience of customers Microsoft wants to reach.
While Microsoft is not alone in botching a website, you should always put your best foot forward. In this case, a technology company that prides itself on web-related technologies and owns one of the most popular web platforms failed to deliver a simple website for its key influencers.
Love2Dev is no different; we make mistakes from time to time as well. Earlier the same morning we published a blog post announcing Chris Love’s interview on DotNetRocks. Unfortunately, the wrong version was published and an early draft hit the Internet. We later corrected the issue and began evaluating our publishing process to ensure it doesn’t happen again. I hope Microsoft does the same with the MVP site, and with its other properties that fall short on user experience, performance, and other technical fronts.
Building websites and applications is a complex process, even though many sites are simple and follow a consistent formula for design, development, and deployment. Successful businesses have effective, repeatable systems and processes in place to ensure quality and consistency.
Maybe it’s time to evaluate the systems and processes behind your line-of-business and customer-facing applications. Don’t let simple mistakes reflect poorly on your brand’s ambitions.



