5 Techniques to Lazy-Load Website Content for Better SEO & User Experience

Lazy Load Website Assets for Better User Experience

Website speed is a crucial aspect of on-page SEO that everyone can control. Your goal is for a page to be interactive in under 3 seconds, even on a basic phone over a 3G connection.

However, most websites make so many requests and deliver such large payloads that this time budget cannot be met. In fact, the average web page takes 22 seconds to load, according to Google's research.

But what if I told you there is a way to offload or even avoid loading page assets until they are needed?

This can give your website a distinct advantage over your competition, because not only will Google like your pages better, so will your visitors!

The good news is it takes only a little JavaScript and some intentional effort to update your site.

And Google's Search team is all for this technique, called 'lazy-loading'.

Google's short guide mentions three primary points:

  • Load Visible Content
  • How to Support Infinite Scrolling and Pagination
  • How to Test Your Implementation

The trick is not hiding content you need indexed from Google, which is why they published this helpful, but thin, guide.

They have also published a guide on using Intersection Observer to lazy load images and videos until they are in view.

For average websites that share content, like a blog or marketing site, lazy loading is rather simple. But for web applications it can be more complex, so I will dive into some techniques I use to load code as needed to keep my applications fast and responsive.

I will review the points Google offers and provide some advice and examples from my own experience using IntersectionObserver and a little on how the History API works.

The lazy loading guidelines dovetail with Google's recent guidance around single page apps and SEO, because lazy loading is a good technique to improve your website's user experience.

For more complex web apps I will touch on a technique to load scripts on demand, rather than up front where they block the page from rendering.

Lazy-Load Content Scenarios

Before you add lazy-loading capabilities to your site you should inventory what assets your pages load and determine what is required for the initial experience and what can be loaded once the user starts reading or interacting with the content.

Let me go back to my two primary website scenarios, content and application, because their content needs vary but do overlap.

First, a content or sales-focused website. For simplicity I will just call it a blog. These sites typically need HTML, CSS, JavaScript, images and custom fonts. They also tend to be sites you want free organic traffic from Google, which means SEO is important.

Other than making the page load faster, you need to make sure the content is accessible to the search engine spider, which won't execute your JavaScript or scroll the page to trigger lazy-loaded content.

Content can refer to media, code or copy (text and markup).

Media

Images and Videos are large files. Deferring their load has an immediate, positive impact on page load time.

Code

JavaScript, CSS, HTML and Fonts are necessary to render pages. But Scripts and CSS can be your biggest bottlenecks to a speedy time to first interaction. Delaying their load makes your content available much sooner.

Copy

Sometimes you want to defer loading page content and copy. Just make sure that anything important for SEO is still available to the spiders.

To do this I will show you how to use IntersectionObserver to load assets like images and videos as they scroll into view. But before I do that, we need to set up the markup to make images and videos indexable.

Complex web applications can be a little more delicate because they lean on more application logic. This means they require more JavaScript files. When these files are loaded matters. While you can use IntersectionObserver for this, your application needs a good technique to dynamically load scripts as needed. Later in this article I will give you just that!

Handling Lazy Loaded Images

While Google and Bing can execute your page's JavaScript, they don't make it a priority. They also do not attempt to scroll the page to trigger things like IntersectionObserver.

This means your lazy loaded content might as well not exist.

At first you may not consider this a big deal, but image SEO can be a key factor to help you rank better for searches. Plus, image searches can be a hidden source of additional organic traffic.

The secret to making your content spider accessible is to 'hide' it behind a noscript tag.

<img data-srcset="img/javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018-869x361.jpg 869w,
                  img/javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018-720x300.jpg 720w,
                  img/javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018-460x192.jpg 460w,
                  img/javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018-320x133.jpg 320w"
     data-src="img/javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018-869x361.jpg"
     class="lazy-image"
     sizes="(max-width: 480px) 80vw, 30vw"
     alt="javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018">

<noscript>
    <img src="img/javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018-869x361.jpg"
         alt="javascript-slow-to-index-static-better-jennifer-slegg-pubcon-2018">
</noscript>

I know there is a lot of code in that example, but I will go over the 'data-' attributes in the next section.

What I want you to focus on right now is the noscript element. Inside this element is a traditional IMG element with the src value set to the large image size.

Now the search spiders will load the image and index it for you.

The noscript tag is a semantic mechanism that tells the browser to render the content inside the element only if the browser (user agent) does not support or execute JavaScript. Most visitors will have JavaScript turned on, so the noscript image won't load for them. But as you will shortly see, the regular image will load once it becomes visible.

Now let's dive into how to use IntersectionObserver.

What is IntersectionObserver?

A relatively new browser API, IntersectionObserver is supported by all modern browsers, and there is a stable polyfill you can dynamically load for those that don't support it.

This API triggers a callback when target elements come into view. This callback allows you to execute code, which in our scenario is to load assets as needed.

This is a simple API, but there are a few parts you need to understand.

First how to create an IntersectionObserver:

var config = {
    // If the image gets within 50px in the Y axis, start the download.
    rootMargin: '50px 0px',
    threshold: 0.1
};

var imgObserver = new IntersectionObserver(showImage, config);

imgObserver.observe(yourImage);
IntersectionObserver Loads Content As it Enters Viewport

You create a new IntersectionObserver object by passing a callback method and a configuration object. I will cover the callback later.

The configuration parameters tell the observer how close to the visible viewport an observed element should be before triggering the callback method. The rootMargin value adds extra margin, vertically and horizontally, around the viewport when checking whether the element intersects.

In my example I tell the observer to trigger the callback when the watched element is within 50 vertical pixels of the viewport. I set horizontal to 0 since I rarely use horizontal scrolling.

The callback fires once at least 10% of the element is visible, because the threshold is set to 0.1; the threshold is a ratio of the element's visible area, not a time value.

I simplified the 'observe' method here, but normally I select all elements of a certain kind and loop over the results. Here I am just showing a single image.

Each element you want this observer to watch must be added separately. This is why I loop over the results, because each image, video or whatever needs to be explicitly added.
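As a minimal sketch, assuming every deferred image carries the lazy-image class from the markup above and imgObserver is the observer created earlier, the loop looks something like this:

var lazyImages = document.querySelectorAll("img.lazy-image");

// each element must be registered with the observer individually
for (var i = 0; i < lazyImages.length; i++) {
    imgObserver.observe(lazyImages[i]);
}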

The callback method is the logic that triggers when the watched element comes into view.

The callback method receives two parameters, the targets that triggered the callback (theoretically more than one can meet your thresholds at the same time) and a reference to the actual observer.

function showImage(entries, observer) {
    for (var i = 0; i < entries.length; i++) {
        var io = entries[i];

        if (io.isIntersecting && io.intersectionRatio > 0) {
            var image = io.target,
                src = image.getAttribute("data-src"),
                srcSet = image.getAttribute("data-srcset");

            if (srcSet) {
                image.setAttribute("srcset", srcSet);
            }

            if (src) {
                image.setAttribute("src", src);
            }

            // Stop watching and load the image
            observer.unobserve(io.target);
        }
    }
}

The entries parameter is analogous to the event object in a typical event handler. To access the actual DOM element you reference the entry's target property. Each entry also has isIntersecting and intersectionRatio properties you can use to determine whether you want to act on the element or not.

In this example I 'flip' the data- attributes to their matching image attributes, which causes the image to render. Because I always use responsive images I need to flip the data-srcset and data-src attributes (and data-sizes, if you defer the sizes attribute as well).

So far, I have used this technique with images, videos and maps on different sites.

In case you are wondering, I do this with all my images, even the ones rendered 'above the fold'. While there is a slight delay before the images render, it does not have a big impact on my UX. The main content loads super fast, especially on mobile.

Those images load because the IntersectionObserver threshold for above-the-fold elements is already satisfied, so the callback triggers immediately. It does not require the image to be scrolled into view.

Speaking of scrolling into view, continuous scroll functionality can also be facilitated using IntersectionObserver.

Pagination/Continuous Scroll

Pagination & Continuous Scroll

Some interesting guidance from Google that has caught my attention lately is how to deal with paging lists and continuous scroll experiences.

First, continuous scroll.

This has become a common tactic on news-related sites in the past couple of years: you scroll down an article and, as you reach the bottom, a new article is rendered. The technique was really born out of social media like Twitter and Facebook, which let you scroll through post after post until you are up well after your bedtime.

You could use IntersectionObserver to watch an element, maybe the last paragraph or two, to know when the reader is finishing the article. As those bottom elements scroll into view you can use fetch to get the next article and append it below the current one. You will also want to use the History API, which I will cover shortly, to update the URL.
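Here is a rough sketch of that flow. The main element and the /articles/next-article URL are placeholders for illustration, and history.pushState is covered in the next section:

var nextUrl = "/articles/next-article"; // placeholder URL for the next article

var endObserver = new IntersectionObserver(function (entries, observer) {
    for (var i = 0; i < entries.length; i++) {
        if (entries[i].isIntersecting) {
            // only load the next article once
            observer.unobserve(entries[i].target);

            // fetch the next article's markup and append it below the current article
            fetch(nextUrl)
                .then(function (response) { return response.text(); })
                .then(function (html) {
                    var next = document.createElement("article");
                    next.innerHTML = html;
                    document.querySelector("main").appendChild(next);

                    // update the address bar so the visible article has its real URL
                    history.pushState(null, "", nextUrl);
                });
        }
    }
}, { rootMargin: "0px", threshold: 0.1 });

// watch the last paragraph of the current article
endObserver.observe(document.querySelector("article p:last-of-type"));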

The other scenario is list indexes, like the way I set up my blog index pages. Each page displays 20 or so cards linking to blog posts. At the bottom there is a standard 'pagination' section with links to the previous and next pages by number.

In the continuous scroll scenario Google will not trigger the next article, so just be aware of this.

Instead, you may want to list related articles below the article, like I do on this site. If you really want a continuous scroll experience, you can hide the related articles behind a noscript element using the technique I described earlier.

For paging, Google suggests you create a large page with all the links listed. You can then use a canonical tag on each index page pointing to this primary page. This way Google won't try to index the individual index pages, which offer little real value as far as search is concerned.

Using the History API

The History API and I do not have a great history. OK, that is a little tongue in cheek, but true. I personally have not used this API because my initial experiences with it, almost 7 years ago, were terrible.

Lately I have been considering it again because Google has recommended it in several recent guidance articles. And I have to admit, this second time around it feels 'better' and useful.

I won't pollute you with my initial experience, but will cover what it means today.

I mentioned using it in the continuous scroll section because Google recommends using it in this scenario. But if you are driving single page application experiences you should also use this API.


The main advantage it provides is the ability to change the URL in the address bar without triggering a server request and a complete page reload. It works much like the hash fragment technique, but produces a real, shareable URL.

var stateObj = { foo: "bar" };

history.pushState(stateObj, "page 2", "bar.html");

In the code example you can see the history pushState method takes three parameters: a state object, a title and a URL. For our purposes you could pass null for the first two parameters; the URL is the important value.

Changing the URL here changes the URL in the address bar but, like I mentioned, won't cause a request to the server. In a continuous scroll scenario you would make an AJAX call in the background and inject the resulting HTML into the DOM, just below the current article or item. After changing the URL the user will see the real URL for the new article, and if they share it, it will be a direct link to the article's page.

Just make sure the URL you pass to the pushState method is the real link to the new article!

You will also want to make sure the IntersectionObserver callback triggers an update any time a target item scrolls into active view, changing the URL to match that item's direct page, for example when I scroll back up the page.
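A small sketch of that sync, assuming each article element carries a hypothetical data-url attribute holding its canonical address:

var articleObserver = new IntersectionObserver(function (entries) {
    for (var i = 0; i < entries.length; i++) {
        if (entries[i].isIntersecting) {
            var url = entries[i].target.getAttribute("data-url");

            // swap the address bar URL without adding extra history entries
            if (url && url !== location.pathname) {
                history.replaceState(null, "", url);
            }
        }
    }
}, { threshold: 0.5 });

// observe every article currently in the DOM
var articles = document.querySelectorAll("article[data-url]");

for (var i = 0; i < articles.length; i++) {
    articleObserver.observe(articles[i]);
}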

What Google is really pushing you toward is using a real URL, even when it is driven by JavaScript. In the continuous scroll scenario, when the IntersectionObserver triggers, you can load the new article and change the URL dynamically. Now the user sees the new article's URL and could reload the page.

Now your URLs are in sync and Google would see the direct URL when it finds links, etc.

Loading Scripts As Needed

If you know me you know I recommend using as little JavaScript as possible. For content sites like this one I can often get away with 15kb or less after compression. But web apps can blow this up quickly.

So how do you keep excessive JavaScript from blowing up your user experience?

First, only use what you actually need. For example, Netflix cut their average render time 50% after eliminating React from their front-end and limiting it to server-side rendering (SSR).

But what about those times where you need a library to drive your application experience?

Load them when they are needed and only when they are needed.

This requires a little finesse, but can make your UX so much better, which should improve customer satisfaction and retention while reducing help desk calls.

Who doesn't want those KPIs?!

I use this technique to load all my scripts; I call it the boot-up experience. But you can extend it to lazy load scripts on demand.

Here is the script. No, I did not create it; the Chrome team originally shared it:

try {
    var scripts = [{{{scripts}}}],
        script,
        src,
        pendingScripts = [],
        firstScript = document.scripts[0];

    // polyfill checks and loads here
    if (typeof IntersectionObserver === "undefined" ||
        IntersectionObserver.toString().indexOf("[native code]") === -1) {
        scripts.unshift("js/libs/polyfil/intersection-observer.js");
    }

    // Watch scripts load in IE
    function stateChange() {
        // Execute as many scripts in order as we can
        var pendingScript;

        while (pendingScripts[0] && pendingScripts[0].readyState == 'loaded') {
            pendingScript = pendingScripts.shift();

            // avoid future loading events from this script (eg, if src changes)
            pendingScript.onreadystatechange = null;

            // can't just appendChild, old IE bug if element isn't closed
            firstScript.parentNode.insertBefore(pendingScript, firstScript);
        }

        console.log("scripts should be loaded now");
    }

    // loop through our script urls
    while (src = scripts.shift()) {
        if ('async' in firstScript) {
            // modern browsers
            script = document.createElement('script');
            script.async = true;
            script.src = src;
            document.body.appendChild(script);
        } else if (firstScript.readyState) {
            // IE<10: create a script and add it to our todo pile
            script = document.createElement('script');
            pendingScripts.push(script);

            // listen for state changes
            script.onreadystatechange = stateChange;

            // must set src AFTER adding onreadystatechange listener
            // else we'll miss the loaded event for cached scripts
            script.src = src;
        } else {
            // fall back to defer
            document.write('<script src="' + src + '" defer></' + 'script>');
        }
    }
} catch (error) {
    alert(error);
}

I reduced it to just the IntersectionObserver polyfill load, since that is the feature I reviewed in this article. But the trick is to create an array of the scripts you want loaded, each in a reasonable order to manage dependencies.

Note: the script uses Mustache notation, hence the {{{scripts}}} field at the top of the script, which injects the page's script dependency references.

You won't need to worry about async or defer because these scripts are all loaded on demand, which should be after the core content is rendered.

The main gist of the logic here is creating a script element for each script and appending it to the DOM. Once appended the browser loads the script just as if it were part of the initial markup.

I use this technique to load polyfills if and when they are needed. But you can leverage it to load scripts as user actions dictate. Instead of a polyfill, maybe your application needs a library, but not until a certain criterion is met or the user takes a specific action.
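As a sketch of that on-demand case, here is a small Promise-based loader; the show-report button and the js/libs/chart.js library are hypothetical placeholders, not part of the Chrome team's script above:

function loadScript(src) {
    return new Promise(function (resolve, reject) {
        var script = document.createElement("script");
        script.async = true;
        script.src = src;
        script.onload = resolve;
        script.onerror = reject;
        document.body.appendChild(script);
    });
}

// only pay the download and parse cost when the user actually needs the feature
document.getElementById("show-report").addEventListener("click", function () {
    loadScript("js/libs/chart.js")
        .then(function () {
            // the library is now available; render the report here
        })
        .catch(function (error) {
            console.error("failed to load chart library", error);
        });
});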

Now, instead of delaying the time to first interaction by 1-5 seconds (or more), you can defer that performance and UX hit until it is needed. Or maybe you use this to load scripts when the UI thread is idle (the user is not actively doing anything) so it won't interfere with the actual user experience.
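For the idle case, a sketch using requestIdleCallback, falling back to a simple timeout where it is unsupported, and reusing the loadScript helper from the previous sketch:

function loadWhenIdle(src) {
    if ("requestIdleCallback" in window) {
        // wait for an idle period, but no longer than 5 seconds
        requestIdleCallback(function () {
            loadScript(src);
        }, { timeout: 5000 });
    } else {
        // fall back to a short delay when requestIdleCallback is unavailable
        setTimeout(function () {
            loadScript(src);
        }, 2000);
    }
}

loadWhenIdle("js/libs/analytics-helpers.js"); // hypothetical non-critical script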

Wrapping it Up

There are numerous reasons why lazy-loading page content helps you create a better user experience. The primary advantage is a faster time to interaction or improved page speed. But properly managing lazy-loading can give you SEO and UX advantages too.

Something I did not cover in this article is that lazy loading can also make your dynamic content more accessible for those with disabilities using screen readers.

Google will reward faster web pages with better rankings, but only if its spiders can access the lazy content. To do so you need to properly code your pages so lazy content is available even when JavaScript is not executed, which is the default case for search engine spiders.

IntersectionObserver and the History API can help you manage loading content and assets as needed without breaking your user experience. These tools can add complexity to your site, so you should test to make sure the changes don't break the user experience.

So, help your site lose some weight, engage more visitors and earn better search rankings using lazy-loaded content.
