7 Ways to Improve Your Page Speed to Make More Money

We all want fast websites.

I am going to share 7 of my top ways to make your web pages load faster.

For nearly 15 years I have specialized in developing the fastest loading web pages on the Internet.

Most of my pages score 97 or better in Google's PageSpeed Insights tool.

They score well because I care about user experience and user experience starts with page speed.

You want your site to load fast because customers and users expect a good user experience, and that experience starts with speed.

Research has shown there are 2 things that matter more than anything else to a visitor to your website: how fast your pages load and how easily they can navigate to what they need.

Without a fast loading web page the navigation does not matter, because most visitors will abandon your site before they ever need to navigate.

In fact, if your pages are slow then other search engine ranking factors, marketing messaging or application features don't really matter.

No one will see them because they will leave your site before they are visible.

Page speed is really a term coined by Google. The search engine evaluates how fast your page renders and uses that as a ranking factor, so knowing how Google scores your page's speed is important. Here is how Google measures your page speed:

Network Response Time: The time it takes the server or cloud to respond to requests is the start of a page rendering and the first page speed metric to measure.

First Contentful Paint: This is when the first tangible page component begins rendering.

First Meaningful Paint: This measures the time it takes for the content that actually helps the viewer, the core page content, to appear on screen. It is really the key metric for knowing when a user can begin using the page.

Speed Index: This measures the amount of time it takes to render the viewable aspect of a page. It actually measures the rendered pixels over time.

First CPU Idle: The first CPU idle refers to the moment when people can begin to interact with the web page. Again, since the focus with page speed lies in providing customers with a solid user experience, knowing precisely when they can start interacting with the page is important.

Time to Interactive: How long does it take before the page becomes fully usable? This is the metric to watch and maybe the most important metric to watch. In fact Twitter uses this to set their page speed goals. They call it 'time to first tweet'.

Max Potential First Input Delay: This measures the delay between when a user initiates an action on the page and when the desired response happens. If this takes too long the user will perceive the page as locked and most likely leave.

I will give you ways to improve your site's speed in this article. But I encourage you to visit my Page Speed Tools page to see the best, FREE, ways to measure these metrics and many more. Without measuring your page speed you simply will not know what to improve.
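If you want a quick peek without any tooling, modern browsers expose some of these metrics directly. This is just a sketch you can paste into the browser console, not the tools I use:

```javascript
// A minimal sketch: log First Paint and First Contentful Paint to the console.
// PerformanceObserver and paint timing entries are supported by all modern evergreen browsers.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${Math.round(entry.startTime)}ms`);
  }
}).observe({ type: "paint", buffered: true });
```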

3 seconds is all you have. Studies show after 3 seconds over half the visitors to your page leave to find a 'better' alternative.

It is easier to hit the back button than wait.

Why?

Because we are impatient. Microsoft famously compared our 8-second attention span to that of a goldfish.

We also know our brain expects a response to an online action within 1 second. At 1 second (1000 milliseconds) the stress levels in our mind and body start rising. It is just the way our bodies work.

So if you consider how our brains work, you have the answer to the question: what is the best website speed?

1 second or less is ideal, but 3 seconds or less is acceptable. After that you are just bleeding opportunity.

The page speed effect is amplified for mobile. Older research by DigitalMarketeer surveyed users and found many expect pages to load faster on a phone than on a desktop.

Of course the reality is that technical limitations make it more challenging to load a page over a mobile connection, even on WiFi, than on a desktop.

Today the majority of web traffic comes from a mobile device, which means you have to be on point with your page speed.

Page speed has become so important that search engines like Google and Bing use page rendering times as a ranking factor.

I have found that helping clients fix the technical issues causing poor page speed drastically improves their site's ability to rank higher for targeted keywords.

Improving your mobile page speed will also answer the broader question of how to make your page speed faster everywhere.

So what are common technical issues I have found that can have a big impact on your page speed?

It's time to run through a list of what may be holding you back from a more successful online presence and how you can upgrade your website.

Fetching resources from your server should not take longer than 500 milliseconds, even over a 4G connection. To be honest, I target 250ms or faster.

The first part of this step is known as time to first byte, or the time it takes the first data packet of the file to reach the browser. Larger files of course take longer because there are more data packets.
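You can check your own time to first byte with nothing more than the browser's Navigation Timing API. A quick sketch to paste into the console:

```javascript
// Rough time to first byte check using the Navigation Timing API.
// responseStart marks the first byte arriving, requestStart marks the request being sent.
const [nav] = performance.getEntriesByType("navigation");
if (nav) {
  console.log(`Time to first byte: ${Math.round(nav.responseStart - nav.requestStart)}ms`);
}
```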

But there is more to the page loading process.

There are three ways to improve your page's network latency or load times.

When a file or site resource is requested, the classic way to respond is by building the page or HTML on demand. This typically involves querying a database and merging data with HTML templates to produce the final markup sent to the browser.

There are also other steps most server platforms use, but they vary so I will not dive into those.

Database queries are expensive, which means they can be slow.

I used to rely on this model because it was just the way you did it.

Today however I use a serverless cloud based platform that pre-renders as much of a page as possible when it is updated, rather than when it is requested.

This means the page can be served from a static server, which is much faster and more reliable than an on-demand platform like ASP.NET or WordPress.

So if a page takes 2-3 seconds to render on the server, that only happens when the data is updated, not when a user requests it. If a page is requested 10,000 times it is rendered once, not 10,000 times.

The way my sites accomplish this is a rendering workflow using AWS Lambdas that store the final HTML file in an AWS S3 website bucket. The actual site is served via an AWS CloudFront CDN distribution.

If that is a lot of technical jargon for you, don't worry. The flow is simple: a content update triggers a render, the finished HTML lands in S3 and CloudFront serves it to visitors.
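This is not my production code, but a minimal sketch of the pattern: a Lambda handler that renders a page once, when the content changes, and stores the finished HTML in S3 for CloudFront to serve. The bucket name, event shape and render helper are all placeholders.

```javascript
// Sketch of a pre-render Lambda: runs when content is updated, not when a page is requested.
// The event shape, bucket name and renderArticleHtml() helper are placeholders.
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({});

exports.handler = async (event) => {
  // The updated content record; the real shape depends on your CMS or workflow.
  const { slug, title, body } = JSON.parse(event.Records[0].body);

  // Merge the data with a template once to produce the final markup.
  const html = renderArticleHtml({ title, body });

  // Store the finished page in the S3 website bucket CloudFront uses as its origin.
  await s3.send(new PutObjectCommand({
    Bucket: "my-static-site-bucket", // placeholder bucket name
    Key: `${slug}/index.html`,
    Body: html,
    ContentType: "text/html"
  }));
};

function renderArticleHtml({ title, body }) {
  // Hypothetical template merge; a real workflow would use a proper template engine.
  return `<!doctype html><html><head><title>${title}</title></head><body>${body}</body></html>`;
}
```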

It should just make sense that smaller files require less bandwidth and time to download.

There are two easy ways to reduce your file sizes:

Static files like HTML, CSS and JavaScript can all be minified and compressed, which can reduce a 100kb file down to 10kb in some cases.

Minification is a process of removing extra whitespace characters and replacing long variable names with shorter names.

This is common for CSS and JavaScript. There are many tools, like clean-css and uglify-js, to automate this process.
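A rough sketch of how this can be automated in a Node build step with the clean-css and uglify-js packages (the file paths are placeholders):

```javascript
// Minify CSS and JavaScript as part of a build step.
// File paths are placeholders; install clean-css and uglify-js from npm first.
const fs = require("fs");
const CleanCSS = require("clean-css");
const UglifyJS = require("uglify-js");

const css = fs.readFileSync("styles/site.css", "utf8");
const js = fs.readFileSync("scripts/site.js", "utf8");

// Strip whitespace and comments from the CSS.
fs.writeFileSync("dist/site.min.css", new CleanCSS().minify(css).styles);

// Shorten variable names and drop whitespace from the JavaScript.
const result = UglifyJS.minify(js);
if (result.error) throw result.error;
fs.writeFileSync("dist/site.min.js", result.code);
```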

Gzip is the same compression process used in zip files. There are different levels of compression you can configure. Just remember the more compressed a file is the more CPU it requires to decompress on the client.

All browsers support GZip compression as well as Deflate. You can use either algorithm, just remember to apply a corresponding Content-Encoding HTTP header to the file.
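If your host or CDN does not already compress responses for you, the idea looks something like this Node sketch: compress the payload with zlib and label it with the matching Content-Encoding header (the file path and port are placeholders):

```javascript
// Serve a gzipped response with the matching Content-Encoding header.
// Most hosts and CDNs handle this automatically; this just shows the mechanics.
const http = require("http");
const fs = require("fs");
const zlib = require("zlib");

http.createServer((req, res) => {
  const html = fs.readFileSync("dist/index.html");

  // Only compress when the browser says it accepts gzip.
  if ((req.headers["accept-encoding"] || "").includes("gzip")) {
    res.writeHead(200, {
      "Content-Type": "text/html",
      "Content-Encoding": "gzip" // tells the browser how to decompress the body
    });
    res.end(zlib.gzipSync(html));
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(html);
  }
}).listen(8080);
```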

Of course if your page does not use code you should not ship it. So how large should files be?

There is no single answer to this question because all pages are different. A 500 word article is certainly smaller than a 2000 or 10000 word article.

Not only are there more words and elements, but you typically have more images that explain concepts.

What should not change, at least for articles and content pages, is the amount of CSS and JavaScript. Because these content pages do not have a lot of application activity, more words should not require more of either.

And let's be real: HTML is not heavy, and neither are images when it comes to the actual rendering process. These have the smallest impact.

From a purely technical perspective you want files to be 14kb or less so they fit within TCP slow start's initial window. This is technical and sometimes can't be done, but when possible limit a file to 14kb.

I don't get bent out of shape about images and HTML. But I do make efforts to reduce my JavaScript and CSS footprints. I will review how to control this later in the article.

The final step is to reduce the number of HTTP requests. More requests mean it takes longer to load and render a page.

I have audited pages making over 800 HTTP requests!

Most pages only need a handful of files to render, not 100s.

HTTP connections are expensive to create, so make it easier for your browser and server(s).

I will share a key tip later in this article to help reduce your requests.

But a key source of excess HTTP requests is third party scripts. It is important to audit your third-party dependencies, because many trigger a cascade of additional network requests.

You need to evaluate whether the delay to your page rendering is worth the value they add. If it is, keep them. If not, remove them and find a better solution.

Several years ago an update to the HTTP protocol was standardized: HTTP/2. This update corrected many shortfalls of the initial specification. Those shortcomings were really a lack of functionality to handle the demands of the modern Internet, since HTTP was based on what was known back in the late 80's and early 90's.

It was holding the web back and causing architects like me to create hacks to skirt the shortcomings and make our pages load faster.

One of those bottlenecks was opening multiple HTTP connections. HTTP/2 multiplexes requests over a single connection to each origin. There are other optimizations the updated protocol provides, and you can read more about how HTTP/2 works in another article.

All modern web servers and content delivery networks support HTTP/2 by default. If it is not available in your current hosting environment I encourage you to seek another, more modern solution today.

Content Delivery Networks (CDN) are services that distribute your website around the world. This puts the actual content physically closer to the user.

It also provides redundancy. This way if your origin server (web hosting) or a single node in the CDN goes offline your site is still available.

In the past CDNs were expensive. Today they are relatively cheap. Charges are typically based on actual bandwidth consumed and geographic region.

I share reasons why every site should use a Content Delivery Network in this article.

For the record I use AWS CloudFront utilizing S3 Website buckets as content origins.

The sad reality in today's mobile-first world is that the network is responsible for only about 5% of the time it takes a browser to render a web page.

Let's look at more ways to improve the remaining 95%.

This is the common go to for most in the search engine optimization space, but does it really have the impact you want?

Yes, but not as much as other tips I am sharing in this article.

Why?

Images are not a blocking operation like JavaScript and to a lesser extent CSS.

In other words the page rendering process, known as the critical rendering path, is not dependent on images being retrieved and rendered. They also do not lock the page from interactions like scrolling.

Even though images do not have a high impact on the actual rendering you should still be responsible with images. They are after all the largest component of the overall payload if you follow my advice in this article.

Of course you can optimize images in photo editors like Paint.net or Photoshop. But you really want this to be part of your tool chain, or automated.

I use ImageMagick to not only optimize image physical sizes, but to generate responsive image sets. I leverage this tool in an AWS Lambda as part of my workflow.
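This is not my exact Lambda, but a rough Node sketch of the idea: shell out to ImageMagick's convert command and produce a set of widths for a responsive image set (the file name, widths and quality setting are placeholders):

```javascript
// Generate a responsive image set with ImageMagick's convert command.
// Requires ImageMagick to be installed; file names and widths are placeholders.
const { execFileSync } = require("child_process");

const source = "photos/hero.jpg";
const widths = [320, 640, 1024, 2000];

for (const width of widths) {
  const output = `dist/hero-${width}.jpg`;
  // -resize keeps the aspect ratio, -strip drops metadata, -quality trims the file size.
  execFileSync("convert", [source, "-resize", `${width}`, "-strip", "-quality", "82", output]);
  console.log(`wrote ${output}`);
}
```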

If this is not an option for you there are several services available, for a fee, to help you outsource this task. Two that I have tried are Kraken.io and Cloudinary. I prefer Kraken as it has an API that is far easier to use.

Reducing image file sizes is one part of image management, but you should also lazy load images and utilize srcset attributes to reference images sized to the viewport.

Unless your entire page content is above the fold or viewable without scrolling then you should load images only as they are needed.

Today this is very easy with the use of IntersectionObserver, a modern browser JavaScript API.

This API allows you to designate HTML elements to modify as a user scrolls the page. You can also log these actions, which is good for your analytics, but that is another topic.

I have created a lazy load JavaScript library and review the concept in an article on Lazy Loading content to improve your user experience.
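My library does more than this, but the core of the technique fits in a few lines. Here is a hedged sketch; the data-src markup convention is just one common pattern, not necessarily the one my library uses:

```javascript
// Core idea of lazy loading images with IntersectionObserver.
// Markup pattern assumed: <img data-src="real-image.jpg" alt="...">
const lazyImages = document.querySelectorAll("img[data-src]");

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // swap in the real image as it nears the viewport
    obs.unobserve(img);        // each image only needs to be loaded once
  }
}, { rootMargin: "200px" });   // start loading a little before the image scrolls into view

lazyImages.forEach((img) => observer.observe(img));
```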

Responsive images have been around for about a decade now and there is no reason every image on the Internet should not utilize this feature. Well, unless they are really small of course.

The concept is to utilize the IMG srcset and sizes attributes to define a set of images to be displayed. Each image is a different physical size, targeting different screen or viewport sizes.

This way a mobile user will not load a 2000 pixel wide photo when their screen is only 411 pixels wide. You can define as many images as you want; the srcset attribute is a comma-separated list of image candidates, each followed by how wide it is in pixels.

The sizes attribute is a CSS media query that tells the browser how wide the image should be rendered for different screen sizes.

The browser uses these two attributes to determine the best image to load based on the desired rendered size and viewport.

This way the best image is always loaded.
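Here is a hedged example of what that markup can look like; the file names, widths and sizes rules are placeholders:

```html
<!-- The browser picks the best candidate from srcset using the sizes hint.
     The data-srcset/data-sizes copies are what the lazy load library reads
     before promoting them to the real attributes. File names are placeholders. -->
<img src="hero-640.jpg"
     srcset="hero-320.jpg 320w, hero-640.jpg 640w, hero-1024.jpg 1024w, hero-2000.jpg 2000w"
     sizes="(max-width: 600px) 100vw, 50vw"
     data-srcset="hero-320.jpg 320w, hero-640.jpg 640w, hero-1024.jpg 1024w, hero-2000.jpg 2000w"
     data-sizes="(max-width: 600px) 100vw, 50vw"
     alt="Example responsive image">
```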

In the above example you can see how I also include data-srcset and data-sizes attributes. This is because my lazyLoad library uses these attributes to trigger the lazy load aspect. The example is the final result, after the image has been loaded.

CSS is used to tell the browser how pages should look once rendered. It is part of the critical rendering path, which of course means it can cause delays.

If you audit your page in Google's Page Speed Insights, Lighthouse or WebPageTest tools you can see how it affects your page. Page Speed Insights and Lighthouse will list this as a problem if you have CSS issues.

There are many small optimizations you can make to improve your CSS, but there are 2 key things I have found that really help.

Something I learned several years ago is that loading external stylesheets is expensive. While I still do this for applications, I avoid it for consumer pages I want ranked in search engines or used as landing pages.

Instead I inline a page's CSS in a STYLE element in the page's HEAD. To do this, part of my rendering workflow captures the external stylesheet references, removes them from the HTML and injects the CSS in the HEAD.

This eliminates the stylesheets from the rendering chain.

It is important your styles are referenced in the HEAD because you want all the page's styles loaded before the browser begins applying them to the HTML. Including the CSS in the BODY means the browser must restart the rendering process, and you do not want that.

You may be thinking: if all the CSS from the stylesheets is included inline, doesn't that make the HTML extra large?

Yes it can, but you can avoid this too.

I base all my CSS on Bootstrap, like just about every web developer today. This CSS library is huge, over 150KB in fact. Much larger than that 14kb target I referenced earlier.

But here's the secret.

Most Bootstrap based web pages use less than 5% of the library. In fact most web pages use less than 5% of the CSS they reference.

This means you can cut out 95% or more of the referenced CSS. For a 150kb library like Bootstrap, that leaves just 5-7kb of CSS to inline in your page!

Unless you have a giant page you are still well within the 14kb target, especially when minified and compressed.

Again, I have a tool in my rendering workflow that determines what CSS a page needs and trims the CSS fat before injecting it in the HTML HEAD element.
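My tool is custom, but you can approximate the idea with the open source purgecss package. A hedged sketch with placeholder file paths (a real workflow would also remove the now-redundant LINK tags):

```javascript
// Keep only the CSS selectors the page actually uses, then inline the result in the HEAD.
// File paths are placeholders; install purgecss from npm first.
const fs = require("fs");
const { PurgeCSS } = require("purgecss");

(async () => {
  const [result] = await new PurgeCSS().purge({
    content: ["dist/index.html"],   // the rendered page
    css: ["css/bootstrap.min.css"]  // the full library the page references
  });

  // Inject the trimmed CSS into a STYLE element at the end of the HEAD.
  const html = fs.readFileSync("dist/index.html", "utf8")
    .replace("</head>", `<style>${result.css}</style></head>`);

  fs.writeFileSync("dist/index.html", html);
})();
```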

If you can't do any of the above tips you must do this one.

JavaScript is the #1 enemy of web pages that work!

Don't get me wrong, I love JavaScript. I write more JavaScript than any other language in my development practice today. But the majority of that JavaScript is in node modules used in my AWS Lambdas and local tools, not my web page scripts.

Today fast food frameworks are very popular with most developers. And here's the reason...

They like writing code!

Who knew a developer would like to write lots of code, but it is true.

Talk to them about meeting your page speed goals of 3 seconds or less for first interaction and they will probably tell you it does not matter.

I have had developers argue their pages are fast when I could show them and the business owners the average load time was 30 seconds or more!

Developers generally do not care about or even understand page speed.

I do, so I do not use fast food frameworks like React, Angular and Vue.

Fast food frameworks weigh down even the most popular pages on the web.

Even though Google and Bing are getting better at parsing these frameworks, they honestly do not want pages using them.

When a search engine bot hits a JavaScript heavy web page it just processes the HTML, which typically contains nothing beyond a base skeleton, and throws the page on a queue. When resources are available the search engine eventually gets around to rendering the page to see what content is there.

But honestly, if your page takes 10-30 seconds to load for a real user, does it really matter?

Yes, my application pages have a little more JavaScript than my consumer pages, and with good reason: there is more business activity happening.

But for content there is very little JavaScript needed. The lazy load library is an example of a script I use on a page. The library is small, about 50 lines of code. The Add to Homescreen library is another example; again, about 15kb, not megabytes.

And that gets me to the real problem. According to HTTP Archive the average page loads over 400kb of JavaScript. The reality is the majority of these pages need less than 40kb.

For my typical blog article, I load 60kb of JavaScript. 45kb of that payload is Google Analytics, not even my code.

Besides reducing the JavaScript payload there are some other tricks to help alleviate JavaScript bottlenecks. These involve telling the browser when to load a script.

You can add 'async' to your script tag.

Loading scripts asynchronously improves performance because the browser can keep parsing the HTML while the script downloads in the background.

Async has been around for a while and is supported by all modern browsers. This is actually why the Google Analytics script references on my pages do not hinder them from rendering fast.

Be careful: you cannot just make every script async and think you are cheating the system. By declaring a script as async you are saying you don't really care when the script is loaded and processed. This can cause exceptions with scripts that have dependencies on other scripts loading first.

So use this approach carefully and make sure you test before deploying.

The defer attribute is similar to async, but works differently. It should be used when you reference a script in the page's HEAD element.

It is bad practice to reference a script in the document HEAD. They should be referenced as the last elements in your page's BODY.

Sometimes this cannot be controlled if you have third party modules involved in your rendering workflow. They can and do haphazardly inject scripts and markup all over your HTML.

If possible, you will want to add the defer attribute to these tags, unless you can move them to the bottom of course.
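Putting the two attributes together, a minimal markup sketch (the script names are placeholders):

```html
<head>
  <!-- defer: downloads in parallel, executes in order after the HTML is parsed -->
  <script defer src="/js/site.js"></script>

  <!-- async: downloads and executes whenever it is ready; fine for independent
       scripts like analytics, risky for scripts that depend on each other -->
  <script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
</head>
<body>
  <!-- scripts referenced at the end of the BODY also avoid blocking the initial render -->
  <script src="/js/lazyload.js"></script>
</body>
```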

I held this off till last because it may not help your page speed, at least in testing tools.

I specialize in Progressive Web Application development. These types of websites require a service worker and code that enables the site to work offline.

The reason the service worker is so powerful is it gives you the ability to cache your site content on the device in a controlled fashion.

The problem is it does not affect the initial visit, or what we call unprimed. It only affects requests after the initial visit.

So if a visitor likes your site and comes back you can craft a caching strategy to make additional page loads a little faster.

This works because repeat network requests are eliminated. But don't be fooled: this will not solve issues related to JavaScript and CSS bloat. It only reduces your network dependencies.
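To make the concept concrete, here is a hedged sketch of a deliberately simple cache-first fetch handler inside a service worker. Real sites usually mix strategies, for example network-first for HTML and cache-first for static assets:

```javascript
// sw.js - a simple cache-first strategy that speeds up repeat visits.
const CACHE_NAME = "site-cache-v1";

self.addEventListener("fetch", (event) => {
  if (event.request.method !== "GET") return; // only cache GET requests

  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve from the device cache when we have a copy; otherwise hit the
      // network and stash the response for the next request.
      return cached || fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```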

I expand how Service Worker Cache works in another article.

Remember, on most web pages the network only accounts for about 5% of the time to render. But if you follow my advice it is a larger percentage. In that case service worker caching has a much larger impact and can help foster a lasting customer relationship.

Sometimes I feel like I am alone in spreading the message of how to make websites fast. Fortunately there is a small cadre of passionate web engineers that feel the same way. For example my friends at MintTwist have also published a resource on page speed.

I have shared some key ways to improve your site's page speed. If you can apply these techniques to your site I guarantee you will see improved results.

These results include better organic search rankings, lower PPC costs and higher engagement rates.

Ultimately it means you will be more successful online!

There of course are 1000s of small improvements you can make to improve your overall page speed, but these are some of the biggest impact tactics for the effort they require.

If you want some additional resources I recommend these books.

For those of you needing to sell the benefits of website page speed, my friend Tammy Everts wrote a nice book that dives into them. It is appropriately named Time is Money, because she correlates your online sales and ability to convert directly with how fast your pages render. It is designed to sell the concept as a key performance indicator to your stakeholders.

Lara Hogan penned Designing for Performance, which covers both the techniques and the reasons behind them for developers and designers to follow. She not only covers technical aspects, she does so from her experience working at Etsy and making that platform as fast as possible.

Finally, I always suggest every developer own a copy of Steve Souders' High Performance Web Sites.

Even though the book is aged the core concepts hold up. It was the book I just happened upon back in 2008 that caused me to pivot the way I develop websites. Souders and his team, at Yahoo, researched why the top 10 websites loaded so fast. They share those principles as best practices for other sites to follow.

Page speed is the one area you have to get right. It comes down to knowing how the web works and designing your web platform to leverage it to provide the best user experience. Do this and your visitors will connect and become customers.
