Why Progressive Web Applications (PWA) Make JavaScript Frameworks Obsolete


Service workers run on a background thread and can be used to cache and render web pages. This ability to offload client-side rendering tasks means progressive web applications make heavy JavaScript frameworks like Angular and React obsolete.

There seems to be a lot of confusion around what a progressive web application is in comparison to a native application, which is very understandable. The answer is not much and a lot.

You might be more surprised to learn there is a lot of confusion about how a progressive web app relates to a regular website or web app. I sense this when business stakeholders express confusion, and I get as many if not more questions about this from developers and IT support folks.

To take it a step further, many question how service workers work with popular fast food JavaScript frameworks like React, Angular and Vue.

The simple answer is service workers are decoupled from these frameworks, just as the server stack is. The more accurate answer is service workers make these slow frameworks obsolete.

I think the problem lies in pre-existing knowledge of how a website should work. Because the technical folks have been working with the web for a while with different servers, content management systems and other platforms, not to mention today's popular fast food JavaScript frameworks, they limit scope to these legacy architectures.

This makes it more difficult for them to grasp how a progressive web app can and does work.

What the service worker brings to your application is the ability to offload and scale content rendering from the UI and server. It frees your website from the handcuffs of heavy frameworks tying up the UI thread and frustrating your users.

It also allows you to reduce the demand on the server because the service worker can be used to customize content before it is returned to the browser's UI thread for final rendering.
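
As a sketch of that idea, a service worker can merge a cached HTML template with data before the response ever reaches the UI thread. The template path, token syntax and page data here are illustrative, not from any particular library:

```javascript
// Minimal token-replacement templating; real projects often use Mustache.
function renderTemplate(template, data) {
  return template.replace(/{{\s*(\w+)\s*}}/g, function (match, key) {
    return key in data ? String(data[key]) : match;
  });
}

// Service worker wiring; runs only where the Cache API exists.
if (typeof self !== "undefined" && "caches" in self) {
  self.addEventListener("fetch", function (event) {
    event.respondWith(
      caches.match("templates/page.html").then(function (cached) {
        if (!cached) return fetch(event.request); // no template cached yet
        return cached.text().then(function (template) {
          // In a real app the data would come from a cache or IndexedDB.
          var html = renderTemplate(template, { title: "Rendered in the Service Worker" });
          return new Response(html, { headers: { "Content-Type": "text/html" } });
        });
      })
    );
  });
}
```

The UI thread receives finished HTML, exactly as if the server had rendered it.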

It's time we break this trend so more businesses can feel comfortable upgrading to PWAs and using them to improve their online presence. This applies to both consumer/customer oriented content as well as business applications.

My Personal Epiphany About PWAs Killing Single Page Applications

I often tell clients and fellow developers this story about the moment I knew the classic web had reached a great pivot point toward progressive web applications.

It was also the point I realized single page applications were dead men walking, with client-side rendering being replaced by the service worker.

Basically, the service worker makes today's popular fast food frameworks (Angular, React, Vue, etc) obsolete.

This is a good thing!

In May 2015 I gave a presentation at the O'Reilly Velocity conference in Santa Clara on designing high performance single page applications. The gist of the talk was to show how to minimize the impact JavaScript has throughout the application's life cycle and eliminate common sources of excess network traffic.

The real key to that experience was the session before mine, given by Patrick Meenan, on service workers. You see this was one of the first presentations on service workers, ever.

As I sat in the audience, knowing I would follow him on stage, I could not help but realize everything I was about to cover was now a polyfill.

If you are wondering what a polyfill is, don't worry. A polyfill is a JavaScript library you can load to backfill support for modern features when a visitor is using an obsolete browser.

Today, the techniques I covered around those single page applications are supported across just about every browser and supported well, even iOS Safari, using a service worker.

To understand why I take this position you need to understand classic server-side rendering, single page application rendering and how a progressive web app with a service worker changes or better yet improves the rendering process.

What is Web Page Rendering?

Web page rendering is where HTML (markup) is combined with data to create a web page and then displayed on screen. Today there are three places where a rendering step can occur:

  • Server
  • Client UI
  • Service Worker

There are two types of rendering involved in displaying a web page's pixels on the user's screen: creating the HTML, and converting the HTML, CSS and images into the pixels we see on the screen.

The latter is where most of the work occurs. The processes here are encompassed by the critical rendering path, which is more than I can reasonably cover here. Your goal is to limit the impact of this step, which is the most critical part of page speed.

To make the browser's work easier you need to give it the content in an optimized way. Browsers are great at rendering HTML and CSS really fast. Despite improvements to JavaScript engines, JavaScript is still a very slow channel compared to the others.

Popular JavaScript frameworks pack HTML, CSS, JavaScript and even images into JavaScript. This makes the browser's job very difficult and is the reason why so many sites are painfully slow today.

For now, let's dive into the three areas where your web page content is composed.

Server-Side Rendering

If you go back to the beginning, the web was just a collection of static pages returned by a server. The next phase was to include some server-side intelligence to not only render content but also allow data to be posted to the server. Think posting a form.

Server Side Rendered Web Page

The advantages to server-side rendered web pages are:

  • Little to No Effort to Display in the Browser
  • Small Network Payloads
  • Ability to Render HTML based on user and QueryString Parameters

The downsides include:

  • Requires Large Servers and Server Farms to Scale
  • If Rendered on Demand, May have Long Time to First Byte

For more than a decade this was how just about every website worked. This meant you needed a rather robust server to handle a large-scale website; we called this big iron back in the day.

Server-side frameworks like ASP.NET, PHP and many content management platforms have their roots in this model. WordPress is one of the more popular of these tools that still runs a large amount of the consumer web today.

Single Page Application Rendering

About 10 years ago developers discovered how to use JavaScript.

At the same time, we entered the mobile-first age and within a few years developers, me being one of them, spent thousands of hours working on creating client-side rendering techniques.

This was an effort to keep the web as close to native application experiences as possible. We wanted to eliminate the 'blank page' users experienced when a page navigation was triggered.

This moved the rendering process from the server to the client. Now an application that needed to scale to meet demand would do so by using the distributed computing power of the user's device, not the server.

Of course, we were outsourcing page rendering to the power and efficiency of the user's device, over which we have no control.

As time moved on in this era the fast food frameworks seemed to win the day. Today the average web page payload, according to HTTP Archive's latest stats, is nearly 2MB. But the reality is much worse. The HTTP Archive sample set is limited to the most popular sites, and many of the pages it tests are login pages; think Facebook.

Client-Side Rendered SPA Web Page

Without running down the page size rabbit trail, most modern web pages are very fat and out of shape. And this negatively affects your page speed.

The primary culprit is the JavaScript payloads.

The advantages to client-side rendered web pages are:

  • Little to No Latency Between Page Transitions, if done well
  • Ability to control/customize UI based on user actions and identity

The downsides include:

  • Large Payloads
  • Ties up the UI Thread, locking the page from user interaction
  • Very Slow Initial Render, often 20-30 seconds

Enter the service worker.

How the Service Worker Can Execute Page Rendering

The key feature of a service worker is its ability to intercept network requests and let you decide how they will be handled. At its most basic you can pass the requests to the server like a classic web server to handle rendering. At the other end you can create your own little web server, also known as a proxy server.
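
That spectrum can be sketched in a few lines. The route prefixes below are hypothetical; the point is that the worker inspects each request and decides whether to act as a pass-through or as its own little proxy server:

```javascript
// Decide how the service worker should treat a request.
// "network" = pass straight to the server; "cache-first" = act as a local proxy.
function pickStrategy(pathname) {
  if (pathname.startsWith("/api/")) return "network";     // live data
  if (pathname.startsWith("/img/")) return "cache-first"; // static assets
  return "cache-first";                                    // pages
}

// Worker wiring; runs only where the Cache API exists.
if (typeof self !== "undefined" && "caches" in self) {
  self.addEventListener("fetch", function (event) {
    var url = new URL(event.request.url);
    if (pickStrategy(url.pathname) === "network") {
      event.respondWith(fetch(event.request)); // classic pass-through
    } else {
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request); // serve locally when possible
        })
      );
    }
  });
}
```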

The service worker executes in a background thread, which means any work it performs does not interfere with or block the user experience, which is controlled by a separate thread.

How Service Worker Caching Works

I covered this advanced technique a couple of years ago when I created the Philly Code Camp PWA, my course and my PWA book. The conference schedule application's service worker intercepts each network request for a page and uses HTML templates and the cached schedule data to render pages without making a full round trip to the server.

I carried this architecture forward a little more in my PWA book last year. I have also used this model to varying degrees in real world applications over the past few years.

Service Worker Rendered Web Page

This approach creates a third option for a web page to be rendered, compiled or composed.

The advantages to service worker rendered web pages are:

  • Little to No Effort to Display in the Browser, no UI thread lock
  • Small or no Network Payloads
  • Ability to Render HTML based on user and QueryString Parameters, header values and more
  • No Latency Between Page Transitions
  • Ability to control/customize UI based on user actions and identity
  • Can work offline

The downsides include:

  • You need to maintain up to date cached resources

Why Service Worker Rendering Is Superior to JavaScript Frameworks

The primary problem we tried to solve with single page applications was the blank page scenario between navigations.

Single page applications try to solve that issue by utilizing an App Shell and attaching event handlers to either the 'hashchange' event or user gestures.

The hashchange scenario is typically what I used because it reduced the amount of code I needed to write and manage. I would capture the event, which provided the hash value that served as a slug or view id for my SPA engine.
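
That pattern looked roughly like this. The hash shape (`#/view/id`) and the `home` default are illustrative, not a standard:

```javascript
// Turn "#/products/42" into a view id and an optional slug.
function parseHash(hash) {
  var parts = hash.replace(/^#\/?/, "").split("/");
  return { view: parts[0] || "home", id: parts[1] || null };
}

// UI-thread wiring for a classic hashchange SPA router.
if (typeof window !== "undefined") {
  window.addEventListener("hashchange", function () {
    var route = parseHash(location.hash);
    // look up the matching view renderer and update the app shell here
  });
}
```

This is exactly the kind of faux navigation code the service worker lets you delete.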

If you capture a user gesture, like a button click event, you can drive a response, which might be updating a portion of the page or the entire page content.

JavaScript Framework Client-Side Rendering

Either way the SPA engine would have a router to map the actions to a 'reaction' or rendering process to update the content on the screen.

The main goal is to keep markup on the screen as much as possible so the user does not 'perceive' a navigation experience. Instead they, in most cases, see the retained app shell and the rendering process which could include a layout as well as page specific content.

As data is fetched and merged with markup it is drawn on the screen and any event handlers are attached.

There is no network request for the rendered markup; instead it is all hydrated in the client, which may fetch updated data (typically JSON) from an API to merge with one or more HTML templates.

This is a very expensive workflow and it locks up the UI. This is why sites using fast food JavaScript frameworks suffer from 20-30 second initial rendering cycles. Subsequent page renders are not that speedy either.

If you run a page using a fast food framework through a tool like WebPageTest you will see a yellow/goldenrod area on the CPU graph. This is the framework executing and locking the page's interactivity. Basically the page is frozen while the browser processes the framework code.

JavaScript Framework Yellow Slug

I picked up a term for this from the Google Chrome team: the 'Giant Yellow Slug'.

A proper service worker can do a combination of rendering solutions. It can pre-cache server-side rendered pages, render those pages in the service worker and cache the response ahead of time or render page content on demand and return the complete response as if it came from the server.
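
The first of those options, pre-caching, can be sketched like this. The URL list and cache name prefix are placeholders, not part of any real site:

```javascript
var CACHE_VERSION = 2;
var PRECACHE_URLS = ["/", "/offline.html", "/css/site.css", "/templates/layout.html"];

// Version the cache name so a new deploy can invalidate old entries.
function versionedCacheName(prefix, version) {
  return prefix + "-v" + version;
}

// Worker wiring; runs only where the Cache API exists.
if (typeof self !== "undefined" && "caches" in self) {
  self.addEventListener("install", function (event) {
    event.waitUntil(
      caches.open(versionedCacheName("pages", CACHE_VERSION))
        .then(function (cache) { return cache.addAll(PRECACHE_URLS); })
    );
  });
}
```

Rendering in the worker ahead of time or on demand builds on this same cache, swapping `cache.addAll` for template composition.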

The great news is, when done properly, there is no blank page scenario. In my experience I can serve service worker cached and rendered content in less than 50 milliseconds and often less than the 16ms required for a screen refresh on modern displays.

This means there is almost no time for a detectable 'blank page'. Plus, the rendering work is done in the service worker thread, not the UI thread. This frees the UI from being locked up and creating a janky experience users hate.

Instead of shipping several KB of scripts to more or less recreate the way browsers natively handle page navigation, using the service worker to create markup lets you remove that code. I know what it takes to manage this faux navigation in a JavaScript library; I wrote my own router years ago. I have also reviewed the code in React and Angular.

Rather than polyfilling this native behavior you can just let the browser handle it and use real URLs, which are always better than hash fragments and 100% JavaScript handled routes.

This is probably the key reason why popular JavaScript frameworks like React and Angular are obsolete.

Instead developers should focus on using the service worker to handle CPU intensive tasks like rendering markup and vanilla JavaScript solutions to attach client-side event handlers.

If you are wondering, yes, I use this architecture over and over to create consumer facing sites, apps and line of business applications. Many of the latter use concepts popular in single page applications, but with much less overhead.

I utilize a small battery of libraries to help with common UI tasks, like lazy loading images and content. My applications also use page or component controllers and services to handle non-UI specific tasks, like making fetch requests.

I use an MVC model and have for years. Even my largest applications typically can be completed with under 100kb (uncompressed) of JavaScript. I have built PWAs with 400 pages with that amount of script to manage.

Does this mean everything we have learned from these fast food frameworks should be thrown out?

Not completely. I like state management utilities like Redux and employ some of these libraries, or at least steal the architectures for my applications as needed.

But routing and UI thread rendering should be a thing of the past.

Service Workers Are Decoupled From The Client and the Server

So far I have covered three ways to render web page content:

  • server-side
  • client-side
  • in the service worker

Today most developers limit their understanding to their known universe, the server and client-side rendering model. They have not quite grasped how the service worker can change a website's architecture.

The answer is it doesn't have to, but it can eliminate or significantly reduce the work being handled by UI thread frameworks and server-side platforms.

This self-imposed limitation is not unexpected. When we were trying to sell single page applications and heavy client-side rendering many, including myself for some time, struggled with how to reconcile this approach with classic server-side rendering platforms.

The common misconception I continue to encounter is how the service worker works with 'framework X' or 'server platform Y'.

The answer is the service worker doesn't. It does not care.

The service worker is decoupled from both the server and the client composition workflows.

There is nothing a service worker needs to be able to integrate with either one of these sides. You can still keep on trucking with a classic CMS or a fast food framework if you really want.

The bad news is sticking to these classic approaches does make things a little more interesting, at least in my experience.

Trying to Wedge an Old Shoe on a New Foot

Many frameworks, both server-side and client-side, have added components, classes and other extensions and modules to 'make them a PWA'. I have reviewed many common ones, like those for WordPress, Angular and React.

They make me laugh and cry at the same time.

I see a lot of fuss and ceremony being put into these components to register a service worker that has little to no value. Some don't even add a valid web manifest file to the site, much less a full set of homescreen icons.

The general messaging seems to be these frameworks and platforms are tightly coupled with the service worker.

This is false.

The service worker should be completely decoupled from either end. Each one of these layers should be able to function independently from the others.

In other words if you change your server-side platform the client and service worker should not need to be changed.

The problem with these approaches is they over-complicate the service worker registration and undersell the service worker's value.

Here is where the ambiguous nature of what a progressive web application is comes into play. These framework and server platform add-ons are trying to get the website to merely pass a test that qualifies it as a 'PWA'. They are not really making the site what I would call a real progressive web application.

In general, this is what they do:

  • add a reference to a minimal web manifest file
  • may have you add 4 or 5 icons for the homescreen (I ship over 100 on my sites)
  • add a super simple service worker script
  • register the script

I demonstrated this when I showed how to upgrade the HTML5 Boilerplate project to be a progressive web app. This was a simple example to show how to meet the minimal technical requirements.

Unfortunately, this is where most sites seem to stop.

The super simple service worker these plugins use typically does one thing: it caches every network request and passes uncached requests along to the server. A very, very naive approach.
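
In code, that naive approach amounts to roughly this sketch (the cache name is arbitrary). Note it never revalidates anything, which is exactly its weakness:

```javascript
// Only GET requests are safe to cache; POSTs and friends must hit the server.
function shouldCache(method) {
  return method === "GET";
}

// Worker wiring; runs only where the Cache API exists.
if (typeof self !== "undefined" && "caches" in self) {
  self.addEventListener("fetch", function (event) {
    if (!shouldCache(event.request.method)) return; // let the browser handle it
    event.respondWith(
      caches.open("naive-cache").then(function (cache) {
        return cache.match(event.request).then(function (cached) {
          if (cached) return cached; // never revalidated: the core weakness
          return fetch(event.request).then(function (response) {
            cache.put(event.request, response.clone());
            return response;
          });
        });
      })
    );
  });
}
```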

Lighthouse PWA Testing

They do this because the Lighthouse test is looking for a service worker and a valid response (not 404) when the device is offline.

Note: none of the plugins or extensions can add HTTPS; you still need to do that on your server. If you don't, nothing else really matters anyway.

The Lighthouse tool, which is part of Chrome and the new Edge, can run a battery of tests against a URL and give you a report rather quickly. Press F12 and select the 'Audits' tab. You can check just for PWA compliance along with a few other areas, like page speed.

Lighthouse Scorecard Metrics

These are all baseline tests, very generic and minimal. The reason is a broad testing tool like this cannot test specific applications. It has to provide some basic knowledge about every website.

In other words, Lighthouse is not smart enough to understand how to use your site. You can train a testing agent to drive your site to provide a better audit, but that will take time and knowledge of tools using WebDriver.

When you really want to be a progressive web application you must take the time and care to deliver the best user experience possible.

This means if a feature is supported you use it, and if it is not your site or application won't completely break.

No two sites are alike, so tests like Lighthouse only get you so far.

Even for a typical WordPress based blog it is very difficult to craft a viable service worker because so many WordPress sites use a multitude of plugins that quite frankly do some stupid stuff. Most notably they make use of way too many scripts, images and CSS files.

All these support assets along with API calls need to be handled differently. You cannot just cache everything as it is requested and call it a day. Each request type, page, image and data should have its own invalidation rules and caching strategies.

Some applications I develop have dozens of different rules. I can also create dozens of expression matches to control how long a response can be cached before requesting an update from the server.

routeRules = [{
    "route": /category\/\?/,
    "strategy": "fetchAndRenderResponseCache",
    "options": {
        "pageURL": "templates/category-page.html",
        "template": "templates/category.html",
        "offline": "offline-category.html",
        "api": function (request) {
            return ffAPI.getEvent(getParameterByName("id", request.url));
        },
        "invalidationStrategy": {
            "ttl": 9999999,
            "maxItems": 50
        },
        "cacheName": categoryCacheName
    }
}];

The service worker is really a proxy web server that sits between the user experience and your server. This is why Patrick titled his Velocity session on service workers 'A Proxy Server in Your Pocket'.

This also means most of the components, plugins and extensions selling themselves as a way to make your website a progressive web application are more or less selling you snake oil.

A High-Level Review of Love2Dev PWA Architecture

I wanted to give a simple, high level view of a typical progressive web application I create. It uses static web pages and HTML templates on the server (going real old school). I render these pages using a serverless workflow of Lambdas on AWS, which saves the final product to S3.

From there the pages are served using the AWS CDN, CloudFront.

Love2Dev Website Rendering Workflow

For data I use an API, typically adhering to a REST pattern that again uses Lambdas to query either DynamoDB or S3 for JSON data.

This JSON formatted data can be used to render a page either in the service worker or on the client. It all depends on the application persona.

Often, I can pre-render a complete page, like this blog post with the serverless workflow. For line of business applications, I tend to use the app shell model a bit more and lean on my single page application experience to fetch markup and insert it in the DOM as needed. It means my client-side JavaScript requirements are minimal.

It might look like this:

function renderUpdate(html, target) {
    var $target = document.querySelector(target);
    $target.innerHTML = html;
    //update event bindings as needed here
}

More and more I am moving this step to the service worker. The app-shell, page layouts and page or component templates are all cached for the service worker to access. The service worker pre-emptively renders as much of the page as possible, reducing the load on the UI thread.

Yes, I can even do this when content needs to be personalized for the individual user. This is because I persist user profile info, session data, etc. in IndexedDB. This is a document database available to both the UI and the service worker.

What the service worker cannot do is bind events to elements in the DOM. The service worker does not have access to the DOM.

This is not an excuse to use a large framework. You can still use addEventListener, which the frameworks abstract behind hundreds of lines of code. Even my most complicated page controllers are less than 1000 lines of code.

See, no need for a 500kb framework. I could also render the entire response in the service worker, similar to the technique I demonstrate in the Philly Code Camp PWA or more recently in the PubCon schedule application.

The service worker does a lot of heavy lifting to see if cached data or responses are available and still valid. It also needs to test whether the device is actually online or it needs to send an updated or augmented offline response.
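
That online/offline decision reduces to a small helper plus a fetch fallback. A sketch, assuming a pre-cached `/offline.html` page; note that real code treats a failed fetch as the offline signal, since `navigator.onLine` is only a hint:

```javascript
// Decide where a response should come from.
function responseSource(hasCached, isOnline) {
  if (hasCached) return "cache";
  if (isOnline) return "network";
  return "offline-fallback";
}

// Worker wiring; runs only where the Cache API exists.
if (typeof self !== "undefined" && "caches" in self) {
  self.addEventListener("fetch", function (event) {
    event.respondWith(
      caches.match(event.request).then(function (cached) {
        if (cached) return cached;
        return fetch(event.request).catch(function () {
          // Network failed: serve the designated offline page.
          return caches.match("/offline.html");
        });
      })
    );
  });
}
```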

Some of my service workers have thousands of lines of code and import several external libraries, like Mustache and localForage, to help the process. They can get complicated even for some of the simplest sites.

This does not include code to manage push notifications and background sync.

Remember, a service worker is really its own little server instance, if you will. Personally, I think of it as a browser's Lambda or Functions platform because service workers are stateless and operate much like an AWS Lambda or Azure Function.

With this power you have the ability to create rather sophisticated logic on the user's device without locking the UI thread. You are in control of how much network chatter you create and can offload many tasks commonly done in the UI thread by heavy frameworks.

Now instead of rendering in the client on the UI thread you can render in the service worker and offload that work to another thread. As a plus you can then cache the result for future use. This is exactly how a server-side platform works.


Think about how your pages are composed or compiled by your existing infrastructure. Can you move most tasks to the service worker and reduce both the server-side load and the UI thread demand?

Can this help you scale your site because you are able to eliminate CPU and network latency delays?

Can you eliminate code that is just dragging your user experience down and make it better?

The key takeaway is to understand there are three distinct segments or layers to a progressive web application: the server, the service worker and the client-side. Each one can be used to render the page content, and each operates independently from the others.

The challenge is to identify how to use each layer to render your site pages as efficiently as possible.

You also need to understand they are not tightly coupled, to use a developer term. They exist on their own and communicate with the other layers as needed.
