Frameworks like Angular, React, Polymer and Vue are popular with developers. Unfortunately, they negatively impact your search results and ability to engage with customers.
Most marketers, business owners and stakeholders are not technical enough to have an intelligent discussion with developers about real business goals and how the use of these frameworks impedes achieving those goals.
I think John Mueller is even lamenting the web's future if the popularity of these frameworks, and the performance hit they impose, continues to grow.
Given the number of SEOs who have a good grasp of technical SEO for static HTML, if you want to differentiate yourself, learn how JavaScript works: where it coexists with SEO, where it blocks SEO, and what its effects are on other search engines and clients.
Frameworks are designed to appeal to developers, not business goals.
As a marketer or business stakeholder what can you do?
How can you either communicate your requirements or hire the right developers to help you achieve these goals?
This article will most likely irritate developers but help business owners and stakeholders have a better technical understanding of how the web works so you can engage your development team on a more level platform.
I plan on answering the following questions and providing even more insight on related topics:
- How frameworks affect your search engine optimization results.
- How to talk to your web developers about reaching your online business goals.
- The Impact of Frameworks on User Experience and SEO
- What are Single Page Applications
- Single Page Applications & Deep Linking
The Impact of Frameworks on User Experience and SEO
Google's search team has emphasized how frameworks and the use of single page web applications impede their ability to index your pages. They also emphasize how page rendering time is very important for SEO and customer engagement.
- Single Page Apps Tend to Not Have Good URL Practices
"When a website is slow, those lost conversions and ad dollars don’t just evaporate — they go to a competitor." – Rick Viscomi
Unfortunately, the guidance and research provided by these teams is largely ignored by developers. I know because developers give me grief about this topic all the time.
The Google, Microsoft and other browser teams have to tread lightly so they don't 'piss off' what they consider a key constituency: developers.
I don't care if I rub fellow developers the wrong way. I want the web to work well because it benefits everyone.
My goal is to deliver the best user experience possible to achieve business goals first, not to have a good time as a developer.
My version of a SPA focused on UX and loading fast. I even followed the now deprecated Google AJAX crawling specification.
Google can render and does get around to indexing single page application content. There is just no guarantee if and when the pages will be rendered.
Let me translate the messaging from Mountain View to business terms:
"If you search for any competitive keyword terms, it’s always gonna be server rendered sites. And the reason is because, although Google does index client rendered HTML, it’s not perfect yet and other search engines don’t do it as well. So if you care about SEO, you still need to have server-rendered content."
"Sometimes things don’t go perfectly during rendering, which may negatively impact search results for your site." "In December 2017, Google deindexed a few pages of Angular.io (the official website of Angular 2). Why did this happen? As you might have guessed, a single error in their code made it impossible for Google to render their page and caused a massive de-indexation."
Client-side rendered (CSR) pages are delivered to browsers and spiders with no content. This is what a typical single page application site looks like to a search engine:
```html
<head>
  <!-- meta stuff goes here -->
</head>
<body>
  <header>
    <!-- header/main nav goes here -->
  </header>
  <!-- blank space where your content should be, will be rendered later by the framework -->
  <footer>
    <!-- footer stuff goes here -->
  </footer>
</body>
```
Do you see what is missing?
That's right, your content, the stuff you need the search spider to read so you can get indexed!
None of the Google search team members has revealed how they determine if and when to return to a page and fully execute your scripts. My guess is it depends on how many links they detect to the page, how much authority the domain has and the quality of existing content.
There is probably much more to it, but those are probably high level criteria.
But wait, there is more!
How do SPA links work?
How do Single Page Applications work, period?
I already showed you what your SPA markup looks like. Now it is time to learn what a single page application is and how they evolved.
What are Single Page Applications
At the beginning of the decade the web was facing its biggest competition ever, mobile apps. At the time browsers and web standards were very fragmented. About 10 years ago jQuery was rapidly adopted by most web developers as the de facto way to create modern websites.
We (developers) went nuts.
After a year or so using jQuery I became very comfortable creating highly interactive user experiences. I also started doing more and more AJAX, where you call an API to get data from the server.
Like many others I realized I could fetch raw data, render it in the browser and update the markup without needing the user to load a new page.
In 2010 I built my first mobile-first, single page application (SPA). At the time no one called them SPAs, I was sort of a pioneer. I launched an application with over 400 views (pages) that loaded really fast over 3G.
I knew I was onto something at the time. Performance and user experience were, and still are, a primary requirement for any of my projects.
Fast forward a year or two and we started seeing the SPA term used more and more. jQuery was not really suited for this new type of web application.
Plus, most 'web developers' are not really web developers, they are back-end developers.
So frameworks started to emerge from the ooze of the minds of back-end developers...more on this in the next section.
Single Page Applications & Deep Linking
This technique was originally designed in the early days of the web to provide a way to jump to different parts of a web page from a table of contents.
Back in the early 90s, when I first started writing HTML, the web was dominated by academic content. There were thousands of research papers available online. These are long documents, typically with a table of contents early in the document. The table of contents typically had these jump or hash fragment links setup to allow readers to jump to a sub-topic without scrolling down the page.
For the record, I use this technique as it was originally intended in my articles. Just check the list at the top of this article.
When a user clicks one of these links a hash fragment is added to the URL. This made it easier to share URLs to quote or reference specific sections in a page. The hash fragment value is not passed to the server, it is only a signal to the browser.
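A minimal example of the original technique (the element ids and headings are illustrative):

```html
<!-- Table of contents with jump (hash fragment) links -->
<ul>
  <li><a href="#intro">Introduction</a></li>
  <li><a href="#results">Results</a></li>
</ul>

<!-- Clicking "Results" appends #results to the URL; the browser
     scrolls here without making a new request to the server. -->
<h2 id="results">Results</h2>
```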
Fast forward to the modern web. Browsers added the 'hashchange' event when the URL hash fragment changed.
SPAs rely on the hashchange event to drive client-side rendering processes. Developers may also trigger DOM manipulations based on user activity, like clicking links and buttons or entering values in a form.
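Here is a minimal sketch of that pattern in plain JavaScript. The route names and view labels are hypothetical; real frameworks layer far more machinery on top of this idea:

```javascript
// Pure routing logic: map a location hash to a view.
// Routes and view names here are illustrative only.
function resolveView(hash) {
  // Strip the leading '#' (or the Twitter-style '#!' prefix).
  const slug = hash.replace(/^#!?/, '') || '/home';
  const routes = {
    '/home': 'Home View',
    '/products': 'Products View',
  };
  return routes[slug] || 'Not Found';
}

// In a browser, wire the resolver to the hashchange event so that
// clicking an <a href="#!/products"> link re-renders the content area
// without a full page load.
if (typeof window !== 'undefined') {
  window.addEventListener('hashchange', () => {
    document.querySelector('main').textContent = resolveView(location.hash);
  });
}
```

Because only the hash changes, the server never sees these "page" transitions, which is exactly why spiders struggle with them.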
Twitter is even credited with inventing the 'hashbang', #!, most SPAs use to differentiate a traditional anchor from a SPA anchor.
Single page applications typically use an App Shell model. This is where the core page layout is rendered first. Then each view or page's content is rendered in the main content area. Parts of the main layout may be altered as the application lives in the browser, but for the most part it is static.
Unfortunately for SEO the spider must 'figure out' all the client-side URLs.
I say unfortunately, but you, the business owner, are the unfortunate one, because spiders don't bother executing the code to trigger navigation changes.
Google recommends you use server-side rendering and real anchor tags with real URLs.
The new guidance is really a simplified version of their AJAX Crawling specification, which they deprecated a few years ago. That policy had you convert the SPA slug (hash fragment value) to a query string parameter and configure your server to look for this query string value.
When the server detected the parameter it would use that value to render the content on the server and give that to the spider.
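For reference, the URL mapping the deprecated AJAX Crawling specification defined (hashbang to `_escaped_fragment_`) can be sketched in a few lines of JavaScript. The example URL is hypothetical:

```javascript
// Convert a hashbang SPA URL into the '_escaped_fragment_' form the
// deprecated Google AJAX Crawling spec told crawlers to request.
function toEscapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // not a hashbang URL, nothing to do
  const base = url.slice(0, i);
  const frag = encodeURIComponent(url.slice(i + 2));
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + frag;
}

// The spider would request the converted URL, and the server was
// expected to return fully rendered HTML for that fragment.
// e.g. 'https://example.com/#!/products/42'
//   -> 'https://example.com/?_escaped_fragment_=%2Fproducts%2F42'
```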
It was complex and I think I may have been the only developer with a system that supported this technique.
This means Google wants you to render the pages to a static web site and use the statically rendered pages instead of client-side rendered pages. At this point you can also eliminate the need to load the framework code on the client as well since most of their work was done on the server.
This is the model I migrated to a couple of years ago. I sat back and looked at all the code and workflow I needed to manage just to allow content to be rendered both on the server and the client and thought, this is a waste of time and energy.
I was right.
Your developers may need to lie to their developer friends to stay cool, but that's OK. Your bottom line matters more.
Even if your site does not use a framework you can follow the SSR/static site guidance for better organic results.
Look, Google is trying to appeal to developers because they fear they have lost the battle against fast food frameworks. So, they are doing an end run to get content they can index and maybe, just maybe sites people will use. They even give developers more code to play with!
This is a big part of what AMP is doing.
My advice, avoid using single page applications and frameworks if SEO matters to your business.
We started seeing new frameworks emerge that appealed to these back-end first developers. Angular was the first popular framework. It was soon followed by React. Others came along, but failed to get as much traction as these two have. I even loosely called my collection of small libraries and architecture 'Love2SPA'.
Note: Angular and Polymer were created at Google. This does not mean the search or browser teams are fans of these frameworks. At any large enterprise there is 'internal strife' executives tend to gloss over.
Do as I say, not as the team in the other building does
Unfortunately, these frameworks were designed without considering business (UX) goals and how browsers work. Instead they focused on creating a 'great' developer experience.
Their architecture largely borrows from back-end best practices and tries to shoe-horn them onto the browser. The problem is they are largely built orthogonal to the way browser plumbing works.
This means these frameworks have to add thousands of lines of code to get server-side 'best practices' in the browser instead of leveraging what the browser natively offers.
This is why pages often feel locked up or jumpy as you scroll them. If you wonder why a web page takes 30-60 seconds to render, most likely one of these frameworks is to blame.
The technical term for this is 'jankiness'.
Often these frameworks are extended with additional, poorly optimized, libraries and components. jQuery suffers from a similar phenomenon with its plugin ecosystem. These components compound the weight these frameworks require.
Developers are lazy and suffer attention deficit disorder en masse. This means they tend to dump multiple frameworks on a web page. I am finding it more and more common to see 2-4 frameworks stacked on top of jQuery and other 'helper' libraries.
Not to mention the cost to develop and maintain these Frankensteins is tremendous.
If the script modifies the DOM (HTML structure) it restarts the critical rendering path, causing further delays. These frameworks almost always cause this expensive restart.
Developers almost always work on high-speed machines with i7 CPUs and 16GB of RAM. Consumers tend to use cellphones with far less powerful CPUs and memory constraints. The consumer experience amplifies any performance tax exponentially.
Ask them to test on a Moto G5-class phone over 3G. If your page can reach time to first interaction in less than 5 seconds, things are looking good. If not, make them keep working on the site.
I will show you how you can do this without buying a phone later!
You have 3 seconds to impress. Scientifically, you actually have 1 second, because that is the point where the mind starts to perceive latency.
This is why, at the 3-second mark, half of a page's visitors have left; they assume the page is not reliable.
Most tend to look at page speed in terms of how long a response takes to go from the server to the client, called time to first byte.
This is wrong.
A typical web page's load and rendering profile has about a 5% time to first byte allocation. This means 95% of the time it takes to render a web page is how long it takes to process all the network responses, paint the content on the screen and get out of the way.
"80-90% of the end-user response time is spent on the frontend. Start there." – Steve Souders
When a script is encountered the browser requests it from the server, which should take 200-1000ms over good broadband. But most users are not on broadband today; they are on cellular connections. This means you need to budget 1-3 seconds for a script file to download.
Multiply this by 5-10 if you have one of those 50MB scripts!
You are not done, not even close.
Now the browser stops all rendering related tasks and waits for the script to be processed.
If you have more than one script and most pages have dozens, this process is repeated.
Even if the markup and CSS have rendered some items on the page it will feel frozen. You cannot even scroll the page.
To a consumer this indicates the page is broken. They don't wait, they leave.
Meanwhile they have paid to download your bulky payload and received no benefit.
The business has also paid for server capacity and bandwidth with nothing to show for the effort.
Both entities are taxed so the developer could have 'fun'.
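For context, this blocking behavior is the reason the standard `defer` and `async` script attributes exist: a bare script tag halts parsing and rendering, while the variants let the browser keep working. The file names below are illustrative:

```html
<!-- Blocks parsing and rendering until fetched and executed: -->
<script src="app.js"></script>

<!-- Downloads in parallel, executes only after the document is parsed: -->
<script src="app.js" defer></script>

<!-- Downloads in parallel, executes as soon as it arrives (order not guaranteed): -->
<script src="analytics.js" async></script>
```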
Ways You Can Measure Web Performance Without An Advanced Engineering Degree
Site owners need a way to audit their sites so they can track progress and communicate to developers.
There are many free tools available to measure web performance. I see many recommended in the marketing world, but honestly, they only seem to focus on time to first byte, not the remaining 95% of the puzzle.
There are two free tools I recommend everyone involved in a website learn to use, WebPageTest and browser developer tools.
WebPageTest is a free site you can quickly audit any public URL from multiple data centers around the world and from a wide variety of browsers and devices. In short you can easily see how your pages render for any of your potential visitors.
The problem with WebPageTest is how much data is collected and reported back. It is overwhelming, even to developers. But there are a few key numbers and visuals you should focus on.
- The Report Card (shoot for A's)
- Time to First Interactive (Document Complete)
- The Network Waterfall (fewer requests and straight down is best)
- Speed Index (1000 or less is perfect)
- Filmstrip View (see when your content renders)
I won’t go into details on these items in this article. I do that in other posts, including one analyzing airline websites.
When you run your test, I recommend selecting the 'Chrome' tab and enabling the Lighthouse test. This will give you extra data, including how well you score as a progressive web app.
Pro Tip: Go to https://www.webpagetest.org/easy.php to have a preconfigured environment for your page's test.
You can run a Lighthouse test right from your local Chrome browser.
Each browser has built in developer tools, which include performance auditing features. Again, I won’t dive into these tools today, and they can be very technical.
Lighthouse does a good job surfacing meaningful data in a way anyone should be able to understand. The tool reports on many user experience metrics and breaks them into categories like Progressive Web App, Best Practices, Accessibility and even a token SEO section.
Both Lighthouse and WebPageTest can be run locally using node modules. WebPageTest can also be stood up locally using a virtual machine or container.
These come in handy when you have an automated test script or continuous build process. The VMs and containers also make it possible to run WebPageTest against internal business sites.
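As a sketch of how simple the local Lighthouse route is (this assumes Node.js and npm are installed, and `example.com` stands in for your own URL):

```shell
# Install the Lighthouse CLI globally via npm.
npm install -g lighthouse

# Audit a page; simulated mobile throttling is the default.
# Write a JSON report you can archive and compare over time.
lighthouse https://example.com --output=json --output-path=./report.json
```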
I highly recommend making them part of your acceptance tests. As a stakeholder you should also consider adding a performance budget to your developer requirements. You can tie this budget to tools like WebPageTest and Lighthouse.
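Lighthouse supports exactly this kind of performance budget through a `budget.json` file passed with its `--budget-path` flag. A minimal sketch, with illustrative numbers tied to the 5-second time-to-interactive goal mentioned earlier:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 }
    ]
  }
]
```

Timings are in milliseconds and resource sizes in kilobytes; the audit report then flags any page that blows the budget.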
These are just two of my favorite tools. There are others like WebHint and the detailed audit tools baked into browser tools.
Some of the commonly recommended tools in the SEO space, like Pingdom, GTMetrix and even the Google Speed Test tool are sort of useless.
The Google tool simply does not provide enough insight into the metrics it reports.
Pingdom and GTMetrix really just measure time to first byte, which is important, but accounts for roughly 5% of a page's average render time.
Single Page Applications can obfuscate links to content as well as delay your ability to have content indexed.