BingBot Migrates to Using Chromium-Based Microsoft Edge

Bing & Edge Spidering

Today at PubCon Las Vegas, Microsoft made a major announcement concerning Bing and the new Edge browser. BingBot will now use Edge as one of the rendering engines it relies on to evaluate and index pages. This means all sorts of potential changes to the way your content is indexed, as well as to the amount of information Bing can collect and analyze.

The goal going forward is to stay on, or as close as possible to, the current version of Edge.

Remember, Bing powers over 35% of the US search market, including Yahoo and AOL search results. You cannot afford to ignore this reach, and understanding how its spider works will help you craft pages that rank well.

This move echoes Google's recent change to update GoogleBot to Chrome 77. If you are unaware, GoogleBot had been fixed on Chrome 41 for years.

In fact, because Edge is migrating to the same rendering engine as Chrome, Chromium, Bing will be rendering pages almost exactly the way Google does.

A key reason GoogleBot was frozen on Chrome 41 is that the update cadence is fast: Chrome updates every 6 weeks. If you know anything about enterprise IT and browser updates, you know where I am going.

Large organizations cannot just deploy every update when updates arrive as often as every 6 weeks, because things break.

Both Microsoft and Google fall into this category, and they are the ones creating the browsers. So instead of breaking an important and probably fragile process, they tend to stay with what works for a long time, probably longer than they should.

So for both companies to adopt an evergreen browser approach is laudable, and I can only assume a huge challenge.

This does not mean Chrome and Edge will be the only rendering engines used by these search engines. Both Google and Microsoft utilize a collection of different rendering tools to process pages for search indexing.

The primary feature everyone tends to fixate on with these changes is the ability of search spiders to execute a page's client-side JavaScript. Today's popular frameworks have caused all sorts of problems for search engines because they hide a page's content behind JavaScript.

By updating to the latest browser versions, the spiders will be able to execute JavaScript more efficiently, but this is not a panacea when it comes to JavaScript frameworks. I will come back to this.

The advantage these upgrades will provide is the ability to render pages that use modern web APIs. For example, I use IntersectionObserver to lazy load images and other 'below the fold' content. This is a relatively new API that older browsers like Chrome 41 or Internet Explorer do not support.
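
To make that concrete, here is a minimal lazy-loading sketch using IntersectionObserver. The img.lazy selector and the data-src attribute are assumptions for illustration, not part of any particular framework.

    // Load the real image only when it approaches the viewport.
    const lazyImages = document.querySelectorAll('img.lazy');

    const observer = new IntersectionObserver((entries, obs) => {
      entries.forEach(entry => {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        img.src = img.dataset.src;      // swap in the real source
        img.classList.remove('lazy');
        obs.unobserve(img);             // stop watching once loaded
      });
    }, { rootMargin: '200px' });        // start loading a bit before the image scrolls into view

    lazyImages.forEach(img => observer.observe(img));

A spider stuck on an old engine never fires these callbacks, so it never sees the images; a Chromium-based bot can.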

There are hundreds of these new APIs and CSS rules that have been added over the past 4 or 5 years. Unless you load a polyfill for the ones you use, the spider just sees an exception or a poorly rendered version of the page.
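
A common defensive pattern is to feature-detect and only load a polyfill when the API is missing. In this sketch the script path and the initLazyLoading function are placeholders, not specific recommendations.

    // Load a polyfill only when the browser (or spider) lacks the API.
    if ('IntersectionObserver' in window) {
      initLazyLoading();                                      // hypothetical init from the sketch above
    } else {
      const script = document.createElement('script');
      script.src = '/js/intersection-observer-polyfill.js';   // placeholder path
      script.onload = initLazyLoading;
      document.head.appendChild(script);
    }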

As I said, there are new APIs and features available. Some of them expose performance data, such as how long it takes to render the page. The spider can now log data like this to build a better picture of the overall user experience.
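
For example, any page running in a modern Chromium engine, headless or not, can read render timings through the standard Performance API. This is a hedged sketch of the kind of data that becomes available, not a description of what Bing actually records.

    // Navigation timing: milliseconds from navigation start to key render milestones.
    const [nav] = performance.getEntriesByType('navigation');
    console.log('DOM content loaded:', Math.round(nav.domContentLoadedEventEnd));
    console.log('Load event finished:', Math.round(nav.loadEventEnd));

    // Paint timing: first-paint and first-contentful-paint.
    performance.getEntriesByType('paint')
      .forEach(entry => console.log(entry.name, Math.round(entry.startTime)));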

For marketers and developers this means you need to emphasize page speed and render times in your requirements. Speed is the easy part; you will probably need to provide better and better usability as well, including accessibility, layout and more.

Now, back to the fast food JavaScript framework concept.

Yes, the search spiders will be able to evaluate pages that use a fast food framework to render the content. This does not mean they will do it well. Remember, these are headless browsers: they do not actually paint the rendered result to the screen, they just load the page and provide data to be logged.

According to the support grid shared by Fabrice Canel, both BingBot and GoogleBot will be able to evaluate several popular fast food frameworks, including React and later versions of Angular. He also included jQuery and, of course, my favorite framework, Vanilla JavaScript.

This comes with caveats.

If you hide content behind user actions or obfuscated links, it will not be evaluated.
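
As an illustration, content that only appears after a click handler runs is effectively invisible to a crawler, while a plain anchor with a real href is always discoverable. The element ids and endpoint below are made up for the example.

    // Content fetched only after a click: a spider will not press the button.
    document.getElementById('load-specs').addEventListener('click', async () => {
      const html = await (await fetch('/api/specs')).text();   // hypothetical endpoint
      document.getElementById('specs').innerHTML = html;
    });

    // By contrast, a plain crawlable link:
    //   <a href="/products/widget/specs">Widget specifications</a>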

Even with these upgrades, search platforms like Bing and Google still emphasize server-side rendering over client-side. So don't go nuts thinking you, as the developer, can force a fast food framework into the site. You will still lose to a competitor without a framework.

Why?

Their pages load faster, much faster.

Content is not hidden behind user actions.

Even if you use a router module that gives the framework real URLs, you are still at a disadvantage compared to pages served directly from real URLs.

Why?

Because the JavaScript still has to be executed. Even if the bot is using a modern browser, a single thrown exception can stop that JavaScript from running. And trust me, more sites than you think have exceptions that keep their client-side scripts from executing.
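
You can at least make these failures visible by logging uncaught errors. A minimal sketch follows, assuming a hypothetical /log-error endpoint on your own server.

    // Report uncaught exceptions and unhandled promise rejections so broken scripts are not silent.
    window.addEventListener('error', event => {
      navigator.sendBeacon('/log-error', JSON.stringify({
        message: event.message,
        source: event.filename,
        line: event.lineno
      }));
    });

    window.addEventListener('unhandledrejection', event => {
      navigator.sendBeacon('/log-error', JSON.stringify({ message: String(event.reason) }));
    });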

One final note about JavaScript and SEO: if your page takes a while to render, visitors won't stay and will pogo-stick back to the search results. Search engines are very aware of this behavior. If they detect your page takes 'too long' to render, they will give up.

This means you most likely will not be indexed, leaving you completely out of the search canon. The exact evaluation time-out varies, but if your page takes longer than 10 seconds to render on an average mobile phone, I think it is fair to say you won't have a chance to be ranked.

If you have systems in place to serve pre-rendered content, or slightly different content, to a search spider based on the user agent, you might want to reconsider.

First, why would you serve different content?

Well, it comes back to client-side JavaScript rendering. Since this approach renders content after the page loads, the only thing a spider might see is an empty shell void of any tangible content.

By serving a pre-rendered page you are giving the spider the content you want to rank.

User agent sniffing refers to the act of inspecting the request to determine what browser or client-side tool is requesting the resource. Each browser has a unique user agent string, as do spiders.

A trick over the years has been to serve browser-specific content based on this string. However, this can lead to all sorts of problems. It is actually why Windows 9 was skipped: too many applications checked for the presence of '9' in the operating system name to detect Windows 95 and 98.

While the user agent string you see may indicate a search engine spider, the spider may also crawl with the browser engine's ordinary user agent string, which means your 'special' content will not be fetched.
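
If you do go this route, the typical pattern looks something like the Express-style sketch below. The bot pattern and the getPrerenderedPage helper are assumptions for illustration, and, as just noted, user agent matching alone is not reliable.

    // Express-style dynamic rendering sketch: serve cached, pre-rendered HTML to known spiders.
    const express = require('express');
    const app = express();

    const BOT_PATTERN = /bingbot|googlebot/i;   // simplistic on purpose

    app.use(async (req, res, next) => {
      const ua = req.headers['user-agent'] || '';
      if (BOT_PATTERN.test(ua)) {
        const html = await getPrerenderedPage(req.path);   // hypothetical cache lookup
        return res.send(html);
      }
      next();   // regular visitors get the client-rendered application
    });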

Microsoft's move to the modern, Chromium-based Edge browser is a big announcement because it opens up many new abilities for the search engine to evaluate sites for search rankings.

This will make it easier for brands using modern web features like IntersectionObserver, CSS variables and more to have their content evaluated correctly.

It will also make it a little easier for the spiders to run modern JavaScript frameworks, but that does not create a green light to use these frameworks.

Of course, don't forget to use the Bing Submission API to get your pages crawled by BingBot within a few seconds of a change.
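
A hedged sketch of what a submission call can look like follows. The endpoint and payload shape follow the Bing Webmaster JSON API as I understand it, and the key and URLs are placeholders, so verify against the current documentation.

    // Ping Bing when a page changes so BingBot can recrawl it quickly.
    const API_KEY = 'YOUR_BING_WEBMASTER_API_KEY';   // placeholder

    fetch(`https://ssl.bing.com/webmaster/api.svc/json/SubmitUrl?apikey=${API_KEY}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        siteUrl: 'https://www.example.com',
        url: 'https://www.example.com/updated-page'
      })
    }).then(response => console.log('Submitted:', response.ok));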

If you want to understand more about how Microsoft Edge works, they have a GitHub repo of explainers that dive into different internals that might help.
