How To Reduce the impact of third-party code

Updated on June 28, 2024

One of the most time-consuming PageSpeed warnings for developers to resolve is Reduce the impact of third-party code, an issue that stems from JavaScript files placed on a site by external services. These scripts are often slow to load and disrupt user journeys, so Google rightly identifies them as problematic and lowers the PageSpeed score.

However, scripts are some of the most important tools you can have for tracking and targeting, so they do carry significant business value. As always, there are techniques that can minimize their impact and find a middle ground that satisfies both business and user experience goals.


The problem with third-party scripts

Analytics, tracking, advertisements, A/B testing and social share buttons are key components of websites - it would be very difficult to operate a blog without knowing how many people are viewing it or which content is performing best. This functionality is added to a site by pasting a script tag from a third party, such as Google, into the HTML.

A third-party script is defined as any script hosted on a domain that is different from the domain of the current URL, so even if you serve assets from an alternative domain that you manage yourself, such as cdn.mysite.com, they will still be treated as third party.

Image courtesy of web.dev
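For example, the Google Analytics tag is added by pasting a snippet along these lines into the head (the measurement ID here is a placeholder):

<!-- Hosted on googletagmanager.com rather than your own domain, so it counts as third-party code -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"></script>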

By default, script tags are render-blocking resources - when a browser encounters a script tag while processing a page, parsing of the HTML is paused while the script is downloaded and executed.

Typical browser processing of a script tag

When the main thread is blocked for 250ms or more, the user experience suffers and PageSpeed Insights will report it as an issue.

Why so slow?
Loading a third-party script takes time because a network connection to the external URL has to be established, which involves a variety of steps before the request even reaches the server that hosts the file. From there, the code has to travel back to the browser.

The worst offending scripts can also make sub-requests, fetch unoptimized images and transfer everything uncompressed. Only once the code has finally downloaded can the browser process it, which involves parsing and executing the JavaScript - itself an expensive operation. A large DOM tree extends JavaScript processing time further, and because the script is third party, you have no control over its cache settings.

With each additional script, the impact multiplies.


Identify slow scripts

The first step in reducing the impact of third-party code is auditing the scripts on your site to see which are slow.

Open dev tools > Lighthouse > Generate report

The dropdown for Reduce the impact of third-party code will show the offending scripts:

Blocking scripts on nytimes.com

Block scripts
To estimate the potential improvement from removing a script, Chrome allows specific scripts to be blocked during Lighthouse audits.

Go to Network > JS > right click on a script > Block request URL.

Run Lighthouse again to see the improvement. Work through all the scripts and you will build a picture of which ones are the worst offenders and the potential gain from removing them.

This is also useful when a script is in use but you are in two minds about whether it is valuable enough to keep.

  • Scripts can also be loaded onto a page through tag managers. Their ease of use makes them attractive to marketers and non-technical people who want to make changes to a website quickly. While this is a powerful ability to have, tag managers open a back door for someone to add code without considering the security or performance impact.

Now that we have audited the scripts and know the worst offenders, we can look at how to load them more efficiently.


Remove unused scripts

It may seem obvious but it is easy to overlook - if any of the scripts identified by Lighthouse or PageSpeed Insights are not used on your site, go ahead and remove them. Scripts can be added for marketing vendors and forgotten as projects complete, results are achieved or people's contracts end, so an audit helps catch these cases.


Code Splitting

Websites typically bundle all their CSS and JS into large files that are loaded on every page. This can benefit performance because the browser only needs to make one HTTP request for the JS and one for the CSS.

The trade-off is that, in some codebases, a huge volume of unused code is downloaded on pages where a few lines of CSS/JS may be all that is needed to render the content.

Code splitting is the opposite approach: it loads only the precise CSS and JS needed to render a given page.

Implementing this requires an analysis step in your build or deployment process to identify the necessary code and include it on each page. It is somewhat complex, but when executed successfully, page speed improves dramatically. A host of related PageSpeed Insights warnings will also be resolved, such as Reduce JavaScript execution time, Avoid serving legacy JavaScript to modern browsers and Minimize main-thread work.
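As a simple illustration, modern bundlers support dynamic import(), which splits a module into its own file and only fetches it when that code path actually runs. The module name, element ID and init() function below are hypothetical:

// Only pages that actually contain the widget download its JavaScript.
// './heavy-widget.js' and '#widget-root' are placeholder names for illustration.
const mountPoint = document.querySelector('#widget-root');

if (mountPoint) {
  import('./heavy-widget.js')
    .then((module) => module.init(mountPoint))
    .catch((error) => console.error('Widget failed to load', error));
}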


Selective loading - PageSpeedPlus recommendation

Code splitting is becoming more prevalent in Google's advice, and a similar tactic can be used to reduce the impact of third-party code, though it will require development work on your side.

If you have a mandatory tag, discuss whether it is needed on ALL pages across the entire site. For example, if you are running an A/B test on a handful of URLs, the A/B testing script probably doesn't need to be loaded on every URL for the duration of the test. A more efficient approach is selective loading, which only includes the script tag in the page source when a URL in the test pool is loaded.

How?
With some backend work, you can modify the HTML response to remove the script tag before it is sent to the browser.

By default, the A/B test code inspects the URL of a request and fires when it matches the test conditions. The trick is to make this decision earlier, on the server, and avoid inserting the A/B test script into the page source altogether, so when the browser renders the page, the script tag is simply not there.
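Here is a minimal sketch of the idea using an Express-style handler. The route list, page markup and vendor script URL are assumptions for illustration; the same logic can live in any backend or templating layer:

const express = require('express');
const app = express();

// Hypothetical set of URLs that are part of the current A/B test.
const TEST_POOL = new Set(['/pricing', '/signup']);

app.use((req, res) => {
  // Decide on the server whether this page needs the A/B testing script.
  const abScript = TEST_POOL.has(req.path)
    ? '<script src="https://cdn.ab-vendor.example/test.js"></script>'
    : '';

  // Only pages in the test pool ever contain the tag in their HTML source.
  res.send(`<!doctype html><html><head><title>Page</title>${abScript}</head><body>...</body></html>`);
});

app.listen(3000);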

Selective loading can also be achieved with tag managers, but calling A/B test scripts through a tag manager isn't recommended. That method introduces a delay before the test can fire, so users may already have interacted with the original version and will see a flicker in the browser when the test variant snaps into place.


Zaraz.com - PageSpeedPlus recommendation

An innovative product called zaraz.com was launched in 2022 specifically to eliminate the impact of third-party code. In short, it executes third-party code in the cloud instead of on your website, so the problem is bypassed. It is ideal for tracking tools that don't alter the layout of the page.

Initially a standalone company, it was purchased by Cloudflare and integrated into their suite of performance and security tools.

How it works

When Zaraz is activated for a site running on Cloudflare, users can choose which 'tools' they want on the site. Tools are the Zaraz name for scripts such as Google Analytics.

When a webpage is loaded, a single Zaraz script is injected into the HTML. This fires a request to Zaraz, which executes the previously configured tools inside the Cloudflare network without anything extra happening on the webpage. This dramatically improves performance without any loss of functionality.


Async & defer attributes

The default loading behaviour of JavaScript interrupts HTML parsing, but script tags accept additional attributes that alter this behaviour to be more performant - async and defer.

In both cases, they tell the browser not to block on the file but to download it in the background. The browser can then focus on parsing the HTML, which is the critical content needed to present a usable page to users.

The subtle difference between async and defer is when the script is executed. Async instructs the browser to execute the JavaScript as soon as it has finished downloading. Defer instructs the browser to wait until all HTML parsing is complete before executing the downloaded JavaScript.
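In markup, the three variants differ only by an attribute (the script URL below is a placeholder):

<!-- Default: HTML parsing pauses while the script downloads and executes -->
<script src="https://example.com/vendor.js"></script>

<!-- async: downloads in the background, executes as soon as it arrives -->
<script async src="https://example.com/vendor.js"></script>

<!-- defer: downloads in the background, executes after HTML parsing finishes -->
<script defer src="https://example.com/vendor.js"></script>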

Defer offers the least intrusive user experience. This table summarises each method:

Method | Downloaded | Executed | Use case
Standard | During HTML parsing | Immediately after the script is downloaded | Code that has to execute at the first opportunity: A/B tests, tracking
Async | Asynchronously in the background | Immediately after the script is downloaded | If order of script execution doesn't matter
Defer | Asynchronously in the background | After HTML parsing is complete | If scripts need to execute in a certain order or execution isn't required to render the main parts of the page: email opt-in, cookie banner

Using these attributes can dramatically improve loading speed. Defer offers the largest potential gain because the code is downloaded in the background and execution is postponed until all HTML parsing is complete, so interruption to the user journey is minimal.

The Telegraph implemented defer on all of their scripts and saw close to a 100% improvement in Lighthouse scores. We wouldn't recommend a blanket defer on every script, as some things need to execute early, such as A/B tests, but you should consider using it on as many script tags as possible.


Resource Hints

To download a file from a third-party URL, the browser must complete a series of steps before the download can begin: the DNS lookup, then the TCP connection and TLS handshake. Rather than starting these when a script is requested, browsers offer a feature called Resource Hints that allows the initial steps to be fired in the background before the file is needed. The hints are called dns-prefetch and preconnect, and they accelerate the first download from a new domain.

The objective of each hint is the same, but preconnect goes further along the process than dns-prefetch:

  • dns-prefetch performs the DNS lookup
  • preconnect performs the DNS lookup plus the TCP connection and TLS handshake


DNS Prefetch
If you have a file hosted at mysite.com/js/script.js, the DNS resolution for mysite.com can be completed in advance by adding the following snippet to the top of the HTML:

<link rel="dns-prefetch" href="https://mysite.com">

The browser will immediately perform the DNS lookup to find the IP address where the script is hosted. Crucially, this happens in the background while the HTML is being parsed, so when the page eventually calls script.js, the request is around 100ms faster because the origin IP of mysite.com is already known and the request can go there directly.

Preconnect
Using preconnect is just as easy:

<link rel="preconnect" href="https://mysite.com">

The browser will immediately perform the DNS lookup, TCP connection and TLS handshake in the background, so when script.js is eventually called, even more of the steps in an HTTP connection will already be complete, making the request faster still.

Image courtesy of web.dev
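In practice, the hint is usually paired with the script it accelerates; the domain and filename below are placeholders:

<!-- In the <head>: warm up the connection early -->
<link rel="preconnect" href="https://cdn.thirdparty.com">

<!-- Later in the page: the connection is already established when this downloads -->
<script defer src="https://cdn.thirdparty.com/widget.js"></script>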

Use wisely
Resource hints fire on every page load, but if the resource is already cached from a previous page, they add no extra benefit because the browser loads the file from cache instead of downloading it from the remote URL. There is no major downside to hints firing unnecessarily beyond some extra CPU use, but limit usage to 1-2 domains to avoid wasted connections being opened on every page load.

When to use each one?
Preconnect goes further and gives a bigger performance improvement, so it usually makes sense to choose it, but remember to use it sparingly on no more than 1-2 domains.

If you have more than 1-2 domains that you want to accelerate, dns-prefetch is the wiser option to avoid lots of unnecessary connections firing in the background.

Some guides suggest using both preconnect and dns-prefetch together because browser support for preconnect isn't universal and dns-prefetch acts as a fallback. However, Can I Use shows Firefox as the only major browser yet to add full support, which is probably not a huge portion of your visitors. Moreover, Google measures PageSpeed with a Chrome instance, so for PageSpeed Insights scores, using both is overkill.
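For reference, the fallback pattern simply lists both hints for the same domain (though, as noted, it is usually unnecessary):

<link rel="preconnect" href="https://mysite.com">
<link rel="dns-prefetch" href="https://mysite.com">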

Resource hint | Steps completed | Implementation | Use case
dns-prefetch | DNS lookup | <link rel="dns-prefetch" href="..."> | When connections to more than 1-2 domains need accelerating
preconnect | DNS lookup, TCP connection and TLS handshake | <link rel="preconnect" href="..."> | When connections to 1-2 high-priority domains are required


Lazy loading

Lazy loading is the practice of fetching resources only when they are scrolled into the viewport rather than on the initial page load.

Consider a long article with a YouTube video buried 5 or 6 scrolls down the page. The YouTube player requires lots of scripts and images to build its functionality, so it wastes resources on the initial page load because users can't interact with the player immediately. A more efficient approach is to wait until the player is scrolled into the viewport and only then fetch the scripts needed to build the player controls.

This behaviour can be achieved with lazy loading, and many libraries that use the Intersection Observer API make it a simple task. Our recommendation is Lazysizes because all that is required is:

  • add the script to the page
  • add class="lazyload" to any element you want to defer loading of
  • change the src attribute to data-src


Example

<iframe data-src="//www.youtube.com/embed/A75PQLV-Nck" class="lazyload" frameborder="0" allowfullscreen></iframe>


Notice we said any element - it's not just videos that can be lazy loaded. Any image or iframe can benefit from lazy loading with Lazysizes:

<img data-src="flower.jpg" class="lazyload" alt="">
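For completeness, the library itself is added with a single script tag; the exact path depends on where you host or bundle Lazysizes:

<script src="/js/lazysizes.min.js" async></script>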


Above-the-fold content
Lazy loading works for content below the fold, but what about YouTube videos above the fold? A favourite trick of ours is the Lite YouTube Embed library from Paul Irish, which renders around 224 times faster than the standard YouTube player. Paul is a web performance engineer at Google, so he can be trusted.
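Usage follows a similar pattern to Lazysizes: include the library's JS and CSS, then drop in the custom element. The file paths below assume you copied the files locally, and the video ID is reused from the earlier example:

<link rel="stylesheet" href="/css/lite-yt-embed.css">
<script src="/js/lite-yt-embed.js" defer></script>

<lite-youtube videoid="A75PQLV-Nck"></lite-youtube>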


Self host

Another effective option is taking control of third-party JavaScript and hosting it yourself. This allows you to control the cache settings or inline the code. While very powerful when executed correctly, it should be planned with care. Self-hosting means you are ultimately responsible for updates, so you will need a background process that regularly fetches the latest version of each script and serves it to your users. This requires engineering work on your side.
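A minimal sketch of such a background job, assuming Node 18+ (which provides a global fetch); the script URL, measurement ID, output path and schedule are placeholders:

const fs = require('fs/promises');

// Hypothetical third-party script to mirror locally.
const SOURCE_URL = 'https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX';
const LOCAL_PATH = './public/js/gtag.js';

async function refreshScript() {
  const response = await fetch(SOURCE_URL);
  if (!response.ok) throw new Error(`Fetch failed: ${response.status}`);
  await fs.writeFile(LOCAL_PATH, await response.text());
}

// Refresh hourly so the self-hosted copy never goes stale.
refreshScript().catch(console.error);
setInterval(() => refreshScript().catch(console.error), 60 * 60 * 1000);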

If you are on WordPress, existing plugins can do this for you:

  • CAOS - its sole purpose is to host Google Analytics locally.
  • WP Rocket - the most popular WordPress performance plugin.
  • Perfmatters - a premium performance plugin.


Wrapping Up How To Reduce the impact of third-party code


Third-party scripts are an expensive resource to have on a website. They allow someone else to inject code onto your site that you have no control over, and they ultimately slow down how quickly users can view and interact with your content. If you can avoid a script altogether, that is the best way to achieve good performance.

However, some JavaScript will always be needed, and the advice in this post will help you Reduce the impact of third-party code, which will benefit your users as well as your Web Vitals and PageSpeed Insights grades.

Finally, once good grades have been achieved, use Automated PageSpeed Tests with PageSpeedPlus to monitor your Web Vitals and PageSpeed scores and react quickly if they fall.
