However, scripts are some of the most important tools you can have for tracking and targeting, so they do have significant business value. As always, some techniques can minimize the impact and find a middle ground that satisfies business and user experience goals.
Analytics, tracking, advertisements, A/B testing and social share buttons are key components of websites - it would be very difficult to operate a blog without knowing how many people are viewing it or which content is performing best. This functionality is added to a site by pasting a script tag from a third party such as Google into the HTML.
A third-party script is any script hosted on a domain that's different from the domain of the current URL. Even if you serve assets from an alternative domain that you manage yourself, such as cdn.mysite.com, it will still be treated as third party.
Image courtesy of web.dev
By default, script tags are render-blocking resources - when a browser encounters a script tag while processing a page, parsing of the HTML is interrupted while the script is downloaded and executed.
Typical browser processing of a script tag
When the main thread is blocked for 250ms or more, it is a negative user experience that PageSpeed Insights will report as an issue.
Why so slow?
The process of loading a third party script takes time because a network connection to the external URL has to be established, which involves a variety of steps before being routed to the server that hosts the file. From there, the code has to be sent back to the browser.
With each additional script, the impact multiplies.
The first step in reducing the impact of third-party code is auditing the scripts on your site to see which are slow.
Open dev tools > Lighthouse > Generate report
The dropdown for Reduce the impact of third-party code will show the offending scripts:
Blocking scripts on nytimes.com
To estimate the potential improvement removing a script will have, Chrome allows specific scripts to be blocked for Lighthouse audits.
Go to Network > JS > right click on a script > Block request URL.
Run Lighthouse again to see what the improvement will be. Work through all the scripts and you will be able to build a picture of what the worst offenders are and the potential gain from removing them.
This could also be done if a script is used and you are in two minds about whether it is valuable enough to stay.
Now that we have audited the scripts and know the worst offenders, we can look at how to load them more efficiently.
It may seem obvious but can be overlooked - if any of the scripts identified by Lighthouse or PageSpeed Insights are not used on your site, go ahead and remove them. Scripts can be added for marketing vendors and forgotten as projects complete, results are achieved or people's contracts end, so an audit helps catch these instances.
Typically, websites bundle all their CSS and JS into large files that are loaded on every page. This can benefit performance because the browser only needs to make one HTTP request for the JS and one for the CSS.
The trade-off is that with some codebases, a huge volume of unused code is downloaded on pages where a few lines of CSS/JS may be all that is needed to render the page.
Code splitting is the opposite approach: it loads only the precise CSS and JS needed to render a given page.
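As a rough sketch of the idea, a page-specific feature can be fetched with a dynamic import() instead of being shipped in the site-wide bundle. The file name and selector below are hypothetical, purely for illustration:

```html
<script type="module">
  // Only pages that actually contain a chart pay the cost of the chart bundle.
  // '/js/chart.js' and '#sales-chart' are hypothetical names for illustration.
  if (document.querySelector('#sales-chart')) {
    import('/js/chart.js')
      .then((chart) => chart.render('#sales-chart'))
      .catch((err) => console.error('chart bundle failed to load', err));
  }
</script>
```

Pages without the chart never download the extra JavaScript at all.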
Code splitting is becoming more prevalent in Google advice and a similar tactic can be used to reduce the impact of third-party code but it will require development work on your side.
If you have a mandatory tag, discuss whether it's needed on ALL pages across the entire site. For example, if you are running an A/B test on a handful of URLs, the A/B testing script probably doesn't need to be loaded on every URL for the duration of the test. A more efficient process is selective loading, which only includes the script tag in the page source when a URL in the test pool is loaded.
With some backend work, you can modify the HTML response to remove the script tag before it is sent to the browser.
By default, the A/B test code will inspect the URL of a request and fire when it matches the test conditions. The trick is to make this decision earlier on the server and avoid inserting the A/B test script into the page source altogether, so when the browser renders the page, the script tag is not present.
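A minimal server-side sketch of this idea, assuming a hypothetical test pool and vendor URL - the decision is made before the HTML response is sent, so pages outside the test never contain the tag:

```javascript
// Hypothetical pool of URLs included in the A/B test.
const AB_TEST_POOL = new Set(['/pricing', '/signup']);

// Decide on the server whether this request path is in the test.
function shouldInjectAbScript(path) {
  return AB_TEST_POOL.has(path);
}

// Build the HTML response, inserting the script tag only when needed.
// The vendor URL is a made-up placeholder.
function renderPage(path, body) {
  const abTag = shouldInjectAbScript(path)
    ? '<script src="https://example-ab-vendor.com/test.js"></script>'
    : '';
  return `<html><head>${abTag}</head><body>${body}</body></html>`;
}
```

A request for /pricing gets the script tag; a request for /about gets clean HTML with no third-party code to download.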
Selective loading can also be achieved with tag managers but calling A/B tests scripts through a tag manager isn’t recommended. That method causes a delay before the test can fire so users may already have interacted with the original version and will see flicker in the browser when the test snaps into place.
An innovative product called Zaraz (zaraz.com) was launched in 2022 specifically to eliminate the impact of third-party code. In short, it executes third-party code in the cloud instead of on your website, so the problem is bypassed - ideal for tracking tools that don't alter the layout of the page.
Initially a standalone company, it was purchased by Cloudflare and integrated into their suite of performance and security tools.
How does it work?
When Zaraz is activated for a site running on Cloudflare, users can choose which 'tools' they want to appear on the site. Tools are the Zaraz name for scripts such as Google Analytics.
When a webpage is then loaded, a single Zaraz script is injected into the HTML. This fires a request to Zaraz, which executes the previously defined tools in the Cloudflare network without anything happening on the webpage. This dramatically improves performance without any loss of functionality.
The async and defer attributes both tell the browser not to block on the file but to download it asynchronously in the background. The browser can then focus on parsing the HTML, which is the critical content needed to present an informative page to users.
As you can see, defer offers the least disruptive user experience. This table summarises each method:
|Method|Downloaded|Executed|Use case|
|---|---|---|---|
|Standard|During HTML parsing|Immediately after the script is downloaded|Code that has to execute at the first opportunity: A/B tests, tracking|
|Async|Asynchronously in the background|Immediately after the script is downloaded|When the order of script execution doesn't matter|
|Defer|Asynchronously in the background|After HTML parsing is complete|When scripts need to execute in a certain order or execution isn't required to render the main parts of the page: email opt-in, cookie banner|
Using these attributes can dramatically improve loading speed. Defer offers the largest potential gains because the code is downloaded in the background and execution is postponed until all HTML parsing is complete. Therefore, interruption to the user journey is minimal.
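To illustrate, the three methods differ only by an attribute on the tag. The file names here are hypothetical examples:

```html
<!-- Standard: blocks HTML parsing while it downloads and executes -->
<script src="/js/ab-test.js"></script>

<!-- Async: downloads in the background, executes the moment it arrives -->
<script async src="/js/analytics.js"></script>

<!-- Defer: downloads in the background, executes in order after parsing completes -->
<script defer src="/js/cookie-banner.js"></script>
```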
The Telegraph implemented defer on all of their scripts and saw close to 100% improvement in Lighthouse scores. While we wouldn't recommend a blanket implementation of defer on all scripts, as some things such as A/B tests need to execute early, you should consider using it on as many script tags as possible.
To download a file from a third-party URL, the browser must complete a variety of steps before the download can begin, such as the DNS lookup and the TCP and TLS handshakes. Rather than starting these when a script is called, browsers have a feature called Resource Hints that allows the initial steps to be fired in the background before the file is needed. The hints are called dns-prefetch and preconnect, and they accelerate the first download from a new domain.
The objectives of each hint are the same but they differ in that preconnect goes further along the process than dns-prefetch:
If you have a file hosted on mysite.com/js/script.js, the DNS resolution for mysite.com can be completed in advance by adding the following snippet to the top of the HTML:
<link rel="dns-prefetch" href="https://mysite.com">
The browser will immediately execute the DNS lookup to establish the resolving IP where the script is hosted. Crucially, this occurs in the background while the HTML is being parsed, so when the page eventually calls script.js, the request is around 100ms faster because the origin IP of mysite.com is already known and the request can go there directly.
Using preconnect is just as easy:
<link rel="preconnect" href="https://mysite.com">
The browser will immediately execute the DNS lookup and the TCP and TLS handshakes in the background, so when script.js is eventually called, even more of the steps in an HTTP connection will already be completed, which makes the request even faster.
Image courtesy of web.dev
Resource hints will fire on every page load, but if you have a cached resource from a previous page load, the hints won't add additional benefit because the browser will load the file from cache instead of downloading it from the remote URL. There are no major negatives to resource hints firing without being needed, other than the browser using more CPU, but limit usage to 1-2 domains to avoid wasted connections running on every page load.
When to use each one?
Preconnect goes further and will give a bigger performance improvement, so it makes sense to choose that - but remember to use it wisely, on no more than 1-2 domains.
If you have more than 1-2 domains that you want to accelerate, DNS prefetch would be a wiser option to avoid lots of unnecessary connections firing in the background.
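Putting that advice together, a head section might preconnect to the single most critical third-party domain and fall back to dns-prefetch for the rest. The domains below are hypothetical placeholders:

```html
<head>
  <!-- Highest-priority third party: complete DNS, TCP and TLS up front -->
  <link rel="preconnect" href="https://scripts.vendor-a.com">
  <!-- Lower-priority domains: resolve DNS only -->
  <link rel="dns-prefetch" href="https://cdn.vendor-b.com">
  <link rel="dns-prefetch" href="https://tags.vendor-c.com">
</head>
```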
Some guides suggest using both preconnect and dns-prefetch together because browser support for preconnect isn't universal and dns-prefetch acts as a fallback. However, Can I Use shows Firefox as the only major browser that hasn't added support, which is probably not a huge portion of your visitors. Moreover, Google measures PageSpeed with a Chrome instance, so for PageSpeed Insights scores, using both is overkill.
|Resource hint|Steps completed|Implementation|Use case|
|---|---|---|---|
|dns-prefetch|DNS lookup|`<link rel="dns-prefetch" href="https://mysite.com">`|When connections to 2+ domains are required|
|preconnect|DNS lookup, TCP and TLS handshakes|`<link rel="preconnect" href="https://mysite.com">`|When connections to 1-2 high-priority domains are required|
Lazy loading is the practice of fetching things only when they are scrolled into the viewport rather than on the initial page load.
Consider a long article with a YouTube video buried 5 or 6 scrolls down the page. The YouTube player requires lots of scripts and images to build its functionality, so it wastes resources on the initial page load because users can't interact with the player immediately. A more efficient solution is to wait until the player is scrolled into the viewport and then fetch all the scripts necessary to build the player controls.
This behaviour can be achieved with lazy loading, and many libraries that utilize the Intersection Observer API make it a simple task. Our recommendation is Lazysizes because all that is required is:
<iframe data-src="//www.youtube.com/embed/A75PQLV-Nck" class="lazyload" frameborder="0" allowfullscreen></iframe>
It's not just videos that can be lazy loaded - any image or iframe can benefit from lazy loading with Lazysizes:
<img data-src="flower.jpg" class="lazyload" alt="">
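For context, here is a rough sketch of what such libraries do under the hood with the Intersection Observer API: watch for elements entering the viewport, then swap the data-src attribute into src to start the real download. The class name is a hypothetical placeholder:

```html
<img data-src="flower.jpg" class="lazy" alt="">
<script>
  // Fire the callback when a watched element enters the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src; // start the real download
        obs.unobserve(entry.target); // only needs to happen once per element
      }
    }
  });
  document.querySelectorAll('.lazy').forEach((el) => observer.observe(el));
</script>
```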
Above the fold content
One caveat: content that is visible in the initial viewport should not be lazy loaded, because delaying it makes the page appear slower. The code to achieve this can be found in this stackoverflow thread.
If you are on WordPress, existing plugins can do this for you:
|Technique|Action|
|---|---|
|🗑️ Remove Slow Scripts|Delete|
|🔍 Selective Loading|Only load scripts when they are needed|
|⌛ Async & Defer|Postpone loading until important elements have rendered|
Scripts are an expensive resource to have on a website. They allow someone else to inject code that you have no control over onto your site, and ultimately slow down how quickly users can view and interact with your content. If you can avoid having a script, that is the best way to achieve good performance.