DebugBear Blog

2022 In Review: What’s New In Web Performance?
Mon, 12 Dec 2022

The way we measure and optimize website speed is always changing. New web standards are introduced (and eventually widely supported), new tools are developed, and new metrics are suggested.

This article takes a look at some of the ways that the web performance landscape changed in 2022.

Priority Hints

Priority Hints are the highest-impact browser feature I’ve seen this year, providing quick wins when optimizing the Largest Contentful Paint. Simply add the fetchpriority="high" attribute to your most important image:

<img src="/hero.png" fetchpriority="high">

This will tell the browser to make this request before loading lower-priority resources.

By default, image requests are low priority, and after rendering the page the browser increases the priority of above-the-fold images. Adding fetchpriority="high" means that the browser can start the image request right away.
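As a sketch (the image paths are placeholders), you can combine a high-priority hero image with an explicitly low-priority image further down the page:

```html
<!-- Hero image: the request starts immediately at high priority -->
<img src="/hero.png" fetchpriority="high" alt="Hero">

<!-- Below-the-fold image: keep it at low priority and defer loading -->
<img src="/carousel-2.png" fetchpriority="low" loading="lazy" alt="Carousel slide">
```

The fetchpriority attribute also works on link and script elements, so the same technique applies to preloads and scripts.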

Request waterfall showing an image request with a fetchpriority attribute

No more OCSP requests in Chrome

The Online Certificate Status Protocol (OCSP) lets browsers check whether a given SSL certificate has been revoked. If a site was using an Extended Validation certificate, Chrome was making these checks when establishing a connection.

The request waterfall below shows an example of an OCSP request (gray) made as part of the SSL connection (purple). Basically a second request is made as part of the HTML document request.

Request waterfall showing an OCSP request made as part of the SSL connection.

Chrome stopped making OCSP requests in Chrome 106. For sites that use an Extended Validation certificate, this delivered a significant performance improvement.

TTFB and First Contentful Paint charts showing lower page load time.

Interaction to Next Paint Metric

Interaction to Next Paint (INP) is a new metric by Google that measures how quickly a page responds to user input. It measures how much time elapsed between a user interaction, like a click or key press, and the next update of the screen.

diagram showing the different components of the Interaction to Next Paint metric

INP may eventually replace First Input Delay (FID) as one of the Core Web Vitals.

Since INP also includes processing time and presentation delay, it reports higher values than First Input Delay. It also looks at one of the slowest interactions on the page, unlike FID, which only measures the input delay of the first one.
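In the browser, interaction timings are exposed through the Event Timing API via a PerformanceObserver; the aggregation step can be sketched as a pure function (the durations below are hypothetical):

```javascript
// Sketch: aggregate event timing durations into an INP-style value.
// In a real page, entries would come from a PerformanceObserver observing
// { type: "event", durationThreshold: 16 }; here we pass plain numbers.
function interactionToNextPaint(durations) {
  if (durations.length === 0) return 0;
  // INP reports one of the slowest interactions on the page; as a
  // simplification, this sketch takes the single worst duration.
  return Math.max(...durations);
}

// Hypothetical interaction durations in milliseconds
console.log(interactionToNextPaint([24, 180, 56])); // → 180
```

In production, field tools like the web-vitals library handle the observer setup and percentile logic for you.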

FID and INP on PageSpeed Insights

Desktop Core Web Vitals As A Ranking Factor

Google’s three Core Web Vitals metrics assess user experience and are used as a ranking factor by Google.

The original mobile rollout happened in June 2021, but in February 2022 the page experience ranking updates also started impacting desktop searches.

You can see how well your website is doing in Google Search Console. This year Google also started showing URL-level data where it’s available, so you can quickly see what pages you need to optimize.

Google Search Console screenshot showing URL-level metrics

Back/Forward Cache in Chrome

A lot of page navigations are just back/forward movements in the browser history. Mobile browsers have long tried to speed up these navigations by saving the page state and restoring it.

Chrome’s Back/Forward cache is now fully rolled out on mobile and desktop. Technically this change happened in November 2021, but it felt worth highlighting as the positive impact only became fully visible this year.
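Pages can detect a bfcache restore through the pageshow event's persisted flag; here's a minimal sketch with the handler logic factored into a testable function:

```javascript
// event.persisted is true when the page was restored from the back/forward cache.
function classifyPageShow(event) {
  return event.persisted ? "bfcache-restore" : "full-load";
}

// In a browser you would wire this up as:
// window.addEventListener("pageshow", (e) => console.log(classifyPageShow(e)));

console.log(classifyPageShow({ persisted: true })); // → "bfcache-restore"
```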

For example, the cache rollout caused a big improvement in Cumulative Layout Shift scores.

Improved CLS scores across various Ecommerce tools in January 2022

View Observed Metrics In PageSpeed Insights

The simulated throttling that many Lighthouse-based tools use has often caused confusion when interpreting the metrics reported by PageSpeed Insights.

We’ve now released a Chrome Extension that surfaces the original data Lighthouse collected from Chrome.

For example, if the throttled values are faster than the original metric that suggests a problem with the simulation. If the First Contentful Paint and Largest Contentful Paint are very close in the observed data and very different in the simulated data then that can also indicate limitations of the Lighthouse simulation.

Observed metrics shown in PageSpeed Insights

HTTP/3 Standardized

Browsers have long been experimenting with the HTTP/3 protocol, but it was finally standardized in June 2022.

HTTP/3 achieves several performance goals, for example reducing the number of network round trips to establish a connection and making it easier for mobile users to migrate connections when moving between different networks.

Browser support for HTTP/3

Better data on what requests are render-blocking

Render-blocking requests are important for performance as they prevent the whole page from rendering. But they can sometimes be hard to identify. Luckily Chrome has started reporting more details on whether a request is render blocking, as you can see in this waterfall view.

Request waterfall where some requests have a “Blocking” badge

The Resource Timing API now also reports a renderBlockingStatus property.
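A sketch of how you might list render-blocking resources using this property. In a browser the entries come from performance.getEntriesByType("resource"); the filtering is written as a pure function here, so the sample entries are stand-ins:

```javascript
// Pick out the URLs of render-blocking resources from resource timing entries.
function renderBlockingUrls(entries) {
  return entries
    .filter((entry) => entry.renderBlockingStatus === "blocking")
    .map((entry) => entry.name);
}

// In a browser: renderBlockingUrls(performance.getEntriesByType("resource"))
const sampleEntries = [
  { name: "https://example.com/app.css", renderBlockingStatus: "blocking" },
  { name: "https://example.com/analytics.js", renderBlockingStatus: "non-blocking" },
];
console.log(renderBlockingUrls(sampleEntries)); // → ["https://example.com/app.css"]
```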

Performance resource timings with the renderBlockingStatus property

Finally, the new Performance Insights tab in DevTools also shows this data.

Render blocking request in the Performance Insights tab

Chrome DevTools Performance Insights Panel

The DevTools Performance tab shows a really comprehensive view of a website, but it doesn’t do much to prioritize information and generate insights. The new Performance Insights tab in Chrome DevTools aims to make it easier to identify issues impacting Core Web Vitals.

Performance Insights tab in Chrome DevTools

It shows a series of Insights relevant to performance and then suggests possible fixes.

Detailed insight on a render-blocking script

103 Early Hints

Resource hints tell the browser to load resources or create server connections before they are needed. The 103 HTTP status code allows web servers to tell the browser about resources that will be needed in the future before the full HTML response is ready.

That way the browser can start loading these resources while the server is still processing the request. For example, render-blocking stylesheets or web fonts can start loading early, or the browser could preconnect to an API subdomain.
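The 103 response arrives on the same connection ahead of the final response; a sketch of the exchange (header values are illustrative):

```
HTTP/1.1 103 Early Hints
Link: </styles/main.css>; rel=preload; as=style
Link: <https://api.example.com>; rel=preconnect

HTTP/1.1 200 OK
Content-Type: text/html
...
```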

This is especially useful when using a CDN. The CDN provides servers located in close proximity to the user. It can return an early 103 response within milliseconds, before forwarding the document request to the main website server for processing.

Full page prerendering in Chrome

Sometimes it’s very likely that the user will soon navigate to a new page, and since the release of version 108 in November Chrome will start loading the page prior to the actual navigation.

When the navigation happens the page that was loaded in the background is “foregrounded”, possibly rendering instantly.

For example, when I type “ads” into the omnibox, Chrome is 83% certain which page I’ll go to, so it will preload that page when I start typing. (Check out chrome://predictors/ to view information about your own browsing behavior.)

Predictors screenshot showing entered text, predicted URL, and confidence

Websites can also use the Speculation Rules API to tell the browser about likely upcoming navigations.
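A speculation rule is a JSON block embedded in the page; for example (the URL here is a placeholder):

```html
<script type="speculationrules">
  {
    "prerender": [
      { "source": "list", "urls": ["/next-article"] }
    ]
  }
</script>
```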

New free website speed test

This October we launched our new free website speed test. See how fast your website is in the lab and in the field data collected by Google. You can click on each rendering metric to get a detailed request waterfall that tells you how to optimize it.

Site speed test result with metrics and rendering filmstrip.

AVIF support in Safari

Modern image formats like WebP and AVIF can significantly speed up websites by encoding the same information in smaller files.

While WebP has been widely supported for over two years, AVIF support only arrived in Safari this September with the release of iOS 16 and macOS Ventura.

AVIF especially shines when compressing low-fidelity images, meaning it’s a great choice when you want to show photos on your website and save bandwidth.

Keep in mind that not every Safari user has upgraded yet and that Edge still doesn’t support AVIF.

Browser support for AVIF

Native image lazy loading in Safari

Native image lazy loading ensures that images only load when they are about to enter the viewport, saving bandwidth and prioritizing the more important page content.

Since the release of iOS 15.4 this March Safari now also supports the loading="lazy" attribute. It's now supported in all major browsers with overall global support of 92% of users.
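For example (the file names are placeholders), deferring a below-the-fold image is a one-attribute change:

```html
<!-- Above the fold: loads eagerly (the default) -->
<img src="/hero.jpg" alt="Hero" width="1280" height="640">

<!-- Below the fold: only loads as it approaches the viewport -->
<img src="/footer-illustration.jpg" alt="Illustration" width="640" height="320" loading="lazy">
```

Setting explicit width and height attributes also helps avoid layout shifts when the image finally loads.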

A look ahead to 2023

In 2023 it'll be interesting to see how tooling improves for page interactions after the initial page load. Lighthouse already supports running user flows and can report the Interaction to Next Paint metric. DevTools has also added a recorder feature for recording user flows.

There are also proposals around improving performance reporting for soft navigations. A single page app will show content for multiple URLs throughout its lifetime, but currently performance metrics are often only attributed to the initial landing page. Being able to track rendering milestones for history.push navigations would also help collect better data on user experience.

Lighthouse hasn't had a major release since November 2021, but we can expect version 10.0 next year with updates to how the Performance score is calculated.


A lot of new browser features have become available to improve website performance in the last year. We also have new tools to measure and improve site speed.

2022 also saw the return of the conference in Amsterdam. I really enjoyed getting to meet so many people working on web performance and learning from the talks as well.

Subscribe to our newsletter for monthly updates on the latest in web performance.

Or share this article on Twitter.

Modern Image Formats for the Web
Mon, 05 Dec 2022

Choosing the right image format is the first and most important step when it comes to image performance. We want our websites to load fast, but we also want our images to look good. Balancing these two concerns is the core of image performance.

Image formats are file types for digital graphics that have evolved over time to make use of new software and hardware technologies and faster networks. These days we have plenty of options to choose from, including file types for raster images, animations, vector graphics, and next-generation images.

In this article, we’ll look into the most important image formats for the web and help you decide when to use which image file type.

Image compression

Why Do Image Formats Matter?

Finding the right format for your digital images is important because each image file type comes with its own feature set, has its advantages and disadvantages, and serves specific use cases. Using less effective formats for many of your images can increase page load times, make your website less visually attractive, and harm the overall user experience.

Image file formats vary in:

  • Scalability:
    • Raster, also known as bitmap, images (e.g. PNG, JPG, GIF) are made up of pixels, so they cannot be magnified without losing quality.
    • Vector images (e.g. SVG) are made up of geometric shapes, so they scale without any quality loss.
  • Compression method (lossy vs lossless compression) — see below in detail
  • Level of browser support
  • Other characteristics such as support for different color depths, transparency, animations, etc.

Choosing the best file type for an image is about finding the balance between visual quality and performance (defined by the size of the image file). Next-generation image formats (e.g. WebP and AVIF) aim to achieve smaller image sizes while providing advanced visual features.

For example, at DebugBear, we were able to cut our hero image size from 171 KB to 94 KB and improve our Largest Contentful Paint (LCP) score from 2.4 to 1.5 seconds by switching from PNG to WebP format — see the drop in both LCP and total image weight on the screenshot below (however, some other developers had different results when testing next-gen images, as we’ll see later in this post):

Converting PNG images to WebP format, results shown in the DebugBear app

Lossy vs Lossless Compression

Most raster image formats you can use on the web are compressed formats. They are reduced in size by a compression algorithm (also known as a codec), which removes non-essential and/or less important data from a file following pre-defined rules.

There are lossless and lossy data compression algorithms — next-generation images support both types.

Lossless Compression

Lossless compression algorithms only remove non-essential data (i.e. unnecessary metadata). As a result, they usually (but not always) generate larger-size files than lossy algorithms. However, there’s no loss of quality (this is why they’re called ‘lossless’) and the compressed image looks the same as the original one.

Some examples of lossless data compression algorithms are:

  • LZW (Lempel–Ziv–Welch)
  • DEFLATE
  • Huffman coding

Note that these are general-purpose algorithms: they are not exclusively for image compression. For example, Huffman coding is used for compressing HTTP/3 headers, and DEFLATE is the basis of GZIP compression as well.

Lossy Compression

Lossy compression algorithms remove both metadata and essential data (less critical visual information), therefore they can achieve a higher compression ratio. While smaller image sizes improve web performance, lossy image compression does come with quality loss — for instance, you can end up with pixelated images.

Some examples of lossy compression algorithms are:

  • DCT (Discrete Cosine Transform), used by JPEG
  • VP8 intra-frame coding, used by lossy WebP
  • AV1 intra-frame coding, used by lossy AVIF

Since this type of compression comes with information loss and quality degradation, lossy image formats include quality settings to allow the application or user to determine how much quality loss they are willing to tolerate in exchange for a higher compression ratio.

Lower quality settings usually (but not always) mean smaller file sizes. For example, when we set the quality ratio for this JPG image to 75% in the Squoosh app, we could reduce the file size by 69%:

Lossy compression with Squoosh at 75 percent

With a quality ratio of 85%, the compressed file was just 48% smaller than the original one — while the compressed image is a little bit sharper now, there’s still not a huge difference in quality:

Lossy compression with Squoosh at 85 percent

However, with a quality ratio of 95%, the compressed image is 8% larger than the original one — this is the ratio where lossy compression isn’t worth it anymore:

Lossy compression with Squoosh at 95 percent

For reference, here’s what the image looks like with a quality ratio of 0% — the color palette is reduced to the most basic colors, and the figure on the image becomes unrecognizable:

Lossy compression with Squoosh at 0 percent

The Best Image Formats for the Web

The following list includes the best file formats you can use for displaying images on web pages in production.

There are other image file types that are supported by web browsers as well, such as the ICO format frequently used for favicons or the uncompressed BMP format, however they are not recommended for web content as they have better alternatives.

GIF (Graphics Interchange Format)

The GIF format was first released in 1987 with the aim to compress large image files to make them download faster. While these days it’s best known for animated GIFs, it’s actually a still image format, and animated GIFs are flipbooks of multiple still GIF images.

As GIF was created in the era of the early web, it has fewer features than other image formats. However, due to its relatively small size, it can help with performance optimization and still has a place in modern web development.

Key Features of GIF

  • a lossless format for raster images
  • uses the LZW compression algorithm
  • 8-bit indexed palette (can display only 256 standard RGB colors on one image)
  • supports basic, auto-playing animations
  • supports one-level transparency (a pixel is either fully transparent or fully opaque)
  • extensive browser support


  • Use GIF for logos, grayscale photographs, and cartoon-style web graphics that are only made up of a few colors.
  • Don’t use GIF for high-resolution photos and detailed images that include more than 256 colors.
  • While animated GIFs are popular, they’re bad for web performance and accessibility, so consider replacing them with animations that can be stopped, e.g. short MP4 videos or animated SVGs.

GIF image with indexed color map, showing a smiling cartoon face Image: The breakdown of a GIF file made up of four colors
Credit: – Inside the GIF file format

PNG (Portable Network Graphics)

The PNG format was created in 1995 to provide a non-patented alternative to GIF, as the LZW algorithm used for compressing GIF files was patented at that time (the patent expired worldwide in 2004). PNG uses the non-patented DEFLATE algorithm, which combines LZ77 and Huffman coding. It offers more visual features than GIF, including semi-transparency via the alpha channel and a much broader color palette.

Key Features of PNG

  • a lossless format for raster images
  • uses both pre-compression and the DEFLATE compression algorithm
  • 24-bit color depth, also known as true color (8 bits per channel on the red, green, and blue channels), which is equal to 16,777,216 (256x256x256) colors in the standard RGB color space
  • supports transparency and semi-transparency via the 8-bit alpha channel
  • good text readability due to the lossless compression
  • extensive browser support


  • Use PNG for non-photographic images, logos or graphics with transparent backgrounds, and illustrations that include text such as screenshots, marketing banners, charts, or infographics.
  • Don’t use PNG for high-resolution photos as it will result in a huge file size.
  • You can use PNG as a fallback for the lossless versions of next-generation images.

PNG image, showing the logo of NASA with transparent background Image: Transparent NASA logo in PNG format Credit: PNGEgg

JPG/JPEG (Joint Photographic Experts Group)

The JPEG standard was created in 1992 by the Joint Photographic Experts Group which wanted to come up with a good-looking but lightweight digital format for photographic images. They introduced the concept of lossy compression for images, which is based on the science of how humans see and removes high-frequency visual information such as hue and sharp transitions.

The JPG format is associated with six file extensions: .jpg, .jpeg, .jpe, .jif, .jfif, .jfi — JFIF stands for JPEG File Interchange Format. There’s no difference between them; they were created to support different platforms, but today .jpg is the standard extension.

Key Features of JPG

  • a lossy format for raster images
  • its compression method is based on the DCT (Discrete Cosine Transform) algorithm
  • 24-bit RGB color depth (~16.77 million colors) like PNG
  • allows applications to set the compression ratio
  • comes with generational loss (the quality degrades with every modification to the image file)
  • extensive browser support
  • also has a lossless version called JPEG LS, but browsers and most image editing tools don’t support it


  • Use JPG for photographic images and photo-realistic digital graphics that don’t include any text.
  • Don’t use JPG for icons, line drawings, and text-heavy images because JPG images have poor readability due to the lossy compression.
  • You can use JPG as a fallback for the lossy versions of next-generation images.


JPG has some next-generation variations, including JPEG 2000, JPEG XR, and JPEG XL. The most advanced one is JPEG XL (it supersedes both JPEG 2000 and JPEG XR), but it’s still behind feature flags on all browsers — and the Google Chrome team have recently announced that they’ll remove support for it.

JPEG XL adds several advanced features to JPG images (e.g. animations, layers, overlays, alpha channels, depth maps, etc.), comes with better image quality, and supports both lossy and lossless compression. It’s also more advanced than both WebP and AVIF, the two next-gen image formats currently supported by web browsers.

JPG image, showing a landscape with mountains Image: An example of a high-resolution JPEG photo Credit: Errin Casano, Pexels

WebP (Web Picture)

The WebP format was created by Google with the aim of replacing GIF, PNG, and JPG with a more lightweight and flexible image format that has both a lossy and a lossless version. WebP was first announced on the Chromium blog in 2010, and its first stable version was released in 2018.

According to Google’s studies, lossless WebP compresses 23-42% better than PNG while lossy WebP compression generates 25-34% smaller image files than JPG. However, some independent test results suggest that WebP compression is not always worth it — for example, Jim Nielsen found that lossless WebP files can be larger than their PNG equivalent optimized with the ImageOptim API.

Key Features of WebP

  • a next-generation format for raster images
  • supports both lossy and lossless compression
  • supports alpha-channel transparency and animations
  • wide browser support (Internet Explorer and some older browsers don’t support it)

  • To reduce page weight, consider converting your JPG images to lossy WebP and PNG images to lossless WebP format.
  • If you need to support Internet Explorer or other old browsers that don’t support WebP images, add the JPG or PNG version as a fallback.
  • Run your own tests to see whether the additional complexity is worth the performance gain and whether you can achieve a similar result by optimizing your existing images.

Side-by-side comparison of the JPG and WebP versions of the same picture Image: JPEG vs WebP comparison Credit: Google Developers, WebP Gallery

AVIF (AV1 Image File)

The AVIF image format was released in 2019 by AOMedia (Alliance for Open Media) with the aim to achieve a better compression quality than WebP. Both its lossy and lossless versions use the AV1 video codec for compression. AVIF comes with advanced visual features that currently no other production-ready image format offers, such as depth maps and overlays.

AVIF also supports three color depths: 24-bit (true color) which is equal to 8 bits/channel across the three RGB channels (red, green, blue), 30-bit (deep color) which is equal to 10 bits/channel, and 36-bit which is equal to 12 bits/channel. This makes it possible to use color spaces beyond sRGB (standard RGB) such as HDR (High Dynamic Range) and WCG (Wide Color Gamut).

While AVIF has been created to supersede WebP, performance tests have had some mixed results. The lossy version performs well on all tests (see some examples on Netflix Tech Blog and by Daniel Aleksandersen and Jake Archibald), however the lossless version is often outperformed by WebP and even PNG (see this article on and this discussion on GitHub).

Key Features of AVIF

  • a next-generation image format for raster images
  • supports both lossy and lossless compression
  • uses the AV1 compression algorithm
  • supports 24-bit, 30-bit, and 36-bit color depths
  • supports animations, alpha-channel transparency, depth maps, and overlays
  • patchy browser support (global support is currently at 76.2%; IE and Edge don’t support it and Safari only has partial support)


  • As AVIF’s lossy version almost always outperforms JPEG and WebP, especially at low fidelity, consider converting your JPEG and WebP photos to AVIF format, but add WebP and JPG (or just JPG) as fallback for non-supporting browsers — see a code example later in the article.
  • Because of the poor test results of AVIF’s lossless version, don’t convert your PNG and lossless WebP images to AVIF, unless you want to use AVIF's visual features or your own performance tests show otherwise — instead, optimize your existing PNG images or convert them to WebP.

Compressing JPG to lossy AVIF, test image shows a landscape with hills Image: Compressing JPG to lossy AVIF with the Squoosh app

SVG (Scalable Vector Graphics)

SVG is a vector image format that you can use on the web. It's been developed by W3C since 1999, and the first version was released in 2001. Each SVG graphic is composed of geometric shapes such as lines, curves, and polygons defined as vectors in a Cartesian coordinate system. As SVG images consist of mathematical formulas (instead of pixels), you can increase their size without any loss of quality.

Even though browsers and vector image editing tools display SVG files as images, SVG is actually a text file that can also be edited as code. SVG files are usually more lightweight than their PNG equivalent (except for highly complex illustrations). As SVG uses the XML markup, it has a syntax similar to HTML and can be added to HTML pages as inline code.

Key Features of SVG

  • an XML-based vector image format
  • retains quality at any size
  • supports the 24-bit sRGB color space (like PNG, JPG, and WebP)
  • supports animations and transparency
  • can be styled with CSS and programmed in JavaScript
  • wide browser support (Internet Explorer only has partial support)


  • Use SVG for line graphics, text-heavy images, logos, icons, background patterns, and vector illustrations.
  • Don’t use SVG for complicated graphics such as illustrations using hundreds of colors or artwork with fine details.
  • To reduce the number of HTTP requests, consider adding less complex SVG images as inline code to your HTML page.
  • If you need to support Internet Explorer, use PNG or GIF as a fallback.
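As a sketch of the inlining tip above, a simple icon can live directly in the HTML, avoiding a separate image request:

```html
<!-- A simple circular icon, inlined so no extra HTTP request is needed -->
<svg width="24" height="24" viewBox="0 0 24 24" role="img" aria-label="Dot">
  <circle cx="12" cy="12" r="10" fill="#336699" />
</svg>
```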

SVG vector background with clouds, shown inside the BGJar app Image: A customizable SVG background in BGJar

What Image Format Should You Use?

Choosing the best image formats for your website depends on many things, including your content strategy, your stack (e.g. some content management systems such as WordPress need plugins to add next-gen images), the browsers you need to support, the devices your audience typically use, the characteristics of your images, and others.

However, here are some pointers to help you choose:

If you don’t want to add next-generation images yet but still want to find the best balance between performance and quality, use:

  • JPG for photos and photo-realistic illustrations (you can optimize them with a tool such as MozJPEG)
  • PNG for screenshots and complex non-photographic illustrations, or instead of SVG if you don’t want to use vector graphics
  • SVG for icons, vector logos, illustrations with less complexity, text-heavy images, background patterns, and animations
  • GIF for cartoon-style web graphics, grayscale photographs, and non-vector logos

If you are willing to use next-generation formats, start by replacing your JPG images with the lossy version of WebP or AVIF. While AVIF is less widely supported by browsers than WebP, its lossy algorithm has a better compression ratio and it offers more advanced features.

You can use the <picture> HTML element to add your images in both AVIF and WebP formats complete with a JPG fallback to let the user’s browser decide which file type to use:

<picture>
  <source srcset="image.avif" type="image/avif" />
  <source srcset="image.webp" type="image/webp" />
  <img src="image.jpg" alt="Image" width="1280" height="960" />
</picture>

Replacing your JPG files with WebP or AVIF images can also improve your Lighthouse performance scores as it can help you get rid of the ‘Serve images in next-gen formats’ issue.

Depending on the possible performance gain, this can be either a warning (yellow) or an error (red):

Serve images in next-gen formats warning in the Lighthouse app

As the lossless versions of WebP and AVIF have had some mixed test results, it’s not necessarily worth adding them, unless you want to use their other features, such as AVIF’s support for non-standard RGB color spaces. It can be a good alternative to optimize your PNG images with a tool such as Oxipng or ImageOptim or replace them with SVG wherever it’s possible to use vector graphics.

If you have an image-heavy website, it’s also a good idea to consider using an image CDN such as Cloudinary. These services automatically generate, select, and serve the most performant format and size for each image on your site.

How to test if images are slowing down your website

You can use DebugBear's free website speed test to see how image downloads are impacting your performance and Core Web Vitals.

Site speed test with slow images

Next Steps

Finding the best image format is just the first step of image optimization. There are many other best practices you can follow to improve image performance on your website, such as lazy-loading images and prioritizing them with Priority Hints.

You can also compare the different formats with Squoosh or test how your images perform from different locations around the world using a synthetic performance monitoring tool such as DebugBear. To get more insight into your image performance, check out our interactive demo or sign up for a 14-day, no-credit-card free trial.

DebugBear monitoring

Using Local Overrides To Run Core Web Vitals Experiments In Chrome
Tue, 22 Nov 2022

Many site speed testing tools provide recommendations to make your website faster. But it can be hard to tell whether these recommendations will work and how big the impact will be.

To estimate how much an optimization will help you need to try it out on your website. But deploying a new change to a staging server can be slow.

The local overrides feature in Chrome DevTools offers a solution. It allows you to make changes to your website locally and then measure how they impact performance.

This article explains how local overrides work and how they can be used to test Core Web Vitals optimizations.

Screenshot showing DevTools local overrides

What are local overrides in Chrome DevTools?

Local overrides let you override server responses with file content saved locally on your computer. Instead of making a network request for a resource Chrome will serve it from a folder on your hard drive.

This lets you do a range of things:

  • Experiment with content changes on any website
  • Try out new CSS styles on any website
  • See how fast your website renders with certain render blocking files removed (to optimize Largest Contentful Paint)
  • Check if layout shift fixes are working correctly (to optimize Cumulative Layout Shift)

How to enable local overrides in DevTools

It takes a few steps to run your first experiment, but it’s easier after the initial setup:

  1. Open Chrome DevTools (by right-clicking the page and clicking Inspect)
  2. Switch to the Sources tab
  3. In the sidebar, select Overrides

Finding local overrides in Chrome DevTools

  4. In the sidebar Overrides tab, click Select folder for overrides (this is a folder on your computer where any custom HTML/CSS/... will be stored)
  5. When DevTools requests full access to the folder, click Allow

Selecting a local folder for override contents

  6. Switch to the Page tab in the sidebar
  7. Right-click the file you want to override and click Save for overrides

Saving a file for local overrides

  8. The file now has a purple icon indicating that it’s served locally
  9. Click the file to edit it – in this example we just added a new style tag to the HTML
  10. Use Cmd+S (Mac) or Ctrl+S (Windows) to save your changes
  11. Reload the page to see the impact of the local overrides

GitHub homepage using red text after adding new style tag

Testing a Largest Contentful Paint optimization

Changing the text color is fun, but we want to optimize Core Web Vitals.

You can collect a performance profile using DevTools. If I go to the GitHub homepage with Slow 3G throttling enabled it takes about 4.8 seconds before the Largest Contentful Paint milestone is reached.

Baseline LCP test result

GitHub loads a number of JavaScript files that aren’t important for the initial render. Let’s try commenting them out.

Script tags commented out in Chrome DevTools

Now the page renders in just 3.0 seconds.

Lower LCP after running experiment

Of course if we wanted to do this in production we’d need to think more about whether we still need to load those scripts and when they should be loaded. But at least we have a rough idea of how big the performance impact of removing the scripts would be.

You can try out a range of LCP optimizations with local overrides, for example:

  • Adding a preload tag
  • Making scripts async
  • Lazy loading images
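As a rough sketch (the file names here are placeholders, not GitHub's actual resources), those experiments map to small HTML edits in the overridden document:

```html
<!-- Preload the LCP hero image so it's requested earlier -->
<link rel="preload" href="/hero.png" as="image" />

<!-- Load a non-critical script without blocking rendering -->
<script src="/analytics.js" async></script>

<!-- Lazy load images that are below the fold -->
<img src="/footer-banner.png" loading="lazy" alt="Footer banner" />
```

Edit the overridden HTML, save, and reload to compare the performance profile against your baseline.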

Testing a Cumulative Layout Shift optimization

Similarly, we can use local overrides to see if CLS fixes are working.

Let’s look at this layout shift on the Iceland homepage. Once the slider appears all other content shifts down on the page.

Layout shift in DebugBear test result

We can fix that by adding a CSS min-height to the slider.

.carousel {
  min-height: 288px;
}

The carousel now shows an empty area until the slider content loads. When it does load, no page elements are pushed down the page and the layout remains stable.

DevTools performance profile showing an early page snapshot with empty space

Overriding Response Headers

Response header overrides are an experimental DevTools feature that allows you to substitute response headers in addition to the response body.

To enable this feature, first enter the DevTools settings.

DevTools settings gear icon in the top right corner

Then select the Experiments tab and check the Local overrides for response headers box.

DevTools Experiments menu

In the DevTools Network tab, you can now right-click a request and select Create response header override.

Create response header override in DevTools Network tab

Then you can click Add header override to create your custom header. In this example we add a link preload header to load a specific JavaScript file early.

Adding a response header override in DevTools

After reloading the page we can see that this header is returned with the document response and the file is preloaded.

The new link preload header in the Network tab
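For reference, a link preload header like the one created above takes this general form (the file path is just an example):

```http
Link: </static/app.js>; rel=preload; as=script
```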

Limitations of local overrides

Local overrides allow you to change server responses locally, but testing in DevTools doesn’t always produce reliable data.

This is because DevTools throttling doesn’t accurately model network connections and resource priorities. So the results you see in DevTools may not always translate directly to the impact of your change on real users.

Running experiments on DebugBear

If you want to run site speed experiments in a reliable lab environment you can use the DebugBear Experiments feature.

This allows you to make changes to the page HTML while using high-quality throttling and realistic network connections.

You can also easily view before and after test results side by side.

Before and after view of a CLS optimization

<![CDATA[What Is CSS @import And Why Can It Slow Down Websites?]]> /avoid-css-import Thu, 17 Nov 2022 00:00:00 GMT The CSS @import rule can be a convenient way to load additional stylesheets. But it can also delay the download of render-blocking resources, causing your website to take longer to render.

What is CSS @import?

The most common way to load a CSS file is by using a link tag:

<link rel="stylesheet" type="text/css" href="link.css" />

Another method is to reference one stylesheet inside another, by using @import "url" in CSS:

/* contents of link.css */
@import "imported.css";

This way, the browser starts another stylesheet request after loading the initial CSS file.

Why does CSS @import slow down your website?

Most CSS files are render-blocking resources, which means the browser has to download them before it can show the user any content.

When multiple stylesheets are added without @import (by using link tags in the HTML instead), the browser can download them in parallel.

In contrast, using @import to reference one CSS file inside another means they are downloaded sequentially, which takes longer. As a result, the website loads more slowly.
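Using the file names from the earlier example, the parallel alternative looks like this:

```html
<!-- Both stylesheets are discovered in the HTML and can download in parallel -->
<link rel="stylesheet" href="link.css" />
<link rel="stylesheet" href="imported.css" />
```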

For example, this often happens when importing Google Fonts in a CSS file.

Request waterfall showing that content only appears after all render blocking files have been downloaded.

This request waterfall shows how @import creates a sequential dependency, slowing down the website. The Google Fonts CSS only starts loading after style.css has been downloaded.

How to avoid using @import

If you can edit the source CSS file, remove the @import and instead reference the secondary CSS file in the HTML document using a <link> tag.

Instead of doing this in a CSS file:

Response body showing @import for Google Fonts

Use this in your HTML:

<link rel="stylesheet" href="//"/>

If you can’t edit the CSS file you can use a preload resource hint to help the browser discover (and download) the @import resource sooner.

<!-- This is the CSS file that contained the original @import statement -->
<link rel="stylesheet" href="parentCSS.css"/>
<!-- This tells the browser to download the imported stylesheet right away -->
<link rel="preload" href="//" as="style"/>

How to check if your website uses @import (and could be faster)

  1. Go to
  2. Enter your website’s URL
  3. Scroll down to the Recommendations
  4. See if the recommendations include removing @import

DebugBear performance recommendation showing sequential request chain caused by @import

CSS @import in HTML style tags

In theory, using @import inside a style tag in the HTML would allow browsers to discover the stylesheet right away and start downloading it early.

<style>
  @import url(file.css);
</style>

However, browsers don't always support this well. While some of these browser issues have been addressed over time, there are still a number of problems with @import.

@import in HTML is easy to avoid: simply use a stylesheet link tag instead.

<link rel="stylesheet" href="file.css" />

Use link tags instead of CSS @import wherever you can, so that your website renders as quickly as possible.

When using link tags isn't possible, consider preloading the stylesheets loaded with @import.

<![CDATA[What Does The Back/Forward Cache Mean For Site Speed?]]> /back-forward-cache Wed, 09 Nov 2022 00:00:00 GMT Loading a new web page takes time, but about 20% of mobile navigations are actually initiated by the browser Back and Forward buttons.

The back/forward cache speeds up these navigations by restoring previously loaded page content.

What is the back/forward cache?

If a page supports the back/forward cache (also called BFCache), the browser saves the full page state in memory when navigating to a new page.

When a user then navigates back to a previous page, the browser restores the full page from a cache instead of reloading it. That way, page content can appear almost instantly.

You can see an example of that in this video.
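You can also verify this in JavaScript: the pageshow event fires with its persisted flag set to true when a page is restored from the back/forward cache rather than loaded from scratch. A minimal sketch:

```html
<script>
  // persisted is true when the page was restored from the
  // back/forward cache instead of being loaded over the network
  window.addEventListener("pageshow", (event) => {
    if (event.persisted) {
      console.log("Restored from the back/forward cache");
    }
  });
</script>
```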

When is a page served from the back/forward cache?

To benefit from the back/forward cache, a page needs to be eligible for it. Here are some common reasons why a page might not be eligible:

  • It uses the unload event (as restoring the page after unload has been handled may break the page)
  • It uses the beforeunload event (in Firefox)
  • It uses the Cache-Control: no-store header that disables all caches

There are a wide range of reasons why a page might not support the back/forward cache, and you can find the full list here.
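For example, if your page only uses the unload event to run cleanup code, switching to the pagehide event usually preserves back/forward cache eligibility. A sketch, where the cleanup function is hypothetical:

```html
<script>
  // Avoid "unload", which makes a page ineligible for the
  // back/forward cache. "pagehide" also fires when the page is
  // about to be stored in the cache (event.persisted === true).
  window.addEventListener("pagehide", (event) => {
    flushAnalytics(); // hypothetical cleanup function
  });
</script>
```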

How can I check if my site can be served from the back/forward cache?

Chrome DevTools allows you to check whether your page is eligible for the back/forward cache.

  1. Right-click on the page and click Inspect to open Chrome DevTools
  2. Switch to the Application tab
  3. In the Cache section of the sidebar, select Back/forward cache
  4. Click Test back/forward cache

DevTools will then show you a status indicating whether your site can use the back/forward cache and, if it’s not eligible, what you could do to support it.

Successfully served from back/forward cache

In this case your page is eligible for the back/forward cache and you don’t need to do anything.

Chrome DevTools showing a site that’s eligible for the back/forward cache


Not eligible for the back/forward cache

This status shows that you should make a change to your website to support the back/forward cache.

Chrome DevTools showing a site that needs to be changed to support the back/forward cache

Pending Support

Finally, this status shows that your page isn’t currently eligible for the back/forward cache, but Chrome may support it in a future version.

Chrome DevTools showing a site that might be eligible for the back/forward cache in a future version of Chrome

How is the back/forward cache different from the HTTP cache?

The browser HTTP cache stores past responses to network requests to avoid having to redownload resources.

The back/forward cache is more comprehensive: the entire page state can be restored.

When only the HTTP cache is used, some resources may still have to be redownloaded if they are not eligible for caching. After that, the page still needs to run any on-page scripts and render the page contents. With the back/forward cache, a lot of this work can be avoided.

What browsers support the back/forward cache?

Chrome, Safari, Firefox, and Edge all support the back/forward cache.

Impact on site speed metrics

If a page is loaded from the cache it can render extremely quickly, which is good for the Core Web Vitals of your website.

For example, here you can see that a page restored from the back/forward cache has a Largest Contentful Paint of just 100 milliseconds.

Site speed Chrome extension showing a TTFB of 16 milliseconds and an LCP of 100 milliseconds

Compare that to a typical page load without caching, where it takes half a second to load the page.

Site speed Chrome extension showing a TTFB of 146 milliseconds and an LCP of 427 milliseconds

The back/forward cache can also reduce layout shift. This was noticeable in Google’s Chrome User Experience Report after Chrome enabled the cache.

Core Web Vitals Report by HTTP archive showing ecommerce tools having lower Cumulative Layout Shift after the introduction of the back/forward cache in Chrome

<![CDATA[How To Set Up Google Search Console And View Core Web Vitals Data]]> /search-console-core-web-vitals Mon, 31 Oct 2022 00:00:00 GMT Google Search Console (GSC) is a free service that Google provides to website owners, giving them insight into how much Google search traffic they get, what pages are showing up in Google, and what they can do to optimize their website.

Since the Page Experience Update in 2021, Google has used the Core Web Vitals metrics as a ranking factor. This article will take a closer look at the Web Vitals data that's available in Google Search Console.

Core Web Vitals data in Search Console

What is Google Search Console?

GSC, formerly known as Webmaster Tools, provides verified website owners with a wide range of information:

  • How many clicks does my website get from Google?
  • What keywords does my website rank for?
  • How many pages are included in the Google index?
  • Is my website mobile friendly?
  • Does my website use a secure connection (HTTPS)?
  • How fast is my website?

Google Search Console Overview Dashboard

This information allows website owners to make their website rank higher in Google, debug indexing and performance issues, and track their rankings over time.

Setting up your Search Console account

As this information is only shown to website owners, you first need to sign up to GSC and verify your website.

1. Sign into Search Console

Go to the Search Console homepage and click Start now.

Google Search Console Homepage

Then log into your Google account or create a new one.

Google Login Screen

2. Verify your website

GSC then asks you what website you want to verify. You have two options here:

  1. Verify ownership of an entire domain including all subdomains
  2. Verify ownership of a specific subdomain (or even a subpath of a subdomain)

High-level search console verification options

Option 1 requires you to edit DNS records with your domain name provider, so option 2 is often easier.

For example, you can:

  • Add a meta tag to your homepage HTML
  • Use your existing verified Google Analytics or Google Tag Manager accounts

Google Search Console Verification Options
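The meta tag option involves adding a tag like the following to your homepage's <head> (the content value is a placeholder for the token Search Console generates for you):

```html
<meta name="google-site-verification" content="YOUR_VERIFICATION_TOKEN" />
```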

Once your website is verified you can see data about its search performance.

How to view Core Web Vitals data in Google Search Console

Google collects Core Web Vitals as part of the Chrome User Experience Report and uses them as a ranking signal.

To see how your users experience your website, select the Core Web Vitals tab in the sidebar.

Google then shows you how many of your URLs it views as "Good", "Needs Improvement", or "Poor". The report is grouped into mobile and desktop experiences.

Search Console Core Web Vitals Report

Click Open Report to see more details about Core Web Vitals on your website.

The bar chart shows you how many pages meet the Web Vitals thresholds, and how many you need to work on. By default only "Poor" experiences are shown, so click on the "Needs Improvement" and "Good" headers to see all data.

Mobile Core Web Vitals Report

Under the chart there's a Why URLs aren't considered good section that tells you which of the three Core Web Vitals your website isn't doing well on.

For each issue you can also see the number of URLs that are affected.

How to view what website URLs have slow Core Web Vitals

You can click on each of the web vitals issues in Google Search Console to see specific groups of pages that are impacted.

Search Console LCP Report

By default just one example URL is shown, but you can get a longer list of URLs in that group by clicking on the group.

Detailed URL group in search console

What are URL groups?

Google doesn't have enough data on every individual page of your website to assess its Core Web Vitals.

Therefore, URLs are grouped together if Google thinks the pages are of a similar type. For example, if you have many product pages Google might group them together, and they would share the Web Vitals ranking signal.

However, this process isn't perfect and sometimes fast pages are grouped with slower ones. Being included in a slow group doesn't necessarily mean that a particular URL is slow.

Deciding what pages to optimize

If your website gets a lot of traffic Search Console will show URL-specific data that you can use to decide what pages to optimize.

URL-level Core Web Vitals data

However, if you don't have URL-level data from real users, you need to test each URL in the lab. For example, you can use our free website speed test tool.

You can then focus your efforts on the slowest URLs, and those that get the most traffic.

Site speed test result

Monitoring Core Web Vitals

Google Search Console will show a timeline of how many pages meet the "Good" Core Web Vitals thresholds. But this is based on data from the last 30 days, so it will take a while for the data to update. You also won't be able to track the values of specific metrics over time.

DebugBear can help you optimize your pages and keep them fast by monitoring your website over time. We run daily lab tests on mobile and desktop devices across 20+ locations, and we also keep track of the URL-level Core Web Vitals data reported by Google.

Core Web Vitals monitoring in DebugBear

<![CDATA[A Comprehensive Guide To HTTP/3 And QUIC]]> /http3-quic-protocol-guide Tue, 25 Oct 2022 00:00:00 GMT The HTTP protocol lets browsers and other applications request resources from a server on the internet, for example, to load a web page. HTTP/3 is the latest version of this protocol, which was published by the Internet Engineering Task Force (IETF) as a proposed standard under RFC 9114 in June 2022.

It aims to make the web faster and more secure by providing an application layer over QUIC, a next-generation transport protocol running on top of the lightweight User Datagram Protocol (UDP). We’ll discuss the different network layers in depth further down in this article.

Unlike the previous versions of HTTP, HTTP/3 doesn’t introduce any new features on its own. At a high level, it provides the same functionalities as HTTP/2, such as header compression and stream prioritization. However, under the hood, the new QUIC transport protocol entirely changes the way we transfer data over the web.

In this article, we’ll take an in-depth look at the new features in HTTP/3 and QUIC, see how they fit into the overall ecosystem of network protocols, how HTTP/3 compares to the previous versions of HTTP, and what its main limitations are.

What is HTTP/3?

HTTP (Hypertext Transfer Protocol) is an application-layer network communication protocol of the Internet Protocol Suite, or according to its official website, the “core protocol of the World Wide Web”.

It defines a request-response mechanism between client (e.g. a browser) and server applications on the web that allows them to send and receive hypertext (HTML) documents and other text and media files.

HTTP/3 was initially known as ‘HTTP-over-QUIC’ because its main goal is to make the HTTP syntax and all the existing HTTP/2 functionality compatible with the QUIC transport protocol.

Thus, the new features of HTTP/3 are all coming from the QUIC layer, including built-in encryption, a new cryptographic handshake, zero round-trip time resumption on prior connections, the removal of the head-of-line blocking issue, connection migration to support mobile users on the go, and native multiplexing.

HTTP/2 is also referred to as H2 and HTTP/3 can be shortened to H3.

HTTP in the TCP/IP protocol stack

Delivering information over the internet is a complex operation that involves both the software and hardware level. One protocol cannot describe the entire communication flow due to the different characteristics of the devices, tools, and software used throughout the process.

As a result, network communication is based on a stack of communication protocols in which each layer serves a different purpose. Although there are various conceptual models that describe the structure of protocol layers, such as the seven-layer OSI Model, the internet is based on the four-layer TCP/IP model, also known as the Internet Protocol Suite. It’s defined in the RFC 1122 specification as follows:

“To communicate using the Internet system, a host must implement the layered set of protocols comprising the Internet protocol suite. A host typically must implement at least one protocol from each layer.”

Here is how the four layers of the TCP/IP model stack up, from top to bottom:

| Layer | Purpose | ID mechanism (examples) | Protocols (examples) | Devices/Tools (examples) |
| --- | --- | --- | --- | --- |
| LAYER 4: Application layer | process-to-process communication | application-level identification mechanisms | HTTP, DNS, TLS, FTP, SMTP, SSDP, etc. | web browsers and server applications, mail server and client applications, FTP server and client applications, etc. |
| LAYER 3: Transport layer | host-to-host communication for applications | port numbers (either TCP or UDP ports; QUIC and DCCP use UDP ports) | TCP, UDP, QUIC, DCCP | ports (each LAYER-4 protocol has its commonly used port number, e.g. HTTP uses port 80) |
| LAYER 2: Internet layer | routing (selecting a path for traffic in a network or across multiple networks) | IP addresses | IP (IPv4 and IPv6), IPsec | interface controllers (NICs), internal or external |
| LAYER 1: Link (network interface) layer | moving network packets between different hosts on the same local network | MAC (Media Access Control) addresses | IEEE 802 standards for LAN/MAN/PAN networks (Ethernet, Wi-Fi, etc.), PPP, etc. | device drivers for NICs, e.g. PHY chips for Wi-Fi, Ethernet devices, etc. |

As the above table shows, HTTP is an application-layer protocol that makes communication possible between two software applications: a web server and a web browser. HTTP messages (requests or responses) are delivered over the internet by a transport-layer protocol: either TCP (for HTTP/2 and HTTP/1.1 messages) or QUIC (for HTTP/3 messages) — we’ll see how transport protocols work in detail later in the article.

A brief history of HTTP

Like most communication protocols, HTTP/3 is defined in the RFC (Request for Comments) Series used for publishing, editing, and cataloging technical documents related to the internet.

HTTP/3 was standardized as RFC 9114 in 2022. However, two previous versions of the protocol, HTTP/2 and HTTP/1.1, are still in active use.

Here’s a brief summary of the evolution of the HTTP protocol since its inception:

| HTTP version | Year of standardization | Specification | Status | Key features |
| --- | --- | --- | --- | --- |
| HTTP/0.9 | (1991) | has no RFC number; see the original doc created by Tim Berners-Lee | historical (not in use) | only raw data transfer; introduced the TCP/IP model and GET requests (also called the ‘one-line protocol’) |
| HTTP/1 | 1996 | RFC 1945 | historical (not in use) | introduced HTTP status codes, Content-Type, the POST and HEAD methods, and request headers |
| HTTP/1.1 | 1997 | RFC 9112 | Internet Standard | update to HTTP/1; introduces the Host header, the 100 Continue status, persistent connections, and new HTTP methods (PUT, PATCH, DELETE, CONNECT, TRACE, OPTIONS) |
| HTTP/2 | 2015 | RFC 9113 | Proposed Standard | introduces a new binary framing layer that’s not compatible with HTTP/1.1; adds request and response multiplexing, stream prioritization, automatic header compression (HPACK), connection reset, and server push |
| HTTP/3 | 2022 | RFC 9114 | Proposed Standard | makes HTTP compatible with QUIC; moves from TCP to UDP transport |

See Cloudflare Radar for the current usage data of the three active versions of HTTP — 28% of Cloudflare’s traffic is already transferred via HTTP/3 and QUIC.

While most requests on Cloudflare’s global network still use HTTP/2, HTTP/3 traffic surpassed HTTP/1.1 in July 2022:

Number of requests on HTTP/1.1 vs HTTP/2 vs HTTP/3 connections on Cloudflare's global network, diagram

Image credit: Cloudflare Blog

What is QUIC?

QUIC (not an acronym; pronounced as ‘quick’) is a general-purpose transport-layer protocol published as an IETF Proposed Standard in 2021 — one year before HTTP/3. It can be used with any compatible application-layer protocol, but HTTP/3 is its most frequent use case.

QUIC runs on top of another transport protocol called UDP, which is responsible for the physical delivery of application data (e.g. an HTTP/3 message) between the client and server machines. UDP is quite a simple and lightweight protocol, which makes it fast, but it also lacks many features essential for reliable and secure communication. QUIC implements these higher-level transport features, so the two protocols work together to optimize the delivery of HTTP data over the network.

UDP has been around for more than 40 years — it was standardized back in 1980. The acronym stands for ‘User Datagram Protocol’ as UDP exchanges connectionless datagrams (basic transfer units) between two end machines.

This is what a datagram looks like — it doesn’t include any data related to connection establishment or information about the success of delivery. It only includes a lightweight header and the message:

The structure of a UDP datagram
Image credit: The Network Encyclopedia

As you can see above, a UDP header is very lightweight: only 64 bits altogether (16 bits each for the source port, the destination port, the length of the message, and the checksum). This makes pure UDP delivery very fast; QUIC, however, trades some of that raw speed for the additional reliability and security features it implements on top.

With version 3, HTTP moves from TCP-based to UDP-based connections. As a result, the entire underlying structure of network communication changes.


What is TCP?

Like UDP, TCP (Transmission Control Protocol) is not a new transport protocol. It was created by two DARPA scientists in 1974 (first documented as RFC 675; the current version is standardized as RFC 9293).

It uses a different, connection-oriented and reliable approach to data transport that's slower than the connectionless, fast, but unreliable UDP. With UDP, we don't know whether a packet has been delivered, as there is no built-in feedback mechanism; with TCP, every dropped packet is retransmitted.

The diagram below shows the structure of a TCP packet and a UDP datagram side by side. For more information, see this TCP vs UDP comparison table by GeeksforGeeks:

TCP vs UDP messages

Image source: The Network Encyclopedia

As you can see in the diagrams above, a TCP packet includes all the information necessary for performing the SYN/SYN-ACK/ACK handshake that establishes a reliable connection between the client and server. On the other hand, a UDP datagram only consists of a 64-bit header and the message.

The main advantage of UDP is its connectionless nature — as there’s no established connection between the client and server, network packets can use different delivery routes. In this way, each packet can use the most optimal path that’s available at that moment.

However, unlike TCP, UDP doesn’t guarantee delivery, which is its main shortcoming. As it has no loss detection mechanism, if a datagram doesn’t reach its destination, it’s simply dropped. Plus, as packets are delivered independently of each other, they arrive at their destination out of order.

Why do we need QUIC?

QUIC was created to replace TCP with a more flexible transport protocol with fewer performance issues, built-in security, and a faster adoption rate (we’ll see this feature in detail in the ‘Resistance to protocol ossification’ section below). It needs UDP as a lower-level transport protocol primarily because most devices only support TCP and UDP port numbers.

In addition, QUIC leverages UDP’s:

  • connectionless nature that makes it possible to move multiplexing down to the transport layer and removes TCP’s head-of-line blocking issue (we’ll see this in detail later)
  • simplicity that allows QUIC to re-implement TCP’s reliability and bandwidth management features in its own way

QUIC transport is a unique solution. While it’s connectionless at the lower level thanks to the underlying UDP layer, it’s connection-oriented at the higher level thanks to its re-implementation of TCP’s connection establishment and loss detection features that guarantee delivery. In other words, QUIC merges the advantages of both types of network transport.

It has another important purpose as well — implementing an advanced level of security at the transport layer. QUIC integrates most features of the TLS v1.3 security protocol and makes them compatible with its own delivery mechanism. In the HTTP/3 stack, encryption is not optional but a built-in feature.

Here’s a recap of how the three transport-layer protocols, TCP, UDP, and QUIC, compare to each other:

| | TCP | UDP | QUIC |
| --- | --- | --- | --- |
| Layer in the TCP/IP model | transport | transport | transport |
| Place in the TCP/IP model | on top of IPv4 or IPv6 | on top of IPv4 or IPv6 | on top of UDP |
| Connection type | connection-oriented | connectionless | connection-oriented |
| Order of delivery | in-order delivery | out-of-order delivery | out-of-order delivery between streams, in-order delivery within streams |
| Guarantee of delivery | guaranteed (lost packets are retransmitted) | no guarantee of delivery | guaranteed (lost packets are retransmitted) |
| Handshake mechanism | non-cryptographic handshake | no handshake | cryptographic handshake |
| Data identification | knows nothing about the data it transports | knows nothing about the data it transports | uses stream IDs to identify the independent streams it transports |

Differences between the HTTP/1.1 vs HTTP/2 vs HTTP/3 protocol stacks

Now that we've looked into the differences and similarities of the three transport protocols, let's see the main differences between the three HTTP stacks.

As discussed above, HTTP/3 comes with a new underlying protocol stack that brings UDP and QUIC to the transport layer. However, there’s another important change. As you can see in the diagram below, some of the roles and features of the application and transport layers also change:

Comparison of the HTTP/1.1 vs HTTP/2 vs HTTP/3 protocol stacks
Image credit: Robin Marx: H3 Protocol Stack; GitHub

The most important differences between the HTTP/3-QUIC-UDP stack and the TCP-based versions of HTTP communication are as follows:

  • QUIC integrates most features of the TLS v1.3 security protocol, so encryption moves down from the application layer to the transport layer (we’ll discuss this in the next section in detail).
  • HTTP/3 doesn’t multiplex the connection between different streams, as this is handled by QUIC at the transport layer. Transport-layer multiplexing removes the head-of-line blocking issue present in HTTP/2. (HTTP/1.1 doesn’t have this issue because it opens multiple TCP connections; it offered pipelining instead of multiplexing, which turned out to have serious implementation flaws and was replaced with application-layer multiplexing in HTTP/2.)
  • The UDP layer is more lightweight than the TCP layer because the latter has much more functionality. In the HTTP/3 stack, QUIC is responsible for connection establishment, congestion control, and loss detection, which are handled by TCP in the two previous stacks.
  • The QUIC layer has many responsibilities: it re-implements TCP’s features, integrates the TLS security protocol, and adds some new features, e.g. connection migration, to the transport layer.

The best features of HTTP/3 and QUIC

The new features in HTTP/3 and QUIC can help make server connections faster, more secure, and more reliable.

A QUIC note regarding HTTP/3 features

Even though the features below are frequently referred to as the features of HTTP/3, most of them come from the QUIC layer. As mentioned above, HTTP/3 simply provides the application layer on top of these transport-layer features.

Note that the following section only includes a selection of the features of HTTP/3 and QUIC. For the full feature list, consult RFC 8999, 9000, 9001, and 9002 for QUIC and RFC 9114, 9204, and 9218 for HTTP/3.

The features discussed in the HTTP/3 specifications, such as QPACK header compression, are not new features per se; they only make HTTP/2’s application-layer functionality compatible with the underlying QUIC transport.

1. Creating a secure and reliable connection in a single handshake

HTTP/2 needs at least two round-trips between the client and server to execute the handshake process: one for the TCP handshake for connection establishment and at least one for the TLS handshake for authentication (depending on the TLS version).

As QUIC combines these two handshakes into one, HTTP/3 only needs one round-trip to establish a secure connection between the client and server. The result is faster connection setup and lower latency.

QUIC integrates most features of TLS v1.3, the latest version of the Transport Layer Security protocol, which means that:

  • The encryption of HTTP/3 messages is not optional like with HTTP/2 and HTTP/1.1, but mandatory. With HTTP/3, all messages are sent via an encrypted connection by default.
  • TLS v1.3 introduces an improved cryptographic handshake that requires just one round-trip between the client and server, as opposed to TLS v1.2’s two round-trips for authentication (see the difference between a TLS v1.2 and a TLS v1.3 handshake).
    • QUIC integrates this with its own handshake for connection establishment, which replaces the TCP handshake.
  • As HTTP/3 messages are encrypted at the transport level, more information is secured than before:
    • In the HTTP/1.1 and HTTP/2 stacks, TLS runs in the application layer, so only the HTTP data is encrypted while the TCP headers are sent as plain text, which comes with some security risks.
    • In the HTTP/3 stack, TLS runs in the transport layer (inside QUIC), so not only the HTTP message is encrypted but most of the QUIC packet header too (except some flags and the connection ID — see later in the article).

In short, HTTP/3 uses a more secure transport mechanism than the previous, TCP-based versions of HTTP.

Here is how the structure of the TLS v1.3 handshake compares to the QUIC handshake:

As you can see in the diagram below, QUIC keeps TLS v1.3’s content layer that includes the cryptographic keys but replaces the record layer (responsible for fragmenting the data into smaller blocks/records to prepare it for transmission) with its own transport functionality:

TLS v1.3 vs QUIC cryptographic handshake diagrams Image source: RFC 9001

2. Zero round-trip time resumption on prior connections

On pre-existing connections, QUIC leverages the 0-RTT feature of TLS v1.3.

0-RTT stands for zero round-trip time resumption, which is a new performance feature of the TLS protocol, introduced in version 1.3.

With 0-RTT resumption, the client can send an HTTP request in the first round-trip on prior connections because the cryptographic keys between the client and server have already been negotiated — data sent on the first flight is called early data.

The diagram below shows how the HTTP/2 and HTTP/3 stacks compare in terms of connection setup:

  • If you use HTTP/2 with TLS v1.2, the client can send the first HTTP request in the fourth round-trip.
  • With HTTP/2 and TLS v1.3, the first request for application data can be sent in the third or second (on prior connections) round-trip.
  • With HTTP/3 and QUIC, which includes TLS v1.3 by default, the first HTTP request is sent in the second or first (on prior connections) round-trip.

Connection setup in the HTTP/2 vs HTTP/3 stacks
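The round-trip counts above translate directly into wait time before the first HTTP request can be sent. Here is a rough back-of-the-envelope sketch (assuming a fixed 100 ms round-trip time and ignoring server processing time):

```javascript
// Rough time until the first HTTP request can be sent, for a given
// round-trip time (RTT). Round-trip counts match the comparison above.
function timeToFirstRequest(roundTrips, rttMs) {
  return roundTrips * rttMs;
}

const rtt = 100; // assume a 100 ms round-trip time

// HTTP/2 + TLS 1.2: TCP (1 RTT) + TLS 1.2 (2 RTTs) + request = 4th round-trip
console.log("HTTP/2 + TLS 1.2:", timeToFirstRequest(4, rtt), "ms");
// HTTP/2 + TLS 1.3: TCP (1 RTT) + TLS 1.3 (1 RTT) + request = 3rd round-trip
console.log("HTTP/2 + TLS 1.3:", timeToFirstRequest(3, rtt), "ms");
// HTTP/3 + QUIC: combined QUIC/TLS handshake (1 RTT) + request = 2nd round-trip
console.log("HTTP/3 + QUIC:   ", timeToFirstRequest(2, rtt), "ms");
// HTTP/3 with 0-RTT resumption: the request goes out in the 1st round-trip
console.log("HTTP/3 + 0-RTT:  ", timeToFirstRequest(1, rtt), "ms");
```

On a 100 ms RTT connection, that is the difference between waiting 400 ms and waiting 100 ms before any application data moves.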

3. Head-of-line blocking removal

As the HTTP/3 protocol stack has a different structure than HTTP/2, it removes HTTP/2's biggest performance problem: head-of-line (HoL) blocking. This issue happens when a packet is dropped on an HTTP/2 connection. Until the lost packet is retransmitted, the entire data transfer process stops and all the packets have to wait on the network, which leads to longer page load times.

In HTTP/3, head-of-line blocking removal is made possible by native multiplexing, one of QUIC’s most important features.

HoL blocking terminology

To understand what head-of-line blocking is and why QUIC only has non-blocking byte streams, let’s go over the most important concepts related to this phenomenon:

Byte stream

A byte stream (or just stream) is a sequence of bytes (units of eight binary digits/bits) sent over a network. Bytes are transported as packets of different sizes — e.g. the minimum size of an IPv4 packet is 20 bytes while its maximum size is 65,535 bytes (an IP packet can carry a UDP datagram or a TCP segment). A byte stream is essentially the physical manifestation of a single resource (file) sent over the network.


Multiplexing

Multiplexing makes it possible to deliver multiple byte streams over one connection, which means that the browser can load multiple files on the same connection simultaneously.

While HTTP/1.1 doesn’t support multiplexing (it opens a new TCP connection for each byte stream), HTTP/2 introduces application-layer multiplexing (it opens just one TCP connection and sends all the byte streams over it), which results in head-of-line blocking.

Head-of-line blocking

Head-of-line blocking is a performance issue caused by TCP’s byte stream abstraction. TCP doesn’t have any knowledge about the data it transports and sees everything as a single byte stream. So when a packet is dropped anywhere on the network, all the other packets on the multiplexed connection stop delivering and wait until the lost one is re-transmitted — even if they belong to a different byte stream.

As TCP uses in-order delivery, the lost packet blocks the entire delivery process at the head of the line. At higher packet loss rates, this can significantly harm site speed. Even though multiplexing was introduced to HTTP/2 as a performance optimization, at a 2% packet loss HTTP/1.1 connections are usually faster (see more in the HTTP/3 Explained GitBook by Daniel Stenberg).

Native multiplexing

In the HTTP/3 protocol stack, multiplexing is moved down to the transport layer — this is called native multiplexing. QUIC identifies each byte stream with a stream ID, so it doesn’t see black boxes like TCP but has some knowledge about the data it delivers (it only sees the stream IDs, but still doesn’t know what files it delivers).

How does QUIC remove head-of-line blocking?

QUIC runs on UDP, which uses out-of-order delivery, so each byte stream is transported independently over the network (by finding the most optimal route available). However, for reliability, QUIC still ensures the in-order delivery of packets within the same byte stream so that the data related to the same request arrives in a consistent way.

As QUIC identifies each byte stream and streams are delivered on independent routes, if a packet gets lost, the unaffected byte streams don’t have to wait for its re-transmission. These resources can keep downloading without being blocked by the lost packet at the head of the line.
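The difference between the two ordering rules can be illustrated with a toy simulation (not a network model, just the delivery logic described above): with TCP-style whole-connection ordering, one lost packet delays every later packet; with QUIC's per-stream ordering, only the stream the lost packet belongs to is delayed.

```javascript
// Toy illustration of head-of-line blocking. Each packet belongs to a
// stream; packet 2 is "lost" and only arrives after a retransmission delay.
const packets = [
  { id: 1, stream: "A", arrival: 10 },
  { id: 2, stream: "A", arrival: 20, lost: true }, // retransmitted later
  { id: 3, stream: "B", arrival: 30 },
  { id: 4, stream: "B", arrival: 40 },
];
const RETRANSMIT_DELAY = 100;

// TCP: in-order delivery across the whole connection. Every packet after
// a lost one waits for the retransmission.
function deliveryTimesTcp(packets) {
  let blockedUntil = 0;
  return packets.map((p) => {
    const ready = p.lost ? p.arrival + RETRANSMIT_DELAY : p.arrival;
    blockedUntil = Math.max(blockedUntil, ready);
    return { id: p.id, deliveredAt: blockedUntil };
  });
}

// QUIC: in-order delivery per stream. A loss only blocks its own stream.
function deliveryTimesQuic(packets) {
  const blockedUntil = {};
  return packets.map((p) => {
    const ready = p.lost ? p.arrival + RETRANSMIT_DELAY : p.arrival;
    blockedUntil[p.stream] = Math.max(blockedUntil[p.stream] || 0, ready);
    return { id: p.id, deliveredAt: blockedUntil[p.stream] };
  });
}

console.log(deliveryTimesTcp(packets));  // packets 3 and 4 wait for packet 2
console.log(deliveryTimesQuic(packets)); // stream B is unaffected by the loss
```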

Here’s a diagram of how QUIC’s native multiplexing compares to HTTP/2’s application-layer multiplexing:

HTTP/2 vs QUIC multiplexing diagrams Image credit: Devopedia. 2021. "QUIC." Version 5, March 8 (CC BY-SA 4.0)

As you can see in the diagram above, both HTTP/2 and QUIC open just one connection between the client and server, but QUIC transports the byte streams independently, on different delivery routes so that they don’t block each other.

Even though QUIC eliminates the HoL blocking issue of HTTP/2, out-of-order delivery also has a downside: byte streams won’t necessarily arrive in the order they were sent in. For example, the least important resource might arrive first while render-blocking resources are still in transit, so the page can’t start loading.

This additional head-of-line blocking can be mitigated by resource prioritization on HTTP clients (e.g. the browser downloads render-blocking resources first). With priority hints, you can also assign a relative priority to resources to help browsers prioritize your resources.

4. QPACK field compression

QPACK is a field compression format for HTTP/3 that makes HTTP/2’s HPACK header compression format compatible with the QUIC protocol (‘header’ and ‘field’ are used synonymously; they refer to the metadata sent in the header or trailer of an HTTP message).

Field compression eliminates redundant metadata by assigning indexes to fields that are used multiple times during the connection. At a high level, HPACK and QPACK have the same functionality: both reduce the bandwidth required to transmit HTTP headers over the network. However, they use partly different mechanisms to address the different needs of the underlying transport protocols: TCP (HPACK) vs QUIC (QPACK).

How does HPACK work?

To reduce the size of the header, HPACK uses two indexing tables that assign indexes to fields:

  • A static table, which:
    • contains a predefined list of common header fields (e.g. :method: GET)
    • is defined in the specification and never changes during the connection
  • A dynamic table, which:
    • is empty initially
    • is built up over the course of the connection and updated incrementally with every request
    • includes the per-message changes either literally or as a reference to a field that was sent previously

To perform the header compression, both the client and server run an encoder and decoder. The HPACK header is encoded by the sender and decoded by the receiver application. As HTTP/2 sends and receives messages in order, HPACK can safely use references in the dynamic tables as they’ll always refer to a field that has already arrived.
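At a high level, the indexing idea works like the toy encoder below (a simplification for illustration only; real HPACK/QPACK also use a predefined static table and Huffman-encode the strings): the first time a field is sent it goes out literally and is added to the dynamic table; later occurrences are replaced by a small index.

```javascript
// Toy sketch of HPACK-style dynamic-table indexing (illustration only;
// the real format uses a static table, Huffman coding, and a bounded table).
class ToyFieldEncoder {
  constructor() {
    this.dynamicTable = new Map(); // "name: value" -> index
  }
  encode(name, value) {
    const key = `${name}: ${value}`;
    if (this.dynamicTable.has(key)) {
      // Field was sent before: emit a tiny index instead of the full text.
      return { indexed: this.dynamicTable.get(key) };
    }
    this.dynamicTable.set(key, this.dynamicTable.size + 1);
    return { literal: key }; // first occurrence is sent literally
  }
}

const enc = new ToyFieldEncoder();
console.log(enc.encode("user-agent", "ExampleBrowser/1.0")); // { literal: ... }
console.log(enc.encode("user-agent", "ExampleBrowser/1.0")); // { indexed: 1 }
```

Because the decoder builds the same table from what it receives, an index is only safe if the referenced field is guaranteed to have arrived already — which is exactly the in-order assumption discussed next.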

Why is QPACK needed?

As opposed to HTTP/2, HTTP/3 cannot use the HPACK format, which was created for TCP and assumes that byte streams arrive in order. If HTTP/3 used HPACK compression, it would cause additional head-of-line blocking because HPACK relies on references to previous fields.

However, with HTTP/3, byte streams don’t arrive in order, so it can happen that the dynamic table includes a reference to a message that hasn’t arrived yet — which would make the stream wait for the referenced one.

To solve this issue, QPACK introduces two unidirectional stream types: encoder and decoder streams. In addition to the bidirectional byte streams that deliver the HTTP/3 messages (including the compressed QPACK headers that also use indexes from the static and dynamic tables), the client and server can open encoder and decoder streams that deliver instructions to the other endpoint.

An encoder stream includes the encoder’s instructions for the decoder, while a decoder stream includes the decoder’s instructions for the encoder. Each HTTP endpoint (client or server) can open at most one encoder and one decoder stream. However, they don’t necessarily have to do so — for instance, if they don’t want to use the dynamic table, they can avoid starting an encoder stream.

As opposed to the main bidirectional byte stream, the encoder and decoder streams are unidirectional, which means that they only deliver data in one direction without waiting for the response of the other endpoint. These are critical streams that stay open during the lifetime of the main connection and cannot be closed.

QPACK’s performance trade-off: field compression ratio vs. HoL blocking reduction

While adding extra (albeit lightweight) unidirectional byte streams to the communication comes with a performance overhead, it also mitigates the additional head-of-line blocking issue between independent byte streams arising from field compression.

The QPACK specification gives a fairly high level of freedom to client and server implementations to decide individually which one is more important and to what extent: head-of-line blocking mitigation or a higher level of field compression.

5. Flexible bandwidth management

Bandwidth management aims to distribute the available network capacity in the most optimal way between packets and streams. It’s essential functionality because the sender and receiver machines and the network nodes in-between them (e.g. routers and switches) all process packets at different speeds that also dynamically change over time.

Managing bandwidth helps avoid data overflow and congestion over the network, which result in slower server response times and also pose a security risk (e.g. vulnerability against flood attacks).

As UDP doesn’t have built-in bandwidth management, QUIC takes on this responsibility in the HTTP/3 stack. It re-implements the two pillars of TCP’s bandwidth management:

  • flow control, which limits the send rate at the receiver to prevent the sender from overwhelming it
  • congestion control, which limits the send rate at every node on the path between the sender and receiver to prevent congestion over the network

Per-stream flow control

To support independent streams, QUIC performs flow control on a per-stream basis. It controls the bandwidth consumption of stream data at two levels:

  • on each stream individually, by setting the maximum amount of data that can be allocated to one stream
  • across the entire connection, by setting the maximum cumulative number of active streams

Using per-stream flow control, QUIC limits the data that can be sent at the same time to prevent the receiver from being overwhelmed and to share the network capacity between the streams more or less fairly.
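As an illustration (not QUIC's actual credit-based MAX_STREAM_DATA/MAX_STREAMS frames), the two limits above can be sketched as a simple send check with made-up numbers:

```javascript
// Toy sketch of QUIC-style limits (illustration only): a per-stream data
// cap plus a cap on the number of active streams on the connection.
function canSend(state, streamId, bytes) {
  const isNewStream = !(streamId in state.perStream);
  if (isNewStream && Object.keys(state.perStream).length >= state.maxStreams) {
    return false; // too many active streams on this connection
  }
  const used = state.perStream[streamId] || 0;
  if (used + bytes > state.maxStreamData) return false; // stream limit hit
  state.perStream[streamId] = used + bytes;
  return true;
}

const state = { maxStreamData: 1000, maxStreams: 2, perStream: {} };
console.log(canSend(state, 1, 800)); // true
console.log(canSend(state, 1, 300)); // false: stream 1 would exceed 1,000 bytes
console.log(canSend(state, 2, 500)); // true
console.log(canSend(state, 3, 100)); // false: only 2 active streams allowed
```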

Note that QUIC uses a different flow control algorithm for cryptographic data used for authentication such as handshakes — this is controlled by TLS within QUIC.

Congestion control with optional algorithms

QUIC allows implementations to choose from different congestion control algorithms, as these are not specific to the transport protocol.

The most well-known algorithms are:

  • NewReno – the congestion control algorithm used by TCP, defined in RFC 6582, and used as an explanation of QUIC’s congestion control mechanism in RFC 9002
  • CUBIC – defined in RFC 8312, similar to NewReno, but uses a cubic function instead of a linear one to calculate the congestion window increase rate
  • BBR (Bottleneck Bandwidth and Round-trip propagation time) – doesn’t have an RFC yet; it’s currently developed by Google
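For illustration, CUBIC's window growth after a loss event follows the cubic function defined in RFC 8312: W_cubic(t) = C*(t - K)^3 + W_max, where W_max is the window size when the loss occurred and K is the time it takes to grow back to W_max. A minimal sketch using the RFC's recommended constants:

```javascript
// CUBIC window growth after a loss event (RFC 8312), in simplified form.
const C = 0.4;    // scaling constant recommended by the RFC
const BETA = 0.7; // multiplicative decrease factor

function cubicWindow(t, wMax) {
  // K: time (in seconds) to grow the window back to wMax after the loss
  const K = Math.cbrt((wMax * (1 - BETA)) / C);
  return C * Math.pow(t - K, 3) + wMax;
}

const wMax = 100; // window size (in segments) when the loss happened
console.log(cubicWindow(0, wMax)); // ≈ 70: window reduced to BETA * wMax
// The window grows back toward wMax slowly at first (concave region),
// then accelerates past it (convex region) while probing for bandwidth.
```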

On poorer connections, there can be significant differences between the performance of different congestion control algorithms.

For example, according to the measurements of the Gumlet video streaming service, the BBR algorithm improves server response times by 21% compared to CUBIC on the slowest connections. The performance gains have the biggest impact on lossy network connections; faster connections experience less noticeable improvements.

In the chart below, you can see how the two congestion control algorithms impact server response times on lossy connections. While at the 75th percentile (on the slowest 25% of connections), BBR is just 4% faster than CUBIC, at the 99th percentile (on the slowest 1% of connections), it's 21% faster!

BBR vs CUBIC congestion control algorithms on lossy connections Image credit: Gumlet Blog, BBR vs CUBIC – Server response time on lossy connections

6. Seamless connection migration

Connection migration is a performance feature of QUIC that supports users who experience a network change, such as mobile users on the go. QUIC makes connection migration (more or less) seamless by making use of connection identifiers.

Connection IDentifier (CID)

By attaching an unencrypted connection identifier (CID) to each QUIC packet header, QUIC doesn’t have to reset the connection like TCP if the device switches to a new network (for example, from a 4G network to Wi-Fi, or vice versa) or the IP addresses or port numbers change for any other reason.

With the help of connection migration, QUIC doesn’t have to redo the handshake under the new conditions and HTTP/3 doesn’t have to re-request the files that were being downloaded when the network migration happened — which can be a problem in the case of larger files or video streaming.

Note, however, that the client and server still need to re-negotiate the send rates discussed above in the ‘Flexible bandwidth management’ section.

Linkability prevention

To avoid privacy issues, e.g. to prevent hackers from following the physical movement of a smartphone user by tracking the unencrypted CID across networks, QUIC uses a list of connection identifiers instead of just one.

This feature is called linkability prevention. At the beginning of the connection, the client and the server agree on a randomly generated list of connection IDs that all map to the same connection. With every network switch, a new CID from the list is attached to the QUIC header, so different networks cannot be linked to the same user.

7. Resistance to protocol ossification

One of the main reasons for creating QUIC, and subsequently HTTP/3, was to make a transport protocol that’s resistant to protocol ossification, which is an inherent characteristic of protocols implemented in the operating system (OS) kernel, such as TCP.

OSs are rarely updated, which applies even more to the operating systems of middleboxes, such as firewalls and load balancers, which sit between the client and server but are still essential parts of the network.

Protocol ossification is a problem because it makes it hard to introduce new features, as middleboxes with an older version of the protocol don’t recognize the new feature and drop the packets for security reasons. As a result, the adoption rate of new TCP features is slow. QUIC aims to solve this issue.

QUIC has a higher resistance to protocol ossification than TCP for three reasons:

  • It runs in the user space (where native apps run) instead of the kernel, so it’s easier to deploy new implementations.
  • It has a higher level of encryption (e.g. most of the QUIC header is encrypted), so middleboxes can’t read the content of the packets and therefore don’t drop them — something that frequently happens to TCP packets that include a newer feature that middleboxes with older operating systems don’t recognize and deem a security risk.
  • UDP is supported by every device, and the new features are added by the QUIC layer — on the other hand, adding new features via TCP extensions frequently requires an operating system update.

That said, QUIC streams can still be dropped for various reasons (we’ll see some of these in the next section), but the adoption of new QUIC features should be faster than that of new TCP features.

Limitations of HTTP/3 and QUIC

While the HTTP/3 protocol stack has several advantages, such as built-in encryption, head-of-line blocking reduction, 0-RTT connection setup on existing connections, and others, it also comes with some limitations.

Performance gains highly depend on the implementation

While the QUIC specifications give a lot of freedom to implementation developers, it’s still hard to correctly implement the features. For example, connection migration is great functionality, but many implementations don’t include it yet due to the complexity of its practical implications. Or, if a client implementation makes a poor choice of multiplexing algorithm, it can cause additional head-of-line blocking.

A research paper (2021) by Alexander Yu and Theophilus A. Benson also found that it’s difficult to deal with QUIC’s edge cases and properly implement congestion control algorithms. For now, HTTP/3’s real-world performance gains are modest and inconsistent across different implementations (in other words, it’s hard to tell why one implementation performs better than another under certain conditions).

For more information on this subject, check out IETF’s list of all known QUIC implementations.

HTTP version negotiation is required before using HTTP/3

HTTP/3 generally doesn’t work for the first request because browsers assume by default that the server doesn’t support HTTP/3 and send the first request via either HTTP/2 or HTTP/1.1 on a TCP connection.

If the server supports HTTP/3, it responds with an Alt-Svc (Alternative Service) header that informs the client that it can send HTTP/3 requests. The browser can respond in different ways: it can open a QUIC connection right away or wait until the TCP connection is closed.

Either way, an HTTP/3 connection is only set up after the initial resources have been downloaded over an HTTP/1.1 or HTTP/2 connection.
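You can see this negotiation by inspecting a server's response headers. Below is a minimal parser sketch for a typical Alt-Svc value (the h3 syntax is standard, but this is a simplified parser for illustration; real-world values can list multiple alternative services and extra parameters):

```javascript
// Minimal Alt-Svc parser (illustration only).
// Example header a server might send: Alt-Svc: h3=":443"; ma=86400
function parseAltSvc(headerValue) {
  const match = headerValue.match(/h3="([^"]*)"/);
  if (!match) return null; // server did not advertise HTTP/3
  const maMatch = headerValue.match(/ma=(\d+)/);
  return {
    protocol: "h3",
    authority: match[1], // e.g. ":443" = same host, port 443
    maxAgeSeconds: maMatch ? Number(maMatch[1]) : undefined,
  };
}

console.log(parseAltSvc('h3=":443"; ma=86400'));
// { protocol: 'h3', authority: ':443', maxAgeSeconds: 86400 }
console.log(parseAltSvc('h2=":443"')); // null: no HTTP/3 advertised
```

The ma (max-age) parameter tells the browser how long, in seconds, it may remember that the server supports HTTP/3, so later visits can skip the TCP round first.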

Increased difficulty of network management

As QUIC encrypts not only the payload but also most of the packet metadata, it becomes more difficult to troubleshoot network errors and optimize networks for performance and security, which makes the job of network engineers more challenging. Setting up blocking and reporting rules becomes harder for the same reason too.

Because of the high level of encryption, providing firewalling and network health tracking services also gets more difficult than for TCP streams that come and go with unencrypted metadata in the header. Due to this and the increased level of complexity, many firewalls don’t support QUIC yet, which creates a security risk for organizations that rely on these services.

Some networks block UDP

As UDP has historically been used for different kinds of cyberattacks (e.g. denial-of-service type of attacks), 3 - 5% of networks block UDP, except for essential UDP traffic such as DNS requests.

If UDP is blocked on a network, the traffic falls back to TCP-based HTTP/2 connections. However, as RFC 9308, which discusses the applicability of QUIC, warns, “any fallback mechanism is likely to impose a degradation of performance and can degrade security”.

Browser and server support is still patchy

While there are many benefits to HTTP/3, the new functionalities can also be difficult to implement. Many server environments have just recently started to implement QUIC, and HTTP/3 is still an experimental feature in Safari browsers (see the current state of browser support).

HTTP/3 browser support

Learn more about HTTP/3 and QUIC

HTTP/3 and QUIC are extensive topics that are documented in several RFC documents (see a list of the most important RFCs related to HTTP/3 and QUIC on my blog).

For more knowledge on the subject, watch David Bombal’s discussion with Robin Marx on YouTube or Robin’s HTTP/3 talk at SmashingConf.

Wrapping up

HTTP/3 and QUIC change the way we use the internet by introducing a new, UDP-based protocol stack that makes use of independent streams and comes with built-in encryption and a new type of cryptographic handshake.

In theory, using HTTP/3 comes with many advantages related to performance, security, and connectivity, but in practice, it still needs time to be properly implemented and widely adopted.

Do you want to make your website faster?

With DebugBear, you can continuously monitor your website from 20 locations around the world and debug frontend performance issues.

Check out our interactive demo or sign up for our free trial and gain insight into the performance of your most important pages.

Request waterfall showing the network protocol

<![CDATA[How To Optimize Resource Loading With Priority Hints]]> /priority-hints Thu, 06 Oct 2022 00:00:00 GMT To make your website load quickly you need the browser to prioritize loading the most important resources first. The new fetchpriority attribute for img, iframe, script, and link tags can help achieve this by marking the most important resources in the HTML.

What are priority hints?

Browsers attempt to guess the importance of a resource when they discover it. For example, render-blocking stylesheets will be high priority, while an asynchronous script can be loaded with a low priority.

But sometimes it's not clear to the browser how important a resource is. For example, images are loaded with low priority by default. Most of them are likely below the fold or hidden somewhere in a nested menu. But that's not always what you want, as images that represent the primary page content should be loaded quickly.

Priority hints solve this problem by providing the browser with additional information about the relative importance of a resource. For example, the fetchpriority attribute lets you mark specific important images as high priority.

<img src="photo.jpg" fetchpriority="high">

A common use case would be to increase the priority of images that trigger the Largest Contentful Paint.

A real-world example of priority hints in action

Take a look at this request waterfall showing the main content image being loaded on a website.

  1. The priority changes from Low to High after the page renders for the first time
  2. There's a long gray bar in the waterfall indicating the browser knows about the resource but hasn't started loading it yet

Low priority LCP image

We then add fetchpriority=high to two img elements that often end up being the LCP element.

Now the priority no longer changes, and the images are loaded immediately after the document request.

Fetchpriority attribute in waterfall

As a result, the Largest Contentful Paint now happens after 1.9 seconds instead of after 4.2 seconds.

LCP impact chart

Picture elements and priority hints

The HTML picture element lets website owners specify multiple possible image files, and the browser then picks the one with the most appropriate file type and resolution.

To use priority hints with picture tags simply add the fetchpriority attribute to the img element inside the picture tag.

<picture>
  <source srcset="/image-small.png" media="(max-width: 600px)">
  <img src="/image.png" fetchpriority="high">
</picture>

What elements support the fetchpriority attribute?

You can use the fetchpriority attribute to control the request priority of resources loaded by the following HTML elements:

  • img
  • script
  • link
  • iframe

For example, let's say you want to preload a background image on the page. By default the image request will still be made with a low priority. The fetchpriority attribute fixes this:

<link rel="preload" as="image" href="/background.webp" fetchpriority="high" />

Instead of using <link> tags in your HTML to preload resources, you can also include a Link header in the document response. The fetchpriority hint can be included here as well.

Link: </background.webp>; rel=preload; as=image; fetchpriority=high

How to use priority hints with the fetch API

The fetch API lets developers load additional data using JavaScript. Often JSON-formatted data is loaded from a backend API.

By default these requests are high-priority, but you can also set the priority to low by adding the priority property to the options object.

const req = new Request("/data.json", { priority: "low" });
fetch(req).then(res => res.json()).then(data => console.log("Response", data));

What browsers support Priority Hints?

Chrome has supported priority hints since version 101 in April 2022. Edge also supports priority hints.

As of November 2022, Safari and Firefox don't support priority hints yet. However, since the fetchpriority attribute is just a hint, nothing breaks if it is used in these browsers.

Browser support for priority hints

How to check if priority hints could make your website faster

Want to see request priorities on your website, and whether they change after the initial render?

Run a free website speed test to find out.

Website speed overview

Each test result includes a request waterfall with resource priority details.

Website speed test result showing priority change

Monitoring site speed and Core Web Vitals over time

Want to keep your website fast and optimize your Core Web Vitals?

DebugBear can monitor your website over time and provides in-depth reports to help you make it faster. Try it for free with a 14-day trial.

DebugBear website monitoring data

<![CDATA[Network Throttling in Chrome DevTools]]> /chrome-devtools-network-throttling Mon, 26 Sep 2022 00:00:00 GMT The web is best experienced with a fast network connection. Still, a large number of people will visit your site using slower speeds. They might visit your page while on the road or in a remote place.

Faster loading times can help you keep more visitors and positively influence your website SEO.

The Chrome DevTools network throttling feature lets you imitate degraded network conditions. In this article you will learn how to use it and how exactly it works.

How to test page speed with a slower connection

In order to throttle the network, first open Chrome DevTools.

  1. Right click on a website and select Inspect
  2. Select the Network tab
  3. Choose the appropriate connection throttling speed from the dropdown
  4. Reload the page in order to experience the simulated network speed

Now you can observe the website being loaded on a slow connection.

Check the example below to better understand how to use this tool.

When network throttling or the local overrides feature is enabled, DevTools shows a warning triangle next to the Network tab title. In this case there's a "network throttling is enabled" tooltip that tells you Chrome is interfering with the normal network behavior on the current page.

network throttling is enabled message

What do Slow 3G and Fast 3G mean in DevTools?

The Slow 3G setting adds 2 seconds of request latency and reduces the bandwidth to 400 kilobits per second (Kbps).

The Fast 3G setting adds 562.5 milliseconds of latency and reduces bandwidth to 1.44 Mbps.
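With these numbers you can roughly estimate how long a resource takes to download under each preset. The sketch below adds one round trip of latency to the transfer time and ignores protocol overhead and connection reuse:

```javascript
// Rough download time for a resource under the DevTools presets:
// latency (one round trip) + transfer time at the throttled bandwidth.
function downloadTimeMs(sizeKilobytes, latencyMs, bandwidthKbps) {
  const sizeKilobits = sizeKilobytes * 8;
  return latencyMs + (sizeKilobits / bandwidthKbps) * 1000;
}

const sizeKB = 100; // a 100 KB image
console.log("Fast 3G:", downloadTimeMs(sizeKB, 562.5, 1440), "ms"); // ≈ 1,118 ms
console.log("Slow 3G:", downloadTimeMs(sizeKB, 2000, 400), "ms");   // 4,000 ms
```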

A closer look at network presets in Chrome DevTools

When you open the throttling dropdown, Chrome shows three pre-defined profiles:

  • Fast 3G
  • Slow 3G
  • Offline

Let’s dive deeper into what they mean and when to use them.

Chrome DevTools uses a request-level throttling approach. A delay is only applied once the server response is received.

Request-level throttling does not consider individual network round trips like DNS resolution and establishing TCP/SSL connections.

We'll discuss request-level throttling in more depth further down in this article.

Latency settings

Due to the way DevTools throttling is implemented, the latency values tend to be higher than what other tools use. However, they are selected in order to achieve a roughly equivalent slowdown.

Preset | Request latency | Equivalent in other tools
Fast 3G | 562.5 ms | 150 ms
Slow 3G | 2,000 ms | 400 ms


Bandwidth settings

Preset | Bandwidth down / up | Equivalent in other tools
Fast 3G | 1.44 Mbps / 675 Kbps | 1.6 Mbps / 750 Kbps
Slow 3G | 400 Kbps / 400 Kbps | 500 Kbps / 500 Kbps


What is latency?

Latency measures the time it takes for data to pass from one point on a network to another. It matters most in applications where fast back-and-forth exchanges are important, for example collaboration tools, chat apps, and multiplayer games.

Latency can also make a big difference when loading a page using resources from different servers. When sending requests to multiple servers, a connection needs to be established to each server. How long this takes depends greatly on network latency.

The request waterfall shows how time is spent establishing connections to different servers.


What is bandwidth?

Bandwidth is a measure of the capacity of a given network. It tells you how much data you can transfer in a given amount of time.

Mbps stands for megabits per second and is a unit used to measure bandwidth. On a 1 Mbps connection you can download 125 kilobytes per second.

Similarly, Kbps stands for kilobits per second.

What throttling setting should I use?

Use the Fast 3G setting to check your website’s performance on a decent mobile connection. Fast 3G also aims to match the mobile throttling used by Google's Lighthouse tool, so you can use it to compare your metrics to those collected by Lighthouse and the PageSpeed Insights lab data.

The Slow 3G setting is a good choice when you want to optimize your website’s loading speed. It allows you to clearly see which elements load first. Using this setting lets you work on making the loading experience much smoother.

Custom network throttling profiles

DevTools also supports creating custom network configurations where you can choose the bandwidth and latency. For example, you could create a configuration that matches a typical customer.

  1. Go to the Network tab in the Chrome DevTools
  2. Open the throttling dropdown
  3. Click Add… below the Custom label
  4. Enter your desired network parameters
  5. Confirm with the Add button

You can now select your own profile from the throttling dropdown.

Testing websites offline

Many websites are interactive and can make requests for additional resources after the initial page load. Select the Offline profile to see what will happen when the internet goes down. It can be used to see if your site handles network errors properly.
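When testing with the Offline profile, it's worth making sure failed requests are handled gracefully. A minimal sketch (the /api/data endpoint and the fallback shape are placeholders for illustration):

```javascript
// Fetch with a fallback so the page degrades gracefully when offline.
// "/api/data" is a placeholder endpoint for illustration.
async function loadData() {
  try {
    const res = await fetch("/api/data");
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch (err) {
    // With the Offline profile enabled, fetch() rejects with a network error.
    console.warn("Network unavailable, using cached data", err);
    return { cached: true, items: [] };
  }
}
```

With the Offline profile selected, the catch branch runs and the page can show cached content or a friendly error state instead of breaking.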

The video below shows an example of offline mode in Google Translate.

You also might support offline functionality on your website by using a Service Worker. If that’s the case, DevTools gives you the ability to test it properly. Below you can see an example of Twitter in offline mode.

How exactly does DevTools network throttling work?

Chrome DevTools uses request-level network throttling. Not every network round trip is throttled individually.

The browser sends a request as usual. Once it receives a response it adds a delay until a minimum response time is reached.
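This clamping behavior can be modeled in a couple of lines (a simplification of what DevTools does internally, for illustration only): the observed response time is the real response time clamped to a minimum delay.

```javascript
// Simplified model of DevTools request-level throttling: each response
// is held back until a minimum response time has elapsed.
function throttledResponseTime(actualMs, minResponseTimeMs) {
  return Math.max(actualMs, minResponseTimeMs);
}

// Three requests with real server response times of 0 ms, 500 ms, 1000 ms:
const real = [0, 500, 1000];
console.log(real.map((t) => throttledResponseTime(t, 550)));  // Fast 3G-like
// → [550, 550, 1000]: the slowest request is barely affected
console.log(real.map((t) => throttledResponseTime(t, 2000))); // Slow 3G-like
// → [2000, 2000, 2000]: all requests now take equally long
```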

We created a test website that sends three small requests with a server response time of 0ms (instant response), 500 ms and 1 second.

When no throttling is applied the responses arrive with minimal delay. Around 20ms is added to the server response time due to time spent on the network.


When using the Fast 3G option the requests have a minimum server response time applied to them, in this case around 550 milliseconds. The longest request doesn't take longer than before.


When we use the Slow 3G setting, all requests take equally long to complete, around 2 seconds.


Testing your site with request-level throttling can lead to lower metric accuracy, as the underlying differences in response time are hidden.

Real world example with TCP and SSL round trips

When first connecting to a site the browser needs to look up DNS records and then establish TCP and SSL connections. Only after that can it send the actual HTTP request.

These network round trips are only needed for the initial request to the server. Subsequent requests simply reuse the existing server connection, requiring only one round trip for the HTTP request (and possibly more to download the response).

DevTools doesn’t slow down all of those steps; it simply applies a minimum delay to the server response time. To make up for this, the delay is extended by a factor defined by Chrome.

This works well for the first request due to the adjustment factor / equivalency mentioned in the tables above. But it also slows down subsequent requests disproportionately, making them slower than they would be in reality.

No throttling

In the waterfall below, the first request includes the steps needed to establish the connection. All the requests after that are faster, since they can simply use the already established channel.


Note that establishing the TCP and SSL connections takes 668 milliseconds.

Connection time without throttling

Slow 3G throttling

After enabling throttling, establishing the connection takes 695 milliseconds. This is basically the same as the unthrottled request. We can tell that Chrome is not slowing down the network at this stage.

No connection throttling, but server response takes 2 seconds

The server response time (TTFB) is increased to 2 seconds.

This large increase in latency is applied to all requests on the page, not just the initial request that consists of multiple network round trips.


The throttling method used by Chrome’s DevTools is only an approximation of a genuinely slower network: the connection is established just as quickly, and to compensate the TTFB throttling is increased.

When trying to replicate those findings on your own, make sure to clear the connection cache to get accurate results.

Using DevTools throttling with Lighthouse

Lighthouse can analyze the loading experience of your website and point out the biggest issues.

It uses simulated throttling by default. This works by sending unthrottled requests to the website and then simulating what the metrics would have been on a slower connection.

In contrast, DevTools throttling is a type of applied throttling where the page actually loads more slowly. It's generally more accurate than simulated throttling.

To run Lighthouse with DevTools throttling, open the advanced settings in the DevTools Lighthouse tab and uncheck Simulated throttling.

Disabling simulated throttling in Lighthouse

Monitoring your website with packet-level throttling

Packet-level throttling slows each individual network round trip at the operating system level. It's more accurate than both simulated throttling and DevTools throttling.

However, it can also be hard to set up as it requires admin rights and affects all programs running on your computer.

DebugBear is a site speed testing tool that uses packet-level throttling to collect high-quality data. Start a free trial and monitor Core Web Vitals and other metrics.

Site speed metrics in DebugBear

<![CDATA[How To Eliminate Render Blocking Resources]]> /render-blocking-resources Tue, 23 Aug 2022 00:00:00 GMT Render blocking resources can slow down your website and increase Core Web Vitals metrics like the Largest Contentful Paint.

This article explains the performance impact of render blocking resources and what you can do to solve these issues.

What are render blocking resources?

Browsers load a large number of resources when loading a web page, for example CSS files or images. If a resource is render blocking then the browser doesn't start showing page content until that resource has finished loading.

JavaScript and CSS files can be render blocking.

The screenshot below shows a rendering filmstrip and request waterfall for a website. While the request for the HTML document finishes after 600 milliseconds, the browser still shows a blank page at that point.

The browser only starts rendering the page once three additional resources have finished loading:

  • A JavaScript file (c4.min.js)
  • A CSS file (substack.css?v=29***)
  • Another CSS file (substack.css?v=a9***)

Render blocking JS and CSS on Substack

How do render blocking resources impact site speed metrics?

Render blocking resources delay rendering milestones like the First Contentful Paint and the Largest Contentful Paint.

How much a render blocking resource impacts page speed depends on a few factors:

  • How large the resource being downloaded is
  • Whether a new server connection is required to load the resource
  • Whether there is a chain of render blocking resources (see below for more on this)

Impact on Core Web Vitals and SEO

As LCP is a Core Web Vitals metric, having too many render blocking resources can hurt your Google rankings.

However, if you delay loading resources until later that could also cause layout shift when the resource is finally loaded.

What are render-blocking request chains?

A render blocking request chain happens when a render blocking resource triggers a request for another render blocking resource.

In this example the render blocking CSS file is loaded after 400 milliseconds. But that CSS file uses @import to reference another CSS file. This file also needs to be downloaded before the page renders after 700 milliseconds.

The longer these chains are the bigger the performance impact will be.

Render blocking request chain with CSS @import

Notably, the second CSS file also requires a new server connection to be established to a different origin. Because of this the request takes longer than if the file had been loaded from a domain the browser was already connected to.

What does parser blocking mean?

Scripts and stylesheets referenced in the head always block all rendering (at least if they are loaded synchronously). These are called initial render blocking.

But what about resources referenced in the body? Chrome marks those as in_body_parser_blocking. Whether they block render depends on where in the body they appear.

If they are placed at the end of the body tag then parser blocking scripts don't block rendering. But if a parser blocking script appears at the top of the body tag the script will block rendering.

Parser blocking resources are also called "subsequent render blocking".
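As an illustration (with hypothetical file names), the first script below blocks rendering of the heading that follows it, while the second one sits at the end of the body and blocks nothing:

```html
<body>
  <!-- Parser blocking AND render blocking: the h1 below can't render yet -->
  <script src="widget.js"></script>
  <h1>Hello world</h1>
  <!-- Parser blocking only: all content above has already been rendered -->
  <script src="analytics.js"></script>
</body>
```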

Parser-blocking JavaScript request

How to identify render-blocking resources

Many articles say that JavaScript and CSS files in the head are render blocking. That's a good heuristic but that's not always the case (for example if the async attribute is used, as we'll see later on in this article).

Let's take a look at how different tools report render-blocking requests.

DebugBear and WebPageTest

We've already seen how DebugBear highlights render blocking requests. We use the data that Chrome provides in the ResourceSendRequest trace event.

WebPageTest uses the same data and highlights render blocking requests using an orange badge.

Render blocking badges on DebugBear and WebPageTest


Lighthouse

The Lighthouse report shown on PageSpeed Insights also contains an "Eliminate render-blocking resources" audit.

However, this doesn't use the Chrome data and can sometimes miss render blocking files. For example, in the Discord example the Google Fonts CSS is incorrectly not shown as render blocking.

Lighthouse render blocking audit

How to eliminate render blocking resources

What you need to do to remove a render blocking request depends on the type of resource that's being loaded.

You might need to change script tags to load asynchronously or inline critical CSS.

Render blocking script tags

By default, the browser goes through the document from top to bottom. JavaScript code is run synchronously one script after the other.

For example, in this case the browser will first run chat.js and then analytics.js. The h1 tag is only shown after running the scripts.

<script src="chat.js"></script>
<script src="analytics.js"></script>
<h1>Hello world</h1>

However, many scripts don't need to be render blocking and can be run asynchronously. You can achieve that using the async attribute.

<script src="chat.js" async></script>
<script src="analytics.js" async></script>
<h1>Hello world</h1>

Now the browser will still start loading the JavaScript files as soon as possible, and run them as soon as they are downloaded. But in the meantime, rendering the h1 will no longer be blocked.

Also, if analytics.js finishes loading before chat.js, then analytics.js will run without first waiting for the chat widget code. If you want to maintain the order of execution you can use the defer attribute, which defers JavaScript execution until after the HTML document has been fully parsed by the browser.
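For example, replacing async with defer keeps both downloads non-blocking while guaranteeing that chat.js still runs before analytics.js:

```html
<script src="chat.js" defer></script>
<script src="analytics.js" defer></script>
<h1>Hello world</h1>
```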

This waterfall chart demonstrates the impact that async and defer have on page load behavior.

async and defer in the waterfall

Render blocking CSS

Reducing render blocking stylesheets is harder than reducing render blocking scripts, as a page will often look very different if stylesheets are missing. If you made an important stylesheet load asynchronously you'd get a flash of unstyled content (FOUC).

Page with and without CSS

While loading key CSS files asynchronously is not an option, loading stylesheets for third party code can be more viable, for example if you have a third party widget that's only used further down on the page. In that case updating the styling later on is acceptable. Another candidate for asynchronous CSS loading would be a stylesheet that only loads font references but does not affect the page layout.

Let's say you have this stylesheet in your HTML:

<link rel="stylesheet" href="widget.css">

This way widget.css will block rendering. To load the stylesheet asynchronously you can initially set the media attribute to print. Then, when the CSS file has loaded, you change it to all to apply the styles to the page.

<link rel="stylesheet" href="widget.css" media="print" onload=" = 'all'">

Inlining critical CSS

Another way to remove render blocking CSS files is to embed the styles directly in the HTML document. This will increase the size of the HTML, but it can be a great solution for small CSS files under 10 KB.
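In its simplest form this looks like the following sketch (the styles here are placeholders):

```html
<head>
  <!-- Critical styles embedded directly: no extra request blocks rendering -->
  <style>
    body { margin: 0; font-family: sans-serif; }
    h1 { font-size: 2rem; color: #222; }
  </style>
</head>
```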

Here's an example website where render blocking CSS is inlined into the page HTML. The page renders immediately after the document is loaded.

Page rendering immediately after the document request

When we look at the page HTML we see a large inline style tag.

Inline style tag

The downside of this approach is that the CSS has to be downloaded again with every HTML request, while a separate CSS file could have been in a cache. How bad this is depends on the amount of CSS being inlined.

How to reduce the performance impact of render blocking resources

Often not all render blocking resources can be eliminated. But you can still reduce the impact they have on performance.

Reduce file size

Downloading large files takes longer than downloading small files. Therefore, reducing the size of critical requests can speed up your website.

Large file downloads take more time

There are a few ways to reduce file size:

  • using better content encoding, e.g. switching from gzip to brotli
  • making sure only the most important content is included in blocking files, and additional content can then be loaded later on

The Chrome DevTools Coverage tab can help you identify and remove unused CSS and JavaScript code on your page.

Chrome DevTools Coverage tab

Reduce resource competition

Network connections can only provide a limited amount of bandwidth, so check if other resources are competing with the render blocking requests.

Here's an example of what resource competition looks like in a waterfall chart. The c4***.svg image is quite small, only 21 kilobytes. But the browser is allocating bandwidth to the JavaScript file below it, loading over 2 megabytes of data. So the SVG only finishes loading once the JavaScript resource has finished loading.

Network resources competing with each other

Reuse server connections

Connecting to a new server requires the browser to do a DNS lookup, establish a TCP connection, and enable a secure SSL connection. Each of these steps requires at least one round trip on a network. The browser can only start making the HTTP request once the connection is established.

For example, the Substack website loads additional render blocking resources from two other domains, each requiring a new server connection.

New server connections for new domains

In contrast, a website that loads all of its resources from a single domain can reuse the existing server connection.

Reusing existing connections
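When loading from another domain is unavoidable, a preconnect resource hint (shown here with a placeholder domain) at least lets the browser perform the DNS, TCP, and SSL steps before the resource is actually requested:

```html
<link rel="preconnect" href="">
```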

Reduce request chaining

Render blocking request chains happen when a render blocking resource starts loading another render blocking resource.

CSS @import

We saw this briefly earlier on in this article when we looked at how to identify render blocking files. The Discord homepage used @import to load a stylesheet from Google Fonts.

@import url(…);

The browser first needs to load the Discord stylesheet to discover the Google Fonts file. We can use a preload resource hint to help the browser discover the resource sooner.

<link rel="preload" as="style" href="">

Adding this hint in the document HTML means the browser will start loading it without first waiting for the Discord CSS file.

This waterfall shows the page requests without the preload and then with the preload. After adding the preload the start time of the fonts CSS requests shifts to the left.

Preload hint


document.write

document.write can cause similar issues for JavaScript as @import does for CSS. If a render blocking script synchronously creates a new script element with document.write then the new JavaScript file will also be render blocking.

This waterfall shows an example where script.js synchronously adds jquery-3.6.0.js to the page and thus delays rendering.

document.write request chain

Again, this could be fixed by using a preload hint. Putting the script tag directly into the document HTML instead of using document.write would also address the issue.
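Either fix could look like the following sketches (file names taken from the example above):

```html
<!-- Option 1: keep the document.write, but let the browser
     discover jQuery early with a preload hint -->
<link rel="preload" as="script" href="jquery-3.6.0.js">
<script src="script.js"></script>

<!-- Option 2: reference jQuery directly in the HTML
     (and remove the document.write call from script.js) -->
<script src="script.js"></script>
<script src="jquery-3.6.0.js"></script>
```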

Is the HTML document request render blocking?

The HTML document is at least partially render blocking, as the browser can't show the page without knowing what its contents are. To find that out the web server needs to start sending the HTML code to the client.

Therefore a slow Time to First Byte (server response time) will make your website render more slowly.

However, browsers use streaming parsers that start processing the HTML as soon as it comes in, rather than waiting until the full document has been downloaded. Therefore, pages can start rendering before the document has finished loading.

This example shows that the other resources referenced in the HTML document start downloading before the HTML request has completed.

Request waterfall showing resources downloading before the HTML request completes

In this case we don't see the page rendering before the completion of the document request though. That's because downloading the page HTML is a high priority task for the browser, so the other render blocking resources have to compete with that. Due to the focus on loading the HTML, loading the CSS and JavaScript code for the page only happens after the HTML download is complete.

Are web fonts render blocking?

Web fonts don't block rendering of the page, but they can block rendering of the text itself. How text renders before fonts are loaded is specified by the font-display CSS property.
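For example, font-display: swap (shown here with a hypothetical font) makes the browser render text in a fallback font immediately and swap in the web font once it arrives:

```html
<style>
  @font-face {
    font-family: "ExampleFont"; /* hypothetical web font */
    src: url("example-font.woff2") format("woff2");
    /* Render fallback text immediately, swap when the font loads */
    font-display: swap;
  }
</style>
```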

Waiting for web fonts can slow down your First Contentful Paint if no other content is rendered on the page. If your text is still hidden but an image has rendered somewhere on the page then web fonts won't make your FCP worse.

The Largest Contentful Paint will be impacted if the LCP element is a text node using a web font.

Page renders but text doesn't show

Are images render blocking?

Images are not render blocking. They can delay metrics like the Largest Contentful Paint, but the rest of the page will still render fine even if the browser is still downloading an image file.

Why isn't my page rendering after all render blocking resources have loaded?

Sometimes the request waterfall suggests that all render blocking requests have finished, but the filmstrip will still show a blank page. There are a few possible reasons for this.

Is content being hidden with CSS?

Some A/B testing tools set the body opacity to 0 to avoid flicker, delaying when the page renders.

Single page apps

Some sites are pure single page apps with no server rendered content. In those cases, even if rendering isn't blocked, there isn't any content to render until the JavaScript application has been loaded and run.

To fix this, consider rendering a header on the backend or at least embedding a loading spinner in the page HTML to indicate the browser is waiting for the JavaScript app to load.

Parser blocking resources

As mentioned above, synchronous scripts or stylesheets in the body block all rendering of content below that tag. This isn't a problem if the tags are placed at the end of the body tag, but if they appear before important content then rendering will be delayed.

Monitoring rendering milestones

DebugBear can help you detect render blocking resources, optimize your site speed, and monitor Core Web Vitals and other performance metrics over time.

Website rendering timeline

Once you've reviewed your metrics you can investigate test results in depth. Start a free trial today.

Rendering timeline

<![CDATA[Lighthouse Simulated Throttling]]> /simulated-throttling Mon, 08 Aug 2022 00:00:00 GMT This article explains what Lighthouse simulated throttling is and how it can lead to inaccurate site speed metrics. We'll also look at alternative ways to test your site speed.

What is Lighthouse?

Lighthouse is a free tool developed by Google that powers many other services under the hood:

  • PageSpeed Insights (Lab Data)
  • Chrome DevTools Lighthouse tab
  • Commercial tools like DebugBear, GTmetrix, or Calibre

Lighthouse site speed report

What is network throttling?

Web performance tests are often run on a computer with a fast network connection. Testing tools slow down the network in order to better show how a real user might experience a website, for example if a user is on a slow mobile connection.

Network throttling also ensures that metrics are more consistent, as the same network speed is used to run every test.

What is simulated throttling?

There are several different ways to slow down the network. Simulated throttling is one of them, and it's what Lighthouse uses by default.

With simulated throttling the initial site speed data is collected on a fast connection. Based on this data Lighthouse then estimates how quickly the page would have loaded on a different connection.

For example, if a page takes 2 seconds to render on a fast connection, Lighthouse might report a value of 6 seconds on a mobile device.

Simulated throttling provides low variability and makes tests quick and cheap to run. However, it can also lead to inaccuracies as Lighthouse doesn't fully replicate all browser features and network behaviors.

Check out this article for an in-depth look at how simulated throttling works in Lighthouse.

What tools use simulated throttling?

Simulated throttling is the default for Lighthouse, but Lighthouse also supports other throttling methods.

These tools always use simulated throttling:

  • PageSpeed Insights (Lab Data)

These tools use simulated throttling by default but also provide alternative options:

  • Chrome DevTools Lighthouse tab
  • Lighthouse CLI

Mid-tier and higher-end commercial solutions generally don't use simulated throttling.

Note that PageSpeed Insights provides both lab data collected using Lighthouse and real-user data from the Chrome User Experience Report. Real-user data does not use any type of throttling.

How can I tell if simulated throttling is used?

The bottom of the full Lighthouse report shows the test settings used to test the page. Hover over the network throttling details to see what type of throttling was used.

Network info in Lighthouse settings

What are observed metrics?

Observed metrics are real measurements that were collected by Chrome. When simulated throttling is not used these values are equal to the final values reported by Lighthouse.

When simulated throttling is used then the reported values are generally worse than the observed values, as a slower connection is simulated. If the reported values are better than the observed values this usually indicates an inaccuracy in the throttling simulation.

You can find the observed value in the full Lighthouse report JSON.

Observed FCP in Lighthouse metrics, 695ms for observed and 1475 for simulated

For PageSpeed Insights, our Site Speed Chrome extension surfaces the observed metrics in the UI.

Observed Lighthouse metrics on PageSpeed Insights

How do I disable simulated throttling in Chrome DevTools?

You can change the Lighthouse settings in the DevTools Lighthouse tab to throttle the network while the test is running.

  1. Click the gear icon in the top right of the Lighthouse tab – you will see two gear icons, make sure not to click the one for general DevTools settings!
  2. Untick Simulate throttling

Chrome will now use DevTools throttling to run the test. However, keep in mind that DevTools throttling comes with its own set of issues.

Simulated throttling disabled in Chrome DevTools

How do I disable simulated throttling with the Lighthouse CLI?

The Lighthouse command-line interface (CLI) provides a --throttling-method flag to control how data is collected.

You can set the flag to devtools to use the DevTools throttling discussed above.

lighthouse --throttling-method=devtools  --view

DevTools throttling in Lighthouse network details

Throttling method: provided

The provided throttling method disables all network and CPU throttling.

lighthouse --throttling-method=provided  --view

Why is this method called provided? Because the network conditions are provided by the test environment: Lighthouse is still subject to whatever general network conditions exist on the computer the test is running on.

For example, you can throttle the network on your computer using a dedicated throttling tool. This type of packet-level throttling is the most accurate way to get realistic site speed test data.

What else can I do to get accurate metrics?

Dedicated testing tools like DebugBear automatically use accurate throttling methods. Any site speed tools that don't throttle the network at all will also provide accurate data.

DebugBear request waterfall chart

<![CDATA[Why Does Site Speed Matter?]]> /why-site-speed-matters Thu, 04 Aug 2022 00:00:00 GMT A slow website can not only negatively impact the experience for visitors, but also make it harder for new users to find the website.

Site speed measures how long it takes for a website to load. After navigating to a page, it often takes several seconds for the page content to appear.

This article looks at some of the reasons why site speed matters to your users. We’ll also look at case studies showing the results that different companies have seen from optimizing site performance.

Improving Performance is Good for Traffic

In May of 2020, Google announced that “page experience” would soon become a ranking factor in Google Search. This update meant that web performance would be a bigger factor in how Google determines how useful a website is to a potential visitor. Google is now taking Core Web Vitals into account when determining how high a given website appears in the search results.

Factors included in the Google page experience update

For example, let’s say that there are two identical pizza restaurants near you. These two restaurants have identical websites. If you were to search for “pizza” after these new metrics went live, the one that meets the Core Web Vitals targets would appear above the one that doesn't. In practice, this means that faster, more performant websites see more traffic than their slower competition.

In the time since the announcement, Google has continued to release more information on exactly how performance can impact rankings in search.

When it comes to organic search, a slow website is going to have fewer potential customers.

Improving Performance is Good for Conversions

Once a user gets to a website, performance impacts whether they can achieve the goal that led them there. According to a survey of over 700 consumers, nearly 70% of users said that the performance of the website they were browsing impacted their likelihood to buy or return to the website. While this data is self-reported, it lines up with data we’ve seen from similar studies.

According to the 2017 Speed Matters study by Google and Awwwards, site speed is the most important factor for user experience, ahead of how easy the website is to use or how well-designed it is. The WPO Stats website lists many studies depicting things like the impact of performance on e-commerce.

Below are examples from several industries showing how a slow website can drive away potential customers.

Social Media: Pinterest

Pinterest was able to increase the performance of their mobile signup page by 60% and, consequently, increase the conversion rate of the page by 40%.

Instead of using an off-the-shelf metric, Pinterest created a custom metric focusing on what’s important to their users. In their case, that meant measuring how long it takes for images to show up on the screen.

Shopping: Swappie

Swappie’s approach to performance optimization is a great example of organizational cooperation at scale. By relating website performance and site speed to specific business metrics, Swappie was able to extend the focus on website speed beyond their development team.

To ensure that the impact of their work would directly impact business and financial metrics, Swappie began by determining what metric to use to measure user experience.

They chose a metric that would directly benefit from improved performance: relative mobile conversion rate. This is a measure of how well mobile users move through the conversion process relative to desktop users. The mobile conversion rate is usually lower, and the relative mobile conversion rate is often around 50%.
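As a quick illustration with made-up numbers:

```javascript
// Relative mobile conversion rate: mobile conversion rate divided by
// desktop conversion rate. All numbers here are hypothetical.
function relativeMobileConversion(mobileRate, desktopRate) {
  return mobileRate / desktopRate;
}

// A 1.5% mobile rate against a 3% desktop rate gives 0.5, i.e. mobile
// users convert half as often as desktop users.
```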

After only three months of work, Swappie saw relative mobile conversion rate go from 24% to 34%. This resulted in a 42% increase in mobile revenue.

Chart showing a decrease in mobile page load time and an increase in conversion rate

As part of the project, Swappie was able to improve their Core Web Vitals metrics across the board.

Telecoms: Vodafone

Vodafone A/B tested the impact of their performance improvements to see exactly how their improved user experience translated to sales. By comparing their sales and conversion numbers from before and after the improvements, Vodafone was able to determine that optimizing Largest Contentful Paint by 31% increased sales by 8%.

Their optimizations included minimizing render-blocking JavaScript, properly sizing images, and moving to server-side rendering.

Improving Performance Helps Visitors Save Mobile Bandwidth

According to data collected by the HTTP Archive, page sizes have increased significantly in recent years. In the last 10 years, the median size of web pages on the desktop web has tripled.


This can cause problems for bandwidth-conscious consumers. According to Statista, over 40% of mobile internet users in the US have less than 12 gigabytes of monthly bandwidth available.

Statista data on how much mobile bandwidth people have

The increase in page size can lead to consumers having to upgrade to larger plans and spend more on mobile data.

Improving Performance Reduces Hosting Costs

A faster website can reduce costs. Images, videos and other media uploaded to the website have to be stored somewhere, often with additional backups. In addition to storage, many cloud providers also charge for outgoing bandwidth when users request a file from your servers.

In practice, the same kinds of image optimizations that benefit things like Largest Contentful Paint and Cumulative Layout Shift can also reduce the amount of storage space required to house these assets.

One study from CrayonData found that not only did optimizing images and loading them through a CDN reduce storage costs, but it also reduced network consumption. This resulted in an 85% reduction in hosting costs, saving over $200,000.

Should I Improve My Site Speed?

This article has shown that optimizing site speed can benefit your business in multiple ways. Organic traffic, conversion rate, and sales can all be tied back to how fast or slow your website is.

Making a website faster for users can require buy-in from stakeholders across teams. It can require making difficult decisions about features, functionality and aesthetic elements that are impacting performance. The additional engineering time and resources can also need approval from management.

Despite these challenges a fast website has the potential to be discovered and engaged with by more users, especially if it’s faster than its competitors. Once users find the website, they are also more likely to engage with it more deeply and ultimately convert to customers.

Test And Monitor Your Website

Want to know how to make your website faster? Test your website with DebugBear.

DebugBear not only creates an in-depth report of your website, but also monitors your site speed and Core Web Vitals over time.

Core Web Vitals in DebugBear

<![CDATA[What cloud provider has the fastest console page speed? Data from GCP, AWS, and Azure]]> /cloud-console-speed Wed, 27 Jul 2022 00:00:00 GMT I've always been annoyed by the slow speed of the Google Cloud Console web app. In 2020 I wrote about how a single page loads 16 MB of JavaScript. That same page now loads 21 MB.

But are other cloud providers better? I looked at real-user data from the Chrome User Experience Report and also ran my own tests.

This article looks at Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure.

Performance data from real Chrome users

Google collects site speed metrics from real Chrome users as part of the Chrome User Experience Report (CrUX). We can look at the data for each of the cloud console apps.

Treo Site Speed is an incredibly helpful tool to look at origin-level CrUX data. You enter the domain of the website you’re interested in and get beautifully-presented site speed data.

Looking at the Largest Contentful Paint metric, we can see that Google Cloud is significantly slower than both AWS and Azure. It takes over 2 times as long for the largest content element to show in the GCP app than it does in Azure.

CrUX LCP across cloud providers

Google judges 74% of LCP experiences as good for Azure and 48% for AWS, but only 16% for Google Cloud.

Google also has the worst Cumulative Layout Shift score, meaning that content moves around on the page after being rendered, though here it is only slightly worse than Azure.

CrUX CLS across cloud providers

The Load event isn’t a great performance metric as it doesn’t focus on page behavior that’s directly relevant to users. But it does give us an idea of how long the initial load of a page takes.

Again Google Cloud has the slowest app of the three, with a Load time of 3.7 seconds. While this metric is similar for AWS, the Azure load time is much lower at 1.5 seconds.

CrUX Load Event across cloud providers

Finally, the Chrome User Experience Report also measures how quickly the page responds to user interaction like mouse clicks and keyboard input. It does that using the new Interaction to Next Paint metric.

For GCP and AWS the page takes 250 milliseconds to update after the user interaction. In contrast, for Azure it only takes half the time.

CrUX INP across cloud providers

Check out the full data on Treo!

LCP by country for different cloud providers

Testing page load time

I also measured page load time directly on my own computer.

GCP, AWS, and Azure all have some kind of serverless function product. I created a serverless function with each Cloud Provider and then measured how long it takes to load the list of functions in the console web app.

Surprisingly, the results were very similar with each app taking 3 to 4 seconds to render. Azure was still the fastest app, rendering the list of functions 3.1 seconds after page navigation.

Load time for list of cloud functions

The Google Cloud UI continued to update after the initial render. However, this only affected the page header. This suggests a good prioritization decision where the main page content is loaded before ancillary UI components.

Load times for list of cloud functions

You can see the rendering progress of the three apps side by side below.

Loading progress list of cloud functions

AWS may look fully loaded at the same time as GCP and Azure, but it only shows an empty list at first and then loads the functions.

What about interacting with the page after the initial load? I also measured how long after clicking on a serverless function the new page shows up.

Here AWS wins easily, as the new page appears almost instantly. Google Cloud and Azure both take several seconds to load and render the new content.

Load time of next page

Again, here’s a gif showing what this navigation looks like in each console app.

Rendering progress when navigating to an individual function

Note: I ran these tests in Chrome DevTools using a throttled connection with 10 megabits of bandwidth and 20 milliseconds of round trip time.

JavaScript size

Depending on whether it’s loaded before or after the main page content, loading and executing JavaScript code can have a significant impact on user experience.

The Google app loads 21 megabytes of code – double what AWS loads and four times the size of the Azure application.

Chart showing compressed and full JavaScript size for cloud console providers

While all of this code has to be parsed and executed, file compression means a smaller amount of data has to be downloaded. Still, Google fetches over 5 megabytes of code.

Memory consumption

Another measure of complexity is how much memory the page consumes. For this I used Chrome’s Task Manager to see the memory footprint of each page.

Here the AWS app wins with only 81 megabytes. This isn’t a whole lot actually – for example, the React homepage takes up 59 megabytes.

However, the Google Cloud app is once again the most resource-hungry, using 171 megabytes of memory.

Chart showing browser memory consumption for different cloud console tabs


The big cloud providers all ship large complex console apps that consist of many megabytes of JavaScript code. Overall the data shows that Google Cloud has built the slowest UI of the three providers while Azure is the fastest.

We can also see AWS doing well when navigating within the app – it feels like a smooth transition, not like loading a whole new page.

While the local speed test did not support Google being slower than its competitors, I think this might be because the experience depends heavily on what content you're viewing. I only looked at a listing page with a single serverless function, but more complex pages (for example the logs view) tend to be especially slow.

DebugBear lets you continuously monitor site speed. Want to monitor how fast your app is for logged-in users? We support running through a login flow prior to tests.

<![CDATA[What is Google's Chrome User Experience Report?]]> /chrome-user-experience-report Mon, 20 Jun 2022 00:00:00 GMT Google’s Chrome User Experience Report (CrUX) is a dataset of real user metrics that assess the overall performance and user-friendliness of a website. In addition to other key indicators, it includes the three Core Web Vitals that Google uses in its search ranking algorithm.

Understanding the Chrome User Experience Report can help you improve your SEO rankings and page load times. You can also use it to compare your website to those of your competitors.

Article banner image

What is Google’s CrUX Report?

Google’s Chrome User Experience Report, or the CrUX Report for short, was first released in 2017. It’s a publicly available collection of web performance data you can access in different formats using various reporting and analytics tools.

The most important thing to know about the CrUX Report is that Google collects the metrics from real Chrome users while they’re surfing the internet. The result of this data collection method is called ‘field data’ as opposed to ‘lab data’ which is collected in controlled test environments.

But, is it legit to collect data on random internet users who are probably unaware of being tracked? According to Google, the answer is yes, as they get each user’s consent before starting to monitor their internet usage.

Who does Google collect CrUX data from?

Google only collects CrUX data from internet users who:

  • use the Chrome web browser (though Google does not collect metrics in Chrome on iOS)
  • are logged into their Google account and have opted into browsing history syncing, but without setting up a Sync passphrase (which would make public data collection impossible)
  • have enabled usage statistics reporting in their settings

Plus, Google only reports data on websites and pages that meet a minimum traffic threshold and also limits what data can be queried.

To add a further layer of privacy, Google only reports anonymized aggregate metrics. While Google publishes a list of websites it has data for, individual page URLs are not revealed. So the browsing history or other internet habits of particular users won’t be included in the CrUX Report.

Despite these requirements, Google still collects CrUX data currently for more than 16 million websites, including subdomains.

Is Google collecting CrUX data on me?

If you’re interested in whether your own Chrome browser sends data to the CrUX Report about your internet usage, simply type chrome://ukm into your address bar and check whether UKM metric collection is enabled for you.

UKM admin page example in Chrome

UKM stands for URL-Keyed Metrics, which is a larger set of field metrics that Chromium-based web browsers, such as Chrome, can collect. Metrics included in the CrUX Report are defined in the UKM API which you can check out in detail in the Chromium docs.

On your UKM admin page in Chrome, you’ll see a list of the websites you’ve visited while the admin page was open. You can also see all the data collected on you on each website. Here’s an example of the UKM metrics Chrome collected on me while visiting the Mozilla Developer Network:

UKM metrics example in Chrome

Metrics included in the CrUX Report

Google’s Chrome User Experience Report consists of three types of real user metrics:

  • Core Web Vitals
  • Other Web Vitals
  • Other field metrics

Almost all of these metrics are time-based values that measure the time difference between two browser events. The only exceptions are Cumulative Layout Shift, which is a unitless score, and Notification Permissions, which is a set of named values.

Now, let’s briefly see the metrics, one by one.

Core Web Vitals

The three Core Web Vitals are part of Google’s page experience signals and search ranking algorithm — globally, since August 2021.

They are as follows:

  • Largest Contentful Paint (LCP): the time difference between when the page starts loading and when the browser renders the largest content element within the viewport to the screen
  • First Input Delay (FID): the time difference between when the user first interacts with the page (e.g. clicks a button) and when the browser starts to process the corresponding event handlers
  • Cumulative Layout Shift (CLS): the amount of unexpected movement of the page content after it was rendered to the screen
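Google publishes fixed thresholds for rating each of these metrics. The helper below is only an illustrative sketch, not an official API, but the boundary values are Google’s documented ‘good’/‘poor’ thresholds (LCP 2.5 s/4 s, FID 100 ms/300 ms, CLS 0.1/0.25):

```javascript
// Core Web Vitals rating thresholds documented by Google.
// LCP/FID are in milliseconds, CLS is a unitless score.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },
  fid: { good: 100, poor: 300 },
  cls: { good: 0.1, poor: 0.25 },
};

// Rate a single metric value: "good" at or below the good boundary,
// "poor" above the poor boundary, "needs improvement" in between.
function rateWebVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value > t.poor) return "poor";
  return "needs improvement";
}

console.log(rateWebVital("lcp", 2100)); // "good"
console.log(rateWebVital("cls", 0.15)); // "needs improvement"
```

This three-way rating is what tools like PageSpeed Insights and Search Console display for each metric.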

Other Web Vitals

In addition to Core Web Vitals, there are other Web Vitals that are not part of Google’s search ranking algorithm but that you can use to gain more insight into a website’s performance and find out the reasons behind low Core Web Vitals scores.

The CrUX Report contains all the non-Core Web Vitals that can be measured in the field:

  • First Contentful Paint (FCP): the time difference between when the page starts loading and when the browser renders the first content element to the screen
  • Time to First Byte (TTFB): the time difference between when the page starts loading and when it receives the first byte of content from the server
  • Interaction to Next Paint (INP) (experimental): the highest(-ish) time difference between any user input and the following content update throughout the entire page lifecycle

Other field metrics

There are also four field-measurable metrics in the CrUX Report that are not part of the Web Vitals initiative:

  • First Paint: the time difference between the page request and when the browser renders the first pixel of content to the screen
  • DOMContentLoaded: the time difference between the page request and when the browser has completely loaded and parsed the pure HTML page, without waiting for dependencies (stylesheets, images, scripts, etc.)
  • onload: the time difference between the page request and when the browser has completely loaded and parsed the entire HTML page with all of its dependencies
  • Notification Permissions: the user’s reaction to website notifications, with four available values: accept, deny, dismiss, ignore
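In the browser, the DOMContentLoaded and onload values can be derived from the Navigation Timing API. Here’s a sketch using a mocked navigation entry – in a real page you would read performance.getEntriesByType("navigation")[0]; the timing numbers below are made up for illustration:

```javascript
// Mocked PerformanceNavigationTiming entry. In a browser:
// const nav = performance.getEntriesByType("navigation")[0];
const nav = {
  startTime: 0,                    // when the page request started
  domContentLoadedEventStart: 850, // HTML parsed, dependencies not awaited
  loadEventStart: 2300,            // page and all dependencies loaded
};

// Both CrUX metrics are measured relative to the start of the navigation.
const domContentLoaded = nav.domContentLoadedEventStart - nav.startTime;
const onload = nav.loadEventStart - nav.startTime;

console.log({ domContentLoaded, onload }); // { domContentLoaded: 850, onload: 2300 }
```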

CrUX metric performance

Google doesn’t only define the metrics but also evaluates them, assigning one of three qualitative ratings: ‘good’, ‘needs improvement’, and ‘poor’.

For example, here are the status definitions of the three Core Web Vitals:

Core Web Vitals status definitions by Google

Image credit: Google Search Console Help

For SEO rankings, Google recommends that the 75th percentile (p75) of page loads for each CrUX metric should be in the ‘good’ range. This means that, for example, at least 75% of user experiences should have an LCP of 2.5 seconds or less.
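To make the percentile idea concrete, here is a hypothetical sketch of a p75 calculation over raw samples. Note that CrUX itself aggregates anonymized histograms rather than exposing individual page loads, so this is an illustration of the statistic, not of how Google computes it:

```javascript
// 75th percentile via the nearest-rank method: the smallest value
// such that at least 75% of the sample is less than or equal to it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical LCP samples in milliseconds from individual page loads.
const lcpSamples = [1200, 1800, 2100, 2400, 3100, 4800, 2300, 1900];
console.log(percentile(lcpSamples, 75)); // 2400 – within the "good" LCP range
```

With a p75 of 2,400 ms, this hypothetical page would pass the LCP threshold even though one in eight loads took 4.8 seconds.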

Data segmentation options

Even though the CrUX Report gives you some options to segment your data, these options are pretty basic. Moreover, not all options are available from all reporting tools and for all the metrics.

Origin-level vs URL-level data

In theory, you can retrieve CrUX data from both domains (origins) and individual pages (URLs). Origin-level data includes the aggregated data of all the pages hosted on the same domain.

However, not all URLs will have public CrUX data. For example, if a page only gets three views a month, it won't have URL-level data, but another page on the same domain with 1000 views will have it.

Google doesn’t publish the exact thresholds from when data becomes available. We find that URL-level data tends to become available once a page reaches around 1,000 monthly views on a given device type. But this will depend on how many of your users share analytics data with Google.

In addition, not all CrUX reporting tools support URL-level data extraction. To get page-level insight into your or your competitors’ CrUX metrics, you’ll need a reporting tool that also supports data extraction from standalone URLs, such as PageSpeed Insights, the CrUX API, or DebugBear.

As I mentioned above, even with those tools, you can only extract URL-level data for pages that meet a minimum traffic threshold. PageSpeed Insights, for example, falls back to origin level for URLs where page-level data is not available.

Google also reports a wider range of metrics for origin-level data than it does for URL-level data.

Is there URL-level data for noindex pages?

Only publicly discoverable pages have URL-level data.

Noindex pages are excluded from search engines, so they won't have URL-level data.

Pages with HTTP status codes other than 200 also don't have URL-level data.

However, these pages will still be included in the origin-level data.


Google’s Chrome User Experience Report provides three dimensions by which you can segment the data. However, not all CrUX reporting tools support all the dimensions.

The three dimensions are as follows:

  • Device type (phone, tablet, desktop)
  • Network connection (4G, 3G, 2G, slow-2G, offline)
  • Country (identified by their ISO 3166-1 alpha-2 code)

CrUX reporting and monitoring tools

Now, let’s see the best tools for tracking, analyzing, and monitoring data collected by the Chrome User Experience Report.

PageSpeed Insights (PSI)

PageSpeed Insights is Google’s first-party tool that allows you to measure Core Web Vitals. It shows both field data collected on real users from the CrUX Report and lab data from Google’s Lighthouse tool. You can access PageSpeed Insights either using a free web app or programmatically via its API.

The PSI web app is the easiest and quickest way to check out the Web Vitals of any website. After running an audit, you’ll find the CrUX metrics at the top of the page. PSI shows the Core and other Web Vitals at both URL- and origin-level, for both desktop and mobile devices (however, not for tablets).

The downside of both the web app and the PageSpeed Insights API is that historical data is not available — you can only see the aggregated average of the previous 28 days.

PageSpeed Insights test example

Google Search Console

As Core Web Vitals are part of Google’s search ranking algorithm, Google Search Console gives you access to CrUX data related to your own website(s). It only shows the three Core Web Vitals from the CrUX Report — however, it groups the data in a unique way.

Search Console organizes the URLs with the same performance issues into URL groups to make debugging easier. It also provides you with a list of the affected URLs so that you can check them out individually with PageSpeed Insights or another tool.

Google Search Console URL groups example

CrUX Dashboard

The CrUX Dashboard is another free CrUX reporting tool by Google. It has been built with the Google Data Studio platform to make it easy to set up and use without any programming knowledge.

It pulls data from Google’s CrUX BigQuery project (more on that below) and also lets you retrieve historical data for any website with available CrUX data. You can access all the metrics collected by the CrUX Report and segment the data by device type (desktop, mobile, tablet) and connection type (4G, 3G, 2G, slow-2G, offline).

However, as opposed to some other CrUX reporting tools, CrUX Dashboard doesn’t give you access to the 28-day rolling average of aggregated data. Instead, it only releases datasets once a month: on the second Tuesday of each month. It also only gives you access to origin-level data, which might be a problem if you want to see metrics at the URL level.

For example, here’s a screenshot of DebugBear’s CrUX Dashboard (with default settings) — to see how it works, check out our interactive Data Studio demo, too.

CrUX Dashboad with Core Web Vitals example

Google BigQuery

BigQuery is Google’s paid-for serverless data analytics platform running on the Google Cloud Platform. You can use it to run SQL queries on CrUX data (see some SQL examples or the BigQuery docs).

Similar to CrUX Dashboard, BigQuery allows you to extract historical data, but the metrics are only available at origin level and the data is only updated once a month (on the second Tuesday of each month). However, BigQuery gives you more segmentation options than CrUX Dashboard as you can also segment the data by country — in addition to device and connection type.


Be careful when running queries on large BigQuery datasets. You can easily spend hundreds of dollars on a few queries!

To see a quick example of how it works, here’s a SQL query that generates a histogram of DebugBear’s LCP data collected in April 2022:

SELECT
  bin.start,
  SUM(bin.density) AS density
FROM
  `chrome-ux-report.all.202204`,
  UNNEST(largest_contentful_paint.histogram.bin) AS bin
WHERE
  origin = 'https://www.debugbear.com'
GROUP BY
  bin.start
ORDER BY
  bin.start

This is what BigQuery’s interface and SQL workplace look like:

BigQuery SQL editor

Once the query is processed, you can access the results in SQL and JSON formats or open it with Google Sheets or Data Studio. For example, I exported it to Google Sheets and created a basic line chart that shows the detailed distribution of the data:

BigQuery results in Google Sheets

As you see, you can get very granular data and create any kind of custom table or chart with BigQuery, but you need fairly good SQL knowledge to use it.


CrUX API

The CrUX API gives you programmatic access to the Google Chrome User Experience Report — however, not to everything.

You can extract both URL- and origin-level data, retrieve all the CrUX metrics, and segment the data by device type.

But historical data and segmentation by network connection type and country are not available from the API. On any day, you can only request the aggregated average of the previous 28 days.

You can make requests to the CrUX API from the command line or use a programming language such as JavaScript. To get started with the API, you need to create a free API key in your Google Cloud Console.

Here’s a quick example of how to use the CrUX API. The goal is to retrieve a site’s detailed Cumulative Layout Shift data for desktop. The following code uses the cURL command line tool to retrieve the data in JSON format and save it into the newly created cls-results.json file:

curl --header "Content-Type: application/json" \
  --data '{"formFactor": "DESKTOP", "metrics": ["cumulative_layout_shift"], "origin": ""}' \
  "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY" \
  -o cls-results.json
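If you’d rather query the API from JavaScript than from the command line, the same request can be sketched with fetch. The endpoint is the CrUX API’s queryRecord method; YOUR_API_KEY and the example.com origin are placeholders, not values from this article:

```javascript
// Query the CrUX API for a site's desktop CLS data with fetch
// (available globally in modern browsers and Node 18+).
// YOUR_API_KEY and the origin are placeholders – substitute your own.
const API_KEY = "YOUR_API_KEY";
const body = {
  formFactor: "DESKTOP",
  metrics: ["cumulative_layout_shift"],
  origin: "https://example.com",
};

async function queryCrux() {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    }
  );
  return res.json();
}
```

Calling queryCrux() resolves to the same JSON structure that the curl command saves to cls-results.json.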

The resulting JSON file includes both the histogram of the three metric statuses (good, needs improvement, poor) and the 75th percentile (p75) value.

As you can see below, 89.82% of the site’s desktop page loads resulted in a good CLS score, 7.04% got a mediocre one, and 3.14% scored poorly. Plus, the CLS score for the 75th percentile of desktop page loads was less than or equal to 0.01, which is a pretty good result (the threshold for ‘good’ is 0.10).

"record": {
"key": {
"formFactor": "DESKTOP",
"origin": ""
"metrics": {
"cumulative_layout_shift": {
"histogram": [
"start": "0.00",
"end": "0.10",
"density": 0.89816653042158123
"start": "0.10",
"end": "0.25",
"density": 0.0704192455914983
"start": "0.25",
"density": 0.031414223986920495
"percentiles": {
"p75": "0.01"

Chrome UX Report Compare Tool

If you want to compare the CrUX data of different websites, you can also use Google’s lesser-known Chrome UX Report Compare Tool, which is based on the CrUX API, too.

It shows the six Web Vitals of each website side by side, supports both origin- and page-level data, and includes tablets in the device-based segmentation.

However, similar to the CrUX API, it doesn’t give you access to historical data; you can only see the 28-day rolling average of each metric.

Chrome UX Report Compare Tool example


DebugBear

DebugBear is our own web performance monitoring and debugging tool, which lets you view lab and field data side by side. Lab data is collected on our own servers running from 10+ test locations around the world, while field data is pulled from the CrUX API.

This approach provides you with a unique way to monitor Core Web Vitals and other performance metrics for both your own and your competitors’ websites.

DebugBear Web Vitals Trendlines

You can also see details for each Core Web Vitals metric, including the relevant code, page elements, timelines, and performance recommendations.

DebugBear detailed Web Vitals

Unlike PageSpeed Insights and the CrUX API, DebugBear gives you access to historical data.

In the screenshot below, for example, you can see how an increase in the CLS score immediately showed up in the lab data, then gradually started showing up in Google’s field data.

DebugBear historical graphs

To see how we monitor Core Web Vitals and other web performance data, check out our interactive demo — no signup is necessary.

Limitations of Google’s CrUX Report

Even though the Chrome User Experience Report is an informative and comprehensive tool, it also has some drawbacks you need to be aware of.

So, before wrapping up, let’s briefly consider some of those limitations:

  • Google only returns anonymized aggregate data. On a low-traffic site, you can easily bump into the “chrome ux report data not found” or “no data” message.
  • You only get data from a subset of website visitors. Visitors who aren’t logged into Chrome or use another browser, such as Safari or Firefox, won’t show up in your metrics.
  • Origin-level data can change if some pages have been moved to a different domain.
  • Chrome extensions used by website visitors can impact performance and the resulting metrics.
  • New browser releases can lead to changes in the data (see how).
  • Different device sizes can change some of the results — for instance, the largest bit of content within the viewport can be a different element, which makes it hard to debug poor LCP scores.
  • Population differences might change the results, too — for example, when in a particular country, people typically have higher/lower-end devices and access to faster/slower network connections, which does have an effect on user experience metrics.

Wrapping up

In this article, we looked into the CrUX Report in detail, including the data collection method, the collected metrics, segmentation and other options, and the best CrUX reporting tools.

Google’s Chrome User Experience Report can give you more insight into the performance of your and your competitors’ websites and help you improve your search engine rankings, debug your performance issues, and provide a better experience to your users.

In addition to our interactive demo, DebugBear also comes with a 14-day trial where you can get access to the detailed CrUX data of any website of your interest, among many useful features. You can sign up here (no credit card required) or check out how our customers use DebugBear to monitor their websites and improve their Web Vitals and other web performance metrics.

<![CDATA[How to select a device configuration for site speed tests]]> /site-speed-device-configuration Mon, 13 Jun 2022 00:00:00 GMT Real-user metrics are aggregated across many different website visitors using different devices. In contrast, to run a lab test you need to select a single test device. That's the case both when testing locally in Chrome DevTools and when using a dedicated performance tool.

Discrepancies between lab and field data are incredibly common. This can be confusing: what data should you believe, and what are the best device and network settings to use when running tests?

The device configuration you should use depends on what your goals are. This guide explores different options and explains their pros and cons.

What device settings impact performance metrics?

Here are the five test device characteristics with the biggest impact on your site speed metrics:

  • Network Latency – how long does a network round trip between browser and server take?
  • Network Bandwidth – how much data can be transferred per second?
  • CPU Throttling – is the processor slowed down to simulate a mobile device?
  • Test Location – where in the world is the test run from?
  • Screen size – are you testing the mobile site or the desktop site?

Other device factors can also impact performance, for example what browser extensions are installed or what image formats are supported by the browser.

In addition to device configuration, past page interactions also impact metrics. For example, pages may load more slowly for logged-in users or they may load faster because some page resources are already cached from a previous visit.

This article will focus on network latency and bandwidth settings, and then take a quick look at CPU speed, test location, and viewport size.

Field vs lab data on PageSpeed Insights

Google's PageSpeed Insights (PSI) often shows far worse metrics for lab data than for field data.

In this example, only 75% of real users experience an FCP under 1.9 seconds, but Google's lab data puts it at 2.8 seconds.

Field vs lab data on PageSpeed Insights

Why? Because PageSpeed Insights simulates a slow mobile device on a slow connection. PSI is built on top of Lighthouse, which aims to simulate the Moto G4 phone that Motorola released in 2016. The network configuration matches the bottom 15% of 4G experiences.

Why simulate a low-end device and network connection?

There are three reasons to test in low-end conditions: making sure your website works for everyone, making the results of performance tests easier to interpret, and being able to see metric changes more clearly.

Building websites that are fast for everyone

Google's Core Web Vitals focus on the 75th percentile of experiences. If your website is slow for 20% of users this has no impact on your web vitals rankings. But this could still represent hundreds or thousands of people who have a poor experience with your business.

Website load times often have a long tail. For example, for most visitors of the DebugBear website pages load within 2 seconds. But a very small number of users waits over 10 seconds for the main page content to appear.

LCP Histogram

Making test results easier to interpret

Websites consist of many different resources and it's often hard to tell which resource is really holding back rendering.

Slowing down the page load lets you see your website render bit by bit, with requests finishing one at a time. This way you can better understand the dependencies between different resources, which in turn enables you to optimize your pages more effectively.

You can also see the potential performance impact of each resource more clearly. For example, consider these two request waterfalls, one collected on a fast connection and the other on a slow connection.

With the high-bandwidth connection you barely notice the effect that download size has on how long it takes to fetch the resource. With the slower connection you can clearly see that it takes longer to load large files.

Waterfall with high and low bandwidth

The slow waterfall chart also shows how Chrome allocates bandwidth between different requests. The areas shaded dark blue show when data is received for a request. While the first large file is being downloaded the other two resources basically sit idle until bandwidth becomes available again.
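A simple back-of-the-envelope model helps explain what the waterfall shows. Ignoring connection setup, TCP slow start, and bandwidth sharing between parallel requests, a single resource roughly costs one round trip plus its transfer time – a simplification, not how browsers actually schedule requests:

```javascript
// Rough single-resource load time: one round trip to request it,
// plus transfer time. Ignores connection setup, TCP slow start,
// and bandwidth sharing between parallel requests.
function resourceTimeMs(sizeKb, bandwidthMbps, rttMs) {
  // 1 Mbps = 1 kilobit per millisecond, so kilobits / Mbps = ms.
  const transferMs = (sizeKb * 8) / bandwidthMbps;
  return Math.round(rttMs + transferMs);
}

// A 500 KB script on a fast vs. a slow connection:
console.log(resourceTimeMs(500, 50, 20));   // 100 – transfer barely matters
console.log(resourceTimeMs(500, 1.6, 150)); // 2650 – dominated by transfer time
```

On the fast connection the download accounts for only 80 of the 100 milliseconds; on the slow one it accounts for 2.5 of the 2.65 seconds, which is why file size differences become so visible there.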

Seeing metric changes more clearly

Even when running two tests in the same conditions, metrics always vary somewhat between each test result.

If your LCP score increases from 1.1 seconds to 1.3 seconds it can be hard to tell if this is due to a change on your website or just random noise. But when testing in worse conditions the change will be more pronounced, let's say from 3.8 seconds to 4.5 seconds.

With bigger numbers you can more easily see when a site speed regression occurred.

Why run tests on a fast device and connection?

While some users will always have a poor page experience, usually the majority of users will use a reasonably new device on a reasonably fast connection. Optimizing for the slowest 5% of users only helps a small number of people.

From an SEO perspective, Google looks at the slowest 25% of experiences and usually reports metrics at the 75th percentile. The metrics that Lighthouse reports by default are usually far off from what typical users experience, even toward the slow end.

For example, while Lighthouse uses a bandwidth of 1.6 Mbps, SpeedTest reports a median mobile bandwidth of 47 Mbps in the UK. And that's before considering that many people use their phones while on wifi.

Running tests on a fast connection will result in metrics that more closely match what your real users experience.

UK Median connection

What device configuration should I use?

For performance optimization

A slow device configuration usually makes it easier to identify potential improvements as well as observing the impact of an optimization.

The Lighthouse defaults work well for this, and these metrics will also match what you see on PageSpeed Insights. (Or at least they'll broadly match.)

To estimate real user metrics

To estimate real user metrics a faster configuration is better.

Real user experiences are too diverse to be reflected with only one or two device configurations. Don't expect lab data to match Google's web vitals data exactly.

At DebugBear we run faster mobile tests with a bandwidth setting of 12 Mbps and a latency of 70 milliseconds. This usually gets you into the ballpark of mobile data from the Chrome User Experience Report (CrUX).

Lab vs field data in DebugBear

You can also look for typical network speeds in your country and use those. However, note that usually these are median figures, rather than the 75th percentile values reported by Google.

Ideally you would know what devices users typically use to access your website, and under what network conditions.

Estimating realistic bandwidth and latency settings with the Lighthouse simulator

If you see an LCP value of 2.1 seconds in Google's field data, what device settings might give you a similar result in the lab?

You can use our Lighthouse simulator tool to estimate this. It runs Lighthouse with 100 different speed settings and then shows the metrics reported for each one.

Lighthouse simulation

We can see that a bandwidth of 8 Mbps and a latency of 125 milliseconds would result in an LCP of 2.2 seconds. Alternatively, 4 Mbps of bandwidth and 50 milliseconds of latency would result in an LCP of 2.0 seconds. Both settings would be potential candidates for your device configuration.

While these settings are worth a try, note that they are based on the potentially inaccurate metric simulation that also powers PageSpeed Insights.

Selecting a screen size

Screen size and resolution can impact performance in a variety of ways. Your LCP metric might be worse on a high-resolution mobile screen as the browser will load higher-resolution images. Or the LCP element might be different on mobile, as the usual LCP element is positioned outside the mobile viewport.

However, typically a 50px difference in viewport width isn't going to make a big difference, and running tests using the default desktop and mobile screen sizes used by Lighthouse is usually good enough.

If you see unexpected metric discrepancies between lab and field data it can be worth trying out different device sizes in Chrome DevTools to see if that can explain the differences.

Selecting a test location

To get started, pick the test location that's closest to your users. Optimizing your data here will help the majority of your users, and also benefit the rest.

If your user base has a wide geographic distribution it's worth adding another test location that's far away from where your servers are located.

Your selected test location will have a bigger impact on metrics when using a low network latency setting. If you run tests with a high network latency like 150 milliseconds then you won't see a huge difference between tests run from Paris versus Helsinki, as that round trip only adds another 40 milliseconds.

CPU speed

The CPU speed matters most for client-side apps without server-side rendering. If you are testing a static page this setting generally isn't very important.

I like to use a throttling factor of 2x for mobile and 1x (no throttling) on desktop. For reference, the default mobile setting used by PageSpeed Insights is 4x.
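With the Lighthouse Node module, this kind of setup lives in the throttling settings. The sketch below combines the 2x CPU factor suggested here with the 12 Mbps / 70 ms network settings mentioned earlier in this guide – these are our suggested values, not Lighthouse’s defaults:

```javascript
// Lighthouse settings sketch: custom throttling for a mobile test.
// These values match the suggestions in this article, not the
// Lighthouse defaults (150 ms RTT, ~1.6 Mbps, 4x CPU slowdown).
const settings = {
  formFactor: "mobile",
  throttlingMethod: "simulate",
  throttling: {
    rttMs: 70,
    throughputKbps: 12 * 1024, // ~12 Mbps
    cpuSlowdownMultiplier: 2,
  },
};

// Hypothetical usage with the Lighthouse Node API:
// const result = await lighthouse("https://example.com", { port }, { settings });
console.log(settings.throttling);
```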


Testing with a low-end device is a good way to identify potential performance optimizations.

It's difficult to get lab metrics to match Core Web Vitals field data, but picking settings that are a bit worse than what's typical for your users usually gets you close enough.

<![CDATA[What is server-side rendering and how does it improve site speed?]]> /server-side-rendering Mon, 25 Apr 2022 00:00:00 GMT Server-side rendering (SSR) addresses the performance and search engine optimization issues of single-page JavaScript applications. In contrast to client-side rendering, it generates static content on the server before sending it over to the user’s browser.

Server-side rendering improves site speed and results in better Core Web Vitals scores. However, sometimes it can be difficult to implement and might also increase First Input Delay.

In this article, we’ll look into server-side rendering in detail. We’ll see how it works, what problems it solves, how it compares to client-side rendering, and what pros and cons it comes with.

An overview of single-page applications

Single-page applications (SPAs) are a web app architecture that appeared as an alternative to traditional websites and multi-page applications. SPAs, also known as client-side apps, became possible with the introduction of asynchronous JavaScript (AJAX), which makes it possible to update smaller parts of the user interface without reloading the full page.

Modern-day SPAs are often built with frontend UI frameworks such as React, Vue, and Angular. They consist of reusable JavaScript components fully rendered on the client side.

The main goal of this architecture is to make web apps similar to native mobile and desktop applications in terms of interactivity. As SPAs only have a single HTML page that fetches data from the server asynchronously, users can see updates instantly, without having to wait for the whole page to refresh.

How does client-side rendering work?

Client-side rendering (CSR) is the default rendering method for single-page applications.

In web development, rendering means the process of converting application code into interactive web pages. The page HTML is generated by a JavaScript engine. With client-side rendering, this is always done on the frontend. The browser then takes the generated HTML to visually render the page.

If you use client-side rendering, it’s the user’s browser that generates the entire app, including the user interface (UI), data, and functionality. No server is involved in the process, except to store the client-side code and data and transfer it to the browser.

As the following code example shows, in CSR apps, the HTML file only contains a blank root (often also named app) element and a script tag. The root element is populated by the browser that downloads and processes the JavaScript bundle to render all the other elements:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
  </head>
  <body>
    <div id="root"><!-- blank --></div>
    <script src="/bundle.js"></script>
  </body>
</html>

Since the browser needs to download and run the whole application code before the content appears on the screen, the first page load is usually slow with client-side rendering (server-side rendering splits this process between the client and server).

As a result, users see a blank screen or loading spinner for a relatively long time. This leads to a poorer user experience and higher bounce rates (see Google’s discussion of how page load time impacts bounce rates).

Client-side rendering flowchart
Image credit: React PWA

Server-side rendering provides a solution to this problem.

What is server-side rendering (SSR)?

Server-side rendering, also known as universal or isomorphic rendering, is an alternative rendering method for single-page applications. SSR generates the static HTML markup on the server so that the browser gets a fully rendered HTML page. This is done by using a backend runtime such as Node.js that can run the JavaScript code to build the UI components.

Here’s an example HTML file, containing a simple newsletter signup form, that the browser could receive with server-side rendering. All HTML elements inside the root element were rendered on the server:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
  </head>
  <body>
    <div id="root">
      <div class="container">
        <h2>Stay Updated</h2>
        <form method="post">
          <input type="email" name="email"
            placeholder="Enter your email" required>
          <button type="submit">Subscribe</button>
        </form>
      </div>
    </div>
    <script src="/bundle.js"></script>
  </body>
</html>

As the browser doesn’t have to generate the HTML itself, static content appears on the page faster with server-side rendering. However, the browser still needs to download and process the JavaScript file to add interactivity to the HTML elements. As a result, users will need to wait longer before they can interact with the app, e.g. click buttons or fill in input fields.

Faster-loading static content at the cost of a longer gap between content visibility and interactivity: this is the core trade-off between server-side and client-side rendering, but more on that later.

Steps in the server-side rendering process

An SSR app processes the same JavaScript code on both the client and server side — this is why it’s also called universal rendering.

In brief, server-side rendering consists of the following steps:

  1. Client’s HTTP request – When the user enters the URL into the browser’s address bar, it establishes an HTTP connection with the server, then sends the server a request for the HTML document.
  2. Data fetching – The server fetches any required data from the database or third-party APIs.
  3. Server-side pre-rendering – The server compiles the JavaScript components into static HTML.
  4. Server’s HTTP response – The server sends this HTML document to the client.
  5. Page load and rendering – The client downloads the HTML file and displays the static components on the page.
  6. Hydration – The client downloads the JavaScript file(s) embedded into the HTML, processes the code, and attaches event listeners to the components. This process is also called hydration or rehydration.
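As a rough illustration, here's what steps 2–4 might look like in a framework-free Node.js sketch. The data and markup are made up for this example; a real app would use ReactDOMServer or a comparable renderer:

```javascript
// Step 2: stand-in for fetching data from a database or third-party API
function fetchData() {
  return { heading: "Stay Updated" };
}

// Step 3: compile the UI into static HTML on the server
function renderToHtml(data) {
  return `<div id="root"><h2>${data.heading}</h2></div>`;
}

// Steps 1 and 4: handle the request and respond with the pre-rendered document
function handleRequest() {
  const data = fetchData();
  return `<!DOCTYPE html><html lang="en"><body>` +
    renderToHtml(data) +
    `<script src="/bundle.js"></script></body></html>`;
}

console.log(handleRequest());
```

The browser then performs steps 5 and 6: it displays this pre-rendered markup immediately and runs bundle.js afterwards to hydrate it.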

Here’s how server-side rendering looks from the browser’s perspective. Note that the flowchart below starts with Step 4 when the browser gets the server’s response:

Server-side rendering flowchart Image credit: React PWA

Server-side rendering frameworks and tools

Popular frontend UI frameworks all provide their own tooling for writing universal JavaScript code that also runs on the server side: for example, ReactDOMServer for React, vue-server-renderer for Vue, and Angular Universal for Angular.

Processing server-side JavaScript also needs a backend JavaScript framework that runs on the Node.js server, such as Express.js or Hapi. These backend frameworks handle network requests, render the components on the server, and return the pre-rendered HTML to the browser. You can use them together with any frontend JavaScript framework.

There are also full-stack JavaScript frameworks for creating universal applications, such as Next.js for React or Nuxt.js and Quasar for Vue.

What are the advantages of server-side rendering?

Server-side rendering can make your website load more quickly and make it easier for search engines to index.

How big the positive impact will be depends heavily on how your website is built. Use a site speed testing tool to check if server side rendering is a good way to speed up your website.

Better search engine indexability

These days, search engine bots can easily crawl static HTML, but they still tend to have problems with indexing JavaScript-generated content. Even though Google can now index synchronous JavaScript, JavaScript SEO is a complicated question with several drawbacks such as delays in JavaScript indexing.

As a result, client-side rendering is still considered risky from an SEO perspective. Put simply, if you want to rank high in search engines, server-side rendering is the better choice.

Faster initial page loads

As SSR apps pre-render HTML on the server, it takes less time for the browser to load content onto the screen.

However, note that while first-time visitors do experience faster initial page loads with server-side rendering, caching might change this result for returning users. If the frontend page doesn’t load any dynamic data from the server and all code is already cached, the browser only needs to render the page locally with client-side rendering.

Faster Largest Contentful Paint (LCP)

Largest Contentful Paint is one of Google’s three Core Web Vitals now included in its search ranking algorithm. It’s also the one that’s the hardest to pass for both desktop and mobile search.

LCP is a time-based value measured in seconds. A lower value means a better LCP score. As the largest content element (either an image or text block) is part of the static content your server pre-renders, SSR will display it faster on the screen.

Lower Cumulative Layout Shift (CLS)

Cumulative Layout Shift is another Core Web Vitals score tracked by Google. It measures the amount of unexpected change in the dimension and position of your content elements after the first page render.

With server-side rendering, the browser doesn’t have to go over the rendering process step by step, which typically results in fewer random layout shifts and, therefore, better CLS scores.

Fewer issues with social media indexing

Similar to search engine bots, social media crawlers also have issues with indexing JavaScript content. For example, Facebook’s Open Graph Protocol and Twitter Cards don’t support client-side rendering. So, if social media is important for your marketing strategy, server-side rendering can be the better choice.

Better for accessibility

As the server sends pre-rendered content to the browser, SSR apps are more suitable for people who use older devices with less powerful CPUs.

Server-side rendering is also a frequent recommendation for SPA accessibility as assistive technologies such as screen readers can’t always parse client-side JavaScript.

Are there disadvantages to server side rendering?

Despite its numerous advantages, there are still some cases when SSR might not be worth the effort. It can increase implementation and hosting costs, and in some cases leads to a worse user experience if not implemented carefully.

Increased complexity

SSR increases complexity, which may or may not be worth it for you. You’ll have to write universal code that runs both on the server and client, take care of more complicated dependency management and caching, set up and maintain a server environment, find developers with the proper skillset, and more.

Obviously, this more complex architecture will be more expensive, harder to maintain and debug, and more prone to errors.

Potentially higher First Input Delay (FID)

First Input Delay is the third metric of Google’s Core Web Vitals. It’s also the one where server-side rendering might lead to web performance issues. FID is a time-based value measured in milliseconds. It shows how long it takes for the browser to respond to the user’s first interaction.

With server-side rendering, the browser displays static content faster (which leads to a better LCP), but it still needs time to hydrate the application. As a result, the app looks ready for interaction while the code is still being processed in the background. If the user tries to interact with the app during this period of time, there will be a delay in the browser’s response.

The extent of the first input delay depends on many things, including your app’s complexity, whether there are many interactive elements, the page weight, and others. For many SSR apps, first input delay won’t be an issue.

However, if you experience higher FID, you still don’t have to give up on server-side rendering, as there are ways to mitigate it.

For example, your UI can indicate to users that the app is not yet ready for input (e.g. you can hide or disable the buttons) so that they won't try to interact with it too early and, therefore, produce a high FID score. Alternatively, you can eliminate long-running blocking tasks by splitting up rendering into smaller chunks.

Less efficient caching

With client-side rendering, you can speed up your app by taking full advantage of browser caching. The initial page HTML is the same for all pages, so you can cache it and load it from a content delivery network (CDN) along with the JavaScript code.

With server-side rendering, the page HTML is different for each page, so it’s harder to cache this on a CDN. Users who load a page that hasn’t been cached on the CDN will experience a longer page load time.

Compatibility issues

There are several third-party libraries and tools that are not compatible with server-side rendering.

For example, at DebugBear, we recently started implementing server-side rendering for some of our components. Our frontend is written in TypeScript and imports CSS code for each UI component, which is then compiled by Webpack and served as a single JavaScript file.

However, on the backend, we use the standard TypeScript compiler rather than Webpack, so we had to switch from SCSS includes to Emotion, a CSS-in-JS library, to render these components on the server.

Even though compatibility issues are less of a problem nowadays than they used to be, you still need to choose your dependencies carefully if you want to use server-side rendering.

Higher costs

As client-side apps don’t need a server, you can deploy them to a free or cheap static storage service such as Netlify or Amazon S3. However, you’ll need to pay for a server or at least a “serverless” backend to deploy an SSR application, which means higher running costs.

Larger HTML size

SSR apps come with a larger HTML size because of the embedded hydration state.

This is not really an argument against SSR, just something to keep in mind as a potential risk if it’s implemented poorly. You can test your app for HTML bloat and other issues with our free HTML size analyzer tool.

Summary: Is server-side rendering better?

All in all, server-side rendering has a positive effect on page load times, SEO, social media support, and accessibility. On the other hand, client-side rendering can be cheaper, easier to build and maintain, and better for First Input Delay.

You don’t necessarily have to choose between these two options, though. There are hybrid solutions, too, which might work better for your application.

For example, you can use server-side rendering for pages important for SEO and client-side rendering for highly interactive pages. You can also set up dynamic rendering on the server to detect the client and serve a static version of your app to search engine crawlers.

Monitoring performance to see the impact of server-side rendering

To see how implementing server-side rendering impacts your site speed, you’ll need to monitor everything from page speed to Core Web Vitals scores.

With DebugBear, you can gain insight into your detailed web performance data, compare it against your competitors’ results, and run synthetic tests from multiple locations around the world — all from a single user-friendly dashboard that you can try for free.

DebugBear dashboard

If you’d rather stay with client-side rendering, we still have many web performance optimization tips that can help improve your Core Web Vitals, image rendering speed, React performance, and more.

<![CDATA[How CSS opacity animations can delay the Largest Contentful Paint]]> /opacity-animation-poor-lcp Mon, 07 Mar 2022 00:00:00 GMT Fade-in animations make a website look more polished, but can also cause a slower Largest Contentful Paint. That's because of how elements with an opacity of 0 are counted when measuring LCP.

This article explains why fade-in animations can delay LCP and what you can do about it.

Filmstrip showing LCP being delayed by fade-in animation

Elements with an opacity of 0 are not LCP candidates

The Core Web Vitals measure user experience, so counting a paint with opacity 0 as the LCP element doesn't make sense.

Accordingly, in August 2020 Chrome made a change to ignore these elements.

[LCP] Ignore paints with opacity 0
This changes the opacity 0 paints that are ignored by the LCP algorithm. [...] After this change, even will-change opacity paints are ignored, which could result in elements not becoming candidates because they are never repainted. In the special case where documentElement changes its opacity, we consider the largest content that becomes visible as a valid LCP candidate.

Even after an element is faded in it still doesn't become an LCP candidate unless it is repainted. However, if an element is repainted then the LCP will be higher than expected!

Note the special case for documentElement. Many A/B testing tools initially hide all page content, so without this exception no element would ever be counted for LCP.

What causes repaints?

If there's no repaint then another page element will simply be used to measure the LCP. However, your LCP can increase if a large element repaints and becomes the new LCP element.

What might cause an element to repaint? A few examples:

  • Element changes, like when a web font finishes loading
  • Changing the lang attribute on the html tag
  • Resizing the window or changing the device orientation
  • Changes in content size because a scrollbar is added or removed (for example when showing a modal)

This is what was happening on the website where I first noticed this issue:

  • The H1 element was faded in with the AOS animation library
  • AccessiBe set the html lang attribute
  • The H1 was repainted

The H1 was registered as the LCP element in the last step, even though no visual change occurred at that point.

Example: repaint when a web font is loaded

Let's look at a page where the LCP element is the H1 element and that uses a web font.

LCP without an animation

Without an animation the LCP is registered when the heading is first rendered. When the web font is loaded later on it doesn't cause the LCP to update.

Then we'll add a CSS fade-in animation to the H1, starting at opacity 0.

h1 { animation: fade-in 0.2s forwards; }
@keyframes fade-in {
  0% { opacity: 0; }
  100% { opacity: 1; }
}

With the animation, no LCP is registered for the initial render as the element starts with an opacity of 0. The LCP doesn't update when the fade-in animation completes, but it is updated when the web font load causes a repaint.

LCP with the fade-in animation

How to fix this LCP issue

Disabling the animation would be the easiest fix.

You could also start the fade-in animation with a non-zero opacity like 0.1, to ensure the initial render counts as an LCP candidate.
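For example, a fade-in animation could start at opacity 0.1 instead of 0:

```css
/* Starting at a small non-zero opacity means the element's first paint
   still counts as an LCP candidate */
h1 { animation: fade-in 0.2s forwards; }
@keyframes fade-in {
  0% { opacity: 0.1; }
  100% { opacity: 1; }
}
```

Visually this is almost indistinguishable from starting at 0, but the element is no longer excluded from LCP measurement.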

This doesn't seem to apply to images

For both text and images, the LCP is reported when the content is repainted. However, it appears that for images the startTime reported by Chrome shows when the element was originally rendered, rather than the most recent update.

Image LCP reported after window resize

Compare this to a text node that's faded in, followed by resizing the browser to trigger a repaint.

Text LCP reported after window resize

<![CDATA[How anti-flicker snippets from A/B testing tools impact site speed]]> /ab-testing-anti-flicker-body-hiding Tue, 01 Feb 2022 00:00:00 GMT This article looks at how the anti-flicker snippets used by A/B testing tools like Optimizely and Adobe Target can negatively impact web performance.

After explaining the problem, we'll look at potential solutions that minimize flicker while also keeping site speed in mind.

What are anti-flicker snippets?

A/B tests and other customizations mean that the content on a website depends on the visitor who's viewing it. These customizations are usually implemented through a dedicated A/B testing service, in order to let marketing teams run tests without having to get developers involved.

The visitor's browser first downloads the page with default content, then the browser loads the list of customizations from the A/B testing service, and finally applies them to the page.

However, these customizations introduce flicker. The visitor first sees the default content, and then the content disappears and is replaced.

An anti-flicker snippet is a piece of code that prevents flicker by hiding the original page content until the customizations have been applied.

This graphic shows filmstrips indicating the rendering progress of a website in three scenarios:

  1. Without A/B testing
  2. With A/B testing and flicker
  3. With A/B testing and an anti-flicker snippet

Graphic showing rendering progress with and without A/B testing anti-flicker snippets

What's the problem with anti-flicker snippets?

Hiding content ensures that content customizations don't negatively impact the user experience when they are applied.

However, hiding content with anti-flicker snippets also causes a worse user experience by making the site load more slowly. Visitors to your site spend more time looking at an empty page.

How do anti-flicker snippets impact Core Web Vitals?

Anti-flicker snippets increase your Largest Contentful Paint (LCP) metric. This can also impact your Google rankings, as LCP is one of the Core Web Vitals that's used to assess site experience.

Applying customizations without an anti-flicker snippet can cause content to shift around on the page, if the custom content has a different size than the default content. These shifts in turn increase the Cumulative Layout Shift metric, another Core Web Vital.

So when optimizing your anti-flicker logic you need to balance these two competing concerns. I would lean towards first sorting out LCP issues, and then reviewing and addressing layout shifts one by one.

How can you fix poor performance caused by anti-flicker snippets?

To fix site speed issues caused by A/B testing tools you can:

  1. Run A/B tests server-side
  2. Configure the snippet to only hide some content
  3. Optimize how quickly the A/B tests load
  4. Accept the flicker instead of fighting it
  5. Disable A/B testing entirely on some pages

Server-side customizations and A/B testing

The reason anti-flicker snippets are necessary is that the web server first returns default content and then another tool modifies the content on the client.

If you're able to implement A/B tests on the server, this problem can be avoided entirely. However, this is likely difficult to implement.

Customize the anti-flicker snippet

Anti-flicker snippets typically hide all content in the HTML body tag. This is a drastic solution, and it's done because the snippet does not know which parts of the page will be modified until the customizations have been loaded.

However, as the person running the A/B tests, you will know more about what type of tests you run. Do you test different page headings? Then hide only h1 tags. Are you customizing the call-to-action copy on a button? Then hide button elements as well.

In contrast, you might not be running any customizations on p tags or content in the website header. So these components don't need to be hidden while the customizations are being loaded.
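As a sketch, a narrower anti-flicker snippet could hide only those elements and add a timeout as a safety net. The abTestsApplied event here is a made-up stand-in for whatever callback your A/B testing tool actually provides:

```html
<style>
  /* Hide only the elements the A/B tests actually modify */
  .ab-hide h1, .ab-hide button { opacity: 0 !important; }
</style>
<script>
  document.documentElement.classList.add("ab-hide");
  function revealContent() {
    document.documentElement.classList.remove("ab-hide");
  }
  // Reveal once the customizations have been applied...
  window.addEventListener("abTestsApplied", revealContent);
  // ...or after one second at most, so visitors never wait indefinitely
  setTimeout(revealContent, 1000);
</script>
```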

Anti-flicker snippet that only hides content affected by A/B tests

You could still end up with layout shifts, for example if your customized h1 stretches over two lines instead of one. But a small layout shift might be ok, and is less jarring than page content getting swapped out. If it is a problem, specify a minimum height for your h1 and make sure it is always large enough to handle different content lengths.

Load customizations more quickly

The process of how an A/B testing tool loads customizations might look like this:

  1. Load a tag manager
  2. Load additional code for the A/B testing tool
  3. Make a fetch request to get a unique ID for the user
  4. Use that unique ID to load the customizations
  5. Apply the customizations

This process is often sequential and involves establishing connections to multiple servers. Like any other request chain on your website it can be optimized, though this may be harder as you don't have full control over how the third-party works.

Browser resource hints can be a useful tool to optimize sequential request chains that cannot be parallelized.

For example, if the customizations are loaded from a third-party domain, you can add a preconnect hint for that domain to your document. The browser will then establish a server connection before the actual fetch request is made. That way, when the fetch request happens, only one network round trip is needed, as the existing connection can be used.
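As a sketch, a preconnect hint looks like this in the document head (the A/B testing domain here is a placeholder):

```html
<!-- Perform the DNS lookup, TCP connection, and TLS handshake early.
     crossorigin is included because fetch requests use CORS, and only
     a CORS-enabled connection can be reused for them. -->
<link rel="preconnect" href="https://abtesting.example.com" crossorigin>
```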

Accept the flicker

If your users find that your site loads slowly, you might want to consider just accepting flicker when it happens.

This especially applies in these two cases:

  • you only run tests on a few pages at a time, but the anti-flicker snippet is loaded globally for your site
  • the tests you run don't target the most prominent page content, but instead tweak small UI components or customize below the fold content

Abandon A/B testing

Once you look into it, it might turn out the A/B testing tool isn't actually used very often. In that case you can just remove the tool from your website.

Vendor-specific documentation

Many A/B testing tools have pages explaining the negative speed impact of anti-flicker code and how to mitigate it:

Adobe Target
Google Optimize

Analyzing a real-world example of the impact of an anti-flicker snippet

As an example of what hiding body content looks like in practice, let's inspect the Asana homepage and look at how the data in the filmstrip seemingly conflicts with the request waterfall.

The request waterfall shows that:

  1. The last render-blocking request finishes after about 0.6 seconds
  2. The LCP image has loaded after 3.1 seconds

Yet, the filmstrip shows no content until 5.7 seconds after the page was opened.

Anti-flicker script hiding body content even after content is ready

Looking a bit deeper, we find that Asana uses Google Optimize. It also looks like Optimize only starts loading relatively late. I haven't confirmed this, but there might be a sequential request chain involving multiple Google Tag Manager requests.

Google Optimize being loaded for A/B tests

The HTML document contains styles that hide the body content, and the async-hide class is applied to the html tag.

.async-hide {
  opacity: 0 !important;
}

If we manually override those styles we can see that the page now starts to render after just 2.1 seconds, instead of the 5.7 seconds from before.

Disabling anti-flicker snippet to improve page speed

Anti-flicker snippets on DebugBear

DebugBear not only monitors site speed over time but also automatically detects when page content is hidden by an anti-flicker snippet.

Track Core Web Vitals and see how your anti-flicker optimizations impact performance.

DebugBear detecting an anti-flicker snippet

<![CDATA[How DebugBear uses DebugBear to run DebugBear]]> /dogfooding Fri, 28 Jan 2022 00:00:00 GMT Peter Suhm recently wrote about how his form builder Reform uses Reform for their own business.

In the same spirit, this post takes a look at how we use DebugBear to monitor site speed internally.

Tracking site speed over time

We use the DebugBear dashboard to stay on top of longer-term site performance trends, and to check how the performance optimizations we've deployed are working.

We also use it to sort our monitored pages by worst performance, to find out which URL we need to optimize.

Site speed trends

Getting alerted to regressions and investigating them

When our performance metrics go down, DebugBear sends us an alert via email. For example, we recently saw a big jump in our Largest Contentful Paint (LCP) metric on the mobile homepage.

Site speed LCP regression alert

When investigating the regression, it turned out that in this case our website didn't actually get any slower.

We had changed the title on the homepage to be a bit shorter, and it turned out that having a smaller H1 element meant that Chrome now detects a different element as the LCP element.

Investigating LCP change

Testing performance in CI

Ideally, we want to avoid rolling out performance regressions to production. So we've set up DebugBear to run as part of our Continuous Integration process.

Every time we push code to GitHub, DebugBear tests 4 of our pages and reports how the code changes impact performance.

Site speed testing in CI

Looking back at how DebugBear changed over time

This is not a performance feature, but DebugBear takes screenshots as part of running the performance tests, and sometimes it's useful to review how the site changed over time.

For example, we can see how our homepage messaging has changed, or look back to the day the first customer signed up. This is what DebugBear looked like then:

DebugBear old screenshot

Compared to now.

DebugBear new screenshot

We started working with a designer again recently, so there should be some further improvements soon.

Other people's "How X uses X" posts

<![CDATA[5 Site Speed Tools for Technical SEOs]]> /seo-site-speed-tools Mon, 24 Jan 2022 00:00:00 GMT Web performance has become a more important topic for Technical SEOs since Google has started using the Core Web Vitals metrics as part of its search result rankings.

This article looks at some of the tools you can use to measure performance and explains the advantages each tool brings.

  1. PageSpeed Insights
  2. Treo Site Speed
  3. Google Search Console
  4. DebugBear
  5. WebPageTest

PageSpeed Insights

PageSpeed Insights (PSI) is probably the most well-known web performance tool. It's made by Google to help optimize your site and make it rank well.

To use it, simply paste the URL of one of your pages and Google will generate a report.

PageSpeed Insights

It provides both field data collected from real Chrome users and lab data collected by Google's Lighthouse tool.

Field data

These real-user metrics (collected "in the field") are what Google uses as a ranking signal. Depending on how much traffic the tested page gets, Google provides either URL-level or origin-level data:

  • URL-level data has been collected from visitors of this particular page
  • Origin-level data has been collected across the entire website (well, subdomain technically)

Real-user data combines many individual experiences, so PSI shows two types of statistical information:

  • 75th percentile – this means that 25% of users had a worse experience than this metric, while the site was faster than this value for 75% of users
  • Rating buckets - Google has thresholds to rate a metric as "Good" or "Poor". Grouping user experiences into these buckets lets you see what percentage of users had a "Good" experience.
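To make the percentile concrete, here's a small sketch of how a 75th-percentile value is derived from individual user experiences (the sample values are invented, in milliseconds):

```javascript
// Nearest-rank 75th percentile over a set of per-user LCP measurements
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

const lcpSamples = [1200, 1500, 1800, 2100, 2600, 3400, 4000, 5200];
// 75% of sampled users had an LCP of 3400 ms or faster
console.log(percentile(lcpSamples, 75)); // 3400
```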

Field data in PageSpeed Insights

Lab data

Field data tells you how your users experience your website, but doesn't provide much data to help you debug problems. That's where lab data comes in, which is collected in a controlled server environment. Read more about lab vs field data here.

Google's lab data tool is called Lighthouse, and this is what PSI uses as well. The Lighthouse report provides suggestions on how to optimize your website. However, as we'll see below, the Lighthouse data provided by PSI can be inaccurate.

Lab data in PageSpeed Insights

Treo Site Speed

PSI shows how fast a website was over the last 28 days. Treo Site Speed sources field data from Google and lets you see how your site performance has changed over the last 12 months.

Unlike PageSpeed Insights, Treo Site Speed always shows origin-level data.

Treo Site Speed trends

Treo also provides an amazing map that lets you see how users in different locations experience your website, as long as Google has enough data.

Treo Site Speed map

Google Search Console

The Core Web Vitals section of Google Search Console shows which of your pages are not receiving a ranking boost due to performance.

Search Console timeline

So far we've looked at URL-level and origin-level data. Search Console introduces a new level of granularity: page groups. If Google doesn't have enough data for one of your pages, it will instead use data for similar pages on your website.

When you see examples of specific slow pages, make sure to check the number of Similar URLs and click on the page group to see what other pages are included in this group.

Search Console page list


DebugBear

DebugBear monitors Lighthouse scores and Core Web Vitals over time. Both lab and field data are tracked at the URL level, as long as Google provides this data.

DebugBear Web Vitals trends

The Lighthouse data provided by DebugBear is also more accurate than what you see in tools like PageSpeed Insights. This is because, by default, Lighthouse uses something called simulated throttling which can introduce inaccuracies.

Having continuous Lighthouse monitoring in place also means you can quickly detect performance regressions and analyze what caused them.

DebugBear site speed debug data


WebPageTest

WebPageTest runs high-quality one-off performance tests in a lab environment.

Its request waterfall allows you to perform an in-depth technical analysis of your site speed, looking at details like server connections and resource prioritization.

WebPageTest waterfall

The connection view in particular helps understand how the browser manages server connections and how you can optimize them.

WebPageTest connection view

WebPageTest also offers technical features few other tools offer, like the ability to specify custom Chrome flags or capture a network packet trace with tcpdump.

<![CDATA[Working with web performance budgets]]> /working-with-performance-budgets Tue, 11 Jan 2022 00:00:00 GMT Performance budgets help your business stay on top of web performance, ensure a good user experience, and optimize SEO.

This article explains how performance budgets work and how you can implement them in practice.

How to set up and work with performance budgets - Lighthouse budgets example

What are performance budgets?

Budgets place limits on the amount of resources used by your website, and define minimum requirements for how quickly it has to load.

Here are some examples of site speed budgets that your team could set:

  • The Largest Contentful Paint should be below 2 seconds
  • The Lighthouse Performance score should be over 90
  • The download size of images on the page should be below 1 MB

Why use performance budgets?

Having agreed on these limits allows you to think about how you want to spend your resources. Do you want to include an A/B testing script, or is it more important to load a web font that matches your company brand?

Budgets make performance a priority early on and ensure that the trade-offs of new functionality are considered throughout the lifetime of the project. They protect you against regressions as your website changes over time.

Catching regressions with performance budgets

Performance regressions are often gradual or unexpected.

Gradual regressions happen slowly over time. You add a small tracking script, or include another 5 kilobyte JavaScript library in your app. If you monitor your site speed you won't see a big jump in your metrics, but over time these changes add up and it will be difficult to pinpoint the cause.

Unexpected regressions happen when people make changes without being aware of the site speed impact. For example, when your marketing team uploads a 5 megabyte background image they might assume that your server will compress it before serving it to a user. If you don't consistently check your site performance this could go undetected for months.

Performance budgets establish clear thresholds that automated tools can use to decide whether a change is ok or whether it exceeds the resource limits you've set.

What happens when a performance budget is exceeded?

Your team notices, takes a closer look, and decides what to do next. Performance budgets aren't meant to create annoying roadblocks, but instead ensure that your team works with performance in mind.

For example, when running into a performance issue you might:

  • fix the issue, for example by compressing an image before uploading it
  • revert a change once you know how costly it is, for example a new third-party script that isn't essential
  • go ahead with the change as planned, but schedule future work to optimize elsewhere and get back under your budget
  • decide that the change is important enough to accept slightly worse performance and bump your budgets

How to select performance metrics

What metrics should you look at when evaluating your website, and which ones are a good fit for budgeting?

Core Web Vitals

The Core Web Vitals are a set of modern performance metrics defined by Google.

They are a great starting point for a performance budget because:

  • they focus on user experience
  • your SEO team is already looking at them
  • there are established metric values to aim for
  • Google is pushing them, and accordingly Web Vitals are well-supported by Lighthouse and other tools

However, these metrics can also vary a lot between tests, making it hard to say for sure if performance has become worse.

Resource sizes and counts

A more technical way to define a performance budget is to set limits on the network requests made by the page.

Some examples:

  • the page should not load more than 200 kilobytes of JavaScript
  • the page should not make more than 10 image requests

These metrics are extremely easy to measure consistently. While it might sometimes take 1.24 seconds to render your site and sometimes 1.31 seconds, your site will usually make a consistent number of network requests of a certain type.

The flip side is that they don't necessarily mean a lot to your users. Your budget will fail regardless of whether you add 200 kilobytes of render-blocking JavaScript or initialize ("hydrate") a calculator app after the whole page has already rendered.
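Checks like these are easy to automate against a HAR file exported from DevTools or a testing tool. Here's a minimal sketch, assuming the standard HAR 1.2 format with Chrome's `_transferSize` extension field (the budget numbers are the examples from above, and the function name is made up for illustration):

```javascript
// Sum transferred bytes per resource type from a HAR log and
// return the names of all budgets (in kilobytes) that were exceeded.
function checkResourceBudgets(har, budgetsKb) {
  const totals = {};
  for (const entry of har.log.entries) {
    const mime = entry.response.content.mimeType || "";
    const type = mime.includes("javascript") ? "script"
      : mime.startsWith("image/") ? "image"
      : "other";
    totals[type] = (totals[type] || 0) + entry.response._transferSize;
  }
  // Report every budget that was exceeded
  return Object.entries(budgetsKb)
    .filter(([type, limitKb]) => (totals[type] || 0) > limitKb * 1024)
    .map(([type]) => type);
}

// Example: 200 KB JavaScript budget, 1 MB image budget.
// checkResourceBudgets(har, { script: 200, image: 1024 });
```

A CI job could run a check like this after each deployment and fail the build when the returned list is non-empty.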

Custom metrics

The Core Web Vitals can be measured for every web page. However, they sometimes don't capture what's really important to your users.

For example, on an article page, loading the web font for the h1 tag matters more than whether a background image has fully finished fading in.

In these cases you can use the User Timing API to mark the time when the most important page element on your website has been rendered.
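A minimal sketch of this approach; the mark name hero-rendered is made up for illustration:

```javascript
// In the page: record a mark once the key element has rendered.
// In a browser you would typically call this from an image onload
// handler or a requestAnimationFrame callback after inserting the element.
performance.mark("hero-rendered");

// Later (e.g. in your analytics code): read the mark back and report it.
const [heroMark] = performance.getEntriesByName("hero-rendered");
console.log(`Hero rendered after ${heroMark.startTime.toFixed(0)} ms`);
```

Monitoring tools that support custom metrics can then pick up this mark from the User Timing API and chart it alongside the standard metrics.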

One downside of custom metrics is that fewer tools support them.

How to select metric thresholds

Now that you know what metrics you care about, how do you decide where to set the budget thresholds?

If you already have a site in production, start where the metrics are right now and focus on preventing regressions.

You can define performance goals for the future separately, and after you implement optimizations you can lower the budgets further.

An exception to this might be if you run into particularly large performance issues and are willing to consider larger architectural changes. Then it might make more sense to take the approach for new websites described below.

When starting a new website, start by thinking about who your users are and how important it is that your website loads quickly.

  1. What's the purpose of your website? Are you building a content website where users come in via search and want to quickly skim your articles? Or are you building an app that users will use for 15 minutes at a time?
  2. What kind of devices do your users use, and what does their network connection look like? What do network bandwidth and latency look like on the low-end?
  3. How fast are your competitors?

Starting by looking at similar sites is the easiest way to find ambitious yet realistic performance goals for a new website.

However, you can also start by identifying a high-level goal, break it down into lower-level metric thresholds, and finally implement your solution with these goals in mind.

Example: working out concrete budgets from scratch

Let's say you're building a website with pages about different locations along a specific hiking route. Visitors will find the website via search. It is likely to be accessed from rural locations with poor network coverage, and should load quickly even for users with 1 Mbps (125 kilobytes per second) of bandwidth and 100 milliseconds of network latency.

When collecting your performance data you can use the same network settings as the users you're targeting. Then, for example, set the Largest Contentful Paint threshold to 1.5 seconds.

Breaking a budget down into lower-level components

Now that we have this starting point we can work our way to more specific budgets. Let's say to show the page we need to load the HTML document and an image that's hosted on a different server.

That means we need to create 2 server connections. If each server connection requires 4 network round trips then this alone will take 800 milliseconds. Given this tight budget, we can say that we want our servers to respond to incoming requests within 150 milliseconds. Based on this we can set a budget for the Time to First Byte metric.

After creating the connections and waiting for the backend responses we are left with 400 milliseconds to download the data. At 125 kilobytes per second that means we need to squeeze all content into 400 ms × 125 KB/s, or 50 kilobytes. Based on this we could set an HTML budget of 10 kilobytes and an image budget of 40 kilobytes.
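The same arithmetic can be captured in a small helper that you can re-run whenever an assumption changes (the values below are this example's assumptions, not universal constants; the function name is made up):

```javascript
// Derive a total download-size budget (in kilobytes) from a target
// render time and assumed network/server characteristics.
function downloadBudgetKb({ targetMs, connections, roundTrips, latencyMs, serverMs, bandwidthKBps }) {
  const connectionMs = connections * roundTrips * latencyMs; // 2 * 4 * 100 = 800 ms
  const backendMs = connections * serverMs;                  // 2 * 150 = 300 ms
  const downloadMs = targetMs - connectionMs - backendMs;    // 400 ms left for downloads
  return (downloadMs * bandwidthKBps) / 1000;
}

downloadBudgetKb({
  targetMs: 1500,     // LCP budget
  connections: 2,     // HTML document + image on another server
  roundTrips: 4,      // per new connection
  latencyMs: 100,
  serverMs: 150,      // server response time budget
  bandwidthKBps: 125, // 1 Mbps
}); // → 50 (kilobytes to split between HTML and images)
```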

Having worked this out will inform technical decisions from the start.

Using Lighthouse for performance budgets

If you're using Google Lighthouse to test your website you can integrate performance budgets into your Lighthouse tests using the LightWallet feature.

To do this, you need to use Lighthouse via the command-line interface and pass in the --budget-path parameter.

The budget is a JSON file with thresholds for different metrics. For example, we could save this as our budget.json file:

"timings": [
"metric": "first-contentful-paint",
"budget": 1500
"metric": "largest-contentful-paint",
"budget": 5000
"resourceSizes": [
"resourceType": "total",
"budget": 2000
"resourceCounts": [
"resourceType": "font",
"budget": 5
"resourceType": "total",
"budget": 100

Then run Lighthouse and pass in the budget.

lighthouse --budget-path=./budget.json --view

At the bottom of the Lighthouse Performance section you can now see the values for each of the metrics you've set a budget for, as well as information about any budgets that have been exceeded.

Performance budgets in a Lighthouse report

<![CDATA[Core Web Vitals: which metric is hardest to pass?]]> /hardest-core-web-vitals-metric Tue, 14 Dec 2021 00:00:00 GMT In June 2021, Google started using Core Web Vitals as a search result ranking factor. The Core Web Vitals are a set of three user experience metrics: Largest Contentful Paint, Cumulative Layout Shift, and First Input Delay.

For each of these metrics, Google defined thresholds that websites have to meet in order to get SEO benefits. A website that doesn't pass the Core Web Vitals can drop in search rankings.

This article looks into which of these metrics is hardest to pass and causes the most problems for websites.

Core Web Vitals pass thresholds

Which of the Core Web Vitals is hardest to pass?

The HTTP Archive publishes aggregate data from Google's Chrome User Experience Report (CrUX).

This data shows that the Largest Contentful Paint metric is the hardest to pass. On mobile, less than half of websites provide a good LCP experience at least 75% of the time.

Metric                           Mobile Pass Rate   Desktop Pass Rate
Largest Contentful Paint (LCP)   45.8%              60.3%
Cumulative Layout Shift (CLS)    67.3%              63.6%
First Input Delay (FID)          90.5%              99.9%
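This origin-level field data is also available programmatically through the CrUX API. As a sketch, here's how you could pull the share of good LCP experiences out of an API response; the response object below is a trimmed-down, hand-written example, not real data:

```javascript
// Given a CrUX API queryRecord response, return the fraction of
// page loads with a "good" LCP (the first histogram bin, up to 2.5 s).
function goodLcpShare(cruxResponse) {
  const histogram = cruxResponse.record.metrics.largest_contentful_paint.histogram;
  const goodBin = histogram.find((bin) => bin.start === 0);
  return goodBin ? goodBin.density : 0;
}

// Trimmed-down example response for illustration:
const response = {
  record: {
    metrics: {
      largest_contentful_paint: {
        histogram: [
          { start: 0, end: 2500, density: 0.458 },
          { start: 2500, end: 4000, density: 0.3 },
          { start: 4000, density: 0.242 },
        ],
      },
    },
  },
};

goodLcpShare(response); // → 0.458
```

A site passes the LCP assessment when this share is at least 0.75 at the 75th percentile, which is what the pass rates in the table above count.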

The Cumulative Layout Shift metric is the second hardest to pass. Interestingly, it's also the only metric that's better on mobile than on desktop. Usually performance metrics are worse on mobile devices, as they tend to have less powerful processors and slower network connections. However, this is not the case for CLS as it looks at layout changes rather than load time.

First Input Delay rarely causes sites to fail the Core Web Vitals. Even on mobile only 9.5% of sites fail this metric.

What percentage of sites pass the Core Web Vitals?

Significantly less than half of all websites pass the Web Vitals.

Metric                Mobile Pass Rate   Desktop Pass Rate
All Core Web Vitals   31.2%              41.1%

However, it is important to keep in mind that Google won't penalize the whole site if Core Web Vitals are poor overall.

Instead, Google puts similar pages on a website into groups and assesses the Core Web Vitals for the group rather than for the whole website.

For example, take a site that contains both a relatively slow interactive JavaScript application and fast-loading content pages. Even if the site as a whole doesn't pass the web vitals, you should still expect the SEO-relevant content to rank well.

The data we looked at in this post all refers to origin-level metrics.

Bonus: First Contentful Paint

First Contentful Paint is a web vitals metric, but it's not one of the Core Web Vitals that impact rankings.

It actually has an even lower pass rate than the Largest Contentful Paint.

Metric                         Mobile Pass Rate   Desktop Pass Rate
First Contentful Paint (FCP)   38.1%              59.5%
<![CDATA[Measuring user flow performance with Lighthouse]]> /lighthouse-user-flows Mon, 08 Nov 2021 00:00:00 GMT Lighthouse tests usually perform a non-interactive cold load. However, real users interact with the page, and load pages again with some resources already cached. User flow support in Lighthouse lets you test sites beyond the initial page load.

Lighthouse user flow test

Scripting a user flow

Before auditing a user journey with Lighthouse you either need to record a user flow with Chrome DevTools or script one yourself.

In this tutorial we'll take the exported Puppeteer script from the previous post on the DevTools Recorder tab. It goes to the GitHub homepage, searches for "react" and then clicks the first search result.

You can find the full exported script here.

Install dependencies

The script needs Puppeteer to control a Chrome instance, and we'll use Lighthouse to audit the user flow.

Run the following commands in the folder that contains your user flow recording:

npm init -y # create node module context and track local dependencies
npm install puppeteer lighthouse
node github-search.js

This will run through the user flow – in the next steps we'll add Lighthouse auditing to it.

Starting a Lighthouse user flow audit

We'll need to make a few changes at the top of the user flow script.

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

Replace the code above with this code:

const puppeteer = require('puppeteer');
const { startFlow } = require('lighthouse/lighthouse-core/fraggle-rock/api.js');
const fs = require("fs");

(async () => {
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();

    const flow = await startFlow(page, {
        name: 'Go to homepage',
        configContext: {
            settingsOverrides: {
                screenEmulation: {
                    mobile: false,
                    width: 1350,
                    height: 940,
                    deviceScaleFactor: 1,
                    disabled: false,
                },
                formFactor: "desktop",
            },
        },
    });

This imports the modules we need and also makes sure Chrome opens a browser window rather than running in headless mode.

Then we start a new Lighthouse flow. Since we recorded the user journey on desktop we need to run the flow on desktop as well.

Also update the setViewport call to use a matching screen size:

await targetPage.setViewport({ "width": 1350, "height": 940 })

Before the browser.close() call at the end of the file we need to generate a report and save it as HTML.

const report = flow.generateReport();
fs.writeFileSync('report.html', report);
await browser.close();

Finally, find the list of steps in the user flow. To begin with, comment out all steps other than the first one shown below.

const targetPage = page;
const promises = [];
await targetPage.goto('');
await Promise.all(promises);

To capture a full Lighthouse report for the initial page, add a flow.navigate call.

await flow.navigate("")

const targetPage = page;
const promises = [];
await targetPage.goto('');
await Promise.all(promises);

After running node github-search.js && open report.html we can see the results of a standard non-interactive Lighthouse test.

Standard Lighthouse test result

User interaction

Next, restore the following two flow steps and add:

  1. Call startTimespan at the top
  2. Call endTimespan and snapshot at the bottom
await flow.startTimespan({ stepName: 'Enter search term' });
{
    const targetPage = page;
    const element = await waitForSelectors([["aria/Search GitHub"],["body > div.position-relative.js-header-wrapper > header > div > > div.d-lg-flex.flex-items-center.px-3.px-lg-0.text-center.text-lg-left > div.d-lg-flex.min-width-0.mb-3.mb-lg-0 > div > div > form > label > input.form-control.input-sm.header-search-input.jump-to-field.js-jump-to-field.js-site-search-focus.js-navigation-enable.jump-to-field-active.jump-to-dropdown-visible"]], targetPage);
    await{ offset: { x: 74.5, y: 24} });
}
{
    const targetPage = page;
    const element = await waitForSelectors([["aria/Search GitHub"],["body > div.position-relative.js-header-wrapper > header > div > > div.d-lg-flex.flex-items-center.px-3.px-lg-0.text-center.text-lg-left > div.d-lg-flex.min-width-0.mb-3.mb-lg-0 > div > div > form > label > input.form-control.input-sm.header-search-input.jump-to-field.js-jump-to-field.js-site-search-focus.js-navigation-enable.jump-to-field-active.jump-to-dropdown-visible"]], targetPage);
    const type = await element.evaluate(el => el.type);
    if (["textarea","select-one","text","url","tel","search","password","number","email"].includes(type)) {
        await element.type('react');
    } else {
        await element.focus();
        await element.evaluate((el, value) => {
            el.value = value;
            el.dispatchEvent(new Event('input', { bubbles: true }));
            el.dispatchEvent(new Event('change', { bubbles: true }));
        }, "react");
    }
}

await flow.endTimespan();
await flow.snapshot({ stepName: "Search term entered" })

The Lighthouse report now contains a timespan entry and a snapshot entry.

Lighthouse user flow timespan and snapshot

Before looking at these results in more detail, let's apply the same change to the last two steps:

await flow.startTimespan({ stepName: 'Go to search result' });
{
    const targetPage = page;
    const promises = [];
    const element = await waitForSelectors([["aria/react"],["#jump-to-suggestion-search-global > a >"]], targetPage);
    await{ offset: { x: 41.5, y: 4} });
    await Promise.all(promises);
}
{
    const targetPage = page;
    const promises = [];
    const element = await waitForSelectors([["aria/facebook/react"],["#js-pjax-container > div > > div > ul > li:nth-child(1) > > div.d-flex > div > a"]], targetPage);
    await{ offset: { x: 62.5, y: 12.21875} });
    await Promise.all(promises);
}

await flow.endTimespan();
await flow.snapshot({ stepName: "Search result page" })

The timespan view now shows a filmstrip of the navigation, as well as layout shifts and blocking time collected along the way. Timespan recordings allow us to see whether user interaction after the initial load causes performance issues.

Lighthouse user flow timespan detail

The snapshot view doesn't show much performance data, but provides the Accessibility and SEO audits for the page. Capturing a snapshot after simulating user interaction makes it possible to discover problems in the modified post-interaction DOM.

Lighthouse user flow snapshot detail

Click here to view the final Lighthouse user flow script.


User flow support in Lighthouse is still in development, and you'll likely run into some issues. For example, when I tried to collect a timespan during the initial load, it broke some of the later Puppeteer interactions.

Being able to test user journeys with Lighthouse will help create more realistic tests that uncover layout shift and accessibility issues that are currently hidden.

<![CDATA[Recording a user flow in Chrome DevTools]]> /chrome-devtools-user-flow-recorder Sat, 06 Nov 2021 00:00:00 GMT Chrome is adding a new Recorder tab to the DevTools, letting users record and replay user journeys.

This feature will be included in Chrome 97, due for stable release on January 4 2022. Use Chrome Canary to try this feature out now.

User journey recorded in Chrome DevTools

Creating a recording

  1. Navigate to the page where you want to start the recording (in this case I'm opening the GitHub homepage)
  2. Open the DevTools by right-clicking on the page and selecting Inspect
  3. Open the Recorder tab

Devtools Recorder tab

  1. Click the Create a new recording button

New recording button

  1. Enter a name for your user flow

Selecting a name for the user flow recording

  1. Click Start a new recording

  2. Go through the user journey on your page – I searched for "react" on GitHub, clicked the "search" button, and then selected the first search result

  3. Click End recording

Button to finish the user flow recording

  1. The recording is now complete

Finished recording

Replay and Measure Performance

The Replay button simply performs the recorded steps. This lets you check your recording is working correctly.

The Replay Settings let you control the emulated network speed. Reducing the speed to a slower connection is helpful when testing performance, and capping the speed results in more consistent measurements.

DevTools Recording Replay settings

Measure Performance captures a DevTools Performance profile while going through the user flow. This can help you understand which parts of the process are slowing the user down.

There's lots of information here. Hovering over the filmstrip screenshots can give you an idea of what's going on at any given point. The CPU utilization timeline at the top of the page can point out potential JavaScript bottlenecks.

Performance Profile of DevTools user flow recording

Replay failures

If a replay isn't successful, for example because a DOM element wasn't found, the step where the replay failed is highlighted in the user flow.

DevTools Recorder replay failure

You can edit individual steps, for example picking a more reliable element selector.

Editing recording step

Exporting and running a Puppeteer script

DevTools can export your user journey as a Puppeteer script. Puppeteer is a Node library that lets you control a browser through code.

Export Puppeteer script button in DevTools Recorder

To run the exported script you need to

  1. Install Node.js
  2. Open a terminal window and navigate to the folder that you exported your script to
  3. Install the Puppeteer library by running npm install puppeteer
  4. Run the script with node github-search.js (or whatever name you used)

If you open the exported script you'll see this code near the top of the file:

const browser = await puppeteer.launch();
const page = await browser.newPage();

It launches a new Chrome instance and opens a new tab. By default Puppeteer uses a headless browser with no user-visible interface. This makes it difficult to see what the script does, so disable headless mode like this to test the script:

const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();

When running the script now you'll see a Chrome window open and navigate through your user flow.

What does the Puppeteer script actually look like?

At the top of the exported file you'll see a bunch of helper functions like waitForSelectors, waitForElement, querySelectorsAll, and waitForFunction.

Then come the more interesting bits:

{
    const targetPage = page;
    await targetPage.setViewport({"width":1135,"height":338})
}
{
    const targetPage = page;
    const promises = [];
    await targetPage.goto('');
    await Promise.all(promises);
}
{
    const targetPage = page;
    const element = await waitForSelectors([["aria/Search GitHub"],["body > div.position-relative.js-header-wrapper > header > div > > div.d-lg-flex.flex-items-center.px-3.px-lg-0.text-center.text-lg-left > div.d-lg-flex.min-width-0.mb-3.mb-lg-0 > div > div > form > label > input.form-control.input-sm.header-search-input.jump-to-field.js-jump-to-field.js-site-search-focus.js-navigation-enable.jump-to-field-active.jump-to-dropdown-visible"]], targetPage);
    await{ offset: { x: 74.5, y: 24} });
}
{
    const targetPage = page;
    const element = await waitForSelectors([["aria/Search GitHub"],["body > div.position-relative.js-header-wrapper > header > div > > div.d-lg-flex.flex-items-center.px-3.px-lg-0.text-center.text-lg-left > div.d-lg-flex.min-width-0.mb-3.mb-lg-0 > div > div > form > label > input.form-control.input-sm.header-search-input.jump-to-field.js-jump-to-field.js-site-search-focus.js-navigation-enable.jump-to-field-active.jump-to-dropdown-visible"]], targetPage);
    const type = await element.evaluate(el => el.type);
    if (["textarea","select-one","text","url","tel","search","password","number","email"].includes(type)) {
        await element.type('react');
    } else {
        await element.focus();
        await element.evaluate((el, value) => {
            el.value = value;
            el.dispatchEvent(new Event('input', { bubbles: true }));
            el.dispatchEvent(new Event('change', { bubbles: true }));
        }, "react");
    }
}

What can we see here?

  • Each step is wrapped in curly braces, separating the steps and creating a separate scope for variables
  • waitForSelectors is called with multiple selectors, so if one selector doesn't work (e.g. due to a new deployment causing DOM changes) there are others to fall back to, making the script less likely to break and easier to debug when it does
  • waitForSelectors uses Puppeteer's custom query handlers, so the script looks for an element matching aria/Search GitHub rather than a CSS selector
  • There's some code to handle setting the value on non-standard (?) elements – not quite sure what this is for
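The fallback behavior boils down to trying each selector in order and taking the first one that matches. A simplified sketch of the idea, with a query callback standing in for the real DOM lookup (the actual waitForSelectors helper also retries until a timeout):

```javascript
// Try selectors in order and return the first element found,
// so a broken selector doesn't break the whole script.
function firstMatch(selectorGroups, query) {
  for (const selectors of selectorGroups) {
    for (const selector of selectors) {
      const element = query(selector);
      if (element) return element;
    }
  }
  return null;
}

// Usage in a browser context (sketch):
// const element = firstMatch(
//   [["aria/Search GitHub"], ["header input.header-search-input"]],
//   (selector) => document.querySelector(selector)
// );
```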

You can see the generated Puppeteer script here.

<![CDATA[Optimizing Core Web Vitals without improving site performance]]> /optimizing-web-vitals-without-improving-performance Wed, 20 Oct 2021 00:00:00 GMT The Core Web Vitals are a set of user experience metrics that Google uses as part of its search result rankings. But how easy is it to game them?

Setting up a test page

I created a test page where the largest element is a 5 MB image. We get a high Largest Contentful Paint (LCP) value, as the image takes a while to download and render.

When the image appears it pushes down the image attribution below it. This layout shift increases the Cumulative Layout Shift (CLS) metric.

Let's see how we can solve these issues.

Filmstrip showing image that appears late and pushes down content, bad web vitals metrics

Largest Contentful Paint

Let's review what LCP measures:

The Largest Contentful Paint (LCP) metric reports the render time of the largest image or text block visible within the viewport, relative to when the page first started loading.

The LCP element on the test page is the img tag with the 5 MB picture. Can we convince the browser the element isn't actually visible?

If we set the image opacity to 0 and only fade the image in once it's been downloaded then the LCP will only update when the animation is complete.

To prevent the LCP from updating we set the animation duration to 1 day, or 86400 seconds.

style="width: 100%; opacity: 0;"
onload=" = 'fadein 86400s forwards'"

Our fadein animation then looks like this, instantly showing the image.

@keyframes fadein {
    from { opacity: 0; }
    0.000001% { opacity: 1; }
    to { opacity: 1; }
}
The slow image is now no longer the LCP element. Instead, the LCP value is determined by the h1 tag that appears as soon as the page starts to render.

An alternative LCP trick

DevisedLabs demonstrates an alternative LCP hack using a very large image.

They insert an image overlay containing a transparent SVG at the top of the body tag. This image renders right away, and is the largest page element.

The pointer-events: none CSS style ensures users can still interact with the underlying page.

Cumulative Layout Shift

The slow LCP metric is fixed now, but we still need to fix the layout shift that occurs when the image pushes down the image attribution.

A layout shift occurs any time a visible element changes its position from one rendered frame to the next.

Again we can use the opacity animation to make the p tag "invisible":

setTimeout(() => {
    const style = document.createElement("style");
    style.innerHTML = `
        *:not(img) { opacity: 0; animation: fadein 86400s forwards }
    `;
    document.head.appendChild(style);
}, 200);
  • we exclude the img from the CSS selector as the element still needs to be invisible when the image download finishes
  • we use setTimeout to delay adding the style tag as otherwise no LCP value would be recorded at all

Unfortunately showing and hiding the content causes a flicker. We can fix this by making the content nearly invisible from the start (but not totally invisible as that would prevent a contentful paint).

* { opacity: 0.01; }

Problem solved!

Filmstrip showing image that appears late and pushes down content, good web vitals metrics

Alternative approach

Another way to prevent layout shifts is replacing the DOM element that gets pushed around with a new element containing identical HTML code. For example, you can overwrite the body HTML to regenerate the DOM nodes:

document.body.innerHTML = document.body.innerHTML;

You'd need to do this just before the image renders – running this code in the onload listener is too late. But that can be worked around by cloning the img tag, removing the src attribute from the original, waiting for the cloned tag to download the image, and then restoring the src attribute and regenerating the DOM.

The downside of this approach is that interactive parts of the replaced content can break, as the new DOM nodes won't have the same event listeners as before.

First Input Delay

First Input Delay looks at long chunks of CPU activity, so it's harder to cheat than the other Web Vitals. Luckily it's also the metric that's least likely to cause problems – 90% of mobile sites have good FID, compared to only 47% with good LCP scores.

A lot of potential "hacks" are just genuinely good for users:

  • breaking CPU activity into several separate chunks means DOM event handlers can run in-between without delay
  • designing the UI to discourage user interaction before the app is ready also improves user experience

I also don't think it's possible to create a fake user interaction, for example by calling dispatchEvent.

However, this could work:

  • cover the page in a transparent full-screen iframe overlay
  • the user tries to interact with the busy page, but actually interacts with the iframe
  • when the app is ready remove the iframe

The iframe main thread would be idle and user input can be handled without delay.

Google says iframes are considered when scoring Web Vitals, but it's unclear how exactly this works. Either way, there's no long input delay in the main frame as the user never interacted with it.

Two main-threads, parent thread is busy while iframe is quiet

Performance metrics have definitions

Site performance metrics have moved away from purely technical measurements (server response time, page download size) to more user-centric assessments. These metrics provide more insight into the end-user experience, but they also have more complex definitions.

This post explained how one could make content "invisible" to improve Web Vitals, but developers also run into the opposite problem. Fast-loading elements that are not counted because they have an opacity animation can lead to a worse LCP metric.

The metric definitions change over time, so a new version of Chrome can introduce shifts in metric values. Google publishes a Web Vitals Changelog detailing the refinements that have been made.

There's also an open Chromium bug on the opacity loophole, and it will likely be fixed eventually.

Changes to the LCP metrics definition over time

What does this mean for Core Web Vitals?

Google's adoption of Core Web Vitals as a ranking signal has drawn the attention of website owners towards web performance.

While it's possible to game the metrics, site speed improvements not only improve SEO, but also lead to better user experience and higher conversion rates. So site owners will at least consider finding a proper solution.

Gaming the metrics also requires custom implementation work, and risks accidentally breaking functionality or causing content to flash. There's no simple script that automatically "fixes" measured Web Vitals for all websites. As browsers update their metric definitions workarounds will require ongoing maintenance while real solutions won't need to be updated.

Automatically scoring user experience is hard: some fast sites will be wrongly assessed as slow, and some slow sites will use workarounds to achieve better metrics. But site speed is only a small component of the overall search result rankings, and the new metrics often provide meaningful feedback to website owners.


I've assumed that Google Search uses the standard Chrome APIs and metric definitions to assess Web Vitals. This seems like a reasonable assumption, as duplicating this logic would be a lot of work and would show up in the Chromium source code.

While I was able to improve the metrics reported for the test site, I didn't try this out on a production site (you'd need to wait 28 days to see the full results in the Chrome UX report).

Hopefully this post helped you develop a deeper understanding of how the Core Web Vitals work and why you might sometimes not see the values you'd expect.

I've also written about why lab and field data are often different on PageSpeed Insights, and why Lighthouse scores differ based on the environment where the test is run.

<![CDATA[Blowing up HTML size with responsive images]]> /srcset-page-html-size Tue, 07 Sep 2021 00:00:00 GMT I recently looked at the speed of the Waitrose homepage and was surprised by its massive uncompressed HTML document size: 3.5 MB.

This article takes a look at how images make the document so big and asks if this reduces site speed.

Image URLs in the srcset attribute

Traditional img tags only take one src image URL that's used regardless of the size of the device. That means either images will look pixelated on large high-res screens, or a lot of unnecessary bandwidth will be consumed on small devices.

<img src="img-800px.png">

The srcset and sizes attributes fix this problem. Developers can use srcset to provide multiple URLs that serve the same image at different sizes – or sometimes a different image that uses the available space more effectively. sizes is used to tell the browser how large the image is supposed to be at different screen sizes.

The browser then decides what image to load based on the rendered size and screen resolution.

<img src="img-800.png"
  srcset="img-400.png 400w, img-800.png 800w, img-1600.png 1600w"
  sizes="(max-width: 600px) 400px, 800px">
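
As a rough sketch (not the exact selection algorithm from the HTML spec, and using hypothetical file names), the browser's choice can be modeled as picking the smallest candidate wide enough to cover the rendered slot size multiplied by the device pixel ratio:

```javascript
// Rough sketch of srcset candidate selection: pick the smallest image
// whose intrinsic width covers the rendered slot width times the
// device pixel ratio, falling back to the widest candidate.
function pickCandidate(candidates, slotPx, dpr) {
  const needed = slotPx * dpr;
  const sorted = [...candidates].sort((a, b) => a.width - b.width);
  const match = sorted.find(c => c.width >= needed);
  return (match || sorted[sorted.length - 1]).url;
}

const candidates = [
  { url: "img-400.png", width: 400 },
  { url: "img-800.png", width: 800 },
  { url: "img-1600.png", width: 1600 },
];

// A 400px slot on a 2x screen needs ~800 physical pixels:
console.log(pickCandidate(candidates, 400, 2)); // img-800.png
```

Real browsers may also factor in things like the cached availability of a candidate, so this is only an approximation of the behavior.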

Using lots of srcset URLs

Now let's take a look at the HTML of the Waitrose homepage.

This section of the page looks pretty simple, but it's built with half a megabyte of HTML code.

Section with four columns with one image each

The website is responsive and uses the picture tag and srcset attribute to serve different images at different screen sizes. Each <picture> tag in this section contains 180 possible image URLs.

On top of that, the website uses Bootstrap classes like visible-xs to show different DOM elements depending on screen size. Each of the three blocks contains mostly similar content, and the browser ends up loading one image out of 540 possible URLs.

HTML picture tag and large srcset attribute

Does it affect download speed?

Picture elements account for most of the HTML code, but there's also a 900KB __PRELOADED_STATE__ global to initialize the React app.

While extra image URLs add a lot of code, the URLs are very repetitive, so they should be easy to compress when transferring over the network.

Repetitive image URLs

Without compression, the srcset attributes make up 70% of the total page size of 3.5 MB.

HTML size breakdown without compression

After gzipping the file, srcset only contributes 37% to an overall size of ~200 KB.

I actually expected compression to help a bit more!

HTML size breakdown with compression

Okay, so let's assume the responsive image code actually added 74 KB of download size. That's meaningful, but only 1.4% of the total page download weight (5.1 MB). On a slowish 10 Mbit connection downloading 74 KB would take around 60 ms.
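
A quick sanity check on the download math reproduces roughly the 60 ms figure:

```javascript
// Back-of-the-envelope transfer time for a given size and bandwidth.
function transferMs(kiloBytes, mbitPerSec) {
  const bits = kiloBytes * 1024 * 8;
  return (bits / (mbitPerSec * 1_000_000)) * 1000;
}

// 74 KB over a 10 Mbit/s connection:
console.log(Math.round(transferMs(74, 10))); // 61
```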

And the whole point of including responsive images is to prevent spending bandwidth downloading images that are too high-res. So increasing document size can save data later on.

However, HTML size has a larger performance impact than other on-page resources. While images are low-priority and can be loaded later on, HTML is high priority and competes for bandwidth with other render-blocking resources like CSS files.

Generally I wouldn't say the extra download size is a big concern. However, for this particular website the duplication seems excessive and there's likely low-hanging fruit to pick.

Does it affect overall site speed?

While response compression reduces the impact of large duplicated content, the browser still needs to decompress and process the large HTML response. For example, this would result in more time spent parsing HTML.

How long does it take to parse 2.5 MB of HTML?

I ran a test adding 10 MB of picture tags to a page and it increased parse time by around 300ms. Let's say 1 MB of uncompressed HTML means an extra 30ms of parse time.

Then 2.5 MB of HTML takes about 75 ms to parse. On a mobile device that's 4x slower this might be closer to 300 ms.
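
That back-of-the-envelope scaling can be written down as a tiny helper. The 30 ms per MB figure is the rough estimate from the test above, not a precise measurement:

```javascript
// Rough scaling: about 30 ms of parse time per MB of uncompressed
// HTML on desktop, multiplied by a CPU slowdown factor for mobile.
function estimateParseMs(htmlMb, msPerMb = 30, cpuSlowdown = 1) {
  return htmlMb * msPerMb * cpuSlowdown;
}

console.log(estimateParseMs(2.5));        // 75 (desktop)
console.log(estimateParseMs(2.5, 30, 4)); // 300 (4x slower mobile device)
```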

Parsing the extra HTML will have an impact that's just about noticeable, but not massive.

What's the overall performance impact?

My very rough guess is that the extra HTML slows down the initial render by 90 to 360 ms, depending on CPU speed and network connection. While this is not a major problem, it's enough to consider optimizing.

The results suggest that HTML parse time has a much larger impact than the additional download time.

How to fix this?

The root cause of this issue is likely that multiple layers of abstraction are stacked on top of each other. Each srcset attribute only specifies around 18 URLs. But many picture tags contain multiple source tags for different screen sizes. And these bits of code are then duplicated for different screen sizes again.

Developers and authors are not aware of this multiplicative effect, as they only interact with one layer of abstraction at a time. The most impactful solution would therefore be to review the architecture and find ways to reduce duplication. Maybe classes like visible-xs can be avoided entirely in favor of single responsive HTML blocks.

Alternatively, reduce the number of image URLs per picture:

  • You might not need both a 600px wide image and a 680px one
  • A visible-xs block is only shown on screens narrower than 544px, and doesn't need a 4000px wide image
  • A source element with a media="(max-width: 544px)" attribute also doesn't need a 4000px wide image
  • While in a visible-xl block, there's no need to include a source tag with a media="(max-width: 544px)" attribute
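
The last two suggestions could be automated with a small helper that drops candidates the media condition can never use. This is a hypothetical sketch, assuming a maximum device pixel ratio of 3 and made-up file names:

```javascript
// Hypothetical helper: drop srcset candidates that can never be used
// because they are wider than the largest display size the media
// condition allows, assuming a maximum device pixel ratio of 3.
function pruneSrcset(candidates, maxCssWidthPx, maxDpr = 3) {
  const maxUseful = maxCssWidthPx * maxDpr;
  return candidates.filter(c => c.width <= maxUseful);
}

const candidates = [
  { url: "img-400.png", width: 400 },
  { url: "img-1600.png", width: 1600 },
  { url: "img-4000.png", width: 4000 },
];

// Inside a block capped at 544px, the 4000px image is never needed:
console.log(pruneSrcset(candidates, 544).map(c => c.url));
// [ 'img-400.png', 'img-1600.png' ]
```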
<![CDATA[Why does Lighthouse lab data not match field data?]]> /lighthouse-lab-data-not-matching-field-data Sun, 15 Aug 2021 00:00:00 GMT Lab-based performance tests often give different results from data that's collected from real users. This article explains the differences between these two ways to measure site speed and how they lead to different test results.

We'll focus on Lighthouse and the Chrome User Experience report, but note that there are many other ways to collect lab or field data.

Example mismatch in PageSpeed Insights

PageSpeed Insights shows both types of data when testing a URL. Field data is shown at the top and lab data below it.

(The overall score at the top of the report is based on lab data.)

PageSpeed Insights report showing discrepancies between lab and field data

The results of this test are fairly typical:

First Input Delay is a field-only metric, lab data instead shows Total Blocking Time.

Why does lab data in Lighthouse and PageSpeed Insights not match field data?

Lab data reports how a website behaves in a controlled test environment, for example using a certain network speed. In contrast, real user data aggregates the experiences of many different users. The website may load fast for some users (for example those with a fast network, located close to the website servers) and slow for others.

Lighthouse uses a fairly slow test device by default, so the lab metrics are typically worse than the real user data. It doesn't try to describe a typical user experience, but rather shows how your website loads for the slowest 5-10% of users.

How does Lighthouse collect lab data?

Lighthouse is the tool PageSpeed Insights uses to collect lab data. It can run tests on demand in a test environment using a fixed network and CPU speed.

Because of this, lab data is a great way to identify how changes on a website impact its performance. As the environment is fixed, results between different test runs are relatively consistent.

The lab environment is also able to capture detailed diagnostic data. Everything you see on PageSpeed Insights below the metric summary comes from the Lighthouse test.

PageSpeed Insights Lighthouse diagnostics data

How is field data collected for the Chrome User Experience Report (CrUX)?

Field data is collected by measuring the experience of real users. Google collects data on Core Web Vitals through Chrome and publishes it in the Chrome User Experience Report (CrUX).

Real-user data aggregates the experiences of many different users, collecting data from different locations, devices, and networks. The result is a distribution of experiences.

Google focuses on the 75th percentile of experiences. In the example above, that means that in 75% of cases the Largest Contentful Paint took up to 1.9 seconds. Conversely, 25% of the time it took more than 1.9 seconds.

Explanation of PSI field data distribution for LCP
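
A sketch of that percentile calculation, using hypothetical LCP samples (CrUX itself aggregates far more data, so this only illustrates the idea):

```javascript
// Sketch of a 75th-percentile calculation like the one CrUX reports:
// the value below which 75% of observed experiences fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[idx];
}

// Hypothetical LCP samples in seconds
const lcpSamples = [0.8, 1.1, 1.3, 1.9, 2.4, 3.0, 1.5, 1.2];
console.log(percentile(lcpSamples, 75)); // 1.9
```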

Google aggregates field data over a 28-day window, so changes to your website won't be reflected immediately. Often multiple URLs are grouped together, so the metrics you see in PageSpeed Insights are not necessarily for that particular URL.

What causes metric discrepancies between lab and field data?

Lab metrics are different from real user data because real users:

  • Have faster or slower network connections
  • Visit websites repeatedly and benefit from caching
  • Have a faster or slower CPU in their device
  • Interact with the page after the initial load
  • Access your site from many different locations

Different network environments

By default, Lighthouse tests on mobile are run using a network connection with a bandwidth of 1.6 Mbps and a latency of 150 ms. That means each request to a server without an existing connection will take at least 600 ms, and the maximum download speed is 200 KB/s.
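
Those numbers follow from simple arithmetic. A simplified model (one round trip each for DNS, TCP, TLS, and the HTTP request itself) reproduces them:

```javascript
// The 600 ms minimum comes from the round trips a fresh connection
// needs before any response data arrives.
const rttMs = 150;
const roundTrips = ["DNS", "TCP", "TLS", "HTTP request"];
const minRequestMs = roundTrips.length * rttMs;
console.log(minRequestMs); // 600

// Maximum download speed at 1.6 Mbit/s, converted to KB per second:
const maxKBps = 1_600_000 / 8 / 1000;
console.log(maxKBps); // 200
```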

These default settings are very slow. Average mobile download speed in the US is around 41 Mbps, with a latency of 47 milliseconds. When you compare the Lighthouse test result to real users in the US you'll see a big difference.

The different network environments often explain most of the differences in Largest Contentful Paint.

Bar chart showing latency and bandwidth for the US and Lighthouse

Different CPU speeds

While network differences are the main source of discrepancies between lab and field data, CPU speeds also differ. Lighthouse tests are run on a powerful desktop machine or server, so Lighthouse throttles the CPU by a factor of 4 to approximate a mobile CPU.

This difference can impact Largest Contentful Paint and First Input Delay.


Caching

Lighthouse always tests a cold load where the user has never visited the page before. Accordingly, all resources used on the page need to be downloaded.

In contrast, real users often navigate from one page to the other, or visit the same website multiple times. For the first visit they need to download all resources used on the page. But for the second visit, many of those resources are saved in the browser cache and don't have to be fetched again.

Because some resources are cached, the load experience will be better for subsequent page visits, mostly improving the Largest Contentful Paint metric.

Test location

Lab-based data is based on a test result in a specific geographic location. In contrast, the field data on PageSpeed Insights aggregates the experience users had globally.

For example, if your server is based in the US, the lab data might look good. But if a significant percentage of your customers are based in Australia, then the real user metrics will look worse.

PageSpeed Insights runs tests in one of four different locations based on where you as the user are located.

Map of PageSpeed Insights test locations

Simulated throttling

Lighthouse runs tests in Chrome on a fast connection, then simulates how the page would have loaded on a slower connection. This process can introduce inaccuracies.

Sometimes simulated throttling results in a better Largest Contentful Paint than what real users experience. Often this indicates a particular browser behavior that is not accurately modelled in the simulation. Here are some examples where this can happen:

  • Your site has many preload tags, making resource prioritization worse
  • Extended Validation certificates causing OCSP requests when loading the document
  • Single slow XHR/Fetch requests (or a small number of them)

Scrolling down the page

Lighthouse tests load the page, but unlike real users Lighthouse always waits patiently for the page to finish loading instead of starting to scroll down.

This commonly affects Cumulative Layout Shift. The top of your page might be fine, but there may be CLS issues further down the page. For example, a user on a slow connection might scroll down the page and run into images that are still loading. When the images finally load the rest of the page content may shift down.

Other user interactions

Lighthouse also doesn't use any interactive parts of your page, like forms and buttons. Real users therefore experience aspects of the page that are hidden from the simple lab test.

This is especially common with single-page apps, where users may interact with a page for hours. Largest Contentful Paint does not update once the user starts clicking or scrolling down the page. However, Cumulative Layout Shift does update if there is a larger shift later on.

Final notes

This article referred to Lighthouse as used in PageSpeed Insights. However, note that it's possible to run Lighthouse with different settings for bandwidth, latency, and CPU throttling. It's also possible to run Lighthouse with alternative throttling methods that provide more accurate data.

Keep in mind that, even if 75% of users have a good experience, that still leaves 25% whose experience may be very poor. Lab data collected on a slower-than-average device and network may be able to highlight opportunities for improvement.

Field data is also subject to some amount of drift over time, as your audience changes, or as your audience start using better devices and network connections. Lab data in contrast keeps these values fixed, making it easier to compare test results over longer periods.

Lighthouse and CrUX data in DebugBear

DebugBear monitors the performance of your website in the lab, but the Web Vitals tab also shows field data from the Chrome User Experience Report.

Try DebugBear to track and optimize your site speed.

DebugBear Web Vitals tab

<![CDATA[CSP error noise caused by Chrome extensions]]> /chrome-extension-csp-error-noise Thu, 29 Jul 2021 00:00:00 GMT A Content Security Policy (CSP) lets developers improve security by putting restrictions on what resources can be loaded on a page. For example, a CSP can only allow requests from certain domains, or block inline script tags.

Developers can also specify a URL that the browser can send reports to if a page attempts to load a blocked resource.

If browser extensions load resources which are blocked by the CSP this can create noise in the error reports.

Example Content Security Policy

A CSP can be specified using a content-security-policy HTTP header on the document or using a meta tag.

Here's an example of a strict Content Security Policy that only allows resources to be loaded from the same origin as the document. It uses the report-uri directive to make the browser generate reports when a resource is blocked.

default-src 'self'; report-uri
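
Spelled out as a full response header (with a hypothetical reporting endpoint), the policy looks like this:

```http
Content-Security-Policy: default-src 'self'; report-uri https://example.com/csp-reports
```

Note that report-uri is ignored when a policy is delivered via a meta tag, so reporting requires the HTTP header.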

How often do extensions cause CSP reports?

With the strict CSP shown above, 19 out of 1009 tested extensions (1.9%) caused a report. With a weaker policy used on the Apple homepage only 14 extensions (1.4%) attempted to load resources that were blocked.

LINER Search Assistant (200k users) and Cashback service LetyShops (1M users) each caused over 300 CSP errors.

However, most of these errors for the first 6 extensions are caused by blocked font loading, where Chrome repeatedly tries to load the font. So this very high number of errors seems to be a Chrome issue.

Number of CSP error by Chrome extension

Given the large number of CSP errors, if a report-uri is specified, a lot of CSP report requests are made.

Number of network requests by Chrome extension

What's causing the CSP errors?

The most common blocked resource types are fonts and stylesheets.

CSP blocks by resource type

Font loading errors

For example, LINER first successfully loads a CSS file from Google Fonts, but loading the actual font is then blocked.

It's not clear to me why Chrome attempts to load the font over 700 times.

LINER CSP errors in console

Modifying the CSP

Instead of causing CSP errors by introducing new resources, ScriptSafe actually modifies the CSP and blocks all scripts.

'name': 'Content-Security-Policy',
'value': "script-src 'none'"

Accordingly the normal on-page scripts no longer load correctly.

ScriptSafe CSP errors in console


Blocked iframes

In the case of MozBar, an embedded iframe is blocked.

Mozbar CSP errors in console

When do Chrome extensions cause CSP reports?

Chrome extension content scripts are normally somewhat isolated from the rest of the page.

Let's say we want to load a Google font on a page with a strict Content Security Policy.

var l = document.createElement("link")
l.rel = "stylesheet"
l.href = "https://fonts.googleapis.com/css?family=Roboto" // example Google Fonts stylesheet URL
document.head.appendChild(l)

If we just paste this code in the DevTools console then the stylesheet request will be blocked.

Stylesheet request blocked

However, if we change the context to that of an extension the stylesheet loads successfully. The font request triggered by the stylesheet is blocked though.

Font blocked

As another example, let's take a look at inserting a script from an extension. At first this works fine, even though the page CSP is blocking inline script tags.

But if the inserted script itself is injecting another script tag then the CSP applies and blocks the script.

Injected script blocked

<![CDATA[How do Chrome extensions impact browser performance?]]> /chrome-extension-performance-2021 Mon, 05 Jul 2021 00:00:00 GMT This report investigates how 1000 of the most popular Chrome extensions impact browser performance and end-user experience.

Key findings of the 2021 report:

  • Popular extensions like Honey, Evernote Web Clipper, and Avira Browser Safety can have a significant negative impact on website speed
  • On ad-heavy websites, ad blockers and privacy tools can greatly improve performance

As the performance impact varies between different websites, five different pages were included in the test: a simple test page, the Apple homepage, the Toyota homepage, and news articles by The Independent and the Pittsburgh Post-Gazette.

Want to find out if any of the extensions you use are slowing you down? Look up the extension here.


  1. Increasing website CPU usage
  2. Impact on page rendering times
  3. Background CPU usage
  4. Browser memory consumption
  5. How do ad blockers and privacy tools affect browser performance?
  6. What happens if I have more than one extension installed?
  7. How do these browser performance results compare to last year's?
  8. A look at individual extensions
  9. Methodology

Increasing website CPU usage

Many Chrome extensions have the ability to run extra code on every page you open, although well-built ones only run code where necessary.

Among the 100 most popular Chrome extensions, Evernote Web Clipper has the biggest negative impact on performance. It spends 368 milliseconds running code on every page you open. If you try to interact with the page during this time the response will feel sluggish.

Chrome extension with large on-page CPU time: Evernote, Loom, Avira Password Manager, Clever

Each of these browser extensions has been installed over a million times. While a few hundred milliseconds may not sound like much, if multiple extensions are installed this can have a significant impact on user experience.

The speed impact of a browser extension depends on the website opened by the user. The results above were collected on a very simple website and generally represent the minimum per-page performance impact of a Chrome extension.

When testing extensions on the Apple homepage we can see that a dark mode extension called Dark Reader spends 25 seconds analyzing and adjusting images so that they better fit into a dark theme. As a result the page loads much more slowly, as we'll see later on.

Chrome extension with large on-page CPU time: Dark Reader, Honey, Avira Password Manager, Loom

Coupon code finder Honey also significantly impacts site speed on ecommerce websites, adding 825 ms of CPU processing time.

Finally, when running the tests on the Toyota homepage, we can see that Norton Password Manager increases CPU activity the most, adding about 1 second of CPU time.

Chrome extension with large on-page CPU time: Norton Password Manager, Dashlane, Avira Safe Shopping, Dark Reader

This chart only shows the 5 extensions with the biggest impact on performance. Even without any extensions installed, the Toyota homepage uses over 3 seconds of CPU time, so it's harder to separate random variation from the impact of an extension.

Top 1000 extensions

Let's look at some other extensions that are less popular, but still have more than 100,000 installs each.

Ubersuggest, a marketing tool with over 200,000 users, adds 1.6 seconds of CPU activity to every page.

Chrome extension with large on-page CPU time: Ubersuggest, ProWritingAid, Meow the cat pet, MozBar

Substitutions is a Chrome extension that automatically replaces certain words on a page. On a small website it has little performance impact (adding about 10 ms of CPU time), but on a larger page it adds 9.7 seconds of CPU activity.

Chrome extension with large on-page CPU time: Substitutions, Trusted Shops, Screen Reader, ProWritingAid

Impact on page rendering times

CPU activity can cause a page to hang and become unresponsive, as well as increasing battery consumption. But if the processing happens after the initial page load the impact on user experience may not be that big.

Several extensions like Loom and Ghostery run a large amount of code without impacting when the page starts rendering.

However, other extensions like Clever, Lastpass, and DuckDuckGo Privacy Essentials run code as soon as the page starts loading, delaying the point at which the user is first able to view page content. The chart uses the First Contentful Paint metric to measure this.

Chrome extension with large rendering delay: Clever, LastPass, Rakuten, Avast Online Security

While the Apple homepage normally renders in under a second, with Dark Reader installed it takes almost 4 seconds.

On an ecommerce website, Honey also delays the appearance of page content by almost half a second.

Chrome extension with large rendering delay: Dark Reader, Honey, Evernote, Loom

Avira Browser Safety and some ad blockers can also delay when page content starts to appear.

Chrome extension with large rendering delay: Avira Browser Safety, AdGuard AdBlocker, AdBlock best ad blocker, Ghostery

Top 1000 extensions

Looking at the 1000 most popular extensions shows that a social media tool called 壹伴 · 小插件 delays rendering times by 342 milliseconds and a sales tool called Outreach Everywhere adds a 251 millisecond delay.

Chrome extension with large rendering delay: Outreach Everywhere, Clever, Fuze, axe DevTools

When loading the Toyota homepage an anonymous browsing proxy called anonymoX delays rendering by over 2 seconds – however this isn't surprising as traffic is routed through another server.

Avira Browser Safety delays rendering by 369 milliseconds. This is not caused by code running on the visited page but by the background work performed by the extension, as we'll see in the next section.

Chrome extension with large rendering delay: anonymoX, Avira Browser Safety, Total AdBlock, AdGuard AdBlocker

Background CPU usage

Chrome extensions can run code not only on the pages you visit but also on a background page that belongs to the Chrome extension. For example, this code can contain logic that blocks requests to certain domains.

Even when visiting a simple page, Avira Safe Shopping keeps the CPU busy for over 2 seconds.

Chrome extension with large background activity: Avira Safe Shopping, Avira Password Manager, Avira Browser Safety, Evernote

On a more complex page – in this case the Toyota homepage – the Dashlane password manager and AdGuard AdBlocker also spend over 2 seconds on background activities.

Chrome extension with large background activity: Avira Safe Shopping, Dashlane, AdGuard AdBlocker, Avira Browser Safety

Top 1000 extensions

When viewing a news article from The Independent, three extensions cause more than 20 seconds of CPU activity: uberAgent, Dashlane, and Wappalyzer.

Chrome extension with large background activity:  uberAgent, Dashlane, Wappalyzer, TwoSeven

Browser memory consumption

Chrome extensions can increase the memory usage of every page being visited, as well as memory being spent on the extension itself. This can hurt performance, especially on low-spec devices.

Ad blockers and privacy tools often store information about a large number of websites, requiring a large amount of memory to store this data. That being said, they can also reduce overall memory consumption when many ad-heavy pages are open in the browser.

Chrome extension with large memory consumption: AdBlock Best ad blocker, AdBlock Plus, Dashlane, Avira Safe Shopping

Top 1000 extensions

When looking at the Top 1000 extensions, ad blockers continue to take up a significant amount of memory, with the Trustnav ad blocker adding almost 300 MB of memory consumption.

Chrome extension with large memory consumption: AdBlocker by TrustNav, Hola ad remover, Easy Adblocker, AdBlock best ad blocker

How do ad blockers and privacy tools affect browser performance?

While ad blockers can cause additional processing on ad-free websites, they can significantly speed up ad-heavy pages. This section looks at 15 ad blockers that have more than 500,000 installations each.

Loading trackers and rendering ads is often CPU-intensive, although the exact impact varies by website. News websites are often particularly ad-heavy, so this report will look at the CPU usage of two news articles: one from The Independent and the other from the Pittsburgh Post-Gazette.

Without ad blockers, per-page CPU time is 17.5 seconds. Even the lowest-performing blocker (by Trustnav) reduces this by 57% to 7.4 seconds.

Ghostery, the best-performing ad blocker in this test, reduces CPU activity by 90% down to just 1.7 seconds on average.

Lowest on-page CPU activity: Ghostery, uBlock Origin, AdBlocker Ultimate

Raymond Hill, author of uBlock Origin, points out on Twitter that while all extensions reduce on-page CPU activity some also introduce a significant amount of CPU activity in the extension's background page, cancelling out some of the savings.

Work that's done in the background is less likely to impact the performance of the web page itself, but it does still slow down your computer overall.

Lowest on-page CPU activity: Ghostery, uBlock Origin, AdBlocker Ultimate

Ad blockers and privacy tools also reduce data volume by 43% to 66%.

Lowest page size: Ghostery, Disconnect, AdBlocker Ultimate

Without an ad blocker, each article makes 793 network requests on average. With Ghostery this goes down 90% to just 83.

Lowest number of network requests: Ghostery, AdBlocker Ultimate, Disconnect, DuckDuckGo Privacy Essentials

Without ad-blockers installed, the average total browser memory consumption with one of the news articles open is 574 MB. Disconnect reduces this by 54% to just 260 MB.

However, as browser extensions always take some memory to run, other ad blockers like the one by Trustnav slightly increase memory consumption. In this case the savings from blocking ads don't outweigh the additional cost of the ad blocker.

However, keep in mind that this only applies if you have a single ad-heavy page open. If you have 10 tabs open, all showing news articles, then you'll see 10x the memory savings but generally no equivalent increase in the memory consumption of the ad blocker.

Lowest memory consumption: Disconnect, Privacy Badger, Ghostery, Fair AdBlocker

What happens if I have more than one extension installed?

In the majority of cases the effect of multiple Chrome extensions will be cumulative.

This screenshot shows a Chrome DevTools page performance profile for when four extensions are installed: axe Web Accessibility Testing, Evernote Web Clipper, LastPass, and Skype.

You can see that CPU tasks run one after the other. If an extension is configured to run as soon as the page starts loading, this delays the initial render of the page.

Chrome DevTools CPU recording showing code running in order: axe DevTools, LastPass, Website code, Skype, Evernote

How do these browser performance results compare to last year's?

I looked at 96 of the most popular extensions that were included both in this year's tests and in last year's.

Taking the average across all the changes shows that on-page CPU time went down by 28 milliseconds.

Most extensions show some improvement, about 100ms among the extensions with significant activity

However, the tests in 2021 were run using Chrome 91 and the 2020 tests used Chrome 83. As Chrome gets faster over time these improvements might not necessarily mean that the Chrome extensions themselves have been optimized.

When running this year's tests with the old version of Chrome the average improvement is only 13 milliseconds.

CPU time improvement is significantly reduced but still noticeable

Note that this comparison only looks at one metric on one website (the simple test page).

Grammarly, Microsoft Office, Okta Browser Plugin, Avira Safe Shopping, and Avira Browser Safety all showed reductions in on-page CPU time of over 100 milliseconds. The biggest regressions were seen in Save to Pocket, Loom, and Evernote.

A look at individual extensions

Improvement in Grammarly

Last year, Grammarly was loading a 1.3 MB Grammarly.js file on every page. Now on most websites only a 112 KB Grammarly-check.js script is loaded. Only if, for example, the user focuses on a text area does the extension load the full Grammarly.js file.

However, some websites still always load the full-size script. This list includes Gmail, Twitter, YouTube, LinkedIn, Medium, Slack, Reddit, Upwork, Zendesk and other websites where text entry is common. On these websites the performance impact will be greater than that shown in these tests.

Grammarly improved from about 500ms of CPU time to only about 100 on most pages

Regression in Save to Pocket

In last year's tests, Save to Pocket injected one small stylesheet into every page, but this had no noticeable impact on performance.

However, Save to Pocket now always loads a 2 MB JavaScript file, adding 110 milliseconds of CPU time.

Pocket used to have no performance costs but now adds about 200 ms of CPU time to each page

Evernote, Outreach Everywhere, and Ubersuggest

Evernote loads 4.3 MB of content scripts on every page, up from 2.9 MB a year ago. Accordingly parsing, compiling, and running this code takes a good amount of time.

Outreach Everywhere loads 4.5 MB of code on every page. However, the performance impact of this code is far greater as it's loaded on document_start rather than on document_idle. That means the code runs before the visited page starts to render, thus delaying when page content shows up.

This image shows a Chrome DevTools performance profile where both extensions are installed.

Outreach Everywhere code runs before website code, Evernote code runs after
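
This timing difference is controlled by the run_at field of a content script declaration in the extension manifest. A minimal, hypothetical Manifest V2 example that waits until the page is idle:

```json
{
  "name": "Example extension",
  "version": "1.0",
  "manifest_version": 2,
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"],
      "run_at": "document_idle"
    }
  ]
}
```

Setting "run_at": "document_start" instead injects the script before the page renders, while "document_idle" (the default) defers it until after the page has loaded.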

Ubersuggest loads a 7.5 MB JavaScript file on every page. A lot of this appears to be geo data – for example, this list of 38,279 different locations.

List of data with city names and countries

Avira Safe Shopping

Avira Safe Shopping has over 3 million users. Why does it sometimes delay page rendering by almost half a second?

The extension contains a safelist of 39,328 websites. When navigating to a new website Avira iterates over this list, causing the website to load much more slowly.

Code profiled in Chrome DevTools showing over 1 second of CPU activity
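The cost of that linear scan, and the standard fix, can be sketched in plain JavaScript. (The hostnames below are made up, and the extension's actual matching logic may be more complex than a simple equality check.)

```javascript
// Hypothetical safelist of the same size as Avira's (39,328 entries)
const safelist = Array.from({ length: 39328 }, (_, i) => `site-${i}.example`);

// Linear scan: every navigation walks the whole array in the worst case
function isSafeLinear(host) {
  return safelist.some((entry) => entry === host);
}

// Set lookup: one hash lookup per navigation instead of ~39k comparisons
const safeSet = new Set(safelist);
function isSafeFast(host) {
  return safeSet.has(host);
}
```

Both functions return the same result; only the per-navigation cost differs.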

Dashlane and uberAgent

Dashlane and uberAgent both had more than 20 seconds of background CPU activity when viewing an article by The Independent.

For every network request, uberAgent sets up a timer that fires every 50 milliseconds to check if the page has finished loading. For a page that makes almost 1000 requests this means many timers are created and the computer is slowed down significantly.

Many small tasks in Chrome DevTools for uberAgent
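The arithmetic behind that slowdown is simple (the request count below is an approximation based on the page described above):

```javascript
// One 50 ms polling interval is created per network request
const requestCount = 1000;                // ~1000 requests on the article page
const intervalMs = 50;
const firesPerSecond = 1000 / intervalMs; // each timer fires 20 times per second
const callbacksPerSecond = requestCount * firesPerSecond;
// With all timers alive at once, that's 20,000 callbacks per second,
// which is why the profile shows a wall of tiny tasks
```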

While uberAgent runs many small tasks, Dashlane runs occasional long tasks taking over 500 milliseconds.

Adjacent long tasks in Chrome DevTools for Dashlane

Look up the performance impact of a specific extension

Wondering if an extension you're using affects performance? Look it up here.

Chrome Extension Performance lookup


Tests were run on an n2-standard-2 Google Cloud instance; the numbers in this report show the median of 7 test runs.

Data was collected using Lighthouse, and the results in this test show unthrottled observed metrics rather than simulation results.

A total of 1004 extensions were included in the test. A large percentage of extensions only modify the New Tab screen; these generally don't hurt performance and so most aren't included in the results. Some extensions where test results had errors are also not included.

<![CDATA[What's new in Lighthouse 8.0?]]> /lighthouse-v8 Fri, 04 Jun 2021 00:00:00 GMT Google released Lighthouse version 8 this week. This article looks at how the Performance score and Lighthouse report have changed compared to version 7.

Updated Performance score

The Lighthouse Performance score is made up of 6 different metrics. The weighting of each metric is adjusted over time.

In Lighthouse v8, Cumulative Layout Shift now accounts for 15% of the overall score, compared to just 5% in v7. Total Blocking Time has also increased in importance, from 25% to 30%. These changes reflect the increased focus on Core Web Vitals.

Breakdown of the Lighthouse Performance score

The metrics that have been deprioritized are First Contentful Paint, Speed Index, and Time to Interactive.
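The score itself is just a weighted average of the individual metric scores. The CLS (15%) and TBT (30%) weights are from this article; the remaining v8 weights shown here are for illustration and should be checked against the Lighthouse documentation:

```javascript
// Lighthouse v8 metric weights (CLS and TBT per the text above; the rest
// are illustrative)
const weights = { FCP: 0.10, SI: 0.10, LCP: 0.25, TTI: 0.10, TBT: 0.30, CLS: 0.15 };

// Each metric gets a 0-100 score; the Performance score is their weighted sum
function performanceScore(metricScores) {
  return Object.entries(weights)
    .reduce((total, [metric, w]) => total + metricScores[metric] * w, 0);
}
```

A page scoring 100 on everything except a TBT score of 50 would land at 100 - 0.30 * 50 = 85.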

Metric score updates

In addition to updating the weighting of each metric, the way individual metrics are scored has also changed for First Contentful Paint and Total Blocking Time.

In both cases scoring has become stricter. To get an FCP score of 90 a page now has to render within 1.8s, compared to 2.3s in Lighthouse v7.

Different metric scores in Lighthouse v7 and v8

Impact of these changes

Overall, Google says that 20% of sites will see a drop in their Performance score, 20% will see no change, and 60% will see an improvement.

Keep in mind that the Lighthouse Performance score is the result of a lab test. The Core Web Vitals that Google uses as a ranking factor are collected from real users.

Cumulative Layout Shift changes

Starting with Lighthouse 7.5, CLS not only counts layout shifts in the main frame but also in embedded iframes.

Additionally, Lighthouse 8.0 uses the new "windowed" definition of CLS. That means layout shifts that happen around the same time (in a window of up to 5s) are grouped together. The window with the greatest amount of layout shift is used to calculate the CLS metric.
This is in contrast to the previous definition, which added up all layout shifts throughout the entire existence of a page. On long-lived pages, for example in single-page apps, this approach resulted in inflated CLS values.
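A simplified sketch of the windowed calculation (the real definition also closes a window after a 1-second gap between shifts; here only the 5-second window mentioned above is applied, and shift entries are assumed to be `{ time, value }` pairs with times in milliseconds):

```javascript
// Simplified windowed CLS: group shifts into windows of at most 5 seconds
// and report the worst window. (The 1-second gap rule is omitted.)
function windowedCls(shifts) {
  let worst = 0;
  let windowStart = -Infinity;
  let windowSum = 0;
  for (const { time, value } of shifts) {
    if (time - windowStart > 5000) {
      windowStart = time; // this shift starts a new window
      windowSum = 0;
    }
    windowSum += value;
    worst = Math.max(worst, windowSum);
  }
  return worst;
}
```

Three 0.1 shifts at 0s, 4s, and 8s give a windowed CLS of 0.2, whereas the old cumulative definition would report 0.3.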

While the new CLS definition will mostly affect values collected from real users who interact with the page over a longer period of time, some pages also see improvements in lab-based tests. For example, this New York Times article saw a decrease in the CLS score.

Separate CLS windows resulting in lower CLS score

JavaScript treemap and code coverage

Lighthouse now includes a treemap view showing JavaScript page size across different bundles. You can access it through the View treemap button near the top of the Performance section.

Lighthouse JavaScript treemap showing size by bundle

If your app provides public source maps Lighthouse will break down bundle size by source file. Chrome also collects code coverage data, showing which parts of the code are run and which parts are unused. Click Unused bytes to highlight the percentage of unused code in red.

In this example we can see that a JavaScript bundle from Trello loads the intl module. However, while over 10 languages are loaded, only one is actually used by the page.

Lighthouse JavaScript treemap showing bundle breakdown

Element screenshots

Many Lighthouse audits report errors related to specific DOM elements. But sometimes it can be difficult to identify these just by their HTML code, so Lighthouse now includes screenshots highlighting the DOM element causing the issue.

Lighthouse element screenshots

Metric filter

A small change, but a super handy one: if you're seeing issues with a specific performance metric you can now select it and view audits that highlight opportunities to improve the metric.

Lighthouse metric filter

Content Security XSS Audit

A Content Security Policy (CSP) can prevent cross-site scripting (XSS) attacks, where an attacker can run their own code on your website and gain access to the data of other users.

The Lighthouse Best Practices category now checks if your website has a Content Security Policy, and shows how it could be improved.

Lighthouse CSP XSS audit
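A minimal policy that addresses the XSS-focused checks might look like the following (the exact directives depend on your site's resources, and the nonce value is a placeholder generated per response):

```
Content-Security-Policy: script-src 'self' 'nonce-RANDOM'; object-src 'none'; base-uri 'none'
```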

Lighthouse 8.0 on DebugBear

DebugBear now runs Lighthouse 8.0 and continuously monitors Lighthouse scores and performance metrics. Get in-depth reports showing exactly what Lighthouse audits caused your scores to go up or down.

Start monitoring Lighthouse scores!


<![CDATA[Optimizing page performance by lazy loading images]]> /image-lazy-loading Wed, 28 Apr 2021 00:00:00 GMT I recently made some performance improvements to the DebugBear homepage. This article explains how to use the loading="lazy" HTML attribute to make your website render more quickly.

Page load timeline with and without image lazy loading

The problem: resource prioritization

The filmstrip above shows that the background image of the hero section only loads after about 1.6s. Before that a plain blue background is used.

Blue and image backgrounds

Why does it take so long? We can find out by looking at the detailed request waterfall.

The request for the background image is only made relatively late. Before that the browser loads several large below-the-fold images, using up the available bandwidth.

Order page images are loaded in

The right-most part of each request shows the response body being downloaded. The darker areas show when data for that request is received, so you can see what requests the browser is spending bandwidth on.

Most of this is allocated to the below-the-fold images, but the browser is also loading some JavaScript code before loading the hero background image.

What does the HTML loading="lazy" attribute do?

By default, all images on a page are loaded as soon as the user opens the page. Setting the loading attribute to lazy can defer fetching images and iframes until the user scrolls near the element.

<img src="image.png" loading="lazy" />

That avoids unnecessary downloads and helps the browser focus on the most important requests.

Impact of implementing the solution

The below-the-fold images are now deprioritized, and the browser loads the background image much earlier. The Largest Contentful Paint goes from 1.6s to 0.8s.

(The filmstrip makes it look like the above-the-fold image loads later than before, but this is only minor variation, not an effect of the lazy loading change.)

Rendering the homepage with and without lazy loading images

What browsers support the loading="lazy" attribute?

Native lazy loading is currently supported by Chrome, Edge, Firefox, and Safari. However, Firefox and Safari don't support the loading attribute on iframes yet.

If a browser doesn't support the loading attribute the images will be downloaded as soon as the user opens the page.

loading attribute browser support table
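You can feature-detect native support and only load a JavaScript fallback when needed. This is a sketch; `loadLazyLibrary` stands in for whatever fallback loader you use:

```javascript
// True when the browser understands the loading attribute on images
function supportsNativeLazyLoading() {
  return typeof HTMLImageElement !== "undefined" &&
    "loading" in HTMLImageElement.prototype;
}

// In the browser, only pull in a fallback library when needed:
// if (!supportsNativeLazyLoading()) loadLazyLibrary();
```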

Why is native image lazy loading better than using a library?

Using a library to lazy load images means the lazy-loading JavaScript has to load before any image request can start. If the user starts scrolling down it may take a while before images show up.

This is especially noticeable if the LCP image is lazy loaded. In this waterfall you can see a long request chain for the LCP image.

Request waterfall with lazy loaded LCP image

How do you lazy load a background image?

The loading=lazy attribute doesn't work for background images, so you'll need to use JavaScript to achieve the same effect. You can check if the image is near the viewport and then either modify the style attribute directly or add a class with a background-image defined in a CSS file.

You can use the Intersection Observer API to check whether the image is in the viewport.
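A sketch of that approach (the `.hero` selector and `bg-visible` class are placeholders; the class would define the `background-image` in your CSS):

```javascript
// Adds a class (which sets the background-image in CSS) once the element
// comes near the viewport, then stops observing.
function lazyLoadBackground(el, className) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        el.classList.add(className);
        observer.disconnect();
      }
    }
  }, { rootMargin: "200px" }); // start loading shortly before the element is visible
  observer.observe(el);
}

// In the browser:
// lazyLoadBackground(document.querySelector(".hero"), "bg-visible");
```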

What if my user loads an article and then wants to read it offline?

If you're writing a long blog post readers might prefer to load it on their phone and then read it when they don't have an internet connection.

When images are lazy loaded they'll be missing once the reader scrolls down the page. To avoid that you can remove the loading attribute from the img tags after the initial page load.

setTimeout(() => {
  document.querySelectorAll("img[loading=lazy]")
    .forEach(el => el.removeAttribute("loading"))
}, 5000)

The images will then be downloaded right away, but without having affected the initial page load.

Images are loaded right away when loading attribute is removed

The trade-off here is bandwidth consumption: on a mobile plan with limited traffic users may prefer not loading images that they aren't going to see.

A worse Lighthouse Performance score

One thing that happened when I enabled image lazy loading and later removed the loading attribute is that my Lighthouse Performance score went down.

Charts showing LCP getting better and Performance and TTI going down

The initial render was a lot quicker, but Time to Interactive went up because more network activity was moved to later in the page load process, keeping the network active for longer.

Is this a problem? Not really. I care about user experience and the Core Web Vitals Google uses for search engine rankings. Time to Interactive doesn't matter in either case.

One easy workaround would be to increase the delay from 5s to 10s. Lighthouse assumes the network is idle after 5s of no activity, so pushing back the extra image requests will improve Time to Interactive.


You can use image lazy loading to prioritize more important requests and speed up the initial rendering of your page.

The Lighthouse Performance score isn't the most important metric to optimize for, and sometimes it's ok to make a performance optimization that reduces the Performance score.

Want to try lazy loading on your website? The DebugBear Experiments feature lets you run performance tests on your website without having to deploy any changes.

Try DebugBear for free to monitor and optimize your site speed.

DebugBear trendlines

<![CDATA[Profiling site speed with the Chrome DevTools Performance tab]]> /devtools-performance Thu, 22 Apr 2021 00:00:00 GMT The Chrome DevTools Performance tab is packed full of features that let you audit page performance in depth. This article explains how to use it to profile your site and interpret the results.

Recording a performance profile

To access the Performance tab, navigate to the website you want to profile, then open Chrome DevTools by right-clicking and selecting Inspect.

Menu to open Chrome Devtools

Select the Performance tab inside Chrome DevTools.

The easiest way to capture a performance profile is by clicking the Start profiling and reload page icon. The profile will stop automatically once CPU and network activity on the page stops.

Empty DevTools performance tab

You might prefer running performance tests in Incognito Mode, as Chrome extensions can impact site performance.

Overview of the Performance tab

A Performance profile in Chrome can get pretty complicated! But having a wide range of information available means you can correlate different types of page activity and identify the cause of a performance problem.

The next few sections will look at a few key components of the Performance tab and how to interpret the data in them.

Stripe profile showing frames, CPU utilization, filmstrip, network requests, and flame chart

CPU utilization timeline

This chart shows how busy the CPU is with different types of tasks, usually mostly JavaScript (yellow) and Layout work (purple).

CPU activity normally becomes fairly quiet after an initial burst of activity, as you can see on the Stripe homepage.

Stripe CPU activity

The example below is from the Asana homepage, and you can see that the CPU remains busy after the initial page load. Especially on slower devices this could make the page slow to interact with.

Asana CPU activity

Filmstrip
The filmstrip recording shows the rendering progress of your website in an intuitive way. You can hover over the filmstrip to see a screenshot from that point in time.

Chrome filmstrip recording for Stripe

Starting the recording from a blank page

When using the Start profiling and reload page option it can be hard to tell at what point the page started rendering, as the filmstrip shows the fully rendered page from the start.

You can record a filmstrip starting from a blank page instead:

  1. Go to about:blank
  2. Click on the Record icon in the Performance tab
  3. Once the page is loaded click the Record icon again to stop the recording

The recording now starts from a blank page and shows the site rendering step by step. I also applied network throttling to make the page render a little more gradually.

Slower Chrome filmstrip recording starting from blank page

Network request timeline

The network section shows a request waterfall, starting with the HTML request at the top and then showing additional requests below it.

Click on each request to see additional information like the full URL, request duration, resource priority, and download size.

Network request waterfall in the DevTools Performance tab

The network timeline is especially useful to correlate requests to UI updates or CPU activity. For example, this screenshot shows the Stripe homepage just before a font finishes loading.

Screenshot from before font loads

If you see a change in the UI you can look at the requests section to identify what network request was holding back the UI update.

In this screenshot from immediately after loading the font file you can see that the UI has rerendered using the downloaded font.

Screenshot from after font loads

CPU flame chart

The main-thread CPU section contains an inverted flame chart showing how CPU tasks are broken down into different components.

For example, you can see a waitForCssVars function call in the flame chart. Looking above it tells us that this function was called by an anonymous function, which in turn was called as a requestAnimationFrame callback.

We can also see that the init function is called from within waitForCssVars.

CPU flame chart showing JavaScript activity

Selecting a JavaScript function in the flame chart will show additional information about this function call, including the source location.

Clicking on the source location navigates to the source code. I also used the Prettify button in the bottom left of the Sources panel to make the code readable.

Source code for the CPU activity

Forced reflows

Normally the browser first finishes running JavaScript code and then updates the UI for the user. A forced reflow happens when JavaScript code accesses an element's layout properties while there are pending UI changes that could affect those properties. The browser then has to calculate the layout updates synchronously while JavaScript code is running.

Forced reflows don't always have a large impact on performance. A forced style recalculation pulls work forward, so if the layout doesn't change again later, no additional work is required.

Forced reflow in Chrome DevTools

The detailed CPU task data provided by Chrome's profiler can help understand and debug synchronous layouts. The style recalculation task points to two locations in the source code:

  1. Recalculation Forced: The code that triggered the relayout by accessing DOM element properties that depend on the layout
  2. First Invalidated: The code that changed the DOM, meaning layout recalculations would be necessary the next time the UI is updated

Source code for layout invalidation and reflow
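The two code locations map onto a pattern like this (a sketch, not the profiled site's actual code): interleaving writes and reads forces a synchronous layout, while grouping reads before writes avoids it.

```javascript
// Interleaved write/read: forces a synchronous layout ("forced reflow")
function interleaved(el) {
  el.style.width = "200px";  // First Invalidated: this write dirties layout
  const h = el.offsetHeight; // Recalculation Forced: this read triggers layout now
  el.style.height = (h + 10) + "px";
  return h;
}

// Read first, write second: layout is still clean when we read, so the
// browser can batch the recalculation into the next UI update
function batched(el) {
  const h = el.offsetHeight;
  el.style.width = "200px";
  el.style.height = (h + 10) + "px";
  return h;
}
```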

Aggregated task breakdown

If no specific CPU task is selected, the details panel at the bottom of the Performance tab shows an overall breakdown of CPU activity into four categories:

  • Loading: Making network requests and parsing HTML
  • Scripting: Parsing, compiling, and running JavaScript code, also includes Garbage Collection (GC)
  • Rendering: Style and layout calculations
  • Painting: Painting, compositing, resizing and decoding images

Pie chart breaking down CPU activity

By default the page main thread is selected, but you can select different threads by clicking on them or by using the up and down arrow keys.

This screenshot shows the CPU breakdown for a web worker.

Web worker CPU activity in DevTools

Bottom-Up tab

You can select the Bottom-Up view to see a more fine-grained breakdown of CPU activities. It shows the lowest-level types of activities from the bottom of the call tree, so you'll often see native browser functions like getBBox or setAttribute.

Expand these low-level functions to find out what code is calling them. This helps you find the code that you have control over.

Bottom-up chart in Chrome DevTools

Call Tree tab

The Call Tree tab is similar to the flame chart: it shows how much time different subtasks and function calls contribute to the overall duration of a task.

The advantage over the flame chart is that the Call Tree aggregates repeated code invocations rather than looking at one call at a time. This makes it easier to see where time is spent on average.

Call Tree in Chrome DevTools

Frames and frame rate (FPS)

The Frames section of the Performance tab shows a screenshot of every UI update on the page. Each UI update is called a frame, and if the UI is frozen for a long time the frame is called a long frame. The screenshot below shows several long frames, where UI updates are delayed due to heavy JavaScript activity.

Long frames and the frame rate (Frames Per Second, FPS) are also shown right on top of the CPU activity chart.

DevTools frames section

If you click on the frame snapshot in the details pane you can step through all captured frames in order.

Keep in mind that, on a web page, a low frame rate isn't always a problem. When playing a video game the UI is updating constantly, and you'll need a high frame rate. On a website it's normal for the frame rate to go down to zero after the initial page load, unless there are ongoing animations.

Page with no UI updates or frames

Web Vitals and other timings

The DevTools Performance tab can also show Web Vitals and User Timing metrics.

The Timings lane also shows the First Paint (FP) and the DomContentLoaded (DCL) and Load (L) events.

Web Vitals and other timings in Chrome DevTools

Hovering over a layout shift in the Experience lane will highlight the DOM node that changed position on the page, assuming that DOM node still exists. Clicking on the layout shift entry shows additional information like the location the element moved from and to.

The Long Tasks lane shows CPU tasks that take longer than 50ms to run, making the page less responsive to user input. The time in excess of 50ms is counted as blocking time, which is marked using black stripes. This can help you debug the Total Blocking Time metric.
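The blocking-time calculation can be sketched in a few lines (real Total Blocking Time only counts tasks between First Contentful Paint and Time to Interactive; this sketch ignores that detail):

```javascript
// Each task contributes the portion of its duration beyond 50 ms
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce((sum, d) => sum + Math.max(0, d - 50), 0);
}
```

Tasks of 30 ms, 80 ms, and 250 ms contribute 0 + 30 + 200 = 230 ms of blocking time.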

CPU throttling

When optimizing the speed of your website you'll often run into situations where the site is fast on your device but slow for some of your users. For example, pages often load more slowly on phones with slow CPUs than on a desktop device.

DevTools can throttle the CPU and network connection in order to emulate how a user on a slower device would experience your website.

To throttle the CPU, click the gear icon in the Performance tab – not the one at the top right of DevTools! You can then enable a 4-6x slowdown of the CPU.

Enabling DevTools CPU throttling

The screenshot above shows that on a slower device the CPU remains busy even after the initial load. This can make the page less responsive to user interaction, as well as using extra battery power.

For reference, compare the CPU chart above to the one below, where the CPU isn't throttled.

Fast CPU in DevTools Performance recording

Using throttling to make page activity easier to understand

In addition to emulating the page experience of your users, throttling also makes it easier to investigate performance problems.

Unless your page is really slow, when loading a page hundreds of events often happen at once, making it hard to understand the relationships and dependencies of different types of page activity. Applying Slow 3G and 6x CPU slowdown throttling will make the page load really slowly, allowing you to look at network requests and CPU tasks one at a time.

Advanced paint instrumentation

The Enable Advanced Paint Instrumentation option collects additional debug data about page rendering performance. Collecting this data slows down the page, so if this setting is enabled other performance metrics will be less accurate.

If you find a slow Paint event in the timeline you can select it and get a detailed breakdown of what the browser has been drawing and how long it took.

You can also select an item in the Frames lane to see the layers (groups of page content) that make up the page.

Layers that were painted in that frame are colored in. Select each layer and find out why this part of the page was promoted to its own layer.

Layers visualization in Chrome DevTools

Looking at the Stripe homepage, one interesting thing is that there are layers for each section of the header menu.

The position and opacity of the menu changes as the user hovers over the navigation, so Stripe uses the will-change: transform, opacity CSS property to make Chrome aware of this. Chrome then puts these elements in their own layer to speed up these transformations.

will-change-layer CSS property

Monitoring performance with DebugBear

The DevTools Performance tab is a great tool for an in-depth performance investigation of your website. But it doesn't allow you to continuously monitor site speed or compare test results to each other.

DebugBear keeps track of site speed and the Core Web Vitals metrics that Google uses as a ranking signal.

Core Web Vitals charts

We also provide detailed debug data to help you understand and optimize web performance.

Sign up for a free trial!

Rendering filmstrip

<![CDATA[Web Vitals FAQ]]> /web-vitals-faq Mon, 22 Mar 2021 00:00:00 GMT Google will start using Core Web Vitals as part of their rankings in May 2021. This page goes through common questions about Core Web Vitals and how they affect rankings.

All answers to the FAQs are based on what John Mueller has said in the official Google Search Central office hours. Update: Answers from this Web Vitals Q&A are now also included.

The quotes in this article have been edited to make them easier to read and are not always direct quotes from the videos. Each answer links to the original video where the question was answered.

  1. Does Google use field data or lab data for rankings?
  2. How does Google determine the Core Web Vitals scores of a page?
  3. Why am I seeing different scores reported in different tools, such as Lighthouse and the Chrome User Experience Report?
  4. What are Web Vitals? What is the difference between Web Vitals and Core Web Vitals?
  5. How much real-user data is needed to be included in the CrUX dataset?
  6. Is the page experience ranking boost per country?
  7. Do Google rankings only use data for a specific page, or a combination of page and origin level data?
  8. Is page experience a binary ranking factor, or is there a gradation?
  9. How much will Web Vitals impact rankings?
  10. What if a metric doesn't accurately reflect page experience?
  11. Google currently aggregates 28 days of CrUX data. Could Google update Web Vitals data more quickly?
  12. How often is Web Vitals data updated?
  13. Do the ranking changes apply to both mobile and desktop?
  14. How does Google group pages?
  15. Is performance data shared between subdomains?
  16. Do noindex pages affect Web Vitals?
  17. Do pages that are behind login impact Web Vitals?
  18. Why does Google Search Console show big fluctuations in Web Vitals?
  19. Is the yellow "needs improvement" rating good enough to rank well, or does the rating need to be "good"?
  20. Why are AMP and non-AMP pages sometimes shown separately?
  21. Why do I see different values for the Web Vitals in CrUX and PageSpeed Insights?


Does Google use field data or lab data for rankings?

We use the field data, which is based on what users actually see. So it's based on their location, their connections, their devices. That's usually something that mirrors much more closely what your normal audience would see.

You can read more about field vs lab data here.

How does Google determine the Core Web Vitals scores of a page?

The scores represent the 75th percentile of Google's field data, meaning 25% of users have a better experience than stated, and 25% have a worse experience.

We measure the three Core Web Vitals in Chrome [...] for every page load. And then we take the 75th percentile of each of those separately. So if your 75th percentile of all your page loads meets Largest Contentful Paint's 2,500 millisecond threshold then your page meets Largest Contentful Paint. [...]

[...] And then either it meets the third threshold [for First Input Delay] or, since not every page has input, it doesn't have enough samples. [...]

[Meeting the Web Vitals means that] three out of four visitors are going to be having a good experience.
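The aggregation described in the quote can be sketched as follows. (CrUX's exact percentile interpolation isn't stated here, so this uses a simple nearest-rank method; the 2,500 ms threshold is from the quote.)

```javascript
// Nearest-rank 75th percentile of a list of samples
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

// A page meets the LCP threshold if the 75th percentile is within 2,500 ms
function meetsLcpThreshold(lcpSamplesMs) {
  return p75(lcpSamplesMs) <= 2500;
}
```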

Why am I seeing different scores reported in different tools, such as Lighthouse and the Chrome User Experience Report?

Metric discrepancies across tools are a super common problem.

There's a lot to tease apart here, I'll take a stab at a few of the key points.

One of the first ones is that we have two fundamentally different sources of data here that we're dealing with. So we have our field data, which is [...] used for Core Web Vitals, this is "what are your real users experiencing?", this is people who are engaging with your content, and you are getting signals back from them as to how good their experience is.

But you need a way to debug and diagnose that before your user is engaging with that content. And so you want to have some more control and some more granular data there. And that's where simulated data comes in, also called lab data.

So the first key point is that there are two different data sources. And so you're going to see different values, because one is representing all of your users, and the other is representing a simulated load.

The second point [...] is that there are also different runtime conditions depending on the tool that you're looking at. So for instance, if you're accessing Lighthouse in the DevTools panel, then you are going to be operating locally on your own machine, and it's going to represent conditions that are local to you. Whereas if you're using PSI (PageSpeed Insights), you are pinging servers and getting that response back. So there are going to be deltas there as well.

Philip Walton also highlights the importance of the difference between lab and field data.

Lighthouse is a lab based tool, meaning a real user is not interacting with it. It's running in a simulated environment, whereas the Chrome User Experience Report (CrUX) is where the Core Web Vitals data that you'll see in tools like Search Console or PageSpeed Insights is coming from, what we call field data. Sometimes it's called RUM (real-user monitoring) data. It's coming from real users that are actually going to those pages and interacting with them.

What are Web Vitals? What is the difference between Web Vitals and Core Web Vitals?

Web Vitals is the name of the initiative, or the program that we have here in Chrome. That covers everything, the whole Web Vitals program, where vitals is also a term that we use to describe the individual metrics that are part of the Web Vitals program.

The Core Web Vitals specifically are [...] the subset of Web Vitals that we feel like are the most important to measure, and they have to meet certain criteria to be qualified.

  • They have to be measurable in the field by real users.
  • They have to be representative of user experience.
  • They have to generally apply to all web pages.

If you are looking for just the minimal amount of things to focus on, the Core Web Vitals are a great place to start.

And then we have other Web Vitals that are good performance metrics to care about, and they are useful in debugging [...] the Core Web Vitals.

For example, Time To First Byte is a Web Vital. And Time To First Byte is often useful in debugging your Largest Contentful Paint, it helps you know whether maybe your server code is slow or your browser code/front-end code is slow.

How much real-user data is needed to be included in the CrUX dataset?

No clear answer here:

At a high level, we just want to make sure that whatever we're actually sharing has reached a certain threshold for [proper anonymization]. That's kind of how we determine [...] where that threshold is, in terms of what we're actually publishing in the CrUX dataset.

We don't do any capping though. If you have more data than the minimum sample size, there's just more data being used to calculate the CrUX scores.

Is the page experience ranking boost per country?

If your site is fast in the UK, but slow in Australia, will UK search results get a boost?

It's all Chrome users together. This is a part of the reason why we're using the 75th percentile, so in that example more than a quarter of your customers are in Australia, and they're getting slower times.

Ideally you'd be using a CDN (Content Delivery Network).

Do Google rankings only use data for a specific page, or a combination of page and origin level data?

Google uses page level data when enough data is available, and falls back to page groups or the whole origin if there's not much data.

This is, I think, a source of confusion because a lot of tools will report origin and page level data. I think that can sometimes make it more confusing than it needs to be where people might think that you get like a single score for your entire site. But that's not true, you get a score per page.

In some cases, you get a score for a page group. If you go into Search Console, you might see page groups that all have a certain score. Depending on how much data the site has, you might not have enough page-level data, and then all pages in the same origin will be grouped together. [...]

For most of the ranking signals that we have, we don't have like one documentation page that says this is how exactly we use this signal in ranking, because there's just so many edge cases and also situations where we need to have the flexibility to adapt to changes that happen in the ecosystem.

Is page experience a binary ranking factor, or is there a gradation?

Do only pages meeting the "Good" threshold get a ranking boost? Or can site speed optimizations help even if you don't reach that threshold?

In the area from needs improvement to good, that's kind of the range where we would see a gradual improvement with regards to the ranking signal.

Once you've reached that "Good" threshold, then [you're at a] stable point. Micro-optimizing things like extra milliseconds here and there, that's not going to do [anything specific to your rankings]. It might have an effect on what users see. And with that, you might have other positive effects. But at least when it comes to search ranking, that's not going to be something where you're going to see improvements [by being] like five milliseconds faster than the next one.

It is not the case that unless you reach the "Good" threshold for all of the Core Web Vitals metrics [you won't get a ranking boost], like you have to reach that threshold to get a ranking boost, that is not the case.

It's kind of the opposite. Once you reach the ["Good" threshold], you will get a ranking boost for reaching the good threshold for all pages. But beyond that point, you don't get an additional boost for reaching an even better [level], like if [you get your LCP] all the way down to one second. [That] will not increase your ranking.

However, if you have a very, very slow page, like maybe LCP is 20 seconds, and you improve it to 10 seconds, that could potentially boost your ranking.

Now we get a lot of [people saying meeting the Core Web Vitals is really hard for them]. Yes, it is. It's supposed to identify the best content on the web, and that you don't really necessarily [...] need to improve beyond that. You might see additional benefits from your users, but we don't take that into account.

How much will Web Vitals impact rankings?

Page Experience will be an additional ranking factor, and Web Vitals will be a part of that.

Relevance is still by far, much more important. So just because your website is faster with regards to Core Web Vitals than some competitors, that doesn't necessarily mean that come May (this has now been postponed to mid-June) you'll jump to position number one in the search results.

It should make sense for us to show this site in the search results. Because as you can imagine, a really fast website might be one that's completely empty. But that's not very useful for users. So it's useful to keep that in mind when it comes to Core Web Vitals.

It is something that users notice, it is something that we will start using for ranking, but it's not going to change everything completely. So it's not going to, like destroy your site or remove it from the index if you have it wrong. It's not going to catapult you from page 10 to number one position, if you get it right.

What if a metric doesn't accurately reflect page experience?

Automated performance metrics try to assess the user experience of a page, but don't always match the experience of real users. Google is working on improving metrics over time – you can find a changelog for the Core Web Vitals here.

John suggests contacting the Chrome team if the browser is misinterpreting the page experience.

I think for things where you feel that the calculations are being done in a way that doesn't make much sense, I would get in touch with the Chrome team. Especially Annie Sullivan is working on improving the Cumulative Layout Shift side of things.

Just make sure that they see these kinds of examples. And if you run across something where you say, oh, it doesn't make any sense at all, then make sure that they know about it.

You can see an example of a change in the Largest Contentful Paint definition here. Originally, a background image was dragging down the Performance score of the DebugBear homepage, but this was then fixed in a newer version of Chrome.

Change in LCP definition showing in monitoring data when upgrading Chrome

You can expect the page experience signals to be continuously updated, with some advance warning.

Our goal with the Core Web Vitals and page experience in general is to update them over time. I think we announced that we wanted to update them maybe once a year, and let people know ahead of time what is happening. So I would expect that to be improved over time.

Google currently aggregates 28 days of CrUX data. Could Google update Web Vitals data more quickly?

After making improvements to a site's performance it can take a while for those changes to be reflected in the Search Console. Can Google speed up the process?

I don't know. I doubt it. The data that we show in Search Console is based on the Chrome User Experience Report data, which is aggregated over those 28 days. So that's the primary reason for that delay there.

It's not that Search Console is slow in processing that data or anything like that. The way that the data is collected and aggregated just takes time.

So how can you catch regressions early and track performance improvements?

What I recommend, when people want to know early when something breaks, is to make sure that you run your own tests on your side in parallel for your important pages. And there are a bunch of third party tools that do that for you automatically.

You can also use the PageSpeed Insights API directly. Pick a small sample of your important pages and just test them every day. And that way, you can tell if there are any regressions in any of these setups you made fairly quickly.
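A daily check against the PageSpeed Insights API could be sketched like this. The v5 endpoint is real; the page list, and how you store and compare the results from day to day, are assumptions:

```javascript
// Sketch of a daily PageSpeed Insights check for a few important pages.
const PSI_ENDPOINT =
  "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

function buildPsiUrl(pageUrl, strategy = "mobile") {
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${PSI_ENDPOINT}?${params}`;
}

// In a daily job you would fetch(buildPsiUrl(page)) and compare e.g.
// lighthouseResult.audits["largest-contentful-paint"].numericValue
// against the previous day's value to catch regressions.
const pages = ["https://example.com/", "https://example.com/pricing"];
pages.forEach((page) => console.log(buildPsiUrl(page)));
```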

Obviously a lab test is not the same as the field data. So there is a bit of a difference there. But if you see that the lab test results are stable for a period of time, and then suddenly they go really weird, then that's a strong indicator that something broke in your layout in your pages somewhere.

DebugBear is one tool you can use to continuously monitor web vitals in a lab environment.

How often is Web Vitals data updated?

The Chrome UX Report is updated daily, aggregating data from the last 28 days. If you make an improvement, after one day you're 1/28th of the way to seeing the results.
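The 1/28th effect can be modeled with a simplified sketch. Treating the aggregate as a rolling 28-day mean is an assumption — the real CrUX aggregation works on a distribution of samples, not a daily mean — but the propagation behavior is the same:

```javascript
// Simplified model of a fix propagating through a 28-day rolling window.
const WINDOW = 28;

// Daily p75 LCP: 4.0s for 28 days, then a fix brings it to 2.0s
const days = [...Array(WINDOW).fill(4.0), ...Array(WINDOW).fill(2.0)];

function rollingAverage(series, endIndex) {
  const window = series.slice(endIndex - WINDOW + 1, endIndex + 1);
  return window.reduce((a, b) => a + b, 0) / WINDOW;
}

console.log(rollingAverage(days, WINDOW - 1)); // 4 — day before the fix
console.log(rollingAverage(days, WINDOW)); // ≈ 3.93 — 1/28th of the way there
console.log(rollingAverage(days, 2 * WINDOW - 1)); // 2 — fully propagated after 28 days
```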

What if you fix performance a few days before the page experience ranking update is rolled out?

Probably we would not notice that. If you make a short term change, then probably we would not see that at the first point. But of course, over time, we would see that again. It's not the case that on that one date, we will take the measurement and apply that forever.

If you need a couple more weeks before everything is propagated and looks good for us as well, then, like, take that time and get it right and try to find solutions that work well for that.

The tricky part with the lab and the field data is that you can incrementally work on the lab data and test things out and see what works best. But you still need to get that confirmation from users as well with the field data.

Do the ranking changes apply to both mobile and desktop?

Both mobile and desktop from February 2022.

The rollout first started with mobile in June 2021.

[Mobile is where] the harder barriers are, where users really see things a lot slower than on desktop. Because on desktop, you tend to have a really powerful computer, and you often have a wired connection.

On mobile, you have this much slimmed down processor with less capabilities, and a smaller screen and then a slow connection sometimes. And that's where it's really critical that pages load very fast.

How does Google group pages?

Google doesn't have real user performance data for every page that's in the search index. So how are pages categorized?

We use the Chrome User Experience Report data as the baseline. We try to segment the site into parts, what we can recognize there from the Chrome User Experience Report, which is also reflected in the Search Console report.

Based on those sections, when it comes to specific URLs, we will try to find the most appropriate group to fit that into. So if someone is searching, for example, for AMP pages within your site, and you have a separate AMP section, or you have a separate amp template that we can recognize, then we'll be using those metrics for those pages.

It's not so much that if there's some slow pages on your site, then the whole site will be seen as slow. It's really, for that group of pages where we can assume that when a user goes to that page, their experience will be kind of similar, we will treat that as one group.

So if those are good, then essentially pages that fall into those groups are good.

Here's an example of Google Search Console showing different page categories with many similar URLs.

Core Web Vitals data for different categories in Google Search Console

Regarding Accelerated Mobile Pages (AMP), John points out that the AMP report does not take traffic into account. A site where the vast majority of users have a good experience might look slow if there are also a large number of slower pages that are accessed less frequently.

The tricky part with the AMP report is that we show the pages there, but we don't really include information on how much traffic goes to each of those parts.

So it might be that you have, let's say 100 pages that are listed there, and your primary traffic goes to 10 of those pages. Then it could look like you have like those 90% of pages that are slow, and those 10 that people go to are fast. But actually the majority of your people will be going to those fast pages. And we'll take that into account.

Is performance data shared between subdomains?

As far as I recall, the CrUX data is kept on an origin level. So for Chrome, the origin is the hostname, which would be kind of on a sub domain and protocol level. So if you have different subdomains, those would be treated separately.
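What counts as an "origin" here is the scheme plus hostname (plus port), which you can check directly with the URL API — the hostnames below are placeholders:

```javascript
// Two pages on different subdomains have different origins, so their
// CrUX data is kept separate.
const a = new URL("https://www.example.com/page").origin;
const b = new URL("https://blog.example.com/page").origin;

console.log(a); // "https://www.example.com"
console.log(b); // "https://blog.example.com"
console.log(a === b); // false — subdomains are separate origins
```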

Do noindex pages affect Web Vitals?

Many websites have slower, more complex pages that are not included in the search index. Will performance on these pages affect rankings?

This depends on whether Google categorizes the pages together with pages that are in the search index. Especially if there's not a lot of other data Google might draw on the performance data collected for the noindex pages.

If it's a smaller website, where we just don't have a lot of signals for the website, then those no index pages could be playing a role there as well. So my understanding, like I'm not 100% sure, is that in the Chrome User Experience Report we do include all kinds of pages that users access.

There's no specific "will this page be indexed like this or not" check that happens there, because the indexability is sometimes quite complex with regards to canonicals and all of that. So it's not trivial to determine, kind of on the Chrome side, if this page will be indexed or not.

It might be the case that if a page has a clear noindex then we will be able to recognize that in Chrome. But I'm not 100% sure if we actually do that.

Do pages that are behind login impact Web Vitals?

Pages often load additional content and functionality for logged in users. This additional content can slow down the page – will these logged-in pages impact rankings?

If you have the same URL that is publicly accessible as well, then it's very likely that we can include that in the aggregate data for Core Web Vitals, kind of on the real user metrics side of things, and then we might be counting that in as well.

If the logged-in page exists in a public form, [such] that we might think some users are seeing this more complicated page, we would count those metrics.

Whereas if you had separate URLs, and we wouldn't be able to actually index the separate URLs, then that seems like something that we will be able to separate out.

I don't know what exact guidance here is from the Core Web Vitals side, though. I would double check, specifically with regards to Chrome in the CrUX help pages to see how that would play a role in your specific case.

Why does Google Search Console show big fluctuations in Web Vitals?

Why would ratings for a group of pages go up and down repeatedly? What do I do if my page ratings keep switching between green and yellow?

What sometimes happens with these reports is that a metric is right on the edge between green and yellow.

If for a certain period of time, the measurements tend to be right below the edge, then everything will swap over to yellow and look like oh, green is completely gone. Yellow is completely here. And when those metrics go up a little bit more back into the green, then it's swapped back.

Those small changes can always happen when you make measurements. And to me this kind of fluctuation back and forth, points more towards, well, the measurement is on the edge. The best way to improve this is to provide a bigger nudge in the right direction.

Is the yellow "needs improvement" rating good enough to rank well, or does the rating need to be "good"?

Aim for a "good" rating marked in green:

My understanding is, we see if it's in the green, and then that counts as it's okay or not. So if it's in yellow, that wouldn't be kind of in the green.

But I don't know what the final approach there will be [based on the question above it seems like a full "Green" rating is less important]. Because there are a number of factors that come together. I think the general idea is if we can recognize that a page matches all of these criteria, then we would like to use that appropriately in search for ranking.

I don't know what the approach would be when some things are okay and some things that are not perfectly okay. Like, how that would balance out.

The general guideline is that we would like to use these criteria to also be able to show a badge in the search results, which I think there have been some experiments happening around. For that we really need to know that all of the factors are compliant. So if it's not on HTTPS, then even if the rest is okay that wouldn't be enough.

Why are AMP and non-AMP pages sometimes shown separately?

What if Search Console shows a slow URL on the website, but the AMP version of the page is shown as fast?

We don't focus so much on the theoretical aspect of "this is an AMP page and there's a canonical here". But rather, we focus on the data that we see from actual users that go to these pages.

So that's something where you might see an effect of lots of users going to your website directly, and they're going to the non-AMP URLs, depending on how you have your website set up.

And in search, you have your AMP URLs, then we probably will get enough signals that we track them individually for both of those versions. So on the one hand, people going to search going to the AMP versions and people maybe going to your website directly going to the non-AMP versions. In a case like that we might see information separately from those two versions.

Whereas if you set up your website in a way that you're consistently always working with the AMP version, that maybe all mobile users go to the AMP version of your website, then we can clearly say "this is the primary version – we'll focus all of our signals on that".

But in the theoretical situation that we have data for the non AMP version and data for the AMP version, and we show the AMP version in the search results, then we would use the data for the version that we show in search as the basis for the ranking.

Why do I see different values for the Web Vitals in CrUX and PageSpeed Insights?

Lab data from PageSpeed Insights will often be different from the field data. Usually the lab data will be slower, as Lighthouse simulates a very slow device with a bandwidth of 1.6Mbps and 150ms of roundtrip network latency.
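The throttling described above roughly corresponds to Lighthouse's documented mobile defaults, which can be expressed as a plain settings object — this is a sketch; check the Lighthouse documentation for the authoritative values:

```javascript
// Lighthouse-style simulated mobile throttling settings (sketch)
const throttling = {
  rttMs: 150, // round-trip network latency
  throughputKbps: 1638.4, // ~1.6 Mbps downlink
  cpuSlowdownMultiplier: 4, // simulated slow mobile CPU
};

console.log(throttling);
```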

Here's a typical example from the BBC homepage.

Metric discrepancy between PageSpeed Insights lab data and field data

John also notes that PageSpeed Insights tries to compress all performance metrics into a single score:

In PageSpeed Insights we take the various metrics and we try to calculate one single number out of that. Sometimes that's useful to give you a rough overview of what the overall score would be. But it all depends on how strongly you weigh the individual factors.

It can be the case that a user sees a page that's pretty fast and sleek, but when our systems test it they find some theoretical problems that could be causing issues.

The overall score is a really good way to get a rough estimate. And the actual field data is a really good way to see like what people actually see. [...]

How do you use lab data and field data together?

What I recommend is using field data as a basis to determine "should I be focusing on improving the speed of the page or not?" Then use lab testing tools to determine the individual metric values and for tweaking them as you're doing the work.

Use lab data to check that you're going in the right direction, because the field data is delayed by about 30 days. So for any changes that you make, the field data is always 30 days behind and if you're unsure if you're going in the right direction then waiting 30 days is kind of annoying.

<![CDATA[Common problems with rel="preload"]]> /rel-preload-problems Tue, 16 Mar 2021 00:00:00 GMT Preload <link> tags are a type of browser resource hint, telling the browser to start downloading a file with a high priority.

You can use preload tags to make your site load faster, but when used incorrectly they can also slow it down. This article highlights common mistakes when using preload hints, and explains how to avoid them.

How to use preload correctly

First, let's look at how preload is supposed to work.

This WebPageTest waterfall shows a common performance problem that can be solved using preload. A CSS file references a font, but this font doesn't start loading until the CSS file has been fetched.

Sequential requests without preload

By adding a preload tag to the document, the browser can start loading the font and the CSS file at the same time. You can see this on the Shopify homepage, where the fonts are preloaded to make sure the page quickly renders with the correct fonts.

Parallel requests with preload
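A font preload tag might look like this — the file path is a placeholder, and the as, type, and crossorigin attributes all matter for fonts:

```html
<!-- Placeholder path: start the font download early, in CORS mode -->
<link rel="preload" href="/fonts/example.woff2" as="font" type="font/woff2" crossorigin>
```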

Preloading too many resources

Preload tags can help the browser prioritize important requests. But preloading a lot of files can actually make the prioritization worse, and your page will load more slowly.

Take a look at this example from the Asana homepage. The green line in the request waterfall shows when the page starts rendering.

Asana making a large number of requests before the page starts to render

It looks like all of these JavaScript files are render-blocking. But actually, the page contains 26 preload tags.

Instead of loading the important render-blocking resources, Chrome focuses on a large number of low-priority files.

Preloading unused resources

If a preloaded file isn't used Chrome will show a message like this:

The resource was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate as value and it is preloaded intentionally.

There are a few common causes of this:

  • using preload (high-priority) when you meant prefetch (low-priority)
  • removing a resource from a page, but not removing the corresponding preload tag
  • preloading resources loaded by a third-party script, but then the third-party stops using that resource

Deprioritizing important resources

This waterfall shows a site with a large render-blocking CSS file and a preloaded JavaScript file that's used later on (angular.js).

The preload is competing with the render-blocking file for bandwidth. As a result, the download takes longer and the page renders more slowly.

CSS and JavaScript files competing for bandwidth

The page renders 0.9s faster without the preload.

CSS file loading before JavaScript

The downside of this change is that the JavaScript file will now finish loading later. Depending on the relative importance of the initial render compared to the full app load this may be fine, or you might actually want to intentionally prevent rendering until the app code has loaded.

One simple way to speed up this example is using a preconnect tag. This makes sure that the browser establishes a server connection, but does not start consuming bandwidth downloading the resource.

Preconnecting to the server hosting the JavaScript file
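A preconnect hint might look like this (the hostname is a placeholder):

```html
<!-- Set up DNS + TCP + TLS early, without downloading anything yet -->
<link rel="preconnect" href="https://cdn.example.com">
```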

If the app code is important, but less important than the initial render, consider lazy-loading other content like images. That way bandwidth will first be used to load the render-blocking CSS file, then to load the app code, and finally to load the images.

CORS mode mismatch

Preload can prioritize resource loading by starting requests early, but this only works if the subsequent browser request matches the preload request. (Thanks to Jakub for bringing up this issue on Twitter.)

This is especially common with fonts, which are always requested in anonymous CORS mode.

Let's say this is your preload tag, with no crossorigin attribute (the file path is illustrative):

<link rel="preload" href="/fonts/example.woff2" as="font" type="font/woff2">
If you look at the request waterfall you can see that the font is actually loaded twice.

(In this case it's then served from the browser cache. This behavior seems inconsistent and might be a Chrome bug.)

Font is loaded again

The console will show a warning like this:

A preload for '' is found, but is not used because the request credentials mode does not match. Consider taking a look at crossorigin attribute.

Adding the crossorigin attribute to the link tag ensures that both requests are made using CORS headers. The preload response can then be safely reused for the actual font load.
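The fixed tag might look like this (the path is again illustrative):

```html
<!-- crossorigin makes the preload match the font's anonymous CORS request -->
<link rel="preload" href="/fonts/example.woff2" as="font" type="font/woff2" crossorigin>
```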

Preloaded font is used, no second request

How to check if browser resource hints are working correctly

We've built a free resource hint validator that automatically tests your page for common problems.

Browser resource hint validator

Monitor site speed and Core Web Vitals over time

DebugBear keeps track of your website speed over time and provides an in-depth analysis that you can use to make it faster.

Start a free 14-day trial.

DebugBear monitoring data

<![CDATA[Optimizing React performance by preventing unnecessary re-renders]]> /react-rerenders Fri, 12 Feb 2021 00:00:00 GMT Re-rendering React components unnecessarily can slow down your app and make the UI feel unresponsive.

This article explains how to update components only when necessary, and how to avoid common causes of unintentional re-renders.

Use React.memo or React.PureComponent

When a component re-renders, React will also re-render child components by default.

Here's a simple app with two Counter components and a button that increments one of them.

Simple app using React.memo

function App() {
  const [counterA, setCounterA] = React.useState(0);
  const [counterB, setCounterB] = React.useState(0);

  return (
    <div>
      <Counter name="A" value={counterA} />
      <Counter name="B" value={counterB} />
      <button
        onClick={() => {
          console.log("Click button");
          setCounterA(counterA + 1);
        }}
      >
        Increment counter A
      </button>
    </div>
  );
}

function Counter({ name, value }) {
  console.log(`Rendering counter ${name}`);
  return (
    <div>
      {name}: {value}
    </div>
  );
}

Right now, both Counter components render when the button is clicked, even though only counter A has changed.

Click button
Rendering counter A
Rendering counter B

The React.memo higher-order component (HOC) can ensure a component is only re-rendered when its props change.

const Counter = React.memo(function Counter({ name, value }) {
  console.log(`Rendering counter ${name}`);
  return (
    <div>
      {name}: {value}
    </div>
  );
});

Now only counter A is re-rendered, because its value prop changed from 0 to 1.

Click button
Rendering counter A

For class-based components

If you're using class-based components instead of function components, change extends React.Component to extends React.PureComponent to get the same effect.

Make sure property values don't change

Preventing the render in our example was pretty easy. But in practice this is more difficult, as it's easy for unintentional prop changes to sneak in.

Let's include the Increment button in the Counter component.

React.memo demo with callback prop

const Counter = React.memo(function Counter({ name, value, onClickIncrement }) {
  console.log(`Rendering counter ${name}`);
  return (
    <div>
      {name}: {value} <button onClick={onClickIncrement}>Increment</button>
    </div>
  );
});

The App component now passes in an onClickIncrement prop to each Counter.

<Counter name="A" value={counterA} onClickIncrement={() => setCounterA(counterA + 1)} />

If you increment counter A, both counters are re-rendered.

Rendering counter A
Rendering counter B

Why? Because the value of the onClickIncrement prop changes every time the app re-renders. Each function is a distinct JavaScript object, so React sees the prop change and makes sure to update the Counter.

This makes sense, because the onClickIncrement function depends on the counterA value from its parent scope. If the same function was passed into the Counter every time, then the increment would stop working as the initial counter value would never update. The counter value would be set to 0 + 1 = 1 every time.

The problem is that the onClickIncrement function changes every time, even if the counter value it references hasn't changed.

We can use the useCallback hook to fix this. useCallback memoizes the function that's passed in, so that a new function is only returned when one of the hook dependencies changes.

In this case the dependency is the counterA state. When this changes, the onClickIncrement function has to update, so that we don't use outdated state later on.

onClickIncrement={React.useCallback(() => setCounterA(counterA + 1), [counterA])}

If we increment counter A now, only counter A re-renders.

Rendering counter A

For class-based components

If you're using class-based components, add methods to the class and use the bind function in the constructor to ensure it has access to the component instance.

constructor(props) {
  super(props);
  this.onClickIncrementA = this.onClickIncrementA.bind(this);
}

(You can't call bind in the render function, as it returns a new function object and would cause a re-render.)

Passing objects as props

Unintentional re-renders not only happen with functions, but also with object literals.

function App() {
  return <Heading style={{ color: "blue" }}>Hello world</Heading>;
}

Every time the App component renders a new style object is created, leading the memoized Heading component to update.

Luckily, in this case the style object is always the same, so we can just create it once outside the App component and then re-use it for every render.

const headingStyle = { color: "blue" };

function App() {
  return <Heading style={headingStyle}>Hello world</Heading>;
}

But what if the style is calculated dynamically? In that case you can use the useMemo hook to limit when the object is updated.

function App({ count }) {
  const headingStyle = React.useMemo(
    () => ({
      color: count < 10 ? "blue" : "red",
    }),
    [count < 10]
  );
  return <Heading style={headingStyle}>Hello world</Heading>;
}

Note that the hook dependency is not the plain count, but the count < 10 condition. That way, if the count changes, the heading is only re-rendered if the color would change as well.

children props

We get the same problems with object identity and unintentional re-renders if the children we pass in are more than just a simple string.

<Heading><strong>Hello world</strong></Heading>

However, the same solutions apply. If the children are static, move them out of the function. If they depend on state, use useMemo.

function App({ count }) {
  const content = React.useMemo(
    () => <strong>Hello world ({count})</strong>,
    [count]
  );

  return <Heading>{content}</Heading>;
}

Using keys to avoid re-renders

Key props allow React to identify elements across renders. They're most commonly used when rendering a list of items.

If each list element has a consistent key, React can avoid re-rendering components even when list items are added or removed.

List key demo

function App() {
  console.log("Render App");
  const [items, setItems] = React.useState([{ name: "A" }, { name: "B" }]);
  return (
    <div>
      {items.map((item) => (
        <ListItem item={item} />
      ))}
      <button onClick={() => setItems(items.slice().reverse())}>Reverse</button>
    </div>
  );
}

const ListItem = React.memo(function ListItem({ item }) {
  console.log(`Render ${item.name}`);
  return <div>{item.name}</div>;
});

Without the key on <ListItem> we're getting a Warning: Each child in a list should have a unique "key" prop message.

This is the log output when clicking on the Reverse button.

=> Reverse
Render App
Render B
Render A

Instead of moving the elements around, React updates both of them and passes in the new item prop.

Adding a unique key to each list item fixes the issue.

<ListItem item={item} key={item.name} />

React can now correctly recognize that the items haven't changed, and just moves the existing elements around.

What's a good key?

Keys should be unique, and no two elements in a list should have the same key. The key we used above isn't ideal because of this, as multiple list elements might have the same name. Where possible, assign a unique ID to each list item – often you'll get this from the backend database.

Keys should also be stable. If you use Math.random() then the key will change every time, causing the component to re-mount and re-render.

For static lists, where no items are added or removed, using the array index is also fine.

Keys on fragments

You can't add keys to fragments using the short syntax (<>), but it works if you use the full name:

<React.Fragment key={item.name}>

Avoid changes in the DOM tree structure

Child components will be remounted if the surrounding DOM structure changes. For example, this app adds a container around the list. In a more realistic app you might put items in different groups based on a setting.

Toggle container demo

function App() {
  console.log("Render App");
  const [items, setItems] = React.useState([{ name: "A" }, { name: "B" }]);
  const [showContainer, setShowContainer] = React.useState(false);
  const els = items.map((item) => <ListItem item={item} key={item.name} />);
  return (
    <div>
      {showContainer ? <div>{els}</div> : els}
      <button onClick={() => setShowContainer(!showContainer)}>
        Toggle container
      </button>
    </div>
  );
}

const ListItem = React.memo(function ListItem({ item }) {
  console.log(`Render ${item.name}`);
  return <div>{item.name}</div>;
});

When the parent component is added, all existing list items are unmounted and new component instances are created. React Developer Tools shows that this is the first render of the component.

React Developer Tools update because of first render

Where possible, keep the DOM structure the same. For example, if you need to show dividers between groups within the list, insert the dividers between the list elements, instead of adding a wrapper div to each group.

Monitor the performance of your React app

DebugBear can track the load time and CPU activity of your website over time. Just enter your URL to get started.

Total Blocking Time timeline

<![CDATA[Debugging site speed with the Chrome DevTools Network tab]]> /devtools-network Mon, 04 Jan 2021 00:00:00 GMT Chrome's developer tools provide a lot of information on what's slowing down your site and how to make it faster. This article explains how to use the DevTools Network tab to debug performance issues.

Getting started: is the network the performance bottleneck?

Before looking at the requests made by the page we first need to check if the network is actually what's slowing it down. Heavy CPU processing is also a common cause of slow page load times.

To check what's slowing down your page, open Chrome DevTools by right-clicking on the page and selecting Inspect. Then select the Performance tab and click the Start profiling and reload page button.

If the CPU timeline contains a lot of orange then the page is running a lot of JavaScript, and it might be better to look into that instead of focusing on the network.

Example 1: Youtube homepage

The Youtube homepage spends a lot of time running JavaScript and rendering the UI.

Youtube homepage CPU timeline

Switching to the Network tab, we can see that the document request could probably be sped up, and the JavaScript bundle could be loaded more quickly. But, compared to the 6.5s of JavaScript execution time, this isn't too important.

Youtube DevTools Network tab

Example 2: Getty Images homepage

By comparison, the Getty Images homepage doesn't require a lot of CPU processing.

Getty images CPU timeline

Instead, rendering is blocked by a large number of concurrent image requests, slowing down a render-blocking script.

Getty Images network tab

Here the Network tab will help identify opportunities to improve site performance.

Finding the cause of a slow request

If a request is slow, then either the server needs to respond to requests more quickly, or the size of the response needs to be reduced.

To break down the duration of a request, either hover over the Waterfall column, or click on the request and select the Timing tab.

This way you can find out if the request is slow because the response takes a long time to download (left), or if the server takes a long time before starting to send the response (right).

Long Content download on the left, long Waiting (TTFB) on the right

Large responses

If a response takes too long to download you need to make the response body smaller.

For example, if the slow request loads an image:

  • Serve a modern format like WebP to browsers that support it
  • Increase image compression
  • Resize the image so it's not larger than necessary

Or, for JavaScript files:

  • Use gzip or brotli to compress the response
  • Minify the JavaScript code
  • Remove large dependencies
  • Lazy load non-essential code

Slow server responses

To resolve slow server responses you'll need to look at your backend code. It might be doing unnecessary work or running slow database queries.

Learn more about reducing slow Time To First Byte (TTFB).

Network throttling

As a developer you probably use a relatively fast internet connection. So a 10MB image might load quickly on your computer, but will take a long time on a 3G connection. Likewise, running your server locally means there's practically no round-trip latency.

To make investigating performance easier, Chrome DevTools includes a network throttling option that artificially increases response delays and reduces bandwidth. This lets you simulate how your site loads on a slower connection.

To enable throttling, select an option from the dropdown on the right of the "Disable cache" checkbox.

Chrome DevTools network throttling

Here's a site on a fast connection without any throttling.

Loading a site without throttling

And here's the same site loaded using the Slow 3G setting.

Loading a site with network throttling

Throttling the network allows you to watch your page render gradually, observe what order content is displayed in, and see which resources block rendering.

Note that DevTools uses a relatively basic type of network throttling. Learn more about the different types of network throttling.

Network round-trips

A network request to a new website consists of multiple sequential round-trips:

  • DNS lookup to resolve the domain name
  • Establish TCP connection
  • Establish SSL connection
  • Make the actual HTTP request

The DevTools waterfall shows each part of the request in a different color.

Network round-trips on first load, includes DNS and TCP time

Again, you can hover over the request to get an explanation of the breakdown.

Chrome DevTools request breakdown showing DNS lookup, Initial TCP connection, and SSL time

However, you'll only see these round-trips if Chrome hasn't previously connected to the website's server. If you load the same page again you'll only see the actual request round-trips, as the existing server connection is reused.

Second load reuses existing server connection

Clearing the DNS and connection caches

To simulate the first load experience you need to clear Chrome's DNS and connection caches.

1. Clear OS-level DNS cache

On Mac OS, run this in the terminal:

sudo killall -HUP mDNSResponder

On Windows:

ipconfig /flushdns

2. Clear DNS cache via Chrome

Go to chrome://net-internals/#dns and click Clear host cache.

This button sounds like it should clear the OS-level cache, but in my experience doing this alone isn't enough.

Chrome DNS page

3. Close existing server connections

Go to chrome://net-internals/#sockets and click Flush socket pools.

Chrome Sockets page

Then reload the page you're testing and you should see the DNS lookup again, as well as the time spent on establishing the TCP connection.

DevTools DNS and TCP roundtrips

DevTools Network settings

Click the gear icon in the top right corner of the Network tab to view the Network tab settings.

Gear icon for DevTools network settings

Large request rows

This setting means additional information will be shown in each row. For example, the Size column will show both the compressed response size that's transmitted over the network and the full unzipped size of the response body.

The Time column will show the server response time in addition to the total request duration.

Large DevTools network request rows with additional information


Capture screenshots

Check Capture screenshots to view a rendering filmstrip alongside your requests. This helps identify requests that block a particular part of your page from loading.

Hovering over a screenshot shows a yellow line in the waterfall column, indicating when that screenshot was taken.

Screenshots above network requests table

Group by frame

The Group by frame option can make the list of requests more manageable if a lot of requests are made by iframes.

Requests grouped by iframe

Request columns

You can customize what information Chrome shows about each request.

Customized request columns showing Connection ID and content encoding

Right-click on the requests table to select the columns you want to see.

Context menu showing column options

Connection ID

This column shows which server connection was used for each request. Ideally you want to avoid creating a new connection and instead use the same connection for many requests. This avoids the round-trips involved in establishing a new server connection.

If the Protocol column shows h2 for HTTP/2 then the browser can reuse the same connection for multiple concurrent requests.


Initiator

The initiator column explains why Chrome made the request. This could be because of an image tag in the HTML, or because of a fetch call in the JavaScript code. Click on the link in the column to see the relevant source code.

If you hover over the initiator of a resource that was loaded via JavaScript, you can see a call stack showing where the request was made.

JavaScript initiator call stack


Cookies

This column shows the number of cookies that were sent when making the request.

Content-Encoding header

This shows how the response body was compressed, e.g. using gzip or br (Brotli).

Copy as fetch/cURL

This menu option generates code for making the selected request and copies it to the clipboard. For example, it can generate a browser fetch call or a cURL command that can be run in the terminal.

Copy as fetch dialog

This is an example of a fetch call generated by Chrome DevTools:

fetch("", {
  "headers": {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "accept-language": "en-US,en;q=0.9",
    "cache-control": "no-cache",
    "pragma": "no-cache",
    "upgrade-insecure-requests": "1"
  },
  "referrerPolicy": "strict-origin-when-cross-origin",
  "body": null,
  "method": "GET",
  "mode": "cors",
  "credentials": "omit"
});

This feature is helpful when debugging failing requests in the front-end, or to make it easy for a backend developer to replicate a request error.

Highlight initiators

Hold the Shift key while hovering over the list of requests to see how different requests relate to each other.

The request's initiator is shown in green. Requests that it initiated are shown in red.

View request initiators in Chrome DevTools

Export and import HAR

If you want to share more in-depth debugging information you can export an HTTP Archive (HAR) file that contains information about all requests made on that page.

Another developer can then investigate the issue on their machine and figure out what went wrong.

The import/export HAR buttons are in the top right corner of the Network tab.

DevTools import and export HAR buttons

Viewing request headers and response status

Click on each request to view the request headers as well as the response headers and status returned by the servers.

Here you can see what cache settings the server provided, or what cookies were sent along with the request.

DevTools network details showing request and response headers

Filtering requests

Click on the funnel icon to search the list of requests or only show specific request types.

The search filter supports regular expressions, so if you want to see both CSS and font files you could use /css|woff2/ as a filter.

DevTools network requests regex filtering

You can search the response text of all page requests using the search feature. Click on the magnifying glass to the left of the "Preserve log" checkbox to start a search.

Full-text search in Chrome DevTools

Monitoring site speed over time

DebugBear is a site speed monitoring service that continuously tests your website.

In addition to keeping track of site speed metrics, we also provide in-depth debug data to help you speed up your website and understand performance regressions.

Page weight regression in DebugBear

<![CDATA[Why is the Google Cloud UI so slow?]]> /slow-google-cloud-ui Wed, 09 Dec 2020 00:00:00 GMT Opening a page in the Google Cloud Console always takes a long time.

Here are some metrics I collected on a high-end 2018 MacBook Pro on a UK-based Gigabit internet connection.

Page             Download  JavaScript  CPU Time  Main Content  Fully Loaded
Cloud Functions  4.2 MB    15.7 MB     5.3s      6.7s          8.1s
Compute Engine   4.5 MB    15.1 MB     6.5s      6.7s          8.1s
Cloud Storage    4.3 MB    16.2 MB     6.2s      6.5s          8.2s
Download size is the compressed size; JavaScript size is uncompressed. Main Content is the time when the primary content (e.g. the list of Cloud Functions) becomes visible; Fully Loaded is when no more changes are made to the UI.

We can see that each page loads over 15 MB of JavaScript code. A look at the performance timeline in Chrome DevTools confirms that running this code is the primary cause of the poor page performance.

DevTools CPU timeline showing a large amount of JavaScript work

This article will take a closer look at the page load process of the Google Cloud Functions page, and examine how it could be sped up.

You can use these strategies to investigate and improve the performance of the apps you're working on.

Loading the HTML document

The initial HTML request is very fast and only takes about 150ms. It contains an embedded SVG spinner that shows while the first chunk of JavaScript code is loading.

Loading the initial JavaScript bundles

These are the first two JavaScript bundles the page starts loading.

  • routemap 21 KB (103 KB uncompressed)
  • core,pm_ng1_bootstrap 1.3 MB (4.8 MB uncompressed)

These files don't take too long to download, but running the code freezes the UI for a while. The spinner SVG becomes stuck at this point, until it's replaced by a skeleton UI for Google Cloud Console page.

Filmstrip showing initial rendering of the GCP page

Here's what happens when the browser wants to run some JavaScript code.

  1. Parsing (done lazily at first, and then as needed later on)
  2. Compilation (also happens lazily)
  3. Initialization – the browser runs module initialization code, i.e. code that runs when loading a module rather than when calling one of its functions
  4. Running core app code – renders the application using the initialized modules
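The cost of step 3 can be sketched in code (buildTable is a hypothetical stand-in for expensive setup work): top-level module code runs as soon as the bundle is evaluated, even if the result is never used, whereas deferring the work until first call keeps startup cheap.

```javascript
// Stand-in for expensive module setup work.
function buildTable() {
  return Array.from({ length: 1000 }, (_, i) => i * i);
}

// Eager: runs during module initialization, delaying startup.
const eagerTable = buildTable();

// Lazy: runs on first use only, then caches the result.
let cachedTable;
function getTable() {
  return (cachedTable ??= buildTable());
}
```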

For the whole Google Cloud page, just parsing the source code takes 250ms, and compilation takes another 750ms (not including the 113 ms spent on "Compile Script").

DevTools profile showing a breakdown of CPU activity

The initial render of the Angular app takes about 1s.

JavaScript execution flamechart

Eventually we start to see a new spinner.

Page frame and new spinner

Loading page bundles

Once the generic Google Cloud UI is rendered the page starts loading 18 additional JavaScript files with an overall size of 1.5 MB.

Making a lot of separate requests isn't actually a problem though – it can improve performance by increasing the likelihood of cache hits, and splitting up bundles makes it easy to load only necessary code.

After loading the first set of bundles the app starts making fetch requests and loads 3 more bundles at a total size of 6 MB.

When loading the page on my normal network connection the requests all blurred together, and it was hard to see which requests were sequential. So this screenshot shows the request chart on a throttled connection.

Request waterfall showing three sets of JavaScript being loaded sequentially

Loading the list of Cloud Functions

The request loading the list of Cloud Functions takes about 700ms. But it doesn't start as soon as the bundles are loaded, in part because there's a testIamPermissions request that needs to finish first.

As a result the CPU ends up being idle for half a second – this time could be used better if the request started sooner.

Waterfall showing requests made to load the list of cloud functions
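This kind of sequencing fix can be sketched as follows (the endpoint names are hypothetical and the request function is injected so the example stays self-contained): if the permissions check and the data request don't actually depend on each other, starting them in parallel removes the idle gap.

```javascript
// Sketch: start independent requests at the same time instead of
// waiting for one to finish before issuing the next.
async function loadCloudFunctions(fetchFn) {
  // Sequential version (what the waterfall shows):
  //   await fetchFn("/testIamPermissions");
  //   await fetchFn("/listFunctions");

  // Parallel version: both requests start immediately.
  const [permissions, functions] = await Promise.all([
    fetchFn("/testIamPermissions"),
    fetchFn("/listFunctions"),
  ]);
  return { permissions, functions };
}
```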

Finally the app re-renders and we get the list of Cloud Functions we wanted to see.

Page showing GCP Cloud Functions

Detecting unused code in Chrome DevTools

Chrome DevTools has a code coverage tool that tracks which parts of the code actually run on the current page. This can help identify code that doesn't have to be loaded.

The Cloud Functions page runs 53% of the JavaScript code it downloads. This is actually a bit disappointing, as it means that even if only necessary code is loaded it would still only cut the total JavaScript size of the page in half.

Chrome DevTools Code Coverage tool

Moving configuration into JSON

A good amount of the code actually consists of configuration objects. For example, this 200 KB object with 4997 keys.

Configuration object in a JavaScript bundle

Loading this as a JSON string with JSON.parse could be faster, as JSON is simpler to parse than a JavaScript object. This would be easy to do, but might not result in a huge performance improvement.
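The pattern looks something like this (a sketch with generated placeholder data; real-world gains depend on the object):

```javascript
// Ship a large config as a JSON string and parse it at runtime, instead
// of declaring it as a JavaScript object literal in the bundle.
// (The config contents here are generated placeholders.)
const configJson = JSON.stringify(
  Object.fromEntries(Array.from({ length: 5000 }, (_, i) => [`key${i}`, i]))
);

// In the shipped bundle this would look like:
//   const config = JSON.parse('{"key0":0,"key1":1,...}');
const config = JSON.parse(configJson);
console.log(Object.keys(config).length); // 5000
```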

Ideally the app wouldn't need to load the full list on the client, but this would be harder to implement.

Reduce code duplication

The 200KB JSON object above is actually included in two of the JavaScript bundles. Breaking it out and reusing it would save download and processing time.

The same seems to apply to a bunch of UI components, like this one.

Duplicate code in DevTools code search

Prioritize primary content

The Google Cloud page loads a large initial JavaScript bundle. The longer it takes to load and initialize this code, the longer it takes to load page-specific code and to render the list of Cloud Functions the user wants to see.

But the initial bundle also contains secondary content, like the complex navigation sidebar. This menu becomes functional before the main page content is loaded, but it should only be loaded after the primary content.

Sidebar menu is open while main content is still loading

Google Cloud already does this in some cases. For example, the page initially renders a simpler version of the header and then loads more complex features later on.

Header doesn&#39;t show project dropdown at first and then shows it later


Conclusion

While the performance of static pages tends to be dominated by render-blocking network requests, single-page apps are often blocked by JavaScript execution or loading account data.

Downloading large amounts of code can hurt performance on slow connections, but due to compression and caching CPU processing often has a greater impact.

If you want to track the performance of your website, including logged-in pages, give DebugBear a try.

Monitoring the speed of your web app and making it faster

DebugBear makes it easy to keep track of the key web performance metrics of your site and produces in-depth reports showing you how to make it faster.

Start a free 14-day trial now.

Website monitoring data

<![CDATA[Website builder performance review]]> /website-builder-performance-review Thu, 19 Nov 2020 00:00:00 GMT Site builders let you create your own website without writing any code, but the websites they generate aren't always fast. Slow page load times not only affect the experience of your visitors, but can also hurt SEO.

I built a similar website using 14 different website builders and tested their site speed. This post first presents the overall results and then looks at each website maker in detail.

  1. Site builder performance results
  2. Rendering performance
  3. A look at the performance of each site builder
  4. Takeaways for site builder developers
  5. More metrics

Site builder performance results

The table below shows the test results for each website builder. It's sorted by the site's Lighthouse score, which gives an overall assessment of the performance of a web page. (This is also the score you would get from PageSpeed Insights.)

The Performance score isn't always the best metric to evaluate website performance. You can click on the column headings in the table to sort the different site builders by a different metric.

While Versoly has the highest Lighthouse score on mobile, Wix has the highest score on a desktop device. Strikingly renders initial content the fastest, but it takes a long time for the page to become interactive.

Site Builder   Score  FCP     SI      LCP     TTI     CPU     Size
Versoly        80     2.11 s  3.43 s  4.37 s  4.38 s  672 ms  453 KB
Webflow        77     1.74 s  3.13 s  4.95 s  4.96 s  1.35 s  671 KB
Wix            72     1.67 s  2.67 s  5.24 s  6.69 s  3.26 s  759 KB
Site123        67     2.61 s  3.21 s  3.40 s  5.57 s  2.30 s  558 KB
GoDaddy        63     2.30 s  3.02 s  3.93 s  7.02 s  3.77 s  783 KB
Jimdo          58     3.62 s  5.54 s  5.70 s  4.34 s  1.35 s  517 KB
Yola           54     2.08 s  4.60 s  4.62 s  3.65 s  3.22 s  615 KB
Webnode        48     3.75 s  4.92 s  9.05 s  6.26 s  2.21 s  855 KB
Weebly         39     3.40 s  6.74 s  7.33 s  7.40 s  3.74 s  996 KB
Wordpress.com  34     2.65 s  4.88 s  5.54 s  15.9 s  9.46 s  878 KB
Strikingly     32     1.12 s  4.01 s  22.5 s  28.7 s  10.6 s  2.32 MB
SquareSpace    31     2.09 s  8.29 s  8.79 s  6.97 s  3.56 s  994 KB
Weblium        23     3.68 s  6.44 s  6.93 s  19.0 s  3.67 s  1.14 MB
UCraft         18     2.67 s  10.6 s  15.6 s  22.7 s  10.4 s  3.29 MB

Performance metrics

Lighthouse Performance Score

This is an overall assessment of the website's performance, combining 6 different metrics into a score ranging from 0 to 100.

First Contentful Paint (FCP), Speed Index (SI), and Largest Contentful Paint (LCP)

The First Contentful Paint measures when the user first starts seeing page content, such as text or an image.

Speed Index visually measures how quickly the page content reaches its final state.

The Largest Contentful Paint measures when the largest element on the page was rendered. Unlike Speed Index, the LCP will increase even if the newly painted element is similar to the previous element content.

TTI (Time to Interactive)

Time to Interactive measures how quickly the page becomes idle, meaning there isn't much ongoing network or CPU activity. This usually means that any interactive elements on the page are ready to be used by the visitor.

CPU Processing Time

This measures how much time the browser spends on things like running JavaScript code or rendering page content.

Page Size

This measures the overall (compressed) download weight of the page.

Rendering performance

Minimizing the time it takes for page content to appear after navigating to a page is one of the most important aspects of site performance.

The image below shows a side-by-side view of the rendering timelines of all tested website builders.

Filmstrips for all tested website builders

(I added Webflow later, so it's not included in this image.)

A look at the performance of each site builder


Versoly

Versoly takes a while to render the image, but doesn't run any additional JavaScript processing once the page has loaded.

A different default background color for the image would make it easier to read the text early on.

Versoly performance filmstrip

There are two render-blocking CSS files, both on different domains. That means the existing server connection can't be re-used. The background image does not start loading until the render-blocking CSS has finished loading.

Versoly request chart

Versoly only makes 10 requests overall, which is the lowest request count among all tested site builders.


Webflow

Webflow also starts to render quite quickly, but then takes a while to start downloading the image.

Webflow performance filmstrip

With a size of just 2KB, the initial page HTML loads very quickly. However, there are two render-blocking CSS and JavaScript requests, both on different domains that require a new server connection.

The hero background image is defined in the render-blocking CSS file, and therefore doesn't start to load until after the CSS file.

At 355KB the background image is also fairly large and takes a while to download. It could be worth first serving a low-resolution version of the image before starting the full download.

Webflow request chart


Wix

Wix quickly renders the general page layout and then loads the image. It looks like the text doesn't show up until a web font is loaded; however, this doesn't seem to be the case when testing with a newer version of Chrome.

Wix performance filmstrip

Compared to Versoly, the Wix site is a lot chunkier with 66 network requests and 2s of JavaScript execution time.

Wix does not have any render-blocking resources apart from the root document, but downloading the 102KB HTML document can take half a second on a slow connection.

Wix request chart


Site123

The Site123 page loads fairly quickly, loading two render-blocking CSS files in addition to the document. Once the site has rendered there isn't much additional work done by the CPU.

Site123 performance filmstrip

The background image starts loading immediately after the HTML response has arrived, thanks to a rel="preload" tag.

Site123 request chart
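The preload technique used here looks something like this in the document head (the image URL is illustrative, not taken from the actual site):

```html
<link rel="preload" as="image" href="/images/hero-background.jpg">
```

This tells the browser to start fetching the image as soon as it sees the HTML, instead of waiting to discover it in a stylesheet or rendered markup.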


GoDaddy

The GoDaddy site contains several render-blocking scripts and stylesheets. The background image loads quickly, as GoDaddy first serves a 1.3KB low-resolution image before serving the higher-resolution version.

GoDaddy performance filmstrip

In total, GoDaddy makes 148 network requests when loading the page. Part of this consists of initializing a service worker, so that the site is available offline after the initial load.

GoDaddy request chart


Jimdo

The initial render for the Jimdo site is quite slow, and it takes over 6s for the image to show up.

Jimdo performance filmstrip

The cause of this is a chain of render-blocking CSS requests. First, Jimdo loads layout.css, which in turn contains an @import statement fetching a font definition.

Jimdo request chart
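One way to flatten such a chain (file names here are illustrative) is to reference both stylesheets directly from the HTML instead of chaining them with @import, so the browser can request them in parallel:

```html
<!-- Instead of layout.css containing @import url("font.css"): -->
<link rel="stylesheet" href="/layout.css">
<link rel="stylesheet" href="/font.css">
```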


Yola

The Yola site only has partial server-side rendering; most of the work is done by client-side JavaScript. As a result, the page spends 1s running JavaScript before starting to load the background image.

Yola performance filmstrip

The JavaScript application code, as well as multiple CSS files, also block the initial render for a while.

Yola request chart


Webnode

On the Webnode site, no content is rendered for the first 4s.

Webnode performance filmstrip

Again we can see a chain of render-blocking CSS requests. The first Typekit CSS file uses @import to load another CSS file.

Importantly, this CSS file is on a different domain, meaning a new server connection has to be set up. This could be sped up by preconnecting to the domain.

Webnode request chart
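A preconnect hint lets the browser set up the cross-origin connection (DNS, TCP, SSL) while the first CSS file is still downloading. A sketch, with an illustrative domain:

```html
<link rel="preconnect" href="https://use.typekit.net" crossorigin>
```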


Weebly

Weebly starts rendering after 4s, but text doesn't appear until 5 seconds after opening the page.

Weebly performance filmstrip

This is because the page is waiting to load the web fonts before rendering the text. This delay could be avoided by using the CSS font-display: swap option, which would use a default font until the desired font is available.

Weebly request chart
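The fix could look like this in the @font-face rule (the font name and URL are hypothetical):

```css
@font-face {
  font-family: "BrandFont"; /* hypothetical */
  src: url("/fonts/brand.woff2") format("woff2");
  /* Render text with a fallback font immediately, then swap in the
     web font once it has loaded. */
  font-display: swap;
}
```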

Wordpress.com

The Wordpress.com site starts to render after about 4s and the image loads after 6s.

Wordpress performance filmstrip

After the initial load it takes a while for the page to become idle. It makes a total of 355 requests, mostly contacting various advertising domains.

Wordpress request chart


Strikingly

Strikingly inlines all necessary CSS into the initial document request. As a result, there are no render-blocking scripts or stylesheets, and Strikingly has the fastest First Contentful Paint with a value of just 1.1s.

Strikingly performance filmstrip

However, the site then continues to download 1.8MB of JavaScript code. All of this needs to load and execute before the page becomes interactive, for example so that visitors can click on the menu icon.

To speed up that process, the bundle size could be reduced and the two scripts could be loaded in parallel.

Strikingly request chart

When testing the site with PageSpeed Insights, Lighthouse actually prematurely ends the test before the page finishes loading and becomes interactive. As a result, it reports a Time to Interactive of just 4.5s, and an overall Performance score of 72.

Strikingly PageSpeed Insights score


SquareSpace

SquareSpace starts loading reasonably quickly, but it then takes a long time before the background image shows up.

Squarespace performance filmstrip

This is because the image is not included in the HTML that was rendered on the server, and as a result the browser first needs to download and run 609KB of JavaScript code before starting to load the image.

Squarespace request chart


Weblium

Rendering of the Weblium site is blocked for a while due to various CSS files, including a 172 KB stylesheet with embedded web fonts.

Weblium performance filmstrip

After the initial render, Weblium launches a React app and starts downloading the background image. Later on in the process a 475 KB JavaScript file called legacy.js is loaded and run.

Weblium request chart


UCraft

The UCraft site starts rendering fairly quickly, but the background image is lazy loaded and depends on JavaScript code to run and trigger the image download.

UCraft performance filmstrip

Once that JavaScript code has run, the page not only downloads the background image but also another 1.16MB that isn't used on the site but appears to be part of the template I used.

UCraft request chart


Mozello

I tested Mozello as well, but didn't include it in the results because the free plan doesn't support HTTPS. HTTPS is good for security but slightly slows websites down, so it didn't feel like an apples-to-apples comparison.

Having said that, Mozello's mobile Lighthouse score of 83 was actually the highest overall score. I would expect Mozello sites to be fast even with HTTPS enabled.

With a page weight of 258 KB, Mozello also had the lowest download size. After the initial render there's no further network or CPU activity.

Mozello filmstrip


Test methodology

In each website builder, I created a simple site with two components: a heading section with a background image, and a three column text section. I tried to remove all unnecessary content from the page and disabled parallax on the image background where it was enabled by default.

BizSolutions website

Tests were run in Chrome 84 using Lighthouse 6.3.0, packet-level network throttling, and CPU throttling using Chrome DevTools. The throttling settings were chosen to match the default mobile and desktop configurations in Lighthouse.

Each site was tested 7 times and the median run was picked based on the First Contentful Paint and Time to Interactive metrics.

To ensure consistent measurements, and to avoid ending tests prematurely, the default Lighthouse network idle and CPU idle timeouts were increased from 5s to 10s.

Takeaways for site builder developers

There are a few lessons that can be drawn from this post:

  • avoid render-blocking request chains of more than 2 requests (e.g. using @import in your CSS)
  • when rendering the page on the server, make sure that above-the-fold images are included
  • avoid re-rendering the page using large JavaScript apps

More metrics

This table includes additional performance metrics for each website builder.

You can also find full, up to date performance results in the DebugBear project I used to run these tests.

Site Builder   Score  #Req  CLS   TBT     Size     JS       CSS      HTML
Versoly        80     10    0     69 ms   453 KB   43.1 KB  36.1 KB  2.15 KB
Webflow        77     29    0.03  130 ms  671 KB   54.1 KB  14.4 KB  2.27 KB
Wix            72     66    0.01  252 ms  759 KB   484 KB   1.38 KB  102 KB
Site123        67     25    0     802 ms  558 KB   253 KB   62.2 KB  8.96 KB
GoDaddy        63     148   0.1   707 ms  783 KB   484 KB   6.02 KB  14.1 KB
Jimdo          58     16    0     389 ms  517 KB   267 KB   71.4 KB  5.93 KB
Yola           54     24    0.01  1.40 s  615 KB   164 KB   6.85 KB  8.51 KB
Webnode        48     29    0     514 ms  855 KB   148 KB   57.0 KB  9.68 KB
Weebly         39     39    0.02  676 ms  996 KB   429 KB   43.2 KB  6.68 KB
Wordpress.com  34     355   0.01  1.64 s  878 KB   197 KB   84.0 KB  67.7 KB
Strikingly     32     52    0     7.28 s  2.32 MB  1.80 MB  76.5 KB  29.9 KB
SquareSpace    31     15    0.05  1.91 s  994 KB   610 KB   72.9 KB  18.5 KB
Weblium        23     21    0     1.83 s  1.14 MB  839 KB   199 KB   37.2 KB
UCraft         18     34    0.12  6.21 s  3.29 MB  1.35 MB  196 KB   20.7 KB

#Req Number of network requests
CLS Cumulative Layout Shift
TBT Total Blocking Time
Sizes Overall page weight and page weight for each resource type

What makes sites built with website builders slow?

Unlike custom-built websites, website builders face a special problem: their developers don't know what the final page will look like. It might be a simple static website with a contact form. Or there might be a blog or a store, or both.

Because many layout elements need to be supported, a lot of unnecessary code is loaded. While ideally only the necessary modules would be loaded, this requires more architecture work from the makers of the website builder.

Adding to this, each page element needs to be renderable in the site editor, and it's not always easy to split out the editor code from the code for the published site.

<![CDATA[Creating a web performance team]]> /web-performance-team Tue, 10 Nov 2020 00:00:00 GMT Creating a web performance team is essential for many online businesses. Improving web performance for the long term requires a culture that understands the value of performance and treats it as a priority.

Setting up a team comes with a variety of challenges, many of them depending on your company and its culture. This post guides you through some of these difficulties.

Get support from someone higher up in your organization

Creating a performance team is a lot of work. There are many things to prepare before the team starts making its first performance optimizations.

Starting is much easier if you have someone higher up in your organization supporting your efforts. Ideally, this is someone in management with budgetary responsibility. But if that's not the case, your team lead can also take on this role.

This person pushes web performance inside your company. They could promote your successes, organize budget, or help you to handle internal politics. They should understand the importance of performance and invest in your team.

To find someone, you need to promote your initiative. For example, you can speak at internal or public conferences. Company events like hackathons are a good way to start working on performance topics and form a group of co-workers that's interested in web performance.

Speaking the right language

Website performance means different things to different people. For a front-end developer, performance means site speed or load time. For a marketer, website performance relates to engagement, conversion rates, or page views. SEO experts talk about the ranking of key pages.

Managers look at the business value the website generates.

Planning for the long term requires these people to understand why web performance matters. They need to know the correlation between speed and measurable outcomes. You need to speak their language to convince these different disciplines.

You can find many performance case studies on WPO stats (WPO stands for Web Performance Optimization). These case studies show how performance impacts business results. You can filter the case studies by category to find ones discussing metrics that are relevant to the person in your organization you're targeting.

Building the team

No matter how big your company, start by finding like-minded people: people who are passionate about web performance and recognize its value.

Use your company meetings, hackathons, or any other events that your company already organizes. Pick a web performance topic and start forming a team around it. The advantage of finding like-minded people is that you don't need to convince them of the value web performance brings.

A good size to start with is 3 to 5 team members. Not all of them need to be performance engineers. Depending on your company's tech stack, the team could consist of many disciplines, including front-end developers, back-end developers, data analysts, marketers, and managers. A cross-functional or mixed performance team approach is a good option, but usually the team will lean toward technical roles.

Each development team could have a performance ambassador who's also a member of the performance team.

Performance team and performance ambassadors

Setting up a performance plan

After you've found a group of people who care about performance, start creating a performance plan that captures what your team will do.

It describes performance metrics, monitoring, infrastructure, and accountability for performance issues. Additionally, it defines the tools you use and how you'll educate and incentivize the people in your organization.

Goals and Engagement

To decide what goals to put into your performance plan, you need to identify what's most important for your users. What keeps them engaging with your website?

The resulting goal might be time on site or custom metrics like clicking on a link. You need to start measuring these metrics as they will form your baseline data for any changes you make in the future.

Mapping performance metrics to business and/or marketing metrics

Once you've gathered enough data, make sure that your team understands each metric. This knowledge is essential, as it forms the basis of all your future work.

In the next step, tie your performance metrics to business or marketing outcomes. For example, you could take the time it takes until the product page is interactive and compare that to the number of product purchases.

Page views versus bounce rate

Mapping business or marketing metrics to performance metrics is necessary for any performance team. The results will be the most potent arguments to push your work within the organization. And the results will help you estimate the impact of any optimizations you plan to make.

Accountability, education, and empowerment

Accountability is a core part of your performance plan. One goal of the performance team is to educate, incentivize, and empower people to care about performance.

Depending on the size of your company, you can't be accountable for every issue. You should teach people how they can care about performance. They should know how to use your performance tools.

Everybody who touches your website (developer, content manager, marketing consultant) is responsible for its performance. Make people accountable by giving them ownership, and help them identify performance issues on their own.

Performance budgets are a reliable way to monitor a page. You can specify a threshold for a metric, and you'll need to take action when this is exceeded. How you define your budget depends on your goals and user engagement metrics. For example, you could pick a certain Lighthouse score, or set a threshold for the download size of your images.

A quick tip for performance budgets: you can motivate your employees by choosing the status quo as your first performance budget. Whenever you've optimized your website, reduce the budget to the new values.

Example of a JavaScript size performance budget chart
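The budget idea can be sketched in a few lines of JavaScript. The metric names and thresholds below are invented for the example; in practice you'd start from your site's current (status quo) values:

```javascript
// Hypothetical performance budget check — metric names and thresholds
// are illustrative only, not recommendations.
const budget = {
  performanceScore: 80, // minimum acceptable Lighthouse performance score
  imageBytes: 300 * 1024, // maximum total image transfer size
};

function checkBudget(results) {
  const failures = [];
  if (results.performanceScore < budget.performanceScore) {
    failures.push("Lighthouse performance score is below budget");
  }
  if (results.imageBytes > budget.imageBytes) {
    failures.push("Total image size exceeds budget");
  }
  return failures;
}
```

A CI step could run this against fresh test results and fail the build whenever the returned list is non-empty.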

Suppose your company is too big to educate everyone about web performance. Your performance team will be busy, and you don't want them to become gatekeepers who block the work of other teams.

In that case, you can create performance ambassadors inside each team who can take charge without being reliant on the performance team.

Performance ambassadors in each team together form a performance team

Use performance tools and infrastructure

Web performance tools vary a lot. Some tools collect real user data, while others test site performance in a lab environment. Some are very good at monitoring your key pages, while others give you a better overview of your entire site. Which tool you should use depends on your website or project. In most cases, it will be a combination of multiple tools.

A developer could use Lighthouse to check if the code has a performance issue. After a commit, continuous integration can spin up some test servers, and you can test if any performance budgets have been breached.

Another milestone could be a performance test after each new release. You have to decide which of these approaches fits your needs. Testing on each commit could be pricey, but testing only releases makes it challenging to identify the cause of a performance regression.
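As one concrete option, if you adopt Lighthouse CI, a minimal configuration could collect several runs per commit and fail the build when an assertion is breached. The URL and threshold below are placeholders; this is a sketch, not a complete setup:

```javascript
// lighthouserc.js — illustrative Lighthouse CI config; adjust the URL
// and thresholds to your own project.
module.exports = {
  ci: {
    collect: {
      url: ["http://localhost:8080/"], // placeholder test URL
      numberOfRuns: 3, // repeat runs to reduce variability
    },
    assert: {
      assertions: {
        // Fail the build if the performance score drops below 0.8
        "categories:performance": ["error", { minScore: 0.8 }],
      },
    },
  },
};
```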

Celebrate your wins

Celebrating wins is an essential part of building a performance team. Actively celebrate successes periodically.

You could write a tweet or a blog post. Appoint a web performance employee of the month. Award a performance hero prize. The hero doesn't need to be a developer or a performance engineer. A performance hero could also be your marketing manager.

Internal or public PR

No matter where you work, it is vital to communicate your wins within the organization. You show your successes to your boss or team lead to get a promotion someday. The same goes for your web performance wins. Modestly show them around your company, create simple graphs, or write internal blog posts. The people in your company shouldn't take it for granted that your website is fast.


Optimization is one core job of your performance team. The performance team identifies and prioritizes opportunities for improvement. What aspects of your website you can optimize depends on your site and your users. Before you start, you need to analyze and monitor the performance of your website.

First, identify performance issues on your site and create a list of potential improvements.

Next, estimate the impact of each issue and how much effort would be required to address it.

In the beginning, this might be a little tricky as you won't have any past optimizations to compare to. You could start by creating a sample issue that everybody can estimate, and then compare other issues to the sample issue. It's simpler to compare a new issue to an existing one rather than speculate about the impact.

For example, let's say reducing all your images to under 100 KB takes two days and speeds up your site by 500 ms. One of your current issues could be to hide some pictures on mobile. In that case, removing images on mobile devices could influence load times, but it wouldn't take much time to implement.


One of your biggest enemies as a web performance team is regressions. Regressions happen more often than you think, as it's easier to make a website fast once than to keep it fast long-term. Sometimes preventing regressions can be more impactful than shipping optimizations.

Depending on your infrastructure, you can use your tools to prevent regressions. Put continuous integration performance testing in place, and send daily, weekly, or monthly reports. Besides that, send real-time performance alerts to an accountable person when an issue occurs.


Building a dedicated web performance team depends on web performance culture. This team's primary goal is to promote and encourage this culture. The members are doing this by sharing knowledge and educating all other employees.

Becoming and staying fast cannot be done without a group of people who care about web performance. The challenges might differ between companies, but it's worth it to stay fast.

<![CDATA[Reducing variability in web performance metrics]]> /web-performance-test-variability Thu, 05 Nov 2020 00:00:00 GMT Web performance metrics always vary somewhat between tests. This variability will be lower for simple static sites, and higher for complex dynamic content that changes every time the page is loaded.

One way to reduce variability in recorded metrics is to run each test several times and only save the average result.

This article will look at three different websites and investigate how much running tests 1, 3, 5, or 7 times reduces variability.

We'll also take a look at how to select the average result.

Impact of repeating web performance tests on metric chart variability


Three pages were included in the test:

  • the CircleCI homepage
  • a Wikipedia article
  • a New York Times article

The performance of each page was tested 150 times over the course of one week. Tests were repeated between 1 and 7 times.

Performance metrics were collected using Lighthouse with an emulated mobile device, packet-level network throttling, and DevTools CPU throttling. Note that Lighthouse uses simulated throttling by default, which generally reduces variability between test runs compared to other methods.

Determining the average result

Lighthouse CI determines the median run by looking at the First Contentful Paint (FCP) and Time to Interactive (TTI) metrics.

The median value is determined for both metrics and the test results are scored based on how far away from the median they are.

// Squared distance from the median run; the run with the lowest total wins
const distanceFcp = medianFcp - firstContentfulPaint
const distanceInteractive = medianInteractive - interactive
return distanceFcp * distanceFcp + distanceInteractive * distanceInteractive
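Putting this together, here is a runnable sketch of the selection process. The `median` helper and the simplified run objects are our own; real Lighthouse results carry many more fields:

```javascript
// Simplified median-run selection, modeled on the Lighthouse CI approach:
// score each run by its squared distance from the median FCP and TTI,
// then pick the run with the lowest score as the representative result.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

function selectMedianRun(runs) {
  const medianFcp = median(runs.map((r) => r.firstContentfulPaint));
  const medianTti = median(runs.map((r) => r.interactive));
  let best = null;
  let bestScore = Infinity;
  for (const run of runs) {
    const distanceFcp = medianFcp - run.firstContentfulPaint;
    const distanceInteractive = medianTti - run.interactive;
    const score =
      distanceFcp * distanceFcp + distanceInteractive * distanceInteractive;
    if (score < bestScore) {
      best = run;
      bestScore = score;
    }
  }
  return best;
}
```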

Baseline variability

First, let's take a look at the absolute Time To Interactive values. We can see that the Wikipedia page becomes idle much more quickly than the other two pages.

Time to Interactive: CircleCI 29s, Wikipedia 6s, NYT Article 36s

Accordingly, the absolute standard deviation is much lower for the Wikipedia article, with a value of 0.25s compared to over 2s for the other two pages.

To make it easier to compare the impact of running the tests several times, we'll mostly look at the coefficient of variation from now on. This takes the different absolute values into account.

Standard deviation and coefficient of variation for Time to Interactive
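Concretely, the coefficient of variation is the standard deviation divided by the mean, which puts metrics with different absolute values on the same relative scale. A small helper (our own, not from any library) makes the definition explicit:

```javascript
// Coefficient of variation = standard deviation / mean.
// Uses the population standard deviation for simplicity.
function coefficientOfVariation(values) {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance) / mean;
}
```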

Impact on variability

Now that we've established a baseline, let's look at how much repeating the test reduces variability.

Most of the variability reduction is achieved just by repeating the test 3 times, which makes sure that one-off flukes are thrown out. This reduced the coefficient of variation by 37% on average. Running the tests 7 times cuts variability in half.

Coefficient of variation reducing from 6.9% to 4.3% for CircleCI, 4.1% to 1.6% for Wikipedia, and 10.2% to 5.1% for the NYT Article

What does this look like in practice when you look at the monitoring results over time? The charts become much smoother as the number of test runs is increased.

Practical impact on web performance metric charts

However, the variability reduction for the First Contentful Paint is noticeably lower. Here, running the test 3 times reduces the coefficient of variation by 26%, running it 7 times reduces it by 37%.

We'll look into why this improvement is smaller later on in this article.

FCP CoV reducing from 11.7% to 7.9% for CircleCI, 13.2% to 6.7% for Wikipedia, and 9.5% to 6.6% for the NYT Article

Does repeating tests prevent outliers?

Instead of looking at the standard deviation, let's look at the overall range of the values. For example, for one test run the CircleCI Time To Interactive ranges from 26s to 34s, so the size of the range is 8s.

TTI range reductions

On average, running the tests 3 times instead of once reduced the Time To Interactive range by 34%. With 7 test runs this increased to 51%.

This chart shows an example of this. The green line shows the Time to Interactive when running the test 7 times, overlayed on top of the less stable blue line when the test is only run once.

Chart showing zaggy and smooth lines

How to select the average test run

One thing I noticed is that even when repeating the test several times, the FCP would still show occasional outliers.

One-off First Contentful Paint outlier

This chart shows the results of all 7 test runs. While the results clearly cluster around an FCP of 1.9s, the run with an FCP of 2.6s was selected instead.

Bubble chart with FCP on the x axis and TTI on the y axis, bubbles cluster in the top left but selected run is on the right by itself

To understand what's going on, let's review the distance formula and apply it to each test run.

distanceFcp * distanceFcp + distanceInteractive * distanceInteractive

Distance calculation for each run using Google Sheets

The range between the smallest and largest FCP values is just 0.8s. Compare that to the 10s range of the TTI metric.

As a result, for pages that take a while to become idle, TTI variation dominates the result selection process. And the TTI of Run 1 is actually the median value, so its distanceInteractive is 0. So it ends up getting selected as the average result despite being an outlier in terms of FCP.

You can tweak the selection algorithm based on the metrics whose variability you want to reduce. For example, you could weight the FCP differently, or throw out outliers for each metric.
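As one hypothetical variant (the normalization and the `FCP_WEIGHT` constant are our own illustration, not Lighthouse CI behavior), dividing each distance by the median puts FCP and TTI on a comparable relative scale, and a weight can then bias selection toward runs with a typical FCP:

```javascript
// Hypothetical weighted distance: relative distances stop TTI's larger
// absolute values from dominating, and FCP_WEIGHT favors a typical FCP.
// Both choices are illustrative and should be tuned for your metrics.
const FCP_WEIGHT = 2;

function runDistance(run, medianFcp, medianTti) {
  const dFcp = (medianFcp - run.firstContentfulPaint) / medianFcp;
  const dTti = (medianTti - run.interactive) / medianTti;
  return FCP_WEIGHT * dFcp * dFcp + dTti * dTti;
}
```

With this scoring, the outlier run from the example above would no longer win just because its TTI happened to sit at the median.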

<![CDATA[Building custom Lighthouse audits]]> /custom-lighthouse-audits Mon, 02 Nov 2020 00:00:00 GMT Lighthouse automatically evaluates the performance, accessibility, and technical SEO of your website. But did you know that you can add your own custom tests as well?

Custom audits can check requirements specific to your website. Here are some examples we'll discuss in this article:

  • Checking that Twitter Card meta tags are present
  • Ensuring that the page is included in the sitemap
  • Scoring a custom performance metric

Lighthouse social sharing custom audit

Lighthouse audits and categories

By default, a Lighthouse report consists of 5 categories:

  • Performance
  • Accessibility
  • Best Practices
  • SEO
  • Progressive Web App

Each category consists of multiple audits, each testing some aspect of the page. The audit scores are then combined into an overall category score.

Lighthouse categories contain audits groups, which contain audits, which contain details

You can see all predefined audits in the default Lighthouse config.

Adding a Twitter Cards audit

Our first custom audit will check if the page contains a twitter:card meta tag. If this meta tag is present, Twitter shows a page preview whenever someone tweets the URL.

1. Add Lighthouse to your node modules

npm init
npm install lighthouse

2. Create Twitter Cards audit

The TwitterCardsAudit consists of a bunch of metadata plus the audit logic that actually evaluates the page and assigns a score.

Artifacts contain information that Lighthouse has collected about the page. In this case that's the list of meta tags. We'll learn more about artifacts and how to create custom ones later on.

// twitter-cards.js
const { Audit } = require("lighthouse");

class TwitterCardsAudit extends Audit {
  static get meta() {
    return {
      id: "twitter-cards",
      title: "Has Twitter Cards meta tags",
      failureTitle: "Does not have required Twitter Cards meta tags",
      description: "Twitter needs a twitter:card meta element to show previews.",
      requiredArtifacts: ["MetaElements"],
    };
  }

  static audit(artifacts) {
    // MetaElements lists the meta tags Lighthouse collected from the page
    const twitterCardsMetaTag = artifacts.MetaElements.find(
      (meta) => meta.name === "twitter:card"
    );
    const score = twitterCardsMetaTag ? 1 : 0;

    return {