
Lighthouse Performance Score

Google's Lighthouse tool collects a range of performance metrics for a web page and combines them into a single score.

This article looks at how the Lighthouse Performance score is determined, how to interpret it, and how it varies between runs and tools.

Like the scores for other Lighthouse categories (e.g. SEO), the Performance score ranges from 0 to a perfect score of 100.

The Lighthouse Performance score is a lab-only metric specific to Lighthouse, and is not measured for real users.

Performance score in the Lighthouse report

How is the Lighthouse Performance score calculated?

The Performance score is made up of 6 different web performance metrics.

The overall composition has changed over time, but this is the breakdown as of Lighthouse 9.2 (Jan 2022).

Metric                    Acronym  % of Performance score
First Contentful Paint    FCP      10%
Speed Index               SI       10%
Largest Contentful Paint  LCP      25%
Time to Interactive       TTI      10%
Total Blocking Time       TBT      30%
Cumulative Layout Shift   CLS      15%

Note that only these metrics count for the Performance score – the Opportunities and Diagnostics audits do not directly impact it.
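The overall score is just a weighted average of the six metric subscores, using the weights from the table above. A minimal sketch (the dictionary structure is illustrative; Lighthouse's actual implementation lives in its scoring module):

```python
# Lighthouse 9 weights from the table above; each subscore is in the range 0-1
WEIGHTS = {"FCP": 0.10, "SI": 0.10, "LCP": 0.25, "TTI": 0.10, "TBT": 0.30, "CLS": 0.15}

def performance_score(subscores):
    """Combine per-metric subscores (0-1) into the 0-100 Performance score."""
    return round(100 * sum(WEIGHTS[m] * subscores[m] for m in WEIGHTS))
```

A page with perfect subscores of 1.0 on every metric scores 100; because TBT and LCP carry 55% of the weight between them, improving those two metrics moves the overall score the most.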

Lighthouse Performance audits

Lighthouse scoring calculator

Google provides a calculator for the overall Lighthouse score. You can change the value of each performance metric to see how it affects the metric subscore and overall score.

Lighthouse score calculator

Lighthouse Performance score vs. Core Web Vitals

Both Lighthouse and the Core Web Vitals were created by Google, so the Core Web Vitals metrics feature prominently in the Performance score.

As Lighthouse tests are run in the lab, they don't provide a First Input Delay metric. Without a user interacting with the page there is no input delay to record.

The Total Blocking Time metric aims to provide a lab alternative to First Input Delay. You can think of it as a "theoretical First Input Delay" – if the user were to interact with the page at some point, how often would they face an input delay?
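Total Blocking Time is derived from main-thread long tasks: any task longer than 50 ms contributes its excess over 50 ms as "blocking time". A simplified sketch of that calculation (real TBT only counts tasks between First Contentful Paint and Time to Interactive, which is omitted here):

```python
def total_blocking_time(task_durations_ms):
    """Sum the blocking portion (time beyond 50 ms) of each main-thread task.

    Simplified: Lighthouse only counts tasks between FCP and TTI.
    """
    return sum(max(0, duration - 50) for duration in task_durations_ms)
```

For example, tasks of 30 ms, 70 ms, and 250 ms yield 0 + 20 + 200 = 220 ms of blocking time.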

First Contentful Paint and Speed Index are also not included in the Core Web Vitals. These metrics are strongly correlated with the Largest Contentful Paint. While they do provide valuable information about how fast your page renders, they were not included as Core Web Vitals in their own right.

Finally, Time to Interactive is also not a Core Web Vitals metric. Time to Interactive provides useful information on how long it takes for your page to load fully, but it only looks at CPU and network activity rather than focusing on the user.

Performance score on mobile vs desktop devices

Unlike the Core Web Vitals, the Lighthouse Performance score uses different score thresholds on desktop and on mobile.

For example, to get an LCP subscore of 90+ in Lighthouse you need a Largest Contentful Paint below 2.5 seconds on mobile and below 1.2 seconds on desktop.
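Lighthouse maps each metric value onto a log-normal scoring curve anchored by two control points: the value that scores 0.9 and the value that scores 0.5. The sketch below uses the 2.5 s mobile and 1.2 s desktop LCP thresholds mentioned above as the 0.9 points; the 0.5-point values (4 s mobile, 2.4 s desktop) are assumptions based on Lighthouse's published scoring curves and may differ between versions:

```python
import math

# (p10, median) control points in ms: the LCP value scoring 0.9 and 0.5.
# Mobile/desktop 0.9 points match the article; the medians are assumed values.
LCP_CONTROL_POINTS = {"mobile": (2500, 4000), "desktop": (1200, 2400)}

def lognormal_score(value, p10, median):
    """Score a value on a log-normal curve passing through 0.9 at p10 and 0.5 at the median."""
    # erfc^-1(1.8) = -0.9061938..., so this shape makes the curve hit both control points
    shape = abs(math.log(p10 / median)) / (math.sqrt(2) * 0.9061938024368232)
    standardized = math.log(value / median) / (math.sqrt(2) * shape)
    return 0.5 * math.erfc(standardized)

def lcp_subscore(lcp_ms, device):
    """LCP subscore (0-1) for a given device's thresholds."""
    p10, median = LCP_CONTROL_POINTS[device]
    return lognormal_score(lcp_ms, p10, median)
```

The same 2.5 s LCP scores 0.9 on mobile but well under 0.9 on desktop, where the thresholds are stricter.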

Measuring the Lighthouse Performance score with PageSpeed Insights

The lab data in Google's PageSpeed Insights tool is collected from Lighthouse, so you can find the Performance score at the top of the Diagnose performance issues section.

Lighthouse scores on PageSpeed Insights

How to improve the Lighthouse Performance score

Look at the individual performance metrics to improve your Performance score. The documentation linked above provides a lot more detail on what you can do for each metric.

The Lighthouse report includes audits that show you how to improve your site speed. You can use the "Show audits relevant to" filter to view suggestions on how to optimize the metric you're interested in.

Lighthouse performance audit filter

Why is the Lighthouse Performance score inconsistent between runs?

The Lighthouse score often varies between tests, and this increases with the complexity of your website. Here are a few common reasons for this variability:

  • Different server response times (e.g. if your server is busy)
  • Different routes that data takes on the network
  • A/B tests or other page customizations
  • The computer running the tests might have used CPU power for other processes
  • Lighthouse might finish the test at different times, depending on whether it has seen a sufficiently long idle period

Why is the Lighthouse Performance score inconsistent between tools?

You will often see that different tools report different Performance scores. This might be for a few reasons:

  • different test locations (e.g. UK or US)
  • more or less powerful CPUs
  • different data quality levels
  • different Lighthouse settings
  • different versions of Lighthouse/Chrome

By default, Lighthouse is run using a low-accuracy mode that simulates a slow network rather than testing on a slow connection. For example, this is what PageSpeed Insights uses, and it's also the default setting in Chrome DevTools.
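With the Lighthouse CLI you can pick the throttling method explicitly via the `--throttling-method` flag (`example.com` is a placeholder URL):

```shell
# Default: simulated throttling - the page loads on a fast connection and
# Lighthouse estimates what the metrics would be on a slow one
lighthouse https://example.com --throttling-method=simulate

# Applied (DevTools-style) throttling: the network and CPU are actually
# slowed down while the test runs, which is slower but more direct
lighthouse https://example.com --throttling-method=devtools
```

Because the two methods measure different things, comparing a simulated-throttling score to an applied-throttling score from another tool will often show a gap.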

Learn more about why Lighthouse scores vary between tools.

How to monitor the Lighthouse Performance score

DebugBear is an automated tool that monitors Lighthouse scores over time. It reports high-quality data you can rely on and gives you access to in-depth performance reports.

Lighthouse score monitoring

DebugBear is a site speed monitoring service. Start tracking Lighthouse scores and Core Web Vitals in minutes.