Web Vitals FAQ

22 Mar 2021

Google will start using Core Web Vitals as part of their rankings in May 2021. This page goes through common questions about Core Web Vitals and how they affect rankings.

All answers are based on what John Mueller has said in the official Google Search Central office hours.

The quotes in this article have been edited to make them easier to read and are not always direct quotes from the videos. Each answer links to the original video where the question was answered.

  1. Does Google use field data or lab data for rankings?
  2. How much will Web Vitals impact rankings?
  3. What if a metric doesn't accurately reflect page experience?
  4. Google currently aggregates 28 days of CrUX data. Could Google update Web Vitals data more quickly?
  5. How often is Web Vitals data updated?
  6. Do the ranking changes apply to both mobile and desktop?
  7. How does Google group pages?
  8. Is performance data shared between subdomains?
  9. Do noindex pages affect Web Vitals?
  10. Do pages that are behind login impact Web Vitals?
  11. Why does Google Search Console show big fluctuations in Web Vitals?
  12. Is the yellow "needs improvement" rating good enough to rank well, or does the rating need to be "good"?
  13. Why are AMP and non-AMP pages sometimes shown separately?
  14. Why do I see different values for the Web Vitals in CrUX and PageSpeed Insights?

Does Google use field data or lab data for rankings?

We use the field data, which is based on what users actually see. So it's based on their location, their connections, their devices. That's usually something that mirrors a lot closer what your normal audience would see.

You can read more about field vs lab data here.

How much will Web Vitals impact rankings?

Page Experience will be an additional ranking factor, and Web Vitals will be a part of that.

Relevance is still, by far, much more important. So just because your website is faster than some competitors with regards to Core Web Vitals, that doesn't necessarily mean that come May you'll jump to position number one in the search results.

It should make sense for us to show this site in the search results. Because as you can imagine, a really fast website might be one that's completely empty. But that's not very useful for users. So it's useful to keep that in mind when it comes to Core Web Vitals.

It is something that users notice, and it is something that we will start using for ranking, but it's not going to change everything completely. So it's not going to destroy your site or remove it from the index if you get it wrong, and it's not going to catapult you from page 10 to the number one position if you get it right.

What if a metric doesn't accurately reflect page experience?

Automated performance metrics try to assess the user experience of a page, but they don't always match the experience of real users. Google is working on improving the metrics over time – you can find a changelog for the Core Web Vitals here.

John suggests contacting the Chrome team if the browser is misinterpreting the page experience.

I think for things where you feel that the calculations are being done in a way that doesn't make much sense, I would get in touch with the Chrome team. Especially Annie Sullivan is working on improving the Cumulative Layout Shift side of things.

Just make sure that they see these kinds of examples. And if you run across something where you say, oh, it doesn't make any sense at all, then make sure that they know about it.

You can see an example of a change in the Largest Contentful Paint definition here. Originally, a background image was dragging down the Performance score of the DebugBear homepage, but this was then fixed in a newer version of Chrome.

Change in LCP definition showing in monitoring data when upgrading Chrome

You can expect the page experience signals to be continuously updated, with some advance warning.

Our goal with the Core Web Vitals and page experience in general is to update them over time. I think we announced that we wanted to update them maybe once a year, and let people know ahead of time what is happening. So I would expect that to be improved over time.

Google currently aggregates 28 days of CrUX data. Could Google update Web Vitals data more quickly?

After making improvements to a site's performance it can take a while for those changes to be reflected in the Search Console. Can Google speed up the process?

I don't know. I doubt it. The data that we show in Search Console is based on the Chrome User Experience Report data, which is aggregated over those 28 days. So that's the primary reason for that delay there.

It's not that Search Console is slow in processing that data or anything like that. The way that the data is collected and aggregated just takes time.

So how can you catch regressions early and track performance improvements?

What I recommend, when people want to know early when something breaks, is to make sure that you run your own tests on your side in parallel for your important pages. And there are a bunch of third party tools that do that for you automatically.

You can also use the PageSpeed Insights API directly. Pick a small sample of your important pages and just test them every day. And that way, you can tell fairly quickly if there are any regressions in any of the changes you made.

Obviously a lab test is not the same as the field data. So there is a bit of a difference there. But if you see that the lab test results are stable for a period of time, and then suddenly they go really weird, then that's a strong indicator that something broke in your layout or in your pages somewhere.

DebugBear is one tool you can use to continuously monitor web vitals in a lab environment.
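
If you'd rather script the daily check yourself, here's a minimal sketch using the public PageSpeed Insights v5 API. The page URLs are placeholders, and the response fields read below (`lighthouseResult.audits`) reflect my understanding of the v5 response shape, so verify them against the API documentation; for regular automated use you'd also want to pass an API key.

```typescript
// Minimal sketch: query the public PageSpeed Insights v5 API for a few key pages.
const PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

const pagesToMonitor = [
  "https://www.example.com/",        // hypothetical URLs – replace with your own
  "https://www.example.com/pricing",
];

async function checkPage(pageUrl: string): Promise<void> {
  const params = new URLSearchParams({ url: pageUrl, strategy: "mobile" });
  const response = await fetch(`${PSI_ENDPOINT}?${params}`);
  const result = await response.json();

  // Lab (Lighthouse) values for the Core Web Vitals of this run.
  const audits = result.lighthouseResult.audits;
  console.log(pageUrl, {
    lcpMs: audits["largest-contentful-paint"].numericValue,
    cls: audits["cumulative-layout-shift"].numericValue,
    tbtMs: audits["total-blocking-time"].numericValue,
  });
}

async function main() {
  // Run once a day (cron job or CI schedule) and alert when values regress.
  for (const page of pagesToMonitor) {
    await checkPage(page);
  }
}

main().catch(console.error);
```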

How often is Web Vitals data updated?

The Chrome UX Report is updated daily, aggregating data from the last 28 days. If you make an improvement, after one day you're 1/28th of the way to seeing the results.
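As a back-of-the-envelope illustration (assuming evenly distributed traffic, and ignoring that CrUX reports percentiles of a distribution rather than a simple average), the reported value after a fix is roughly a blend of old and new measurements:

```typescript
// Illustrative only: assumes uniform daily traffic and a single "before"/"after" value.
// After `daysSinceFix` days, the 28-day aggregate blends old and new measurements.
function blendedMetric(oldValue: number, newValue: number, daysSinceFix: number): number {
  const windowDays = 28;
  const newShare = Math.min(daysSinceFix, windowDays) / windowDays;
  return newShare * newValue + (1 - newShare) * oldValue;
}

// Example: LCP improved from 4000 ms to 2000 ms.
console.log(blendedMetric(4000, 2000, 7));  // 3500 – still mostly the old value after a week
console.log(blendedMetric(4000, 2000, 28)); // 2000 – fully reflected after 28 days
```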

What if you fix performance a few days before the page experience ranking update is rolled out?

Probably we would not notice that. If you make a short term change, then probably we would not see that at the first point. But of course, over time, we would see that again. It's not the case that on that one date, we will take the measurement and apply that forever.

If you need a couple more weeks before everything is propagated and looks good for us as well, then, like, take that time and get it right and try to find solutions that work well for that.

The tricky part with the lab and the field data is that you can incrementally work on the lab data and test things out and see what works best. But you still need to get that confirmation from users as well with the field data.

Do the ranking changes apply to both mobile and desktop?

Just mobile for now.

We announced that the page experience ranking factor would apply to mobile and not to desktop.

At least initially, I don't know what the long term plans are. And my guess is at some point, we'll figure things out a little bit more fine grained, but at least the initial announcement that we made is based purely on the mobile site.

That's also where for the most part the harder barriers are, where users really see things a lot slower than on desktop. Because on desktop, you tend to have a really powerful computer, and you often have a wired connection.

On mobile, you have this much slimmed down processor with less capabilities, and a smaller screen and then a slow connection sometimes. And that's where it's really critical that pages load very fast.

How does Google group pages?

Google doesn't have real user performance data for every page that's in the search index. So how are pages categorized?

We use the Chrome User Experience Report data as the baseline. We try to segment the site into parts based on what we can recognize from the Chrome User Experience Report, which is also reflected in the Search Console report.

Based on those sections, when it comes to specific URLs, we will try to find the most appropriate group to fit that into. So if someone is searching, for example, for AMP pages within your site, and you have a separate AMP section, or you have a separate AMP template that we can recognize, then we'll be using those metrics for those pages.

It's not so much that if there's some slow pages on your site, then the whole site will be seen as slow. It's really, for that group of pages where we can assume that when a user goes to that page, their experience will be kind of similar, we will treat that as one group.

So if those are good, then essentially pages that fall into those groups are good.

Here's an example of Google Search Console showing different page categories with many similar URLs.

Core Web Vitals data for different categories in Google Search Console

Regarding Accelerated Mobile Pages (AMP), John points out that the AMP report does not take traffic into account. A site where the vast majority of users have a good experience might look slow if there are also a large number of slower pages that are accessed less frequently.

The tricky part with the AMP report is that we show the pages there, but we don't really include information on how much traffic goes to each of those parts.

So it might be that you have, let's say, 100 pages that are listed there, and your primary traffic goes to 10 of those pages. Then it could look like 90% of your pages are slow, and the 10 that people go to are fast. But actually the majority of your users will be going to those fast pages, and we'll take that into account.

Is performance data shared between subdomains?

As far as I recall, the CrUX data is kept on an origin level. So for Chrome, the origin is the hostname, which would be kind of on a subdomain and protocol level. So if you have different subdomains, those would be treated separately.
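A quick way to see what counts as a separate origin is to compare `new URL(...).origin` values – the scheme and hostname together form the origin, so different subdomains (and different protocols) are bucketed separately:

```typescript
// Different subdomains and protocols are different origins,
// so their CrUX data is aggregated separately. (example.com URLs are placeholders.)
const urls = [
  "https://www.example.com/pricing",
  "https://blog.example.com/post",
  "http://www.example.com/pricing",
];

for (const u of urls) {
  console.log(new URL(u).origin);
}
// https://www.example.com
// https://blog.example.com
// http://www.example.com
```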

Do noindex pages affect Web Vitals?

Many websites have slower, more complex pages that are not included in the search index. Will performance on these pages affect rankings?

This depends on whether Google categorizes the pages together with pages that are in the search index. Especially if there's not a lot of other data, Google might draw on the performance data collected for the noindex pages.

If it's a smaller website, where we just don't have a lot of signals for the website, then those noindex pages could be playing a role there as well. So my understanding, though I'm not 100% sure, is that in the Chrome User Experience Report we do include all kinds of pages that users access.

There's no specific "will this page be indexed like this or not" check that happens there, because indexability is sometimes quite complex with regards to canonicals and all of that. So it's not trivial to determine, kind of on the Chrome side, if this page will be indexed or not.

It might be the case that if a page has a clear noindex, then we will be able to recognize that in Chrome. But I'm not 100% sure if we actually do that.

Do pages that are behind login impact Web Vitals?

Pages often load additional content and functionality for logged in users. This additional content can slow down the page – will these logged-in pages impact rankings?

If you have the same URL that is publicly accessible as well, then it's very likely that we can include that in the aggregate data for Core Web Vitals, kind of on the real user metrics side of things, and then we might be counting that in as well.

If the page behind the login also exists in a public form, then we might think that some users are seeing this more complicated page, and we would count those metrics.

Whereas if you had separate URLs, and we wouldn't be able to actually index the separate URLs, then that seems like something that we will be able to separate out.

I don't know what exact guidance here is from the Core Web Vitals side, though. I would double check, specifically with regards to Chrome in the CrUX help pages to see how that would play a role in your specific case.

Why does Google Search Console show big fluctuations in Web Vitals?

Why would ratings for a group of pages go up and down repeatedly? What do I do if my page ratings keep switching between green and yellow?

What sometimes happens with these reports is that a metric is right on the edge between green and yellow.

If for a certain period of time, the measurements tend to be right below the edge, then everything will swap over to yellow and look like oh, green is completely gone. Yellow is completely here. And when those metrics go up a little bit more back into the green, then it's swapped back.

Those small changes can always happen when you make measurements. And to me this kind of fluctuation back and forth, points more towards, well, the measurement is on the edge. The best way to improve this is to provide a bigger nudge in the right direction.

Is the yellow "needs improvement" rating good enough to rank well, or does the rating need to be "good"?

Aim for a "good" rating marked in green:

My understanding is that we check whether it's in the green, and that determines whether it counts as okay or not. So if it's in yellow, that wouldn't be in the green.

But I don't know what the final approach there will be. Because there are a number of factors that come together. I think the general idea is if we can recognize that a page matches all of these criteria, then we would like to use that appropriately in search for ranking.

I don't know what the approach would be when some things are okay and some things that are not perfectly okay. Like, how that would balance out.

The general guideline is that we would like to use these criteria to also be able to show a badge in the search results, which I think there have been some experiments happening around. For that we really need to know that all of the factors are compliant. So if it's not on HTTPS, then even if the rest is okay that wouldn't be enough.
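
For context, the published "good" thresholds for the Core Web Vitals (evaluated at the 75th percentile of page loads) are an LCP of at most 2.5 seconds, an FID of at most 100 milliseconds, and a CLS of at most 0.1. A small sketch of how a metric value maps to a rating:

```typescript
// Rough sketch of the published Core Web Vitals thresholds (field data, 75th percentile).
type Rating = "good" | "needs improvement" | "poor";

function rate(value: number, goodMax: number, poorMin: number): Rating {
  if (value <= goodMax) return "good";
  if (value > poorMin) return "poor";
  return "needs improvement";
}

const lcpRating = rate(2300, 2500, 4000); // LCP in ms:    good <= 2500, poor > 4000
const fidRating = rate(180, 100, 300);    // FID in ms:    good <= 100,  poor > 300
const clsRating = rate(0.12, 0.1, 0.25);  // CLS (unitless): good <= 0.1, poor > 0.25

console.log({ lcpRating, fidRating, clsRating });
// { lcpRating: "good", fidRating: "needs improvement", clsRating: "needs improvement" }
```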

Why are AMP and non-AMP pages sometimes shown separately?

What if Search Console shows a slow URL on the website, but the AMP version of the page is shown as fast?

We don't focus so much on the theoretical aspect of "this is an AMP page and there's a canonical here". But rather, we focus on the data that we see from actual users that go to these pages.

So that's something where you might see an effect of lots of users going to your website directly, and they're going to the non-AMP URLs, depending on how you have your website set up.

And if in search you have your AMP URLs, then we will probably get enough signals to track both of those versions individually. So on the one hand, people coming from search go to the AMP versions, and people going to your website directly maybe go to the non-AMP versions. In a case like that we might see information separately from those two versions.

Whereas if you set up your website in a way that you're consistently always working with the AMP version, that maybe all mobile users go to the AMP version of your website, then we can clearly say "this is the primary version – we'll focus all of our signals on that".

But in the theoretical situation that we have data for the non AMP version and data for the AMP version, and we show the AMP version in the search results, then we would use the data for the version that we show in search as the basis for the ranking.

Why do I see different values for the Web Vitals in CrUX and PageSpeed Insights?

Lab data from PageSpeed Insights will often be different from the field data. Usually the lab data will be slower, as Lighthouse simulates a very slow device with a bandwidth of 1.6Mbps and 150ms of roundtrip network latency.

Here's a typical example from the BBC homepage.

Metric discrepancy between PageSpeed Insights lab data and field data
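
The throttling values mentioned above come from Lighthouse's default simulated mobile profile. Roughly, the defaults look like this (paraphrased from the Lighthouse source – check the constants in the version you're running for the exact values):

```typescript
// Approximate Lighthouse default mobile throttling (simulated slow 4G).
const simulatedMobileThrottling = {
  rttMs: 150,                 // round-trip network latency
  throughputKbps: 1.6 * 1024, // ~1.6 Mbps download bandwidth
  cpuSlowdownMultiplier: 4,   // CPU slowed down 4x relative to the test machine
};
```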

John also notes that PageSpeed Insights tries to compress all performance metrics into a single score:

In PageSpeed Insights we take the various metrics and we try to calculate one single number out of that. Sometimes that's useful to give you a rough overview of what the overall score would be. But it all depends on how strongly you weigh the individual factors.

It can be the case that a user sees a page that's pretty fast and sleek, but when our systems test it they find some theoretical problems that could be causing issues.

The overall score is a really good way to get a rough estimate. And the actual field data is a really good way to see what people actually see.

How do you use lab data and field data together?

What I recommend is using field data as a basis to determine "should I be focusing on improving the speed of the page or not?" Then use lab testing tools to determine the individual metric values and for tweaking them as you're doing the work.

Use lab data to check that you're going in the right direction, because the field data is delayed by about 30 days. So for any changes that you make, the field data is always 30 days behind and if you're unsure if you're going in the right direction then waiting 30 days is kind of annoying.

DebugBear is a website monitoring tool built for front-end teams. Track performance metrics and Lighthouse scores in CI and production.
