Learn about recently added features here. To see upcoming features, or to make suggestions, check out the roadmap.
The project overview page now shows weekly averages for the last 10 weeks, rather than just the most recent scores and metrics.
DebugBear now runs tests using the latest version of Lighthouse.
The mobile project overview now also shows performance metrics, and you can use the same filters as on desktop.
Why is the Google Cloud UI so slow? – a look at a large JavaScript application and what's slowing it down.
Debugging web performance with the Chrome DevTools Network tab – a detailed explanation of the information DevTools provides about network activity.
Want to script user journeys and measure their performance? This article explains how to do that for single-page apps.
To make setup easier, you can now select multiple device types and test locations and set up monitoring for them in one step.
You can now update some properties of multiple pages at once:
First, click the edit icon in the top right of the Project overview page.
Then select the pages you want to update, either by using the standard search filters or by toggling the checkboxes. Finally, set the new values and click the Update button.
You can now navigate directly from one monitored page to another. The dropdown normally shows pages with the same URL first, so you can easily switch between Desktop and Mobile monitoring results.
To make space for the dropdown, the "Open tested page" link has moved to the top right, next to the page ID.
Lighthouse automatically tests the Performance, SEO, and Accessibility of your website, but you can also add your own audits and audit categories.
Repeating performance tests reduces overall metric variability – this blog post quantifies by how much variance is reduced when running tests 3, 5, or 7 times.
Creating a web performance team can help make site speed a priority in your company. Marc Radziwill explains how to get started and make performance teams successful.
Millions of websites are built using website builders – we took a look at how site performance compares between different site builders.
The documentation now contains an overview of the Core Web Vitals, which start affecting Google search rankings next year.
There's also an in-depth look at one of the Core Web Vitals, the Largest Contentful Paint.
You can now tag your pages to make them easier to group and filter. Check your project settings to show tags in the navbar.
Another improvement to the page listing: sort pages by metric to identify pages that are slow or have SEO opportunities.
Click on the heading for the metric column to enable sorting. In this screenshot we're sorting by First Contentful Paint.
Device settings now have an option to block ads and tracking using uBlock. This helps reduce test variability and makes sure DebugBear tests don't impact analytics. However, if the ads on your site have a meaningful performance impact, blocking them can also skew your results and make them look better than they are.
Stats mode allows you to aggregate data over a time range to identify longer-term trends. You can enable stats mode via the date dropdown.
You can also aggregate metrics across all pages in order to track trends across your website, rather than for specific pages.
There are now 4 default simulated devices that you can test on:
You can also create new devices that match the characteristics of your users:
It's now possible to compare the experience of a user in Australia to that of a user in Finland. Or you can compare a test result from today to one from a year ago.
To do that, go to the Overview tab of one of the pages you want to compare and scroll down to the Compare section.
For example, this site is notably slower in Brazil than it is in the US:
We've published an in-depth article on how to debug front-end JavaScript performance. Learn about common performance issues, how to identify them, and how to fix them.
That article focuses on execution times, but you can also read about JavaScript memory leaks.
Finally, a new documentation page takes an in-depth look at the Cumulative Layout Shift metric.
Each DebugBear result now contains a filmstrip and CPU timeline. Use them to understand how your page renders and what's holding back performance.
You can also select "Filmstrips" on the project overview page to compare performance with your competitors.
If you primarily use the API to trigger tests you can now disable scheduled tests.
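For example, a deploy script could kick off a run with the debugbear Node package (a minimal sketch using the API shown further down; pageId stands in for the ID of one of your monitored pages):

const { DebugBear } = require("debugbear")

const debugbear = new DebugBear(process.env.DEBUGBEAR_API_KEY)
// Trigger an on-demand test for a single monitored page
debugbear.pages.analyze(pageId)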
Do you have 10 URLs you need to monitor? Instead of submitting the "new page" form 10 times, you can now set all of them up in one go.
You can specify page titles by putting a space after the URL, followed by the desired title. If no title is passed in, the origin and pathname will be used instead, for example "example.com – /about".
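For example, this list (hypothetical URLs) sets up three pages, with explicit titles for the first two:

https://example.com/ Homepage
https://example.com/pricing Pricing page
https://example.com/about

The third page would be titled "example.com – /about".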
Here's a roundup of some of the changes we've made recently.
You can now run tests up to 7 times and then save the median result. This removes outlier results and avoids unnecessary alerts.
Save 20% on your subscription by paying annually.
See how users in Mumbai and Singapore experience the performance of your website.
Lighthouse 6.1 includes bug fixes, more data on long JS tasks, and a new SEO audit that makes sure search engines can crawl your links.
DebugBear now also uses Chrome 84 to test your pages.
Does Lighthouse sometimes finish the test before your page has fully loaded? You can now set up a JavaScript expression that must evaluate to true before the test finishes.
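For example, if your app renders search results asynchronously, an expression like this one (with a hypothetical selector) keeps the test running until at least one result is on the page:

document.querySelectorAll(".search-result").length > 0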
Reduce variance between Lighthouse runs
Debug and improve server response time
The list in the requests tab now shows the request duration by default, and you can add other columns as well. You can also click on the column headers to sort by that column.
For example, you can break down the request duration into time spent on each part of the HTTP transaction: DNS lookup, TCP connection, SSL connection, Time to First Byte, and actually downloading the response content.
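As an illustration, a 600 ms request might break down like this (made-up numbers):

DNS lookup: 40 ms
TCP connection: 35 ms
SSL connection: 55 ms
Time to First Byte: 320 ms
Content download: 150 ms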
Or you can look at the content encodings, decoded response size, and response statuses. The request start time is relative to when the initial document request was made.
Larger request changes will also show up in the Overview tab. Here you can see that the First Contentful Paint increased because the response for the initial document request took longer.
The console tab now has a custom design rather than showing the text-based diff by default.
Console messages will also include a call stack and code snippet where available.
Request errors also show an HTML snippet, if the request was triggered by the page HTML.
DebugBear now tests your websites with version 6 of Lighthouse. We've also upgraded Chrome from version 78 to 83.
Lighthouse 6.0 introduces several new metrics and changes how the overall performance score is calculated.
The composition of the Performance score has changed as follows:
Existing metrics: First Contentful Paint, Speed Index, and Time to Interactive.
New metrics: Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift.
Deprecated metrics: First Meaningful Paint and First CPU Idle.
You can find the charts for the new metrics in the Performance tab.
Read more about these metric changes in the Lighthouse 6.0 announcement post.
Lighthouse performance budgets now support more timing metrics. These were introduced in Lighthouse 5.6.0, but DebugBear was previously running version 5.5.0. If you've set up a performance budget on DebugBear, those metrics will also show up in the Lighthouse report.
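For reference, timing budgets in a Lighthouse budget.json look roughly like this (budget values in milliseconds):

[
  {
    "timings": [
      { "metric": "first-contentful-paint", "budget": 2000 },
      { "metric": "interactive", "budget": 5000 }
    ]
  }
]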
The Lighthouse report now also includes recommendations tailored to your tech stack:
DebugBear automatically generates notifications if it looks like there's been a regression on your site. That means you don't need to do any work to get set up, but you might get some notifications that aren't relevant to you.
From now on you can configure when a notification is sent. It's been possible to mute specific notifications for a while, but I've now added some documentation for it.
One common issue has been notifications for performance issues that can't be reproduced later on. Maybe the server was busy, or something weird happened with DNS. To avoid this problem in the future, some performance alerts are now only sent if the performance problem occurs more than once in a row.
DebugBear now has a Validation tab which shows errors and warnings generated by the W3C HTML validator.
Most of these errors aren't very helpful. The HTML might not be valid, but as long as all browsers handle it fine that's not a problem. And sometimes the validator doesn't know about a recently added feature and will complain about it.
So DebugBear doesn't list common validation errors by default, and currently no email or Slack alerts are sent if there's a regression.
However, there are many potential problems the validator can identify:
style="background: [object Object]"
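For instance, a hypothetical bug like this one would produce that invalid style value:

const theme = { background: "red" }
const banner = document.querySelector(".banner")
// Concatenating an object into a string yields "[object Object]"
banner.setAttribute("style", "background: " + theme)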
Until yesterday, DebugBear had a Login Steps feature that allowed you to fill out a login form before testing your page. There were a few problems with this though:
User flows are the solution to these problems. Rather than shoehorning all this functionality into a login form, you can now set up flexible steps that run before the actual page analysis.
We've published a new version of the Node API. Here's an example of what you can do with it:
const { DebugBear } = require("debugbear")

const debugbear = new DebugBear(process.env.DEBUGBEAR_API_KEY)

// Wrapped in an async function so the awaits are valid in CommonJS
async function runAnalysis(pageId) {
  const analysis = await debugbear.pages.analyze(pageId, {
    // Commit hash is required to generate a build
    commitHash: "abc123",
    customHeaders: {
      "X-Enable-Experiment": "true"
    },
  })
  const result = await analysis.waitForResult()
  console.log(result.build.status) // "success"
}
Check out the migration guide if you're moving from version 1 of the API.
Browsers make OCSP requests to check if a certificate is revoked. Chrome only does this for Extended Validation (EV) certificates.
These requests are now included in the request chart, so it should be easier to understand if your SSL connection takes a long time: