Changelog

1 Nov 2020

October 2020 release notes

Manage monitored pages with tags

You can now tag your pages to make them easier to group and filter. Check your project settings to show tags in the navbar.

DebugBear tags

Order pages by metric

Another improvement to the page listing: sort pages by metric to identify pages that are slow or have SEO opportunities.

Click on the heading for the metric column to enable sorting. In this screenshot we're sorting by First Contentful Paint.

Pages ordered by First Contentful Paint

Block ads and tracking

Device settings now have an option to block ads and tracking using uBlock. This helps reduce test variability and makes sure DebugBear tests don't impact analytics. However, if the ads on your site have a meaningful performance impact, it can also skew your results and make them look better than they are.

Ad blocking option in the device settings

Stats mode for the page comparison

Stats mode allows you to aggregate data over a time range to identify longer-term trends. You can enable stats mode via the date dropdown.

Stats mode for page performance comparison

You can also aggregate metrics across all pages in order to track trends across your website, rather than for specific pages.

Aggregated page performance metrics

2 Oct 2020

September 2020 release notes

Customize device speed

There are now 4 default simulated devices that you can test on:

  • Mobile – 1.6Mbps bandwidth, 150ms latency, 4x CPU throttling – this matches the default Lighthouse Mobile settings
  • Mobile Fast – 12Mbps bandwidth, 70ms latency, 2x CPU throttling – this matches the LTE setting on WebPageTest
  • Desktop – 8Mbps bandwidth, 40ms latency, no CPU throttling – this matches the default Lighthouse Desktop settings
  • Desktop Fast – 100Mbps bandwidth, 2ms latency, no CPU throttling – this matches a fast wifi connection
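The presets above correspond to Lighthouse-style simulated throttling parameters. As a sketch, here is the Mobile preset expressed in that form (field names follow the Lighthouse Node config; 1.6Mbps ≈ 1638Kbps):

```javascript
// Lighthouse-style throttling settings matching the "Mobile" preset:
// 1.6Mbps bandwidth, 150ms round-trip time, 4x CPU slowdown.
const mobileThrottling = {
  rttMs: 150, // simulated network latency
  throughputKbps: 1.6 * 1024, // 1.6Mbps expressed in Kbps
  cpuSlowdownMultiplier: 4, // CPU throttling factor
};
console.log(mobileThrottling.throughputKbps); // 1638.4
```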

You can also create new devices that match the characteristics of your users:

Setting up a 5G emulated device for monitoring

Compare arbitrary tests

It's now possible to compare the experience of a user in Australia to that of a user in Finland. Or you can compare a test result from today to one from a year ago.

To do that, go to the Overview tab of one of the pages you want to compare and scroll down to the Compare section.

Compare site speed results on DebugBear

For example, this site is notably slower in Brazil than it is in the US:

Comparing web performance between countries

New articles

We've published an in-depth article on how to improve front-end JavaScript performance. Learn about common performance issues, how to identify them, and how to fix them.

That article focuses on execution times, but you can also read about JavaScript memory leaks.

Finally, a new documentation page takes an in-depth look at the Cumulative Layout Shift metric.

24 Aug 2020

August 2020 release notes

Filmstrips and CPU timeline

Each DebugBear result now contains a filmstrip and CPU timeline. Use it to understand how your page renders and what's holding back performance.

Web performance filmstrip

Compare filmstrips

You can also select "Filmstrips" on the project overview page to compare performance with your competitors.

Comparing site speed with competitors

Disable scheduling for pages

If you primarily use the API to trigger tests, you can now disable scheduled tests.

Only test web performance as needed

Set up monitoring for many URLs more easily

Do you have 10 URLs you need to monitor? Instead of submitting the "new page" form 10 times, you can now set all of them up in one go.

You can specify page titles by putting a space after the URL followed by the desired title. If no title is passed in, the origin and pathname will be used, for example "example.com – /about".

Bulk website monitoring setup
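As a sketch of the bulk format (the URLs and titles are placeholders), each line holds a URL followed by an optional title, with the host and pathname used as a fallback title. This snippet only illustrates how such input could be parsed; it is not DebugBear's code:

```javascript
// One page per line: "URL [optional title]"
const input = `https://example.com/about About page
https://example.com/pricing`;

const pages = input.split("\n").map((line) => {
  const [url, ...titleWords] = line.trim().split(" ");
  const { host, pathname } = new URL(url);
  // Fall back to "host – pathname" when no title is given
  return { url, title: titleWords.join(" ") || `${host} – ${pathname}` };
});
console.log(pages[1].title); // "example.com – /pricing"
```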

30 Jul 2020

July 2020 release notes

Here's a roundup of some of the changes we've made recently.

Reduce variability by running tests multiple times

You can now run tests up to 7 times and then save the median result. This removes outliers and avoids unnecessary alerts.

Running each performance test once, 3 times, 5 times, or 7 times
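Why the median helps, as a minimal sketch (the millisecond metric values are made up): a single slow outlier run no longer determines the saved result.

```javascript
// Five measurements of the same metric, with one outlier run
const runs = [1850, 1920, 4300, 1880, 1900];
// Sort a copy and take the middle value (works for odd-length arrays)
const median = [...runs].sort((a, b) => a - b)[Math.floor(runs.length / 2)];
console.log(median); // 1900 - the 4300ms outlier is discarded
```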

Annual plans

Save 20% on your subscription by paying annually.

Two new monitoring locations

See how users in Mumbai and Singapore experience the performance of your website.

Lighthouse 6.1

Lighthouse 6.1 includes bug fixes, more data on long JS tasks, and a new SEO audit that makes sure search engines can crawl your links.

DebugBear now also uses Chrome 84 to test your pages.

Page loaded conditions

Does Lighthouse sometimes finish the test before your page has fully loaded? You can now set up a JavaScript expression that must evaluate to true before the test finishes.
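For example, a condition like the one below (the selector is illustrative) would keep the test running until the page's main content exists. The stub document only demonstrates how the expression evaluates outside a browser:

```javascript
// The kind of boolean expression you might enter as a "page loaded"
// condition - it would be re-evaluated in the page until it's true.
const pageIsReady = (doc) =>
  doc.readyState === "complete" && doc.querySelector("#app-content") !== null;

// Stub standing in for the browser's document object
const stubDoc = {
  readyState: "complete",
  querySelector: (sel) => (sel === "#app-content" ? {} : null),
};
console.log(pageIsReady(stubDoc)); // true
```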

New articles

Reduce variance between Lighthouse runs

Debug and improve server response time

Performance impact of Chrome extensions

First Contentful Paint and how to improve it

21 Jul 2020

Improved list of network requests

The list in the requests tab now shows the request duration by default, and you can add other columns as well. You can also click on the column headers to sort by that column.

Response time in list of requests

For example, you can break down the request duration into time spent on each part of the HTTP transaction: DNS lookup, TCP connection, SSL connection, Time to First Byte, and actually downloading the response content.

DNS, TCP, SSL, TTFB, and download
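The phases add up to the total request duration. A minimal sketch with made-up values:

```javascript
// Time spent in each phase of one HTTP request (illustrative values, in ms)
const phases = { dns: 20, tcp: 35, ssl: 40, ttfb: 180, download: 55 };
// The request duration is the sum of its phases
const duration = Object.values(phases).reduce((sum, ms) => sum + ms, 0);
console.log(duration); // 330
```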

Or you can look at the content encodings, decoded response size, and response statuses. The request start time is relative to when the initial document request was made.

Request response status and content encoding

Larger request changes will also show up in the Overview tab. Here you can see that the First Contentful Paint increased because the response for the initial document request took longer.

Overview tab list of requests

19 Jun 2020

Console tab redesign with code snippets

The console tab now has a custom design rather than showing the text-based diff by default.

Console messages will also include a call stack and code snippet where available.

Console errors in monitoring results

Request errors also show an HTML snippet, if the request was triggered by the page HTML.

19 May 2020

Lighthouse 6.0

DebugBear now tests your websites with version 6 of Lighthouse. We've also upgraded Chrome from version 78 to 83.

New metrics and scoring

Lighthouse 6.0 introduces several new metrics and changes how the overall performance score is calculated:

  • Total Blocking Time – How often do JavaScript or rendering tasks make the page unresponsive?
  • Largest Contentful Paint – When is the largest content element displayed on the screen?
  • Cumulative Layout Shift – How much does page content move around after being rendered?

The composition of the Performance score has changed as follows:

Existing metrics

  • First Contentful Paint – 20% ➔ 15%
  • Speed Index – 26.7% ➔ 15%
  • Time to Interactive – 33.3% ➔ 15%

New metrics

  • Largest Contentful Paint – 0% ➔ 25%
  • Total Blocking Time – 0% ➔ 25%
  • Cumulative Layout Shift – 0% ➔ 5%

Deprecated metrics

  • First Meaningful Paint – 6.7% ➔ 0%
  • First CPU Idle – 13.3% ➔ 0%
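The new weights still sum to 100%, and the performance score is a weighted average of per-metric scores. A sketch using the weights above (the 0-1 metric scores are made up; Lighthouse first maps each raw metric value onto its 0-1 scale):

```javascript
// Lighthouse 6 performance score weights from the lists above
const weights = { fcp: 0.15, si: 0.15, tti: 0.15, lcp: 0.25, tbt: 0.25, cls: 0.05 };
// Hypothetical per-metric scores on Lighthouse's 0-1 scale
const scores = { fcp: 0.9, si: 0.8, tti: 0.7, lcp: 0.85, tbt: 0.6, cls: 1.0 };
const perf = Object.keys(weights)
  .reduce((sum, metric) => sum + weights[metric] * scores[metric], 0);
console.log(Math.round(perf * 100)); // 77
```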

New metrics for Lighthouse v6 in the Lighthouse report

You can find the charts for the new metrics in the Performance tab.

Charts for TTFB, largest contentful paint, total blocking time, and cumulative layout shift

Read more about these metric changes in the Lighthouse 6.0 announcement post.

Better timing budgets

Lighthouse performance budgets now support more timing metrics. If you've set up a performance budget on DebugBear those metrics will also show up in the Lighthouse report.

Lighthouse performance budgets for First Contentful Paint, Speed Index, Time to Interactive, Largest Contentful Paint, and Total Blocking Time

Stack packs

These were introduced in Lighthouse 5.6.0, but DebugBear was previously running version 5.5.0. The Lighthouse report now includes recommendations tailored to your tech stack:

Lighthouse React stack pack

23 Mar 2020

Better and more customizable notifications

DebugBear automatically generates notifications if it looks like there's been a regression on your site. That means you don't need to do any work to get set up, but you might get some notifications that aren't relevant to you.

From now on you can configure when a notification is sent. It's been possible to mute specific notifications for a while, but we've now added some documentation for it.

Configuring DebugBear alerts

One common issue has been notifications for performance issues that can't be reproduced later on. Maybe the server was busy, or something weird happened with DNS. To avoid this problem in the future, some performance alerts are now only sent if the performance problem occurs more than once in a row.
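As a sketch of the idea (not DebugBear's actual implementation), an alert only fires once the last two runs both exceed a threshold:

```javascript
// Only alert when a regression is confirmed by consecutive test runs
function shouldAlert(results, thresholdMs) {
  // results: metric values, newest last
  const lastTwo = results.slice(-2);
  return lastTwo.length === 2 && lastTwo.every((ms) => ms > thresholdMs);
}
console.log(shouldAlert([1200, 3100], 2000)); // false - single spike, no alert
console.log(shouldAlert([3100, 3200], 2000)); // true - confirmed regression
```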

25 Feb 2020

HTML validation

DebugBear now has a Validation tab which shows errors and warnings generated by the W3C HTML validator.

Most of these errors aren't very helpful. The HTML might not be valid, but as long as all browsers handle it fine that's not a problem. And sometimes the validator doesn't know about a recently added feature and will complain about it.

So DebugBear doesn't list common validation errors by default, and currently no email or Slack alerts are sent if there's a regression.

However, there are many potential problems the validator can identify:

  • Duplicate attributes (e.g. two style attributes on the same element)
  • Stray start and end tags
  • Invalid inline CSS, like style="background: [object Object]"

Monitoring HTML validation errors, charts showing error and warning count, list of errors

20 Feb 2020

User flows

Until yesterday, DebugBear had a Login Steps feature that allowed you to fill out a login form before testing your page. There were a few problems with this though:

  • What if your site's login flow is split over two separate pages?
  • What if you have more than two form fields that need to be filled out?
  • What if you want to fill out a search form rather than do a login?

User flows are the solution to these problems. Rather than shoehorning all this functionality into a login form, you can now set up flexible steps that run before the actual page analysis.

14 Feb 2020

API updates: TypeScript typings, custom headers, and access build results

We've published a new version of the Node API. Here's an example of what you can do with it:

const { DebugBear } = require("debugbear")
const debugbear = new DebugBear(process.env.DEBUGBEAR_API_KEY)

// await is only valid inside an async function
async function runAnalysis(pageId) {
  const analysis = await debugbear.pages.analyze(pageId, {
    // A commit hash is required to generate a build
    commitHash: "abc123",
    // Custom headers are sent with every request during the test
    customHeaders: {
      "X-Enable-Experiment": "true"
    },
  })
  const result = await analysis.waitForResult()
  console.log(result.build.status) // "success"
}

Check out the migration guide if you're moving from version 1 of the API.

24 Jan 2020

Capturing OCSP requests

Browsers make OCSP requests to check if a certificate is revoked. Chrome only does this for Extended Validation (EV) certificates.

These requests are now included in the request chart, so it should be easier to tell if your SSL connection takes a long time:

OCSP request in request chart

© 2020 DebugBear Ltd