The only performance metrics that really matter

John Gluck
February 21, 2024

If you think your site is fast enough, think again. According to Deloitte, for every second it takes your page to load, your conversion rate decreases by 4.4%. They were studying retail, travel, and luxury brands, but every product should heed the warning. The same study showed that a one-second decrease in page load time produced a 4.6% decrease in bounce rates, along with higher engagement rates and better search engine rankings. How many products have you abandoned because of slow performance? Your personal experience with slow apps should tell you everything you need to know about the business impact of poor performance.

Performance testing tells you whether the new code you’re shipping is helping or hurting your user experience. When it comes to performance testing in general, developers tend to gravitate toward load, endurance, spike, etc., because these testing types can be fun engineering problems to solve. And, yes, that testing is valuable. However, your first concern should be website page performance — that is, how snappy your app feels — because that’s what customers experience directly, and it drives satisfaction and business outcomes. And, while page performance testing will reveal the most painful wait times your customers encounter, it can also reveal performance problems in other parts of the application.

Fortunately, modern test automation frameworks make it easy to run performance tests in your CI/CD pipeline. You, too, can have nice things by exploiting what’s available in your Chrome Developer Tools and automating with any framework that supports the Chrome DevTools Protocol (CDP).

You can’t fix what you don’t measure over and over

It’s impossible to overstate the importance of key metrics like Largest Contentful Paint (LCP), First Input Delay (FID), Cumulative Layout Shift (CLS), and Time to Interactive (TTI) as benchmarks for website performance. Without these, it’s pretty much impossible to accurately identify areas on your pages that need improvement. And let’s not forget that Google specifically looks at those metrics when determining your website’s SEO ranking.

Most teams leave this sort of thing for developers to run occasionally. How often do they do it? Who knows? But if you want to guarantee that website performance is consistently optimized over time, getting those performance test results into regular regression testing and, preferably, into your CI/CD pipeline is essential. Doing so raises the visibility of the metrics and lets you identify issues before they become more significant problems.

Tools like Google’s Lighthouse and the Chrome browser’s built-in performance APIs have everything your team needs to gather the right metrics. But you may not know that up-and-coming frameworks like Playwright (QA Wolf’s pick) and Cypress are built on the Chrome DevTools Protocol (CDP), which makes automating web page performance metrics a breeze. More on that in a moment.
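
For a taste of what that looks like, here’s a minimal sketch of talking to CDP directly from Playwright. It assumes Chromium, uses the protocol’s Performance domain, and the URL is just a placeholder.

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Open a raw CDP session for this page (Chromium only)
  const client = await page.context().newCDPSession(page);
  await client.send('Performance.enable');

  await page.goto('https://example.com');

  // Pull the browser's runtime metrics straight from the protocol
  const { metrics } = await client.send('Performance.getMetrics');
  for (const metric of metrics) {
    console.log(`${metric.name}: ${metric.value}`);
  }

  await browser.close();
})();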

We’re going to walk through some of the critical performance metrics your team will need to start tuning web performance. We’ll provide tested sample code using the Chrome Performance API and Playwright because they are both open source. We’ll also give you targets for each metric, optimized for the mobile web.

Page load time

Page load time is the time it takes for a web page to fully load in the user's browser, including rendering all HTML, CSS, JavaScript, images, and other media files. It’s the king of all web performance metrics. If you are going to implement just one thing, this is the one because everyone looks at it and it is the easiest to understand. It may go without saying, but we’ll say it anyway: a shorter page load time correlates with better user engagement, lower bounce rates, and higher conversion rates. The faster your page loads, the better. There is no such thing as a web page that is too fast. 


const { chromium } = require('playwright');

(async () => {
   const browser = await chromium.launch();
   const page = await browser.newPage();

   // Navigate and wait for the load event so the timing entry is complete
   await page.goto('https://example.com', { waitUntil: 'load' });

   // Read the Navigation Timing entry: page load time is the gap between
   // the start of the navigation and the end of the load event
   const pageLoadTime = await page.evaluate(() => {
       const [navigation] = performance.getEntriesByType('navigation');
       // loadEventEnd can briefly read 0 right after the load event fires,
       // so fall back to the current time in that case
       return (navigation.loadEventEnd || performance.now()) - navigation.startTime;
   });

   console.log(`Page load time: ${pageLoadTime} ms`);

   await browser.close();
})();

  • Ideal Target: Under 2 seconds.
  • Poor Target: Over 6 seconds.

The Web Vitals initiative

The Web Vitals initiative is a Google program that provides unified guidance on the quality signals a web page needs to deliver a good user experience. It is quickly becoming the gold standard for performance testing, mainly because the metrics it includes are part of the Core Web Vitals that Google recommends for improving your SEO.

When we’re talking about gathering metrics on how pages perform, some metrics don’t make sense unless we measure them from a real device in a real place, not just in a closet at company headquarters. That’s because situations like being relegated to a 3G network on your vacation in Puerto Vallarta or using a VPN for work will affect how fast those bytes return to your browser.

The metrics we gather from real devices are called “field” measurements, as opposed to “lab” metrics (e.g., the server closet or an AWS EC2 instance). The Web Vitals initiative aims to have metrics developers can measure from the lab and the field. 
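
To make the lab-versus-field distinction concrete, here’s a minimal sketch of collecting field data with Google’s open-source web-vitals package running in your actual application. The /analytics endpoint is a stand-in for wherever you collect real-user data.

// This runs in your application code, not in a test: field data comes from real users' browsers
import { onLCP, onCLS, onINP } from 'web-vitals';

// Report each metric to your collection endpoint as it becomes available
function sendToAnalytics(metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unloads better than fetch
  navigator.sendBeacon('/analytics', body);
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);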

Largest Contentful Paint (LCP)

LCP measures the time it takes for the largest content element visible within the viewport to be fully rendered on the screen after a user navigates to a webpage. 

Pages that display quickly have higher engagement rates, better conversion rates, and lower bounce rates because users are more likely to interact with content that's visible right away.  Duh.  LCP indicates how quickly the main content of a page becomes visible to users. 


const { chromium } = require('playwright');

(async () => {
 // Launch the browser
 const browser = await chromium.launch();
 const page = await browser.newPage();

 // Navigate to the page
 await page.goto('https://example.com');

 // Execute script to subscribe to LCP events and return the LCP value
 const lcp = await page.evaluate(() => {
   return new Promise((resolve) => {
     const observer = new PerformanceObserver((list) => {
       const entries = list.getEntries();
       const lastEntry = entries[entries.length - 1];
       observer.disconnect(); // Disconnect the observer once the LCP is captured
       resolve(lastEntry.renderTime || lastEntry.loadTime);
     });
     observer.observe({ type: 'largest-contentful-paint', buffered: true });

     // In case LCP is not available, set a timeout
     setTimeout(() => {
       observer.disconnect();
       resolve(null);
     }, 5000); // Adjust the timeout as necessary
   });
 });

 console.log(`Largest Contentful Paint (LCP): ${lcp} ms`);

 // Close the browser
 await browser.close();
})();

  • Ideal Target: Under 2.5 seconds.
  • Poor Target: Over 4 seconds.

First Input Delay (FID) and Total Blocking Time (TBT)

FID measures the time from when a user first interacts with a page (e.g., clicks a link or taps a button) to when the browser is actually able to respond to that interaction. Because it measures a real user interaction, it can only be accurately measured in the field. So Google designates a good substitute for lab conditions: TBT.

TBT quantifies the amount of time a webpage is unresponsive to user input because processing tasks are occupying the main thread. In other words, it tells us how long users have to wait before the page is available to interact with. Two culprits monopolize the main thread and cause delays in processing: JavaScript execution and resource loading.

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Register a Long Tasks observer before any page script runs so nothing is missed
  await page.addInitScript(() => {
    window.__longTaskDurations = [];
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        window.__longTaskDurations.push(entry.duration);
      }
    }).observe({ type: 'longtask', buffered: true });
  });

  await page.goto('https://example.com', { waitUntil: 'networkidle' });

  // TBT approximation: the portion of every long task that exceeds 50 ms
  const totalBlockingTime = await page.evaluate(() =>
    window.__longTaskDurations.reduce((total, duration) => total + Math.max(0, duration - 50), 0)
  );

  console.log(`Total Blocking Time (TBT): ${totalBlockingTime} ms`);

  await browser.close();
})();

  • Ideal Target FID: Under 100 milliseconds (ms).
  • Poor Target FID: Over 300 ms.
  • Ideal Target TBT: Under 200 milliseconds (ms).
  • Poor Target TBT: Over 600 ms.

Cumulative Layout Shift (CLS)

You know how sometimes you are browsing, and what you’re looking at jumps down the page? CLS measures the jumpiness and, by extension, how discombobulated you’re making your users. Such shifts can be jarring and frustrating, especially if they cause unintended clicks on buttons or links. Sites that rely on ads for revenue and monetization are prone to CLS problems because they don’t have much control over what gets loaded into their ad slots. 

To improve your CLS, ensure proper sizing and dimensions for images and embedded elements, avoid dynamically injected content that pushes existing content down, and use CSS properties like aspect-ratio to reserve space for dynamically loaded content. You’ll want to use analytics to find your smallest viewport sizes and optimize for those since those are the ones that will shift the most.


const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com');

  // Sum layout-shift entries that were not triggered by recent user input
  const cumulativeLayoutShift = await page.evaluate(() => {
    return new Promise((resolve) => {
      let cls = 0;
      new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          if (!entry.hadRecentInput) {
            cls += entry.value;
          }
        }
      }).observe({ type: 'layout-shift', buffered: true });

      // Give buffered entries a moment to arrive, then report the running total
      setTimeout(() => resolve(cls), 1000);
    });
  });

  console.log(`Cumulative Layout Shift (CLS): ${cumulativeLayoutShift}`);

  await browser.close();
})();

  • Ideal Target: Under 0.1 (CLS is a unitless score).
  • Poor Target: Over 0.25.

Critical automated testing metrics 

Because automated tests are significantly faster than users, automators frequently encounter situations where specific performance metrics can help them determine if a test is failing because it’s busted or if the page or network is contributing to the problem.  The following performance metrics are part of an automated tester’s group of best friends.

Time to Interactive (TTI)

As automated testers, if we have to pick our favorite metric, this is the one.  

Even if a page appears visually complete to your users, some resources or scripts may still be loading or executing in the background, preventing users from interacting with the page smoothly. If there’s anything more frustrating than a button you can see but can’t click, we haven’t experienced it. TTI helps developers understand when the page is usable and responsive to user actions (e.g., clicking buttons, filling out forms, navigating links).

Automated testers love TTI because it reflects the user's actual experience of when they can start using a page. A low TTI can help ensure that tests don't attempt to interact with page elements before they are ready, reducing the risk of race conditions and false negatives.  TTI is a great way to end “differences of opinion.”  Math is cool sometimes. 


const { chromium } = require('playwright');

(async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();

    // Track long tasks from the very start of the navigation
    await page.addInitScript(() => {
        window.__lastLongTaskEnd = 0;
        new PerformanceObserver((list) => {
            for (const entry of list.getEntries()) {
                window.__lastLongTaskEnd = Math.max(window.__lastLongTaskEnd, entry.startTime + entry.duration);
            }
        }).observe({ type: 'longtask', buffered: true });
    });

    // Wait for the network to go quiet before sampling
    await page.goto('https://example.com', { waitUntil: 'networkidle' });

    // Approximate TTI as the later of DOMContentLoaded and the end of the last
    // long task; Lighthouse's full quiet-window heuristic is more involved
    const tti = await page.evaluate(() => {
        const [navigation] = performance.getEntriesByType('navigation');
        return Math.max(navigation.domContentLoadedEventEnd, window.__lastLongTaskEnd);
    });

    console.log(`Time to Interactive (TTI, approximate): ${tti} ms`);

    await browser.close();
})();

  • Ideal Target: Under 3 seconds.
  • Poor Target: Over 7 seconds.

Time to first byte (TTFB) 

TTFB is the time from when the browser initiates an HTTP request to when it receives the first byte of the server's response. It indicates both server responsiveness and network latency, as it reveals how quickly the server sends data back to the client after receiving the request. A high TTFB can indicate potential bottlenecks in the server or network that may impact page load times and user experience overall, not just on that page.


const { chromium } = require('playwright');

(async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();

    // Navigate and wait for the main document to finish loading
    await page.goto('https://example.com');

    // Use the Navigation Timing entry for the main document: TTFB is the gap
    // between sending the request and receiving the first byte of the response
    const ttfb = await page.evaluate(() => {
        const [navigation] = performance.getEntriesByType('navigation');
        return navigation.responseStart - navigation.requestStart;
    });

    console.log(`TTFB for ${page.url()}: ${ttfb} ms`);

    await browser.close();
})();

  • Ideal Target: Under 150 milliseconds (ms).
  • Poor Target: Over 500 ms.

First Contentful Paint (FCP)

FCP measures the time the browser takes to render the first piece of content from the DOM (Document Object Model) on the screen when loading a web page. Contentful paint refers to the point in the loading process when any meaningful content (e.g., text, images, or non-white background elements) becomes visible to the user. Unlike First Paint (FP), which measures the time until any visual change occurs on the screen, FCP focuses explicitly on rendering content that contributes to the user's understanding of and engagement with the webpage.


const { chromium } = require('playwright');
(async () => {
 // Launch the browser
 const browser = await chromium.launch();
 const page = await browser.newPage();

 // Navigate to the page
 await page.goto('https://example.com');

 // Execute script to capture paint timings and extract FCP
 const fcp = await page.evaluate(() => {
   return new Promise((resolve) => {
     if (window.performance) {
       // Use the PerformanceObserver to listen for 'paint' entries
       const observer = new PerformanceObserver((list) => {
         const entries = list.getEntriesByName('first-contentful-paint');
         if (entries.length > 0) {
           observer.disconnect(); // Disconnect the observer once FCP is captured
           const fcpEntry = entries[0];
           resolve(fcpEntry.startTime); // Resolve with the FCP time
         }
       });
       observer.observe({ type: 'paint', buffered: true });
     } else {
       resolve(null); // Resolve with null if the Performance API is not supported
     }
   });
 });
 
 console.log(`First Contentful Paint (FCP): ${fcp} ms`);
 // Close the browser
 await browser.close();
})();


  • Ideal Target: Under 1.8 seconds.
  • Poor Target: Over 3 seconds.

Other important metrics you might be interested in

We won’t go into implementation details on the rest of these metrics. Suffice it to say that the metrics above are essential to just about any web-facing application. Depending on your application, though, the following may also be worth tracking:

  • DOM Content Loaded: DOM Content Loaded marks the point in the page load timeline when the HTML document has been completely loaded and parsed, and the DOM is ready for manipulation via JavaScript. 
  • Time to DomContentLoaded: Similar to DOM Content Loaded, Time to DomContentLoaded measures the time it takes for the DOMContentLoaded event to fire, indicating when the HTML document has been fully loaded and parsed.
  • Onload Time: Onload Time measures the time it takes for all resources on a web page to load and for the window’s load event to fire. It indicates when a page is fully rendered and ready for user interaction.
  • Resource Load Times: Resource Load Times measure the time it takes to fetch and load individual resources such as images, scripts, stylesheets, and other media files. Monitoring resource load times helps identify bottlenecks and optimize resource delivery for faster page rendering.
  • Cache Hit Ratio: Cache Hit Ratio measures the percentage of requests served from cache compared to total requests. A higher cache hit ratio indicates effective caching strategies, reduces server load, and speeds up page load times for repeat visitors.

Something is always better than nothing

You should start gathering one or more metrics using the code snippets and targets mentioned above. Even if you have to run your scripts manually for a while, it’s always good for your team to have the information. Eventually, you can get them all into tests and run them as part of your pipeline. Ultimately, your goal should be to get the team to agree to some targets and never exceed the maximum, whatever you decide to set it at. Your performance tests, like functional e2e tests, should eventually act as gates.
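
If you want to see what a gate can look like, here’s a minimal sketch of a Playwright Test spec that fails the run (and therefore the pipeline) when LCP blows past a budget. The 2,500 ms budget and the URL are placeholders; use whatever numbers your team agrees to.

const { test, expect } = require('@playwright/test');

test('home page stays within the LCP budget', async ({ page }) => {
  await page.goto('https://example.com');

  // Capture LCP in the browser; resolve to null if it never fires
  const lcp = await page.evaluate(() => {
    return new Promise((resolve) => {
      const observer = new PerformanceObserver((list) => {
        const entries = list.getEntries();
        const lastEntry = entries[entries.length - 1];
        observer.disconnect();
        resolve(lastEntry.renderTime || lastEntry.loadTime);
      });
      observer.observe({ type: 'largest-contentful-paint', buffered: true });
      setTimeout(() => resolve(null), 5000);
    });
  });

  // Fail the test if the page misses the agreed budget
  expect(lcp).not.toBeNull();
  expect(lcp).toBeLessThan(2500);
});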

You can set up monitors for bonus points. Products like New Relic will let you build dashboards and set targets if you are fortunate enough to be part of an organization with something like that. The key takeaway is that performance testing is a practice, not a one-time thing. You can’t simply optimize once. You need to keep on it, so we recommend integrating it into your pipeline.

When you work with us, we use the Lighthouse library, which can access all of the Chrome Performance API goodness. We can give you just about any metric imaginable and make sure you hit your target every time you run your tests.
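
If you want to experiment with that yourself, here’s a minimal sketch of running Lighthouse programmatically from Node. It assumes the lighthouse and chrome-launcher packages (recent Lighthouse releases are ESM-only, hence the import syntax), and the audit names shown are standard Lighthouse performance audits.

import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome instance for Lighthouse to drive
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

// Run only the performance category against the target page
const { lhr } = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance'],
});

// Pull a few headline metrics out of the report
console.log(`LCP: ${lhr.audits['largest-contentful-paint'].numericValue} ms`);
console.log(`TBT: ${lhr.audits['total-blocking-time'].numericValue} ms`);
console.log(`CLS: ${lhr.audits['cumulative-layout-shift'].numericValue}`);

await chrome.kill();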
