Cross-browser testing: Do you need it?

John Gluck
August 8, 2023

Cross-browser testing is hard; automating it is harder. Most companies that embark on an automated cross-browser compatibility strategy struggle to keep it maintained and operating reliably at scale. Testing “exhaustively” across 9,000+ devices, 12 popular operating systems, and three major browsers is not feasible, even with dedicated cross-browser testing tools.

Although there are more combinations of hardware and software than ever before, the emergence of standards such as Interop and ECMAScript has decreased the level of fragmentation that plagued developers in the days of the Browser Wars. Additionally, developers now have numerous tools and strategies at their disposal to help them prevent compatibility defects from rearing their ugly heads in front of users.

Still, there are a handful of edge cases where some automated cross-browser compatibility testing is warranted. Let’s dive into those.

1. When your application is built on legacy code

Older applications are much more likely to encounter compatibility issues because they do not use modern standards. If your application falls into this category, you may be tempted to invest in cross-browser testing, but that treats the symptoms, not the disease. In the long term, you will realize more value from refactoring and modernizing.

We know that refactoring can be a long, painful process. Many of our clients have QA Wolf test the oldest parts of their application to avoid regressions while the team performs test-driven refactoring.

2. When your users are on old browsers

Before the advent of browser standards, teams had to test for cross-browser compatibility because every browser behaved differently. These days, browsers are built on the “guts” of either Google Chrome (e.g., Microsoft Edge and Opera, which use Chrome’s Blink engine) or, to a lesser extent, Safari (e.g., the browsers on iOS, which use WebKit).

If your primary users are from industries that have strict software or OS upgrade restrictions (e.g., institutional banking, government agencies, education, healthcare, not-for-profit, etc.), your team may have to support browsers that were built before the emergence of the standards. Such a case warrants some automated compatibility testing. 

In truth, older browsers cause the most cross-browser defects, specifically when they encounter sites without sensible defaults, fallbacks, or graceful degradation. If you have to support old browsers, then at a minimum you will need a manual cross-browser testing approach.

Good practice dictates using your analytics to create a curated set of target browsers to test on, preferably weighted by your conversion rates. Scrub your analytics for bot traffic and calibrate your defect severity levels to ensure your team does not spend too much time fixing things that are not meaningful to the company. A majority of cross-browser defects are aesthetic and may not be noticeable to anyone not deeply familiar with your company style guide.
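The curation step above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: the row shape (`browser`, `conversions`, `isBot`) and the 2% cutoff are hypothetical stand-ins for whatever your analytics export actually provides.

```javascript
// Sketch: derive a curated target-browser set from analytics data.
// Row shape and cutoff are assumptions for illustration only.
function curateTargetBrowsers(rows, { minShare = 0.02 } = {}) {
  const human = rows.filter((r) => !r.isBot); // scrub bot traffic first
  const total = human.reduce((sum, r) => sum + r.conversions, 0);
  return human
    .map((r) => ({ browser: r.browser, share: r.conversions / total }))
    .filter((r) => r.share >= minShare) // drop browsers below the cutoff
    .sort((a, b) => b.share - a.share); // highest-converting browsers first
}
```

Feeding in conversion counts rather than raw sessions keeps the target set focused on browsers that actually matter to revenue.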

3. When the feature-under-test is business-critical and/or is subject to fragmentation

Given how easily the scope of automated cross-browser compatibility testing can increase, teams should limit themselves to testing business-critical features only. Does it look bad if your FAQ page is misaligned on some obscure browser? Yes. Is it worth the resources to test it multiple times across your entire set of target browsers? Probably not. This is something your team can spot-check occasionally, manually. Cross-browser testing tools and services like LambdaTest, SauceLabs, and BrowserStack have features that allow you to manually test on practically any browser/device/OS you need.
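One way to keep automation limited to business-critical features is to pin only those specs to a full browser matrix in your runner configuration. Below is a sketch of a Playwright `playwright.config.js`; the project names and the convention of tagging critical specs with a `.critical.spec.js` suffix are assumptions for illustration, not a recommended house standard.

```javascript
// playwright.config.js — sketch only. The "*.critical.spec.js" naming
// convention and project names are hypothetical.
const { devices } = require("@playwright/test");

module.exports = {
  projects: [
    // Business-critical flows run on every engine in the target set.
    { name: "critical-chromium", use: devices["Desktop Chrome"], testMatch: "**/*.critical.spec.js" },
    { name: "critical-webkit", use: devices["Desktop Safari"], testMatch: "**/*.critical.spec.js" },
    { name: "critical-firefox", use: devices["Desktop Firefox"], testMatch: "**/*.critical.spec.js" },
    // Everything else is spot-checked on a single engine.
    { name: "default", use: devices["Desktop Chrome"], testIgnore: "**/*.critical.spec.js" },
  ],
};
```

With a setup like this, renaming a spec file is all it takes to promote a workflow into (or out of) the full cross-browser matrix.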

On the other hand, if you are developing a banking application, you risk losing customers if the application cannot process checks through the camera properly, so it behooves your team to test this regularly. It is good practice to focus your testing efforts on areas where browser, OS, or device fragmentation is known to cause defects.

Browser-based fragmentation

The chart below shows how well the three major browsers adhere to accepted standards. The most common issues are:

  • Input types: Input types have different implementations across browsers, as do the autofocus attribute, the textarea element, placeholder text, and form validation attributes. Usually this has little impact, but it is worth checking manually at least once.
  • Event handlers: This is one area where browsers’ standards adoption is lagging and where implementations are still changing. Keep your team updated on the latest browser implementations by checking each major browser’s upcoming release notes.
  • Client-side storage: Including cookies, session data, IndexedDB, etc. Developers run into problems when the data to be stored exceeds a given browser’s storage limits, so make sure you understand what those limits are across the browsers you care about.
  • Security: Test any direct third-party API integration, and validate that your certificates work on every browser in your target set, because some certificates are not accepted by all browsers.
[Chart: compliance with browser standards over time. Courtesy of wpt.fyi.]
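The client-side storage point above lends itself to a defensive wrapper: write through to persistent storage, and fall back to memory when the browser’s quota is exceeded. This is a minimal sketch; the injected `storage` parameter (so the code can run outside a browser) and the in-memory fallback are illustrative choices, not a library API.

```javascript
// Sketch: storage wrapper that degrades gracefully on quota errors.
// 'storage' is any localStorage-like object (setItem/getItem).
function makeSafeStorage(storage) {
  const memoryFallback = new Map();
  return {
    set(key, value) {
      try {
        storage.setItem(key, value); // may throw QuotaExceededError
        return "persistent";
      } catch (err) {
        memoryFallback.set(key, value); // keep the session working anyway
        return "memory";
      }
    },
    get(key) {
      const v = storage.getItem(key);
      return v !== null && v !== undefined ? v : memoryFallback.get(key);
    },
  };
}
```

In a browser you would pass `window.localStorage`; the point is that a full quota degrades the experience instead of throwing in front of the user.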

Operating system fragmentation

  • Fonts: Installed fonts vary by operating system; you can avoid this by using web fonts.
  • Unsupported media types: You can prevent problems by sticking with media types that are widely supported where possible.
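Sticking to widely supported media types can be enforced at runtime with a small helper that walks a list of candidate sources and picks the first one the browser claims to play. The `canPlayType` probe mirrors `HTMLMediaElement.canPlayType` (which returns `""`, `"maybe"`, or `"probably"`); injecting it is an illustrative choice that keeps the sketch testable outside a browser.

```javascript
// Sketch: choose the first media source the browser claims it can play.
// 'canPlayType' is a probe shaped like HTMLMediaElement.canPlayType.
function pickPlayableSource(sources, canPlayType) {
  for (const src of sources) {
    if (canPlayType(src.type) !== "") return src; // "" means unsupported
  }
  return null; // caller should fall back to a download link or poster image
}
```

In a browser this would be called with something like `(t) => videoEl.canPlayType(t)`, listing sources from the most efficient codec down to the most widely supported one.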

Device fragmentation 

  • Device APIs: This is primarily an Android problem, since any phone manufacturer can use Android and configure it with any device driver they want, and some manufacturers don’t test that drivers respond correctly to standardized device API calls. Test any part of your application that interacts with device input other than the keyboard and mouse (such as the microphone, camera, geolocation, or orientation sensors).
  • Performance: Test especially where @keyframes animations are used or where your application relies on hardware acceleration.
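The check-capture scenario from earlier shows why device-API calls need guarding: some devices lack the API entirely, and others expose it but fail at the driver level. The sketch below is a hypothetical pattern, not a prescribed implementation; `nav` stands in for the browser’s `navigator`, and the file-upload fallback is an assumed alternative flow.

```javascript
// Sketch: guard a device-API call so missing or broken camera drivers
// degrade gracefully instead of crashing the deposit flow.
async function captureCheckImage(nav, fallbackToFileUpload) {
  if (!nav.mediaDevices || typeof nav.mediaDevices.getUserMedia !== "function") {
    return fallbackToFileUpload("camera API unavailable"); // API missing entirely
  }
  try {
    return await nav.mediaDevices.getUserMedia({ video: true });
  } catch (err) {
    // Some Android builds expose the API but the driver rejects the call.
    return fallbackToFileUpload(err.name || "camera error");
  }
}
```

Tests for a flow like this can stub `nav` three ways — API missing, API present but rejecting, and API working — which is exactly the fragmentation the bullet above describes.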

All that said, the best defense against these defects is keeping your SDKs updated and adopting the corrected calls once they ship. Stay on top of changes in the browsers and flag any areas of concern to those responsible for testing your application.

When should we automate?

Industry-wide wisdom says that you should automate repetitive tasks, but a more helpful answer is that you should automate testing when doing so saves your company money in both the short and long term.

Once your team has reviewed your codebase, browser usage, and fragmentation risks and has agreed on which workflows are worth testing on multiple browser and device configurations, reach out to QA Wolf and we'll help you build and schedule the best tests to match your requirements.

Note: This article leaned heavily on the Mozilla Developer Network Web Docs section “Introduction to cross-browser testing.” We highly recommend you read it to answer any lingering questions about cross-browser testing.
