Cross-browser testing is hard, and automating it is harder. Most companies that embark on an automated cross-browser compatibility strategy struggle to keep it maintained and running reliably at scale. Testing “exhaustively” across 9,000+ devices, 12 popular operating systems, and three major browsers is not feasible, even with dedicated cross-browser testing tools.
Although there are more combinations of hardware and software than ever before, the emergence of standards and initiatives such as ECMAScript and Interop has decreased the level of fragmentation that plagued developers in the days of the Browser Wars. Additionally, developers now have numerous tools and strategies at their disposal to help them prevent compatibility defects from rearing their ugly heads in front of users.
Still, there are a handful of edge cases where some automated cross-browser compatibility testing is warranted. Let’s dive into those.
Older applications are much more likely to encounter compatibility issues because they are not using modern standards. If your application falls into this category, you may be tempted to consider doing some cross-browser testing, but you would be treating the symptoms, not the disease. In the long term, you will realize more value from refactoring and modernizing.
We know that refactoring can be a long, painful process. Many of our clients have QA Wolf test the oldest parts of their application to avoid regressions while the team performs test-driven refactoring.
Before the advent of browser standards, teams had to test for cross-browser compatibility because every browser behaved differently. These days, nearly every browser is built on one of three engines: Chromium’s Blink (Chrome, Microsoft Edge, Opera), Apple’s WebKit (Safari, and every browser on iOS), and Mozilla’s Gecko (Firefox).
If your primary users are from industries that have strict software or OS upgrade restrictions (e.g., institutional banking, government agencies, education, healthcare, not-for-profit, etc.), your team may have to support browsers that were built before the emergence of the standards. Such a case warrants some automated compatibility testing.
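One lightweight way to make a “supported browsers” decision explicit and enforceable is a Browserslist configuration, which tools like Autoprefixer and Babel read when generating compatible CSS and JavaScript. The targets below are purely illustrative, not a recommendation:

```
# .browserslistrc — illustrative targets for a team that must support
# older browsers; each line is a Browserslist query.
defaults
IE 11                   # hypothetical hard requirement from an enterprise client
Firefox ESR             # long-support channel common in locked-down environments
last 2 Safari versions
```

Checking a file like this into the repo keeps the whole team (and the build tooling) aligned on exactly which old browsers are in scope.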
Truthfully, older browsers cause the most cross-browser defects, specifically when they encounter sites without sensible defaults, fallbacks, or graceful degradation. If you have to support old browsers, then, at a minimum, you will need a manual cross-browser testing approach.
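The “fallbacks and graceful degradation” idea boils down to plain feature detection: probe for a capability, use it when present, and degrade to a baseline otherwise. The helper below is a hypothetical sketch (not QA Wolf code); it takes the global object as a parameter so the decision logic can be unit tested outside a browser.

```javascript
// Hypothetical sketch: pick a copy-to-clipboard strategy based on what
// the current browser actually supports, degrading gracefully.
function chooseCopyStrategy(global) {
  // Modern browsers: the async Clipboard API.
  if (global.navigator && global.navigator.clipboard &&
      typeof global.navigator.clipboard.writeText === "function") {
    return "clipboard-api";
  }
  // Older browsers: the deprecated document.execCommand path.
  if (global.document && typeof global.document.execCommand === "function") {
    return "exec-command";
  }
  // Last resort: show the text and ask the user to copy it manually.
  return "manual-copy";
}
```

The same pattern applies to CSS (via `@supports`) and to any newer web API: the site still works everywhere, it just works best where the standard is fully implemented.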
Good practice dictates using your analytics and creating a curated target browser set to test on, preferably weighted by your conversion rates. Scrub your analytics for bot traffic and calibrate your defect severity levels to ensure your team does not spend too much time fixing things that are not meaningful to the company. A majority of cross-browser defects are aesthetic and may not be noticeable to anyone not deeply familiar with your company style guide.
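In code form, “curate a target set from analytics” is just filtering and thresholding. The data shape, bot flag, and 2% session cutoff below are invented for illustration:

```javascript
// Hypothetical sketch: derive a target browser set from analytics rows.
// The row shape and the 2% session-share threshold are illustrative.
function curateTargetBrowsers(rows, minShare = 0.02) {
  const human = rows.filter((r) => !r.isBot); // scrub bot traffic first
  const total = human.reduce((sum, r) => sum + r.sessions, 0);
  return human
    .filter((r) => r.sessions / total >= minShare) // drop obscure configs
    .sort((a, b) => b.conversions / b.sessions - a.conversions / a.sessions)
    .map((r) => `${r.browser} ${r.version}`); // e.g. "Safari 17"
}

const sample = [
  { browser: "Chrome", version: 124, sessions: 6200, conversions: 410, isBot: false },
  { browser: "Safari", version: 17, sessions: 2100, conversions: 180, isBot: false },
  { browser: "HeadlessChrome", version: 124, sessions: 900, conversions: 0, isBot: true },
  { browser: "Opera", version: 12, sessions: 40, conversions: 1, isBot: false },
];
// With this sample, only Safari 17 and Chrome 124 clear the threshold,
// ranked by conversion rate.
```

Sorting by conversion rate puts the browsers your revenue actually depends on at the top of the test matrix.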
Given how easily the scope of automated cross-browser compatibility testing can increase, teams should limit themselves to testing business-critical features only. Does it look bad if your FAQ page is misaligned on some obscure browser? Yes. Is it worth the resources to test it multiple times across your entire set of target browsers? Probably not. This is something your team can spot-check occasionally, manually. Cross-browser testing tools and services like LambdaTest, Sauce Labs, and BrowserStack have features that allow you to manually test on practically any browser/device/OS you need.
On the other hand, if you are developing a banking application, you risk losing customers if the application cannot process checks through the camera properly, so it behooves your team to test this regularly. It is a good practice to focus your testing efforts on areas where browser, OS, or device fragmentation is known to cause defects.
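Cross-browser runners such as Playwright make that kind of focused coverage a configuration concern: the same critical-flow specs run against each engine in your curated target set. The fragment below is an assumed setup, with the test directory and project list chosen for illustration:

```
// playwright.config.js — illustrative sketch: run the same critical-flow
// specs against each engine in the curated target set.
const { devices } = require("@playwright/test");

module.exports = {
  testDir: "./tests/critical-flows", // e.g. check deposit, login, payments
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    // Mobile, where camera and device fragmentation bites hardest:
    { name: "mobile-safari", use: { ...devices["iPhone 13"] } },
  ],
};
```

Everything outside the `projects` list stays shared, so adding or dropping a target browser is a one-line change rather than a new test suite.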
The chart below shows how well the three major browsers adhere to accepted standards, and where the most common compatibility issues arise.
All that said, the primary thing you can do to prevent these defects is to keep your SDKs and dependencies updated and to migrate to the current, standards-compliant APIs once you do. Stay on top of changes in the browsers and flag any areas of concern to whoever is responsible for testing your application.
Industry-wide wisdom says you should automate repetitive tasks, but a more useful answer to this question is that you should automate testing when doing so saves your company money in both the short and the long term.
Once your team has reviewed your codebase, browser usage, and fragmentation risks and has agreed on which workflows are worth testing on multiple browser and device configurations, reach out to QA Wolf and we'll help you build and schedule the best tests to match your requirements.
Note: This article leaned heavily on the MDN Web Docs “Introduction to cross-browser testing” section. We highly recommend reading it to answer any lingering questions about cross-browser testing.