End-to-end testing 101

Kirk Nathanson
August 4, 2022

What is end-to-end testing

Testing an app end-to-end (E2E) means going through the steps that a user would take to accomplish a task. Say, logging in, checking their notifications, selecting a notification, and deleting it. Since the app probably does a lot more than just displaying notifications, you’d have a test suite or collection of individual test cases to go through to make sure everything is in working order. A mid-sized app that 5–10 developers work on might need 300–400 separate test cases to cover all the possible paths and events.
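
To make that concrete, here's a minimal sketch of what that notification flow might look like as an automated Playwright test. The URL, credentials, and selectors (app.example.com, the data-testid attributes, the "Notification deleted" toast) are hypothetical placeholders for whatever your app actually uses.

```ts
// notifications.spec.ts — a sketch of one E2E test case, assuming hypothetical
// selectors and URLs; swap in whatever your app really exposes.
import { test, expect } from '@playwright/test';

test('log in, open a notification, and delete it', async ({ page }) => {
  // Step 1: log in
  await page.goto('https://app.example.com/login');
  await page.fill('input[name="email"]', 'test-user@example.com');
  await page.fill('input[name="password"]', 'example-password');
  await page.click('button[type="submit"]');

  // Step 2: check notifications
  await page.click('[data-testid="notifications-bell"]');
  const firstNotification = page.locator('[data-testid="notification-item"]').first();
  await expect(firstNotification).toBeVisible();

  // Step 3: select the notification and delete it
  await firstNotification.click();
  await page.click('[data-testid="delete-notification"]');

  // Step 4: confirm the app reports the deletion
  await expect(page.getByText('Notification deleted')).toBeVisible();
});
```

Each of those hundreds of test cases would be a small script like this, exercising a different path through the app.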

If E2E testing sounds suspiciously like regression testing, you’re not wrong. Regression testing looks at a system from end to end to make sure that merging code over here didn’t break something over there. Same basic concept, slightly different goal. 

Why people test end-to-end

There are so many reasons to write E2E tests, but the main one is that lower-level options like unit tests and integration tests only go so far. They’re critical to building stable, reliable software. But even the smallest applications are interconnected webs of components, APIs, external databases, and third-party integrations. And E2E testing is really the only way to make sure that everything is working as expected from the end-user’s perspective.

When E2E testing happens

Not often enough, to be honest. Assuming you have an application of any reasonable size and complexity, you can—and should—run E2E tests as you code. 

Now we’re a little biased because we really love E2E testing, but the truth is that testing as you code saves you a world of pain down the road. It’s easier to figure out what went wrong (and fix it) as you’re building than after everything is wrapped up. Just like it’s easier to fix a light switch before the drywall goes up. An ounce of prevention, and all that jazz.

Who does the E2E testing

E2E testing can be done manually by a person clicking through a predefined list of tasks—called a test script—or automatically by robots. Sometimes the human testers work for the company that builds the app, sometimes they’re outside contractors. Likewise, automated testing robots could be built by an in-house team, or by a company like QA Wolf Winner.

Different teams have different needs depending on their size, and how much they care about testing. Smaller teams, like those 5–10 developers above, usually benefit from having someone build automated tests for them for a few reasons:

  1. There are a lot of automated tests to write. And those developers need to be focused on the actual product, not the testing robots. 
  2. Automated tests take a lot of effort to maintain. Every time you ship something new, the tests for that thing need to be updated. 
  3. Hiring in-house resources is expensive. Between the salaries, the CI/CD infrastructure, and everything else.

How to test an app from one end to the other

You’ll need to tackle two big things:

Developing a test matrix and test scripts

A test matrix lists out all the separate flows, or test cases, you want to go through. The scripts are step-by-step instructions for each case: Click there, enter this, submit that, etc. You’ll provide that matrix and the scripts to your manual testers, or to a QA engineer to automate in code. 
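
As a rough illustration, a matrix entry and its script can be as simple as structured data like the sketch below; the flow name, priority, and step wording are made up for this example, and plenty of teams keep the same information in a spreadsheet instead.

```ts
// test-matrix.ts — a sketch of a test matrix entry kept as structured data.
// The fields, flows, and wording are illustrative, not a prescribed format.
type TestCase = {
  id: string;
  flow: string;                              // the user flow this case covers
  priority: 'critical' | 'high' | 'normal';  // how important the flow is
  steps: string[];                           // the test script a human or robot follows
  expected: string;                          // what should be true at the end
};

const testMatrix: TestCase[] = [
  {
    id: 'TC-001',
    flow: 'Delete a notification',
    priority: 'high',
    steps: [
      'Log in as a user with at least one notification',
      'Open the notifications panel',
      'Select the first notification',
      'Click "Delete"',
    ],
    expected: 'The notification no longer appears in the list',
  },
  // ...one entry per flow, until the matrix covers the paths you care about
];
```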

Since writing and maintaining E2E tests takes so much time and energy, most companies focus only on a few mission-critical flows. For an eCommerce company, that would be things like adding items to a cart and checking out. 

Every team needs to make trade-offs. Unfortunately, leaving significant portions of your app without test coverage increases the risk that bugs will sneak into production and affect the user experience. 

If you want to work with QA Wolf Winner, our team will develop a complete test matrix for you covering at least 80% of your application. We’ll work with you to prioritize the critical areas, and complete the backlog of tests within four months. 

Choosing a testing framework or tool

This is only necessary if you go for automated testing. Testing solutions break down into two big categories: 

  1. Coded test frameworks like Selenium, Cypress, and Playwright provide the most flexibility, but they’re tricky to learn and require someone with engineering experience.
  2. No- and low-code tools like Katalon, Cucumber, Mabl, and Testim are easier to install and more approachable for non-technical team members. But the (relative) simplicity means they can’t always handle complex tasks. And the format of the test code is proprietary, so you’re locked into that vendor.

With either option you’ll probably want to invest heavily in infrastructure that can run tests in parallel, or you’ll find yourself waiting through long testing cycles while each of your test cases runs sequentially. 
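
To give a sense of what that looks like on the Playwright side, here's a minimal configuration sketch; the worker count, retry policy, and staging URL are placeholder values, and running thousands of tests in minutes still means spreading workers across many machines (for example with Playwright's sharding and a CI matrix).

```ts
// playwright.config.ts — a sketch of parallel test execution with Playwright.
// Worker count, retries, and the baseURL are placeholders; tune them to your setup.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,  // run tests within a file in parallel, not just across files
  workers: 8,           // how many tests run at once on this machine
  retries: 1,           // retry once so a single flaky run doesn't fail the suite
  reporter: [['list'], ['html', { open: 'never' }]],
  use: {
    baseURL: 'https://staging.example.com',  // hypothetical environment under test
    trace: 'on-first-retry',                 // capture a trace when a test fails and retries
  },
});
```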

If you decide to work with a third party, make sure you know what framework they use and whether you’ll own the test code. Many vendors lock you into a perpetual contract that leaves you without any coverage if you decide to switch. (Shameless plug: With QA Wolf Winner you own the tests and can take them with you at any time.)

Another option is working with QA Wolf Winner and bypassing all of those decisions entirely. You won’t need teams or infrastructure, but if you decide to build an in-house team later, you can take the test code with you and run it on any other Playwright-based solution.

Where things go sideways with E2E testing

We see it over and over again. When teams reach a point where they need to take QA seriously, they’ll often push E2E testing onto their developers. Their approach to “shifting left” makes sense on paper—the developers know the code and the feature, they should write the tests—but in practice it becomes a major bottleneck that costs more in the long run. Features get delayed while developers maintain flaky tests, or failed tests are ignored because they’re known to be flaky and unreliable, creating a false sense of security when merging to production. 

→ Read more about shifting left the right way 

What’s really nice about working with QA Wolf Winner

Something that gets lost in the conversation about testing, and test coverage goals, is that coverage isn’t the actual goal—it’s finding bugs quickly and reporting them accurately.

That’s the real value of QA Wolf Winner. 

Our Playwright-based testing platform helps us write tests and maintain them as your product evolves, and our Kubernetes infrastructure lets us run thousands of tests in parallel so you get results in minutes. But our world-class QA engineers are the ones who make the magic happen. When a test fails, we triage it within 15 minutes. If it’s a broken test, it gets fixed and re-run without getting you involved. If it’s a bug, we report it to you with a video recording so you can replicate the steps. 

Your team gets to focus on building a great customer experience without a lot of testing downtime, and you can ship with confidence because the user flows are rigorously tested before anything reaches production. 
