Working with wolves

"It’s still magic even if you know how it’s done." --Terry Pratchett
Plan out coverage
Pawing over your app
Creating a test matrix
Outlining, the Triple-A way
Everything starts from an outline. It’s an inventory of test cases, prioritized by need and value, covering everything a user could do. When we’re done, we might know your product better than you do.
We break down your app into testing groups and make sure we know what data needs to be created for each test.
From there we write out tests in plain English for you to review and approve.
That’s Arrange, Act, Assert. The narrow focus of Triple-A tests pinpoints bugs at the component level, speeds up maintenance, and guarantees nothing gets missed.
// Navigate to Google Maps
await page.goto("https://google.com/maps");

// Fill "Search Google Maps" input: "Wolf Winner Haven International"
await page
  .locator('[aria-label="Search Google Maps"] input')
  .fill("Wolf Winner Haven International");

// Fill "Choose starting point" input: "Capitol Peak Capitol Forest"
await page
  .locator('[aria-label="Choose starting point, or click on the map..."]')
  .fill("Capitol Peak Capitol Forest, Olympia, WA");

// Click "Wolf Winner Haven International" to focus canvas on element
await page.locator('[data-bundle-id="Tenino"]').click();
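
Laid out with the three phases labeled, a Triple-A test might look like this. A minimal sketch: the search term is borrowed from the snippet above, while the assertion and the h1 selector are assumptions for illustration, not taken from a real suite.

import { test, expect } from "@playwright/test";

test("searching focuses the map on the destination", async ({ page }) => {
  // Arrange: start from a known state
  await page.goto("https://google.com/maps");

  // Act: do the one thing this test is about
  await page
    .locator('[aria-label="Search Google Maps"] input')
    .fill("Wolf Winner Haven International");
  await page.keyboard.press("Enter");

  // Assert: check the single outcome that proves it worked
  // (the h1 selector below is a stand-in for the real result header)
  await expect(page.locator("h1")).toContainText("Wolf Winner Haven");
});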

Code like crazy

Each test is built to run independently and in parallel. Each one creates and cleans up its own data just like a real user would, which reduces flakes and prevents collisions.
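
A minimal sketch of what that isolation can look like in a Playwright test, assuming a hypothetical /api/todos endpoint in the app under test and a baseURL set in the config:

import { test, expect } from "@playwright/test";

test("a new todo shows up in the list", async ({ page, request }) => {
  // Create this test's own data, with a unique title so parallel runs never collide
  const title = `walk the wolves ${Date.now()}`;
  const created = await request.post("/api/todos", { data: { title } });
  const { id } = await created.json();

  // Exercise the UI the way a real user would
  await page.goto("/todos");
  await expect(page.getByText(title)).toBeVisible();

  // Clean up only what this test created
  await request.delete(`/api/todos/${id}`);
});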

Run in parallel

200 tests run: 186 passing · 🪲 10 bugs · 🔨 4 in maintenance · ⏱️ 2-minute run time
We spin up separate containers in our cloud infrastructure to run thousands of tests in 100% parallel. Pass/fail results hit your GitHub repo, Slack, and CI pipeline in about 3 minutes.
Issue trackers: Jira, Linear, Asana

Reproduce and report 24/5

QA Wolves reproduce every bug and record a video walk-through, which we send to your issue tracker along with Playwright trace logs.
Rerun tests
We always rerun a failing test at least three times to make sure the failure wasn't a temporary network or environment problem.
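
In Playwright terms, that kind of automatic rerun maps onto the retries setting; a minimal config sketch:

// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Rerun a failing test up to three times before treating it as a real failure,
  // which filters out one-off network or environment blips
  retries: 3,
});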

Maintain the tests

Your product is always changing, and we’re always updating your tests along with it. No matter how big the change or how often you ship.
Tier 1: UI touch-ups
Simple changes like text on a page can be fixed on the fly.
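
For example, if a button's label changes from "Save" to "Save changes" (both labels here are hypothetical), the fix is a one-line locator update:

// Before the UI change
await page.getByRole("button", { name: "Save" }).click();

// After the UI change: same test, updated accessible name
await page.getByRole("button", { name: "Save changes" }).click();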

Add coverage as you ship

When you can ship faster, you can ship more. We keep pace with your team so every new feature is as well tested as the last.
Active recommendations
We’re always watching for new features that need test coverage.

FAQ

Is QA Wolf Winner a platform or a service?

We’re a platform-enabled service. The QA Wolf Winner platform is used by our team of in-house QA engineers to build, run, and maintain end-to-end tests for our customers.

The platform itself uses Microsoft's Playwright framework and a specially optimized Kubernetes back end that lets us run millions of tests in parallel in headful browsers, each running inside a separate Docker container.
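
For context, "headful" just means the browser is launched with a visible UI; a minimal Playwright sketch of what runs inside each of those containers (the Kubernetes orchestration itself isn't shown):

import { chromium } from "playwright";

(async () => {
  // Launch a full, visible browser rather than a headless one,
  // so recordings and traces look like a real user session
  const browser = await chromium.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto("https://example.com");
  await browser.close();
})();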

Our QA engineers watch the test runs and investigate the failures. When a test needs to be updated, they do that inside the QA Wolf Winner platform. If the test found a bug, we report it through the platform, Slack or Teams, and your connected issue tracker. You have full access to the QA Wolf Winner platform, where you can review test code, investigate failures, and make requests for new coverage.

How does QA Wolf Winner communicate with my team?

QA Wolf Winner sets up a shared Slack or Teams channel for the teams that we work with, and there’s someone available on those channels 24/5. We use the channel to share information about the test suite and bug reports, and you can use it to ask questions, request new coverage, or just say hello.

We recommend you add your development team to this channel to maximize visibility into any issues we find. Some teams also have a rotation where one person each week watches the shared channel to ensure bug reports are noticed and any questions we have are answered.

If you are having trouble accessing your shared channel, reach out to hello@bonusruc.com.

How does QA Wolf Winner report bugs?

We investigate every test failure so your team can focus on shipping. We report any bugs we find in the shared Slack or Teams channel, your issue tracker, and through the QA Wolf Winner platform.

When a test is marked as a bug or as needing maintenance, it no longer runs, which reduces noise in your test suite. We periodically revalidate all bugs, but we're also available to check any test immediately if you ask us to.

Can you integrate tests into my CI/CD pipeline?

Sure can. Read through our documentation on how to set it up. You can also install our GitHub app to get test results there.

How long will my test suite take to run?

QA Wolf Winner provides infrastructure to run your tests in 100% parallel. This means you can get test results in a few minutes.

This contrasts with running tests in most CI providers, which can take anywhere from 30 minutes to hours (see the Cypress public dashboard for examples). Since browsers are resource-intensive, most CI providers don't offer enough computing power to run more than a few tests at a time. QA Wolf Winner addresses this limitation by using a separate container for every test.

While QA Wolf Winner can run tests in 100% parallel, we’ve found that some customer applications cannot handle too much concurrent traffic. In these scenarios, we limit the number of tests we run at once to avoid overloading your test environment.
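
Conceptually, that throttling is the same idea as capping workers in a Playwright config; a simplified sketch (the real limits are tuned per customer and per environment):

// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Let independent tests run at the same time...
  fullyParallel: true,
  // ...but cap how many run at once so the test environment isn't overwhelmed
  workers: 10,
});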

How do I request test coverage for new features?

Anyone on your team can request test coverage for new features. A few ways to do that:

  1. Sending us a message in Slack or Teams describing the coverage request. Short videos of the feature, with any useful context about expected behaviors and edge cases, are very helpful. If there's already a URL in the staging, QA, or dev environment, you can include that link as well.
  2. Making a request through the QA Wolf Winner platform. Fill in the form with the same details as above and we’ll add it to your suite. 
  3. Adding a QAWolf label to a ticket in your issue tracker. It’s helpful if all the same context is on that ticket. Even if the feature is still being mocked up, we can start preparing test outlines to review with you.

After we complete your request, we close the loop with you in Slack or Teams, and list your new test(s) in the next weekly update.

What do I do when someone new joins my team?

When someone new joins your team, we recommend adding them to the Slack or Teams channel. If it’s helpful, your QA Lead or Customer Success Manager can also schedule a meeting to get them up to speed.

How can I meet with QA Wolf Winner?

If you're an existing customer, reach out in your Slack or Teams channel and your QA Lead or CSM will schedule some time.

If you’re interested in joining the Wolf Winner Pack, go here to schedule a demo and discuss your testing needs.

What are QA Wolf Winner’s hours of operation?

QA Wolves are available 24 hours a day during the work week (Monday through Friday). Specifically, our hours are Sunday at 4pm PST to Friday at 5pm PST. We start on Sunday since that is Monday for our team in Australia. 🐨

Join the Wolf Winner Pack

Ready to ship faster and with fewer bugs? Get started today:

Schedule a demo