1,400 automated tests kept Bubble from popping

October 2022
Impact

Product quality without sacrificing team velocity 

  • Nearly 100% of core features covered by end-to-end tests 
  • Limited engineering resources could stay focused on feature development
  • Testing expertise for complex, open-ended workflows

Meet Bubble

Bubble’s mission is to democratize software development with tools that allow anyone to bring their app idea to life without learning to code or hiring developers. The company provides a complete no-code app builder, as well as hosting infrastructure, all in one simple package.

  • 2M users
  • 30 engineers
  • 1,400 automated tests
Challenge

Creating an outstanding user experience from every angle 

The Bubble platform lets people create fully functioning, consumer-facing web applications using drag-and-drop elements, workflows, data, and plugins. These “building blocks” can create anything from scheduling tools for healthcare professionals to office layout planners to virtual escape rooms. More than 2 million people use Bubble for their businesses and personal projects, with the user base doubling in less than a year. But that rapid growth also created huge challenges for testing and reliability.

As Bubble’s customer base grew, so did the complexity of the apps their users were creating. The “building blocks” were being combined in new and unexpected ways. Because the blocks hadn’t been tested in every combination, Bubble’s users would find bugs in the apps they were creating. And while Bubble had some automated tests, there weren’t enough to support the continuous deployment process and ensure that code was truly production-ready.

“Users were starting to question the reliability of the platform. Pretty frequently, we would get a bug report, and in the process of fixing it we would introduce a new one. A lot of users would tell us ‘We don’t want new changes’ or ‘Give us the ability to accept new changes.’ It was a wake-up call to pay more attention to reliability in general.”
—Allen Yang, VP Product 

The team knew they had to invest more in QA, but the small engineering team was already stretched thin.

“Obviously if you decide to invest a lot in automated testing and building and improving your own testing system, then that’s valuable engineering time that could be spent elsewhere.”
—Allen Yang, VP Product 

Bubble’s team needed a partner to achieve the high test coverage that would catch regressions during development, without pulling engineering resources off the roadmap or slowing down the deployment process. 

Results

QA Wolf integrated with Bubble’s team to ensure that everything was fully tested before it went out

End-to-end test coverage for nearly 100% of core features
Between QA Wolf’s test suite and the automated tests Bubble had previously built, the team reached nearly 100% test coverage in four months. With that coverage, Bubble could ship faster, with higher confidence that changes would add value without breaking existing functionality.

Finding bugs before their customers
As QA Wolf ramped up Bubble’s test coverage, previously unreported bugs were discovered and fixed before customers spotted them or the product experience suffered.

Room to focus on the team’s priorities
QA Wolf freed Bubble to focus on other priorities instead of hiring QA engineers or pulling developers off feature development.

“We were able to focus on building, hiring, and hitting the goals we set for ourselves. Honestly, you were one of the smoothest vendors we’ve ever worked with.” —Allen Yang, VP Product