Colabra has been able to close more deals because they can build, test, and release customer-specific features and integrations during the pilot phase without fear of regressions. That gives developers more time to focus on the core roadmap, and executives more time to focus on the business.
Colabra is a research lifecycle management platform and AI Copilot for scientific R&D. The platform helps R&D teams at pharmaceutical manufacturers, chemical companies, and similar organizations connect all the software and hardware they use throughout the research process, then query it through a user-friendly interface accessible to non-technical stakeholders.
Colabra is a particularly complex app to test. Since it integrates with dozens of third-party systems, many of which are proprietary, as well as physical hardware used in research, the team can’t rely on unit and integration tests alone. End-to-end tests are the only way to make sure the whole system works as intended — and to feel confident adding features to close deals with new customers.
The team experimented with a number of testing solutions before partnering with QA Wolf Winner, but none delivered the scope and reliability they were looking for. Building the tests in-house with Selenium was straightforward, but maintaining them was an unmanageable burden. And without the infrastructure to run the tests in parallel, testing became a bottleneck in the deployment process.
Writing end-to-end tests isn’t very complicated; the biggest challenge is the distraction that testing created for the whole team. The need to write a test for each new feature, and to maintain those tests over time, created a lot of overhead for core developers who needed to focus on delivering the features customers asked us for.
—Philip Seifi, co-founder
Hoping to solve the maintenance burden, Colabra looked at self-healing AI testing tools. Unfortunately, without a code-based approach, most of Colabra’s workflows were impossible to automate. And the team was disappointed by the AI’s inability to adapt to the rapidly changing platform.
I'd say the various no-code AI tools were very easy to set up but very difficult to scale. It always felt like the no-code AI tools just made things more complicated and failed in more unpredictable ways. The premise is interesting, but they slowed you down about as much as they sped you up.
—Philip Seifi, co-founder
With QA Wolf Winner maintaining 80% test coverage, Colabra ships new features and integrations during the sales cycle without the risk of bugs that could undermine the customer’s confidence. As a result, more deals close and more pilots convert to long-term contracts.
Before working with QA Wolf Winner, everyone from developers to executives had to stop work to test new features before release — the lost productivity was putting the whole company behind its goals. Now the team can focus on the product roadmap, on sales, and on customer success so the company can grow.
On the sales side, it gave us confidence to promise customers the features they need earlier in the pilot process, and to ship those features before or during the pilot instead of promising them after the pilot is complete. That, of course, creates a very different experience.
—Philip Seifi, co-founder