$4 million richer, Walrus.ai has a pitch for companies looking for QA-testing tools

The co-founders of Walrus.ai, a new software company that has raised $4 million in a round of financing from Homebrew, Felicis Ventures and Leadout Capital, started their business with one problem.

Jake Marsh, Akshay Nathan and Scott White left Wealthfront to launch a new service aimed at what they saw as a key problem with modern business workflows. Their idea was to integrate the disparate software silos that different parts of their former company used to complete assignments.

The company was going to be called Monolist, and it was going to aggregate tasks across every tool into a single actionable list. Unfortunately, it wasn’t working.

They had founded the business back in 2018 and had gone on to raise seed capital from Homebrew and Leadout Capital, but they were hitting walls in their product development.

“Reliability was a huge problem for us,” said company co-founder Scott White. “There were various frameworks that would let you test your automation so that before you launch your software, you catch bugs… There were some code languages that exist that can help you do this, but they didn’t work for us at all.”

The browser testing frameworks that White and his co-founders were using hadn’t kept up with the evolution of the software development industry and couldn’t adequately recreate the ways that actual users would interact with the software. “The stuff is super brittle,” said White.

Typically, according to White, these assurance tests break and force engineers and developers to investigate why, working out what went wrong with the test itself before they can move on to any quality assurance of the actual changes made to a product.

“They weren’t designed to handle that much complexity,” White said of the existing testing tools.

So White and his co-founders thought about how they’d solve what they see as one of the critical problems that engineers face.

“The problem for engineers right now is that writing tests for your applications is hard because you have to write code and the frameworks are very inflexible and flaky,” White said. “Engineers spend tons of time running tests and if those tests fail then your code would not get shipped so you have to debug all those tests.”

Enter the new venture from White and his co-founders.

That would be Walrus.ai. “We’re outsourced engineering through an API,” said White. “We understand how to do testing and we can do it way better and more quickly.”

Using simple text descriptions of a planned user interface, White said, his company can run diagnostics on how effectively the code executes its intended commands.

Given its status as a relatively new kid on the testing block, Walrus.ai only has tens of paying customers right now as it spins out from Monolist.

The company sees its competition coming primarily from outsourced quality assurance companies like Rainforest QA; test recorders like Mabl and Testim; and testing frameworks like Selenium and Cypress, but believes that its ability to take natural language prompts and run QA tests will be enough of a differentiator to capture a significant share of the market.