Pachyderm Wants To Be The Data Processing Tool For The Docker Generation

If you’ve collected a large amount of data that you want to analyze, the go-to method for years has been to follow a programming paradigm called MapReduce, typically using Apache’s Hadoop framework. It’s a tried and true process, but it isn’t simple: Hadoop, which is mostly written in Java, has a reputation for being difficult.
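The paradigm itself is simpler than Hadoop's reputation suggests: a map step emits key-value pairs from the raw data, and a reduce step aggregates the values for each key. As a rough illustration (plain Python, not Hadoop code), here is the canonical word-count job expressed as those two phases:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big ideas", "big data tools"]
print(reduce_phase(map_phase(docs)))
# {'big': 3, 'data': 2, 'ideas': 1, 'tools': 1}
```

A framework like Hadoop earns its complexity by running these two phases across a cluster, handling partitioning, shuffling, and machine failures along the way.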

Companies that want to get serious about data analysis often have to hire elite programmers who specialize in writing Hadoop MapReduce jobs, or contract a third-party company such as Cloudera to facilitate that kind of analysis. Neither option is easy or inexpensive. This all means that early-stage companies and projects often just don’t have the resources or know-how to take advantage of “big data.”

Pachyderm is a new startup launching out of the Winter 2015 class of Y Combinator that aims to make big data analysis much simpler and more accessible. Claiming to provide the power of MapReduce without the complexity of Hadoop, Pachyderm is an open source tool that purports to let programmers run analysis over large amounts of data without writing a line of Java or knowing a thing about how MapReduce works.

Co-founded by former RethinkDB staffers Joey Zwicker and Joe Doliner, Pachyderm is possible because of a range of infrastructure improvements that have emerged over the past ten years, most notably CoreOS, the cluster-oriented Linux distribution, and Docker, the language- and cloud-agnostic container platform for deploying applications.

According to the founders, using Pachyderm, which is available on its website and GitHub, all a programmer who wants to analyze a large amount of data has to do is implement an HTTP server that fits inside a Docker container. The company touts that “if you can fit it in a Docker container, Pachyderm will distribute it over petabytes of data for you.” One cool example that uses Pachyderm is this MapReduce job for analyzing and learning from blunders in chess games.
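To make that claim concrete: the article doesn't spell out Pachyderm's exact job contract, but the shape of such a job can be sketched as an ordinary HTTP server whose handler transforms whatever slice of data it is sent. The handler below is hypothetical (uppercasing stands in for real analysis logic); the point is that nothing about it is Hadoop- or Java-specific, so any language that can serve HTTP inside a container would do:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobHandler(BaseHTTPRequestHandler):
    """Hypothetical analysis step: read a chunk of data from the request
    body and return the transformed result. A Pachyderm-style system would
    run many copies of this container, each fed a slice of the dataset."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        chunk = self.rfile.read(length)
        result = chunk.upper()  # stand-in for real analysis logic
        self.send_response(200)
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

def run(port=8080):
    # Inside the container, the server would listen on an agreed-upon port.
    HTTPServer(("", port), JobHandler).serve_forever()
```

Packaging this in a Docker container is then a standard Dockerfile away, which is exactly the appeal: the containerized unit of work, not the cluster plumbing, is all the programmer writes.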

The exciting thing about what Pachyderm is doing is that it could make data analysis much more accessible to people beyond backend and infrastructure engineers. With Pachyderm, the promise is that programmers who specialize in front-end engineering and design could run serious MapReduce-type jobs themselves, to help inform all kinds of product decisions. “The barrier to do really interesting data analysis should be so much lower than it is,” Doliner says.

Funded only by Y Combinator at the moment, Pachyderm is still in its earliest stages. It eventually plans to make money in the same way that other modern open source-oriented companies do, through providing additional paid features and services. Pachyderm also plans to build out a GitHub-like web platform interface for writing data analysis jobs.

It bears mentioning that Pachyderm is not the only open source project aiming to provide an alternative to Hadoop MapReduce for processing large amounts of data: Apache Spark and Storm are variations on a similar theme, and Scala-based tools have emerged to make writing Hadoop jobs easier.

This all goes to show that while “big data” has been a buzzword for years, actually making the most of it is a problem that is far from being solved. With the backing of Y Combinator and the potential support of its wider community of developers, Pachyderm has a good chance at emerging as an important player in the next generation of data processing.