When Facebook Grappled With The Ultimate Build Versus Buy Decision

At some point, as companies mature, they face a build versus buy technology decision.

A company like Facebook encounters that kind of choice constantly, but back in the 2009/2010 timeframe, it had an extraordinary one.

It was growing faster than just about any company on the planet and was struggling to keep up with that scale. That’s when it decided it had to start designing its own hardware and building its own data centers to meet the company’s very specific needs.

At that time, Facebook was purchasing equipment through the usual channels and placing it in leased co-location facilities, but it was finding that commercial equipment, even from the most reputable manufacturers, wasn’t flexible enough to meet its needs, Jay Parikh, vice president of infrastructure engineering at Facebook, told TechCrunch earlier this month.

Parikh, who is in charge of developing the software and hardware infrastructure that runs Facebook, spoke candidly about the challenges and the choices his company eventually made. It was a significant decision that involved a major investment and changed the way the entire company worked.

Yet Facebook was able to make the transition remarkably fast — and it hasn’t looked back.

Making The Big Switch

The company reached the build-buy crossroads essentially by accident because it was growing so quickly. It constantly ran into obstacles as a result of that growth, and those obstacles were having an impact on the business.

“We were having to slow down products and features for the business because we didn’t have performance characteristics we needed when they were all off-the-shelf components,” Parikh said.


As Facebook grew, it needed to get under the hood of this equipment and make changes, but the proprietary nature of the hardware made that extremely challenging. The company had to debug systems other people had written, and the deeper it got into the process, the harder it became to understand the lowest levels of its infrastructure.

“We could [have kept] going down the [same] path and buying stuff and making it all work, but it was error prone, not flexible, costly and hard to troubleshoot,” he explained.

It was at this point that the company made the build decision.

“In 2009/2010 the first thing that happened was the realization that we weren’t going to keep up and be flexible and perform and be at right cost structure. In order to get flexible and [control] the cost metrics, it forced us to go build ourselves,” Parikh said.

The Advantages Of Going Your Own Way

Once Facebook was in control of its own destiny, it could approach hardware in completely new ways. It was no longer bound by old rules about the physical design of the equipment, and it could abandon the preconceived notions engineers had developed over the years about the standard way to house it. Because the company was designing and building the racks and the equipment itself, its engineers had the power to experiment and rethink every aspect of the design.

[Image: Facebook hard disk array.]

And that’s precisely what it’s done. As Matt Corddry, director of engineering for Facebook’s hardware lab, told TechCrunch last year, Facebook knows its own requirements better than anyone:

“We understand our challenges, costs, operating environment and needs better than an outside vendor and we are able to specialize on the specific needs of Facebook,” he explained at the time.


Since making that decision, Facebook has designed a range of equipment, such as its top-of-rack network switches, which lets the company programmatically control every part of the hardware and gives it tremendous flexibility.

[Image: Facebook 6-pack of switches.]

At the same time, Facebook began developing software to manage these custom pieces of equipment, such as FBoss Agent, the software the company created to run those custom top-of-rack switches.
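Parikh didn’t describe how that software works under the hood, but the general pattern is easy to picture: a small agent runs on each switch and exposes the box’s ports and forwarding state through an API, so higher-level tooling can reconfigure the hardware programmatically rather than through a vendor’s closed interface. Below is a minimal, hypothetical sketch of that agent pattern in Python; the names (SwitchAgent, PortConfig, apply, drain) are invented for illustration and are not FBoss Agent’s actual interface.

```python
# Hypothetical sketch of an agent-style API for a top-of-rack switch.
# The names below are illustrative only; they are not FBoss's real interface.
from dataclasses import dataclass


@dataclass
class PortConfig:
    """Desired state for a single switch port."""
    name: str          # e.g. "eth1/1"
    speed_gbps: int    # link speed to program, in gigabits per second
    enabled: bool = True


class SwitchAgent:
    """Runs on the switch and applies desired port state to the hardware."""

    def __init__(self) -> None:
        self._ports: dict[str, PortConfig] = {}

    def apply(self, config: PortConfig) -> None:
        # A real agent would program the switch ASIC here via a vendor SDK;
        # this sketch just records the desired state.
        self._ports[config.name] = config
        state = "up" if config.enabled else "down"
        print(f"programmed {config.name}: {config.speed_gbps}G, {state}")

    def drain(self, port_name: str) -> None:
        """Administratively disable a port, e.g. ahead of maintenance."""
        port = self._ports[port_name]
        self.apply(PortConfig(port.name, port.speed_gbps, enabled=False))


if __name__ == "__main__":
    agent = SwitchAgent()
    agent.apply(PortConfig("eth1/1", speed_gbps=100))
    agent.drain("eth1/1")
```

The payoff of that pattern is that switch behavior becomes ordinary software, something engineers can version, test and roll out like any other service.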

Finally, it designed highly efficient spaces to house that equipment, such as the data center it opened in Altoona, Iowa, last fall. Facebook looked at every aspect of the data center design, from how it was cooled (using 100 percent outside air instead of expensive air conditioning systems) to the electrical equipment and the racks that housed the servers.

Open Sourcing The Results

After Facebook began designing its own equipment and data centers, it made another decision: to bring the power of the community to bear on the problem by open sourcing not just the software, but also its hardware designs. It launched the Open Compute Project Foundation, an organization created to share those designs.

According to the organization’s mission statement, “The Open Compute Project Foundation is a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing.”

It’s not surprising that the statement in many ways mirrors the mission of Facebook itself. The purpose is clearly to give others the chance to take advantage of Facebook’s hardware designs with the goal of advancing scalable computing, while helping Facebook improve its designs. It’s a situation where everybody should win.

Facebook started the project four years ago because it recognized that other companies had similar problems related to scale and that it would be more efficient to work together. “We wanted to bring together a group to share a common set of problems and come together to solve them,” Parikh said.

Last year, 1,000 engineers who don’t work at Facebook contributed to open source projects the company started, he said.

Measuring Twice, Cutting Once

In spite of the speed with which Facebook made this transition, it would be a mistake to think it did so willy-nilly. Facebook had a plan, and it relied on data to make sure it was achieving the desired results. At the macro level, data is embedded in every decision Facebook makes, Parikh explained.

“When we started down the path [to building our own hardware], we dipped our feet in,” Parikh said. “We only designed and built one server. We didn’t do all of the configurations. We started with the simplest design from a hardware perspective, building a web server.”

Along the same lines, it built just one data center and grew from there.

The company constantly reviews the data. If it turns out it makes more sense to buy than build, it does that; it’s a question the company keeps re-evaluating. It also looks to in-house expertise, asking the people who know the most about a particular problem and taking their input into consideration, Parikh explained.

“We want to empower technology leads to get the data they need to drive their parts of their business,” he said.

In spite of all these decisions, tests and the scale of the transition, Parikh made it sound like it wasn’t that big a deal. “It’s not rocket science. We know what our strategy is and we look at the data.” Sounds pretty simple, right?