Intel And Facebook Team To Open Source Data-Center Designs, Open Compute Project Grows With 12 New Members

Facebook announced today at the Open Compute Summit that it is open-sourcing more of its data center designs, covering photo storage, high availability and power consumption.

The news comes alongside a series of other announcements at the event today:

  • Silicon photonics: Intel is contributing designs for its forthcoming silicon photonics technology, enabling 100 Gbps interconnects (enough to last multiple processor generations).
  • “Group Hug” board: Facebook is contributing a new common slot architecture specification for motherboards (nicknamed “Group Hug”), which allows for completely vendor-neutral boards that can accommodate up to 10 SoCs.
  • New SoCs: AMD, Applied Micro, Calxeda and Intel have all announced support for the Group Hug board; Applied Micro has built a mechanical demo of the new design.
  • New members: More than a dozen organizations have joined, including storage companies like EMC, Fusion-io, Hitachi and Sandisk, as well as Applied Micro and ARM.

Facebook stores 300 million new pictures per day. It needed a way to keep those pictures without archiving them offline, so that people could still access them if needed.

Facebook used the Open Compute “Open Rack” specification to create a cold storage rack for photos. The specs are now available for anyone to use.

The company is also contributing DragonStone, a design spec for a low-power database server with a single CPU board and redundant power supplies, intended for “cold data” storage. DragonStone has been deployed in Facebook’s data center in Lulea, Sweden, where it is delivering 40 percent greater efficiency.

Finally, Facebook is contributing Winterfell, a new web server design that fits more servers into each rack.

Facebook needs the innovation that comes from opening up data center designs, not only from the social network itself but also from traditional providers. Intel, for example, collaborated with the community to make the specs for its silicon photonics technology available, offering 100-gigabit-per-second connectivity with unprecedented latency characteristics.