IBM Assembles Record 120-Petabyte Storage Array

IBM Research has just set a world record in data storage by building a drive array capable of holding 120 petabytes. It was built at the request of an unnamed research group that needs this unprecedented amount of space for running simulations. Such simulations keep expanding, partly because the datasets themselves grow and partly because more backups, snapshots, and redundant copies are layered on top of them.

How did they do it? Well, the easy part was plugging in the 200,000 individual hard drives that make up the array. The racks are packed unusually densely and need water cooling, but beyond that the hardware is fairly straightforward.

The problems start when you actually have to index all that space. Some filesystems can't handle single files larger than about 4 GB, and some can't address drives larger than a couple of terabytes, which is why 3 TB drives caused such trouble when they arrived. These formats simply weren't designed to track so many files over so large a space. Imagine your job was to give everyone in the world a unique name: easy at first, but after a billion or so you start running out of permutations. File systems work the same way, though modern ones are much more forward-looking in their design, and you're unlikely to hit that problem again, unless you're IBM Research.
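Those caps fall straight out of fixed-width address fields. The arithmetic below uses FAT32's 32-bit file-size field and the MBR partition table's 32-bit sector addresses as illustrative cases (my examples, not ones the article names):

```python
# A 32-bit length field caps a single file just below 4 GiB (FAT32's limit).
max_file_bytes = 2**32 - 1
print(max_file_bytes)  # 4294967295, one byte short of 4 GiB

# A 32-bit sector address with 512-byte sectors caps a volume at 2 TiB,
# the MBR partition-table limit that early 3 TB drives slammed into.
SECTOR_BYTES = 512
max_volume_bytes = 2**32 * SECTOR_BYTES
print(max_volume_bytes / 10**12)  # ~2.2 decimal terabytes
```

Widen those fields to 48 or 64 bits and the limits vanish for all practical purposes, which is essentially what modern filesystems did.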

120 petabytes is an insane amount of storage, eight times the size of the 15 PB arrays already out there, and even those had to contend with address-space issues. In IBM's huge array, the index that tracks where every file lives takes up fully 2 PB of space on its own. You'd need a next-generation file index just to index the index!
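The metadata burden is easy to quantify from the figures above (simple arithmetic, using decimal petabytes):

```python
PB = 10**15  # one petabyte in bytes (decimal convention)

total_capacity = 120 * PB  # the whole array
index_size = 2 * PB        # space consumed by the file index alone

# The index eats 1/60 of the array before a single simulation byte lands.
overhead = index_size / total_capacity
print(f"{overhead:.1%} of the array just tracks the rest")
```

That roughly 1.7% overhead sounds small until you remember it is, by itself, larger than almost any storage array ever built.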

IBM's homegrown file system is called General Parallel File System, or GPFS. It's designed with huge volumes and massive parallelism in mind: think RAID for thousands of drives. Files are striped across as many drives as they need to be, reducing or eliminating read and write speed as a performance bottleneck. And boy does it perform: IBM recently set another record by indexing 10 billion files in 43 minutes. The previous record? 1 billion files, in three hours. So yeah, it scales pretty well.
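GPFS's real striping is far more sophisticated (and handles metadata, replication, and failure recovery), but the core round-robin idea can be sketched in a few lines of Python. This is a toy model, not IBM's code:

```python
def stripe(data, n_drives, chunk=4):
    """Round-robin fixed-size chunks of data across n_drives, RAID-0 style."""
    drives = [bytearray() for _ in range(n_drives)]
    for i, offset in enumerate(range(0, len(data), chunk)):
        drives[i % n_drives].extend(data[offset:offset + chunk])
    return drives

def reassemble(drives, chunk=4):
    """Read chunks back in the same round-robin order to rebuild the file."""
    out = bytearray()
    pos = [0] * len(drives)
    i = 0
    while any(p < len(d) for p, d in zip(pos, drives)):
        d = i % len(drives)
        out.extend(drives[d][pos[d]:pos[d] + chunk])
        pos[d] += chunk
        i += 1
    return bytes(out)

# With five drives, reading this "file" can hit all five spindles at once.
parts = stripe(b"abcdefghijklmnopqrstuvwxyz", n_drives=5)
assert reassemble(parts) == b"abcdefghijklmnopqrstuvwxyz"
```

The payoff is that a sequential read of one large file becomes parallel reads from many drives, which is exactly why striping wider keeps throughput from being the bottleneck.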

The array, built by IBM’s Storage Systems team at Almaden, will be used by the nameless client as part of a simulation of “real-world phenomena.” That implies the natural sciences, but it could be anything from subatomic particles to planetary simulations. These projects are generally taken on as much to advance the field as to provide a service, though. And of course now IBM gets to boast that it built this thing, at least until an even bigger one comes along.