Researchers Create “Near-Exhaustive,” Ultra-Realistic Cloth Simulation


Cloth is hard to simulate, yet it’s important in gaming, scientific analysis, and CGI. That’s why scientists at Berkeley and Carnegie Mellon spent six months near-exhaustively exploring the possible configurations of a single cloth robe on a cute little animated figure, thereby reducing error and creating some of the nicest simulated cloth you’ll see today. They report their findings in a paper being presented at SIGGRAPH today.

“The criticism of data-driven techniques has always been that you can’t pre-compute everything,” said Adrien Treuille, associate professor of computer science and robotics at Carnegie Mellon. “Well, that may have been true 10 years ago, but that’s not the way the world is anymore.”

The cloth you see above is made of 29,000 vertices and rendered at 60 frames per second. It flows and moves just like real cloth because all possible motions have been precomputed and stored in a sort of graph of possible cloth configurations. Why is this important? Because it allows for online (real-time) simulation of clothing on a human body, it can make games far cooler than they are now, and the technology could show how materials will perform in various configurations, different weather patterns, and the like. In short, it gives virtual robots real clothes.
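The paper’s actual data structures aren’t described in the article, so the sketch below is only a rough illustration of the idea that playback becomes graph traversal: each node stores one precomputed cloth configuration, edges link configurations reachable in one frame, and the runtime just walks the graph. The `ClothState` fields and the pose-distance metric here are assumptions, not the authors’ implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ClothState:
    """One node in the precomputed motion graph (illustrative layout)."""
    pose: float                  # character pose this state was simulated under
    coefficients: list           # basis coefficients encoding vertex positions
    neighbors: list = field(default_factory=list)  # states reachable next frame

def step(current: ClothState, target_pose: float) -> ClothState:
    """Advance one frame: follow the edge whose destination was simulated
    under the pose closest to the character's current pose. No physics runs
    at playback time -- it's just a lookup in the precomputed graph."""
    return min(current.neighbors, key=lambda s: abs(s.pose - target_pose))
```

Because every state in the graph was produced by a full offline simulation, the runtime cost is a graph lookup plus reconstructing vertex positions from the stored coefficients, which is what makes 60 fps on a 29,000-vertex mesh feasible.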

Can we expect to see this technology in games any time soon? Not on current consoles.

A common concern about the viability of data-driven techniques focuses on run-time memory footprint. While our approximately 70 MB requirement is likely too large to be practical for games targeting modern console systems (for example, the Xbox 360 has only 512 MB of RAM), we believe its cost is modest in the context of today’s modern PCs (and the coming generation of gaming consoles), which currently have multiple GBs of memory. Furthermore, we have not fully explored the gamut of cloth basis or secondary graph compression strategies, and so both better compression, as well as run-time solutions that stream regions of the secondary graph (leaving only the basis representation in core), are likely possible.
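To see why a basis representation matters so much for that footprint, here is a back-of-envelope estimate. The 29,000-vertex count comes from the article; the number of graph states and the basis size are purely illustrative assumptions, chosen only to show the scale of the savings.

```python
# Back-of-envelope memory estimate for a precomputed cloth motion graph.
# Vertex count is from the article; NUM_STATES and BASIS_SIZE are assumptions.
VERTICES = 29_000
FLOATS_PER_VERTEX = 3        # x, y, z position
BYTES_PER_FLOAT = 4          # 32-bit float
NUM_STATES = 50_000          # assumed number of precomputed cloth states

# Naive storage: full vertex positions for every state in the graph.
full_bytes = NUM_STATES * VERTICES * FLOATS_PER_VERTEX * BYTES_PER_FLOAT
print(f"full positions:  {full_bytes / 2**30:.1f} GiB")

# Basis storage: each state is a handful of coefficients, plus one shared
# set of basis vectors the coefficients are applied to at playback time.
BASIS_SIZE = 100             # assumed number of basis vectors
basis_bytes = (NUM_STATES * BASIS_SIZE * BYTES_PER_FLOAT
               + BASIS_SIZE * VERTICES * FLOATS_PER_VERTEX * BYTES_PER_FLOAT)
print(f"basis encoding:  {basis_bytes / 2**20:.1f} MiB")
```

Under these assumed numbers, raw positions would run to tens of gigabytes while the basis encoding lands in the tens of megabytes, which is the same ballpark as the 70 MB figure the authors quote.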

It’s quite cute to see how these little figures move inside robes and “casual” clothing, and I’d say we’re just a little past the uncanny valley, at least when it comes to clothing. With a few more months of rendering, I wonder what they could do with floppy bellies and arm fat?