MIT’s AI streaming software aims to stop those video stutters


MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) wants to ensure your streaming video experience stays smooth. A research team led by MIT professor Mohammad Alizadeh has developed an artificial intelligence (dubbed ‘Pensieve’) that can select the best algorithms for ensuring video streams both without interruption and at the best possible playback quality.

The method improves upon existing tech, including the adaptive bitrate (ABR) method used by YouTube that throttles back quality to keep videos playing, albeit with pixelation and other artifacts. The AI can select different algorithms depending on what kind of network conditions a device is experiencing, cutting down on the downsides associated with any one method.
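To make the idea concrete, here is a minimal sketch of the kind of rule-based adaptive bitrate (ABR) chooser that Pensieve competes with: given an estimate of current network throughput, it picks the highest quality level the connection can sustain. The bitrate ladder and the safety factor below are illustrative assumptions, not values from the research.

```python
# Example bitrate ladder in kbps (illustrative, not from the paper)
BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]

def pick_bitrate(estimated_throughput_kbps: float, safety: float = 0.9) -> int:
    """Pick the highest bitrate the estimated throughput can sustain.

    The safety factor leaves headroom so a throughput dip doesn't
    immediately stall playback.
    """
    budget = estimated_throughput_kbps * safety
    chosen = BITRATES_KBPS[0]  # never go below the lowest rung
    for rate in BITRATES_KBPS:
        if rate <= budget:
            chosen = rate
    return chosen

print(pick_bitrate(3000))  # -> 1850 (2700 kbps budget can't sustain 2850)
print(pick_bitrate(200))   # -> 300 (falls back to the lowest rung)
```

A fixed heuristic like this works well on some networks and poorly on others, which is exactly the gap an adaptive, learned selector aims to close.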

During experimentation, the CSAIL research team found that video streamed with 10 to 30 percent less rebuffering and 10 to 25 percent better quality. Those gains would add up to a significantly improved experience for most video viewers, especially over a long period.

[Image gallery: Pensieve overview; Pensieve outperforming existing approaches; Pensieve neural network detailed diagram]

The difference between CSAIL’s Pensieve approach and traditional methods is mainly its use of a neural network rather than a strictly rule-based algorithm. Instead of following predefined rules about which techniques to use when buffering video, the neural net learns to optimize through a reward system that incentivizes smoother playback.
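The reward idea can be sketched as a function that scores each streaming decision: higher bitrate earns reward, while stalls and abrupt quality switches are penalized. The shape of this trade-off matches the approach described above, but the specific weights below are illustrative assumptions, not the ones CSAIL used.

```python
def playback_reward(bitrate_kbps: float,
                    rebuffer_sec: float,
                    prev_bitrate_kbps: float,
                    rebuffer_penalty: float = 4.0,
                    smoothness_penalty: float = 1.0) -> float:
    """Score one streaming step for a reinforcement-learning agent."""
    quality = bitrate_kbps / 1000.0                # reward higher quality
    stall = rebuffer_penalty * rebuffer_sec        # punish time spent stalled
    churn = (smoothness_penalty *
             abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0)  # punish switches
    return quality - stall - churn

# Smooth high-quality playback scores well...
print(playback_reward(3000, 0.0, 3000))  # -> 3.0
# ...while a stall plus a quality drop scores badly.
print(playback_reward(1000, 1.0, 2000))  # -> -4.0
```

During training, a neural network observing network conditions would propose the next bitrate, and a signal like this would be used to update its weights toward smoother playback.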

Researchers say the system is also potentially tweakable on the user end, depending on what users want to prioritize in playback: You could, for instance, set Pensieve to optimize for playback quality, or conversely, for playback speed, or even for conservation of data.
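One way such user priorities could be expressed is as different weightings on the objective the system optimizes. The preset names, weights, and function below are hypothetical, included only to illustrate the idea; they are not an actual Pensieve API.

```python
# Hypothetical user-selectable presets as weightings on a streaming objective
PRESETS = {
    "quality":    {"quality_w": 1.0, "stall_w": 2.0, "data_w": 0.0},
    "smoothness": {"quality_w": 0.5, "stall_w": 6.0, "data_w": 0.0},
    "data_saver": {"quality_w": 0.5, "stall_w": 3.0, "data_w": 1.0},
}

def objective(bitrate_mbps: float, rebuffer_sec: float,
              megabytes_used: float, preset: str = "quality") -> float:
    """Score a streaming outcome under the chosen user preset."""
    w = PRESETS[preset]
    return (w["quality_w"] * bitrate_mbps
            - w["stall_w"] * rebuffer_sec
            - w["data_w"] * megabytes_used)

# The same outcome scores differently depending on what the user prioritizes:
print(objective(3.0, 0.5, 10.0, "quality"))     # -> 2.0
print(objective(3.0, 0.5, 10.0, "data_saver"))  # -> -10.0
```

Retraining or reweighting against a different objective like this is what would let one learned system serve viewers who care about sharpness, stability, or data caps.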

The team will present Pensieve and release its code as open source at SIGCOMM next week in LA, and they expect that, trained on a larger data set, it could deliver even greater improvements in performance and quality. They also plan to test it on VR video, since the high bitrates required for a quality VR experience are well suited to the kinds of improvements Pensieve can offer.