“Machine Listening” is the idea that computers can be programmed to interpret audio signals much the way humans do: they can tell that a song belongs to the blues genre rather than techno, and they can detect musical characteristics like tempo, transition types, and harmonies.
The technology has some obvious practical uses. It could be used to compile collections of music with a similar sound, or with similarities to music someone already knows they like. Applications could also be designed to create the perfect mixtape, with songs picked and ordered in just the right way.
The Echo Nest is a company that’s bringing machine listening to Web 2.0. It was founded by two MIT PhD students and is supported by a government grant. Today, the company releases the first of several “Musical Brain” APIs intended to improve three main aspects of music-related web services: search, recommendations, and interactivity.
The first API, which focuses on signature analysis and is being released through Mashery, can be used to retrieve an XML file with information about a particular song. A proof of concept website called This is my jam has been set up to demonstrate its capabilities. Load up a few of your favorite artists and it will automatically arrange songs from them in an order deemed most suitable given their audio characteristics.
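To give a sense of what working with such an API might look like, here is a minimal sketch of parsing the kind of XML file described above. The field names and structure below are purely illustrative assumptions, not The Echo Nest's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML returned for a single analyzed track; the tag and
# attribute names here are invented for illustration only.
sample_xml = """<?xml version="1.0" encoding="UTF-8"?>
<analysis>
  <track artist="Muddy Waters" title="Mannish Boy">
    <tempo>82.5</tempo>
    <key>7</key>
    <loudness>-9.3</loudness>
  </track>
</analysis>"""

# Parse the response and pull out the audio characteristics a client
# might use for ordering or matching songs.
root = ET.fromstring(sample_xml)
track = root.find("track")
tempo = float(track.find("tempo").text)
print(track.get("artist"), "-", track.get("title"), "| tempo:", tempo, "BPM")
```

A mixtape-style application like This is my jam could, in principle, sort or group tracks on fields such as tempo once they were extracted this way.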
The Echo Nest will lend all of its APIs to non-commercial projects for free, but it will charge commercial sites a usage fee. The company plans to showcase a website for each of its APIs, but it doesn’t currently have any plans to create a consumer destination of its own with the tech.