MIT CSAIL research offers a fully automated way to peer inside neural nets

MIT’s Computer Science and Artificial Intelligence Lab has devised a way to look inside neural networks and shed some light on how they actually make decisions. The new process is a fully automated version of a system the same research team presented two years ago, which relied on human reviewers to achieve the same ends.

Coming up with a method that can provide similar results without human review could be a significant step toward helping us understand why well-performing neural networks succeed as they do. Current deep learning techniques leave a lot of questions about how systems actually arrive at their results – the networks employ successive layers of signal processing to classify objects, translate text, or perform other functions, but we have few means of gaining insight into how each layer does its actual decision-making.

The MIT CSAIL team’s system uses doctored neural nets that report back the strength with which every individual node responds to a given input image, and the images that generate the strongest responses are then analyzed. This analysis was originally performed by Mechanical Turk workers, who catalogued each image according to the specific visual concepts it contained, but that work has now been automated, so the classification is machine-generated.
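The core idea of that probing step – record each unit's response to every image, then keep the images that excite it most – can be sketched in a few lines. This is an illustrative toy, not the CSAIL code: the function name `top_activating_images` and the dot-product-plus-ReLU response model are stand-ins for the real convolutional activations the instrumented network would report.

```python
import numpy as np

def top_activating_images(images, unit_filter, k=3):
    """Rank images by how strongly they activate one (hypothetical) unit.

    Each 'image' is a flat feature vector; the unit's response is modeled
    here as a dot product followed by ReLU -- a simplified stand-in for
    the activation strength a real network node would report.
    """
    responses = [max(0.0, float(img @ unit_filter)) for img in images]
    order = np.argsort(responses)[::-1][:k]  # indices of strongest responses first
    return [(int(i), responses[i]) for i in order]

# Toy data: 100 random "images" and one random unit filter.
rng = np.random.default_rng(0)
images = [rng.normal(size=16) for _ in range(100)]
unit_filter = rng.normal(size=16)

top = top_activating_images(images, unit_filter, k=3)
```

In the actual pipeline, the images returned for each unit would then be labeled – originally by Mechanical Turk workers, now automatically – with the visual concept (texture, object, scene) that the unit appears to detect.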

Already, the research is providing interesting insight into how neural nets operate, for example showing that a network trained to add color to black-and-white images ends up devoting a significant portion of its nodes to identifying textures in the pictures. It also found that networks trained to identify objects in video dedicated many of their nodes to scene identification, while networks trained to identify scenes did the opposite, committing many nodes to identifying objects.

Because we don’t fully understand how humans think, classify and recognize information either, and because neural nets are loosely based on hypothetical models of human thought, the research from this CSAIL team could eventually shed light on questions in neuroscience, too. The paper will be presented at this year’s Computer Vision and Pattern Recognition conference, and should provoke plenty of interest from the artificial intelligence research community.