The so-called ‘smart home’ often comes across looking incredibly dumb. Either you have to shell out lots of money to replace perfectly functional appliances with their Internet-connected equivalents — which might then be vulnerable to hacking, or whose functionality could be bricked at manufacturer whim.
Or you go around manually affixing sensors to each appliance and movable fixture in your home — and end up with the equivalent of interior pebble-dashing made up of stick-on gadgetry; a motion sensor and/or ugly-looking Dash-style button on everything.
And that’s before you even consider how, in inviting this bevy of connected device makers into your home, you’re typically letting out a flow of what can be highly sensitive personal data to be sucked into the cloud for profit-seeking entities to pore over.
Researchers at CMU’s Future Interfaces Group are taking a different approach to enable the sensing of indoor environments, and reckon there’s a quicker, less expensive and less cumbersome way to create what’s at least a smarter interior. And one that might have some privacy benefits too, depending on the deploying entity.
What they’ve built so far does not offer as many remote control options as a fully fledged, IoT-enabled appliance scenario could. But if it’s mostly signals intelligence on what’s going on indoors that you want — plus the ability to leverage that accrued real-time intel to support contextually aware apps for the lived environment — their approach looks very promising.
The team is presenting their research at the ACM CHI Conference in Denver this week. They’ve also produced the below demo video showing their test system in action.
The system involves using a single custom plug-in sensor board that’s packed with multiple individual sensors — but, crucially from a privacy point of view, no camera. The custom sensor (shown in the diagram below) uses machine learning algorithms to process the data it’s picking up, so it can be trained to identify various types of domestic activity, such as (non-smart) appliances being turned on — like a faucet, cooker or blender. It can even identify things like cupboard doors or a microwave door being opened and closed; know which burner on your hob is on; and identify that a toilet has been flushed.
So it’s effectively a device that enables multiple synthetic sensors that are able to track lots of different types of in-room activity — thereby getting around the tedium and unsightliness of needing to stick sensors on everything, while also eradicating all those potential points of failure (i.e. when physical sensors come unstuck or break or run out of battery power).
The idea is a “quick and dirty” smart home system that aims for general-purpose sensing in each room where it’s located, says CMU researcher Chris Harrison. And while others have been thinking along similar multi-sensor lines, this project has had the benefit of being part of a $500,000+ Google-funded research effort aimed at encouraging the development of an open ecosystem for connected devices.
Google’s 2015 research proposal for that, which the CMU ‘super sensor’ project forms a part of, describes the main goal and priorities as follows:
The mission of this program is to enable effective use and broad adoption of the Internet of Things by making it as easy to discover and interact with connected devices as it is to find and use information on the open web. The resulting open ecosystem should facilitate usability, ensure privacy and security, and above all guarantee interoperability.
Harrison says he can’t discuss any specific plans Google might have to commercialize the super sensor research. But there are some pretty obvious potential avenues for the company to plug something like this into its own product portfolio — say by using its Google Home voice-driven AI speaker as the central in-home interface that’s being fed intelligence by a system of super sensors. The homeowner would then be in a position to be informed of and ask about domestic goings on via that central IoT device.
When I suggest to Harrison a Google Home connected speaker could utilize the system to provide a layer of domestic intelligence for homeowners — such as by piping up to verbally warn them they’ve left a tap on, or by keeping an internal running tally of the number of cups of coffee they’ve brewed this month in case they want to know — he agrees there’s some clear potential here, telling TechCrunch: “Yes certainly. Our sensor suite could go right into that product (or a Nest, Chromecast, etc.).”
As well as being able to sense primary indoor events — e.g. that a tap is running — the system allows for secondary inferences to be made, such as calculating how much water is being used because it knows how long the tap has been running.
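To make that concrete, here’s a minimal sketch of such a second-order inference. None of these names or numbers come from the CMU system — the flow rate and event names are assumptions for illustration — but it shows how a “water used” figure can be derived purely from the timestamps of a primary “faucet running” event feed:

```python
# Hypothetical second-order synthetic sensor: derives water consumption
# from primary faucet on/off events. The flow rate is an assumed constant,
# not a value from the CMU research.

ASSUMED_FLOW_RATE_LPM = 6.0  # assumed faucet flow rate, litres per minute

class WaterUsageSensor:
    """Accumulates estimated water usage from 'faucet_on'/'faucet_off' events."""

    def __init__(self, flow_rate_lpm=ASSUMED_FLOW_RATE_LPM):
        self.flow_rate_lpm = flow_rate_lpm
        self.running_since = None  # timestamp when the tap last turned on
        self.total_litres = 0.0

    def on_event(self, event, timestamp):
        # timestamp is in seconds since some fixed reference point
        if event == "faucet_on" and self.running_since is None:
            self.running_since = timestamp
        elif event == "faucet_off" and self.running_since is not None:
            minutes = (timestamp - self.running_since) / 60.0
            self.total_litres += minutes * self.flow_rate_lpm
            self.running_since = None

sensor = WaterUsageSensor()
sensor.on_event("faucet_on", 0)
sensor.on_event("faucet_off", 120)  # tap ran for two minutes
print(sensor.total_litres)  # 12.0 litres at the assumed 6 L/min
```

The point is that the board never needs a dedicated water meter: duration alone, multiplied by an assumed (or calibrated) flow rate, yields the secondary measurement.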
Or monitoring a more complex chain of events (e.g.: a microwave door being opened and closed; cooking commencing; the door being opened and closed again; cooking continuing; and cooking ending) in order to support the ability to create smart alerts for whether an appliance is available for use, for example. Or whether a dispenser item might need restocking or another type of appliance be in need of servicing — based on monitoring cumulative use over time.
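The microwave example above is essentially a small state machine driven by the primary event stream. The sketch below is purely illustrative — the states, event names and transition logic are assumptions, not the CMU implementation — but it shows how an “is the appliance free?” signal and a cumulative-use counter can fall out of chained events:

```python
# Illustrative state machine for the microwave-availability example.
# All state and event names here are hypothetical.

class MicrowaveTracker:
    def __init__(self):
        self.state = "available"
        self.cycles = 0  # cumulative uses, e.g. to trigger servicing alerts

    def on_event(self, event):
        if event == "door_opened" and self.state == "available":
            self.state = "loading"
        elif event == "door_closed" and self.state == "loading":
            self.state = "loaded"
        elif event == "cooking_started":
            self.state = "cooking"
        elif event == "cooking_ended":
            self.state = "finished"
            self.cycles += 1
        elif event == "door_opened" and self.state == "finished":
            self.state = "unloading"  # food being removed
        elif event == "door_closed" and self.state == "unloading":
            self.state = "available"  # appliance free for the next user
        return self.state

tracker = MicrowaveTracker()
for e in ["door_opened", "door_closed", "cooking_started",
          "cooking_ended", "door_opened", "door_closed"]:
    tracker.on_event(e)
print(tracker.state, tracker.cycles)  # available 1
```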
The demo video shows various scenarios for utilizing the system that don’t involve a smart home at all — but rather applications that could enable smart facilities management in an office or public bathroom setting, such as counting paper towels dispensed to send an alert to replenish a unit, or estimating when white board pens might run out of ink based on tracking how much ink is being used. Or to monitor activity in an industrial workshop environment where the system is able to distinguish between different tools in use — with obvious potential safety benefits.
General-purpose tracking in a commercial setting certainly has plenty of possible advantages — be it alerts to replenish supplies before they run out, or to notify service staff when an appliance isn’t functioning properly. And generally to keep the environment running smoothly and efficiently.
But inside the home such persistent, continuous and potentially powerful activity monitoring can start to look a bit, well, creepy.
On the privacy front a feature baked into the system means that raw sensor data at least never leaves the board — so there’s no raw audio being sucked into the cloud, for example. “We featurize everything on the board so that the signal is not recoverable,” says Harrison when I suggest this vision of an all-knowing smart home could be a bit dystopic. “There is no audio or anything transmitted to the cloud.
“If a developer wants to build an app that does something when e.g., your coffee is ready, they don’t ever get to see raw data. Instead, they subscribe to that synthetic sensor feed of ‘coffee ready’ — and that’s all they get, which helps to protect privacy.”
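A rough sketch of what that two-layer design might look like follows. To be clear, the specific features and event names below are assumptions, not the CMU board’s actual pipeline — the point is that raw samples are collapsed into a few summary statistics on the board itself, so the original audio can’t be reconstructed downstream, while apps only ever subscribe to high-level events:

```python
# Hypothetical on-board "featurization" plus synthetic-sensor subscription.
# Feature choices and event names are illustrative assumptions.

def featurize(window):
    """Reduce a raw audio window to a few non-invertible summary statistics."""
    n = len(window)
    rms = (sum(s * s for s in window) / n) ** 0.5        # overall loudness
    peak = max(abs(s) for s in window)                   # transient spikes
    zcr = sum(1 for a, b in zip(window, window[1:]) if a * b < 0) / n
    return {"rms": rms, "peak": peak, "zcr": zcr}        # only this leaves the board

# Developers never see raw data; they subscribe to high-level synthetic events.
subscribers = {}

def subscribe(event, callback):
    subscribers.setdefault(event, []).append(callback)

def publish(event):
    for cb in subscribers.get(event, []):
        cb(event)

features = featurize([0.2, -0.4, 0.3, -0.1])  # a tiny fake audio window
subscribe("coffee_ready", lambda e: print("app notified:", e))
publish("coffee_ready")  # a developer's app only ever receives this event
```

A handful of statistics like these can still feed a classifier, but there is no way to run them backwards into speech — which is the privacy property Harrison is describing.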
But he also agrees the commercial and industrial use-cases are “particularly powerful”, with the clear potential for safety benefits and cost-savings across an entire workforce, adding: “Imagine if a restaurant or supermarket knew what was going on automatically with smart sensors — currently, they know nothing — to ‘sense’ anything, they have humans walk around.”
The system does have some limitations, of course. Not least the lack of ability to remotely control appliances, given they are not themselves wired to the Internet (though that’s arguably a potential benefit if you’re worried about hackers breaking into and wreaking havoc via your Internet-connected oven).
Another limitation is domestic chaos. If lots of appliances are running and lots of domestic activity is going on at once, things could get pretty confusing for the detection system. On this Harrison confirms: “It can degrade if there are lots of noisy things going on.” Though he also says different appliances can trigger different sensing channels — so some types of activity would presumably still be able to cut through the noise.
“If you are running your dishwasher, and coffee grinder and toaster and blender all at the same time, it is likely to only recognize a few of those at the same time (though it’ll recognize the high level state that the kitchen is in use),” he adds.
The CMU team kitted out five different rooms with sensor boards (one per room) for the demo system. And each board powered on average eight synthetic sensors, according to Harrison, who says the average accuracy across all of those deployments — after about a week of learning signals — was a pretty impressive 98 per cent.
Of course the system does also need to be trained. So that’s another potential limitation — in that there might need to be a pretty involved setup process during which people have to introduce various appliances and features of their home so the algorithms can get to know what they’re sensing. But Harrison says a library of known appliances can also be hosted in the cloud to take some of the strain.
“Once the machine learning knows what a blender sounds like, it can rain that classifier down to everyone (so users don’t have to train anything themselves),” he notes.
How easy would it be for something like this synthetic sensor system to be commercialized? Harrison says the team has already built a “pretty tightly integrated” board and a “comprehensive backend” so while “it’s not commercializable yet” he reckons “we are well on our way”.
Albeit, he’s not giving any possible timeframes for a marketplace deployment — perhaps given Google’s involvement.
He says the team is continuing to work on the project, with what sounds like continuing financial backing from Mountain View — although, again, he says he can’t say too much about “next steps”. So set your Alphabetic assumptions accordingly.
“What we are focusing on now is moving to whole-building deployments, where a sparse sensor network (a la one board per room) can sense everything going on,” he adds. “We’re also using deep learning to automatically identify appliances/devices, so users never have to configure anything. Truly plug and play.”