Deep Science: Automated peach sniffers, orbital opportunity and AI accessibility

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.

In this week’s roundup: a prototype electronic nose, AI-assisted accessibility, ocean monitoring, surveying economic conditions through aerial imagery and more.

Accessible speech via AI

People with disabilities that affect their voice, hearing or motor function must use alternative means to communicate, but those methods tend to be too slow and cumbersome to allow anything near average rates of speech. A new system could change that through context-sensitive prediction of keystrokes and phrases.

Someone who must type using gaze detection and an on-screen keyboard may only be able to produce between five and 20 words per minute — one every few seconds, a fraction of average speaking rates, which are generally over 100.

A person uses a brain-computer interface to type in a Stanford study. Image Credits: Stanford University

But like everyone else, these people constantly reach for common phrases that depend on whom they are speaking to and the situation they’re in. For example, every morning such a person may have to laboriously type out “Good morning, Anne!” and “Yes, I’d like some coffee.” But later in the day, at work, the person may frequently ask or answer questions about lunch or a daily meeting.
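The article doesn’t detail how the study’s system makes its predictions, but the core idea of ranking a user’s previously typed phrases by context can be sketched in a few lines. Everything below (the `PhrasePredictor` class, its method names and the partner/time-of-day context keys) is a hypothetical illustration, not the researchers’ implementation:

```python
from collections import defaultdict

class PhrasePredictor:
    """Illustrative sketch of context-sensitive phrase prediction:
    rank phrases by how often they were used in a similar context
    (here, conversation partner plus time of day)."""

    def __init__(self):
        # (partner, time_of_day) -> {phrase: times used}
        self.history = defaultdict(lambda: defaultdict(int))

    def record(self, partner, time_of_day, phrase):
        """Log a phrase the user typed in this context."""
        self.history[(partner, time_of_day)][phrase] += 1

    def suggest(self, partner, time_of_day, top_n=3):
        """Return the most frequently used phrases for this context."""
        counts = self.history[(partner, time_of_day)]
        return sorted(counts, key=counts.get, reverse=True)[:top_n]

predictor = PhrasePredictor()
predictor.record("Anne", "morning", "Good morning, Anne!")
predictor.record("Anne", "morning", "Good morning, Anne!")
predictor.record("Anne", "morning", "Yes, I'd like some coffee.")
print(predictor.suggest("Anne", "morning"))
```

A user could then accept a suggested phrase with a single gaze selection instead of typing it out letter by letter, which is where the speed gains over five to 20 words per minute would come from.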