
Deep Science: ‘Twisted light’ lasers, prosthetic vision advances and robot-trained dogs


Image Credits: University of Pennsylvania

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.

In this edition: a new type of laser emitter that uses metamaterials, robot-trained dogs, a breakthrough in neurological research that may advance prosthetic vision, and other cutting-edge technology.

Twisted laser-starters

We think of lasers as going “straight” because that’s simpler than understanding their nature as groups of like-minded photons. But there are more exotic qualities for lasers beyond wavelength and intensity, ones scientists have been trying to exploit for years. One such quality is… well, there are a couple of names for it: chirality, vorticality, spirality and so on, all describing a beam with a corkscrew motion to it. Applying this quality effectively could improve optical data throughput speeds by an order of magnitude.

The trouble with such “twisted light” is that it’s very difficult to control and detect. Researchers have been making progress on this for a couple of years, but the last couple weeks brought some new advances.

First, from the University of the Witwatersrand, is a laser emitter that can produce twisted light of record purity and angular momentum — a measure of just how twisted it is. It’s also compact and uses metamaterials — always a plus.

The second is a pair of matched (and very multi-institutional) experiments that yielded both a transmitter that can send vortex lasers and, crucially, a receiver that can detect and classify them. It’s remarkably hard to determine the orbital angular momentum of an incoming photon, and hardware to do so is clumsy. The new detector is chip-scale and together they can use five pre-set vortex modes, potentially increasing the width of a laser-based data channel by a corresponding factor. Vorticality is definitely on the roadmap for next-generation network infrastructure, so you can expect startups in this space soon as universities spin out these projects.
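To make that “corresponding factor” concrete, here is a toy sketch (my own illustration, not the researchers’ actual signal processing) of mode-division multiplexing: treat the five vortex modes as five parallel sub-channels on a single beam, stripe the data across them, and recombine at the receiver. If each mode carries the same symbol rate as a conventional beam, aggregate throughput scales with the number of modes.

```python
# Toy illustration of mode-division multiplexing: five orthogonal
# vortex (OAM) modes act as five parallel sub-channels on one beam.
# Conceptual sketch only, not the hardware's actual modulation scheme.

NUM_MODES = 5  # pre-set vortex modes the transmitter/receiver pair supports

def stripe_across_modes(payload: bytes, num_modes: int = NUM_MODES):
    """Round-robin the payload across the mode sub-channels."""
    lanes = [bytearray() for _ in range(num_modes)]
    for i, byte in enumerate(payload):
        lanes[i % num_modes].append(byte)
    return lanes

def recombine(lanes):
    """Receiver side: interleave the lanes back into the original byte stream."""
    total = sum(len(lane) for lane in lanes)
    out = bytearray()
    for i in range(total):
        out.append(lanes[i % len(lanes)][i // len(lanes)])
    return bytes(out)

message = b"twisted light multiplies bandwidth"
lanes = stripe_across_modes(message)
assert recombine(lanes) == message
# With the per-mode symbol rate unchanged, aggregate throughput scales ~NUM_MODES x.
```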

Tracing letters on the brain-palm

Research into prosthetic vision has hit a number of brick walls since early advances in microelectrode arrays and control systems produced a wave of hype around the turn of the century. But one need not reproduce a rich visual scene to provide utility to the vision-impaired. This research from the University of Pennsylvania looks at stimulating the visual cortex in a new and effective way.

The visual cortex is laid out roughly like our field of vision is, making it theoretically easy to send imagery to. Surprise: It’s not that simple! Without the fineness of the signals normally sent through the retina and optic nerve, simultaneous stimulation of multiple points on this part of the brain produces a muddled blip, not a stark outline or overall impression.

This experiment showed that drawing a stimulus, for instance a letter, across the visual cortex, as if tracing it out on a palm, works like a charm. Blind study participants were able to recognize these forms at a rate of over one per second for minutes at a time.
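The core trick is temporal: stepping through the points of a shape in stroke order rather than lighting them all at once. Here is a minimal conceptual sketch of that idea, with hypothetical electrode coordinates, timings and a placeholder stimulation call; it is not the Penn team’s actual protocol.

```python
import time

# Hypothetical electrode coordinates along the stroke path of the letter "N",
# listed in the order a finger would trace it. Purely illustrative values.
LETTER_N_PATH = [(0, 0), (0, 1), (0, 2), (1, 1), (2, 0), (2, 1), (2, 2)]

def stimulate(electrode):
    """Placeholder for a call into the implant's stimulation interface."""
    print(f"pulse electrode at {electrode}")

def trace_letter(path, dwell_s=0.05):
    """Step through the electrodes in stroke order (dynamic stimulation),
    rather than activating them all at once, which reads as a muddled blip."""
    for electrode in path:
        stimulate(electrode)
        time.sleep(dwell_s)

trace_letter(LETTER_N_PATH)
# At ~50 ms per point, a 7-point letter takes ~0.35 s, in line with
# recognition rates of better than one letter per second.
```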

It won’t replace audiobooks or Braille in its current form, but this is a new avenue for prosthetic vision and perhaps a more realistic place to start than representing the “blooming, buzzing confusion” of the world.

Dog-robot and robo-dog interactions

After millennia of co-existence, dogs and humans make a great team. But does the canine eagerness to please people extend to robots that just look like people? This Yale study took dozens of dogs and put them in a situation where a Nao robot — humanoid in shape but definitely different — interacted normally with real humans, gave the test dog a treat, then told it to sit. A control group had the command come from a speaker.

Turns out dogs are way more likely to respond to the commands of a robot than those of a speaker, although, as expected, they were generally a bit perplexed by the whole thing. You can watch the Very Good Video below:

This has a bearing on future pet care products for sure. Unlike a Roomba, which can take any form as long as it vacuums well, a pet care robot (you know they’re coming) may perform much better in anthropomorphic form.

In another piece of work totally unrelated except that it involves four legs, engineers at NASA’s Johnson Space Center and Georgia Tech put together a weird but effective new form of locomotion for wheeled robots, in particular those that need to navigate tricky or slippery terrain.

Instead of reinventing the wheel — literally, since that’s what NASA has had to do for Mars rovers — they combined the capability of rolling with that of walking, with wheels serving as feet in a shuffling quadrupedal form of locomotion that works even on sandy slopes:

This type of switchable movement could be really helpful for rovers on Mars, where the dust is notoriously difficult to manage. It also could have implications for autonomous robots here on Earth that must navigate more prosaic, but no less difficult, terrain, like gravel walkways and stairs.

A light touch

The sensitivity of the human fingertip is a truly remarkable thing, making it a tool for all occasions — though we end up just banging these miraculous instruments on keyboards all day. Robotic sensory faculties have a long way to go before they match up, but two new approaches using light may indicate a way forward.

One is OmniTact, an evolution of existing light-based touch-detection devices. These, devised years ago, are essentially miniature cameras that watch the inside of a flexible illuminated surface, observing deformations and the pressures and movements they imply. Earlier versions were limited in the size and area they could cover, but OmniTact uses multiple cameras and LEDs to cover a much larger space, about the size of a thumb. The slightest shift of the surface produces a pattern the cameras pick up on and track.

ETH Zurich in Switzerland has a similar approach, using a film filled with microbeads whose positions are tracked by a small camera inside the sensor.

There are several advantages to these techniques. The touch-sensitive surface is just plain silicone or another inert, durable material, meaning it can be deployed in all kinds of situations. And the nature of the force tracking means these sensors can detect not just point pressure on the surface, but sideways shear forces as well. They’re also fairly low-cost, using small but far-from-exotic cameras and electronics. This approach is definitely a contender in the fast-evolving domain of robotic sensation.
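For a sense of how the sensing side can work, here is a rough sketch using OpenCV’s dense optical flow as a stand-in. This is an assumption about the general approach, not OmniTact’s or ETH Zurich’s published pipeline: track how the illuminated inner surface moves between camera frames, then read a pressure proxy from the deformation magnitude and a shear proxy from its net direction.

```python
import cv2
import numpy as np

def contact_from_frames(prev_gray, curr_gray):
    """Estimate contact from two consecutive grayscale frames taken by the
    camera inside the sensor's illuminated skin. Dense optical flow gives a
    per-pixel displacement field; its magnitude hints at pressure, its mean
    direction at shear. A stand-in pipeline, not the published method."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)
    pressure_proxy = float(magnitude.mean())          # overall deformation
    shear_proxy = flow.reshape(-1, 2).mean(axis=0)    # net sideways drag (x, y)
    return pressure_proxy, shear_proxy

# Usage: feed consecutive grayscale frames captured from the sensor's
# internal camera; rising pressure_proxy suggests a press, a consistent
# shear_proxy direction suggests the contacting object is sliding.
```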

Video games good, social media bad

A few years ago one might have expected to hear about the deleterious effects of gaming and the great promise of social networks. Well, as usual, things are never quite how we expect.

Blue represents vaccine-positive groups like the Gates Foundation; red, small anti-vax groups; green, undecided groups, which cluster more tightly around the anti-vax ones.

The latest ill effect from social networks turns out to be a set of network effects that amplify the spread of anti-vaccination groups and views. Based on sophisticated models of interactions, groupings and other data from months of Facebook use, researchers found that groups espousing good science were larger but fewer, and showed little growth — while anti-vaccination groups were small but numerous, multiplying and growing like crazy. If current trends continue, the authors warn in their paper published in Nature, anti-vaccination views could soon dominate, with serious detriment to world health.

Lastly, a palate cleanser. An interesting study tested seniors aged 80-97 on their working memory and a few other factors before and after taking part in regular sessions of the online multiplayer action game Star Wars Battlefront (!). After three weeks of regular 30-minute sessions, the game-playing group showed significant improvements to visual attention, task-switching and working memory compared with a control group. Just imagine folks in their 90s playing Battlefront and then being sharper than ever. I hope LAN parties become a regular pastime at retirement communities.
