Deep Science: Dog detectors, Mars mappers and AI-scrambling sweaters

Dogs are detected and their positions estimated by a computer.
Image Credits: Microsoft Research

Research papers come out at far too rapid a rate for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.

This week in Deep Science spans the stars all the way down to human anatomy, with research concerning exoplanets and Mars exploration, as well as understanding the subtlest habits and most hidden parts of the body.

Let’s proceed in order of distance from Earth. First is the confirmation of 50 new exoplanets by researchers at the University of Warwick. It’s important to distinguish this process from discovering exoplanets among the huge volumes of data collected by various satellites. These planets had been flagged as candidates, but no one had yet determined whether the data was conclusive. The team built on previous work that ranked planet candidates from least to most likely, creating a machine learning agent that could make precise statistical assessments and say with conviction, “Here is a planet.”
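
For the curious, here is a minimal sketch of the general idea of probabilistic validation, using scikit-learn’s GaussianProcessClassifier to attach a confirmation probability to each candidate. This is not the Warwick team’s actual pipeline; the features, labels and 99% cutoff are all illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)

# Hypothetical per-candidate features (e.g. transit depth, duration, SNR)
# and labels for previously vetted candidates; the data here is synthetic.
X_known = rng.normal(size=(200, 3))
y_known = (X_known[:, 2] > 0).astype(int)   # 1 = turned out to be a planet
X_candidates = rng.normal(size=(50, 3))     # new, unvetted candidates

clf = GaussianProcessClassifier().fit(X_known, y_known)
p_planet = clf.predict_proba(X_candidates)[:, 1]

# Count a candidate as statistically confirmed only past a strict cutoff,
# here a 99% posterior probability (the cutoff is illustrative).
confirmed = p_planet > 0.99
print(f"{confirmed.sum()} of {len(X_candidates)} candidates clear the bar")
```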

“A prime example when the additional computational complexity of probabilistic methods pays off significantly,” said the university’s Theo Damoulas. It’s an excellent example of a field where marquee announcements, like the Google-powered discovery of Kepler-90i, represent only the earliest results rather than a final destination, emphasizing the need for further study.

In our own solar system, we are getting to know our neighbor Mars quite well, though even the Perseverance rover, currently hurtling through the void in the direction of the red planet, is, like its predecessors, a very resource-limited platform. With a small power budget and years-old radiation-hardened CPUs, there’s only so much in the way of image analysis and other AI-type work it can do locally. But scientists are preparing for when a new generation of more powerful, efficient chips makes it to Mars.

Automatically labeled landscape imagery from Mars.
Image Credits: JPL

Automatically classifying terrain, autonomously identifying and navigating to objects of interest, and local hosting and processing of scientific data are all on the table, as proposed by the Machine Learning-based Analytics for Autonomous Rover Systems (MAARS) program. Though the capabilities of a future rover may be orders of magnitude greater than what we have headed there now, efficiency and reliability will always be paramount — it’s the ultimate in edge deployment. You can even help train a Mars-bound navigation algorithm right now.
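
To give a flavor of the terrain classification piece, here is a toy example of the sort of compact convolutional network that could label terrain patches on power-constrained hardware. This is not the MAARS software; the class names, architecture and sizes are all made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical terrain labels; a real taxonomy would come from mission data.
TERRAIN_CLASSES = ["sand", "rock", "bedrock", "ridge"]

# A deliberately tiny network: two conv layers and a linear head, the kind of
# footprint a power-limited rover CPU could plausibly handle.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(TERRAIN_CLASSES)),
)

patch = torch.rand(1, 1, 64, 64)          # one 64x64 grayscale terrain patch
predicted = model(patch).argmax(dim=1)    # index of the most likely class
print(TERRAIN_CLASSES[predicted.item()])
```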

In orbit, the proliferation of communications satellites in constellations like SpaceX’s Starlink is leading to much worry on the part of astronomers, whose Earth-based telescopes must look past those pesky objects to observe the sky. A recent multiorganization study simulating a satellite-filled future night sky concludes that it will “fundamentally change astronomical observing,” and that “no combination of mitigations can fully avoid the impacts.”

Among the recommendations, software to “identify, model, subtract and mask satellite trails in images on the basis of user-supplied parameters” is foremost for observatories. This kind of task is highly suitable for ML agents, as we’ve seen in other digital media manipulation tools. I would be astonished if there were fewer than a dozen concurrent projects in private and public R&D addressing this need, as it will be a persistent part of all astronomical observation going forward.
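
As a rough illustration of what such a tool might look like (my sketch, not anything proposed in the report), a Hough transform can pick out the long, straight streaks satellites leave in an exposure and mask them out; OpenCV and all the parameters here are my assumptions:

```python
import cv2
import numpy as np

# "night_sky.png" is a hypothetical long-exposure frame.
image = cv2.imread("night_sky.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 50, 150)

# Satellite trails show up as long, nearly straight bright segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=200, maxLineGap=10)

mask = np.zeros_like(image)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(mask, (x1, y1), (x2, y2), color=255, thickness=7)

# Masked pixels can then be excluded from photometry or inpainted away.
cleaned = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```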

Starlink satellites streak through a telescope’s observations. Image Credits: IAU

One more space-surface interaction we need to be aware of: It turns out quantum computers may be extremely sensitive to natural radiation, including the minute amounts emitted by metals in soil and, of course, those rascally cosmic rays. Just one more thing to isolate those fragile qubits from.

Another quick note for those of us here in the atmosphere: Berkeley National Lab tested a handful of consumer-grade air quality monitors to see if they actually do what they’re supposed to. Surprisingly, they do, but consistently overestimate the level of particulates in the air by as much as 2.4 times. That makes sense from a liability point of view — better to overreport than under.
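
That finding implies a simple worst-case adjustment: divide a reading by the overestimation factor. This is back-of-the-envelope arithmetic using the study’s upper bound, not a per-device calibration:

```python
# µg/m^3, a hypothetical PM2.5 reading from a consumer monitor
reported_pm25 = 36.0
overestimate_factor = 2.4   # the study's worst-case overreporting factor
corrected_pm25 = reported_pm25 / overestimate_factor
print(f"corrected estimate: {corrected_pm25:.1f} µg/m^3")   # 15.0
```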

The study suggests that a network of these cheaper sensors, while their readings are not to be relied on at face value, could prove an invaluable resource for tracking air quality trends across scales. If these devices are not already contributing to environmental and climate research, they should be, and probably will be soon. But like other IoT devices, they’ll face privacy questions. The benefits of establishing clear rules and permissions for this kind of thing are becoming clearer by the day.

A robot in the forest scans the environment while a soldier watches using a head mounted display.
Image Credits: U.S. Army

On the ground, the U.S. Army Research Lab has come up with an interesting way to promote a sort of symbiosis between humans and robots, each limited in their own way. A robot buddy traveling alongside a human can scan the environment more quickly and thoroughly than a person, but lacks the ability to tell whether changes it observes around it are important. The ARL and UC San Diego put together a system that watches for discrepancies in what its lidar systems detect, such as movement or a new or absent object, and highlights them in a heads-up display worn by a human. It skips the whole problem of “understanding” what’s happening by passing that on to a human, while leveraging the robot’s superiority in superficial sensing. This paradigm could be a very helpful one — and a relief to those rightly worried that robots aren’t really smart enough to make judgments like this.
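
As a sketch of the underlying idea (my simplification, not the ARL and UC San Diego system), change detection between two lidar scans can be as simple as flagging points in the new scan that have no nearby neighbor in the reference scan:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
# Earlier scan of the area: 5,000 points in a 10 m cube (synthetic stand-in).
reference_scan = rng.uniform(0, 10, size=(5000, 3))
# New scan: the same points with sensor noise, plus one object that wasn't there.
new_scan = np.vstack([
    reference_scan + rng.normal(0, 0.02, reference_scan.shape),
    [[5.0, 5.0, 15.0]],
])

tree = cKDTree(reference_scan)
dist, _ = tree.query(new_scan)      # distance to nearest reference point
changed = new_scan[dist > 0.2]      # the 0.2 m threshold is illustrative
print(f"{len(changed)} point(s) flagged for the human operator")
```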

Lastly, some news from inside the body, made unfortunately timely by the tragic passing of Chadwick Boseman. Colorectal cancer is deadly and difficult to detect early, and one shortcoming of colonoscopies is that it is difficult to say with certainty that the doctor has inspected every square inch of the tract. Google’s Colonoscopy Coverage Deficiency via Depth algorithm observes video of the procedure and builds a 3D map of the colon, noting as it goes which parts of it weren’t adequately visible to the camera.

A simulated colon, analysis of the image and resulting map of the virtual colon. Image Credits: Google

Ideally the procedure would capture enough to be sure, but this could be a helpful tool to protect against malpractice or simply improve efficiency so that a second procedure isn’t necessary. Colorectal cancer is a major risk, especially for men, and Black men in particular, who tend to develop it earlier and more often. More and better tools may help detect it earlier and save thousands of lives.

ECCV

The European Conference on Computer Vision took place in late August, and as always a lot of interesting papers came out of it.

Facebook has a fun new database and tool called ContactPose, a collection of grips of everyday objects by a variety of people either “using” or “handing off” whatever it is. Gripping an object in an intelligent way is a remarkably difficult problem, and the best source for how to do it is human technique.

3D render of a banana and the way it was held by human hands.
Image Credits: Facebook/Georgia Tech

ContactPose provides joint and hand poses for things like game controllers, cameras, bananas and sunglasses, showing contact heat maps and other information useful to a computer trying to figure out how to hold something. You can play around with it here.

The company is also, predictably, concerned that tools used to identify and analyze individuals in photos may be disrupted somehow. We’ve seen studies that showed how taking advantage of a machine learning model’s biases can cause a turtle to be classified as a gun, and so on, but a harder problem is tricking the AI into thinking that there’s nothing instead of something.

Image of a man wearing a sweatshirt with a pattern that confuses the computer vision system.
Image Credits: Facebook

This paper shows that it is indeed possible to engineer patterns that, worn or otherwise shown to a computer vision system, seem to confound it and make it think that the wearer is not a person but part of the background. The resulting clothing isn’t exactly haute couture but more attractive scrambler patterns are probably on the way.
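
For a sense of how such patterns are found in general, here is a rough sketch of the generic adversarial-patch recipe (not Facebook’s specific method): use gradient descent to drive down a pretrained detector’s confidence that a person is present. The detector choice, patch placement and hyperparameters below are all my own assumptions:

```python
import torch
import torchvision

# A pretrained off-the-shelf detector stands in for the targeted model.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

image = torch.rand(3, 480, 640)                       # stand-in for a photo of a person
patch = torch.rand(3, 100, 100, requires_grad=True)   # the pattern being learned
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    patched = image.clone()
    patched[:, 190:290, 270:370] = patch.clamp(0, 1)  # "wear" the patch
    detections = detector([patched])[0]
    person_scores = detections["scores"][detections["labels"] == 1]  # COCO label 1 = person
    if person_scores.numel() == 0:
        break                              # the detector no longer sees a person
    loss = person_scores.sum()             # push person confidence toward zero
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```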

Microsoft is addressing an old favorite: estimating the positions of multiple people on camera. This kind of research goes back to the Kinect, and while it never really took off in gaming, it has proven useful in countless other ways.

A computer estimates the body positions of several people on camera.
Image Credits: Microsoft Research

This paper looks at a new way of identifying and analyzing the body positions of multiple people from the perspectives of multiple cameras simultaneously. It can be tough to figure that out from one 2D image, but with two or three of them it becomes a solvable problem — just a computationally complex one. But they’re confident in this approach, which will eventually be documented at this GitHub page.
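
Why do extra cameras make it tractable? A joint seen at known pixel coordinates in two calibrated views pins down a 3D point by linear least squares, the classic direct linear transform (DLT). Here is a minimal sketch of that triangulation step; it illustrates the underlying geometry, not Microsoft’s specific algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # least-squares solution is the null vector
    X = Vt[-1]
    return X[:3] / X[3]             # homogeneous to 3D coordinates

# Two toy cameras one unit apart along x, both looking down the z-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, x1=(0.5, 0.25), x2=(0.25, 0.25))
print(point)   # recovers the 3D joint position, here (2, 1, 4)
```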

Just because a problem is a little wacky doesn’t mean it isn’t worth solving. Such is the case with this paper describing “end-to-end dog reconstruction from monocular images.” Maybe the utility of being able to tell the exact shape and position of a dog from a single image isn’t obvious to you. Or me. Indeed, perhaps there is no “utility” as the concept is commonly understood.

3D models of dogs with various changes to tail, ear, and body size.
Just a few of the many shapes dogs can, and should, take. Image Credits: Microsoft Research

But think about it this way: Humans can recognize dogs instantly no matter how they’ve folded their furry bodies, or whether they’re a small dog with long floppy ears or a big one with pointy triangular ears. If computer vision systems are to meet or exceed the capabilities of humans, shouldn’t they at least be able to do that?

Seriously: Being able to identify an object (in this case an animal) despite that object having numerous unpredictable variations is a powerful and fundamental vision task, one we do every day almost automatically. Pursuing it as an abstract goal is an important line of inquiry and while “reconstructing a 3D dog mesh” won’t save any lives, it’s important basic research that happens to involve a lot of very good girls and boys.

Google’s ECCV spread had fewer obvious standouts, though this paper points to a feature I would appreciate and have secretly wished for from Maps: live shadows. Or not live, exactly, but reasonably accurate predictions. Using multiple images taken of the same location at street level, the team can create a good model of how the sun and other light sources affect the scene, allowing them to relight it arbitrarily for different sun positions or sky conditions.

An image of an intersection has its lighting artificially adjusted.
Image Credits: Google

If this doesn’t end up in Google Maps within a year or two I’ll be very surprised. Having Street View reflect current weather patterns, or being able to tell whether a cafe is in the sun or shade at a given time on a given day is a hugely useful feature and the kind of wizardry the company loves to pack into one of the few products where it is truly still a leader. (Here’s a longer video on how it works.)

Another area where it excels is computational photography, and a lot of its ECCV papers are the kind of thing that leads to products down the line there as well. Pose estimation, detection of objects and actions in videos, accelerating lidar analysis, that sort of thing. Anyone with a competing product could probably make a lot of informed speculation about the company’s roadmap, but since few of these papers are of general interest, I’ll leave that to them.
