
Put down your phone if you want to innovate


Image Credits: Fuse / Getty Images

We are living in an interstitial period. In the early 1980s we entered an era of desktop computing that culminated in the dot-com crash — a financial bubble that we bolstered with Y2K consulting fees and hardware expenditures alongside irrational exuberance over Pets.com. That last interstitial era, an era during which computers got smaller, weirder, thinner and more powerful, ushered us, after a long period of boredom, into the mobile era in which we now exist. If you want to help innovate in the next decade, it’s time to admit that phones, like desktop PCs before them, are a dead-end.

We create and then brush up against the edges of our creation every decade. The speed at which we improve — but not innovate — is increasing, and so the difference between a 2007 iPhone and a modern Pixel 3 is incredible. But what can the Pixel do that the original iPhone or Android phones can’t? Not much.

We are limited by the use cases afforded by our current technology. In 1903, a bike was a bike and could not fly. Not until the Wright Brothers and others turned forward mechanical motion into lift were we able to lift off. In 2019 a phone is a phone and cannot truly interact with us as long as it remains separate from our bodies. Not until someone looks beyond these limitations will we be able to take flight.

While I won’t speculate on the future of mobile tech, I will note that until we put our phones away and look at the world anew, we will do nothing of note. We can take better photos and FaceTime each other, but until we see the limitations of these technologies we will be unable to see a world outside of them.

We’re heading into a new year (and a new CES) and we can expect more of the same. It is safe and comfortable to remain in the screen-hand-eye nexus, creating VR devices that are essentially phones slapped to our faces and big computers that now masquerade as TVs. What, however, is the next step? Where do these devices go? How do they change? How do user interfaces compress and morph? Until we actively think about this we will remain stuck.

Perhaps you already are. You’d better hurry. If this period ends as swiftly and decisively as the ones before it, the opportunity available will be limited at best. Why hasn’t VR taken off? Because it is still on the fringes, being explored by people stuck in mobile thinking. Why are machine learning and AI moving so slowly? Because the use cases are aimed at chatbots and better customer interaction. Until we start looking beyond the black mirror (see what I did?) of our phones, innovation will fail.

Every app launched, every picture scrolled, every tap, every hunched-over moment davening to some dumb Facebook improvement is a brick in the bulwark against an unexpected and better future. So put your phone down this year and build something. Soon it might be too late.
