
When Robots Come To Pray


Image Credits: ktsdesign / Shutterstock (Image has been modified)

Sean Lorenz

Contributor

Sean Lorenz is the founder of Senter, a startup that seeks to improve chronic care with Internet of Things and deep learning in the home.

A developer colleague of mine recently went on and on about Google Photos. He knew my background in computational neuroscience and thought I would be interested in what Google was doing with deep learning. That night I moved all my iPhone photos from an external hard drive to The Magical Cloud, then forgot about it for a week. Like every other tired Boston subway passenger, I checked my phone religiously, and one day I opened the app to find images of my wife, kids and friends as separate photo clusters.

Well done, Google. Later in the day I brought up a certain wine I liked in conversation but couldn’t remember the name. I did, however, take a photo of the label and typed “wine” into the Google Photos app search for shits and giggles. Of course it found the photo of my wine — and that’s the moment I began to realize just how powerful Google’s technology is becoming.

The more jaded of you out there might say, “It classified items in some pictures. Big deal.” Well, my jaded friend, it is a big deal. Figure-ground segregation, i.e., the ability to discriminate an object in the foreground from what’s behind it, is something computer vision researchers have been working on for decades.

Today we can throw massive amounts of images into a deep learning algorithm and fairly accurately pick out a cow from the field in which it’s grazing. The thing is, deep learning has actually been around as backpropagation (with some recently added tricks by machine learning godfather, Geoffrey Hinton) since the days of Cabbage Patch Kids and Bruce Willis singing R&B.

Now that we have a combination of massive compute power and obscene amounts of data, thanks to tech titans like Google and Amazon, deep learning algorithms keep getting better, prompting the likes of Elon Musk and Stephen Hawking to speak up about the possible future dangers of artificial intelligence.

A few words of warranted caution from intelligent minds are often translated as “SkyNet is coming!!!” in the general press. Can you blame them? Just about every movie with robots and artificial intelligence involves some sort of dystopian future requiring Schwarzeneggerian brute force to overcome our future overlords.

Despite being called “neural networks,” deep learning in its current form is not even close to how biological brains process information. Yes, vaguely speaking we process an input (touch, taste, smell) and multiply that by a weight (a synapse somewhere in the brain) to send an output (move my hand). But that’s where the similarity ends.
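
To make the analogy concrete, here is a minimal sketch (in Python, with made-up inputs and weights) of the input-times-weight-to-output arithmetic an artificial neuron performs: a weighted sum and a simple threshold, which is exactly why the comparison to biological brains breaks down so quickly.

```python
import numpy as np

# Minimal sketch of the "input x weight -> output" analogy above.
# The inputs, weights and bias are made-up values for illustration only.
def artificial_neuron(inputs, weights, bias):
    # Weighted sum, loosely analogous to summing across synapses
    total = np.dot(inputs, weights) + bias
    # Simple nonlinearity: "fire" only if the sum crosses zero
    return max(0.0, total)

senses = np.array([0.9, 0.1, 0.3])            # e.g. touch, taste, smell
synaptic_weights = np.array([0.5, -0.2, 0.8])
print(artificial_neuron(senses, synaptic_weights, bias=-0.1))  # the "move my hand" signal
```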

Remember our figure-ground example? The brain doesn’t require knowledge of all existing priors to solve the problem. Infants are born with twice the number of neurons required to figure out what is important in the world around them. In the visual system, babies wire their wee brains by learning basic things like line orientation, depth perception and motion. They then use subtle eye movements, called saccades, to assess what’s happening in a scene, combining that information with what they’ve learned about shapes and depth to know where a coffee cup ends and the table begins.

Companies like Neurala and Brain Corp. are forgoing the typical flavors of deep learning to build adaptive biological models that help robots learn about their environment. In other words, a camera lens could act as an eye, sending signals to AWS to replicate the human retina, thalamus and primary visual cortex, up through the middle temporal and inferior temporal cortices, for a higher-level understanding of “cup” or “table.”
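
Neurala’s and Brain Corp.’s actual models aren’t spelled out here, but the staged hierarchy described above can be sketched as a toy pipeline. Everything in this snippet (the stage functions, the two labels, the fake camera frame) is a hypothetical placeholder; a real system would run learned, adaptive models at each stage rather than hand-written filters.

```python
import numpy as np

# Toy sketch of the retina -> V1 -> MT/IT hierarchy described above.
# Stage functions and labels are hypothetical placeholders, not any
# company's actual models.
def retina(frame):
    # Very rough contrast normalization
    return (frame - frame.mean()) / (frame.std() + 1e-6)

def v1(image):
    # Crude edge responses via finite differences
    return np.abs(np.diff(image, axis=0))[:, :-1] + np.abs(np.diff(image, axis=1))[:-1, :]

def mt_it(features):
    # Stand-in for the higher-level "what is it?" decision
    return "cup" if features.mean() > 0.8 else "table"

camera_frame = np.random.rand(64, 64)   # pretend this came from the robot's camera
print(mt_it(v1(retina(camera_frame))))
```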

Biologically inspired neural models require massively parallel computation and an understanding of how cortical and subcortical regions work together to elicit what we call consciousness. The real cause for concern should come when tech giants discover the limitations of their current deep learning models and turn to neuroscientists to code functions like detecting your wife’s face, driving around potholes or feeling empathy for someone who lost a loved one.

This is when things get interesting. This is when multisensory integration, cognitive control and neural synchrony combine to give rise to something new — qualitative experiences (or qualia) in non-biological systems. This is when embodied machines learn from their experiences in a physical world. The Internet of Things (IoT) is the precursor to this. Right now, IoT devices are mostly dumb telemetry devices connected to the Internet or other machines, but people are already starting to apply neural models to sensor data.
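
As a hedged sketch of what “applying a neural model to sensor data” might look like in practice, here is a tiny feed-forward network scoring a window of temperature readings as normal or anomalous. The sensor values, window size and random weights are all assumptions for illustration; a real deployment would train the weights on labeled telemetry history.

```python
import numpy as np

# Tiny feed-forward network scoring a window of (hypothetical) sensor readings.
# Weights are random stand-ins; a real system would learn them from telemetry history.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # 8 readings per window
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def anomaly_score(window):
    hidden = np.tanh(window @ W1 + b1)            # hidden layer
    logit = (hidden @ W2 + b2)[0]
    return 1.0 / (1.0 + np.exp(-logit))           # sigmoid score in [0, 1]

readings = np.array([21.0, 21.2, 20.9, 21.1, 35.7, 21.0, 21.3, 21.1])  # one spike
print(anomaly_score((readings - readings.mean()) / readings.std()))
```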

What we learn from processing sensors on IoT products will soon carry over to robots with touch, vestibular, heat, vision and other sensors. Just like humans, robots with bio-inspired brains will make mistakes, motor babbling their way along while constantly updating information from their sensors to learn deeper and deeper associations with the world around them.

There’s a famous philosophy-of-mind thought experiment called Mary’s Room, in which a scientist named Mary has spent her entire life in a black-and-white room but has read everything there is to know about color theory. One day Mary is allowed to leave the room and sees a bright red apple. Everything she read about the color red could not prepare her for the conscious experience of “redness” in that moment. Can robots have an experience of redness like Mary did? Or is it all just vapid linear number crunching?

I believe the only way for robots to become truly conscious and experience “redness” would be for them to be embodied. Simulations won’t do. Why? Because it is the physical, electrical synchrony of all those different brain regions working together at the same time that elicits an “OH MY GLOB” moment of a novel, pleasurable stimulus experience. If you’re interested in the details on the physical dependencies for robot consciousness, check out my post here.

So now we are living with conscious robots. Crazy. What does a mixed society of reasoning, empathetic non-biological machines and human beings look like? And, finally, getting to the topic at hand — what happens when a robot wants to join our church, synagogue or temple? While some critics see religion as a nefarious byproduct of human evolution, a majority of scholars believe it serves evolutionarily advantageous purposes.

For example, Jewish tradition has numerous food and body restrictions centered on cleanliness. Avoiding “unclean” eating habits, or the practice of circumcision, likely increased the Jewish population’s evolutionary fitness in a time before hand sanitizer. There are, of course, other social and group-dynamic benefits as well. All this is to say: if we are able to replicate human brain function in a synthetic brain, there’s a good chance something like religious and spiritual sentiments could arise in robots.

As a practicing Christian, I find this possibility gives me a bit of a chill. Throughout Judeo-Christian history, humans are told that we are built in the image of God — the Imago Dei — but now there may be a robot that tells us it had a spiritual experience while worshipping in a church service on Sunday. Did it really? Was that a truly conscious experience? And is the soul separate from our conscious life or not? If robots are conscious, does that mean they have souls, or is that something different? I hope this is making both atheists and believers alike squirm.

I have no idea what the difference between the soul and consciousness might be. This gets at the very heart of who we are as humans, and whether or not some piece of us physically lives on after we die. Are there higher dimensions that house our soul, then send down insights via consciousness to our four-dimensional world? Or is this all we get?

As someone who, for better or worse, holds to a faith in something larger than myself, I truly want to believe the former. Either way, there is likely going to be a time when we have to address both scenarios as machines adapt and become more like us.

Amen.
