
Defining our relationship with early AI

Image Credits: Bryce Durbin / TechCrunch

Andrew Heikkila, Contributor

Andrew Heikkila is a tech enthusiast and writer from Boise, Idaho.


“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears…in…rain. Time to die.” — Roy Batty, Blade Runner

Artificial intelligence has fascinated mankind for more than half a century; the first public mention of computer intelligence is recorded from a London lecture by Alan Turing in 1947. More recently, headlines have increasingly pointed to the growing power of AI, whether it’s AlphaGo’s defeat of legendary Go player Lee Se-dol, Microsoft’s racist AI bot named Tay or any number of other developments in the machine learning field. Once a plot device for science-fiction tales, AI is becoming real — and human beings are going to have to define their relationship with it sooner rather than later.

Peter Diamandis, co-founder and vice-chairman at Human Longevity, Inc., touches on that relationship in a post he authored on LinkedIn, titled “The next sexual revolution will be digitized.” Diamandis points to recent reports showing that the Japanese are increasingly abandoning sex and relationships, while a growing subset of men report that they prefer virtual girlfriends to real ones.

“This is only the beginning,” he said. “As virtual reality (VR) becomes more widespread, one major application will inevitably be VR porn. It will be much more intense, vivid, and addictive — and as AI comes online, I believe there will be a proliferation in AI-powered avatar and robotic relationships, similar to those characters depicted in the movies Her and Ex Machina.”

Our budding relationship with AI

Let’s back up a minute. Did Diamandis really say that he thinks people will begin to form relationships with AI robots? It’s not that hard to believe, given the example of men who prefer virtual girlfriends to real ones — but how close are we to actually creating an avatar that loves you back?

To answer this, first we have to understand what AI actually is, and what it has come to represent to the public. There are two basic types of AI: strong AI, and applied or “weak” AI. (Technically, cognitive simulation, or “CS,” is a third type, but we’ll focus on the first two here.)

Strong AI is a work-in-progress, but the ultimate goal is to build a machine with intellectual ability indistinguishable from that of a normal human being. Joseph Weizenbaum of the MIT AI Laboratory has described the ultimate aim of strong AI as “nothing less than to build a machine on the model of man, a robot that is to have its childhood, to learn language as a child does, to gain its knowledge of the world by sensing the world through its own organs, and ultimately to contemplate the whole domain of human thought.”

Strong AI is also the type of AI that you hear about in the movies — the Skynet program that rebels against its human creators in the Terminator movies, or HAL 9000 of 2001: A Space Odyssey. It is predicted that if or when this kind of superhuman intelligence is made possible and brought online, we will have triggered the singularity. That moment is years away, if it is achievable at all — many doubt we will ever reach it.

Weak/applied AI, on the other hand, is the type of thing you read about in the headlines. Anything with the adjective “smart” slapped onto it is generally relying on weak AI of some kind — any artificial form of intelligence that may “learn” and even figure out ways to write its own code, but that is limited in function to very few tasks.

The programs that drive smart cars, the chatbots that guide us through customer service and even the previously mentioned AlphaGo are all examples of weak or applied AI. These systems exist within the boundaries of a “micro-world” that the AI recognizes, and may even be so advanced as to be considered “expert systems.” They have no “common sense,” no understanding of how their recommendations fit into a larger context or into the world beyond their micro-world. They are essentially complex input/output systems that specialize in one area, easily distinguishable from human intelligence by these deficiencies.
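To make the “micro-world” idea concrete, here is a minimal sketch of such an input/output system, written in Python with an invented rule base (none of this comes from any real smart-car product): within its narrow domain, the program maps recognized inputs to outputs, and anything outside that domain simply does not exist for it.

```python
# A toy "expert system" for a single micro-world: recommending tire pressure.
# The rules, thresholds and facts here are all illustrative assumptions.

RULES = [
    # (condition on the recognized facts, recommendation)
    (lambda f: f["load"] == "heavy" and f["weather"] == "cold", "inflate to 38 psi"),
    (lambda f: f["load"] == "heavy",                            "inflate to 36 psi"),
    (lambda f: f["weather"] == "cold",                          "inflate to 34 psi"),
    (lambda f: True,                                            "inflate to 32 psi"),
]

def recommend(facts: dict) -> str:
    """Map recognized inputs to an output. Facts outside the micro-world
    (say, whether the driver is upset) are simply invisible to the system."""
    for condition, advice in RULES:
        if condition(facts):
            return advice

print(recommend({"load": "heavy", "weather": "cold"}))  # inflate to 38 psi
print(recommend({"load": "light", "weather": "warm"}))  # inflate to 32 psi
```

However large the rule base grows, the system has no idea what a tire is; it only shuffles symbols it was built to recognize.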

The focus on interface

The focus on a human-like input/output system is what seems to capture much of society’s attention when it comes to AI. There is no better measure of the illusion of human intelligence than the ability to converse with a human as a human would. This is evident in how much emphasis we have always put on the Turing test as a way to determine whether something is artificially intelligent. If the program can interface with one human and pass as another, bam: the average user would call that “AI.”

If it doesn’t pass the Turing test, even if it’s close, we know that what’s on the other side of that screen is fake, and the genuine nature of the conversation is lost. Yet, even if we do know that we’re talking to AI, and that AI is able to navigate a conversation deftly, we’re often amazed by how human the interaction can feel — so much so, that we can suspend our disbelief and forget that we’re talking to a machine altogether.

Unfortunately, even within whatever micro-world it’s designed to serve, AI often doesn’t pass muster in conversation. Chatbots are where we see this in action almost daily. After Microsoft and Facebook announced chatbot offerings earlier this year, many companies began turning to the technology to help improve customer engagement — but even Salesforce, whose customer service and support outranked all others in the TA CRM Market Index last year, points out that chatbots simply are not yet where they need to be. The only way, it seems, to solve the problem of the inefficient chatbot is to make these systems act more… well, human.

Just how human should we make weak AI?

So here’s where all of this culminates. Chatbots and the interfacing side of AI aren’t going anywhere. Look at Siri or Cortana, for example. These are, technically, chatbots that double as virtual assistants — and they’re only going to become more advanced as time goes on. As is, these and other chatbots don’t pass the Turing test — and even if they somehow did, it could be argued that these machines still are not “intelligent” or “sentient” in any way, because they possess no understanding of the actual conversation being had. Like ELIZA (built at MIT) and PARRY (built at Stanford), two early conversation programs, they rely on pre-programmed, canned responses to simulate conversation. To philosopher Ned Block, this makes such systems “no more intelligent than a juke box.”
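ELIZA’s trick is easy to demonstrate. The sketch below is a loose Python reconstruction of the technique, not Weizenbaum’s original code: it matches the input against a few canned patterns, fills in a response template and reflects pronouns so the reply sounds attentive.

```python
import re

# Reflect first/second person so "I need my space" becomes "you need your space".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# A handful of canned (pattern, response template) pairs, in the spirit of ELIZA.
PATTERNS = [
    (r"i need (.*)",     "Why do you need {0}?"),
    (r"i feel (.*)",     "Why do you feel {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)",            "Please, go on."),  # catch-all keeps the illusion alive
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, template in PATTERNS:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need my space"))  # Why do you need your space?
```

There is no state and no understanding here; the jukebox simply has verbally shaped buttons.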

Nevertheless, at a certain point we have to ask ourselves just how human we are willing to make weak AI appear. If users don’t understand the difference between weak and strong AI, what psychological effects could deploying an indistinguishable-from-human chatbot have?

Joseph Rauch, a writer for TalkSpace, a company that provides online messaging therapy, speaks to the need to verify humanness in his line of work.

“We frequently hear from potential clients who want to be sure they are chatting with a therapist, not a chatbot,” he writes. “All of our therapists are licensed, flesh and blood humans, but we understand the concern. Whether it’s online therapy, social media or online dating, everyone deserves to chat with the humans they believe they are connecting with.”

He mentions online dating, where chatbots have already been known to trick people into joining affiliate sites, or exist simply to make the male-to-female ratio seem less lopsided. But what if these chatbots were used in business? Going back to the CRM example, a group called Legion Analytics is trying to sell its lead-generation bot, Kylie, which understands small talk, will bring up topics mentioned earlier in the relationship (such as a child’s soccer game) and has even been flirted with by a prospect.

If bots like these become advanced enough, might people feel manipulated, or even violated, by machines that seem to know them better than they know themselves, especially if these bots really can sell products better than the average human? That’s obviously a long way off, but a chatbot well-versed in conversation and hooked up to a data warehouse holding a complete psychological profile of you, the customer, might be able to close a sale with persuasive tactics that a human salesperson simply couldn’t tap into.

Teaching bots emotion

Of course, the way to really personify weak AI would be to teach it emotion — or at least teach it to emulate emotion — which Koko, a company co-founded by Fraser Kelton, purports to do. In an article in Fast Company, Kelton speaks to the need for a more human feel in chatbots: “We’re working toward providing empathy as a service to any voice or messaging platform,” he says. “We think that’s a critical user experience for a world in which you’re conversing with computers.” The article likens licensing an empathy API from Koko, which could be connected to virtually any chatbot, to sticking a heart into a robot.
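What “empathy as a service” might look like from the integrator’s side is easy to imagine. The sketch below is purely hypothetical; it does not reflect Koko’s actual API, and every name in it is invented. The idea is a thin layer that scans the user’s message for emotional content and, when it finds some, softens whatever the underlying task bot was about to say.

```python
# Hypothetical "empathy layer" wrapped around any task-oriented chatbot.
# None of these names or responses come from Koko or any real service.

ACKNOWLEDGEMENTS = {
    "frustrated": "That sounds really frustrating.",
    "upset":      "I'm sorry you're upset.",
    "worried":    "It makes sense to be worried about this.",
    "scared":     "That sounds scary.",
}

def task_bot(message: str) -> str:
    """Stand-in for any existing chatbot (billing, support, scheduling...)."""
    return "Your refund has been processed and should arrive in 3-5 days."

def empathic_reply(message: str) -> str:
    words = set(message.lower().replace(",", " ").replace(".", " ").split())
    reply = task_bot(message)
    hits = sorted(words & set(ACKNOWLEDGEMENTS))
    if hits:
        # Prepend a canned acknowledgement: "sticking a heart into a robot."
        reply = ACKNOWLEDGEMENTS[hits[0]] + " " + reply
    return reply

print(empathic_reply("I'm frustrated, where is my refund?"))
# That sounds really frustrating. Your refund has been processed and ...
```

Notice that the “empathy” here is just a keyword lookup dressed up as feeling, which is exactly the homunculus worry raised below.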

There are upsides to AI that understands subtle nuances in human emotion. A recent study in JAMA showed that smartphone assistants such as Siri performed extremely poorly when responding to users raising sensitive issues, even going so far as to mock one user who asked for help with rape, sexual assault and sexual abuse. In a webinar on the next 20 years in healthcare, Carl W. Nelson, associate professor at Northeastern University’s D’Amore-McKim School of Business, points out that “Big Data does have its challenges in terms of confidentiality and things that you would be worried about — but [can be used] appropriately to guide decision-making, to make judgements…” And how complete would an automated medical diagnostic system be without some knowledge of human emotion to guide it?

So while there is a need for even weak AI to understand and emulate emotion, are we running the risk of creating a homunculus that feigns recognition of the human condition, regurgitating canned cues to provoke an emotional response in its user? Will those with little to no knowledge of how these bots work begin to treat them as more than just bots?

Consequences to society

As time goes on, it’s apparent that our technology will continue to astound us. As we see more movies and TV shows with AI robots, we will, no doubt, stop seeing these things as elements of science fiction and begin to wonder when they will become real. While movies such as Blade Runner played with the question long ago, recent advancements such as the Android Dick project, in conjunction with new shows like Westworld, make us realize that perhaps we’ll be dealing with the ethics of AI sooner rather than later.

The ethical question doesn’t center on whether or not these AI actually have feelings or rights or anything like that — but rather, what are the consequences to us as the humans who keep them? For example, how do you explain to a child that this indistinguishable-from-a-real-person butler is not and never was human, so it’s okay that you’re throwing him in the trash? Or that, à la Westworld, it’s alright to “kill” or “rape” them, because they’re not actually alive nor are they able to consent? When does emulation of life become just as important as life to a human?

These are all questions we’ll have to tease out over time, and for which there are no easy answers. Ultimately, we’ll have to define our relationship with AI, and find the thin, blurry line that separates weak AI from strong AI, if the latter is even possible. Hopefully, as we look into the mirror at these humanoid creations we’re constructing, we’ll learn more about and strengthen our own sense of humanity, instead of relinquishing it violently, or letting it wash away like tears in rain.
