Do iPhones Dream Of Twitter Follows? What The AI Arms Race Means For Creative Technology

You’re a tech giant that has built an artificial intelligence engine five to seven years ahead of the competition, one that uses advanced machine learning to make many of your key products best in class. What’s next? Share it with the entire internet.

Last week Google open-sourced TensorFlow, the machine learning library that lets Google recognize spoken words and translate languages — it’s the code that allows Google Photos to search the contents of your pictures and powers Smart Reply in Google Inbox.

While this will be an incredible benefit to the entire computer science community, Google’s decision isn’t entirely philanthropic. Open sourcing TensorFlow will let the company learn from the community at large and iterate quickly on the source code, driving rapid advances in machine learning.

Google’s decision to open source its library is a surprising move in the growing Silicon Valley A.I. arms race. Artificial intelligence, an endless source of fascination for sci-fi fanatics, has gained traction as tech giants have doubled down on machine learning as a way to analyze and glean value from the 2.5 quintillion bytes of data we produce each and every day.

According to a report from market intelligence firm Tractica, the market for enterprise AI system applications will increase from $202.5 million in 2015 to $11.1 billion by 2024.

Before you begin imagining self-aware robots doing battle with Rick Deckard and John Connor, remember that the threshold for what constitutes artificial intelligence is set much lower than a sentient robot that truly understands human language and context. What is commonly called artificial intelligence refers to advanced machine learning that allows computers to display intelligent behavior that can replicate functions of the human brain.

This intelligent behavior is reflected in algorithms that understand and respond to human stimuli in ways useful to the end user.

The A.I. Arms Race

Do Androids Dream Of Electric Sheep
Image: Flickr/Bernard Goldbach via a cc-by 2.0 license

In just a four-day span in early October, Apple acquired two artificial intelligence companies and committed to hiring nearly 100 artificial intelligence PhDs and experts to bolster its machine learning efforts.

Google has turned over a significant percentage of its searches to a machine learning tool named RankBrain. RankBrain is able to make educated guesses about what a user is looking for by recognizing patterns and connections between long, seemingly ambiguous queries — an absolute must for handling the 450 million searches a day that the engine has never processed before.

Not one to be outdone, Facebook’s AI lab is “committed to advancing the field of machine intelligence and developing technologies that give people better ways to communicate.”

Facebook just debuted a prototype of a video recognition program that can identify 487 distinct sports. The prototype can even tell the difference between highly similar activities — it recognizes not only that someone is roller skating but that they are on a freestyle slalom course.

Don’t make the mistake of equating image and video recognition with the classic search-engine image search, which merely looks for metadata or site text that corresponds with your query. A program that identifies and ascribes meaning to a visual medium is, in a sense, able to understand what it’s watching.

The A.I. boom is powered largely by deep learning, a movement that has transformed after decades as an academic outlier into a tech buzzword. Deep learning is a subset of machine learning whose algorithms derive meaning from data using a hierarchy of multiple, complex processing layers that mimic the neural networks of our brain. To put it simply, if you feed the system tons of information, it begins to understand it and respond in useful ways.
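The layered idea is easy to sketch. Here is a toy forward pass in Python with NumPy (an illustrative stand-in, not TensorFlow; the layer sizes, function names and random weights are all arbitrary choices for the sketch): each layer re-represents the previous layer’s output, and the nonlinearity between layers is what lets the stack capture structure a single layer can’t.

```python
import numpy as np

def relu(x):
    # A simple nonlinearity; without it, stacked layers collapse
    # into one linear transformation
    return np.maximum(0, x)

def forward(x, layers):
    """Pass an input through a stack of (weights, bias) layers."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# A small hierarchy: 4 raw inputs -> 8 -> 8 -> 2 learned features
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

features = forward(rng.standard_normal(4), layers)
print(features.shape)
```

Real systems like TensorFlow add the machinery to *learn* those weights from data; the hierarchy of processing layers itself is no more exotic than this.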

More than 30 years after University of Toronto Professor Geoffrey Hinton and NYU’s Yann LeCun developed the “back-propagation” algorithm that serves as the starting point for deep machine learning, Hinton now works for Google and LeCun for Facebook.
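A rough sketch of what back-propagation does (a toy NumPy version of the idea, not the original 1986 formulation; the seed, learning rate and network size are arbitrary): the error at the output is propagated backward through the layers, telling every weight how to nudge itself. Here a tiny two-layer network learns XOR, a problem no single linear layer can solve.

```python
import numpy as np

# Toy training set: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
w1, b1 = rng.standard_normal((2, 4)), np.zeros(4)   # hidden layer
w2, b2 = rng.standard_normal((4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ w1 + b1)
    out = sigmoid(h @ w2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: push the error gradient back layer by layer
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ w2.T) * h * (1 - h)    # error propagated to hidden layer

    # Gradient-descent update
    w2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    w1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The backward pass is just the chain rule applied layer by layer; what changed since the 1980s is not the math but the data and compute available to run it at scale.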

In those intervening decades, much of the A.I. community committed itself to finding shortcuts that could exhibit human-like behavior, while deep learning academics spun their wheels waiting for processing power to advance to the point where a computer could process information in a manner similar to the human brain.

Now the perfect storm of big data, exploding processing power, and a connected world is demanding that the internet giants of today deliver highly personalized digital experiences that connect humans to technology in previously unforeseen ways.

The Zuckerbergs and Musks of the industry seem to believe the answer lies in deep learning. But make no mistake about it, the tech company that wins the A.I. arms race will be at the forefront of a technological revolution comparable to the rise of the personal computer and the first data that flowed across the interweb.

Meeting Each Other Halfway

“Some meaningful comparison exists between human and mechanical behavior. As the external world becomes more animate, we may find that we — the so-called humans — are becoming more inanimate in the sense that we are led, directed by built-in tropisms, rather than leading. So we and our elaborately evolving computers may meet each other halfway.” – Philip K. Dick

As a creative technologist first and foremost, I’m fascinated by the potential artificial intelligence holds for allowing us to meet technology “halfway.” I’m fascinated by the what-ifs:

  • What if a social app could “Tinderfy” our entire night based on our preferences, our social circles (and their availability), and what’s going on? You’d swipe yes or no on seeing a band at a venue central to you and three of your closest friends who are free that night and have listened to the band in the past year. The moment you swipe, an Uber (or perhaps your self-driving Tesla) is on its way.
  • What if every e-commerce experience displayed only products you’re interested in? The e-book you overheard your co-worker mention, the one that could impress your boss, is waiting for you on the Amazon homepage.
  • What if the perfect outfit was waiting in our closet each morning? The smart closet understands we have a big meeting and selects our lucky black suit, with a handkerchief in the pocket for the cold we’re fighting. It beeps when we forget to grab the umbrella that will keep us from getting soaked by the statistically probable rain storm arriving just in time for the walk through the parking lot into the office.
  • What if tiny nanobots could be implanted into our brain and evolve the scope of human intelligence?
  • What if a machine learning system could process every documented recovery from cancer and find a cure?

The more inputs our big data warehouses provide these “smart” machines, the better the outcomes we can stand to expect.

Given its incredible potential, concern regarding the misuse and militarization of A.I. is certainly wise, but with proper governance it will change the world we live in for the better.

Featured Image: Digital Surgeons