Is a master algorithm the solution to our machine learning problems?

Machine learning is not new. We have been living with it since the 1990s, when Amazon introduced its “recommended for you” section to show users more personalized results. When we search for something on Google, machine learning is behind those search results. The “Friends” recommendations and suggested pages on Facebook, and the product recommendations on any e-commerce site, all depend on machine learning.

In other words, these websites know a lot about us. Every click or search we perform is recorded and tells these sites more about us, but no single site knows us completely. Google knows what we are searching for, Amazon knows what we are looking to buy, Apple knows our music interests and Facebook knows a lot about our social behavior. But none of these sites knows our preferences and choices throughout the day. Each can only predict from our previous clicks, not from the big picture of us.

What is a master algorithm?

But suppose there’s an algorithm that knows what we’re searching for on Google, what we’re buying on Amazon and what we’re listening to on Apple Music or watching on Netflix. It also knows about our recent statuses and shares on Facebook.

Now this algorithm knows a lot about us and has a better and more complete picture of us.

This powerful “master algorithm” is the central idea in the work of Pedro Domingos, author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World.

Machine learning has different schools of thought, and each looks at the problem from a different perspective. The symbolists focus more on philosophy, logic and psychology and view learning as the inverse of deduction. The connectionists focus on physics and neuroscience and believe in reverse engineering the brain. The evolutionaries, as the name suggests, draw their conclusions from genetics and evolutionary biology, whereas the Bayesians focus on statistics and probabilistic inference. And the analogizers extrapolate from similarity judgements, focusing more on psychology and mathematical optimization.

A deeper look at the different schools of thought of machine learning

The connectionists

This school of thought believes knowledge emerges from the connections between neurons. The connectionists focus on physics and neuroscience and believe in reverse engineering the brain. They rely on the back-propagation, or “backward propagation of errors,” algorithm to train artificial neural networks.
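
To see what back-propagation actually does, here is a minimal sketch in Python: a tiny two-layer network learning XOR. The network size, learning rate and task are illustrative choices of mine, not anything prescribed by the connectionists.

```python
import numpy as np

# Minimal back-propagation sketch: a two-layer network learning XOR.
# All hyperparameters here are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: activations flow from input to output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the output error is propagated back through the network.
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    # Gradient-descent updates for both layers.
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should converge toward [[0], [1], [1], [0]]
```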

Geoff Hinton of the University of Toronto is one of the top researchers in this area of machine learning. Hinton works actively with Google and is also the man behind the “deep learning” model that has revolutionized AI in areas like speech recognition, image recognition and generating readable sentences.

All the big names, including Facebook, Microsoft and Google, are using this model to improve their systems. Navdeep Jaitly, who did his research under Hinton’s supervision, works as a research scientist with Google’s “Brain” team. He used the deep learning model to outperform the already “fine-tuned” speech recognition algorithms in Android OS.

Yann LeCun, director of Facebook AI Research (FAIR), is another notable name in this area of research. LeCun did postdoctoral research under Hinton’s supervision and has dedicated himself to deep learning.

Yoshua Bengio, head of the Montreal Institute for Learning Algorithms, is another notable name leading research in the connectionist approach. He has organized various AI-related events and conferences, including the Learning Workshop. Bengio co-authored Deep Learning with his student Ian Goodfellow, now a researcher at OpenAI, and Aaron Courville.

Many researchers in the field of machine learning — especially the connectionists — believe that the deep learning model is the answer to all the problems of AI and consider it a master algorithm.

The symbolists

The symbolists’ approach is based on a “high-level,” symbolic interpretation of problems. The symbolists focus more on philosophy, logic and psychology and view learning as the inverse of deduction. John Haugeland called it “Good Old-Fashioned Artificial Intelligence” (GOFAI) in his book Artificial Intelligence: The Very Idea. The symbolist approach solves problems by using pre-existing knowledge to fill in the gaps, and most expert systems follow it, encoding their knowledge as if-then rules.
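
To make the if-then idea concrete, here is a toy forward-chaining rule engine in Python. The rules and facts are invented purely for illustration; they aren’t drawn from any particular expert system.

```python
# Toy forward-chaining inference in the symbolist, if-then style.
# Each rule says: if all conditions hold, conclude the consequent.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
facts = {"has_feathers", "lays_eggs", "can_fly"}

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains 'is_bird' and 'can_migrate'
```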

Tom Mitchell of Carnegie Mellon University is among the top researchers leading the symbolist school of thought. Sebastian Thrun, co-founder of Udacity, former Google VP and professor at Stanford University, and Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, are also among Mitchell’s notable students.

Stephen Muggleton of Imperial College London, author of Inductive Acquisition of Expert Knowledge, and Ross Quinlan, founder of RuleQuest, a company known for its data mining tools, are among the other notable researchers following the symbolist approach to machine learning.

The evolutionaries

The third school of thought, the evolutionaries, draw their conclusions from genetics and evolutionary biology. John Holland, who died in 2015 and taught at the University of Michigan, played a very important role in bringing Darwin’s theory of evolution into computer science. Holland was the pioneer of genetic algorithms, and his “fundamental theorem of genetic algorithms” is considered the foundation of this area.
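
Here is a minimal genetic algorithm in the spirit Holland pioneered, sketched in Python: a population of bit-strings evolves toward all ones, the classic “OneMax” toy problem. All parameters are illustrative choices.

```python
import random

# Minimal genetic algorithm sketch: selection, crossover, mutation.
random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    return sum(genome)  # more 1-bits -> fitter individual ("OneMax")

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: the fitter half of the population survives as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)   # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:           # occasional point mutation
            i = random.randrange(LENGTH)
            child[i] ^= 1
        children.append(child)
    population = parents + children

print(max(fitness(g) for g in population))  # should approach LENGTH (20)
```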

Much of the work in robotics, 3D printing and bioinformatics is being carried out by evolutionaries like Hod Lipson, director of the Creative Machines Lab at Columbia University. John Koza, former Stanford professor and co-founder of Scientific Games Corporation, is another pioneer in the field, best known for genetic programming. Meanwhile, Serafim Batzoglou, professor of computer science at Stanford, who heads his own lab there, is another notable researcher working in computational genomics.

The Bayesian school of thought

If you’ve been using email for 10 to 12 years, you know how much spam filters have improved. This is all because of the Bayesian school of thought in machine learning. The Bayesians rely on probabilistic inference and Bayes’ theorem to solve problems. They start with a belief, called a prior. Then they obtain some data and update the prior in light of that data; the outcome is called a posterior. The posterior then serves as the prior for the next batch of data, and the cycle repeats until a final answer emerges. Most spam filters work on this basis.
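
Here is a minimal sketch of that prior-to-posterior cycle, written as a toy spam filter in Python. The word probabilities are invented for illustration, not taken from any real filter.

```python
# Bayes' theorem as a toy spam filter would apply it: start with a prior
# belief that a message is spam, then update it one word at a time.
def update(prior, p_word_given_spam, p_word_given_ham):
    """Return the posterior P(spam | word) given the current prior."""
    numerator = p_word_given_spam * prior
    return numerator / (numerator + p_word_given_ham * (1 - prior))

belief = 0.5  # prior: before reading anything, spam and ham are equally likely
for word, p_spam, p_ham in [("winner", 0.8, 0.1),
                            ("free", 0.6, 0.2),
                            ("meeting", 0.1, 0.4)]:
    # Yesterday's posterior becomes today's prior.
    belief = update(belief, p_spam, p_ham)
    print(f"after '{word}': P(spam) = {belief:.2f}")
```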

Judea Pearl, of UCLA’s computer science department, is among the most prominent researchers following the Bayesian approach. David Heckerman, director of the Genomics Group at Microsoft, is another notable scientist focusing on it. He helped Microsoft develop data-mining tools and the junk-mail filters in Outlook and Hotmail.

Michael Jordan of the University of California, Berkeley is also known for his work in the same area.

The analogizers

The fifth tribe of machine learning, the analogizers, extrapolate from similarity judgements, focusing more on psychology and mathematical optimization. The analogizers follow the “nearest neighbor” principle in their research. Product recommendations on e-commerce sites like Amazon and movie ratings on Netflix are the most common examples of the analogizers’ approach.
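
Here is a minimal nearest-neighbor sketch in Python, assuming a made-up table of movie ratings: to make a recommendation for a new user, find the existing user whose ratings are most similar.

```python
import math

# Nearest-neighbor sketch in the analogizers' spirit: predict for a new
# case by finding the most similar known case. The ratings are invented.
ratings = {
    "alice": [5, 1, 4, 2],   # each position is one movie's rating
    "bob":   [4, 2, 5, 1],
    "carol": [1, 5, 2, 4],
}

def distance(a, b):
    return math.dist(a, b)  # Euclidean distance between rating vectors

def nearest_neighbor(target):
    return min(ratings, key=lambda user: distance(ratings[user], target))

# A new user's ratings are closest to Bob's, so we would recommend
# whatever Bob liked that the new user hasn't seen yet.
print(nearest_neighbor([4, 2, 5, 2]))  # -> 'bob'
```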

Douglas Hofstadter, of Indiana University, is the most prominent name in cognitive science. Vladimir Vapnik, co-inventor of the support vector machine and the main developer of Vapnik-Chervonenkis theory, is another prominent scientist known for working in this area. Facebook recently hired him to join its AI lab, along with other prominent researchers. Peter Hart, co-author of Pattern Classification and founder of Ricoh Innovations, is another well-known name following the analogizers’ approach.

Problems and dangers

All of the above schools solve different problems and present different solutions. The real challenge is to design a single algorithm that solves all the problems these approaches tackle; that algorithm would be the “master algorithm.”

We’re still living in the early days of machine learning and AI, and a lot more has to be done. We don’t know when or where a problem will arise that slows down this whole process and brings the next “winter of AI,” or when a new breakthrough will completely change the present scenario.

Progress in machine learning will be more like an evolution. Just as bacteria evolve faster than humans do, machine learning will progress faster than we do, but there will come a stage when these learning algorithms become too complicated to keep evolving quickly.

And there are other dangers. An “ideal” master algorithm would know everything about us. Although machine learning needs human input to start, it could eventually reach a point where it outsmarts us. What happens then? A slight divergence between its goals and ours could be enough to end humanity.

And that is only one scenario. Suppose we successfully build a mechanism to control these superintelligent machines, an idea about as plausible as ants devising a mechanism to control humans. Even then, conflicts of interest among nations, people and groups could trigger a Skynet-like attack.

How machine learning is already changing the world

Many startups are focused on machine learning and its applications to different problems in life, and, more importantly, every big technology company is backing them. DeepMind, acquired by Google, is focused on healthcare and is working toward curing cancer with the help of machine learning. The Chan Zuckerberg Initiative, founded by Facebook CEO Mark Zuckerberg and Priscilla Chan, has announced that it plans to invest $3 billion over the next decade to help cure, prevent and manage diseases.

There’s also the Partnership on AI, which brings together some of the largest companies in the world, like Amazon, Facebook, Google and Microsoft, to share their large databases for research and to promote best practices.

Conclusion

Is artificial intelligence setting tech development on a dangerous path? Are we going to become slaves to the machines, or is AI the gateway to the ultimate progress of mankind?

Anyone eager to learn more about AI and machine learning should read The Master Algorithm by Pedro Domingos and Superintelligence by Nick Bostrom. The free Data Sciences Overview course offered by Microsoft is also a good starting point.