Rise Of The Quants — Again

Editor’s note: Robin Vasan is a managing director of Mayfield, a global early-stage venture capital firm. He previously founded or worked in key positions at several software startups. He has an MBA from Harvard Business School and a dual bachelor’s degree in industrial engineering and economics from Stanford University.

I graduated from Stanford in the late 1980s with a dual degree in engineering and economics, and, like so many others of my day, I was drawn to Wall Street. Reaganomics was in full swing, Bloomberg terminals were still in their early days and popular culture was full of colorful characters like Gordon Gekko.

Wall Street was booming and firms were waking up to the massive potential of marrying technology and complex mathematics. Technology was transitioning: firms were moving from PCs running Lotus 1-2-3 to real-time data feeds and legions of Sun workstations. Foundational algorithms like Black-Scholes, binomial pricing models and Monte Carlo simulations were just starting to take hold.
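
For readers who never wrestled with these models, here is a minimal sketch, in Python, of two of the foundational algorithms named above: the closed-form Black-Scholes price of a European call option, and a Monte Carlo estimate of the same price. The inputs and function names are illustrative assumptions, not anything drawn from a real trading system.

```python
import math
import random
from statistics import NormalDist


def black_scholes_call(spot, strike, rate, vol, years):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * math.sqrt(years))
    d2 = d1 - vol * math.sqrt(years)
    n = NormalDist().cdf  # standard normal cumulative distribution function
    return spot * n(d1) - strike * math.exp(-rate * years) * n(d2)


def monte_carlo_call(spot, strike, rate, vol, years, paths=100_000):
    """Monte Carlo estimate of the same price: simulate terminal prices under
    geometric Brownian motion and discount the average payoff."""
    total_payoff = 0.0
    for _ in range(paths):
        z = random.gauss(0.0, 1.0)
        terminal = spot * math.exp((rate - 0.5 * vol ** 2) * years + vol * math.sqrt(years) * z)
        total_payoff += max(terminal - strike, 0.0)
    return math.exp(-rate * years) * total_payoff / paths


# Illustrative inputs: spot 100, strike 105, 5% rate, 20% volatility, one year to expiry.
print(black_scholes_call(100, 105, 0.05, 0.20, 1.0))  # roughly 8.0
print(monte_carlo_call(100, 105, 0.05, 0.20, 1.0))    # converges toward the same value
```

The pairing illustrates the trick quants of that era exploited: where no closed form exists, simulation plus compute power fills the gap.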

Firms needed new hires with PhDs and master's degrees in complex quantitative disciplines like physics, statistics and mathematics, as well as armies of software developers. The race was on to leverage new computing platforms and combine market data with complex algorithms to create new financial products and trading strategies. These new hires, who had the skills to combine programming, algorithms and large data sets, became known as quantitative analysts (or "quants").

Over the ensuing decades, quants completely transformed Wall Street from a suspender-wearing, sales-dominated culture to an environment built around supercomputers paired with proprietary trading algorithms. The changes touched every aspect of finance and became embedded in nearly every corner of the industry.

Today, Silicon Valley is the hottest place for quants to be – though people with this skill set are now more often called data scientists. A similar confluence of factors — data, technology and algorithms — has enabled a new class of transformational opportunities. These opportunities are not limited to financial services; they are showing up in every sector of the economy.

The volume and variety of data sources have exploded, with companies now routinely capturing all manner of user web and mobile traffic, e-commerce and real-world transactions, social profile information, location and even sensor data. In addition, there are vast pools of third-party data available through APIs for everything from advertising and beauty to yellow pages and ZIP codes.

Technology has also taken a dramatic leap forward, with almost limitless cheap storage and highly efficient, scalable cloud computing. For the daily price of a Starbucks latte (~$3.50), you can rent a medium-sized server and 1TB of storage. Relational databases played a key role in simplifying data access in the 1990s, and today a new crop of open-source data technologies, including Hadoop and NoSQL, is playing the same enabling role in managing, and making accessible, vastly larger data stores.

Finally, new foundational analytical techniques, most notably in machine learning and deep learning, have emerged from academia to help data scientists discover patterns and models across hundreds or thousands of parameters.

They have esoteric names, much as their predecessors did – convolutional nets, random forests and restricted Boltzmann machines – but the goal is still the same: to use large data sets, massive compute power and increasingly sophisticated algorithms to make sense of structured and unstructured data, and to make decisions and predictions that formerly required humans.
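
As a concrete, if toy, illustration of one technique named above, the following sketch trains a random forest on a small built-in scikit-learn dataset and then scores records it has never seen. The dataset and parameters are assumptions chosen just to show the workflow; a real application would swap in the clickstream, transaction or sensor data described earlier.

```python
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small labeled dataset: 30 numeric features per record, binary outcome.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A random forest is an ensemble of decision trees, each fit on a bootstrap
# sample of the data; predictions are made by majority vote across the trees.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The payoff: decisions and predictions on records the model has never seen.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```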

Wall Street circa 1990 vs. all industries circa today:

- Data: real-time quotes → clickstream, transactions, user profiles and social, location and sensor data
- Technology: Unix workstations and relational databases → cloud compute and storage, Hadoop and NoSQL
- Algorithms: Black-Scholes and binomial models, Monte Carlo simulation → random forests, ensemble models, deep learning

Consequently, tech companies are lusting after all manner of data scientists, and the biggest ones (Google, Facebook, Baidu, Microsoft, etc.) have already made early acquisitions in machine and deep learning. However, these acquisitions may largely be used to fuel their existing businesses in search, social and other applications inside their enterprises.

There are many, many more real-world problems to solve. The hunt is on across horizontal business functions like sales, marketing, finance and security, as well as vertical industries such as retail, manufacturing, healthcare and even transportation. The Holy Grail is finding patterns and insights where we didn't know they existed.

For example, Lyft last month debuted discounted pricing from specific high-traffic street corners in San Francisco. This is a great marriage of rider data, population data and demand data to improve its service. Further advancements down the road could leverage third-party event data (like sporting events or conferences), flight information and weather data to ensure there are enough drivers available at the right places at the right times.

An example of a company formed to solve a very targeted problem is Enlitic. The company was founded to tackle the vital humanitarian problem of cancer screening. Interestingly, the initial team was not stocked with oncologists or even MDs, but rather with experts in deep learning. By getting access to properly coded radiology images, they have been able to build models that will hopefully reduce the cost, and improve the accuracy, of imaging diagnostics.

Financial trading has long been fertile ground for new algorithms, but there is still room for new entrants. Binatix initially applied new machine-learning techniques to speech recognition, but it subsequently pivoted to analyzing market price and trade data to identify unseen patterns it could exploit for its own trading – effectively setting up an algorithmic trading shop.

However, there are still valuable, broad-based problem areas that have not yet been solved. Startups like MetaMind and SkyMind are trying to provide platform technology for deep learning – but selling frameworks to developers can be a difficult proposition.

The potential projects here are endless, so what matters is finding the business application with the highest impact.

Palantir, for example, started out focused on security intelligence, found success, and then expanded into many other industries, including finance, health, law, pharma and insurance. From the opposite direction, Kaggle and Wise.io started too broadly before narrowing their focus to a few markets.

Taking advantage of these opportunities isn't easy. It requires a tight interlock between knowing which business problems are worth tackling and having the technical skills – data, technology and algorithms – to actually solve them.

This shift in tech toward quantitative disciplines will impact all walks of life, with the opportunities for both investors and quantrepreneurs only just now being realized.

*Mayfield portfolio companies included in this article: Lyft.