
The evolution of machine learning



Catherine Dong, Contributor

Catherine Dong is a summer associate at Bloomberg Beta and will be working at Facebook as a machine learning engineer.

Major tech companies have actively reoriented themselves around AI and machine learning: Google is now “AI-first,” Uber has ML running through its veins and internal AI research labs keep popping up.

They’re pouring resources and attention into convincing the world that the machine intelligence revolution is arriving now. They tout deep learning, in particular, as the breakthrough driving this transformation and powering new self-driving cars, virtual assistants and more.

Despite this hype around the state of the art, the state of the practice is less futuristic.

Software engineers and data scientists working with machine learning still use many of the same algorithms and engineering tools they did years ago.

That is, traditional machine learning models — not deep neural networks — are powering most AI applications. Engineers still use traditional software engineering tools for machine learning engineering, and they don’t work: The pipelines that take data to model to result end up built out of scattered, incompatible pieces. There is change coming, as big tech companies smooth out this process by building new machine learning-specific platforms with end-to-end functionality.

Large tech companies have recently started to use their own centralized platforms for machine learning engineering, which more cleanly tie together the previously scattered workflows of data scientists and engineers.

What goes into a machine learning sandwich

Machine learning engineering happens in three stages: data processing, model building, and deployment and monitoring. In the middle is the meat of the pipeline, the model, which is the machine learning algorithm that learns to make predictions from input data.

That model is where “deep learning” would live. Deep learning is a subcategory of machine learning algorithms that use multi-layered neural networks to learn complex relationships between inputs and outputs. The more layers in the neural network, the more complexity it can capture.
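To make "layers" concrete, here is a minimal sketch of a small multi-layer network in TensorFlow's Keras API; the layer sizes, the 20-feature input and the binary output are arbitrary illustrative choices, not a configuration from any particular company:

```python
import tensorflow as tf

# A small "deep" model: two hidden layers between input and output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                     # 20 input features (arbitrary)
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10)  # training data omitted in this sketch
```

Adding more layers increases the complexity the network can capture, at the cost of more parameters to train.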

Traditional statistical machine learning algorithms (i.e. ones that do not use deep neural nets) have a more limited capacity to capture information about training data. But these more basic machine learning algorithms work well enough for many applications, making the additional complexity of deep learning models often superfluous. So we still see software engineers using these traditional models extensively in machine learning engineering — even in the midst of this deep learning craze.

But the bread of the sandwich, the part of the process that holds everything together, is what happens before and after training the machine learning model.

The first stage involves cleaning and formatting vast amounts of data to be fed into the model. The last stage involves careful deployment and monitoring of the model. We found that most of the engineering time in AI is not actually spent on building machine learning models — it’s spent preparing and monitoring those models.

The meat of machine learning — and avoiding exotic flavors

Despite the focus on deep learning at the big tech company AI research labs, most applications of machine learning at these same companies do not rely on neural networks and instead use traditional machine learning models. The most common models include linear/logistic regression, random forests and boosted decision trees. These are the models behind friend suggestions, ad targeting, user interest prediction, supply/demand simulation and search result ranking, among other services tech companies run.

And some of the tools engineers use to train these models are similarly well-worn. One of the most commonly used machine learning libraries is scikit-learn, which was released a decade ago (although Google’s TensorFlow is on the rise).
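For a sense of how little ceremony these traditional models involve, here is a minimal scikit-learn sketch; the synthetic data stands in for real product features (clicks, friendship signals and so on), which are not part of the original example:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic placeholder data standing in for real product features.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # trains in seconds on a laptop, no GPU required
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```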

There are good reasons to use simpler models over deep learning. Deep neural networks are hard to train. They require more time and computational power, and usually different hardware, specifically GPUs. Getting deep learning to work is hard: it still requires extensive manual fiddling, involving a combination of intuition and trial and error.

With traditional machine learning models, the time engineers spend on model training and tuning is relatively short — usually just a few hours. Ultimately, if the accuracy improvements that deep learning can achieve are modest, the need for scalability and development speed outweighs their value.
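As a rough sketch of what "a few hours" of tuning looks like in practice, a coarse grid search over a boosted-tree model in scikit-learn is often enough; the grid values below are illustrative, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)  # placeholder data

# A small, coarse grid: traditional models tolerate lightweight tuning.
param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5], "learning_rate": [0.05, 0.1]}
search = GridSearchCV(GradientBoostingClassifier(), param_grid, cv=3, scoring="roc_auc", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```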

Attempting to stick it all together — tools from data to deployment

So when it comes to training a machine learning model, traditional methods work well. But the same does not apply to the infrastructure that holds together the machine learning pipeline. Using the same old software engineering tools for machine learning engineering creates greater potential for errors.

The first stage in the machine learning pipeline — data collection and processing — illustrates this. While big companies certainly have big data, data scientists or engineers must clean the data to make it useful — verify and consolidate duplicates from different sources, normalize metrics, design and prove features.

At most companies, engineers do this using a combination of SQL or Hive queries and Python scripts to aggregate and format up to several million data points from one or more data sources. This often takes several days of frustrating manual labor. Some of this is likely repetitive work, because the process at many companies is decentralized: data scientists or engineers often manipulate data with local scripts or Jupyter Notebooks.
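The scripts themselves tend to look something like the following pandas sketch; the file names, columns and normalization choice are hypothetical, but the pattern of deduplicating, normalizing and joining exports from different sources is the typical one:

```python
import pandas as pd

# Hypothetical exports from two sources, e.g. a Hive query result and an events log.
users = pd.read_csv("users_export.csv")    # assumed columns: user_id, country, signup_ts
events = pd.read_csv("events_export.csv")  # assumed columns: user_id, event_ts, spend_usd

# Consolidate duplicates and normalize a metric before joining.
users = users.drop_duplicates(subset="user_id")
events["spend_usd"] = (events["spend_usd"] - events["spend_usd"].mean()) / events["spend_usd"].std()

features = (events.groupby("user_id")
            .agg(total_spend=("spend_usd", "sum"), n_events=("event_ts", "count"))
            .reset_index())
dataset = users.merge(features, on="user_id", how="left")
dataset[["total_spend", "n_events"]] = dataset[["total_spend", "n_events"]].fillna(0)
dataset.to_csv("training_data.csv", index=False)  # handed off to model training
```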

Furthermore, the large scale of big tech companies compounds errors, making careful deployment and monitoring of models in production imperative. As one engineer described it, "At large companies, machine learning is 80 percent infrastructure."

However, unit tests, the backbone of traditional software testing, don't really work with machine learning models, because the correct output of a machine learning model isn't known beforehand. After all, the purpose of machine learning is for the model to learn to make predictions from data without an engineer explicitly coding any rules. So instead of unit tests, engineers take a less structured approach: They manually monitor dashboards and program alerts for new models.
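In practice those alerts are often little more than a threshold check on a tracked metric. A minimal sketch, with a made-up metric and threshold, might look like this:

```python
def check_model_health(live_auc: float, baseline_auc: float, max_drop: float = 0.02) -> None:
    """Hypothetical alert: flag the model if its live AUC drifts too far below the offline baseline."""
    if baseline_auc - live_auc > max_drop:
        # In a real system this would page an engineer or post to a dashboard.
        raise RuntimeError(f"Model degraded: live AUC {live_auc:.3f} vs baseline {baseline_auc:.3f}")

check_model_health(live_auc=0.805, baseline_auc=0.81)   # within tolerance, passes silently
# check_model_health(live_auc=0.76, baseline_auc=0.81)  # would raise and trigger an alert
```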

And shifts in real-world data may make trained models less accurate, so engineers re-train production models on fresh data on a daily to monthly basis, depending on the application. But a lack of machine learning-specific support in the existing engineering infrastructure can create a disconnect between models in development and models in production — normal code is updated much less frequently.

Many engineers still rely on rudimentary methods of deploying models to production, like saving a serialized version of the trained model or model weights to a file. Engineers sometimes need to rebuild model prototypes and parts of the data pipeline in a different language or framework so that they run on production infrastructure. Any incompatibility at any stage of the machine learning development process, from data processing to training to deployment to production infrastructure, can introduce error.
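A sketch of that rudimentary path, using scikit-learn and joblib with placeholder data and file names, shows how little separates "development" from "deployment" in this setup:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)  # placeholder data
model = LogisticRegression(max_iter=1000).fit(X, y)

# "Deployment" is often just shipping this file to the serving environment.
joblib.dump(model, "model_v1.joblib")

# The serving side reloads it; mismatched library versions or feature
# preprocessing between the two environments can silently introduce error.
served_model = joblib.load("model_v1.joblib")
print(served_model.predict(X[:5]))
```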

Making it presentable — the road forward

To address these issues, a few big companies, with the resources to build custom tooling, have invested time and engineering effort into creating their own machine learning-specific tools. Their goal is to have a seamless, end-to-end machine learning platform that is fully compatible with the company’s engineering infrastructure.

Facebook’s FBLearner Flow and Uber’s Michelangelo are internal machine learning platforms that do just that. They allow engineers to construct training and validation data sets with an intuitive user interface, decreasing time spent on this stage from days to hours. Then, engineers can train models with (more or less) the click of a button. Finally, they can monitor and directly update production models with ease.

Services like Azure Machine Learning and Amazon Machine Learning are publicly available alternatives that provide similar end-to-end platform functionality but only integrate with other Amazon or Microsoft services for the data storage and deployment components of the pipeline.

Despite all the emphasis big tech companies have placed on enhancing their products with machine learning, at most companies there are still major challenges and inefficiencies in the process. They still use traditional machine learning models instead of more-advanced deep learning, and still depend on a traditional infrastructure of tools poorly suited to machine learning.

Fortunately, with the current focus on AI at these companies, they are investing in specialized tools to make machine learning work better. With these internal tools, or potentially with third-party machine learning platforms that are able to integrate tightly into their existing infrastructures, organizations can realize the potential of AI.

A special thank you to Irving Hsu, David Eng, Gideon Mann and the Bloomberg Beta team for their insights.
