4 questions to ask before building a computer vision model

In 2015, the launch of YOLO — a high-performing computer vision model capable of real-time object detection — set off an avalanche of progress that sped up computer vision’s jump from research to market.

It’s since been an exciting time for startups as entrepreneurs continue to discover use cases for computer vision in everything from retail and agriculture to construction. With lower computing costs, greater model accuracy and rapid proliferation of raw data, an increasing number of startups are turning to computer vision to find solutions to problems.

However, before founders begin building AI systems, they should think carefully about their risk appetite, data management practices and strategies for future-proofing their AI stack.


Below are four factors that founders should consider when deciding to build computer vision models.

Is deep learning the right tool for solving my problem?

It may sound crazy, but the first question founders should ask themselves is whether they even need a deep learning approach to solve their problem.

During my time in finance, we would often hire a new employee straight out of university who wanted to use the latest deep learning model to solve a problem. After spending time working on the model, they’d conclude that a variant of linear regression worked better.

The moral of the story?

Deep learning might sound like a futuristic solution, but in reality, these systems are sensitive to many small factors. Often, an existing, simpler solution — such as a “classical” algorithm — produces an equally good or better outcome at lower cost.
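
As a concrete illustration, consider a minimal sketch of one such classical approach. OpenCV ships a HOG descriptor with a pretrained pedestrian detector, so a simple people-detection use case can sometimes be served without training a deep network at all (the image path below is a placeholder):

```python
# A classical baseline: HOG features plus a pretrained linear SVM for
# pedestrian detection. No deep learning, GPUs or training data required.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street_scene.jpg")  # placeholder path
boxes, weights = hog.detectMultiScale(
    image, winStride=(8, 8), padding=(8, 8), scale=1.05
)
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

If a baseline like this already meets your accuracy bar, the deep learning project may not be worth the investment.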

Consider the problem, and the solution, from all angles before building a deep learning model.

Deep learning in general, and computer vision in particular, hold a great deal of promise for creating new approaches to solving old problems. However, building these systems comes with an investment risk: You’ll need machine learning engineers, a lot of data and validation mechanisms to put these models into production and build a functioning AI system.

It’s best to evaluate whether a simpler solution could solve your problem before beginning such a large-scale effort.

Perform a thorough risk assessment

Before building any AI system, founders must consider their risk appetite, which means evaluating the risks that occur at both the application layer and the research and development stage.

Roughly speaking, the R&D risk is that a model won’t meet certain metric-based performance criteria, and the application-level risk is that the production system won’t succeed within the context in which it is deployed.

While machine learning-oriented founders tend to focus on R&D risks, a better first step is to create assessment criteria for application-level risk. The factors in this assessment will differ by application, but they often include potential risks in regulation, public perception and systems-level engineering.

The first step of building an effective framework often involves understanding the consequence of model errors (such as false positives or false negatives) within your application. The target use case has an important effect on this analysis — after all, there’s a huge difference between the application risk for using AI to filter emails and using AI to run autonomous vehicles.

The consequence of a model allowing one of every 1,000 spam emails through to your inbox is minor. At worst, a stray spam email mildly annoys someone, so this model has an acceptable application risk level for production. The consequence of mistaking a green light for a red one, however, is severe: a computer vision model that mistakes one of every 1,000 green lights for red simply cannot go into production.
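
To make the comparison concrete, here is a toy sketch of how an identical error rate translates into very different application risk once each error carries a cost. All of the numbers below are hypothetical, chosen only to illustrate the gap:

```python
# Toy illustration: the same error rate implies very different application
# risk depending on what a single mistake costs. All numbers are hypothetical.

def expected_error_cost(error_rate: float, volume: int, cost_per_error: float) -> float:
    """Expected total cost of model errors over a given prediction volume."""
    return error_rate * volume * cost_per_error

# Spam filter: 1 in 1,000 spam emails slips through; each is a minor annoyance.
print(expected_error_cost(error_rate=1 / 1000, volume=1_000_000, cost_per_error=0.01))

# Traffic-light classifier: 1 in 1,000 lights misread; each error risks a crash.
print(expected_error_cost(error_rate=1 / 1000, volume=1_000_000, cost_per_error=100_000.0))
```

The same 0.1% error rate yields a negligible cost in one application and a catastrophic one in the other.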

Founders should first map out the consequences of errors in their application, because these consequences influence the evaluation of R&D risk. Depending on the application risk, AI systems need to meet different performance benchmarks before going into production.

For low-risk applications, simply beating the (human-based) status quo is often enough. High-risk applications, such as self-driving cars, need to meet new gold standards before people will trust the model’s performance. Even if autonomous vehicles are statistically less likely to crash than human drivers, the technology is held to a higher standard.

Beware the prototype-production gap

Making a proof-of-concept model for a given use case is (often) relatively simple. Making a model suitable for a production environment requires at least an order of magnitude more work.

To avoid falling into the so-called prototype-production gap, founders must think carefully about the performance characteristics required for model deployment, and how these needs will influence the length and resourcing of the development cycle.

Consider the development cycle required to deploy a computer vision model designed for a high-risk application. Let’s say a model achieved 95% accuracy at the prototype stage, but to go into production, it needs to make accurate predictions 99.99% of the time. Closing that 4.99-percentage-point gap is far harder than building the prototype: it means cutting the error rate from 5% to 0.01%, a 500-fold reduction in mistakes.

To achieve that level of accuracy, the model must train on vast amounts of data and learn to react appropriately to all types of situations. AI systems lack common sense, and computer vision models can’t reason as a human would. When they encounter an unexpected scenario that they have never seen before, these models won’t perform predictably. These scenarios, called edge cases, are notoriously difficult to debug within a machine learning context, because machine learning engineers must locate the few examples out of millions where the model fails for a systematic reason.
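
In practice, hunting for these failures usually starts by mining the evaluation set for examples the model gets confidently wrong. Below is a minimal, framework-agnostic sketch of that idea; `probs` and `labels` are assumed to be precomputed model outputs and ground-truth classes, not part of any particular library:

```python
import numpy as np

def mine_edge_cases(probs: np.ndarray, labels: np.ndarray, k: int = 100) -> np.ndarray:
    """Return indices of the k examples the model gets most confidently wrong.

    probs:  (N, C) array of predicted class probabilities.
    labels: (N,) array of ground-truth class indices.
    """
    preds = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    wrong_idx = np.flatnonzero(preds != labels)
    # Confidently wrong examples often share a systematic cause
    # (e.g., reflections of cyclists), so surface those first.
    order = np.argsort(-confidence[wrong_idx])
    return wrong_idx[order[:k]]
```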

Edge cases often prevent models from achieving 100% accuracy in the testing phase. Again, autonomous vehicles are a good example, because human drivers can use reason while computer vision models can’t. Let’s say that after training on enormous amounts of data, a model becomes capable of recognizing cyclists, but then it encounters a reflection of a cyclist. The model will likely treat the reflection as a real cyclist and behave accordingly; a human would not make this mistake.

Founders should be aware that applications requiring a high level of accuracy to enter production demand more training time and more training data during the development cycle, so they need to budget for that additional time and money before they begin building their models.

Take a data-centric approach

Once founders decide to build a model, they should take a data-centric rather than model-centric approach.

As open source models continue to improve, a company’s competitive edge will no longer come from building more sophisticated models; it will come from the quality and quantity of its data. The data, not the model, will become the core of its IP.

To understand how neglecting a data-centric approach has stifled deep learning’s progress, consider the problem of algorithmic bias.

A lot of medical AI fails to make the jump from the research lab to the real world. That’s because researchers have tended to focus on improving a model’s accuracy in controlled settings rather than thinking carefully about whether their training data is representative of the population at large.

When medical AI models train on biased datasets, they do not learn how to make predictions about people of varying ages, racial demographics and genders. This knowledge gap leads to misdiagnoses and the perpetuation of existing medical biases.

With a data-centric approach, the aim is to reason from first principles about what data the model needs to train on to achieve the best possible performance.

When building data-centric AI for computer vision, your success will depend on how well you source data. Procuring the best proprietary datasets available is a priority. Unlike more established companies that have been generating their own data, startups may find obtaining exclusive datasets challenging and should consider partnering with established companies or using creative methods such as sophisticated scraping to secure unique datasets.

After securing a supply of data, set up a data management system that enables machine learning engineers to effectively store, filter, query and visualize data in a scalable way. The system needs to be structured so that it can accommodate future needs and uses, including ingesting additional data, reorganizing data, deleting data, cleaning data, querying data with arbitrary points of inquiry and more.
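
As an illustration, here is a minimal sketch of such a store using SQLite; a production system would likely use a dedicated data platform, and the schema fields here are assumptions rather than any standard:

```python
import sqlite3

conn = sqlite3.connect("dataset.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS images (
        id INTEGER PRIMARY KEY,
        uri TEXT NOT NULL,           -- where the raw image lives
        source TEXT,                 -- camera feed, partner, scrape, etc.
        captured_at TEXT,            -- ISO-8601 timestamp
        annotated INTEGER DEFAULT 0, -- 0 = pending, 1 = labeled
        split TEXT                   -- train / val / test
    )
""")
conn.commit()

# Arbitrary points of inquiry become simple queries, e.g. "unannotated
# images from a given source" when planning the next labeling batch.
rows = conn.execute(
    "SELECT id, uri FROM images WHERE annotated = 0 AND source = ?",
    ("partner_feed",),
).fetchall()
```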

With a management system in place, the next step is ensuring a process for continuous annotation and review. The real world produces messy and imperfect data, so data-centric AI requires robust, iterative annotation pipelines as opposed to one-off annotations.

Think about the subject-matter expertise and labeling tools you’ll need to ensure that high-quality annotations can be completed as efficiently as possible. Also, keep in mind that in the world of data-centric AI, the annotation layer is no longer just procedural. The label structures and architectural design choices will influence how the system is going to learn, and these data labeling techniques will become intellectual property that can give companies a competitive advantage.
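
To make that concrete, here is a sketch of one possible annotation record. Every field is a design choice rather than a standard: the box format, the class taxonomy, reviewer sign-off and schema versioning all shape what the system can learn:

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    x: float       # top-left corner, normalized to [0, 1]
    y: float
    width: float
    height: float
    label: str     # class from the project's taxonomy, e.g. "cyclist"

@dataclass
class Annotation:
    image_id: int
    annotator: str
    boxes: list[BoundingBox] = field(default_factory=list)
    reviewed: bool = False     # supports the iterative review loop
    schema_version: int = 1    # label structures evolve; version them like code
```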

Taking a data-centric approach also enables companies to remain model-agnostic, which means they can reap the rewards of future innovations. Having a system dependent on a particular architecture limits a company’s ability to take advantage of more advanced models. For instance, if a company relies on a label ingestion system built for the needs of one model, then refactoring that process might prove difficult and prevent a company from incorporating a newer, better model into its business.
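
One way to avoid that trap, sketched below, is to store labels in a single neutral format and convert per model family at training time, so adopting a newer architecture means writing one new converter rather than refactoring the entire ingestion pipeline. The two target formats are simplified versions of the YOLO and COCO conventions:

```python
# Neutral internal format: top-left (x, y) plus width and height,
# all normalized to [0, 1]. Converters live at the edge of the system.

def to_yolo_row(cls_id: int, x: float, y: float, w: float, h: float) -> str:
    """Neutral box -> YOLO-style text row: class, center x/y, width, height."""
    cx, cy = x + w / 2, y + h / 2
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

def to_coco_bbox(x: float, y: float, w: float, h: float,
                 img_w: int, img_h: int) -> list[float]:
    """Neutral box -> COCO-style absolute [x, y, width, height] in pixels."""
    return [x * img_w, y * img_h, w * img_w, h * img_h]
```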

At Encord, we know it’s the data, not the model, that matters most. Investing in a data-centric approach allowed us to use the same model for both detecting gastrointestinal polyps and finding illegal fishing vessels in the ocean.

The technological landscape is evolving rapidly, and in five years, deep learning will look very different. As a result, any AI system developed today needs to take a data-centric approach so that it can incorporate the models of the future.