AnyClip snaps up $47M for its video search and analytics technology

Video is, quite literally, what gets the world moving online these days, and is expected to account for 82% of all IP traffic this year. Today a startup that has built a set of tools to help better parse, index and ultimately discover that trove of content is announcing a big round of funding to expand its business after seeing 600% growth in the last year.

AnyClip, which combines artificial intelligence with more standard search tools to give content providers better video analytics and improve how their videos can be used and viewed, has raised $47 million, money that it will use to build out its platform.

The funding is being led by JVP, with La Maison, Bank Mizrahi and internal investors also participating. The company is not officially disclosing its valuation, but it has raised $70 million to date, and I understand from reliable sources that the valuation is around $300 million.

Founded in Tel Aviv and now co-headquartered in New York, AnyClip is tackling the fact that there is a huge amount of video in the world today, and that it remains one of the most widely used content mediums, whether you are a consumer binging a Netflix series, someone trying to dig up an obscure classical music recording on YouTube, a business user on Zoom, or something in the very large in-between. The problem is that in most cases, people are just scratching the surface when they search.

That’s not just because hosts tweak algorithms to steer viewers toward some things instead of others; it’s because in most cases it is too difficult, and some might say impossible, to search everything in an efficient way.

AnyClip is among the tech companies that believe it’s not impossible. It uses technologies that include deep learning models based on computer vision, NLP, speech-to-text, OCR, patented key frame detection and closed captioning to “read” the content in videos. It can recognize people, brands, products, actions and millions of keywords, and build taxonomies based on what the videos contain. These can be organized around, for example, content category, brand safety, or whatever else a customer requests.
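
AnyClip has not published the details of that pipeline, but as a rough illustration of the general approach the company describes, here is a minimal, hypothetical sketch in which several analyzers (speech-to-text, object detection, OCR) each contribute tags to a single searchable index per video. Every function name and mocked output below is invented for the example and is not AnyClip’s actual technology.

```python
from dataclasses import dataclass, field
from collections import defaultdict

# Hypothetical stand-ins for the kinds of analyzers described above
# (speech-to-text, computer vision, OCR); a real system would call ML models here.
def transcribe_speech(video_path: str) -> list[str]:
    return ["earnings", "cloud", "growth"]        # mocked transcript keywords

def detect_objects(video_path: str) -> list[str]:
    return ["laptop", "office", "Samsung logo"]   # mocked visual detections

def read_on_screen_text(video_path: str) -> list[str]:
    return ["Q3 results"]                         # mocked OCR output

@dataclass
class VideoIndex:
    video_path: str
    tags: dict[str, list[str]] = field(default_factory=lambda: defaultdict(list))

def index_video(video_path: str) -> VideoIndex:
    """Merge the output of several analyzers into one searchable taxonomy."""
    index = VideoIndex(video_path)
    index.tags["speech"].extend(transcribe_speech(video_path))
    index.tags["visual"].extend(detect_objects(video_path))
    index.tags["on_screen_text"].extend(read_on_screen_text(video_path))
    return index

if __name__ == "__main__":
    idx = index_video("example_briefing.mp4")
    # A naive keyword search across every tag category
    query = "samsung"
    hits = [tag for tags in idx.tags.values() for tag in tags if query in tag.lower()]
    print(hits)  # ['Samsung logo']
```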

The videos that AnyClip currently works with are hosted by AnyClip itself — on AWS, president and CEO Gil Becker tells me — and the process of reading and indexing is super quick, “10x faster than real time.”

As you might guess, the resulting data, and what can be done with it, have a lot of potential uses. Currently, Becker said, AnyClip is finding a strong audience among customers looking for ways to better organize their video content for a variety of use cases, whether that’s for internal purposes, for B2B purposes, or for consumers to better discover something.

That tech can be used, naturally, to better monetize video. By identifying more objects, themes, moods and language in videos more efficiently, AnyClip essentially builds a framework not just for people to better discover videos, but for advertisers to place ads next to whatever they want to be near (or, conversely, to avoid content that they do not want any association with at all).
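
To continue the hypothetical sketch above, that kind of placement logic can be reduced to simple set operations over a video’s tags: an advertiser lists tags it wants to appear alongside and tags it wants to avoid, and a video is eligible only if it matches the former and none of the latter. The tags and categories here are again illustrative, not AnyClip’s.

```python
def eligible_for_ad(video_tags: set[str], wanted: set[str], blocked: set[str]) -> bool:
    """Toy brand-safety check: require at least one wanted tag and no blocked tags."""
    return bool(video_tags & wanted) and not (video_tags & blocked)

# Example: a sports drink brand that wants soccer content but avoids accident footage.
print(eligible_for_ad({"soccer", "stadium"}, wanted={"soccer"}, blocked={"accident"}))    # True
print(eligible_for_ad({"accident", "highway"}, wanted={"soccer"}, blocked={"accident"}))  # False
```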

The list of those it works with is pretty impressive — although Becker would not get very specific on what it does for all of its clients. It includes Samsung, Microsoft, AT&T, Amazon (Prime Video specifically), Heineken, Discovery, Warner Media (the latter two soon to be one), Tencent, Internet Brands and Google.

AnyClip does not count Google as an investor per se, but it has received funding from the company through the Google News Initiative’s Innovation Challenge, to create a streaming video page experience for media companies that mimics the functionality and design of today’s most popular video-on-demand services while drawing on advanced video management tools supported by AnyClip’s AI backbone. AnyClip was chosen from among hundreds of companies for a solution that lets companies transform any library into a “Netflix or YouTube-like” library, complete with channels and subchannels, in less than 30 seconds.

AnyClip has an interesting history that led to it building the search and discovery tools that it sells today. It started life back in 2009 with a concept that spoke directly to its name: It let media companies create clips of films that could be shared around the internet, which it hosted on a site of its own. These could be found using a number of taxonomies built by AnyClip’s algorithms, by humans at the company, and by contributors. Kind of like a Giphy before its time, if you will. 

It turned out to be possibly too far ahead of its time. At a time when piracy was still a big deal, and there were no Netflixes or other places to stream efficiently and legally, the idea proved too complicated and too hard a sell for rights owners. The company subsequently pivoted to building a video-based ad network, which was probably also too early.

But there was something to the technology, given the right place and right time, and that seems to be where the startup has landed today, with patents behind what it has built and a team of engineers continuing to expand the tech. It hopes that this will be enough to keep it ahead of competitors, which include the likes of Kaltura, Brightcove and many others. And naturally, given the size of the opportunity, that competition will not be disappearing soon.

Notably, AnyClip’s growth, on the back of what has up to now been a modest amount of funding ($30 million over 12 years), speaks to its ability not just to win business against those rivals, but to be capital efficient in what is typically considered a very bandwidth- and resource-intensive medium.

“There is a revolution coming in the way enterprises use video to convey their message and their identity”, says Erel Margalit, JVP founder and Chairman, and AnyClip’s board chairman, in a statement. “For the first time, AI meets video. Companies and organizations are now working to utilize this to create a new mode of communications, internally and externally, in all areas where video dominates in a much stronger way than text. Whether it’s how to create videos for consumers or training videos for the organization, or learning how to manage conferences run by video on Zoom that need intelligent management in the retrieving of content. This is a new era, and AnyClip is a vital tool for anyone embarking upon it.”
