3 methodologies for automated video game highlight detection and capture

With the rise of livestreaming, gaming has evolved from a toy-like consumer product into a legitimate platform and medium for entertainment and competition in its own right.

Twitch’s viewer base alone has grown from 250,000 average concurrent viewers to over 3 million since its acquisition by Amazon in 2014. Competitors like Facebook Gaming and YouTube Live are following similar trajectories.

The boom in viewership has fueled an ecosystem of supporting products as today’s professional streamers push technology to its limit to increase the production value of their content and automate repetitive aspects of the video production cycle.

The online streaming game is a grind, with full-time creators putting in eight- if not 12-hour performances on a daily basis. In a bid to capture valuable viewer attention, 24-hour marathon streams are not uncommon either.

However, these hours in front of the camera and keyboard are only half of the streaming grind. Maintaining a constant presence on social media and YouTube fuels the growth of the stream channel and attracts more viewers to catch a stream live, where they may purchase monthly subscriptions, donate and watch ads.

Distilling the most impactful five to 10 minutes of content out of eight or more hours of raw video becomes a non-trivial time commitment. At the top of the food chain, the largest streamers can hire teams of video editors and social media managers to tackle this part of the job, but growing and part-time streamers struggle to find the time to do this themselves or come up with the money to outsource it. There aren’t enough minutes in the day to carefully review all the footage on top of other life and work priorities.

Computer vision analysis of game UI

An emerging solution is to use automated tools to identify key moments in a longer broadcast, and several startups are competing to dominate this niche. What differentiates the competing products is how they approach the problem, and many of these approaches follow a classic computer science hardware-versus-software dichotomy.

Athenascope was one of the first companies to execute on this concept at scale. Backed by $2.5 million of venture capital funding and an impressive team of Silicon Valley Big Tech alumni, Athenascope developed a computer vision system to identify highlight clips within longer recordings.

In principle, it’s not so different from how self-driving cars operate, but instead of using cameras to read nearby road signs and traffic lights, the tool captures the gamer’s screen and recognizes indicators in the game’s user interface that communicate important events happening in-game: kills and deaths, goals and saves, wins and losses.

These are the same visual cues that traditionally inform the game’s player what is happening in the game. In modern game UIs, this information is high-contrast, clear and unobscured, and typically located in predictable, fixed locations on the screen at all times. This predictability and clarity lend themselves extremely well to computer vision techniques such as optical character recognition (OCR) — reading text from an image.
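
To make this concrete, here is a minimal sketch of the technique, assuming a fixed kill-feed region and a short keyword list (both hypothetical, not any particular product’s values) and using the open-source OpenCV and Tesseract libraries to crop that region from each frame and read its text.

```python
# Minimal sketch of UI-based highlight detection via OCR.
# Assumes a 1920x1080 frame and a hypothetical, fixed "kill feed" region;
# real tools calibrate regions and keywords per game and per resolution.
import cv2
import pytesseract

KILL_FEED_ROI = (1500, 80, 400, 200)  # x, y, width, height (hypothetical)
KEYWORDS = {"eliminated", "victory", "defeat"}  # hypothetical event strings

def detect_event(frame) -> bool:
    """Return True if the cropped UI region contains a keyword of interest."""
    x, y, w, h = KILL_FEED_ROI
    roi = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # High-contrast UI text thresholds cleanly, which helps OCR accuracy.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary).lower()
    return any(keyword in text for keyword in KEYWORDS)

# Scan a recording and print timestamps of candidate highlights.
video = cv2.VideoCapture("stream_recording.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 60
frame_index = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % int(fps) == 0 and detect_event(frame):  # sample once per second
        print(f"Possible highlight at {frame_index / fps:.1f}s")
    frame_index += 1
video.release()
```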

The stakes here are lower than self-driving cars, too, since a false positive from this system produces nothing more than a less-exciting-than-average video clip — not a car crash.

A computer vision methodology has drawbacks, though. The AI required is computation-heavy — too heavy for an average user to run while their computer is already tied up with rendering a modern game at 1080p and 60-plus frames per second, and encoding a live video stream on top of that.

That means the AI has to run in the cloud. Raw video is uploaded to Athenascope’s server cluster — called “Athena” — and after processing, the highlights are delivered to the user’s inbox for downloading. The upkeep of these high-end video analytics servers is a cost incurred by Athenascope. Another downside is the round-trip processing time and quality loss associated with streaming raw video to external servers and back again.

Early-stage startups like Clip It, which we co-founded, attempt to eliminate this downside by streamlining the image processing AI so that it can run at the edge, directly on the user’s computer, delivering results to users faster while lowering infrastructure costs for the company.

Game memory access

The difficult trade-offs involved in computer vision motivate a completely different approach to the same problem. Rather than using rendered video pixels as input, a program can instead inspect the game’s raw memory as it’s running, skipping the video rendering entirely and directly accessing the internal representation of in-game notifications and events in their purest form.
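
As a rough illustration of what reading game memory means in practice, the sketch below calls the Windows OpenProcess and ReadProcessMemory APIs through Python’s ctypes. The process ID, memory address and the notion of a kill counter living at that address are hypothetical placeholders; real tools must locate such offsets per game and per patch.

```python
# Rough sketch of direct memory access on Windows via ctypes.
# The process ID, address and data layout are hypothetical placeholders;
# real tools must locate these offsets per game and per patch.
import ctypes
from ctypes import wintypes

PROCESS_VM_READ = 0x0010
PROCESS_QUERY_INFORMATION = 0x0400

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.ReadProcessMemory.restype = wintypes.BOOL
kernel32.ReadProcessMemory.argtypes = [
    wintypes.HANDLE, ctypes.c_void_p, ctypes.c_void_p,
    ctypes.c_size_t, ctypes.POINTER(ctypes.c_size_t),
]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def read_int32(pid: int, address: int) -> int:
    """Read a 4-byte integer (e.g. a hypothetical kill counter) from another process."""
    handle = kernel32.OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise OSError(ctypes.get_last_error(), "OpenProcess failed")
    try:
        buffer = ctypes.c_int32()
        bytes_read = ctypes.c_size_t()
        ok = kernel32.ReadProcessMemory(
            handle, ctypes.c_void_p(address),
            ctypes.byref(buffer), ctypes.sizeof(buffer), ctypes.byref(bytes_read),
        )
        if not ok:
            raise OSError(ctypes.get_last_error(), "ReadProcessMemory failed")
        return buffer.value
    finally:
        kernel32.CloseHandle(handle)

# Hypothetical usage: poll the counter and flag a highlight when it increases.
# previous = read_int32(game_pid, KILL_COUNTER_ADDRESS)
```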

Overwolf is the incumbent pioneer of this particular variation. Founded in 2010 with a $100,000 seed investment, Overwolf this year launched a $50 million fund for creators utilizing its platform, which is built on this methodology. In contrast to Athenascope’s consumer service, Overwolf monetizes by licensing its technology to other developers.

Direct memory access is faster and more reliable than computer vision. It requires no expensive image analysis, and the data collected is immediately actionable.

However, the practice of inspecting the running memory of a third-party program is a security gray area. In fact, it’s the same method used by most cheat programs like first-person-shooter aimbots, a violation of games’ terms of service.

As a result, a lot of time and effort in the game development industry is spent blocking this approach. Game memory is often obfuscated or encrypted, and the anti-cheat software of many mainstream competitive games will monitor and block any unauthorized memory access.

In June, an update to Call of Duty’s anti-cheat system blocked Overwolf when it was flagged as malicious. It took over a month for Overwolf to work with Call of Duty’s development team to create a manual exception for their software and restore functionality for Overwolf’s customers.

On top of these security issues, any update to the game’s code that changes its internal memory representation will also temporarily break compatibility for any memory-reliant programs, as the particular bits and bytes they had been relying on may have moved. This results in a brittleness that requires constant development attention, and some unavoidable customer downtime, on every update for every supported game.

In a sense, the cloud infrastructure maintenance cost of a computer vision method is traded for an ongoing development cost to stay in sync with game updates, as well as outreach and negotiation directly with game developers if necessary.

Playstream.gg is another example of the internal memory method in action, though its unique value proposition is automated in-game challenges rather than video clip capture.

GPU-integrated SDKs

A third distinct method rounds out the field of automated highlight detection software.

This is the approach developed by NVIDIA in its NVIDIA Highlights software, formerly called Shadowplay. NVIDIA’s unique positioning in the computer graphics pipeline gives it direct access to video data on the graphics processing unit (GPU) itself — in contrast to Athenascope, which requires video data streaming in from across an internet connection.

The result is lightning-fast and high-quality video capture. Unlike other systems, NVIDIA delegates control over clip generation to the actual games themselves; they offer a software development kit (SDK) for game developers to hook into an NVIDIA GPU and request a clip of the last 15-30 seconds on demand.
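
The sketch below shows only the shape of that integration pattern. The class and method names are hypothetical stand-ins rather than NVIDIA’s actual SDK API, which this article does not detail; the point is that the game itself signals the event, and the driver, which has been buffering frames all along, saves the clip.

```python
# Illustrative stand-in only: this stub mimics the shape of a GPU-vendor highlights
# SDK (open a session, register event types, request "save the last N seconds").
# The class and method names are hypothetical, not NVIDIA's actual API.
import time


class HighlightSession:
    """Placeholder for an SDK session handle provided by the GPU driver."""

    def __init__(self, game_name: str):
        self.game_name = game_name
        self.event_types: dict[str, str] = {}

    def define_highlight(self, event_id: str, significance: str) -> None:
        # In a real SDK this would register the event with the driver's capture service.
        self.event_types[event_id] = significance

    def capture_highlight(self, event_id: str, start_delta_ms: int, end_delta_ms: int) -> None:
        # In a real SDK the driver has been buffering frames on the GPU the whole
        # time, so this call simply names the window of video to keep.
        print(f"[{time.strftime('%H:%M:%S')}] save clip for {event_id}: "
              f"{start_delta_ms}ms to {end_delta_ms}ms around now")


# Hypothetical integration inside the game's own code.
session = HighlightSession(game_name="ExampleArenaShooter")
session.define_highlight("double_kill", significance="high")

def on_double_kill() -> None:
    # The game, not an external analyzer, decides the moment is highlight-worthy.
    session.capture_highlight("double_kill", start_delta_ms=-15_000, end_delta_ms=3_000)

on_double_kill()
```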

This puts the onus of development onto each individual game developer; if a developer doesn’t create bindings for NVIDIA Highlights, the feature simply isn’t available in that game.

Another obvious requirement is that users must have an NVIDIA GPU — AMD users are out of luck, since AMD’s replay recording software does not support automatic highlight recording. There’s no way to support console gamers either (Xbox, PlayStation, etc.). In contrast, computer vision approaches are universal: the platform the video originates from has no bearing on the product’s compatibility.

A solution that combines the GPU-accelerated video capture of NVIDIA Highlights with the computer vision methodology of Athenascope could be a unique combination of tradeoffs: the immediacy of Overwolf with the portability of a computer-vision approach, perhaps with the machine learning itself also running on the GPU. At the moment, no such application utilizes this particular approach.

The spectrum of gameplay analysis

A major differentiator between competitors in this space is how early in the video rendering pipeline the analysis occurs.

Athenascope lives at one extreme end of this spectrum — receiving the final video output for analysis, after capture, encoding, mixing with overlays or filters, and uploading to Athenascope servers.

Clip It moves the analysis one step closer to the user, using the same technique but in real time on the user’s local computer.

Overwolf and Playstream both move the analysis even closer to the game itself by inspecting the game’s memory as it’s running. At the other end of the spectrum is NVIDIA Highlights, which pulls video directly from the bare metal of the GPU, triggered by SDK bindings integrated into the game’s own code.

The subtle distinctions between these methodologies are the basis for competition in the niche market of automated video game highlight detection and capture. As computer vision AI becomes more sophisticated, the flexibility and portability of purely software-based approaches will grow increasingly competitive with the hardware advantages on the other side of the scale.