3 methodologies for automated video game highlight detection and capture

With the rise of live streaming, gaming has evolved from a consumer product, like a toy, into a legitimate platform and medium in its own right for entertainment and competition.

Since its acquisition by Amazon in 2014, Twitch’s audience has grown from an average of 250,000 concurrent viewers to 3 million. Competitors like Facebook Gaming and YouTube Live have seen similar growth.

The boom in viewership has fueled an ecosystem of auxiliary products, as today’s commercial streamers push technology to its limits to increase the production value of their content and automate repetitive aspects of the video production cycle.


Streaming online is a grind, with full-time creators putting in eight-hour, if not 12-hour, performances on a daily basis. In pursuit of coveted viewer attention, 24-hour marathon streams are not uncommon either.

However, these hours in front of the camera and keyboard are only half of the streaming grind. Maintaining a consistent presence on social media and YouTube drives channel growth and funnels viewers to the live stream, where they can purchase monthly subscriptions, make donations and view ads.

Distilling the most impressive five to 10 minutes of content out of eight or more hours of raw video becomes a nontrivial time commitment. At the top of the food chain, the biggest streamers may hire teams of video editors and social media managers to tackle this part of the job, but growing and part-time streamers struggle to find the time to do it themselves or the money to outsource it. There are simply not enough minutes in the day to carefully review all the footage on top of other life and work priorities.

Computer Vision Analysis of Game UI

An emerging solution is to use automated tools to identify key moments in longer broadcasts. A number of startups are competing to dominate this emerging niche, and their differing approaches to the problem are what distinguish competing solutions from one another. Many of these approaches fall along the classic computer science hardware-versus-software dichotomy.

Athenascope was one of the first companies to implement this concept at scale. Backed by $2.5 million in venture capital funding and an impressive team of Silicon Valley Big Tech alumni, Athenascope developed a computer vision system to identify highlight clips within longer recordings.

Conceptually, it is not so different from how a self-driving car works, but instead of using cameras to read nearby road signs and traffic lights, the tool captures the gamer’s screen and recognizes indicators in the game’s user interface that communicate important in-game events: kills and deaths, goals and saves, wins and losses.
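
To make this concrete, here is a minimal Python sketch of the capture half of such a system, assuming a hypothetical game that pins its kill feed to a fixed corner of a 1080p display. The region coordinates and the mss screen-capture library are illustrative choices, not a description of Athenascope’s proprietary pipeline.

    import numpy as np
    import mss  # cross-platform screen-capture library

    # Hypothetical coordinates for a kill feed pinned to the top-right
    # corner of a 1920x1080 display; every game lays out its UI differently.
    KILL_FEED_REGION = {"top": 40, "left": 1450, "width": 440, "height": 160}

    def grab_ui_region(region=KILL_FEED_REGION):
        """Capture one frame of a fixed on-screen UI region as a BGRA array."""
        with mss.mss() as sct:
            return np.array(sct.grab(region))  # shape: (height, width, 4)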

These are the same visual cues that traditionally inform the player about what is happening in the game. In modern game UIs, this information is high-contrast, unambiguous and usually located in predictable, fixed places on the screen. This predictability and legibility lends itself well to computer vision techniques such as optical character recognition (OCR) – reading text from an image.
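
Building on the capture sketch above, the following sketch illustrates the recognition half with off-the-shelf tools: OpenCV for preprocessing and Tesseract for OCR. The thresholding step and the cue phrases are assumptions made for the sake of example; a real system would need regions and keywords tuned to each supported game.

    import cv2
    import numpy as np
    import pytesseract  # Python wrapper; requires the Tesseract OCR engine

    # Assumed cue phrases; each game announces events in its own words.
    HIGHLIGHT_KEYWORDS = {"VICTORY", "DOUBLE KILL", "GOAL"}

    def detect_highlight_text(ui_crop_bgra: np.ndarray) -> set:
        """Return any highlight keywords OCR can read in a captured UI crop."""
        gray = cv2.cvtColor(ui_crop_bgra, cv2.COLOR_BGRA2GRAY)
        # Game UI text is high contrast, so Otsu thresholding usually
        # isolates it cleanly before handing the image to Tesseract.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(binary).upper()
        return {kw for kw in HIGHLIGHT_KEYWORDS if kw in text}

In a complete pipeline, any frame that matches a cue would be timestamped against the recording so the surrounding seconds of footage could be clipped out automatically.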

The stakes here are far lower than for a self-driving car, too: a false positive from this system produces nothing worse than a less-than-exciting video clip – not a car accident.
