Opinion

AI is the unsung hero of metadata generation: Peter Chave of Akamai

Peter Chave is Principal Architect at Akamai.

One of the most valuable commodities today is viewer attention. From broadcasters to advertisers, everyone with a message is competing for it. In a world already saturated with content, how do you stand apart from the rest?

Broadcasters are now beginning to realise how much value their data has. Metadata can be harnessed not only to lead viewers to content, but also to enable content owners to reach their target audiences. By offering a platform for content owners to connect to the right audience at the right time, metadata can improve both visibility and customer satisfaction.

If metadata is managed properly, the content presented and described will be more consistent and the AI-driven recommended content to viewers will more accurately reflect their individual interests. But where does this data come from?

All too often in the past, valuable metadata was cast off early in the production process. The records of who was appearing in a shot, the location, the date it was filmed and so on were lost as soon as a film was edited. However, this is exactly the data that would help to make recommendations to someone who might be interested, for example, in documentaries filmed in Dubai, films starring Tom Cruise or films from the 1980s. To make good recommendations, this sort of metadata needs to be recreated.

However, recreating metadata can be an onerous process: people have to scour through content and manually record the actors, locations, themes and music they encounter so that this can be entered into the system. AI is proving to be an excellent way of automating the discovery of this information, through tools such as facial recognition and music identification software.
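To make the idea concrete, here is a minimal sketch of how timestamped detections from such tools might be folded into a single searchable metadata record per asset. The detection tuples, field names and `merge_detections` helper are all illustrative assumptions, not any real recognition API:

```python
from dataclasses import dataclass, field

@dataclass
class AssetMetadata:
    """One searchable metadata record for a piece of content."""
    asset_id: str
    actors: set = field(default_factory=set)
    locations: set = field(default_factory=set)
    tracks: set = field(default_factory=set)

def merge_detections(asset_id, detections):
    """Fold a stream of (kind, value) detections into one record.

    In practice the detections would come from facial recognition and
    music identification passes over the footage; here they are hand-made.
    """
    meta = AssetMetadata(asset_id)
    buckets = {"face": meta.actors, "location": meta.locations,
               "music": meta.tracks}
    for kind, value in detections:
        if kind in buckets:
            buckets[kind].add(value)
    return meta

detections = [
    ("face", "Tom Cruise"),
    ("location", "Dubai"),
    ("music", "Take My Breath Away"),
    ("face", "Tom Cruise"),   # duplicate sightings collapse into the set
]
record = merge_detections("doc-001", detections)
```

The point of the sketch is that once detections are machine-generated, the manual scouring step reduces to reviewing a pre-populated record rather than building it from scratch.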

AI-driven metadata generation can also be used when automating other functions, such as censorship and copyright protection, in addition to content curation.

A good recommendation, however, is only part of the story – content providers need to be able to deliver. This can be harder than it first appears. For example, with only basic tools for recommendation at their disposal, a broadcaster might see that a viewer is coming to the end of episode two of a drama series and automatically recommend episode three. It can be fairly confident that this will be the viewer's next choice (it's unlikely, after all, that he or she will skip to episode four) and can start caching that episode in advance, ensuring a smooth transition.
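The basic prefetch heuristic described above can be sketched in a few lines. The 90% threshold and the function names are assumptions chosen for illustration; a real player would also consult the cache and the network conditions:

```python
PREFETCH_THRESHOLD = 0.9  # assumed: start caching at 90% through an episode

def should_prefetch(position_s: float, duration_s: float) -> bool:
    """True once the viewer is far enough in that the next episode
    is very likely their next choice."""
    return duration_s > 0 and position_s / duration_s >= PREFETCH_THRESHOLD

def next_episode(series, current):
    """Return the following episode id, or None at the end of the series."""
    try:
        i = series.index(current)
    except ValueError:
        return None
    return series[i + 1] if i + 1 < len(series) else None

episodes = ["s1e1", "s1e2", "s1e3"]
to_cache = None
# 2520s into a 2700s episode -> 93% watched, past the threshold
if should_prefetch(position_s=2520, duration_s=2700):
    to_cache = next_episode(episodes, "s1e2")
```

With only one likely next choice, this simple rule is enough to hide the start-up delay; the difficulty the article turns to next is what happens when that single choice becomes many.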

Now imagine that the broadcaster has an advanced AI-driven recommendation service and that, at any point in the show, the viewer can pause and call up information about a scene, clicking through to recommendations based on the actors, the music, the location, the timing and so on. The number of options for the viewer has increased enormously, so how can a broadcaster make the transition experience just as good for them?
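One simple way to picture the scene-level recommendation step is as an overlap score between the tags of the paused scene and the tags of each catalogue title. The catalogue, the tag sets and the scoring rule below are invented for illustration; a production system would use a far richer model:

```python
def recommend(scene_tags, catalogue, top_n=2):
    """Rank titles by how many scene tags they share, best first.

    scene_tags: set of metadata tags for the paused scene
    catalogue:  dict mapping title -> set of tags
    """
    scored = [(len(scene_tags & tags), title)
              for title, tags in catalogue.items()]
    scored.sort(key=lambda st: (-st[0], st[1]))  # score desc, then A-Z
    return [title for score, title in scored[:top_n] if score > 0]

# Hypothetical catalogue tagged by the metadata pipeline
catalogue = {
    "Desert Lives": {"documentary", "Dubai"},
    "Top Gun":      {"Tom Cruise", "1980s"},
    "City Nights":  {"Dubai", "drama"},
}
scene = {"Dubai", "documentary"}  # tags of the scene the viewer paused on
picks = recommend(scene, catalogue)
```

Every tag the viewer can click on becomes a possible branch point, which is why the delivery side now has to cope with many plausible next titles rather than one.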

This can, in part, be resolved with AI too. However, it’s critical in this scenario that there’s a content distribution network in place that can provide low latency and reduce buffering, so that start times can be minimised. Accessing a large, distributed platform where content can be pushed to the edge of the internet can help a broadcaster ensure that the new AI-driven services being launched to viewers deliver an all-round positive experience.