Spectator-Centric Motor Racing Circuit Commentary

A bit over a decade ago, and several times since, I’ve idly wondered about being able to compete virtually in a replay of an actual sporting event (Re:Play – The Future of Sports Gaming? “I’ll Take it From Here…”). Every so often, the idea pops up again (for example, Real racing in the virtual world), but now it seems that real-time gaming against live F1 racers [is] “only two years away”:

“We launched our virtual Grand Prix channel this year, which gives us the platform to produce a fully virtual version of the race live using the data,” said Morrison [John Morrison, Chief Technical Officer, Formula One Management]. “The thing we have to crack is we have to produce accurate positioning.
“Then we can do the gaming stuff and you can be in the car racing against other drivers. I reckon we are about two years away from that. We need accuracy to the nearest centimetre, so cars aren’t touching when they shouldn’t be touching. Right now we are more at 100-200mm accuracy.”

Whatever…

With multiple cameras offering 360° views, there are increasing opportunities for providing customised viewing perspectives using real footage. But simulated views from arbitrary viewpoints are also possible. For example, think of the virtual camera views that Hawk-Eye can generate over a snooker table, and then apply the same thing to 3D rendered models of F1 cars as they drive round a circuit (which has also been lidar scanned).

But that’s video… What about providing audio commentary for spectators at a circuit, created specifically for each listener according to where they are around the circuit?

For example, as a particular car goes by, I want my personal commentary to tell me what position they are in, as well as hearing bits of more general commentary about what’s going on elsewhere on the circuit. By knowing the position of the cars on the circuit, and the position of the listener (for example, based on wifi hotspot triangulation), we should be able to automatically generate a textual commentary covering the cars the spectator can see from their current location, and then render that commentary to audio via a text-to-speech service.
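A minimal sketch of that generation step might look like the following. All the names, coordinates, and the 150 metre “sight line” threshold are invented for illustration; a real system would get positions from the timing feed, and would hand the resulting text on to a text-to-speech service rather than printing it:

```python
import math

def visible_cars(cars, spectator, max_range=150.0):
    """Return (distance, name, race position) for cars within a
    notional sight line of the spectator, nearest first."""
    in_view = [
        (math.dist(pos, spectator), name, race_pos)
        for name, (pos, race_pos) in cars.items()
        if math.dist(pos, spectator) <= max_range
    ]
    return sorted(in_view)

def commentary(cars, spectator):
    """Generate a simple textual commentary for one spectator's location."""
    lines = [
        f"{name}, currently P{race_pos}, is passing you now."
        for _, name, race_pos in visible_cars(cars, spectator)
    ]
    return " ".join(lines) or "No cars in sight just yet..."

# Invented example: two cars near this spectator, one far away.
cars = {
    "HAM": ((105.0, 20.0), 1),   # (x, y) circuit coordinates in metres
    "VER": ((130.0, 25.0), 2),
    "ALO": ((900.0, 400.0), 5),
}
spectator = (100.0, 0.0)
print(commentary(cars, spectator))
```

The interesting engineering is all hidden in the inputs, of course: “visible from here” really depends on circuit geometry and sight lines, not a simple radius, and the positioning accuracy issue Morrison mentions applies just as much to the spectator as to the cars.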

Increasingly, I think there is a market in the automated generation of sports commentaries from sports data; it’s just that I hadn’t thought about generating commentary from a particular perspective to support the viewing of a live event from a particular location (“location specific” or “location sensitive” commentary).

The Associated Press (AP) would perhaps agree, aspiring as they are to the automation of 80 percent of their content production by 2020 (The AP wants to use machine learning to automate turning print stories into broadcast ones). They’re also looking at generating multiple versions of the same story, appropriate for different formats, from a single source.

Apparently, “[o]n average, when an AP sportswriter covers a game, she produces eight different versions of the same story”. Aside from writing the main print story, she has to write story summaries, separate ledes for both teams, convert the story to broadcast format, and more. How much easier it would be to write just one version and then generate the alternative presentations from it, which leads to this:

… a cross-sectional team of five AP staffers has been working on developing a framework to automate the process of converting print stories to broadcast format.

The team built a prototype that just identifies elements in print stories that need to be altered for broadcast. (Stories are shorter, sentences are more concise, attribution comes at the beginning of a sentence, numbers are rounded, and more.)

Hmmm… for location specific commentaries, I see another possibility: a generic commentary about events happening across a motor-racing circuit, intercut with live, custom commentary relating to what the spectator can actually see in front of them at that time, as if the commentator were sat by their side.
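A crude way to do that intercutting, sketched here with an invented segment structure (start time `t` and duration `dur` in seconds, plus a `local` flag), is to let the location-specific segments take priority and drop any generic segments that would clash with them:

```python
def intercut(segments):
    """Merge timestamped commentary segments for one listener:
    location-specific ('local') items always play; generic items
    play only if nothing is still being spoken."""
    # Sort by start time, local segments first on ties.
    ordered = sorted(segments, key=lambda s: (s["t"], not s["local"]))
    out, busy_until = [], 0.0
    for seg in ordered:
        if seg["local"] or seg["t"] >= busy_until:
            out.append(seg["text"])
            busy_until = max(busy_until, seg["t"]) + seg["dur"]
    return out

# Invented example: a generic feed with one local insert as the cars pass.
feed = [
    {"t": 0.0, "dur": 5.0, "local": False, "text": "Lights out and away we go."},
    {"t": 5.0, "dur": 3.0, "local": True,  "text": "The leader is passing you now."},
    {"t": 6.0, "dur": 4.0, "local": False, "text": "Meanwhile, a battle further back."},
]
print(intercut(feed))
```

In this toy run the second generic segment gets dropped because the local one is still playing; a gentler scheduler might delay rather than drop it, which is exactly the sort of editorial judgement a human producer currently makes.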

Related, e.g. in terms of automatically generating race commentary from data: Detecting Undercuts in F1 Races Using R.
