Generative artificial intelligence is banging on TV’s door. How afraid should we be?
A funny thing happened on the stage of America’s Got Talent in 2022. Tom Graham drew gasps from the audience when, with the aid of a talented performer and a lot of deepfake technology, he brought a vision of Simon Cowell to life, apparently singing a ballad while swinging his arms around. Cowell looked equally bewildered and impressed, before asking: “Is it inappropriate to fall in love with a contestant?”
Fifteen months later, during a thought-provoking session on generative artificial intelligence in Cambridge, Graham – the CEO and co-founder of Metaphysic, which developed the software for his America’s Got Talent stunt – explained that it wouldn’t be long before such tools were at everyone’s fingertips.
“Everything about the content creation process is going to change quite rapidly over the next two, five, 10 years,” he said. “Slowly these tools will become available to regular people, who will be able to create Hollywood-quality content in their basement. It will feature their favourite characters from the cinematic universes they love, and they can put themselves in the content.”
In this session, Herman Narula, Co-founder and CEO of London-based tech company Improbable, was still glowing from a recent success: “Last night, we broadcast our first Major League Baseball game. We put a group of about 1,500 fans all around the world simultaneously into a virtual ballpark and recorded the telemetry (data collected at remote points) and movement of all the players. We handled 20 billion messages a second; we were able to use AI to predictably determine where those messages were going to go.”
For Narula, this is a key example of where AI might allow the entertainment industry to develop: “Populating interactive experiences is extremely expensive in terms of content – most big-budget video games cost hundreds of millions of dollars – so, if you can take the cost of content down, you can make interactive experiences – which all data shows us are far more engaging – much more available to people.
“If you combine that with technology like ours, we can have stadiums full of thousands of people engaging in [interactive] environments. I think sports broadcasters and content creators need to use AI, of course, but they’ve also got to be thinking in terms of interactive experiences.”
No one on the panel, which included Grace Boswood (Technology Director, Channel 4) and Kamal Ahmed (Co-founder, The News Movement), was under any illusion: AI will fundamentally change every aspect of the television industry.
“10 years from now… 90% of everything… on screen will be some form of AI-generated pixels”
Narula referenced a friend, a classical scholar, who had written a dissertation on specific deaths in the work of the poet Virgil, and set Narula’s software the challenge of replicating the work. He recalled: “In three hours, it came to the same conclusions he had in a year. To look at this as anything other than a disruption on the order of magnitude of the steam engine…
“We do everyone a disservice if we are not honest with them; this isn’t going to be like last time, this isn’t going to be just more of the same. It’s going to be something else, it’s going to be weird and it’s going to be a stretch.”
Graham agreed: “It took us 100 years to move society to adapt to the changes through the industrial revolution; I think we’re going to see the same order of magnitude of change happen in the next 10 to 15 years. Ten years from now, probably 90% or more of everything that every single person on earth looks at on screen will be some form of AI-generated pixels, whether that’s coming to you through VR or a screen, or you’re listening to it.
“People who serve up content are going to have massive control over what we believe in and what we value, because our experience of reality is going to be largely augmented by AI-generated content that we consume every day.”
He added: “The things to look out for are – how can you use AI to create content more effectively? Because, if you don’t, someone else will, and quickly.”
Ahmed stressed the importance for leaders of all media organisations to recognise the magnitude of potential change and harness it. He said: “You have to think strategically, firstly about your purpose. What are you here to do before you even discuss what generative AI is?
“You can start thinking about what generative AI can do to help you, and there are fantastic advantages to generative AI for us as a creative business. For example, we are already testing this: when we upload a video on to Developer with Yahoo, we need to create captions from that content, and we need to create metadata from that content. [Currently], that is a [manual], relatively laborious job.
“AI can help do that laborious, commoditised job for you as a business. Also, we’re looking at how it can help with accessibility; so, we write in English – how can we create other languages and other language content for what we do?”
Boswood explained how AI could help with time-intensive processes for a broadcaster: “We need to work out where the parts of our workload are non-differentiating or constraining our opportunities. If you can solve that problem and get through that workload more quickly, that will create a differentiator for our business and stop the constraint.”
She cited how rig cameras had previously revolutionised where factual teams could film: “Suddenly, you had access in hospitals, schools, and loads of new programmes came out of it.” And she posed the question: “What is the equivalent in the production workload that AI can unlock? Is it casting, editing, massive data processing? That is the really creative opportunity here, and that must be built on.”
With such opportunities also come responsibilities and risks. When moderator Nick Kwek raised the prospect of many jobs being lost in this new era, Boswood framed it as an evolution rather than an exodus.
“I’m not pretending that all the skills that we need now will be needed within 10 to 15 years,” Boswood said. “We’ve got a whole load of people who need creative skills in order to make this technology work, so [there will be a need] for lots of new training.
“In the near term, you combine the skills of design and curation with personal and social media. There’s a massive explosion in value here that we need more people, not fewer people, to do. The new technology is going to enable us.”
The US actors’ strike has highlighted the concerns of all performing artists about having their likenesses ripped off and harvested by AI. Ahmed reflected: “There’s a very big discussion for us to have as a creative industry around how we are going to create the opportunities that you so excitingly described…
“How are you thinking about that notion of the manipulated world ripping off the work of people who are then not included in or remunerated for the work you’re doing?”
For Graham, safeguards were key. He explained: “At the heart of enabling and empowering people – including actors, performers and regular people – as this technology comes at scale to all of us, is to allow people to have control over their data being used to train these algorithms. Also, to provide them with pathways and remedies to control how their likenesses, their voices, etc, may be used.
“In the case of actors, it might be their performance that’s been used. There will be contracts going backwards and forwards, so there’s quite a lot of clarity on some level. For society [as a whole], it will be damaging if all of us give up all our data to large technology companies to own, to do as they want with us, probably create content and market it back to us. We should all own and control our data and that should be the fundamental concern.”
Ahmed saw challenges with this. “We’ve been through this before, with people just giving up their data to allow social media to exist, and we’ve seen some of the huge negatives that have come from people not understanding what you’re even asking them to give up. To put the responsibility on the individual is going to be a difficult conversation…”
Worse still, Ahmed added, AI, being based on human-created data, is just as susceptible to “problems that we all have as human beings; racism, ableism, sexism, misinformation…”
“People who serve up content are going to have massive control over what we believe in and what we value”
Another challenge is that, with open-source developers using such technology to create deepfake pornography or political disinformation – most people have already marvelled at the Tom Cruise lookalike visible on TikTok – the capacity for harm will grow exponentially. While the panellists touched on regulation, both Narula and Graham stressed the limitations of what will be possible.
Narula suggested: “What we should legislate for is outcomes: me trying to copy your face – we can stop that. But trying to worry about the method, the machine, the device, or any AI – it’s just impossible now.”
Graham said that he had recently been down this road himself: “Three months ago, I registered copyright in the AI version of myself, trying to get the remedy of takedowns (within 24 hours) for infringement of registered copyright under US law. I’m waiting for the copyright office in the US to come back to say whether I can register my copyright of the AI version of myself.
“So that’s an existing legal instrument that we can co-opt to enable individuals to have rights and remedies to address some of the outcomes of the technology. But the technology, the open-source community, it is not going anywhere. Development is not just happening in big companies, it is happening everywhere all at once.”
Thought-provoking, paradigm-shifting stuff, all in all. As Boswood summed up, “We need to work out a way to do this safely and carefully… all I would ask of this room is, you need to learn and listen and stay in touch with it, otherwise you will get taken over.”
For Ahmed, enabling the tools of AI to serve the TV industry, as for every other, must be the aspiration, and he pointed out the USP of human endeavour. “The more of your business in the creative industry that you do by AI, the more it can be copied by another business. The more of your business that is human talent supported by AI, the more defendable it is as a model, because AI can’t send one of our reporters to Syria to find out the latest about the ISIS camps.
“Things that you can do as human beings are incredibly valuable, so that’s how we approach it.” He quoted another industry executive who had reminded him: “AI can’t put up a camera in a hurricane.”
In Session Fifteen, ‘AI: Friend or foe?’, the panellists were: Kamal Ahmed, Co-founder and Editor-in-Chief, The News Movement; Grace Boswood, Technology Director, Channel 4; Tom Graham, CEO and Co-founder, Metaphysic; and Herman Narula, Co-founder and CEO, Improbable. The chair was technology journalist Nick Kwek. The producer was Diana Muir. Report by Caroline Frost.