The case for and against AI in TV

By Simon Bucks,
Friday, 9th June 2023
AI in the fight against malaria: Synthesia's tools enabled David Beckham apparently to speak nine languages, while Digital Domain's Charlatan software aged him by decades, for the advert Malaria Must Die So Millions Can Live. Credit: Digital Domain.

From fake news to copyright infringement, AI is raising alarm across TV, but there are real benefits, too, says Simon Bucks

Should the television industry be worried about artificial intelligence (AI) or excited? Or both?

Each day seems to bring fresh warnings that AI will wreck businesses, cast thousands out of work and even destroy the human race. So, is TV safe? Judge for yourself from this little experiment.

“Can you suggest a format for a new television gameshow?” I asked the AI chatbot ChatGPT.

“Here’s an idea. Title: Brain Busters,” it offered, instantly. “Format: two teams of three contestants each compete against each other in a series of rounds that test their knowledge, memory and problem-­solving abilities.”

“Rubbish!” cried Peter Bazalgette, the former ITV Chair, who knows a thing or two about formats, having brought Big Brother to the UK.

OK, not gameshows. How about drama? “Sure, here’s an idea for a 13-part series. The Legacy follows the lives of three siblings who are forced to confront their family’s dark secrets and long-buried traumas after their wealthy, influential father dies suddenly.”

There is derivative and then there is downright plagiarism. “It underlines my basic belief that artificial intelligence is more artificial than it is intelligent,” is Bazalgette’s pithy verdict.

So far, so entertaining. It’s easy to mock ChatGPT’s lamentable pitches, and, on this evidence, the striking Hollywood screenwriters are worrying unduly that bots may take their jobs. Nevertheless, AI undoubtedly brings real opportunities, as well as threats, to the TV and film industries.

ITVX comedy series Deep Fake Neighbour Wars used AI technology to turn impressionists into celebrities, such as a fake Idris Elba, embroiled in petty disputes with their A-list neighbours. Credit: ITVX.

ITVX’s recent show Deep Fake Neighbour Wars is a parody that uses AI technology to create realistic avatars of celebrities, including Idris Elba, Olivia Colman and even, weirdly, Greta Thunberg. “This is all fake. The stories are all made up,” advises the opening caption.

It is fun, but it is nowhere near the limit of ITV’s enthusiasm for AI. ITV Studios has recently set up a “global innovation hub”, embracing all its scripted and unscripted labels, with AI as a key focus. “It’s a big hot topic for us right now,” says Helen Killeen, Director of Non-Scripted Production in the UK, who leads the group with Ben Russell, Director of Production at ITV Studios International.

“One of our priorities is freeing up the time of our creatives so that they are not doing things that can be automated, but instead using their brains to develop ideas or produce the best content.”

ITV Studios has already used AI virtual locations for Emmerdale and Coronation Street, while the innovation hub is encouraging producers to look for further opportunities. The next step will be to use virtual sets for unscripted, including gameshows. “It’s a tougher challenge because of the problems of cutting between multiple cameras,” says Killeen. “We think we might have found a solution, so we won’t talk about it in case it doesn’t work.”

Russell stresses that one of the drivers for using AI is sustainability, providing solutions that reduce ITV Studios’ carbon footprint. For example, using AI tools means you don’t have to send loggers on location for weeks at a time.

AI based on pattern recognition supports activities ranging from development and production to distribution and scheduling.

“The big streamers have been at the forefront,” says Dawn Airey, who casts an experienced eye over this developing landscape. “Look at what Netflix did with House of Cards: it was very analytical of viewing behaviour – [it concluded] that a remake of House of Cards would hit and find an audience. And sure enough, bang, it worked perfectly.”

In fact, machine learning has long been working its production magic on video games, with films and TV piggybacking off technologies such as the 3D computer graphics game engine Unreal Engine. What is new and, for some, concerning, is that “generative” AI built on deep learning can now make a plausible stab at passing the Turing test by behaving like a human.

“I think we’re at a tipping point, because things such as ChatGPT have made the language of this stuff conversational,” says Simon Fell, Chair of the Digital Television Group and a former Director of Technology and Innovation at the European Broadcasting Union. “You can ask it a question, then you can enhance that question and ask it a bit more. And it doesn’t forget what you’ve asked it previously, which is what search engines have always done.”
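Fell's point about the chatbot "not forgetting" has a simple mechanism behind it: in a typical chat system, the client resends the whole conversation so far with every new question, so each answer can draw on earlier turns. A minimal sketch of that idea, using only illustrative names (no real chatbot API is assumed here):

```python
# Sketch of conversational memory: the full history is passed to the
# model on every turn, so earlier questions remain visible to it.

def ask(history, question, generate):
    """Append the question, call the model on the full history,
    then store and return its reply."""
    history.append(("user", question))
    reply = generate(history)
    history.append(("assistant", reply))
    return reply

def toy_model(history):
    """Stand-in 'model' that proves it can see earlier turns."""
    questions = [text for role, text in history if role == "user"]
    return f"You have asked {len(questions)} question(s); latest: {questions[-1]}"

history = []
ask(history, "Suggest a format for a new television gameshow", toy_model)
print(ask(history, "Now make it a drama instead", toy_model))
# The second reply shows the model has seen both questions.
```

The contrast with a classic search engine is that the search engine treats each query in isolation; here the growing `history` list is what makes the follow-up question ("enhance that question and ask it a bit more") work.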

The ability of AI to generate convincing fakes has already led the CEO of BBC News, Deborah Turness, to warn: “What is happening right now is nothing short of frightening. We have to act.” Speaking at a BBC event, Turness continued: “We are seeing the acceleration of AI having an impact on disinformation. It is amplifying everything that counterweights our good, true and valued journalism.”

Turness’s worries are echoed by a procession of tech gurus. Geoffrey Hinton, the Google software pioneer credited as the “Godfather” of AI, spoke for many in fearing that it will lead to the internet being flooded with false content and the average person will “not be able to know what is true any more”. Recently, Hinton left Google so he could speak more freely about his growing fears regarding the risks that AI poses to humanity.

A deepfake video conspiracy was the central theme of the successful BBC thriller series The Capture, but Turness’s warning is not a response to sci-fi drama. Although humanoid avatars have been around for some time, rapid advances in AI technology have made their development much easier and cheaper.

Synthesia, which made headlines by reproducing David Beckham speaking in nine languages, says its mission is “to empower everyone to make video content – without cameras, microphones or studios”. Also on parade at this year’s NAB, the US media convention, was Hour One, which, for $25 a month, lets you create apparently authentic presenter-led videos from written instructions, using “virtual humans”.

In a sinister development, two apparently genuine news-style videos were recently posted on YouTube. The videos, which supported the Venezuelan government of President Nicolás Maduro, were eventually identified as fakes and removed. The Financial Times reported that the newsreaders were avatars, and that their American accents were synthesised using AI technology.

AI-generated backdrops made by Cuebric: fast but loose with geometry. Credit: Cuebric.

Fell, recently returned from NAB, says: “Every booth seemed to have an AI story.” Among them was Cuebric, a new product for TV and film, named after director Stanley Kubrick. It boasts the ability to render 2.5D backdrops (so-called because they convey a sense of depth without permitting 3D editing), far superior to conventional green screen, in almost real time. The producers simply type in a “prompt” – a description of the scene they want – and the machine does the rest.

It is the brainchild of Cuebric’s founders, Pinar Seyhan Demirdag and Gary Lee Koepke. Demirdag claims: “It saves money and time. It removes the heavy lifting of drawing, composing three-dimensional objects everywhere, whereas with generative AI it’s text to image. For example, a director is filming a scene with pyramids and realises the pyramid is not looking good.

“With our tool you can erase the pyramid in one simple move and replace it with an obelisk, in real time. It saves on reshoots and it removes the tedious and unfun, repetitive parts of the process, allowing much more time for creativity.”

Fell says: “There’s a whole world there just waiting to be exploited. The picture quality of some of the image-rendering programmes is not always perfect, but it will improve. Ultimately, you may not have to go on location to shoot anything.”

Producers insist AI won’t cost jobs, rather that it will free people from drudgery to do more creative work. However, Bazalgette, who has been voicing concerns about the risks of digital insurgency for many years, foresees other threats.

“Machine learning is potentially going to drive a coach and horses through copyright,” he says. He cites a recent case in the music industry in which a track emerged featuring fake AI-generated vocals that copied the sound of Drake and The Weeknd. After a copyright action, it was taken down by platforms such as Spotify, but he fears the same thing will happen in TV.

“This is a clear and present danger to the creative industries and to the screen industries, where intellectual property – ownership of it, its resilience and the ability to protect it and earn money from it – is their lifeblood.”

For Airey, transparency is key: “Tell me what it is that I’m consuming. Tell me where this has come from. And if you can’t, then one should be highly sceptical. The regulator will need to be all over it.”

The UK Government has just published a white paper that promises a light-touch approach. Instead of giving responsibility for AI governance to a new single regulator, it will tell existing ones to devise tailored, context-specific approaches that suit the way AI is being used in their sectors.

For the television industry this will be in the hands of the Digital Regulation Co-operation Forum, a voluntary coalition comprising Ofcom and other key regulators. It is early days but a spokesperson says it is confident that “by working together, we can maximise the benefits of AI and algorithms for UK citizens, industry and wider society”.

Will light touch be enough? The Prime Minister, Rishi Sunak, now appears to be contemplating a tougher approach, which will be welcome news for Bazalgette. He wants watermarking on TV shows and international agreements to enforce it. Computers have their place, he says, to make content look beautiful or brilliant. “But in terms of the actual narrative, and the basic idea, it’s got to come from a person to resonate with me. And I want to know the difference. I want to know where it came from.”

So will software eat the world, as the futurologist Marc Andreessen asserted? At ITV Studios, Killeen concedes: “AI is not something we can control, all we can do is look at where it’s appropriate to use it, make sure we’re using the best partners and monitor what the output is.

“But it’s bigger than all of us. Wasn’t it Stephen Hawking who said it will be the end of humanity? It’s terrifying as well as innovative.”
