Deepfakes of Idris Elba and Kim Kardashian on ITV raise alarm for actors
Idris Elba is furious. His upstairs neighbour, Kim Kardashian, has been sunbathing for hours on the carefully manicured lawn of their shared garden in Catford, south-east London.
Before long, the dispute escalates: Idris and Kim come to blows, hurling pots of coriander over a paddling pool in the driveway.
If this scenario seems surreal, that’s because it is. These are not the real Idris Elba and Kim Kardashian, but rather AI-generated avatars in ITV’s new comedy series, Deep Fake Neighbour Wars.
The programme, which also features computer-generated versions of Greta Thunberg, Harry Kane, Mark Zuckerberg and Adele, uses artificial intelligence (AI) technology to map the likenesses of celebrities onto the faces of actors. In some cases, it is eerily accurate.
ITV's show is the most prominent use of so-called “deep fake” technology on mainstream TV – a kind of Spitting Image for the TikTok generation.
While trailblazing, ITV's use of the technology is fuelling concerns among actors that they will soon find themselves undercut and potentially out of pocket. Why pay for the real Robert De Niro when you can simply recreate him digitally?
Equity, the actors' union, has warned of “dystopian” consequences if the law fails to keep up with the new technology.
Liam Budd, industry official for audio and new media at Equity, says the development of AI technology is “significantly disrupting the nature of work”.
“The challenge we’re seeing is that the technology is developing really quickly and is already starting to replace jobs for our members,” he says.
As ever, the dispute revolves around who gets paid and how much.
Idris Elba and Kim Kardashian were not paid, nor even consulted, about Deep Fake Neighbour Wars. There is no suggestion that the celebrities mimicked in the series are upset with the programme, but the situation highlights what Equity sees as a shortfall in the law.
“If actors are placed into virtual reality environments, do they deserve royalties for that?” says Michael Wooldridge, a professor of computer science at the University of Oxford.
“There are challenges ahead and I would anticipate serious legal challenges to the use of this technology and yet another raft of ethical concerns that we need to grapple with.”
Performers’ rights are still governed by the Copyright, Designs and Patents Act 1988, with no comprehensive legislation covering image rights.
Equity is calling for laws to be updated to cover “performance synthetisation” using AI, as well as the introduction of image rights.
Even actors involved in AI programmes – those who either read the lines or act out the scenes while having the likenesses of others digitally grafted on – worry they will be underpaid or not paid at all.
Equity says its members are being presented with ambiguous contracts – and often non-disclosure agreements – that leave them at risk of signing away their rights.
Some performers are being offered one-off payments for image or voice rights that can be reproduced in perpetuity, often in unexpected ways.
Bev Standing, a Canadian voice actor, has filed a lawsuit against TikTok after discovering that phrases she had recorded during a translation job for the Chinese Institute of Acoustics were being used in viral videos on the social media platform.
Imitation has been at the heart of cinema since its inception but with primitive puppets and amateur impersonations, it was always clear what was real and what was not. Increasingly complex technology blurs that dividing line.
While Deep Fake Neighbour Wars is the most prominent example of AI-generated video in mainstream entertainment, it is not the first. In 2020, Channel 4 created an alternative Christmas message featuring an artificial Queen, while deep fake videos of Tom Cruise became a viral sensation on TikTok the following year.
YouTube creators, including the makers of South Park, have been playing with the technology for years and deep fakes are starting to crop up in music videos with growing frequency.
In cinema, AI technology has already been used to resurrect actors from the dead: Peter Cushing featured in Star Wars spin-off Rogue One more than two decades after his death and Anthony Bourdain narrated parts of a documentary about his life and death by suicide, Roadrunner.
A deep fake of Albert Einstein has been used in a UK advertising campaign to promote smart energy meters, with the Government paying an undisclosed sum to the Hebrew University of Jerusalem, which holds the famous scientist's rights and continues to profit from the use of his name and image.
Alongside the legal implications of this brave new world of creativity are ethical concerns.
Deep fake technology has frequently garnered negative attention due to its use in pornographic videos, as well as other harmful material such as fake news and fraud.
British AI firm ElevenLabs has been forced to crack down on how its voice-cloning technology is used after users of online forum 4chan manipulated it to create offensive audio clips of celebrities, including one of Emma Watson reading Mein Kampf.
“Whilst deep fake tech presents really exciting opportunities for production teams, what we’re seeing is it’s creating opportunities for abuse and exploitation as well,” says Budd.
Equity is fighting back against government plans for a new exception to copyright infringement for data mining – a move it warns would allow any publicly available video or sound recording to be reused without consent.
The maker of Deep Fake Neighbour Wars insisted it was clear that the show was artificial, with a warning at the start of each episode, a ‘Deep Fake’ watermark throughout and the unmasking of the real actors behind the AI at the end of each programme.
An ITV spokesman said: “Our show is not a satire, all the celebrities featured are heroes to our writers, reimagined in daft and unlikely scenarios. All the celebrities’ characters have been completely made up to live in our surreal world. It is clear that the real celebrities have had nothing to do with the show.”
The spokesman added that the series complied with Ofcom regulations and all relevant laws.
Budd insists that the actors’ union is not anti-AI, adding that the technology could have benefits if applied responsibly. Performers could appear in multiple productions at once and boost their income, for example.
Like it or not, the technology appears here to stay. Professor Wooldridge says future tools will allow people to generate videos and virtual realities on demand, much as the wildly popular chatbot ChatGPT can already do for text.
As a result, artists are scrambling to get ahead of the game and hold on to their work – as well as their identities.
“We can’t stop the floodgates from opening,” says Budd. “But our members should always have the ability to be paid fairly and have a system of consent.”
A spokesman for the Intellectual Property Office said: “The IPO continues to monitor developments in technology to consider if any change may be needed in the future, where there is evidence to support this.”