
Inside the strange new world of being a deepfake actor


While deepfakes have been around for a few years, deepfake casting and acting are relatively new. Early deepfake technology wasn't very good and was mainly used in dark corners of the internet to insert celebrities into porn videos without their consent. But as deepfakes have grown more realistic, more and more artists and filmmakers have begun using them in broadcast-quality productions and TV commercials. That means hiring real actors for one aspect of the performance or another. Some jobs require an actor to provide the "base" performance; others need only a voice.

This opens up exciting creative and professional opportunities for actors. But it also raises a number of ethical questions. "This is so new that there's no real process or anything like that," says Burgund. "I mean, we just made something up and pushed ahead."

"Do you want to be Nixon?"

The first thing Panetta and Burgund did was ask the two companies producing the deepfake what kind of actor they would need to make it work. "It was interesting not only which criteria mattered, but also which didn't," says Burgund.

For the visuals, Canny AI specializes in video dialogue replacement, which uses an actor's mouth movements to manipulate someone else's mouth in existing footage. In other words, the actor serves as a puppeteer who is never seen in the final product. The person's appearance, gender, age, and ethnicity don't matter.
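Neither company's internal tooling is public, but the basic idea behind video dialogue replacement can be sketched: the actor's performance is reduced to a mouth-movement signal that then drives the target footage. The snippet below is a minimal illustration of that first step only, using OpenCV and MediaPipe Face Mesh to pull a per-frame mouth-openness value from a driving video. The landmark indices follow MediaPipe's published mesh topology, and none of this is Canny AI's actual pipeline.

# Illustrative only: extract a per-frame mouth-openness signal from a driving
# performance with MediaPipe Face Mesh. This is NOT Canny AI's pipeline, just
# a sketch of the kind of mouth-movement signal such a system might consume.
import cv2
import mediapipe as mp

UPPER_LIP, LOWER_LIP = 13, 14        # inner-lip landmarks (MediaPipe Face Mesh topology)
LEFT_CORNER, RIGHT_CORNER = 61, 291  # mouth-corner landmarks

def mouth_openness(video_path: str) -> list[float]:
    """Return mouth openness per frame, normalized by mouth width."""
    signal = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                signal.append(0.0)  # no face found in this frame
                continue
            lm = result.multi_face_landmarks[0].landmark
            gap = abs(lm[UPPER_LIP].y - lm[LOWER_LIP].y)
            width = abs(lm[LEFT_CORNER].x - lm[RIGHT_CORNER].x)
            signal.append(gap / width if width else 0.0)
    cap.release()
    return signal

A real system would track far more than openness (full lip shape, head pose, timing), but even this one-number-per-frame trace makes the "actor as puppeteer" idea concrete.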

For the audio, Respeecher, which converts one voice into another, said it would be easiest to work with an actor who had a register and accent similar to Nixon's. Armed with this knowledge, Panetta and Burgund began posting on various acting forums and emailing local acting groups. Their pitch: "Do you want to be Nixon?"

Actor Lewis D. Wheeler spent days in the studio training the deepfake algorithms, mapping his voice and face to Nixon's.

PANETTA AND BURGUND

That is how Lewis D. Wheeler, a white male actor from Boston, ended up in a studio for days, listening to and repeating snippets of Nixon's audio. There were hundreds of snippets, each only seconds long, "some of which weren't even complete words," he says.

The snippets had been taken from various Nixon speeches, most of them from his resignation. Given the gravity of the moon disaster speech, Respeecher needed training material that captured the same somber tone.

Wheeler's job was to re-record each snippet in his own voice, matching the rhythm and intonation exactly. These little bits were then fed into Respeecher's algorithm to map his voice onto Nixon's. "It was pretty exhausting and pretty arduous," he says, "but also really interesting, building it up brick by brick."
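Respeecher has not published its training tooling, but the data-preparation step Wheeler describes, lining up each archival Nixon snippet with his re-recording of the same line, can be sketched with standard-library Python. In the sketch below, the directory names, the filename-matching convention, and the 0.25-second drift threshold are all assumptions for illustration, not details from the project.

# Minimal sketch, not Respeecher's actual tooling: pair each archival Nixon
# snippet with the actor's re-recording of the same line (matched by filename)
# and flag pairs whose lengths diverge too much to be useful as parallel
# training data for a voice-conversion model.
import csv
import wave
from pathlib import Path

def duration_seconds(path: Path) -> float:
    with wave.open(str(path), "rb") as wav:
        return wav.getnframes() / wav.getframerate()

def build_manifest(nixon_dir: str, actor_dir: str, out_csv: str,
                   max_drift: float = 0.25) -> None:
    with open(out_csv, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["target_clip", "source_clip", "target_sec", "source_sec", "ok"])
        for target in sorted(Path(nixon_dir).glob("*.wav")):
            source = Path(actor_dir) / target.name  # same filename = same line
            if not source.exists():
                continue
            t_sec, s_sec = duration_seconds(target), duration_seconds(source)
            writer.writerow([target, source, round(t_sec, 2), round(s_sec, 2),
                             abs(t_sec - s_sec) <= max_drift])

# Hypothetical paths for illustration only.
build_manifest("clips/nixon", "clips/wheeler", "training_manifest.csv")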

The final deepfake of Nixon delivering the "In Event of Moon Disaster" speech.

PANETTA AND BURGUND

The visual part of the deepfake was much easier. In the archival footage that was to be manipulated, Nixon had delivered the real moon landing address straight to camera. Wheeler just had to deliver his alternative version the same way, from start to finish, so the production crew could capture his mouth movements at the correct angle.

This is where, as an actor, he started to find familiar ground. Ultimately, his performance would be the part of him that made it into the final deepfake. "That was the greatest challenge and the greatest reward," he says. "I really had to put myself in the mindset of, okay, what is this speech about? How do you tell the American people that this tragedy has happened?"

"How do we feel?"

On the surface, Zach Math, a film producer and director, was working on a similar project. He had been hired by Mischief USA, a creative agency, to make two ads for a voting campaign. The ads would feature deepfaked versions of North Korean leader Kim Jong-un and Russian president Vladimir Putin. But he ended up in the middle of something completely different from Panetta and Burgund's experiment.

In consultation with a deepfake artist, John Lee, the team opted for face swapping with the open-source software DeepFaceLab. This meant the final ads would include the actors' bodies, so they had to cast believable body doubles.
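DeepFaceLab's real workflow runs through its own extraction, training, and merging scripts, which aren't reproduced here. As a rough illustration of just the first stage, turning source footage into a folder of face crops for a swap model to train on, here is a generic OpenCV pass using its bundled Haar face detector; the file names are hypothetical and this is not DeepFaceLab's own extractor.

# Illustrative stand-in for the "extract faces" stage of a face-swap pipeline.
# Generic OpenCV, not DeepFaceLab code, but it shows the idea: every frame of
# the source footage is reduced to face crops that the swap model trains on.
import cv2
from pathlib import Path

def extract_faces(video_path: str, out_dir: str, size: int = 256) -> int:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.2, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
            cv2.imwrite(f"{out_dir}/face_{saved:06d}.jpg", crop)
            saved += 1
    cap.release()
    return saved

# e.g. extract_faces("kim_source_footage.mp4", "dataset/kim_faces")  # hypothetical files

The quantity and quality of these crops is exactly what the next section is about: the more clean, unobstructed face frames the model sees, the more expressive the swap can be.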

The ads would also feature the real actors' voices, which added another casting consideration. The team wanted the deepfaked leaders to speak English, but with authentic North Korean and Russian accents. So the casting director went looking for male actors who resembled each leader in physique and facial structure, matched their ethnicity, and could deliver a convincing vocal impression.

The process of training DeepFaceLab to create Kim Jong-un's face.

MISCHIEF USA

For Putin, the casting process was relatively easy. There is an abundance of available footage of Putin giving speeches, which provided ample training data for the algorithm to fake his face and generate a range of expressions. As a result, there was more flexibility in what the actor could look like, since the deepfake could do most of the work.

But for Kim, most of the available videos showed him wearing glasses, which obscured his face and confused the algorithm. Limiting the training material to videos without glasses left the algorithm far fewer patterns to learn from. The resulting deepfake still looked like Kim, but his facial movements appeared less natural. When his face was swapped onto an actor, the actor's expressions were muted.

To counter this, the team began running all of the actors' casting tapes through DeepFaceLab to see which one produced the most convincing result. To their surprise, the winner looked the least like Kim but gave the most expressive performance.

