Last week, I had the pleasure of being on a panel called “Bots and Deepfakes as Influencers In the Social Media Landscape” at the annual Data Summit, presented by Mississippi State University’s National Strategic Planning and Analysis Research Center.

My co-panelists were the CEO of a company that uses computer vision to identify objects and brands in videos, and a guy who created an Instagram bot influencer with more than 47,000 followers.

As with so many discussions around artificial intelligence, the talk turned a bit negative, as we debated the spread of false news, malicious bots, and lack of disclosure.

But it wasn’t all doom and gloom.

Nor should it be.

Life Is Like a Box of Chocolates

When you think about it, the idea of a “deepfake” video isn’t new.

For years, Hollywood has used special effects to put people in scenes they never actually appeared in.

Remember all the famous people who ‘met’ Forrest Gump?

Now, AI has reached the point where you don’t need a studio behind you to create a quality fake.

But, for the time being, you still need some coding skills.

Want a fun, if slightly off-kilter viewing experience?

Check out Nicolas Cage’s ‘performance’ in some popular movies he did NOT appear in, in this deepfake video compilation.

Or watch Bill Hader morph into Tom Cruise before your very eyes.

Part hilarious, part scary, these are two examples of deepfake video.

Granted both are used for entertainment.

And they are entertaining.

That’s one of the positive uses of deepfakes.

What Happens If You Don’t Like What’s Inside the Box?

On the negative side, deepfake videos can blur the line between fantasy and reality.

Take this example featuring President Obama.

Clearly, this is something he never said.

And when you watch the whole video, you see the bigger picture of who’s behind it.

But what if a person only shared a clip of the beginning, when the President said some outrageous things?

Conspiracy theorists might easily try to pass it off as fact.

Maybe the video would go viral.

The opposing side would issue a rebuttal and offer proof the video was a fake.

Of course, the deniers would continue denying and believing their truth.

And the fake news machine would grind on…

Sounds all too familiar, doesn’t it?

And good luck trying to convince a believer a fake isn’t real.

Their confirmation bias makes it really difficult for them to change their mind.

What Are Deepfake Videos?

A deepfake is a believable video in which a person appears to do or say something they never actually did or said.

They’re created by an AI algorithm called a generative adversarial network, or GAN.

GANs are a bit like the ‘Spy versus Spy’ of the algorithm world.

The algorithm contains two parts: a generator and a discriminator.

Both are trained on the same set of data.

In simple terms, the generator creates an image and tries to convince the discriminator the fake is real.

But the discriminator is discriminating, and the two go back and forth, again and again, until the generator produces something realistic enough to fool even a jaded discriminator.

And a deepfake video is born.

(OK, it’s more complicated than that, but you get the general idea.)
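
If you’re curious what that back-and-forth looks like in practice, here’s a minimal sketch in Python using the PyTorch library. It’s a toy, not the pipeline behind any real deepfake tool: the network sizes, dimensions, and data are made-up stand-ins, and production systems are vastly larger and face-specific.

    # A minimal GAN sketch in PyTorch -- illustrative only, not the
    # pipeline behind any real deepfake tool. The network sizes and
    # data here are made-up stand-ins.
    import torch
    import torch.nn as nn

    LATENT_DIM = 64    # size of the random 'noise' the generator starts from
    IMG_DIM = 28 * 28  # flattened image size (assumed, for the toy example)

    # The generator: turns random noise into a fake image.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 128), nn.ReLU(),
        nn.Linear(128, IMG_DIM), nn.Tanh(),
    )

    # The discriminator: guesses whether an image is real (1) or fake (0).
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def training_step(real_images: torch.Tensor) -> None:
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1. Teach the discriminator to tell real from fake.
        fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
        d_loss = (loss_fn(discriminator(real_images), real_labels)
                  + loss_fn(discriminator(fakes), fake_labels))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2. Teach the generator to fool the discriminator.
        noise = torch.randn(batch, LATENT_DIM)
        g_loss = loss_fn(discriminator(generator(noise)), real_labels)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # One round of the back-and-forth, using random tensors as stand-in
    # 'real' images. In practice, this loop runs thousands of times.
    training_step(torch.randn(32, IMG_DIM))

Those two training steps are the whole trick: step one makes the discriminator a better detective, and step two uses the detective’s feedback to make the generator a better forger.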

What’s In a Face?

And now we have deepfaces—photos of people that don’t exist, but who have social media accounts, and maybe even a blog.

Deepfaces are created by the same type of algorithm as deepfake videos, and they can seem quite real.

They’re often used as the ‘face’ of a bot account, to make it look and seem…human.

If you accept requests from people you don’t know on LinkedIn, some of those may be deepfaces.

And it is tough to tell the real from the fake.

But if you look closely, you might notice something is off.

Maybe it’s their eyes, some pixels in the background, or possibly their expression.

Challenges and Issues for PR Pros

It’s not hard to imagine how deepfake videos and deepfaces could be used to spread disinformation that could affect an election campaign.

But what about revenge porn?

Or a particularly bitter custody case?

Could someone produce fake videos in court showing their ex-partner to be nothing but an irresponsible, abusive cad? How might the court react?

How about the less-than-ethical PR or government relations folks who might decide it’s OK to make and distribute deepfake videos supporting their cause, in the same way they’re OK with whisper campaigns?

Or they could use a deepfake as a way to discredit an actual video that might show a politician or CEO in a less-than-favorable light.

Regardless of their reasons, that is NOT OK.

It also contravenes the PRSA Code of Ethics.

Three Ways to Get Started

As a communications professional, what can you do to determine if a video is a fake?

And if it is, how can you tell your side of the story and try to restore a damaged reputation in a way people might believe?

Here are three things we can do right now:

  1. Make sure your social listening channels are properly set up and that you’re monitoring not only keywords, but images as well.
  2. Sharpen your media literacy. By that, I mean pay close attention to the details in videos and photos, and to the way language is used in an update or post. If something seems off, search further to gather data that could help you determine whether the story or video is real or fake. Learn about the tools and applications that can help you detect deepfakes. The paradox here is that the same algorithms that can detect a deepfake can also generate one (the sketch after this list shows why the two are so closely related).
  3. If you’re not one already, become an expert in applying the PESO model. Start early by building relationships with your community and other partners through the content you create on your owned and shared channels. This helps you develop trust. Then call on your community to help spread your side of the story during a deepfake crisis. And amplify with paid social.
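
On the detection point in number two: a deepfake detector is, at bottom, a binary classifier with much the same shape as the GAN discriminator sketched above, which is why the generation and detection arms race is so tight. Here’s a hypothetical toy in the same vein (the model and dimensions are made up; real tools are trained on large labeled sets of genuine and synthetic media):

    # Illustrative only: a toy deepfake 'detector' that mirrors the
    # GAN discriminator above. Untrained, it just guesses; a real
    # detector would be trained on labeled real/fake examples first.
    import torch
    import torch.nn as nn

    IMG_DIM = 28 * 28  # flattened image size, as in the GAN sketch

    detector = nn.Sequential(
        nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),  # outputs P(image is fake)
    )

    def fake_probability(image: torch.Tensor) -> float:
        """Return the model's estimate that an image is a deepfake."""
        with torch.no_grad():
            return detector(image.flatten().unsqueeze(0)).item()

    print(f"Estimated fake probability: {fake_probability(torch.randn(28, 28)):.2f}")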

Above all, do what you can to learn about AI, how it works, and some of the issues and opportunities.

The change isn’t coming, it’s here.

How are you and your organization staying on top of AI developments that will affect communications, reputation, and trust?

Share your thoughts in the comments below.

Photo Credit: Alex Della Corte

Martin Waxman

Martin Waxman, MCM, APR, is a senior advisor to Spin Sucks and runs a consultancy, Martin Waxman Communications. He leads digital and social media training workshops, conducts AI research, and is a LinkedIn Learning author and one of the hosts of the Inside PR podcast. Martin teaches social media at McMaster University, the Schulich School of Business, Seneca College, and the UToronto SCS and regularly speaks at conferences and events across North America.
