PR in the deepfake AI era is facing challenges like never before.
Remember that viral video of former President Joko Widodo giving a flawless speech in Mandarin?
It caused quite a stir before being confirmed as a deepfake, an AI-generated fake video.
While it might seem like just another internet hoax, this kind of content has real consequences.
In this case, the video spread fast and sparked confusion among the public, even after officials stepped in to clarify.
This is where things get serious for PR professionals.
PR in the deepfake AI era means more than just issuing statements. It’s about responding quickly, managing public trust, and protecting a reputation that may have taken years to build.

RadVoice Indonesia breaks down the key threats deepfakes pose to PR teams and how to stay one step ahead.
What’s a Deepfake and Why It’s a PR Headache?
A deepfake is an AI-manipulated video or audio clip that makes it look like someone said or did something they didn’t.
We’re talking face swaps, fake voiceovers, and even lip-syncs that look shockingly real.
Originally created for entertainment, this tech is now being misused to spread misinformation, and it’s happening more often than you might think.
According to Entrust Cybersecurity Institute (as reported by CNN Indonesia), deepfake-related attacks occurred every five minutes in 2024, with digital forgery cases increasing by 244% compared to the previous year.

For PR in the deepfake AI era, that’s a massive concern.
For instance, imagine a fake video that shows your company’s CEO making a controversial comment about a sensitive issue.
It looks real enough that people start believing it, and it happens fast.
Before there’s time to clarify, the video goes viral, and public trust begins to crack.
This kind of scenario is exactly what makes PR in the deepfake AI era so challenging. One misleading clip is all it takes to undo years of reputation-building.
That’s why PR teams need to act quickly, verify the facts, and communicate clearly before misinformation takes hold.
What PR Teams Should Do When Deepfakes Hit
Here are a few key steps to handle deepfake threats effectively:
Act Fast and Verify
When questionable content starts making the rounds, your first move is to check if it’s real.
Look for signs like odd blinking, strange facial movements, or audio that doesn’t match lip movements.
Bring in your IT or digital team to help analyze the content. Once confirmed as a deepfake, use that analysis as the foundation for a clear public statement.

Report the Fake Content
If the video is circulating on platforms like Instagram, TikTok, or X, report it immediately through their official channels. The quicker it’s taken down, the less damage it can do.
Keep Everyone in the Loop
In the era of deepfakes, internal and external communication is key.
Let employees, customers, and stakeholders know what’s going on. Address it by email or with an official statement on your website and social channels. Transparency helps build trust, even in crisis mode.

Educate Your Team
One of the smartest ways to prevent deepfake-related chaos is to educate your internal team.
Brief them on what deepfakes are, how to spot them, and what to do if they come across one.
Creating simple protocols can go a long way in keeping everyone alert and informed.
Use Monitoring Tools
Don’t wait for a crisis to show up on your feed. Use brand monitoring tools to keep an eye on mentions, keywords, and videos related to your company.
Early detection gives your PR team a head start in controlling the narrative. It’s an essential part of surviving PR in the deepfake AI era.
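To make the monitoring idea concrete, here is a minimal sketch in Python of keyword-based brand monitoring. Everything in it is hypothetical: the post data, the keyword list, and the `flag_suspicious_posts` helper are illustrations only. In practice, a PR team would plug a social listening API or a commercial brand-monitoring tool into this kind of filter.

```python
# Minimal sketch of keyword-based brand monitoring.
# All names and data here are hypothetical examples, not a real API.

SUSPICIOUS_KEYWORDS = {"deepfake", "leaked video", "fake", "ai-generated"}

def flag_suspicious_posts(posts, brand):
    """Return posts that mention the brand alongside a suspicious keyword."""
    brand = brand.lower()
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if brand in text and any(kw in text for kw in SUSPICIOUS_KEYWORDS):
            flagged.append(post)
    return flagged

# Usage with made-up posts:
posts = [
    {"id": 1, "text": "Great product launch by Acme today!"},
    {"id": 2, "text": "Is this Acme CEO clip a deepfake? Looks off."},
]
for post in flag_suspicious_posts(posts, "Acme"):
    print(f"Review post {post['id']}: {post['text']}")
```

A filter like this only surfaces candidates for human review; deciding whether a flagged clip really is a deepfake still requires the verification steps described above.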

Summary
PR in the deepfake AI era is full of new challenges. What was once science fiction is now a daily concern for brands, organizations, and public figures.
But with the right tools, fast thinking, and clear communication, PR teams can tackle deepfake threats head-on.
It’s all about staying alert, acting fast, and being transparent with the people who matter most: your audience.
In a world where what’s fake can look very real, trust is your most valuable asset. Don’t let a deepfake take it away.
This article was first published in Indonesian on 9 June 2025.