Zoom Call With ‘Execs’ Turns Out to Be North Koreans Using AI Deepfakes: The New Era of Cyber Threats

Introduction: When “Executives” Aren’t Who They Seem

Imagine entering a high-stakes Zoom call expecting to finalize an important deal with company executives—only to discover that the people on the screen aren’t who they claim to be. Recent revelations show that sophisticated cybercriminals, including operatives from North Korea, are using advanced artificial intelligence (AI) technologies—specifically, deepfakes—to impersonate business leaders and deceive unsuspecting victims. This alarming trend marks a significant escalation in the threat landscape, where technology once confined to science fiction is now weaponized for financial crime, corporate espionage, and geopolitical manipulation. Below, we dive deep into how AI-powered deepfakes are reshaping the world of cybercrime—and what you can do to protect yourself and your organization.

Understanding AI Deepfakes: Tools Turned to the Dark Side

AI has revolutionized industries, elevating innovation and efficiency. But like a double-edged sword, it brings substantial risks. As expert Marti Hoffman explains, “AI is like a knife. You can use it to make a delicious Caesar salad, or you can use it to kill someone. The knife is neither good nor evil.” The real danger lies not in the technology itself, but in how it’s used—and by whom.

  • What Are Deepfakes? Deepfakes use AI to synthesize realistic videos or audio recordings of people, making it appear as though someone said or did something they never did. In the wrong hands, this can be used for fraud, misinformation, and identity theft.
  • How Easy Is It? A single high-resolution image is often enough to clone a face convincingly, and for voice replication today’s tools need just 15–30 seconds of audio. That means a brief podcast appearance, a video interview, or even a WhatsApp voice note can give attackers enough material to copy your identity.
  • Why Is This a Problem? Deepfakes can be exploited to scam companies out of millions, damage reputations, and disrupt financial markets and political stability.

We have already witnessed high-profile scams, such as fraudulent calls from fake CEOs instructing finance departments to wire large sums of money. In one real case, scammers used a cloned executive’s voice to authorize a $35 million transfer.

Anatomy of the Scam: How North Korean Actors Weaponize Deepfake Technology

Not all cybercriminals are lone, hoodie-clad hackers. Today’s threat actors are organized, professional, and global. According to recent reports, North Korean groups have leveraged AI-generated deepfakes to pose as legitimate business executives in virtual meetings—manipulating, stealing, and deceiving at a new scale.

  • The Multitrillion-Dollar Cybercrime Industry
    Cybercrime is now a multitrillion-dollar industry, projected to cost the world more than $10 trillion a year. Measured like a GDP, that would make cybercrime the third-largest economy in the world, behind only the US and China.
  • Professionalization of Crime
    Cybercrime rings now offer customer support, recruitment pipelines, and even affiliate programs, with some operators taking a 20% commission from affiliates who use their tools. This level of organization mirrors legitimate tech businesses, making attacks ever more efficient and scalable.
  • Focus on Human Error
    The overwhelming majority of cyberattacks still rely on some form of human error—such as an employee clicking a malicious link, plugging in an unknown USB drive, or responding to a social engineering call. Deepfakes dramatically increase the chances of fooling even vigilant staff, as visual and audio cues can be nearly impossible to discern as fake in real time.

The Science and Real-World Impact: When Technology Outpaces Human Judgment

Cyberattackers aren’t just breaking new ground with technology; they’re also exploiting the psychological principle of authority. People tend to follow instructions delivered by someone they perceive as an authority figure, especially when that perception is reinforced visually and audibly by a deepfake.

As PCMag reported, deepfake attacks are no longer hypothetical. In one incident, North Korean actors posed as company executives in Zoom calls, leveraging AI-generated deepfakes to convince targets they were engaging with legitimate business leaders. The report underscores how the advancement and accessibility of deepfake technology enable increasingly convincing social engineering attacks, eroding trust in digital interactions and posing grave security risks even to well-prepared organizations.

  • Deepfake-driven attacks cause confusion and doubt even when encountered post-incident. Employees may begin to question legitimate communications, undermining trust in internal processes.
  • Political and financial destabilization is a new frontier, with deepfakes used in attempts to influence stock prices, electoral politics (e.g., fake videos of leaders like Zelensky urging surrender), or destroy reputations overnight.
  • Victims span C-suite executives, board members, administrative assistants, and public-facing figures—anyone whose media presence can be repurposed for deception.

The risk is so acute that in courtrooms, suspects could claim incriminating videos are AI-generated fakes, challenging legal evidence and criminal investigation norms.

Defending Against Deepfake Attacks: Building a Human Firewall

While AI deepfakes are daunting, there’s good news: human factors remain a critical line of defense. A robust cybersecurity posture combines technology, training, and real-world vigilance. Here are actionable steps individuals and organizations can take:

  • Establish secret passphrases or code words—Within families and organizations, agree on code words or security questions that can verify identity in emergencies or high-stakes requests.
  • Always verify before acting—If you receive an urgent financial or personal request (even from a trusted face or voice), confirm via a secondary channel—use a known phone number, not the one provided in the message.
  • Educate and train—Deliver cybersecurity awareness training to all employees, especially targeting those who feel they “aren’t technology people.” Make learning engaging and relatable.
  • Implement security layers—Move beyond reliance on passwords and verification codes. Incorporate multifactor authentication, user behavior analytics, and anomaly detection.
  • Monitor and limit data sharing—Public figures and employees should be aware of what content (audio, video) is available online, as every published second of video/audio can help attackers build more convincing fakes.
  • Promote a question-asking culture—Empower team members to double-check, ask questions, and escalate suspicions without fear of reprimand.

The old advice of “be careful what you post online” is more critical than ever. Now, a single shared video or an Instagram reel can be repurposed into a tool of deception.
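
The “always verify before acting” rule above can be backed by software as well as policy. Below is a minimal, illustrative Python sketch (all names, channels, and thresholds here are assumptions for demonstration, not a real product or API) of a gate that holds large payment requests arriving over spoofable channels until they are confirmed through a known-good secondary channel:

```python
# Illustrative sketch of an out-of-band verification gate for payment requests.
# The class, channel names, and threshold are hypothetical examples.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str   # who appears to be asking (e.g. "CEO" on a video call)
    amount: float    # requested transfer amount in dollars
    channel: str     # channel the request arrived on ("video", "email", ...)

# Channels that today's deepfake and phishing tools can convincingly fake.
SPOOFABLE_CHANNELS = {"video", "voice", "email", "chat"}
REVIEW_THRESHOLD = 10_000  # illustrative cutoff; a real policy sets its own

def requires_callback(req: PaymentRequest) -> bool:
    """Return True if the request must be confirmed via a known-good
    secondary channel (e.g. a phone number on file) before acting."""
    return req.channel in SPOOFABLE_CHANNELS and req.amount >= REVIEW_THRESHOLD

# A large transfer requested on a video call is held for verification.
zoom_request = PaymentRequest("CEO", 35_000_000, "video")
print(requires_callback(zoom_request))  # True -> verify before wiring funds
```

The point of the sketch is the design choice: verification is triggered by the request’s channel and size, never by how convincing the requester looks or sounds, which is exactly the signal deepfakes defeat.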

Conclusion: Security in the Age of Synthetic Realities

The use of AI deepfakes in cybercrime represents a paradigm shift in digital risk. As demonstrated by North Korean operatives fooling businesses with synthetic identities, the boundary between reality and fabrication is increasingly blurred. Yet, with behaviorally aware staff, layered cybersecurity controls, and an understanding of deepfake risks, we can build more resilient organizations.

AI is the greatest opportunity—and, if misused, perhaps the greatest risk—of our era. The real danger is not the technology, but failing to prepare for its consequences. By fostering open dialogue, prioritizing practical education, and encouraging both vigilance and verification, we can reclaim digital trust and embrace the benefits of AI while resisting its abuses.

For more on this topic, see the original report: Zoom Call With ‘Execs’ Turns Out to Be North Koreans Using AI Deepfakes.

About Us

At AI Automation Brisbane, we empower local businesses to navigate the evolving digital landscape with confidence. As AI technology advances, so do its challenges—like deepfake threats highlighted in this article. Our tailored automation solutions help companies streamline workflows while prioritizing security and awareness, supporting a safer, more efficient future for your team.
