{"id":3137,"date":"2025-05-13T06:00:00","date_gmt":"2025-05-13T06:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=3137"},"modified":"2025-05-13T06:00:00","modified_gmt":"2025-05-13T06:00:00","slug":"deepfake-attacks-are-inevitable-cisos-cant-prepare-soon-enough","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=3137","title":{"rendered":"Deepfake attacks are inevitable. CISOs can\u2019t prepare soon enough."},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>An employee in the finance department at a retail company recently got a call from his CFO directing him to wire $700,000 to a business the company was in the process of acquiring. The executive noted that the transaction was extremely time sensitive.<\/p>\n<p>It seemed a bit out of the ordinary. But the employee, not wanting to ruffle feathers and question the CFO, promptly carried out the order from his boss and made the money transfer.<\/p>\n<p>The problem is, the voice on the phone was not the CFO\u2019s. It was an extremely realistic deepfake voice impersonation generated using artificial intelligence, and because of the attack the retailer lost the $700,000 to a cybercriminal. 
The fake CFO provided instructions that would enable him to intercept the funds, rather than having the money go to the target company.<\/p>\n<p>\u201cThe combination of the authenticity of the voice, the sense of urgency, and the CFO being in a position of authority resulted in the employee not asking critical questions or verifying the request,\u201d says <a href=\"https:\/\/www.bipc.com\/michael-g.-mclaughlin\">Michael McLaughlin<\/a>, cybersecurity and data privacy practice group co-leader at law firm Buchanan, Ingersoll &amp; Rooney, which represents the retail company.<\/p>\n<p>The request for the financial transaction deviated from standard operating procedure, McLaughlin says, but the employee was so convinced it was the CFO on the call that he went ahead with the wire transfer.<\/p>\n<p>The tipoff came when the acquisition target called the retailer a few days later asking when it should expect payment to arrive.<\/p>\n<p>To address the incident and prevent similar attacks from succeeding, the organization implemented several measures, including enhanced verification protocols for financial transactions requiring multiple approvals, McLaughlin says.<\/p>\n<p>These include verifying all requests by independently calling a known number for the requester, <a href=\"https:\/\/www.csoonline.com\/article\/3604803\/security-awareness-training-topics-best-practices-costs-free-options.html\">regular training sessions<\/a> for employees on identifying deepfake content, and collaboration with cybersecurity firms to develop detection tools and response strategies.<\/p>\n<h2 class=\"wp-block-heading\">Fake-out threats on the rise<\/h2>\n<p>The incident is one of a growing number of deepfake attacks against organizations, and CISOs and other cybersecurity leaders need to work with business executives to bolster defenses against them.<\/p>\n<p>Deepfakes don\u2019t just involve celebrities and other public figures anymore. 
Virtually anyone at any time can have their likeness used for the commission of cybercrimes. According to a 2024 survey conducted by Deloitte, around 15% of executives said <a href=\"https:\/\/www.csoonline.com\/article\/3529639\/deepfakes-break-through-as-business-threat.html\">cybercriminals targeted their companies\u00a0using deepfakes<\/a>\u00a0at least once over the previous year.<\/p>\n<p>The problem has drawn the attention of the US Congress. In April 2025, a bipartisan group of senators reintroduced legislation to address unauthorized uses of voices and likenesses in AI-generated deepfakes. The No Fakes Act would give individuals the right to authorize use of their likeness and voice in a digital representation, in an effort to reduce the use of\u00a0deepfakes.<\/p>\n<p>The legislation would hold individuals or companies liable if they produce an unauthorized digital replica of an individual in a performance; hold platforms liable for hosting an unauthorized digital replica if the platform has actual knowledge that the replica was not authorized by the individual depicted; and largely preempt state laws addressing digital replicas to create a workable national standard.<\/p>\n<p>In one of the biggest known deepfake attacks, <a href=\"https:\/\/www.cnn.com\/2024\/05\/16\/tech\/arup-deepfake-scam-loss-hong-kong-intl-hnk\/index.html\">engineering group Arup lost $25 million<\/a> in a videoconference scam that employed fake voices and images.<\/p>\n<h2 class=\"wp-block-heading\">Real-world fabrications<\/h2>\n<p>Even security vendors have been victimized. 
Last year, the governance, risk, and compliance (GRC) lead at cybersecurity company Exabeam was hiring for an analyst position, and human resources (HR) qualified a candidate who looked very good on paper, with a few minor concerns, says <a href=\"https:\/\/www.exabeam.com\/blog\/author\/kevin_kirkwood\/\">Kevin Kirkwood<\/a>, CISO.<\/p>\n<p>\u201cThere were gaps in how the education was represented in the resume, but beyond that it was immaculate,\u201d Kirkwood says.\u00a0During the online interview, the candidate \u201cwas a bit scripted in her responses and appeared to be trying to answer questions that the HR screener wasn\u2019t really asking, but they were still good answers.\u201d<\/p>\n<p>The interviewee was passed forward to the GRC team, which conducted its own video interview. \u201cAlmost immediately, they began to notice some oddities,\u201d Kirkwood says. As the interview progressed, the team noticed additional things that raised concerns.<\/p>\n<p>\u201cThe person seemed to be too stationary, not blinking, not moving her body, and the facial expression remained the same,\u201d Kirkwood says.\u00a0\u201cThe mouth did move.\u00a0The answers that the person was giving were still not directly aligned to the questions that were being asked.\u201d<\/p>\n<p>The GRC manager approached Kirkwood about the interview and what she had experienced with the interviewee.\u00a0\u201cIt rang a bell with me and I pulled up a website that explained deepfake videos, and she immediately said, \u2018That\u2019s exactly what that was!\u2019\u201d<\/p>\n<p>Kirkwood\u2019s team shared the finding with the HR team \u201cto create awareness that this was something that we were beginning to see and that it was going to be something that we expected to occur more often,\u201d he says.\u00a0\u201cAwareness, at the time, was enough.\u00a0HR was trained on how to spot the anomalies and screening became a more intense process with recruiters looking for specific factors in the 
video.\u201d<\/p>\n<p>At the time of the incident, video processing and deepfake tools were not as advanced as they are now, Kirkwood says. \u201cJust a few short months ago you would have been okay with using most visual cues to identify when a person was using a deepfake,\u201d he says.<\/p>\n<p>The use of the deepfake in the interview was believed to be <a href=\"https:\/\/www.csoonline.com\/article\/3609972\/north-korean-fake-it-workers-up-the-ante-in-targeting-tech-firms.html\">an example of the North Korean fake IT worker scam<\/a> that organizations <a href=\"https:\/\/www.csoonline.com\/article\/3497138\/how-not-to-hire-a-north-korean-it-spy.html\">have increasingly been contending with of late<\/a>.<\/p>\n<p>Another security firm, KnowBe4, <a href=\"https:\/\/blog.knowbe4.com\/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us\">experienced a similar incident in July 2024<\/a> when it discovered that a newly hired employee named \u201cKyle\u201d wasn\u2019t from Atlanta as stated, but from North Korea.<\/p>\n<p>\u201cWe were dealing with a synthetic identity along with a deepfake image,\u201d says <a href=\"https:\/\/blog.knowbe4.com\/author\/james-mcquiggan\">James McQuiggan<\/a>, security awareness advocate at the company. \u201cWithin moments of receiving his company-issued laptop, Kyle attempted to install malware. Security tools triggered alerts, and the team quickly isolated the device and the account before any damage occurred.\u201d<\/p>\n<p>Had the alerts not triggered, Kyle might have stayed undetected for much longer, McQuiggan says. On closer inspection, Kyle\u2019s job application was fabricated, including an AI-generated headshot, he says. \u201cFurther investigation revealed Kyle was part of a North Korean campaign to embed operatives in organizations for espionage and financial gain.\u201d<\/p>\n<p>The incident was an unsettling wake-up call for the cybersecurity and HR teams, McQuiggan says. 
\u201cThe hiring process had been fully remote,\u201d he says. \u201cThere were no red flags in background checks. The submitted documents passed standard HR scrutiny. The attacker used convincing synthetic identity techniques \u2014 a blend of factual and fabricated details, all created to evade detection.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Fighting the fakers: Tips for securing your enterprise<\/h2>\n<p>To effectively protect against the threat of deepfakes, organizations need to adopt a multi-layered defense strategy that includes teaching employees what to look for in such attacks, Buchanan, Ingersoll &amp; Rooney\u2019s McLaughlin says.<\/p>\n<p>\u201cKey defenses include employee training and awareness programs that educate staff about the risks posed by deepfakes and the importance of verifying requests before acting on them,\u201d McLaughlin says. \u201cAdditionally, organizations should implement strict verification processes for sensitive actions, such as financial transactions.\u201d<\/p>\n<p>Here are several steps CISOs can take to help combat this <a href=\"https:\/\/www.csoonline.com\/article\/3529639\/deepfakes-break-through-as-business-threat.html\">rising business threat<\/a>.<\/p>\n<p><strong>Implement deepfake awareness training.<\/strong> Deepfake awareness education \u201cis the only control for humans,\u201d says <a href=\"https:\/\/warrenaverett.com\/people\/paul-perry\/\">Paul Perry<\/a>, risk advisory practice leader at accounting and advisory firm Warren Averett. 
It should give employees an understanding of why deepfake attacks are so prevalent and what the latest tactics are, \u201cto help them understand when something is out of the ordinary and unusual,\u201d he says.<\/p>\n<p>Training needs to teach users to question everything they receive instead of taking the quick way out and just responding to a request without pause, Perry says.<\/p>\n<p>Employee awareness and training programs need to be ongoing rather than one-offs, and the training should teach employees to spot red flags such as stilted audio, mismatched lip movements, and urgent requests, says <a href=\"https:\/\/www.linkedin.com\/in\/mitr\/\">Mithilesh Ramaswamy<\/a>,\u00a0senior engineer at Microsoft.<\/p>\n<p>\u201cTo effectively guard against deepfakes, it\u2019s essential to integrate training and skill-building into daily workflows, rather than relying on infrequent, one-off sessions,\u201d Ramaswamy says. \u201cThis ongoing approach helps employees stay prepared to identify and neutralize manipulated video and audio content as soon as it appears.\u201d<\/p>\n<p><strong>Conduct drills and establish clear internal policies.<\/strong> Threat actors often rely on human lapses in judgment, Ramaswamy says. He also recommends simulation exercises as part of the learning process. \u201cConduct drills or <a href=\"https:\/\/www.csoonline.com\/article\/570871\/tabletop-exercises-explained-definition-examples-and-objectives.html\">tabletop exercises<\/a>, where employees practice responding to suspicious calls or videos claiming to be from executives,\u201d he says.<\/p>\n<p>In addition, clear internal policies and processes\u00a0play a critical role in stopping deepfakes.<\/p>\n<p><strong>Scrutinize business workflow policies for potential weaknesses.<\/strong> At KnowBe4, the hiring process has been significantly revamped to mitigate risks, McQuiggan says. 
\u201cIt includes adding greater scrutiny to background checks to ensure a comprehensive evaluation of candidates,\u201d he says. \u201cWhen a laptop is shipped to a new hire, they are to collect it at a local UPS store and provide matching identification to ensure it\u2019s the correct person.\u201d<\/p>\n<p>To keep the hiring team informed and vigilant, HR and recruiting teams receive updated briefings on deepfake tactics. This aims to equip them with the knowledge necessary to recognize and respond to sophisticated deception techniques, McQuiggan says.<\/p>\n<p>From a process perspective, multiple verification points, much like multi-factor authentication for passwords, can be applied when dealing with video and deepfake voice requests, Perry says. \u201cOnce the request comes in, individuals need to do X, Y and Z to further approve or validate the need,\u201d he says.<\/p>\n<p>\u201cUnfortunately, history tells us technology grows, so the threats will grow as technology enhances itself,\u201d Perry says. \u201cConstant validation or verification \u2014 or the human interaction \u2014 will become key.\u201d<\/p>\n<p><strong>Revamp incident response plans.<\/strong> Enterprises can also develop and regularly update <a href=\"https:\/\/www.csoonline.com\/article\/3829684\/how-to-create-an-effective-incident-response-plan.html\">incident response plans<\/a> that outline how to respond to suspected deepfake incidents. This can enhance organizational resilience and readiness to handle such threats, McLaughlin says.<\/p>\n<p><strong>Consider investing in deepfake-targeted tools and skills. 
<\/strong>AI-based detection software is a wise investment, McLaughlin says: it can analyze video and audio content for inconsistencies and signs of manipulation, and flag suspicious content before it reaches employees or other stakeholders.<\/p>\n<p>\u201cEmploying digital forensics experts can further enhance media authenticity verification through techniques such as analyzing metadata and pixel-level anomalies,\u201d McLaughlin says. \u201cAdditionally, utilizing blockchain technology for content verification can help establish authenticity by embedding digital watermarks or hashes in legitimate media.\u201d<\/p>\n<p>Some deepfake defense tools on the market might be limited at this point, however. \u201cThere are new tools that help detect deepfake videos that claim to be able to identify patterns that repeat in the representation of the deepfake video,\u201d Exabeam\u2019s Kirkwood says.<\/p>\n<p>These are interesting, until you consider that such detection will have to be layered in alongside whatever communication tool the organization uses to conduct interviews, Kirkwood says. \u201cI would prefer to have those [communications] tools be the source of the detection and layer it in,\u201d he says.\u00a0\u201cIt would be a case of AI detecting AI and alerting.\u201d<\/p>\n<p><strong>Know the law.<\/strong> Enterprises need to be aware of applicable law regarding the consequences of not addressing deepfakes once they are known, says <a href=\"https:\/\/www.cm.law\/people\/reiko-feaver\/\">Reiko Feaver<\/a>, partner at CM Law, a privacy and data security attorney whose practice focuses on AI.<\/p>\n<p>\u201cNot just statutory laws, but common law concepts such as negligence, tort, misrepresentation, [and] fraud,\u201d Feaver says. 
\u201cCompanies have to be aware of\u00a0not only\u00a0becoming a victim but\u00a0of their obligations if they are a victim and don\u2019t do anything about it.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>An employee in the finance department at a retail company recently got a call from his CFO directing him to wire $700,000 to a business the company was in the process of acquiring. The executive noted that the transaction was extremely time sensitive. It seemed a bit out of the ordinary. But the employee, not [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":3138,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3137","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3137"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3137"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3137\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/3138"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3137"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3137"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2F
tags&post=3137"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}