
By Gwenn Cujdik, Incident Response & Cyber Services Lead, AXA XL Americas

Cyber criminals are always looking for new methods to gain access to funds and critical information. That’s largely because organizations have caught on to the phishing schemes and sneaky ransomware attacks of the past and have implemented tools and educated employees to keep cyber thieves at bay.

But with the rapid evolution of artificial intelligence, especially generative AI, and public demand for easy access to the new technology, cyber crime is primed to make a big leap forward.

Deepfakes – videos, pictures or audio made with AI to appear real – are the latest weapons in the cyber criminals’ arsenal. There has been a notable rise in the use of deepfakes to commit cyber crime, and the only things slowing their spread are the sophistication, expertise and effort required to use the technology effectively.

But as the technology becomes more accessible and user-friendly, deepfakes will likely play a big role in the future of cyber crime. Combating this emerging threat requires staying one step ahead of the criminals: using technology and strict protocols to block access to funds and critical information and, most importantly, educating employees about how to spot a deepfake.


Why are deepfakes so effective?

Deepfakes, as we know them today, are videos, images or audio that are created to look realistic with the use of artificial intelligence. In a broader sense, deepfake technology has been around for decades. Early motion pictures manipulated images long before computers and AI were available. And fake audio recordings are easy to create with or without the use of modern technology – a good celebrity impersonator can be just as effective.

Deepfake technology has been widely employed in the world of politics. Official campaigns typically avoid using such tactics, but a candidate’s supporters have been known to create unflattering and realistic images of political opponents to share on social media.

The technology has some legitimate uses, particularly in the world of cinema. The motion picture “Forrest Gump” could not have been made without blending Tom Hanks’ main character into historical footage. More recently, AI technology was used to fascinating and realistic effect in the Netflix documentary “Dirty Pop,” about boy-band impresario Lou Pearlman. The producers combined dialogue from Pearlman’s memoir, a video of him speaking to the camera, a “mouth actor” who moved his lips to match the words, and AI effects to bring it all together. If the program hadn’t warned viewers beforehand that it used AI-generated trickery to make Pearlman the narrator of his own documentary, the effect would be hard to detect.

Deepfakes can be an effective tool for cyber crime because of social engineering, which is the psychological manipulation of people into performing actions or divulging confidential information, such as passwords or access to financial accounts. Social engineering is often one of many steps in a more complex fraud scheme.

If you’ve ever received a realistic-looking email that appears to be from your bank or cable TV company but is actually from an unfamiliar email account, that’s a scam utilizing the concept of social engineering. Deepfakes are similar to a fake email scam, but taken to a new, more sophisticated level.

Most cyberattacks that employ social engineering techniques play on the victim’s emotions and create a sense of urgency, because the cyber criminals want to put the victim in an emotional state and get them to make a decision quickly before they have time to think about it critically.


Deepfake detection and protection

As quickly as deepfake technology has evolved, so have the methods to detect when an image, video or audio file is an AI-generated fake. There are several software tools available that can help detect a fake video. It’s like using AI for good against those who would use it to commit crimes.

Aside from letting AI do all the detective work, humans well trained in identifying deepfakes can uncover the truth by simply analyzing the quality and consistency of the video or image. Distortions, blurriness and mismatched colors or objects can raise suspicions. Also, look for unusual behavior in those speaking in the video, like awkward motions, unnatural positions and lip movement that doesn’t sync with the words being heard. Finally, it’s critical to verify the source and the origin of the video or image.
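One of those visual cues, blurriness, can be approximated in code. The sketch below is a minimal illustration, assuming NumPy is available: it scores an image by the variance of its Laplacian response, a common sharpness heuristic in which unusually low variance can flag an over-smoothed (possibly synthesized or retouched) region. It is an illustration of the idea, not a production deepfake detector.

```python
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian response over a grayscale image.

    Low values indicate little high-frequency detail (blur or
    over-smoothing); a sharp, detailed image scores higher.
    """
    # 3x3 Laplacian kernel applied as an explicit sliding-window sum.
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())
```

A flat, featureless region scores exactly zero, while a detailed region scores well above it; in practice a tool like this would be one weak signal combined with many others.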

To protect your organization against deepfake cyber threats, continue to follow the same tried-and-true cybersecurity protocols you have in place. Deepfakes by themselves are not a security threat. But they can be a means to an end for nefarious types to get past security protocols. Deepfakes, therefore, are really a variation of an existing threat that can make social engineering scams harder to detect.

The human element continues to be one of the biggest dangers organizations face when it comes to cybersecurity.

Organizations should continue to use multi-factor authentication (MFA), an electronic authentication method that grants a user access to a website or application only after the user successfully presents two or more pieces of evidence to an authentication mechanism. Think of signing in to a website with a login, a password and, finally, a six-digit code sent as a text to your smartphone.
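The six-digit code in that example is typically a time-based one-time password (TOTP, standardized in RFC 6238). As a sketch of how such a code is derived, here is a minimal standard-library implementation: an HMAC-SHA1 over the current 30-second time step, dynamically truncated to a short decimal code. The secret shown in the usage note is the RFC test value, not a real credential.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password.

    HMAC-SHA1 of the big-endian time-step counter, dynamically
    truncated to a 31-bit integer and reduced to `digits` digits.
    """
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and time 59, this produces the published eight-digit test vector 94287082, which is a quick way to check an implementation. Because the code changes every 30 seconds, a stolen password alone is not enough to sign in.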

As new threats appear on the horizon, such as deepfakes, organizations should continually review and update employee cyber training.

Deepfakes require a level of sophistication, training and effort that most cyber criminals have not yet mastered, but they are a real and emerging risk. Employees should be trained now to identify a deepfake threat and protect critical information from cyber criminals.

 


 

Gwenn Cujdik is a subject matter expert in cyber incident response and is the Incident Response & Cyber Services Lead for AXA XL North America. In this role, she oversees cyber first-party claims/incident response processes and procedures aimed at helping claims specialists and clients navigate cyber incidents before, during and following an actual or suspected cyber, tech, or privacy incident. Gwenn also oversees AXA XL’s breach response provider panel and cyber services and is responsible for creating and managing cyber services, including proactive and incident response services. She can be reached at gwenn.cujdik@axaxl.com.


Global Asset Protection Services, LLC, and its affiliates (“AXA XL Risk Consulting”) provides risk assessment reports and other loss prevention services, as requested. In this respect, our property loss prevention publications, services, and surveys do not address life safety or third party liability issues. This document shall not be construed as indicating the existence or availability under any policy of coverage for any particular type of loss or damage. The provision of any service does not imply that every possible hazard has been identified at a facility or that no other hazards exist. AXA XL Risk Consulting does not assume, and shall have no liability for the control, correction, continuation or modification of any existing conditions or operations. We specifically disclaim any warranty or representation that compliance with any advice or recommendation in any document or other communication will make a facility or operation safe or healthful, or put it in compliance with any standard, code, law, rule or regulation. Save where expressly agreed in writing, AXA XL Risk Consulting and its related and affiliated companies disclaim all liability for loss or damage suffered by any party arising out of or in connection with our services, including indirect or consequential loss or damage, howsoever arising. Any party who chooses to rely in any way on the contents of this document does so at their own risk.

US- and Canada-Issued Insurance Policies

In the US, the AXA XL insurance companies are: Catlin Insurance Company, Inc., Greenwich Insurance Company, Indian Harbor Insurance Company, XL Insurance America, Inc., XL Specialty Insurance Company and T.H.E. Insurance Company. In Canada, coverages are underwritten by XL Specialty Insurance Company - Canadian Branch and AXA Insurance Company - Canadian Branch. Coverages may also be underwritten by Lloyd’s Syndicate #2003. Coverages underwritten by Lloyd’s Syndicate #2003 are placed on behalf of the member of Syndicate #2003 by Catlin Canada Inc. Lloyd’s ratings are independent of AXA XL.
US domiciled insurance policies can be written by the following AXA XL surplus lines insurers: XL Catlin Insurance Company UK Limited, Syndicates managed by Catlin Underwriting Agencies Limited and Indian Harbor Insurance Company. Enquiries from US residents should be directed to a local insurance agent or broker permitted to write business in the relevant state.