In March 2024, the National Association of State Chief Information Officers held a conference call with government and business IT leaders that highlighted how an old security issue has evolved into one of today's biggest threats for end users. Cybersecurity awareness training is back near the top of government cybersecurity concerns, and we've seen this development before. Or have we?
A new generation of AI-generated phishing attacks is targeting government agencies in unprecedented ways, reaching through email, text, voice messages, and even video. These sophisticated cyberattacks present new challenges to organizational defenders because they arrive without the typos, formatting errors, and other mistakes seen in the targeted and spear-phishing campaigns of the past.
Even scarier are AI-generated deepfakes that can imitate a person's voice, face, and gestures. New cyberattack tools have the potential to deliver disinformation and fraudulent messages at an unprecedented scale and sophistication.
Simply put, detecting and stopping AI fraud is harder than ever. Recent 2024 examples include fake messages imitating President Biden, Florida Governor Ron DeSantis, and the CEO of a private company. Beyond elections and political influence, a deepfake video of a CFO at a multinational company recently tricked employees into making bank transfers, resulting in a loss of $26 million.
So how can businesses address these new data risks?
In recent years, the industry has been moving beyond traditional security awareness training for end users toward a more comprehensive set of measures to combat human-targeted cyberattacks.
Simply put, effective security awareness training truly changes security culture. People become more engaged and begin to ask questions, understand and report risks, and realize that security is not just a workplace issue. It also concerns their personal safety and the safety of their families.
The term that many people are currently adopting is "Human Risk Management" (HRM). Research and consulting firm Forrester defines HRM as "solutions that manage and reduce cybersecurity risks posed by and to humans by: detecting and measuring human security behaviors and quantifying human risk; initiating policy and training interventions based on human risk; educating and enabling employees to protect themselves and the organization from cyberattacks; and building a positive security culture."
So what does this mean for immediately addressing AI-generated deepfakes?
First, employees need to be (re)trained to detect this new generation of sophisticated phishing attacks. They need to know how to authenticate the sources and content they receive. This includes showing them what to look for, such as:
- Audio or video quality mismatches
- Lip-sync or audio-sync mismatches
- Unnatural facial movements
- Uncharacteristic behavior or speech patterns
- Verifying the source
- Sharpening detection skills
- Checking for watermarks on images and videos
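The checklist above can double as a simple triage aid for staff. Here is a minimal sketch, assuming a reviewer records each red flag as a boolean; the flag names mirror the list and are otherwise hypothetical, and this is a training aid, not a detector:

```python
# Red flags drawn from the checklist above; a reviewer marks each True/False.
DEEPFAKE_RED_FLAGS = [
    "quality_mismatch",         # audio or video quality mismatch
    "sync_mismatch",            # lip-sync or audio-sync mismatch
    "unnatural_movement",       # unnatural facial movements
    "uncharacteristic_speech",  # uncharacteristic behavior or speech patterns
    "unverified_source",        # the source could not be verified
    "missing_watermark",        # an expected provenance watermark is absent
]

def triage(observations: dict) -> str:
    """Count observed red flags and return a rough triage verdict."""
    score = sum(bool(observations.get(flag)) for flag in DEEPFAKE_RED_FLAGS)
    if score == 0:
        return "no red flags noted; stay alert"
    if score == 1:
        return "verify through a second channel before acting"
    return "treat as suspected deepfake and report it"

print(triage({"sync_mismatch": True, "unverified_source": True}))
# → treat as suspected deepfake and report it
```

Even one flag should prompt out-of-band verification before anyone acts on the message.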
Second, provide tools, processes, and techniques for verifying message authenticity. If these new tools are not available, establish a process, encouraged by management, through which employees feel able to question the legitimacy of messages. Also, report deepfake content: if you come across a deepfake involving you or someone you know, report it to the platform hosting the content.
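One simple technical building block for message authenticity is a shared-secret HMAC tag that the recipient can verify. The sketch below uses Python's standard `hmac` module; the secret and message are hypothetical examples, and real deployments would manage keys in a vault rather than in code:

```python
import hmac
import hashlib

def sign_message(secret: bytes, message: bytes) -> str:
    """Produce a hex HMAC-SHA256 tag for an outgoing message."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_message(secret: bytes, message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign_message(secret, message), tag)

secret = b"example-shared-secret"  # hypothetical; store real keys securely
msg = b"Transfer request #1042: confirm by phone before processing"
tag = sign_message(secret, msg)

print(verify_message(secret, msg, tag))                        # → True
print(verify_message(secret, b"Transfer request #9999", tag))  # → False
```

A tampered or forged message fails the check, which gives employees a concrete, repeatable verification step instead of a judgment call.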
Third, consider new enterprise technology tools that use AI to detect message fraud. That's right: fire may need to be fought with fire. In much the same way that email security tools detect and disable traditional phishing links and quarantine spam messages, next-generation cyber tools can stop AI-generated messages before they reach their targets. Some new tools allow staff to check messages and images for fraud, even if they can't automatically check every incoming email.
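Commercial screening tools combine many signals, typically with trained models. As an illustration only, here is a toy rule-based screen over message text; the keyword lists and the two-category threshold are hypothetical and stand in for what real products learn from data:

```python
import re

# Hypothetical indicators of pressure-and-payment fraud.
INDICATORS = {
    "urgency": re.compile(r"\b(urgent|immediately|today|confidential)\b", re.I),
    "payment": re.compile(r"\b(wire|transfer|gift cards?|payment|invoice)\b", re.I),
    "authority": re.compile(r"\b(CEO|CFO|director|governor)\b", re.I),
}

def fraud_signals(text: str) -> list:
    """Return the names of indicator categories found in the message."""
    return [name for name, pattern in INDICATORS.items() if pattern.search(text)]

def should_quarantine(text: str) -> bool:
    """Flag messages that match two or more indicator categories."""
    return len(fraud_signals(text)) >= 2

msg = "The CFO needs an urgent wire transfer before noon today."
print(fraud_signals(msg))       # → ['urgency', 'payment', 'authority']
print(should_quarantine(msg))   # → True
```

The design point is that no single keyword is damning; it is the combination of urgency, money movement, and invoked authority that should trigger quarantine or a human review.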
This new generation of cyberattacks that use deepfakes to deceive humans essentially undermines trust in all things digital. Indeed, digital trust is becoming increasingly difficult for governments to gain, and current trends are not encouraging and require immediate action.
As Albert Einstein once said, "Whoever is careless with the truth in small matters cannot be trusted with important matters."