You'd be hard-pressed to find a cybersecurity provider that isn't touting AI-based protection as a panacea. But AI protection is by no means a silver bullet, and it should not be treated as one.
AI Data Poisoning: A Hidden Threat on the Horizon
AI data poisoning is not a theoretical concept but a concrete threat. The newly updated OWASP Top 10 for LLM Applications lists training data poisoning as a top-10 vulnerability (LLM03). Model builders do not have complete control over the data fed into an LLM. Cybercriminals take advantage of this fact, using contaminated data to corrupt AI-based security defenses, leading them astray and teaching them to make the wrong decisions. This deliberate manipulation turns the model into a silent accomplice and opens the door to exploiting an effectively unprotected system.
Imagine a highly capable AI model tasked with finding anomalies in your systems. What happens if someone sneaks data into training or fine-tuning that intentionally teaches it to ignore real threats?
For attackers, it's all about making the data look legitimate. In some cases, bad actors simply take real data and tweak a few values so the AI accepts it as genuine. In essence, the data teaches the AI to make incorrect decisions.
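To make the mechanics concrete, here is a minimal sketch using scikit-learn, with entirely invented data and numbers: an attacker who can slip records into the training feed injects traffic that resembles their own attack but carries a benign label, and the resulting detector learns to wave that attack through.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical telemetry: two features, benign vs. malicious traffic.
benign = rng.normal(0.0, 1.0, size=(1000, 2))
malicious = rng.normal(3.0, 1.0, size=(1000, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)                # 1 = malicious

fresh_attacks = rng.normal(3.0, 1.0, size=(500, 2))  # held-out attack traffic

def attack_detection_rate(X_train, y_train):
    model = LogisticRegression().fit(X_train, y_train)
    return model.predict(fresh_attacks).mean()       # share of attacks caught

print(f"clean training:    {attack_detection_rate(X, y):.2f}")

# Poisoning: the attacker slips records into the training feed that
# mimic the traffic they intend to send, but carry a "benign" label.
poison = rng.normal(3.0, 1.0, size=(3000, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(3000, dtype=int)])

# The detection rate on the attacker's traffic drops sharply.
print(f"poisoned training: {attack_detection_rate(X_poisoned, y_poisoned):.2f}")
```

This is deliberately simplified, but the principle scales: the model has no way of knowing the poisoned records are lies, so it faithfully learns the lie.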
Fake it 'til you break it: Using harvested data to bypass AI security
In a more sophisticated evolution, real data can be used to fool an AI into treating it as authentic. In this scenario, the attacker harvests genuine data (usually stolen) and replays it to evade the AI model. One way to make this work is to harvest digital fingerprints, including recorded mouse movements and gestures, which are then hard-coded and lightly randomized inside automated scripts. While the data is technically real, it is not authentic, because it was not originally generated by the person presenting it.
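To make "real but not authentic" concrete, here is a hypothetical sketch of what such a replay script might look like; every field and value below is invented for illustration.

```python
import random
import time

# Hypothetical fingerprint harvested from a real victim's session.
HARVESTED = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": (1920, 1080),
    "mouse_path": [(102, 340), (118, 352), (140, 360), (163, 361)],
}

def forged_session(fingerprint: dict) -> dict:
    """Replay harvested, human-generated data with light randomization
    so each request looks like a distinct live session."""
    jittered = [(x + random.randint(-3, 3), y + random.randint(-3, 3))
                for x, y in fingerprint["mouse_path"]]
    return {
        **fingerprint,
        "mouse_path": jittered,
        "timestamp": time.time(),  # stamped now, but generated long ago
    }

# Fired at scale, every payload is built from genuinely human signals.
bot_requests = [forged_session(HARVESTED) for _ in range(10_000)]
```

Because each payload is assembled from genuinely human-generated signals, models trained to separate "human-looking" from "bot-looking" data have little to latch onto.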
This data is then fed at scale to the security tool's AI model, evading defenses whenever the AI or ML model cannot detect that the data was harvested. Let that sink in: even the most advanced AI model is ineffective as a security defense if it cannot verify that the data presented to it is genuine rather than harvested (fake).
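One simplistic countermeasure, sketched below under the assumption that telemetry arrives as structured payloads (this is not how any particular vendor detects replays), is to look for the signature of scale itself: harvested payloads replayed across many requests repeat far more often than genuinely unique human sessions.

```python
import hashlib
from collections import Counter

seen = Counter()

def payload_key(telemetry: dict) -> str:
    # Canonicalize the behavioral payload and hash it.
    canon = repr(sorted(telemetry.items())).encode()
    return hashlib.sha256(canon).hexdigest()

def looks_replayed(telemetry: dict, threshold: int = 3) -> bool:
    """Flag payloads whose exact content keeps reappearing: genuine human
    telemetry should almost never repeat byte-for-byte at scale."""
    key = payload_key(telemetry)
    seen[key] += 1
    return seen[key] > threshold
```

Of course, the jitter in the previous sketch defeats exact-match hashing, which is why detection in practice has to reason about provenance and freshness rather than content alone; that is where proof of execution, discussed below, comes in.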
Outdated anti-bot defenses are unprepared for attacks on AI
Teams relying on traditional bot management solutions may now be missing 90% or more of fraudulent bot requests due to these new types of data poisoning and fake-data attacks, which use harvested data and automation to bypass ML- and AI-based security detections.
To illustrate the magnitude of this problem, let's take a look at what happened when Kasada's bot protection was turned on for a major streaming provider.
Kasada was deployed behind the streaming company's CDN-based bot detection. The 4:15pm go-live revealed an immediate, significant spike in detected and mitigated malicious bots (shown in red): harvested data was being replayed in ways that evaded the CDN's AI detection. Before Kasada was implemented, the CDN-based detection had a 98% false negative rate, accepting the fake data as real. Human activity (shown in blue) is completely dwarfed by the scale of the bot problem.
Ironically, because the harvested data looked human, the customer was unaware that a large portion of its traffic was not genuine. The discovery was a wake-up call for both the fraud and the marketing (digital ad fraud) teams.
One of the most important elements for realizing the potential of AI in cyber defense, while minimizing the impact of tampered data inputs, is proof of execution (PoE). This is Kasada's collective term for the client-side and server-side techniques designed to verify that the data presented to AI detection models is, in fact, real. PoE verifies that the data presented to the system was generated and executed in real time.
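Kasada does not publish PoE's internals, but the general shape of the idea can be sketched. Assume the server issues a short-lived, one-time nonce, legitimate client-side code binds its telemetry to that nonce as it executes, and the server rejects anything stale or unbound. Everything below, including the shared-key shortcut, is a hypothetical simplification; a real system would never ship a verification key to the client.

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)  # hypothetical; never shipped to clients in practice
MAX_AGE = 30.0               # seconds a challenge stays valid

def issue_challenge() -> tuple[bytes, float]:
    """Server: mint a one-time nonce and record when it was issued."""
    return os.urandom(16), time.time()

def bind_telemetry(nonce: bytes, telemetry: bytes) -> str:
    """Client-side step: telemetry is bound to the live nonce as it is
    generated, so data harvested yesterday cannot carry a valid binding."""
    return hmac.new(SERVER_KEY, nonce + telemetry, hashlib.sha256).hexdigest()

def verify(nonce: bytes, issued_at: float, telemetry: bytes, tag: str) -> bool:
    """Server: accept telemetry only if it is fresh and bound to the nonce.
    A real system would also mark each nonce as used to block re-submission."""
    fresh = (time.time() - issued_at) <= MAX_AGE
    bound = hmac.compare_digest(tag, bind_telemetry(nonce, telemetry))
    return fresh and bound

nonce, issued_at = issue_challenge()
live_data = b'{"mouse_path": [[102, 340], [118, 352]]}'
assert verify(nonce, issued_at, live_data, bind_telemetry(nonce, live_data))
```

The key property is freshness binding: replayed data, however human it looks, cannot produce a valid binding to a nonce that did not exist when the data was harvested.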
3 steps defenders can take now
Data-based attacks against AI demand the attention of defenders who use, or are considering, security solutions that employ machine learning or AI models. Here are some steps you can take now to protect your organization from such attacks.
- Monitor for abnormal behavior. Scrutinize security solutions that rely heavily on AI models. Know what a human day-night cycle should look like in your traffic, and check whether you can actually observe one (see the sketch after this list).
- Diversify your defense. Don't rely solely on server-side AI learning for bot detection. Implement client-side detection and rigorous validation checks to ensure your security controls are working as intended.
- Stay alert and proactive. Choose bot detection and mitigation solutions that can verify the authenticity of the data presented to the system; it is essential to have a keen sense of what is real and what is not.
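For the first recommendation, a crude starting point is to test whether your traffic actually exhibits a human day-night rhythm. The sketch below uses synthetic data and an invented scoring heuristic; near-flat hourly volume is a hint that automation dominates.

```python
import numpy as np

def diurnal_score(hourly_counts: np.ndarray) -> float:
    """Ratio of the average day's peak-to-trough swing to its mean volume.
    Genuine user traffic usually scores well above zero; round-the-clock
    automation tends toward a flat profile. Expects requests-per-hour
    over whole days (length a multiple of 24)."""
    profile = hourly_counts.reshape(-1, 24).mean(axis=0)
    return (profile.max() - profile.min()) / profile.mean()

# One synthetic week of traffic for illustration.
rng = np.random.default_rng(1)
hours = np.arange(24 * 7)
human = 120 + 80 * np.sin(2 * np.pi * (hours % 24 - 6) / 24) + rng.normal(0, 8, hours.size)
botty = np.full(hours.size, 200.0) + rng.normal(0, 8, hours.size)

print(f"human-like traffic: {diurnal_score(human):.2f}")  # strong daily swing
print(f"bot-like traffic:   {diurnal_score(botty):.2f}")  # suspiciously flat
```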
As more security solutions add "AI" to their technology, the ability to identify and stop data poisoning and harvested data will become paramount as attackers seek to evade AI protections. AI has handed adversaries new attack surfaces and data-based evasion techniques that defenders must address. As the adversarial game of cat and mouse continues, one question remains: is your defense dynamic and adaptive enough to meet the challenge?
Kasada is designed with layered defense in mind to outwit modern automated attacks and the motivated adversaries behind them. Request a demo to find out how our experts can help you today.
This is a Security Bloggers Network syndicated blog from Kasada written by Neil Cohen. Read the original post: https://www.kasada.io/ai-data-poisoning/