According to a recent study, malicious bots deployed by cybercriminals now account for almost 75% of internet traffic. The top five attack categories are fake accounts, account takeover, scraping, account management, and in-product exploitation.
Gavin Reid is on the front lines of this fight. As Chief Information Security Officer at HUMAN Security, he helps clients across industries stop online fraud, much of which is automated by bots.
HUMAN distinguishes bad bots from good ones, which perform helpful tasks for customers such as customer service and content moderation. It's the bad ones that hog the spotlight. Reid told me that his New York-based company has seen a fivefold increase in malicious bot activity in the past year alone.
It damages trust in companies and brands.
“We have customers who come to us because they have been fooled by these bots,” says Reid, a CISO whose clients include Priceline, Wayfair, and Yeti. The typical scenario he hears: “When they put out something new to sell on their platform, 80% of all their traffic was bots, and the public couldn't even get access to it.”
Generative AI makes it easier for criminals to create bots that imitate humans online, Reid explains. So “it's really, really difficult for companies and people like us to protect our infrastructure from attacks and still allow our users to buy our products.”
The security compliance regimes that organizations follow do little to protect against automated attacks, Reid says. That includes SOC (System and Organization Controls) audits and International Organization for Standardization (ISO) standards.
“I feel like we're in a bit of a rut,” Reid says. “And where we leave an opening, the bad guys take advantage of it and use it against us.”
Mistrust within companies may be exacerbating the problem.
Cybersecurity and fraud prevention remain siloed at some companies. Times have changed, Reid points out, and he isn't convinced the split makes sense anymore.
“Honestly, most financial fraud and other business fraud happens online,” he says. “Therefore, separating these groups is of no use at all.”
So why does it persist?
“Typically, it has to do with politics or organizational structure, not with what makes sense for solving this particular problem,” Reid says.
This rift is more common at older organizations, he notes. Large U.S. banks, for example, typically have separate fraud and cyber departments: they started out with a team to handle old-school crimes like check fraud, then later added a cybersecurity group to fight online crimes like hacking, phishing, and ransomware.
But that wall is coming down. Reid says most large financial institutions now operate “fusion centers” where the two sides work together. “Convergence continues, but it's happening slowly.”
For companies seeking a more collaborative cybersecurity and fraud strategy, Reid suggests following the banks' lead. “It's like getting into the pool together,” he says of the two teams. “So they can maintain the organization, they can maintain the politics, but the real people who are dealing with day-to-day issues can work together very closely.”
The second step is “single leadership responsible for delivering both,” along with shared access to tools and capabilities, Reid says.
There are no ifs, ands, or bots about it.
Nick Rockel
Nick.rockel@consultant.fortune.com
In other news
Get rich quick
The Swifties have good reason not to take that coveted concert ticket at face value. British bank Lloyds has warned customers of a spike in ticket fraud related to Taylor Swift's upcoming show. British fans are estimated to have lost £1 million ($1.25 million) since July last year. More than 600 Lloyds customers have complained of being defrauded, mainly via Facebook. Talk about bad blood.
Fashion victim
So what else is new? Fast fashion giant Shein has been accused of copyright infringement again. A U.S. class action lawsuit alleges that the Chinese company used electronic surveillance and AI to scour the internet for popular designs, stealing them from artists to make its products. This doesn't bode well for Shein, which has already come under fire for treating its workers poorly and running an environmentally unsustainable business.
Mind the gap
Unethical use of AI could hinder its funding and development, believes Paula Goldman, chief ethical and humane use officer at Salesforce. “The next AI winter could be caused by trust issues and people adoption issues in AI,” Goldman said at a recent Fortune conference in London. To build worker trust in AI tools, she called for “conscious friction,” or checks and balances to ensure that AI tools do more good than harm. Let's hope it's not as unpleasant as it sounds.
Flight risk
Boeing's credibility problems continue. Sam Salehpour, a quality engineer and whistleblower at the aerospace giant, told a Senate hearing that management ignored his repeated warnings about safety issues. Salehpour said he witnessed gaps between the plane's fuselage panels that could put passengers at risk, and that he was “frankly told to shut up.” He said inspection documents confirmed what he saw.
Trust exercise
“Enterprises want to harness the power of generative AI but struggle with the question of trust: how to build generative AI applications that give accurate responses and don't hallucinate. This problem has plagued the industry for the past year, but it turns out we can learn a lot from an existing technology: search.
By exploring what works well in search engines (and what doesn't), we can learn how to build more reliable generative AI applications. That matters because generative AI can bring significant gains in efficiency, productivity, and customer service, but only if companies can be confident that their generative AI apps deliver reliable, accurate information.”
Generative AI's tendency to hallucinate, in other words to deliver false or misleading information, erodes that trust. Sridhar Ramaswamy, CEO of cloud computing company Snowflake, suggests a way forward: to solve the reliability problem, combine the best of search engines with the strengths of AI.
Unlike large language models (LLMs), search engines are good at sifting through mountains of information and identifying high-quality sources, he points out. Ramaswamy envisions AI apps emulating those ranking techniques to make their results more reliable: prioritizing the corporate data that is most frequently accessed, searched, and shared, along with sources considered trustworthy.
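To make that concrete, here is a minimal sketch in Python, our illustration rather than anything Ramaswamy or Snowflake has published, of how a retrieval step might re-rank internal documents using search-style signals before handing the winners to an LLM as grounding context. The field names and weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Document:
    # Hypothetical signals; a real system would pull these from
    # access logs, search analytics, and data governance tools.
    text: str
    relevance: float    # query-match score from the search index (0-1)
    access_count: int   # how often the document is opened
    share_count: int    # how often it is shared internally
    trust_score: float  # source trustworthiness rating (0-1)

def rank_documents(docs, top_k=3):
    """Blend query relevance with popularity and trust signals,
    the way a search engine ranks results."""
    max_access = max((d.access_count for d in docs), default=1) or 1
    max_share = max((d.share_count for d in docs), default=1) or 1

    def score(d):
        popularity = 0.5 * d.access_count / max_access + 0.5 * d.share_count / max_share
        # The 0.6 / 0.25 / 0.15 weights are assumptions to be tuned.
        return 0.6 * d.relevance + 0.25 * popularity + 0.15 * d.trust_score

    return sorted(docs, key=score, reverse=True)[:top_k]

docs = [
    Document("Q3 revenue summary", relevance=0.90, access_count=120, share_count=40, trust_score=0.95),
    Document("Old draft forecast", relevance=0.92, access_count=3, share_count=0, trust_score=0.40),
]
for doc in rank_documents(docs, top_k=2):
    print(doc.text)  # the heavily used, trusted summary ranks first
```

Here the slightly more relevant but stale, untrusted draft loses out to the widely used, vetted document, which is the behavior Ramaswamy describes: the model answers from sources the organization already relies on instead of improvising.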
Ramaswamy argues that it's helpful to think of LLMs as interlocutors rather than sources of truth. GenAI may speak smoothly, but its words need substance behind them.