The widespread adoption of artificial intelligence (AI) and machine learning techniques in recent years has given attackers “sophisticated new tools to carry out attacks,” cybersecurity firm Kaspersky said in a press release on Saturday.
The security firm explained that one such tool is the deepfake, which can include a synthetically generated human-like voice or a replica of a person's photo or video. Kaspersky warned that businesses and consumers need to be aware that deepfakes may become more of a concern in the future.
Deepfakes (a portmanteau of the words deep learning and fake) involve the creation of “fake images, videos, and audio using artificial intelligence,” Kaspersky Lab explains on its website.
The security firm warned that it has discovered deepfake creation tools and services on “darknet marketplaces” that are used for fraud, identity theft and the theft of sensitive data.
“Kaspersky experts estimate that deepfake videos can be purchased for as little as $300 per minute,” the press release states.
According to the press release, Kaspersky Lab's recent research found that 51pc of employees surveyed in the Middle East, Turkiye, and Africa region said they could tell the difference between deepfakes and real images. However, in testing, only 25pc were able to distinguish real images from AI-generated ones.
“This puts organizations at risk, given that employees are often prime targets for phishing and other social engineering attacks,” the company warned.
“Although the technology to create high-quality deepfakes is not yet widely available, one of the most likely use cases to emerge from this is to generate audio in real time to impersonate someone,” the press release quoted Hafeez Rehman, a technical group manager at Kaspersky, as saying.
Rehman added that deepfakes are a threat not only to businesses but also to individual users. “They are being used to spread misinformation, commit fraud and impersonate someone without their consent,” he said, emphasising that users must be protected from growing cyber threats.
The World Economic Forum's Global Risks Report 2024, released in January, warned that AI-based misinformation is a common risk for India and Pakistan.
In Pakistan, deepfakes are being used to further political objectives, especially ahead of general elections.
Former prime minister Imran Khan, currently in Adiala Jail, used an AI-generated image and voice clone to address an online election rally in December, which was viewed more than 1.4 million times on YouTube, with tens of thousands of people watching live.
Pakistan is drafting an AI bill, but digital rights activists criticize it for lacking guardrails against disinformation and failing to protect vulnerable communities.