Fast Facts
- Instances of AI-generated deepfakes have steadily increased over the past year.
- Cybersecurity expert Masha Sedova told TheStreet that the proliferation of synthetic content online is destroying digital trust.
The issue of artificial intelligence-driven deepfakes is not new. But in recent months, it has turned into something of a crisis.
Instances of AI-powered deepfake celebrity porn began surfacing more than six years ago. Over the years, the output of these tools has become much more realistic, and their accessibility and speed have steadily improved.
Last year, a New Jersey high school student used an AI image generator to create and spread fake explicit images of his classmates. The ongoing issue of deepfake abuse received further attention in January when a sexually explicit fake image of Taylor Swift went viral on social media.
RELATED: Microsoft engineer says company asked him to remove alarming findings
A few weeks later, a Microsoft (MSFT) engineer said in an open letter that “Microsoft was aware of the potential for abuse” long before the image went viral, adding that this particular case was “not unexpected.”
Some of the accounts responsible for the posts had also shared fake images of Ariana Grande and Emma Watson before they were eventually banned, a result that came only after a swarm of Swift's fans took to the platform to report the accounts and drown out the photos.
Although many of these accounts were eventually banned from the platform, some still existed as of Monday afternoon and continued to post AI-generated sexually suggestive images of celebrities.
One such account, still actively posting on X, includes a link to a Patreon page where users can pay $5 a month to view all of its explicit AI-generated posts.
Meanwhile, the number of easily accessible tools that claim to “undress” the people in photos users upload continues to grow, as Wired pointed out Monday.
And that doesn't even touch on other AI-generated deepfakes, such as unauthorized imitations of musicians and comedians, real-time deepfake videos, the theft of millions of dollars through AI-generated phone calls and the robocall imitating President Joe Biden that encouraged people not to vote in New Hampshire's primary.
These deepfake tools are being used to power online harassment, bullying, and mass disinformation campaigns.
Masha Sedova, a cybersecurity expert and vice president of human risk strategy at Mimecast, told TheStreet that the golden era of the information age is over. The age of digital mistrust has arrived.
RELATED: Deepfake porn: It's not just a Taylor Swift problem
The collapse of online trust
“We are moving into a whole new world where all trust online is broken down,” Sedova said, adding that the problem extends beyond the walls of the internet and is just as pervasive in other forms of communication, such as phone calls and emails.
She says people can no longer trust the information they encounter through digital media, whether it's related to political elections or personal family news.
She called this a “fundamental disruption” in modern relationships.
And, according to Sedova, the existence of technology that can synthetically clone humans means people will have to adapt very quickly to an environment where online information should not be trusted.
Still, there are limits to just being aware of the problem. Sedova said it is unreasonable to expect individuals to be able to detect fraudulent deepfake attacks in real time, given the level of accessible technology out there.
“If Taylor Swift can't protect herself online, how can our teenagers be protected?” — Masha Sedova
Rather, she said, the onus lies with the platforms that give these synthetic images both a venue and an audience.
“They should do a better job of filtering deepfake content. It's possible; deepfakes aren't magic,” Sedova said, adding that it is the responsibility of telecom providers to verify that a call is likely to be real, and of social media platforms to enforce watermarking efforts that prove images are genuine and allow people to “understand how much trust to apply” to a given piece of content.
There are currently some efforts along these lines, but they have not yet gained wide adoption.
Sedova said an important next step in this environment is exploring creative ways to prove trustworthiness. Watermarking is a good move in that direction, but the method is still in its infancy and imperfect, she said.
She added that code words among colleagues and family members will become a necessary norm, and that protocols will inevitably have to start changing.
“I think we as a society can fulfill that mission,” she said.
Related: Deepfake programs show horrifyingly destructive aspects of AI technology
Fundamental problems with social media
However, this new age of digital mistrust, arrived at through increased awareness of what these tools can do, does not solve all of the problems that stem from the easy availability of deepfake generators.
When I raised the issue of deepfake porn and how it affects young women and girls, Sedova put her head in her hands and sighed.
“Social media just makes it harder to be a parent,” she said at length. “I don't think we understood online bullying and harassment for a long time … The risks are increasing.”
Sedova said parents don't need recent cases of online deepfake harassment to know that keeping children safe online is not an easy task. But she said these cases “could be a turning point for younger generations to be more cautious online.”
Sedova said that encountering realistic images, videos and audio of people doing or saying things they never did or said is a much more intuitive experience for kids than a parent warning, “If you put this online, you might not be able to find a job in 10 years.”
“Frankly, I think this is another societal challenge that we are probably not prepared to address,” Sedova said. “Unfortunately, we haven't been able to solve it with much lower stakes. If Taylor Swift can't protect herself online, how can our teenagers be expected to?”
Sedova said this is a challenge faced by teenagers and politicians alike; both groups will have to convince the public that they are not lying and that the words put in their mouths are not their own.
“If we can't even trust the person across from us, then who are we supposed to trust? How does this impact our online trust structures? And how do we begin to navigate that in a way where we can move forward and leverage the internet, and all of its greatness, without it completely collapsing on us?” Sedova said.
Related: Cybersecurity experts say the next generation of identity theft has arrived: 'Identity hijacking'
The slow race towards corporate responsibility
A key element of this ongoing conversation revolves around questions of responsibility and liability, which courts and lawmakers have yet to settle.
According to a poll by the Artificial Intelligence Policy Institute (AIPI), 84% of U.S. voters think the companies behind AI models used to generate fake political content should be held accountable. AIPI further found that 70% of respondents support legislation that would enforce that accountability.
A recent AIPI poll found that 82% of voters believe AI companies should be held accountable when their technology is used to create fake pornography of real people, and 87% think the social media platforms that spread such images should be held responsible as well.
The organization promotes a duty of care for model developers. Daniel Colson, AIPI's executive director, previously told TheStreet that regulation would force technology companies to think about how their technology could be misused before they make their tools available to the public.
Sedova similarly said, “I think it's morally right to protect the fabric of society, so a lot of the responsibility lies with corporations.”
She says security remains an afterthought. And she's not confident that companies will change course anytime soon.
“The more the better, and then you pay for your mistakes after the fact,” she said of the prevailing corporate mindset, adding that, as Colson suggested, some external force, such as regulation, is needed to make companies slow down and think through the potential for exploitation before the technology is made available.
But she doesn't expect that regulation to take effect anytime soon, certainly not before the 2024 U.S. presidential election. She thinks the technology could prove dangerous if left untethered, and believes that is what will make policymakers keenly aware of just how serious the situation is.
“But I think it's too little too late,” Sedova said of regulatory efforts. “I think there’s going to be a lot of collateral damage by the time we get to a set of policies that hold organizations accountable.”
For tips and AI stories, contact Ian via email at ian.krietzberg@thearenagroup.net or Signal 732-804-1223.
Related article: No, Elon Musk says AI self-awareness is not 'inevitable'