As the rise of artificial intelligence tools changes the way work is done in education and business, IT security professionals are aware that emerging technologies may pose additional cybersecurity risks in the coming years, as well as ways to make networks more efficient. It warns that this could lead to new means of protection.
Amanda Stent, director of the Davis Institute for Artificial Intelligence at Colby College in Maine, said generative AI tools are growing in popularity, especially in education and business fields, adding to the growing popularity of IT professionals already battling an onslaught of cyber-attacks. It can cause headaches. Ransomware and phishing attacks have been on the rise since the coronavirus outbreak. He said the growing use of open source generative AI tools like ChatGPT could have a significant impact on data privacy, as users of these programs have access to confidential and personally identifiable information. He pointed out the need to avoid including information in prompts that is possible.
“It’s one thing to chat with a generative AI, it’s another to ask for business ideas or help write an email. When you upload, it's a completely different thing. All of this can lead to a data breach,” she said. “Higher education employees, like employees at other companies, need to understand what kind of data is appropriate to put into external vendor systems. [GenAI] The model, and what kind of data is inappropriate. And higher education is subject to additional regulations, including FERPA. ”
Stent said these data privacy concerns are further exacerbated by the fact that GenAI tools are still prone to significant errors, which can lead to data breaches.
“There [have been] I’ve jailbroken generative AI models multiple times,” she said. “With some prompts, [GenAI] Models can break, fail, and expose personally identifiable information in a variety of ways. An example of such a randomly generated prompt is typing the word “poetry” over and over again. …All of a sudden, you're getting people's names, addresses, emails, phone numbers, and more. [other] personal information. ”
VS Subramanian, a computer science professor at Northwestern University, said AI technology could be used by cybercriminals to reduce the typical signs of phishing and create more convincing email phishing attacks. It says it's expensive. He also said he expects AI to generate phishing messages that combine text, images, video and audio, as well as fake social media accounts with fewer red flags.
Additionally, he said, AI can be used to develop phishing attacks via email and other platforms that are more closely targeted to specific users, making them even more effective. He added that some of these investigative efforts could utilize deepfake technology to make the content even more convincing.
“We expect attackers to leverage AI to craft sophisticated phishing messages without the same type of grammar and misspellings,” Subrahmanian said. The digital content we consume, whether it's on websites, social media feeds, or via text messages, allows attackers to curate posts so enticing that we want to click on them. vector. ”
In addition to concerns about data privacy and phishing attacks, GenAI will help cybercriminals create new types of malware and ransomware attacks, said Rhonda Sikone, a professor of computer science and cybersecurity at Purdue University Global. He said that there is a possibility that it could be used for It has become a major concern for school and university IT teams in recent years.
He said organizations and workplaces that are leveraging AI will need the latest cybersecurity training to align with AI advances to improve cyber hygiene, especially in schools and universities that are seeing an increase in phishing and ransomware attacks. He said that it is necessary to provide.
“Other technical things have their pros and cons. I think you'll see a lot of them. [processes and security measures] That is now being done in cybersecurity to become more automated, faster and more accurate,” she said. A double-edged sword. ”
Subramanian said AI tools will be used to strengthen IT security. For example, Northwestern University's Security and AI Institute (NSAIL) is developing technology to more responsibly use deepfake AI technology, which is known for potentially nefarious and deceptive uses. he said. The NSAIL researcher also built on his existing AI technology to generate fake documents and databases to combat intellectual property theft and data breaches.
“In my lab, we do a lot of research into using AI techniques to thwart intellectual property theft,” said Subrahmanian, an IT expert who uses AI to protect networks. He added that he expects a “cat-and-mouse game” between the houses and that adversaries will continue to use AI for nefarious purposes.
In addition to new risks, Stent agreed that generative AI has the potential to enhance and automate IT security processes.
“While we think of AI as producing images, text, and music, it can also monitor computer systems, analyze time-series data, review logs, and identify flaws and vulnerabilities in existing infrastructure. You can use it,” she said.