Technology

Artificial intelligence a ‘double-edged sword’ in world of cybersecurity: experts


Denis Villeneuve has worked in cybersecurity for 15 years but seldom have the threats he’s come across felt as personal as they do these days.


Employees at his workplace, technology firm Kyndryl, have been sent fake videos of CEO Martin Schroeter designed to lure them into handing over their login credentials to fraudsters.


Villeneuve has also seen a friend who runs a small engineering firm preyed upon: the friend's wife was left a voicemail, in what sounded like his voice, falsely claiming he was in trouble and needed her to quickly post bail money.


“I was like, ‘Oh my God.’ This hit home close because this is a good friend of mine,” recalled Villeneuve, a cybersecurity and resilience practice leader at Kyndryl Canada.


The attacks were made possible by artificial intelligence-based software, which has become even more affordable, accessible and advanced in recent years.


But despite the cybersecurity threats, Villeneuve — like much of the tech industry — is careful not to frame AI as all bad.


In the fight against cyber attackers, the reasoning goes, AI can help just as much as it harms.


“It’s a double-edged sword,” Villeneuve explained.


As AI improves, experts feel there will always be a bigger or more innovative way of trying to get through a company’s defences, but those defences are getting a boost from the technology, too.


“AI, ultimately, is a much better thing for the defenders than the attackers,” said Peter Smetny, regional vice-president of engineering at cybersecurity firm Fortinet Canada.


His reasoning lies in the sheer number of attacks some companies face and the resources it takes to handle them or ward them off.


A 2023 study from EY Canada of 60 Canadian organizations found that four out of five had seen at least 25 cybersecurity incidents in the past year. Indigo Books & Music, London Drugs and Giant Tiger have all been victims of high-profile incidents.


While not all cyber attacks are successful, Smetny said many companies see thousands of attempts to penetrate their systems every day.


AI makes handling them more efficient.


“You may have only four or five people on your team and there’s only so many alerts they can manually go through, but this allows them to focus and tells them which ones to prioritize,” Smetny said.


Without AI, an analyst would have to manually check whether each attack is linked to an internet protocol address, a unique identifier assigned to every device connected to the internet that can help trace the origins of an attack.


The analyst would also study whether the person behind the address was already known to the company and the extent of their attack.


With AI, an analyst can now query software using simple language to quickly compile and present everything about an attacker and their IP address, including where they were able to enter a system and what actions they carried out.


“It’s able to really, really save you a lot of time and point you in the right direction, so you focus on the things that are important,” Smetny said.


But attackers have the same tools in their arsenal.


Dustin Heywood, the chief architect of IBM’s global intelligence agency X-Force, said anyone with malicious intent can turn to AI to help round up data from several breaches and piece together a profile of a target.


For example, if the data shows that someone frequently buys kids’ products at Toys “R” Us or Walmart, it might tell an attacker that the person recently had a child.


Sometimes the attackers resort to a practice known as “pig butchering” to fill in any information they are missing.


“You’ll have a bot start talking to somebody, start building a rapport using things like generative AI,” Heywood said. “They’ll make them feel all nice and trusted, then they’ll … start extracting information.”


When attackers gain financial details, a social insurance number or enough personal information to get into an account, the data can be used to falsely apply for a credit card or sold to other criminals.


The potential harm snowballs even further when there’s good enough material to make a deepfake, a clip of someone doing or saying something they haven’t. The voicemail Villeneuve’s friend appeared to leave for his wife is an example of this tactic.


For smaller targets, AI does a lot of the heavy lifting, freeing attackers up to focus their attention on high-value victims.


“You can have a bot operator talk to 20 people at once,” Heywood said. “Before it used to be a farm of people out in a third-world nation, typing away at mobile phones.”


He’s also heard of people using augmented reality glasses that instantly pull up information on someone, including their personal data being sold on the dark web, as soon as the wearer looks at them, and of others working to “jailbreak” AI chatbots into extracting personal information people have entered.


The evolution in attacks has convinced him that AI is “changing the game.”


“Back in the ’90s, it used to be teenagers, kids, college students that used to break into websites to deface them,” he said. “And then recently we had the shift over to ransomware where companies would have their computers encrypted.”


Now, the focus has shifted to taking on someone’s identity, a “really big business” that Heywood said AI is fuelling further.


The Canadian Anti-Fraud Centre has said it counted 15,941 victims of fraud in the first half of the year, with $284 million lost in those incidents. There were 41,988 victims and $569 million lost the year before.


Heywood, Smetny and Villeneuve feel the fight against attackers isn’t futile and companies are taking it seriously.


Their employers are running exercises for businesses such as banks and major retailers, simulating what it would be like if their companies were under attack, and helping them prepare staff to address threats and locate and patch software vulnerabilities.


It’s not hard to get businesses to take action, Heywood said, because a cybersecurity breach can cost companies an average of $6 million and result in a stock slump, fewer sales and a broken relationship with customers.


Anything they can do to stop an attack is worth it, he added, because “trust is gained in inches but it’s lost virtually instantly.”


This report by The Canadian Press was first published Oct. 20, 2024. 
