Crooks cook up wide menu of AI-flavoured crimes

Cybercriminals will leverage artificial intelligence both as an attack vector and as an attack surface, with deepfakes currently the best-known malicious use of AI, according to a new report jointly developed by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro.

“AI promises the world greater efficiency, automation and autonomy,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. 

“At a time when the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits of AI technology,” said Šileris.

The report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support convincing social engineering attacks at scale; power document-scraping malware that makes attacks more efficient; evade image recognition and voice biometrics; sharpen ransomware attacks through intelligent targeting and evasion; and pollute data by identifying blind spots in detection rules.

To address these threats, the report recommends harnessing the potential of AI as a crime-fighting tool to future-proof the cybersecurity industry and policing, and continuing research to stimulate the development of defensive technology.

The paper also recommends promoting and developing secure AI design frameworks, de-escalating politically loaded rhetoric on the use of AI for cybersecurity purposes, leveraging public-private partnerships and establishing multidisciplinary expert groups.

“As AI applications start to make a major real-world impact, it’s becoming clear that this will be a fundamental technology for our future,” said Irakli Beridze, Head of the Centre for AI and Robotics at UNICRI. “However, just as the benefits to society of AI are very real, so is the threat of malicious use.”

The paper also warns that AI systems are being developed to enhance the effectiveness of malware and to disrupt anti-malware and facial recognition systems.

“AI is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works,” said Tony Lee, head of consulting in Hong Kong and Macau at Trend Micro.