After years of generative AI adoption, the initial thrill has waned, and attackers and defenders alike are working hard to integrate AI-powered tools into real-world use cases.
By lowering the barrier to entry for script kiddies and enabling new capabilities for highly skilled black hat hackers, AI is on every defender's mind.
AI-generated cyber-attacks were cited as the top threat to organizations in the Infosecurity Europe Cybersecurity Trends Report 2025. AI is also driving increased investment, with 71% of those who expect to raise their cybersecurity budgets citing AI as the main reason, the report found.
Meanwhile, genAI companies are poised to launch the next generation of their AI-powered assistants: AI agents that can perform tasks on our behalf.
In light of these developments, it is essential to discuss how organizations can mount a defense against AI attacks.
Against this backdrop, one of the first keynote sessions at the upcoming Infosecurity Europe 2025 conference will bring together AI experts to share insights on how defenders can fight back against AI threats.
The session, titled "Calling BS on AI – Strategies to defeat Deepfake and other AI attacks," will focus on deepfakes and AI-powered social engineering campaigns, two of the most prominent AI threats today.
Andrea Isoni, Chief AI Officer at AI Technologies, will be joined by Heather Lowrie, Co-Founder of Resilionix; Zeki Turedi, Field CTO for Europe at CrowdStrike; and Graham Cluley, host of the 'Smashing Security' and 'The AI Fix' podcasts.
Too Late for Text & Image Deepfake Detectors
Speaking to Infosecurity, Isoni said he believes a distinction should be made between defending against AI-powered text and images, on the one hand, and AI-generated video and audio on the other.
"Unfortunately, detecting fake content in images and text is very hard and is going to fail often, mainly because AI-powered text and image generation technologies are already too good and improving," Isoni said.
He believes that detection technologies for synthetic text and images should be part of "baseline security," alongside the use of passwords, encryption technologies, multifactor authentication (MFA) and workforce training.
He also argued that these technologies must embed AI.
"To fight AI at scale, we do need AI. For any scenario where a large volume of information is at play, AI agents or other AI-powered software solutions will be needed," he said.
"Yes, there are some software solutions aimed at detecting synthetic content poisoning without necessarily using AI, like watermarking, but they have shown mixed results so far," he added.
Isoni is more optimistic about the efficacy of deepfake detectors in combating AI-generated video and audio, for two main reasons:
- AI generation technologies for audio and video are still not good enough
- Data about a specific individual is harder to obtain – unless you are famous, it is difficult to get a long enough sample of you talking or on video
However, whatever these deepfake detectors ultimately become, Isoni believes they will not be sufficient against AI-generated content and deepfakes.
"Deepfake detectors will not put an end to synthetic content poisoning, just as antiviruses did not put an end to malware," he said.
Using Standards and Regulations for AI Risk Assessment
Beyond basic security measures and detection tools, Isoni advocated for organizations to assess their worst-case threat scenarios and develop an incident response plan based on them, incorporating a risk management approach.
"Standards like ISO/IEC 42001 and regulations like the EU AI Act can help organizations develop a risk-based plan, as they clarify what the risks are and the fines involved," the expert said.
Isoni also advised organizations interested in mitigating AI threats to explore the emerging industry of 'AI safety layer' products, designed to secure and control AI models, protecting them from being hacked and from harming end users with malicious output.
These solutions could prove especially useful as the adoption of AI agents grows, he concluded.
Learn More About AI Threats at Infosecurity Europe
AI threats and security risks will be a major focus of this edition of Infosecurity Europe. Register here to attend and discover the latest developments and research in genAI and the broader cybersecurity landscape.
The full program can be viewed here.
The 2025 event will celebrate the 30th anniversary of Infosecurity Europe, taking place at the London ExCeL from June 3-5, 2025.