Cybercriminals Are Weaponising AI For Attacks

08/08/2025
Lasse Peters

In a recent meeting, I asked, “What do we do if someone uses AI to imitate our voices?” Most people looked uneasy. And rightly so. In 2024, a multinational company lost $25 million to a deepfake scam. By 2027, AI is expected to be involved in 17% of all cyberattacks.

AI has become just another tool in the criminal toolkit, used for phishing, deepfakes, and automated attacks. The tools are cheap and accessible. The threat is real.

Phishing Becomes Perfect

Generative AI, including large language models, is making phishing emails far more convincing. According to the Zscaler ThreatLabz 2025 Phishing Report, many attacks now rely on AI-generated content. The resulting personalised emails are written in flawless English and mimic the style and tone of real contacts.

You receive an email from your “colleague.” The grammar checks out. The tone feels right. You’re urged to act quickly. You click. It’s too late.

Success rates are rising, and fraud attempts are becoming more efficient. What used to give itself away through spelling mistakes is now flawlessly disguised.
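
Because the wording no longer gives an attack away, technical sender verification carries more weight. Below is a minimal sketch in Python that flags messages failing DMARC, assuming your own mail gateway stamps an Authentication-Results header (RFC 8601); the parsing and the policy are illustrative, not a complete defence.

  import email
  from email.message import Message

  # Sketch: flag messages whose sender failed SPF/DKIM/DMARC verification.
  # Assumes the receiving gateway adds an Authentication-Results header
  # (RFC 8601); production systems should trust the gateway's verdict.

  def auth_results(msg: Message) -> dict:
      """Extract pass/fail verdicts from the Authentication-Results header."""
      verdicts = {}
      for part in msg.get("Authentication-Results", "").split(";"):
          part = part.strip()
          for mech in ("spf", "dkim", "dmarc"):
              if part.startswith(mech + "="):
                  verdicts[mech] = part.split("=", 1)[1].split()[0]
      return verdicts

  def looks_suspicious(raw_bytes: bytes) -> bool:
      verdicts = auth_results(email.message_from_bytes(raw_bytes))
      # A missing or failed DMARC verdict is treated as suspicious.
      return verdicts.get("dmarc", "none") != "pass"

No header check spots a well-crafted lure on its own, but it closes the easiest spoofing routes before a human ever reads the message.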

Deepfakes Deceive People

Synthetic voices and videos are becoming indistinguishable from the real thing. Trend Micro reported in July 2025 that deepfakes are being used for fraud, extortion, and identity theft. The tools are cheap and widely available.

Companies have reported cases in which CEO voices were cloned and the fake executives authorised transfers. The FBI has been warning about deepfake voice phishing since May 2024: victims are manipulated by AI-generated voices.

How to detect deepfake calls:

  • Ask for details only the real person would know
  • Listen for unnatural pauses or repetitions
  • Hang up and call back using a known number
  • Set up codewords for important requests
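
Process beats instinct here. The codeword idea can even be enforced in software before any money moves; a minimal sketch, where the codeword store, role names, and amount threshold are all hypothetical placeholders:

  import hmac

  # Sketch: verify a caller's pre-agreed codeword before acting on a
  # payment request. Codewords are agreed out of band and rotated; the
  # store and threshold below are hypothetical placeholders.

  CODEWORDS = {"cfo": "quiet-harbour-42"}

  def caller_verified(claimed_role: str, spoken_codeword: str) -> bool:
      expected = CODEWORDS.get(claimed_role)
      if expected is None:
          return False
      # compare_digest avoids leaking information through timing.
      return hmac.compare_digest(expected, spoken_codeword.strip().lower())

  def authorise_transfer(claimed_role: str, spoken_codeword: str, amount: int) -> bool:
      if not caller_verified(claimed_role, spoken_codeword):
          return False  # hang up and call back on a known number
      return amount <= 10_000  # larger sums still need a second approver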

Autonomous AI Attacks

Agentic AI refers to autonomous agents: systems that carry out tasks without human intervention and make their own decisions in pursuit of a goal.

TechRadar and Netscout warn that these agents now perform reconnaissance, automate credential stuffing, and independently run phishing campaigns. An attacker simply instructs: “Find login credentials for this target.” The bot takes care of the rest.
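
The flip side is that this automation leaves patterns in your logs. A minimal detection sketch for credential stuffing, where the event format, window, and threshold are illustrative assumptions:

  from collections import defaultdict
  from datetime import timedelta

  # Sketch: flag source IPs that fail logins against many distinct
  # accounts within a short window, the signature of automated
  # credential stuffing. Thresholds here are illustrative.

  WINDOW = timedelta(minutes=10)
  MAX_DISTINCT_ACCOUNTS = 20

  def stuffing_sources(failed_logins):
      """failed_logins: iterable of (timestamp, source_ip, username)."""
      by_ip = defaultdict(list)
      for ts, ip, user in failed_logins:
          by_ip[ip].append((ts, user))
      flagged = set()
      for ip, events in by_ip.items():
          events.sort()
          for i, (start, _) in enumerate(events):
              users = {u for ts, u in events[i:] if ts - start <= WINDOW}
              if len(users) >= MAX_DISTINCT_ACCOUNTS:
                  flagged.add(ip)
                  break
      return flagged

Password spraying, where one password is tried against many accounts, leaves a similar footprint and can be caught with the same grouping.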

According to Picus Security, the ransomware group “Global Group” is already using chatbots to negotiate with victims. AI is handling parts of the extortion process. While humans still make critical decisions, the system allows attackers to scale more efficiently.

Attack Methods in Detail

  • Automated phishing: AI language models generate millions of personalised, grammatically perfect emails
  • Deepfake attacks: Synthetic voices and videos used for fraud and impersonation
  • AI-powered password spraying: AI analyses password patterns and tests variations efficiently
  • Autonomous reconnaissance: AI agents independently gather intelligence on targets
  • Automated ransomware: Chatbots handle negotiations with victims

Why This Is So Dangerous

It’s about speed, scale, and precision. What used to take time and effort, AI now accomplishes in seconds. Millions of phishing emails. Personalised outreach. Deceptive voice messages. Criminals are scaling attacks to a level that was previously impossible.

Europol warns in its 2025 EU Serious and Organised Crime Threat Assessment that organised crime is becoming faster and more precise thanks to AI. This isn't just about isolated cases of fraud: these are complex operations targeting states, businesses, and individuals simultaneously.

Many of these systems operate anonymously. Tools like GhostGPT allow perpetrators with no technical knowledge to create fake websites, phishing messages, and malware.

Immediate Measures for Companies

  1. Raise employee awareness
  • What does “urgency” mean in an email?
  • How do I recognise deepfake emails or calls?
  • When should I ask questions instead of acting?
  2. Implement technical security measures
  • Multi-factor authentication across all systems (a TOTP sketch follows this list)
  • Endpoint Detection and Response (EDR)
  • DNS filtering to block malicious domains
  • Email security with AI-based detection
  3. Introduce a Zero Trust Architecture
  • Don’t automatically trust anyone
  • Verify every request
  • Limit access to the bare minimum
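
To make the MFA item above concrete: a minimal sketch of time-based one-time passwords using the open-source pyotp library. The in-memory user store is a hypothetical placeholder; in practice, MFA comes from your identity provider rather than hand-rolled code.

  import pyotp

  # Sketch: TOTP enrolment and verification with the pyotp library.
  # The in-memory secret store is a hypothetical placeholder.

  USER_SECRETS: dict[str, str] = {}

  def enrol_user(username: str) -> str:
      secret = pyotp.random_base32()
      USER_SECRETS[username] = secret
      # Render this URI as a QR code for the user's authenticator app.
      return pyotp.TOTP(secret).provisioning_uri(name=username, issuer_name="ExampleCorp")

  def verify_mfa(username: str, code: str) -> bool:
      secret = USER_SECRETS.get(username)
      if secret is None:
          return False
      # valid_window=1 tolerates one 30-second step of clock drift.
      return pyotp.TOTP(secret).verify(code, valid_window=1)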

The security vendor Check Point emphasises: AI-powered defence is now essential. You can no longer rely on reactive measures. You need to proactively detect whether AI systems are being used against you.

Researchers are also proposing adaptations of the cyber kill chain model for the AI era, along with counterstrategies such as adversarial training and deception techniques.
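
Deception can start small: credentials planted where an intruder would find them but that no legitimate process ever uses. A minimal honeytoken sketch; the token values and the alerting hook are hypothetical:

  import logging

  # Sketch: honeytokens are planted credentials no legitimate process
  # uses, so any touch is a high-confidence intrusion signal. The
  # values and the alert channel are hypothetical placeholders.

  HONEYTOKENS = {"svc_backup_admin", "AKIAFAKEACCESSKEY000"}

  def check_honeytoken(identifier: str, source_ip: str) -> bool:
      """Call from your auth path; True means a honeytoken was touched."""
      if identifier in HONEYTOKENS:
          logging.critical("honeytoken %s used from %s; investigate",
                           identifier, source_ip)
          return True
      return False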

Act Now

AI is no longer a future technology. It’s a real threat today. Cybercriminals are already using AI to:

  • Make phishing emails more effective and believable
  • Leverage deepfakes for fraud and extortion
  • Automate attacks through agentic AI systems
  • Scale ransomware operations via chatbots

Your security strategy must rest on three pillars:

  • People: Aware employees recognise deception early
  • Technology: AI-powered defence mechanisms stop automated attacks
  • Processes: Clear protocols ensure the right response in a crisis

Attacks are becoming faster, more sophisticated, and harder to trace. Without effective countermeasures, we risk becoming passive spectators in a digital war that has already begun.

Take the time to review your security measures, raise awareness across your teams, and ensure your processes are ready to handle AI-driven threats. A clear strategy built on people, technology, and routines can make all the difference. If you’d like support reviewing your current setup, our experts are here to help.