Anticipating the Next Wave of AI-Driven Threats

At RSAC 2025, I had the privilege of taking a retrospective look at A Year(ish) of Countering Malicious Actors’ Use of AI alongside colleagues from industry and government. But as we look to the future, the next wave of AI-enabled cyber threats will challenge traditional defense strategies in deeper, more insidious ways. Defenders risk being put on the back foot in the AI race if they cannot effectively counter the converging trends described below.
AI-generated phishing is no longer new. As I noted during the RSAC panel, most of the adversarial AI use we know of has focused on tricking people rather than systems. What’s new, however, is attackers’ ability to move from generic fraud to highly personal, well-timed spear phishing at scale.
Social engineering campaigns of the past relied on lucky guesses or stolen credentials. In 2023 and 2024, early AI and large language model (LLM)-powered phishing attacks focused on reducing syntax errors and crafting more generically believable stories. But the next generation of LLMs can digest and synthesize enormous volumes of publicly available information (PAI), including social media, breach dumps, and scraped online profiles, to create convincing, personalized attacks.
These aren’t just coming to your work inbox. They’re arriving via SMS, personal Gmail accounts, and calendar invites. Unless technical controls span both personal and enterprise environments, organizations will struggle to contain these multi-vector intrusions.
AI can detect threats, but only if it’s trained on the right data.
In ideal conditions—complete telemetry, accurate labeling, continuous visibility—AI-driven threat detection excels. Unfortunately, most real-world environments don’t meet those conditions. Many agencies lack the historical logs, consistent endpoint coverage, and clean labels that are necessary for supervised learning.
Generic use-case data is insufficient to fuel detection-based defensive AI. Because every organization operates differently and structures access, privileges, and security in its own way, creating a detection-based AI model that works out of the box is nearly impossible. A model trained on Fortune 500 traffic patterns may not recognize threats inside a government SCIF or a utility operator’s isolated network.
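To make that concrete, consider what an environment-specific detector looks like in practice. The following is a minimal, illustrative sketch rather than a production pipeline: the file names and feature columns are hypothetical, and it uses scikit-learn’s IsolationForest simply to show that the baseline of “normal” must be learned from the defender’s own telemetry.

```python
# Minimal sketch: fitting an anomaly detector on an organization's OWN
# telemetry rather than a generic dataset. File names and feature columns
# are hypothetical; real deployments need far richer features and labels.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical export of local network telemetry: one row per session.
telemetry = pd.read_csv("local_telemetry.csv")
features = telemetry[["bytes_out", "bytes_in", "session_seconds", "dest_port"]]

# Fit the baseline on this environment's traffic only. A model fit on
# another organization's traffic would encode the wrong notion of "normal".
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# Score new sessions: -1 means anomalous relative to the local baseline.
new_sessions = pd.read_csv("todays_telemetry.csv")[features.columns]
labels = model.predict(new_sessions)
print(f"flagged {(labels == -1).sum()} of {len(labels)} sessions for review")
```

The decisive step is the fit: the baseline comes from the defender’s own curated telemetry, which is exactly the data many organizations lack.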
This asymmetry, in which attackers can use AI out of the box while defenders require curated training datasets, means that malicious actors’ use of AI to attack networks is likely to outpace defenders’ ability to counter those attacks, particularly as AI-enhanced malware toolkits begin to proliferate.
Campaigns from nation-state-backed threat actors like Volt Typhoon and Salt Typhoon highlight a broader trend: pre-positioning for future attacks on the privately owned and operated infrastructure that forms the backbone of day-to-day life.
Utilities, power, telecoms, and healthcare networks have become high-value targets, not for their IP but for the critical role they play in modern society. AI-enhanced reconnaissance and lateral-movement tools make it easier than ever for adversaries to learn about and exploit these environments. Adversaries’ use of Living off the Land (LOTL) techniques, which abuse legitimate system tools rather than dropping new malware, makes their presence especially difficult to spot, particularly if the robust datasets mentioned earlier are not in place to feed defensive tools.
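As a simple illustration of why those local baselines matter for LOTL detection, here is a hedged sketch that flags parent/child process pairs that are rare in an environment’s own history. The event data, process names, and threshold are all invented for illustration; real detection engineering would draw on far richer context.

```python
# Minimal sketch of baseline-driven LOTL detection: flag parent/child process
# pairs that are rare in this environment's own history. The log contents and
# threshold are hypothetical stand-ins for real telemetry.
from collections import Counter

# Hypothetical historical process-creation events: (parent, child) pairs,
# repeated to stand in for months of collected telemetry.
history = [
    ("services.exe", "svchost.exe"),
    ("explorer.exe", "outlook.exe"),
    ("svchost.exe", "wmiprvse.exe"),
] * 1000

baseline = Counter(history)
total = sum(baseline.values())

def is_suspicious(parent: str, child: str, threshold: float = 1e-4) -> bool:
    """Pairs that are rare relative to the local baseline merit review."""
    return baseline[(parent, child)] / total < threshold

# An LOTL-style event: a legitimate binary spawned from an unusual parent.
print(is_suspicious("winword.exe", "powershell.exe"))  # True: never seen here
print(is_suspicious("services.exe", "svchost.exe"))    # False: routine
```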
Defending against these risks requires more than software-based detection techniques, which by nature struggle against zero-days and other novel attack vectors. It also requires technologies like hardware-enforced browser isolation, high-security virtual environments, and content filtering. These technologies prevent the ingress of malicious code and the unauthorized movement of data and Command and Control (C2) traffic in and out of critical networks.
AI threats aren’t the only ones accelerating. Advances in quantum computing are also forcing a fundamental rethink of encryption.
Public key encryption (PKE) may remain viable for a few more years, but “hack now, crack later” strategies are already in play. Nation-state adversaries are actively stealing encrypted data today, betting they’ll be able to decrypt it when quantum systems reach scale.
What’s at stake? Schematics, research data, diplomatic communications, and operational planning documents, many of which will retain their intelligence value even if they are decrypted months or even years after acquisition. The U.S. and its allies are racing to deploy Post-Quantum Cryptographic (PQC) algorithms, but the window to re-encrypt critical data is shrinking.
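For teams starting that transition, a quantum-resistant key exchange can be prototyped today. The sketch below assumes the open-source liboqs-python bindings are installed with an ML-KEM-capable liboqs build (algorithm names vary by version); it is illustrative, not a hardened deployment.

```python
# Minimal sketch of a post-quantum key exchange using liboqs-python.
# Assumption: `pip install liboqs-python` with a liboqs build exposing ML-KEM.
import oqs

KEM_ALG = "ML-KEM-768"  # older liboqs builds may expose this as "Kyber768"

with oqs.KeyEncapsulation(KEM_ALG) as receiver, oqs.KeyEncapsulation(KEM_ALG) as sender:
    # Receiver publishes a public key; the private key stays with `receiver`.
    public_key = receiver.generate_keypair()

    # Sender encapsulates a fresh shared secret against that public key.
    ciphertext, sender_secret = sender.encap_secret(public_key)

    # Receiver recovers the same secret, which can then key symmetric encryption.
    receiver_secret = receiver.decap_secret(ciphertext)
    assert sender_secret == receiver_secret
```

The shared secret, not the KEM itself, encrypts the data, which is why re-encrypting already-stolen ciphertext offers no protection: the race is to move data under PQC before adversaries’ archives become readable.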
AI isn’t going away. Neither are the threats. Defending against AI-accelerated risks means confronting strategic blind spots, not just patching technical ones.
Federal and critical infrastructure leaders must rethink their assumptions about identity, access, and trust. Security controls must operate across domains, data types, and user devices. And AI must be treated as both an asset and a risk.
Everfox is committed to helping government and critical infrastructure organizations meet this moment—with secure, dynamic, and future-ready solutions.
Download the “Unleashing AI for Government” Whitepaper to learn more about how we harden AI pipelines, sanitize content, and secure data across domains.
Field CTO, Cybersecurity
Adam Maruyama is the Field CTO for Digital Transformation and AI at Everfox. A passionate technologist, Adam is an expert in countering emerging cyber threats like adversarial AI and in leveraging trusted, high-assurance solutions to help organizations securely adopt the latest technologies, including AI, into their environments.
Before entering the private sector, Adam spent over 15 years in government supporting cyber and counterterrorism operations, including numerous warzone tours and co-leading the drafting of the 2018 National Strategy for Counterterrorism. In industry, Adam has also served commercial and government customers at McKinsey & Company and Palo Alto Networks.
Adam has published extensively in AI, cybersecurity, and policy publications, including DarkReading, AI Journal, Cipher Brief, and The Hill. He has presented at numerous conferences and virtual venues, including RSAC 2025.