Staying Ahead of the New AI Threat Landscape
Benjamin Clark recaps his recent session with Moriah Hara on how organizations can enhance their cybersecurity as AI threats rise.
September 30, 2025 | Ben Clark
TLDR: AI is transforming cybersecurity, and enterprises must adopt secure-by-design practices and proactive governance now to stay ahead of emerging threats and evolving regulations. View the whole conversation here.
Tomorrow kicks off Cybersecurity Awareness Month, a time when security takes center stage for organizations everywhere. It’s the perfect backdrop for exploring how artificial intelligence is reshaping the way we think about data, security, and compliance. That makes my recent conversation with Moriah Hara, a three-time Fortune 500 CISO and one of the leading voices in AI and cybersecurity, especially timely. Together, we examined how AI is transforming the threat landscape, where security frameworks are falling behind, and what organizations can do to future-proof their data strategies.
The AI Gold Rush Meets Exploding Data Growth
Every day, enterprises generate massive amounts of unstructured data: emails, videos, call recordings, and documents. By 2025, we’ve reached a staggering 463 exabytes of new data created daily. AI thrives on this fuel, learning from and generating even more unstructured information in a never-ending loop.
Though great for innovation, this growth is also a double-edged sword. Legacy compliance models were never designed to handle the velocity and complexity of AI-driven data flows. Data is no longer just stored and analyzed; it is processed in real time, often across multiple clouds and vendors. This creates new blind spots and risks.
Emerging Threats You Can’t Ignore
Moriah and I highlighted some of the most pressing AI-specific attack vectors already in play:
- Data poisoning: attackers slip bad data into training sets to corrupt outcomes. In enterprise scenarios, bad actors can poison a fraud detection model with benign-looking fraudulent transactions, training it to overlook actual fraud patterns.
- Model theft and inversion: bad actors reverse-engineer proprietary algorithms and exfiltrate sensitive data. Security researchers have successfully used model inversion techniques on facial recognition systems to reconstruct approximate images of people from training data, even when original photos weren’t publicly available.
- Deepfakes and synthetic fraud: fake job applicants on Zoom, AI-generated CEO voicemails pressuring staff into fraudulent actions, and manipulated audio or video that undermines trust. The FBI reported over $410 million in losses from deepfake audio scams in the first half of 2025 alone, and these attacks are accelerating.
- AI-mutated ransomware: malware that rewrites itself faster than signature-based tools can keep up. Security researchers demonstrated “BlackMamba,” an AI-powered keylogger that dynamically rewrote parts of its code each time it ran, making it invisible to traditional antivirus tools.
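To make the first of these concrete, here is a minimal, illustrative sketch (not from the session) of how data poisoning works: a handful of mislabeled records slipped into a training set drags a simple fraud model's decision boundary until a real fraudulent transaction looks benign. The toy nearest-centroid "model" and the amounts are hypothetical.

```python
# Toy fraud model: one feature (transaction amount), nearest-centroid classifier.

def centroid(values):
    """Mean of a list of feature values."""
    return sum(values) / len(values)

def train(legit, fraud):
    """'Train' by remembering the mean amount of each class."""
    return {"legit": centroid(legit), "fraud": centroid(fraud)}

def predict(model, amount):
    """Label a transaction by whichever class centroid is closer."""
    return min(model, key=lambda label: abs(amount - model[label]))

legit = [10, 20, 30, 25, 15]       # normal transaction amounts
fraud = [900, 950, 1000, 980]      # known-fraud amounts

clean_model = train(legit, fraud)
assert predict(clean_model, 700) == "fraud"

# Poisoning: the attacker slips fraud-sized amounts into the *legit* training
# data, dragging the legit centroid toward fraud territory.
poisoned_legit = legit + [900, 950, 1000, 980, 990, 960]
poisoned_model = train(poisoned_legit, fraud)
print(predict(poisoned_model, 700))  # the same $700 transaction now looks legit
```

Real models are far more complex, but the failure mode is the same: the model faithfully learns whatever its training data says, including the lies.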
These are not future scenarios. They are happening now, and most enterprises still lack the governance or security frameworks to respond effectively.
Building Secure AI by Design
The solution is to move beyond bolt-on controls and embrace Secure by Design principles for AI. For context, the term “secure by design” means thinking about security and governance from day one, not as an afterthought. In an AI setting, that means securing not just the infrastructure but the data lifecycle itself: how data is sourced, stored, labeled, accessed, and used for model training.
Moriah laid out several critical steps in her ARCHITECT-AI framework, including:
- Classifying and sanitizing data before it enters your AI pipelines
- Scanning both code and model artifacts for vulnerabilities
- Enforcing zero-trust access controls with MFA
- Testing for model fairness and bias
- Encrypting models in transit and at rest
- Monitoring for “model drift” and suspicious behaviors in real time
- Updating incident response plans to account for AI-driven breaches
This playbook reduces risk without slowing down innovation, and it is one every enterprise should be implementing now.
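The first step on that checklist, classifying and sanitizing data before it enters AI pipelines, can be sketched as a simple ingestion gate. This is my own hedged illustration, not part of Moriah's framework; the PII patterns and policy names are assumptions.

```python
# Hypothetical ingestion gate: classify a record, redact PII, then admit it.

import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(record: str) -> str:
    """Tag a record 'restricted' if it contains any PII pattern, else 'public'."""
    if any(p.search(record) for p in PII_PATTERNS.values()):
        return "restricted"
    return "public"

def sanitize(record: str) -> str:
    """Redact PII so the record can safely enter a training pipeline."""
    for name, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{name.upper()} REDACTED]", record)
    return record

def admit_to_pipeline(record: str) -> str:
    """Gate: classify first, sanitize anything restricted, then admit."""
    return sanitize(record) if classify(record) == "restricted" else record

print(admit_to_pipeline("Contact jane@example.com, SSN 123-45-6789"))
# -> Contact [EMAIL REDACTED], SSN [SSN REDACTED]
```

In production this gate would sit in front of every data source feeding model training, with the classification result logged for the audit trail discussed below.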
Regulation and Accountability Are Moving Quickly
From California’s algorithmic discrimination laws to the EU AI Act, regulators are moving fast. Even if your state or industry has not passed AI legislation yet, auditors will expect clear lineage, documentation, and explainability for how your AI systems operate.
Moriah shared a simple framework: track data provenance, document model governance, enforce role-based access, validate pipelines, and prepare audit-ready evidence. The message was clear. Get defensible now, not later.
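One way to make "track data provenance" audit-ready is a tamper-evident log, where each entry is chained to the hash of the previous one so any after-the-fact edit is detectable. This is a minimal sketch under my own assumptions; the field names are illustrative, not a compliance standard.

```python
# Hash-chained provenance log: each entry commits to the previous entry's hash.

import hashlib
import json

def record_event(log, dataset, action, actor):
    """Append a provenance entry chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"dataset": dataset, "action": action, "actor": actor, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("dataset", "action", "actor", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
record_event(log, "claims_2025.csv", "ingested", "etl-service")
record_event(log, "claims_2025.csv", "used-for-training", "model-ci")
assert verify(log)

log[0]["actor"] = "unknown"  # tamper with history...
assert not verify(log)       # ...and verification fails
```

Handing an auditor a verifiable chain like this is far stronger evidence than a spreadsheet of claimed lineage.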
Why Collaboration Matters Most
One of my favorite parts of our discussion was the reminder that securing AI is not just a security team job. It takes infrastructure, developers, legal, compliance, and business leaders all rowing in the same direction. As Moriah put it, “This is a team sport.”
Organizations need dedicated leadership for AI risk management. While many companies don’t yet have a Chief AI Risk Officer, security leaders are uniquely positioned to step into this role given their existing mandate around risk ownership and experience with secure-by-design practices. The goal is transforming security from just managing risk into a competitive advantage.
Preparing Now for the AI Future
If there is one takeaway from this session, it is that AI cannot be treated like just another tech trend. It changes the attack surface, the compliance landscape, and the way teams collaborate. By building secure foundations and proactive governance today, you can set your organization up not just to manage AI risk but to thrive in an AI-driven future.
At Nasuni, we are helping enterprises secure and protect the unstructured data that fuels AI. Because at the end of the day, the organizations that move boldly and embrace innovation while safeguarding their data won’t just keep pace with AI. They’ll set the standard for how it’s done.