CYE Strategy

GenAI and LLM Security: 4 Threats CISOs Can’t Ignore

September 29, 2025

GenAI and LLM Security: 4 Threats CISOs Can’t Ignore

GenAI adoption is accelerating, and so are AI-driven exposures. Generative AI is reshaping both offense and defense, with analysts warning that it’s making it faster and easier for attackers to outpace our defenses. Gartner forecasts that by 2027, AI agents will cut the time to exploit account exposures by 50%, compressing defenders’ response windows. These exposures start with people and data, then move along plausible routes to business-critical assets. Here are four rapidly growing threats, what they look like in practice, and the first moves that make a difference. 

1. Executive deepfakes and impersonation

What it looks like: Real-time voice or video convincingly mimics a trusted leader to authorize payments, share credentials, or approve sensitive actions. Messages mirror internal tone, timing, and workflows, so they pass “looks right” checks. 

Why it matters: Why it matters: A single fraudulent “approval” has triggered multi-million-dollar transfers. In 2024, engineering firm Arup disclosed a $25M loss after a Hong Kong video call used a digitally cloned executive to order transfers – a clear example of executive impersonation via deepfake.  

Why it matters: A single fraudulent “approval” has triggered multi-million-dollar transfers. In 2024, engineering firm Arup disclosed a $25M loss after a Hong Kong video call used a digitally cloned executive to order transfers – a clear example of executive impersonation via deepfake. 

First moves: Add out-of-band verification for payment and access approvals, step-up authentication for high-value actions, and simple two-person checks for urgent requests.

2. Model and data integrity attacks  

What it looks like: 

  • Poisoning: Malicious or low-quality training data subtly alters outputs over time, degrading analytics and decisions. 
  • Injection: When LLMs or agents fetch and act on web content, adversaries plant hidden instructions in pages, PDFs, or docs to subvert behavior. 

Why it matters: Integrity failures are hard to detect, roll back, and attribute. An injected instruction can pivot an automated workflow or exfiltrate data without tripping traditional controls. 

First moves: Validate and catalog data sources before training, inventory AI agents and their permissions, allow-list and sanitize fetched content for agent browsing, restrict tool use and function calling, and monitor model behavior for drift or unexpected actions. 

3. AI-tuned social engineering

What it looks like: Highly contextual emails, chats, and tickets that match internal style guides, meeting cadences, and naming conventions. Lures often cite fresh public signals (press, social, recent commits) to raise credibility. 

Why it matters: Credential theft, session hijacking, and lateral movement often start here. AI boosts volume, fit, and speed, raising the odds of a single click or reply. 

First moves: Use phishing-resistant MFA, route sensitive requests through authenticated portals instead of email, and update training with AI-shaped examples that match your organization’s voice. 

4. Everyday use of public LLMs

What it looks like: Employees paste source code, contracts, or internal notes into public tools. Data may be stored or processed outside your control; responses can be re-shared without context. 

Why it matters: Confidential data and IP can leave the enterprise boundary. Even small snippets – API keys and internal project names help adversaries map your environment. 

First moves: Publish clear acceptable-use guidelines, prefer enterprise or private LLM endpoints, log and monitor egress to AI services, and mask or tokenize sensitive data where feasible.

What boards will ask, and how to answer 

Two questions come up every time: What could this cost, and what actions reduce our exposure fastest? A CRQ-first approach quantifies exposure in financial terms, maps exploitable routes to business-critical assets, and prioritizes mitigations that remove the most pathway for the least effort.

Tools to Gain Visibility and Reduce Exposure

Here’s how teams use Hyver to address the four AI-driven exposures. 

  • See AI-origin paths to crown jewels via the Attack Graph for deepfakes and impersonation, poisoning and injection, AI-tuned social engineering, and public LLM use 
  • Quantify exposure in financial terms so you can rank what to fix first 
  • Prioritize targeted mitigations that remove the most AI-driven exposure with the least effort
  • Strengthen governance and guardrails across policy, model validation, and user practices, aligned to frameworks
  • Prove progress to the board with views of AI exposure, routes removed, and financial impact 

Want to go deeper? Download The CISO’s Guide to Uncovering and Mitigating GenAI-Driven Threats to learn how to stay ahead of evolving AI risks.

Download the e-book now!

Dr. Nimrod Partush

By Dr. Nimrod Partush

Nimrod is CYE’s VP of AI and Innovation, bringing together deep cybersecurity expertise and extensive AI research experience. He previously served in an elite IDF cyber unit, founded a successful cybersecurity AI startup, and holds a Ph.D. in Computer Science from the Technion. At CYE, he drives the development of cutting-edge insights and capabilities that power the company’s cybersecurity maturity research.