Intel

AI Security & Safety

security Rising Active
Momentum 7.1
Total Mentions 33
First Seen 16 Feb 2026
Last Seen 29 Mar 2026

Weekly Change

Mentions: +3 Momentum: +1.30

Why It Matters

Security incidents involving AI systems are increasing. Enterprises deploying LLMs without robust safety measures face data breach risks, reputational damage, and regulatory penalties that can exceed the value of the AI deployment.

Summary

Emerging threats and defensive patterns for AI systems in production. Covers prompt injection, model poisoning, data exfiltration via LLMs, red teaming practices, and guardrail architectures.

Momentum Over Time

Source Breakdown

SourceTypeItems
Lex Fridman Podcast Podcast 1
Import AI (Jack Clark) 1
Andreessen Horowitz (a16z Blog) Vc pe 1
@saboreman X influencer 1

Notable Excerpts

We are advising all our portfolio companies to establish AI red teams. The threat surface of LLM-powered applications is fundamentally different from traditional software. Prompt injection, data poisoning, model theft, and adversarial inputs require specialised security expertise that most organisations lack.

Andreessen Horowitz (a16z Blog) 84% relevant

New research from ETH Zurich demonstrates prompt injection attacks that bypass all known defensive measures with 97% success rate. As enterprises connect LLMs to internal tools and databases, the attack surface expands dramatically. We need to treat LLM-connected systems with the same security rigour as we treat database-connected web applications.

83% relevant

We are starting to see a new category of tech debt: AI-generated code that nobody on the team fully understands. It works, it passes tests, but when it breaks nobody knows why. Engineering leaders need to think about this before rolling out autonomous coding tools org-wide.

@saboreman 70% relevant

Related Items

The Prompt Injection Problem Is Getting Worse

New research from ETH Zurich demonstrates prompt injection attacks that bypass all known defensive measures with 97% success rate. As enterprises connect LLMs to internal tools and...

83% High

Why Every Company Needs an AI Red Team

We are advising all our portfolio companies to establish AI red teams. The threat surface of LLM-powered applications is fundamentally different from traditional software. Prompt i...

Andreessen Horowitz (a16z Blog) 84% High

Dario Amodei on Responsible AI Scaling

The question is not whether AI will be transformative -- it will. The question is whether we can build institutions and norms that allow us to capture the benefits while managing t...

Lex Fridman Podcast 68% High