Why Every Company Needs an AI Red Team
We are advising all our portfolio companies to establish AI red teams. The threat surface of LLM-powered applications is fundamentally different from traditional software. Prompt i...
Regulatory compliance is becoming a prerequisite for enterprise AI deployment. Early movers on governance frameworks gain competitive advantage while laggards face escalating compliance costs and deployment delays.
The evolving landscape of AI regulation across jurisdictions, including the EU AI Act implementation, US executive orders, and corporate governance frameworks for responsible AI deployment.
| Source | Type | Items |
|---|---|---|
| Andreessen Horowitz (a16z Blog) | Vc pe | 2 |
| Lex Fridman Podcast | Podcast | 1 |
| Import AI (Jack Clark) | 1 | |
| Exponential View (Azeem Azhar) | 1 | |
| McKinsey Digital Insights | Vc pe | 1 |
| @sataboranova | X influencer | 1 |
We have mapped 1,200+ enterprise AI companies across 47 categories. The fastest-growing segments are: AI agents for operations (340% YoY growth in funding), code generation platforms (280% YoY), and AI governance tools (250% YoY). Notably, the "AI wrapper" category that critics dismissed is now producing companies with $50M+ ARR.
Our annual survey of 1,800 enterprises shows that 72% now have at least one AI application in production, up from 55% a year ago. However, only 18% report having AI embedded across multiple business functions. The primary barriers remain: data quality (cited by 64%), talent shortages (58%), and unclear governance frameworks (52%).
The organisations succeeding with AI share a common pattern: they treat AI as an operating model change, not a technology project. This means restructuring decision rights, creating AI product management roles, establishing federated governance, and measuring outcomes rather than deployments. The technology is 20% of the challenge; the organisation is 80%.
The EU AI Act's first enforcement provisions take effect this quarter. High-risk AI systems -- including those used in employment, credit scoring, and critical infrastructure -- now require conformity assessments, technical documentation, and human oversight mechanisms. The fines for non-compliance can reach 7% of global revenue.
We are advising all our portfolio companies to establish AI red teams. The threat surface of LLM-powered applications is fundamentally different from traditional software. Prompt injection, data poisoning, model theft, and adversarial inputs require specialised security expertise that most organisations lack.
We are advising all our portfolio companies to establish AI red teams. The threat surface of LLM-powered applications is fundamentally different from traditional software. Prompt i...
Our annual survey of 1,800 enterprises shows that 72% now have at least one AI application in production, up from 55% a year ago. However, only 18% report having AI embedded across...
Talked to 30 enterprise CIOs this quarter. Every single one has AI in production. Only 4 have a formal AI governance framework. The gap between deployment velocity and governance m...
The organisations succeeding with AI share a common pattern: they treat AI as an operating model change, not a technology project. This means restructuring decision rights, creatin...
The question is not whether AI will be transformative -- it will. The question is whether we can build institutions and norms that allow us to capture the benefits while managing t...
The EU AI Act's first enforcement provisions take effect this quarter. High-risk AI systems -- including those used in employment, credit scoring, and critical infrastructure -- no...
We have mapped 1,200+ enterprise AI companies across 47 categories. The fastest-growing segments are: AI agents for operations (340% YoY growth in funding), code generation platfor...