What is AI Security Posture Management (AI-SPM)?
AI Security Posture Management (AI-SPM) is the discipline of continuously inventorying, assessing, and improving the security posture of every AI asset across an enterprise — models, agents, datasets, integrations, MCP servers, RAG pipelines, third-party APIs. It is the AI-native analog of CSPM (Cloud Security Posture Management) and DSPM (Data Security Posture Management) — same operational rhythm, different inventory.
What AI-SPM actually covers
A mature AI-SPM program tracks four categories of assets:
- Models in use — every foundation model API integration, every fine-tuned model, every adapter, every distilled variant, including shadow-AI models: usage that didn't go through procurement or security review.
- Agentic deployments — every AI agent in production, with its connected tools, data sources, MCP servers, and authority scope.
- AI training and inference data flows — what data crosses into model context, what data is used for fine-tuning, and what data sits in retrieval indexes, mapped to data classifications (PII, PHI, regulated, internal-only).
- Third-party integrations — every embedding API, every RAG provider, every AI SaaS that holds enterprise data. Each is a third-party security dependency.
For each asset, AI-SPM tracks: what it is, where it lives, who owns it, what data flows through it, what its security controls are, and what its current risk score is.
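The per-asset record described above can be sketched as a minimal inventory schema. This is an illustrative assumption, not a real product schema — the class names, enum values, and fields are hypothetical stand-ins for the six things AI-SPM tracks:

```python
from dataclasses import dataclass, field
from enum import Enum

class AssetCategory(Enum):
    MODEL = "model"             # foundation model integrations, fine-tunes, adapters
    AGENT = "agent"             # agentic deployments with tools and MCP servers
    DATA_FLOW = "data_flow"     # training / inference / retrieval data paths
    THIRD_PARTY = "third_party" # embedding APIs, RAG providers, AI SaaS

class DataClassification(Enum):
    PII = "pii"
    PHI = "phi"
    REGULATED = "regulated"
    INTERNAL_ONLY = "internal_only"

@dataclass
class AIAsset:
    """One row in the inventory: what it is, where it lives, who owns it,
    what data flows through it, its controls, and its current risk score."""
    name: str
    category: AssetCategory
    location: str                                      # cloud account, repo, or endpoint
    owner: str                                         # accountable team or individual
    data_classes: list[DataClassification] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)  # applied security controls
    risk_score: float = 0.0                            # current posture score

# Example: a shadow-AI model integration discovered without security review
shadow = AIAsset(
    name="support-bot-gpt4",
    category=AssetCategory.MODEL,
    location="github.com/acme/support-bot",
    owner="unassigned",
    data_classes=[DataClassification.PII],
)
```

A flat record like this is what lets posture be queried ("all agents touching PII with no owner") rather than rediscovered ad hoc.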
Why AI-SPM is its own discipline
Existing security tooling — vulnerability scanners, CSPM, SIEM, DLP — doesn't see AI assets natively:
- Model integrations don't appear in vulnerability scans. A `pip install langchain` followed by an OpenAI API call doesn't surface as a security event.
- Shadow AI usage is invisible to traditional MDM/EDR. Employees connecting personal ChatGPT, Claude, or Cursor accounts to enterprise data go unmonitored.
- Agentic systems have no equivalent to network firewall rules. Tool authorities and prompt-time data flows live above the network layer.
- Data lineage doesn't reach into model context. Standard DSPM tools track data at rest and in transit, not "this PII was retrieved into a model's context window."
The AI-SPM workflow
A continuous loop:
- Discover — automated discovery of AI assets across cloud accounts, repositories, browser usage, and inbound traffic
- Classify — categorize each asset by sensitivity, regulatory regime, and business criticality
- Assess — measure current security posture against frameworks (OWASP LLM Top 10, NIST AI RMF, ISO 42001)
- Prioritize — rank gaps by impact × likelihood, surface to owners
- Remediate — apply controls, with verification
- Monitor — continuous visibility on changes, new assets, drift
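The six steps above can be sketched as one pass of the loop. The callables here are stand-in stubs (assumptions, not a real integration API), and the risk formula follows the impact × likelihood ranking from the prioritize step:

```python
def risk(impact: float, likelihood: float) -> float:
    """Prioritize step: rank posture gaps by impact x likelihood (each 0-1)."""
    return impact * likelihood

def posture_pass(discover, classify, assess, remediate):
    """One iteration of the AI-SPM loop. The four callables are hypothetical
    hooks into real discovery, classification, assessment, and remediation."""
    assets = discover()                       # 1. Discover AI assets
    for asset in assets:
        asset["class"] = classify(asset)      # 2. Classify by sensitivity
        gaps = assess(asset)                  # 3. Assess vs. frameworks
        # 4. Prioritize: highest impact x likelihood first
        gaps.sort(key=lambda g: risk(g["impact"], g["likelihood"]), reverse=True)
        for gap in gaps:
            remediate(asset, gap)             # 5. Remediate, with verification
    return assets                             # 6. Monitor: feed into the next pass

# Toy run with in-memory stubs
found = posture_pass(
    discover=lambda: [{"name": "support-bot"}],
    classify=lambda a: "pii",
    assess=lambda a: [{"impact": 0.4, "likelihood": 0.9},
                      {"impact": 0.9, "likelihood": 0.5}],
    remediate=lambda a, g: None,
)
```

Because the output of one pass seeds the next, "monitor" isn't a separate tool — it's the loop running continuously against a changing inventory.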
How it differs from AI red teaming
- Red teaming finds specific exploitable vulnerabilities through adversarial probing
- AI-SPM maintains continuous visibility on the whole inventory and tracks posture over time
They're complementary: red teaming finds the holes; AI-SPM ensures you know what assets exist to test in the first place.
See also
Repello's VANTAGE framework lays out the full operational model. The product side is AI Asset Inventory.