Back to all blogs

|
|
7 min read


TL;DR: Palo Alto Networks acquired Protect AI and absorbed its products into Prisma Cloud. For teams that used Protect AI's standalone tools for ML model scanning, AI red teaming, and supply chain security, the acquisition means a different pricing model, a different vendor relationship, and a roadmap that now serves Palo Alto's broader security platform priorities. This post covers five like-for-like alternatives for teams re-evaluating their AI security testing stack.
What the Palo Alto acquisition changed
Protect AI built a suite of AI-specific security tools: Guardian for ML model scanning, Recon for AI asset discovery, and a red teaming layer for LLM vulnerability testing. It also maintained ModelScan, an open source tool for scanning model files for malicious code and backdoors. Before the acquisition, Protect AI was one of the few vendors focused specifically on the AI security stack rather than adapting general-purpose security tooling.
Palo Alto Networks acquired Protect AI and integrated its capabilities into the Prisma Cloud platform. The products continue to exist, but they now sit inside Palo Alto's broader enterprise security portfolio.
The practical impact for most teams is threefold. First, pricing. Protect AI's standalone tools were accessible to AI teams at AI-company price points. Inside Prisma Cloud, pricing follows Palo Alto's enterprise licensing model. For teams that are not already Palo Alto customers, the acquisition made the cost of entry significantly higher.
Second, roadmap focus. Palo Alto's core business is network security, SASE, and cloud security. AI security is now one investment area among many. Teams that need deep, continuously updated coverage of AI-specific attack surfaces are betting on Palo Alto prioritizing that investment over the platform integrations and network security features that drive most of its revenue.
Third, ModelScan. Protect AI's open source model scanning project is now community-maintained without Protect AI's engineering backing. Teams that depended on it for ML supply chain security need to evaluate whether the project's maintenance trajectory is sufficient for production use.
The five alternatives
1. Repello AI (ARTEMIS)
ARTEMIS is Repello AI's automated red teaming engine. It is an independent AI security platform with no acquisition by a network security or cloud infrastructure vendor, which directly addresses the ecosystem lock-in concern the Palo Alto acquisition raised.
ARTEMIS runs context-specific attack simulations against the full AI application stack, drawing from 15M+ evolving attack patterns across OWASP LLM Top 10, NIST AI RMF, and MITRE ATLAS. Coverage is multimodal across text, images, voice, and documents in 100+ languages. It tests agentic systems natively: multi-agent pipelines, MCP server integrations, RAG pipelines, and agentic workflows built on LangGraph, CrewAI, and AutoGen. Output is compliance-mapped reports with prioritized remediation steps tied to framework controls.
Where Protect AI's red teaming layer tested primarily at the LLM endpoint level, ARTEMIS tests the full AI application in context. The attack patterns are generated specifically for the application under test, not generic probes. ARTEMIS also covers agentic attack surfaces that Protect AI's testing layer did not address.
For teams that need runtime protection beyond testing, Repello also offers ARGUS, and AI Inventory for asset discovery. For this comparison, the like-for-like layer is ARTEMIS. Book a demo to see coverage against your specific stack.
2. ModelScan
ModelScan is the open source ML model scanning tool originally built by Protect AI. It scans serialized model files (pickle, PyTorch, TensorFlow SavedModel, Keras, Hugging Face) for malicious code embedded in model weights before loading. It is now community-maintained following the acquisition.
For teams specifically concerned about ML supply chain security at the model file level, ModelScan addresses a gap that most LLM red teaming tools do not cover at all. Scanning a model file before serving it in production is a different problem from testing a deployed model's behavior, and ModelScan is purpose-built for the former.
The caveats: ModelScan is a model file scanner, not a red teaming platform. It does not test LLM behavior, agentic systems, prompt injection, or any runtime attack surface. It also carries the uncertainty of a project that recently lost its primary corporate sponsor. Teams considering ModelScan for production use should evaluate the current contribution activity and release cadence before depending on it.
3. Garak
Garak is an open source LLM vulnerability scanner maintained independently with support from NVIDIA. It runs probes across a taxonomy of LLM failure modes: prompt injection, jailbreaking, hallucination, data leakage, and toxicity generation. It has no affiliation with any security platform vendor or model provider.
For teams that used Protect AI's LLM testing layer and want an open source, independently maintained replacement with no ecosystem lock-in, Garak is the closest equivalent. The probe library is large and actively updated. It integrates into CI/CD pipelines via Python and works against any model with an API.
Garak tests models, not AI applications. It does not cover agentic workflows, RAG pipelines, or MCP integrations. It has no compliance reporting. Results require an engineer to interpret and act on.
4. PyRIT
PyRIT (Python Risk Identification Toolkit) is Microsoft's open source framework for red teaming generative AI systems. It provides programmatic multi-turn adversarial conversation support, automated attack orchestration, and scoring across safety dimensions.
PyRIT is model-agnostic and maintained by Microsoft's AI Red Team, which publishes research based on its internal use. For teams that want a framework rather than a product and have engineering resources to configure custom attack scenarios in Python, PyRIT is a capable option.
The gaps are the same as Garak: PyRIT is a framework, not a platform. There is no dashboard, no compliance reporting, and no coverage of agentic attack surfaces or runtime protection. It requires dedicated engineering to operate.
5. Giskard
Giskard is an open source ML testing framework covering both traditional ML models and LLM-based applications. Its LLM module includes prompt injection detection, hallucination testing, and a RAG evaluation component that tests retrieval pipeline behavior under adversarial conditions.
Giskard covers a similar breadth to Protect AI's testing layer for ML models, with the addition of a RAG-specific testing module that most alternatives do not have. For teams that ran Protect AI against both traditional ML models and LLM applications, Giskard is the open source option with the broadest coverage across both.
Giskard does not cover agentic attack surfaces, MCP integrations, or runtime protection. Like the other open source tools here, it requires engineering time to configure and maintain.
Comparison table
Platform | Pre-production testing | Agentic/MCP coverage | ML model scanning | Framework coverage | Compliance reporting |
|---|---|---|---|---|---|
Repello AI (ARTEMIS) | Yes | Yes | No | OWASP, NIST, MITRE ATLAS | Yes |
Protect AI (via Prisma Cloud) | Yes | Limited | Yes (Guardian) | OWASP, NIST | Yes |
ModelScan | No (scanning only) | No | Yes | N/A | No |
Garak | Yes | No | No | Partial OWASP | No |
PyRIT | Yes | No | No | Custom | No |
Giskard | Yes | No | No | Custom | No |
How to choose
If your primary concern is ML model file security and supply chain scanning, ModelScan covers that specific problem. No other tool in this list does. The community maintenance question is the main risk to evaluate before committing.
If you need open source LLM testing with no vendor lock-in, Garak covers broad vulnerability scanning and Giskard covers RAG-specific testing. Both require engineering to operate.
If you need compliance reporting against OWASP LLM Top 10, NIST AI RMF, or MITRE ATLAS, and results that non-engineers can act on, open source tools are not the right fit. ARTEMIS covers all three frameworks with compliance-mapped output and covers the agentic attack surfaces that Protect AI's testing layer and most open source tools do not.
If you are not already a Palo Alto customer and were using Protect AI as a standalone product, the acquisition restructured the cost of staying. That is the right time to evaluate whether the coverage and price point of an independent AI security platform better fits your team's requirements.
FAQ
What happened to Protect AI after the Palo Alto acquisition? Palo Alto Networks acquired Protect AI and integrated its products into the Prisma Cloud platform. Guardian, Recon, and the LLM red teaming layer are now part of Palo Alto's AI security module within Prisma Cloud. ModelScan, the open source model scanning project, is now community-maintained. The standalone Protect AI product offering no longer exists as an independent purchase.
Is ModelScan still usable after Protect AI was acquired? ModelScan remains available as an open source project under its existing license. It is now community-maintained without Protect AI's engineering team behind it. Teams using it for production ML supply chain security should monitor contribution activity and issue response times to assess whether the project's maintenance level is sufficient for their use case.
What is the main difference between Protect AI and ARTEMIS as red teaming tools? Protect AI's red teaming layer tested primarily at the LLM endpoint level. ARTEMIS tests the full AI application in context, including RAG pipelines, multi-agent orchestrations, and MCP server integrations natively. ARTEMIS also maps output to OWASP LLM Top 10, NIST AI RMF, and MITRE ATLAS. Protect AI's reporting mapped primarily to OWASP LLM Top 10 and NIST.
Do I need to be a Palo Alto customer to use the former Protect AI tools? The former Protect AI products are now part of Prisma Cloud, which is Palo Alto's cloud security platform. Accessing them requires Prisma Cloud licensing. Teams that are not existing Palo Alto customers will encounter enterprise licensing requirements rather than standalone AI security tool pricing.
What attack surfaces did Protect AI cover that teams now need to replace? Protect AI covered ML model scanning (Guardian), AI asset discovery (Recon), and LLM vulnerability testing. ModelScan (open source) covers model file scanning. ARTEMIS covers LLM vulnerability testing with broader agentic and framework coverage than Protect AI's testing layer. For AI asset discovery, Repello AI's AI Inventory covers that use case.
Share this blog
Subscribe to our newsletter











