AI models are becoming the new core infrastructure for your business, but most of them arrive as “black boxes.”
You’re pulling models from open-source communities and AI platforms at high speed — yet you often can’t see what’s really inside:
embedded code, backdoors, poisoned components or hidden dependencies.
At the same time, your proprietary models and training data are your competitive advantage.
Moving them out of your environment just to scan them creates new exposure points and compliance headaches.
Security teams are stuck between two less-than-ideal choices: slow everything down with manual reviews or accept unknown risk in production.
Prisma AIRS® AI Model Security is built to remove that tradeoff — so you don’t have to choose between speed and safety.
Prisma AIRS AI Model Security analyzes models directly within your environment, inspecting their structure and components
to uncover malicious code, backdoors and hidden risks. It validates each model’s origin and dependencies using
global threat intelligence to detect supply chain compromise across open-source and third-party sources.
These checks integrate into CI/CD and MLOps workflows, automatically evaluating models as they move through
development so teams can deploy AI confidently without exposing sensitive assets or slowing release cycles.
Analyze 35+ model file types (PyTorch, ONNX, TensorFlow and more) for 25+ categories of threats, including embedded malicious code, backdoors and other structural risks — so models stop being a blind spot.
Leverage Palo Alto Networks Advanced WildFire® plus insights from the huntr ethical hacker community to validate models against known and emerging threats across millions of scanned models. Validation results are logged and retained to support audit and compliance workflows.
Keep proprietary models and data within your environment while still getting full security analysis, helping reduce IP exposure and simplifying compliance.
Use API-first integration to embed model scanning into build, test and deployment workflows, enabling continuous protection and consistent enforcement without manual ticketing between security and data science teams.
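To make the CI/CD idea concrete, here is a minimal sketch of a pipeline gate that consumes a model-scan verdict and decides whether a deployment proceeds. This is a hypothetical illustration only: the report shape (`verdict`, `threats`, `category` fields) and the blocking policy are assumptions for the example, not the actual Prisma AIRS API schema.

```python
# Hypothetical CI gate for a model-scan report.
# ASSUMPTION: the report is a dict like
#   {"verdict": "malicious", "threats": [{"category": "embedded_code"}]}
# This shape is illustrative, not the real Prisma AIRS response format.
from dataclasses import dataclass


@dataclass
class GateDecision:
    allow: bool
    reason: str


def gate(scan_report: dict,
         fail_on: tuple = ("malicious", "suspicious")) -> GateDecision:
    """Block the pipeline when the scan verdict matches a blocking severity,
    or when no verdict is present (fail closed)."""
    verdict = scan_report.get("verdict", "unknown")
    if verdict in fail_on:
        threats = ", ".join(
            t.get("category", "unknown")
            for t in scan_report.get("threats", [])
        )
        return GateDecision(False,
                            f"blocked: verdict={verdict} ({threats or 'no detail'})")
    if verdict == "unknown":
        return GateDecision(False, "blocked: report has no verdict")
    return GateDecision(True, f"allowed: verdict={verdict}")
```

In practice the report would come from the scanning service over its API; a gate like this runs as one pipeline step, so security policy is enforced automatically rather than through manual tickets between security and data science teams.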
We're innovating at the speed of AI. Check out the newest features and updates in Prisma AIRS AI Model Security.
Scans models in Artifactory and GitLab (January 2026)
Applies custom labels to scans (January 2026)
Scans models directly from cloud storage (January 2026)
Expands model-violation visibility and configuration (December 2025)