Policy is the lazy man's AI security practice. Security teams should be implementing technical guardrails instead.
Policy Can Only Be a Starting Point
Most AI governance programs start with a usage policy. That makes sense — you need to establish acceptable use, define data classification expectations, and create an accountability framework before you can build anything on top of it. Policy is the foundation.
The problem is when policy becomes the entire program. An AI usage document that sits in an employee handbook and gets acknowledged once a year is not a security control. It is a statement of intent. It communicates what the organization expects. It does not enforce anything.
The history of information security is largely a story of recognizing where behavioral controls alone are insufficient and replacing them with technical ones. Password complexity requirements gave way to password managers. Secure coding guidelines gave way to SAST in CI pipelines. Phishing awareness training gave way to email filtering that intercepts malicious links before they reach inboxes. In each case, the shift was from asking people to make the right decision to building systems that make the right outcome the default.
AI security is at that same inflection point now.
The Case for Enablement Over Restriction
There is a version of AI security that treats restriction as the primary tool: ban unapproved tools, limit AI access to certain teams, require manual approval for new use cases. This approach feels conservative, but in practice it tends to produce worse outcomes — not better ones.
Employees who cannot access AI through official channels find unofficial ones. Shadow AI usage increases. Security visibility decreases. The organization gets slower without getting safer.
A more durable approach is to give teams access to the tools they need while building controls that make data leakage technically difficult regardless of how the tools are used. The employee experience stays fast and frictionless. The security outcome is enforced at the infrastructure layer, not left to individual judgment at the moment of use.
This is the model that DLP at the AI gateway layer enables.
Where Policy Falls Short
It is worth being specific about the gap policy leaves, because it is larger than it appears.
Most violations are invisible. Consumer AI interactions leave no logs visible to your security team. When an employee pastes sensitive data into a personal ChatGPT account, there is no telemetry to detect. The policy may have been violated; you will likely never know.
Most exposure is unintentional. Employees are generally not trying to leak data. They are trying to finish a task efficiently. Policy frameworks built around consequences assume bad intent, but the majority of AI data exposure involves no malicious intent at all — just someone pasting context they did not stop to classify.
Policy enforcement is inherently reactive. DLP prevents; policy enforcement responds after the fact. In a world where a single prompt can expose hundreds of records to a third-party model, prevention is the right model.
Policy cannot keep pace with AI adoption. New tools appear constantly. Consumer use cases evolve faster than policy language can be updated, reviewed, and re-acknowledged. A technical control that scans traffic patterns adapts to new surfaces automatically.
DLP for AI Traffic: How It Works
Data Loss Prevention for AI applies the same principle that has worked for email and endpoint security for decades: scan data in motion, detect sensitive patterns, and take a configured action before the data reaches a destination outside your control. The surface is different — prompts going to AI models and responses coming back — but the logic is the same.
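That scan-detect-act loop can be sketched in a few lines. This is an illustrative toy, not the Cloudflare engine: the pattern names, regexes, and `enforce` function are hypothetical, and a production detector uses checksums, context, and ML rather than bare regexes.

```python
import re

# Hypothetical detection profiles; real engines are far more robust.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of every profile that matches the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def enforce(text: str, mode: str = "block") -> tuple[bool, list[str]]:
    """Scan data in motion and apply the configured action.

    Returns (allowed, findings). In "flag" mode the data proceeds but
    findings are logged; in "block" mode a match stops it.
    """
    findings = scan(text)
    if findings and mode == "block":
        return False, findings
    return True, findings
```

The key property is that `enforce` runs before the data reaches its destination: the decision is made in-line with the traffic, not in a report reviewed later.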
The video below walks through how this works at the infrastructure level:
What Cloudflare AI Gateway DLP Detects
Stereos is built on Cloudflare AI Gateway, which includes a production-grade DLP engine — the same detection infrastructure Cloudflare runs for its enterprise Zero Trust customers. Here is what it covers.
Bidirectional scanning
The engine scans both incoming prompts and outgoing AI responses. This matters because AI introduces a second exposure vector that traditional DLP misses: a model can surface sensitive information through its response, not just receive it in a prompt. Scanning both directions closes that gap.
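Sketched as a gateway handler, bidirectional scanning means the same detector runs on both sides of the model call. Everything here is a hypothetical simplification (a single SSN detector, a generic `call_model` callable), but it shows where the second check sits:

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan(text: str) -> list[str]:
    # Single illustrative detector; a real engine runs many profiles.
    return ["ssn"] if SSN.search(text) else []

def guarded_completion(prompt: str, call_model, mode: str = "block") -> dict:
    """Scan the outgoing prompt AND the incoming response."""
    # Inbound direction: stop sensitive data before it leaves the network.
    if (findings := scan(prompt)) and mode == "block":
        return {"blocked": True, "direction": "request", "findings": findings}
    response = call_model(prompt)  # any provider behind the gateway
    # Outbound direction: a model can surface sensitive data in a
    # response that never appeared in the prompt.
    if (findings := scan(response)) and mode == "block":
        return {"blocked": True, "direction": "response", "findings": findings}
    return {"blocked": False, "text": response}
```

Traditional DLP only has the first check; the second one is what closes the response-side gap.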
Content detection profiles
Financial Information — credit card numbers, bank account and routing numbers
Personally Identifiable Information — names, addresses, phone numbers, email addresses
Government Identifiers — Social Security numbers, passport numbers, driver’s license numbers
Healthcare Information — medical record numbers and patient data (HIPAA-relevant)
Source Code — proprietary code patterns passed to external models
Credentials and Secrets — API keys, tokens, and private keys embedded in prompts
Customer Data — customer records and CRM-style data in prompt context
Custom Profiles — organization-specific patterns for data types unique to your environment
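In practice each profile maps to an action, and the strictest action wins when several profiles match in one request. The profile keys and policy table below are hypothetical, not Cloudflare's configuration schema, but the precedence logic is the part worth seeing:

```python
# Hypothetical policy table: profile name -> configured action.
POLICY = {
    "financial_information": "block",
    "pii": "flag",
    "government_identifiers": "block",
    "healthcare_information": "block",
    "source_code": "flag",
    "credentials_and_secrets": "block",
    "customer_data": "flag",
}

def action_for(findings: list[str]) -> str:
    """Resolve one action for a request: block beats flag beats allow."""
    actions = {POLICY.get(name, "flag") for name in findings}
    if "block" in actions:
        return "block"
    return "flag" if actions else "allow"
```

A prompt that trips both a flag-only profile and a block profile is blocked, which is the conservative behavior you want when profiles overlap.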
Intent detection
Beyond data patterns, the engine evaluates the intent behind prompts — catching attack vectors that do not contain sensitive data themselves but are designed to extract it:
Jailbreak attempts — prompts engineered to bypass model safety constraints
Requests for malicious code — prompts asking models to generate exploits or attack scripts
PII extraction attempts — prompts designed to cause the model to surface personal data from its training or context
Enforcement actions
Flag — log the detection for audit and review without blocking the user. Useful for understanding your exposure baseline before enforcing.
Block — hard stop. The request or response does not proceed. The sensitive data never reaches the model or the end user.
The recommended rollout pattern is to start in Flag mode to calibrate detection accuracy against your real traffic, then graduate to Block once you have confidence in the profiles. This is standard DLP practice, and it works the same way for AI traffic.
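The Flag phase is useful precisely because it produces data you can act on. A minimal calibration step, assuming a hypothetical log of flagged detections, is just a per-profile tally you review before flipping a profile to Block:

```python
from collections import Counter

def calibration_report(flag_log: list[dict]) -> Counter:
    """Tally flagged detections per profile during the Flag phase."""
    return Counter(entry["profile"] for entry in flag_log)

# Hypothetical entries collected while running in Flag mode.
log = [
    {"profile": "pii", "user": "a"},
    {"profile": "pii", "user": "b"},
    {"profile": "source_code", "user": "a"},
]
```

Profiles whose flags are overwhelmingly true positives graduate to Block first; noisy ones get tuned before they can start blocking legitimate work.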
Regulatory coverage
The detection profiles map directly to the frameworks most commonly cited in enterprise AI governance: GDPR (PII and customer data), HIPAA (healthcare information), and PCI DSS (financial and payment data). Gateway-level DLP gives compliance teams an auditable enforcement record — logged detections, blocked requests, timestamps — that acknowledgment signatures cannot provide.
How Stereos Handles This for Customers
Configuring Cloudflare AI Gateway DLP directly involves standing up a Zero Trust account, building detection profiles in the dashboard, wiring them to your gateway configuration, and keeping that setup current as your AI tooling evolves. It is manageable, but it is operational overhead on top of everything else a security team is already running.
Stereos takes that off your plate. Every gateway we provision includes the Cloudflare DLP layer as part of the default configuration. We handle the setup, the profile management, and the maintenance. Your team defines what to flag and what to block; we make sure it runs reliably across every AI provider your organization routes through the gateway.
Because DLP is enforced at the gateway layer rather than per-provider, it applies uniformly whether your teams are routing to OpenAI, Anthropic, or any other model. One policy surface. No gaps when someone switches models or adds a new tool.
From Policy to Practice
The organizations building durable AI security programs are not the ones with the most restrictive policies. They are the ones who recognized early that policy alone is not a control, and invested in the infrastructure layer to back it up.
DLP at the AI gateway layer is a mature, deployable solution today. It gives employees the access they need to stay productive, and gives security teams the enforcement they need to actually manage risk — not just document it.
That combination — enablement plus enforcement — is what a modern AI security practice looks like.
Stop writing policies.
Start giving access.
Give your team the freedom to use the AI tools they love, while your security team gets full visibility and control — without the policy wars.