Towards Policy-Compliant Agents: Learning Efficient Guardrails For Policy Violation Detection



When ‘safe’ agents quietly break the rules, what trust can we truly place in their decisions?

PolicyGuardBench is a benchmark for evaluating whether autonomous agents comply with domain-specific and external policies over long trajectories. While agent safety has been widely studied, policy compliance remains largely overlooked, as agents in practice frequently complete tasks while violating rules such as overspending or bypassing constraints. To address this gap, PolicyGuard-4B is introduced as a lightweight guardrail model that detects trajectory-level policy violations, making agent behavior more reliable.

Main figure for the project

Abstract

Autonomous web agents need to operate under externally imposed or human-specified policies while generating long-horizon trajectories. However, little work has examined whether these trajectories comply with such policies, or whether policy violations persist across different contexts such as domains (e.g., shopping or coding websites) and subdomains (e.g., product search and order management in shopping). To address this gap, we introduce PolicyGuardBench, a benchmark of about 60k examples for detecting policy violations in agent trajectories. From diverse agent runs, we generate a broad set of policies and create both within-subdomain and cross-subdomain pairings with violation labels. In addition to full-trajectory evaluation, PolicyGuardBench also includes a prefix-based violation detection task where models must anticipate policy violations from truncated trajectory prefixes rather than complete sequences. Using this dataset, we train PolicyGuard-4B, a lightweight guardrail model that delivers strong detection accuracy across all tasks while keeping inference efficient. Notably, PolicyGuard-4B generalizes across domains and preserves high accuracy on unseen settings. Together, PolicyGuardBench and PolicyGuard-4B provide the first comprehensive framework for studying policy compliance in web agent trajectories, and show that accurate and generalizable guardrails are feasible at small scales.

PolicyGuardBench Construction

PolicyGuardBench is constructed through a pipeline that begins by cleaning raw agent logs, removing noise, and canonicalizing actions into standardized trajectories. From these, over 2,000 unique policies are synthesized via LLM prompting and human curation, covering obligations, prohibitions, ordering, and conditional rules. Trajectories are paired with policies using embedding retrieval and heuristics, producing nearly 60,000 trajectory–policy pairs across five domains, of which about 42% involve violations and 42% require cross-subdomain transfer. Each pair is annotated for violation types—such as missing obligations, forbidden actions, or conditional breaches—through a mix of human labels and LLM assistance. The benchmark further supports prefix-based splits (N=1–5 steps) to evaluate early violation detection.
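The prefix-based splits described above can be sketched as a simple truncation step over canonicalized trajectories. This is a minimal illustration, not the benchmark's actual pipeline: the field names (`prefix`, `policy`, `label`, `prefix_len`) and the example actions are illustrative assumptions.

```python
def make_prefix_examples(trajectory, policy, label, max_n=5):
    """Yield one truncated-prefix example per prefix length N=1..max_n,
    for the early violation detection task (hypothetical schema)."""
    examples = []
    for n in range(1, min(max_n, len(trajectory)) + 1):
        examples.append({
            "prefix": trajectory[:n],   # first n canonicalized actions
            "policy": policy,
            "label": label,             # trajectory-level violation label
            "prefix_len": n,
        })
    return examples

# Toy canonicalized shopping trajectory (made-up action strings)
traj = ["search(product)", "add_to_cart(item=42)", "apply_coupon(X)",
        "checkout()", "confirm_order()"]
pairs = make_prefix_examples(traj, "Never apply unverified coupons.", 1)
print(len(pairs))  # 5 — one example per prefix length N=1..5
```

Each full trajectory thus contributes up to five additional prefix examples, which is how a full-trajectory benchmark can also support the early-detection setting.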

Benchmark Evaluation


PolicyGuard-4B reaches 90.1% accuracy and 87.6% F1, while maintaining the lowest latency (22.5 ms per example) among all evaluated models. Compared to large foundation models such as Llama-3.3-70B-Instruct, which achieve similar scores at much higher inference cost, PolicyGuard-4B offers a far more efficient solution. Frontier models like Claude-Sonnet-4 and Gemini-1.5-Pro also perform strongly but remain closed-source and slower. In contrast, existing safety guardrails—including Llama Guard and ShieldGemma—show substantially lower effectiveness, with F1 often below 0.60.
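The three reported metrics (accuracy, F1 on the violation class, and per-example latency) can be computed with a small evaluation loop like the following sketch. The `predict` callable and the toy data are placeholders for any guardrail model; this is not the paper's evaluation harness.

```python
import time

def evaluate_guardrail(predict, examples):
    """Score a binary violation detector on (input, label) pairs.
    Returns accuracy, F1 on the positive (violation) class,
    and mean wall-clock latency per example in milliseconds."""
    tp = fp = fn = correct = 0
    total_ms = 0.0
    for x, y in examples:
        t0 = time.perf_counter()
        pred = predict(x)
        total_ms += (time.perf_counter() - t0) * 1000.0
        correct += (pred == y)
        tp += (pred == 1 and y == 1)
        fp += (pred == 1 and y == 0)
        fn += (pred == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": correct / len(examples), "f1": f1,
            "latency_ms": total_ms / len(examples)}

# Toy run with a stub classifier that happens to be perfect on this data
data = [("a", 1), ("b", 0), ("c", 1), ("d", 0)]
stats = evaluate_guardrail(lambda x: 1 if x in ("a", "c") else 0, data)
print(stats["accuracy"], stats["f1"])  # 1.0 1.0
```

Measuring latency inside the same loop as accuracy is what makes the efficiency comparison against 70B-scale models direct: the same examples yield both quality and cost numbers.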


Prefix-Based Violation Detection


Prefix-based evaluation tests whether models can anticipate policy violations from early trajectory steps. As shown above, most foundation and guardrail models suffer clear accuracy drops as prefix length increases, with smaller models (e.g., Llama-3.2-3B, Qwen3-4B) falling below 0.6 by N=5. Larger models like Llama-3.3-70B and Qwen3-235B remain stronger but still decline. In contrast, PolicyGuard-4B sustains accuracy above 0.85 across all prefixes, showing robustness to both short prefixes and longer trajectories.
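The accuracy-versus-prefix-length comparison above amounts to bucketing prefix examples by N and scoring each bucket separately. A minimal sketch, assuming a hypothetical example format of `(prefix_actions, policy, label)`:

```python
from collections import defaultdict

def accuracy_by_prefix_length(predict, examples):
    """Group prefix examples by prefix length N and report per-N accuracy,
    mirroring the prefix-based evaluation curves (illustrative format)."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for prefix, policy, label in examples:
        n = len(prefix)
        counts[n] += 1
        hits[n] += (predict(prefix, policy) == label)
    return {n: hits[n] / counts[n] for n in sorted(counts)}

# Toy data: the violation only becomes visible at step 3
examples = [
    (["search"], "no checkout without approval", 0),
    (["search", "add_to_cart"], "no checkout without approval", 0),
    (["search", "add_to_cart", "checkout"], "no checkout without approval", 1),
]
curve = accuracy_by_prefix_length(
    lambda prefix, policy: int("checkout" in prefix), examples)
print(curve)  # {1: 1.0, 2: 1.0, 3: 1.0}
```

A per-N curve like this is what reveals the degradation pattern: models that only recognize violations once the offending action appears will look fine at large N but fail on short prefixes, which is the robustness gap PolicyGuard-4B is reported to avoid.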


BibTeX


        To be added soon.