Autonomous web agents need to operate under externally imposed or human-specified policies while generating long-horizon trajectories. However, little work has examined whether these trajectories comply with such policies, or whether policy violations persist across contexts such as domains (e.g., shopping or coding websites) and subdomains (e.g., product search and order management within shopping). To address this gap, we introduce PolicyGuardBench, a benchmark of about 60k examples for detecting policy violations in agent trajectories. From diverse agent runs, we generate a broad set of policies and create both within-subdomain and cross-subdomain pairings with violation labels. Beyond full-trajectory evaluation, PolicyGuardBench includes a prefix-based violation detection task in which models must anticipate policy violations from truncated trajectory prefixes rather than complete sequences. Using this dataset, we train PolicyGuard-4B, a lightweight guardrail model that delivers strong detection accuracy across all tasks while keeping inference efficient. Notably, PolicyGuard-4B generalizes across domains and preserves high accuracy in unseen settings. Together, PolicyGuardBench and PolicyGuard-4B provide the first comprehensive framework for studying policy compliance in web-agent trajectories, and they show that accurate, generalizable guardrails are feasible at small scales.
PolicyGuardBench is constructed through a pipeline that begins by cleaning raw agent logs, removing noise, and canonicalizing actions into standardized trajectories. From these, over 2,000 unique policies are synthesized via LLM prompting and human curation, covering obligations, prohibitions, ordering constraints, and conditional rules. Trajectories are paired with policies using embedding retrieval and heuristic filters, producing nearly 60,000 trajectory–policy pairs across five domains, of which about 42% involve violations and about 42% require cross-subdomain transfer. Each pair is annotated for violation type (e.g., missing obligations, forbidden actions, or conditional breaches) through a mix of human labels and LLM assistance. The benchmark further supports prefix-based splits (N = 1–5 steps) to evaluate early violation detection, as sketched below.
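The exact pairing code is not shown here, so the following is a minimal Python sketch of the two mechanical steps above: embedding-based trajectory–policy pairing and prefix-split generation. It assumes a generic sentence encoder (all-MiniLM-L6-v2 is an illustrative choice, not necessarily the one used), simple dict-based records, and omits the heuristic filters; all function names are hypothetical.

from sentence_transformers import SentenceTransformer
import numpy as np

# Assumption: any off-the-shelf sentence encoder; this model name is illustrative.
model = SentenceTransformer("all-MiniLM-L6-v2")

def pair_trajectories_with_policies(trajectories, policies, top_k=3):
    """Pair each trajectory with its top-k most similar policies by cosine similarity.

    trajectories: list of {"actions": [str, ...]} canonicalized action sequences.
    policies:     list of {"text": str} natural-language policy statements.
    """
    traj_texts = [" -> ".join(t["actions"]) for t in trajectories]
    # Unit-normalized embeddings make the dot product equal to cosine similarity.
    emb_t = model.encode(traj_texts, normalize_embeddings=True)
    emb_p = model.encode([p["text"] for p in policies], normalize_embeddings=True)
    sims = emb_t @ emb_p.T
    pairs = []
    for i, traj in enumerate(trajectories):
        for j in np.argsort(-sims[i])[:top_k]:  # indices of the top-k policies
            pairs.append({"trajectory": traj,
                          "policy": policies[j],
                          "sim": float(sims[i, j])})
    return pairs

def prefix_splits(trajectory, max_n=5):
    """Truncated prefixes (N = 1..5 steps) for the early-violation-detection task."""
    steps = trajectory["actions"]
    return [steps[:n] for n in range(1, min(max_n, len(steps)) + 1)]

if __name__ == "__main__":
    # Toy illustration with made-up records.
    trajs = [{"actions": ["search 'usb cable'", "open product page",
                          "add to cart", "checkout"]}]
    pols = [{"text": "The agent must confirm the cart total before checkout."},
            {"text": "The agent must never edit repository settings."}]
    for p in pair_trajectories_with_policies(trajs, pols, top_k=1):
        print(p["policy"]["text"], round(p["sim"], 3))
    print(prefix_splits(trajs[0], max_n=3))

In a real pipeline, retrieved pairs would then pass through the heuristic filters and human/LLM annotation described above; the sketch only covers retrieval and truncation.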
To Add Soon.