eBPF Tracer Azazel Revolutionizes AI Agent Security by Observing from the Kernel
A new open-source tool called Azazel uses eBPF to monitor AI agents from the kernel, making log tampering by a compromised agent pointless. By applying decades-old malware-sandboxing techniques to autonomous AI systems, the tracer captures runtime behavior the agent cannot alter, setting a new standard for AI security.

In a groundbreaking development for AI security, researchers at Beelzebub Labs have unveiled Azazel, an open-source eBPF-based runtime tracer designed to monitor autonomous AI agents with unprecedented fidelity. Unlike traditional application-level logging, which an AI agent can manipulate, suppress, or fabricate, Azazel operates at the kernel level, capturing system-level events invisible to the agent itself. This approach, borrowed from malware analysis, ensures that every process spawn, file access, network connection, and memory allocation by an AI agent is recorded beyond the agent's reach.
The innovation comes at a critical juncture. As organizations increasingly deploy autonomous AI agents in production environments, from customer-service bots to financial trading systems, their ability to self-report activity has become a glaring vulnerability. An adversarial agent could, for instance, log benign actions while secretly exfiltrating data or spawning child processes to evade detection. Azazel addresses this by placing the observer outside the agent's reach: eBPF programs run in kernel space, and the only user-space interface to them, the bpf() syscall, is gated behind capabilities an unprivileged containerized agent does not hold, making tampering infeasible in practice.
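That property is easy to check from first principles. The kernel refuses even read-only introspection of loaded eBPF programs to unprivileged callers, so a confined agent cannot enumerate, let alone detach, the programs watching it. The minimal C program below is an illustration written for this article, not code from the Azazel repository; run without CAP_SYS_ADMIN, it is refused before any program ID is revealed:

```c
// eperm_demo.c -- shows that an unprivileged process cannot even list
// loaded eBPF programs. Build: cc -o eperm_demo eperm_demo.c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/bpf.h>

int main(void)
{
    union bpf_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.start_id = 0;  // ask the kernel for the first loaded program ID

    // BPF_PROG_GET_NEXT_ID walks the IDs of all loaded eBPF programs;
    // the kernel requires CAP_SYS_ADMIN and returns EPERM otherwise.
    long ret = syscall(SYS_bpf, BPF_PROG_GET_NEXT_ID, &attr, sizeof(attr));
    if (ret < 0)
        printf("bpf() refused: %s\n", strerror(errno));
    else
        printf("a loaded program exists with id %u\n", attr.next_id);
    return 0;
}
```

As an unprivileged user this prints "bpf() refused: Operation not permitted", which is exactly the asymmetry Azazel relies on: the tracer sees the agent, but the agent cannot see the tracer.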
Azazel attaches eBPF programs at 19 hook points, including tracepoints and kprobes on key kernel functions, scoped to a target container. It captures a comprehensive audit trail: full process trees with command-line arguments and parent-child relationships; all file operations, including reads, writes, renames, and deletions, with full paths and byte counts; network activity spanning DNS lookups, socket binds, and outbound connections; and high-risk events such as ptrace system calls, W+X memory mappings, and kernel module loads. All data is emitted as NDJSON (newline-delimited JSON), enabling seamless integration into existing SIEM and observability pipelines.
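Azazel's own hooks aren't reproduced in this article, but the pattern behind each of them is well established. A minimal CO-RE-style sketch, assuming the standard libbpf toolchain and a BTF-generated vmlinux.h (names here are illustrative, not Azazel's), wires a single tracepoint on execve to a ring buffer and captures the process-spawn slice of such an audit trail:

```c
// sketch.bpf.c -- illustrative kernel-side program, not Azazel's source.
// One of its 19 hooks would look roughly like this.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

struct exec_event {
    u32 pid;            // process that called execve
    u32 ppid;           // its parent, for process-tree reconstruction
    char comm[16];      // executable name (TASK_COMM_LEN)
};

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 1 << 20);   // 1 MiB buffer shared with user space
} events SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(void *ctx)
{
    struct exec_event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
    if (!e)
        return 0;  // buffer full: drop the event rather than stall the kernel

    struct task_struct *task = (struct task_struct *)bpf_get_current_task();
    e->pid  = bpf_get_current_pid_tgid() >> 32;
    e->ppid = BPF_CORE_READ(task, real_parent, tgid);
    bpf_get_current_comm(&e->comm, sizeof(e->comm));

    bpf_ringbuf_submit(e, 0);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

A user-space loader drains the ring buffer and writes one JSON object per event, which is the natural way to produce an NDJSON stream like the one Azazel emits; scoping to a single container is typically done by matching events against the target's cgroup ID.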
"The fundamental problem with AI agent logging is trust," says the lead developer, pseudonymously credited as M4r10_h4ck in a Reddit post detailing the project. "If the subject controls the report, the report is a lie. We didn’t invent a new concept—we applied a 30-year-old solution from reverse engineering and malware analysis to a new class of threats. AI agents aren’t just software; they’re autonomous actors. They need kernel-level supervision."
The implications are profound. Enterprises deploying AI agents in regulated industries such as finance, healthcare, and defense can enforce compliance and flag exploitation attempts, including previously unknown ones, by watching behavior rather than trusting reports. Security teams no longer need to rely on agent honesty; they can observe behavior directly from the kernel, just as forensic analysts observe malware in a sandbox.
Azazel's open-source release on GitHub has already sparked interest across the AI infrastructure community. Early adopters include cloud-native security firms and research labs testing LLM-based automation tools. The project's repository includes detailed documentation on deployment via Docker, Kubernetes, and systemd containers, with benchmarks showing minimal performance overhead: below 2% CPU even under heavy agent load.
While Azazel currently targets Linux containers, the team is exploring extensions for the Windows Subsystem for Linux (WSL2) and virtualized AI environments. The broader vision is a new category of security tooling: runtime integrity monitors for autonomous systems. As AI agents grow more complex and independent, the line between tool and actor blurs, and defenses must evolve with it.
For now, Azazel stands as a landmark proof-of-concept: a reminder that the most secure monitoring systems aren’t those that ask for logs, but those that refuse to trust them. The era of blind faith in AI self-reporting is over. Kernel-level observation is no longer optional—it’s essential.
Project Repository: github.com/beelzebub-labs/azazel | Full Technical Write-up: beelzebub.ai/blog/azazel-runtime-tracing-for-ai-agents


