How to Set Up AI Agent Tracking with Suvadu
A complete guide to tracking shell commands from Claude Code, OpenCode, Cursor, Windsurf, and other AI tools — with prompt capture, risk assessment, and agent reports.
AI coding assistants execute shell commands in your terminal with your full permissions — installing packages, running tests, modifying configs. Whether you use Claude Code, OpenCode, Cursor, or another tool, Suvadu captures every command with full metadata, risk assessment, and — for tools with hook support — the prompts that triggered them. Here's how to set it up for every major AI tool.
How Detection Works
Suvadu uses two complementary approaches to detect and track AI-driven commands:
- Shell hook detection (automatic via environment variables) — When an IDE opens a terminal, it injects environment variables into the shell session. Suvadu's shell hooks check for these variables on every command and automatically tag the executor. This works for IDE terminals like Cursor, VS Code, Windsurf, Antigravity, IntelliJ, and PyCharm.
- Dedicated hook/plugin integration — Some tools bypass the shell entirely, spawning commands via child processes. For these, Suvadu provides native integrations: PostToolUse hooks for Claude Code, and a plugin for OpenCode. These integrations also capture the prompts that triggered each command.
Both approaches feed into the same local database, so you get a unified view of every command regardless of which tool ran it.
IDE Terminal Detection
When you open a terminal inside an IDE, the IDE sets environment variables that Suvadu's shell hooks detect automatically. No extra configuration is needed beyond the standard shell hook installation.
| IDE | Detection Variable | Executor Label |
|---|---|---|
| Cursor | $CURSOR_INJECTION | cursor |
| Antigravity | $ANTIGRAVITY_AGENT | antigravity |
| Windsurf | $WINDSURF | windsurf |
| VS Code | $VSCODE_INJECTION | vscode |
| IntelliJ | $INTELLIJ_ENVIRONMENT_READER | intellij |
| PyCharm | $PYCHARM_HOSTED | pycharm |
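The detection logic amounts to a chain of environment checks. Here is a minimal sketch of the kind of test the shell hooks perform; Suvadu's actual implementation may differ, but the idea is to map the injected variables from the table above to an executor label:

```shell
# Sketch of IDE detection: check each injected environment variable
# from the table above and return the matching executor label.
# Suvadu's real hooks may differ in detail.
detect_executor() {
  if   [ -n "${CURSOR_INJECTION:-}" ];            then echo cursor
  elif [ -n "${ANTIGRAVITY_AGENT:-}" ];           then echo antigravity
  elif [ -n "${WINDSURF:-}" ];                    then echo windsurf
  elif [ -n "${VSCODE_INJECTION:-}" ];            then echo vscode
  elif [ -n "${INTELLIJ_ENVIRONMENT_READER:-}" ]; then echo intellij
  elif [ -n "${PYCHARM_HOSTED:-}" ];              then echo pycharm
  else                                                 echo terminal
  fi
}
```

Run inside a Cursor terminal, a check like this would report `cursor`; in a plain terminal it falls through to `terminal`.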
Setup
Install Suvadu's shell hooks and run the init command for your IDE:
```shell
# Install shell hooks (if not already done)
suv init

# IDE detection is automatic — just open a terminal in your IDE
# and every command will be tagged with the correct executor
```

There's nothing else to configure. Once the shell hooks are active, any command you run inside an IDE terminal is automatically detected and labeled. You can verify it's working by running a command in your IDE terminal and checking:

```shell
suv search --last 1
```

The executor field should show the IDE name (e.g., `cursor`, `vscode`) instead of `terminal`.
Claude Code Integration
Claude Code is different from IDE terminals. It doesn't just open a shell — it executes commands programmatically via its tool-use system. This means shell hooks alone won't catch everything. Suvadu integrates with Claude Code through its PostToolUse and UserPromptSubmit hooks, giving you two things no other approach provides:
- Capture of every command Claude Code runs, even when it bypasses the shell
- The actual prompt that triggered each command
Setup
```shell
suv init claude-code
```

This registers Suvadu's hooks in Claude Code's configuration. From that point on, every Bash tool invocation is recorded, and every prompt you submit is linked to the commands it produces.
Prompt Capture
This is the key differentiator. When you ask Claude Code to do something, Suvadu records your prompt and links it to every command that follows. Here's what the output looks like:
```text
$ suv agent prompts --last 3

Prompt: "Add error handling to the database connection module"
├─ cat src/db/connection.rs         [exit: 0]  safe
├─ cargo check                      [exit: 0]  safe
└─ cargo test db::connection        [exit: 0]  medium

Prompt: "Fix the failing CI lint step"
├─ cargo clippy --all-targets 2>&1  [exit: 1]  safe
├─ cat src/lib.rs                   [exit: 0]  safe
└─ cargo clippy --all-targets       [exit: 0]  safe

Prompt: "Update the README with the new CLI flags"
└─ cat README.md                    [exit: 0]  safe
```

Each prompt becomes a logical grouping for the commands beneath it. This makes it trivial to understand why a command was run, not just what was run.
How the Hooks Work
Under the hood, Suvadu registers two Claude Code hooks:
- UserPromptSubmit — fires when you submit a prompt, recording the prompt text and starting a new prompt group
- PostToolUse — fires after Claude Code executes a Bash tool call, recording the command, exit code, duration, and linking it to the current prompt group
Both hooks are lightweight shell scripts that call suv add with the appropriate metadata. They add no perceptible latency to Claude Code's execution.
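To make this concrete, here is a hypothetical sketch of what a PostToolUse hook script along these lines could look like. This is not Suvadu's shipped script: the JSON field name (`tool_input.command`) and the exact `suv add` flags are assumptions for illustration, and a production hook would use a real JSON parser such as jq rather than sed:

```shell
# Hypothetical PostToolUse hook sketch. Claude Code pipes a JSON payload
# describing the tool call to the hook on stdin; we extract the Bash
# command and forward it to `suv add`. Field names are illustrative.
record_tool_use() {
  payload=$(cat)
  # naive extraction of "command":"..." (a real hook would use jq)
  cmd=$(printf '%s' "$payload" | sed -n 's/.*"command":"\([^"]*\)".*/\1/p')
  [ -n "$cmd" ] || return 0   # not a Bash tool call; nothing to record
  # the real hook would invoke: suv add --executor claude-code -- "$cmd"
  printf 'suv add --executor claude-code -- %s\n' "$cmd"
}
```

Piping a payload like `{"tool_input":{"command":"cargo test"}}` into `record_tool_use` prints the `suv add` invocation the hook would make.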
OpenCode Integration
OpenCode presents a different challenge. It's a Node.js application that spawns commands using `child_process.spawn`, which bypasses the shell entirely. Your Zsh hooks never fire because Zsh is never involved.
The Plugin Solution
Suvadu provides a native OpenCode plugin that hooks into OpenCode's command execution lifecycle directly:
```shell
suv init opencode
```

This installs the Suvadu plugin into OpenCode's plugin directory. The plugin intercepts command executions at the application level, before they reach `child_process.spawn`, and records them with the same metadata as any other Suvadu entry.
Prompt Capture
Prompt capture works the same way as with Claude Code. The plugin hooks into OpenCode's prompt submission lifecycle and links every subsequent command to the prompt that triggered it. The output format is identical — you can use suv agent prompts to see OpenCode prompts grouped with their commands.
How the Plugin Hooks Work
The OpenCode plugin registers lifecycle callbacks that mirror Suvadu's shell hook approach:
- On prompt submission: record the prompt text and start a new group
- On command execution: capture the command string, working directory, and environment
- On command completion: record exit code, duration, and write everything to Suvadu's database
Since the plugin runs inside OpenCode's process, it has direct access to the command metadata without needing to parse shell output.
Custom Agent Support
For AI tools that run commands inside IDE terminals (and therefore set environment variables), you can add custom detection rules in Suvadu's settings:
```shell
suv settings
```

Navigate to the Agents tab to define new environment variable mappings. Specify the variable name and the executor label you want, and Suvadu will detect it automatically.
For tools that bypass the shell entirely (like OpenCode), custom plugins require deeper integration. Open an issue if you'd like support for a specific tool.
The Agent Dashboard
Once you have agents tracked, the dashboard gives you a real-time overview of all AI-driven activity:
```shell
suv agent dashboard
```

The dashboard provides:
- Session overview — active and recent agent sessions with command counts, durations, and success rates
- Risk assessment — color-coded risk levels for every command, with drill-down into why a command was flagged
- Filtering — filter by time range, specific agent (Claude Code, OpenCode, Cursor, etc.), or risk level
- Detail pane — select any command to see its full metadata: working directory, exit code, duration, linked prompt, and risk assessment
- Copy support — copy individual commands or entire sessions to clipboard for sharing or documentation
Risk Assessment
Every command recorded by Suvadu is automatically assigned a risk level based on pattern matching against known command categories:
| Risk Level | Examples | Why |
|---|---|---|
| Critical | rm -rf /, git push --force origin main | Irreversible data loss or destructive operations on protected branches |
| High | npm install <pkg>, chmod 777 | Introduces external code or weakens security boundaries |
| Medium | git reset, docker run | Modifies state in ways that may be hard to undo |
| Safe | git status, ls, cat | Read-only operations with no side effects |
Risk assessment is purely informational — Suvadu never blocks commands. It highlights what deserves a closer look during your post-session review.
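To make the categories concrete, here is a deliberately simplified sketch of how pattern-based classification can work. These few patterns just mirror the table above; Suvadu's actual rule set is broader:

```shell
# Toy risk classifier mirroring the table above. Suvadu's real matcher
# covers far more patterns; this is only an illustration.
risk_level() {
  case "$1" in
    "rm -rf /"*|*"push --force"*)  echo critical ;;
    "npm install"*|*"chmod 777"*)  echo high ;;
    "git reset"*|"docker run"*)    echo medium ;;
    *)                             echo safe ;;
  esac
}
```

For example, `risk_level "npm install lodash"` classifies as high, since installing a package introduces external code.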
Agent Reports
Suvadu generates agent activity reports in three formats:
```shell
# Plain text (terminal-friendly)
suv agent report --format text

# Markdown (great for PR descriptions)
suv agent report --format markdown

# JSON (for scripts and integrations)
suv agent report --format json
```

The markdown format is particularly useful for PR descriptions. Here's an example of the output:
```markdown
## Agent Activity Report

**Session:** 2026-02-10 14:30 - 15:45
**Agent:** claude-code
**Commands:** 23 total (21 succeeded, 2 failed)
**Risk Summary:** 0 critical, 1 high, 5 medium, 17 safe

### Prompts and Commands

#### "Refactor the auth middleware to use async/await"

| # | Command | Exit | Risk |
|---|---------|------|------|
| 1 | `cat src/middleware/auth.js` | 0 | safe |
| 2 | `npm test -- --grep "auth"` | 0 | medium |
| 3 | `npm test` | 1 | medium |
| 4 | `npm test` | 0 | medium |

#### "Add rate limiting to the API endpoints"

| # | Command | Exit | Risk |
|---|---------|------|------|
| 1 | `npm install express-rate-limit` | 0 | high |
| 2 | `cat src/routes/api.js` | 0 | safe |
| 3 | `npm test` | 0 | medium |
```

Copy the markdown output directly into your pull request description to give reviewers full visibility into what the AI agent did and why.
Querying Agent Commands
Suvadu's search and stats commands work across all agent types. Here are the most useful queries:
```shell
# All agent commands today
suv search --executor-type agent --after today

# Claude Code commands in current project
suv search --executor claude-code --here

# OpenCode commands
suv search --executor opencode

# Failed agent commands this week
suv search --executor-type agent --exit-code 1 --after "7 days ago"

# All non-human commands (agents + IDE terminals)
suv search --executor-type agent,ide --after today

# Stats breakdown by agent over the last 30 days
suv agent stats --days 30
```

The `--here` flag scopes results to the current working directory and its children, making it easy to see only the commands relevant to the project you're in.
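The JSON format is the natural input for scripting. Here is a small sketch that counts risky commands; the report schema shown (a per-command `risk` field) is an assumption for illustration, the input would really come from `suv agent report --format json`, and a real script would use jq rather than grep:

```shell
# Count high/critical commands in an agent report. The JSON shape below
# is an assumption for illustration; in practice the input would be:
#   report=$(suv agent report --format json)
report='{"commands":[{"command":"ls","risk":"safe"},{"command":"npm install x","risk":"high"}]}'
risky=$(printf '%s\n' "$report" | grep -oE '"risk":"(high|critical)"' | wc -l)
echo "risky commands: $risky"
```

A check like this drops neatly into a CI step or a pre-merge review script.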
Best Practices
- Review after every AI session — Run `suv search --executor-type agent --after today` at the end of each session. Skim the risk column for anything unexpected.
- Watch for package installs — AI agents love to `npm install` or `pip install` things. These are flagged as high risk. Check that every new dependency is intentional.
- Bookmark useful patterns — When an agent runs a command you want to remember, bookmark it with `suv bookmark <id>`. Bookmarked commands show up in dedicated searches.
- Add notes for context — Use `suv note <id> "reason for this command"` to annotate commands that need explanation. Notes persist and show up in reports.
- Use tags for per-project organization — Tag commands with project names or feature branches to keep things organized across multiple workstreams.
- Generate reports for PR descriptions — Run `suv agent report --format markdown` and paste the output into your PR. Reviewers will know exactly what the AI did.
Let your AI tools do what they do best. Then use Suvadu to verify what they did.
Ready to start tracking? Install Suvadu from suvadu.sh and run suv init to get started.