Features
Everything You Need to Govern AI Agents in Production
From agent discovery to compliance automation, observability SDKs to adversarial testing, NodeLoom gives you the complete governance layer for AI operations.
Observability
Monitor Every Execution
Full visibility into your workflows, AI agents, and resource usage. Catch issues before they impact users.
Real-Time Execution
Watch workflows execute node-by-node with live status updates, input/output inspection, and execution timelines.
Behavioral Monitoring
Track AI agent behavior patterns, detect anomalies, and set up alerts for unexpected outputs or failures.
Drift Detection
Monitor AI model output quality over time. Get alerted when response patterns shift or degrade.
Token Tracking
Track AI token usage and costs per workflow, per execution, and per agent. Set budget limits and alerts.
Compliance Dashboard
Real-time compliance posture with guardrail coverage, credential health, and audit integrity. Generate one-click SOC2, GDPR, HIPAA, and SOX reports.
SIEM Export
Stream audit logs to Splunk, Datadog, Elasticsearch, or custom webhooks for enterprise-grade observability.
{ "url": "https://api.example.com/users",
"method": "GET" }"status": 200, "data": [...]
Agent Discovery
Find Every Agent in Your Organization
Most teams don't know how many AI agents are running in their infrastructure. Agent Discovery automatically scans your cloud providers, code repositories, and MCP gateways to build a complete inventory, so you can govern what you can see.
Cloud Provider Scanning
Automatically discover AI agents running on AWS Lambda, ECS, GCP Cloud Run, Azure Functions, and more. Identify LLM API calls, inference endpoints, and agent frameworks.
GitHub Repository Scanning
Scan your GitHub organization for repositories using AI/ML libraries. Detect LangChain, CrewAI, AutoGen, and custom agent patterns across your codebase.
MCP Gateway Observation
Monitor MCP (Model Context Protocol) gateways to discover agents connecting through standardized tool interfaces. Catalog every agent interaction.
eBPF Kernel Monitoring
Deploy zero-instrumentation probes that intercept LLM API calls at the kernel level. Discover agents without modifying any application code — ideal for finding shadow AI.
Automated Inventory
Build and maintain a complete agent inventory automatically. Track agent owners, deployment locations, AI providers, and last-seen timestamps.
Risk Classification
Automatically classify discovered agents by risk level based on data access patterns, network exposure, and monitoring status. Prioritize what to govern first.
Onboard to Monitoring
One-click onboarding from discovery to full monitoring. Generate SDK tokens, get instrumentation snippets, and start tracking agents in minutes.
| Agent | Source | Framework | Risk | Status |
|---|---|---|---|---|
| support-agent | AWS Lambda | LangChain | Low | Monitored |
| fraud-detector | AWS ECS | Custom | Low | Monitored |
| content-writer | MCP Gateway | CrewAI | High | Unmonitored |
| data-pipeline | GitHub Actions | LangChain | Medium | Unmonitored |
| onboarding-bot | GCP Cloud Run | Custom | Low | Monitored |
| billing-advisor | eBPF Probe | OpenAI SDK | Medium | Monitored |
Observability SDK
Instrument Any AI Agent
Bring external agents into NodeLoom's monitoring pipeline. Install a lightweight SDK, add a few lines of code, and get full observability across LangChain, CrewAI, or any custom framework.
4 Language SDKs
First-class support for Python, TypeScript, Java, and Go. Install a package, add a few lines, and your agents report telemetry.
Traces and Spans
Structured observability with traces (agent runs) and spans (individual operations). Automatically maps to the NodeLoom monitoring model.
Token Cost Tracking
Report token usage per LLM call. Track costs across all your agents in one place, with budget alerts and per-model breakdowns.
Framework Integrations
Built-in callback handlers for LangChain and CrewAI. Drop in a handler and get automatic tracing with zero manual instrumentation.
Batching and Retry
Events are queued locally and sent in batches. Automatic retry with exponential backoff. Telemetry never blocks your agent logic.
Full Monitoring Pipeline
SDK-instrumented agents get the same monitoring as native workflows: anomaly detection, drift alerts, evaluations, and compliance tracking.
pip install nodeloom-sdk
from nodeloom import NodeLoom, SpanType
client = NodeLoom(
api_key="sdk_...",
endpoint="https://your-instance.nodeloom.io"
)
trace = client.trace("support-agent",
input={"query": "How do I reset my password?"})
with trace.span("gpt-4o", span_type=SpanType.LLM) as span:
result = openai.chat.completions.create(...)
span.set_output({"response": result.choices[0].message.content})
span.set_token_usage(
prompt_tokens=result.usage.prompt_tokens,
completion_tokens=result.usage.completion_tokens,
model="gpt-4o"
)
trace.end(status="success", output={"answer": "..."})
client.shutdown()Create Token
Generate an SDK token from Settings
Install SDK
Add the package for your language
Instrument
Wrap your agent runs with traces and spans
Monitor
See everything in the Monitoring dashboard
Governance
Control Every Stage
Enterprise governance controls for AI workflows. Manage environments, enforce approvals, and automate incident response.
Incident Response Playbooks
Automated quarantine, notification, escalation, and rollback when drift thresholds or guardrail violations trigger. Define custom playbooks per workflow.
Multi-Environment Deployments
Promote workflows through Development, Staging, and Production. Each environment gets its own definition snapshot, webhook tokens, schedules, and credentials.
Human Approval Gates
Require manual approval before workflows execute in production. Configurable approval chains with email/Slack notifications and timeout policies.
Custom Guardrails
Define content filters, PII redaction rules, output validators, and behavioral constraints. Apply per-workflow or globally across your team.
Token Budget Controls
Set per-workflow and per-agent token budgets with automatic throttling. Get alerts at configurable thresholds and track cost attribution across teams.
Security Testing
Attack Before Attackers Do
Automated adversarial testing and LLM evaluation to find vulnerabilities before they reach production.
Red Team Testing
Run automated adversarial attacks against your AI agents. Test for prompt injection, jailbreak attempts, data exfiltration, and harmful output generation. Get a vulnerability report with severity ratings.
LLM-as-Judge Evaluation
Use a secondary LLM to automatically score your agent's outputs against custom criteria. Tie evaluation results directly to compliance reports for audit trails.
Continuous Security Monitoring
Schedule recurring red team tests and evaluations. Track security posture over time and get alerted when new vulnerabilities appear.
Version Control
Version Everything
Full version history with structured diffs and A/B testing. Never lose a workflow change again.
Structured Diffs
See exactly what changed between versions, including nodes added, removed, or modified, with a visual diff viewer.
A/B Testing
Run two workflow versions side by side. Use LLM-as-Judge to automatically evaluate which version performs better, then tie results to compliance reports for full audit trails.
Version History
Every save creates a version. Roll back to any previous version instantly with full change tracking.
Comparison View
Compare any two versions side by side with highlighted differences in node configurations and connections.
You are a helpful assistant. Answer user questions about our product.
You are a helpful onboarding assistant. Guide new users through setup. Be concise and friendly.
Version 5 shows improved response quality with the updated prompt. Responses are more concise and better guide users through onboarding.
- • v5 responses are 40% shorter on average
- • v5 includes setup steps in 95% of responses vs 60% for v4
- • v5 error handling catches 2 additional edge cases
AI Agents
Intelligent Agents That Reason
Build AI agents with advanced reasoning, memory management, and tool integration. Three distinct modes for every use case.
ReAct Mode
Reason-Act-Observe loop for complex multi-step problem solving. The agent thinks through each step before taking action.
Conversational Mode
Natural dialogue with persistent memory. Perfect for chatbots, support agents, and interactive assistants.
Tools-Only Mode
Direct function calling without reasoning overhead. Ideal for structured tasks like data extraction and API orchestration.
4 Memory Types
Choose the right memory strategy for your agent's needs.
Core Capabilities
Everything you need for production AI agents.
Visual Editors
Two Editors, One Platform
Choose the right editor for your workflow complexity. Switch between them at any time.
Canvas Editor
For complex workflows
- Drag-and-drop node placement
- Multi-branch parallel paths
- Zoom, pan, and minimap
- Node grouping and alignment
- Keyboard shortcuts
- Connection validation
Simple Flow Editor
For quick automations
- Linear top-to-bottom flow
- Quick node insertion
- Simplified configuration
- Mobile-friendly editing
- Perfect for simple automations
- Instant switch to Canvas
Chat Widgets
Embed AI on Any Website
Deploy conversational AI experiences with a single script tag. Fully customizable, secure by default, with built-in analytics.
Deployment
Branding
Security
Analytics
Quick Start
<!-- Add before </body> -->
<script
src="https://app.nodeloom.io/widget.js"
data-widget-id="your-id"
data-theme="auto"
data-brand-color="#3B82F6"
data-position="bottom-right"
></script>97+ Integrations
Connect to Your Entire Stack
Pre-built integrations with OAuth2 support. No coding required. Just authenticate and automate.
AI & Machine Learning6
Cloud & Infrastructure6
CRM & Sales6
DevOps & Development6
Databases6
Communication6
Productivity6
Don't see your integration? Use the HTTP Request node to connect to any REST API.
Ready to govern your AI agents?
Discover, monitor, and secure AI agents with full observability and enterprise-grade compliance. Start your free trial today.