Features

Everything You Need to Govern AI Agents in Production

From agent discovery to compliance automation, observability SDKs to adversarial testing, NodeLoom gives you the complete governance layer for AI operations.

Observability

Monitor Every Execution

Full visibility into your workflows, AI agents, and resource usage. Catch issues before they impact users.

Real-Time Execution

Watch workflows execute node-by-node with live status updates, input/output inspection, and execution timelines.

Behavioral Monitoring

Track AI agent behavior patterns, detect anomalies, and set up alerts for unexpected outputs or failures.

Drift Detection

Monitor AI model output quality over time. Get alerted when response patterns shift or degrade.
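
One common way to implement this kind of check is to compare the recent mean of a quality score against a historical baseline. The sketch below illustrates that pattern in plain Python; it is not NodeLoom's implementation, and the window size and threshold are arbitrary assumptions:

```python
from collections import deque

class DriftDetector:
    """Flag drift when the recent mean of a quality score deviates from
    the historical baseline by more than `threshold` standard deviations.
    Illustrative sketch of the pattern, not NodeLoom's implementation."""

    def __init__(self, baseline, window=20, threshold=2.0):
        self.baseline = baseline            # historical scores (list of floats)
        self.recent = deque(maxlen=window)  # rolling window of recent scores
        self.threshold = threshold

    def observe(self, score):
        """Record one score; return True if drift is detected."""
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough recent data yet
        mean_b = sum(self.baseline) / len(self.baseline)
        var_b = sum((x - mean_b) ** 2 for x in self.baseline) / len(self.baseline)
        std_b = max(var_b ** 0.5, 1e-9)     # guard against a zero-variance baseline
        mean_r = sum(self.recent) / len(self.recent)
        return abs(mean_r - mean_b) / std_b > self.threshold
```

In practice the score fed into `observe` could be anything measurable per response: an evaluation score, response length, or refusal rate.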

Token Tracking

Track AI token usage and costs per workflow, per execution, and per agent. Set budget limits and alerts.

Compliance Dashboard

Real-time compliance posture with guardrail coverage, credential health, and audit integrity. Generate one-click SOC2, GDPR, HIPAA, and SOX reports.

SIEM Export

Stream audit logs to Splunk, Datadog, Elasticsearch, or custom webhooks for enterprise-grade observability.

Execution Inspector (exec_a3f8k2)
1. Webhook Trigger (webhook) · 23ms
2. HTTP Request (http-request) · 847ms
   Input:  { "url": "https://api.example.com/users", "method": "GET" }
   Output: { "status": 200, "data": [...] }
3. AI Agent (ai-agent) · running
4. Send Email (email) · pending

Execution History (Live)
  • Customer Onboarding · Webhook · 2m ago · 1.2s
  • Daily Report · Schedule · 15m ago · 3.8s
  • Slack Bot Handler · Webhook · 23m ago · 0.4s · Rate limit exceeded
  • Data Sync Pipeline · Manual · now · -
  • Customer Onboarding · Webhook · 1h ago · 1.1s
  • Weekly Analytics · Schedule · 2h ago · 12.4s

Agent Discovery

Find Every Agent in Your Organization

Most teams don't know how many AI agents are running in their infrastructure. Agent Discovery automatically scans your cloud providers, code repositories, and MCP gateways to build a complete inventory: you can't govern what you can't see.

Cloud Provider Scanning

Automatically discover AI agents running on AWS Lambda, ECS, GCP Cloud Run, Azure Functions, and more. Identify LLM API calls, inference endpoints, and agent frameworks.

GitHub Repository Scanning

Scan your GitHub organization for repositories using AI/ML libraries. Detect LangChain, CrewAI, AutoGen, and custom agent patterns across your codebase.

MCP Gateway Observation

Monitor MCP (Model Context Protocol) gateways to discover agents connecting through standardized tool interfaces. Catalog every agent interaction.

eBPF Kernel Monitoring

Deploy zero-instrumentation probes that intercept LLM API calls at the kernel level. Discover agents without modifying any application code — ideal for finding shadow AI.

Automated Inventory

Build and maintain a complete agent inventory automatically. Track agent owners, deployment locations, AI providers, and last-seen timestamps.

Risk Classification

Automatically classify discovered agents by risk level based on data access patterns, network exposure, and monitoring status. Prioritize what to govern first.
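
A minimal sketch of what scoring on those signals can look like. The field names, weights, and thresholds below are illustrative assumptions, not NodeLoom's actual classification model:

```python
def classify_risk(agent):
    """Toy risk scorer over the criteria named above: data access
    patterns, network exposure, and monitoring status. Weights and
    cutoffs are illustrative, not NodeLoom's actual model."""
    score = 0
    # Broader data access raises risk; PII access raises it most.
    score += {"none": 0, "internal": 1, "pii": 3}[agent["data_access"]]
    # Internet-exposed agents are a larger attack surface.
    score += 2 if agent["internet_exposed"] else 0
    # Unmonitored agents are shadow AI: govern them first.
    score += 2 if not agent["monitored"] else 0
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"
```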

Onboard to Monitoring

One-click onboarding from discovery to full monitoring. Generate SDK tokens, get instrumentation snippets, and start tracking agents in minutes.

Agent Inventory
Last scan: 2 min ago · 23 agents found

| Agent           | Source         | Framework  | Risk   | Status      |
|-----------------|----------------|------------|--------|-------------|
| support-agent   | AWS Lambda     | LangChain  | Low    | Monitored   |
| fraud-detector  | AWS ECS        | Custom     | Low    | Monitored   |
| content-writer  | MCP Gateway    | CrewAI     | High   | Unmonitored |
| data-pipeline   | GitHub Actions | LangChain  | Medium | Unmonitored |
| onboarding-bot  | GCP Cloud Run  | Custom     | Low    | Monitored   |
| billing-advisor | eBPF Probe     | OpenAI SDK | Medium | Monitored   |

Observability SDK

Instrument Any AI Agent

Bring external agents into NodeLoom's monitoring pipeline. Install a lightweight SDK, add a few lines of code, and get full observability across LangChain, CrewAI, or any custom framework.


4 Language SDKs

First-class support for Python, TypeScript, Java, and Go. Install a package, add a few lines, and your agents report telemetry.

Traces and Spans

Structured observability with traces (agent runs) and spans (individual operations). Automatically maps to the NodeLoom monitoring model.

Token Cost Tracking

Report token usage per LLM call. Track costs across all your agents in one place, with budget alerts and per-model breakdowns.
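
Per-model cost attribution reduces to multiplying token counts by per-token prices. A sketch of that arithmetic, with hypothetical prices; real prices vary by provider and change over time:

```python
# Hypothetical per-1M-token prices; check your provider's current pricing.
PRICES = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def llm_call_cost(model, prompt_tokens, completion_tokens):
    """Dollar cost of one LLM call, from its token counts."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000

def per_model_breakdown(calls):
    """Aggregate (model, prompt_tokens, completion_tokens) calls
    into a per-model cost total."""
    totals = {}
    for model, pt, ct in calls:
        totals[model] = totals.get(model, 0.0) + llm_call_cost(model, pt, ct)
    return totals
```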

Framework Integrations

Built-in callback handlers for LangChain and CrewAI. Drop in a handler and get automatic tracing with zero manual instrumentation.
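
Such handlers follow the callback convention LangChain popularized: each lifecycle hook (`on_llm_start`, `on_llm_end`, and so on) forwards an event to a tracer. A framework-free sketch of that shape; the hook names mirror LangChain's `BaseCallbackHandler`, while the tracer interface here is an assumption for illustration:

```python
class NodeLoomCallbackHandler:
    """Sketch of a LangChain-style callback handler: each lifecycle
    hook forwards an event to a tracer. Hook names follow LangChain's
    BaseCallbackHandler convention; the tracer is illustrative."""

    def __init__(self, tracer):
        self.tracer = tracer  # anything with .append(), e.g. a trace buffer

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.tracer.append(("llm_start", prompts))

    def on_llm_end(self, response, **kwargs):
        self.tracer.append(("llm_end", response))

    def on_tool_start(self, serialized, input_str, **kwargs):
        self.tracer.append(("tool_start", input_str))
```

With a real framework, you would pass the handler in the framework's callbacks list and every LLM and tool call gets traced without touching agent code.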

Batching and Retry

Events are queued locally and sent in batches. Automatic retry with exponential backoff. Telemetry never blocks your agent logic.
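
The batching-with-backoff pattern itself is simple. A sketch of one batch send, assuming any transport callable that raises on failure; this illustrates the pattern, not the SDK's internals:

```python
import time

def send_batch(batch, transport, max_retries=3, base_delay=0.5, sleep=time.sleep):
    """Send one batch of telemetry events with exponential backoff.
    `transport` is any callable that raises on failure. On final
    failure the batch is dropped so telemetry never blocks the agent."""
    for attempt in range(max_retries + 1):
        try:
            transport(batch)
            return True
        except Exception:
            if attempt == max_retries:
                return False                     # give up, drop the batch
            sleep(base_delay * (2 ** attempt))   # 0.5s, 1s, 2s, ...
    return False
```

Injecting `sleep` keeps the function testable; a real sender would also run on a background thread so the agent never waits on the network.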

Full Monitoring Pipeline

SDK-instrumented agents get the same monitoring as native workflows: anomaly detection, drift alerts, evaluations, and compliance tracking.

quick-start
$ pip install nodeloom-sdk
import openai
from nodeloom import NodeLoom, SpanType

client = NodeLoom(
    api_key="sdk_...",
    endpoint="https://your-instance.nodeloom.io"
)

trace = client.trace("support-agent",
    input={"query": "How do I reset my password?"})

with trace.span("gpt-4o", span_type=SpanType.LLM) as span:
    result = openai.chat.completions.create(...)
    span.set_output({"response": result.choices[0].message.content})
    span.set_token_usage(
        prompt_tokens=result.usage.prompt_tokens,
        completion_tokens=result.usage.completion_tokens,
        model="gpt-4o"
    )

trace.end(status="success", output={"answer": "..."})
client.shutdown()
1. Create Token: generate an SDK token from Settings
2. Install SDK: add the package for your language
3. Instrument: wrap your agent runs with traces and spans
4. Monitor: see everything in the Monitoring dashboard

Governance

Control Every Stage

Enterprise governance controls for AI workflows. Manage environments, enforce approvals, and automate incident response.

Incident Response Playbooks

Automated quarantine, notification, escalation, and rollback when drift thresholds are crossed or guardrails are violated. Define custom playbooks per workflow.

Multi-Environment Deployments

Promote workflows through Development, Staging, and Production. Each environment gets its own definition snapshot, webhook tokens, schedules, and credentials.

Human Approval Gates

Require manual approval before workflows execute in production. Configurable approval chains with email/Slack notifications and timeout policies.

Custom Guardrails

Define content filters, PII redaction rules, output validators, and behavioral constraints. Apply per-workflow or globally across your team.
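
A PII redaction rule can be as simple as a set of named regexes applied to agent output. A toy sketch; production guardrails use far more robust detectors than these two patterns:

```python
import re

# Illustrative PII redaction rules of the kind a guardrail might apply.
# Real detectors handle many more formats and locales.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each PII match with a labeled placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}_REDACTED]", text)
    return text
```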

Token Budget Controls

Set per-workflow and per-agent token budgets with automatic throttling. Get alerts at configurable thresholds and track cost attribution across teams.
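
At its core, a token budget is a counter with an alert threshold and a hard cap. A minimal sketch with illustrative numbers and method names:

```python
class TokenBudget:
    """Per-workflow token budget: alert at a configurable fraction of
    the limit, throttle at the limit. Names and defaults illustrative."""

    def __init__(self, limit, alert_at=0.8):
        self.limit = limit        # hard cap in tokens
        self.alert_at = alert_at  # alert when this fraction is used
        self.used = 0

    def record(self, tokens):
        """Add one execution's token usage to the running total."""
        self.used += tokens

    @property
    def alert(self):
        return self.used >= self.limit * self.alert_at

    @property
    def throttled(self):
        return self.used >= self.limit
```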

Environment Pipeline
Customer Onboarding Workflow
  • Development: Active · v12 · 2m ago
  • Staging: Active · v10 · 1d ago
  • Production: Active · v8 · 3d ago
All checks passed
Recent Promotions
  • Dev v12 → Staging · 2h ago · by Jane D.
  • Staging v9 → Production · 1d ago · by Alex K. · Approved

Security Testing

Attack Before Attackers Do

Automated adversarial testing and LLM evaluation to find vulnerabilities before they reach production.

Red Team Testing

Run automated adversarial attacks against your AI agents. Test for prompt injection, jailbreak attempts, data exfiltration, and harmful output generation. Get a vulnerability report with severity ratings.

LLM-as-Judge Evaluation

Use a secondary LLM to automatically score your agent's outputs against custom criteria. Tie evaluation results directly to compliance reports for audit trails.
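
LLM-as-Judge typically means two steps: build a rubric prompt for a second model, then parse its scores. A sketch of both steps that leaves out the model call itself; the prompt template and the "name: score" reply format are assumptions for illustration, not NodeLoom's exact contract:

```python
def judge_prompt(criteria, output):
    """Build the rubric prompt a secondary 'judge' model would score."""
    lines = ["Score the following agent output from 0-100 on each criterion.",
             "Respond with one 'name: score' line per criterion.", ""]
    for c in criteria:
        lines.append(f"- {c}")
    lines += ["", "Output to evaluate:", output]
    return "\n".join(lines)

def parse_scores(reply):
    """Parse 'name: score' lines from the judge's reply into a dict,
    skipping lines that don't fit the expected format."""
    scores = {}
    for line in reply.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            try:
                scores[name.strip()] = float(value)
            except ValueError:
                continue
    return scores
```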

Continuous Security Monitoring

Schedule recurring red team tests and evaluations. Track security posture over time and get alerted when new vulnerabilities appear.

Red Team Report
Onboarding Agent · 2m ago
Tests Run: 8 · Passed: 6 · Vulnerabilities: 2
Severity: 1 Critical · 1 High · 0 Medium · 0 Low
  • Prompt Injection (Direct) · high · PASS
  • Prompt Injection (Indirect) · critical · FAIL
  • Jailbreak (DAN) · high · PASS
  • Jailbreak (Role-Play) · medium · PASS
  • Data Exfiltration · critical · PASS
  • Harmful Content Generation · high · PASS
  • PII Leakage · high · FAIL
  • System Prompt Extraction · medium · PASS
2 vulnerabilities require attention: indirect prompt injection and PII leakage tests failed. Review guardrail configurations and add input sanitization rules.

Version Control

Version Everything

Full version history with structured diffs and A/B testing. Never lose a workflow change again.

Structured Diffs

See exactly what changed between versions, including nodes added, removed, or modified, with a visual diff viewer.

A/B Testing

Run two workflow versions side by side. Use LLM-as-Judge to automatically evaluate which version performs better, then tie results to compliance reports for full audit trails.

Version History

Every save creates a version. Roll back to any previous version instantly with full change tracking.

Comparison View

Compare any two versions side by side with highlighted differences in node configurations and connections.

Version History
  • v5 · Added error handling to API node · Current · 2m ago · by Jane D.
  • v4 · Updated AI agent prompt · 1h ago · by Jane D.
  • v3 · Added Slack notification branch · 3h ago · by Alex K.
  • v2 · Connected webhook trigger · 1d ago · by Jane D.
  • v1 · Initial workflow creation · 2d ago · by Alex K.

Version Comparison
Comparing v4 ↔ v5: added error handling to HTTP Request node, modified AI Agent prompt, removed debug logger.
  • Error Handler (added): retryCount: 3, backoff: exponential
  • AI Agent (modified): changed settings: systemPrompt, temperature
  • Debug Logger (removed)

AI Agent (ai-agent) prompt diff:
Version 4: "You are a helpful assistant. Answer user questions about our product."
Version 5: "You are a helpful onboarding assistant. Guide new users through setup. Be concise and friendly."

Version Evaluation
Evaluation v4 vs v5 · Completed

Evaluation Summary: Version 5 shows improved response quality with the updated prompt. Responses are more concise and better guide users through onboarding.

Version 4 Score: 72.5%
Version 5 Score: 91.3%
Recommendation: Deploy Version 5. The updated prompt produces more focused and helpful onboarding responses.
Key Differences:
  • v5 responses are 40% shorter on average
  • v5 includes setup steps in 95% of responses vs 60% for v4
  • v5 error handling catches 2 additional edge cases

AI Agents

Intelligent Agents That Reason

Build AI agents with advanced reasoning, memory management, and tool integration. Three distinct modes for every use case.

ReAct Mode

Reason-Act-Observe loop for complex multi-step problem solving. The agent thinks through each step before taking action.

Conversational Mode

Natural dialogue with persistent memory. Perfect for chatbots, support agents, and interactive assistants.

Tools-Only Mode

Direct function calling without reasoning overhead. Ideal for structured tasks like data extraction and API orchestration.

4 Memory Types

Choose the right memory strategy for your agent's needs.

Buffer Memory
Full conversation history
Window Memory
Last N messages
Summary Memory
AI-compressed summaries
Token Memory
Token-budget aware
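
Two of these strategies are easy to sketch directly. Window Memory keeps the last N messages; Token Memory keeps the most recent messages that fit a token budget. The word-count tokenizer below is a stand-in for a real one, and both functions illustrate the strategies rather than NodeLoom's implementation:

```python
def window_memory(messages, n):
    """Window Memory: keep only the last n messages."""
    return messages[-n:]

def token_memory(messages, budget, count_tokens=lambda m: len(m.split())):
    """Token Memory: keep the most recent messages that fit the token
    budget, walking backwards from the newest message."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```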

Core Capabilities

Everything you need for production AI agents.

Multi-Provider
OpenAI, Anthropic, Gemini, Ollama
Tool Integration
Connect any node as an agent tool
Streaming
Real-time token-by-token output
Token Tracking
Cost and usage per execution

Visual Editors

Two Editors, One Platform

Choose the right editor for your workflow complexity. Switch between them at any time.

Canvas Editor

For complex workflows

[Canvas preview: Webhook (POST /new-signup) → IF Condition (Plan = Business) with true/false branches → Send Email (welcome email), Slack Message (#new-signups), AI Agent (onboarding flow)]

  • Drag-and-drop node placement
  • Multi-branch parallel paths
  • Zoom, pan, and minimap
  • Node grouping and alignment
  • Keyboard shortcuts
  • Connection validation

Simple Flow Editor

For quick automations

[Flow preview: 1. Schedule Trigger (every day at 9:00 AM) → 2. HTTP Request (GET api.example.com/users) → 3. Send Email (SMTP via Gmail, subject: "Welcome aboard!")]

  • Linear top-to-bottom flow
  • Quick node insertion
  • Simplified configuration
  • Mobile-friendly editing
  • Perfect for simple automations
  • Instant switch to Canvas

Chat Widgets

Embed AI on Any Website

Deploy conversational AI experiences with a single script tag. Fully customizable, secure by default, with built-in analytics.

[Widget preview: Acme Support · "We typically reply within minutes"]
Bot: Welcome to Acme! I can help with orders, returns, and product questions.
User: I want to track my order #12345
Bot: Your order #12345 is out for delivery and should arrive by 3pm today. Is there anything else?
User: Can I change the delivery address?
Bot: Let me check if the address can still be updated for your shipment...
[Type a message... · Powered by NodeLoom]

Deployment

Single Script Tag
One line of code to embed
Multiple Widgets
Different bots per page

Branding

Custom Themes
Match your brand identity
Custom CSS
Full style control

Security

XSS Protection
Multi-layer sanitization
Rate Limiting
Abuse prevention built-in

Analytics

Usage Metrics
Conversations and messages
Session Tracking
User engagement data

Quick Start

embed.html
<!-- Add before </body> -->
<script
  src="https://app.nodeloom.io/widget.js"
  data-widget-id="your-id"
  data-theme="auto"
  data-brand-color="#3B82F6"
  data-position="bottom-right"
></script>

97+ Integrations

Connect to Your Entire Stack

Pre-built integrations with OAuth2 support. No coding required. Just authenticate and automate.

AI & Machine Learning (6)

OpenAI
Anthropic
Google Gemini
Ollama
Hugging Face
Replicate

Cloud & Infrastructure (6)

AWS S3
Google Cloud
Azure
Docker
Kubernetes
Cloudflare

CRM & Sales (6)

Salesforce
HubSpot
Pipedrive
Zoho CRM
Freshsales
Close

DevOps & Development (6)

GitHub
GitLab
Bitbucket
Jira
Linear
Jenkins

Databases (6)

PostgreSQL
MySQL
MongoDB
Redis
Elasticsearch
Supabase

Communication (6)

Slack
Discord
Microsoft Teams
Twilio
SendGrid
Mailchimp

Productivity (6)

Google Sheets
Notion
Airtable
Todoist
Trello
Asana

Don't see your integration? Use the HTTP Request node to connect to any REST API.

Ready to govern your AI agents?

Discover, monitor, and secure AI agents with full observability and enterprise-grade compliance. Start your free trial today.