Real Data Integration
Connect your real LLM and agent metrics to AgentStat. No simulation — just live, accurate charts.
Core Concept: Push Data via updateAgent()
AgentStat is a visualization layer only. In production, simulateData defaults to false — you push metrics using the imperative ref API from anywhere in your component tree:
const agentStatRef = useRef<AgentStatRef>(null);
// Call this whenever you have new data — streaming, polling, or event-driven.
// Anomalous statuses ('stuck', 'hallucinating') are pinned until you pass a
// non-anomalous status, so a transient error stays visible.
agentStatRef.current?.updateAgent(
'agent-id', // must match an id in your agents array
tokensPerSecond, // number — generation speed
progressPercent, // 0-100 — task completion
'active' // AgentStatus
);

AgentStat handles smoothing, health scoring, and rendering automatically. Always use optional chaining — the ref is null before the first render.
Integration Patterns (Copy-Paste Ready)
1. Vercel AI SDK
useCompletion hook — token rate approximated from character count
'use client';
import { useMemo, useRef, useEffect } from 'react';
import { useCompletion } from 'ai/react';
import {
AgentStat,
createAgent,
type Agent,
type AgentStatRef,
} from 'agentstat';
export default function MonitoredChat() {
const agentStatRef = useRef<AgentStatRef>(null);
const startTimeRef = useRef<number>(0);
const agents = useMemo<Agent[]>(
() => [
{
...createAgent('chat-agent', 'Chat Assistant', '#111111'),
config: { expectedTokensPerSec: [5, 25] },
},
],
[]
);
const { completion, isLoading } = useCompletion({
api: '/api/chat',
onResponse: () => {
startTimeRef.current = performance.now();
agentStatRef.current?.updateAgent('chat-agent', 0, 1, 'thinking');
},
onFinish: () => {
agentStatRef.current?.updateAgent('chat-agent', 0, 100, 'complete');
},
});
useEffect(() => {
if (!isLoading || !completion) return;
const elapsed = (performance.now() - startTimeRef.current) / 1000;
// Approximate: ~4 characters per token for English text
const approxTokens = completion.length / 4;
const tokensPerSec = elapsed > 0 ? approxTokens / elapsed : 0;
const progress = Math.min(99, approxTokens * 2); // heuristic: hold below 100 until onFinish fires
agentStatRef.current?.updateAgent('chat-agent', tokensPerSec, progress, 'active');
}, [completion, isLoading]);
return (
<div className="space-y-6">
<AgentStat ref={agentStatRef} agents={agents} height={400} />
{/* Your chat UI here */}
</div>
);
}

2. LangChain / LangGraph
Callback handler — attach to any LLM chain or agent
// lib/agent-stat-monitor.ts
import { BaseCallbackHandler } from 'langchain/callbacks';
import type { RefObject } from 'react';
import type { AgentStatRef } from 'agentstat';
export class AgentStatMonitor extends BaseCallbackHandler {
name = 'AgentStatMonitor';
private startTime = 0;
private tokenCount = 0;
constructor(
private ref: RefObject<AgentStatRef>,
private agentId: string
) {
super();
}
handleLLMStart() {
this.startTime = performance.now();
this.tokenCount = 0;
this.ref.current?.updateAgent(this.agentId, 0, 2, 'thinking');
}
handleLLMNewToken(_token: string) {
this.tokenCount++;
const elapsed = (performance.now() - this.startTime) / 1000;
const tokensPerSec = elapsed > 0 ? this.tokenCount / elapsed : 0;
const progress = Math.min(95, this.tokenCount * 0.5);
this.ref.current?.updateAgent(this.agentId, tokensPerSec, progress, 'active');
}
handleLLMEnd() {
this.ref.current?.updateAgent(this.agentId, 0, 100, 'complete');
}
handleLLMError() {
// 'stuck' pins the agent until you call updateAgent with a
// non-anomalous status — useful so transient errors stay visible.
this.ref.current?.updateAgent(this.agentId, 0, 0, 'stuck');
}
}
// Usage:
// const monitor = new AgentStatMonitor(agentStatRef, 'my-agent');
// const chain = new LLMChain({ llm, prompt, callbacks: [monitor] });

3. WebSocket / SSE
Real-time streaming from your backend
// hooks/use-agent-stream.ts
import { useEffect, type RefObject } from 'react';
import type { AgentStatRef } from 'agentstat';
export function useAgentStream(
agentStatRef: RefObject<AgentStatRef>,
agentId: string,
endpoint: string
) {
useEffect(() => {
const ws = new WebSocket(endpoint);
ws.onmessage = (event) => {
const { tokensPerSec, progress, status } = JSON.parse(event.data);
agentStatRef.current?.updateAgent(agentId, tokensPerSec, progress, status);
};
ws.onerror = () => {
agentStatRef.current?.updateAgent(agentId, 0, 0, 'stuck');
};
return () => ws.close();
}, [agentStatRef, agentId, endpoint]);
}
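The hook above goes silent if the connection drops, since it never reconnects. If you need resilience, a capped exponential backoff is the usual approach; the helper below is an illustrative sketch (not part of AgentStat), and in production you would typically add random jitter to avoid reconnect stampedes:

```typescript
// Capped exponential backoff: attempt 0 -> 1s, 1 -> 2s, 2 -> 4s, ... up to 30s.
// Illustrative helper; names and defaults are assumptions, not AgentStat API.
export function nextBackoffMs(
  attempt: number,
  baseMs = 1000,
  maxMs = 30000
): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// Sketch of use inside an onclose handler (names are illustrative):
// ws.onclose = () => setTimeout(connect, nextBackoffMs(attempt++));
```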
// SSE variant (recommended for one-way metric streams):
export function useAgentSSE(
agentStatRef: RefObject<AgentStatRef>,
agentId: string,
endpoint: string
) {
useEffect(() => {
const source = new EventSource(endpoint);
source.onmessage = (event) => {
const { tokensPerSec, progress, status } = JSON.parse(event.data);
agentStatRef.current?.updateAgent(agentId, tokensPerSec, progress, status);
};
source.onerror = () => {
agentStatRef.current?.updateAgent(agentId, 0, 0, 'stuck');
};
return () => source.close();
}, [agentStatRef, agentId, endpoint]);
}

4. Model Context Protocol (MCP)
MCP runs server-side. Bridge metrics to the browser via SSE, then call updateAgent() on the client.
// 1. MCP server: expose a tool that records metrics server-side
// (mcp-server/index.ts)
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';
const metrics: Record<string, object> = {};
const server = new McpServer({ name: 'agent-metrics', version: '1.0.0' });
server.tool(
'report_agent_metrics',
{
agentId: z.string(),
tokensRate: z.number(),
progress: z.number().min(0).max(100),
status: z.enum(['active', 'thinking', 'stuck', 'complete', 'hallucinating']),
},
async ({ agentId, tokensRate, progress, status }) => {
metrics[agentId] = { tokensRate, progress, status, ts: Date.now() };
return { content: [{ type: 'text', text: 'ok' }] };
}
);
// 2. Expose metrics via SSE from your Next.js API route
// (app/api/agent-metrics/route.ts)
// Note: metrics must be readable from this route, e.g. imported from a
// shared module when the MCP server runs in the same process.
export async function GET() {
const encoder = new TextEncoder();
let intervalId: ReturnType<typeof setInterval>;
const stream = new ReadableStream({
start(controller) {
const send = () => {
controller.enqueue(
encoder.encode(`data: ${JSON.stringify(metrics)}\n\n`)
);
};
intervalId = setInterval(send, 200);
},
// A cleanup function returned from start() is ignored by ReadableStream;
// clear the interval in cancel(), which runs when the client disconnects.
cancel() {
clearInterval(intervalId);
},
});
return new Response(stream, {
headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
});
}
// 3. Browser: consume the SSE stream and push to AgentStat
// (components/McpMonitor.tsx)
import { useEffect, type RefObject } from 'react';
import type { AgentStatRef } from 'agentstat';
export function useMcpMetrics(agentStatRef: RefObject<AgentStatRef>) {
useEffect(() => {
const source = new EventSource('/api/agent-metrics');
source.onmessage = (event) => {
const allMetrics = JSON.parse(event.data);
Object.entries(allMetrics).forEach(([agentId, m]: [string, any]) => {
agentStatRef.current?.updateAgent(
agentId, m.tokensRate, m.progress, m.status
);
});
};
return () => source.close();
}, [agentStatRef]);
}

5. VS Code Extension
Embed AgentStat in a webview panel — extension sends metrics via postMessage
// extension.ts — the Node.js extension host
import * as vscode from 'vscode';
export function activate(context: vscode.ExtensionContext) {
const panel = vscode.window.createWebviewPanel(
'agentMonitor',
'Agent Monitor',
vscode.ViewColumn.Beside,
{ enableScripts: true, retainContextWhenHidden: true }
);
panel.webview.html = getWebviewHtml(panel.webview, context.extensionUri);
// Forward metrics from your extension logic to the React webview
function pushMetrics(
agentId: string,
tokensRate: number,
progress: number,
status: string
) {
panel.webview.postMessage({ type: 'agentstat', agentId, tokensRate, progress, status });
}
// Example: hook this into your language server, AI tooling, or any event source
context.subscriptions.push(
vscode.commands.registerCommand('myExt.reportMetrics', pushMetrics)
);
}
// ─────────────────────────────────────────────────────────────
// webview/App.tsx — the React app running inside the webview
// ─────────────────────────────────────────────────────────────
import { useEffect, useMemo, useRef } from 'react';
import { AgentStat, createAgent, type Agent, type AgentStatRef } from 'agentstat';
export default function App() {
const agentStatRef = useRef<AgentStatRef>(null);
const agents = useMemo<Agent[]>(
() => [createAgent('main', 'Extension Agent')],
[]
);
useEffect(() => {
const handler = (event: MessageEvent) => {
if (event.data?.type !== 'agentstat') return;
const { agentId, tokensRate, progress, status } = event.data;
agentStatRef.current?.updateAgent(agentId, tokensRate, progress, status);
};
window.addEventListener('message', handler);
return () => window.removeEventListener('message', handler);
}, []);
return <AgentStat ref={agentStatRef} agents={agents} />;
}

What Metrics Should You Send?
| Metric | How to Calculate | Why It Matters |
|---|---|---|
| tokensRate | tokensGenerated / elapsedSeconds | Shows generation speed and model responsiveness |
| progress | (currentStep / totalSteps) × 100 | Visual completion indicator for multi-step agents |
| status | One of 'active', 'thinking', 'stuck', 'complete', 'hallucinating' | Drives health scoring and color-coded status in overlays |
For token rate without a streaming SDK: approximate tokens as completion.length / 4 (about 4 characters per token for English text), then divide by elapsed seconds.
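That arithmetic can be wrapped in two small helpers. These are illustrative sketches, not AgentStat exports; the 4-chars-per-token divisor is a rough English-text heuristic, so use a real tokenizer when accuracy matters:

```typescript
// Rough token-rate estimate from raw text length (~4 characters per token).
export function estimateTokensPerSec(text: string, elapsedSeconds: number): number {
  const approxTokens = text.length / 4;
  return elapsedSeconds > 0 ? approxTokens / elapsedSeconds : 0;
}

// Progress for a multi-step agent, clamped to the 0-100 range AgentStat expects.
export function stepProgress(currentStep: number, totalSteps: number): number {
  if (totalSteps <= 0) return 0;
  return Math.min(100, Math.max(0, (currentStep / totalSteps) * 100));
}
```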
Error Handling & Fallbacks
Always wrap metric pushes in try/catch to avoid breaking your app if AgentStat isn't ready:
import { useCallback, type RefObject } from 'react';
import type { AgentStatRef, AgentStatus } from 'agentstat';
function useAgentStatPush(
agentStatRef: RefObject<AgentStatRef>,
agentId: string
) {
return useCallback(
(tokensRate: number, progress: number, status: AgentStatus) => {
try {
agentStatRef.current?.updateAgent(agentId, tokensRate, progress, status);
} catch (error) {
console.warn('AgentStat push failed:', error);
}
},
[agentStatRef, agentId]
);
}

Quick Start Checklist
1. Memoize your agents array with useMemo, or declare it at module scope
2. Create a ref: useRef<AgentStatRef>(null)
3. Push data with updateAgent() on key lifecycle events (stream start, each token, completion, error)
4. For local exploration, pass simulateData to see the layout and theming without wiring real data
Full Working Example (Vercel AI SDK)
'use client';
import { useMemo, useRef, useEffect } from 'react';
import { useCompletion } from 'ai/react';
import {
AgentStat,
createAgent,
type Agent,
type AgentStatRef,
} from 'agentstat';
export default function MonitoredChat() {
const agentStatRef = useRef<AgentStatRef>(null);
const startTimeRef = useRef<number>(0);
const agents = useMemo<Agent[]>(
() => [
{
...createAgent('chat-agent', 'Chat Assistant', '#111111'),
config: { expectedTokensPerSec: [5, 25] },
},
],
[]
);
const { completion, isLoading } = useCompletion({
api: '/api/chat',
onResponse: () => {
startTimeRef.current = performance.now();
agentStatRef.current?.updateAgent('chat-agent', 0, 1, 'thinking');
},
onFinish: () => {
agentStatRef.current?.updateAgent('chat-agent', 0, 100, 'complete');
},
});
useEffect(() => {
if (!isLoading || !completion) return;
const elapsed = (performance.now() - startTimeRef.current) / 1000;
const approxTokens = completion.length / 4; // ~4 chars per token
const tokensPerSec = elapsed > 0 ? approxTokens / elapsed : 0;
const progress = Math.min(99, approxTokens * 2); // heuristic: hold below 100 until onFinish fires
agentStatRef.current?.updateAgent('chat-agent', tokensPerSec, progress, 'active');
}, [completion, isLoading]);
return (
<div className="space-y-6">
<AgentStat
ref={agentStatRef}
agents={agents}
height={400}
referenceLine={{ value: 50, label: 'Threshold' }}
/>
{/* Your chat UI below */}
</div>
);
}

Troubleshooting
| Issue | Solution |
|---|---|
| Lines don't appear | Confirm updateAgent() is being called after mount and that the id matches one in your agents array. Check the browser console for errors. |
| Numbers jump wildly | Smooth tokensRate before sending — a simple moving average over the last 3-5 values eliminates jitter. |
| Canvas blank on load | AgentStat needs at least 2 data points to draw a curve. Call updateAgent() twice in quick succession on mount, or enable simulateData temporarily. |
| Ref methods are undefined | Use optional chaining: ref.current?.updateAgent(). The ref is null before the component mounts. |
| Health score lower than expected | Set config.expectedTokensPerSec on each agent. Without a range, the default [5, 25] is used for tokenEfficiency scoring. |
| Stuck status won't clear | When you push 'stuck' or 'hallucinating' via updateAgent, AgentStat locks the agent so the sim won't auto-recover. Pass a non-anomalous status like 'active' to clear the lock. |
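The moving average suggested for jittery numbers takes only a few lines. This is an illustrative helper, not an AgentStat export; a window of 3-5 samples matches the range recommended in the table:

```typescript
// Sliding-window moving average: push each raw tokensRate sample and send
// the returned smoothed value to updateAgent() instead of the raw reading.
export class MovingAverage {
  private samples: number[] = [];

  constructor(private windowSize = 5) {}

  push(value: number): number {
    this.samples.push(value);
    // Drop the oldest sample once the window is full.
    if (this.samples.length > this.windowSize) this.samples.shift();
    return this.samples.reduce((sum, v) => sum + v, 0) / this.samples.length;
  }
}

// Usage sketch (agentStatRef and rawRate are assumed to exist in your component):
// const avg = new MovingAverage(5);
// agentStatRef.current?.updateAgent(id, avg.push(rawRate), progress, 'active');
```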