Agent spans
Production agents are distributed systems. They invoke tools, delegate to workers, run planning loops, and stream tokens. Without tracing, debugging looks like reading a transcript and guessing. LM-Kit emits OpenTelemetry spans following the GenAI semantic conventions for agents, tools, planning steps, delegations, and orchestration nodes. Plug into Application Insights, Jaeger, Honeycomb, or any OTLP-compatible backend.
Whole-execution span with iteration count, tokens used, status.
One span per tool invocation with arguments and result.
Trace ReAct steps and supervisor delegations across workers.
A failing agent might call three tools, delegate to a worker, retry once, stream half a response, and abort. Logs show fragments. Traces show the tree. The difference between a five-minute fix and a five-day investigation is whether you can see the tree.
SpanKind distinguishes Agent, Tool, Planning, Delegation, Orchestration, Inference. Filter, group, alert per kind.
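Per-kind filtering can be sketched with the InMemoryTracer shown elsewhere on this page; the `agent` instance and its construction are assumed, and the run prompt is illustrative.

```csharp
using LMKit.Agents.Observability;

// Sketch: group captured spans by kind to see how an agent run breaks down.
// InMemoryTracer, AgentTracing, and SpanKind are the types this page shows;
// `agent` is assumed to be constructed elsewhere.
var tracer = new InMemoryTracer();
AgentTracing.Configure(tracer);

await agent.RunAsync("Summarise the incident report");

foreach (var group in tracer.Spans.GroupBy(s => s.Kind))
    Console.WriteLine($"{group.Key}: {group.Count()} span(s)");
```

The same kind-based grouping is what a backend dashboard does with these spans; doing it in memory is handy for a quick local sanity check.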
Tags follow the OpenTelemetry GenAI semantic conventions: model name, prompt tokens, completion tokens, tool name, latency. Backends recognise and aggregate them automatically.
AgentDiagnostics.ActivitySource integrates with .NET's native diagnostics. Use existing OTel exporters; no custom plumbing.
Streaming runs emit token-level events you can sample. Spot which tokens were thinking, which were content, which triggered tool calls.
InMemoryTracer captures spans without an external backend. Perfect for unit tests asserting an agent took the expected path.
Implement ITraceExporter for proprietary backends. The pipeline is identical to mainstream observability: pluggable, tested, batched.
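A custom exporter might look like the sketch below. The exact ITraceExporter member names and the span type are assumptions for illustration; check the LM-Kit API reference for the real signature.

```csharp
using LMKit.Agents.Observability;

// Hypothetical custom exporter. The Export method name and the AgentSpan
// type are illustrative assumptions, not the confirmed LM-Kit surface.
public sealed class FileTraceExporter : ITraceExporter
{
    public void Export(IReadOnlyCollection<AgentSpan> batch)
    {
        // Append one line per span; a real exporter would batch and
        // serialize to its backend's wire format instead.
        foreach (var span in batch)
            File.AppendAllText("agent-traces.log", $"{span.Kind} {span.Name}\n");
    }
}
```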
Subscribe an ActivityListener to AgentDiagnostics.ActivitySource
or, in ASP.NET Core, register OTel with the source name. Done. Every
agent run produces a span tree with parent/child relationships intact.
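A minimal in-process subscription, using only System.Diagnostics and the AgentDiagnostics.SourceName constant this page documents, could look like this; the exporter-free setup is useful when you just want spans printed during development.

```csharp
using System.Diagnostics;
using LMKit.Agents.Observability;

// Minimal listener: no OTel SDK, just the BCL. Prints each agent span's
// display name and duration as it completes.
var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name == AgentDiagnostics.SourceName,
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllDataAndRecorded,
    ActivityStopped = activity =>
        Console.WriteLine($"{activity.DisplayName}: {activity.Duration.TotalMilliseconds:F0} ms")
};
ActivitySource.AddActivityListener(listener);
```

For production, prefer the OpenTelemetry registration shown on this page; an ActivityListener is best kept for local debugging or tests.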
Register the agent activity source with OpenTelemetry to stream spans to Jaeger, Honeycomb, or the console.
using OpenTelemetry.Trace;
using LMKit.Agents.Observability;

// Standard OTel registration. AgentDiagnostics.SourceName is the source.
builder.Services.AddOpenTelemetry()
    .WithTracing(tp => tp
        .AddSource(AgentDiagnostics.SourceName)
        .AddOtlpExporter()       // Jaeger, Honeycomb, etc.
        .AddConsoleExporter());  // or write spans to console

// That is it. Every agent run, tool call, plan step, and delegation now traces.
var result = await agent.RunAsync("Diagnose pipeline failure in build #4321");
Capture spans in memory inside a unit test to assert that the agent took the expected path.
using LMKit.Agents.Observability;

var tracer = new InMemoryTracer();
AgentTracing.Configure(tracer);

await agent.RunAsync("What is the weather in Toulouse?");

// Assert the agent took the expected path.
Assert.That(tracer.Spans.Count(s => s.Kind == SpanKind.Tool), Is.EqualTo(1));
Assert.That(tracer.Spans.Single(s => s.Kind == SpanKind.Tool).Tags["tool.name"],
    Is.EqualTo("get_current_weather"));
Excellent products, but tied to LangChain runs and require a separate SaaS. Local agents and offline workloads do not fit naturally.
Basic telemetry hooks, but no GenAI semantic conventions and no agent-specific span kinds. You build the cross-agent picture yourself.
Native ActivitySource, GenAI semconv, six span kinds, in-memory tracer for tests, plugs into any OTel backend without leaving your network.
Working console demos on GitHub, step-by-step how-to guides on the docs site, and the API reference for the classes used on this page.
Console demo: emit OpenTelemetry spans from every agent step.
Open on GitHub →
Agent demo: deep integration with .NET DiagnosticSource.
Open on GitHub →
How-to guide: spans, metrics, traces for agent workflows.
Read the guide →
How-to guide: trace a request across multiple agents and services.
Read the guide →