
AI agents that run on your hardware.

The most complete on-device agent framework for .NET. Multi-agent workflows, six reasoning strategies, MCP and 70+ built-in tools, portable Agent Skills, resilience policies, full tracing, and real-time streaming: all from one NuGet package with zero cloud dependency.

18 agent templates · 70+ built-in tools · 6 reasoning strategies · 100% on-device
Templates

18

Agent templates for assistant, analyst, researcher, reviewer, code, debugger, editor, extractor, QA, tutor, and more.

Tools

70+

Built-in tools across Data, Document, Text, Numeric, Security, Utility, I/O, and Net categories.

Strategies

6

Reasoning strategies: Chain of Thought, ReAct, Plan and Execute, Reflection, Tree of Thought, None.

Privacy

100%

On-device and private. No cloud dependency, no data leaves the machine, no API keys to manage.

Multi-agent workflows

Compose agents into production workflows.

Chain specialized agents into pipelines, fan out work in parallel, route by intent, or let a supervisor delegate and aggregate. Each agent reasons independently while the orchestrator manages handoffs and convergence.

Pattern 01

Pipeline

Sequential handoff between specialists. The output of one agent becomes the input to the next.

Pattern 02

Parallel

Fan out the same task to multiple agents at once, then merge their results into one answer.

Pattern 03

Router

Route incoming requests to the right specialist by intent classification or schema match.

Pattern 04

Supervisor

A lead agent decomposes the goal, delegates to specialists, monitors progress, and aggregates results.

Pipeline orchestration

User goal → Orchestrator → Agent A → Agent B → Result

Parallel fan-out

Orchestrator → (Agent A  +  Agent B  +  Agent C) → Merge
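The pipeline pattern above might be wired up as follows. This is a hypothetical sketch: the `AgentPipeline` builder and `Agent.FromTemplate` names are illustrative assumptions for the example, not the confirmed LM-Kit.NET API; see the multi-agent workflows page for the actual surface.

```csharp
// Hypothetical sketch of the pipeline pattern: User goal → Agent A → Agent B → Result.
// Type and method names here are illustrative assumptions, not the confirmed API.
var researcher = Agent.FromTemplate(AgentTemplate.Research); // Agent A: gathers facts
var editor     = Agent.FromTemplate(AgentTemplate.Editor);   // Agent B: polishes the draft

var pipeline = new AgentPipeline()
    .Then(researcher)   // output of the researcher...
    .Then(editor);      // ...becomes the editor's input

string result = await pipeline.RunAsync("Summarize the latest release notes");
```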

Explore multi-agent workflows
Agent reasoning

Control how your agent thinks.

Six built-in reasoning strategies let you balance speed, accuracy, and cost, from direct single-shot responses to multi-step ReAct loops with tool calls, reflection, and tree-of-thought exploration.

  • Chain of Thought: step-by-step reasoning before the final answer
  • ReAct: reason then act loop, interleaving thought, tool calls, and observations
  • Plan and Execute: produce an upfront plan, then execute each step in order
  • Reflection: self-critique a draft answer and revise before responding
  • Tree of Thought: explore multiple branches in parallel and pick the best
  • None: direct response, no extra reasoning overhead
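Selecting a strategy per agent might look like the following. The enum values mirror the list above, but the property and method names in this sketch are illustrative assumptions, not the confirmed LM-Kit.NET API.

```csharp
// Hypothetical sketch: picking a reasoning strategy per agent.
// Property and enum names are illustrative assumptions.
var agent = Agent.FromTemplate(AgentTemplate.Assistant);
agent.ReasoningStrategy = ReasoningStrategy.ReAct; // reason → act → observe loop

// For latency-sensitive paths, skip extra reasoning entirely:
// agent.ReasoningStrategy = ReasoningStrategy.None;

var answer = await agent.RunAsync("What changed in the Q3 report?");
```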
MCP & tools

Connect agents to any external service.

Full Model Context Protocol client implementation with 8 categories of built-in tools covering 70+ operations. Define custom tools with ITool or [LMFunction], connect to MCP servers, and compose tool chains with JSON Schema validation and parallel execution.

Highlight

MCP Protocol

Tool discovery, resources, prompts, sampling, and stdio transport. Connect your agents to any MCP-compatible server.

Highlight

70+ built-in tools

Eight categories shipped in the box: Data, Document, Text, Numeric, Security, Utility, I/O, and Net.

Highlight

Custom tools

The ITool interface and the [LMFunction] attribute let you turn any C# method into an agent-callable tool in seconds.
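The [LMFunction] binding described above might look like this. The attribute name comes from the text; its exact parameters and the registration call are illustrative assumptions, not the confirmed LM-Kit.NET signatures.

```csharp
// Hypothetical sketch of [LMFunction] attribute binding.
// The attribute parameters and registration call are illustrative assumptions.
public static class WeatherTools
{
    [LMFunction("get_temperature", "Returns the current temperature for a city, in °C.")]
    public static double GetTemperature(string city)
    {
        // Real logic would read a local sensor or data store.
        return city == "Paris" ? 18.5 : 21.0;
    }
}

// Registering the class exposes its methods as agent-callable tools:
agent.Tools.Register(typeof(WeatherTools));
```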

8 built-in tool categories
Data Document Text Numeric Security Utility I/O Net
Capabilities
  • Tool discovery
  • JSON Schema validation
  • Parallel execution
  • Human-in-the-loop approval
  • Stdio transport
  • Resources, prompts & sampling
Explore MCP & tools
Agent skills

Portable skills via SKILL.md bundles.

Package agent capabilities as portable SKILL.md files with instructions, tools, and guardrails. Load from local folders, remote URLs, or the agentskills.io marketplace with hot-reload support.

  • Hot-reload: edits land without restarting the host
  • Marketplace: pull skills from agentskills.io
  • Guardrails: restrict tools, scope, and permissions per skill
  • Remote loading: load skills from URL, not just disk
  • File watcher: auto-reload on file changes
  • Versioned: pin skills to a specific revision
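A SKILL.md bundle might look like the following. Only the SKILL.md file name comes from the text above; the section headings, tool names, and guardrail syntax in this sketch are illustrative assumptions.

```markdown
<!-- Hypothetical SKILL.md sketch; section and tool names are illustrative assumptions. -->
# Invoice Extractor

Extracts totals, dates, and vendor names from PDF invoices.

## Tools
- Document.ReadPdf
- Data.ParseCurrency

## Guardrails
- No network access
- Read-only file scope: ./invoices
```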
Enterprise-ready

Built for production workloads.

Beyond orchestration and reasoning, LM-Kit.NET ships the infrastructure agents need to operate reliably at scale. Each capability has a dedicated page.

18 templates

Agent templates

Eighteen pre-built specialized agents (Chat, Code, Research, Reviewer, Debugger, Editor, Classifier, Extractor, Planner, ReAct, QA, Tutor, and more) with typed configuration and calibrated prompts.

Browse templates

70+ tools

Tools & function calling

Eight built-in categories, ITool for custom logic, [LMFunction] attribute binding, grammar-constrained decoding so the model cannot emit malformed JSON.

Tools page

Graphs

Graph orchestration

GraphOrchestrator with composable Sequential, Parallel, Conditional, and Agent nodes. Arbitrary workflow shapes, thread-safe context, channel-based streaming.
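A graph combining the node types named above might be assembled like this. GraphOrchestrator comes from the text; the builder methods and conditional syntax in this sketch are illustrative assumptions, not the confirmed API.

```csharp
// Hypothetical sketch of GraphOrchestrator with Agent and Conditional nodes.
// Builder method names are illustrative assumptions.
var graph = new GraphOrchestrator()
    .AddNode("triage",  AgentNode.From(classifier))
    .AddNode("fix",     AgentNode.From(debuggerAgent))
    .AddNode("explain", AgentNode.From(tutor))
    // Route by the classifier's label: bugs go to the debugger, else to the tutor.
    .AddConditional("triage", ctx => ctx.Label == "bug" ? "fix" : "explain");

var outcome = await graph.RunAsync(ticketText);
```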

Graph page

Delegation

Agent delegation

Programmatic DelegationManager for explicit routing; model-driven SupervisorOrchestrator where the LLM picks workers via a delegate_to_agent tool.

Delegation page

Streaming

Real-time streaming

Channel-based, non-blocking. Typed token kinds (Content, Thinking, ToolCall, Delegation). Multi-handler aggregation.
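Consuming the stream might look like the following. The token kinds (Content, Thinking, ToolCall, Delegation) come from the text above; the streaming method and token shape are illustrative assumptions.

```csharp
// Hypothetical sketch of consuming the channel-based stream.
// The StreamAsync method and token shape are illustrative assumptions.
await foreach (var token in agent.StreamAsync("Explain the build failure"))
{
    switch (token.Kind)
    {
        case TokenKind.Content:  Console.Write(token.Text); break; // final answer text
        case TokenKind.Thinking: LogReasoning(token.Text);  break; // intermediate reasoning
        case TokenKind.ToolCall: ShowToolNotice(token.Text); break; // tool invocation notice
    }
}
```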

Streaming page

Resilience

Production resilience

Polly-style policies built for agent execution: retry with backoff, circuit breaker, timeout, fallback, bulkhead, rate limit, composites, health checks.
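Composing the policies listed above might look like this. "Polly-style" comes from the text; the policy factory and composition names in this sketch are illustrative assumptions, not the confirmed LM-Kit.NET API.

```csharp
// Hypothetical sketch of composing resilience policies around agent execution.
// Factory and method names are illustrative assumptions.
var resilient = AgentPolicy.Compose(
    AgentPolicy.Retry(maxAttempts: 3, backoff: TimeSpan.FromSeconds(2)),
    AgentPolicy.CircuitBreaker(failuresBeforeOpen: 5),
    AgentPolicy.Timeout(TimeSpan.FromSeconds(30)));

var answer = await resilient.ExecuteAsync(() => agent.RunAsync(goal));
```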

Resilience page

Observability

OpenTelemetry tracing

AgentDiagnostics.ActivitySource emits spans with GenAI semantic conventions. Six span kinds. In-memory tracer for tests. Plugs into Jaeger, Honeycomb, Application Insights.

Observability page

Permissions

Permissions & guardrails

Every tool ships typed metadata: side-effect, risk level, idempotence. ToolPermissionPolicy turns it into allow / deny / require-approval rules with wildcards and risk ceilings.
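A policy built from that metadata might look like the following. ToolPermissionPolicy, wildcards, and risk ceilings come from the text above; the fluent method names in this sketch are illustrative assumptions.

```csharp
// Hypothetical sketch of a ToolPermissionPolicy with wildcards and a risk ceiling.
// Fluent method names are illustrative assumptions.
var policy = new ToolPermissionPolicy()
    .Allow("Text.*")                           // read-only text tools: always fine
    .Deny("Net.*")                             // no outbound network calls
    .RequireApproval(maxRisk: RiskLevel.High); // human sign-off above this risk ceiling

agent.Permissions = policy;
```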

Permissions page

Middleware

Filter pipeline

ASP.NET-style onion middleware for AI: IPromptFilter, ICompletionFilter, IToolInvocationFilter. Redact, validate, salvage, short-circuit.
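A redaction filter in that pipeline might look like this. IPromptFilter comes from the text above; its member signature and the context type are illustrative assumptions, not the confirmed interface.

```csharp
using System.Text.RegularExpressions;

// Hypothetical sketch of an IPromptFilter that redacts emails before inference.
// The member signature and PromptContext type are illustrative assumptions.
public sealed class RedactEmailsFilter : IPromptFilter
{
    public Task OnPromptAsync(PromptContext ctx, Func<Task> next)
    {
        ctx.Prompt = Regex.Replace(ctx.Prompt,
            @"[\w.+-]+@[\w-]+\.[\w.]+", "[redacted-email]");
        return next(); // continue down the onion
    }
}
```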

Filter pipeline page
LM-Kit.NET pillars

Seven pillars, one foundation.

The seven pillars of LM-Kit.NET, plus the local runtime they share. The highlighted card marks where you are now.

The foundation

Every capability above runs on this runtime.

Foundation

Local Inference

The runtime all seven pillars sit on. The LM-Kit.NET NuGet ships the complete inference system: open-weight LLMs, vision-language models, embeddings, on-device speech-to-text, OCR, and classifiers, accelerated on CPU (AVX2), CUDA 12/13, Vulkan, or Metal. One package, zero cloud calls, predictable latency, full data and technology sovereignty.

Explore the foundation
Install the SDK

The most complete agent framework for .NET.

From single-agent prototypes to complex multi-agent pipelines with resilience, tracing, and streaming, LM-Kit.NET gives you everything you need to ship production agents that run entirely on-device.

Download · View pricing · Agents API reference