The most complete on-device agent framework for .NET. Multi-agent workflows, six reasoning strategies, MCP and 70+ built-in tools, portable Agent Skills, resilience policies, full tracing, and real-time streaming, all from one NuGet package with zero cloud dependency.
Agent templates for assistant, analyst, researcher, reviewer, code, debugger, editor, extractor, QA, tutor, and more.
Built-in tools across Data, Document, Text, Numeric, Security, Utility, I/O, and Net categories.
Reasoning strategies: Chain of Thought, ReAct, Plan and Execute, Reflection, Tree of Thought, None.
On-device and private. No cloud dependency, no data leaves the machine, no API keys to manage.
Chain specialized agents into pipelines, fan out work in parallel, route by intent, or let a supervisor delegate and aggregate. Each agent reasons independently while the orchestrator manages handoffs and convergence.
Pattern 01 · Pipeline
Sequential hand-off between specialists. Output of one agent becomes input to the next.
Pattern 02 · Parallel fan-out
Fan out the same task to multiple agents at once, then merge their results into one answer.
Pattern 03 · Router
Route incoming requests to the right specialist by intent classification or schema match.
Pattern 04 · Supervisor
A lead agent decomposes the goal, delegates to specialists, monitors progress, and aggregates results.
Sequential pipeline: User goal → Orchestrator → Agent A → Agent B → Result
Parallel fan-out: Orchestrator → (Agent A + Agent B + Agent C) → Merge
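The two diagram shapes reduce to simple function composition. A minimal, library-free C# sketch of the pipeline and fan-out patterns (the string-in/string-out agent contract and the stub agents are illustrative, not LM-Kit.NET's actual orchestrator API):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Illustrative only: each "agent" is a function from input text to output text.

// Pattern 01: sequential pipeline - each agent's output feeds the next.
async Task<string> Pipeline(string goal, params Func<string, Task<string>>[] agents)
{
    var current = goal;
    foreach (var agent in agents) current = await agent(current);
    return current;
}

// Pattern 02: parallel fan-out - the same task goes to every agent, results are merged.
async Task<string> FanOut(string task, Func<string[], string> merge,
                          params Func<string, Task<string>>[] agents)
{
    var results = await Task.WhenAll(agents.Select(a => a(task)));
    return merge(results);
}

// Stub agents standing in for model-backed specialists.
Func<string, Task<string>> summarise = s => Task.FromResult($"summary({s})");
Func<string, Task<string>> review    = s => Task.FromResult($"review({s})");

var piped  = await Pipeline("goal", summarise, review);
var merged = await FanOut("goal", rs => string.Join(" + ", rs), summarise, review);

Console.WriteLine(piped);   // review(summary(goal))
Console.WriteLine(merged);  // summary(goal) + review(goal)
```

Routing and supervision build on the same primitives: a router picks one agent from the set before calling it, and a supervisor runs a decompose/delegate/aggregate loop around them.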
Six built-in reasoning strategies let you balance speed, accuracy, and cost, from direct single-shot responses to multi-step ReAct loops with tool calls, reflection, and tree-of-thought exploration.
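To make the ReAct shape concrete, here is a scripted reason/act/observe loop with a stub tool. The real strategy is model-driven at every step; the queue of canned steps and the calculator tool below are invented purely for the sketch:

```csharp
using System;
using System.Collections.Generic;

// Illustrative ReAct loop: alternate Thought -> Action -> Observation
// until a step carries a final Answer. Scripted, not model-driven.
string RunReAct(Queue<(string Thought, string Action, string? Answer)> steps,
                Func<string, string> callTool)
{
    while (steps.Count > 0)
    {
        var step = steps.Dequeue();
        Console.WriteLine($"Thought: {step.Thought}");
        if (step.Answer != null) return step.Answer;   // final answer reached
        var observation = callTool(step.Action);       // act, then observe
        Console.WriteLine($"Observation: {observation}");
    }
    throw new InvalidOperationException("no final answer produced");
}

// Canned "model" steps and a stub calculator tool.
var steps = new Queue<(string, string, string?)>(new (string, string, string?)[]
{
    ("I need 6 * 7; I should use the calculator", "calc:6*7", null),
    ("The tool returned 42, so that is the answer", "", "42"),
});

var answer = RunReAct(steps, action => action == "calc:6*7" ? "42" : "?");
Console.WriteLine(answer); // 42
```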
Full Model Context Protocol client implementation with 8 categories of built-in tools covering 70+ operations. Define custom tools with ITool or [LMFunction], connect to MCP servers, and compose tool chains with JSON Schema validation and parallel execution.
Tool discovery, resources, prompts, sampling, and stdio transport. Connect your agents to any MCP-compatible server.
Eight categories shipped in the box: Data, Document, Text, Numeric, Security, Utility, I/O, and Net.
The ITool interface and the [LMFunction] attribute let you turn any C# method into an agent-callable tool in seconds.
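To sketch what method-to-tool binding amounts to, here a dictionary of named delegates stands in for the real ITool/[LMFunction] machinery; nothing below is LM-Kit.NET's actual API, it only illustrates the name-to-method contract an agent calls against:

```csharp
using System;
using System.Collections.Generic;

// Illustrative tool registry: each entry maps a tool name the model can emit
// to a plain C# function. The real library binds annotated methods for you.
var tools = new Dictionary<string, Func<string[], string>>
{
    ["add"]   = args => (int.Parse(args[0]) + int.Parse(args[1])).ToString(),
    ["upper"] = args => args[0].ToUpperInvariant(),
};

// Dispatch a tool call by name, as an agent loop would after parsing model output.
string Invoke(string name, params string[] args) =>
    tools.TryGetValue(name, out var tool)
        ? tool(args)
        : throw new ArgumentException($"unknown tool: {name}");

Console.WriteLine(Invoke("add", "2", "3")); // 5
Console.WriteLine(Invoke("upper", "hi"));   // HI
```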
Package agent capabilities as portable SKILL.md files with instructions, tools, and guardrails. Load from local folders, remote URLs, or the agentskills.io marketplace with hot-reload support.
Beyond orchestration and reasoning, LM-Kit.NET ships the infrastructure agents need to operate reliably at scale. Each capability has a dedicated page.
18 templates
Eighteen pre-built specialised agents (Chat, Code, Research, Reviewer, Debugger, Editor, Classifier, Extractor, Planner, ReAct, QA, Tutor, and more). Typed configuration, calibrated prompts.
Browse templates
70+ tools
Eight built-in categories, ITool for custom logic, [LMFunction] attribute binding, grammar-constrained decoding so the model cannot emit malformed JSON.
Graphs
GraphOrchestrator with composable Sequential, Parallel, Conditional, and Agent nodes. Arbitrary workflow shapes, thread-safe context, channel-based streaming.
Delegation
Programmatic DelegationManager for explicit routing; model-driven SupervisorOrchestrator where the LLM picks workers via a delegate_to_agent tool.
Streaming
Channel-based, non-blocking. Typed token kinds (Content, Thinking, ToolCall, Delegation). Multi-handler aggregation.
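A minimal sketch of channel-based streaming using System.Threading.Channels, the standard .NET primitive for this pattern. Plain (kind, text) tuples stand in for LM-Kit.NET's typed token kinds; the producer and token values are invented:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Illustrative streaming: producer writes tokens as they are generated,
// consumer reads them without blocking the producer.
var channel = Channel.CreateUnbounded<(string Kind, string Text)>();

var producer = Task.Run(async () =>
{
    await channel.Writer.WriteAsync(("Thinking", "plan the answer"));
    await channel.Writer.WriteAsync(("Content", "Hello"));
    await channel.Writer.WriteAsync(("Content", ", world"));
    channel.Writer.Complete(); // signal end of stream
});

// Consumer: a UI would render Content tokens incrementally as they arrive.
var content = "";
await foreach (var token in channel.Reader.ReadAllAsync())
    if (token.Kind == "Content") content += token.Text;

await producer;
Console.WriteLine(content); // Hello, world
```

Filtering on the token kind is what lets a handler render visible content while routing thinking, tool-call, or delegation tokens elsewhere.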
Resilience
Polly-style policies built for agent execution: retry with backoff, circuit breaker, timeout, fallback, bulkhead, rate limit, composites, health checks.
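As one illustration of the idea, a simplified retry-with-exponential-backoff policy; library-free, with signatures and delays invented for the sketch (LM-Kit.NET's actual policy types differ):

```csharp
using System;
using System.Threading.Tasks;

// Illustrative retry policy: re-run a failing step with exponential backoff.
async Task<T> Retry<T>(Func<Task<T>> action, int maxAttempts, TimeSpan baseDelay)
{
    for (var attempt = 1; ; attempt++)
    {
        try { return await action(); }
        catch when (attempt < maxAttempts)
        {
            // Exponential backoff: baseDelay * 2^(attempt - 1).
            await Task.Delay(baseDelay * Math.Pow(2, attempt - 1));
        }
    }
}

// Demo: an "agent step" that fails twice, then succeeds.
var calls = 0;
var result = await Retry(() =>
{
    calls++;
    if (calls < 3) throw new InvalidOperationException("transient failure");
    return Task.FromResult("ok");
}, maxAttempts: 5, baseDelay: TimeSpan.FromMilliseconds(1));

Console.WriteLine($"{result} after {calls} calls"); // ok after 3 calls
```

Circuit breakers, timeouts, fallbacks, and bulkheads compose the same way: each wraps the step delegate and decides whether, when, and how to invoke it.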
Resilience page
Observability
AgentDiagnostics.ActivitySource emits spans with GenAI semantic conventions. Six span kinds. In-memory tracer for tests. Plugs into Jaeger, Honeycomb, Application Insights.
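The underlying mechanism is the standard System.Diagnostics.ActivitySource API, which OpenTelemetry exporters consume. A self-contained sketch of emitting and observing one span (the source name, span name, and tag values are invented, not AgentDiagnostics' actual output):

```csharp
using System;
using System.Diagnostics;

// Illustrative tracing via ActivitySource, the primitive OTel exporters hook into.
using var source = new ActivitySource("Demo.Agent");

// A listener must opt in, or StartActivity returns null.
using var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "Demo.Agent",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllData,
};
ActivitySource.AddActivityListener(listener);

string? capturedName = null;
using (var span = source.StartActivity("agent.invoke"))
{
    // GenAI-semantic-convention-style tag (name illustrative).
    span?.SetTag("gen_ai.operation.name", "chat");
    capturedName = span?.OperationName;
}

Console.WriteLine(capturedName); // agent.invoke
```

In production the listener is replaced by an exporter (Jaeger, Honeycomb, Application Insights), which is why the same spans plug into any of them.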
Permissions
Every tool ships typed metadata: side-effect, risk level, idempotence. ToolPermissionPolicy turns it into allow / deny / require-approval rules with wildcards and risk ceilings.
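A toy decision function showing how such metadata can drive allow / deny / require-approval outcomes with wildcards and a risk ceiling. The rule set, tool names, and risk labels below are invented for the sketch, not LM-Kit.NET's actual ToolPermissionPolicy:

```csharp
using System;
using System.Collections.Generic;

// Illustrative permission check over simple string metadata.
var riskRank = new Dictionary<string, int> { ["low"] = 0, ["medium"] = 1, ["high"] = 2 };

string Decide(string toolName, string risk, bool hasSideEffects,
              string riskCeiling, string[] denied)
{
    foreach (var pattern in denied)
        // Trailing-* wildcard, e.g. "file.*" denies every file tool.
        if (pattern.EndsWith("*") ? toolName.StartsWith(pattern.TrimEnd('*'))
                                  : toolName == pattern)
            return "deny";
    if (riskRank[risk] > riskRank[riskCeiling]) return "deny"; // above the ceiling
    if (hasSideEffects) return "require-approval";             // mutating tools gate on a human
    return "allow";
}

Console.WriteLine(Decide("web.search", "low", false, "medium", new[] { "file.*" }));     // allow
Console.WriteLine(Decide("file.delete", "high", true, "medium", new[] { "file.*" }));    // deny
Console.WriteLine(Decide("mail.send", "medium", true, "medium", Array.Empty<string>())); // require-approval
```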
Middleware
ASP.NET-style onion middleware for AI: IPromptFilter, ICompletionFilter, IToolInvocationFilter. Redact, validate, salvage, short-circuit.
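A library-free sketch of the onion shape using plain delegates: each middleware wraps the next and can rewrite the prompt, short-circuit, or inspect the result. LM-Kit.NET's filter interfaces are richer than this; the secret value and stub model are invented:

```csharp
using System;
using System.Linq;

// Illustrative onion composition: middleware listed outermost-first.
Func<string, string> Compose(Func<string, string> terminal,
    params Func<Func<string, string>, Func<string, string>>[] middleware) =>
    middleware.Reverse().Aggregate(terminal, (next, mw) => mw(next));

// Redaction middleware: scrub a secret before the model sees the prompt.
Func<Func<string, string>, Func<string, string>> redact =
    next => prompt => next(prompt.Replace("secret-key-123", "[REDACTED]"));

// Validation middleware: short-circuit empty prompts without calling inward.
Func<Func<string, string>, Func<string, string>> validate =
    next => prompt => string.IsNullOrWhiteSpace(prompt) ? "error: empty prompt"
                                                        : next(prompt);

// Terminal "model call" stub.
Func<string, string> model = prompt => $"completion({prompt})";

var pipeline = Compose(model, validate, redact);
Console.WriteLine(pipeline("use secret-key-123")); // completion(use [REDACTED])
Console.WriteLine(pipeline(""));                   // error: empty prompt
```

Completion and tool-invocation filters sit in the same onion, which is what makes salvage and validation composable with redaction.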
Working console demos on GitHub, step-by-step how-to guides on the docs site, and the API reference for the classes used on this page.
Agent demo: ReAct planning + web search + memory.
Open on GitHub →
Agent demo: parallel multi-perspective document analysis.
Open on GitHub →
Agent demo: supervisor delegates to specialised sub-agents.
Open on GitHub →
Foundational how-to: annotate methods, register, run.
Read the guide →
The seven pillars of LM-Kit.NET, plus the local runtime they share. Highlighted card is where you are now.
01 · AI Agents
ReAct planning, supervisors, parallel and pipeline orchestrators, persistent memory, MCP clients, custom tools.
You are here
02 · Document Intelligence
PDF text and table extraction, on-device OCR reaching SOTA benchmark scores, structured field extraction with grammar-constrained generation.
Document Intelligence
03 · Vision & Multimodal
Image understanding, classification, labeling, multimodal chat, image embeddings, VLM-OCR, background removal. Same conversation surface as LLMs.
Vision & Multimodal
04 · RAG & Knowledge
Built-in vector store, Qdrant connector, embeddings, hybrid retrieval, document chunking, source citations.
RAG & Knowledge
05 · Text Analysis
Built-in classifiers and an extractor that emits typed C# objects via grammar-constrained sampling. Sentiment, keywords, language detection.
Text Analysis
06 · Speech & Audio
A growing local speech-to-text stack: hallucination suppression, Voice Activity Detection, real-time translation, streaming output, 100+ languages.
Speech & Audio
07 · Text Generation
Single-turn, multi-turn, and stateless conversation primitives. Translate, correct, rewrite, summarise. Prompt templates, streaming, grammar-constrained outputs.
Text Generation
The foundation
Every capability above runs on this runtime.
Foundation
The runtime all seven pillars sit on. The LM-Kit.NET NuGet ships the complete inference system: open-weight LLMs, vision-language models, embeddings, on-device speech-to-text, OCR and classifiers, accelerated on CPU, AVX2, CUDA 12/13, Vulkan or Metal. One package, zero cloud calls, predictable latency, full data and technology sovereignty.
From single-agent prototypes to complex multi-agent pipelines with resilience, tracing, and streaming, LM-Kit.NET gives you everything you need to ship production agents that run entirely on-device.