Local AI Agent Platform for Builders
Your AI. Your Data. On Your Device.
LM-Kit.NET
Complete Local AI SDK
LM-Kit.NET delivers the complete local AI stack for builders: high-performance inference, multi-agent orchestration, document intelligence, and batteries-included tooling. Run open models with zero cloud dependency and keep full control over data, latency, and cost.
Local Inference
Run open models locally with hardware acceleration on CPU, CUDA, Vulkan, or Metal.
Agent Orchestration
4 orchestrators, 5 planning strategies. Full MCP and Agent Skills support.
Document Intelligence
PDF chat, structured extraction, VLM-powered OCR, and layout analysis.
Batteries-Included Tools
56+ built-in tools across data, text, security, IO, and network categories.
From Prompts to Production Agents
The best AI agent is the one that ships.
LM-Kit goes beyond single-model inference. Compose multi-agent workflows using 4 orchestration patterns, extend capabilities with MCP and Agent Skills, and process documents with VLM-powered intelligence. Each component is optimized for its task, delivering results that monolithic LLMs cannot match.
Ship with built-in resilience, full observability, and zero cloud dependency. Predictable costs, complete data control, minimal footprint.
Multi-Agent Orchestration
Pipeline, parallel, router, and supervisor patterns. Agents that delegate, plan, and collaborate.
Document Intelligence
PDF chat, structured extraction, VLM-powered OCR. Understand documents, not just text.
Complete Data Sovereignty
100% local execution. Air-gapped ready. Your data never leaves your infrastructure.
Predictable Costs
No per-token billing. No rate limits. Fixed infrastructure, unlimited inference.
Production-Ready
Retry, circuit breaker, timeout, rate limiting. OpenTelemetry tracing built in.
Who is LM-Kit for?
Builders who want AI they can control, deploy anywhere, and run without cloud dependencies.
Build AI Agents Your Way
- .NET SDK or REST API - your choice
- 4 orchestrators, 5 planning strategies
- Full MCP and Agent Skills protocol support
- RAG with built-in vector DB and reranking
Extract Meaning, Not Just Text
- VLM-powered OCR with layout analysis
- Structured extraction from any document
- PDF, DOCX, XLSX, PPTX, images
- Chat with your documents, locally
Achieve True Data Sovereignty
- 100% local inference, zero data leakage
- Air-gapped and offline-first ready
- Built for GDPR, HIPAA, and strict compliance regimes
- Full audit trail with OpenTelemetry
Escape Per-Token Pricing
- Fixed costs, unlimited inference
- Retry, circuit breaker, timeout policies
- No rate limits, works fully offline
- Ship faster with no vendor lock-in
AI Agents Should Run Where the App Runs
Embedded AI, Not External Services
Cloud APIs add latency, complexity, and failure points. With LM-Kit, AI runs inside your application as a native .NET library. No HTTP calls. No separate services. No infrastructure to manage.
Your app deploys to desktop, mobile, server, or edge. Your AI goes with it. Same codebase, same process, same deployment. Build with familiar tools and ship faster.
Run models like Llama, Mistral, Phi, and Gemma with automatic hardware acceleration. Process sensitive data locally with complete privacy.
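To make this concrete, here is a minimal sketch of an in-process chat session. The `LM` and `MultiTurnConversation` names follow common LM-Kit.NET samples, and the model path is a placeholder; treat the exact signatures as assumptions and check the SDK reference.

```csharp
// Minimal embedded-inference sketch. Class and method names follow common
// LM-Kit.NET samples but are assumptions; consult the SDK docs for exact signatures.
using System;
using LMKit.Model;
using LMKit.TextGeneration;

class EmbeddedChatDemo
{
    static void Main()
    {
        // Load a local GGUF model; hardware acceleration (CPU/CUDA/Vulkan/Metal)
        // is selected automatically by the runtime. Path is a placeholder.
        var model = new LM(@"C:\models\mistral-7b-instruct.Q4_K_M.gguf");

        // One in-process conversation object: no HTTP calls, no external service.
        var chat = new MultiTurnConversation(model);

        // Submit returns the assistant's reply (exact return type may differ across SDK versions).
        var answer = chat.Submit("Summarize the benefits of running inference locally.");
        Console.WriteLine(answer);
    }
}
```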
Run real, local AI demos directly in .NET. No cloud calls. No external services.
AI Agents
Chat & Conversation
Intelligent Document Processing
Analysis & NLP
Vision & Audio
Embeddings & RAG
Training & Optimization
The complete platform for building local AI agents
Core AI Platform
Run Qwen, Gemma, DeepSeek, Mistral, GLM, gpt-oss locally
CPU, CUDA, Vulkan, Metal acceleration · 100+ pre-configured model cards · Dynamic LoRA hot-swap · Windows, macOS, Linux, ARM64
Agent Orchestration
4 orchestrators, 5 planning strategies, 56+ built-in tools
Pipeline, parallel, router, supervisor patterns · ReAct, Chain-of-Thought, Tree-of-Thought planning · Agent-to-agent delegation · Production resilience built-in
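The sketch below shows the core idea behind the pipeline pattern: each agent consumes the previous agent's output. The `IAgent` and `Pipeline` types are illustrative placeholders written for this page, not LM-Kit.NET's actual orchestration API; router and supervisor patterns follow the same shape, with a selection or delegation step in the middle.

```csharp
// Illustrative-only sketch of the pipeline orchestration pattern.
// These types are placeholders, not LM-Kit.NET's orchestration API.
using System.Collections.Generic;

interface IAgent
{
    string Run(string input);
}

sealed class Pipeline
{
    private readonly List<IAgent> _stages = new();

    public Pipeline Then(IAgent stage)
    {
        _stages.Add(stage);
        return this;
    }

    public string Run(string input)
    {
        // Each stage receives the previous stage's output (pipeline pattern).
        foreach (var stage in _stages)
            input = stage.Run(input);
        return input;
    }
}
```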
MCP & Agent Skills
Full support for both open standards
Model Context Protocol with stdio transport · Agent Skills from Cursor, GitHub, VS Code · Progressive disclosure for context efficiency · Human-in-the-loop controls
Conversational AI
Chatbots, Smart Memories, function calling
Multi-turn dialogue with context persistence · Tool calling and function invocation · RAG-backed agent memory · Structured output with JSON/grammar constraints
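As a sketch of the tool-calling mechanics, the snippet below parses a model-emitted JSON tool call and routes it to a registered .NET delegate. The payload shape and type names are hypothetical, written only to illustrate function invocation, not the SDK's actual function-calling API.

```csharp
// Illustrative-only sketch of function calling: the model emits a JSON tool call,
// the host application dispatches it to a registered delegate. Payload shape is hypothetical.
using System;
using System.Collections.Generic;
using System.Text.Json;

public static class ToolDispatchSketch
{
    // Each tool takes a JSON argument payload and returns a string result.
    static readonly Dictionary<string, Func<string, string>> Tools = new()
    {
        ["get_time"] = _ => DateTime.UtcNow.ToString("O"),
    };

    public static string Dispatch(string toolCallJson)
    {
        // Expected payload (hypothetical): {"name":"get_time","arguments":"{}"}
        using var doc = JsonDocument.Parse(toolCallJson);
        string name = doc.RootElement.GetProperty("name").GetString()!;
        string args = doc.RootElement.GetProperty("arguments").GetString() ?? "{}";
        return Tools[name](args);
    }
}
```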
RAG & Knowledge
Semantic retrieval with reranking
Built-in vector DB or Qdrant for scale · Text, markdown, and layout-aware chunking · Agent memory with context persistence · Multimodal RAG with image embeddings
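To show the retrieval step that semantic search and reranking build on, here is an illustrative top-k search over embedded chunks. The `embed` delegate stands in for a real embedding model, and none of these types belong to LM-Kit.NET's API.

```csharp
// Illustrative-only sketch of semantic retrieval: embed chunks and query,
// rank by cosine similarity, keep the top k (a reranker would re-score this shortlist).
using System;
using System.Linq;

static class RetrievalSketch
{
    // Cosine similarity between two embedding vectors.
    static double Cosine(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb) + 1e-9);
    }

    public static string[] TopK(string query, string[] chunks, Func<string, float[]> embed, int k = 3)
    {
        var q = embed(query);
        return chunks
            .OrderByDescending(c => Cosine(q, embed(c))) // nearest chunks first
            .Take(k)
            .ToArray();
    }
}
```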
Document Intelligence
From PDF to chat, one pipeline
Built-in OCR engine with layout understanding · VLM-powered extraction and connectors to external OCR · Schema discovery and structured extraction · PDF, DOCX, XLSX, PPTX, images
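Structured extraction boils down to a schema you define and JSON the engine is constrained to produce. The record below is a hypothetical invoice schema; the extraction call itself is not shown, only the point where constrained JSON becomes a typed .NET object.

```csharp
// Illustrative-only sketch of structured extraction: define the target schema,
// then deserialize the engine's schema-constrained JSON into a plain .NET type.
// Field names are hypothetical, not a built-in LM-Kit.NET schema.
using System.Text.Json;

public sealed record Invoice(string Vendor, string InvoiceNumber, decimal Total, string Currency);

public static class ExtractionSketch
{
    public static Invoice Parse(string extractedJson) =>
        JsonSerializer.Deserialize<Invoice>(extractedJson)!;
}

// Example input:
// {"Vendor":"Acme","InvoiceNumber":"INV-42","Total":129.90,"Currency":"EUR"}
```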
Vision & Multimodal
Qwen-VL, Gemma, MiniCPM-V, Pixtral
Visual text extraction with VLMs · Image embeddings for multimodal search · Background removal and segmentation · Multimodal classification and extraction
Text & NLP
Comprehensive NLP, locally
Named entity and PII extraction · Sentiment, emotion, sarcasm detection · Translation and summarization · JSON grammar for constrained generation
Speech & Language
Whisper-powered transcription
Speech-to-text with hallucination suppression · Voice Activity Detection · Dictation formatting with spoken commands · Real-time streaming, multi-language
Local Inference
Resilience, observability, optimization
Retry, circuit breaker, timeout, rate limiting · OpenTelemetry GenAI instrumentation · Model quantization and LoRA hot-swap · Token metrics and throughput tracking
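The helper below sketches two of the policies named above, retry with exponential backoff and a per-attempt timeout, wrapped around an arbitrary inference call. It is a generic pattern written for illustration, not LM-Kit.NET's built-in policy configuration.

```csharp
// Illustrative-only resilience sketch: retry with exponential backoff plus a
// per-attempt timeout around any async inference call. Not the SDK's policy API.
using System;
using System.Threading;
using System.Threading.Tasks;

public static class ResilienceSketch
{
    public static async Task<T> WithRetryAsync<T>(
        Func<CancellationToken, Task<T>> inference,
        int maxAttempts = 3,
        TimeSpan? perAttemptTimeout = null)
    {
        var timeout = perAttemptTimeout ?? TimeSpan.FromSeconds(30);

        for (int attempt = 1; ; attempt++)
        {
            using var cts = new CancellationTokenSource(timeout); // per-attempt timeout
            try
            {
                return await inference(cts.Token);
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Exponential backoff before the next attempt; last failure propagates.
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```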
Deploy Your Way, Anywhere You Need
LM-Kit runs entirely on your infrastructure with no external dependencies.
From edge devices to enterprise servers, deploy AI workloads where your data lives, with full control over security, compliance, and costs.
Why Teams Are Moving AI Local
Shipping an agent should not mean shipping your data to someone else's servers.
Cloud AI APIs come with hidden costs: per-token billing, data exposure, and vendor dependency. LM-Kit gives you the same capabilities with none of the trade-offs.
Beyond GenAI: A Complete AI Stack
LLMs hallucinate and miss structure. Real-world AI needs more than text generation.
LM-Kit combines 5 AI paradigms so that each layer compensates for the weaknesses of the others.
Built by a team with deep expertise in Intelligent Document Processing and Information Management.
We know what it takes to ship AI that works in production.
Trusted by Builders Like You
Collaborating With Industry Leaders
We partner with forward-thinking companies that share our commitment to innovation in AI. From technology providers to strategic collaborators, our partners play a key role in expanding what’s possible with LM-Kit. Together, we’re shaping the future of AI integration across industries.