
Get LM-Kit.NET Up and Running in Minutes.

Install via NuGet, pick a model, and build your first AI-powered .NET application. Full feature access with no time limits, no registration required, and zero cloud dependency.

Free Trial · No Registration · Cross-Platform · GPU Accelerated

Install via NuGet

$ dotnet add package LM-Kit.NET
Or use: Install-Package LM-Kit.NET in the Package Manager Console

Quick Start Path

1 Install the LM-Kit.NET NuGet package
2 Choose and load a model from the catalog
3 Build and deploy your AI application
100% On-Device · 3 OS Platforms · 20+ Model Families · GPU Accelerated
Quick Start

From Zero to AI in Three Steps

Create a new project, install the NuGet package, and follow the getting started guide to build your first AI application.

1 Create a .NET Project

Start with a new console application or add LM-Kit.NET to any existing .NET project.

dotnet new console -n MyAIApp
2 Install the Package

Add LM-Kit.NET from NuGet. The package includes everything you need for CPU inference.

dotnet add package LM-Kit.NET
3 Build Your First AI App

Follow the getting started guide to initialize the runtime, load a model, and generate your first response.
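As a sketch of step 3, a minimal Program.cs might look like the following, reusing the API calls shown in this page's sample. The model ID and the using directives are illustrative; verify them against the getting started guide.

```csharp
// Program.cs — minimal on-device chat (sketch).
// The namespaces below are assumptions; confirm them in the LM-Kit.NET API reference.
using LMKit.Model;
using LMKit.TextGeneration;

// Downloads the model on first use, then loads it from the local cache.
var model = LM.LoadFromModelID("gptoss:20b");

// A multi-turn conversation keeps context across Submit calls.
var chat = new MultiTurnConversation(model);
var response = chat.Submit("What are the benefits of on-device AI?");
Console.WriteLine(response);
```

Run it with `dotnet run`; the first launch downloads the model, so allow extra time.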

Your First AI Agent

Ready-to-Run Code Samples

Clone and run complete .NET applications covering chatbots, RAG pipelines, multi-agent orchestration, document processing, vision, speech recognition, and more. Every sample includes a detailed walkthrough in the developer guide.
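For a sense of how small a chatbot sample can be, the sketch below wraps the conversation API from this page's quick-start sample in a console loop. The model ID and namespaces are illustrative; check them against the API reference.

```csharp
// Console chatbot loop (sketch); namespaces are assumptions — see the API reference.
using LMKit.Model;
using LMKit.TextGeneration;

var model = LM.LoadFromModelID("gptoss:20b");
var chat = new MultiTurnConversation(model);

while (true)
{
    Console.Write("> ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;  // empty line exits

    // Each Submit call carries the full conversation history.
    Console.WriteLine(chat.Submit(input));
}
```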

Have a specific use case? Our engineering team can provide a proof-of-concept or tailored code snippet. Get in touch
Installation

Install LM-Kit.NET via NuGet

One command is all you need. LM-Kit.NET is distributed as a single NuGet package that includes CPU inference out of the box with AVX/AVX2 optimization.

.NET CLI
$ dotnet add package LM-Kit.NET
Package Manager Console
PM> Install-Package LM-Kit.NET

Optional: GPU Acceleration

For NVIDIA GPU acceleration, install one additional backend package matching your CUDA version and operating system. Vulkan and Metal support are included in the main package.

CUDA 12 (Windows): dotnet add package LM-Kit.NET.Backend.Cuda12.Windows
CUDA 13 (Windows): dotnet add package LM-Kit.NET.Backend.Cuda13.Windows
CUDA 12 (Linux): dotnet add package LM-Kit.NET.Backend.Cuda12.Linux
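If you prefer pinning the backend in the project file, the same packages can be declared as PackageReference items. This is a sketch: the floating versions and the OS condition are placeholders to adapt, not recommended defaults.

```xml
<!-- .csproj fragment (sketch); replace the versions with the latest from NuGet. -->
<ItemGroup>
  <PackageReference Include="LM-Kit.NET" Version="*" />
  <!-- Optional NVIDIA backend, restricted to Windows builds in this example -->
  <PackageReference Include="LM-Kit.NET.Backend.Cuda12.Windows" Version="*"
                    Condition="$([MSBuild]::IsOSPlatform('Windows'))" />
</ItemGroup>
```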
Models

Choose the Right Model for Your Use Case

The model catalog is continuously updated with state-of-the-art releases from Google, Meta, Alibaba, Mistral, Microsoft, IBM, and more. It also includes LM-Kit fine-tuned models optimized for specific tasks like sentiment analysis and sarcasm detection. Models auto-download on first use. See how local inference cuts your costs compared to cloud APIs.

// Load a model and start a conversation in four lines
var model = LM.LoadFromModelID("gptoss:20b");
var chat = new MultiTurnConversation(model);
var response = chat.Submit("What are the benefits of on-device AI?");
Console.WriteLine(response);

Chat & Code

Chat
The broadest category for general conversations, coding assistance, and text generation. Models from all major providers in sizes from sub-1B to 30B+.
Gemma · Qwen · Falcon · Phi · Llama · Mistral · Granite · SmolLM

Reasoning & Agents

Reasoning
Specialized for complex multi-step reasoning, agentic workflows, mathematics, and function calling.
QwQ · DeepSeek R1 · GPT-OSS · GLM Flash · Nemotron · Magistral

Vision & Multimodal

Vision
Process and understand images alongside text for visual Q&A, document analysis, and image understanding.
Qwen VL · MiniCPM · Ministral · Pixtral · Devstral

OCR & Document

OCR
Specialized models for optical character recognition and extracting structured text from images and scanned documents.
PaddleOCR LightOnOCR

Embeddings & Reranking

Embed
Generate vector embeddings for semantic search, RAG pipelines, and similarity matching. Includes text and image embedding models plus rerankers.
Qwen Embed · bge · Nomic · Gemma Embed

Speech to Text

Speech
High-accuracy transcription in multiple languages. Multiple model sizes for different speed and accuracy trade-offs.
Whisper
Capabilities

What You Can Build

A single SDK covering the full spectrum of AI capabilities. All features run 100% on-device with zero cloud dependency. Explore all use cases

AI Agent Orchestration

Multi-agent workflows with persistent memory, reasoning, function calling, and MCP integration.
Pipeline · Parallel · Supervisor · Router

Document Intelligence

Chat with PDFs, extract structured data, convert documents, and perform OCR on images.
PDF Chat · Extraction · OCR

RAG & Knowledge

Retrieval-augmented generation with built-in vector database, semantic search, and intelligent splitting.
Vector DB · Embeddings · Search

Vision & Speech

Image understanding with Vision Language Models and high-accuracy speech-to-text with Whisper.
VLM · Whisper · Multimodal

Text & NLP

Classification, sentiment analysis, translation, summarization, NER, and PII extraction.
Sentiment · Translation · NER · PII

Text Generation

Content creation, rewriting, grammar correction, and structured output with JSON Schema validation.
Generation · Rewriting · Structured

Chatbots & Assistants

Multi-turn conversational agents with persistent memory, streaming responses, agent skills, and human escalation.
Multi-turn · Memory · Streaming · Skills

MCP & Tool Integration

Connect agents to external services via Model Context Protocol. Built-in tools for file I/O, HTTP, web search, and more.
MCP Client · Built-in Tools · Permissions

Data Extraction

Extract structured fields from invoices, forms, and free text. JSON Schema-constrained output for type-safe, reliable responses.
Invoices · JSON Schema · Fields · PII

Part of the Microsoft .NET AI Ecosystem

LM-Kit.NET integrates natively with Microsoft Semantic Kernel and Microsoft.Extensions.AI, making it a drop-in local provider for existing .NET AI applications.
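As a hedged sketch of what "drop-in provider" means in practice, the code below targets the Microsoft.Extensions.AI IChatClient abstraction. `LMKitChatClient` is a hypothetical adapter name standing in for whatever the official bridge package exposes; check its documentation for the actual type and constructor.

```csharp
// Sketch only: LMKitChatClient is a hypothetical adapter type, not a confirmed API.
using Microsoft.Extensions.AI;

IChatClient client = new LMKitChatClient("gptoss:20b"); // hypothetical adapter
ChatResponse reply = await client.GetResponseAsync("Summarize on-device AI in one line.");
Console.WriteLine(reply.Text);
```

The benefit of coding against IChatClient is that the rest of the application stays provider-agnostic: swapping the local model for a cloud provider, or vice versa, becomes a one-line change.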

Compatibility

Works Everywhere You Build .NET

Targets .NET Standard 2.0 for maximum compatibility. Develop and deploy on your platform of choice.

Operating Systems

Windows x64 · macOS (Intel & Apple Silicon) · Linux x64 & ARM64

.NET Frameworks

.NET 8, 9, 10 · .NET Standard 2.0 · .NET Framework 4.6.2+ · .NET MAUI

Development Tools

Visual Studio 2019+ · JetBrains Rider · VS Code
FAQ

Frequently Asked Questions

Common questions about getting started with LM-Kit.NET.

Do I need a GPU to use LM-Kit.NET?
No. LM-Kit.NET includes optimized CPU inference (AVX/AVX2) out of the box and works well for smaller models and lower-throughput scenarios. For production workloads or larger models, GPU acceleration via CUDA, Vulkan, or Metal significantly improves performance. See the GPU setup guide for details.
Is the Community Edition really free?
Yes. The Community Edition is free for developers, startups, and open-source projects with no time limits, no feature restrictions, and no registration required. You can evaluate all features immediately. For enterprise deployments, commercial licenses are available on the pricing page.
What models are supported?
LM-Kit.NET supports a continuously updated catalog of open-weight models from Google, Meta, Alibaba, Mistral, Microsoft, IBM, and others. This includes chat, reasoning, vision, OCR, embedding, and speech models. Models download automatically on first use. Browse the full model catalog for available models and sizes.
Does it work offline or in air-gapped environments?
Yes. Once a model is downloaded, all inference runs 100% on-device with zero cloud dependency. You can pre-download models and deploy in fully air-gapped environments. This makes LM-Kit.NET ideal for regulated industries, healthcare, defense, and any scenario requiring complete data sovereignty.
How does local inference compare to cloud APIs?
Local inference eliminates per-token API costs, avoids rate limits, and keeps all data on your infrastructure. For sustained workloads, it can reduce AI costs by 90% or more compared to cloud providers. See the cost and performance comparison for detailed benchmarks.
Can I integrate with Microsoft Semantic Kernel or Extensions.AI?
Yes. LM-Kit.NET provides official bridge packages for both Microsoft Semantic Kernel and Microsoft.Extensions.AI, making it a drop-in local provider for existing .NET AI applications.
Free License

Ready to Deploy? Get the Free Community License

The Community Edition gives developers, startups, and open-source projects full, unrestricted access to LM-Kit.NET at no cost. No time limits. No feature locks. No strings attached.

Full-Featured · No Time Limits · Zero Cost · Cross-Platform