Solutions · Integrations

Drop into the .NET AI ecosystem.

Most .NET teams already speak one of two AI abstraction surfaces: Microsoft.Extensions.AI or Semantic Kernel. Existing services accept IChatClient or register an IChatCompletionService. The LM-Kit integration packages implement those interfaces on top of on-device inference. Same kernel, same plugin model, same prompt files, same middleware. The backend moves local; the rest of the application keeps working.

Why bridges, not rewrites

Investment preserved.

Teams that adopted Microsoft.Extensions.AI or Semantic Kernel have invested in plugins, planners, prompt-function libraries, memory connectors, and middleware. Moving to a local backend should not forfeit that investment. The bridges keep the surface and swap the backend.

No code rewrites

Code that consumes IChatClient or IChatCompletionService works as-is. Swap the registration; inference now runs on the box.
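A minimal sketch of the swap on the Microsoft.Extensions.AI side. `LmKitChatClient` and the model path are illustrative names, not the shipped API; the point is that only the registration line changes.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Before: a cloud-hosted IChatClient registration, e.g. an OpenAI adapter.
// services.AddChatClient(cloudChatClient);

// After: the on-device backend via the LM-Kit bridge.
// LmKitChatClient is an illustrative name for the bridge implementation.
services.AddChatClient(new LmKitChatClient(modelPath: "models/llama-3.2-3b.gguf"));

var provider = services.BuildServiceProvider();
var chat = provider.GetRequiredService<IChatClient>();

// Consuming code is unchanged.
var response = await chat.GetResponseAsync("Summarize the quarterly report.");
Console.WriteLine(response.Text);
```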

Hybrid by default

Register multiple chat services. Route per request: local for sensitive data, cloud for bulk traffic. Same abstraction handles both.
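On the Semantic Kernel side, per-request routing can be sketched with keyed service registrations. `LmKitChatCompletionService` is an illustrative name for the bridge; `cloudService` and `containsPii` stand in for your own connector and routing signal.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();

// Two chat services under one kernel; the LM-Kit service type is illustrative.
builder.Services.AddKeyedSingleton<IChatCompletionService>(
    "local", new LmKitChatCompletionService("models/phi-4-mini.gguf"));
builder.Services.AddKeyedSingleton<IChatCompletionService>(
    "cloud", cloudService); // e.g. an OpenAI connector instance

var kernel = builder.Build();

// Route per request: sensitive data stays on the box.
bool containsPii = true; // your own classification logic
var service = kernel.GetRequiredService<IChatCompletionService>(
    containsPii ? "local" : "cloud");
var answer = await service.GetChatMessageContentAsync("Redact the PII in this record: ...");
```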

Middleware preserved

Logging, caching, retry, function-invocation middleware written against the abstraction works unchanged. The bridge is just another implementation.
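A sketch of an existing middleware pipeline sitting on the bridge, using the standard `ChatClientBuilder` decorators. `LmKitChatClient` is an illustrative bridge name; `cache` stands in for whatever `IDistributedCache` your app already uses.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Logging;

using var loggerFactory = LoggerFactory.Create(b => b.AddConsole());

// Same decorator chain, local backend underneath.
IChatClient client = new ChatClientBuilder(
        new LmKitChatClient("models/llama-3.2-3b.gguf"))
    .UseLogging(loggerFactory)      // existing logging middleware
    .UseDistributedCache(cache)     // existing caching middleware
    .UseFunctionInvocation()        // existing tool-calling middleware
    .Build();
```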

CI-friendly

Run end-to-end tests against the abstraction with the LM-Kit implementation. No external API quota, no flaky network in CI.
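A sketch of what that looks like as an xUnit test: the test consumes only the abstraction, and the LM-Kit implementation runs inference in-process. `LmKitChatClient` and the model path are illustrative.

```csharp
using Microsoft.Extensions.AI;
using Xunit;

public class SummarizerTests
{
    [Fact]
    public async Task Summarize_ReturnsNonEmptyText()
    {
        // Local inference: no API key, no network, no quota.
        using IChatClient client = new LmKitChatClient("models/phi-4-mini.gguf");

        var response = await client.GetResponseAsync(
            "Summarize: the quick brown fox jumps over the lazy dog.");

        Assert.False(string.IsNullOrWhiteSpace(response.Text));
    }
}
```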

Library publishers

A NuGet that consumes IChatClient works with LM-Kit out of the box. The library author does not need to know about LM-Kit.
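For example, a library method written only against `IChatClient` (the method and prompt here are hypothetical) runs on any implementation, cloud adapters and the LM-Kit bridge alike:

```csharp
using Microsoft.Extensions.AI;

// A reusable library API that never names a backend.
public static class TicketTriage
{
    public static async Task<string> ClassifyAsync(IChatClient chat, string ticket)
    {
        var response = await chat.GetResponseAsync(
            $"Classify this support ticket as bug, feature, or question:\n{ticket}");
        return response.Text;
    }
}
```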

Future-compatible

Both abstractions evolve. The bridges track the surface. New methods on IChatClient land as new methods on the bridge.

Where bridges fit

Related capabilities.

Tools & function calling

When the abstraction's function-calling surface is not enough, the native Tools API gives finer control over invocation, permissions, and streaming.

Tools & function calling

Built-in vector database

The vector store that sits under both bridges' memory paths. It is the same primitive the other LM-Kit RAG paths use.

Vector database

Document RAG

For full-document workflows beyond text snippets, the native RAG primitives add source attribution and adaptive ingestion.

Document RAG

Edge & offline deployment

Once your kernel runs on the bridge, shipping to edge environments is a packaging change.

Edge deployment

Demos & docs

Build it. Read it. Try it.

Working console demos on GitHub, step-by-step how-to guides on the docs site, and the API reference for the classes used on this page.

Existing pipeline. Local backend.

Get Community Edition · Download