
Built-in vector database for .NET applications.

Store and retrieve embeddings at any scale. Four pluggable storage backends with a unified API. From in-memory prototyping to production-ready deployments with Qdrant or custom backends. Swap storage strategies without rewriting code. 100% local processing.

Zero setup required · Pluggable backends · Millions of vectors
  • In-memory store (no setup): fast prototyping, zero setup, instant feedback.
  • Built-in vector DB (recommended): file-based, handles millions of vectors locally.
  • FileSystemVectorStore (new): directory-based IVectorStore with caching.
  • Qdrant integration (production): HNSW indexing, distributed, cloud-ready.

5 storage patterns · 100% local · 1 unified API

Unified vector storage for every stage.

LM-Kit provides a unified embedding storage architecture that scales from quick prototypes to production deployments. At its core is the DataSource abstraction, which manages embeddings, metadata, and retrieval through a consistent API regardless of where your vectors are stored.

Start with in-memory storage for rapid iteration, graduate to the built-in file-based vector database for local applications, or connect to Qdrant for distributed workloads. The same code works across all backends with zero modifications.

Think of it as SQLite for vectors: a self-contained, file-based engine that handles millions of embeddings without external infrastructure, while remaining fully compatible with cloud-scale solutions when you need them.

DataSource hierarchy

  • DataSource: container
    • Sections + metadata
      • TextPartitions · embeddings
      • ImagePartitions · embeddings
Storage patterns

Four backends, one unified API.

Choose the storage that fits your application's lifecycle. Switch between them seamlessly without rewriting code.

In-memory

In-memory store

DataSource.CreateInMemoryDataSource()

Embeddings are computed and stored in RAM, with optional serialization to disk via the Serialize() method. Zero setup required. Ideal for fast prototyping, testing, and live classification tasks.

  • Persistence: Temporary
  • Scale: Low
  • Infrastructure: None
  • Instant feedback during development
  • Serialize() and Deserialize() for reusability
  • Perfect for semantic search prototypes

New

FileSystemVectorStore

new FileSystemVectorStore(path)

IVectorStore implementation that persists collections as individual files on disk. Each collection is stored as a separate .ds file with in-memory caching for performance (see the sketch after this list).

  • Persistence: Directory
  • Scale: Medium-High
  • Infrastructure: None
  • Multiple collections in one directory
  • Implements IVectorStore interface
  • Automatic caching for opened sources
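
A minimal sketch of the directory-backed pattern. The FileSystemVectorStore constructor matches the call shown above; the LoadFromStore overload and the storage namespace are assumptions, so check the API reference below for exact signatures.

FileSystemExample.cs (sketch)
using LMKit.Model;
using LMKit.Data;
using LMKit.Data.Storage; // assumed namespace for IVectorStore implementations

// Load the embedding model used throughout this page.
var embedModel = LM.LoadFromModelID("embeddinggemma-300m");

// Point the store at a directory; each collection persists as its own .ds file
// and is cached in memory once opened.
var store = new FileSystemVectorStore("./vectors");

// Bind a DataSource to a collection in the store. This LoadFromStore overload is an
// assumption modeled on the Qdrant pattern described below.
var dataSource = DataSource.LoadFromStore(store, "my-collection", embedModel);

// From here, usage matches the in-memory example: attach to a RagEngine and import content.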

Production

Qdrant vector store

QdrantEmbeddingStore + DataSource.LoadFromStore()

High-performance, open-source vector database with HNSW indexing. Ideal for production workloads requiring distributed access and advanced filtering (see the sketch after this list).

  • Persistence: Durable
  • Scale: High
  • Infrastructure: Qdrant
  • HNSW indexing for sub-second search
  • Automatic sharding and replication
  • Deploy locally or in the cloud
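
A sketch of the Qdrant pattern using the two entry points named above. The connector namespace and the constructor argument (a gRPC endpoint URI) are assumptions; consult the QdrantEmbeddingStore reference for the exact signature.

QdrantExample.cs (sketch)
using System;
using LMKit.Model;
using LMKit.Data;
using LMKit.Data.Storage.Qdrant; // assumed namespace for the Qdrant connector

var embedModel = LM.LoadFromModelID("embeddinggemma-300m");

// Connect to a running Qdrant instance over gRPC (endpoint and constructor shape assumed).
var store = new QdrantEmbeddingStore(new Uri("http://localhost:6334"));

// Load (or create) the collection through the same unified DataSource API.
var dataSource = DataSource.LoadFromStore(store, "my-collection", embedModel);

// The rest of the code is unchanged: attach the DataSource to a RagEngine, import, query.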

Custom

Custom via IVectorStore

Implement IVectorStore interface

Full control over vector storage logic. Integrate with proprietary databases, internal APIs, or hybrid storage systems using the standardized contract (see the sketch after this list).

  • Persistence: Custom
  • Scale: Varies
  • Infrastructure: Your own
  • Seamless proprietary backend integration
  • Custom indexing and retrieval logic
  • Future-proof, vendor-agnostic architecture
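
A shape sketch of a custom backend. Only the two IVectorStore members named in the API reference below are shown, with assumed signatures; the real contract defines more members (upserts, similarity search, deletion), so treat this as a starting point rather than a complete implementation.

CustomStoreSketch.cs
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Replace the dictionary with calls into your proprietary database or internal API.
public class MyCustomVectorStore // : LMKit.Data.Storage.IVectorStore once every member is implemented
{
    private readonly ConcurrentDictionary<string, byte> _collections = new();

    public Task<bool> CollectionExistsAsync(string collectionName, CancellationToken cancellationToken = default)
        => Task.FromResult(_collections.ContainsKey(collectionName));

    public Task CreateCollectionAsync(string collectionName, CancellationToken cancellationToken = default)
    {
        _collections.TryAdd(collectionName, 0);
        return Task.CompletedTask;
    }
}
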
Core capabilities

Enterprise-ready vector management.

Everything you need to build production-grade embedding storage and retrieval.

Hierarchy

Hierarchical data organization

Organize embeddings into sections and partitions with optional metadata at each level. Manage multi-modal inputs within a single collection.

Metadata

Rich metadata support

Attach metadata to sections and partitions for filtering, tagging, and advanced retrieval scenarios across any vector backend.

Portability

Serialization & portability

Serialize DataSource instances to disk and reload anywhere. Enable checkpointing, debugging, and deployment without external services.

Updates

Incremental updates

Efficient insertions, deletions, and metadata edits without rebuilding the entire dataset. Works with both built-in and external stores.

Search

Similarity search

SearchSimilar returns ranked results by vector similarity. Configure top-K, minimum scores, and metadata filters for precise retrieval.
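
A short sketch of ranked retrieval through the RagEngine entry point referenced elsewhere on this page. The FindMatchingPartitions parameters (query text plus a result count) are assumptions, and the minimum-score cutoff is applied client-side here; the PartitionSimilarity members come from the API reference below.

SimilaritySearchSketch.cs
// Assumes a ragEngine configured as in the code example below.
var matches = ragEngine.FindMatchingPartitions("battery drains overnight", 5); // parameters assumed

foreach (var match in matches)
{
    if (match.Similarity < 0.6f)   // client-side minimum-score filter
        continue;

    // PartitionSimilarity exposes the section identifier, similarity score, and metadata.
    Console.WriteLine($"{match.SectionIdentifier}: {match.Similarity:F2}");
}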

Privacy

Privacy by design

Local-only and on-prem options keep data secure and compliant. No external dependencies required for complete vector management.

Code examples

Get started in minutes.

Same API, different backends. Switch storage strategies without rewriting your application logic.

InMemoryExample.cs
using LMKit.Model;
using LMKit.Data;
using LMKit.Retrieval;

// Load embedding model
var embedModel = LM.LoadFromModelID("embeddinggemma-300m");

// Create in-memory DataSource
var dataSource = DataSource.CreateInMemoryDataSource("my-collection", embedModel);

// Use RagEngine to import content
var ragEngine = new RagEngine(embedModel);
ragEngine.AddDataSource(dataSource);

// Import text with automatic chunking
ragEngine.ImportText(
    "Your document content here...",
    new TextChunking() { MaxChunkSize = 500 },
    "my-collection",
    "document-section");

// Optional: Serialize to disk for later reuse
dataSource.Serialize("./cache/my-collection.bin");

// Later: Deserialize from disk
var restored = DataSource.Deserialize("./cache/my-collection.bin", embedModel);
Use cases

Built for real-world applications.

From desktop tools to enterprise RAG systems, LM-Kit's vector storage adapts to your needs.

Search

Semantic search engines

Build intelligent search that understands meaning, not just keywords. Index documents, products, or knowledge bases for natural language queries.

Chatbot

RAG-powered chatbots

Ground LLM responses with relevant context from your corpus. Use RagEngine with FindMatchingPartitions() and QueryPartitions() for accurate answers.
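
A condensed sketch of the retrieve-then-answer loop using the two methods named above. The parameters shown (question text, a result count, and the matched partitions) are assumptions; depending on the overload, a chat-capable model or conversation object may also be required, so check the RagEngine reference.

RagQuerySketch.cs
// Assumes a ragEngine with data sources attached, as in the code example above.
string question = "What is the refund policy for annual plans?";

// 1. Retrieve the partitions most relevant to the question (parameters assumed).
var partitions = ragEngine.FindMatchingPartitions(question, 3);

// 2. Ground the model's answer in those partitions (overload assumed).
var answer = ragEngine.QueryPartitions(question, partitions);

Console.WriteLine(answer);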

Memory

Agent memory systems

Give AI agents persistent memory with AgentMemory class. Store facts via SaveInformationAsync() and recall them automatically in conversations.
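
A sketch of agent memory wiring based on the class description in the API reference below. The AgentMemory constructor, the SaveInformationAsync parameters, and the namespaces are assumptions; only the method name and the Memory property on MultiTurnConversation come from this page.

AgentMemorySketch.cs
using LMKit.Model;
using LMKit.TextGeneration; // assumed namespace for MultiTurnConversation
using LMKit.Agents;         // assumed namespace for AgentMemory

var embedModel = LM.LoadFromModelID("embeddinggemma-300m");
var chatModel = LM.LoadFromModelID("your-chat-model-id"); // any chat-capable model

// Constructor shown with the embedding model; the actual signature may differ.
var memory = new AgentMemory(embedModel);

// Store a fact with its embedding so it can be recalled later (parameters assumed).
await memory.SaveInformationAsync("preferences", "The user prefers metric units.");

// Attach the memory to a conversation; relevant facts are recalled automatically.
var chat = new MultiTurnConversation(chatModel)
{
    Memory = memory
};

var reply = chat.Submit("How many kilometers is a 10-mile run?");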

Documents

Document intelligence

Index and retrieve from large document collections with DocumentRag. Support legal discovery, research assistants, and enterprise knowledge management.

Recommend

Recommendation systems

Find similar items, content, or users based on embedding similarity. Power product recommendations, content discovery, and personalization.

Offline

Offline desktop applications

Ship portable AI modules with embedded vectors using FileSystemVectorStore. Support air-gapped environments and compliance-sensitive scenarios.

API reference

Key classes & interfaces.

Core components for building vector storage solutions.

DataSource

Central container for embedding storage. Manages sections, partitions, metadata. Create with CreateFileDataSource(), CreateInMemoryDataSource(), or LoadFromFile().

RagEngine

Orchestrates RAG workflows. Import text with automatic chunking via ImportText(). Query with FindMatchingPartitions() and QueryPartitions().

IVectorStore

Interface for custom vector storage backends. Implement it to plug in proprietary databases. Methods include CollectionExistsAsync() and CreateCollectionAsync().

FileSystemVectorStore

File system-based IVectorStore implementation. Persists collections as .ds files in a directory with automatic caching.

QdrantEmbeddingStore

Qdrant connector implementing IVectorStore. Bridges LM-Kit.NET with Qdrant's high-performance vector database via gRPC.

PartitionSimilarity

Result from similarity search. Contains SectionIdentifier, Similarity score, Metadata, and partition content for retrieval workflows.

AgentMemory

Semantic memory for AI agents. SaveInformationAsync() stores facts with embeddings. Integrates with MultiTurnConversation via Memory property.

TextChunking

Configures text splitting for embeddings. Set MaxChunkSize to control partition size. Used with RagEngine.ImportText() for automatic chunking.

Why LM-Kit

Why choose LM-Kit for vector storage?

The right storage strategy is critical to performance, scalability, and developer productivity.

01

Swap backends instantly

Same code works across all storage types. Just change the backend configuration.

02

Privacy by design

Local-only and on-prem solutions keep data secure and compliant.

03

Performant & scalable

From desktop experiments to high-scale RAG systems with millions of vectors.

04

Developer-friendly

Clean APIs, comprehensive documentation, and consistent patterns across all backends.

Ready to simplify your vector storage?

From in-memory experiments to durable local databases and scalable remote setups, LM-Kit makes switching storage backends effortless.

Download free · API documentation