Any Document. Any Field. Zero Hallucinations.
The most advanced local structured data extraction engine available. Extract precise fields from invoices, contracts, medical records, and any document type. Powered by multimodal AI, proprietary symbolic AI layers, and purpose-trained LM-Kit models that eliminate LLM hallucinations. 100% on-device.
Symbolic AI Layers
Dynamic Sampling + adaptive layers eliminate LLM hallucinations.
LM-Kit Models
Purpose-trained models optimized for extraction tasks.
Multimodal Engine
Processes images, scans, PDFs, and handwritten notes natively.
Schema-Driven
JSON schema or high-level API for typed outputs.
Beyond GenAI: A Unique Piece of Engineering
LM-Kit.NET delivers the most advanced local structured data extraction engine available. While other solutions rely solely on LLMs that hallucinate, LM-Kit combines generative AI with multiple symbolic AI layers, fuzzy logic, and expert systems to produce extraction results you can actually trust.
This is not just another wrapper around an LLM. Our proprietary symbolic layers work in concert with the language model, dynamically engaged based on content characteristics, domain semantics, and extraction requirements. The system intelligently orchestrates these components for each extraction scenario, achieving accuracy that pure LLM approaches cannot match.
Built by IDP pioneers: Designed by engineers with 20+ years of experience in document processing and data extraction, with billions of documents processed in production worldwide.
using LMKit.Extraction;
using LMKit.Model;

// Load LM-Kit optimized model (recommended)
var model = LM.LoadFromModelID("lmkit-tasks");

// Create extraction instance
var extractor = new TextExtraction(model);

// Define schema from JSON or programmatically
extractor.SetElementsFromJsonSchema(schemaJson);

// Set content: image, PDF, or text
extractor.SetContent(new Attachment("invoice.pdf"));

// Extract with zero hallucinations
var result = extractor.Parse();

// Access structured JSON output
Console.WriteLine(result.Json);

// Or iterate typed elements
foreach (var elem in result.Elements)
    Console.WriteLine($"{elem.Name}: {elem.Value}");
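Because result.Json is plain JSON, it can be consumed with any JSON library. Here is a minimal follow-up sketch using System.Text.Json; the field names (vendor_name, total_amount) are hypothetical and simply mirror whatever your schema defines.

using System.Text.Json;

// Field names are placeholders; they come from the schema you defined above.
using var doc = JsonDocument.Parse(result.Json);
string vendor = doc.RootElement.GetProperty("vendor_name").GetString()!;
decimal total = doc.RootElement.GetProperty("total_amount").GetDecimal();
Console.WriteLine($"{vendor}: {total:C}");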
Adaptive Symbolic AI Layers
Multiple AI paradigms working together, dynamically engaged based on content type, domain semantics, and extraction context.
Why Pure LLMs Fail at Extraction
Large Language Models are designed for fluency, not precision. They hallucinate values, invent data that doesn't exist, and struggle with structured output constraints. For extraction tasks where accuracy matters, this is unacceptable.
LM-Kit solves this with a multi-layer architecture where symbolic AI systems validate, constrain, and correct LLM outputs in real-time. These layers include techniques such as taxonomy matching, ontology validation, fuzzy logic, and rule-based expert systems. Each component is adaptively engaged based on the extraction scenario.
Dynamic Sampling
Adaptive inference with real-time structural awareness and contextual validation.
Taxonomy Matching
Domain-specific classification and entity recognition.
Ontology Validation
Semantic relationship verification between extracted fields.
Contextual Rules
Expert system rules applied based on document type and content.
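To make the idea of rule-based validation concrete, here is a minimal, illustrative sketch. It is not LM-Kit's internal implementation; it only shows the kind of fuzzy consistency rule a symbolic layer can apply on top of generative output, here checking that an extracted invoice total agrees with its extracted line items.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative rule: flag the extraction when the reported total and the sum
// of line items disagree by more than a small tolerance. Fuzzy acceptance
// absorbs rounding noise instead of rejecting valid outputs outright.
static bool TotalMatchesLineItems(decimal reportedTotal,
                                  IEnumerable<decimal> lineItemAmounts,
                                  decimal tolerance = 0.01m)
{
    decimal computed = lineItemAmounts.Sum();
    return Math.Abs(computed - reportedTotal) <= tolerance;
}

A failed check can trigger correction or re-validation rather than silently accepting a hallucinated value.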
Dynamic Sampling: The Secret Weapon
A foundational component of our symbolic AI stack that reimagines how LLMs generate structured output.
Adaptive Inference, Not Just Token Selection
Standard LLM sampling picks the most probable next token. This works for chat, but fails catastrophically for structured extraction where precision matters.
Dynamic Sampling is our proprietary inference method that goes far beyond probability-based token selection. It maintains real-time structural awareness of the generation process, applies contextual perplexity assessment with fuzzifiers, and leverages auxiliary content as extended context to guide every token decision.
- Speculative Grammar: Hybrid approach combining greedy sampling for constants with speculative validation for variables
- Contextual Perplexity: Adaptive guidance using fuzzifiers to reduce hallucinations without over-penalizing valid patterns
- Auxiliary Content: Extended context mechanism for semantic validation beyond the attention window
- Model-Agnostic: Works across any model, any size, no fine-tuning required
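The sketch below is purely illustrative and is not Dynamic Sampling itself (the real mechanism adds perplexity fuzzifiers, speculative grammar, and auxiliary context). It only demonstrates the underlying principle of structure-aware candidate selection: prefer the most probable token, but reject candidates that would break the JSON being generated.

using System;
using System.Collections.Generic;
using System.Linq;

// Toy structural filter over candidate tokens; for illustration only.
static class StructuralSampler
{
    // Picks the most probable candidate whose text keeps the JSON prefix plausible.
    public static string PickToken(string jsonPrefix,
                                   IEnumerable<(string Text, double Prob)> candidates)
    {
        foreach (var c in candidates.OrderByDescending(c => c.Prob))
        {
            if (IsPlausibleJsonPrefix(jsonPrefix + c.Text))
                return c.Text;            // first structurally valid candidate wins
        }
        return string.Empty;              // no valid candidate: caller repairs or retries
    }

    // Cheap state machine: tracks string context and bracket/brace depth.
    static bool IsPlausibleJsonPrefix(string s)
    {
        bool inString = false, escaped = false;
        int depth = 0;
        foreach (char ch in s)
        {
            if (inString)
            {
                if (escaped) escaped = false;
                else if (ch == '\\') escaped = true;
                else if (ch == '"') inString = false;
                continue;
            }
            switch (ch)
            {
                case '"': inString = true; break;
                case '{': case '[': depth++; break;
                case '}': case ']': if (--depth < 0) return false; break;
            }
        }
        return true;                      // never closed more scopes than were opened
    }
}

A production system also needs the contextual checks described below (perplexity guidance, repetition awareness, graceful fallbacks); the point here is only that structural validation at every step prevents whole classes of malformed output.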
Real-Time Structural Awareness
Tracks whether the model is inside a JSON string, object, numeric run, or value start. Maintains a persistent CompletionState that enables structural validation at every step.
Metric-Guided Token Voting
Perplexity scoring identifies uncertainty between candidates. Per-candidate validation loops explore alternatives when top tokens are invalid or overly risky.
Model-Aware JSON Rendering
Monitors model preferences for formatting styles and adapts grammar expectations in real-time, ensuring higher parsing success across different model architectures.
Graceful Fallbacks
Immediate error detection with automatic correction through adaptive fallbacks. Prevents error propagation without restarting inference from scratch.
Contextual Repetition Detection
Understands when repetition is valid (e.g., "1000000000") versus problematic, avoiding the crude penalties of traditional approaches that break valid outputs.
Continuously Benchmarked
Refined through experimental research cycles and inference benchmarking on large datasets. Updated regularly to maintain state-of-the-art performance.
LM-Kit Trained Models
Purpose-built models optimized specifically for LM-Kit extraction tasks. The best option for maximum accuracy and speed.
LM-Kit Tasks Model
A specialized model optimized for LM-Kit pipelines. Achieves state-of-the-art performance in classification, structured data extraction, language detection, and sentiment analysis while also supporting chat, embeddings, text generation, code completion, math reasoning, and vision understanding.
- Optimized for extraction accuracy
- Seamless integration with LM-Kit pipelines
- Multimodal: text and vision capable
- Compact size, efficient inference
Extract From Any Content Source
Images, scans, PDFs, handwritten notes, Office documents. If it contains data, we can extract it.
Images
Photos, scans, screenshots with automatic orientation detection
PDF Documents
Digital PDFs, scanned documents, multi-page extraction
Office Documents
Word, Excel, PowerPoint with layout preservation
Handwritten Content
Notes, forms, signatures with VLM understanding
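A minimal sketch reusing the API from the example above (model and schemaJson are assumed to be defined there). It assumes Attachment infers the content type from the file and that SetContent can be called again to switch documents; if not, create a new TextExtraction instance per document.

var extractor = new TextExtraction(model);
extractor.SetElementsFromJsonSchema(schemaJson);

// Same pipeline, different sources: photo or scan, multi-page PDF, Office document.
extractor.SetContent(new Attachment("receipt-photo.jpg"));
var fromImage = extractor.Parse();

extractor.SetContent(new Attachment("statement.pdf"));
var fromPdf = extractor.Parse();

extractor.SetContent(new Attachment("contract.docx"));
var fromDocx = extractor.Parse();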
Built-in Local OCR Engine
Powerful OCR capabilities included out of the box, with support for custom OCR engine integration.
Built-in OCR
LM-Kit includes a local OCR engine that works seamlessly with the extraction pipeline. No cloud calls, no external dependencies.
- Automatic language detection
- Orientation detection and correction
- 100% local processing
Custom OCR Integration
Need a different OCR engine? The extraction pipeline supports pluggable OCR engines to match your specific requirements.
- Pluggable OCR engine interface
- Pre-integrated alternatives available
- Custom engine support
Rich Data Type Support
Define extraction schemas with typed fields. Get clean, validated output ready for integration.
String
Text values
Integer
Whole numbers
Float
Decimal values
Double
High precision
Bool
True/false
Date
Date values
Char
Single character
Arrays
Lists of any type
Plus nested objects and complex structures. View all supported types →
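A minimal schema sketch, assuming SetElementsFromJsonSchema accepts standard JSON Schema as the example above suggests; the field names and exact dialect are illustrative, so check the documentation for the supported keywords.

// Hypothetical invoice schema with typed fields, an array, and a nested object.
string schemaJson = """
{
  "type": "object",
  "properties": {
    "vendor_name":  { "type": "string" },
    "invoice_date": { "type": "string", "format": "date" },
    "total_amount": { "type": "number" },
    "is_paid":      { "type": "boolean" },
    "line_items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "description": { "type": "string" },
          "quantity":    { "type": "integer" },
          "unit_price":  { "type": "number" }
        }
      }
    }
  }
}
""";

extractor.SetElementsFromJsonSchema(schemaJson);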
Invoice Data Extraction Demo
A complete demo showcasing multimodal extraction from invoice documents.
Invoice Extraction Demo
Interactive console app demonstrating structured data extraction from invoices in multiple languages with VLM + OCR integration, automatic language detection, and JSON output.
- Multiple vision-language models supported
- Built-in OCR with language detection
- Automatic orientation detection
- JSON schema configuration
- Sample invoices in French, Spanish, English
Extract Anything From Anything
Unlimited use cases. Define your schema and extract.
Financial Documents
Invoices, receipts, bank statements, expense reports. Extract vendor, amounts, dates, line items.
Legal Contracts
NDAs, service agreements, employment contracts. Extract parties, dates, clauses, obligations.
Medical Records
Patient records, lab results, prescriptions. Extract patient info, diagnoses, medications.
HR Documents
Resumes, job offers, employment applications. Extract skills, experience, contact info.
Academic Papers
Research papers, citations, abstracts. Extract authors, methodology, findings, references.
ID Documents
Passports, driver's licenses, ID cards. Extract name, number, dates, nationality.
Broad Model Compatibility
Use LM-Kit trained models for best results, or bring your own. We support a wide range of models and are constantly adding new ones.
LM-Kit supports many different models for extraction tasks. While third-party models work well, LM-Kit trained models deliver the best performance as they are specifically optimized for our extraction pipelines and symbolic AI layers. We continuously add new model support and refine our purpose-built models.
Key Classes
The building blocks for structured data extraction applications.
TextExtraction
Main extraction class. Set content, define schema, parse. Supports text, images, PDFs, and Office documents.
View Documentation
TextExtractionElement
Define extraction fields with name, type, description, and nested elements for complex structures.
View Documentation
TextExtractionResult
Contains extracted elements and their JSON representation. Iterate typed elements or access raw JSON.
View Documentation
Attachment
Represents input content. Supports PDF, images, Office docs with page-level access.
View Documentation
Ready to Extract With Zero Hallucinations?
The most advanced local data extraction engine. Symbolic AI layers + purpose-trained models. 100% on your infrastructure.