Cross-cutting concerns are the same whether you serve HTTP or call a model: logging, validation, redaction, throttling, retry. ASP.NET Core solved them with the onion-pattern middleware pipeline. LM-Kit brings the same idea to AI: a FilterPipeline with IPromptFilter, ICompletionFilter, and IToolInvocationFilter stages that wrap every call.
IPromptFilter: Inspect or rewrite the prompt before inference. Redact PII, inject context, enforce policy.
ICompletionFilter: Validate or rewrite the completion. Strip leaked secrets, enforce length, salvage malformed JSON.
IToolInvocationFilter: Wrap every tool call. Approve, deny, retry, log, or transform arguments and results.
If you have ever sprinkled PII redaction logic across five agent methods, you know the cost. Filters move those concerns into composable, testable units. Each filter has one job; the pipeline runs them in order. Adding a new concern means writing one class, not editing twenty call sites.
Filters wrap inner filters. Pre-processing runs outside-in, post-processing runs inside-out. Familiar to anyone who has written ASP.NET middleware.
Every filter receives a CancellationToken, so a slow filter cannot strand a stuck agent.
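A minimal sketch of a cancellable prompt filter. It assumes the token is surfaced on the context as ctx.CancellationToken (the exact property name is an assumption), and IContextStore is a hypothetical slow dependency used for illustration:

```csharp
using System.Threading;
using System.Threading.Tasks;
using LMKit.Inference.Filters;

// Hypothetical slow dependency the filter consults before inference.
public interface IContextStore
{
    Task<string> FetchAsync(string prompt, CancellationToken token);
}

public sealed class ContextLookupFilter : IPromptFilter
{
    private readonly IContextStore _store;

    public ContextLookupFilter(IContextStore store) => _store = store;

    public async Task InvokeAsync(PromptFilterContext ctx, PromptFilterDelegate next)
    {
        // Honor cancellation during the slow lookup so an abandoned call
        // does not strand the pipeline.
        string extra = await _store.FetchAsync(ctx.Prompt, ctx.CancellationToken);
        ctx.Prompt = extra + "\n\n" + ctx.Prompt;
        await next(ctx);
    }
}
```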
Build a pipeline once, attach it to one or many agents, or to a MultiTurnConversation. Tests run against the same pipeline as production.
A filter can decide not to call the next stage. Useful for cache hits, policy denials, or canned responses on safe topics.
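A short-circuit is simply a filter that declines to call next. As a sketch of a policy gate, where IsRestricted is a placeholder for whatever domain check applies:

```csharp
using System;
using System.Threading.Tasks;
using LMKit.Inference.Filters;

public sealed class PolicyGateFilter : IPromptFilter
{
    public async Task InvokeAsync(PromptFilterContext ctx, PromptFilterDelegate next)
    {
        if (IsRestricted(ctx.Prompt))
            return; // policy denial: next is never called, inference never runs

        await next(ctx);
    }

    // Placeholder policy check; substitute your own logic.
    private static bool IsRestricted(string prompt) =>
        prompt.Contains("internal-only", StringComparison.OrdinalIgnoreCase);
}
```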
Permission policies and filters work together. Filters add domain-specific logic on top of the typed permission framework.
Filters integrate with the tracing layer. Every filter run produces a span; failures and short-circuits are tagged.
A prompt filter strips emails and phone numbers from the user prompt before inference runs.
using System.Threading.Tasks;
using LMKit.Inference.Filters;

public sealed class RedactPiiFilter : IPromptFilter
{
    public async Task InvokeAsync(PromptFilterContext ctx, PromptFilterDelegate next)
    {
        // Scrub phone numbers, then emails, before the prompt reaches the model.
        ctx.Prompt = Redactor.RedactEmails(Redactor.RedactPhoneNumbers(ctx.Prompt));
        await next(ctx); // pass to next filter / inference
    }
}
A completion filter validates the model output and repairs malformed JSON before returning it.
public sealed class SalvageJsonFilter : ICompletionFilter
{
    public async Task InvokeAsync(CompletionFilterContext ctx, CompletionFilterDelegate next)
    {
        await next(ctx); // run inference (and inner filters) first
        if (!JsonValidator.IsValid(ctx.Completion))
        {
            ctx.Completion = JsonSalvager.Repair(ctx.Completion);
        }
    }
}
Compose the filters in a pipeline and wire it into an agent builder in a single declarative chain.
var pipeline = new FilterPipeline()
    .UsePromptFilter(new RedactPiiFilter())
    .UseToolInvocationFilter(new AuditLogFilter())
    .UseCompletionFilter(new SalvageJsonFilter());

var agent = Agent.CreateBuilder(model)
    .WithFilterPipeline(pipeline)
    .Build();
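The tool invocation stage follows the same shape. As a sketch, with context and delegate types named by analogy with the other stages (ToolInvocationFilterContext and ToolInvocationFilterDelegate are assumptions about the API surface), a filter that retries transient tool failures might look like:

```csharp
using System;
using System.Threading.Tasks;
using LMKit.Inference.Filters;

public sealed class RetryToolFilter : IToolInvocationFilter
{
    private const int MaxAttempts = 3;

    public async Task InvokeAsync(ToolInvocationFilterContext ctx, ToolInvocationFilterDelegate next)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                await next(ctx); // invoke the tool (and any inner filters)
                return;
            }
            catch (Exception) when (attempt < MaxAttempts)
            {
                // Transient failure: back off briefly, then try again.
                await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
    }
}
```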
Callbacks fire alongside execution but cannot rewrite prompts or short-circuit. Cross-cutting concerns end up duplicated in chain code.
Filters exist (function invocation, prompt rendering) but the pipeline is less composable and tied to the kernel lifecycle.
ASP.NET-style onion pattern, three first-class filter kinds, async/cancellable, attached per-agent or per-conversation, observable by default.
Combine declarative policies with custom filter logic for domain-specific approval flows.
Retries and circuit breakers handle transient failure; filters handle semantic failure.
Every filter run produces a span. Audit trails for free.
Pair the redaction filter with LM-Kit's own PII extraction model for higher-fidelity scrubbing.
Working console demos on GitHub, step-by-step how-to guides on the docs site, and the API reference for the classes used on this page.