
One protocol. Every external service.

The Model Context Protocol is the open standard for connecting AI agents to tools, data, and prompts. LM-Kit.NET ships a complete MCP client with both Stdio (local servers) and HTTP+SSE (remote servers) transports, plus the full surface area of the spec: resources, prompts, sampling, elicitation, roots, progress, cancellation, completions, and logging.

Full spec coverage · Stdio + HTTP+SSE · Human-in-the-loop
Protocol features
  • Tools, resources, prompts
  • Sampling & elicitation
  • Roots & subscriptions
  • Progress & cancellation
  • Logging & completions
  • Capability negotiation
Two transports
  • Stdio: launch local servers (Node, Python, native binaries)
  • HTTP+SSE: connect to remote services with auth headers
  • Auto-restart and graceful shutdown for stdio
  • Custom transports via IMcpTransport
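Both transports carry the same JSON-RPC 2.0 messages; the stdio transport simply frames them as newline-delimited UTF-8 JSON on the server process's stdin/stdout. A minimal sketch of that framing, using the initialize handshake that opens every MCP session (method name, protocol version, and capability keys come from the MCP spec; the client name and capability selection are illustrative):

```python
import json

def frame(message: dict) -> bytes:
    """Serialize one JSON-RPC message for the stdio transport:
    a single line of compact UTF-8 JSON terminated by a newline."""
    return (json.dumps(message, separators=(",", ":")) + "\n").encode("utf-8")

# The first message a client sends: the MCP initialize request, where
# client and server negotiate protocol version and capabilities.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {
            "sampling": {},
            "elicitation": {},
            "roots": {"listChanged": True},
        },
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

wire = frame(initialize)
decoded = json.loads(wire.decode("utf-8"))
print(decoded["method"])  # initialize
```

Because every message is one line, a transport implementation only needs a line reader and a line writer, which is what makes launching Node, Python, or native binaries as servers so cheap.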
Why MCP matters

An open standard, finally.

Before MCP, every agent framework had its own way of describing tools, fetching context, and routing prompts. Anthropic introduced MCP in November 2024 as a vendor-neutral protocol for connecting AI to anything. Adoption spread fast: thousands of MCP servers exist today for databases, dev tools, productivity apps, internal services. LM-Kit.NET speaks MCP fluently so your agents inherit the entire ecosystem.

Tools across the ecosystem

Connect to public servers (DeepWiki, Microsoft Learn Docs, GitHub, currency conversion) or internal ones built by your platform team. The agent treats them all as registered tools.

Resources are first-class

MCP servers expose typed resources: file trees, database schemas, project boards. Subscribe via the ResourceUpdated event to react when source data changes.

Prompts ship from the server

Servers can expose McpPrompt templates. The server owns the prompt; your agent receives it. Versioning and updates live with the service, not buried in client code.

Sampling delegation

An MCP server can ask your client to sample tokens from your model via McpSamplingRequest. The server gets generation; you keep your model and your data.
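On the wire, that delegation is a sampling/createMessage JSON-RPC request sent from server to client. The field names below follow the MCP spec; the concrete values are illustrative:

```python
# Shape of a server-to-client sampling request per the MCP spec.
# The client answers with a completion from its own model; the server
# never sees model weights, credentials, or private context.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarize the diff."}}
        ],
        "systemPrompt": "You are a concise reviewer.",  # optional
        "maxTokens": 256,
    },
}

# The client's reply carries the generated message plus metadata
# about which model produced it and why generation stopped.
sampling_response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "role": "assistant",
        "content": {"type": "text", "text": "Three files changed; tests still pass."},
        "model": "local-model",
        "stopReason": "endTurn",
    },
}
```

The response id matches the request id, so the server can correlate completions even with several sampling calls in flight.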

Elicitation

Servers can request input from the user mid-flight via McpElicitationRequest. Wire it to a console prompt, a Slack approval, or a UI modal. The agent waits for the answer.
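At the protocol level this is an elicitation/create request: the server supplies a message plus a JSON schema describing the expected answer, and the client replies with an action and matching content. Field names follow the MCP spec; the deployment question is illustrative:

```python
# A server asking the user a structured question mid-flight.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 12,
    "method": "elicitation/create",
    "params": {
        "message": "Which environment should I deploy to?",
        "requestedSchema": {
            "type": "object",
            "properties": {
                "environment": {"type": "string", "enum": ["staging", "production"]}
            },
            "required": ["environment"],
        },
    },
}

# The client's reply: the user may accept (with schema-conforming
# content), decline the request, or cancel the whole interaction.
elicitation_response = {
    "jsonrpc": "2.0",
    "id": 12,
    "result": {"action": "accept", "content": {"environment": "staging"}},
}
```

Because the answer is schema-constrained, the client can render it as a dropdown, a Slack button row, or a console prompt and still hand the server well-typed data.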

Progress and cancellation

Long-running operations report progress via McpProgressToken. Users can cancel mid-stream. UIs render live progress without polling.
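The push model works because progress arrives as JSON-RPC notifications rather than polled responses. A sketch of the two message shapes involved, per the MCP spec (token, request id, and reason values are illustrative):

```python
# A server streams progress for a long-running operation as
# notifications. Notifications carry no "id": no reply is expected.
progress = {
    "jsonrpc": "2.0",
    "method": "notifications/progress",
    "params": {
        "progressToken": "op-42",  # token the client attached to its request
        "progress": 30,
        "total": 100,
        "message": "indexing repository",
    },
}

# Either side can cancel an in-flight request by its original id.
cancelled = {
    "jsonrpc": "2.0",
    "method": "notifications/cancelled",
    "params": {"requestId": 9, "reason": "user aborted"},
}
```

A UI only has to map progressToken back to the operation it started and update the bar; no request/response round-trips are needed.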

Connect in five lines

From McpClientBuilder to live agent.

Build an McpClient with the fluent builder. Auto-register every tool the server exposes into your agent's tool registry. Done.

Launch a local MCP server process over stdio and forward every tool it exposes into the agent's registry.

McpStdio.cs
using LMKit.Agents;
using LMKit.Mcp.Client;
using LMKit.Mcp.Transport;

// Launch a local MCP server (Node, Python, native exe) over stdio.
var mcp = new McpClientBuilder()
    .WithStdio(new StdioTransportOptions
    {
        Command          = "npx",
        Arguments        = ["-y", "@modelcontextprotocol/server-github"],
        Environment      = { ["GITHUB_TOKEN"] = Env.Token },
        AutoRestart      = true,
        GracefulShutdown = TimeSpan.FromSeconds(5)
    })
    .Build();

await mcp.ConnectAsync();

// Every server tool joins the agent's registry.
var agent = Agent.CreateBuilder(model)
    .WithTools(t => t.AddFromMcp(mcp))
    .Build();

var result = await agent.RunAsync("Open an issue describing the failing CI run on main.");
Beyond tools

Resources, prompts, and live updates.

Many agent frameworks treat MCP as a tool transport and stop there. The spec is bigger. Resources let agents query typed data; prompts let servers deliver versioned templates; subscriptions let your client react when source data changes upstream.

ResourcesAndPrompts.cs
// Read a typed resource exposed by the MCP server.
McpResourceContent content = await mcp.ReadResourceAsync("db://schemas/orders");
Console.WriteLine(content.Text);

// Subscribe to changes. The event fires whenever upstream data updates.
mcp.ResourceUpdated += (_, e) =>
{
    log.Info($"resource changed: {e.Uri}");
};
await mcp.SubscribeAsync("db://schemas/orders");

// Render a prompt template that the server owns.
McpPromptResult p = await mcp.GetPromptAsync(
    name: "summarize-incident",
    args: new() { ["incident_id"] = "INC-4321" });

// Pass the rendered messages straight into a conversation.
foreach (var message in p.Messages)
{
    chat.AddMessage(message.Role, message.Content);
}
Sampling and elicitation

The server asks back.

MCP is bidirectional. A server can request that your client run inference on your model (sampling), or ask the user a question mid-flight (elicitation). Both keep the model and the user in your trust boundary while letting the server orchestrate complex flows.

SamplingAndElicitation.cs
// Server requests sampling. Your client owns the model and the data.
mcp.SamplingRequested += async (_, e) =>
{
    var reply = await chat.SubmitAsync(e.Request.Messages.Last().Content);
    e.Respond(new McpSamplingResponse(reply));
};

// Server asks the user for input. Wire to whatever UI you have.
mcp.ElicitationRequested += async (_, e) =>
{
    Console.Write($"{e.Request.Prompt} > ");
    var answer = Console.ReadLine();
    e.Respond(answer);
};

// Long-running tools report progress. Render as you like.
mcp.ProgressUpdated += (_, e) => ui.UpdateBar(e.Token, e.Progress, e.Total);
Versus the alternatives

Most MCP clients cover only the basics.

Reference Python client

Reference implementation lives in Python. Useful for prototyping. Production .NET applications need bindings, marshaling, or a separate service.

Semantic Kernel MCP

Tool transport works. Resources, prompts, sampling, elicitation, and roots are partially or not supported. Stdio integration is brittle.

LM-Kit MCP

Native .NET, complete spec coverage (tools / resources / prompts / sampling / elicitation / roots / progress / cancellation / logging / completions), Stdio and HTTP+SSE, auto-restart and graceful shutdown, observable events.

Related capabilities

MCP plus the rest.

Tools & function calling

70+ built-in tools, custom ITool implementations, and [LMFunction] attribute binding. Pair with MCP for hybrid local + remote toolchains.

Tools page

Permissions & guardrails

MCP tools register with full IToolMetadata. Apply ToolPermissionPolicy rules just like local tools.

Permissions page

Filter pipeline

Wrap MCP tool invocations in IToolInvocationFilter middleware for redaction, logging, salvage, or short-circuit.

Filter pipeline page

Observability

MCP requests and responses emit OpenTelemetry spans. Trace cross-service flows end-to-end.

Observability page

API reference

Key types.

McpClient

Full MCP client. Manages connection, capability negotiation, tools, resources, prompts, sampling, elicitation, progress, cancellation, logging.

View documentation

McpClientBuilder

Fluent builder. Pick HTTP+SSE or Stdio transport, set auth, timeouts, environment, working directory, restart policy.

View documentation

StdioTransportOptions

Configure stdio servers: command, arguments, working directory, environment, timeouts, graceful shutdown, auto-restart.

View documentation

McpResource / McpPrompt

Strongly-typed views of server-exposed resources and prompts. Includes URI, MIME type, arguments, content blocks.

View documentation

An open standard. A complete client.

Get Community Edition · Download