- Tools, resources, prompts
- Sampling & elicitation
- Roots & subscriptions
- Progress & cancellation
- Logging & completions
- Capability negotiation
One protocol. Every external service.
The Model Context Protocol is the open standard for connecting AI agents to tools, data, and prompts. LM-Kit.NET ships a complete MCP client with both Stdio (local servers) and HTTP+SSE (remote servers) transports, plus the full surface area of the spec: resources, prompts, sampling, elicitation, roots, progress, cancellation, completions, and logging.
- Stdio: launch local servers (Node, Python, native binaries)
- HTTP+SSE: connect to remote services with auth headers
- Auto-restart and graceful shutdown for stdio
- Custom transports via IMcpTransport
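For transports beyond stdio and HTTP+SSE, implement IMcpTransport yourself. The interface name comes from this page, but its exact members are not shown here, so the following is a hypothetical sketch assuming an async connect/send/receive contract; adjust to the real signatures in the API reference.

```csharp
using System.Net.WebSockets;
using System.Text;
using LMKit.Mcp.Transport;

// Hypothetical sketch: the real IMcpTransport members may differ.
// Assumes the transport carries JSON-RPC messages as text frames.
public sealed class WebSocketMcpTransport : IMcpTransport
{
    private readonly Uri _endpoint;
    private ClientWebSocket? _socket;

    public WebSocketMcpTransport(Uri endpoint) => _endpoint = endpoint;

    public async Task ConnectAsync(CancellationToken ct)
    {
        _socket = new ClientWebSocket();
        await _socket.ConnectAsync(_endpoint, ct);
    }

    public async Task SendAsync(string jsonRpcMessage, CancellationToken ct)
    {
        var bytes = Encoding.UTF8.GetBytes(jsonRpcMessage);
        await _socket!.SendAsync(bytes, WebSocketMessageType.Text,
                                 endOfMessage: true, ct);
    }

    public async Task CloseAsync(CancellationToken ct)
    {
        if (_socket is not null)
            await _socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "bye", ct);
    }
}
```

A custom transport like this would plug into the same builder pipeline as the built-in stdio and HTTP+SSE options.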
An open standard, finally.
Before MCP, every agent framework had its own way of describing tools, fetching context, and routing prompts. Anthropic introduced MCP in November 2024 as a vendor-neutral protocol for connecting AI to anything. Adoption spread fast: thousands of MCP servers exist today for databases, dev tools, productivity apps, internal services. LM-Kit.NET speaks MCP fluently so your agents inherit the entire ecosystem.
Tools across the ecosystem
Connect to public servers (DeepWiki, Microsoft Learn Docs, GitHub, currency conversion) or internal ones built by your platform team. The agent treats them all as registered tools.
Resources are first-class
MCP servers expose typed resources: file trees, database schemas, project boards. Subscribe via McpResourceUpdated events to react when source data changes.
Prompts ship from the server
Servers can expose McpPrompt templates. The server owns the prompt; your agent receives it. Versioning and updates live with the service, not buried in client code.
Sampling delegation
An MCP server can ask your client to sample tokens from your model via McpSamplingRequest. The server gets generation; you keep your model and your data.
Elicitation
Servers can request input from the user mid-flight via McpElicitationRequest. Wire it to a console prompt, a Slack approval, or a UI modal. The agent waits for the answer.
Progress and cancellation
Long-running operations report progress via McpProgressToken. Users can cancel mid-stream. UIs render live progress without polling.
From McpClientBuilder to live agent.
Build an McpClient with the fluent builder. Auto-register every tool the server exposes into your agent's tool registry. Done.
Launch a local MCP server process over stdio and forward every tool it exposes into the agent's registry.
using LMKit.Agents;
using LMKit.Mcp.Client;
using LMKit.Mcp.Transport;

// Launch a local MCP server (Node, Python, native exe) over stdio.
var mcp = new McpClientBuilder()
    .WithStdio(new StdioTransportOptions
    {
        Command = "npx",
        Arguments = ["-y", "@modelcontextprotocol/server-github"],
        Environment = { ["GITHUB_TOKEN"] = Env.Token },
        AutoRestart = true,
        GracefulShutdown = TimeSpan.FromSeconds(5)
    })
    .Build();

await mcp.ConnectAsync();

// Every server tool joins the agent's registry.
var agent = Agent.CreateBuilder(model)
    .WithTools(t => t.AddFromMcp(mcp))
    .Build();

var result = await agent.RunAsync("Open an issue describing the failing CI run on main.");
Connect to a hosted MCP server over HTTP+SSE with a bearer token, then inspect tools, resources, and prompts.
using LMKit.Mcp.Client;

// Connect to a hosted MCP server over HTTP+SSE with a bearer token.
var mcp = new McpClientBuilder()
    .WithHttp("https://mcp.example.com")
    .WithBearerToken(secrets.McpToken)
    .WithTimeout(TimeSpan.FromSeconds(30))
    .Build();

await mcp.ConnectAsync();

// Inspect the catalog: tools, resources, prompts.
foreach (var tool in mcp.Tools)
    Console.WriteLine($"tool    : {tool.Name}");
foreach (var r in mcp.Resources)
    Console.WriteLine($"resource: {r.Uri}");
foreach (var p in mcp.Prompts)
    Console.WriteLine($"prompt  : {p.Name}");
Resources, prompts, and live updates.
Many agent frameworks treat MCP as a tool transport and stop there. The spec is bigger. Resources let agents query typed data; prompts let servers deliver versioned templates; subscriptions let your client react when source data changes upstream.
// Read a typed resource exposed by the MCP server.
McpResourceContent content = await mcp.ReadResourceAsync("db://schemas/orders");
Console.WriteLine(content.Text);

// Subscribe to changes. The event fires whenever upstream data updates.
mcp.ResourceUpdated += (_, e) =>
{
    log.Info($"resource changed: {e.Uri}");
};
await mcp.SubscribeAsync("db://schemas/orders");

// Render a prompt template that the server owns.
McpPromptResult p = await mcp.GetPromptAsync(
    name: "summarize-incident",
    args: new() { ["incident_id"] = "INC-4321" });

// Pass the rendered messages straight into a conversation.
foreach (var message in p.Messages)
{
    chat.AddMessage(message.Role, message.Content);
}
The server asks back.
MCP is bidirectional. A server can request that your client run inference on your model (sampling), or ask the user a question mid-flight (elicitation). Both keep the model and the user in your trust boundary while letting the server orchestrate complex flows.
// Server requests sampling. Your client owns the model and the data.
mcp.SamplingRequested += async (_, e) =>
{
    var reply = await chat.SubmitAsync(e.Request.Messages.Last().Content);
    e.Respond(new McpSamplingResponse(reply));
};

// Server asks the user for input. Wire to whatever UI you have.
mcp.ElicitationRequested += async (_, e) =>
{
    Console.Write($"{e.Request.Prompt} > ");
    var answer = Console.ReadLine();
    e.Respond(answer);
};

// Long-running tools report progress. Render as you like.
mcp.ProgressUpdated += (_, e) =>
    ui.UpdateBar(e.Token, e.Progress, e.Total);
Most MCP clients cover only the basics.
Reference Python client
The reference implementation lives in Python. Useful for prototyping, but production .NET applications need bindings, marshaling, or a separate service to use it.
Semantic Kernel MCP
Tool transport works. Resources, prompts, sampling, elicitation, and roots are partially or not supported. Stdio integration is brittle.
LM-Kit MCP
Native .NET, complete spec coverage (tools / resources / prompts / sampling / elicitation / roots / progress / cancellation / logging / completions), Stdio and HTTP+SSE, auto-restart and graceful shutdown, observable events.
MCP in production code.
MCP plus the rest.
Tools & function calling
The 70+ built-in tools, custom ITool implementations, and [LMFunction] attribute binding. Pair with MCP for hybrid local + remote toolchains.
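Mixing local and remote tools can look roughly like the following. This is a sketch: [LMFunction], Agent.CreateBuilder, and AddFromMcp appear on this page, but the attribute's parameters and the AddFromType registration helper are assumptions about the API shape.

```csharp
using LMKit.Agents;

public static class WeatherTools
{
    // [LMFunction] binds this method as a callable tool; the attribute's
    // name/description parameters shown here are assumed, not verbatim API.
    [LMFunction("get_weather", "Returns current conditions for a city.")]
    public static string GetWeather(string city) =>
        $"Sunny, 21 °C in {city}"; // stand-in for a real lookup
}

var agent = Agent.CreateBuilder(model)
    .WithTools(t =>
    {
        t.AddFromType<WeatherTools>(); // local [LMFunction] methods (assumed helper)
        t.AddFromMcp(mcp);             // remote MCP tools, same registry
    })
    .Build();
```

The point of the hybrid setup: the model sees one flat tool catalog and never needs to know which calls run in-process and which cross the wire.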
Permissions & guardrails
MCP tools register with full IToolMetadata. Apply ToolPermissionPolicy rules just like local tools.
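A policy over MCP tools might be declared like this. ToolPermissionPolicy and IToolMetadata come from this page; the Deny/RequireApproval rule methods and the metadata properties used are hypothetical illustrations of how such rules could be expressed.

```csharp
// Sketch only: rule method names and metadata properties are assumptions.
var policy = new ToolPermissionPolicy();

// Block destructive remote tools outright.
policy.Deny(meta => meta.Name.StartsWith("delete_"));

// Require a human approval step before any MCP-sourced tool runs.
policy.RequireApproval(meta => meta.Source == ToolSource.Mcp);

var agent = Agent.CreateBuilder(model)
    .WithTools(t => t.AddFromMcp(mcp))
    .WithPermissions(policy)
    .Build();
```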
Filter pipeline
Wrap MCP tool invocations in IToolInvocationFilter middleware for redaction, logging, salvage, or short-circuit.
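A redaction filter in that pipeline could take this shape. IToolInvocationFilter is named on this page; the member signatures, context type, and WithText helper below are assumptions used to illustrate the wrap-and-continue pattern.

```csharp
using System.Text.RegularExpressions;

// Sketch: interface members are assumed, not the verbatim contract.
public sealed class RedactionFilter : IToolInvocationFilter
{
    public async Task<ToolResult> InvokeAsync(
        ToolInvocationContext context, ToolInvocationDelegate next)
    {
        var result = await next(context);            // run the MCP tool
        return result.WithText(Redact(result.Text)); // scrub before the model sees it
    }

    private static string Redact(string text) =>
        // Example rule: mask 16-digit, card-like numbers.
        Regex.Replace(text, @"\b\d{16}\b", "****");
}
```

Returning early instead of calling next(context) is what the short-circuit case looks like; salvage means catching the tool's exception and substituting a usable result.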
Observability
MCP requests and responses emit OpenTelemetry spans. Trace cross-service flows end-to-end.
Key types.
McpClient
Full MCP client. Manages connection, capability negotiation, tools, resources, prompts, sampling, elicitation, progress, cancellation, logging.
McpClientBuilder
Fluent builder. Pick HTTP+SSE or Stdio transport, set auth, timeouts, environment, working directory, restart policy.
StdioTransportOptions
Configure stdio servers: command, arguments, working directory, environment, timeouts, graceful shutdown, auto-restart.
McpResource / McpPrompt
Strongly-typed views of server-exposed resources and prompts. Includes URI, MIME type, arguments, content blocks.
Build it. Read it. Try it.
Working console demos on GitHub, step-by-step how-to guides on the docs site, and the API reference for the classes used on this page.
MCP integration
Agent demo: consume tools, resources, prompts from an MCP server.
Open on GitHub →
MCP stdio integration
Agent demo: connect to an MCP server over stdio (local processes).
Open on GitHub →
How-to guide: Connect to MCP servers
How-to: discover, register, and invoke remote MCP tools.
Read the guide →
How-to guide: Use MCP resources and dynamic prompts
Beyond tools: pull resources and templated prompts from MCP.
Read the guide →