Governed LLM tool execution: policy, audit, and admin controls on a real MCP host
- Problem
- Teams ship "agent demos" that are hard to govern: tool access is implicit, approvals are ad hoc, and there is no durable audit trail. Production needs explicit allow / ask / deny decisions per tool, operator approval workflows, and an architecture that separates MCP server capabilities from client-side enforcement and LLM planning.
- Approach
- Built a reference MCP-style system with streamable HTTP transport, a Flask web host for end users, and a Gradio operator console for admins. The LLM uses standard tool calling, while the client enforces permissions, records every decision, and routes approvals to the operator. Demo and Live modes are included: Demo keeps the UI usable without credentials, while Live exercises the full loop.
- Result
- The outcome is a ready-to-run trio: MCP server, operator console, and web UI, with governance hooks already in place. It is designed as a practical foundation for building a secure MCP-backed assistant in almost any domain.
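The allow / ask / deny model above can be sketched as a per-tool policy lookup. This is a minimal illustration, not the project's actual API; all names here (`Decision`, `POLICY`, `decide`) are assumptions:

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"  # execute immediately
    ASK = "ask"      # pause and route to an operator for approval
    DENY = "deny"    # refuse the call and record the attempt

# Hypothetical per-tool policy table; anything unlisted is denied by default.
POLICY = {
    "search_docs": Decision.ALLOW,
    "send_email": Decision.ASK,
    "drop_table": Decision.DENY,
}

def decide(tool_name: str) -> Decision:
    """Resolve a tool name to a policy decision, defaulting to DENY."""
    return POLICY.get(tool_name, Decision.DENY)
```

Deny-by-default is the key design choice: a tool newly exposed by the MCP server stays blocked until someone explicitly grants it, so server capabilities can grow without silently widening client access.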
Reference implementation of a permissioned MCP workflow with separate host, operator, and user experiences, built to demonstrate production-minded agent integration.
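The client-side enforcement loop described in the approach, which checks policy, optionally waits on operator approval, and appends an audit record on every path, could look roughly like the sketch below. Function names, signatures, and the audit record shape are assumptions for illustration, not the project's real interfaces:

```python
import time
from typing import Any, Callable

def governed_call(
    tool: str,
    args: dict,
    decide: Callable[[str], str],                   # -> "allow" | "ask" | "deny"
    call_tool: Callable[[str, dict], Any],          # forwards to the MCP server
    request_approval: Callable[[str, dict], bool],  # blocks on the operator UI
    audit_log: list,
) -> Any:
    """Enforce policy before forwarding a tool call; every outcome is audited."""
    decision = decide(tool)
    record = {"ts": time.time(), "tool": tool, "args": args, "decision": decision}
    try:
        if decision == "deny":
            return {"error": f"tool '{tool}' denied by policy"}
        if decision == "ask":
            record["approved"] = request_approval(tool, args)
            if not record["approved"]:
                return {"error": f"operator rejected '{tool}'"}
        return call_tool(tool, args)
    finally:
        audit_log.append(record)  # durable trail regardless of outcome
```

Writing the audit append in a `finally` block means denied, rejected, approved, and even crashing calls all leave a record, which is the property the "durable audit trail" requirement asks for.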
