Prompt Management
First-class managed prompts with identity, governance, telemetry, and self-improvement
What is Prompt Management?
In HUMΛN, prompts are first-class managed artifacts at the protocol level. Every prompt has a canonical URI, version control, delegation-based access control, schema-validated inputs, and protocol-level telemetry.
This is not a Companion feature — it is a MARA (Multi-Agent Runtime Architecture) concern. Every agent in the ecosystem benefits from managed, versioned, delegation-gated prompts.
Core Capabilities
Prompt URI Namespace Model
Every prompt has a canonical URI following the same addressing model as agent identities. Three scopes organise prompts across the ecosystem:
| Pattern | Purpose |
|---|---|
| prompt://core/{namespace}.{name}@{version} | Core/protocol prompts |
| prompt://org/{orgId}/{namespace}.{name}@{version} | Org-level prompts |
| prompt://marketplace/{publisher}.{name}@{version} | Published templates |
Examples:
prompt://core/companion.canon.root-persona@0.1
prompt://org/acme/legal.contract-analysis@2.1.0
prompt://marketplace/human.companion.task.summarize@0.1

Short-form resolution — developers write short keys in code, and the system resolves within the agent's org context:
```typescript
// Developer writes short key:
const prompt = await ctx.prompts.load('contract-analysis');

// System resolves (org-first, then core):
// → prompt://org/acme/legal.contract-analysis@active
```

The ctx.prompts API
Available on every agent's ExecutionContext. All methods are delegation-gated — the agent must have appropriate prompt:read scopes.
| Method | Description | Delegation |
|---|---|---|
| load(id) | Load prompt by short key or full URI | prompt:read:{key} |
| compose(ids, opts) | Compose multiple prompts into a stack | prompt:read for each |
| getEffective(key) | Resolve full inheritance chain | prompt:read for chain |
| list(filter?) | List accessible prompts | Filtered by scope |
| estimateTokens(id) | Estimate token count and cost | prompt:read:{key} |
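As a rough sketch of what a token/cost estimate like estimateTokens could compute, the following assumes approximately four characters per token (a common rule of thumb) and an illustrative per-million-token price; a real implementation would use the target model's own tokenizer and actual pricing:

```typescript
// Rough token/cost estimate for a rendered prompt string.
// The 4-chars-per-token heuristic and the price are illustrative assumptions.
interface CostEstimate {
  tokens: number;
  costUsd: number;
}

function estimateCost(renderedPrompt: string, usdPerMillionTokens: number): CostEstimate {
  const tokens = Math.ceil(renderedPrompt.length / 4);
  return { tokens, costUsd: (tokens / 1_000_000) * usdPerMillionTokens };
}

// e.g. a 4000-character rendered prompt at $2.50 per million input tokens:
const estimate = estimateCost('x'.repeat(4000), 2.5);
// estimate.tokens === 1000; estimate.costUsd ≈ 0.0025
```

Running the estimate locally (see `human prompts cost` below) lets authors catch oversized prompt stacks before publishing.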
```typescript
// Load and render a prompt
const prompt = await ctx.prompts.load('contract-analysis');
const rendered = prompt.render({
  contract: input.contract,
  focus_areas: 'risks, obligations',
});

// LLM call with prompt identity threaded into provenance
const result = await ctx.llm.complete({
  messages: [{ role: 'system', content: rendered }],
  promptMetadata: prompt.toCallMetadata(),
});

// Compose multi-layer prompt stack
const composed = await ctx.prompts.compose([
  'root-persona',        // Core identity
  'lens-legal',          // Domain lens
  'contract-analysis',   // Task prompt
], { variables: { contract: input.contract } });
```

Delegation-Based Access Control
Every prompt operation is gated by delegation scopes, using the same Capability Boundary Engine (CBE) pattern as all other HUMΛN resources: an agent cannot list or edit prompts it is not authorised to see.
| Scope | Grants |
|---|---|
| prompt:read:* | Read any prompt in the org |
| prompt:read:companion.task.* | Read only companion task prompts |
| prompt:write:* | Edit or create any prompt |
| prompt:publish:* | Publish prompt versions to DB |
| prompt:rollback:* | Rollback to previous versions |
| prompt:admin:* | Cross-org administration (stewards only) |
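The wildcard scopes above suggest a simple prefix-matching rule. Here is a minimal sketch, assuming scopes of the form prompt:{action}:{key-pattern} with a trailing `*` matching any suffix; the actual CBE matching logic may differ:

```typescript
// Illustrative wildcard scope matching. Assumes three colon-separated parts:
// resource ("prompt"), action ("read" | "write" | ...), and a key pattern.
function scopeCovers(granted: string, required: string): boolean {
  const [gRes, gAction, gKey] = granted.split(':');
  const [rRes, rAction, rKey] = required.split(':');
  if (gRes !== rRes || gAction !== rAction) return false;
  if (gKey === '*') return true;                   // org-wide wildcard
  if (gKey.endsWith('.*')) {
    // Namespace wildcard: keep the trailing dot so "companion.task.*"
    // matches "companion.task.summarize" but not "companion.taskforce".
    return rKey.startsWith(gKey.slice(0, -1));
  }
  return gKey === rKey;                            // exact key
}

// scopeCovers('prompt:read:companion.task.*', 'prompt:read:companion.task.summarize') → true
// scopeCovers('prompt:read:companion.task.*', 'prompt:write:companion.task.summarize') → false
```

The key point the table encodes: scopes never widen across actions, so a broad read grant implies nothing about write, publish, or rollback.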
Telemetry and Self-Improvement
Every LLM call records which prompts were used via PromptCallMetadata. Feedback signals flow back, performance data accumulates, and the Prompt Refinement Agent monitors the resulting metrics and proposes improvements.
```
Developer authors prompt file
  ↓
Prompt validated, cost estimated, published to DB
  ↓
Agent loads prompt: ctx.prompts.load('contract-analysis')
  ↓  (delegation checked: prompt:read:{key})
Agent composes: ctx.prompts.compose([persona, lens, task])
  ↓  (all layers delegation-checked)
Agent calls: ctx.llm.complete({ promptMetadata })
  ↓  (prompt URIs + versions threaded into provenance)
Model Registry selects model (with prompt affinity boost)
  ↓
Telemetry logged: prompts + model + tokens + cost + latency
  ↓
Agent/user records feedback signal
  ↓
Performance snapshot aggregated
  ↓
Prompt Refinement Agent identifies underperformers
  ↓
Change proposals generated → human review → improved prompts
  ↓
Cycle continues with better prompts
```

PromptCallMetadata
Prompt URIs, versions, layers, and composition method threaded through every LLM call
PromptFeedbackSignal
Agents and users report prompt effectiveness: positive, negative, rephrase, correction
PromptPerformanceSnapshot
Aggregated metrics: call volume, token cost, latency, signal ratios, model breakdown
PromptModelAffinity
Which (prompt, model) pairs perform best, feeding back into routing decisions
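Taken together, the record types above might look like the following TypeScript shapes, with a sketch of how calls and signals could roll up into a snapshot. All field names here are illustrative assumptions, not the protocol's actual schemas:

```typescript
// Assumed shapes for the telemetry records described above.
interface PromptCallMetadata {
  promptUris: string[];                    // full URIs with versions
  composition: 'single' | 'stacked';       // how the prompt stack was built
  model: string;
  tokens: number;
  costUsd: number;
  latencyMs: number;
}

type FeedbackKind = 'positive' | 'negative' | 'rephrase' | 'correction';

interface PromptFeedbackSignal {
  promptUri: string;
  kind: FeedbackKind;
}

interface PromptPerformanceSnapshot {
  promptUri: string;
  callCount: number;
  totalTokens: number;
  totalCostUsd: number;
  avgLatencyMs: number;
  positiveRatio: number;                   // positive signals / all signals
}

// Sketch: aggregate one prompt's calls and signals into a snapshot.
function aggregate(
  uri: string,
  calls: PromptCallMetadata[],
  signals: PromptFeedbackSignal[],
): PromptPerformanceSnapshot {
  const mine = calls.filter((c) => c.promptUris.includes(uri));
  const sigs = signals.filter((s) => s.promptUri === uri);
  const positives = sigs.filter((s) => s.kind === 'positive').length;
  return {
    promptUri: uri,
    callCount: mine.length,
    totalTokens: mine.reduce((n, c) => n + c.tokens, 0),
    totalCostUsd: mine.reduce((n, c) => n + c.costUsd, 0),
    avgLatencyMs: mine.length ? mine.reduce((n, c) => n + c.latencyMs, 0) / mine.length : 0,
    positiveRatio: sigs.length ? positives / sigs.length : 0,
  };
}
```

Snapshots like this are what give the Prompt Refinement Agent a comparable baseline across prompts and models.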
CLI Commands
```shell
# Development workflow (local, no delegation needed)
human prompts lint                          # Validate schema and inheritance
human prompts render contract-analysis \
  --var contract="Sample text"              # Preview rendered output
human prompts cost contract-analysis \
  --model gpt-4o                            # Token/cost estimate
```

```shell
# Publishing (requires prompt:publish scope)
human prompts publish contract-analysis     # Publish to DB
human prompts versions contract-analysis    # List versions
human prompts rollback contract-analysis \
  --to 1.0.0                                # Instant rollback
```

```shell
# Operations
human prompts performance contract-analysis   # View telemetry
human prompts proposals list                  # View refinement proposals
human prompts proposals accept PROPOSAL-47    # Accept improvement
```

Prompt File Format
Prompts are markdown files with YAML frontmatter. The inputSchema defines typed variables with required/optional flags and default values.
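A minimal sketch of how render() might apply such a schema (validating required variables, filling defaults, and substituting {{placeholders}}); the actual renderer may behave differently:

```typescript
// Assumed shape of one inputSchema entry from the frontmatter.
interface VarSpec {
  type: 'string';
  required: boolean;
  default?: string;
}

// Validate variables against the schema, apply defaults, substitute {{name}}.
function render(
  template: string,
  schema: Record<string, VarSpec>,
  vars: Record<string, string>,
): string {
  const resolved: Record<string, string> = {};
  for (const [name, spec] of Object.entries(schema)) {
    const value = vars[name] ?? spec.default;
    if (value === undefined) {
      if (spec.required) throw new Error(`Missing required variable: ${name}`);
      continue;                              // optional with no default: render empty
    }
    resolved[name] = value;
  }
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => resolved[name] ?? '');
}

const out = render(
  'Analyze the following contract with focus on {{focus_areas}}.',
  { focus_areas: { type: 'string', required: false, default: 'risks, obligations' } },
  {},
);
// out === 'Analyze the following contract with focus on risks, obligations.'
```

Failing fast on missing required variables is what lets `human prompts lint` catch schema errors before a prompt ever reaches an LLM call.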
```markdown
# prompts/orgs/acme/legal/contract-analysis.md
---
id: contract-analysis
namespace: legal
type: task
scope: org
extends: prompt://core/companion.canon.root-persona
inputSchema:
  contract: { type: string, required: true, description: "Contract text" }
  focus_areas: { type: string, required: false, default: "risks, obligations" }
  output_format: { type: string, required: false, default: "structured JSON" }
version: '1.0.0'
---
Analyze the following contract with focus on {{focus_areas}}.

Contract:
{{contract}}

Provide your analysis in {{output_format}}.
```

Deep Dives
Protocol-Level Prompt Management
Why AI prompts deserve first-class identity, governance, and observability
The Self-Improving Prompt Loop
How telemetry, model affinity, and the Prompt Refinement Agent close the improvement loop
Developer Guide: ctx.prompts
Hands-on walkthrough from authoring to publishing to wiring telemetry