Prompt Management

First-class managed prompts with identity, governance, telemetry, and self-improvement

What is Prompt Management?

In HUMΛN, prompts are first-class managed artifacts at the protocol level. Every prompt has a canonical URI, version control, delegation-based access control, schema-validated inputs, and protocol-level telemetry.

This is not a Companion feature — it is a MARA (Multi-Agent Runtime Architecture) concern. Every agent in the ecosystem benefits from managed, versioned, delegation-gated prompts.

Core Capabilities

URI-based identity and namespacing
Delegation-based access control
Multi-layer composition with inheritance
Schema-validated inputs with defaults
Protocol-level telemetry and feedback
Self-improving refinement loop
Version control with publish/rollback
Token cost estimation before API calls

Prompt URI Namespace Model

Every prompt has a canonical URI following the same addressing model as agent identities. Three scopes organise prompts across the ecosystem:

prompt://core/{namespace}.{name}@{version}          Core/protocol prompts
prompt://org/{orgId}/{namespace}.{name}@{version}    Org-level prompts
prompt://marketplace/{publisher}.{name}@{version}    Published templates

Examples:
  prompt://core/companion.canon.root-persona@0.1
  prompt://org/acme/legal.contract-analysis@2.1.0
  prompt://marketplace/human.companion.task.summarize@0.1

Short-form resolution: developers write short keys in code, and the system resolves them within the agent's org context:

```typescript
// Developer writes a short key:
const prompt = await ctx.prompts.load('contract-analysis');
// System resolves it (org-first, then core):
// → prompt://org/acme/legal.contract-analysis@active
```
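The org-first, core-fallback lookup can be sketched as follows. This is a minimal illustration, not the real resolver: the registry shape and the `resolveShortKey` helper are assumptions made for the example.

```typescript
// Minimal sketch of short-key resolution. The registry (full URI → body)
// and this function's name are illustrative assumptions.
type PromptRegistry = Map<string, string>;

function resolveShortKey(
  key: string,
  orgId: string,
  registry: PromptRegistry,
): string | undefined {
  // 1. Try the agent's org namespace first.
  for (const uri of registry.keys()) {
    if (uri.startsWith(`prompt://org/${orgId}/`) && uri.includes(`.${key}@`)) {
      return uri;
    }
  }
  // 2. Fall back to core/protocol prompts.
  for (const uri of registry.keys()) {
    if (uri.startsWith('prompt://core/') && uri.includes(`.${key}@`)) {
      return uri;
    }
  }
  return undefined; // not found in either scope
}
```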

The ctx.prompts API

Available on every agent's ExecutionContext. All methods are delegation-gated — the agent must have appropriate prompt:read scopes.

| Method | Description | Delegation |
|---|---|---|
| load(id) | Load prompt by short key or full URI | prompt:read:{key} |
| compose(ids, opts) | Compose multiple prompts into a stack | prompt:read for each |
| getEffective(key) | Resolve full inheritance chain | prompt:read for chain |
| list(filter?) | List accessible prompts | Filtered by scope |
| estimateTokens(id) | Estimate token count and cost | prompt:read:{key} |
```typescript
// Load and render a prompt
const prompt = await ctx.prompts.load('contract-analysis');
const rendered = prompt.render({
  contract: input.contract,
  focus_areas: 'risks, obligations',
});

// LLM call with prompt identity threaded into provenance
const result = await ctx.llm.complete({
  messages: [{ role: 'system', content: rendered }],
  promptMetadata: prompt.toCallMetadata(),
});

// Compose a multi-layer prompt stack
const composed = await ctx.prompts.compose([
  'root-persona',       // Core identity
  'lens-legal',         // Domain lens
  'contract-analysis',  // Task prompt
], { variables: { contract: input.contract } });
```
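The composition step stacks layers in order (persona, then lens, then task) and fills in template variables. A minimal sketch of that behaviour, with the layer and variable shapes assumed for illustration:

```typescript
// Sketch of multi-layer composition: layer bodies are concatenated in
// order, then {{variable}} placeholders are filled in. The PromptLayer
// shape is an assumption, not the real internal type.
interface PromptLayer {
  id: string;
  body: string;
}

function composeLayers(
  layers: PromptLayer[],
  variables: Record<string, string> = {},
): string {
  const stacked = layers.map((l) => l.body).join('\n\n');
  // Replace each {{name}} with its bound value; unknown names are left intact.
  return stacked.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match,
  );
}
```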

Delegation-Based Access Control

Every prompt operation is gated by delegation scopes using the same Capability Boundary Engine (CBE) pattern as all other HUMΛN resources. You cannot list or edit prompts you are not authorised to see.

| Scope | Grants |
|---|---|
| prompt:read:* | Read any prompt in the org |
| prompt:read:companion.task.* | Read only companion task prompts |
| prompt:write:* | Edit or create any prompt |
| prompt:publish:* | Publish prompt versions to DB |
| prompt:rollback:* | Roll back to previous versions |
| prompt:admin:* | Cross-org administration (stewards only) |
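Scope matching for grants like these can be sketched as below. The exact semantics are an assumption: the sketch treats a bare `*` segment as matching anything and a trailing `.*` as a namespace prefix match.

```typescript
// Sketch of delegation scope matching: does a held scope (possibly with
// wildcards) grant a requested concrete scope? The wildcard semantics
// here are illustrative assumptions.
function scopeGrants(held: string, requested: string): boolean {
  const h = held.split(':');
  const r = requested.split(':');
  if (h.length !== r.length) return false;
  return h.every((part, i) =>
    part === '*' ||
    part === r[i] ||
    // "companion.task.*" grants anything under "companion.task."
    (part.endsWith('.*') && r[i].startsWith(part.slice(0, -1))),
  );
}
```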

Telemetry and Self-Improvement

Every LLM call records which prompts were used via PromptCallMetadata; feedback signals flow back, performance data accumulates, and the Prompt Refinement Agent monitors the results and proposes improvements.

Developer authors prompt file
    ↓
Prompt validated, cost estimated, published to DB
    ↓
Agent loads prompt: ctx.prompts.load('contract-analysis')
    ↓  (delegation checked: prompt:read:{key})
Agent composes: ctx.prompts.compose([persona, lens, task])
    ↓  (all layers delegation-checked)
Agent calls: ctx.llm.complete({ promptMetadata })
    ↓  (prompt URIs + versions threaded into provenance)
Model Registry selects model (with prompt affinity boost)
    ↓
Telemetry logged: prompts + model + tokens + cost + latency
    ↓
Agent/user records feedback signal
    ↓
Performance snapshot aggregated
    ↓
Prompt Refinement Agent identifies underperformers
    ↓
Change proposals generated → human review → improved prompts
    ↓
Cycle continues with better prompts
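The aggregation step in the loop above (telemetry plus feedback signals rolled into a snapshot) can be sketched like this. The record shapes and field names are assumptions for illustration, not the actual telemetry schema.

```typescript
// Sketch of performance-snapshot aggregation over logged calls and
// feedback signals. CallRecord and the snapshot fields are illustrative.
interface CallRecord { promptUri: string; tokens: number; latencyMs: number }
type Signal = 'positive' | 'negative' | 'rephrase' | 'correction';

function snapshot(calls: CallRecord[], signals: Signal[]) {
  const totalTokens = calls.reduce((sum, c) => sum + c.tokens, 0);
  const avgLatencyMs = calls.length
    ? calls.reduce((sum, c) => sum + c.latencyMs, 0) / calls.length
    : 0;
  const negatives = signals.filter(
    (s) => s === 'negative' || s === 'correction',
  ).length;
  return {
    callVolume: calls.length,
    totalTokens,
    avgLatencyMs,
    // A high negative ratio flags the prompt for the Refinement Agent.
    negativeRatio: signals.length ? negatives / signals.length : 0,
  };
}
```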

PromptCallMetadata

Prompt URIs, versions, layers, and composition method threaded through every LLM call

PromptFeedbackSignal

Agents and users report prompt effectiveness: positive, negative, rephrase, correction

PromptPerformanceSnapshot

Aggregated metrics: call volume, token cost, latency, signal ratios, model breakdown

PromptModelAffinity

Which (prompt, model) pairs perform best, feeding back into routing decisions
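How an affinity table might feed back into routing can be sketched as a score boost at model-selection time. The scoring formula, weight, and table shape are all assumptions; the real Model Registry logic is not shown here.

```typescript
// Sketch of a prompt–model affinity boost during model selection.
// AffinityTable, pickModel, and the 0.2 weight are illustrative only.
type AffinityTable = Record<string, Record<string, number>>; // promptUri → model → score

function pickModel(
  promptUri: string,
  baseScores: Record<string, number>,
  affinity: AffinityTable,
  boostWeight = 0.2,
): string {
  let best = '';
  let bestScore = -Infinity;
  for (const [model, base] of Object.entries(baseScores)) {
    // Boost the base routing score by the observed (prompt, model) affinity.
    const score = base + boostWeight * (affinity[promptUri]?.[model] ?? 0);
    if (score > bestScore) {
      bestScore = score;
      best = model;
    }
  }
  return best;
}
```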

CLI Commands

```bash
# Development workflow (local, no delegation needed)
human prompts lint                            # Validate schema and inheritance
human prompts render contract-analysis \
  --var contract="Sample text"                # Preview rendered output
human prompts cost contract-analysis \
  --model gpt-4o                              # Token/cost estimate

# Publishing (requires prompt:publish scope)
human prompts publish contract-analysis       # Publish to DB
human prompts versions contract-analysis      # List versions
human prompts rollback contract-analysis \
  --to 1.0.0                                  # Instant rollback

# Operations
human prompts performance contract-analysis   # View telemetry
human prompts proposals list                  # View refinement proposals
human prompts proposals accept PROPOSAL-47    # Accept improvement
```

Prompt File Format

Prompts are markdown files with YAML frontmatter. The inputSchema defines typed variables with required/optional flags and default values.

```yaml
# prompts/orgs/acme/legal/contract-analysis.md
---
id: contract-analysis
namespace: legal
type: task
scope: org
extends: prompt://core/companion.canon.root-persona
inputSchema:
  contract: { type: string, required: true, description: "Contract text" }
  focus_areas: { type: string, required: false, default: "risks, obligations" }
  output_format: { type: string, required: false, default: "structured JSON" }
version: '1.0.0'
---
Analyze the following contract with focus on {{focus_areas}}.

Contract:
{{contract}}

Provide your analysis in {{output_format}}.
```
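Rendering against an inputSchema like the one above means required variables must be supplied while optional ones fall back to their defaults. A minimal sketch of that behaviour, with the `FieldSpec` shape and `renderPrompt` helper assumed for illustration:

```typescript
// Sketch of schema-validated rendering: required inputs must be present,
// optional inputs fall back to schema defaults. Shapes are illustrative.
interface FieldSpec { required: boolean; default?: string }

function renderPrompt(
  template: string,
  schema: Record<string, FieldSpec>,
  inputs: Record<string, string>,
): string {
  const bound: Record<string, string> = {};
  for (const [name, spec] of Object.entries(schema)) {
    if (name in inputs) bound[name] = inputs[name];
    else if (spec.required) throw new Error(`missing required input: ${name}`);
    else if (spec.default !== undefined) bound[name] = spec.default;
  }
  return template.replace(/\{\{(\w+)\}\}/g, (m, name) => bound[name] ?? m);
}
```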
