ctx.prompts
Load versioned prompts from the HUMΛN prompt registry, compose multi-layer prompts, and attach telemetry metadata so every LLM call can be traced back to its prompt version.
What it is
ctx.prompts is the prompt registry: load, compose, getEffective, list, and estimateTokens, plus per-prompt toCallMetadata for telemetry. Access is delegation-gated; never use inline LLM strings for production prompts.
Why it exists
Versioning, access control, telemetry, and A/B improvement. One place to change prompts; trace which prompt produced which output; no scattered template strings.
How it makes life better
If you use the registry, you get one place to edit prompts, rollback by version, and see which prompt version drove each response—without a code deploy. If you bypass it with inline strings, you take on drift, no audit trail, and no way to A/B test or hand prompts to non-engineers.
Muscle authors: The same PromptLoader contract is available inside muscles (@human/muscle-sdk), so you can load versioned prompts without duplicating logic.
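The shared contract can be pictured as a small interface. The shapes below are an assumption for illustration only — the real PromptLoader type ships in @human/muscle-sdk — with a toy in-memory implementation showing that any loader satisfying the same shape is interchangeable:

```typescript
// Illustrative shapes only -- the real PromptLoader contract ships in @human/muscle-sdk.
interface LoadedPromptLike {
  render(vars: Record<string, string>): string;
}
interface PromptLoaderLike {
  load(id: string): Promise<LoadedPromptLike>;
}

// A toy in-memory loader satisfying the same shape, handy for tests or local runs.
class InMemoryLoader implements PromptLoaderLike {
  constructor(private templates: Record<string, string>) {}
  async load(id: string): Promise<LoadedPromptLike> {
    const template = this.templates[id];
    if (!template) throw new Error(`unknown prompt: ${id}`);
    return {
      // Simple {{name}} substitution; the real SDK's templating may differ.
      render: (vars) =>
        template.replace(/\{\{(\w+)\}\}/g, (_m: string, k: string) => vars[k] ?? ''),
    };
  }
}
```

Because muscles and agents program against the same interface, prompt-loading code moves between them without duplication.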
ctx.prompts.load(id)
Load a versioned prompt. Pin to a version with @v2 or omit for the current active version.
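The ID format can be sketched as a tiny parser; `parsePromptId` is a hypothetical helper, not part of the SDK — it only illustrates how a `namespace/name@version` ID splits into a key plus an optional version pin:

```typescript
// Hypothetical helper -- not part of the HUMΛN SDK.
// Splits 'org/acme/invoice-extraction@v2' into a key and an optional version pin.
function parsePromptId(id: string): { key: string; version?: string } {
  const at = id.lastIndexOf('@');
  if (at === -1) return { key: id }; // no pin: the registry resolves the active version
  return { key: id.slice(0, at), version: id.slice(at + 1) };
}
```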
```typescript
// Load a prompt from the registry
const prompt = await ctx.prompts.load('org/acme/invoice-extraction@v2');

// Render with variables
const rendered = prompt.render({
  vendor_name: 'Acme Supplies',
  document_type: 'purchase_order',
});

// Use in an LLM call with full telemetry metadata
const response = await ctx.llm.complete({
  prompt: rendered,
  temperature: 0.2,
  promptMetadata: prompt.toCallMetadata(), // Links response back to the prompt version
});
```
ctx.prompts.compose(ids, options?)
Compose multiple prompts into a single layered prompt. Useful for building prompts from reusable components: base persona + task instructions + output format.
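Mechanically, composition can be pictured as joining layers in order, sharing one variable map, and summing token estimates. The sketch below is an assumption for illustration — `composeLayers` and the `Layer` shape are hypothetical, not the SDK's implementation:

```typescript
// Minimal sketch of layer composition -- hypothetical, not the SDK's implementation.
// Assumes: layers join in order, share one variable map, token estimates add up.
interface Layer { id: string; template: string; estimatedTokens: number; }

function composeLayers(layers: Layer[], variables: Record<string, string>) {
  const content = layers
    .map((l) => l.template.replace(/\{\{(\w+)\}\}/g, (_m: string, k: string) => variables[k] ?? ''))
    .join('\n\n');
  return {
    content,
    layers: layers.map((l) => l.id),
    estimatedTokens: layers.reduce((sum, l) => sum + l.estimatedTokens, 0),
  };
}
```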
```typescript
// Compose multiple prompt layers
// Useful for: base persona + task-specific instructions + user preferences
const composed = await ctx.prompts.compose([
  'org/acme/base-analyst-persona',
  'org/acme/invoice-extraction-task',
  'org/acme/formal-output-format',
], {
  variables: {
    organization: 'Acme Corp',
    currency: 'USD',
  },
});

console.log(composed.estimatedTokens); // Know the cost before calling
console.log(composed.layers); // Each source prompt

const response = await ctx.llm.complete({
  prompt: composed.content,
  promptMetadata: composed.metadata,
});
```
Discovery and Token Estimation
```typescript
// Discover available prompts in your namespace
const prompts = ctx.prompts.list({
  scope: 'org', // 'org', 'suite', or 'system'
  namespace: 'acme',
});

for (const meta of prompts) {
  console.log(meta.id, meta.version, meta.estimatedTokens);
}

// Estimate tokens without loading content
const tokenCount = ctx.prompts.estimateTokens('org/acme/invoice-extraction@v2', {
  model: 'gpt-4o',
});
console.log(`Prompt will use ~${tokenCount} tokens`);
```
ctx.prompts.getEffective(key)
Retrieve the currently active version of a prompt key, respecting any A/B test assignments or org-level overrides. Use this instead of pinning to a version when you want the platform to manage which version runs.
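The resolution order described here can be sketched as a precedence chain. The shape below is an assumption for illustration, not the platform's actual logic: an org-level override wins over an A/B test assignment, which wins over the default active version:

```typescript
// Hypothetical sketch of effective-version resolution.
// Assumed precedence for illustration: org override > A/B assignment > default active.
function resolveEffectiveVersion(
  key: string,
  opts: {
    orgOverrides: Record<string, string>;
    abAssignments: Record<string, string>;
    activeVersions: Record<string, string>;
  },
): string | undefined {
  return opts.orgOverrides[key] ?? opts.abAssignments[key] ?? opts.activeVersions[key];
}
```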
```typescript
// Get the most current version of a prompt key
// (respects A/B test assignments, org overrides, etc.)
const effective = await ctx.prompts.getEffective('invoice-extraction');
if (effective) {
  const rendered = effective.render({ vendor: vendorName });
  // effective.meta.version tells you which version is active
}
```
API Reference
| Method | Returns | Description |
|---|---|---|
| `load(id)` | `Promise<LoadedPrompt>` | Load a versioned prompt by ID. |
| `compose(ids, options?)` | `Promise<ComposedPrompt>` | Compose multiple prompts into one. |
| `getEffective(key)` | `Promise<LoadedPrompt \| undefined>` | Get the active version for a prompt key. |
| `list(filter?)` | `PromptMeta[]` | List available prompts. Synchronous. |
| `estimateTokens(id, options?)` | `number` | Estimate token count before loading. Synchronous. |
Delegation scope: Accessing prompts requires prompt:read scope in the agent's delegation. Attempting to load a prompt without this scope throws AccessDeniedError.
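Agents that may run under narrower delegations can guard against the missing scope explicitly. In this sketch the `AccessDeniedError` class and `loadWithScope` function are local stand-ins so the example is self-contained — in a real agent the error comes from the SDK and the scope check happens inside `ctx.prompts.load`:

```typescript
// Local stand-in for the SDK's AccessDeniedError, so this sketch is self-contained.
class AccessDeniedError extends Error {}

// Hypothetical loader mimicking the delegation gate: no prompt:read scope, no prompt.
function loadWithScope(scopes: string[], id: string): string {
  if (!scopes.includes('prompt:read')) {
    throw new AccessDeniedError(`missing prompt:read scope for ${id}`);
  }
  return `loaded:${id}`; // placeholder for the real prompt content
}
```

Catching `AccessDeniedError` (rather than a blanket `catch`) lets an agent degrade gracefully, for example by reporting the missing scope back to the delegating party.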
In the wild
Reference agents that demonstrate ctx.prompts in production.
load + compose + telemetry
prompt-driven-analyzer
Full prompt management: load versioned prompt, compose layers, send call metadata for telemetry.
ctx.prompts.load()
document-summarizer
Load reusable summarization prompt templates. Shows the simplest prompt registry usage.
Self-improving loop
prompt-refinement-agent
Evaluate prompt quality, propose refinements, require human approval before committing changes.
Deep Dives: Prompt Management
Prompt Management · Part 1 of 3
From Inline Strings to ctx.prompts: A Developer's Guide to HUMΛN Prompt Management
A hands-on walkthrough of HUMΛN's prompt SDK: authoring prompts, validating schemas, composing layers, publishing versions, and wiring telemetry — with code examples from real agents.
Prompt Management · Part 2 of 3
Protocol-Level Prompt Management: Why AI Prompts Deserve First-Class Identity
Most AI systems treat prompts as throwaway strings. HUMΛN treats them as managed, namespaced, delegation-gated, version-controlled artifacts. Here's why that changes everything.
Prompt Management · Part 3 of 3
The Self-Improving Prompt Loop: How Telemetry Closes the Gap Between Good and Great
Most AI platforms ship prompts and forget them. HUMΛN's protocol-level telemetry, model affinity tracking, and Prompt Refinement Agent create a virtuous cycle of continuous improvement — with humans always in the loop.