ctx.prompts

Load versioned prompts from the HUMΛN prompt registry, compose multi-layer prompts, and attach telemetry metadata so every LLM call can be traced back to its prompt version.

What it is

ctx.prompts is the prompt registry interface: load, compose, getEffective, list, and estimateTokens, plus toCallMetadata on loaded prompts. Access is delegation-gated; never use inline LLM strings for production prompts.

Why it exists

Versioning, access control, telemetry, and A/B improvement. One place to change prompts; trace which prompt produced which output; no scattered template strings.

How it makes life better

If you use the registry, you get one place to edit prompts, rollback by version, and see which prompt version drove each response—without a code deploy. If you bypass it with inline strings, you take on drift, no audit trail, and no way to A/B test or hand prompts to non-engineers.

Muscle authors: The same PromptLoader contract is available inside muscles (@human/muscle-sdk), so you can load versioned prompts without duplicating logic.

ctx.prompts.load(id)

Load a versioned prompt. Pin to a specific version with an @v2 suffix, or omit it to load the current active version.

```typescript
// Load a prompt from the registry
const prompt = await ctx.prompts.load('org/acme/invoice-extraction@v2');

// Render with variables
const rendered = prompt.render({
  vendor_name: 'Acme Supplies',
  document_type: 'purchase_order',
});

// Use in an LLM call with full telemetry metadata
const response = await ctx.llm.complete({
  prompt: rendered,
  temperature: 0.2,
  promptMetadata: prompt.toCallMetadata(), // Links the response back to the prompt version
});
```
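Conceptually, render is variable substitution over a template body. A minimal sketch of the idea, assuming a `{{variable}}` placeholder syntax — the `renderTemplate` function and that syntax are illustrative assumptions, not the actual PromptLoader implementation:

```typescript
// Sketch: substitute {{variable}} placeholders, failing loudly on missing ones.
// The {{...}} syntax and this helper are assumptions for illustration only.
function renderTemplate(body: string, variables: Record<string, string>): string {
  return body.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
    if (!(name in variables)) {
      throw new Error(`Missing variable: ${name}`);
    }
    return variables[name];
  });
}

// A template like the invoice-extraction prompt might contain:
const body = 'Extract line items for {{vendor_name}} from this {{document_type}}.';
console.log(renderTemplate(body, {
  vendor_name: 'Acme Supplies',
  document_type: 'purchase_order',
}));
// → Extract line items for Acme Supplies from this purchase_order.
```

Failing on a missing variable (rather than rendering an empty string) is the safer default: a silently half-rendered prompt is hard to spot in LLM output.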

ctx.prompts.compose(ids, options?)

Compose multiple prompts into a single layered prompt. Useful for building prompts from reusable components: base persona + task instructions + output format.

```typescript
// Compose multiple prompt layers
// Useful for: base persona + task-specific instructions + user preferences
const composed = await ctx.prompts.compose([
  'org/acme/base-analyst-persona',
  'org/acme/invoice-extraction-task',
  'org/acme/formal-output-format',
], {
  variables: {
    organization: 'Acme Corp',
    currency: 'USD',
  },
});

console.log(composed.estimatedTokens); // Know the cost before calling
console.log(composed.layers);          // Each source prompt

const response = await ctx.llm.complete({
  prompt: composed.content,
  promptMetadata: composed.metadata,
});
```
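Under the hood, composition amounts to concatenating the rendered layers while carrying per-layer metadata forward. A sketch under stated assumptions — the `Layer` shape, the `composeLayers` name, and the double-newline join strategy are illustrative, not the platform's actual implementation:

```typescript
// Sketch: join layer contents, keep the source IDs, and sum token estimates.
interface Layer {
  id: string;
  content: string;
  estimatedTokens: number;
}

function composeLayers(layers: Layer[]): {
  content: string;
  layers: string[];
  estimatedTokens: number;
} {
  return {
    content: layers.map((l) => l.content).join('\n\n'),
    layers: layers.map((l) => l.id),
    estimatedTokens: layers.reduce((sum, l) => sum + l.estimatedTokens, 0),
  };
}

const composed = composeLayers([
  { id: 'org/acme/base-analyst-persona', content: 'You are a financial analyst.', estimatedTokens: 7 },
  { id: 'org/acme/invoice-extraction-task', content: 'Extract invoice line items.', estimatedTokens: 6 },
]);
console.log(composed.estimatedTokens); // 13
console.log(composed.layers.length);   // 2
```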

Discovery and Token Estimation

```typescript
// Discover available prompts in your namespace
const prompts = ctx.prompts.list({
  scope: 'org', // 'org', 'suite', or 'system'
  namespace: 'acme',
});
for (const meta of prompts) {
  console.log(meta.id, meta.version, meta.estimatedTokens);
}

// Estimate tokens without loading content
const tokenCount = ctx.prompts.estimateTokens('org/acme/invoice-extraction@v2', {
  model: 'gpt-4o',
});
console.log(`Prompt will use ~${tokenCount} tokens`);
```
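For intuition about what such an estimate means: a common rule of thumb for English text is roughly four characters per token. The real estimateTokens presumably uses a model-specific tokenizer; this heuristic is only an illustration of the idea:

```typescript
// Rough token estimate without a real tokenizer: ~4 characters per token.
// Illustrative heuristic only — not the platform's actual estimator.
function roughTokenEstimate(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(roughTokenEstimate('Extract all line items from the invoice.'));
// → 10
```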

ctx.prompts.getEffective(key)

Retrieve the currently active version of a prompt key, respecting any A/B test assignments or org-level overrides. Use this instead of pinning to a version when you want the platform to manage which version runs.

```typescript
// Get the currently active version of a prompt key
// (respects A/B test assignments, org overrides, etc.)
const effective = await ctx.prompts.getEffective('invoice-extraction');
if (effective) {
  const rendered = effective.render({ vendor: vendorName });
  // effective.meta.version tells you which version is active
}

API Reference

| Method | Returns | Description |
| --- | --- | --- |
| `load(id)` | `Promise<LoadedPrompt>` | Load a versioned prompt by ID. |
| `compose(ids, options?)` | `Promise<ComposedPrompt>` | Compose multiple prompts into one. |
| `getEffective(key)` | `Promise<LoadedPrompt \| undefined>` | Get the active version for a prompt key. |
| `list(filter?)` | `PromptMeta[]` | List available prompts. Synchronous. |
| `estimateTokens(id, options?)` | `number` | Estimate token count before loading. Synchronous. |

Delegation scope: Accessing prompts requires prompt:read scope in the agent's delegation. Attempting to load a prompt without this scope throws AccessDeniedError.
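A minimal sketch of what that gate looks like in practice. The AccessDeniedError name comes from the note above; the shape of the delegation object and the helper function are assumptions:

```typescript
// Sketch: refuse prompt access unless the delegation grants prompt:read.
// The delegation shape and helper name are illustrative assumptions.
class AccessDeniedError extends Error {}

function assertCanReadPrompts(delegation: { scopes: string[] }): void {
  if (!delegation.scopes.includes('prompt:read')) {
    throw new AccessDeniedError("delegation is missing the 'prompt:read' scope");
  }
}

assertCanReadPrompts({ scopes: ['prompt:read', 'llm:complete'] }); // passes silently
```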

In the wild

Reference agents that demonstrate ctx.prompts in production.

Deep Dives: Prompt Management

See Also