Execution Usage Schema

JSON Schema for execution-level LLM usage summaries

Execution Usage Summary
Token usage totals captured for a single workflow, task, or agent execution.
provider (string, required)
  LLM provider identifier, for example 'openai' or 'anthropic'.

model (string, required)
  Provider model name used for the execution.

prompt_tokens (integer, required)
  Total prompt tokens submitted to the provider across the execution.

completion_tokens (integer, required)
  Total completion tokens returned by the provider.

total_tokens (integer, required)
  Sum of prompt and completion tokens reported by the provider.

reasoning_tokens (integer, optional)
  Reasoning tokens reported for models that separate reasoning output.

cached_prompt_tokens (integer, optional)
  Cached prompt tokens reused by the provider, when reported.

input_audio_tokens (integer, optional)
  Input audio tokens consumed during multimodal executions, if available.

output_audio_tokens (integer, optional)
  Output audio tokens generated during multimodal executions, if available.
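
As a concrete sketch, the shape above maps onto a typed record as follows. The interface simply mirrors the fields documented here; the sample values are invented for illustration and are not taken from any real execution.

```typescript
// Sketch of the usage summary shape; field names mirror the schema above.
interface ExecutionUsageSummary {
  provider: string;
  model: string;
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  reasoning_tokens?: number;     // only when the provider reports it
  cached_prompt_tokens?: number; // only when the provider reports it
  input_audio_tokens?: number;   // only when the provider reports it
  output_audio_tokens?: number;  // only when the provider reports it
}

// Illustrative values only; note total_tokens = prompt_tokens + completion_tokens.
const example: ExecutionUsageSummary = {
  provider: "openai",
  model: "gpt-4o-mini",
  prompt_tokens: 1520,
  completion_tokens: 340,
  total_tokens: 1860,
  cached_prompt_tokens: 1024,
};
```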

Usage Notes

  • prompt_tokens, completion_tokens, and total_tokens are always present when usage is recorded; counts are stored exactly as reported by the upstream LLM provider.
  • Optional fields (reasoning_tokens, cached_prompt_tokens, input_audio_tokens, output_audio_tokens) appear only when the provider supplies the respective counters.
  • provider and model values align with the identifiers configured in compozy.yaml, enabling aggregation across executions by provider/model pair (a minimal roll-up sketch follows these notes).
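
Because each summary carries stable provider and model identifiers, per-pair totals can be accumulated with a straightforward roll-up. The sketch below reuses the hypothetical ExecutionUsageSummary interface from earlier; aggregateByPair is an illustrative helper, not part of any Compozy API.

```typescript
// Hypothetical roll-up: sum token counts per provider/model pair.
// Assumes `summaries` was loaded elsewhere (e.g. from stored executions).
function aggregateByPair(
  summaries: ExecutionUsageSummary[],
): Map<string, { prompt: number; completion: number; total: number }> {
  const totals = new Map<string, { prompt: number; completion: number; total: number }>();
  for (const s of summaries) {
    const key = `${s.provider}/${s.model}`;
    const t = totals.get(key) ?? { prompt: 0, completion: 0, total: 0 };
    t.prompt += s.prompt_tokens;
    t.completion += s.completion_tokens;
    t.total += s.total_tokens;
    totals.set(key, t);
  }
  return totals;
}
```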