# Execution Usage Schema

JSON Schema for execution-level LLM usage summaries.

## Execution Usage Summary

Token usage totals captured for a single workflow, task, or agent execution.
| Field | Required | Description |
| --- | --- | --- |
| `provider` | Yes | LLM provider identifier, for example `openai` or `anthropic`. |
| `model` | Yes | Provider model name used for the execution. |
| `prompt_tokens` | Yes | Total prompt tokens submitted to the provider across the execution. |
| `completion_tokens` | Yes | Total completion tokens returned by the provider. |
| `total_tokens` | Yes | Sum of prompt and completion tokens reported by the provider. |
| `reasoning_tokens` | No | Reasoning tokens reported for models that separate reasoning output. |
| `cached_prompt_tokens` | No | Cached prompt tokens reused by the provider, when reported. |
| `input_audio_tokens` | No | Input audio tokens consumed during multimodal executions, if available. |
| `output_audio_tokens` | No | Output audio tokens generated during multimodal executions, if available. |
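A representative usage summary instance might look like the following. The field names match the schema above; the provider, model name, and token counts are illustrative values, not output from a real execution. Note that `total_tokens` equals the sum of `prompt_tokens` and `completion_tokens`, and optional fields are simply omitted when the provider does not report them.

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "prompt_tokens": 1842,
  "completion_tokens": 312,
  "total_tokens": 2154,
  "reasoning_tokens": 96,
  "cached_prompt_tokens": 1024
}
```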
## Usage Notes

- `prompt_tokens`, `completion_tokens`, and `total_tokens` are always present when usage is recorded. Totals are reported exactly as provided by the upstream LLM provider.
- Optional fields (`reasoning_tokens`, `cached_prompt_tokens`, `input_audio_tokens`, `output_audio_tokens`) appear only when the provider supplies the respective counters.
- `provider` and `model` values align with the identifiers configured in `compozy.yaml`, enabling aggregation across executions by provider/model pair.
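For reference, a minimal JSON Schema sketch covering the fields above could look like the following. This is an assumption-level reconstruction from the field table, not the published schema: the draft version, title, and the non-negative-integer constraints are guesses, and the real schema may carry additional metadata or validation keywords.

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Execution Usage Summary",
  "type": "object",
  "required": ["provider", "model", "prompt_tokens", "completion_tokens", "total_tokens"],
  "properties": {
    "provider": { "type": "string" },
    "model": { "type": "string" },
    "prompt_tokens": { "type": "integer", "minimum": 0 },
    "completion_tokens": { "type": "integer", "minimum": 0 },
    "total_tokens": { "type": "integer", "minimum": 0 },
    "reasoning_tokens": { "type": "integer", "minimum": 0 },
    "cached_prompt_tokens": { "type": "integer", "minimum": 0 },
    "input_audio_tokens": { "type": "integer", "minimum": 0 },
    "output_audio_tokens": { "type": "integer", "minimum": 0 }
  }
}
```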