Task Schema

JSON Schema for task configuration

Task Configuration Schema
Config is the main task configuration structure in Compozy.
action
string

Action identifier that describes what this task does. Used for logging and debugging purposes.

  • Example: "process-user-data", "send-notification"
agent
object

Agent configuration for AI-powered task execution. Only used when the task needs to interact with an LLM agent. Mutually exclusive with the tool field.

Schema Reference: agents.json
attachments
array

Attachments available at this configuration scope.

batch
integer

Batch size for processing items in groups (0 = no batching). Useful for rate limiting or managing resource usage.

  • Example: 10 means process 10 items at a time
batch_size
integer

Batch size for operations that process multiple keys. Controls how many keys are processed in each batch. Default: 100, Maximum: 10,000.

clear_config
object

Configuration for clear operations. Only used when operation is "clear".

condition
string

CEL expression for conditional task execution or routing decisions. The task only executes if the condition evaluates to true.

  • Example: "input.status == 'approved' && input.amount > 1000"
config
object

Global configuration options inherited from parent contexts. Includes provider settings, API keys, and other global parameters.

env
object

Environment variables available during task execution. Can override or extend workflow-level environment variables.

  • Example: { "API_KEY": "{{ .env.SECRET_KEY }}" }
file_path
string

Absolute file path from which this task configuration was loaded. Set automatically during configuration loading.

filter
string

Optional CEL expression to filter items before processing. Each item is available as 'item' in the expression.

  • Example: "item.status != 'inactive'" or "item.age > 18"
final
boolean

Marks this task as a terminal node in the workflow. No subsequent tasks will execute after a final task.

flush_config
object

Configuration for flush operations. Only used when operation is "flush".

health_config
object

Configuration for health check operations. Only used when operation is "health".

id
string

Unique identifier for the task instance within a workflow. Must be unique within the workflow scope.

index_var
string

Variable name for the current index (default: "index"). Available in task templates as {{ .index }} or under a custom name. The index is zero-based.

input
object

Schema definition for validating task input parameters. Follows the JSON Schema specification for type validation.

Format:

type: object
properties:
  user_id: { type: string, description: "User identifier" }
required: ["user_id"]

item_var
string

Variable name for the current item (default: "item"). Available in task templates as {{ .item }} or under a custom name.

  • Example: Set to "user" to access as {{ .user }} in templates
items
string

Template expression that evaluates to an array. The expression should resolve to the list of items to iterate over.

  • Example: "{{ .workflow.input.users }}" or "{{ range(1, 10) }}"
key_template
string

Template expression for the memory key. Supports template variables for dynamic key generation.

  • Example: "user:{{ .workflow.input.user_id }}:profile"
max_iterations
integer

Maximum number of reasoning iterations the agent can perform. The agent may self-correct and refine its response across multiple iterations to improve accuracy and address complex multi-step problems.

Default: 5 iterations

Trade-offs:

  • Higher values enable more thorough problem-solving and self-correction
  • Each iteration consumes additional tokens and increases response latency
  • Configure based on task complexity, accuracy requirements, and cost constraints
max_keys
integer

Limits the number of keys processed; a safety limit to prevent runaway operations. Default: 1,000, Maximum: 50,000.

max_workers
integer

Limits the number of concurrent task executions. 0 means no limit (all tasks run concurrently).

  • Example: 5 means at most 5 tasks run at the same time
mcps
array

Model Context Protocol (MCP) server configurations. MCPs provide standardized interfaces for extending agent capabilities with external services and data sources through protocol-based communication.

Common MCP integrations:

  • Database connectors (PostgreSQL, Redis, MongoDB)
  • Search engines (Elasticsearch, Solr)
  • Knowledge bases (vector databases, documentation systems)
  • External APIs (REST, GraphQL, gRPC services)

MCPs support both stdio and HTTP transport protocols.

memory
array

Memory references enabling the agent to access persistent context. Memory provides stateful interactions across workflow steps and sessions.

Configuration format:

memory:
  - id: "user_context"           # Memory resource ID
    key: "user:{{.user_id}}"     # Dynamic key with template
    mode: "read-write"           # Access mode (default: "read-write")

Access modes:

  • "read-write": Full access to read and modify memory
  • "read-only": Can only read existing memory entries

memory_ref
string

Identifies which memory store to use. References a memory configuration defined at the project level.

  • Example: "user-sessions", "workflow-state", "cache"
mode
string

Determines whether items are processed in parallel or sequentially. Defaults to "parallel". Options: parallel, sequential.

model_config
object

LLM provider configuration defining which AI model to use and its parameters. Supports multiple providers including OpenAI, Anthropic, Google, Groq, and local models.

Required fields: provider, model. Optional fields: api_key, api_url, params (temperature, max_tokens, etc.).
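
For instance, a minimal model_config might look like the following; the provider, model name, and environment-variable name are illustrative:

```yaml
model_config:
  provider: anthropic
  model: claude-3-5-sonnet        # illustrative model identifier
  api_key: "{{ .env.ANTHROPIC_API_KEY }}"
  params:
    temperature: 0.2
    max_tokens: 1024
```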

on_error
object

Error handling configuration. Defines fallback behavior when task execution fails. Can specify an error task ID or retry configuration.

on_success
object

Task execution control. Defines what happens after successful task completion. Can specify the next task ID or conditional routing.

on_timeout
string

Specifies the next task to execute if the wait times out. Uses the timeout value from BaseConfig. If not specified, the task fails on timeout.

operation
string

Operation type to perform on memory. Required field that determines the action to take.

output
object

Schema definition for validating task output data. Ensures task results conform to the expected structure. Uses the same format as the input schema.

outputs
object

Output mappings that define what data this task exposes to subsequent tasks. Uses template expressions to transform task results.

  • Example: { "processed_data": "{{ .task.output.result }}" }
payload
any

Payload data for write/append operations. Can be any JSON-serializable data structure. Required for write and append operations.

processor
object

Optional task configuration to process received signals. Allows custom handling of signal data before continuing. The processor receives the signal payload as input.

Schema Reference: inline (this task schema)

prompt
string

Direct instruction to agents when no specific action is needed. Used for ad-hoc agent interactions without predefined action definitions.

  • Example: "Analyze this code for security issues", "Summarize the following text"
resource
string

Resource reference for the task. Format: "compozy:task:<name>" (e.g., "compozy:task:process-data").

retries
integer

Number of retry attempts for failed task executions. Default: 0 (no retries).

routes
object

Maps condition values to task IDs or inline task configurations. The condition field in BaseConfig is evaluated, and its result is used as the key to select the appropriate route. Values can be:

  • Task ID (string): References an existing task
  • Inline task config (object): Defines task configuration directly
  • Example:

routes:
  approved: "process-payment"        # Task ID reference
  rejected:                          # Inline task config
    type: basic
    agent: { id: rejection-handler }
  pending: "wait-for-approval"
signal
object

Signal configuration containing the signal ID and payload

sleep
string

Sleep duration after task completion. Format: "5s", "1m", "500ms", "1h30m". Useful for rate limiting or giving external systems time to process.

stats_config
object

Configuration for statistics operations. Only used when operation is "stats".

strategy
string

Determines how parallel execution handles task completion. Defaults to "wait_all" if not specified. Options: wait_all, fail_fast, best_effort, race.
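
A sketch of strategy alongside the related parallel-task fields (max_workers, timeout, tasks); the task and tool IDs are illustrative:

```yaml
id: enrich-profile
type: parallel
strategy: fail_fast      # abort remaining tasks on the first failure
max_workers: 2           # at most 2 sub-tasks run concurrently
timeout: "2m"            # cancel the whole group after 2 minutes
tasks:
  - id: fetch-orders
    type: basic
    tool: { id: orders-api }
  - id: fetch-preferences
    type: basic
    tool: { id: prefs-api }
  - id: fetch-activity
    type: basic
    tool: { id: activity-api }
```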

task
object

Task template for collection tasks. This configuration is replicated for each item in the collection. The item and index are available as template variables.

Schema Reference: inline (this task schema)

tasks
array

Tasks array for parallel, composite, and collection tasks. Contains the list of sub-tasks to execute.

  • For parallel: tasks run concurrently
  • For composite: tasks run sequentially
  • For collection: not used (use the task field instead)

Schema Reference: inline (this task schema)

timeout
string

Maximum execution time for parallel or composite tasks. Format: "30s", "5m", "1h". The task will be canceled if it exceeds this duration.

tool
object

Tool configuration for executing specific tool operations. Used when the task needs to execute a predefined tool. Mutually exclusive with the agent field.

Schema Reference: tools.json
tools
array

Tools available to the agent for extending its capabilities. When tools are defined, the agent automatically has toolChoice set to "auto", enabling autonomous tool selection and invocation during task execution.

Tool types supported:

  • File system operations (read, write, list)
  • API integrations (HTTP requests, webhooks)
  • Data processing utilities (parsing, transformation)
  • Custom business logic (TypeScript/JavaScript execution)

Tools are referenced by ID and can be shared across multiple agents.

type
string

Type of task that determines execution behavior. If not specified, defaults to "basic".

wait_for
string

Specifies the signal ID to wait for. The task will pause until a signal with this ID is received. Must match the ID used in a SignalTask.

  • Example: "user-approved", "payment-completed"
with
object

Input parameters passed to the task at execution time. Can include references to workflow inputs, previous task outputs, etc.

  • Example: { "user_id": "{{ .workflow.input.user_id }}" }
