Basic Tasks
Master the fundamental building blocks of Compozy workflows. Basic tasks execute single operations using AI agents or TypeScript tools, forming the foundation for all complex workflow patterns.
Overview
Basic tasks serve as the atomic units of execution in Compozy workflows. Whether you're building a simple AI chatbot or a complex multi-agent system, understanding basic tasks is essential: they form the foundation for all other task types. New to Compozy? Start with our Quick Start guide →
Each basic task performs a single, well-defined operation using either:
- AI Agents for natural language processing, reasoning, and creative tasks that require understanding and context
- TypeScript Tools for deterministic operations, API calls, and data processing with predictable results
Why Basic Tasks Matter:
- Foundation Layer: All complex task types (parallel, collection, router) ultimately decompose into basic tasks
- Execution Flexibility: Choose the optimal execution strategy for each operation based on complexity and requirements
- Production Ready: Built-in error handling, retry policies, and monitoring integration
Task Execution Types
Compozy provides 9 specialized task types that handle different execution patterns. Basic tasks are the foundation, while the other types orchestrate multiple operations:

- Basic
- Router
- Parallel
- Collection
- Composite
- Aggregate
- Wait
- Signal
- Memory
Core Principles
Basic tasks in Compozy follow several key principles that make them reliable and maintainable building blocks for complex workflows:
- **Single Responsibility**: Each basic task performs exactly one specific operation, making workflows easier to debug and maintain
- **Execution Flexibility**: Choose between AI agents for intelligent processing or TypeScript tools for deterministic operations
- **Template Integration**: Full support for dynamic parameters and outputs using Sprig template expressions
- **Flow Control**: Built-in conditional routing and sophisticated error handling patterns
- **Schema Validation**: Input/output validation with JSON Schema ensures type safety and reliable data flow
- **State Management**: Automatic output capture and seamless data passing between tasks using context variables
Task Structure
Basic tasks use a declarative YAML structure that makes them easy to read, write, and maintain. The structure is designed to be self-documenting while providing powerful capabilities for dynamic configuration.
Anatomy of a Basic Task
Task Identification
```yaml
id: analyze-sentiment
$use: agent(local::agents.#(id=="sentiment-analyzer"))
action: analyze_text # Required for agents
```

Key components:
- `id`: Unique identifier for referencing in templates and flow control
- `$use`: Reference to an agent or tool using query syntax
- `action`: Specific action for agents (not needed for tools)
Input Configuration
```yaml
with:
  text: "{{ .workflow.input.message }}"
  context: "{{ .tasks.previous_step.output.metadata }}"
  options:
    language: "{{ .workflow.input.language | default 'en' }}"
    confidence_threshold: 0.8
```
Template capabilities:
- Access workflow inputs with `.workflow.input.*`
- Reference previous task outputs with `.tasks.taskname.output.*`
- Use Sprig functions for data transformation
- Set default values and conditional logic
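Conceptually, a template path resolves by walking the workflow context object one segment at a time. The sketch below is illustrative only: Compozy's engine is Sprig-based with additional functions, and `resolvePath` and the context shape here are hypothetical, not its actual API.

```typescript
// Illustrative sketch: resolve a dotted template path such as
// ".workflow.input.message" against a context object, with an
// optional fallback that mirrors `| default 'en'`.
type Context = Record<string, unknown>;

function resolvePath(ctx: Context, path: string, fallback?: unknown): unknown {
  const segments = path.replace(/^\./, "").split(".");
  let current: unknown = ctx;
  for (const segment of segments) {
    // Bail out to the fallback if the path leaves the object graph.
    if (current === null || typeof current !== "object") return fallback;
    current = (current as Context)[segment];
  }
  return current ?? fallback;
}

const ctx: Context = {
  workflow: { input: { message: "hello" } },
  tasks: { previous_step: { output: { metadata: { source: "api" } } } },
};

const message = resolvePath(ctx, ".workflow.input.message");
const language = resolvePath(ctx, ".workflow.input.language", "en");
```

Here `message` resolves to the input value, while `language` falls back to `"en"` because that path is absent, just as `| default 'en'` would.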
Output Management
```yaml
outputs:
  sentiment: "{{ .output.classification }}"
  confidence: "{{ .output.score }}"
  summary: "Analysis completed for {{ len .with.text }} characters"
  metadata:
    task_duration: "{{ .task.duration }}"
    processed_at: "{{ now }}"
```
Output features:
- Transform raw outputs into structured data
- Combine multiple output fields into complex objects
- Add computed metadata like timestamps and durations
- Create summary information for downstream tasks
Flow Control
```yaml
on_success:
  next: validate-results
  with:
    analysis: "{{ .task.outputs }}"
on_error:
  next: handle-error
  with:
    error_context: "{{ .task.error }}"
    retry_count: "{{ .task.retry_count }}"
```
Flow options:
- Route to different tasks based on execution outcome
- Pass contextual data to next tasks
- Implement sophisticated error handling strategies
Key Components
Agent tasks execute AI models with specific instructions:
```yaml
id: analyze-text
$use: agent(local::agents.#(id=="text-analyzer"))
action: analyze_sentiment
# Agent configuration (can be inline or referenced)
agent:
  id: text-analyzer
  config:
    provider: anthropic
    model: claude-3-5-haiku-latest
  instructions: |
    Analyze the sentiment of the provided text.
    Return a score between -1 (negative) and 1 (positive).
with:
  text: "{{ .workflow.input.content }}"
  context: "{{ .workflow.input.metadata }}"
outputs:
  sentiment_score: "{{ .output.score }}"
  confidence: "{{ .output.confidence }}"
  explanation: "{{ .output.reasoning }}"
```
Agent vs Tool Decision Guide
Choosing between agents and tools is crucial for optimal performance and reliability. This decision impacts execution speed, cost, and the type of processing your task can perform.
When to Use AI Agents
Agents excel at cognitive tasks that require understanding, reasoning, and interpretation. They're powered by large language models and can handle complex, unstructured inputs.
- **Natural Language Processing**: Text analysis, sentiment detection, content generation, and language translation
- **Complex Reasoning**: Decision making, problem solving, strategic planning, and logical inference
- **Unstructured Data Analysis**: Processing varied inputs like emails, documents, images, or free-form text that require interpretation
- **Creative & Generative Tasks**: Content creation, ideation, creative writing, and artistic tasks requiring imagination
- **Context-Aware Processing**: Tasks requiring understanding of context, nuance, and implicit meaning
Agent advantages:
- Handle ambiguous or varied inputs gracefully
- Provide reasoning and explanations for their outputs
- Can adapt to new scenarios without code changes
- Support structured outputs for reliable data extraction
```yaml
# Example: Content analysis agent
id: analyze-content
$use: agent(local::agents.#(id=="content-analyst"))
action: analyze_content
agent:
  id: content-analyst
  config:
    provider: anthropic
    model: claude-3-5-haiku-latest
  instructions: |
    Analyze the provided content for:
    1. Key themes and topics
    2. Sentiment and tone
    3. Target audience
    4. Content quality score (1-10)
    Return structured JSON with your analysis.
  actions:
    - id: analyze_content
      json_mode: true
      output:
        type: object
        properties:
          themes: { type: array, items: { type: string } }
          sentiment: { type: string, enum: ["positive", "negative", "neutral"] }
          tone: { type: string }
          audience: { type: string }
          quality_score: { type: number, minimum: 1, maximum: 10 }
        required: [themes, sentiment, tone, audience, quality_score]
with:
  content: "{{ .workflow.input.text }}"
  metadata: "{{ .workflow.input.context }}"
```
When to Use TypeScript Tools
Tools are perfect for deterministic operations that require precise, predictable results. They execute in a secure Bun runtime with configurable permissions and offer superior performance for computational tasks.
- **Deterministic Operations**: File I/O, data transformation, mathematical calculations, and formatting operations
- **External API Integration**: HTTP requests, database operations, third-party service integration, and webhooks
- **System Operations**: File system access, process management, environment interactions, and system utilities
- **Performance-Critical Tasks**: High-speed data processing, bulk operations, and computationally intensive tasks
- **Structured Data Processing**: JSON/XML parsing, data validation, format conversion, and structured transformations
Tool advantages:
- Faster execution and lower latency than agents
- Deterministic, repeatable results every time
- Cost-effective for computational tasks
- Fine-grained control over runtime permissions
Comparison at a Glance
| Aspect | AI Agents | TypeScript Tools |
|---|---|---|
| Best for | Cognitive tasks, NLP, reasoning | Computational tasks, APIs, I/O |
| Input handling | Unstructured, ambiguous data | Structured, well-defined data |
| Output predictability | Variable, context-dependent | Deterministic, repeatable |
| Adaptation | Learns from context | Explicit programming required |
| Error handling | Graceful degradation | Precise error codes |
```yaml
# Example: Data processing tool
id: process-csv
$use: tool(local::tools.#(id=="csv-processor"))
tool:
  id: csv-processor
  description: Process CSV data with filtering and transformation
  input:
    type: object
    properties:
      csv_data: { type: string }
      filters: { type: object }
      transformations: { type: array }
    required: [csv_data]
with:
  csv_data: "{{ .workflow.input.raw_data }}"
  filters:
    status: "active"
    date_range: "{{ .workflow.input.date_filter }}"
  transformations:
    - type: "normalize_names"
    - type: "calculate_totals"
outputs:
  processed_data: "{{ .output.data }}"
  row_count: "{{ .output.count }}"
  processing_time: "{{ .output.duration }}"
```
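The `csv-processor` referenced above would be backed by a TypeScript function in your project. A minimal sketch of what such a tool might look like (the function name, input/output shapes, and filter semantics here are hypothetical, not Compozy's actual tool contract):

```typescript
// Hypothetical sketch of a csv-processor tool: parse a CSV string,
// filter rows by exact-match field values, and return processed rows.
interface CsvInput {
  csv_data: string;
  filters?: Record<string, string>;
}

interface CsvOutput {
  data: Record<string, string>[];
  count: number;
}

function processCsv({ csv_data, filters = {} }: CsvInput): CsvOutput {
  const [header, ...lines] = csv_data.trim().split("\n");
  const columns = header.split(",").map((c) => c.trim());

  // Turn each line into an object keyed by the header columns.
  const rows = lines.map((line) => {
    const values = line.split(",").map((v) => v.trim());
    return Object.fromEntries(columns.map((col, i) => [col, values[i] ?? ""]));
  });

  // Keep only rows that match every filter exactly.
  const data = rows.filter((row) =>
    Object.entries(filters).every(([key, value]) => row[key] === value)
  );

  return { data, count: data.length };
}

const result = processCsv({
  csv_data: "name,status\nAda,active\nBob,inactive",
  filters: { status: "active" },
});
// result.count === 1; only Ada's row passes the status filter
```

Because the logic is plain code, the result is deterministic and cheap to test in isolation, which is exactly why this workload belongs in a tool rather than an agent.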
Template Expressions & Dynamic Configuration
Basic tasks leverage Compozy's powerful template engine to create dynamic, data-driven workflows. Template expressions use the Sprig template engine with custom functions for accessing workflow context.
Input Template Patterns
```yaml
with:
  # Simple value substitution
  user_id: "{{ .workflow.input.user_id }}"

  # Complex expressions with functions
  timestamp: "{{ now | date '2006-01-02 15:04:05' }}"

  # Conditional values
  priority: "{{ .workflow.input.priority | default 'medium' }}"

  # Nested object construction
  metadata:
    source: "{{ .workflow.id }}"
    user: "{{ .workflow.input.user_id }}"
    environment: "{{ .env.ENVIRONMENT }}"
    created_at: "{{ now }}"

  # Array manipulation
  tags: "{{ .workflow.input.tags | append 'processed' }}"

  # Mathematical operations
  score: "{{ mul .workflow.input.base_score 1.2 }}"
```
Output Templates
```yaml
outputs:
  # Direct field mapping
  result: "{{ .output.data }}"

  # Computed values
  summary: "Processed {{ .output.count }} items in {{ .task.duration }}"

  # Complex object creation
  metadata:
    task_id: "{{ .task.id }}"
    execution_time: "{{ .task.duration }}"
    input_size: "{{ len .with.data }}"
    success: "{{ not .task.error }}"

  # Conditional outputs
  status: "{{ if .output.errors }}failed{{ else }}success{{ end }}"

  # Type conversions
  count_string: "{{ .output.count | toString }}"
  score_number: "{{ .output.score | toNumber }}"
```
Task Design
- Single Responsibility: Each task should have one clear purpose
- Idempotency: Tasks should produce the same result when run multiple times
- Error Handling: Always define error handling strategies
- Input Validation: Use JSON Schema for input validation
- Output Structure: Design consistent output formats
Performance Optimization
```yaml
# Efficient task configuration
id: optimized-task
$use: tool(local::tools.#(id=="processor"))

# Set appropriate timeouts
timeout: 2m
heartbeat_timeout: 30s

# Optimize retry strategy
retry_policy:
  maximum_attempts: 2
  initial_interval: 1s
  maximum_interval: 10s

# Minimal input/output
with:
  # Only pass necessary data
  id: "{{ .workflow.input.item_id }}"
  action: "process"
outputs:
  # Only extract needed fields
  result: "{{ .output.status }}"
  id: "{{ .output.id }}"
```
Error Resilience
```yaml
id: resilient-task
$use: agent(local::agents.#(id=="processor"))
action: process

# Comprehensive error handling
retry_policy:
  maximum_attempts: 3
  initial_interval: 1s
  maximum_interval: 1m
  backoff_coefficient: 2.0
  non_retryable_error_types:
    - "ValidationError"
    - "AuthenticationError"
on_error:
  next: error-recovery
  with:
    error_context:
      original_input: "{{ .task.with }}"
      error_type: "{{ .task.error.type }}"
      error_message: "{{ .task.error.message }}"
      retry_count: "{{ .task.retry_count }}"
      task_id: "{{ .task.id }}"

# Timeout protection
schedule_to_close_timeout: 10m
start_to_close_timeout: 5m
```
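The retry policy above grows the wait between attempts geometrically, capped at `maximum_interval`. The schedule it implies can be sketched as follows (a simplified model for intuition; the workflow engine handles actual scheduling and jitter):

```typescript
// Sketch: compute the wait before each retry under an exponential
// backoff policy (e.g. initial 1s, coefficient 2.0, cap 60s).
function backoffIntervals(
  maxAttempts: number,
  initialMs: number,
  maxMs: number,
  coefficient: number
): number[] {
  const intervals: number[] = [];
  // The first attempt has no wait; each retry waits initial * coeff^(n-1).
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    const raw = initialMs * Math.pow(coefficient, attempt - 1);
    intervals.push(Math.min(raw, maxMs));
  }
  return intervals;
}

const waits = backoffIntervals(3, 1000, 60000, 2.0);
// waits === [1000, 2000]: with maximum_attempts: 3 there are at most
// two retries, waiting roughly 1s and then 2s.
```

The cap matters for long-running incidents: without `maximum_interval`, a coefficient of 2.0 would double waits indefinitely instead of settling at a steady polling rate.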
Common Patterns & Best Practices
Master these essential patterns to build robust, maintainable workflows with basic tasks.
Task Design Principles
Single Responsibility
```yaml
id: validate-email
$use: tool(local::tools.#(id=="email-validator"))
with:
  email: "{{ .workflow.input.email }}"
```
Why: Single-responsibility tasks are easier to debug, test, and reuse across workflows.
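A hypothetical `email-validator` tool backing this task could be as small as the sketch below; the function name and result shape are illustrative, not Compozy's tool contract:

```typescript
// Hypothetical sketch of an email-validator tool: one narrow check
// that returns a structured result instead of throwing.
interface ValidationResult {
  valid: boolean;
  reason?: string;
}

function validateEmail(email: string): ValidationResult {
  if (!email.includes("@")) {
    return { valid: false, reason: "missing @" };
  }
  const [local, domain] = email.split("@");
  if (!local || !domain || !domain.includes(".")) {
    return { valid: false, reason: "malformed address" };
  }
  return { valid: true };
}
```

Keeping the tool this narrow means the task can be reused by any workflow that needs the check, and a failure points to exactly one responsibility.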
Idempotent Execution
```yaml
id: generate-report
$use: agent(local::agents.#(id=="reporter"))
action: create_report
with:
  data: "{{ .workflow.input.data }}"
  timestamp: "{{ .workflow.input.report_date }}" # Fixed timestamp, not 'now'
```
Why: Enables safe retries and workflow replay without side effects.
Comprehensive Error Handling
```yaml
id: api-call
$use: tool(local::tools.#(id=="http-client"))
retry_policy:
  maximum_attempts: 3
  initial_interval: 1s
  maximum_interval: 30s
  backoff_coefficient: 2.0
  non_retryable_error_types:
    - "AuthenticationError"
    - "ValidationError"
on_error:
  next: handle-failure
  with:
    error_context: "{{ .task.error }}"
```
Learn more: Error Handling Strategies in the task documentation
Input Validation
```yaml
id: process-user-data
input:
  type: object
  properties:
    email: { type: string, format: email }
    age: { type: integer, minimum: 0, maximum: 150 }
    preferences:
      type: object
      properties:
        notifications: { type: boolean }
        theme: { type: string, enum: ["light", "dark"] }
  required: [email, age]
```
Why: Early validation prevents downstream errors and improves debugging.
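Compozy enforces this schema for you; the sketch below only illustrates the kind of checks it performs for the `email` and `age` constraints above (a hand-rolled, hypothetical helper, not the actual validator):

```typescript
// Illustrative sketch: the checks implied by the schema's
// required fields and the age range 0-150.
interface UserData {
  email?: unknown;
  age?: unknown;
}

function validateUserData(input: UserData): string[] {
  const errors: string[] = [];
  if (typeof input.email !== "string" || !input.email.includes("@")) {
    errors.push("email must be a valid email string");
  }
  if (
    typeof input.age !== "number" ||
    !Number.isInteger(input.age) ||
    input.age < 0 ||
    input.age > 150
  ) {
    errors.push("age must be an integer between 0 and 150");
  }
  return errors;
}

const ok = validateUserData({ email: "a@b.com", age: 30 });  // no errors
const bad = validateUserData({ email: "oops", age: 200 });   // two errors
```

Running such checks before the task body executes turns a confusing mid-workflow failure into an immediate, well-labeled validation error.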
Essential Patterns
Pattern: Execute tasks based on runtime conditions
```yaml
id: conditional-processing
$use: agent(local::agents.#(id=="processor"))
action: process_data

# Conditional input preparation
with:
  data: >-
    {{ if .workflow.input.process_required }}
    {{ .workflow.input.data }}
    {{ else }}
    null
    {{ end }}

# Skip execution if no data to process
condition: "{{ ne .with.data null }}"
on_success:
  next: "{{ if .workflow.input.save_results }}save-results{{ else }}notify-completion{{ end }}"
```
Use cases: Feature flags, A/B testing, environment-specific logic
Production Considerations
- **Performance Optimization**: Timeouts: 2-30s for most tasks | Retries: efficient backoff strategies | Data: minimize I/O payload size | Resources: monitor execution metrics
- **Security Best Practices**: Input validation: JSON Schema enforcement | Secrets: environment variables only | Permissions: minimal runtime access | Audit: security logging
- **Monitoring & Observability**: Task IDs: meaningful, traceable identifiers | Metadata: execution context in outputs | Health: dependency monitoring | Alerts: error rate thresholds
- **Testing & Validation**: Isolation: unit test individual tasks | Templates: validate expression syntax | Fixtures: consistent test data | Integration: end-to-end workflows
Basic tasks form the foundation of all Compozy workflows, providing flexible, reliable execution of both AI agents and tools. Master these patterns to build scalable, maintainable automation that grows with your needs.
Next Steps & Learning Paths
Choose your learning path based on your goals and experience level: