Core Concepts
Understanding Compozy's fundamental concepts and architecture
Overview
Compozy is built on a straightforward philosophy: focus on describing your goals, not the implementation details. By defining your workflows, agents, and tools in easy-to-read YAML configuration files, you let Compozy take care of execution, coordination, and state management behind the scenes.
This is made possible by several core components:
- Workflows
Define a sequence of tasks to accomplish a goal
- Tasks
The building blocks of workflows
- Agents
AI-powered components that can understand natural language instructions and execute actions
- Tools
TypeScript/JavaScript functions that perform specific operations like API calls, data processing, or file operations
- Memory
Persistent memory systems that can store and retrieve data across workflow executions
- Signals
Communication channels between workflows and agents
- MCP
Model Context Protocol integration for external tooling
Workflows
Workflows are the top-level orchestration units that define a sequence of tasks to accomplish a goal. Learn more about workflow configuration →
id: customer-support
version: 0.1.0
description: Handle customer inquiries with AI agents
config:
  input:
    type: object
    properties:
      customer_message:
        type: string
      customer_id:
        type: string
tasks:
  - id: classify_intent
    type: basic
    # ... task definition
  - id: generate_response
    type: basic
    # ... task definition
Key characteristics:
- Declarative
Describe the desired outcome, not implementation details
- Stateful
Compozy tracks execution state and handles failures
- Composable
Workflows can call other workflows or be called by external systems
- Observable
Built-in monitoring, logging, and debugging capabilities
Tasks
Tasks are the building blocks of workflows. Each task represents a single unit of work. Explore all task types →
For example, a basic task executes a single tool or agent action:
- id: fetch_data
  type: basic
  $use: tool(local::tools.#(id=="api_client"))
  with:
    endpoint: "/users/{{ .workflow.input.user_id }}"
Key characteristics:
- Atomic operations
Each task represents a single, indivisible unit of work
- State management
Automatic tracking of task execution state and outputs
- Error handling
Built-in retry policies and failure recovery mechanisms
- Data flow
Seamless output passing between tasks using template expressions
- Flexible execution
Support for sequential, parallel, conditional, and collection-based patterns
Agents
Agents are AI-powered components that can understand natural language instructions and execute actions. Learn more about agents →
resource: agent
id: customer_service_agent
version: 0.1.0
config:
  provider: groq
  model: llama-3.3-70b-versatile
  api_key: "{{ .env.GROQ_API_KEY }}"
instructions: |
  You are a helpful customer service agent. Your role is to:
  - Understand customer inquiries and classify their intent
  - Provide accurate, helpful responses
  - Escalate complex issues when necessary
  Always maintain a professional, friendly tone.
tools:
  - $ref: local::tools.#(id=="knowledge_base")
  - $ref: local::tools.#(id=="ticket_system")
actions:
  - id: classify_intent
    prompt: |
      Analyze this customer message: "{{ .input.message }}"
      Classify the intent as: question, complaint, request, or compliment.
    output:
      type: object
      properties:
        intent:
          type: string
          enum: [question, complaint, request, compliment]
        confidence:
          type: number
        reasoning:
          type: string
  - id: generate_response
    json_mode: true
    prompt: |
      Based on the customer's message and intent:
      Message: {{ .input.message }}
      Intent: {{ .input.intent }}
      Generate an appropriate response that addresses their needs.
Key features:
- Multi-model support
OpenAI, Groq, Ollama, and more. See all providers →
- Tool integration
Agents can call external tools and APIs
- Structured outputs
Define JSON schemas for reliable data extraction. Learn more →
- Context awareness
Access workflow data and previous task outputs
- Memory integration
Persistent conversation and context memory
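The structured-output schema on the classify_intent action is what makes the model's reply safe to consume downstream. A minimal hand-rolled check of that contract might look like this (illustrative only; real validation would use a JSON Schema library, and parseClassifyOutput is a hypothetical helper, not part of Compozy):

```typescript
// Validate that a model's JSON reply matches the classify_intent output schema.
const INTENTS = ["question", "complaint", "request", "compliment"] as const;
type Intent = (typeof INTENTS)[number];

interface ClassifyOutput {
  intent: Intent;
  confidence: number;
  reasoning: string;
}

function parseClassifyOutput(raw: string): ClassifyOutput {
  const data = JSON.parse(raw);
  if (!INTENTS.includes(data.intent)) {
    throw new Error(`invalid intent: ${data.intent}`); // enum constraint
  }
  if (typeof data.confidence !== "number") {
    throw new Error("confidence must be a number"); // type constraint
  }
  return data as ClassifyOutput;
}

const out = parseClassifyOutput(
  '{"intent":"question","confidence":0.92,"reasoning":"asks how a feature works"}',
);
console.log(out.intent); // "question"
```

Rejecting malformed output at this boundary means later tasks can rely on `.output.intent` always being one of the four enum values.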
Tools
Tools are TypeScript/JavaScript functions that perform specific operations like API calls, data processing, or file operations. Learn more about tool development →
interface WeatherInput {
  location: string;
  units?: "metric" | "imperial";
}

interface WeatherOutput {
  temperature: number;
  description: string;
  humidity: number;
  location: string;
  success: boolean;
}

export async function weatherTool({ input }: { input: WeatherInput }): Promise<WeatherOutput> {
  const { location, units = "metric" } = input;
  try {
    // Use OpenWeatherMap API (requires API key in environment)
    const apiKey = process.env.OPENWEATHER_API_KEY;
    if (!apiKey) {
      throw new Error("OpenWeatherMap API key not found in environment");
    }
    const response = await fetch(
      `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(location)}&appid=${apiKey}&units=${units}`
    );
    if (!response.ok) {
      throw new Error(`Weather API request failed: ${response.status}`);
    }
    const data = await response.json();
    return {
      temperature: Math.round(data.main.temp),
      description: data.weather[0].description,
      humidity: data.main.humidity,
      location: data.name,
      success: true,
    };
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    throw new Error(`Weather lookup failed: ${message}`);
  }
}
What this tool does: The weather tool demonstrates how to create a TypeScript tool that makes external API calls. It fetches current weather data for a given location using the OpenWeatherMap API, handles errors gracefully, and returns structured data that agents can use.
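Because the tool is just an exported async function, you can exercise it offline by stubbing the global fetch before calling it. The sketch below uses a condensed copy of the tool (illustrative only, not the full implementation above) so it runs without an API key:

```typescript
// Condensed copy of a fetch-based weather tool, for offline testing.
interface WeatherInput {
  location: string;
  units?: "metric" | "imperial";
}

async function weatherTool({ input }: { input: WeatherInput }) {
  const res = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(input.location)}&units=${input.units ?? "metric"}`,
  );
  if (!res.ok) throw new Error(`Weather API request failed: ${res.status}`);
  const data = await res.json();
  return { temperature: Math.round(data.main.temp), location: data.name, success: true };
}

// Stub the network call so the example runs without a key or connectivity.
globalThis.fetch = (async () =>
  new Response(JSON.stringify({ main: { temp: 21.6 }, name: "Lisbon" }), { status: 200 })) as typeof fetch;

const out = await weatherTool({ input: { location: "Lisbon" } });
console.log(out); // { temperature: 22, location: "Lisbon", success: true }
```

Stubbing at the fetch boundary keeps the test focused on the tool's own parsing and rounding logic.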
Tool configuration:
tools:
  - id: weather_tool
    description: Fetches current weather data for a given location
    input:
      type: object
      properties:
        location:
          type: string
          description: City name or location to get weather for
        units:
          type: string
          enum: ["metric", "imperial"]
          description: Temperature units (metric or imperial)
          default: metric
      required:
        - location
Key characteristics:
- TypeScript/JavaScript execution
Run in a secure Bun runtime with configurable permissions. Learn more →
- JSON I/O communication
Standardized input/output protocol for reliable data exchange
- External API integration
Connect to databases, web services, and third-party systems
- Schema validation
Define input/output schemas for type safety and validation
- Sandboxed execution
Isolated runtime environment with controlled access to system resources
Memory Systems
Memory enables persistent context and conversation history across workflow executions. Learn more about memory systems →
# Configure memory in your project
memory:
  provider: redis
  config:
    url: "{{ .env.REDIS_URL }}"
  strategies:
    conversation:
      type: token_lru
      max_tokens: 4000
      eviction_policy: lru
    facts:
      type: fifo
      max_entries: 100
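The fifo strategy above caps stored entries and evicts the oldest once max_entries is exceeded. Conceptually it behaves like this small queue (an illustrative sketch, not Compozy's memory backend):

```typescript
// FIFO memory: keep at most maxEntries items, dropping the oldest first.
class FifoMemory<T> {
  private entries: T[] = [];
  constructor(private maxEntries: number) {}

  append(entry: T): void {
    this.entries.push(entry);
    if (this.entries.length > this.maxEntries) {
      this.entries.shift(); // evict the oldest entry
    }
  }

  read(): T[] {
    return [...this.entries];
  }
}

const facts = new FifoMemory<string>(3);
["a", "b", "c", "d"].forEach(f => facts.append(f));
console.log(facts.read()); // ["b", "c", "d"] — "a" was evicted
```

A token_lru strategy works on the same principle but evicts the least recently used entries once a token budget (max_tokens) is exceeded, rather than a fixed entry count.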
Using memory in workflows:
Dedicated memory tasks for explicit memory operations
tasks:
  - id: recall_context
    type: memory
    operation: read
    key: "user:{{ .workflow.input.user_id }}:context"
    on_success:
      next: process_with_context
  - id: process_with_context
    type: basic
    $use: agent(local::agents.#(id=="assistant"))
    action: respond
    with:
      message: "{{ .workflow.input.message }}"
      context: "{{ .tasks.recall_context.output }}"
    on_success:
      next: update_memory
  - id: update_memory
    final: true
    type: memory
    operation: append
    key: "user:{{ .workflow.input.user_id }}:context"
    content: |
      Human: {{ .workflow.input.message }}
      Assistant: {{ .tasks.process_with_context.output }}
Key characteristics:
- Persistent storage
Context and conversation history survive across workflow executions with Redis
- Flexible strategies
Token-based LRU, FIFO, and custom eviction policies
- Hierarchical keys
Organize memory with user-specific, session-based, or global scopes
- Seamless integration
Access memory through simple read, write, append, and delete operations
Signals
Signals enable event-driven communication between workflows and agents through lightweight messages. They provide a publish-subscribe mechanism for asynchronous coordination and decoupled architectures. Learn more about signals →
# Emitting a signal in a workflow
tasks:
  - id: process_order
    type: basic
    $use: agent(local::agents.#(id=="order_processor"))
    with:
      order: "{{ .workflow.input.order }}"
    on_success:
      next: notify_completion
  - id: notify_completion
    type: signal
    signal:
      id: order-processed
      payload:
        order_id: "{{ .workflow.input.order.id }}"
        status: "completed"
        timestamp: "{{ now }}"

# Receiving signals in another workflow
- id: wait_for_order
  type: wait
  wait_for: "order-processed"
  timeout: 300s
  condition: '{{ eq .signal.payload.status "completed" }}'
  on_success:
    next: send_notification
Common use cases:
- Workflow Coordination
Synchronize execution between multiple workflows
- Event-Driven Processing
Trigger actions based on business events or external signals
- Human Approval Flows
Pause workflows until manual approval or input is received
- Service Integration
Enable communication between different services and systems
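Conceptually, a signal is a named publish-subscribe channel carrying a payload. This in-process sketch illustrates the shape of the mechanism (Compozy itself delivers signals durably across workflows and processes, so this is an analogy, not the implementation):

```typescript
// Minimal topic-based pub/sub: emitters and subscribers share only a signal id.
type Payload = Record<string, unknown>;
type Handler = (payload: Payload) => void;

const subscribers = new Map<string, Handler[]>();

function onSignal(id: string, handler: Handler): void {
  subscribers.set(id, [...(subscribers.get(id) ?? []), handler]);
}

function emitSignal(id: string, payload: Payload): void {
  for (const handler of subscribers.get(id) ?? []) handler(payload);
}

// A "waiting workflow" subscribes with a condition on the payload...
const notified: string[] = [];
onSignal("order-processed", p => {
  if (p.status === "completed") notified.push(String(p.order_id));
});

// ...and the emitting workflow publishes without knowing who listens.
emitSignal("order-processed", { order_id: "A-42", status: "completed" });
console.log(notified); // ["A-42"]
```

The decoupling is the point: the emitter never references the waiting workflow, which is what makes the coordination patterns above possible.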
MCP (Model Context Protocol)
MCP enables AI agents to connect with external tools and services through a standardized protocol. It acts as a universal adapter between agents and external capabilities, providing dynamic tool discovery and seamless integration. Learn more about MCP integration →
# Configure MCP servers in your project
mcps:
  - id: filesystem
    transport: stdio
    command: "mcp-server-filesystem"
    args: ["--root", "/workspace"]
  - id: github
    transport: http
    endpoint: "http://localhost:3000/mcp"
    headers:
      Authorization: "Bearer {{ .env.GITHUB_TOKEN }}"

# Use MCP tools in agents
agents:
  - id: developer_agent
    config:
      provider: groq
      model: llama-3.3-70b-versatile
    instructions: |
      You are a developer assistant with access to filesystem and GitHub tools.
      Use the available MCP tools to read files, make changes, and interact with repositories.
    # MCP tools are automatically discovered and available
    actions:
      - id: analyze_codebase
        prompt: |
          Analyze the project structure in /workspace and provide insights.
          Use filesystem tools to explore the codebase.
Integration with Compozy components:
- Automatic Tool Discovery
MCP servers expose tools that agents can use immediately without manual registration
- Multiple Transport Types
Support for stdio (local processes), HTTP, and Server-Sent Events (SSE)
- Central Proxy Architecture
MCP Proxy (port 6001) manages all connections and provides unified access
- Security & Isolation
Built-in token authentication and environment isolation for safe tool execution
Template Engine
Compozy's template engine enables modular, reusable configurations through a powerful query syntax. Learn more about YAML templates →
Reference Types
Reference components defined in the same YAML file using the local scope
# Reference tools defined in the same file
$ref: local::tools.#(id=="weather_tool")
# Reference agents in the current workflow
$ref: local::agents.#(id=="assistant")
# Reference schemas for validation
$ref: local::schemas.#(id=="user_input")
# Reference tasks for routing
$ref: local::tasks.#(id=="save_results")
Template Expressions
Sprig template engine with custom functions for dynamic values. See all template functions →
# Access workflow input
with:
  message: "{{ .workflow.input.message }}"

# Access previous task outputs
condition: "{{ .tasks.classify.output.category }}"

# Transform data
formatted_data: "{{ .tasks.process.output | toJson }}"

# Conditional logic
status: |
  {{ if eq .tasks.validate.output.valid true }}
  approved
  {{ else }}
  rejected
  {{ end }}

# Current timestamp
timestamp: "{{ now }}"

# Array operations
count: "{{ len .tasks.collection.output }}"
first_item: "{{ index .tasks.collection.output 0 }}"

# Collection item access
item_value: "{{ .item }}"
item_index: "{{ .index }}"
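At its core, expression resolution is a path lookup against the workflow context. This toy resolver handles only simple dot-path expressions (an illustrative sketch; the real engine is Sprig, which additionally supports functions, pipes, and conditionals):

```typescript
// Resolve "{{ .a.b.c }}" placeholders by walking the context object.
function resolve(template: string, ctx: unknown): string {
  return template.replace(/\{\{\s*\.([\w.]+)\s*\}\}/g, (_match, path: string) =>
    String(path.split(".").reduce((obj: any, key: string) => obj?.[key], ctx)),
  );
}

const ctx = {
  workflow: { input: { message: "hi" } },
  tasks: { classify: { output: { category: "question" } } },
};

console.log(resolve("{{ .workflow.input.message }}", ctx));         // "hi"
console.log(resolve("{{ .tasks.classify.output.category }}", ctx)); // "question"
```

The same context shape (.workflow.input for workflow inputs, .tasks.<id>.output for prior results) underlies every expression in the examples above.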
State Management
State management in workflows allows you to pass data between tasks seamlessly. Each task's outputs are automatically made available to subsequent tasks, enabling dynamic and context-aware processing. You can reference previous task outputs using template expressions, making it easy to build complex, data-driven workflows without manual data passing. Learn more about context variables →
# Task outputs are automatically available to subsequent tasks
tasks:
  - id: step1
    type: basic
    outputs:
      result: "{{ .output.data }}"
      metadata:
        processed_at: "{{ now }}"
  - id: step2
    type: basic
    with:
      # Access step1's outputs
      previous_result: "{{ .tasks.step1.output.result }}"
      timestamp: "{{ .tasks.step1.output.metadata.processed_at }}"
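Conceptually, the engine runs tasks in order and records each task's output under .tasks.<id>.output in an accumulating context, which is how step2 can read step1's result. A minimal sketch of that loop (hypothetical, not Compozy's internals):

```typescript
// Each task receives the context built up by all earlier tasks.
type Context = { workflow: { input: Record<string, any> }; tasks: Record<string, { output: any }> };
type Task = { id: string; run: (ctx: Context) => any };

function runWorkflow(tasks: Task[], input: Record<string, any>): Context {
  const ctx: Context = { workflow: { input }, tasks: {} };
  for (const task of tasks) {
    // Store the output so later tasks can reference .tasks.<id>.output
    ctx.tasks[task.id] = { output: task.run(ctx) };
  }
  return ctx;
}

const ctx = runWorkflow(
  [
    { id: "step1", run: () => ({ result: "hello" }) },
    { id: "step2", run: c => ({ echoed: c.tasks.step1.output.result.toUpperCase() }) },
  ],
  {},
);
console.log(ctx.tasks.step2.output.echoed); // "HELLO"
```

Because the engine owns this context, no task ever needs to pass data to another explicitly; template expressions are the only coupling between steps.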
Error Handling
Handle errors in workflows using retry
, on_error
, and on_success
to control what happens when tasks fail or succeed. This makes your workflows more reliable.
tasks:
  - id: risky_operation
    type: basic
    $use: agent(local::agents.#(id=="sophisticated_agent"))
    retry:
      max_attempts: 3
      backoff: exponential
    on_error:
      next: handle_error
    on_success:
      next: continue_workflow
  - id: handle_error
    type: basic
    $use: tool(local::tools.#(id=="error_handler"))
    with:
      error: "{{ .tasks.risky_operation.error }}"
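The retry policy above (max_attempts: 3 with exponential backoff) follows the classic pattern of doubling the delay after each failed attempt. A minimal sketch of such a helper (hypothetical, not Compozy's scheduler; base delay and doubling factor are illustrative):

```typescript
// Retry an async operation with exponential backoff between attempts.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3, baseMs = 100): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // exhausted: surface the error (cf. on_error)
      const delayMs = baseMs * 2 ** (attempt - 1); // 100ms, 200ms, 400ms, ...
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

// Demo: an operation that fails twice, then succeeds on the third attempt.
let calls = 0;
const result = await withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}, 3, 1);
console.log(result, calls); // "ok" 3
```

In the YAML above, Compozy applies this loop for you; on_error only fires once all retry attempts are exhausted.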
Best Practices
Design Principles
- Single Responsibility: Each task should have one clear purpose
- Idempotency: Tasks should produce the same result when re-run
- Loose Coupling: Minimize dependencies between components
- Observable: Include logging and monitoring in your workflows
Configuration Organization
# Use schemas for input validation
schemas:
  - id: user_request
    type: object
    properties:
      message: { type: string }
      priority: { type: string, enum: [low, medium, high] }
    required: [message]

config:
  input:
    $ref: local::schemas.#(id=="user_request")
Error Handling Strategy
# Define error handling at the workflow level
error_handling:
  default_retry:
    max_attempts: 3
    backoff: exponential
  critical_tasks:
    - validate_input
    - save_results
Security Considerations
# Minimal runtime permissions
runtime:
  permissions:
    - --allow-read=./data # Only specific directories
    - --allow-net=api.example.com # Only specific hosts
    - --allow-env=API_KEY,DB_URL # Only required env vars
Next Steps
Choose your learning path based on your needs:
Quick Start
Agent Development
Advanced Implementation
Complete Reference
The power of Compozy lies in combining these concepts to create sophisticated AI-powered workflows that are both maintainable and scalable.