# Providers

Configure LLM providers to enable AI agents to interact with various language models. Compozy supports seven providers with distinct capabilities and configuration requirements.
## Supported Providers

1. **OpenAI**: GPT-4, GPT-3.5, and other OpenAI models
2. **Anthropic**: Claude 3 family models for advanced reasoning
3. **Google**: Gemini Pro and other Google AI models
4. **Groq**: Fast inference platform with an OpenAI-compatible API
5. **Ollama**: Local model hosting for self-hosted models
6. **DeepSeek**: DeepSeek AI models with an OpenAI-compatible API
7. **xAI**: Grok models with an OpenAI-compatible API
## Basic Configuration

Configure providers in the `models` section of your `compozy.yaml`:
```yaml
models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
  - provider: anthropic
    model: claude-3-5-sonnet-20241022
    api_key: "{{ .env.ANTHROPIC_API_KEY }}"
  - provider: google
    model: gemini-pro
    api_key: "{{ .env.GOOGLE_API_KEY }}"
```
## Configuration Reference

### Core Properties

All providers support these core configuration properties. For the complete schema definition with all available fields and validation rules, see the Provider Schema Documentation.
| Property | Type | Required | Description |
|---|---|---|---|
| `provider` | string | ✅ | Provider name (see supported providers above) |
| `model` | string | ✅ | Model identifier; any valid model string for the provider |
| `api_key` | string | ⚠️ | API key for authentication (required for cloud providers) |
| `api_url` | string | ❌ | Custom API endpoint URL |
| `organization` | string | ❌ | Organization ID (OpenAI only) |
| `params` | object | ❌ | Generation parameters (temperature, max_tokens, etc.) |
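The required/optional split in the table above can be expressed as a small validation routine. The following is an illustrative sketch, not Compozy's actual code: the `ModelConfig` interface and `CLOUD_PROVIDERS` set are assumptions based on the table and provider list.

```typescript
// Hypothetical shape mirroring the core-properties table; not Compozy's real types.
interface ModelConfig {
  provider: string;
  model: string;
  api_key?: string;
  api_url?: string;
  organization?: string;
  params?: Record<string, unknown>;
}

// Cloud providers require an API key; local providers like Ollama do not.
const CLOUD_PROVIDERS = new Set([
  "openai", "anthropic", "google", "groq", "deepseek", "xai",
]);

// Return a list of validation errors; an empty list means the entry is valid.
function validateModelConfig(cfg: ModelConfig): string[] {
  const errors: string[] = [];
  if (!cfg.provider) errors.push("provider is required");
  if (!cfg.model) errors.push("model is required");
  if (CLOUD_PROVIDERS.has(cfg.provider) && !cfg.api_key) {
    errors.push(`api_key is required for cloud provider "${cfg.provider}"`);
  }
  return errors;
}
```

For example, an `openai` entry without an `api_key` fails validation, while an `ollama` entry without one passes.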
### Parameter Support

Parameters are specified in the `params` object. The available parameters vary by provider; consult the Provider Schema Documentation for provider-specific parameter schemas:
```yaml
models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    params:
      temperature: 0.7
      max_tokens: 4000
      top_p: 0.9
      seed: 42
```
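Providers typically reject out-of-range parameter values, so it can help to clamp them before a request is sent. A minimal sketch, assuming the common OpenAI-style bounds (temperature 0-2, top_p 0-1); actual valid ranges vary by provider:

```typescript
// Illustrative subset of generation parameters; not Compozy's real types.
interface GenerationParams {
  temperature?: number; // commonly 0-2 for OpenAI-style APIs (assumed here)
  top_p?: number;       // nucleus sampling probability mass, 0-1
  max_tokens?: number;  // positive integer
}

const clamp = (x: number, lo: number, hi: number): number =>
  Math.min(hi, Math.max(lo, x));

// Clamp each supplied parameter into its assumed valid range.
function normalizeParams(p: GenerationParams): GenerationParams {
  const out: GenerationParams = { ...p };
  if (out.temperature !== undefined) out.temperature = clamp(out.temperature, 0, 2);
  if (out.top_p !== undefined) out.top_p = clamp(out.top_p, 0, 1);
  if (out.max_tokens !== undefined) out.max_tokens = Math.max(1, Math.floor(out.max_tokens));
  return out;
}
```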
## Provider-Specific Configuration

### OpenAI Configuration
```yaml
models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    organization: "org-123" # Optional
    params:
      temperature: 0.7
      max_tokens: 4000
      top_p: 0.9
      seed: 42
```
**Model Support**: Any valid OpenAI model string is supported. Popular models include:

- `gpt-4`: Most capable model
- `gpt-4-turbo`: Faster GPT-4 variant
- `gpt-3.5-turbo`: Faster, cheaper option
- `gpt-4o`: Multimodal capabilities
- Any other model available in your OpenAI account
**Feature Support**:

- ✅ All configuration options supported
- ✅ Organization parameter supported
- ✅ Custom API URL supported
## Environment Variables

```bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
GROQ_API_KEY=gsk_...
DEEPSEEK_API_KEY=sk-...
XAI_API_KEY=xai-...
```
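The `{{ .env.VAR }}` references in the YAML configurations are substituted from environment variables like these at load time. A minimal sketch of how such substitution could work; this resolver is illustrative, and Compozy's actual template engine may differ:

```typescript
// Replace {{ .env.NAME }} placeholders with values from an env map,
// failing loudly when a referenced variable is not set.
function resolveEnvTemplate(
  template: string,
  env: Record<string, string | undefined>,
): string {
  return template.replace(/\{\{\s*\.env\.(\w+)\s*\}\}/g, (_match, name: string) => {
    const value = env[name];
    if (value === undefined) {
      throw new Error(`Missing environment variable: ${name}`);
    }
    return value;
  });
}
```

In a Node or Bun runtime you would typically pass `process.env` as the `env` argument. Failing on a missing variable (rather than silently inserting an empty string) surfaces misconfiguration before any provider call is made.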
## Security Best Practices

- **API Key Management**: Use environment variables for API keys and rotate them regularly
- **Network Security**: Use HTTPS endpoints and verify SSL certificates
- **Access Control**: Implement proper authentication and authorization
- **Input Validation**: Validate and sanitize all inputs before sending them to providers
- **Error Handling**: Handle provider errors gracefully and avoid exposing sensitive information
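One common way to apply the last point is to redact key-like values from error messages before they reach logs. An illustrative sketch, not Compozy functionality; the patterns cover the key prefixes shown in the Environment Variables section:

```typescript
// Mask anything that looks like a provider API key before logging.
// Prefixes match the examples in the Environment Variables section.
const KEY_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9_-]{8,}/g,  // OpenAI / Anthropic / DeepSeek style
  /gsk_[A-Za-z0-9]{8,}/g,   // Groq style
  /xai-[A-Za-z0-9]{8,}/g,   // xAI style
];

function redactSecrets(message: string): string {
  let out = message;
  for (const pattern of KEY_PATTERNS) {
    out = out.replace(pattern, "[REDACTED]");
  }
  return out;
}
```

Running every outbound error message through a filter like this keeps a provider's 401 response, which may echo the offending key, out of application logs.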
## Multiple Provider Configuration

You can configure multiple providers and models:
```yaml
models:
  # Primary provider
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    params:
      temperature: 0.7
  # Alternative provider
  - provider: anthropic
    model: claude-3-5-sonnet-20241022
    api_key: "{{ .env.ANTHROPIC_API_KEY }}"
    params:
      temperature: 0.5
  # Local provider for development
  - provider: ollama
    model: llama2:7b
    api_url: "http://localhost:11434"
    params:
      temperature: 0.8
```
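With several providers configured, one practical pattern is falling back to the next model when the primary fails. A hypothetical sketch under that assumption; the `callModel` callback signature is illustrative and not part of Compozy's API:

```typescript
// Minimal reference to a configured model entry.
interface ModelRef {
  provider: string;
  model: string;
}

// Try each configured model in order; return the first successful result.
async function completeWithFallback(
  models: ModelRef[],
  callModel: (m: ModelRef) => Promise<string>,
): Promise<string> {
  let lastError: unknown;
  for (const m of models) {
    try {
      return await callModel(m);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

Ordering the list as primary, alternative, then local mirrors the YAML above: requests go to OpenAI first, fall back to Anthropic on error, and can land on the local Ollama instance during development outages.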