Configuration

Providers

Configure LLM providers to enable AI agents to interact with various language models. Compozy supports seven providers with distinct capabilities and configuration requirements.

Supported Providers

  1. OpenAI - GPT-4, GPT-3.5, and other OpenAI models
  2. Anthropic - Claude 3 family models for advanced reasoning
  3. Google - Gemini Pro and other Google AI models
  4. Groq - Fast inference platform with OpenAI-compatible API
  5. Ollama - Local hosting for self-managed models
  6. DeepSeek - DeepSeek AI models with OpenAI-compatible API
  7. xAI - Grok models with OpenAI-compatible API

Basic Configuration

Configure providers in the models section of your compozy.yaml:

models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"

  - provider: anthropic
    model: claude-3-5-sonnet-20241022
    api_key: "{{ .env.ANTHROPIC_API_KEY }}"

  - provider: google
    model: gemini-pro
    api_key: "{{ .env.GOOGLE_API_KEY }}"

Configuration Reference

Core Properties

All providers support these core configuration properties. For the complete schema definition with all available fields and validation rules, see the Provider Schema Documentation.

Property       Type     Required     Description
provider       string   Yes          Provider name (see supported providers above)
model          string   Yes          Model identifier - any valid model string for the provider
api_key        string   ⚠️ Cloud     API key for authentication (required for cloud providers)
api_url        string   No           Custom API endpoint URL
organization   string   No           Organization ID (OpenAI only)
params         object   No           Generation parameters (temperature, max_tokens, etc.)
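
The `api_url` property is most useful when routing requests through an OpenAI-compatible gateway or proxy. A sketch of such an entry follows; the gateway URL is a hypothetical placeholder, not a real endpoint:

```yaml
models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    api_url: "https://llm-proxy.internal.example.com/v1"  # hypothetical internal gateway
    organization: "org-123"                               # optional, OpenAI only
```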

Parameter Support

Parameters are specified in the params object. The available parameters vary by provider - consult the Provider Schema Documentation for provider-specific parameter schemas:

models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    params:
      temperature: 0.7
      max_tokens: 4000
      top_p: 0.9
      seed: 42

Provider-Specific Configuration

OpenAI Configuration

models:
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    organization: "org-123"                # Optional
    params:
      temperature: 0.7
      max_tokens: 4000
      top_p: 0.9
      seed: 42

Model Support: Any valid OpenAI model string is supported. Popular models include:

  • gpt-4 - Most capable model
  • gpt-4-turbo - Faster GPT-4 variant
  • gpt-3.5-turbo - Faster, cheaper option
  • gpt-4o - Multimodal capabilities
  • And any other model available in your OpenAI account

Supported Options:

  • ✅ All configuration options supported
  • ✅ Organization parameter supported
  • ✅ Custom API URL supported
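
Only OpenAI is shown above, but the OpenAI-compatible providers (Groq, DeepSeek, xAI) follow the same shape, and Ollama swaps the API key for a local endpoint. A sketch for Groq and Ollama; the Groq model name is illustrative, so check the provider's current catalog:

```yaml
models:
  - provider: groq
    model: llama-3.1-70b-versatile      # illustrative model name; use any model Groq serves
    api_key: "{{ .env.GROQ_API_KEY }}"
    params:
      temperature: 0.7

  - provider: ollama
    model: llama2:7b
    api_url: "http://localhost:11434"   # default local Ollama endpoint, no API key needed
    params:
      temperature: 0.8
```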

Environment Variables

# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
GROQ_API_KEY=gsk_...
DEEPSEEK_API_KEY=sk-...
XAI_API_KEY=xai-...

Security Best Practices

  • API Key Management

    Use environment variables for API keys and rotate them regularly

  • Network Security

    Use HTTPS endpoints and verify SSL certificates

  • Access Control

    Implement proper authentication and authorization

  • Input Validation

    Validate and sanitize all inputs before sending to providers

  • Error Handling

    Handle provider errors gracefully and avoid exposing sensitive information
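
The first practice can be made concrete: never inline a key literal in compozy.yaml; reference the environment with the template syntax shown earlier so the secret stays out of version control:

```yaml
models:
  # Avoid: secret committed alongside the config
  # - provider: openai
  #   model: gpt-4
  #   api_key: "sk-live-abc123"          # hypothetical hardcoded key - do not do this

  # Prefer: resolved from the environment at load time
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
```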

Multiple Provider Configuration

You can configure multiple providers and models:

models:
  # Primary provider
  - provider: openai
    model: gpt-4
    api_key: "{{ .env.OPENAI_API_KEY }}"
    params:
      temperature: 0.7

  # Alternative provider
  - provider: anthropic
    model: claude-3-5-sonnet-20241022
    api_key: "{{ .env.ANTHROPIC_API_KEY }}"
    params:
      temperature: 0.5

  # Local provider for development
  - provider: ollama
    model: llama2:7b
    api_url: "http://localhost:11434"
    params:
      temperature: 0.8