Provider Schema

JSON Schema for LLM provider configuration

Schema Definition
ProviderConfig represents the complete configuration for an LLM provider in Compozy workflows.
api_key
string

APIKey contains the authentication key for the AI provider.

  • Security: Use template references to environment variables.
  • Examples: "{{ .env.OPENAI_API_KEY }}", "{{ .secrets.ANTHROPIC_KEY }}"

Note: Required for most cloud providers; optional for local providers such as Ollama
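
For example, a minimal sketch that reads the key from the environment (assuming OPENAI_API_KEY is exported in your shell):

provider: openai
model: gpt-4-turbo
api_key: "{{ .env.OPENAI_API_KEY }}"  # resolved from the environment, never hard-coded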

api_url
string

APIURL specifies a custom API endpoint for the provider.

Use Cases:

  • Local model hosting (Ollama, OpenAI-compatible servers)
  • Enterprise API gateways
  • Regional API endpoints
  • Custom proxy servers

Examples: "http://localhost:11434", "https://api.openai.com/v1"

default
boolean

Default indicates that this model should be used as the fallback when no explicit model configuration is provided at the task or agent level.

Behavior:

  • When set to true, this model is used for tasks/agents that do not specify a model
  • Only one model per project can be marked as default; validation enforces this

Example:

models:
  - provider: openai
    model: gpt-4
    default: true  # This will be used by default
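
Extending the example above, a project that registers several models may mark at most one as default; a minimal sketch:

models:
  - provider: openai
    model: gpt-4
    default: true  # fallback for tasks/agents without an explicit model
  - provider: anthropic
    model: claude-3-5-haiku-latest  # used only when referenced explicitly
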
max_tool_iterations
integer

MaxToolIterations caps the number of tool-call iterations allowed during a single LLM request when tools are available. A value > 0 overrides the global default for this model; 0 uses the global default.
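
For example, tightening the tool-call budget for one model (the cap of 5 here is an arbitrary illustrative value):

provider: openai
model: gpt-4-turbo
max_tool_iterations: 5  # at most 5 tool-call rounds per request; 0 would defer to the global default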

model
string

Model defines the specific model identifier to use with the provider. Model names are provider-specific and determine capabilities and pricing.

  • Examples:
    • OpenAI: "gpt-4-turbo", "gpt-3.5-turbo"
    • Anthropic: "claude-opus-4-0", "claude-3-5-haiku-latest"
    • Google: "gemini-pro", "gemini-pro-vision"
    • Ollama: "llama2:13b", "mistral:7b"
organization
string

Organization specifies the organization ID for providers that support it.

  • Primary Use: OpenAI organization management for billing and access control

  • Example: "org-123456789abcdef"

Note: Only meaningful for OpenAI; other providers either ignore it or reject it (see Provider Capabilities below)
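
For example, scoping OpenAI requests to an organization (assuming OPENAI_ORG_ID is exported in the environment):

provider: openai
model: gpt-4-turbo
api_key: "{{ .env.OPENAI_API_KEY }}"
organization: "{{ .env.OPENAI_ORG_ID }}"  # ties usage to this org for billing and access control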

params
object

Params contains the generation parameters that control LLM behavior. These parameters are applied to all requests using this provider configuration. Can be overridden at the task or action level for specific requirements.
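
A typical params block; support for individual fields varies by provider (see Parameter Support by Provider below):

params:
  temperature: 0.7  # sampling randomness; lower is more deterministic
  max_tokens: 2000  # cap on generated tokens
  top_p: 0.9        # nucleus sampling cutoff (not honored by every provider)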

provider
string

Provider specifies which AI service to use for LLM operations. Must match one of the supported ProviderName constants.

  • Examples: "openai", "anthropic", "google", "ollama"

Supported Providers

The following LLM providers are supported:

  • OpenAI (openai) - GPT-4, GPT-3.5, and other OpenAI models
  • Anthropic (anthropic) - Claude models
  • Google (google) - Gemini models
  • Groq (groq) - Fast inference platform
  • Ollama (ollama) - Local model hosting
  • DeepSeek (deepseek) - DeepSeek AI models
  • xAI (xai) - Grok models
  • Mock (mock) - Mock provider for testing

Provider Capabilities

⚠️ Critical Warning: Some providers will fail to initialize if you provide unsupported configuration options. Always check the compatibility table below before configuring a provider.

Provider     Custom API URL   Organization   API Key Required   Implementation
OpenAI       ✅               ✅             ✅                 Native OpenAI API
Anthropic    ⚠️ Error         ⚠️ Error       ✅                 Native Claude API
Google       ⚠️ Error         ⚠️ Error       ✅                 Native Gemini API
Groq         ✅               ❌             ✅                 OpenAI-compatible API
Ollama       ✅               ⚠️ Error       ❌                 Local hosting
DeepSeek     ✅               ❌             ✅                 OpenAI-compatible API
xAI          ✅               ❌             ✅                 OpenAI-compatible API
Mock         ❌               ❌             ❌                 Testing only

Legend:

  • ✅ Supported
  • ❌ Not supported (ignored)
  • ⚠️ Error - Will cause configuration error

Configuration Examples

OpenAI with Parameters

provider: openai
model: gpt-4-turbo
api_key: "{{ .env.OPENAI_API_KEY }}"
organization: "{{ .env.OPENAI_ORG_ID }}"
params:
  temperature: 0.7
  max_tokens: 4000
  top_p: 0.9
  seed: 42

Anthropic

provider: anthropic
model: claude-3-5-sonnet-20241022
api_key: "{{ .env.ANTHROPIC_API_KEY }}"
params:
  temperature: 0.5
  max_tokens: 2000
# DO NOT use organization or api_url - will cause error

Local Ollama

provider: ollama
model: llama2:13b
api_url: http://localhost:11434
params:
  temperature: 0.8
  repetition_penalty: 1.1
# Note: api_key not required; DO NOT use organization - will cause error

Google Gemini

provider: google
model: gemini-pro
api_key: "{{ .env.GOOGLE_API_KEY }}"
params:
  temperature: 0.7
  top_k: 40
  top_p: 0.95
# DO NOT use api_url or organization - will cause error

Parameter Support by Provider

Parameter support varies significantly by provider:

OpenAI-Compatible Providers (OpenAI, Groq, DeepSeek, xAI)

  • ✅ temperature, max_tokens, top_p, seed, stop_words
  • ❌ top_k, repetition_penalty, min_length, max_length

Anthropic

  • ✅ temperature, max_tokens
  • ❌ top_p, top_k, seed, stop_words, repetition_penalty

Google

  • ✅ temperature, max_tokens, top_p, top_k
  • ❌ seed, stop_words, repetition_penalty

Ollama

  • ✅ temperature, repetition_penalty
  • ✅ JSON format support when a response format is requested
  • ❌ Most other parameters (support varies by model)

Mock

  • ❌ All parameters ignored (testing only)

Common Configuration Errors

Error: "anthropic does not support organization"

# ❌ This will fail
provider: anthropic
organization: "my-org"  # Remove this line

Error: "googleai does not support custom API URL"

# ❌ This will fail
provider: google
api_url: "https://custom.url"  # Remove this line

Error: "ollama does not support organization"

# ❌ This will fail
provider: ollama
organization: "my-org"  # Remove this line
