Default indicates that this model should be used as the fallback when no explicit
model configuration is provided at the task or agent level.
Behavior:
- Only one model per project can be marked as default
- When set to true, this model will be used for tasks/agents without explicit model config
- Validation ensures at most one default model per project
Example:

```yaml
models:
  - provider: openai
    model: gpt-4
    default: true  # This will be used by default
```
max_tool_iterations
integer
MaxToolIterations optionally caps the maximum number of tool-call iterations
during a single LLM request when tools are available.
When > 0, overrides the global default for this model; 0 uses the global default.
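A minimal sketch of how this cap might be set per model, based on the fields described above (the iteration limit of 5 is an arbitrary illustrative value):

```yaml
provider: openai
model: gpt-4
max_tool_iterations: 5  # cap tool-call loops for this model; 0 falls back to the global default
```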
model
string
Model defines the specific model identifier to use with the provider.
Model names are provider-specific and determine capabilities and pricing.
organization
string
Organization specifies the organization ID for providers that support it.
Primary Use: OpenAI organization management for billing and access control
Example: "org-123456789abcdef"
Note: Optional for most providers
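A hedged sketch of an OpenAI configuration using this field, reusing the example ID from the docs above (the surrounding field values are illustrative assumptions):

```yaml
provider: openai
model: gpt-4
api_key: "{{ .env.OPENAI_API_KEY }}"
organization: "org-123456789abcdef"  # scopes requests to this OpenAI organization for billing
```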
params
object
Params contains the generation parameters that control LLM behavior.
These parameters are applied to all requests using this provider configuration.
Can be overridden at the task or action level for specific requirements.
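A minimal sketch of provider-level params, using parameter names that appear in the compatibility examples below (the specific values are illustrative assumptions):

```yaml
provider: openai
model: gpt-4
params:
  temperature: 0.7   # applied to all requests using this provider config
  max_tokens: 1000   # may be overridden at the task or action level
```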
provider
string
Provider specifies which AI service to use for LLM operations.
Must match one of the supported ProviderName constants.
⚠️ Critical Warning: Some providers will fail to initialize if you provide unsupported configuration options. Always check the compatibility table below before configuring a provider.
```yaml
provider: anthropic
model: claude-3-5-sonnet-20241022
api_key: "{{ .env.ANTHROPIC_API_KEY }}"
params:
  temperature: 0.5
  max_tokens: 2000
# DO NOT use organization or api_url - will cause error
```
```yaml
provider: google
model: gemini-pro
api_key: "{{ .env.GOOGLE_API_KEY }}"
params:
  temperature: 0.7
  top_k: 40
  top_p: 0.95
# DO NOT use api_url or organization - will cause error
```