# LLM Provider Configuration
Hive uses a unified provider interface to manage interactions with Large Language Models. While the framework is designed to be provider-agnostic, it primarily utilizes the LiteLLMProvider to offer seamless integration with OpenAI, Anthropic, Google Gemini, and 100+ other providers.
## The LiteLLM Provider
The LiteLLMProvider is the standard interface for agent reasoning within Hive. It handles model routing, API authentication, and token tracking.
### Programmatic Configuration
When initializing an AgentRuntime, you provide an instance of the provider configured with your chosen model and credentials.
```python
from framework.llm import LiteLLMProvider

llm = LiteLLMProvider(
    model="claude-3-5-sonnet-20240620",  # Model identifier
    api_key="your-api-key-here",         # Optional if set in the environment
    api_base="https://api.example.com",  # Optional: for proxy/alternative endpoints
)
```
### Supported Provider Formats
Hive follows standard model string formats to identify the backend provider:
| Provider | Model String Format | Required Environment Variable |
| :--- | :--- | :--- |
| Anthropic | anthropic/claude-3-5-sonnet-20240620 | ANTHROPIC_API_KEY |
| OpenAI | openai/gpt-4o | OPENAI_API_KEY |
| Google | gemini/gemini-1.5-pro | GEMINI_API_KEY |
| Local (Ollama) | ollama/llama3 | N/A |
## Credential Management
Hive includes a robust credential store located in core/framework/credentials/models.py to manage secrets securely. Credentials are stored as CredentialObject instances, preventing accidental logging of sensitive keys through the use of Pydantic's SecretStr.
### Credential Types
The framework categorizes credentials to handle different authentication flows:
- API_KEY: Standard keys for services like OpenAI or Brave Search.
- OAUTH2: Supports refresh tokens for long-lived agent sessions.
- BEARER_TOKEN: For JWT-based authentication.
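To illustrate how a SecretStr-style wrapper keeps keys out of logs, here is a minimal stdlib sketch. It mimics the behavior of Pydantic's SecretStr but is not Hive's actual CredentialObject; the class and field names are assumptions.

```python
from dataclasses import dataclass

class MaskedSecret:
    """Minimal stand-in for Pydantic's SecretStr: the raw value never
    appears in repr()/str() output, only via an explicit accessor."""
    def __init__(self, value: str):
        self._value = value

    def get_secret_value(self) -> str:
        return self._value

    def __repr__(self) -> str:
        return "MaskedSecret('**********')"

    __str__ = __repr__

@dataclass
class Credential:
    """Illustrative credential record (hypothetical schema, not Hive's)."""
    name: str
    credential_type: str  # e.g. "API_KEY", "OAUTH2", "BEARER_TOKEN"
    secret: MaskedSecret

cred = Credential("openai", "API_KEY", MaskedSecret("sk-abc123"))
print(cred)                            # secret is masked if the record is logged
print(cred.secret.get_secret_value())  # explicit access returns the raw key
```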
### Usage in Runtime
When building an agent, you typically pass the LLM provider into the create_agent_runtime factory function:
```python
from framework.runtime.agent_runtime import create_agent_runtime

runtime = create_agent_runtime(
    graph=my_graph,
    llm=llm,  # The LiteLLMProvider instance
    tools=my_tools,
    storage_path="./agent_data",
)
```
## Environment Variables
For production deployments, it is recommended to configure providers via environment variables. Hive and the underlying LiteLLM library will automatically detect these:
```bash
# OpenAI Configuration
export OPENAI_API_KEY='sk-...'

# Anthropic Configuration
export ANTHROPIC_API_KEY='sk-ant-...'

# Custom API Base (e.g., for Azure or a LiteLLM Proxy)
export AZURE_API_BASE='https://my-resource.openai.azure.com/'
export AZURE_API_KEY='...'
```
## Monitoring and Logs
LLM interactions are captured by Hive's three-level runtime logging system:
- Level 1 (Summary): Total token counts and success status for the entire run.
- Level 2 (Node Detail): Token usage and latency per node execution.
- Level 3 (Tool Logs): Granular tool calls and the raw LLM text responses (see NodeStepLog).
This data is viewable in real-time via the Aden TUI Dashboard or by inspecting the RunSummaryLog generated after an agent execution.
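The roll-up from node-level detail to a run summary can be sketched as follows. The class and field names here are assumptions for illustration, not Hive's actual NodeStepLog or RunSummaryLog schema.

```python
from dataclasses import dataclass

@dataclass
class ToolCallLog:  # Level 3: one tool call or raw LLM response
    tool: str
    output: str

@dataclass
class NodeLog:      # Level 2: one node execution
    node: str
    tokens: int
    latency_ms: float
    steps: list[ToolCallLog]

def summarize(nodes: list[NodeLog]) -> dict:
    """Level 1: aggregate node-level detail into a run summary."""
    return {
        "total_tokens": sum(n.tokens for n in nodes),
        "nodes_executed": len(nodes),
        "success": True,  # assume a successful run for the sketch
    }

run = [
    NodeLog("plan", 120, 850.0, [ToolCallLog("llm", "...")]),
    NodeLog("search", 340, 1200.0, [ToolCallLog("brave_search", "...")]),
]
print(summarize(run))  # {'total_tokens': 460, 'nodes_executed': 2, 'success': True}
```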