# AutoGen Integration
Using Forge with Microsoft AutoGen.
Microsoft AutoGen supports OpenAI-compatible endpoints, making it straightforward to use Forge as your LLM backend for multi-agent conversations.
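Because the endpoint speaks the OpenAI wire format, any OpenAI-style client can target it directly. As a quick sanity check before wiring up AutoGen, you can assemble the same chat-completions request by hand with the standard library. The key and URL below mirror the AutoGen config that follows (the request is built but never sent):

```python
import json
import urllib.request

# Placeholder credentials mirroring the AutoGen config; replace with your own.
BASE_URL = "https://api.optima-forge.com/v1"
API_KEY = "forge_sk_your_key"

payload = {
    "model": "auto",  # Forge's routing model name
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build (but do not send) a standard OpenAI-style chat-completions request.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)  # https://api.optima-forge.com/v1/chat/completions
```

Any tool that accepts a custom `base_url`, AutoGen included, can reuse exactly this endpoint and header shape.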
## Setup
```shell
pip install pyautogen
```
## Configure
```python
import autogen

# Route all LLM calls through Forge's OpenAI-compatible endpoint.
config_list = [
    {
        "model": "auto",  # let Forge pick the provider and model
        "api_key": "forge_sk_your_key",
        "base_url": "https://api.optima-forge.com/v1",
    }
]

llm_config = {"config_list": config_list, "temperature": 0.7}

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
    system_message="You are a helpful AI assistant.",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # run fully autonomously
    max_consecutive_auto_reply=3,
    code_execution_config=False,  # disable local code execution for this example
)

user_proxy.initiate_chat(
    assistant,
    message="Write a Python function that calculates Fibonacci numbers.",
)
```
## Multi-Agent Conversations

AutoGen's group chat feature works with Forge unchanged. Because every agent calls the same OpenAI-compatible endpoint, Forge can route each agent's requests to a different provider based on its role and the complexity of its contributions, and it tracks the full conversation through Langfuse tracing.