# LangChain Integration

Using Forge with LangChain.
Forge integrates seamlessly with LangChain as an OpenAI-compatible LLM provider. Use Forge as your LLM backend to get intelligent routing, memory, and security while building chains and agents with LangChain's orchestration tools.
## Setup

```bash
pip install langchain langchain-openai
```
## Basic Usage

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="auto",
    api_key="forge_sk_your_key",
    base_url="https://api.optima-forge.com/v1",
)

response = llm.invoke("Explain machine learning in simple terms.")
print(response.content)
```
## With LangChain Chains

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="auto",
    api_key="forge_sk_your_key",
    base_url="https://api.optima-forge.com/v1",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful technical writer."),
    ("user", "Write documentation for: {topic}"),
])

chain = prompt | llm
result = chain.invoke({"topic": "REST API authentication"})
print(result.content)
```
## With LangGraph Agents

Use Forge as the LLM backend for LangGraph agent workflows. Forge handles provider routing and failover while LangGraph manages the agent execution graph.

```python
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="auto",
    api_key="forge_sk_your_key",
    base_url="https://api.optima-forge.com/v1",
)

# Every LLM call made inside the graph goes through Forge, so all
# calls get Forge routing, caching, and security automatically.
def call_model(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)
app = workflow.compile()
```
## Forge Memory with LangChain

You can use Forge's built-in memory alongside, or instead of, LangChain's memory classes by passing Forge extension fields through the model kwargs, so they are included in each request body.