Deep dives into AI infrastructure, security, routing, and the technology behind the Forge platform.
Sign up, create a project-scoped API key, and make your first request in under five minutes. This tutorial walks through the OpenAI-compatible API, automatic model routing, and the key parameters that make Forge different.
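Because the API is OpenAI-compatible, the first request is just a standard chat-completion call pointed at the gateway. A minimal sketch of the request shape, assuming an illustrative base URL and an "auto" model value that defers to Forge's router (check the docs for the real values):

```python
import json

FORGE_BASE_URL = "https://api.forge.example/v1"  # assumed gateway endpoint


def build_chat_request(api_key: str, prompt: str, model: str = "auto"):
    """Return (url, headers, body) for an OpenAI-compatible chat completion."""
    url = f"{FORGE_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # project-scoped API key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # "auto" assumed to hand model choice to the router
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body


url, headers, body = build_chat_request("fk-project-123", "Hello, Forge!")
```

Any HTTP client (or the stock `openai` SDK with a custom `base_url`) can send this payload unchanged.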
We chose the Business Source License 1.1 for Forge. Here is what that means for you: full source visibility, commercial protection for us, and a guaranteed conversion to open source after four years.
A deep dive into how ForgeGuard protects every AI request through seven distinct security stages: from SpiceDB authorization at the gate to MCP supply chain verification at the edge.
Forge's routing engine combines cascading intent classification, a RouteLLM BERT classifier, ELO-scored quality routing, and a seven-level priority chain to pick the best model for every request, cutting model costs by up to 85%.
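The core of a priority chain is simple: walk the levels top-down and take the first candidate that is eligible. A sketch of that mechanism, where the level names are illustrative assumptions standing in for Forge's actual chain:

```python
# Assumed seven-level chain, highest priority first. These names are
# placeholders for illustration, not Forge's documented levels.
PRIORITY_CHAIN = [
    "pinned-override",    # 1. explicit per-request pin
    "project-default",    # 2. project-level configuration
    "intent-classified",  # 3. cascading intent classifier pick
    "quality-routed",     # 4. ELO-scored quality tier
    "cost-routed",        # 5. cheapest eligible model
    "regional-fallback",  # 6. nearest healthy region
    "global-fallback",    # 7. last-resort default
]


def route(available: set) -> str:
    """Walk the chain top-down; the first available candidate wins."""
    for candidate in PRIORITY_CHAIN:
        if candidate in available:
            return candidate
    raise RuntimeError("no routable model")


choice = route({"cost-routed", "global-fallback"})
```

Here "cost-routed" sits higher in the chain than "global-fallback", so it wins even though both are available.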
Walk through creating an AI agent from scratch: choose a template, configure capabilities like memory, security, and payments, select a model, and deploy. Learn how prime-first routing works under the hood.
A plain-language comparison of building with direct LLM API calls versus using Forge as your AI gateway. Covers vendor lock-in, failover, security, observability, memory, and cost.
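The failover difference is easy to see in code: a direct API call is one provider and one failure mode, while a gateway tries an ordered list and falls through on error. A minimal sketch, with provider names and call signatures as illustrative assumptions:

```python
def call_with_failover(providers, prompt):
    """Try each (name, fn) provider in order; return the first success."""
    errors = []
    for name, fn in providers:
        try:
            return name, fn(prompt)
        except Exception as exc:  # a real gateway matches specific error classes
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")


# Stub providers for illustration only.
def flaky(prompt):
    raise ConnectionError("provider down")


def healthy(prompt):
    return f"echo: {prompt}"


name, out = call_with_failover([("primary", flaky), ("backup", healthy)], "hi")
# backup answers after primary fails
```

With a direct integration that retry logic lives in your application; behind a gateway it lives in one place for every caller.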
The blog is the surface area for longer-form updates. The docs, trust center, and pricing pages give you the operational details that marketing posts should not try to fake.