OpenAI Chat Completions (HTTP)
OpenClaw’s Gateway can serve a small OpenAI-compatible Chat Completions endpoint. The endpoint is disabled by default; enable it in config first.

POST /v1/chat/completions

- Same port as the Gateway (WS + HTTP multiplex): http://<gateway-host>:<port>/v1/chat/completions
- Requests run through the same agent runtime (openclaw agent), so routing/permissions/config match your Gateway.
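The request/response shape follows the OpenAI Chat Completions API. As a minimal sketch (Node 18+ with built-in fetch; the host and port below are placeholders, not OpenClaw defaults — see Authentication below for the bearer token):

```ts
// Minimal non-streaming request sketch. Replace GATEWAY_URL with your
// Gateway's actual host/port.
const GATEWAY_URL = "http://localhost:8080/v1/chat/completions"; // placeholder host/port

const res = await fetch(GATEWAY_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENCLAW_GATEWAY_TOKEN ?? ""}`,
  },
  body: JSON.stringify({
    model: "openclaw:main", // agent selection, see "Choosing an agent"
    messages: [{ role: "user", content: "Hello from the Chat Completions endpoint" }],
  }),
});

if (!res.ok) throw new Error(`HTTP ${res.status}`);
const completion = await res.json();
console.log(completion.choices?.[0]?.message?.content);
```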
Authentication
Uses the Gateway auth configuration. Send a bearer token:

Authorization: Bearer <token>

- When gateway.auth.mode="token", use gateway.auth.token (or OPENCLAW_GATEWAY_TOKEN).
- When gateway.auth.mode="password", use gateway.auth.password (or OPENCLAW_GATEWAY_PASSWORD).
Choosing an agent
No custom headers are required: encode the agent id in the OpenAI model field:

- model: "openclaw:<agentId>" (examples: "openclaw:main", "openclaw:beta")
- model: "agent:<agentId>" (alias)

Optional headers are also available:

- x-openclaw-agent-id: <agentId> (default: main)
- x-openclaw-session-key: <sessionKey> to fully control session routing.
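As a sketch of both forms (the "beta" agent id is taken from the examples above; the session key value is an arbitrary example):

```ts
// Target an agent via the model field (no custom headers needed):
const byModel = {
  model: "openclaw:beta", // or the "agent:beta" alias
  messages: [{ role: "user", content: "hi" }],
};

// Or steer routing with the optional headers:
const agentHeaders = {
  "x-openclaw-agent-id": "beta",
  "x-openclaw-session-key": "support-thread-42", // arbitrary example key
};
```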
Enabling the endpoint
Set gateway.http.endpoints.chatCompletions.enabled to true:
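For example, as a JSON-style sketch of that key path (only the dotted key path above is authoritative; adjust to your actual config file format and location):

```json
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
```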
Disabling the endpoint
Set gateway.http.endpoints.chatCompletions.enabled to false:
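The same sketch with the flag flipped:

```json
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": false }
      }
    }
  }
}
```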
Session behavior
By default the endpoint is stateless per request (a new session key is generated for each call). If the request includes an OpenAI user string, the Gateway derives a stable session key from it, so repeated calls can share an agent session.
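For example (the user value is arbitrary; any stable string behaves the same way):

```ts
// Repeated requests with the same "user" string map to the same derived
// session key, so the agent keeps context across calls.
const body = {
  model: "openclaw:main",
  user: "customer-1234", // stable per end user
  messages: [{ role: "user", content: "Remember that my favourite colour is green." }],
};
```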
Streaming (SSE)
Set stream: true to receive Server-Sent Events (SSE):

- Content-Type: text/event-stream
- Each event line is data: <json>
- The stream ends with data: [DONE]
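A streaming client sketch, assuming Node 18+ fetch and OpenAI-style delta chunks inside each data: payload (host, port, and token are placeholders):

```ts
const res = await fetch("http://localhost:8080/v1/chat/completions", { // placeholder host/port
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENCLAW_GATEWAY_TOKEN ?? ""}`,
  },
  body: JSON.stringify({
    model: "openclaw:main",
    stream: true,
    messages: [{ role: "user", content: "Stream me a haiku" }],
  }),
});

// Parse the SSE body: every event is a "data: <json>" line, and the stream
// ends with "data: [DONE]".
const decoder = new TextDecoder();
let buffer = "";
read: for await (const chunk of res.body!) {
  buffer += decoder.decode(chunk as Uint8Array, { stream: true });
  let nl;
  while ((nl = buffer.indexOf("\n")) !== -1) {
    const line = buffer.slice(0, nl).trim();
    buffer = buffer.slice(nl + 1);
    if (!line.startsWith("data:")) continue;
    const data = line.slice("data:".length).trim();
    if (data === "[DONE]") break read;
    const event = JSON.parse(data);
    process.stdout.write(event.choices?.[0]?.delta?.content ?? "");
  }
}
```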