OpenAI details agent loop and prompt structure for its open‑source coding CLI


In a post by Bolin, OpenAI detailed how its AI coding agent orchestrates work and how the Codex-based CLI builds the initial prompt it sends to the Responses API. Unlike ChatGPT or Anthropic's Claude web interface, the coding CLI itself is open source on GitHub.

Bolin’s post focuses on what he calls “the agent loop,” the repeating cycle that coordinates the user, the model, and any software tools the model invokes. As Ars Technica noted in December, the loop takes user input, prepares a textual prompt for the model, and then either returns a final assistant message or follows the model’s request to call a tool.

When the model requests a tool call, the agent executes the requested function (for example running a shell command or reading a file), appends the tool output to the original prompt, and queries the model again; this repeats until the model stops requesting tools and produces an assistant response.
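The cycle described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Codex implementation: `call_model` stands in for a Responses API call, and the message shapes and function names are assumptions made for the example.

```python
import json

def run_agent_loop(call_model, tools, user_input, max_turns=10):
    """Minimal sketch of the agent loop (hypothetical names).
    `call_model` returns either a final assistant message or a
    request to call one of the functions in `tools`."""
    prompt = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = call_model(prompt)
        if reply["type"] == "message":
            # The model produced a final assistant response: loop ends.
            return reply["content"]
        # The model requested a tool call: execute the function,
        # append its output to the prompt, and query the model again.
        output = tools[reply["name"]](**reply["arguments"])
        prompt.append({"role": "assistant", "content": json.dumps(reply)})
        prompt.append({"role": "tool", "content": str(output)})
    raise RuntimeError("agent loop exceeded max_turns")


# Usage with a scripted stand-in for the model:
def fake_model(prompt):
    if len(prompt) == 1:
        return {"type": "tool_call", "name": "read_file",
                "arguments": {"path": "README.md"}}
    return {"type": "message", "content": "The file has one line."}

result = run_agent_loop(fake_model, {"read_file": lambda path: "# Demo"},
                        "Summarize README.md")
print(result)  # -> The file has one line.
```

The loop terminates only when the model stops requesting tools, which is why real agents cap the number of turns as a safety valve.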

Bolin’s post shows how the Codex implementation constructs the initial prompt that is sent to OpenAI’s Responses API. The prompt is built from several components assigned roles that determine priority: system, developer, user, or assistant. The instructions field comes from either a user-specified configuration file or base instructions bundled with the CLI.
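A request assembled along those lines might look like the sketch below. The field layout follows the post's description (an `instructions` field plus role-tagged input items), but the file name, model string, and developer message are placeholders invented for this example.

```python
import json
from pathlib import Path

# Assumed fallback text; the real CLI bundles its own base instructions.
BASE_INSTRUCTIONS = "You are a coding agent running in a CLI."

def build_initial_prompt(user_message, config_path=None):
    """Assemble an initial Responses API request: the `instructions`
    field comes from a user-specified config file if one exists,
    otherwise from the bundled base instructions."""
    if config_path and Path(config_path).exists():
        instructions = Path(config_path).read_text()
    else:
        instructions = BASE_INSTRUCTIONS
    return {
        "model": "gpt-5",  # placeholder model name
        "instructions": instructions,
        "input": [
            # Roles determine priority: system > developer > user > assistant.
            {"role": "developer", "content": "Prefer small, reviewable diffs."},
            {"role": "user", "content": user_message},
        ],
    }

request = build_initial_prompt("Fix the failing test in utils.py")
print(json.dumps(request, indent=2))
```

The role hierarchy means a developer message can constrain how the model responds to user input, while both yield to system-level instructions.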

