OpenAI's Spark model codes 15x faster than GPT-5.3-Codex

OpenAI announced a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex built for real-time coding in Codex. The company says it generates code 15 times faster than GPT-5.3-Codex while "remaining highly capable for real-world coding tasks." Spark will initially be available only to users on the $200/month Pro tier, with separate rate limits during the preview.

Designed for real-time work, Codex-Spark focuses on targeted edits and tight iteration loops. It supports interruption and redirection mid-task, defaults to lightweight edits, and does not automatically run tests unless requested, aiming for quick, conversational coding rather than long-running autonomous tasks.

OpenAI also reduced latency across the request-response pipeline: per-roundtrip client/server overhead dropped by 80%, per-token overhead by 30%, and time-to-first-token by 50%, driven by session-initialization and streaming optimizations.
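
To make the time-to-first-token figure concrete, the sketch below measures TTFT on a streamed completion using the openai Python SDK. It is a minimal illustration, not OpenAI's benchmark methodology, and the model identifier is assumed for the Pro-tier preview.

```python
# Minimal sketch: measure time-to-first-token (TTFT) on a streamed completion.
# The model name is illustrative; Spark's actual identifier and availability
# depend on the Pro-tier research preview.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
first_token_at = None

stream = client.chat.completions.create(
    model="gpt-5.3-codex-spark",  # assumed identifier, for illustration only
    messages=[{"role": "user", "content": "Rename the variable x to count in utils.py"}],
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta and first_token_at is None:
        first_token_at = time.perf_counter()
        print(f"TTFT: {first_token_at - start:.3f}s")
    if delta:
        print(delta, end="", flush=True)
```

In a loop like this, the reported 50% TTFT reduction would show up as the first chunk arriving roughly twice as fast, while the per-token overhead reduction shortens the gaps between subsequent chunks.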
