OpenAI Releases Codex: A Software Agent that Operates in the Cloud and Can Do Many Tasks in Parallel
- Nishant
- 13 hours ago
- 4 min read
Developers have had AI code-completion tools for years, but the developer toolkit just got a serious upgrade. OpenAI has released a research preview of Codex, a cloud-based software engineering agent that moves the conversation from "autocomplete my line" to "handle the entire ticket while I tackle the hard stuff." It can work on multiple coding tasks simultaneously: drafting features, answering questions about your repository, running tests, and proposing pull requests.
Announced on May 16, 2025, Codex spins up isolated sandboxes, preloads them with your repo, and chips away at features, bug fixes, test suites, and pull-request drafts—often in parallel.
The software engineering agent is powered by a specialized model, codex-1, a version of OpenAI o3 fine-tuned for software engineering. This new system is now available for ChatGPT Pro, Team, and Enterprise users. So, is Codex the AI pair programmer developers have been waiting for? Let's learn more about Codex by OpenAI.
What is OpenAI Codex?
OpenAI Codex is a cloud-based software engineering agent designed to work on many coding tasks at the same time. It typically finishes a task in one to thirty minutes. When done, users can request tweaks, open a PR directly from the UI, or pull the patch into their local branch.
Concurrent Task Management: It can write features, answer codebase questions, run tests, and propose pull requests for review, all at the same time.
Secure, Sandboxed Operations: Each task runs in its own isolated cloud environment, preloaded with your code repository for security and context.
Transparent Action Logs: Provides detailed terminal logs, test outputs, and citations so you can verify every step it takes.
Guided by AGENTS.MD: Users can create AGENTS.MD files in their repositories to instruct Codex on project-specific commands, testing procedures, and coding standards.
Powered by codex-1: codex-1, the model behind Codex, is an OpenAI o3 variant fine-tuned for software engineering through reinforcement learning on real-world coding tasks.
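To make the AGENTS.MD idea concrete, here is an illustrative sketch of what such a file might contain. The file name comes from the announcement; the specific commands and conventions below are hypothetical placeholders you would replace with your own project's:

```markdown
# AGENTS.MD — guidance for Codex (illustrative example; commands are project-specific)

## Setup
- Install dependencies with `npm ci` before doing anything else.

## Testing
- Run the full test suite with `npm test`; all tests must pass before proposing a PR.
- Run the linter with `npm run lint` and the type checker with `npm run typecheck`.

## Conventions
- Use 2-space indentation and single quotes in JavaScript files.
- Write commit messages in the imperative mood ("Add X", not "Added X").
```

The point is simply that project knowledge normally passed along in code review—how to test, what style to follow—can be written down once where the agent will see it on every task.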
How to use Codex
Using Codex in ChatGPT is straightforward.
Users can access Codex through the ChatGPT sidebar and assign coding tasks by typing a prompt or asking questions about their codebase.
Each request is handled independently, and Codex can read and edit files and run commands like test suites, linters, and type checkers.
Task completion generally takes between one and thirty minutes, and you can watch its progress.
Once done, Codex commits its changes within its sandboxed environment. Users can then review the diff, ask for more changes, open a GitHub PR, or pull the changes into their local setup. Users can also configure the Codex environment to mirror their actual development setup.

A key goal for codex-1 was to produce code that aligns well with human preferences, resulting in cleaner patches compared to previous models.
Security has also been a focus. Codex operates in an isolated container with no internet access during task execution, interacting only with the code and dependencies you provide.
It's also trained to identify and refuse malicious software development requests while supporting legitimate, complex engineering tasks.
Updates to the Codex CLI
OpenAI has also updated the Codex CLI, a lightweight open-source coding agent that runs in your terminal, giving it a faster default model for local development.
New codex-mini-latest Model: A smaller, quicker version of codex-1 (based on o4-mini) is now the default, designed for low-latency code Q&A and editing.
Simplified Authentication: You can now sign in with your ChatGPT account, which will automatically configure the API key for you.
Local Terminal Workflow: It integrates directly into your command-line interface for quick coding interactions.
Initial API Credits: ChatGPT Plus and Pro users signing in can get $5 and $50 in free API credits, respectively, for 30 days.
Open-Source Availability: The tool remains open-source for community access and contribution.
The new codex-mini-latest model in the CLI aims for faster local workflows while maintaining strong instruction-following and style consistency. This model is also available via the API.
Getting Your Hands on Codex: Availability, Cost, and Caution
Codex is currently rolling out to ChatGPT Pro, Enterprise, and Team users. Access for Plus and Edu users is planned soon. Users will have substantial access for the initial weeks at no extra charge. After this period, OpenAI will introduce rate limits and flexible pricing for additional usage.
For developers using the API, the codex-mini-latest model is priced at $1.50 per million input tokens and $6.00 per million output tokens, with a 75% discount for prompt caching.
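For a rough sense of those numbers, here is a small sketch that estimates the cost of a hypothetical request at the stated rates. It assumes the 75% prompt-caching discount applies only to the cached portion of input tokens; check the pricing page for the exact billing rules:

```python
# Sketch: estimate codex-mini-latest API cost from the article's published rates.
# Rates: $1.50 per 1M input tokens, $6.00 per 1M output tokens,
# with an assumed 75% discount on the cached portion of the input.

INPUT_RATE = 1.50 / 1_000_000   # dollars per input token
OUTPUT_RATE = 6.00 / 1_000_000  # dollars per output token
CACHE_DISCOUNT = 0.75           # 75% off cached input tokens (assumed)

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Return the estimated request cost in dollars."""
    uncached = input_tokens - cached_input_tokens
    cost = (
        uncached * INPUT_RATE
        + cached_input_tokens * INPUT_RATE * (1 - CACHE_DISCOUNT)
        + output_tokens * OUTPUT_RATE
    )
    return round(cost, 6)

# A request with 100k input tokens (half of them cached) and 10k output tokens:
print(estimate_cost(100_000, 10_000, cached_input_tokens=50_000))  # → 0.15375
```

Even a fairly large request comes out to a fraction of a dollar, which is the kind of back-of-the-envelope math worth doing before wiring an agent into a busy CI pipeline.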
It's important to remember that Codex is a research preview. It currently doesn't support image inputs for frontend tasks and doesn't allow users to adjust its course while a task is active.
Also, delegating tasks to a remote agent introduces a delay compared to interactive editing, which might take some getting used to. OpenAI suggests that, over time, working with Codex will feel more like asynchronous collaboration with a colleague.
OpenAI highlighted that manual review and validation of all agent-generated code is still important. The system is designed to communicate uncertainties or test failures clearly, allowing developers to make informed decisions.
Conclusion
Codex won't replace human judgment, but it already feels like a tireless junior dev who shows receipts for every task. If you maintain a well-tested codebase and can frame tasks clearly, the agent can keep your sprint board lighter and your mind on the work that actually needs you.
Bottom line: The future of engineering will likely be a mix of asynchronous agents and human review loops. Codex's preview is our first real glimpse at that workflow, integrating AI into the software development lifecycle. Now is a good time to improve your prompt engineering skills, clean up your tests, and see how much weight you can hand off.
The idea is to reduce context switching and help engineers focus on more complex problem-solving. The agent's true value will emerge as developers fold it into their daily routines and discover where it best fits their specific needs and projects.