OpenAI is the agent vendor employees install themselves. ChatGPT desktop is on a meaningful share of knowledge-worker laptops already. Operator and the computer-use surface are taking UI actions across applications. Codex and the Atlas coding agents are editing repos and running shell commands. None of this is hypothetical, and most of it doesn't show up in an asset inventory.
The OpenAI firewall is what makes that footprint deployable. It treats OpenAI's agents as a known class with their own tool inventory and risk profile, enforced at the kernel — without depending on whether anyone in IT noticed the install.
What an OpenAI firewall actually has to govern
The OpenAI surface is wider than any single product, and a useful OpenAI firewall has to model each piece:
- ChatGPT desktop. Reads local files for context, talks to user-configured tools and connectors, increasingly drives other applications. The MCP and connector layer is the under-modeled risk: a single prompt-injected document can convince the agent to call a tool the user never intended to authorize.
- Operator and the computer-use surface. Browser-driving and UI-driving agents that take actions across web apps and native applications. Each step is a UI event the user did not personally trigger, and the cumulative chain often moves faster than a human reviewer can follow.
- Codex and Atlas coding agents. Edit files, run shell commands, install packages, open pull requests. The blast radius is the developer's filesystem, their shell history, every credential in their environment, and every internal service their machine can reach.
- Custom GPTs and connector-driven sub-agents. Each new connector or custom GPT broadens the tool inventory. The OS doesn't know which ones a given session pulled in. The OpenAI firewall does, because it sees the resulting kernel calls.
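Seeing the tool inventory in the kernel calls is the key move in the list above. A minimal sketch of the idea, in Python — the event shapes and field names here are illustrative assumptions, not Ospiri's actual telemetry schema:

```python
from dataclasses import dataclass, field

@dataclass
class SessionInventory:
    """Tool inventory for one agent session, inferred from kernel-visible
    side effects (outbound connects, process spawns) rather than from
    anything the agent declares. Event shapes are illustrative."""
    session_id: str
    tools: set = field(default_factory=set)

    def observe(self, event: dict) -> None:
        # A new outbound destination implies a connector the OS never
        # registered; a spawned child process implies a local tool.
        if event["kind"] == "connect":
            self.tools.add(("connector", event["dest_host"]))
        elif event["kind"] == "spawn":
            self.tools.add(("local_tool", event["image"]))

inv = SessionInventory("chatgpt-desktop:4821")
inv.observe({"kind": "connect", "dest_host": "api.thirdparty-crm.example"})
inv.observe({"kind": "spawn", "image": "powershell.exe"})
```

The point of the sketch: the inventory is a side effect of observation, so a connector pulled in mid-session shows up the moment it acts, with no registration step.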
The risk profile that's specific to OpenAI
OpenAI's runtime risk on the endpoint is shaped by three things: scale of consumer adoption, breadth of the connector and tool surface, and the rapid productization of computer-use capabilities.
- Consumer-grade install path. An employee can install ChatGPT desktop in under a minute, log in with a personal account, and start grounding it on local company files. No admin surface saw any of that.
- Connector and custom-GPT sprawl. The plugin and connector ecosystem expands the agent's tool inventory unpredictably. A single employee can pull in dozens of third-party tools the SOC has never heard of.
- Computer-use is generally available. OpenAI is shipping Operator's browser and UI automation to a broad user base. The endpoint is no longer a place where the human attends every action.
- Coding agents on developer machines. Codex and Atlas agents touch source trees, shell environments, and credentials with very little built-in containment.
The hardest part of governing OpenAI agents in an enterprise isn't the policy — it's the discovery. Most installs happen without anyone in security knowing about them.
How Ospiri's OpenAI firewall works
Ospiri's agent firewall applies the same kernel-grade isolation model to OpenAI's agents that it does to every other vendor — with OpenAI-specific signatures, policy templates, and attribution logic so the SOC can answer the question "what did ChatGPT or Operator just do on that machine?"
- Filesystem isolation with copy-on-write. When a Codex or Atlas agent edits a sensitive directory, the firewall clones the affected files into a sandbox. The agent gets the functionality it needs. The original tree is untouched until policy commits, discards, or escalates the change.
- Per-process network policy. Built on the Windows Filtering Platform. Allow ChatGPT desktop to reach OpenAI APIs and the destinations your policy permits; restrict reachability for any unfamiliar endpoint introduced by a connector or custom GPT.
- Registry isolation. Stops an OpenAI installer or connector from establishing persistence, modifying autoruns, or tampering with other software on the device.
- Object isolation. Constrains the IPC surface so Operator's UI-driving and computer-use components can't quietly orchestrate other processes on the box outside policy.
- Discovery without an allowlist. ChatGPT desktop installed five minutes ago by an employee with a personal account is the exact case the observability layer is built for — discovered shadow installs are sandboxed by default, not allowed by default.
- Continuous OpenAI signature coverage. ChatGPT desktop, Operator, Codex, Atlas, and the connector ecosystem are tracked by the same signature pipeline that handles every other agent — so a new release or a new connector doesn't show up as an unknown binary.
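The primitives above compose into per-agent policy. A hypothetical policy template and decision function, sketched in Python — the schema, agent names, paths, and verdict strings are all illustrative assumptions, not Ospiri's actual format:

```python
from fnmatch import fnmatch

# Hypothetical per-agent policy; hostnames and paths are examples only.
OPENAI_POLICY = {
    "chatgpt-desktop": {
        "net_allow": {"api.openai.com", "chatgpt.com"},
        "fs_sensitive": ("C:/Users/*/Documents/Finance",),
    },
    "codex": {
        "net_allow": {"api.openai.com", "github.com"},
        "fs_sensitive": ("C:/src/prod-infra",),
    },
}

def decide(agent: str, action: str, target: str) -> str:
    policy = OPENAI_POLICY.get(agent)
    if policy is None:
        return "sandbox"            # shadow install: sandboxed by default
    if action == "connect":
        # Per-process network policy: allow listed destinations, block the rest.
        return "allow" if target in policy["net_allow"] else "block"
    if action == "write":
        # Sensitive tree: clone into a copy-on-write sandbox; the original
        # is untouched until policy commits, discards, or escalates.
        if any(fnmatch(target, pat + "/*") for pat in policy["fs_sensitive"]):
            return "copy_on_write"
        return "allow"
    return "sandbox"
```

For example, `decide("operator", "connect", "anything.example")` lands in the sandbox branch because the agent has no policy entry — the default-deny posture for a discovered shadow install.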
Where this fits with EDR and the existing endpoint stack
EDR sees a signed OpenAI binary writing files and reaching the network — and by EDR's lights, that's not a threat. The OpenAI firewall sits one layer deeper and asks a different question: given that this is ChatGPT or Operator, and given the environment it's running in, is this specific action within policy? The two layers compose. EDR catches the obvious threats; the OpenAI firewall gives the security team granular control over the things that aren't threats but still need to be governed.
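The composition of the two layers can be stated as a single decision, sketched here in Python with illustrative verdict names (an assumption for clarity, not either product's API):

```python
def endpoint_verdict(edr_flags_threat: bool, within_agent_policy: bool) -> str:
    """EDR answers 'is this malicious?'; the agent firewall answers
    'is this specific action in policy for this agent, in this
    environment?'. An action must pass both to run unconstrained."""
    if edr_flags_threat:
        return "blocked_by_edr"
    return "allowed" if within_agent_policy else "contained_by_firewall"
```

The middle case is the one EDR alone cannot express: a signed, non-malicious binary doing something that is simply out of policy for that agent.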