How ChatGPT's Lockdown Mode protects against prompt-injection attacks
Prompt injection attacks can insert malicious instructions into text prompts to alter AI outputs or steal confidential information. These threats affect anyone who uses AI tools, but they pose a particular risk to professionals who rely on them at work. OpenAI has introduced Lockdown Mode to reduce that risk.
The optional setting is aimed at security-minded users such as executives and security teams and is available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers. Lockdown Mode limits how ChatGPT interacts with external systems and data, focusing protection on the tools and capabilities judged most at risk.
For example, web browsing is limited to cached content so no live requests leave OpenAI’s network, and other features are disabled unless the data can be confirmed safe. Workspace administrators can control which apps and actions Lockdown Mode governs, adding a layer beyond existing enterprise protections.
lockdown mode, prompt injection, chatgpt enterprise, chatgpt edu, web browsing, cached content, workspace administrators, external systems, enterprise protections, security teams