On April 22, 2026, OpenAI introduced Privacy Filter, a new AI tool that finds and hides personal information in text. It can mask names, home addresses, email addresses, phone numbers, sensitive dates, account numbers, and even secrets such as passwords or API keys. OpenAI released it as an open-weight model, so developers can download the weights and run it inside their own systems. (openai.com)
Why does this matter? Many older privacy tools look mainly for fixed patterns, such as the shape of a phone number or an email address. OpenAI says Privacy Filter makes better use of context: it tries to understand the meaning around the words, not just their shape, which helps it decide when information should stay visible and when it should be hidden. OpenAI also says the model can run locally on a device, so private data never has to leave the machine before masking happens. (openai.com)
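To make the contrast concrete, here is a minimal sketch of the pattern-based approach those older tools rely on. The regular expressions are illustrative examples, not production-grade patterns, and nothing here reflects how Privacy Filter itself works:

```python
import re

# Fixed patterns for two common PII shapes. Real pattern-based tools
# use larger rule sets, but the idea is the same: match the shape,
# ignore the meaning around it.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_patterns(text: str) -> str:
    """Replace anything shaped like an email or phone number with a tag."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

sample = "Write to ana@example.com or call +1 415 555 0199."
print(mask_patterns(sample))  # → Write to [EMAIL] or call [PHONE].
```

Because such rules only see shape, they can both over-match (a ticket number that happens to look like a phone number) and under-match (an address written in free-form prose). That gap is exactly what a context-aware model is meant to close.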
Privacy Filter is also built for speed. It reads text in a single pass, and the released model handles long inputs of up to 128,000 tokens. OpenAI describes it as a small model, with 1.5 billion total parameters and 50 million active parameters. In OpenAI's evaluation it reached an F1 score of 96% on the PII-Masking-300k benchmark, and 97.43% on a corrected version of that benchmark. OpenAI also says the model can be fine-tuned for specialized tasks, such as enforcing company-specific privacy rules. (openai.com)
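The reported F1 score is the harmonic mean of precision (how many masked spans were really PII) and recall (how many real PII spans were masked). A short sketch with toy counts, not the benchmark data, shows how the number is computed:

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall, the metric behind scores like 96%."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Toy example: a masker that finds 96 of 100 true PII spans (4 missed)
# and wrongly flags 4 non-PII spans.
print(round(f1_score(96, 4, 4), 2))  # → 0.96
```

Because F1 punishes an imbalance between the two, a tool cannot score well by masking everything (low precision) or by masking almost nothing (low recall).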
Still, OpenAI is clear that Privacy Filter is not perfect. It is not a full anonymization tool, not a certificate of compliance, and not a replacement for human review in high-risk fields such as law, medicine, or finance. The company frames it as one part of a larger privacy system. Even so, the release illustrates an encouraging idea: AI can do more than create text; it can also help protect the people inside it. (openai.com)