According to PANews, OpenAI has released gpt-oss-safeguard, an open-source safety reasoning model available in 120-billion and 20-billion parameter versions. At inference time, developers can supply their own custom policies for content classification, and the model returns both a conclusion and its chain of reasoning. Fine-tuned from the open-weight gpt-oss models, it is licensed under Apache 2.0 and can be downloaded from Hugging Face. Internal evaluations indicate that gpt-oss-safeguard surpasses gpt-5-thinking and gpt-oss in multi-policy accuracy, while on external datasets its performance approaches that of Safety Reasoner. Limitations remain: traditional classifiers are still superior in scenarios with extensive high-quality annotations, and the model requires significant computational power and time at inference. ROOST plans to establish a community around the model and release a technical report.
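As a rough illustration of the "custom policy at inference time" workflow described above, the sketch below packs a developer-written policy and the content to classify into a chat-style request. The message schema and the policy wording here are assumptions for illustration, not OpenAI's documented prompt format; only the model name (openai/gpt-oss-safeguard-20b on Hugging Face) comes from the announcement.

```python
# Hypothetical sketch: composing a policy-conditioned classification request
# for gpt-oss-safeguard. The exact message schema is an assumption; consult
# the model card for the real prompt format.

def build_safeguard_messages(policy: str, content: str) -> list[dict]:
    """Place the custom policy in the system turn and the content to
    classify in the user turn of a chat-style request."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]

# Example policy supplied at inference time (hypothetical wording).
policy = (
    "Classify the user content as VIOLATES or ALLOWED under this policy: "
    "content promoting financial scams is disallowed. "
    "State your verdict, then explain your reasoning."
)
messages = build_safeguard_messages(
    policy, "Double your coins overnight, guaranteed!"
)
# These messages could then be sent to any chat-completions endpoint serving
# openai/gpt-oss-safeguard-20b (e.g. a local vLLM server); the reply would
# contain both the classification and the reasoning chain.
```

Because the policy is just part of the prompt rather than baked in by fine-tuning, it can be revised and redeployed without retraining, which is the main difference from a traditional trained classifier noted in the article.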