According to PANews, OpenAI has launched a 'Safety Evaluations Hub' to improve the safety and transparency of its models. The hub will provide ongoing updates on how OpenAI's models perform on safety evaluations covering harmful content, jailbreak attacks, hallucinations, and instruction prioritization. Unlike system cards, which disclose safety data only once at model release, the hub will be updated periodically alongside model releases and will allow comparisons across models. The goal is to deepen community understanding of AI safety and to support regulatory transparency. Currently, GPT-4.5 and GPT-4o are noted for their strong performance in resisting jailbreak attacks and maintaining factual accuracy.