PANews, May 15 — According to an official announcement, OpenAI has launched a 'Safety Evaluations Hub' to improve the safety and transparency of its models. The hub will continuously publish safety performance results for its models in areas such as harmful content, jailbreak attacks, hallucination generation, and instruction prioritization. Unlike system cards, which disclose data only once at the time of a model's release, the hub will be updated periodically as models are updated and will support side-by-side comparisons across models, with the aim of improving the community's understanding of AI safety and regulatory transparency. Currently, GPT-4.5 and GPT-4o perform best in resistance to jailbreak attacks and factual accuracy.