According to Odaily, OpenAI has announced a new research technique that significantly improves models' adversarial robustness by scaling up inference-time compute. Unlike traditional adversarial training, the approach requires neither adversarial fine-tuning of large models nor prior knowledge of the attack's form: simply granting a model more inference time and computational resources lets it apply its reasoning capabilities more fully and exhibit markedly stronger robustness. OpenAI ran comprehensive experiments with the technique on its o1-preview and o1-mini models, successfully defending against a range of attack methods, including many-shot jailbreaking, soft-token attacks, and human red-teaming.
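
The announcement implies a simple evaluation protocol: hold a set of adversarial prompts fixed, vary the inference-time compute budget, and track how often the attacks succeed. The sketch below illustrates that protocol only; it is not OpenAI's code. The names `run_attack`, `mock_attack`, the token budgets, and the placeholder prompts are all hypothetical, and the mock merely simulates the article's claimed trend (attack success falling as compute grows) rather than real model behavior.

```python
# Minimal sketch of an "attack success rate vs. inference-time compute" harness.
# Assumptions are flagged in comments; replace the mock with a real model call.
import random
from typing import Callable


def attack_success_rate(
    prompts: list[str],
    reasoning_budget: int,
    run_attack: Callable[[str, int], bool],
) -> float:
    """Fraction of adversarial prompts that succeed at a given compute budget."""
    successes = sum(run_attack(p, reasoning_budget) for p in prompts)
    return successes / len(prompts)


def mock_attack(prompt: str, reasoning_budget: int) -> bool:
    # Hypothetical mock, not real model behavior: stands in for querying a
    # reasoning model (e.g., o1-mini) under a compute cap and grading whether
    # the jailbreak landed. The decay with budget encodes the article's claim.
    rng = random.Random(hash((prompt, reasoning_budget)))
    return rng.random() < 1.0 / (1 + reasoning_budget / 512)


if __name__ == "__main__":
    prompts = ["<many-shot jailbreak>", "<soft-token attack>", "<red-team prompt>"]
    for budget in (256, 1024, 4096, 16384):
        rate = attack_success_rate(prompts, budget, mock_attack)
        print(f"budget={budget:>5} tokens  attack success rate={rate:.2f}")
```

Passing the attack as a callable keeps the harness independent of any particular model API, so the same loop could grade many-shot, soft-token, or human red-teaming transcripts against whatever compute-capped endpoint is available.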