According to Decrypt, a recent study by Northwestern University has uncovered a significant vulnerability in custom Generative Pre-trained Transformers (GPTs). While they can be customized for various applications, they are also susceptible to prompt injection attacks that can expose sensitive information. GPTs are advanced AI chatbots created and shaped by users of OpenAI’s ChatGPT, using the core Large Language Model (LLM), GPT-4 Turbo, and enhanced with additional unique elements that influence user interaction.

However, the study found that the parameters and sensitive data used to shape GPTs can easily be accessed by third parties. Testing over 200 custom GPTs revealed a high susceptibility to attacks and jailbreaks, leading to potential extraction of initial prompts and unauthorized access to uploaded files. The researchers emphasized the dual risks of such attacks, threatening the integrity of intellectual property and user privacy. The study also highlighted that existing defenses, like defensive prompts, are not foolproof against sophisticated adversarial prompts, requiring a more robust and comprehensive approach to securing these AI models.

In light of these findings, the study urges the broader AI community to prioritize the development of stronger security measures for custom GPTs. The customization of GPTs offers immense potential, but this study serves as a crucial reminder of the associated security risks. Advancements in AI must not compromise user security and privacy, and users should be cautious when training GPTs with sensitive data.