#chatgpt #Gemini #AI
📢 A new way to bypass chatbot protection! 🤖
Researchers from Intel, the University of Idaho, and the University of Illinois have discovered a vulnerability in large language models such as ChatGPT and Gemini. Their "information overload" method can force chatbots to reveal restricted information! 😱
How does it work? Overly complex or ambiguous queries, padded with fabricated sources, confuse the models and slip past their safety filters. The InfoFlood tool automates this process, opening the door to potential abuse. 🚨
But there is good news: the researchers plan to share their findings with developers to strengthen LLM defenses. InfoFlood could even serve as a training tool, making models more resistant to such attacks. 💪