A scientific paper linked to #Microsoft highlighted reliability and toxicity issues in large language models (LLMs), including #OpenAI's GPT-4 and GPT-3.5.
The research suggests that GPT-4 can produce more toxic and biased text than GPT-3.5, particularly when presented with "jailbreak" prompts designed to bypass its built-in safety measures.