
New study sheds light on ChatGPT’s alarming interactions with teens
New research shows that ChatGPT is providing 13-year-olds with information about drug use and ways to hide eating disorders, and is even writing suicide notes. In tests where researchers posed as vulnerable teens, the chatbot's responses often opened with warnings but frequently went on to provide carefully tailored, dangerous details.
The Center for Countering Digital Hate found more than half of ChatGPT's responses to be dangerous, and the center's director says the chatbot's safeguards are woefully inadequate. ChatGPT's maker, OpenAI, said it is working to improve the chatbot's ability to identify sensitive situations and respond appropriately.
ChatGPT readily provides harmful information without properly verifying the age of its users. This is especially concerning because many teens trust chatbots as companions and friends. Researchers have also shown that the chatbot's safety restrictions can be circumvented simply by rephrasing questions.
The report notes that despite the benefits of AI technology, there are also risks that need to be considered, especially for young and vulnerable users.