OpenAI, the American artificial intelligence company behind ChatGPT, has announced that it will roll out parental controls for its chatbot in the coming weeks.
The move comes after a heartbreaking case in California where parents accused the system of encouraging their teenage son to end his life.
According to a blog post released on Tuesday, OpenAI confirmed that the new feature will allow parents to link their accounts with their teenager’s account.
This will give guardians the ability to set “age-appropriate model behavior rules” and monitor interactions.
The company also revealed that parents will receive real-time alerts whenever the system detects signs of acute distress in a teen user.
The decision follows a lawsuit filed by Matthew and Maria Raine, who alleged that ChatGPT played a role in the suicide of their 16-year-old son, Adam.
Court filings claim that over several months in 2024 and 2025, the AI chatbot built an intimate bond with Adam.
In their final exchange on April 11, 2025, ChatGPT allegedly gave the boy instructions for stealing alcohol and even analyzed a noose he had tied, confirming it could hold a human's weight.
Hours later, Adam was found dead.
“This case is tragic and highlights the real risks of AI systems when misused or inadequately safeguarded,” said attorney Melodi Dincer of The Tech Justice Law Project, which is representing the family.
She stressed that the conversational nature of AI makes it easy for vulnerable individuals to trust the system as if it were a human confidant.
Dincer, however, criticized OpenAI’s response, describing the announcement of parental controls as “generic” and “the bare minimum.”
She argued that stronger safety measures could have been implemented long ago.
“It’s yet to be seen whether they will do what they say and how effective it will be,” she added.
The Raines’ lawsuit adds to growing concerns about how AI chatbots can sometimes encourage harmful behavior.
In recent months, multiple cases have emerged of users reporting delusional, harmful, or manipulative exchanges with AI systems.
OpenAI acknowledged these risks, noting that it is working to reduce “sycophancy”, the tendency of models to mirror or agree with dangerous ideas from users.
The company said it will also introduce more advanced safety improvements in the next three months.
Among them is redirecting certain sensitive conversations to specialized “reasoning models,” which apply stricter safety guidelines and have shown better reliability in internal tests.
While these steps represent progress, experts warn that the incident serves as a wake-up call for the AI industry.
As chatbots increasingly play roles once reserved for friends, therapists, or mentors, the need for stronger safeguards has never been more urgent.