OpenAI plans to allow a wider range of content, including erotica, on its popular chatbot ChatGPT as part of a push to 'treat adult users like adults', its boss Sam Altman says.

In a post on X on Tuesday, Mr. Altman said upcoming versions of the chatbot would be able to behave in a more human-like way - 'but only if you want it, not because we are usage maxxing'.

The move, reminiscent of Elon Musk's xAI recently introducing sexually explicit chatbots to Grok, could help OpenAI attract more paying subscribers. However, it is likely to intensify pressure on lawmakers to impose tighter restrictions on chatbot companions.

OpenAI did not respond to requests for comment following Mr. Altman's post. The changes come after the company was sued earlier this year by the parents of a US teenager who took his own life. The lawsuit accused OpenAI of wrongful death and criticized the company's parental controls as inadequate.

Mr. Altman acknowledged that OpenAI had previously made the chatbot restrictive to protect users, particularly those experiencing mental health issues. However, he said the company has now developed tools that allow it to safely relax those restrictions for most users.

He elaborated: 'In December, as we roll out age-gating more fully and as part of our "treat adult users like adults" principle, we will allow even more, like erotica for verified adults.'

Critics have voiced concerns about OpenAI's decision to allow adult content, emphasizing the need for strong regulation to prevent children from accessing such material. Legal experts have questioned the effectiveness of the company's age-gating measures and called for greater accountability from AI companies.

Meanwhile, California Governor Gavin Newsom recently vetoed a bill that would have restricted AI companions for children, citing the importance of teaching adolescents how to interact safely with AI.

OpenAI's announcement comes amid growing skepticism about the sustainability of the AI market, as companies face continued scrutiny over their practices and the implications for user safety.