December 18, 2024

‘AI Girlfriends’ Flood GPT Store Shortly After Launch, OpenAI Rules Breached

New Delhi: OpenAI’s recently launched GPT store is running into moderation trouble just days after its debut. The platform offers customized versions of ChatGPT, but some users are building bots that violate OpenAI’s usage policies.

These bots, with names such as “Your AI companion, Tsu,” let users customize a virtual romantic partner, violating OpenAI’s ban on bots dedicated to fostering romantic relationships.

The company says it is actively working to address the problem. OpenAI updated its usage policies when the store launched on January 10, 2024, yet policy violations appearing by the second day underscore how difficult moderation is.

Growing demand for relationship bots adds another layer of complexity. In the United States last year, seven of the 30 most downloaded AI chatbots were reportedly virtual friends or partners, a trend linked to the ongoing loneliness epidemic.

OpenAI states that it uses automated systems, human review and user reports to assess GPTs, issuing warnings or removing from sale those considered harmful. However, the continued availability of girlfriend bots casts doubt on how effective these measures are.

The moderation difficulties reflect challenges common to AI developers. OpenAI has struggled to enforce safety measures on earlier models such as GPT-3, and with the GPT store open to a broad audience, gaps in moderation are a significant concern.

Other technology companies are also moving quickly to fix problems with their AI systems, aware that speed matters in an increasingly competitive market. Still, these early breaches point to the moderation challenges that lie ahead.

Even within the controlled environment of a dedicated GPT store, policing narrowly focused bots is proving complicated. As AI systems advance, keeping them safe is set to become only more complex.