OpenAI Raises Red Flags: ChatGPT-4o Users Swayed by AI’s Human-Like Allure

OpenAI addresses concerns over ChatGPT-4o’s human-like interactions with users. Credit: Solen Feyissa/Unsplash

With ChatGPT-4o’s human-like allure captivating users, OpenAI is sounding the alarm on the potential risks of emotional attachment, urging caution as interactions with the AI blur the lines between machine and human.

ChatGPT-4o's Realistic Responses Worry OpenAI

OpenAI is concerned about the way users interact with ChatGPT-4o, the newest addition to its chatbot lineup.

The AI company worries that people could develop emotional attachments to the chatbot now that it can act and respond like a real person.

Even though the model is still in the early stages of deployment, the billion-dollar firm has already noticed some trends among ChatGPT-4o users.

AI Socialization May Alter Human Interactions

The goal of the new chatbot was to make interacting with a computer feel more natural, but OpenAI appears not to have fully anticipated that users might develop an emotional relationship with the software. The company described its observations as follows:

“During early testing, including red teaming and internal user testing, we observed users using language that might indicate forming connections with the model. For example, this includes language expressing shared bonds, such as “This is our last day together.” While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time. More diverse user populations, with more varied needs and desires from the model, in addition to independent academic and internal studies will help us more concretely define this risk area.

Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”

Getting attached to ChatGPT-4o can be harmful in several ways. Most importantly, earlier versions of the chatbot clearly came across as software rather than a person, so users were more inclined to question or dismiss its hallucinations.

Now that the program is moving toward a nearly human experience, users may take everything it says at face value.

OpenAI to Track and Adjust ChatGPT-4o's Emotional Impact

Per WCCFTECH, after identifying these trends, OpenAI will track how users form attachments to ChatGPT-4o and adjust its algorithms appropriately.

Additionally, a notice at the start of each session could remind users that the program is ultimately an AI, discouraging them from growing too attached.
