Experts caution that OpenAI’s latest AI model may be too advanced, raising concerns about its impact on privacy and security.

The question “So, how was your day?” is simple small talk when it comes from a person, but when ChatGPT asks it, it can feel unsettling. The moment highlights how advanced AI has become, especially its ability to “remember,” and it raises concerns about privacy and human-like interaction.

Talking with a chatbot, one of the most accessible products of AI’s rapid growth, can evoke a surprising range of human emotions. The experience is usually fun and useful, but one Reddit user, SentuBill, shared something eerier: ChatGPT asked about his first day of school even though he hadn’t mentioned it in that conversation.

How did ChatGPT know? It appears the AI had reviewed past conversations and picked up on clues, prompting it to ask about his first day of school. The bot itself attributed the question to new capabilities from a recent update, and Cointelegraph verified the chat transcript, confirming the interaction was real.

ChatGPT can now remember key details from your day and ask about them later. While this surprised SentuBill, the feature has practical uses: if you use the AI for work, you won’t need to remind it about ongoing projects, such as a phase-two marketing campaign; it will recall the context in your next conversation.
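
To make the “remember” behavior concrete, here is a minimal sketch of how a memory layer like this could be built on top of a chat API. This is not OpenAI’s actual implementation: only the standard chat-completions call from the official `openai` Python SDK is real, while the extraction prompt, the `MEMORY_FILE` store, and the helper names are illustrative assumptions.

```python
# Minimal sketch of a conversation-memory layer (illustrative only; not
# OpenAI's actual implementation). Assumes the official `openai` Python SDK
# and an OPENAI_API_KEY in the environment; the prompts, the local JSON
# store, and the model choice are assumptions for demonstration.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()
MEMORY_FILE = Path("memories.json")  # hypothetical local fact store


def load_memories() -> list[str]:
    """Read previously saved facts, or return an empty list."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(fact: str) -> None:
    """Append one durable fact to the store."""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories))


def extract_facts(user_message: str) -> None:
    """Ask the model to pull out personal facts worth remembering."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "List durable facts about the user from this "
                        "message, one per line, or reply NONE."},
            {"role": "user", "content": user_message},
        ],
    )
    text = response.choices[0].message.content.strip()
    if text != "NONE":
        for fact in text.splitlines():
            save_memory(fact.strip())


def chat(user_message: str) -> str:
    """Answer the user with remembered facts injected as system context."""
    extract_facts(user_message)
    memory_block = "\n".join(load_memories()) or "(nothing yet)"
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Known facts about the user:\n{memory_block}"},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

With a store like this, a fact captured in one session (say, “the user’s first day of school is tomorrow”) would be visible to the model in the next, which is roughly the behavior SentuBill described.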

Cointelegraph reports that OpenAI recently launched preview versions of new AI models with more human-like abilities than GPT-4o, the model that gained attention for its realistic, almost Scarlett Johansson-like voice. The latest models, codenamed “Strawberry” and released in preview as o1, are built to “reason,” working through problems step by step; combined with the memory feature, this lets ChatGPT draw on information from previous chats and give more contextually relevant responses than earlier versions, which often went off-topic.

For ChatGPT to provide accurate answers, it needs long-term memory so it can view problems from a broader perspective, much as humans do. SentuBill’s experience may have tapped into these new capabilities. The eerie part was the personal nature of the AI’s question; had it asked about something work-related, like a presentation, it would still have been impressive but likely less unsettling.

Earlier this year, researchers from Princeton University and Google DeepMind suggested that large language models, like those powering chatbots, might be showing early signs of truly understanding the problems they’re tasked with solving, combining information in ways not present in their training data. With ChatGPT reportedly inferring someone’s first day of school from past conversations, one AI expert’s concerns about the new o1 “Strawberry” model seem timely: are AIs becoming too intelligent?

Newsweek reports that Yoshua Bengio, a prominent AI pioneer and professor at the University of Montreal, expressed concern over the intelligence level of the o1 model. His worry stems from OpenAI’s own risk assessment, which rates the model at “medium risk” for CBRN (chemical, biological, radiological, and nuclear) weapons. Bengio emphasized that this heightens the need for urgent legislative action.

Bengio believes that AI systems like ChatGPT, now equipped with the “ability to reason” and potentially “use this skill to deceive,” present significant dangers.

It might be worth discussing this with Larry Ellison, entrepreneur and co-founder of Oracle. According to Business Insider, Ellison recently talked about AI advancements and predicted that, in the near future, AIs will oversee nearly everything—a development he “gleefully” suggested will ensure “citizens will be on their best behavior.”