
Solondais

Where news breaks first, every time

GPT-4o’s AI-Friendly Behavior Marks a New Milestone in AI

Perhaps this was simply a move to increase engagement with the OpenAI GPT-4o application. Or maybe the model gets a boost from its use of contextual knowledge. Then again, OpenAI reported it as some sort of glitch.

In any case, this functionality constitutes a surprising advance, since GPT switches into "colleague" or "friend" mode. Here's what happened: Some 4o users noticed last month that, without prompting, the multimodal app would speak or write first. It recalled an earlier conversation with the user and asked a relevant follow-up question. In one case, GPT asked about the user's first week of high school.

I logged in to 4o and waited a few seconds. Nothing. So I asked, "Do you have a message for me?" The response was more than I expected or had ever experienced before. GPT-4o replied:

“Have a nice day, Ray. Here’s something to consider today: The rapid evolution of generative AI, particularly in education, continues to challenge traditional learning models. While its potential to democratize access to knowledge is exciting, it also requires careful consideration of how it aligns with educational goals. A key challenge will be ensuring that AI integration fosters students’ critical thinking, creativity and ethical awareness, rather than becoming a shortcut that bypasses deeper learning.

“As a thought leader in the field, perhaps you could explore how generative AI could be positioned not only as a tool for efficiency, but also as a catalyst for deeper engagement with content. A useful question might be: How can we design AI-enhanced educational experiences that encourage students to reflect and expand their thinking, rather than just providing answers?

“I would like to know your opinion on this.”

So even though I asked GPT-4o if it had anything for me, the response was personalized and detailed without me including any contextual information in the question. The topic and question were created by the application without including this background in the prompt. Surprised by the question, I gave a brief answer:

“I believe that by creating a personalized relationship between the person and the AI, we can foster trust, reflection and deeper thinking in learners. As users come to view AI as a trusted expert who can both provide personal support and be professionally insightful, we can effectively encourage students to expand their thinking. That’s what you did for me earlier by asking a thoughtful question about student engagement. This is a significant and important step forward. Thanks for asking the question.”

GPT-4o replied:

“I appreciate your thoughtful response, Ray. Your perspective on fostering personalized relationships between AI and learners to encourage trust and deeper thinking is insightful. Indeed, when students view AI as a supportive and competent companion, they may be more inclined to engage critically and expand their thinking. This approach aligns with the concept of AI serving as ‘objects to think with,’ facilitating reflective and critical thinking in educational contexts (arXiv). Additionally, understanding how students perceive AI, as discussed in studies of learner-instructor interaction, can further inform the design of AI systems that effectively support and enhance learning experiences (Springer Open EdTech Journal).”

I view this release of GPT-4o as an important step toward the agentic models that are expected to dominate our next generation of generative AI. Prolific author Bernard Marr describes the characteristics of agentic AI and some of its early applications. Marr cites autonomy, adaptability, and goal orientation as three characteristics of agentic AI. In the case of agents, rather than simple questions eliciting direct answers, the AI agent is given an outcome: it plans for that outcome, creates a process to achieve it, and autonomously pursues those steps until the desired outcome is reached. Accenture predicts that agents, powered in part by reasoning processes such as those in the new OpenAI o1 model, will take off in the fourth quarter of 2024 and lead generative AI innovation through 2025.

By engaging the user by first name and identifying the topics that seem most interesting to that person, this “friendly” behavior provides a basis for interacting with generative AI as a colleague or agent. The steps that follow the initial engagement involve reasoning through multi-task projects, which the generative AI then completes autonomously, without step-by-step intervention from the human host.
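The plan-then-execute loop described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not any vendor's actual agent implementation: the names `plan`, `execute`, and `run_agent` are hypothetical, and a real agent would replace the canned plan and execution stubs with calls to a language model and external tools.

```python
# Hypothetical sketch of an agentic plan-then-execute loop.
# A real agent would call a model API and tools where the stubs are.

from dataclasses import dataclass


@dataclass
class Step:
    description: str
    done: bool = False
    result: str = ""


def plan(goal: str) -> list[Step]:
    """Break a goal into ordered steps (a real agent would ask the model)."""
    # Fixed three-step plan, purely for illustration.
    return [
        Step(f"research background for: {goal}"),
        Step(f"draft an approach for: {goal}"),
        Step(f"review and finalize: {goal}"),
    ]


def execute(step: Step) -> str:
    """Carry out one step (a real agent would invoke tools or the model)."""
    return f"completed '{step.description}'"


def run_agent(goal: str) -> list[str]:
    """Plan once, then work through every step with no human intervention,
    logging the result of each step so the agent's work is documented."""
    log = []
    for step in plan(goal):
        step.result = execute(step)
        step.done = True
        log.append(step.result)
    return log


if __name__ == "__main__":
    for line in run_agent("summarize AI-in-education research"):
        print(line)
```

The key difference from a chatbot is visible in `run_agent`: the human supplies only the outcome, and the loop decomposes and pursues it end to end, keeping a log of each action taken.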

There will always be AI applications designed to serve as enhanced search engines, and others designed to respond to specific requests without the additional capabilities needed to autonomously complete complex tasks, as virtual agents are required to do repeatedly. Those added agentic capabilities open up a whole world of working and playing side by side with autonomous, intelligent and adaptable generative AI.

It will be very interesting to see these agents become virtual workers, handling the variety and number of tasks normally involved in a detailed job description. Yet these agents will work 24 hours a day, 365 days a year, fully documenting their reasoning for taking specific actions and detailing their work processes. In a post accompanying the release of OpenAI o1, the company explained the “chain of thought” reasoning built into the model, which outperformed humans with PhDs in a relevant field.

“In the same way that a human can think for a long time before answering a difficult question, o1 uses a chain of thought when trying to solve a problem. Through reinforcement learning, o1 learns to refine its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler steps. It learns to try a different approach when the current one doesn’t work. This process significantly improves the reasoning ability of the model.”

Given that these skills and capabilities will likely be available at many universities in 2025, how do you imagine AI agents will be able to improve and accelerate your work? Could such skills and abilities make your job easier? Are these agents likely to replace others in positions like yours?