AI agents are broken. Is GPT-5 really the answer?

As 2025 dawned, OpenAI CEO Sam Altman was promoting two developments he insisted would transform our lives. One, of course, was GPT-5 — a long-anticipated major upgrade to the Large Language Model (LLM) that powered ChatGPT’s rise to tech world superstardom.

The other? AI Agents that don’t just answer your queries like ChatGPT, but actually get stuff done for you. “We believe that, in 2025, we may see the first AI agents join the workforce and materially change the output of companies,” Altman wrote back in January.

Well, we’re eight months in, and Altman’s prediction already needs a big old asterisk. Sure, companies are keen to adopt AI Agents, such as OpenAI’s ChatGPT agent. In a May 2025 report, consultancy giant PwC found that half of all firms surveyed planned to implement some kind of AI Agent by the end of the year. Some 88% of executives want to increase their teams’ AI budgets because of Agentic AI.

But what about the actual AI Agent experience? With apologies to all those hopeful executives, the reviews are almost uniformly negative.

If “AI Agents” were a new high-tech James Bond movie, here are the kinds of blurbs you’d see on Rotten Tomatoes: “glitchy … inconsistent” (Wired); “came off like a clueless internet newbie” (Fast Company); “reality doesn’t live up to the hype” (Fortune); “not matching up to the buzzwords” (Bloomberg); “the new vaporware: overpromising is worse than ever” (Forbes).

Study finds OpenAI’s entry failed nearly every time

A May 2025 Carnegie Mellon University study (PDF) found Google’s Gemini Pro 2.5 failed at real-world office tasks 70% of the time. And that was the best-performing agent. OpenAI’s entry, powered by GPT-4o, failed more than 90% of the time.

GPT-5 is likely to improve on that number … but that’s not saying much. And not just because early reports say OpenAI struggled to fill GPT-5 with enough improvements to make it worthy of the release number.

Indeed, it’s starting to look to researchers like this disappointment is baked into the whole process of LLMs learning to do stuff for you. The problem, as this AI Agent engineer’s analysis makes clear, is simple math: errors compound over time, so the more tasks an agent does, the worse it gets. AI Agents that perform multiple complex tasks are prone to hallucination, like all AI.
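To see why compounding is so punishing, here is a minimal back-of-the-envelope sketch (an illustration, not taken from the study): if an agent completes each individual step with probability p, then a task requiring n sequential steps succeeds end-to-end with probability p raised to the n.

```python
def chance_of_success(per_step_reliability: float, steps: int) -> float:
    """Probability that an agent finishes `steps` sequential steps
    without a single error, assuming each step is independent and
    succeeds with probability `per_step_reliability`."""
    return per_step_reliability ** steps

# Even 99% per-step reliability collapses over a long task:
for steps in (10, 50, 100):
    rate = chance_of_success(0.99, steps)
    print(f"{steps:3d} steps: {rate:.1%} chance of a flawless run")
```

At 99% per-step reliability, a 100-step task succeeds only about a third of the time, which is roughly the ballpark the Carnegie Mellon figures describe.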


In the end, some agents “panic” and can make “a catastrophic error in judgment,” to quote an apology from a Replit AI Agent that literally deleted a customer’s database after nine days of working on a coding task. (Replit’s CEO called the failure “unacceptable.”)

Tellingly, that isn’t the only AI-Agent-wipes-code story of 2025 — which explains why one enterprising startup is offering insurance on your AI Agent going haywire, and why Walmart has had to bring in four “super Agents” in a bid to corral its AI Agents.

No wonder a recent Gartner paper predicted that 40% of all those AI Agent projects currently being initiated by companies will be canceled within two years. “Most Agentic AI projects,” wrote senior analyst Anushree Verma, are “driven by hype and misapplied … This can blind organizations to the real cost and complexity of deploying AI agents at scale.”

What can GPT-5 do for AI Agents?

It’s possible that ChatGPT agent will vault to the top of the reliability charts once it’s powered by GPT-5. (Again, that’s not the highest of barriers.) But the new release is unlikely to fix what really ails the Agentic world.

That’s because guardrails are already being erected — by companies as well as regulators — shutting down what even the most reliable AI Agent can do for you.

Take Amazon, for example. The world’s largest retailer, like most tech giants, is talking a big game on AI Agents (as they did at a Shanghai Agentic AI fair in July, pictured above). At the same time, Amazon has shut down the ability of any AI Agent to browse and buy anywhere on its site.

That makes sense for Amazon, which has always wanted control over the customer experience, not to mention its desire to deliver ads and sponsored results to actual human eyeballs. But it’s also curtailing a massive amount of potential Agent activity right there. (On the plus side, no “catastrophic failure” involving a large pile of next-day deliveries at your door.)

And do we trust AI Agents to buy online for us anyway? It’s not that they’re evil and want to steal your credit card data; it’s that they’re naive and vulnerable to being phished by bad actors who do want your card.

Even GPT-5 may not be able to get around one vulnerability seen by researchers: data embedded in images can instruct AI agents to reveal any credit card info they might have, with the user being none the wiser.

If that kind of problem is exploited on a corporate scale, then Altman may be right about AI Agents “materially changing output” — just not in the way he meant.

By Chris Taylor, Mashable.

