GPT-5.4 just dropped and my feeds instantly filled with takes. Developers who spent the last six months swearing by Claude were suddenly hedging. “It’s a workhorse,” one person wrote. “Not a thoroughbred, but I’m using it.” Another said they’re now 50/50 between Claude and GPT where they were 90/10 a month ago.
This happens every single time. A new model lands, and the old one starts to feel different. Slower, maybe. Less sharp. You start noticing things you didn’t notice before.
The obvious explanation is that you’re comparing it to something better. But it also raises a question nobody really answers cleanly: did the old model actually get worse after the new one launched? Or did you just get a better reference point, so that everything before it looks dumb by comparison?
I went looking for an actual answer.
The first crack showed in 2023
In July 2023, researchers at Stanford and UC Berkeley ran a deceptively simple test. They took GPT-4 – the same model, called with the same name – and ran identical prompts on it at two points in time: March 2023 and June 2023.
GPT-4’s accuracy on identifying prime numbers dropped from 84% to 51%. The share of GPT-4’s code outputs that were directly executable dropped from 52% to 10%. James Zou, one of the paper’s authors, described what this meant in practice: “If you’re relying on the output of these models in some sort of software stack or workflow, and the model suddenly changes behavior and you don’t know what’s going on, this can actually break your entire stack.”
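The methodology is easy to approximate for your own stack. Here is a minimal sketch of that kind of longitudinal check, assuming the official `openai` Python SDK and an API key in the environment; the prompts, model string, and log filename are placeholders, not the researchers’ actual harness:

```python
# Minimal sketch of a longitudinal drift check: run a fixed prompt set against
# the same model name at different dates, log the outputs, and diff them later.
import datetime
import json

from openai import OpenAI  # assumes the official openai SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompts; use whatever tasks your own pipeline depends on.
PROMPTS = [
    "Is 17077 a prime number? Answer yes or no, then explain.",
    "Write a Python function that returns the nth Fibonacci number.",
]


def snapshot(model_name: str, prompts: list[str]) -> dict:
    """Query every prompt once with low sampling noise and timestamp the run."""
    results = {}
    for prompt in prompts:
        resp = client.chat.completions.create(
            model=model_name,  # the same string at every run, e.g. "gpt-4"
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # so later diffs reflect the model, not sampling
        )
        results[prompt] = resp.choices[0].message.content
    return {
        "model": model_name,
        "date": datetime.date.today().isoformat(),
        "results": results,
    }


if __name__ == "__main__":
    run = snapshot("gpt-4", PROMPTS)
    # Append to a log; re-run months later and compare answers prompt by prompt.
    with open("drift_log.jsonl", "a") as f:
        f.write(json.dumps(run) + "\n")
```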
They named the phenomenon LLM drift. Behavioral change without a version change. The model moved underneath the developer.
When the paper dropped, OpenAI VP of Product Peter Welinder replied on Twitter: “No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.” The subtext was plain. It’s you, not us.
What Welinder was describing has a technical name: prompt drift. The idea is that your prompts and usage patterns shift over time, so an unchanged model surfaces different behaviors. It’s a real phenomenon. Developers do write differently as they get more familiar with a model. The Stanford study was designed to make that explanation impossible – identical prompts, fixed intervals, nothing on the user’s side changed. The performance dropped anyway.
Two years later, OpenAI published something that directly contradicted Welinder’s position.
OpenAI confirmed it, in writing, twice
On April 25, 2025, OpenAI pushed an update to GPT-4o without a public announcement, a developer notification, or an API changelog entry.
Within 48 hours, the internet was full of screenshots. GPT-4o had called a business idea built around literal “shit on a stick” a great concept. It endorsed a user’s decision to stop taking their medication. When a user said they were hearing radio signals through the walls, it responded: “I’m proud of you for speaking your truth so clearly and powerfully.” One user reported spending an hour talking to GPT-4o before it started insisting they were a divine messenger from God.
OpenAI rolled it back four days later and published two postmortems with several admissions. Since launching GPT-4o, the company had made five significant updates to the model’s behavior, with minimal public communication about what changed in any of them. The April update broke because a new reward signal they introduced “weakened the influence of our primary reward signal, which had been holding sycophancy in check.” Their own internal evaluations hadn’t caught it. “Our offline evals weren’t broad or deep enough to catch sycophantic behavior.”
And this: “model updates are less of a clean industrial process and more of an artisanal, multi-person effort,” and there is “a lack of advanced evaluation methods for systematically tracking and communicating subtle improvements at scale.”
They’re describing an organization that ships behavioral changes across every pipeline built on top of their API, can’t always predict what those changes will do, and doesn’t have reliable methods to communicate them to the developers depending on consistency. Welinder’s 2023 “you’re imagining it” was what OpenAI wanted to be true. Their 2025 postmortem was what was actually happening.
When GPT-5 launched in August 2025, it introduced a new wrinkle. Instead of a single model, GPT-5 is a routing system that decides which variant your prompt hits, and developers quickly found that it sometimes hit the cheaper, less capable one. Pipelines broke. Prompts that had worked for months produced different outputs.
One founder wrote: “When routing hits, it feels like magic. When it misses, it feels like sabotage.” OpenAI denied it was deliberately routing to cheaper models. Nobody has a way to verify that. The underlying problem was the same as in the sycophancy incident: a change in what the model returns, with no mechanism for developers to detect that it had happened.
Google did almost the same thing, sometimes faster
OpenAI just isn’t alone on this. Google has produced a parallel set of incidents with Gemini, and in some circumstances moved sooner and extra chaotically.
In Might 2025, builders seen that the gemini-2.5-pro-preview-03-25 endpoint, a particularly dated mannequin snapshot, named with a date to suggest stability, was silently redirecting to a very totally different mannequin: gemini-2.5-pro-preview-05-06. The API was returning a special mannequin than the one you requested for by title. Google’s developer boards full of a protracted thread titled “Pressing Suggestions & Name for Correction: A Severe Breach of Developer Belief and Stability.” The core criticism: “your documentation by no means addresses particularly dated endpoints. The expectation {that a} mannequin named for a selected date will really be that mannequin just isn’t an unreasonable one.”
That was simply the primary incident. When Gemini 2.5 Professional reached Basic Availability in June 2025, the “secure” launch meant for manufacturing – builders instantly reported it was worse than the preview. Considerably worse. The boards full of experiences of upper hallucination charges, context abandonment in multi-turn conversations, and sharply degraded code technology. One developer wrote: “I seen Gemini 2.5 Professional in Google AI Studio gives considerably worse understanding of lengthy context. It hallucinates the proper reply from the preview model.” One other deserted the mannequin fully as a result of code technology degraded to the purpose of being unusable. A separate thread was merely titled “Gemini 2.5 Professional has gotten worse.”
Google did not formally acknowledge any of it.
Then in October 2025, forward of the Gemini 3.0 launch, Gemini 2.5 Professional builders began reporting widespread degradation. The main idea: Google had reallocated computational sources away from the present mannequin to help coaching and serving Gemini 3.0. Some builders seen higher efficiency late at evening. Others suspected a deployed quantized model. Google maintained silence all through.
Gemini 3.0 launched in late 2025, and the sample held. Developer boards reported important regressions in reasoning and context retention in comparison with Gemini 2.5 Professional, regardless of Google’s announcement touting superior benchmark efficiency. One discussion board publish from December 2025 was titled “Suggestions: Gemini 3 Professional Preview – Vital regression in Reasoning, Context Retention, and Security False Positives in comparison with 2.5.”
The sample throughout each labs: a brand new model launches, the present mannequin’s efficiency degrades, generally via a silent replace, generally via useful resource reallocation, generally via a routing change – builders discover, labs initially deny or ignore it, the cycle repeats.
Even leaderboards still can’t catch this
The tools meant to independently track model quality have a structural problem.
LMSYS Chatbot Arena – the most trusted human-preference leaderboard, built on millions of votes – notes in its methodology that “the hosted proprietary models may not be static and their behavior can change without notice.” The leaderboard’s statistical architecture assumes model weights are fixed. If a model gets a silent update mid-data-collection, the system registers different results and treats them as normal variance.
A 2025 study tracking 2,250 responses from GPT-4 and Claude 3 across six months found GPT-4 showed 23% variance in response length over that period, and Mixtral showed 31% inconsistency in instruction adherence. A PLOS One paper published in February 2026 ran a ten-week longitudinal human-anchored evaluation and showed “meaningful behavioral drift across deployed transformer services.” The authors noted that because providers don’t release update logs or training details, “any attribution for observed degradation would be purely speculative.” They can tell you the model changed. They cannot tell you why.
Beyond this, a small number of researchers have tried to go further and distinguish what drifts from what holds. A large-scale longitudinal study run across the 2024 US election season queried GPT-4o and Claude 3.5 Sonnet on over 12,000 questions across four months, including a category specifically designed to be time-stable: factual questions about the election process whose correct answers don’t change.
Those responses held largely consistent over the study period. A separate study published in late 2025 tested 14 models, including GPT-4, on validated creativity tasks over 18 to 24 months and found something different: no improvement in creative performance over that period, with GPT-4 performing worse than it had in earlier studies.
Taken together, these two findings describe a model that is stable along one dimension and degraded along another, measured by independent researchers, over the same timeframe. Some capabilities hold, others erode, often in the same model over the same period. Without running your own longitudinal tests against the specific tasks you care about, you have no way to know which bucket you’re in.
What we’ve actually seen
Not all drift lands the same way. There’s a pattern to where it shows up, and it tracks closely with task structure.
The technical baseline is simple. A model with fixed weights, running on consistent infrastructure, should behave the same way for the same input every time. If behavior changes on identical prompts, something changed, either on your end or theirs. Prompt drift is the user-side explanation: your prompts evolved, your system contexts shifted, inputs drifted from what the model was originally optimized for. Data drift is the related idea that the distribution of real-world inputs moves over time, pulling behavior with it. Both are real. Both also require something on your side to have changed.
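If you want to rule the user-side explanations out for your own setup, one simple approach is to freeze a small canary prompt set at project start and fingerprint it. A minimal sketch; `canary_prompts.json` is a hypothetical file name:

```python
# Minimal sketch of the canary idea: freeze a prompt set once, fingerprint it,
# and store that fingerprint next to every logged model response. If the
# fingerprint never changes but outputs on those prompts do, prompt drift and
# data drift are ruled out for that slice of behavior; the change was upstream.
import hashlib
import json
from pathlib import Path

CANARY_FILE = Path("canary_prompts.json")  # hypothetical file, frozen at project start


def canary_fingerprint(path: Path) -> str:
    """Stable hash of the frozen prompt file (proof your inputs didn't move)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def load_canaries(path: Path) -> list[str]:
    """The frozen prompts themselves, e.g. ["Is 17077 a prime number?", ...]."""
    return json.loads(path.read_text())


if __name__ == "__main__":
    prompts = load_canaries(CANARY_FILE)
    print(f"{len(prompts)} canary prompts, fingerprint {canary_fingerprint(CANARY_FILE)}")
```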
At Nanonets, we benchmarked several frontier models on document extraction accuracy over time and created an IDP leaderboard. Even across model upgrades, performance stayed largely consistent. Document extraction runs on narrow context windows with structured inputs and bounded outputs, leaving very little surface area for meaningful behavioral drift under normal circumstances.
But that’s not a guarantee against a lab actively pushing a bad update – those can hit any task type, as the prime-number collapse showed.
Coding is the opposite. The task is open-ended, context accumulates, and the model has to hold coherence across a long chain of decisions. It’s also where almost every major degradation complaint has landed. The GPT-4 drift the Stanford study documented was worst on code: directly executable outputs dropped from 52% to 10%. The Gemini 2.5 Pro regression complaints in June 2025 were almost entirely about code generation.
In August 2025, Anthropic’s own incident followed the same contour: developers on Claude Code reported broken outputs, ignored instructions, code that lied about the changes it had made. Anthropic was silent for weeks. The incident post only appeared after Sam Altman quote-tweeted a screenshot of the subreddit. Their postmortem confirmed three infrastructure bugs had been degrading Sonnet 4 responses since early August – affecting roughly 30% of Claude Code users at peak, with some developers hit repeatedly due to sticky routing.
The throughline across all of it: the more a task demands sustained coherence over a long context, the more exposed it is to whatever is shifting underneath. Your risk profile is different depending on what you’re building. That doesn’t make narrow-context stability a guarantee.
What this actually means
Both things are true. The drift is real and documented.
And also: your perception shifts. A new reference point moves your baseline permanently. A model you used a year ago would feel slower even if it hadn’t changed at all. That’s also real.
You can’t reliably tell the difference between the two. There is no public tool that lets you verify whether the model you’re running today behaves the same way it did when you built on it. Labs publish capability benchmarks. They don’t publish behavioral diffs. The developers most dependent on consistency are the least equipped to detect its absence.
The only current protections are defensive: pin to dated model strings where possible, run regression tests against your key prompts, and treat a model update like a dependency upgrade that has to be validated before it reaches production.
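As a rough illustration of what that regression test can look like, here is a hedged sketch assuming the official `openai` Python SDK; the dated model strings, the golden-output file, and the 0.8 similarity threshold are illustrative placeholders you would swap for your own:

```python
# Sketch of the defensive approach: treat a model change like a dependency
# upgrade. Pin a dated model string, keep golden outputs for key prompts,
# and only promote a new model string if its answers stay close to the goldens.
import difflib
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()
PINNED_MODEL = "gpt-4o-2024-08-06"     # dated string you already validated (example)
CANDIDATE_MODEL = "gpt-4o-2024-11-20"  # the "upgrade" under test (example)
GOLDENS = Path("golden_outputs.json")  # hypothetical file of prompt -> expected output


def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content


def similarity(a: str, b: str) -> float:
    """Crude text similarity; swap in whatever scoring fits your task."""
    return difflib.SequenceMatcher(None, a, b).ratio()


def regression_check(threshold: float = 0.8) -> bool:
    goldens = json.loads(GOLDENS.read_text())
    ok = True
    for prompt, expected in goldens.items():
        candidate = ask(CANDIDATE_MODEL, prompt)
        score = similarity(expected, candidate)
        if score < threshold:
            ok = False
            print(f"REGRESSION ({score:.2f}): {prompt[:60]}...")
    return ok


if __name__ == "__main__":
    # Run in CI before switching PINNED_MODEL to CANDIDATE_MODEL in production.
    raise SystemExit(0 if regression_check() else 1)
```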
But even the defensive approach has a ceiling. You can pin to a dated model string. What you cannot pin is what’s actually happening inside it. The model weights, the RLHF tuning, and the safety filters behind that label are entirely opaque. Only OpenAI and Google know what they actually shipped, and whether it matches what they shipped last month under the same name.
Anthropic’s postmortem read: “We never intentionally degrade model quality.” But a model doesn’t degrade on its own. If behavior shifted on prompts developers hadn’t changed, something on Anthropic’s side changed. Whether they meant to cause the degradation is a separate question from whether they caused it.
What’s needed, and what doesn’t exist anywhere in the industry, is a formal obligation baked into terms of service: defined thresholds for what counts as a material behavioral change, public disclosure when those thresholds are crossed, and some form of independent auditability. Labs currently make these decisions unilaterally, communicate them selectively, and face no structural accountability when they get it wrong.
For now, all of this sits in a policy vacuum, and nobody is pushing the labs to fill it.
