Legacy Drift
A few years ago, I told a Manchester United supporter that his team had once been in the second tier of English football. He was surprised and asked when. I said the 1974–75 season. He replied that, since he was born in 1991, that didn’t feel recent.
It made me realize that I tend to think of anything in my lifetime as “recent.”
What does this anecdote have to do with AI and Software Engineering? Let me explain.
Large Language Models, the type of AI behind ChatGPT, Copilot, Claude, and Grok, are trained on vast amounts of code from public repos, blog posts, Stack Overflow answers, and documentation. That code includes examples written for older versions of frameworks and languages as well as for current ones.
The volume of examples that leverage older versions can lead to problems when using an AI agent to write code. LLMs don’t ‘know’ things; they predict likely answers based on past data. If that data is outdated, their answers can be too.
This creates a problem I call Legacy Drift: models tend to generate solutions based on historically common patterns, even when those patterns are now obsolete.
An example for React
The older way of setting up hydration
import { hydrate } from "react-dom";
hydrate(<App />, document.getElementById("root"));
The current way
import { hydrateRoot } from "react-dom/client";
hydrateRoot(document.getElementById("root"), <App />);
The old approach throws errors in React 19. However, it was the standard from 2015 until it was deprecated in 2022 and removed in 2024. As a result, it is heavily represented in LLM training data.
The problem can be mitigated by providing additional context in your prompts. For the example above, you could add: “Use React 19. Do not use deprecated or removed APIs.”
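If you send many prompts to a coding agent, that context is easy to forget. One option is to prepend the version constraints automatically. A minimal sketch, where the constraint wording and the helper name `withVersionContext` are illustrative, not part of any real agent API:

```javascript
// Constraints we want every coding prompt to carry.
// Adjust these to the framework versions your project actually uses.
const versionConstraints = [
  "Use React 19.",
  "Do not use deprecated or removed APIs such as ReactDOM.render or hydrate.",
].join(" ");

// Prepend the constraints to whatever the user asked for,
// so the model is steered away from historically common patterns.
function withVersionContext(userPrompt) {
  return `${versionConstraints}\n\n${userPrompt}`;
}

console.log(withVersionContext("Set up hydration for my app."));
```

The same idea scales up: most agent frameworks let you set a system prompt or project-level instructions, which is a better home for these constraints than repeating them by hand.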
Addressing this is even more important if you are exposing AI agents to your customers. I use Jira, which includes an AI agent called Rovo. Among its assorted shortcomings is an example of the Legacy Drift described above.
Jira has recently gone through a terminology rebrand intended to make the tool less developer-centric. For instance, “issues” are now called “work items”, and various functions have been moved to places in the UI that a UX designer felt made more sense.
However, Rovo has not fully caught up. When you ask it how to do something, it often uses outdated terminology or refers to UI elements that no longer exist. This is confusing at best and incorrect at worst.
Whoever configured Rovo should have provided additional context, such as enforcing current terminology and prioritizing recent UI behavior.
The lesson is simple: when using or deploying LLM-based agents, you must account for Legacy Drift. Without explicit guidance, models will default to the most common patterns in their training data, even when those patterns are outdated.