I'm no expert, but I don't see why improving a model for the short term should make it less accurate in the long term. There might well come a point, at maybe around 10 days out, when the accumulation of random errors means it is no longer any better than it was before it was improved, but I don't see why it should actually be worse.
Originally Posted by: jhall
My thinking is that the improvement in short-term accuracy comes from an increase in the number of data points, along with a narrowing of the gap between calculation steps as the model run progresses (say, from 12 hrs, 24 hrs, 36 hrs etc. to 03 hrs, 06 hrs, 09 hrs, 12 hrs, and so on).
The result is a big increase in the number of calculations, and so in the number of potential small errors, and those small errors grow exponentially into big ones eventually - you know, the thing about a butterfly flapping its wings and causing a hurricane.
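If anyone wants to see that butterfly effect in numbers rather than metaphors, here's a minimal sketch in plain Python. It uses the logistic map as a stand-in chaotic system (nothing like a real weather model, just the simplest thing that shows the behaviour): two runs start one part in a billion apart, and within a few dozen iterations they bear no relation to each other.

```python
# Toy illustration of sensitive dependence on initial conditions.
# Two runs of a chaotic system (the logistic map, NOT a weather model)
# start a hair's breadth apart; the gap between them grows roughly
# exponentially until it is as large as the signal itself.

def logistic(x, r=3.9):
    """One iteration of the logistic map, a standard toy chaotic system."""
    return r * x * (1.0 - x)

x_a = 0.400000000   # "true" starting state
x_b = 0.400000001   # same state with a one-part-in-a-billion error

for step in range(1, 51):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x_a - x_b):.2e}")

# The printed difference climbs from around 1e-9 to order 1 within a few
# dozen steps, i.e. the initial error has swamped the state entirely.
```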
Re Gandalf's comment that rounding errors should only have a negligible effect: I would disagree with that, particularly when dealing with our part of the world, where even the subtlest changes within a broadly similar pattern can result in vastly different weather on the ground. Rounding errors are the bane of iterative models (which is what these are).
A tenth of a degree centigrade here, half a millibar there, and ten days and many thousands of calculations later the error has been multiplied into something substantial. But on a coarser model, with fewer data points and a 12-hour gap between iterations, those tiny errors have less opportunity to take on a life of their own.
It's nothing to do with the mathematical calculations themselves becoming less accurate; it's just that the numbers those calculations work on are exposed to more small errors as more calculations have to be made.
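As a rough illustration of the "more calculations, more rounding" point, here's another toy sketch in Python. It just adds a tenth of a degree over and over in single precision and watches the running total drift from the exact answer as the number of additions grows. Real models run in higher precision with far cleverer numerics, so treat this as the mechanism, not the magnitude.

```python
import numpy as np

# Add "a tenth of a degree" repeatedly in single precision (the kind of
# arithmetic a model repeats enormous numbers of times per run) and compare
# the running total with the exact answer. The more additions, the further
# the total drifts.

for steps in (100, 1_000, 10_000):
    total = np.float32(0.0)
    for _ in range(steps):
        total = total + np.float32(0.1)   # each add rounds to ~7 significant digits
    exact = steps * 0.1
    print(f"{steps:6d} additions of 0.1: got {float(total):.4f}, "
          f"exact {exact:.1f}, drift {abs(float(total) - exact):.2e}")
```

The drift is invisible after a hundred additions but quite noticeable after ten thousand, which is the poster's point in miniature: the same arithmetic, just more chances for the small stuff to pile up.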
2 miles west of Taunton, 32 m asl, where "milder air moving in from the west" becomes SNOWMAGEDDON.
Well, two or three times a decade it does, anyway.