The “Butterfly Effect” is a term well known in popular culture, and I came across it again today. The problem is, what most people understand about it is completely wrong. Most people who know the term would describe it thusly:
A butterfly flapping its wings in China can cause a tornado in North America
Or some variation thereof.
The butterfly effect was introduced by Edward Lorenz, and popularized by James Gleick in his 1987 book Chaos: Making a New Science. Lorenz was a mathematician and meteorologist, and while modelling mathematical predictions of weather, he noticed that slight changes in initial conditions can lead to significant differences over the long term. The behaviour of the atmosphere is dependent on the state of the adjacent atmosphere, which is in turn dependent on the state of its adjacent air masses, and so on. As each air mass changes, it alters its influence on all surrounding air masses. Slight changes thus can have slight influences which propagate throughout the atmosphere.
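This sensitivity is easy to see numerically. The sketch below (in Python, purely as an illustration) integrates Lorenz's classic three-equation convection model twice, from starting points that differ by one part in a million, using simple Euler steps. The choice of step size and number of steps is mine; the parameter values are Lorenz's standard ones.

```python
# Lorenz's 1963 convection model, with the classic parameters
# sigma=10, rho=28, beta=8/3, advanced by simple Euler steps.
def step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)   # differs from a by one millionth in x

# Run both trajectories in lockstep and track how far apart they drift.
max_sep = 0.0
for _ in range(3000):      # 3000 steps of dt=0.01 = 30 time units
    a, b = step(a), step(b)
    max_sep = max(max_sep, abs(a[0] - b[0]))

print(max_sep)
```

Over the run, the millionth-of-a-unit difference grows to the full width of the attractor: the two "weather systems" end up in completely different states, even though the equations governing them are identical and exactly known.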
For long term weather prediction, the state of all variables must be known, as well as the exact influence of each variable on every other. Given that the exact temperature, pressure, humidity, and velocity of each litre of atmosphere cannot possibly be known, real weather patterns will, over time, deviate from the prediction. And even if we could know the exact state of every litre of atmosphere, there are other influences, including the motion of living things (butterflies, birds, people), and uneven surfaces which introduce turbulence. The more time passes, the more deviation will accumulate, leading to ever more rapid deviation.
So a butterfly will not cause a tornado, in the sense that the wing beat becomes amplified over distance. But to predict the existence and exact path of a tornado, weeks or months in advance, we would need to know every possible variable – and the simple beat of a butterfly’s wing would change the initial state, and thus change the outcome.
Chaos theory is a fascinating field, and a very powerful tool. It lets us recognize that even if we understand exactly how a process works, we cannot predict the exact outcome of a process consisting of multiple, mutually influential variables. But conversely, not being able to predict exact outcomes doesn’t mean we don’t understand the process.