Old climate models are often evaluated on whether they made correct predictions of global warming. But if the old models were missing processes that we now know to be important, any correctness of their predictions would have to be attributed to a fortuitous compensation of errors, creating a paradoxical situation. Climate models are also tested for falsifiability by using them to predict the impact of short-term events like volcanic eruptions. But climate models do not exhibit the numeric convergence to a unique solution that is characteristic of small-scale computational fluid dynamics (CFD) models, such as those that simulate flow over a wing. Compensating errors may obscure the convergence of individual components of a climate model. This lack of convergence suggests that climate modeling is facing a reducibility barrier, or perhaps even a reducibility limit.
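To make the notion of numeric convergence concrete, here is a minimal sketch of the grid-refinement behaviour that convergent small-scale models exhibit. It uses a toy ordinary differential equation with a known exact solution, not a climate or CFD model; all names and numbers are illustrative.

```python
import numpy as np

# Toy convergence test (hypothetical stand-in, not a climate model):
# solve du/dt = -u with forward Euler and repeatedly halve the time step.
# A convergent scheme drives the error toward zero, approaching a unique
# solution; this is the behaviour the chapter says climate models lack.
def solve(dt, t_end=1.0):
    n = int(round(t_end / dt))
    u = 1.0                      # initial condition u(0) = 1
    for _ in range(n):
        u = u + dt * (-u)        # forward Euler step
    return u

exact = np.exp(-1.0)             # exact solution at t = 1
for dt in (0.1, 0.05, 0.025, 0.0125):
    err = abs(solve(dt) - exact)
    print(f"dt={dt:<7} error={err:.2e}")
# Each halving of dt roughly halves the error (first-order convergence).
```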
Modern weather and climate prediction originated at the Institute for Advanced Study (IAS) in Princeton, New Jersey. Mathematician John von Neumann, a member of the IAS faculty, interacted with computing pioneer Alan Turing in the late 1930s and became involved in the construction of the first general-purpose electronic digital computer, ENIAC, in the mid-1940s. One of his goals was to use the computer to forecast weather using the equations of physics. He formed the Princeton Meteorology Group by hiring scientists with expertise in weather. In 1950, this group made the world’s first digital weather forecast. Two basic concepts from the philosophy of science – inductivism and deductivism – are introduced in the chapter to provide context for the scientific developments being discussed. Von Neumann’s (thwarted) ambition of going beyond weather prediction to weather control is also discussed.
The media often lends more credence to dramatic predictions from individual climate models than does the climate science community as a whole. Models are metaphors of reality; they should be taken seriously, not literally. Predictions made by simple climate models need to be confirmed using more comprehensive climate models. Deep uncertainty surrounds estimates of climate metrics like climate sensitivity, meaning that the error bars presented with these metrics may have their own unquantifiable error bars. It is important to distinguish between precision and accuracy when evaluating uncertainty estimates of climate parameters. Overconfidence in numerical predictions of extreme climate scenarios may lead to a “doomist” belief that a climate catastrophe is inevitable and that there is nothing we can do to prevent it.
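The precision/accuracy distinction can be illustrated with a small sketch. The two ensembles and all numbers below are hypothetical and are not taken from any actual climate model.

```python
import numpy as np

# Hypothetical illustration of precision vs. accuracy for a climate metric
# (think of climate sensitivity in degrees C); all numbers are invented.
rng = np.random.default_rng(1)
true_value = 3.0

# Ensemble A: precise but inaccurate (tight spread around the wrong value).
ensemble_a = rng.normal(2.0, 0.1, size=100)
# Ensemble B: accurate but imprecise (wide spread centred near the truth).
ensemble_b = rng.normal(3.0, 0.8, size=100)

for name, est in [("A (precise, biased)", ensemble_a),
                  ("B (accurate, noisy)", ensemble_b)]:
    bias = est.mean() - true_value
    spread = est.std()
    print(f"{name}: bias = {bias:+.2f}, spread = {spread:.2f}")
# A narrow (precise) error bar says little about accuracy if the estimate is biased.
```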
Climate modeling developed further at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. The laws of physics that form the foundation of weather and climate models imply strict conservation of properties like mass, momentum, and energy. A household budget analogy can be used to explain these conservation requirements, which are stricter for climate models than for weather models. In the 1980s, a mismatch in the energy transfer between the atmospheric and oceanic components of climate models led to a correction technique known as flux adjustment, which violated energy conservation. Subsequent improvements in climate models obviated the need for these artificial flux adjustments. Now we have more complex models, known as Earth System Models, that include biological and chemical processes such as the carbon cycle. The concept of the constraining power of models is introduced.
Climate is an emergent system with many interacting processes and components. Complexity is essential to accurately model the system and make quantitative predictions. But this complexity obscures the different compensating errors inherent in climate models. The Anna Karenina principle, which assumes that these compensating errors are random, is introduced. By using models with different formulations for small-scale processes to make predictions and then averaging them, we can expect to cancel out the random errors. This multimodel averaging can increase the skill of climate predictions, provided the models are sufficiently diverse. Climate models tend to borrow formulations from each other, which can lead to “herd mentality” and reduce model diversity. The need to preserve the diversity of models works against the need for replicability of results from those models. A compromise between these two conflicting goals becomes essential.
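A minimal sketch of the statistical intuition behind multimodel averaging follows, assuming (as the Anna Karenina principle does here) that each model's error is independent and zero-mean. The "true" warming value, the number of models, and the error spread are hypothetical.

```python
import numpy as np

# Sketch of why averaging diverse models can help, under the assumption of
# independent, zero-mean model errors. All numbers below are hypothetical.
rng = np.random.default_rng(0)

true_warming = 3.0        # hypothetical true value, degrees C
n_models = 20             # assumed number of independent models
error_sd = 0.8            # assumed spread of individual model errors

predictions = true_warming + rng.normal(0.0, error_sd, size=n_models)

single_model_rmse = np.sqrt(np.mean((predictions - true_warming) ** 2))
multimodel_mean_error = abs(predictions.mean() - true_warming)

print(f"typical single-model error: {single_model_rmse:.2f} C")
print(f"multimodel-mean error:      {multimodel_mean_error:.2f} C")
# If models borrow formulations from each other ("herd mentality"), their
# errors become correlated and this cancellation is much weaker.
```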
The Leaning Tower of Pisa, used by Galileo to demonstrate the simplicity of science, is also a testament to the complexity of science. Over an 800-year period, multiple attempts were made to fix the errors in the tower’s construction that caused it to lean. Often, the fixes had unanticipated consequences, necessitating additional compensating fixes. Climate models face a similar problem. The models use approximate formulas called parameterizations, with adjustable parameters, to represent processes like clouds that are too fine to be resolved by the model grids. The optimal values of these parameters that minimize simulation errors are determined by a trial-and-error process known as “model tuning.” Tuning minimizes errors in simulating current and past climates, but it cannot guarantee that the predictions of the future will be free of errors. This means that models can be confirmed, but they cannot be proven to be correct.
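Here is a toy illustration of what "model tuning" amounts to: a single adjustable parameter in a made-up parameterization is swept over a range, and the value that best matches an "observed" present-day target is kept. The formula, parameter name, and numbers are invented for illustration; real tuning adjusts many parameters against many observed fields.

```python
import numpy as np

# Toy "model tuning" sketch (invented formula, not a real parameterization).
def toy_simulated_temperature(cloud_param, co2_factor=1.0):
    # Crude stand-in for a parameterized model: warmer with more CO2,
    # cooler with a larger cloud parameter.
    return 15.0 + 3.0 * np.log(co2_factor + 1.0) - 2.0 * cloud_param

observed_temperature = 15.6      # hypothetical "observed" present-day value

candidate_params = np.linspace(0.0, 1.0, 101)
errors = [abs(toy_simulated_temperature(p) - observed_temperature)
          for p in candidate_params]
best_param = candidate_params[int(np.argmin(errors))]

print(f"tuned cloud parameter: {best_param:.2f}")
# The tuned model reproduces today's climate, but nothing in this procedure
# guarantees that its response to doubled CO2 (co2_factor=2.0) is correct.
```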
Communicating the strengths and limitations of climate modeling to those outside the field of climate science is a formidable challenge. The nuances of scientific language can be lost in the translation to natural language when climate predictions are presented to a general audience. This loss in translation can lead to misinformation and disinformation that hampers a rational response to the climate crisis. Even simple terms like “model,” “data,” and “prediction” have many different meanings depending on the context. Anytime we talk about the future, we are using a model. In climate science, we might think we are dealing with data from the past, but often this is processed data that is produced by analysis models applied to raw data. The word “prediction” can mean a range of things, from unconditional prophecies to conditional projections.
The fundamental difference between weather prediction and climate prediction is explained, using a “nature versus nurture” analogy. To predict weather, we start from initial conditions of the atmosphere and run the weather forecast model. To predict climate, the initial conditions matter less, but we need boundary conditions, such as the angle of the sun or the concentration of carbon dioxide in the atmosphere, which control the greenhouse effect. Charles David Keeling began measuring carbon dioxide in the late 1950s and found that its concentration was steadily increasing. Carbon dioxide concentrations for the past 800,000 years can also be measured using ice cores that contain trapped air. These ice core data show that the rise in carbon dioxide concentrations measured by Keeling was unprecedented. Syukuro Manabe and, later, Jim Hansen used climate models to predict that increasing carbon dioxide could cause global warming.
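The initial-versus-boundary-condition distinction can be illustrated with a toy example using the logistic map, a standard stand-in for a chaotic system (not taken from the book and not a real weather model): individual trajectories depend sensitively on initial conditions, but the long-run statistics are controlled by the forcing parameter, much as climate statistics are controlled by boundary conditions.

```python
# Toy sketch of the "nature versus nurture" distinction using the logistic map
# as a stand-in for a chaotic system (illustrative only, not a weather model).
def long_term_mean(r, x0, n_steps=200_000, spinup=1_000):
    """Mean of the trajectory after discarding an initial spin-up period."""
    x, total = x0, 0.0
    for i in range(n_steps):
        x = r * x * (1.0 - x)
        if i >= spinup:
            total += x
    return total / (n_steps - spinup)

# "Climate": the long-term mean barely depends on the initial condition x0...
for x0 in (0.2, 0.2000001, 0.7):
    print(f"r=3.9, x0={x0}: mean = {long_term_mean(3.9, x0):.3f}")
# ...but it does depend on the "boundary condition" r, which plays the role of
# an external forcing such as the carbon dioxide concentration.
for r in (3.7, 3.9):
    print(f"r={r}, x0=0.2: mean = {long_term_mean(r, 0.2):.3f}")
```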