Introduction to Climate Modeling
(for non-science people)

 

Quick Navigation Index

Climate Modeling

What are Humanity's Alternatives?

Climate Modeling

Why We Need Models

Humanity has always built models, both to better understand complicated observations and to serve as teaching tools. Building a working model suggests that "you are in possession of all the facts", while deviation from observation proves that your model works at some gross level but is missing finer detail.

Early Mechanical Models of the Solar System

In ancient times people built mechanical models of our solar system to predict the current location of the planets and stars. These models were based upon naked-eye observations and worked well until better observations indicated improvements were required. Model builders responded by adding epicycles to the planetary positions (perhaps this was just the easiest fix for a mechanical model).

During the Renaissance, Nicolaus Copernicus suggested that a heliocentric system would simplify the models, but this idea seemed too revolutionary to most people. Improved (but non-magnified) observations by Tycho Brahe, coupled with mathematical analysis by Johannes Kepler, proved that planetary orbits needed to be changed from circular to elliptical (how would the mechanical model builders accomplish this?). Telescopic observations by Galileo supported the heliocentric system, but now the Roman Catholic Church stood in the way of scientific progress. Many scientists avoided controversy (including burning at the stake) by stating "we don't really believe in the heliocentric system, it just simplifies our models".

Improving the Solar System Model

Most basic principles of nature can be described using simple mathematics. For example, Newton's Three Laws of Motion and Newton's Law of Universal Gravitation can each be expressed using simple algebraic equations. However, combining them with Kepler's Laws of Planetary Motion into a complete mathematical model of planetary motion was beyond most people at that time.

Although work done by people such as Urbain Le Verrier (who used pen and paper to predict the location of an unknown planet eventually named Neptune) is unbelievably impressive, a computer is required to model known planetary and stellar motion. A mechanical model is no longer practical.

Furthermore, if the Earth were the only planet orbiting the Sun, the shape of Earth's orbit would remain fixed. However, the gravitational tugs of the other planets cause the shape of Earth's orbit to change from nearly circular to elliptical and back over a period of roughly 400,000 years. This theory was also worked out using only pen and paper by the Serbian scientist Milutin Milankovitch but remained a mathematical abstraction until computer modeling was applied. BTW, other aspects of Milankovitch Cycles combine in such a way as to enable glaciations (ice ages) every 100,000 years or so. Our current interglacial period (known as the Holocene) started 11,700 years ago.

Computer Limitations (and Science Limitations)

Because computers can calculate equations many millions of times faster than any human, modern life would be impossible without them. For example, who could imagine any government manually processing our income tax returns? However, computers do have limitations that most people are not aware of.

Experiment: the next time you pour cream into your coffee, carefully watch the swirling clouds as the two fluids (liquids in this case) mix. This turbulent behavior is governed by fluid dynamics and exhibits chaos. Unfortunately no computer on the planet now (2010), or any time soon, is able to model it accurately. Think about it: you are only mixing two liquids, so why is the resulting action so complicated? To make matters worse, every time you perform the coffee-cream experiment you will observe a slightly different result. So maybe we need to consider more details such as: the exact volumes and temperatures of each liquid, the height the cream is poured from, the spot where it is poured, the exact components of the cream, the exact components of the coffee, the viscosities of both liquids, the smoothness and shape of the container, the swirling speed of the coffee left over from the initial filling event, etc.

It turns out that an accurate computer model will require us to mathematically compute the properties and trajectory of every molecule. Since computers won't be doing this anytime soon, perhaps we can cut corners by only computing the average action of each deciliter (one tenth of a liter) at ten second intervals. As long as the simulation gives us a homogenous mixture after 2 minutes and possibly allows the cream to settle to the bottom after a couple of hours then our computer simulation might be good enough.
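
The "cut corners by averaging" idea above can be sketched in a few lines of code. This is not real fluid dynamics, just a toy diffusion scheme over five coarse cells, with every number (cell count, exchange rate, tick length) invented for illustration:

```python
# Coarse-grained sketch of the coffee-cream idea: instead of tracking
# molecules, track the average cream fraction in a few large cells and
# let neighbouring cells exchange a portion of their difference each tick.

def mix_step(cells, rate=0.2):
    """One simulated 10-second tick: each pair of neighbours moves `rate`
    of their concentration difference toward equilibrium (crude diffusion)."""
    new = cells[:]
    for i in range(len(cells) - 1):
        flow = rate * (cells[i] - cells[i + 1])
        new[i] -= flow
        new[i + 1] += flow
    return new

# Cup split into five "deciliter" cells; cream poured into the top cell.
cup = [1.0, 0.0, 0.0, 0.0, 0.0]
for tick in range(12):          # 12 ticks = 2 simulated minutes
    cup = mix_step(cup)

print(cup)  # values drift toward the homogeneous mixture (0.2 in every cell)
```

Notice that the total amount of cream is conserved no matter how crude the averaging is; what we give up is any hope of reproducing the individual swirls.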

Future improvements in computer technology along with advances in computer programming techniques might allow us to slowly reduce the average volume modeled along with the average time-period being simulated. And yet we are still only talking about a cup of coffee.

Modeling Earth's Climate

Simulating Earth's Weather with Pencil and Paper

The first attempt to model Earth's weather was done with pencil and paper using something called the two-box model. This scheme (which is still used today to teach science students) uses just two boxes to model the whole Earth. The top box represents Earth's atmosphere while the bottom box represents Earth's surface. It is obviously very simplistic, but a little tinkering provides a good starting point for other, more complicated models.
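
For the curious, the two-box model is simple enough to compute directly. The sketch below uses standard textbook values for the solar constant, albedo, and atmospheric emissivity (they are not taken from any particular course), so treat it as an illustration of the idea rather than a definitive model:

```python
# A minimal version of the classroom "two-box" model: one box for the
# atmosphere, one for the surface, solved directly at equilibrium.

SIGMA  = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
S0     = 1361.0      # solar constant, W/m^2
ALBEDO = 0.30        # fraction of sunlight reflected straight back to space
EPS    = 0.78        # fraction of surface radiation absorbed by the atmosphere

absorbed = S0 * (1 - ALBEDO) / 4      # average sunlight per m^2 of surface

# Energy balance of the two boxes at equilibrium:
#   surface:    absorbed + EPS*SIGMA*Ta^4 = SIGMA*Ts^4
#   atmosphere: EPS*SIGMA*Ts^4 = 2*EPS*SIGMA*Ta^4   (radiates up AND down)
# Eliminating Ta gives: SIGMA*Ts^4 = absorbed / (1 - EPS/2)

Ts = (absorbed / (1 - EPS / 2) / SIGMA) ** 0.25   # surface temperature, K
Ta = Ts / 2 ** 0.25                               # atmosphere temperature, K

print(f"surface ~ {Ts:.0f} K ({Ts - 273.15:.0f} C), atmosphere ~ {Ta:.0f} K")
```

With these round numbers the surface lands near the observed global average of about 288 K (15 C), which is exactly why teachers like this "little tinkering" starting point.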

The two-box model was replaced with two-dimensional models, three-dimensional models, then finally cell models. A complete description of these models is beyond the scope of this introduction but you can Google the phrases to investigate further.

I recently stumbled upon the first serious attempt at a cell model, carried out in 1922, obviously without the aid of a computer. Excerpt from: www.aip.org/history/climate/GCM.htm (please read this constantly updated article)

In 1922, the British mathematician and physicist Lewis Fry Richardson published a more complete numerical system for weather prediction. His idea was to divide up a territory into a grid of cells, each with its own set of numbers describing its air pressure, temperature, and the like, as measured at a given hour. He would then solve the equations that told how air behaved (using a method that mathematicians called finite difference solutions of differential equations). He could calculate wind speed and direction, for example, from the difference in pressure between two adjacent cells. These techniques were basically what computer modelers would eventually employ. Richardson used simplified versions of Bjerknes's "primitive equations," reducing the necessary arithmetic computations to a level where working out solutions by hand seemed feasible. Even so, "the scheme is complicated," he admitted, "because the atmosphere itself is complicated."

The number of required computations was so great that Richardson scarcely hoped his idea could lead to practical weather forecasting. Even if someone assembled a "forecast-factory" employing tens of thousands of clerks with mechanical calculators, he doubted they would be able to compute weather faster than it actually happens. But if he could make a model of a typical weather pattern, it could show meteorologists how the weather worked.

So Richardson attempted to compute how the weather over Western Europe had developed during a single eight-hour period, starting with the data for a day when scientists had coordinated balloon-launchings to measure the atmosphere simultaneously at various levels. The effort cost him six weeks of pencil-work. Perhaps never has such a large and significant set of calculations been carried out under more arduous conditions: a convinced pacifist, Richardson had volunteered to serve as an ambulance-driver on the Western Front. He did his arithmetic as a relief from the surroundings of battle chaos and dreadful wounds.

The work ended in complete failure. At the center of Richardson's simulacrum of Europe, the computed barometric pressure climbed far above anything ever observed in the real world. "Perhaps some day in the dim future it will be possible to advance the calculations faster than the weather advances," he wrote wistfully. "But that is a dream." Taking the warning to heart, meteorologists gave up any hope of numerical modeling.
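
Richardson's core trick, computing wind from the pressure difference between adjacent cells, can be illustrated in a few lines. The grid spacing, air density, and pressures below are made-up round numbers, not his actual data:

```python
# A minimal finite-difference illustration of "wind from the difference
# in pressure between two adjacent cells" (one row of cells only).

RHO = 1.2          # air density, kg/m^3
DX  = 200_000.0    # cell width, m (a 200 km grid cell)

def wind_acceleration(p_left, p_right):
    """Acceleration of air between two cells from the pressure-gradient
    force: a = -(1/rho) * dp/dx. Positive = accelerating left-to-right."""
    return -(p_right - p_left) / (RHO * DX)

# Pressures (Pa) in four adjacent cells: a "high" on the left.
pressure = [102000.0, 101500.0, 101000.0, 100500.0]

accel = [wind_acceleration(pressure[i], pressure[i + 1])
         for i in range(len(pressure) - 1)]
print(accel)   # all positive: air is pushed from high toward low pressure
```

A real model repeats this kind of calculation for every pair of cells, in three dimensions, for temperature and moisture as well as pressure, which is why Richardson's eight-hour forecast cost him six weeks.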

Weather vs. Climate

We all know that weather reports today are not very accurate, and yet they have improved quite a bit since the 1950s. In certain instances, such as tropical depressions which can develop into hurricanes, weather reports may be reasonably accurate over a period of 7-10 days. But just like our cup-of-coffee example described above, skipping over the details allows us to predict long-term trends. This is the major difference between Weather and Climate, and I should point out that "climate models" are much better than "weather models".

Climate modeling: simulation of the environment over a period of 1 to 100 years
Weather modeling: simulation of the environment over a period of days to weeks

Even if it were possible to accurately model climate or weather, you cannot mathematically model all the inputs. For example, here are two (of many) events which appear to act randomly:

Volcanoes

There are 500 active volcanoes on Earth today, with as many as 1,500 potentially active volcanoes. However, there does not appear to be any mathematical pattern which would describe their frequency or intensity. To make matters worse, all active volcanoes release a variable (random) volume of CO2, which increases the greenhouse effect. Volcanoes also release larger volumes of light-colored particulate matter (which directly reflects sunlight back into space) as well as sulphur dioxide compounds (which stimulate cloud formation).

Added Complication: CO2 can remain in the atmosphere for 100 years or more, while the effects of white particulate matter and/or sulphur dioxide only last one to two years. So what might initially appear to be a short-term cooling event (like Mount Pinatubo in 1991) eventually becomes a long-term warming event.
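
This added complication is easy to see with two decaying exponentials. The forcing magnitudes and decay times below are illustrative round numbers only, not measurements from Pinatubo or any other eruption:

```python
# Toy model of "short-term cooling, long-term warming" after an eruption:
# a large but short-lived aerosol cooling plus a tiny but long-lived
# CO2 warming, each decaying exponentially.

from math import exp

AEROSOL_FORCING = -3.0   # W/m^2 of cooling right after the eruption
AEROSOL_TAU     = 1.0    # years; aerosols wash out in a year or two
CO2_FORCING     = 0.01   # W/m^2 of warming from the eruption's CO2
CO2_TAU         = 100.0  # years; CO2 lingers for a century or more

def net_forcing(years):
    """Net radiative forcing (W/m^2) `years` after the eruption."""
    return (AEROSOL_FORCING * exp(-years / AEROSOL_TAU)
            + CO2_FORCING * exp(-years / CO2_TAU))

for t in (0, 1, 2, 5, 10):
    print(t, round(net_forcing(t), 4))
# Cooling dominates at first; once the aerosols settle out, the small
# but long-lived CO2 warming is all that remains.
```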

Humans

One cigarette improperly discarded in a National Park occasionally will start a massive forest fire resulting in a massive release of heat, smoke, and CO2 into the atmosphere.

These seemingly random events (along with the previously mentioned turbulent behavior of fluids) need to be manually inserted into our climate models.

Simulating Earth's Climate (a very simple starting model)


W  N1 N2 N3 N4  E
   A1 A2 A3 A4
   B1 B2 B3 B4
   S1 S2 S3 S4

Imagine for a moment a spinning Earth which is cut vertically into 4 columns and horizontally into 4 rows, resulting in 16 zones (don't use triangles in the top and bottom rows). We now need to write a single equation for each zone which would simulate:
  1. the quantity of solar energy entering each zone over the course of a day
  2. the quantity of energy temporarily absorbed by: soil, melting ice, warming water, and evaporation
  3. the quantity of energy being radiated back into space
  4. the quantity of energy temporarily released by: freezing water, and precipitation.

Because there is more sunlight at the equator, a greater amount of sunlight will be absorbed in rows A + B than rows N + S. In fact, you may wish to visualize an oval of light stretched from North to South and wide enough to cover two columns at the equator. Because the surface of the globe is spinning west-to-east (left to right), our view of the solar oval will be seen to move right-to-left.

Because the surface of the globe is spinning west-to-east while the atmosphere wants to stay put, an apparent east-to-west wind will be blowing over the equator so we'll need equations to describe that as well. Depending upon how you handle parameter communication between zone boundaries, you will probably need at least 28 (12v+16h) inter-zone calculations. (column 4 zones are connected to column 1 zones; there are no zones above row N or below row S)

This means that each simulated tick of the clock will require at least 44 (16 + 28) calculations. You might be able to try this with pen and paper but it will be time consuming and error prone. Moving the simulation into a computer will allow you to introduce larger (more accurate) equations into each location.

If you are brave then you'll need to introduce seasonal changes. This means that the rows N + A would receive peak daylight in June while rows B + S would receive peak daylight in December. It might be easier to visualize a single sine wave superimposed upon our model where the phase shifts over the course of a year.
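
Putting the pieces above together (minus seasons, which would modulate the sunlight term over the year), here is a toy implementation of the 16-zone model. Every constant is invented for illustration; the point is only the bookkeeping of 16 per-zone calculations plus 28 inter-zone exchanges per tick:

```python
# A toy version of the 16-zone model: 4 rows (N, A, B, S) by 4 columns,
# with east-west wraparound. Each tick, every zone absorbs sunlight
# (more near the equator), radiates heat away, and trades a little heat
# with its neighbours. All constants are made up for illustration.

ROWS, COLS = 4, 4
SUN_BY_ROW = [0.6, 1.0, 1.0, 0.6]   # rows A and B (equator) get more sun
RADIATE    = 0.01                   # fraction of heat radiated per tick
EXCHANGE   = 0.05                   # fraction traded between neighbours

def step(temp):
    """One simulated tick over the 4x4 grid of zone temperatures (K)."""
    new = [row[:] for row in temp]
    # 16 per-zone calculations: absorb sunlight, radiate to space.
    for r in range(ROWS):
        for c in range(COLS):
            new[r][c] += 3.0 * SUN_BY_ROW[r] - RADIATE * temp[r][c]
    # 16 horizontal exchanges (column 4 wraps around to column 1)...
    for r in range(ROWS):
        for c in range(COLS):
            d = EXCHANGE * (temp[r][c] - temp[r][(c + 1) % COLS])
            new[r][c] -= d
            new[r][(c + 1) % COLS] += d
    # ...plus 12 vertical exchanges (nothing above row N or below row S).
    for r in range(ROWS - 1):
        for c in range(COLS):
            d = EXCHANGE * (temp[r][c] - temp[r + 1][c])
            new[r][c] -= d
            new[r + 1][c] += d
    return new

temp = [[255.0] * COLS for _ in range(ROWS)]   # start everything at 255 K
for _ in range(2000):                          # run to equilibrium
    temp = step(temp)

# Equatorial rows end up warmer than polar rows, as expected.
print(round(temp[0][0]), round(temp[1][0]))
```

Swapping these crude linear terms for "larger (more accurate) equations" in each zone, as suggested above, changes none of this bookkeeping, which is exactly why the move to a computer pays off.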

Simulating Earth's Climate (more layers)

Atmospheric Layer (a)
N1a N2a N3a N4a
A1a A2a A3a A4a
B1a B2a B3a B4a
S1a S2a S3a S4a
Surface Layer (s)
N1s N2s N3s N4s
A1s A2s A3s A4s
B1s B2s B3s B4s
S1s S2s S3s S4s

Although solar energy directly heats the ground wherever it falls on land, heated ocean water tends to redistribute energy via events as small as evaporation and as large as hurricanes, which all occur in the atmosphere. Ocean energy is also responsible for water currents as small as the Gulf Stream and as large as the Thermohaline Circulation which all occur at, or below, the surface. This means we might want to introduce a second layer so atmospheric events could be simulated in the upper layer while ocean events would be simulated in the lower layer.

After updating equations in the 32 zones to reflect land vs. water, we now need 16 additional equations to describe the flow of energy (zone by zone) between the layers. We now require 60 (44+16) calculations for each tick of the simulation clock. Yikes!

But do we have enough squares? More land exists in the Northern Hemisphere so more squares would allow us to code for that. Also, since uplift formed the Panamanian Land Bridge 3 million years ago, ocean currents have changed in such a way that glaciations are much more common. If we want to model ocean currents then we will need a lot more zones.

JASON - Climate Models commissioned by the U.S. Government

It is an historical fact that the US government commissioned a climate study in 1978 from a group of scientists associated with JASON. This group created a computer model with the audacious name "The JASON Model of the World" and produced a report in 1979 titled:

JASON
April 1979
Technical Report
JSR-78-07

Highlights:

Simulating Earth's Climate in Large Data Centers

The associated image and text in this section were borrowed from a NOAA (National Oceanic and Atmospheric Administration) web site. The model appears to be using either 2.5 or 3 layers and thousands of zones.

Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. To "run" a model, scientists divide the planet into a 3-dimensional grid, apply the basic equations, and evaluate the results. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points.
NOAA Grid Sizes over the years:

Year    N-S     W-E     Total cells
1980s    40      96         3840
1990s    80     192        15360
2004    200     360        72000
2009   1070    1440      1540800

Departing Thoughts on Modeling

What climate professionals say "about climate models"

Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time - usually 30 years. Trends are important because they eliminate - or "smooth out" - single events that may be extreme, but quite rare.

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.

So all models are first tested in a process called Hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate in explaining temperature variations prior to the rise in temperature over the last thirty years, while none of them are capable of explaining the rise in the past thirty years.  CO2 does explain that rise, and explains it completely without any need for additional, as yet unknown forcings.

Where models have been running for sufficient time, they have also been proved to make accurate predictions. For example, the eruption of Mt. Pinatubo allowed modellers to test the accuracy of models by feeding in the data about the eruption. The models successfully predicted the climatic response after the eruption. Models also correctly predicted other effects subsequently confirmed by observation, including greater warming in the Arctic and over land, greater warming at night, and stratospheric cooling.

note: the text above is an excerpt from this article

Facts based on modern (direct-measured) data:

Facts based upon pre-modern proxy data

Okay, so what do the climate models based upon proxies tell us? Facts and derivations:

External Links

Videos

"High Quality" Video Collections at YouTube

Selected "High Quality" Climate Science Videos

NASA | Supercomputing the Climate


NASA | How Satellites Measure Earth's Temperature


NASA | Climate in a Box


James Hansen's Climate Model

What are Humanity's Alternatives?

Emission Scenarios (Using Computers to Model Alternatives for Humanity)

                         Global Emphasis   Regional Emphasis
Economic Emphasis              A1                 A2
Environmental Emphasis         B1                 B2
Many citizens are unaware of a related activity done by scientists who coupled computer-based climate models to computer-based economic models to project humanity's possible actions 100 years into the future.

This is done by restarting each climate simulation (computer run) using one of four different emission scenarios labeled: A1, A2, B1, B2. The outputs are based upon ~40 different outcomes (averaged across more than 25 climate models).

The "ones" column continues our trend to globalization while the "twos" column is a shift back to a more regional economy. The "A" row places more emphasis on a economic health (continuing to burn fossil fuels) while the "B" row places more emphasis on environmental health

Since globalization exports education and also promotes the education of women, birth rates should slow and family sizes should drop. Therefore, column 1 showed human population hitting a 9 billion peak in 2050 then dropping off, while column 2 hit a 9 billion peak in the year 2100 (no modeling was done after 2100 so we do not know if that population will plateau or drop).

The "A1" scenario is subdivided into 3 basic subcategories:

A1F1 (a.k.a. A1FI) Fossil Intensive (same as "make no changes to current activity")
A1B Balanced (between A1FI and A1T)
A1T Technology (development of: wind, nuclear, solar-thermal, solar-photovoltaic)

Talking Points:


More food for thought: when I initially stumbled upon "Emission Scenarios" I couldn't help recalling an Isaac Asimov short story I read 40 years ago titled The Evitable Conflict, which happens to be Chapter 9 of the book I, Robot.

Plot:

The "Machines", powerful positronic computers which are used to optimize the world's economy and production, start giving instructions that appear to go against their function. Although each glitch is minor when taken by itself, the fact that they exist at all is alarming. Stephen Byerley, now elected World Coordinator, consults the four other Regional Coordinators and then asks Susan Calvin for her opinion.

They discover that the Machines have generalized the First Law [of robotics] to mean "No machine may harm humanity; or, through inaction, allow humanity to come to harm." (This is similar to the Zeroth Law which Asimov developed in later novels.) Dr. Calvin concludes that the "glitches" are deliberate acts by the Machines, allowing a small amount of harm to come to selected individuals in order to prevent a large amount of harm coming to humanity as a whole.

In effect, the Machines have decided that the only way to follow the First Law is to take control of humanity, which is one of the events that the three Laws are supposed to prevent. Asimov returned to this theme in The Naked Sun and The Robots of Dawn, in which the controlling influence is not a small conspiracy of Machines but instead the aggregate influence of many robots, each individually tasked to prevent harm.

Comment: we do not possess positronic self-aware computers today, which means our computers are more like bicycles which amplify human effort. In many ways our climate models are a little more desirable than the scenario Asimov wrote about in 1950. We are not obliged to listen to computer-based prognostications but I think we would be very foolish not to. - NSR


Back to Home
Neil Rieck
Kitchener - Waterloo - Cambridge, Ontario, Canada.