Environmental Law 101: Global Climate Change, Part III

The Universe is complex, so complex that we still imperfectly understand it. What we do know is that Earth, the “Water Planet,” is unique and special. Scientists have an idea of what happened billions of years ago. But what tools do they use to predict what is going to happen in the future? The answer is computer models. How accurate are predictions of the impact of CO₂ on the Earth’s climate? The story has to start with the complexities of the past.

Estimates of the age of the Universe suggest that it is about 14 billion years old. At the time of the Big Bang, the Universe had zero size and infinite heat, and all matter, an estimated 2.2 x 10⁴¹ pounds, was compressed into an infinitesimal speck. By the time the Universe was the size of a bowling ball (a billionth of a billionth of a billionth of a billionth of a second after the Big Bang), all particles and forces as we know them had come into existence.

At one second after the Big Bang, the temperature of the Universe had dropped to ten thousand million degrees, roughly a thousand times the temperature at the center of the Sun. Within a few hours, temperatures had dropped to 10 million degrees, too cold to fuse any more particles into atomic nuclei. Within a million years, the temperature of the Universe had fallen to 4,000 degrees, and the nuclei had captured all the electrons they could hold, forming atoms.

The speed of light is 186,000 miles per second or, more or less, about 670,000,000 mph. The galaxy within which Earth is located is one hundred thousand light years across, and it is only one of some hundred thousand million galaxies that can be seen using modern telescopes; each galaxy contains some hundred thousand million stars. Matter throughout the Universe is still rushing apart, the most distant galaxies receding at appreciable fractions of the speed of light. The Earth is estimated to have formed about 4.5 billion years ago. Because the Universe is still expanding, Earth is, in a sense, still inside the Big Bang, moving at speeds that we can neither perceive nor comprehend.

Why is Earth referred to as the “Water Planet”? Because, uniquely (based on what we now know about our “neighbors” in our solar system), Earth has a lot of water. It is cooled by roughly 40,000 gallons of water for every square foot of its surface, and seventy percent of that surface is covered by water. The Gulf Stream, an ocean current, carries three million cubic miles of warm water up to the North Atlantic every hour (bringing more warmth to those high latitudes in one hour than could be provided by burning all the coal mined on Earth in an entire year).

Water vapor fuels all the winds of the Earth, cycling around 12,000 cubic miles of water through the atmosphere annually. [You will remember that water vapor makes up 70% of all greenhouse gases; see Global Climate Change, Part I.] The total quantity of water on Earth, estimated at about 326 million cubic miles, is much the same as it was more than three billion years ago.

The most abundant elements in the Universe are hydrogen (90%) and helium (9%); the most abundant elements in living organisms are carbon, hydrogen, oxygen and nitrogen. Historic concentrations of CO₂ were tens to hundreds of times higher than they are today. Those concentrations were drawn down by rainfall, the result being the vast quantities of limestone now found on Earth and in its oceans.

Taking all of these extraordinary complexities together, the only way to project the factor of greatest future interest (i.e., global temperatures) is to use computer models that take into account an enormous variety of information.

Computer models got us to the moon and back; they also have been directly implicated as a cause of the Great Recession, when they were inappropriately manipulated, contained defects (such as assuming that home prices would never fall) and were relied on too heavily. The fact of the matter is that groundwater contaminant computer models (i.e., computer models that predict the movement of contaminants in water through various media) are most predictive if (a) the domain they cover is relatively small (e.g., a few square miles), (b) the cells in the models are relatively small (e.g., thirty feet in width and length and ten feet in depth), (c) lots of real data is available rather than literature data, (d) boundary conditions are well defined and (e) the domain is relatively homogeneous. Modeling Earth within the Universe is enormously challenging because none of these conditions exists.

Albert Einstein once famously said: “God does not play dice.” And, of course, scientists and nonscientists alike have spent almost a century trying to interpret what he meant. The fact is that, because we do not perfectly understand all the laws of physics, many things happen that appear to be random in nature. To a certain extent, computer models are capable of dealing with randomness and presenting alternative scenarios through approaches such as Monte Carlo analysis, which involves running numerous repeated simulations with varying combinations of random input data in order to obtain a distribution of probable outcomes.
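
To make the Monte Carlo idea concrete, here is a minimal sketch in Python. It is not drawn from any actual climate model: the toy “model” (warming equals a sensitivity factor times a forcing) and every parameter range in it are invented for illustration. The point is only the technique itself: draw uncertain inputs at random, run the model many times, and read off a distribution of outcomes rather than a single number.

```python
import random
import statistics

def toy_model(sensitivity, forcing):
    """Hypothetical toy relationship: warming (deg C) = sensitivity x forcing."""
    return sensitivity * forcing

random.seed(42)  # fixed seed so the run is reproducible
outcomes = []
for _ in range(100_000):
    # Draw each uncertain input at random; both ranges are invented for illustration.
    sensitivity = random.uniform(0.4, 1.2)  # deg C per W/m^2 (assumed range)
    forcing = random.gauss(3.7, 0.4)        # W/m^2 (assumed distribution)
    outcomes.append(toy_model(sensitivity, forcing))

# The product of many runs is a distribution of probable outcomes, not one number.
cuts = statistics.quantiles(outcomes, n=20)  # 5% steps; cuts[0] ~ 5th, cuts[-1] ~ 95th
print(f"median warming: {statistics.median(outcomes):.2f} deg C")
print(f"5th to 95th percentile: {cuts[0]:.2f} to {cuts[-1]:.2f} deg C")
```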

Climate models have been developed to test the hypothesis of climate change because laboratory experimentation is impossible. These models are computerized mathematical representations of the physical processes that control the Earth’s climate, which requires capturing a multitude of complicated, interacting factors. Basically, all climate models divide the atmosphere, oceans and upper layers of soil into a three-dimensional grid. Each grid box (think of it as a “cell” in a groundwater contaminant flow model) represents a location; a “box” might be the size of Ohio or an area of only a few square miles. Each model normally starts with a snapshot of current conditions; then a “forcing factor,” such as increased CO₂, is applied to see how the model responds.
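
Here is a deliberately simplified sketch, again in Python, of that grid-and-forcing idea: divide a domain into boxes, start from a snapshot of temperatures, apply a uniform CO₂ “forcing” each step, and let heat mix between neighboring boxes. Every number and the crude mixing rule are illustrative assumptions, nothing like the physics inside a production climate model.

```python
import copy

# A deliberately tiny 2-D grid; real models use 3-D grids with many layers.
ROWS, COLS = 4, 5
# Starting "snapshot": warmer rows at the top, cooler toward the bottom.
snapshot = [[20.0 - 4.0 * r] * COLS for r in range(ROWS)]

FORCING = 0.02    # assumed extra warming per step from added CO2 (deg C); illustrative only
DIFFUSION = 0.1   # assumed fraction of the neighbor-average gap mixed away each step

def step(grid):
    """Advance one time step: apply the forcing, then mix neighboring boxes."""
    new = copy.deepcopy(grid)
    for r in range(ROWS):
        for c in range(COLS):
            new[r][c] += FORCING  # the "forcing factor," applied to every box
            # Crude mixing: nudge each box toward the average of its neighbors.
            neighbors = [grid[rr][cc]
                         for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= rr < ROWS and 0 <= cc < COLS]
            new[r][c] += DIFFUSION * (sum(neighbors) / len(neighbors) - grid[r][c])
    return new

grid = snapshot
for _ in range(100):  # run the model forward and watch how it responds
    grid = step(grid)
mean = sum(map(sum, grid)) / (ROWS * COLS)
print(f"mean temperature after 100 steps: {mean:.2f} deg C")
```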

Many reputable national and international organizations have created models. Within the United States, they include: (a) the National Center for Atmospheric Research; (b) the United States Department of Commerce; (c) the National Oceanic and Atmospheric Administration (NOAA); and (d) the National Aeronautics and Space Administration (NASA).

Processes that need to be modeled include (a toy energy-balance sketch follows this list):

–Incoming solar radiation allowing for variables such as sunspots and changes in the Earth’s orbit.

–Reflection of the Sun’s rays back into space, such as from clouds, ice, sand, aerosols and even snowflakes.

–The “greenhouse effect” (see “Global Climate Change, Part I”).

–Absorption of CO₂ by oceans and forests.

–The “blanketing” (warming) effect of clouds.

–Chain reactions (feedbacks) that reinforce or counter initial trends.
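
As promised above, here is a toy sketch covering the first three items on the list: incoming solar radiation, reflection back into space and the greenhouse effect. It is the standard zero-dimensional “energy balance” approximation found in textbooks, not any organization’s model; the effective emissivity value is simply a common illustrative choice that stands in for the greenhouse effect.

```python
# Zero-dimensional energy balance: absorbed sunlight must equal outgoing
# infrared radiation, part of which the greenhouse effect traps.
SOLAR = 1361.0      # solar constant, W/m^2 (incoming solar radiation)
ALBEDO = 0.30       # fraction reflected back into space (clouds, ice, sand, aerosols)
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.61   # effective emissivity < 1 stands in for the greenhouse effect

# Average the absorbed sunlight over the whole sphere (the factor of 4 is
# geometry: a disk intercepts the light, but the full sphere radiates it).
absorbed = SOLAR * (1 - ALBEDO) / 4.0

# Balance EMISSIVITY * SIGMA * T**4 = absorbed, then solve for T.
temperature_k = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
print(f"equilibrium surface temperature: {temperature_k - 273.15:.1f} deg C")
```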

If warmer temperatures are projected, the difficulty of predicting outcomes compounds: warmer air holds more moisture, meaning more water vapor (the primary greenhouse gas); more polar ice melts, so less of the Sun’s radiation is reflected away from Earth; and there are greater releases of the CO₂ trapped in ice and frozen soil. A model must “model” all of these variables; in fact, it must model numerous combinations of these variables.
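
A short sketch of why such feedbacks compound the difficulty: in the hypothetical loop below, an initial warming increases water vapor and melts some ice, both effects add further warming, and that additional warming is fed back in until the loop settles. The gain values are invented for illustration, not measured quantities.

```python
# Hypothetical feedback loop: an initial warming triggers water-vapor and
# ice-albedo feedbacks, each of which adds a further (smaller) warming.
INITIAL_WARMING = 1.0    # deg C from the initial CO2 forcing (illustrative)
WATER_VAPOR_GAIN = 0.30  # assumed extra deg C per deg C (warmer air holds more vapor)
ICE_ALBEDO_GAIN = 0.15   # assumed extra deg C per deg C (less ice, less reflection)

total = INITIAL_WARMING
added = INITIAL_WARMING
while added > 1e-6:
    # Each round of warming produces a smaller round of additional warming.
    added *= WATER_VAPOR_GAIN + ICE_ALBEDO_GAIN
    total += added

# With combined gain g < 1 the series converges to INITIAL_WARMING / (1 - g);
# with g >= 1 it would run away, which is why the size of the gains matters.
print(f"warming after feedbacks: {total:.2f} deg C (vs. {INITIAL_WARMING:.2f} without)")
```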

In order to avoid an “opus” post, I am going to end the modeling discussion at this point. The “case for” and the “case against” global warming predictions based on modeling outcomes deserve separate consideration, and I will take them up in my next post.
