By Dr. Larry Bell
Anyone who says they can confidently predict global climate changes or effects is either a fool or a fraud. No one can even forecast global, national or regional weather conditions that will occur months or years into the future, much less climate shifts that will be realized over decadal, centennial and longer periods.
Nevertheless, this broadly recognized limitation has not dissuaded doomsday prognostications that have prompted incalculably costly global energy and environmental policies. Such postulations attach great credence to computer models and speculative interpretations that have no demonstrated accuracy.
The primary source of scary climate change alarmism routinely trumpeted in the media originates from politically cherry-picked summary report items issued by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). Yet even the IPCC’s 2001 report chapter titled “Model Evaluation” contains this confession: “We fully recognize that many of the evaluation statements we make contain a degree of subjective scientific perception and may contain much ‘community’ or ‘personal’ knowledge. For example, the very choice of model variables and model processes that are investigated are often based upon subjective judgment and experience of the modeling community.”
In that same report the IPCC further admits, “In climate research and modeling, we should realize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” Here, the IPCC openly acknowledges that its models should not be trusted. Still, the IPCC obviously needs to apply them to justify its budget and influence. Without contrived, frightening forecasts, they would soon be out of business.
So in the IPCC’s most recent 2007 report the story changed significantly, placing “great confidence” in the ability of General Circulation Models (GCMs) to responsibly attribute observed climate change to anthropogenic (man-made) greenhouse gas emissions. It states that “…climate models are based on well-established physical principles and have been demonstrated to reproduce observed features of recent climate…and past changes.”
Yet even Kevin Trenberth, a lead author of 2001 and 2007 IPCC report chapters, has admitted that the IPCC models have failed to duplicate realities. Writing in a 2007 “Predictions of Climate” blog post on the science journal Nature’s website, he stated, “None of the models used by the IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed state.”
Syun-Ichi Akasofu, the former director of the International Arctic Research Center at the University of Alaska-Fairbanks, has determined that IPCC computer models have not even been able to duplicate observed temperatures in Arctic regions. While the models’ atmospheric CO2 forecasts did indicate warm Arctic conditions, the predicted temperatures were lower than those actually reported, and observed colder areas were absent from the simulations altogether. Akasofu stated, “If fourteen GCMs cannot reproduce prominent warming in the continental Arctic, perhaps much of this warming is not produced by greenhouse effect at all.”
Graeme Stephens at the Colorado State University’s Department of Atmospheric Science warned in a 2008 paper published in the Journal of Climate that computer models involve simplistic cloud feedback descriptions: “Much more detail on the system and its assumptions [is] needed to judge the value of any study. Thus, we are led to conclude that the diagnostic tools currently in use by the climate community to study feedback, at least as implemented, are problematic and immature and generally cannot be verified using observations.”
The prominent, late scientist Joanne Simpson developed some of the first mathematical models of clouds in an attempt to better understand how hurricanes draw power from warm seas. Ranked as one of the world’s top meteorologists, she believed that global warming theorists place entirely too much emphasis upon faulty climate models, observing, “We all know the frailty of models concerning the air-surface system…We only need to watch the weather forecasts [to prove this].”
A recent study reported in the peer-reviewed science journal Remote Sensing concludes that NASA satellite data from the years 2000 through 2011 indicate that GCMs have grossly exaggerated warming retained in the Earth’s atmosphere. The study’s co-author, Dr. Roy Spencer, observes: “There is a huge discrepancy between the data and the forecasts that is especially big over the oceans. Not only does the atmosphere release more energy than previously thought, it starts releasing it earlier in the warming cycle.”
Spencer, a principal research scientist at the University of Alabama-Huntsville and former senior scientist for climate studies at NASA, has also observed that results of the one or two dozen climate modeling groups around the world often reflect a common bias. One reason is that many of these modeling programs are based upon the same “parameterization” assumptions; consequently, common errors are likely to be systematic, often missing important processes. Such problems arise because basic components and dynamics of the climate system aren’t understood well enough on either theoretical or observational grounds to even put into the models. Instead, the models focus upon those factors and relationships that are most familiar, ignoring others altogether. As Spencer notes in his book Climate Confusion, “Scientists don’t like to talk about that because we can’t study things we don’t know about.”
A peer-reviewed climate study that appeared in the July 23, 2009 edition of Geophysical Research Letters went even further in its characterization of faulty climate modeling practices. The paper noted IPCC modeling tendencies to fudge climate projections by exaggerating CO2 influences and underestimating the importance of shifts in ocean conditions. The research indicated that influences of solar changes and intermittent volcanic activity have accounted for at least 80% of observed climate variation over the past half century. Study coauthor John McLean observed: “When climate models failed to retrospectively produce the temperatures since 1950, the modelers added some estimated influences of carbon dioxide to make up the shortfall.” He also highlighted the inability of computer models to predict El Niño ocean events, which can periodically dominate regional climate conditions, further reducing the models’ meaningfulness.
J. Scott Armstrong, a professor at the University of Pennsylvania’s Wharton School, and a leading expert in the field of professional forecasting, believes that prediction attempts are virtually doomed when scientists don’t understand or follow basic forecasting rules. He and colleague Kesten Green of Monash University conducted a “forecasting audit” of the 2007 IPCC report and “found no references…to the primary sources of information on forecasting methods” and that “the forecasting procedures that were described [in sufficient detail to be evaluated] violated 72 principles. Many of the violations were, by themselves, critical”.
A fundamental principle that IPCC violated was to “make sure forecasts are independent of politics”. Armstrong and Green observed that “the IPCC process is directed by non-scientists who have policy objectives and who believe that anthropogenic global warming is real and a danger.” They concluded that: “The forecasts in the report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing…We have not been able to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying it will get colder”.
Trenberth argued in his 2007 Nature blog that “the IPCC does not make forecasts”, but “instead proffers ‘what if’ projections that correspond to certain emission scenarios”; and then hopes these “projections… will guide policy and decision makers.” He went on to say: “there are no such predictions [in the IPCC reports] although the projections given by the Intergovernmental Panel on Climate Change (IPCC) are often treated as such. The distinction is important”.
Armstrong and Green challenge that semantic defense, pointing out that “the word ‘forecast’ and its derivatives occurred 37 times, and ‘predict’ and its derivatives occurred 90 times in the body of Chapter 8” of the IPCC’s 2007 Working Group I report.
Of course there would be very little interest in model forecasts at all if it were not for hysterical hype about a purported man-made climate crisis caused by carbon dioxide fossil fuel emissions. Without CO2 greenhouse gas demonization there is no basis for cap-and-tax schemes, unwarranted “green” fuel subsidies, expansion of government regulatory authority over energy production and construction industries through unintended misapplications of the Clean Air Act, claims of polar bear endangerment to prevent drilling in ANWR, or justifications for massive climate research budgets including…guess what? Yup! Lots of money to produce more climate model forecasts that perpetuate these agendas.
Reprinted from Forbes with author permission.