Frozen in Time
Dec 03, 2010
Can we really measure the climate?

The Scientific Alliance, December 3, 2010 Newsletter

Average temperatures or temperature ranges are often used as a simple proxy for climate. In combination with some description of rainfall, they encapsulate the essentials: in the Mediterranean it is typically hot and dry in summer and cooler and wetter in winter, and a continental climate is hot and dry in summer and cold with snow in winter, for example. But quantifying climate more precisely is fraught with difficulty.

Records kept over the years give us historical figures to make comparisons between average temperatures then and now. This sounds simple, but the very concept of an average temperature has no simple definition. First, we have to realise that temperature is what is known as an intensive property of matter. This simply means that its value does not depend on the nature or amount of the material for which it is measured: very different substances, in very different quantities, can all be at the same temperature.

So, for example, air and a body of water may have the same measured temperature at a particular moment, but their behaviour is very different. Air has a low thermal capacity (it takes little heat to change its temperature), while water has a high thermal capacity and its temperature changes relatively slowly. In the present long cold spell in western Europe, ponds and lakes need a period of consistently sub-zero temperatures before they begin to freeze. Equally, as air temperatures rise, the ice may take many days to melt. A given volume of air has a very different thermal energy content from the same volume of water. This can be easily quantified and, in contrast to temperature, is an extensive property.

When trying to average temperatures, the first obvious rule is that the measurements must all be of the same material: you cannot average air and water temperatures, for example, and get a meaningful answer. This in itself is pretty obvious and, in discussing climate change, air and water temperatures are considered separately. However, the difficulties with averaging do not stop there.

Even if temperatures are measured under carefully controlled conditions as expected for official records, they will fluctuate quite rapidly depending on wind direction and strength, cloud cover, time of day etc. The convention is to measure a maximum and minimum shade temperature each day. These readings can then be used to provide average maxima and minima per month or year, or combined to give an overall ‘average temperature’. And the figures for individual stations can themselves be combined to give national, regional and global averages.

These figures tell us something, of course, but the desire to quantify also obscures the detail. Say, for example, that place X has an average maximum temperature of +15C and an average minimum of +5C, and place Y registers +25C and -5C. Both have an overall average of +10C, but the actual climate experienced would be quite different. In a similar way, measured air temperatures in the shade bear little relationship to the apparent temperature in the sun. Although the measured shade air temperature might be the same whether or not the sun is shining, the effect of the sunlight on the Earth’s surface is significant and, once the ground has been warmed, it will release its heat at night to keep the air somewhat warmer, at least temporarily.
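As a minimal illustration of this point, the hypothetical figures for places X and Y can be run through the conventional max/min averaging in a few lines of Python (a sketch only; the numbers are the ones used above):

```python
# Minimal sketch of the averaging problem described above, using the
# hypothetical figures for places X and Y from the text.

def overall_average(avg_max_c, avg_min_c):
    """Combine average daily maximum and minimum into a single figure,
    as in the conventional max/min averaging described earlier."""
    return (avg_max_c + avg_min_c) / 2.0

places = {
    "X": {"avg_max": 15.0, "avg_min": 5.0},   # mild days, mild nights
    "Y": {"avg_max": 25.0, "avg_min": -5.0},  # hot days, freezing nights
}

for name, p in places.items():
    avg = overall_average(p["avg_max"], p["avg_min"])
    rng = p["avg_max"] - p["avg_min"]
    print(f"Place {name}: overall average {avg:+.1f} C, range {rng:.1f} C")

# Both places print an overall average of +10.0 C, yet the ranges (10 C
# versus 30 C) describe quite different climates - exactly the detail that
# the single averaged figure obscures.
```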

Simple averaging can be deceptive in other ways as well. Depending on the weather conditions or time of year, either the maximum or minimum temperature might be more typical of the day as a whole, yet both are implicitly given equal weight. Nevertheless, it is arguable that such issues are not important when comparing time series of measured temperatures. For example, the Central England Temperature record (CET) is the longest continuous record available, with monthly means recorded from 1659 and daily means logged from 1722. Looking at this, it is easy to see the recorded range and note that temperatures do indeed appear to have been higher in the latter part of the 20th Century, although they have dipped again since 2000. It is the changes which are significant rather than the absolute values, provided that all measurements are strictly comparable.

This, of course, introduces yet another concern. The instruments used in the 17th Century were not the same as those used 300 years later and, with the best will in the world, it is difficult to guarantee that no artefacts have been introduced. Equally, it is hardly conceivable that the surroundings of the measuring stations have remained unchanged over this period (although hopefully none of the weather stations is now in an urban area, on tarmac or near heat sources, as some have been found to be in other countries).

A final problem to bring up with averages is that, to avoid giving a misleading picture, data should be taken from stations spread evenly over the Earth’s surface. This is certainly not the case. In particular, there are large areas of the Arctic and Antarctic with no data being collected. The same is true for the open oceans, where collecting surface water temperatures reliably is enough of a challenge, without trying to measure air temperatures.

What we are left with then is an incomplete record of imperfect data, from which conclusions about climate change are drawn. This is the basis of the ‘global warming’ message. But actually the concept of global average temperature is again a little misleading, since the summary of the IPCC Fourth Assessment Report shows that the warming pattern is regional rather than global. Warming over the 20th Century was recorded on all continents apart from Antarctica, but was considerably greater in the northern than the southern hemisphere. Given the greater proportion of ocean in the south, this is not surprising.

But global averages are still the main measure and this is the time of year when preliminary conclusions are drawn about the current year, as the annual meeting of the UN Convention on Climate Change takes place. So far, the message being put out by the World Meteorological Organization is that 2010 is likely to be among the warmest three on record. Based on the temperature record, this is doubtless correct, but how meaningful is this?

The WMO points towards record high temperatures in Russia, China and Greenland to support its case. Meanwhile, anyone mentioning record lows and pointing out that new records are set nearly every day somewhere in the world is told that this means nothing. In practical terms, life has to go on and adapt to whatever the climatic conditions turn out to be. Measuring temperatures remains a useful thing to do, but we must be careful not to read too much into the average figures. And we should never forget that, whatever the temperature is, we still have only a hazy idea about what controls it. Read more here.

Icecap Note: See here how even James Hansen agrees.

Dec 01, 2010
Gamble in the monsoons

By Madhav Khandekar, Willie Soon

The annual climate summit opened in Cancun, Mexico, this week. A few days earlier, while releasing a new report, Climate Change and India: A sectoral and regional analysis for the 2030s, environment and forests minister Jairam Ramesh emphasised, “It is imperative” that India has “sound, evidence-based assessments on the impacts of climate change”. The report claims that India will soon be able to forecast the timing and intensity of future monsoons that are so critical to its agricultural base.

Could 250 of India’s top scientists be wrong when they say their computers will soon be able to predict summer monsoon rainfall during the 2030s, based on projected CO2 trends? Do scenarios generated by climate models really constitute “sound, evidence-based assessments”? We do not believe it is yet possible to forecast future monsoons, despite more than two centuries of scientific research, or the claims and efforts of these excellent scientists. The Indian summer monsoonal rainfall remains notoriously unpredictable, because it is determined by the interaction of numerous changing and competing factors, including: ocean currents and temperatures, sea surface temperature and wind conditions in the vast Indian and Western Pacific Ocean, phases of the El Nino Southern Oscillation in the equatorial Pacific, the Eurasian and Himalayan winter snow covers, solar energy output, and even wind direction and speed in the equatorial stratosphere some 30-50 km aloft.

Relying on computer climate models has one well-known side effect: Garbage in, gospel out. Current gospel certainly says CO2 rules the climate, but any role played by CO2 in monsoon activity is almost certainly dwarfed by other, major influences. Computer climate models have simply failed to reproduce current climate observations, let alone reliably project future climatic changes and impacts.

Both Indian and global monsoons have declined in strength and intensity over the last 50 years, and this reality largely contradicts climate model forecasts that say monsoonal rainfalls will increase. It is equally well known that climate models have been unable to replicate the decadal to multi-decadal variations of monsoonal rainfalls. Fred Kucharski and 21 other climate modellers challenge the alleged CO2-monsoon linkage. Using World Climate Research Programme climate model analyses, they conclude that “the increase of greenhouse gases concentrations has had little impact on the [observed] decadal Indian monsoonal rainfall variability in the twentieth century.” Perhaps the Indian scientists missed their report.

No climate models predicted the severe drought conditions of the 2009 Indian monsoon season - followed by the extended wetness of the 2010 season. The inability to foresee this 30-50% precipitation swing in most regions underscores how far we really are from being able to forecast monsoons, whether for next year, 2030 or the end of the century. Another recent analysis, by scientists from the National Technical University of Athens, found that computer model projections did not agree with actual observations at 55 locations around the world. Computer forecasts for large spatial areas, like the contiguous US, were even more out of sync with actual observations than was the case for specific locations!

Ramesh says India hopes to offer a middle ground and present a less “petulant and obstructionist” perception during climate negotiations in Cancun. But if he believes the new report and claims of imminent forecasting ability will make this happen, we fear he is mistaken. “What-if” scenarios based on CO2-driven computer models are hardly a sound basis for negotiations, energy policies, agricultural planning or changed perceptions.

The impotence of current climate models is not surprising. As climate scientists, we know computer climate models are very useful for analysing how Earth’s complex climate system works. But models available today are simply not ready for prime time, when it comes to predicting future climate, monsoons or droughts. Our understanding of how weather and climate vary from year to year is still very immature, and it will be years (if not decades) before we resolve fundamental questions of how various forces interact to cause those changes.

Computer models still cannot accurately simulate or predict regional phenomena like the Indian summer monsoon rainfall. Even when model outputs agree with certain observations, we cannot be certain that the models did so for the right reasons. Considering the myriad factors that influence and alter weather and climate regimes, it is clear that climate models cannot make meaningful projections about future events, especially if they focus on the single factor of rising atmospheric CO2 levels.

Science and society will pay a very dear price, if political agendas continue to generate and legitimise false and pretentious computer outputs that have no basis in reality. How much better it would be if researchers focused on improving our ability to accurately forecast monsoons, droughts and other events just a few weeks or months in advance. That would really give farmers and others a chance to adapt, minimise damages and actually benefit from being better prepared.

Willie Soon is a solar physicist and climate scientist at the Harvard-Smithsonian Center for Astrophysics. Madhav Khandekar is a former research scientist with Environment Canada and served as an expert reviewer for the IPCC’s 2007 reports.

Dec 01, 2010
The Cancun Climate Capers

By S. Fred Singer

Update: Bureaucrats Gone Wild in Cancun. The United Nations Climate Change Conference is meeting in Cancun, Mexico, from November 29 to December 10, 2010, where bureaucrats will work to transfer wealth and technology from developed to developing nations by raising the cost of traditional energy. But before these international bureaucrats get to “work”, they decided to throw a lavish party for themselves.



-----------

Today, Nov. 29, marks the beginning of the Cancun COP (Conference of the Parties [to the UN Framework Convention on Climate Change]). This is the 16th meeting of the nearly two hundred national delegations, which have been convening annually since the mid-1990s; the Kyoto Protocol was negotiated in 1997 at COP-3.

This conference promises to be another two-week extravaganza for some 20,000 delegates and hangers-on, who will be enjoying the sand, surf, and tequila sours, mostly paid for by taxpayers from the U.S. and Western Europe. For most delegates, this annual vacation has become a lifetime career: it pays for their mortgages and their children’s education. I suppose a few of them actually believe that they are saving the earth, even though the Kyoto Protocol (which limits emissions of greenhouse [GH] gases, like CO2, but was never submitted for ratification to the U.S. Senate) will be defunct in 2012 and there is, thankfully, no sign of any successor treaty.

But never fear: the organizers may “pull a rabbit out of a hat” and spring a surprise on the world. They will likely announce that they have conquered the greenhouse gases known as hydrofluorocarbons (HFCs). Now, HFCs are what replaced HCFCs, which in turn replaced CFCs, thanks to the Montreal Protocol of 1987. This succession of chemical refrigerants has progressively reduced ozone-destroying potential; but unfortunately they are all GH gases. So now HFCs must be eradicated, because a single molecule of HFC produces many thousands of times the greenhouse effect of a molecule of CO2. What they don’t tell you, of course, is that the total forcing from the HFCs is less than one percent of that of CO2, according to the IPCC (see page 141). So “slaying the dragon” amounts to slaying a mouse, or something even smaller. But you can bet that it will be trumpeted as a tremendous achievement and will likely invigorate the search for other mice that can be slain.

Of course, industry has no objection to this maneuver of invoking the Montreal Protocol as a means of reducing the claimed GH-gas effects of global warming. It means more profits from patents, new manufacturing facilities, and sales—and it will eliminate the bothersome competition from factories in India, China, and Brazil that are still manufacturing HCFCs, and in some cases even CFCs. Very likely, these nations will oppose the maneuver. But so should consumers. It will mean replacing refrigerants in refrigerators, air conditioners, and automobiles—at huge cost and to little effect. We don’t even know yet what chemical will replace HFC and how well it will work in existing equipment.

But it is hoped that nobody will notice this amid the clamor for an international agreement, or any kind of agreement, really, even if it means misusing the Montreal Protocol. Remember that HFCs have no effect on ozone and therefore are not covered by the 1987 Montreal Protocol.

At this point, it is worth remembering how little has been accomplished by the Montreal Protocol—that “signal achievement” of the global environmental community. As U.S. negotiator Richard Benedick brags (in his book Ozone Diplomacy), the Montreal agreement was achieved by skillful diplomacy rather than by relying on science.

When the Montreal Protocol was negotiated and signed in 1987, there was no evidence whatsoever that CFCs were actually destroying stratospheric ozone. At that time, published observations (by the leading Belgian researcher Zander and by others) showed no increase in stratospheric chlorine, indicating that natural sources, like salt from ocean spray and volcanoes, were dominating over the human contribution of chlorine from CFCs. The scientific evidence changed only in 1988 (thanks to NASA scientist Rinsland), a year after the Montreal Protocol was signed.

Nevertheless, the hype of the Antarctic Ozone Hole (AOH), which was discovered, only by chance, in 1985, was driving global fears of a coming disaster. In the U.S., there was talk about an Arctic ozone hole opening up. There was even a scare about a “hole over Kennebunkport,” President Bush’s summer home. And of course, the EPA, as usual, was hyping the whole matter to the White House. No wonder that poor George Bush (the elder) agreed to phase out CFCs immediately.

And who still remembers all the lurid tales of blind sheep in Patagonia and of ecological disasters in the Southern Ocean, all supposedly the result of the AOH? It turned out later that the unfortunate sheep had pink-eye.

The Montreal Protocol prohibition on manufacturing CFCs has indeed led to the reduction of the atmospheric content of these long-lived CFC molecules. But what about stratospheric ozone itself? There has been little effect on the AOH—just annual fluctuations. And according to the authoritative reports of the World Meteorological Organization, the depletion at mid-latitudes may have been only about 4% over a period ending in 1992. There seems to have been no further depletion since 1993, even while stratospheric chlorine levels were still rising. Something doesn’t quite check out here.

Whatever the cause of the observed 4% ozone depletion may be, compare this piddling amount to the natural variability of total atmospheric ozone, as measured carefully by NOAA: on the order of 100% or more from day to day, seasonal change of 30% to 50%, and an eleven-year sunspot-correlated variation on the order of 3%.

And to top it off, there has been no documented increase at all in solar ultraviolet (UV-B), the radiation that produces sunburn and can lead to skin cancer. All of the monitoring so far has shown no rise over time—and therefore no biological effects due to ozone depletion.

And in any case, theory tells us, and measurements agree, that a 4% depletion amounts to an increase in solar UV equivalent to moving 50 miles to the south, at mid-latitudes. Measured UV-B values increase by roughly 1,000% in going from the pole to the equator, as the average solar zenith angle decreases.
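A rough back-of-envelope check of the scale involved is sketched below. The uniform exponential UV-vs-latitude gradient and the radiation amplification factor are illustrative assumptions, not figures from the article; only the ~1,000% pole-to-equator increase and the ~4% depletion come from the text, and the crude estimate lands in the same order of magnitude as the 50-mile figure quoted above.

```python
import math

# Back-of-envelope sketch of the "equivalent southward displacement" argument.
# The uniform UV-vs-latitude gradient and the radiation amplification factor
# (RAF) below are illustrative assumptions, not figures from the article.

POLE_TO_EQUATOR_MILES = 6_200        # ~90 degrees of latitude at ~69 miles/degree
UV_RATIO_POLE_TO_EQUATOR = 10.0      # the ~1,000% increase cited above
RAF = 1.1                            # assumed: ~1.1% more UV-B per 1% less ozone

# Average fractional UV-B increase per mile of southward travel
# (crude, uniform-gradient approximation).
uv_gradient_per_mile = math.log(UV_RATIO_POLE_TO_EQUATOR) / POLE_TO_EQUATOR_MILES

ozone_depletion = 0.04               # the ~4% mid-latitude depletion cited above
uv_increase = RAF * ozone_depletion  # implied fractional UV-B increase

equivalent_miles_south = math.log(1.0 + uv_increase) / uv_gradient_per_mile
print(f"Equivalent southward shift: ~{equivalent_miles_south:.0f} miles")
# This crude estimate comes out at roughly a hundred miles - the same order
# of magnitude as the 50-mile figure quoted above, and trivially small
# compared with the natural variability listed earlier.
```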

So look for a “breakthrough” announcement from Cancun, as once again our intrepid negotiators will have “saved the climate”—maybe. In addition to timing and cost issues, some countries will insist that HFCs have no impact on the ozone layer and thus should be handled under the United Nations climate change talks rather than the Montreal Treaty. 

A State Department official dismissed that as a legalistic argument and said that the ozone treaty could and should be used to achieve broader environmental objectives. “What we’ve found is that the Montreal Protocol has been a very effective instrument for addressing global environmental problems,” said Daniel A. Reifsnyder, the nation’s chief Montreal Protocol negotiator, in an interview. “It was created to deal with the ozone layer, but it also has tremendous ability to solve the climate problem if people are willing to use it that way.”

Mario Molina, the Mexican scientist who shared the Nobel Prize in chemistry for his work in identifying the role of chlorofluorocarbons in depleting stratospheric ozone, said that extending the Montreal Protocol to include HFCs could reduce the threat of climate change by several times what the Kyoto Protocol proposes. (Evidently, he has not read the IPCC report in which he is listed as a lead author.) “We understand it’s a stretch to use an international agreement designed for another purpose,” he said. “But dealing with these chemicals and using this treaty to protect the planet makes a lot of sense.”

Maybe Dr. Molina should stick to chemistry.

Atmospheric physicist S. Fred Singer is Professor Emeritus of Environmental Sciences at the University of Virginia and founding director of the US Weather Satellite Service (now NESDIS-NOAA).

See also this American Thinker story on Cancun, “Dead Green Theory”.

Nov 30, 2010
Attn Cancun: Satellite shows sea level rising almost 50% slower than the slowest IPCC projection

By Phillip F. Schewe

Glaciers are retreating and parts of the ice sheets on Greenland and Antarctica are melting into the ocean. This must result in a rise in sea level, but by how much? A new measurement of gravity around the globe, made with a pair of orbiting satellites, provides the first map detailing how the rise differs across different parts of the globe.

According to the new results, the annual world average sea level rise is about 1 millimeter, or about 0.04 of an inch. In some areas, such as the Pacific Ocean near the equator and the waters offshore from India and north of the Amazon River, the rise is larger. In some areas, such as the east coast of the United States, the sea level has actually dropped a bit over the past decade.


The surface of the sea is a constantly shifting fabric. To achieve a truer sense of how much the sea is changing in any one place, scientists measure the strength of gravity in that place. Measuring gravity over a patch of ocean or dry land provides an estimate of how much mass lies in that region. The measured mass depends on the presence of such things as mountains, glaciers, mineral deposits, and oceans.

If the gravity measurement for a place is changing, this could mean that the place is losing mass because of a retreating glacier or gaining mass if, as in the ocean surrounding Antarctica, new melt water is streaming in.

The Gravity Recovery and Climate Experiment, or GRACE for short, consists of a pair of satellites moving in an orbit that takes them over the South and North Poles. The two craft, nicknamed Tom and Jerry after the television cartoon characters, send constant signals to each other to determine their relative spacing to about 10 microns—one-tenth the width of a human hair—over a distance of 130 miles. If the first craft flies above a slightly more weighty area of the Earth’s surface—like a mountain range—it will be tugged a bit out of place, an effect picked up by a change in the relative spacing of the craft.
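To put that ranging precision in perspective, the arithmetic is a one-liner (a minimal sketch; the 10-micron and 130-mile figures are the ones quoted above, and only standard unit conversions are added):

```python
# Fractional ranging precision implied by the figures quoted above:
# about 10 microns resolved over a 130-mile inter-satellite separation.

MICRON_IN_M = 1e-6
MILE_IN_M = 1_609.34

ranging_precision_m = 10 * MICRON_IN_M   # ~10 microns
separation_m = 130 * MILE_IN_M           # ~130 miles (~209 km)

fractional_precision = ranging_precision_m / separation_m
print(f"Separation: about {separation_m / 1000:.0f} km")
print(f"Precision: about 1 part in {1 / fractional_precision:.1e}")
# -> roughly 1 part in 2e10, i.e. parts in tens of billions.
```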

In this way, monthly gravity maps of patches of land or ocean about 180 miles wide can be made with high precision. The new report, covering the years 2003-09, looks at how much mass has been lost from land areas and how much mass has been gained by ocean areas.

One of the authors of the report, Riccardo Riva from the Delft University of Technology in the Netherlands, said that the average annual rise in sea level due to meltwater entering the ocean is about 1 millimeter, but that an additional rise will come from the fact that as the average temperature rises, so does the ocean temperature, which in turn causes the volume of the ocean to increase.

“The most important result of the new report is the measurement of the sea level changes for specific regions of the Earth that are based on direct and global measurements of mass change,” Riva said.

Mark Tamisiea, who works at the National Oceanography Centre in Southampton, England, and was not involved in the GRACE work, believes the new report represents good research.

“As coastal sea level changes impact society, it is important for us to understand as much about the local differences from the global average as possible,” Tamisiea said. “These results are one piece in that puzzle.”

“GRACE is definitely the ‘real deal’ when it comes to measuring climate change from space,” said Joshua Willis, an ocean expert at the Jet Propulsion Laboratory in Pasadena, Calif. “This work by Dr. Riva and company reminds us that the world’s oceans don’t behave like a giant bathtub. As the ice melts and the water finds its way back to the ocean, the resulting sea level rise won’t be the same all over the world.”

“These effects are still small in today’s rising ocean, but as we look out over the next century, the patterns of sea level change due to melting ice will be magnified many times over as the ice sheets thin and melt,” Willis said.

Looking at the actual map of sea level rises presents an ironic twist. Offshore of the areas where melting ice is most rapidly falling into the ocean—such as Greenland and Antarctica—the sea level appears to be falling.

“The main reason for this is the rebound of the solid Earth,” explained Riva. “Less ice causes the continents to go up, and therefore sea level drops. Meltwater distributes around quite quickly, in most cases, so there is no accumulation due to that.”

More information: The new GRACE results appear in the journal Geophysical Research Letters.

Icecap Note: This is the equivalent of about 4 inches per century, well under the IPCC range of 7.5 to 23 inches. This is despite the fact that alarmists like Hansen and Gore claim sea levels are rising much faster than the IPCC projects.
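The note’s arithmetic can be checked in a few lines (a minimal sketch; only standard unit conversions are added to the figures quoted above):

```python
# Converting the GRACE-derived meltwater contribution (~1 mm/yr) to inches
# per century, and the IPCC range cited in the note (7.5 to 23 inches per
# century) back to mm/yr for comparison.

MM_PER_INCH = 25.4

rate_mm_per_yr = 1.0
print(f"1 mm/yr -> {rate_mm_per_yr * 100 / MM_PER_INCH:.1f} inches per century")

for inches_per_century in (7.5, 23.0):
    mm_per_yr = inches_per_century * MM_PER_INCH / 100
    print(f"{inches_per_century} in/century -> {mm_per_yr:.1f} mm/yr")

# 1 mm/yr works out to about 3.9 inches per century, while the cited IPCC
# range corresponds to roughly 1.9 to 5.8 mm/yr, so the meltwater term alone
# sits well below the low end of that range (thermal expansion, mentioned in
# the article, adds to the total).
```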

Steve Goddard adds in Real Science:

For the period 1993 to 2003, the rate of sea level rise is estimated from observations with satellite altimetry as 3.1 ± 0.7 mm per year, significantly higher than the average rate. The tide gauge record indicates that similar large rates have occurred in previous 10-year periods since 1950. It is unknown whether the higher rate in 1993 to 2003 is due to decadal variability or an increase in the longer-term trend.

That last sentence is a classic. They avoided the obvious answer that the higher rate from 1993-2003 was due to using a different methodology to generate the numbers. The older measurements are from tide gauges, and the newer ones are from satellite altimetry.

They failed to mention that tide gauges don’t agree with the satellite altimetry. They failed to mention that we don’t see much if any increase in rates from tide gauge data. They failed to provide any justification for the switch. They failed to provide any evidence that tide gauges are less reliable now than they were in the last century. They failed to do any verification of the accuracy of altimetry measurements.

This is just another IPCC nature trick - switching measurement systems to create an increase where there is none.

Nov 29, 2010
Extremely Active Atlantic Hurricane Season was a “Gentle Giant” for U.S.

NOAA

Note: See more detailed summary by Dr. Anthony Lupo in ICECAP IN THE NEWS below.

NOAA’s Prediction for Active Season Realized; Slow Eastern Pacific Season Sets Record

According to NOAA the 2010 Atlantic hurricane season, which ends tomorrow, was one of the busiest on record. In contrast, the eastern North Pacific season had the fewest storms on record since the satellite era began.

In the Atlantic Basin a total of 19 named storms formed - tied with 1887 and 1995 for third highest on record. Of those, 12 became hurricanes - tied with 1969 for second highest on record. Five of those reached major hurricane status of Category 3 or higher.

These totals are within the ranges predicted in NOAA’s seasonal outlooks issued on May 27 (14-23 named storms; 8-14 hurricanes; 3-7 major hurricanes) and August 5 (14-20 named storms; 8-12 hurricanes; 4-6 major hurricanes). An average Atlantic season produces 11 named storms, six hurricanes and two major hurricanes.
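The comparison above can be tabulated directly (a minimal sketch; all figures are the ones quoted in this report):

```python
# 2010 Atlantic season totals versus NOAA's two seasonal outlooks and the
# long-term averages, all as quoted in the text above.

observed = {"named storms": 19, "hurricanes": 12, "major hurricanes": 5}
outlook_may = {"named storms": (14, 23), "hurricanes": (8, 14), "major hurricanes": (3, 7)}
outlook_aug = {"named storms": (14, 20), "hurricanes": (8, 12), "major hurricanes": (4, 6)}
average = {"named storms": 11, "hurricanes": 6, "major hurricanes": 2}

for category, count in observed.items():
    may_lo, may_hi = outlook_may[category]
    aug_lo, aug_hi = outlook_aug[category]
    print(f"{category}: {count} observed; "
          f"May outlook {may_lo}-{may_hi} ({'hit' if may_lo <= count <= may_hi else 'miss'}); "
          f"Aug outlook {aug_lo}-{aug_hi} ({'hit' if aug_lo <= count <= aug_hi else 'miss'}); "
          f"average season {average[category]}")

# All three observed totals fall inside both outlook ranges and well above
# the long-term averages, consistent with the summary above.
```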

Large-scale climate features strongly influenced this year’s hurricane activity, as they often do. This year, record-warm Atlantic waters, favorable winds coming off Africa, and weak wind shear aided by La Nina combined to energize developing storms. The 2010 season continues the string of active hurricane seasons that began in 1995.

But short-term weather patterns dictate where storms actually travel and in many cases this season, that was away from the United States. The jet stream’s position contributed to warm and dry conditions in the eastern U.S. and acted as a barrier that kept many storms over open water. Also, because many storms formed in the extreme eastern Atlantic, they re-curved back out to sea without threatening land. “As NOAA forecasters predicted, the Atlantic hurricane season was one of the most active on record, though fortunately most storms avoided the U.S. For that reason, you could say the season was a gentle giant,” said Jack Hayes, Ph.D., director of NOAA’s National Weather Service.

Other parts of the Atlantic basin weren’t as fortunate. Hurricane Tomas brought heavy rain to earthquake-ravaged Haiti, and several storms, including Alex, battered eastern Mexico and Central America with heavy rain, mudslides and deadly flooding.

Image: Hurricane Alex, a rare June hurricane.

Though La Nina helped to enhance the Atlantic hurricane season, it also suppressed storms from forming and strengthening in the eastern North Pacific. Of that region’s seven named storms this year, three grew into hurricanes and two of those became major hurricanes. This is the fewest named storms (previous record low was eight in 1977) and the fewest hurricanes (previous record low was four in 1969, 1970, 1977 and 2007) on record since the satellite era began in the mid-1960s. An average eastern North Pacific season produces 15 named storms, nine hurricanes and four major hurricanes.
