Political Climate
Aug 05, 2012
Summary of Two Game-Changing Papers - Watts et al 2012 and McNider et al 2012

By Dr. Roger Pielke Sr., Climate Science

UPDATE #2: To make sure everyone clearly recognizes my involvement with both papers: I provided Anthony with suggested text and references for his article [I am not a co-author of the Watts et al paper], and I am a co-author on the McNider et al paper.

UPDATE: There has been discussion as to whether the Time of Observation Bias (TOB) could affect the conclusions reached in Watts et al (2012). This is a valid concern. Thus the "Game Changing" finding of whether the trends are actually different for well- and poorly-sited locations is tentative until it is shown whether or not TOB alters the conclusions. The issue, however, is not easy to resolve. In our paper

Pielke Sr., R.A., T. Stohlgren, L. Schell, W. Parton, N. Doesken, K. Redmond, J. Moeny, T. McKee, and T.G.F. Kittel, 2002: Problems in evaluating regional and local trends in temperature: An example from eastern Colorado, USA. Int. J. Climatol., 22, 421-434.

this is what we concluded [highlight added]

The time of observation biases clearly are a problem in using raw data from the US Cooperative stations. Six stations used in this study have had documented changes in times of observation. Some stations, like Holly, have had numerous changes. Some of the largest impacts on monthly and seasonal temperature time series anywhere in the country are found in the Central Great Plains as a result of relatively frequent dramatic interdiurnal temperature changes. Time of observation adjustments are therefore essential prior to comparing long-term trends.

We attempted to apply the time of observation adjustments using the paper by Karl et al. (1986). The actual implementation of this procedure is very difficult, so, after several discussions with NCDC personnel familiar with the procedure, we chose instead to use the USHCN database to extract the time of observation adjustments applied by NCDC. We explored the time of observation bias and the impact on our results by taking the USHCN adjusted temperature data for 3 month seasons, and subtracted the seasonal means computed from the station data adjusted for all except time of observation changes in order to determine the magnitude of that adjustment. An example is shown here for Holly, Colorado (Figure 1), which had more changes than any other site used in the study.

What you would expect to see is a series of step function changes associated with known dates of time of observation changes. However, what you actually see is a combination of step changes and other variability, the causes of which are not all obvious. It appeared to us that editing procedures and procedures for estimating values for missing months resulted in computed monthly temperatures in the USHCN differing from what a user would compute for that same station from averaging the raw data from the Summary of the Day Cooperative Data Set. This simply points out that when manipulating and attempting to homogenize large data sets, changes can be made in an effort to improve the quality of the data set that may or may not actually accomplish the initial goal.

Overall, the impact of applying the time of observation adjustment at Holly was to cool the data for the 1926-58 period with respect to earlier and later periods. The magnitude of this adjustment, 2°C, is obviously very large, but it is consistent with changing from predominantly late afternoon observation times early in the record to early morning observation times in recent years, in the part of the country where time of observation has the greatest effect. Time of observation adjustments were also applied at five other sites.

Until this issue is resolved, the Game Changer aspect of the Watts et al 2012 study is tentative. [Anthony reports he is actively working to resolve this issue.] The best way to address the TOB issue is to use data from sites in the Watts et al data set that have hourly resolution. For those years when the station is unchanging in location, compute the TOB directly. The Karl et al (1986) method of TOB adjustment, in my view, needs to be more clearly defined and further examined in order to better address this issue. I understand research is underway to examine the TOB issue in detail, and results will be reported by Anthony when ready.
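The suggested check (using hourly-resolution stations to quantify the TOB directly) can be sketched numerically. The following is a toy illustration with synthetic data and assumed numbers, not the Karl et al (1986) procedure: it compares the mean daily minimum and maximum that a morning observer and a late-afternoon observer would record from the same hourly temperature series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly temperatures for one year: a diurnal cycle peaking
# mid-afternoon plus random day-to-day weather swings. All numbers are
# assumed for illustration; this is not real station data.
n_days = 365
hours = np.arange(n_days * 24)
diurnal = 8.0 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)  # ~16 C daily range
weather = np.repeat(rng.normal(0.0, 4.0, n_days), 24)        # day-to-day variability
temps = 15.0 + diurnal + weather

def mean_min_max(series, obs_hour):
    """Mean daily min/max when the observer resets the thermometer at obs_hour:
    each 'climatological day' is the 24 hours ending at the observation time."""
    usable = (len(series) - obs_hour) // 24 * 24
    days = series[obs_hour:obs_hour + usable].reshape(-1, 24)
    return days.min(axis=1).mean(), days.max(axis=1).mean()

min_am, max_am = mean_min_max(temps, 7)   # morning observer
min_pm, max_pm = mean_min_max(temps, 17)  # late-afternoon observer

# The two observers sample the same weather yet report different means;
# the difference is the time-of-observation bias for this synthetic record
# (the afternoon observer carries warm afternoons into the next day's maximum).
tob_mean = ((min_pm + max_pm) - (min_am + max_am)) / 2.0
print(f"TOB in mean temperature: {tob_mean:+.2f} C")
```

In this sketch the afternoon observer's windows start just past the diurnal peak, so a hot day inflates the following day's recorded maximum whenever the next day is cooler, which is the mechanism behind the warm TOB for afternoon observation times.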
--------

There are two recent papers that raise serious questions about the accuracy of the quantitative diagnosis of global warming by NCDC, GISS, CRU and BEST based on land surface temperature anomalies. These papers are the culmination of two areas of uncertainty identified in the paper

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112

The Summary

One paper [Watts et al 2012] shows that siting quality does matter. A warm bias results in the continental USA when poorly sited locations are used to construct a gridded analysis of land surface temperature anomalies.

The other paper [McNider et al 2012] shows that not only does the height at which minimum temperature observations are made matter, but even slight changes in vertical mixing (such as from adding a small shed near the observation site, even in an otherwise pristine location) can increase the measured temperature at the height of the observation. This can occur when there is little or no layer averaged warming.

The Two Papers

Watts et al, 2012: An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends [to be submitted to JGR]

McNider, R. T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J. T. Walters, U. S. Nair, and J. R. Christy (2012). Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing, J. Geophys. Res. in press. [for the complete paper, click here]

To Provide Context

First, however, to make sure that my perspective on climate is properly understood:

i) There has been global warming over the last several decades. The ocean is the component of the climate system that is best suited for quantifying climate system heat change [Pielke, 2003]; e.g. see the figure below from NOAA's Upper Ocean Heat Content Anomaly for their estimate of the magnitude of warming since 1993.

ii) The human addition to CO2 into the atmosphere is a first-order climate forcing; e.g. see Pielke et al (2009) and the NOAA plot below

However, the Watts et al 2012 and McNider et al 2012 papers refute a major assumption in the CCSP 1.1 report

Temperature Trends in the Lower Atmosphere - Understanding and Reconciling Differences

that variations in surface temperature anomalies are random and thus can be averaged to create area means that are robust measures of the average surface temperature in that region (and, when summed globally, provide an accurate global land average surface temperature anomaly). The two papers show this assumption of randomness, with no systematic biases, to be incorrect.
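The distinction at issue (random errors that cancel in a large-sample average versus a shared systematic bias that does not) can be sketched with a toy simulation. All numbers below are assumed purely for illustration and are not drawn from any of the cited analyses:

```python
import numpy as np

rng = np.random.default_rng(1)

true_trend = 0.15          # deg C per decade: the signal every station shares
n_stations = 1000

# Random, zero-mean station errors: these shrink as stations are averaged
# (the standard error falls like 1/sqrt(N)).
random_err = rng.normal(0.0, 0.30, n_stations)

# A systematic warm bias at poorly sited stations (assumed here to be 70%
# of the network, each biased by +0.10 C/decade; illustrative numbers only).
poorly_sited = rng.random(n_stations) < 0.70
systematic = np.where(poorly_sited, 0.10, 0.0)

station_trends = true_trend + random_err + systematic

network_mean = station_trends.mean()
# The random-error term averages down to ~0.30/sqrt(1000) ≈ 0.01 C/decade,
# but the shared bias does not cancel: the network mean sits near
# 0.15 + 0.7 * 0.10 = 0.22 C/decade, not near the true 0.15.
print(f"network mean trend: {network_mean:.3f} C/decade (truth {true_trend})")
```

Averaging more stations makes the random component arbitrarily small, but the network mean converges to the truth plus the bias, which is the failure mode the two papers describe.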

In the chapter

Lanzante et al 2005: What do observations indicate about the changes of temperatures in the atmosphere and at the surface since the advent of measuring temperatures vertically?

they write that [highlight added]

“Currently, there are three main groups creating global analyses of surface temperature (see Table 3.1), differing in the choice of available data that are utilized as well as the manner in which these data are synthesized.

Comment: Now there is the addition of Richard Muller’s BEST analysis.

Since the network of surface stations changes over time, it is necessary to assess how well the available observations monitor global or regional temperature. There are three ways in which to make such assessments (Jones, 1995). The first is using “frozen grids” where analysis using only those grid boxes with data present in the sparsest years is used to compare to the full data set results from other years (e.g., Parker et al., 1994). The results generally indicate very small errors on multi-annual timescales (Jones, 1995).”

My Comment: The “frozen grids” combine data from poorly and well-sited locations, and from different heights. A warm bias results. This is a similar type of analysis to that used in BEST.

The second technique is sub-sampling a spatially complete field, such as model output, only where in situ observations are available. Again the errors are small (e.g., the standard errors are less than 0.06C for the observing period 1880 to 1990; Peterson et al., 1998b).

My Comment:  Again, there is the assumption that no systematic biases exist in the observations. Poorly sited locations are blended with well-sited locations which, based on Watts et al (2012), artificially elevates the sub-sampled trends.

The third technique is comparing optimum averaging, which fills in the spatial field using covariance matrices, eigenfunctions or structure functions, with other analyses. Again, very small differences are found (Smith et al., 2005). The fidelity of the surface temperature record is further supported by work such as Peterson et al. (1999) which found that a rural subset of global land stations had almost the same global trend as the full network and Parker (2004) that found no signs of urban warming over the period covered by this report.

My Comment:  Here is where the assumption that the set of temperature anomalies are random is presented. Watts et al (2012) provide observational evidence, and McNider et al (2012) present theoretical reasons, why this is an incorrect assumption.

Since the three chosen data sets utilize many of the same raw observations, there is a degree of interdependence. Nevertheless, there are some differences among them as to which observing sites are utilized. An important advantage of surface data is the fact that at any given time there are thousands of thermometers in use that contribute to a global or other large-scale average. Besides the tendency to cancel random errors, the large number of stations also greatly facilitates temporal homogenization since a given station may have several “near-neighbors” for “buddy-checks.” While there are fundamental differences in the methodology used to create the surface data sets, the differing techniques with the same data produce almost the same results (Vose et al., 2005a).

My Comment: Their statement that there is “the tendency to cancel random errors” is shown in the Watts et al 2012 and McNider et al 2012 papers to be incorrect. This means their claim that “the large number of stations also greatly facilitates temporal homogenization since a given station may have several ‘near-neighbors’ for ‘buddy-checks’” rests on erroneously averaging together sites with a warm bias.

Bottom Line Conclusion: The Watts et al 2012 and McNider et al 2012 papers have presented the climate community with evidence of major systematic warm biases in the analysis of multi-decadal land surface temperature anomalies by NCDC, GISS, CRU and BEST. The two papers also help explain the discrepancy between the multi-decadal temperature trends at the surface and in the lower troposphere that was documented in

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to “An alternative explanation for differential temperature trends at the surface and in the lower troposphere, J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.

I look forward to discussing the conclusions of these two studies in the coming weeks and months.



Aug 04, 2012
Spanish Renewable Lesson for Obama

By Andres Cala, Energy Tribune

Spain is planning to correct its renewable energy experiment gone wrong by spreading the pain, a powerful lesson for a White House with an incoherent energy policy that has often cited the Spanish model as one to emulate.

This week Obama’s campaign bashed challenger Mitt Romney for planning to end tax incentives for wind power if elected. “By opposing an extension to the wind production tax credit, Mitt Romney has come out against growth of the wind industry to support 100,000 jobs by 2016 and 500,000 jobs by 2030.”

Obama’s expectations, though, are based on European policy support models that are being revised and corrected. Ahead of the November elections, both candidates must realize that America’s energy policy more than ever demands coherence based on the country’s best interest, not ideological imperatives.

Putting renewables on steroids can damage a country’s power sector, consumers, and the renewable industry itself, and in Spain’s case, even a national economy.

Public support for renewable power in America should thus be reconfigured to achieve a realistic economic or geopolitical net gain, not to win elections.

During the first two years of his administration, President Barack Obama and top officials praised Spain as a successful model to create employment and improve energy security. So did everyone else, for that matter, but it’s time to heed the lessons.

For over a decade Spain has accumulated nearly 25 billion euros in debt (equivalent to more than half of the urgent capitalization needs of its distressed financial system), mostly in the form of subsidies for wind and solar energy.

Basically, the country did not pass along to consumers the cost of generating around 30 percent of its electricity through renewable sources, and faced with the prospect of a macroeconomic sovereign collapse it has decided to hike taxes on power utilities, to increase consumer prices, and to cut some of the generous subsidies that the renewable industry has enjoyed.

The conservative government’s proposed solution has predictably enraged all sides, although the final reforms will not be decreed until later this month.

All sides have legitimate grievances. After all, hiking consumer electricity prices during a recession kicks a struggling economy while it is down; renewable players say the backpedaling will all but kill their industry, already hit by an earlier moratorium imposed on new renewable projects; and utilities say more taxes will only mean more layoffs and less investment.

Furthermore, Spain’s generous subsidies have already attracted more than twice as much installed capacity as its peak demand of 40 GW, and much cheaper fossil fuel and nuclear generators are being left idle to pay for renewable output.

In this context, the country has no choice but to pull the plug on its renewable experiment. More than a decade of robust Spanish growth ended in 2008 as a construction boom went bust, leaving millions without a job, and as the global economic crisis further undermined the economy.

Gross national product in 2012 and 2013 is expected to contract further, and unemployment, already the highest of any rich nation at 25 percent, is expected to keep growing and to become increasingly structural, according to the OECD.

The IMF estimates Spain needs around 45 billion euros to recapitalize its ailing banks, and Europe has already pledged as much as 100 billion euros. But markets are nervous. Spain will eventually have to seek a sovereign bailout like those of Greece, Ireland, and Portugal.

Meanwhile, the difference between the cost of generation and what consumers pay is adding between 7 and 10 billion euros annually in debt, depending on the year, according to the Energy Ministry; 60 percent of that comes from subsidizing renewable power.

The subsidy system itself is also dysfunctional. Solar companies get as much as half of the subsidies despite contributing less than 5 percent of total power generation in 2011, while wind power gets around a quarter of the subsidies despite contributing three times more power.
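The lopsidedness of those shares can be made concrete with back-of-the-envelope arithmetic. The inputs below are rough assumptions (the midpoint of the 7-10 billion euro annual gap, 60 percent attributed to renewables, and roughly 280 TWh of Spanish generation in 2011), used only to illustrate the implied subsidy per unit of energy:

```python
# Rough subsidy intensity implied by the figures above. All inputs are
# illustrative assumptions, not official statistics.
renewable_subsidy = 8.5e9 * 0.60          # midpoint of 7-10 bn EUR/yr, renewables' 60%
solar_subsidy = renewable_subsidy * 0.50  # "as much as half" of the subsidies
wind_subsidy = renewable_subsidy * 0.25   # "around a quarter" of the subsidies

solar_share, wind_share = 0.05, 0.15      # generation shares (wind ~3x solar)
total_generation_twh = 280                # approximate Spanish generation, 2011

solar_mwh = total_generation_twh * solar_share * 1e6  # TWh -> MWh
wind_mwh = total_generation_twh * wind_share * 1e6

solar_intensity = solar_subsidy / solar_mwh
wind_intensity = wind_subsidy / wind_mwh
print(f"solar: ~{solar_intensity:.0f} EUR/MWh, wind: ~{wind_intensity:.0f} EUR/MWh")
```

Under these assumptions, each solar megawatt-hour draws several times the subsidy of a wind megawatt-hour, which is the dysfunction the paragraph describes.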

The government thus plans to raise taxes on power generation across the board by between 3 and 20 percent, depending on the source. Fossil fuels, nuclear and hydroelectric would be taxed the least, while renewables would be taxed more. Companies have said consumer prices will inevitably increase.

Utilities, which truth be told are among the biggest investors in renewable power and are thus complicit in the failed experiment, have said tax increases don’t address the problem per se, and they fear the government is simply using them to raise revenue. And renewable energy investors, from international funds to small families, have also blasted the planned reforms, which they describe as suicidal.

Back to the drawing board

Spain is the worst example, but not the only one.

A recent International Energy Agency outlook for renewable power this decade suggests how Spain’s model embodies the “wrongs” of unconditionally supporting the industry.

Now Spain’s renewable revision is going to eliminate thousands of jobs and billions in investment and, more critically, become another agonizing drag on the economy.

Many countries overdid it, plain and simple. Renewable industries in the OECD have reached maturity and become an economic drain, which is why countries are quietly backtracking, as the data show.

“First, general macroeconomic and credit concerns are increasing capital costs, reducing risk appetites, and prompting investor preferences for higher returns and shorter payback periods, which tend to work against renewable technologies. Second, short-term policy uncertainty in some markets is undermining renewable project economics due to potential changes in financial support,” the IEA said.

The IEA’s report also shows how several technologies are competitive in some markets, able to compete with fossil fuel options. It got there thanks to generous subsidies in countries like Spain, Germany, and Italy, but also the United States, China, and Japan.

Indeed, renewable power is a viable economic option under certain circumstances. And in the US, there are some regions that can make a case for long term economic sustainability, even amid a natural gas glut.

But so far this administration has taken an ideological, not economic, approach to energy policy. And America can’t afford to gamble its energy future. For that matter, the renewable industry can’t afford it either.



Jul 31, 2012
Why the BEST papers failed to pass peer review

By Anthony Watts, WUWT

Whoa, this is heavy. Ross McKitrick, who was a peer-review referee for the BEST papers at the Journal of Geophysical Research, got fed up with Muller’s media blitzing and tells his story:

excerpts:

In October 2011, despite the papers not being accepted, Richard Muller launched a major international publicity blitz announcing the results of the “BEST” project. I wrote to him and his coauthor Judy Curry objecting to the promotional initiative since the critical comments of people like me were locked up under confidentiality rules, and the papers had not been accepted for publication. Richard stated that he felt there was no alternative since the studies would be picked up by the press anyway. Later, when the journal turned the paper down and asked for major revisions, I sought permission from Richard to release my review. He requested that I post it without indicating I was a reviewer for JGR. Since that was not feasible I simply kept it confidential.

On July 29 2012 Richard Muller launched another publicity blitz (e.g. here and here) claiming, among other things, that “In our papers we demonstrate that none of these potentially troublesome effects [including those related to urbanization and land surface changes] unduly biased our conclusions.” Their failure to provide a proper demonstration of this point had led me to recommend against publishing their paper. This places me in an awkward position since I made an undertaking to JGR to respect the confidentiality of the peer review process, but I have reason to believe Muller et al.’s analysis does not support the conclusions he is now asserting in the press.

I take the journal peer review process seriously and I dislike being placed in the position of having to break a commitment I made to JGR, but the “BEST” team’s decision to launch another publicity blitz effectively nullifies any right they might have had to confidentiality in this matter. So I am herewith releasing my referee reports.

Read it all here.

Some backstory via Andrew Revkin from Elizabeth Muller. Revkin asked:

1) What’s the status of the four papers that were submitted last fall (accepted, in review...etc?)

2) There can be perils when publicity precedes peer review. Are you all confident that the time was right to post the papers, including the new one, ahead of review? Presumably this has to do with Tuesday deadline for IPCC eligibility?

Here’s her reply:

All of the articles have been submitted to journals, and we have received substantial journal peer reviews. None of the reviews have indicated any mistakes in the papers; they have instead been primarily suggestions for additions, further citations of the literature. One review had no complaints about the content of the paper, but suggested delaying the publication until the long background paper, describing our methods in detail, was actually published.

In addition to this journal peer review, we have had extensive comments from other scientists based on the more traditional method of peer review: circulation of preprints to other scientists. It is worthwhile remembering that the tradition in science, going back to before World War II, has been to circulate “preprints” of articles that had not yet been accepted by a journal for publication. This was truly “peer” review, and it was very helpful in uncovering errors and assumptions. We have engaged extensively in such peer review. Of course, rather than sending the preprints to all the major science libraries (as was done in the past), we now post them online. Others make use of arXiv. This has proven so effective that in some fields (e.g. string theory) the journal review process is avoided altogether, and papers are not submitted to journals. We are not going to that extreme, but rather are taking advantage of the traditional method.

We note that others in the climate community have used this traditional approach with great effectiveness. Jim Hansen, for example, frequently puts his papers online even before they are submitted to journals. Jim has found this method to be very useful and effective, as have we. As Jim is one of the most prominent members of the climate community, and has been doing this for so long, we are surprised that some journalists and scientists think we are departing from the current tradition.

The journal publication process takes time. This fact is especially true when new methods of analysis are introduced. We will be posting revised versions of 3 of the 4 papers previously posted later today (the 4th paper has not changed significantly). The core content of the papers is still the same, though the organization and detail has changed a fair amount.

The new paper, which we informally call the “Results” paper, has also undergone journal peer review (and none of the review required changing our results). We are posting it online today as a preprint, because we also want to invite comments and suggestions from the larger scientific community.

I believe the findings in our papers are too important to wait for the year or longer that it could take to complete the journal review process. We believe in traditional peer review; we welcome feedback from the public and any scientists who are interested in taking the time to make thoughtful comments. Indeed, with the first 4 papers submitted, many of the best comments came from the broader scientific community. Our papers have received scrutiny by dozens of top scientists, not just the two or three that typically are called upon by journalists.


