United Nations Agenda 21 to Rule the Oceans

By Dennis Amble
April 20, 2011

Whilst everyone has been occupied with EPA Administrator Lisa Jackson’s defense before Congress of the EPA’s attempts to regulate CO2 emissions, the Administration has continued to move towards International Ocean Governance with the establishment of a Governance Coordinating Committee for the National Ocean Council (NOC). The NOC has been long in the making, and the earlier history of ocean legislation can be found here, going back to the 1969 Stratton Commission and beyond. However, the current impetus dates to the Pew Oceans Commission in 2003 and the U.S. Commission on Ocean Policy report, An Ocean Blueprint for the 21st Century, in 2004, mandated by the Oceans Act of 2000.

The recommendations of the Pew Oceans Commission and the US Commission on Ocean Policy were very similar, even down to the coastal maps used to preface the reports. Any pretence that the two efforts were separate was abandoned in 2005 with the formation of the Joint Ocean Commission Initiative, co-chaired by the chairs of the Pew Oceans Commission and the US Commission on Ocean Policy.
In 2007 came Oceans-21, the short name given to H.R. 21, the Oceans Conservation, Education, and National Strategy for the 21st Century Act. It was designed to implement the policies favoured by the Joint Ocean Commission Initiative, but it never became law.

On June 12, 2009, the White House published a Presidential Memorandum to Heads of Executive Departments and Agencies relating to a “National Policy for The Ocean, Our Coasts and The Great Lakes”. It established an Interagency Ocean Policy Task Force (Task Force), to be led by the Chair of the Council on Environmental Quality, “in order to better meet our Nation’s stewardship responsibilities for the oceans, coasts, and Great Lakes”.

The influence of the climate agenda was clear:

“Challenges include water pollution and degraded coastal water quality caused by industrial and commercial activities both onshore and offshore, habitat loss, fishing impacts, invasive species, disease, rising sea levels, and ocean acidification. Oceans both influence and are affected by climate change. They not only affect climate processes but they are also under stress from the impacts of climate change.”

The Pew and US Commission policies surfaced again in the report of the Task Force, the Final Recommendations of the Interagency Ocean Policy Task Force (OPTF), July 19, 2010.

Freedom Advocates claimed that “thirty states would be encroached upon by Obama’s Executive Order establishing the National Ocean Council for control over America’s oceans, coastlines and the Great Lakes.”

THE TASK FORCE

The members of the Task Force included, amongst other government agency representatives:

Nancy Sutley, Task Force Chair. She is also chair of the Council on Environmental Quality and is principal environmental policy adviser to the President. Ms. Sutley was a special assistant to Carol Browner, administrator of the Environmental Protection Agency under President Bill Clinton. Sutley is co-chair of the National Ocean Council with John Holdren.

Jane Lubchenco, Under Secretary of Commerce for Oceans and Atmosphere and NOAA Administrator. She was a member of the Pew Oceans Commission in 2003 and is still, as a government employee, listed as a current member of that organisation and a member of the Joint Ocean Commission Initiative. She is also a member of the National Ocean Council as NOAA Administrator.

Peter Silva, EPA Assistant Administrator for Water. Silva resigned from his post on January 14, 2011, a day after he decided to revoke the permit for a mountaintop mining proposal in Appalachia. Nancy Stoner, deputy assistant administrator, is now Acting Assistant Administrator. She was with the Natural Resources Defense Council before joining the EPA.

Lubchenco served, until her NOAA appointment, on the boards of the World Resources Institute, Environmental Defense, and on advisory committees for the National Research Council, the National Science Foundation and the United Nations Environment Programme.

She was a contributor to the 1991 report of the National Research Council, Policy Implications of Greenhouse Warming, along with Stephen Schneider, Maurice Strong, Tom Karl, William Nordhaus and others. She is shown as an Advisory Board Member of Diversitas, a UN-linked, government-funded international biodiversity institute, along with Paul Ehrlich and Harold Mooney of Stanford.

Ms. Lubchenco is a member of the National Research Council panel, America’s Climate Choices, along with long-time associate John Holdren, Director of President Obama’s Office of Science and Technology Policy. They are both on the National Ocean Council. In an interview in July 2009 with Yale Environment 360, she referred to ocean acidification as global warming’s “equally evil twin.” The interview was hubristically titled “Restoring Science to US Climate Policy”.
Her policies on fishing have been heavily attacked by the industry, and in July 2010 the Gloucester Times reported that Massachusetts congressmen Barney Frank and John Tierney had called for her to resign or be fired over what they described as her “hostility” and lack of accountability toward the American fishing industry.

In its introduction, the presidential task force report invoked the Deepwater Horizon-BP oil spill in the Gulf of Mexico, as one justification for full federal control of the oceans around the US coasts. It also stated that, “it is the Policy of the United States to use the best available science and knowledge to inform decisions affecting the ocean, our coasts, and the Great Lakes, and enhance humanity’s capacity to understand, respond, and adapt to a changing global environment.” Yet their “best available science” appears to be the contested science from the Intergovernmental Panel on Climate Change, the IPCC, as shown in these familiar claims.

Climate change is impacting the ocean, our coasts, and the Great Lakes. Increasing water temperatures are altering habitats, migratory patterns, and ecosystem structure and function.

Coastal communities are facing sea-level rise, inundation, increased threats from storms, erosion, and significant loss of coastal wetlands.

The ocean’s ability to absorb carbon dioxide from the atmosphere buffers the impacts of climate change, but also causes the ocean to become more acidic, threatening not only the survival of individual species of marine life, but also entire marine ecosystems.

The ocean buffers increased global temperatures by absorbing heat, but increasing temperatures are causing sea levels to rise by expanding seawater volume and melting land-based ice. Increased temperatures may eventually reduce the ocean’s ability to absorb carbon dioxide.

Their “best available science” includes Jane Lubchenco’s debasement of science in this propaganda video on the NOAA website, purporting to show ocean acidification. Of course the objective is to provide another scary reason for taxing energy. On sea level, NOAA’s own tide gauge data show an average sea level rise of less than 2 inches per century, in line with this assessment by S. J. Holgate, Proudman Oceanographic Laboratory, Liverpool, UK.

Read Full Report…

Fallacies about Global Warming

By John McLean

It is widely alleged that the science of global warming is “settled”. This implies that all the major scientific aspects of climate change are well understood and uncontroversial, and that scientists are now just mopping up unimportant details. The allegation is profoundly untrue: for example, the US alone is said to be spending more than $4 billion annually on climate research, which is a great deal to pay for mopping up mere details; and great uncertainty and argument surround many of the principles of climate change, especially the magnitude of any human causation of warming. Worse still, not only is the science not “settled”, but its discussion in the public domain is contaminated by many fallacies, which leads directly to the great public confusion that is observed. This paper explains the eight most common fallacies that underpin public discussion of the hypothesis that dangerous global warming is caused by human greenhouse gas emissions.

1 – Scientists have accurate historical temperature data

Historical temperature records taken near the surface of the Earth are subject to various biases and recording errors that render them incorrect. In the early days thermometers could only show the temperature at the moment of reading and so the data recorded from that time was for just one reading each day. Later the thermometers were able to record the minimum and maximum temperatures, and so the daily readings were those extremes in the 24 hour period. Only in the last 20 or 30 years have instruments been available that record the temperature at regular intervals throughout the 24 hours, thus allowing a true time-based daily average to be calculated.

The so-called “average” temperatures both published and frequently plotted through time are initially based on only a single daily value, then later on the mathematical average of the minimum and maximum temperatures. Although time-based averages are now available for some regions they are not generally used because the better instrumentation is not uniformly installed throughout the world and the historical data is at best a mathematical average of two values. The problem is that these averages are easily distorted by brief periods of high or low temperatures relative to the rest of the day, such as a brief period with less cloud cover or a short period of cold wind or rain.
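The difference between the two kinds of average is easy to see numerically. Below is a minimal sketch in Python; the hourly series is invented purely for illustration and is not station data.

```python
# Minimal sketch: how a (min + max) / 2 "daily average" can differ from a
# true time-based mean. The hourly series below is illustrative, not real data.
import numpy as np

hours = np.arange(24)
# A day that is mostly cool, with a brief early-afternoon spike (e.g. a short
# break in cloud cover).
temps = 12 + 2 * np.sin((hours - 9) * np.pi / 12)   # smooth daily cycle, °C
temps[13:15] += 6                                    # two-hour warm spike

true_mean = temps.mean()                       # time-based average of all 24 readings
minmax_mean = (temps.min() + temps.max()) / 2  # traditional min/max average

print(f"true time-based mean: {true_mean:.2f} °C")
print(f"(min + max) / 2:      {minmax_mean:.2f} °C")
# The brief spike raises the maximum, so the min/max average overstates the
# day's mean even though only two hours were unusually warm.
```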

Another serious problem is that thermometers are often located where human activity can directly influence the local temperature.[1] This is not only the urban heat island (UHI) effect, where heat generated by traffic, industry and private homes and then trapped by the man-made physical environment causes elevated temperatures. There is also a land use effect, where human activity has modified the microclimate of the local environment through buildings or changes such as land clearing or agriculture. Only recently have the climatic impacts of these human changes started to receive detailed scrutiny, but many older meteorological records are inescapably contaminated by them.

The integrity of some important historical data is also undermined by reports that various Chinese weather stations that were claimed to be in unchanged locations from 1954 to 1983 had in fact moved, with one station moving 5 times and up to 41 kilometres[2]. The extent of this problem on a global scale is unknown but worrying, because shifts of less than 500 metres are known to cause a significant change in recordings.

The observed minimum and maximum temperatures that are recorded, albeit with the inclusion of possible local human influences, are sent to one or more of the three agencies that calculate the “average global temperature” (NASA, NOAA, UK Hadley Centre). These agencies produce corrected data, and graphs that depict a significant increase in average global temperature over the last 30 years. However, this apparent rise may at least partly result from the various distortions of surface temperature measurements described above. No-one has independently verified the temperature records, not least because full disclosure of methods and data is not made and the responsible agencies appear very reluctant to allow such auditing to occur.

In reality, there is no guarantee, and perhaps not even a strong likelihood, that the thermometer-based temperature measurements truly reflect the average local temperatures free from any distortions. There is also no proof that the calculations of average global temperatures are consistent and accurate. For example, it is known that at least two of the three leading climate agencies use very different data handling methods and it follows that at least one of them is likely to be incorrect.

It is stating the obvious to say that if we don’t know what the global average temperature has been and currently is, then it is difficult to argue that the world is warming at all, let alone to understand to what degree any alleged change has a human cause.

2 – Temperature trends are meaningful and can be extrapolated

That temperature trends plotted over decades are meaningful, and understood to the degree that they can be projected, is one of the greatest fallacies in the claims about man-made global warming.

Any trend depends heavily upon the choice of start and end points. A judicious selection of such points can create a wide variety of trends. For example, according to the annual average temperatures from Britain’s CRU:

trend for 1900-2006 = 0.72 °C/century

trend for 1945-2006 = 1.05 °C/century

trend for 1975-2006 = 1.87 °C/century.

None of these trends is any more correct than the others.
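How strongly the computed trend depends on the chosen start year can be reproduced in a few lines. The sketch below uses a synthetic annual series, not the CRU record quoted above, so the numbers are illustrative only.

```python
# Minimal sketch of how the choice of start year changes a linear trend.
# The series below is synthetic (a gentle rise plus a slow cycle plus noise),
# standing in for an annual temperature-anomaly record; it is not CRU data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2007)
anoms = 0.005 * (years - 1900) + 0.2 * np.sin((years - 1900) / 8.0) \
        + rng.normal(0, 0.1, years.size)

def trend_per_century(start, end):
    """Ordinary least-squares slope over [start, end], in °C per century."""
    mask = (years >= start) & (years <= end)
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]
    return 100 * slope

for start in (1900, 1945, 1975):
    print(f"trend {start}-2006: {trend_per_century(start, 2006):+.2f} °C/century")
# Each window yields a different slope from the very same series, which is the
# point made above: none of the trends is more "correct" than the others.
```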

Despite the common use of temperature trends in scientific and public discussion, they cannot be used to illustrate possible human greenhouse influences on temperature unless episodic natural events, such as the powerful El Nino of 1998, are taken into account and corrected for.

Trends cannot be extrapolated meaningfully unless scientists:

(a) Thoroughly understand all relevant climate factors;

(b) Are confident that the trends in each individual factor will continue; and

(c) Are confident that interactions between factors will not cause a disruption to the overall trend.

The IPCC’s Third Assessment Report of 2001 listed 11 possible climate factors and indicated that the level of scientific understanding was “very low” for 7 of them and “low” for another. No similar listing appears in the recent Fourth Assessment Report, but it does contain a list of factors relevant to the absorption and emission of radiation that shows that the level of scientific knowledge of several of those factors is still quite low.

Scientists are still struggling even to understand the influence of clouds on temperature. Observational data shows that low-level cloud outside the tropics has decreased since 1998, but scientists cannot be certain that the decreasing trend will continue, nor what such a decrease would mean. Perhaps clouds act as a natural thermostat and higher temperatures will ultimately create more clouds and this will have a cooling effect.[3]

Again, if random natural events dictate the historical trend, then extrapolation of the trend makes no sense. Even if those natural events can be expected to continue in the future, their severity – which often dictates the short-term trend – is unknowable.

3 – The accuracy of climate models can be determined from their output

A common practice among climate scientists is to compare the output of their climate models to historical data from meteorological observations. (In fact the models are usually “adjusted” to match that historical data as closely as possible, but let’s ignore that for now.)

The accuracy of a model is determined by the accuracy with which it simulates each climatic factor and climatic process rather than the closeness of the match between its output and the historical data. If the internal processing is correct then so too will be the output, but apparently accurate output does not confer accuracy on the internal processes.

Two issues to watch are:

(a) The combination of a number of inaccuracies can produce acceptable output if calculations that are “too high” counterbalance those that are “too low” (a simple numerical illustration follows this list);

(b) If the internal processes are largely based on data that changes almost immediately as a consequence of a change in temperature, then the output of the model will probably appear accurate when compared to historical data, but it will be of no benefit for predicting future changes.
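A minimal numerical sketch of point (a), using invented numbers purely for illustration:

```python
# Minimal sketch of point (a): two compensating errors can give output that
# matches observations even though each internal term is wrong. The numbers
# are invented for illustration only.
# Suppose the "true" energy balance of a toy system is:
true_warming_term = 2.0    # e.g. greenhouse contribution (arbitrary units)
true_cooling_term = -1.2   # e.g. cloud/aerosol contribution
observed_net = true_warming_term + true_cooling_term           # 0.8

# A toy "model" that overstates the warming term and overstates the cooling
# term by similar amounts still reproduces the observed net almost exactly:
model_warming_term = 2.9   # too high
model_cooling_term = -2.1  # too strongly negative
model_net = model_warming_term + model_cooling_term            # 0.8

print(f"observed net: {observed_net:.1f}, model net: {model_net:.1f}")
# The match in the net output says nothing about whether the individual
# processes were simulated correctly.
```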

4 – The consensus among scientists is decisive (or even important)

The extent of a claimed consensus that dangerous human-caused global warming is occurring is unknown and the claim of consensus is unsupported by any objective data[4]. However, this is irrelevant because by its nature any consensus is a product of opinions, not facts.[5]

Though consensus determines legal and political decisions in most countries, this simply reflects the number of persons who interpret data in a certain way or who have been influenced by the opinions of others. Consensus does not confer accuracy or “rightness”.

Scientific matters are certainly not settled by consensus. Einstein pointed out that hundreds of people agreeing with him were of no relevance, because it would take just one person to prove him wrong.

Science as a whole, and its near neighbour medicine, are replete with examples of individuals or small groups of researchers successfully undermining the prevailing popular theories of the day. This is not to say that individuals or small groups who hold maverick views are always correct, but it is to say that even the most widely held opinions should never be regarded as an ultimate truth.

Science is about observation, experiment and the testing of hypotheses, not consensus.

5 – The dominance of scientific papers on a certain subject establishes a truth

This fallacy is closely related to the previous discussion of consensus, but here the impact is an indirect consequence of a dominant opinion.

Funding for scientific research has moved towards being determined by consensus, because where public monies are concerned the issue ultimately comes back to an opinion as to whether the research is likely to be fruitful. Prior to the last 20 or 30 years, research was driven principally by scientific curiosity. That science research funding has now become results-oriented has had a dramatic, negative impact on the usefulness of many scientific results. For, ironically, pursuing science that is thought by politicians to be “important” or “in the public interest” often results in science accomplishments that are conformist and fashionable rather than independent and truly useful.

Targeting of “useful” research strongly constricts the range of scientific papers that are produced. A general perception may arise that few scientists disagree with the dominant opinion, whereas the reality may be that papers that reject the popular opinion are difficult to find simply because of the weight of funding, and hence the research effort, that is tailored towards the conventional wisdom.

Science generally progresses by advancing on the work that has gone before, and the usual practice is to cite several existing papers to establish the basis for one’s work. Again the dominance of papers that adhere to a conventional wisdom can put major obstacles in the way of the emergence of any counter-paradigm.

6 – Peer-reviewed papers are true and accurate

The peer-review process was established for the benefit of editors who did not have good knowledge across all the fields that their journals addressed. It provided a “sanity check” to avoid the risk of publishing papers which were so outlandish that the journal would be ridiculed and lose its reputation.

In principle this notion seems entirely reasonable, but it neglects certain aspects of human nature, especially the tendency for reviewers to defend their own (earlier) papers, and indirectly their reputations, against challengers. Peer review also ignores the strong tendency for papers that disagree with a popular hypothesis, one the reviewer understands and perhaps supports, to receive a closer and often hostile scrutiny.

Reviewers are selected from practitioners in the field, but many scientific fields are so small that the reviewers will know the authors. The reviewers may even have worked with the authors in the past or wish to work with them in future, so the objectivity of any review is likely to be tainted by this association.

Some journals now request that authors suggest appropriate reviewers but this is a sure way to identify reviewers who will be favourable to certain propositions.

It also follows that if the editor of a journal wishes to reject a paper, then it will be sent to a reviewer who is likely to reject it, whereas a paper that the editor favours to be published will be sent to a reviewer who is expected to be sympathetic. In 2002 the editor-in-chief of the journal “Science” announced that there was no longer any doubt that human activity was changing climate, so what are the realistic chances of this journal publishing a paper that suggests otherwise?

The popular notion is that reviewers should be skilled in the relevant field, but a scientific field like climate change is so broad, and encompasses so many sub disciplines, that it really requires the use of expert reviewers from many different fields. That this is seldom undertaken explains why so many initially influential climate papers have later been found to be fundamentally flawed.

In theory, reviewers should be able to understand and replicate the processing used by the author(s). In practice, climate science has numerous examples where authors of highly influential papers have refused to reveal their complete set of data or the processing methods that they used. Even worse, the journals in question not only allowed this to happen, but have subsequently defended the lack of disclosure when other researchers attempted to replicate the work.

7 – The IPCC is a reliable authority and its reports are both correct and widely endorsed by all scientists

The Intergovernmental Panel on Climate Change (IPCC) undertakes no research for itself and relies on peer-reviewed scientific papers in reputable journals (see item 6). There is strong evidence that the IPCC is very selective of the papers it wishes to cite and pays scant regard to papers that do not adhere to the notion that man-made emissions of carbon dioxide have caused warming.

Four more issues noted above are also very relevant to the IPCC procedures. The IPCC reports are based on historical temperature data and trends (see 1 & 2), and the attribution of warming to human activities relies very heavily on climate modelling (see item 3). The IPCC pronouncements have a powerful influence on the direction and funding of scientific research into climate change, which in turn influences the number of research papers on these topics. Ultimately, and in entirely circular fashion, this leads the IPCC to report that large numbers of papers support a certain hypothesis (see item 5).

These fallacies alone are major defects of the IPCC reports, but the problems do not end there. Other distortions and fallacies of the IPCC are of its own doing.

Governments appoint experts to work with the IPCC but once appointed those experts can directly invite other experts to join them. This practice obviously can, and does, lead to a situation where the IPCC is heavily biased towards the philosophies and ideologies of certain governments or science groups.

The lead authors of the chapters of the IPCC reports can themselves be researchers whose work is cited in those chapters. This was the case with the so-called “hockey stick” temperature graph in the Third Assessment Report (TAR) published in 2001. The paper in which the graph first appeared was not subject to proper and independent peer review, despite which the graph was prominently featured in a chapter for which the co-creator of the graph was a lead author. The graph was debunked in 2006[6] and has been omitted without explanation from the Fourth Assessment Report (4AR) of 2007.

The IPCC has often said words to the effect “We don’t know what else can be causing warming so it must be humans” (or “the climate models will only produce the correct result if we include man-made influences”), but at the same time the IPCC says that scientists have a low level of understanding of many climate factors. It logically follows that if any natural climate factors are poorly understood then they cannot be properly modelled, the output of the models will probably be incorrect and that natural forces cannot easily be dismissed as possible causes. In these circumstances it is simply dishonest to unequivocally blame late 20th century warming on human activity.[7]

The IPCC implies that its reports are thoroughly reviewed by thousands of experts. Any impression that thousands of scientists review every word of the reports can be shown to be untrue by an examination of the review comments for the report by IPCC Working Group I. (This report is crucial, because it discusses historical observations, attributes a likely cause of change and attempts to predict global and regional changes. The reports by working groups 2 and 3 draw heavily on the findings of this WG I report.)

The analysis of the WG I report for the 4AR revealed that:

(a) A total of just 308 reviewers (including reviewers acting on behalf of governments) examined the 11 chapters of the WG I report

(b) An average of 67 reviewers examined each chapter of this report with no chapter being examined by more than 100 reviewers and one by as few as 34.

(c) 69% of reviewers commented on less than 3 chapters of the 11-chapter report. (46% of reviewers commented on just one chapter and 23% on two chapters, thus accounting for more than two-thirds of all reviewers.)

(d) Just 5 reviewers examined all 11 chapters and two of these were recorded as “Govt of (country)”, which may represent a team of reviewers rather than individuals

(e) Every chapter had review comments from a subset of the designated authors for the chapter, which suggests that the authoring process may not have been diligent and inclusive

Chapter 9 was the key chapter because it attributed a change in climate to human activity but:

(a) Just 62 individuals or government appointed reviewers commented on this chapter

(b) A large number of reviewers had a vested interest in the content of this chapter

- 7 reviewers were “contributing editors” of the same chapter

- 3 were overall editors of the Working Group I report
- 26 were authors or co-authors of papers cited in the final draft

- 8 reviewers were noted as “Govt of …” indicating one or more reviewers who were appointed by those governments (and sometimes the same comments appear under individual names as well as for the government in question)

- Only 25 individual reviewers appeared to have no vested interest in this chapter

(c) The number of comments from each reviewer varied greatly

- 27 reviewers made just 1 or 2 comments, and even those making more than 2 comments often drew attention only to typographical errors, grammatical errors, mistakes in citing certain papers or inconsistencies with other chapters, which raises the question of how thorough the reviews with very few comments can have been

- only 18 reviewers made more than 10 comments on the entire 122-page second order draft report (98 pages of text, 24 of figures) and 9 of those 18 had a vested interest

(d) Just four reviewers, including one government appointed team or individual, explicitly endorsed the entire chapter in its draft form – not thousands of scientists, but FOUR!

The claim that the IPCC’s 4th Assessment Report carries the imprimatur of having been reviewed by thousands, or even hundreds, of expert and independent scientists is incorrect, and even risible. In actuality, the report represents the view of small and self-selected science coteries that formed the lead authoring teams.

More independent scientists of standing (61) signed a public letter to the Prime Minister of Canada cautioning against the assumption of human causation of warming[8] than are listed as authors of the 4AR Summary for Policymakers (52). More than 50 scientists also reviewed the Independent Summary for Policymakers, the counter-view to the IPCC’s summary that was published by the Fraser Institute of Canada[9].

8 – It has been proven that human emissions of carbon dioxide have caused global warming

The first question to be answered is whether the Earth is warming at all. As the discussion of fallacy 1 showed, there is no certainty that this is the case.

But even were warming to be demonstrated, and assuming a reasonable correlation between an increase in carbon dioxide and an increase in temperature, that does not mean that the former has driven the latter. Good evidence exists from thousands of years ago that carbon dioxide levels rose only after the temperature increased, so why should we assume that the order is somehow reversed today?[10]

The IPCC claims a subjective 90% to 95% probability that emissions of carbon dioxide have caused warming but that assumes (a) that warming has occurred, (b) that such a subjective probability can be assigned and is meaningful, and (c) that because existing climate models cannot produce correct results without including some “human” influence, then the only allowable explanation is that humans have caused warming.

Remarkably these claims are accompanied by an admission that the level of scientific understanding of many climate factors is quite low. This means that the IPCC’s claim for dangerous human-caused warming rests primarily on the output of climate models that are invalidated and recognised to be incomplete.[11]

The other foundation for the claim of dangerous warming is laboratory work and theoretical physics regarding the ability of molecules of carbon dioxide to absorb heat and re-transmit it. Using these principles, and ignoring other factors, it can be shown that an increase in carbon dioxide beyond pre-industrial levels will cause a very small increase in temperature and that the warming will become less as the concentration of carbon dioxide increases. However, these principles were developed in laboratory environments that don’t match the complexity of real-world climate, and the “other factors” that are ignored are actually an integral part of the climate system. With few exceptions, the actions and interactions of these factors are poorly understood. Moreover, empirical tests of the amount of warming that will be caused by a doubling of atmospheric carbon dioxide suggest a non-alarming figure of only about 1 deg. C[12].
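The diminishing-returns relationship described in the preceding paragraph can be illustrated in a few lines. The sketch below assumes, purely for illustration and following the roughly 1 °C empirical figure cited above, that each doubling of CO2 adds about 1 °C; nothing in it is a measurement.

```python
# Minimal sketch of the logarithmic (diminishing-returns) CO2 relationship
# described above, scaled so that one CO2 doubling yields ~1 °C, the
# "non-alarming" empirical figure cited in the text. Illustrative only.
import math

C0 = 280.0               # approximate pre-industrial CO2 concentration, ppmv
K_PER_DOUBLING = 1.0     # assumed warming per doubling, °C (from the text)

def warming(c_ppmv):
    """Warming above pre-industrial for concentration c, °C (illustrative)."""
    return K_PER_DOUBLING * math.log(c_ppmv / C0) / math.log(2)

for c in (280, 420, 560, 840, 1120):
    print(f"{c:>5} ppmv: {warming(c):.2f} °C above pre-industrial")
# Each doubling adds the same ~1 °C, so the warming added per extra ppmv keeps
# shrinking as the concentration rises.
```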

One major stumbling block for the hypothesis that carbon dioxide has caused significant warming is that since continuous and direct measurements of carbon dioxide began in 1958 global temperatures have both risen and fallen while at all times the concentration of carbon dioxide continued to rise.

It would seem that if carbon dioxide is causing any warming at all then it is easily overwhelmed by other, probably quite natural, climate forces.

Scientists are continuing to investigate the possible impacts of solar forces on climate and in some cases have shown strong correlations. Other scientists are questioning whether cosmic rays may influence the formation of clouds that then control the amount of sunlight reaching the Earth’s surface. Changes in ozone have also been proposed as drivers of climate. That all three of these issues are actively being explored gives the lie to claims that climate science is settled and that carbon dioxide is known to be the sole major cause of recent climatic warming.

Very recently several scientists have said words to the effect “Yes, the natural forces do drive the climate but we believe that carbon dioxide adds to the warming”, though they notably refrain from defining how much warming the carbon dioxide may have caused.[13] The reality is that there is no clear evidence that human emissions of carbon dioxide have any measurable effect on temperatures. Such a claim rests on climate models of unproven accuracy and on lines of physical argument that expressly exclude consideration of other known important drivers of climate change.

Conclusions

The hypothesis of dangerous human-caused warming caused by CO2 emissions is embroiled in uncertainties of the fundamental science and its interpretation, and by fallacious public discussion. It is utterly bizarre that, in the face of this reality, public funding of many billions of dollars is still being provided for climate change research. It is even more bizarre that most governments, urged on by environmental NGOs and other self-interested parties, have either already introduced carbon taxation or trading systems (Europe; some groups of US States), or have indicated a firm intention to do so (Australia).

At its most basic, if scientists cannot be sure that temperatures are today rising, nor establish that the gentle late 20th century warming was caused by CO2 emissions, then it is nonsense to propose that expensive controls are needed on human carbon dioxide emissions.

References

[1] See: SPPI Temperature Rankings

[2] InfoMath

[3] See: SPPI: Positive Feedback We’ve Been Fooling Ourselves

[4] See: SPPI: What Greenhouse Warming?

[5] See: SPPI: What Consensus?

[6] “Ad Hoc Committee Report on the ‘Hockey Stick’ Global Climate Reconstruction” (i.e. “Wegman Report”)

[7] See: SPPI: Myth of Dangerous Human-Caused Climate Change

[8] CanadaPost

[9] Fraser Institute

[10] CSPP: Idso Report

[11] See: SPPI: Monckton; The Mathematical Reason Why Long Run Climate Change is Impossible to Predict

[12] ECD

[13] See: SPPI: What Greenhouse Warming?

Climate Sensitivity Reconsidered Part 1

A special report from Christopher Monckton of Brenchley for all Climate Alarmists, Consensus Theorists and Anthropogenic Global Warming Supporters

January 20, 2011

Abstract

The Intergovernmental Panel on Climate Change (IPCC, 2007) concluded that anthropogenic CO2 emissions probably caused more than half of the “global warming” of the past 50 years and would cause further rapid warming. However, global mean surface temperature TS has not risen since 1998 and may have fallen since late 2001. The present analysis suggests that the failure of the IPCC’s models to predict this and many other climatic phenomena arises from defects in its evaluation of the three factors whose product is climate sensitivity:

1) Radiative forcing ΔF;
2) The no-feedbacks climate sensitivity parameter κ; and
3) The feedback multiplier f.

Some reasons why the IPCC’s estimates may be excessive and unsafe are explained. More importantly, the conclusion is that, perhaps, there is no “climate crisis”, and that currently-fashionable efforts by governments to reduce anthropogenic CO2 emissions are pointless, may be ill-conceived, and could even be harmful.

The context

GLOBALLY-AVERAGED land and sea surface absolute temperature TS has not risen since 1998 (Hadley Center; US National Climatic Data Center; University of Alabama at Huntsville; etc.). For almost seven years, TS may even have fallen (Figure 1). There may be no new peak until 2015 (Keenlyside et al., 2008).

The models heavily relied upon by the Intergovernmental Panel on Climate Change (IPCC) had not projected this multidecadal stasis in “global warming”; nor (until trained ex post facto) the fall in TS from 1940-1975; nor 50 years’ cooling in Antarctica (Doran et al., 2002) and the Arctic (Soon, 2005); nor the absence of ocean warming since 2003 (Lyman et al., 2006; Gouretski & Koltermann, 2007); nor the onset, duration, or intensity of the Madden-Julian intraseasonal oscillation, the Quasi-Biennial Oscillation in the tropical stratosphere, El Nino/La Nina oscillations, the Atlantic Multidecadal Oscillation, or the Pacific Decadal Oscillation that has recently transited from its warming to its cooling phase (oceanic oscillations which, on their own, may account for all of the observed warmings and coolings over the past half-century: Tsonis et al., 2007); nor the magnitude nor duration of multicentury events such as the Medieval Warm Period or the Little Ice Age; nor the cessation since 2000 of the previously-observed growth in atmospheric methane concentration (IPCC, 2007); nor the active 2004 hurricane season; nor the inactive subsequent seasons; nor the UK flooding of 2007 (the Met Office had forecast a summer of prolonged droughts only six weeks previously); nor the solar Grand Maximum of the past 70 years, during which the Sun was more active, for longer, than at almost any similar period in the past 11,400 years (Hathaway, 2004; Solanki et al., 2005); nor the consequent surface “global warming” on Mars, Jupiter, Neptune’s largest moon, and even distant Pluto; nor the eerily-continuing 2006 solar minimum; nor the consequent, precipitate decline of ~0.8 °C in TS from January 2007 to May 2008 that has canceled out almost all of the observed warming of the 20th century.

Figure 1
Mean global surface temperature anomalies (°C), 2001-2008


An early projection of the trend in TS in response to “global warming” was that of Hansen (1988), amplifying Hansen (1984) on quantification of climate sensitivity. In 1988, Hansen showed Congress a graph projecting rapid increases in TS to 2020 through “global warming” (Fig. 2):

Figure 2
Global temperature projections and outturns, 1988-2020


To what extent, then, has humankind warmed the world, and how much warmer will the world become if the current rate of increase in anthropogenic CO2 emissions continues? Estimating “climate sensitivity” – the magnitude of the change in TS after doubling CO2 concentration from the pre-industrial 278 parts per million to ~550 ppm – is the central question in the scientific debate about the climate. The official answer is given in IPCC (2007):

“It is very likely that anthropogenic greenhouse gas increases caused most of the observed increase in [TS] since the mid-20th century. … The equilibrium global average warming expected if carbon dioxide concentrations were to be sustained at 550 ppm is likely to be in the range 2-4.5 °C above pre-industrial values, with a best estimate of about 3 °C.”

Here as elsewhere the IPCC assigns a 90% confidence interval to “very likely”, rather than the customary 95% (two standard deviations). There is no good statistical basis for any such quantification, for the object to which it is applied is, in the formal sense, chaotic. The climate is “a complex, nonlinear, chaotic object” that defies long-run prediction of its future states (IPCC, 2001), unless the initial state of its millions of variables is known to a precision that is in practice unattainable, as Lorenz (1963; and see Giorgi, 2005) concluded in the celebrated paper that founded chaos theory –
“Prediction of the sufficiently distant future is impossible by any method, unless the present conditions are known exactly. In view of the inevitable inaccuracy and incompleteness of weather observations, precise, very-long-range weather forecasting would seem to be nonexistent.”

The Summary for Policymakers in IPCC (2007) says – “The CO2 radiative forcing increased by 20% in the last 10 years (1995-2005).”

Natural or anthropogenic CO2 in the atmosphere induces a “radiative forcing” ΔF, defined by IPCC (2001: ch.6.1) as a change in net (down minus up) radiant-energy flux at the tropopause in response to a perturbation. Aggregate forcing is natural (pre-1750) plus anthropogenic-era (post-1750) forcing. At 1990, aggregate forcing from CO2 concentration was ~27 W m–2 (Kiehl & Trenberth, 1997). From 1995-2005, CO2 concentration rose 5%, from 360 to 378 ppmv, with a consequent increase in aggregate forcing (from Eqn. 3 below) of ~0.26 W m–2, or <1%. That is one-twentieth of the value stated by the IPCC. The absence of any definition of “radiative forcing” in the 2007 Summary led many to believe that the aggregate (as opposed to anthropogenic) effect of CO2 on TS had increased by 20% in 10 years. The IPCC – despite requests for correction – retained this confusing statement in its report.

Such solecisms throughout the IPCC’s assessment reports (including the insertion, after the scientists had completed their final draft, of a table in which four decimal points had been right-shifted so as to multiply tenfold the observed contribution of ice-sheets and glaciers to sea-level rise), combined with a heavy reliance upon computer models unskilled even in short-term projection, with initial values of key variables unmeasurable and unknown, with advancement of multiple, untestable, non-Popper-falsifiable theories, with a quantitative assignment of unduly high statistical confidence levels to non-quantitative statements that are ineluctably subject to very large uncertainties, and, above all, with the now-prolonged failure of TS to rise as predicted (Figures 1, 2), raise questions about the reliability and hence policy-relevance of the IPCC’s central projections.

Dr. Rajendra Pachauri, chairman of the UN Intergovernmental Panel on Climate Change (IPCC), has recently said that the IPCC’s evaluation of climate sensitivity must now be revisited. This paper is a respectful contribution to that re-examination.

The IPCC’s method of evaluating climate sensitivity

We begin with an outline of the IPCC’s method of evaluating climate sensitivity. For clarity we will concentrate on central estimates. The IPCC defines climate sensitivity as equilibrium temperature change ΔTλ in response to all anthropogenic-era radiative forcings and consequent “temperature feedbacks” – further changes in TS that occur because TS has already changed in response to a forcing – arising in response to the doubling of pre-industrial CO2 concentration (expected later this century). ΔTλ is, at its simplest, the product of three factors: the sum ΔF2x of all anthropogenic-era radiative forcings at CO2 doubling; the base or “no-feedbacks” climate sensitivity parameter κ; and the feedback multiplier f, such that the final or “with-feedbacks” climate sensitivity parameter λ = κ f. Thus –

ΔTλ = ΔF2x κ f = ΔF2x λ, (1)
where f = (1 – bκ)–1, (2)

such that b is the sum of all climate-relevant temperature feedbacks. The definition of f in Eqn. (2) will be explained later. We now describe seriatim each of the three factors in ΔTλ: namely, ΔF2x, κ, and f.

1. Radiative forcing ΔFCO2, where (C/C0) is a proportionate increase in CO2 concentration, is given by several formulae in IPCC (2001, 2007). The simplest, following Myhre (1998), is Eqn. (3) –

ΔFCO2 ≈ 5.35 ln(C/C0) ==> ΔF2xCO2 ≈ 5.35 ln 2 ≈ 3.708 W m–2. (3)

To ΔF2xCO2 is added the slightly net-negative sum of all other anthropogenic-era radiative forcings, calculated from IPCC values (Table 1), to obtain total anthropogenic-era radiative forcing ΔF2x at CO2 doubling (Eqn. 3). Note that forcings occurring in the anthropogenic era may not be anthropogenic.

Table 1
Evaluation of ΔF2x from the IPCC’s anthropogenic-era forcings


From the anthropogenic-era forcings summarized in Table 1, we obtain the first of the three factors –
ΔF2x ≈ 3.405 Wm–2. (4)
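The CO2 part of this arithmetic can be checked in a few lines. The sketch below evaluates Eqn. (3) at a doubling and for the 1995-2005 rise of ~360 to ~378 ppmv discussed in the context section; the non-CO2 forcings that turn 3.708 W m–2 into the 3.405 W m–2 of Eqn. (4) come from Table 1 and are simply taken as given.

```python
# Minimal sketch checking the CO2 forcing arithmetic used above (Eqn. 3) and
# the 1995-2005 increment discussed in the context section. The non-CO2
# forcings that turn 3.708 into the paper's 3.405 W/m^2 total (Eqn. 4) come
# from Table 1 and are taken as given, not recomputed here.
import math

def delta_F_co2(c, c0):
    """Simplified CO2 radiative forcing, W/m^2 (Myhre 1998 approximation)."""
    return 5.35 * math.log(c / c0)

# Forcing at CO2 doubling:
print(f"dF at doubling:      {delta_F_co2(2.0, 1.0):.3f} W/m^2")      # ~3.708

# Increment from the 1995-2005 rise of ~360 -> ~378 ppmv:
print(f"dF for 360->378 ppm: {delta_F_co2(378.0, 360.0):.3f} W/m^2")  # ~0.26
```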

Continue to Part 2

Climate Sensitivity Reconsidered Part 2

A special report from Christopher Monckton of Brenchley to all Climate Alarmists, Consensus Theorists and Anthropogenic Global Warming Supporters

Continues from Part 1

2. The base or “no-feedbacks” climate sensitivity parameter κ, where ΔTκ is the response of TS to radiative forcings ignoring temperature feedbacks, ΔTλ is the response of TS to feedbacks as well as forcings, and b is the sum in W m–2 °K–1 of all individual temperature feedbacks, is –

κ = ΔTκ / ΔF2x °K W–1 m2, by definition; (5)
= ΔTλ / (ΔF2x + bΔTλ) °K W–1 m2. (6)

In Eqn. (5), ΔTκ, estimated by Hansen (1984) and IPCC (2007) as 1.2-1.3 °K at CO2 doubling, is the change in surface temperature in response to a tropopausal forcing ΔF2x, ignoring any feedbacks. ΔTκ is not directly mensurable in the atmosphere because feedbacks as well as forcings are present. Instruments cannot distinguish between them. However, from Eqn. (2) we may substitute 1 / (1 – bκ) for f in Eqn. (1), rearranging terms to yield a useful second identity, Eqn. (6), expressing κ in terms of ΔTλ, which is mensurable, albeit with difficulty and subject to great uncertainty (McKitrick, 2007). IPCC (2007) does not mention κ and, therefore, provides neither error-bars nor a “Level of Scientific Understanding” (the IPCC’s subjective measure of the extent to which enough is known about a variable to render it useful in quantifying climate sensitivity). However, its implicit value κ ≈ 0.313 °K W–1 m2, shown in Eqn. 7, may be derived using Eqns. 9-10 below, showing it to be the reciprocal of the estimated “uniform-temperature” radiative cooling response –

“Under these simplifying assumptions the amplification [f] of the global warming from a feedback parameter [b] (in W m–2 °C–1) with no other feedbacks operating is 1 / (1 –[bκ –1]), where [–κ –1] is the ‘uniform temperature’ radiative cooling response (of value approximately –3.2 W m–2 °C–1; Bony et al., 2006). If n independent feedbacks operate, [b] is replaced by (λ1 + λ 2+ … λ n).” (IPCC, 2007: ch.8, footnote).

Thus, κ ≈ 3.2–1 ≈ 0.313 °K W–1 m2. (7)

3. The feedback multiplier f is a unitless variable by which the base forcing is multiplied to take account of mutually-amplified temperature feedbacks. A “temperature feedback” is a change in TS that occurs precisely because TS has already changed in response to a forcing or combination of forcings. An instance: as the atmosphere warms in response to a forcing, the carrying capacity of the space occupied by the atmosphere for water vapor increases near-exponentially in accordance with the Clausius-Clapeyron relation. Since water vapor is the most important greenhouse gas, the growth in its concentration caused by atmospheric warming exerts an additional forcing, causing temperature to rise further. This is the “water-vapor feedback”. Some 20 temperature feedbacks have been described, though none can be directly measured. Most have little impact on temperature. The value of each feedback, the interactions between feedbacks and forcings, and the interactions between feedbacks and other feedbacks, are subject to very large uncertainties.

Each feedback, having been triggered by a change in atmospheric temperature, itself causes a temperature change. Consequently, temperature feedbacks amplify one another. IPCC (2007: ch.8) defines f in terms of a form of the feedback-amplification function for electronic circuits given in Bode (1945), where b is the sum of all individual feedbacks before they are mutually amplified:

f = (1 – bκ)–1 (8)
= ΔTλ / ΔTκ

Note the dependence of f not only upon the feedback-sum b but also upon κ –

ΔTλ = (ΔF + bΔTλ)κ
==> ΔTλ (1 – bκ) = ΔFκ
==> ΔTλ = ΔFκ(1 – bκ)–1
==> ΔTλ / ΔF = λ = κ(1 – bκ)–1 = κf
==> f = (1 – bκ)–1 ≈ (1 – b / 3.2)–1
==> κ ≈ 3.2–1 ≈ 0.313 °K W–1 m2. (9)

Equivalently, expressing the feedback loop as the sum of an infinite series,

ΔTλ = ΔFκ + ΔFκ²b + ΔFκ³b² + …
= ΔFκ(1 + κb + κ²b² + …)
= ΔFκ(1 – κb)–1
= ΔFκf
==> λ = ΔTλ / ΔF = κf (10)

Figure 3
Bode (1945) feedback amplification schematic


For the first time, IPCC (2007) quantifies the key individual temperature feedbacks summing to b:
“In AOGCMs, the water vapor feedback constitutes by far the strongest feedback, with a multi-model mean and standard deviation … of 1.80 ± 0.18 W m–2 K–1, followed by the negative lapse rate feedback (–0.84 ± 0.26 W m–2 K–1) and the surface albedo feedback (0.26 ± 0.08 W m–2 K–1). The cloud feedback mean is 0.69 W m–2 K–1 with a very large inter-model spread of ±0.38 W m–2K–1.” (Soden & Held, 2006).

To these we add the CO2 feedback, which IPCC (2007, ch.7) separately expresses not as W m–2 °K–1 but as concentration increase per CO2 doubling: [25, 225] ppmv, central estimate q = 87 ppmv. Where p is concentration at first doubling, the proportionate increase in atmospheric CO2 concentration from the CO2 feedback is o = (p + q) / p = (556 + 87) / 556 ≈ 1.16. Then the CO2 feedback is –

λCO2 = z ln(o) / dTλ ≈ 5.35 ln(1.16) / 3.2 ≈ 0.25 Wm–2 K–1. (11)

The CO2 feedback is added to the previously-itemized feedbacks to complete the feedback-sum b:

b = 1.8 – 0.84 + 0.26 + 0.69 + 0.25 ≈ 2.16 Wm–2 ºK–1, (12)

so that, where κ = 0.313, the IPCC’s unstated central estimate of the value of the feedback factor f is at the lower end of the range f = 3-4 suggested in Hansen et al. (1984) –

f = (1 – bκ)–1 ≈ (1 – 2.16 x 0.313)–1 ≈ 3.077. (13)

Final climate sensitivity ΔTλ, after taking account of temperature feedbacks as well as the forcings that triggered them, is simply the product of the three factors described in Eqn. (1), each of which we have briefly described above. Thus, at CO2 doubling, –

ΔTλ = ΔF2x κ f ≈ 3.405 x 0.313 x 3.077 ≈ 3.28 °K (14)

IPCC (2007) gives dTλ on [2.0, 4.5] ºK at CO2 doubling, central estimate dTλ ≈ 3.26 °K, demonstrating that the IPCC’s method has been faithfully replicated. There is a further checksum, –

ΔTκ = ΔTλ / f = κ ΔF2x = 0.313 x 3.405 ≈ 1.1 °K, (15)

sufficiently close to the IPCC’s estimate ΔTκ ≈ 1.2 °K, based on Hansen (1984), who had estimated a range 1.2-1.3 °K based on his then estimate that the radiative forcing ΔF2xCO2 arising from a CO2 doubling would amount to 4.8 W m–2, whereas the IPCC’s current estimate is ΔF2xCO2 = 3.71 W m–2 (see Eqn. 3), requiring a commensurate reduction in ΔTκ that the IPCC has not made. A final checksum is provided by Eqn. (5), giving a value identical to that of the IPCC at Eqn. (7):

κ = ΔTλ / (ΔF2x + bΔTλ)
≈ 3.28 / (3.405 + 2.16 x 3.28)
≈ 0.313 °K W–1 m2. (16)
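The chain of Eqns. (3), (7) and (12)-(16) amounts to a few multiplications, and can be reproduced as follows using only the central values quoted above; the sketch checks the arithmetic and measures nothing.

```python
# Minimal sketch reproducing the arithmetic of Eqns. (3), (7), (12)-(16) with
# the central values quoted above; it checks the chain, it measures nothing.
import math

dF2x_co2 = 5.35 * math.log(2)            # Eqn. (3): ~3.708 W/m^2
dF2x = 3.405                             # Eqn. (4): total incl. Table 1 forcings
kappa = 1 / 3.2                          # Eqn. (7): ~0.313 K per W/m^2

# Feedback sum b (Eqn. 12): water vapour, lapse rate, surface albedo, cloud, CO2
b = 1.80 - 0.84 + 0.26 + 0.69 + 0.25     # ~2.16 W/m^2 per K

f = 1 / (1 - b * kappa)                  # Eqn. (13): feedback multiplier ~3.08
dT_lambda = dF2x * kappa * f             # Eqn. (14): ~3.3 K at CO2 doubling
dT_kappa = kappa * dF2x                  # Eqn. (15): ~1.1 K, no feedbacks
kappa_check = dT_lambda / (dF2x + b * dT_lambda)   # Eqn. (16): recovers kappa

print(f"dF2x(CO2 only) = {dF2x_co2:.3f} W/m^2")
print(f"f              = {f:.3f}")
print(f"dT (final)     = {dT_lambda:.2f} K")
print(f"dT (no fbk)    = {dT_kappa:.2f} K")
print(f"kappa check    = {kappa_check:.3f} K W^-1 m^2")
```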

Having outlined the IPCC’s methodology, we proceed to re-evaluate each of the three factors in dTλ.  None of these three factors is directly mensurable. For this and other reasons, it is not possible to obtain climate sensitivity numerically using general-circulation models: for, as Akasofu (2008) has pointed out, climate sensitivity must be an input to any such model, not an output from it.  In attempting a re-evaluation of climate sensitivity, we shall face the large uncertainties inherent in the climate object, whose complexity, non-linearity, and chaoticity present formidable initial-value and boundary-value problems. We cannot measure total radiative forcing, with or without temperature feedbacks, because radiative and non-radiative atmospheric transfer processes combined with seasonal, latitudinal, and altitudinal variabilities defeat all attempts at reliable measurement. We cannot even measure changes in TS to within a factor of two (McKitrick, 2007).

Even satellite-based efforts at assessing total energy-flux imbalance for the whole Earth-troposphere system are uncertain. Worse, not one of the individual forcings or feedbacks whose magnitude is essential to an accurate evaluation of climate sensitivity is mensurable directly, because we cannot distinguish individual forcings or feedbacks one from another in the real atmosphere, we can only guess at the interactions between them, and we cannot even measure the relative contributions of all forcings and of all feedbacks to total radiative forcing. Therefore we shall adopt two approaches: theoretical demonstration (where possible); and empirical comparison of certain outputs from the models with observation to identify any significant inconsistencies.

Radiative forcing ΔF2x reconsidered

We take the second approach with ΔF2x. Since we cannot measure any individual forcing directly in the atmosphere, the models draw upon results of laboratory experiments in passing sunlight through chambers in which atmospheric constituents are artificially varied; such experiments are, however, of limited value when translated into the real atmosphere, where radiative transfers and non-radiative transports (convection and evaporation up, advection along, subsidence and precipitation down), as well as altitudinal and latitudinal asymmetries, greatly complicate the picture. Using these laboratory values, the models attempt to produce latitude-versus-altitude plots to display the characteristic signature of each type of forcing. The signature or fingerprint of anthropogenic greenhouse-gas forcing, as predicted by the models on which the IPCC relies, is distinct from that of any other forcing, in that the models project that the rate of change in temperature in the tropical mid-troposphere – the region some 6-10 km above the surface – will be twice or thrice the rate of change at the surface (Figure 4):

Figure 4
Temperature fingerprints of five forcings


The fingerprint of anthropogenic greenhouse-gas forcing is a distinctive “hot-spot” in the tropical mid-troposphere. Figure 5 shows altitude-vs.-latitude plots from four of the IPCC’s models:

Figure 5
Fingerprints of anthropogenic warming projected by four models


However, as Douglass et al. (2004) and Douglass et al. (2007) have demonstrated, the projected fingerprint of anthropogenic greenhouse-gas warming in the tropical mid-troposphere is not observed in reality. Figure 6 is a plot of observed tropospheric rates of temperature change from the Hadley Center for Forecasting. In the tropical mid-troposphere, at approximately 300 hPa pressure, the model-projected fingerprint of anthropogenic greenhouse warming is absent from this and all other observed records of temperature changes in the satellite and radiosonde eras:

Continue to Part 3

Climate Sensitivity Reconsidered Part 3

A special report from Christopher Monckton of Brenchley to all Climate Alarmists, Consensus Theorists and Anthropogenic Global Warming Supporters

Continues from Part 2

Figure 6
The absent fingerprint of anthropogenic greenhouse warming


None of the temperature datasets for the tropical surface and mid-troposphere shows the strong differential warming rate predicted by the IPCC’s models. Thorne et al. (2007) suggested that the absence of the mid-tropospheric warming might be attributable to uncertainties in the observed record; however, Douglass et al. (2007) responded with a detailed statistical analysis demonstrating that the absence of the projected degree of warming is significant in all observational datasets.

Allen et al. (2008) used upper-atmosphere wind speeds as a proxy for temperature and concluded that the projected greater rate of warming at altitude in the tropics is occurring in reality. However, satellite records, such as the RSS temperature trends at varying altitudes, agree with the radiosondes that the warming differential is not occurring: they show that not only absolute temperatures but also warming rates decline with altitude.

There are two principal reasons why the models appear to be misrepresenting the tropical atmosphere so starkly. First, the concentration of water vapor in the tropical lower troposphere is already so great that there is little scope for additional greenhouse-gas forcing. Secondly, though the models assume that the concentration of water vapor will increase in the tropical mid-troposphere as the space occupied by the atmosphere warms, advection transports much of the additional water vapor poleward from the tropics at that altitude.

Since the great majority of the incoming solar radiation incident upon the Earth strikes the tropics, any reduction in tropical radiative forcing has a disproportionate effect on mean global forcings. On the basis of Lindzen (2007), the anthropogenic-era radiative forcings as established in Eqn. (3) are divided by 3 to take account of the observed failure of the tropical mid-troposphere to warm as projected by the models –

ΔF2x ≈ 3.405 / 3 ≈ 1.135 Wm–2. (17)

The “no-feedbacks” climate sensitivity parameter κ reconsidered

The base climate sensitivity parameter κ is the most influential of the three factors of ΔTλ: for the final or “with-feedbacks” climate sensitivity parameter λ is the product of κ and the feedback factor f, which is itself dependent not only on the sum b of all climate-relevant temperature feedbacks but also on κ. Yet κ has received limited attention in the literature. In IPCC (2001, 2007) it is not mentioned.
However, its value may be deduced from hints in the IPCC’s reports. IPCC (2001, ch. 6.1) says:

“The climate sensitivity parameter (global mean surface temperature response ΔTS to the radiative forcing ΔF) is defined as ΔTS / ΔF = λ {6.1} (Dickinson, 1982; WMO, 1986; Cess et al., 1993). Equation {6.1} is defined for the transition of the surface-troposphere system from one equilibrium state to another in response to an externally imposed radiative perturbation. In the one-dimensional radiative-convective models, wherein the concept was first initiated, λ is a nearly invariant parameter (typically, about 0.5 °K W−1 m2; Ramanathan et al., 1985) for a variety of radiative forcings, thus introducing the notion of a possible universality of the relationship between forcing and response.”

Since λ = κf = κ(1 – bκ)–1 (Eqns. 1, 2), where λ = 0.5 °K W–1 m2 and b ≈ 2.16 W m–2 °K–1 (Eqn. 12), it is simple to calculate that, in 2001, one of the IPCC’s values for f was 2.08. Thus the value f = 3.077 in IPCC (2007) represents a near-50% increase in the value of f in only five years. Where f = 2.08, κ = λ / f ≈ 0.5 / 2.08 ≈ 0.24 °K W–1 m2, again substantially lower than the value implicit in IPCC (2007).
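The back-calculation of f ≈ 2.08 and κ ≈ 0.24 follows directly from λ = κ(1 – bκ)–1 with λ = 0.5 and b ≈ 2.16, as the following minimal sketch shows.

```python
# Minimal sketch of the back-calculation above: given the IPCC (2001) typical
# value lambda = 0.5 K per W/m^2 and the feedback sum b ≈ 2.16 from Eqn. (12),
# solve lambda = kappa / (1 - b*kappa) for kappa, then f = lambda / kappa.
lam = 0.5      # final ("with-feedbacks") sensitivity parameter, K per W/m^2
b = 2.16       # feedback sum, W/m^2 per K

kappa = lam / (1 + b * lam)   # rearranged from lam = kappa / (1 - b*kappa)
f = lam / kappa

print(f"kappa ≈ {kappa:.3f} K W^-1 m^2")    # ~0.24
print(f"f     ≈ {f:.2f}")                   # ~2.08
```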

Some theory will, therefore, be needed.

The fundamental equation of radiative transfer at the emitting surface of an astronomical body, relating changes in radiant-energy flux to changes in temperature, is the Stefan-Boltzmann equation –

F = ε σ T4 W m–2, (18)
where F is radiant-energy flux at the emitting surface; ε is emissivity, set at 1 for a blackbody that absorbs and emits all irradiance reaching its emitting surface (by Kirchhoff’s law of radiative transfer, absorption and emission are equal and simultaneous), 0 for a white body that reflects all irradiance, and (0, 1) for a graybody that partly absorbs/emits and partly reflects; and σ ≈ 5.67 x 10–8 W m–2 °K–4 is the Stefan-Boltzmann constant. Differentiating Eqn. (18) gives –

κ = dT / dF = (dF / dT)–1 = (4 ε σ T3)–1 °K W–1 m2. (19)

Outgoing radiation from the Earth’s surface is chiefly in the infrared. Its peak wavelength λmax is determined solely by the temperature of the emitting surface in accordance with Wien’s Displacement Law, shown in its simplest form in Eqn. (20):

λmax = 2897 / TS = 2897 / 288 ≈ 10 μm. (20)

Since the Earth/troposphere system is a blackbody with respect to the infrared radiation that Eqn. (20) shows we are chiefly concerned with, we will not introduce any significant error if ε = 1, giving the blackbody form of Eqn. (19) –

κ = dT / dF = (4 σ T3)–1 °K W–1 m2. (21)

At the Earth’s surface, TS ≈ 288 °K, so that κS ≈ 0.185 °K W–1 m2. At the characteristic-emission level, ZC, the variable altitude at which incoming and outgoing radiative fluxes balance, TC ≈ 254 °K, so that κC ≈ 0.269 °K W–1 m2. The value κ ≈ 0.24, derived from the typical final-sensitivity value λ = 0.5 given in IPCC (2001), falls between the surface and characteristic-emission values for κ.
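A short numerical sketch of Eqns. (20)-(21), assuming only the constants already given in the text (σ ≈ 5.67 x 10–8 W m–2 °K–4, TS ≈ 288 °K, TC ≈ 254 °K); the helper names are illustrative only.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 degK^-4

def kappa_blackbody(T):
    # "no-feedbacks" sensitivity parameter (4*sigma*T^3)^-1, per Eqn. (21)
    return 1.0 / (4.0 * SIGMA * T ** 3)

def wien_peak_micrometres(T):
    # peak emission wavelength, per Eqn. (20)
    return 2897.0 / T

print(round(kappa_blackbody(288), 3))        # ~0.185 at the surface
print(round(kappa_blackbody(254), 3))        # ~0.269 at the characteristic-emission level
print(round(wien_peak_micrometres(288), 1))  # ~10 micrometres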

However, the IPCC, in its evaluation of κ, does not follow the rule that in the Stefan-Boltzmann equation the temperature and radiant-energy flux must be taken at the same level of the atmosphere.  The IPCC’s value for κ is dependent upon temperature at the surface and radiant-energy flux at the tropopause, so that its implicit value κ ≈ 0.313 °K W–1 m2 is considerably higher than either κS or κC.  IPCC (2007) cites Hansen et al. (1984), who say –

“Our three-dimensional global climate model yields a warming of ~4 ºC for … doubled CO2. This indicates a net feedback factor f = 3-4, because [the forcing at CO2 doubling] would cause the earth’s surface temperature to warm 1.2-1.3 ºC to restore radiative balance with space, if other factors remained unchanged.”

Hansen says ΔF2x is equivalent to a 2% increase in incoming total solar irradiance (TSI). Top-of-atmosphere TSI S ≈ 1368 W m–2, albedo α = 0.31, and Earth’s radius is r. Then, at the characteristic-emission level ZC,

FC = S(1 – α)(πr2 / 4πr2) ≈ 1368 x 0.69 x (1/4) ≈ 236 Wm–2. (22)
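The snippet below simply re-evaluates Eqn. (22) and Hansen’s 2% equivalence from the quantities stated here (S ≈ 1368 W m–2, α = 0.31); taking 1.25 °K as the midpoint of Hansen’s 1.2-1.3 ºC range is an assumption made only for this illustration.

S = 1368.0      # top-of-atmosphere total solar irradiance, W m^-2
alpha = 0.31    # planetary albedo

F_C = S * (1 - alpha) / 4.0     # Eqn. (22): flux at the characteristic-emission level
two_percent = 0.02 * F_C        # Hansen's 2%-of-TSI equivalence for CO2 doubling
kappa_hansen = 1.25 / 4.8       # assumed midpoint of 1.2-1.3 degC over Hansen's rounded 4.8 W m^-2

print(round(F_C, 0), round(two_percent, 2), round(kappa_hansen, 3))   # -> 236.0 4.72 0.26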

Thus a 2% increase in FC is equivalent to 4.72 W m–2, rounded up by Hansen to 4.8 W m–2, implying that κ ≈ 1.25 / 4.8 ≈ 0.260 °K W–1 m2. However, Hansen, in his Eqn. {14}, prefers 0.29 °K W–1 m2.

Bony et al. (2006), also cited by IPCC (2007), do not state a value for κ. However, they say –

“The Planck feedback parameter [equivalent to κ–1] is negative (an increase in temperature enhances the long-wave emission to space and thus reduces R [the Earth’s radiation budget]), and its typical value for the earth’s atmosphere, estimated from GCM calculations (Colman 2003; Soden and Held 2006), is ~3.2 W m–2 ºK–1 (a value of ~3.8 W m–2 ºK–1 is obtained by defining [κ–1] simply as 4σT3, by equating the global mean outgoing long-wave radiation to σT4 and by assuming an emission temperature of 255 ºK).”

Bony takes TC ≈ 255 °K and FC ≈ 235 W m–2 at ZC as the theoretical basis for the stated prima facie value κ–1 ≈ 4σTC3 ≈ 3.8 W m–2 ºK–1, so that κ ≈ 0.263 ºK W–1 m2, in very close agreement with Hansen. However, Bony cites two further papers, Colman (2003) and Soden & Held (2006), as justification for the value κ–1 ≈ 3.2 W m–2 ºK–1, so that κ ≈ 0.313 ºK W–1 m2.

Colman (2003) does not state a value for κ, but cites Hansen et al. (1984), rounding up the value κ ≈ 0.260 °K W–1 m2 to 0.3 °K W–1 m2 –

“The method used assumes a surface temperature increase of 1.2 °K with only the CO2 forcing and the ‘surface temperature’ feedback operating (value originally taken from Hansen et al. 1984).”

Soden & Held (2006) likewise do not declare a value for κ. However, we may deduce their implicit central estimate κ ≈ 1 / 4 ≈ 0.250 °K W–1 m2 from the following passage –

“The increase in opacity due to a doubling of CO2 causes [the characteristic emission level ZC] to rise by ~150 meters. This results in a reduction in the effective temperature of the emission across the tropopause by ~(6.5 K/km)(150 m) ≈ 1 K, which converts to 4 W m–2 using the Stefan-Boltzmann law.”

Thus the IPCC cites only two papers, which in turn cite two others. None of these papers provides any theoretical or empirical justification for a value as high as the κ ≈ 0.313 °K W–1 m2 chosen by the IPCC.

Kiehl (1992) gives the following method, where FC is total flux at ZC:

κS = TS / (4FC) ≈ 288 / (4 x 236) ≈ 0.305 °K W–1 m2. (23)

Hartmann (1994) echoes Kiehl’s method, generalizing it to any level J of an n-level troposphere thus:

κJ = TJ / (4FC)
= TJ / [S(1 – α)]
≈ TJ / [1368(1 – 0.31)] ≈ TJ / 944 °K W–1 m2. (24)
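As a sketch only, Hartmann’s generalization in Eqn. (24) can be evaluated at the two levels discussed above; the temperatures used (288 °K at the surface, 254 °K at ZC) are those already given in the text, and the helper name is illustrative.

S = 1368.0      # top-of-atmosphere total solar irradiance, W m^-2
alpha = 0.31    # planetary albedo

def kappa_hartmann(T_j):
    # kappa_J = T_J / [S(1 - alpha)], per Eqn. (24)
    return T_j / (S * (1 - alpha))

print(round(kappa_hartmann(288), 3))   # ~0.305, matching Kiehl's value in Eqn. (23)
print(round(kappa_hartmann(254), 3))   # ~0.269 at the characteristic-emission level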

Table 2 summarizes the values of κ evident in the cited literature, with their derivations, smallest first. The greatest value, chosen in IPCC (2007), is 30% above the least, chosen in IPCC (2001). However, because the feedback factor f depends not only upon the feedback-sum b ≈ 2.16 W m–2 °K–1 but also upon κ, the 30% increase in κ nearly doubles final climate sensitivity, as the short check following Table 2 illustrates:

Table 2
Values of the “no-feedbacks” climate sensitivity parameter κ


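To illustrate the claim, made just before Table 2, that a 30% rise in κ nearly doubles final climate sensitivity, here is a minimal check using λ = κ(1 – bκ)–1 with b ≈ 2.16 W m–2 °K–1; the two κ values compared are those attributed above to IPCC (2001) and IPCC (2007), and the function name is illustrative.

b = 2.16   # feedback-sum, W m^-2 degK^-1

def final_sensitivity(kappa):
    # lambda = kappa * (1 - b*kappa)^-1, per Eqns. (1)-(2)
    return kappa / (1.0 - b * kappa)

print(round(final_sensitivity(0.240), 2))   # ~0.50 degK W^-1 m^2 (kappa implicit in IPCC 2001)
print(round(final_sensitivity(0.313), 2))   # ~0.97 degK W^-1 m^2 (kappa implicit in IPCC 2007)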
The value of κ cannot be deduced by observation, because temperature feedbacks are present and cannot be separately measured. However, it is possible to calculate κ using Eqn. (6), provided that the temperature change ΔTλ, radiative forcings ΔF2x, and feedback-sum b over a given period are known. The years 1980 and 2005 will be compared, giving a spread of a quarter of a century. We take the feedback-sum b = 2.16 W m–2 °K–1 and begin by establishing values for ΔF and ΔT:

CO2 concentration: 338.67 ppmv (1980); 378.77 ppmv (2005); ΔF = 5.35 ln (378.77/338.67) = 0.560 W m–2
Anomaly in TS: 0.144 °K (1980); 0.557 °K (2005); ΔT = 0.412 °K (NCDC)
Anomaly halved: ΔT = 0.206 °K (McKitrick) (25)

CO2 concentrations are the annual means from 100 stations (Keeling & Whorf, 2004, updated). TS values are NCDC annual anomalies, as five-year means centered on 1980 and 2005 respectively. Now, depending on whether the NCDC or implicit McKitrick value is correct, κ may be directly evaluated:

NCDC: κ = ΔT / (ΔF + bΔT) = 0.412 / (0.560 + 2.16 x 0.412) = 0.284 °KW–1 m2
McKitrick: κ = ΔT / (ΔF + bΔT) = 0.206 / (0.599 + 2.16 x 0.206) = 0.197 °KW–1 m2
Mean: κ = (0.284 + 0.197) / 2 = 0.241 °KW–1 m2 (26)
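The following sketch merely re-runs the arithmetic of Eqn. (26), taking the (ΔT, ΔF) pairs exactly as printed there and b = 2.16 W m–2 °K–1; it is not an independent estimate of κ, and the helper name is illustrative.

b = 2.16   # feedback-sum, W m^-2 degK^-1

def kappa_empirical(dT, dF):
    # kappa = dT / (dF + b*dT), the form used in Eqn. (26)
    return dT / (dF + b * dT)

k_ncdc = kappa_empirical(0.412, 0.560)   # NCDC anomaly with the dF printed in line 1 of Eqn. (26)
k_mck = kappa_empirical(0.206, 0.599)    # halved (McKitrick) anomaly with the dF printed in line 2
print(round(k_ncdc, 3), round(k_mck, 3), round((k_ncdc + k_mck) / 2, 3))   # -> 0.284 0.197 0.241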

We assume that Chylek (2008) is right to find transient and equilibrium climate sensitivity near-identical; that all of the warming from 1980 to 2005 was anthropogenic; that the IPCC’s values for forcings and feedbacks are correct; and, in line 2, that McKitrick is right that the insufficiently corrected heat-island effect of rapid urbanization since 1980 has caused the major global datasets to show roughly double the true rate of temperature increase.

With these assumptions, κ is shown to be less, and perhaps considerably less, than the value implicit in IPCC (2007). The method of finding κ shown in Eqn. (24), which yields a value very close to that of IPCC (2007), is such that progressively smaller forcing increments would deliver progressively larger temperature increases at all levels of the atmosphere, contrary to the laws of thermodynamics and to the Stefan-Boltzmann radiative-transfer equation (Eqn. 18), which mandate the opposite. It is accordingly necessary to select a value for κ that falls well below the IPCC’s value. Dr. David Evans (personal communication, 2007) has calculated that the characteristic-emission-level value of κ should be diminished by ~10% to allow for the non-uniform latitudinal distribution of incoming solar radiation, giving a value near-identical to that in Eqn. (26), and to that implicit in IPCC (2001), thus –

κ = 0.9TC / [S(1 – α)]
≈ 0.9 x 254 / [1368(1 – 0.31)] ≈ 0.242 °K W–1 m2 (27)

The feedback factor f reconsidered

The feedback factor f accounts for two-thirds of all radiative forcing in IPCC (2007); yet it is not expressly quantified, and no “Level Of Scientific Understanding” is assigned either to f or to the two variables b and κ upon which it is dependent.
Several further difficulties are apparent. Not the least is that, if the upper estimates of each of the climate-relevant feedbacks listed in IPCC (2007) are summed, an instability arises, as the short check after Eqn. (28) illustrates. The maxima are –

Water vapor feedback 1.98 W m–2 K–1
Lapse rate feedback –0.58 W m–2 K–1
Surface albedo feedback 0.34 W m–2 K–1
Cloud albedo feedback 1.07 W m–2 K–1
CO2 feedback 0.57 W m–2 K–1
Total feedbacks b 3.38 Wm–2 K–1 (28)
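The instability can be seen directly from the feedback relation f = (1 – bκ)–1 used throughout: summing the maxima in Eqn. (28) gives b ≈ 3.38 W m–2 K–1, and with the IPCC’s implicit κ ≈ 0.313 °K W–1 m2 the loop gain bκ exceeds unity, so the feedback multiplier diverges. A minimal sketch, assuming only the values listed above and the two κ values already discussed:

maxima = {
    "water vapor": 1.98,
    "lapse rate": -0.58,
    "surface albedo": 0.34,
    "cloud albedo": 1.07,
    "CO2": 0.57,
}                                 # upper estimates from Eqn. (28), W m^-2 K^-1

b_max = sum(maxima.values())      # ~3.38 W m^-2 K^-1

for kappa in (0.242, 0.313):      # Eqn. (27) value and the IPCC (2007) implicit value
    gain = b_max * kappa
    if gain >= 1.0:
        print(kappa, round(gain, 3), "-> loop gain >= 1: f undefined (runaway)")
    else:
        print(kappa, round(gain, 3), "-> f =", round(1.0 / (1.0 - gain), 2))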

Continue to Part 4
