Fallacies about Global Warming

By John McLean

It is widely alleged that the science of global warming is “settled”. This implies that all the major scientific aspects of climate change are well understood and uncontroversial, and that scientists are now just mopping up unimportant details. The allegation is profoundly untrue: for example, the US alone is said to be spending more than $4 billion annually on climate research, which is a great deal to pay for mere mopping up; and great uncertainty and argument surround many of the principles of climate change, especially the magnitude of any human causation of warming. Worse still, not only is the science not “settled”, but its discussion in the public domain is contaminated by many fallacies, which leads directly to the widespread public confusion that is observed. This paper explains the eight most common fallacies that underpin public discussion of the hypothesis that dangerous global warming is caused by human greenhouse gas emissions.

1 – Scientists have accurate historical temperature data

Historical temperature records taken near the surface of the Earth are subject to various biases and recording errors that render them unreliable. In the early days, thermometers could only show the temperature at the moment of reading, so the data recorded from that era consist of just one reading per day. Later, thermometers were able to record the minimum and maximum temperatures, and so the daily readings became those two extremes within the 24-hour period. Only in the last 20 or 30 years have instruments been available that record the temperature at regular intervals throughout the 24 hours, thus allowing a true time-based daily average to be calculated.

The so-called “average” temperatures that are published and frequently plotted through time are based initially on only a single daily value, and later on the mathematical average of the minimum and maximum temperatures. Although time-based averages are now available for some regions, they are not generally used, because the better instrumentation is not uniformly installed throughout the world and the historical data are at best a mathematical average of two values. The problem is that such averages are easily distorted by brief periods of unusually high or low temperature relative to the rest of the day, such as a short spell with less cloud cover or a short period of cold wind or rain.
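The distortion is easy to see with a toy calculation. The short Python sketch below uses invented hourly readings (hypothetical numbers, not real observations) to compare the min/max “average” with a true time-based daily mean; a brief pre-dawn cold snap and a short sunny spell pull the two figures apart.

    # Hypothetical hourly temperatures for one day (°C) - illustration only
    hourly_temps = [14.0] * 24
    hourly_temps[3] = hourly_temps[4] = 6.0   # brief cold snap before dawn
    hourly_temps[14] = 20.0                   # short sunny break in the afternoon

    minmax_average = (min(hourly_temps) + max(hourly_temps)) / 2
    time_based_average = sum(hourly_temps) / len(hourly_temps)

    print(f"(Tmin + Tmax)/2      = {minmax_average:.2f} °C")      # 13.00
    print(f"true time-based mean = {time_based_average:.2f} °C")  # about 13.58

Two hours of unusually cold air shift the min/max figure by more than half a degree relative to the time-based mean, even though the rest of the day was unremarkable.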

Another serious problem is that thermometers are often located where human activity can directly influence the local temperature.[1] This is not only the urban heat island (UHI) effect, where heat generated by traffic, industry and private homes and then trapped by the man-made physical environment causes elevated temperatures. There is also a land use effect, where human activity has modified the microclimate of the local environment through buildings or changes such as land clearing or agriculture. Only recently have the climatic impacts of these human changes started to receive detailed scrutiny, but many older meteorological records are inescapably contaminated by them.

The integrity of some important historical data is also undermined by reports that various Chinese weather stations that were claimed to be in unchanged locations from 1954 to 1983 had in fact moved, with one station moving 5 times and up to 41 kilometres[2]. The extent of this problem on a global scale is unknown but worrying, because shifts of less than 500 metres are known to cause a significant change in recordings.

The observed minimum and maximum temperatures that are recorded, albeit with the inclusion of possible local human influences, are sent to one or more of the three agencies that calculate the “average global temperature” (NASA, NOAA, UK Hadley Centre). These agencies produce corrected data, and graphs that depict a significant increase in average global temperature over the last 30 years. However, this apparent rise may at least partly result from the various distortions of surface temperature measurements described above. No-one has independently verified the temperature records, not least because full disclosure of methods and data is not made and the responsible agencies appear very reluctant to allow such auditing to occur.

In reality, there is no guarantee, and perhaps not even a strong likelihood, that the thermometer-based temperature measurements truly reflect the average local temperatures free from any distortions. There is also no proof that the calculations of average global temperatures are consistent and accurate. For example, it is known that at least two of the three leading climate agencies use very different data handling methods and it follows that at least one of them is likely to be incorrect.

It is stating the obvious to say that if we don’t know what the global average temperature has been and currently is, then it is difficult to argue that the world is warming at all, let alone to understand to what degree any alleged change has a human cause.

2 – Temperature trends are meaningful and can be extrapolated

That temperature trends plotted over decades are meaningful, and understood to the degree that they can be projected, is one of the greatest fallacies in the claims about man-made global warming.

Any trend depends heavily upon the choice of start and end points. A judicious selection of such points can create a wide variety of trends. For example, according to the annual average temperatures from Britain’s CRU:

trend for 1900-2006 = 0.72 °C/century

trend for 1945-2006 = 1.05 °C/century

trend for 1975-2006 = 1.87 °C/century

None of these trends is any more correct than the others.
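The sensitivity to the chosen window can be illustrated with a simple least-squares fit. The sketch below uses an invented annual anomaly series (it is not the CRU data) purely to show that the same record yields different trends depending on the start year chosen; substituting real annual values would demonstrate the same point on the observations themselves.

    import numpy as np

    # Hypothetical annual temperature anomalies (°C), 1900-2006 - illustration only
    years = np.arange(1900, 2007)
    rng = np.random.default_rng(0)
    anomalies = (0.005 * (years - 1900)                 # slow background drift
                 + 0.15 * np.sin((years - 1900) / 8.0)  # multi-decadal wiggle
                 + rng.normal(0.0, 0.1, years.size))    # year-to-year noise

    def trend_per_century(from_year, to_year=2006):
        """Ordinary least-squares slope over the window, in °C per century."""
        mask = (years >= from_year) & (years <= to_year)
        slope_per_year = np.polyfit(years[mask], anomalies[mask], 1)[0]
        return 100.0 * slope_per_year

    for start in (1900, 1945, 1975):
        print(f"trend {start}-2006: {trend_per_century(start):+.2f} °C/century")

Each window gives a different slope from exactly the same series, which is the point being made above.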

Despite the common use of temperature trends in scientific and public discussion, they cannot be used to illustrate possible human greenhouse influences on temperature unless episodic natural events, such as the powerful El Nino of 1998, are taken into account and corrected for.

Trends cannot be extrapolated meaningfully unless scientists:

(a) Thoroughly understand all relevant climate factors;

(b) Are confident that the trends in each individual factor will continue; and

(c) Are confident that interactions between factors will not cause a disruption to the overall trend.

The IPCC’s Third Assessment Report of 2001 listed 11 possible climate factors and indicated that the level of scientific understanding was “very low” for 7 of them and “low” for another. No similar listing appears in the recent Fourth Assessment Report, but it does contain a list of factors relevant to the absorption and emission of radiation that shows that the level of scientific knowledge of several of those factors is still quite low.

Scientists are still struggling even to understand the influence of clouds on temperature. Observational data shows that low-level cloud outside the tropics has decreased since 1998, but scientists cannot be certain that the decreasing trend will continue, nor what such a decrease would mean. Perhaps clouds act as a natural thermostat and higher temperatures will ultimately create more clouds and this will have a cooling effect.[3]

Again, if random natural events dictate the historical trend, then extrapolation of the trend makes no sense. Even if those natural events can be expected to continue in the future, their severity – which often dictates the short-term trend – is unknowable.

3 – The accuracy of climate models can be determined from their output

A common practice among climate scientists is to compare the output of their climate models to historical data from meteorological observations. (In fact the models are usually “adjusted” to match that historical data as closely as possible, but let’s ignore that for now.)

The accuracy of a model is determined by the accuracy with which it simulates each climatic factor and climatic process rather than the closeness of the match between its output and the historical data. If the internal processing is correct then so too will be the output, but apparently accurate output does not confer accuracy on the internal processes.

Two issues to watch are:

(a) The combination of a number of inaccuracies can produce acceptable output if calculations that are “too high” counterbalance those that are “too low”.

(b) If the internal processes are largely based on data that changes almost immediately as a consequence of a change in temperature, then the output of the model will probably appear accurate when compared to historical data, but it will be of no benefit for predicting future changes.
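Point (a) can be shown with a deliberately trivial example. The numbers below are entirely hypothetical; they simply show two internal terms that are both wrong yet cancel, so the “model” output matches the observation while both internal processes are misrepresented.

    # Hypothetical forcing terms (arbitrary units) - illustration of cancelling errors
    true_warming_term = 2.0
    true_cooling_term = -1.0
    observed_net = true_warming_term + true_cooling_term      # 1.0

    model_warming_term = 2.6      # over-stated warming process
    model_cooling_term = -1.6     # over-stated cooling process
    model_net = model_warming_term + model_cooling_term       # also 1.0

    print(observed_net, model_net)  # identical output despite two internal errors

The match with the “observed” value says nothing about whether either internal term was simulated correctly.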

4 – The consensus among scientists is decisive (or even important)

The extent of a claimed consensus that dangerous human-caused global warming is occurring is unknown and the claim of consensus is unsupported by any objective data[4]. However, this is irrelevant because by its nature any consensus is a product of opinions, not facts.[5]

Though consensus determines legal and political decisions in most countries, this simply reflects the number of persons who interpret data in a certain way or who have been influenced by the opinions of others. Consensus does not confer accuracy or “rightness”.

Scientific matters are certainly not settled by consensus. Einstein pointed out that hundreds of people agreeing with him were of no relevance, because it would take just one person to prove him wrong.

Science as a whole, and its near neighbour medicine, are replete with examples of individuals or small groups of researchers successfully undermining the prevailing popular theories of the day. This is not to say that individuals or small groups who hold maverick views are always correct, but it is to say that even the most widely held opinions should never be regarded as an ultimate truth.

Science is about observation, experiment and the testing of hypotheses, not consensus.

5 – The dominance of scientific papers on a certain subject establishes a truth

This fallacy is closely related to the previous discussion of consensus, but here the impact is an indirect consequence of a dominant opinion.

Funding for scientific research has increasingly come to be determined by consensus, because where public monies are concerned the issue ultimately comes back to an opinion as to whether the research is likely to be fruitful. Prior to the last 20 or 30 years, research was driven principally by scientific curiosity. That science funding has now become results-oriented has had a dramatic, negative impact on the usefulness of many scientific results. Ironically, pursuing science that is thought by politicians to be “important” or “in the public interest” often produces work that is conformist and fashionable rather than independent and truly useful.

Targeting of “useful” research strongly constricts the range of scientific papers that are produced. A general perception may arise that few scientists disagree with the dominant opinion, whereas the reality may be that papers that reject the popular opinion are difficult to find simply because of the weight of funding, and hence the research effort, that is tailored towards the conventional wisdom.

Science generally progresses by advancing on the work that has gone before, and the usual practice is to cite several existing papers to establish the basis for one’s work. Again the dominance of papers that adhere to a conventional wisdom can put major obstacles in the way of the emergence of any counter-paradigm.

6 – Peer-reviewed papers are true and accurate

The peer-review process was established for the benefit of editors who did not have good knowledge across all the fields that their journals addressed. It provided a “sanity check” to avoid the risk of publishing papers which were so outlandish that the journal would be ridiculed and lose its reputation.

In principle this notion seems entirely reasonable, but it neglects certain aspects of human nature, especially the tendency for reviewers to defend their own (earlier) papers, and indirectly their reputations, against challengers. Peer review also ignores the strong tendency for papers that disagree with a popular hypothesis, one the reviewer understands and perhaps supports, to receive a closer and often hostile scrutiny.

Reviewers are selected from practitioners in the field, but many scientific fields are so small that the reviewers will know the authors. The reviewers may even have worked with the authors in the past or wish to work with them in future, so the objectivity of any review is likely to be tainted by this association.

Some journals now request that authors suggest appropriate reviewers but this is a sure way to identify reviewers who will be favourable to certain propositions.

It also follows that if the editor of a journal wishes to reject a paper, it will be sent to a reviewer who is likely to reject it, whereas a paper that the editor favours for publication will be sent to a reviewer who is expected to be sympathetic. In 2002 the editor-in-chief of the journal “Science” announced that there was no longer any doubt that human activity was changing the climate, so what are the realistic chances of this journal publishing a paper that suggests otherwise?

The popular notion is that reviewers should be skilled in the relevant field, but a scientific field like climate change is so broad, and encompasses so many sub-disciplines, that it really requires expert reviewers from many different fields. That this is seldom undertaken explains why so many initially influential climate papers have later been found to be fundamentally flawed.

In theory, reviewers should be able to understand and replicate the processing used by the author(s). In practice, climate science has numerous examples where authors of highly influential papers have refused to reveal their complete set of data or the processing methods that they used. Even worse, the journals in question not only allowed this to happen, but have subsequently defended the lack of disclosure when other researchers attempted to replicate the work.

7 – The IPCC is a reliable authority and its reports are both correct and widely endorsed by all scientists

The Intergovernmental Panel on Climate Change (IPCC) undertakes no research for itself and relies on peer-reviewed scientific papers in reputable journals (see item 6). There is strong evidence that the IPCC is very selective of the papers it wishes to cite and pays scant regard to papers that do not adhere to the notion that man-made emissions of carbon dioxide have caused warming.

Four more issues noted above are also very relevant to the IPCC procedures. The IPCC reports are based on historical temperature data and trends (see 1 & 2), and the attribution of warming to human activities relies very heavily on climate modelling (see item 3). The IPCC pronouncements have a powerful influence on the direction and funding of scientific research into climate change, which in turn influences the number of research papers on these topics. Ultimately, and in entirely circular fashion, this leads the IPCC to report that large numbers of papers support a certain hypothesis (see item 5).

These fallacies alone are major defects of the IPCC reports, but the problems do not end there. Other distortions and fallacies of the IPCC are of its own doing.

Governments appoint experts to work with the IPCC but once appointed those experts can directly invite other experts to join them. This practice obviously can, and does, lead to a situation where the IPCC is heavily biased towards the philosophies and ideologies of certain governments or science groups.

The lead authors of the chapters of the IPCC reports can themselves be researchers whose work is cited in those chapters. This was the case with the so-called “hockey stick” temperature graph in the Third Assessment Report (TAR) published in 2001. The paper in which the graph first appeared was not subject to proper and independent peer review, despite which the graph was prominently featured in a chapter for which the co-creator of the graph was a lead author. The graph was debunked in 2006[6] and has been omitted without explanation from the Fourth Assessment Report (4AR) of 2007.

The IPCC has often said words to the effect of “We don’t know what else could be causing warming, so it must be humans” (or “the climate models will only produce the correct result if we include man-made influences”), yet at the same time the IPCC says that scientists have a low level of understanding of many climate factors. It logically follows that if any natural climate factors are poorly understood, then they cannot be properly modelled, the output of the models will probably be incorrect, and natural forces cannot easily be dismissed as possible causes. In these circumstances it is simply dishonest to unequivocally blame late 20th century warming on human activity.[7]

The IPCC implies that its reports are thoroughly reviewed by thousands of experts. Any impression that thousands of scientists review every word of the reports can be shown to be untrue by an examination of the review comments for the report by IPCC Working Group I. (This report is crucial, because it discusses historical observations, attributes a likely cause of change and attempts to predict global and regional changes. The reports by working groups 2 and 3 draw heavily on the findings of this WG I report.)

The analysis of the WG I report for the 4AR revealed that:

(a) A total of just 308 reviewers (including reviewers acting on behalf of governments) examined the 11 chapters of the WG I report

(b) An average of 67 reviewers examined each chapter of this report with no chapter being examined by more than 100 reviewers and one by as few as 34.

(c) 69% of reviewers commented on fewer than three chapters of the 11-chapter report (46% of reviewers commented on just one chapter and 23% on two chapters, together accounting for more than two-thirds of all reviewers).

(d) Just 5 reviewers examined all 11 chapters, and two of these were recorded as “Govt of (country)”, which may represent a team of reviewers rather than individuals

(e) Every chapter had review comments from a subset of the designated authors for the chapter, which suggests that the authoring process may not have been diligent and inclusive

Chapter 9 was the key chapter because it attributed a change in climate to human activity but:

(a) Just 62 individuals or government appointed reviewers commented on this chapter

(b) A large number of reviewers had a vested interest in the content of this chapter

- 7 reviewers were “contributing editors” of the same chapter

- 3 were overall editors of the Working Group I report

- 26 were authors or co-authors of papers cited in the final draft

- 8 reviewers were noted as “Govt of …” indicating one or more reviewers who were appointed by those governments (and sometimes the same comments appear under individual names as well as for the government in question)

- Only 25 individual reviewers appeared to have no vested interest in this chapter

(c) The number of comments from each reviewer varied greatly

- 27 reviewers made just 1 or 2 comments. Even the reviewers who made more than 2 comments often only drew attention to typographical errors, grammatical errors, mistakes in citing certain papers or inconsistencies with other chapters, so how thorough can reviews with so few comments have been?

- Only 18 reviewers made more than 10 comments on the entire 122-page second order draft report (98 pages of text, 24 of figures), and 9 of those 18 had a vested interest

(d) Just four reviewers, including one government appointed team or individual, explicitly endorsed the entire chapter in its draft form – not thousands of scientists, but FOUR!

The claim that the IPCC’s 4th Assessment Report carries the imprimatur of having been reviewed by thousands, or even hundreds, of expert and independent scientists is incorrect, and even risible. In actuality, the report represents the view of small and self-selected science coteries that formed the lead authoring teams.

More independent scientists of standing (61) signed a public letter to the Prime Minister of Canada cautioning against the assumption of human causation of warming[8] than are listed as authors of the 4AR Summary for Policymakers (52). More than 50 scientists also reviewed the Independent Summary for Policymakers, the counter-view to the IPCC’s summary that was published by the Fraser Institute of Canada[9].

8 – It has been proven that human emissions of carbon dioxide have caused global warming

The first question to be answered is whether the Earth is warming at all. As the discussion of fallacy 1 showed, there is no certainty that this is the case.

But even were warming to be demonstrated, and assuming a reasonable correlation between an increase in carbon dioxide and an increase in temperature, that does not mean that the former has driven the latter. Good evidence exists from thousands of years ago that carbon dioxide levels rose only after the temperature increased, so why should we assume that the order is somehow reversed today?[10]

The IPCC claims a subjective 90% to 95% probability that emissions of carbon dioxide have caused warming but that assumes (a) that warming has occurred, (b) that such a subjective probability can be assigned and is meaningful, and (c) that because existing climate models cannot produce correct results without including some “human” influence, then the only allowable explanation is that humans have caused warming.

Remarkably these claims are accompanied by an admission that the level of scientific understanding of many climate factors is quite low. This means that the IPCC’s claim of dangerous human-caused warming rests primarily on the output of climate models that are unvalidated and acknowledged to be incomplete.[11]

The other foundation for the claim of dangerous warming is laboratory work and theoretical physics regarding the ability of molecules of carbon dioxide to absorb heat and re-transmit it. Using these principles, and ignoring other factors, it can be shown that an increase in carbon dioxide beyond pre-industrial levels will cause a very small increase in temperature, and that the additional warming becomes smaller as the concentration of carbon dioxide increases. However, these principles were developed in laboratory environments that do not match the complexity of the real-world climate, and the “other factors” that are ignored are actually an integral part of the climate system. With few exceptions, the actions and interactions of these factors are poorly understood. Moreover, empirical estimates of the warming to be expected from a doubling of atmospheric carbon dioxide suggest a non-alarming figure of only about 1 °C[12].
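The logarithmic behaviour referred to here can be sketched with one widely cited simplified expression for radiative forcing, ΔF ≈ 5.35 ln(C/C₀) W/m², combined with an assumed no-feedback sensitivity of roughly 0.3 °C per W/m². Both figures are textbook approximations used only for illustration, not values taken from the references above.

    import math

    def no_feedback_warming(c_ppm, c0_ppm=280.0, sensitivity=0.3):
        """Approximate no-feedback warming (°C) for a CO2 rise from c0_ppm to c_ppm."""
        forcing = 5.35 * math.log(c_ppm / c0_ppm)   # W/m², simplified expression
        return sensitivity * forcing

    for concentration in (280, 420, 560, 1120):
        print(f"{concentration:>5} ppm: {no_feedback_warming(concentration):+.2f} °C")

Each doubling of concentration adds roughly the same increment (about 1.1 °C on these assumptions), so the warming contributed by each additional part per million keeps shrinking as the concentration rises, consistent with the roughly 1 °C per doubling mentioned above.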

One major stumbling block for the hypothesis that carbon dioxide has caused significant warming is that, since continuous and direct measurements of carbon dioxide began in 1958, global temperatures have both risen and fallen while at all times the concentration of carbon dioxide has continued to rise.

It would seem that if carbon dioxide is causing any warming at all then it is easily overwhelmed by other, probably quite natural, climate forces.

Scientists are continuing to investigate the possible impacts of solar forces on climate and in some cases have shown strong correlations. Other scientists are questioning whether cosmic rays may influence the formation of clouds that then control the amount of sunlight reaching the Earth’s surface. Changes in ozone have also been proposed as drivers of climate. That all three of these issues are actively being explored gives the lie to claims that climate science is settled and that carbon dioxide is known to be the sole major cause of recent climatic warming.

Very recently several scientists have said words to the effect “Yes, the natural forces do drive the climate but we believe that carbon dioxide adds to the warming”, though they notably refrain from defining how much warming the carbon dioxide may have caused.[13] The reality is that there is no clear evidence that human emissions of carbon dioxide have any measurable effect on temperatures. Such a claim rests on climate models of unproven accuracy and on lines of physical argument that expressly exclude consideration of other known important drivers of climate change.

Conclusions

The hypothesis of dangerous human-caused warming driven by CO2 emissions is embroiled in uncertainties of the fundamental science and its interpretation, and in fallacious public discussion. It is utterly bizarre that, in the face of this reality, public funding of many billions of dollars is still being provided for climate change research. It is even more bizarre that most governments, urged on by environmental NGOs and other self-interested parties, have either already introduced carbon taxation or trading systems (Europe; some groups of US states), or have indicated a firm intention to do so (Australia).

At its most basic, if scientists cannot be sure that temperatures are today rising, nor establish that the gentle late 20th century warming was caused by CO2 emissions, then it is nonsense to propose that expensive controls are needed on human carbon dioxide emissions.

References

[1] See: SPPI Temperature Rankings

[2] InfoMath

[3] See: SPPI: Positive Feedback We’ve Been Fooling Ourselves

[4] See: SPPI: What Greenhouse Warming?

[5] See: SPPI: What Consensus?

[6] “Ad Hoc Committee Report on the ‘Hockey Stick’ Global Climate Reconstruction” (i.e. “Wegman Report”)

[7] See: SPPI: Myth of Dangerous Human-Caused Climate Change

[8] CanadaPost

[9] Fraser Institute

[10] CSPP: Idso Report

[11] See: SPPI: Monckton; The Mathematical Reason Why Long Run Climate Change is Impossible to Predict

[12] ECD

[13] See: SPPI: What Greenhouse Warming?
