Australia's climate history: was net zero 2050 at Glasgow worth it?
This is the first of four posts analysing changes in observation practice that correlate with shifts in temperature - 1972 metrication, the rollout of automatic weather stations that peaked in 1996, and something that happened in 2013.
The final post will be a comprehensive analysis of Australian climate records stretching back to the 1800s. The floating menu to the left provides links to the different pages.
These posts will consider the central issue that is said to be settled and no longer open to debate ... has a climate crisis actually happened that warrants the hope that future renewables technology will replace fossil-fuelled energy in Australia?
It might be noted that in 2021, the year that 2050 net zero was pledged in Glasgow, Australia's official ACORN mean temperature anomaly was 0.56C compared to 1961-90 (minimum 0.45C, maximum 0.65C). Within the official weather station network, 57 stations have been observing since 1910 and their unadjusted mean temperature increased 0.45C from 1910 to 2021.
For now, let's concentrate on 1 September 1972, the day that new thermometers were introduced and Australian temperatures started to be recorded in Celsius rather than Fahrenheit.
Before metrication, and back to 1910 - the first year of the Bureau of Meteorology's homogenised ACORN dataset (Australian Climate Observations Reference Network, 112 stations, of which 104 non-urban stations are used to average national temperatures) - 59.5% of all daily maximum and 62.7% of all daily minimum temperatures had a rounded decimal of .0F.
That's an awfully big proportion of .0F that might raise questions about the accuracy of temperature recordings before climate change is said to have begun in the 1970s.
Don't believe it? Below is a typical snapshot of temperature observations in the Fahrenheit days, this one detailing July 1940 at Albany in southern Western Australia.
You'll note that 54 of the temperatures have a rounded .0 decimal, with two .2, one .3, one .4, two .5 and a single lonely .8 decimal making up the rest of the 62 observations throughout July. The observer presumably found it too cold on winter mornings to bother with any minimum decimals.
All 12 months of 1940 at Albany can be viewed here, showing that 67.1% of all maxima and 74.3% of all minima were rounded .0F, averaging 20.6C for maxima and 11.5C for minima. The BoM's Climate Data Online shows Albany's RAW temperatures a bit cooler in 1940, with the maximum averaging 20.4C and the minimum 11.4C.
Albany's July 1940 maximum was 16.8C and minimum was 8.6C, compared to a July 2021 maximum of 16.0C and minimum of 9.5C - representing a mean temperature increase of 0.05C. Albany's annual 1940 maximum averaged 20.4C and minimum 11.4C, compared to a maximum of 19.9C and minimum of 13.1C in the 12 months to July 2021 - representing a mean temperature increase of 0.6C.
ACORN 2.1 homogenises Albany in 1940 to a maximum of 20.2C and a minimum of 9.8C. So the adjusted and now official maximum has cooled 0.4C and the minimum has cooled 1.7C compared to what the Albany Advertiser told its readers back in 1940. The mean temperature has increased from 0.6C as originally observed to 0.9C with ACORN 2.1 adjustments.
Do ACORN mathematical algorithms cool every historic day? No. It is relevant that ACORN 1 made 8 February 1933 in the chilly southern town of Albany the hottest day ever recorded in Australia at 51.2C, while ACORN 2 and ACORN 2.1 cooled this to 49.5C; the original recording for Albany on that day was 44.8C. However, ACORN cools the majority of days, so most monthly and annual historic averages are lower.
Below are snapshots from the first public analysis of decimal frequency and distribution observed every day from 1910 to 2019 at 58 of the ACORN weather stations - the 58 that were open in 1910.
Those stations made 4,536,611 maximum and minimum temperature observations over the 109 years, and this analysis looks into every one of them. All temperatures analysed are unadjusted RAW, not adjusted ACORN dailies, with Celsius converted to Fahrenheit so that decimal distribution estimates can be calculated before 1972.
It's important to note in the tables below that all .0F observation estimates before 1972 are fairly accurate and benchmarked against previous BoM analysis, but decimals from .1 to .9 are imprecise because accuracy is lost in converting single decimal Celsius temperatures back to their original Fahrenheit.
These should be regarded only as indicative of annual totals for each decimal, although the conversion errors are consistent and smoothed across 58 stations. These errors could only be overcome if the BoM published all its digitised Fahrenheit records and/or converted two-decimal Celsius records.
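To see why a single-decimal Celsius archive blurs the original Fahrenheit decimals, consider the round trip numerically. The sketch below is a minimal illustration, not the method used in this analysis; the 40F-110F range and the storage-as-one-decimal-Celsius assumption are mine. It counts how many archived Celsius values could have come from two different Fahrenheit tenths:

```python
from collections import defaultdict

def f_to_c1(f):
    """Archive a Fahrenheit reading as Celsius rounded to one decimal place."""
    return round((f - 32.0) * 5.0 / 9.0, 1)

# Map every tenth-of-a-degree Fahrenheit reading between 40F and 110F
# (an assumed illustrative range) to its archived Celsius value.
collisions = defaultdict(list)
for tenths in range(400, 1101):
    f = tenths / 10.0
    collisions[f_to_c1(f)].append(f)

ambiguous = sum(1 for readings in collisions.values() if len(readings) > 1)
print(f"{len(collisions)} distinct archived Celsius values for 701 Fahrenheit tenths")
print(f"{ambiguous} of them could represent two neighbouring Fahrenheit tenths")

# A 0.1C step is 0.18F, so pairs like 75.9F and 76.0F collapse together:
print(f"75.9F -> {f_to_c1(75.9)}C, 76.0F -> {f_to_c1(76.0)}C")
```

Because roughly four in five archived values could represent either of two neighbouring Fahrenheit tenths, individual decimal counts recovered from the Celsius record can only ever be indicative, which is why the .1 to .9 estimates above are hedged while the .0F totals are benchmarked against independent Fahrenheit tallies.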
Nevertheless, just quickly run your eyes over the tables and you'll notice some pretty big changes as the years roll by.
At the 58 long-term ACORN stations, there were 15,310 Fahrenheit maximum decimals of .0 in 1959 and 3,845 Celsius decimals of .0 in 1985. With minimum observations, there were 15,644 at .0 in 1959 and just 4,533 in 1985.
Averaged, in 1959-1971 there were 11,846 rounded .0 maximum observations and 12,537 rounded .0 minimum observations, compared to 4,401 rounded .0 maximum observations and 4,896 rounded .0 minimum observations in 1973-1985.
Did you also notice how few of the other Fahrenheit decimals (.1 to .9) there were before 1972, particularly compared to their frequency after metrication?
So what's going on, and could such a radical shift in decimal distribution affect temperatures? Yes, although that depends on whether you believe observers rounded Fahrenheit temperatures up, down or in equal proportions.
The most plausible explanation is that from 1910 to 1972, many observers would either ignore decimals entirely (75F instead of 75.7F, for example) or often truncate down because they understandably figured that 75F is accurate but 76F is, well, a bit exaggerated.
But don't the tables above show that .9 decimals were even more scarce than .1 decimals before metrication? They do, and it's likely that plenty of observers in the old days rounded both up and down, so average temperatures were more or less accurate, as if the decimals didn't matter.
However, the decimal distribution analysis shows pre-metric observers didn't just round their temperatures but also had a preference for lower decimals, with the averages tabulated below …
Lower decimals were 14.4% more frequent in maximum observations and 19.4% more frequent in minimum observations in 1910-1971 (compared to 0.3% and 2.3% more, respectively, in 1973-2019). This trend can also be seen in the Albany example at the top of this post.
This observer preference for lower decimals suggests more .1 readings were rounded down to .0 than .9 readings were rounded up, with about 15% more .1 than .9 decimals recorded in 1910-1972 simply because observers tended to record lower decimals.
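The observer habits discussed here - ignoring decimals entirely, rounding to the nearest degree, and preferring to round down - have very different effects on a recorded mean. A quick simulation can show the size of each bias; it is entirely hypothetical, with the uniform 50F-90F "true" readings and the 70/30 down/up split being illustrative assumptions rather than estimates from the station data:

```python
import random

random.seed(42)

# Hypothetical "true" Fahrenheit readings with uniformly distributed decimals.
true_temps = [random.uniform(50.0, 90.0) for _ in range(100_000)]

def truncate(t):
    return float(int(t))               # decimals ignored: 75.7F recorded as 75F

def nearest(t):
    return float(round(t))             # balanced rounding to the closest degree

def prefer_down(t, p_down=0.7):
    # An assumed 70/30 preference for rounding down rather than up.
    return float(int(t)) if random.random() < p_down else float(int(t)) + 1.0

true_mean = sum(true_temps) / len(true_temps)
biases = {}
for name, habit in [("truncate", truncate), ("nearest", nearest), ("prefer down", prefer_down)]:
    recorded_mean = sum(habit(t) for t in true_temps) / len(true_temps)
    biases[name] = recorded_mean - true_mean
    print(f"{name:12s} bias: {biases[name]:+.2f}F")
```

Under these assumptions, pure truncation cools the recorded mean by about 0.5F (roughly 0.28C), balanced rounding leaves it essentially unchanged, and a 70/30 downward preference cools it by about 0.2F - which is why the direction of pre-metric rounding matters so much to any 1972 breakpoint.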
Could it be that there are fewer .6 to .9 than lower decimals because Fahrenheit observers preferred to round up than down, suggesting pre-metric temperatures were actually cooler than the records suggest and climate change is even worse than many believe?
That's possible, but only if those observers looked at their thermometers, saw the liquid was probably a bit higher than .5 and so wrote the higher temperature, yet were less inclined to write the lower temperature when the liquid looked a bit less than .5.
It suggests that a greater number of observers who didn't care about decimals thought that if something was 75 point something, it was more accurate to write 76 than 75. Fahrenheit thermometers were harder to read than Celsius thermometers, so if their eyesight was adequate and they made the effort to see it was probably 75.6F, why didn't they write 75.6F?
Cooling temperature influence?
Does this mean that all of Australia's temperatures from 1910 to 1972 might have been recorded cooler than they actually were, resulting in a rapid warming when new thermometers and more precise Celsius readings were introduced?
The bureau has conceded that three dataset comparisons showed an artificial warming from 1967-71 to 1973-77 (see here).
The broad conclusion was that a breakpoint around 0.1C in Australian mean temperatures existed in 1972, but it could not be determined if this was due to metrication.
Instead, the BoM chose not to incorporate the 0.1C into ACORN adjustments because the five years from 1973 were the wettest ever observed and 1973-75 were the three cloudiest years on record for Australia.
Overnight rainclouds can trap daytime heat and cause higher minima, but does the Australian public accept that record rainfall and record cloud cover are associated with warmer weather and higher mean temperatures?
As an example, 2019 was Australia's driest rainfall year on record (275.71mm) and the ACORN mean temperature in 2019 was +1.52C above the 1961-90 mean. In 2020 there was 486.59mm of rainfall and the mean temperature was +1.15C, or 0.37C cooler than 2019. Yet average rainfall from 1973 to 1977 was 597.65mm - 116.8% more than 2019 and 22.8% more than 2020 - and the BoM stretches credibility by claiming this record 1970s rainfall may have caused a 0.1C warming.
So is there evidence that temperatures were affected by the fact that a majority of all Fahrenheit temperatures recorded before 1972 were rounded to .0F?
Let's compare the number of .0 observations with annual temperatures and rainfall since 1910.
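One way to make that comparison concrete is a simple correlation between each year's share of .0 observations and its temperature. The sketch below uses made-up placeholder numbers purely to show the method; the real inputs would be the annual decimal counts in the downloadable Excel file and the corresponding station temperatures:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder data (invented for illustration only): annual share of .0
# observations and annual mean maximum temperature across a set of stations.
zero_share = [0.66, 0.63, 0.58, 0.50, 0.31, 0.20, 0.19]
mean_max   = [24.6, 24.7, 24.7, 24.9, 24.9, 25.1, 25.2]

r = pearson(zero_share, mean_max)
print(f"correlation between .0 share and mean maximum: {r:.2f}")
```

A strongly negative r on the real series would be consistent with heavy .0 rounding coinciding with cooler recorded temperatures, though correlation alone cannot separate a rounding effect from rainfall and cloud influences.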
The maximum chart/table above shows that when .0F rounding increased in 1940-1964 (averaging 66.1%), temperatures were 0.09C cooler than in 1915-1939 (when .0 decimals averaged 55.8%), with rainfall a likely influence, averaging 22.4mm per annum more in the later period.
As illustrated in the tables, rainfall at the 58 stations was at record levels in 1970-74, yet maximum temperatures were static at 24.92C compared to 1965-69 despite an average 157.7mm more annual rainfall. Average annual .0 rounding dropped from 49.96% to 30.76% between those time periods.
If there's been no change to the accuracy of temperature observations and decimal distribution is irrelevant, the recordings should have been cooler when there was a lot less sunshine. They weren't.
The heavy rainfall had declined by 1980-84, by which time the average maximum was 0.27C warmer than in 1965-69, despite 1980-84 still averaging 40.8mm more annual rainfall than 1965-69. Something changed in the rainfall relationship between 1965-69 (average .0 decimals 49.96%) and 1980-84 (average .0 decimals 19.07%).
Average minima at the 58 long-term ACORN stations increased 1C from 1972 (13.42C) to 1973 (14.42C), which could be attributed to 655.8mm of rain in 1972 and 900.4mm in 1973. The more nighttime rainclouds, the more daytime heat is trapped overnight.
Surprisingly, average rainfall increased to 1034mm in 1974 but the average minimum dropped to 13.55C. This is probably because the maximum temperature plunge from 25.16C in 1973 to 24.47C in 1974 meant there was simply less daytime heat to capture.
The heavy rainfall declined at the 58 stations and by 1980 averaged just 635.4mm, which means there were far fewer overnight clouds to trap the heat. However, the 1980 average minimum was 13.92C, which was 0.37C warmer than 1974 when there was 1,034mm of rain.
The average minimum was 13.40C from 1950 to 1969 (average 738.4mm rain) and 13.73C from 1970 to 1989 (average 747.5mm rain). Average .0 minimum decimal rounding was 65.2% in 1950-69 and 24.0% in 1970-89.
So there was a sudden metrication jump of 0.1C in Australia's mean temperature, as acknowledged but ignored by the BoM. The mean temperature at the 58 stations was 19.19C in 1965-69 and 19.28C in 1970-74, despite an annual average 158mm more rainfall.
The mean remained at 19.28C in 1975-79 but then the rainclouds cleared and in 1980-84 (average 705.6mm rain) the mean was 19.48C, which was 0.29C warmer than 1965-69 (average 664.8mm rain) despite having an annual average 40.8mm more rainfall.
The proportion of rounded .0F/C observations averaged 51.8% in 1965-69 and 20.1% in 1980-84.
Clouds mask the artificial warming
The BoM estimated a 0.1C artificial increase in mean temperatures tied to 1972 metrication, but this analysis suggests it was closer to 0.3C - at least at the 58 long-term weather stations that are the backbone of the network used to calculate Australia's historic temperature trends.
The temperatures and the rainfall don't make much sense, but the decimals do. Their significance in warming or cooling Australia's historic temperatures is open to debate, but their extreme changes in distribution ought to raise questions about the reliability of observations.
In the next article those decimals, counted for every day, will be used to analyse what happened when automatic weather stations and their one-second recordings were introduced in the 1990s.
Note: Annual Fahrenheit and Celsius decimal counts for minima and maxima from 1910 to 2019 at the 58 long-term ACORN stations are contained in an Excel file that can be downloaded here.
Note: The accuracy of the pre-metric .0F decimal estimations in this post is validated against a 2001 PhD thesis, Extreme Temperature Events in Australia, by the BoM's Blair Trewin, in which he calculates the .0F proportion of all observations at 94 ACORN weather stations from 1957 to 1971, presumably using the original digitised Fahrenheit observations held by the bureau. An extract of his calculations can be viewed here, showing that 51.5% of observations were .0F. The decimal calculation formulas used in this post, which can be compared here, put the .0F proportion at the same 94 stations over 1957 to 1971 at 51.3%, against Trewin's 51.5%.