
MetO Data Release


VillagePlank

Recommended Posts

Guest diessoli

The MetO have released part of the data as I mentioned here, and it is adjusted, homogenised data, not the raw data that would meaningfully allow independent researchers to discover whether any adjustments or homogenisations have coloured the CRU temperature record. The record's agreement with the other major long-term global temperature series is the reason given by CRU and UEA apologists for trusting the data generated by the CRU - that it agrees with the NASA GISS and NOAA records. So it should: they use essentially the same raw data, just slightly different adjustment and homogenisation algorithms. The American agencies are literally keeping up with the Joneses.

Where they are allowed to, they make the raw data available for download.

BTW, that the data was not available for download by everybody before does not mean that it was not available to "independent researchers".

Here is an interesting graph:

It shows how the unadjusted temperature rocketed in 1990 and beyond when loads of stations worldwide were discontinued from the network, many from the former Soviet bloc countries.

That's why trend analysis is done with the temperature anomaly. Changes like these will then have no impact.

Here is the Russian side of the story as reported in The Telegraph:

http://blogs.telegra...global-warming/

Oh come on. Some Russian economic think-tank writes a report which gets translated and echoed around blogs, and that is the "Russian side of the story"?

D.


Posted
  • Location: Cheddar Valley, 20mtrs asl
  • Weather Preferences: Snow and lots of it or warm and sunny, no mediocre dross

The Telegraph isn't a blog.

Is it correct, properly translated etc? I've no idea, hence saying "as reported in the Telegraph".

Anyone know anything more about this?


Posted
  • Location: Worthing West Sussex

Where they are allowed to, they make the raw data available for download.

If you say so, but where do they indicate which is "raw data" and which is the tampered-with "adjusted" data? If you are in the know, can you help?

BTW, that the data was not available for download by everybody before does not mean that it was not available to "independent researchers".

Sure it was available; I downloaded cruwrlda2.zip and reported it here

That's why trend analysis is done with the temperature anomaly. Changes like these will then have no impact.

BS! If the adjustments are added to the record after the anomaly reference period, the anomalies are inflated by the degree of adjustment. It also works backwards in time on the trendline.
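To spell out the arithmetic behind that claim with made-up numbers - a minimal Python sketch, not anyone's actual code:

# Made-up illustration: a flat record with a +0.5 adjustment applied only
# to values after the reference period (here the first three years).
raw = [10.0, 10.0, 10.0, 10.0, 10.0, 10.0]       # no underlying trend
adjusted = raw[:3] + [t + 0.5 for t in raw[3:]]  # hypothetical late adjustment

ref = sum(adjusted[:3]) / 3                      # reference mean, years 0-2
anomalies = [t - ref for t in adjusted]
print(anomalies)  # [0.0, 0.0, 0.0, 0.5, 0.5, 0.5] - inflated by the adjustment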

Oh come on. Some Russian economic think-tank writes a report which gets translated and echoed around blogs, and that is the "Russian side of the story"?

D.

The Russians are not the only ones annoyed by the misrepresentations - Australians, New Zealanders, and pretty soon the rest of the English-speaking world when the US lawyers get their teeth into the climate exaggeration money machine. Boy, do they suck dry!


Guest diessoli

The Telegraph isn't a blog.

Is it correct, properly translated etc? I've no idea, hence saying "as reported in the Telegraph".

Anyone know anything more about this?

http://blogs.telegra...global-warming/

And he is quoting other blogs (Icecap, WUWT, and AirVent). I don't see any evidence of any actual attempt to get to the bottom of the problem.

D.


Posted
  • Location: Cheddar Valley, 20mtrs asl
  • Weather Preferences: Snow and lots of it or warm and sunny, no mediocre dross

Perhaps you could have a dig around, see what you can find? Afraid I just don't have the time at the moment.

It's very unlikely to be a dreamt-up fabrication; the story must have an origin. To be fair, the Telegraph is a respected broadsheet, not a red top.


Guest diessoli

If you say so, but where do they indicate which is "raw data" and which is the tampered-with "adjusted" data? If you are in the know, can you help?

I don't know. But if I cared enough, I would just ask them.

BS! If the adjustments are added to the record after the anomaly reference period, the anomalies are inflated by the degree of adjustment. It also works backwards in time on the trendline.

You've got a nice way of putting things.

The step change did not happen because suddenly the remaining stations reported higher temperatures, but because predominantly cold stations were removed from the sample.

The anomaly is calculated for each grid point individually, thus the station temperatures are normalized before the global mean anomaly series is calculated. This removes the step change.

The only problem you might have is that certain areas are no longer properly represented (but that is a different matter).

Take a simple example: 3 stations (one cold, two warm) with 4 measurements, the cold one dropping out at the end; the reference period is t = 2 - 3:

1) 5, 6, 4, - => 5 (ref. val.)

2) 10, 11, 12, 11 => 11.5

3) 11, 9, 10, 10 => 9.5

--------------------

If you simply take the absolute temperatures and average them, you get (believe it or not, I did not choose the values such that the averages are constant):

8.66, 8.66, 8.66, 10.5

A significant step up.

But if you use the anomalies you get:

1) 0, 1, -1, -

2) -1.5, -0.5, 0.5, -0.5

3) 1.5, -0.5, 0.5, 0.5

--------------------------

0, 0, 0, 0

That's of course not the exact method used, but it's the same principle.

Basically what happens in the case of the anomalies is that at each point time you subtract the mean of the reference temperatures for the stations available at that time.

D.


Posted
  • Location: Worthing West Sussex

I don't know. But if I cared enough, I would just ask them.

You've got a nice way of putting things.

The step change did not happen because suddenly the remaining stations reported higher temperatures, but because predominantly cold stations were removed from the sample.

...

"nice": Yes, I try to say things simply. Could you simplify the stuff above?

Does it mean things seem warmer because people deleted some of the colder records?


Guest diessoli

Could you simplify the stuff above?

Maybe. How simple does it have to be? I was expecting to be chided for being overly simplistic.

Say you are looking at three stations to calculate your trend. A cold one (e.g. Moscow) and two warm ones, say Melbourne and Jakarta.

For all (except Moscow) you have four years of measurements: in 1987, 1988, 1989, and 1990. For Moscow you only have 1987 to 1989.

Melbourne: T = 10, 11, 12, 11

Jakarta: T = 11, 9, 10, 10

Moscow: T = 5, 6, 4, --

For each year you calculate the global average and get the following temperature series:

Tglob = (10+11+5)/3, (11+9+6)/3, (12+10+4)/3, (11+10)/2 = 8.66, 8.66, 8.66, 10.5

And voila, your temperature surges upwards just because Moscow stopped reporting in 1990.

But if you use anomalies this is no longer a problem.

We use 1988-1989 as a reference period and get the following reference values for each station:

Melbourne: (11+12)/2 = 11.5

Jakarta: (9+10)/2 = 9.5

Moscow: (6+4)/2 = 5

With this you get the temperature anomalies for each station:

Melbourne: 10-11.5, 11-11.5, 12-11.5, 11-11.5 = -1.5, -0.5, 0.5, -0.5

Jakarta: 11-9.5, 9-9.5, 10-9.5, 10-9.5 = 1.5, -0.5, 0.5, 0.5

Moscow: 5-5, 6-5, 4-5 = 0, 1, -1

And now you build your global average:

Tano = (-1.5+1.5+0)/3, ..., (-0.5+0.5)/2 = 0, 0, 0, 0

So the fact that the cold station dropped out in 1990 has no effect on your global temperature trend. Nada.

What might become an issue is that your global average does not make sense any more if the coverage is not good enough. But this is dealt with using other techniques.
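If anyone wants to check the arithmetic, here is the same example as a small Python sketch. As I said, this is the simplified method, purely illustrative, not the exact procedure CRU uses:

def mean(values):
    return sum(values) / len(values)

# The three-station example above; None marks the missing 1990 report.
stations = {
    "Melbourne": [10, 11, 12, 11],
    "Jakarta": [11, 9, 10, 10],
    "Moscow": [5, 6, 4, None],
}
REF = slice(1, 3)  # reference period: 1988-1989

# Global average of absolute temperatures, using whichever stations report:
absolute = [mean([t[y] for t in stations.values() if t[y] is not None])
            for y in range(4)]
print(absolute)  # 8.66..., 8.66..., 8.66..., 10.5 - a spurious jump in 1990

# Global average of anomalies against each station's own reference mean:
refs = {name: mean(t[REF]) for name, t in stations.items()}
anomaly = [mean([t[y] - refs[n] for n, t in stations.items() if t[y] is not None])
           for y in range(4)]
print(anomaly)  # 0.0, 0.0, 0.0, 0.0 - the dropout has no effect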

Does it mean things seem warmer because people deleted some of the colder records?

If the temperature trend was based on absolute temperatures instead of anomalies, then yes, it would look like the global average temperature jumped upwards around 1990, simply because the stations that stopped reporting were colder than the average. I am not saying that records were deleted. The world "lost" a lot of stations - why, I don't know, but it's not hard to imagine and probably easy to find out. Further, a lot of these stations were in the former USSR and presumably colder than average, as you can see on your chart. (Note that I take this chart at face value, even though I consider the source, i.e. Joe D'Aleo, untrustworthy. But it's a well known fact that the number of stations decreased significantly in the 90's, and the temperature increase is not implausible.)

My point is that precisely for this reason people look at anomalies to do trend analysis.

D.


Posted
  • Location: Dorset

Perhaps you could have a dig around, see what you can find? Afraid I just don't have the time at the moment.

It's very unlikely to be a dreamt-up fabrication; the story must have an origin. To be fair, the Telegraph is a respected broadsheet, not a red top.

It's also almost rabidly anti-AGW - see Booker, Lawson et al. As is the author of that blog.

BTW good post diessoli, I completely agree with what you're saying, and it's the difference between science and pseudo-science (pseudo-science will use the argument of left-out data, plot a deliberately chosen station close to Moscow that shows a large temp rise, and draw the conclusion that AGW isn't happening - and, even worse, that it's a scam through adjusted figures).


Guest diessoli

Perhaps you could have a dig around, see what you can find? Afraid I just don't have the time at the moment.

It's very unlikely to be a dreamt-up fabrication; the story must have an origin. To be fair, the Telegraph is a respected broadsheet, not a red top.

I am not suggesting that it's made up.

Given that CRU bashing is the sport of the season, the IEA, a Russian economic "think tank", has written a report (the Telegraph blog has a link to a manual translation) on the treatment of temperature measurements in Russia. Their conclusion is that CRU cherry-picked stations so as to give a false picture of the temperature trends in Russia. And now this gets picked up and echoed around the blogs as the latest "bombshell".

It should be easy to find the various refutations (I don't like linking to blogs, even the ones I know do a good job at sticking to the science).

Maybe the Telegraph is respected, I couldn't say, but Delingpole's piece is pretty obviously nothing more than his opinion plus quotes from blogs that confirm his opinion. I wouldn't actually call that journalism.

This is all such a waste of time.

D.


Posted
  • Location: Worthing West Sussex

Maybe. How simple does it have to be? I was expecting to be chided for being overly simplistic.

Say you are looking at three stations to calculate your trend. A cold one (e.g. Moscow) and two warm ones, say Melbourne and Jakarta.

...

My point is that precisely for this reason people look at anomalies to do trend analysis.

Many thanks to Diessoli for an extremely clear explanation of why a single missing continuous record should not affect the anomaly trend.

The original idea for the graph was D'Aleo's, but in this case it was taken from McKitrick, as the links I posted above show.

The GHCN data show that pre- and post-1990 temperature data are not comparing like for like, as about two thirds of the pre-1990 stations were discontinued from the network, probably for extremely good reasons.

However, the effect is that the more recent station sample shows a globally sampled land-surface mean about a degree and a half higher than the pre-1990 mean of 10 degrees C for up to 15,000 stations.

The lost stations (a majority) therefore had a mean of around 8.6 degrees. (This varies depending on how the means were calculated.)

A subset of 5,000 stations with mean temperatures of 11.5 degrees generally must come from different climate regions than a subset of 10,000 whose mean was three degrees lower.

The usually cited global mean temperature for 1951-80 is 14.0 +/- 0.7 deg C, so assuming some degree of global warming in the last 30 years, the land-based stations may still be oversampling cold regions - or is the mean sea surface temperature higher than mean land surface temperatures?

It is difficult to find separate definitive global land and sea estimates. Roy Spencer's AMSU-A site says 21 deg C or thereabouts for sea surface temperatures - is that so?

The supposed problem due to the Siberian station loss was that an already sparse area of coverage was made even sparser, and that to continue the record for that region, missing data are interpolated from the surrounding gridded data, apparently for the CRU record and an IPCC AR4 report. If the data for that region are now interpolated from the remaining sites that show the increased warming (possibly +1 degree or more in the D'Aleo/McKitrick GHCN graph), because the cooler stations have been discontinued, then the gridded, homogenised, adjusted data carry a considerable recent warm bias now that the number of stations has been reduced.

In Siberia, the recent warming occurs in the winter, rather than the summer, either due to human observation bias (readings missed, or misreported due to extremely unfavourable weather!), or urban heating effects, or both.

As the station data from the MetO become available, sites like appinsys.com are presenting plots of CRU and GHCN data, with their own commentary, such as this on coverage and this on the Siberian data.

As one would expect from the subject matter, the site does not lean to the warm side. :)


Guest diessoli

Many thanks to Diessoli for an extremely clear explanation of why a single missing continuous record should not affect the anomaly trend.

The GHCN data show that pre- and post-1990 temperature data are not comparing like for like, as about two thirds of the pre-1990 stations were discontinued from the network, probably for extremely good reasons.

However, the effect is that the more recent station sample shows a globally sampled land-surface mean about a degree and a half higher than the pre-1990 mean of 10 degrees C for up to 15,000 stations.

...

The point I am trying to make is that, if you use anomalies to calculate the trend, it does not matter if you remove a sub-sample which has a different mean absolute temperature than the whole sample.

Even if the discontinued stations had a mean absolute temperature of -100 degrees it would not have any impact on the anomaly trend.

It's misleading to look at absolute temperatures to analyse trends.

The real question (and I have only briefly alluded to it) is whether the discontinued stations have a different trend. If their trend is much lower than the global trend, then removing them will make the global trend larger than it actually is, and vice versa. The graph and spreadsheet you linked make no attempt to look at that aspect.
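Again a made-up sketch to illustrate the difference (not a claim about the real data): anomalies are immune to a different mean, but not to a different trend.

years = range(20)
warm_rising = [15.0 + 0.05 * y for y in years]  # +0.05/year trend, warm
cold_rising = [-5.0 + 0.05 * y for y in years]  # same trend, 20 degrees colder
cold_flat = [-5.0 for y in years]               # colder, and no trend at all

REF = slice(0, 10)  # first ten years as the reference period

def anomalies(series):
    ref = sum(series[REF]) / len(series[REF])
    return [t - ref for t in series]

def combined(*series):  # average the series year by year
    return [sum(vals) / len(vals) for vals in zip(*series)]

def slope(series):  # least-squares trend per year, standard library only
    n = len(series)
    xb, yb = (n - 1) / 2, sum(series) / n
    return (sum((x - xb) * (v - yb) for x, v in enumerate(series))
            / sum((x - xb) ** 2 for x in range(n)))

# Same trend, different mean: losing the cold station changes nothing.
print(slope(combined(anomalies(warm_rising), anomalies(cold_rising))))  # ~0.05
print(slope(anomalies(warm_rising)))                                    # ~0.05

# Different trend: losing the flat station doubles the apparent trend.
print(slope(combined(anomalies(warm_rising), anomalies(cold_flat))))    # ~0.025
print(slope(anomalies(warm_rising)))                                    # ~0.05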

edit:

Actually, I just realised that my example is not really well chosen. I should have used data where the cold station has the same trend as the warm stations. The point is still valid though.

As the station data from the MetO become available, sites like appinsys.com are presenting plots of CRU and GHCN data, with their own commentary, such as this on coverage and this on the Siberian data.

As one would expect from the subject matter, the site does not lean to the warm side. :)

Out of interest: do you trust what you see on this site? Do you agree with his conclusions? Do you regard his method as scientific?

D.


Posted
  • Location: Worthing West Sussex

The point I am trying to make is that, if you use anomalies to calculate the trend, it does not matter if you remove a sub-sample which has a different mean absolute temperature than the whole sample.

...

Actually, I just realised that my example is not really well chosen. I should have used data where the cold station has the same trend as the warm stations. The point is still valid though.

Looking at anomalies alone makes any discontinuity between prior and post-change values invisible, as you neatly showed. The new trend can only follow that of the post-change stations, and any prior trend is not only eliminated but lost back in time, depending on the period of smoothing used to distinguish the trend-line from the noise. At the end of a time series we cannot know the actual trend until the time series has been further extended.

Out of interest: do you trust what you see on this site? Do you agree with his conclusions? Do you regard his method as scientific?

D.

Terry Pratchett puts my view of "science" much better than I can in this excerpt from "Hogfather", where Death (who always speaks in Capitalised text) is explaining the concept of "lies told to children" to his granddaughter, Susan.

"Ah," said Susan dully. "Trickery with words. I would have thought you'd have been more literal-minded than that."

I AM NOTHING IF NOT LITERAL-MINDED. TRICKERY WITH WORDS IS WHERE HUMANS LIVE.

"All right," said Susan. "I'm not stupid. You're saying that humans need ... fantasies to make life bearable."

REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.

"Tooth fairies? Hogfathers? Little ---"

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

"So we can believe the big ones?"

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

"They're not the same at all!"

YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET --- Death waved a hand. AND YET YOU ACT AS IF THERE IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME ... RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED.

"Yes, but people have got to believe that, or what's the point ---"

MY POINT EXACTLY.

Science is also a large lie. It cannot be done by data mining or model building, or controlled by schools of thought. Laws of nature are manmade, and subject to revision, and in the main are special cases restricted to our limited experience of the cosmos. The atom is belied by sub-atomic particles.

Concepts such as "mean global temperature", "sea level", "ocean heat content" cannot be measured, any more than the "human genome" can be absolutely sequenced. They are nonetheless useful and probably necessary in aiding our understanding, as are imaginary numbers in mathematics.

Without the discipline of mathematics, concepts can be misrepresented and used to show almost any desired result. I may differ in my genome from the standard human model by perhaps 25468 single nucleotide polymorphisms; you have maybe 37208. You could say that you are more highly evolved; I could say I am more human...

Climate science is pretty much at the stage where folks are arguing how many angels can dance on the head of a pin. It's good fun, but if it is to be science, the ground rules need to be defined. It's difficult to see where to start for a young science, where many different disciplines come together, with a basis of incomplete, uncertain, and possibly manipulated (however honestly done) data, much of which was never originally collected to be used for this purpose. Give it another fifty years of consistent data collection, and we may be able to judge if climatology is a valid scientific pursuit, or just a political tool that failed to generate consistent future predictions.

Merry Hogwatch D. :(


Posted
  • Location: Near Newton Abbot or east Dartmoor, Devon

...

Climate science is pretty much at the stage where folks are arguing how many angels can dance on the head of a pin. It's good fun, but if it is to be science, the ground rules need to be defined. It's difficult to see where to start for a young science, where many different disciplines come together, with a basis of incomplete, uncertain, and possibly manipulated (however honestly done) data, much of which was never originally collected to be used for this purpose. Give it another fifty years of consistent data collection, and we may be able to judge if climatology is a valid scientific pursuit, or just a political tool that failed to generate consistent future predictions.

...

How do you know?


Posted
  • Location: Lincoln, Lincolnshire
  • Weather Preferences: Sunshine, convective precipitation, snow, thunderstorms, "episodic" months.

Science is also a large lie. It cannot be done by data mining or model building, or controlled by schools of thought. Laws of nature are manmade, and subject to revision, and in the main are special cases restricted to our limited experience of the cosmos. The atom is belied by sub-atomic particles.

Concepts such as "mean global temperature", "sea level", "ocean heat content" cannot be measured, any more than the "human genome" can be absolutely sequenced. They are nonetheless useful and probably necessary in aiding our understanding, as are imaginary numbers in mathematics.

The lack of absolute certainty in these things is why, when you look closely, the scientists put error estimates on them. You may not see these error estimates in government and media reports, but they are there. For example, the global warming since 1861, or 1901, is subject to uncertainty bounds of approximately +/- 0.2C.

If we cannot find a way of definitively establishing the objective truth, we can aim to closely approximate it instead. Most work in the atmospheric sciences deals with approximations and uncertainty bounds.

Climate science is pretty much at the stage where folks are arguing how many angels can dance on the head of a pin. It's good fun, but if it is to be science, the ground rules need to be defined. It's difficult to see where to start for a young science, where many different disciplines come together, with a basis of incomplete, uncertain, and possibly manipulated (however honestly done) data, much of which was never originally collected to be used for this purpose. Give it another fifty years of consistent data collection, and we may be able to judge if climatology is a valid scientific pursuit, or just a political tool that failed to generate consistent future predictions.

Merry Hogwatch D. :(

There is not going to be such a thing as 100% consistent data collection. Things like human error, observing sites having to be moved, some closing down, others opening - these will always add potential sources of error to the data. We can minimise the extent of those potential inhomogeneities but it is unrealistic to expect to be able to completely eliminate them.

So in 50 years' time you could still say "this data could have been manipulated and it's incomplete and uncertain, therefore we need another 50 years of consistent data collection", and rinse and repeat 50 years after that.

Again, much of the above also seems to be trying to cast doubt on the notion that the world warmed over the last 100 years. There is a massive amount of evidence pointing in this direction- should we really be discarding it just because it appears 99.999% likely to have happened instead of 100% certain?


Posted
  • Location: Near Newton Abbot or east Dartmoor, Devon

How do I know what?

C'mon :( , you can surely see the question refers to the quote and your views expressed therein?


Guest diessoli

It is difficult to find separate definitive global land and sea estimates.

Try http://www.cru.uea.ac.uk/cru/data/temperature/

CRUTEM3 and CRUTEM3v are land-only temperatures, HadSST2 are Sea Surface Temperature anomalies.

Roy Spencer's AMSU-A site says 21 deg C or thereabouts for sea surface temperatures - is that so?

Roy Spencer's site is displaying brightness temperatures, which are not the same as the temperatures one would measure.

Check out the data mentioned above to see what the difference between land temperatures and SST is.

D.


Posted
  • Location: Rochester, Kent

Even if the discontinued stations had a mean absolute temperature of -100 degrees it would not have any impact on the anomaly trend. It's misleading to look at absolute temperatures to analyse trends.

If that's really the case, then there is a problem, is there not?

A good example might be the record-breaking temperature gradient during the English October Storm of 1987. Some 10C (or something high) increase in 30 minutes - well outside of any normal distribution - but it happened, and is recorded as having happened.

If a station, or two, or three, show values well outside the normal anomaly range (stddev * 5, say) and are not included on that basis, yet the observational evidence is still valid (the real heart of the question), then that seems to me to be a strange methodology.

Of course, most station data is probably (I don't know) stripped of data that exceeds stddev * 1.96 on the basis of the normal distribution, and interpolated either against itself or its neighbours, anyway - so further questions need to be asked, and, of course, answered.

Indeed, as might be ascertained by this post, I am just starting to get an inkling - and it's only an inkling - that the normal rules of statistics, whilst valid for hundreds of years, might not necessarily apply to climatology at its current orders of magnitude. Or the measurement basis is wrong ....


Guest diessoli

If that's really the case, then there is a problem, is there not?

Um, no. But maybe I don't understand your problem.

A good example might be the record-breaking temperature gradient during the English October Storm of 1987. Some 10C (or something high) increase in 30 minutes - well outside of any normal distribution - but it happened, and is recorded as having happened.

If a station, or two, or three, show values well outside the normal anomaly range (stddev * 5, say) and are not included on that basis, yet the observational evidence is still valid (the real heart of the question), then that seems to me to be a strange methodology.

Are we talking about a single outlier value being taken out, or a station?

Could you point to where it says which stations are excluded for what reason?

Of course, most station data is probably (I don't know) stripped of data that exceeds stddev * 1.96 on the basis of the normal distribution, and interpolated either against itself or its neighbours, anyway - so further questions need to be asked, and, of course, answered.

Indeed, as might be ascertained by this post, I am just starting to get an inkling - and it's only an inkling - that the normal rules of statistics, whilst valid for hundreds of years, might not necessarily apply to climatology at its current orders of magnitude. Or the measurement basis is wrong ....

Again, I don't think I get your point.

Just to go one step further than the 3-station example above:

I have generated an artificial "global temperature set" and calculated trends of absolute temperatures and anomalies.

For each station I generated a random series with a forced trend of 1 degree per century.

One half of the stations were "cold", the other half "warm". I exaggerated the difference and did not include any stations in between, so the impact of the different methods is obvious.

After averaging over all stations for a single year, I fitted a linear trend through each series.

If you use absolute temperatures and assume you would have values for all years for all stations, you get the following:

[attached plot: absolute temperatures, all stations]

Now do the same, but leave out all cold stations after 1990, and you get:

[attached plot: absolute temperatures, cold stations dropped after 1990]

The linear fit still gives the same trend of course, but it's clear that it's nonsense to assume a linear trend and we see the warm bias.

Now we do the same for the anomaly:

[attached plot: anomalies, all stations]

And again without the cold stations:

[attached plot: anomalies, cold stations dropped after 1990]

The bias is gone and we see no change in the trend.

Here's the data:

absolute-vs-anomaly.txt

And in the spirit of transparency, here is the script that created the data (sorry for the zip, but I could not upload it otherwise):

createTrendyTemps.py.zip

You will need Python to run the script.

I've not used any non-standard libraries (like scipy), to make it easier for people to get up and running, and I've tried to keep the code simple and straightforward to read.
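And in case the attachment gives you trouble as well, here is a minimal inline version of the same idea - a sketch of the procedure described above, not the attached script:

import random

random.seed(0)
YEARS = range(1900, 2010)
REF = range(1951, 1981)  # reference period
TREND = 1.0 / 100.0      # forced trend: 1 degree per century

def station(base):
    # Random noise around a base temperature, plus the forced trend.
    return {y: base + TREND * (y - 1900) + random.gauss(0.0, 0.5) for y in YEARS}

cold = [station(-10.0) for _ in range(50)]  # exaggeratedly cold half
warm = [station(20.0) for _ in range(50)]   # exaggeratedly warm half

def global_series(use_anomaly):
    out = []
    for y in YEARS:
        vals = []
        for group, is_cold in ((cold, True), (warm, False)):
            if is_cold and y > 1990:
                continue  # cold stations discontinued after 1990
            for st in group:
                t = st[y]
                if use_anomaly:
                    t -= sum(st[r] for r in REF) / len(REF)
                vals.append(t)
        out.append(sum(vals) / len(vals))
    return out

def slope(series):  # least-squares fit, no scipy needed
    n = len(series)
    xb, yb = (n - 1) / 2, sum(series) / n
    return (sum((x - xb) * (v - yb) for x, v in enumerate(series))
            / sum((x - xb) ** 2 for x in range(n)))

print(slope(global_series(False)) * 100)  # absolute: inflated by the 1990 step
print(slope(global_series(True)) * 100)   # anomaly: close to 1 degree/century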

D.


Posted
  • Location: Rochester, Kent

Windows 7 reports the zip file as invalid.

Yes, I understand the use of anomaly - can you change your dataset so that it has outliers of >stddev*3 (for example) and see what happens?

As I understand it, and for the life of me I can't find where - so it is entirely possible that I dreamt it - data that doesn't fit the normal distribution (stddev * 1.96 or less?) is rejected and interpolated using either its own series or that of its neighbours. Seems a fair enough technique to me, but my point is, the weather does produce some very extreme anomalies, so whilst this technique is valid, is it suitable for climatology, since we know that large outliers do occur and they constitute part of the observational dataset?

Also, how is the deviation from the average calculated? Is a check made to see how far skewed the distribution is (subtract median from mean)? Or is the measurement frequency manipulated to guarantee a reasonably good distribution every time?

Of course, as I've said, this might not be happening at all! And these are only honest questions - I use similar techniques at work, all the time.
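Something like this is the kind of check I mean - purely illustrative, I have no idea whether anything like it is actually done:

import statistics

def qc(series, k=1.96):
    # Flag values more than k standard deviations from the mean and patch
    # them by interpolating the immediate neighbours; also report a crude
    # skewness check (mean minus median).
    mu = statistics.mean(series)
    sd = statistics.stdev(series)
    cleaned = list(series)
    for i in range(1, len(series) - 1):
        if abs(series[i] - mu) > k * sd:
            cleaned[i] = (series[i - 1] + series[i + 1]) / 2
    return cleaned, mu - statistics.median(series)

data = [10.2, 9.8, 10.1, 25.0, 10.0, 9.9, 10.3]  # one wild value
cleaned, skew = qc(data)
print(cleaned)  # the 25.0 becomes (10.1 + 10.0) / 2 = 10.05
print(skew)     # positive: the mean is dragged upwards by the outlier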


Guest diessoli

Windows 7 reports the zip file as invalid.

Works fine on Linux. Anyway I've removed the shebang and changed the extension and now the forum upload accepts it:

createTrendyTemps.txt

Yes, I understand the use of anomaly - can you change your dataset so that it has outliers of >stddev*3 (for example) and see what happens?

I did not want to imply that you don't understand anomalies but wanted to extend the previous example. I should have made a separate post.

If you remove the outliers, you reduce the variance, given that the noise is random. At least for sufficiently long periods.

I don't think there is much point doing this with the artificial data.
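But just to demonstrate the variance point, a throwaway sketch:

import random
import statistics

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(10000)]  # pure random noise
sd = statistics.pstdev(noise)
trimmed = [x for x in noise if abs(x) <= 3 * sd]        # cut the tails
print(len(noise) - len(trimmed))       # a few dozen tail values removed
print(sd, statistics.pstdev(trimmed))  # the trimmed spread is smaller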

As I understand it, and for the life of me I can't find where - so it is entirely possible that I dreamt it - data that doesn't fit the normal distribution (stddev * 1.96 or less?) is rejected and interpolated using either its own series or that of its neighbours. Seems a fair enough technique to me, but my point is, the weather does produce some very extreme anomalies, so whilst this technique is valid, is it suitable for climatology, since we know that large outliers do occur and they constitute part of the observational dataset?

Whilst these outliers are real, they are part of the noise anyway, and to establish a long-term trend you're not really interested in them.

It's different if you want to, say, study any changes in frequencies or amplitudes of such extreme events.

Also, how is the deviation from the average calculated? Is a check made to see how far skewed the distribution is (subtract median from mean)? Or is the measurement frequency manipulated to guarantee a reasonably good distribution every time?

Of course, as I've said, this might not be happening at all! And these are only honest questions - I use similar techniques at work, all the time.

See here for a review of various homogenisation methods. You might be able to find something in the references.

Also: this project might interest you (and others):

http://www.clearclimatecode.org/

D.


Posted
  • Location: Swallownest, Sheffield 83m ASL

Interesting stuff but does it use the data that was deleted or is this based on manipulated data?


Guest diessoli

Interesting stuff but does it use the data that was deleted or is this based on manipulated data?

It's using the data that GISTEMP uses, so every single value was personally manipulated by Al Gore, of course.

D.

