Everything posted by VillagePlank

  1. Curious. Most PCs are set to go to sleep during periods of inactivity, yet all those interested in developing sound models to further understand climate change are stopping their PCs from going to sleep and running intensive simulations instead. A PC is normally inactive for >18 hrs/day; so, assuming it's used for 6 hrs (continuously), if all PC users ran this project we should expect roughly a threefold increase in the amount of energy used and, perhaps, in the amount of CO2 computer users are injecting into the atmosphere. Self-fulfilling prophecy, anyone? (A quick sketch of the sums is below.)
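A back-of-envelope sketch of that arithmetic in Python; the wattage and carbon-intensity figures are my own illustrative assumptions, not measurements, so treat the output as indicative only.

    # Rough energy/CO2 arithmetic for the post above.
    # All constants are illustrative assumptions, not measured values.
    HOURS_NORMAL = 6      # assumed hours/day a PC is actually in use
    HOURS_ALWAYS_ON = 24  # PC kept awake running climate simulations
    POWER_KW = 0.15       # assumed average draw of a desktop PC, in kW
    KG_CO2_PER_KWH = 0.5  # assumed grid carbon intensity

    normal_kwh = HOURS_NORMAL * POWER_KW
    always_on_kwh = HOURS_ALWAYS_ON * POWER_KW
    extra_kwh = always_on_kwh - normal_kwh

    print(f"normal use: {normal_kwh:.2f} kWh/day")
    print(f"always on:  {always_on_kwh:.2f} kWh/day")
    print(f"extra CO2:  {extra_kwh * KG_CO2_PER_KWH:.2f} kg/day per PC")
    print(f"increase:   {extra_kwh / normal_kwh:.0f}x the normal daily energy")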
  2. It is indeed the case that the gradient of temperature increase has been seen before, especially if you step outside more 'normalised' methods of looking at historical data. There is a case to be made that the climate has undergone some sort of step-function increase in its base temperature, and that we will now continue as normal but with a 1C (or whatever) increase in base temperatures. The problems with this approach can be summarised in two questions: (i) what forced the climate to step-change? (ii) why is there no record of such a step-change in the CET record until recently? It is, as you imply, worthwhile looking at the absolute temperature values rather than the rate of anomaly increases (such as the hockey-stick). I have to admit I don't truly understand the reasoning behind publishing plots of anomaly increases rather than absolute temperatures. I suspect that, like most graphs, they are drawn to support a point of view; the only problem, of course, is that the absolute temperature line still supports an AGW point of view, it is just not such an exponential curve.
  3. Why should such information be 'not for publication'?
  4. The idea of trends, in my opinion, is this: the CET has a rather large variance - we can get 'extreme' values both in the warm and in the cold - and we need some rigorous method of seeing the big picture. Obviously these extreme values are noise, such as the anomalously warm July just past, and we need to filter the noise out. If I've read your graph correctly you've put a first-order polynomial trend on it (linear, the equivalent of y = mx + c, the equation of a straight line), of which the most interesting part is indeed the gradient. Once the noise has been smoothed (but not filtered out), the gradient gives us the clue that the climate, historically (history meaning any time in the past, even one month), is warming. When you look at a graph without such trend-lines, your mind will do pretty much the same job. It is, as you say, not a viable means of prediction; it is a more digestible form of modelling history. A quick caveat: one must be sure of which trend-line definition one is looking at, and what, semantically, it actually means. For those who'd like to see a VB6 method of generating polynomials, please see: Polynomials.doc (This code is copyleft; use it as you want, but I won't support it - just please don't pretend it's yours; a quick 'From Wilson, NetWeather' will do.)
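For anyone who'd rather not wade through the VB6, here is a minimal sketch of the same idea in Python using numpy's least-squares fit; the CET values below are made-up placeholders, not real data.

    # Fit a straight-line (first-order polynomial) trend, y = mx + c.
    import numpy as np

    years = np.array([2000, 2001, 2002, 2003, 2004, 2005], dtype=float)
    cet = np.array([10.6, 10.4, 10.9, 10.5, 10.8, 11.0])  # placeholder annual CETs, degC

    m, c = np.polyfit(years, cet, deg=1)  # deg=1 gives the linear trend
    print(f"gradient m = {m:+.4f} C/year, intercept c = {c:.1f}")
    print("the smoothed picture says warming" if m > 0 else "...says cooling")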
  5. It is, as you say, certainly a large amount of water; I think what P3 is saying is that in order to trigger a shutdown we need even more precipitation. Both precipitation and evaporation rates increase with warmth, but the difference between the two also increases (Manabe 1994). What is not well specified is whether the rate of increase in that difference (P - E) is linear, exponential or whatever; clearly the Schlesinger report seems to discount it rather quickly, out of hand. It may indeed be the case that P and E increase, but I think the question is where P and E actually happen. If the E rate is higher, say, over N America, and the P rate is higher over the N Atlantic, and the weather is, overall, W to E in this neck of the woods, then (P - E) seems to be less than what is needed for such a dismissive stance (N America evaporates and tips that evaporation into the Atlantic). I'll try and dig out the Manabe paper to see if we can get our freshwater from rain.
  6. I downloaded and read the Schlesinger et al. paper, and the assessment of risk runs from now (the publication date) to 2205 - some two hundred years. The paper is also geared towards economic risk, i.e. risk without economic manipulation ('policy intervention' is the term used in the paper). Also, the summary states quite clearly: 'Such probabilities are worrisome. Of course they should be checked by additional modelling studies. ... if these future studies find similar results, it would seem that the risk of a THC collapse is unacceptably large . . .' I think, as you say P3, that BG are misrepresenting this study somewhat; to what ends, I have no idea.
  7. The reason I thought I'd post the Benfield Grieg article is simply that they effectively 'bet' on the weather - they are an insurance company. The policies they issue for, say, nuclear power stations have to be priced correctly to account for future weather, or they'd go broke (their premium income would not cover their claim payouts). This is not an organisation with a vested interest in doom and gloom, or vice versa; this is an organisation that, whichever way it turns, simply needs to get the weather right. And on that basis I'll nod a fair bit of credit to what they publish. In terms of the risk - and I'll accept the 25% published in that article - I can see no problem; consider the inverse: there is a 75% chance of it not happening. A 25% risk, though, is still of a magnitude that should raise eyebrows and such like, methinks.
  8. Good overview of risks here. (Apologies to mods, but it seems sensible to start splitting the issues under Environmental Change into their own threads now.)
  9. GW, I think that climatic prediction is even more difficult than that. Let's, for a simple example, discuss the frequency and long-term predictability of significant volcanic events. What? What d'ya mean we have no idea . . . :blink:
  10. On the basis of this, I am revising my Oct CET upwards to 10.5C.
  11. On the basis of complete guesswork: 11.0C (assuming near-average values to see the year out).
  12. Pattern for October AWCWC, so I'm having to go for distinctly average with an ever-so-slight warming bias (say +0.2C). The 'scope' of average is: CET to 2005 = 9.7C, and 'modern' CET (1970-2005) = 10.5C. The average of those two values is 10.1C, so my CET prediction is 10.1C + 0.2C = 10.3C.
  13. You could always make real money betting on your spread here. Bet on the weather futures market!!
  14. I also completely edited my above post to avoid the obvious clash with my incomplete knowledge and research that would ultimately arise!!
  15. Analogous forecasting, in my opinion, will always have a significant error rate. To perform an analogy you need the complete picture of what it is you are comparing; to claim that a comparison has merit, you must be sure that all of the significant information is available at your fingertips. I haven't seen any charts of the butterfly's turbulence. [edit] Of course, you can use the same ideas to claim that weather behaves like a strange attractor: that given certain initial states the weather has to decay into said analogous patterns, and that, like Hénon's strange attractor, the weather is self-similar at all scales (fractal) - so knowing temporal initial states is actually irrelevant, because you already have the information on the outcomes. But that's an entirely new (and different) ball game. A sketch of the Hénon map is below. [/edit]
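To make the Hénon reference concrete, here is a minimal Python sketch using the standard parameter values a = 1.4, b = 0.3: two runs of the map started a hair's breadth apart diverge completely within a few dozen steps, which is exactly the problem with arguing from 'analogous' initial states.

    # Henon map: x' = 1 - a*x^2 + y,  y' = b*x
    def henon(x, y, a=1.4, b=0.3):
        return 1.0 - a * x * x + y, b * x

    x1, y1 = 0.1, 0.1
    x2, y2 = 0.1 + 1e-9, 0.1  # an 'analogous' state, off by a billionth

    for step in range(1, 61):
        x1, y1 = henon(x1, y1)
        x2, y2 = henon(x2, y2)
        if step % 15 == 0:
            print(f"step {step:2d}: gap in x = {abs(x1 - x2):.2e}")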
  16. Evens. Unless you have an awful lot of time (in order to approach an infinite series), or you get lucky, you have just as much chance of winning as of losing, based on the historical data.

Yup. Urmmm . . . yup. Yup - but 'cheating' cannot be considered, in mathematical terms, part of the problem domain. Oh dear . . . I've never bet on horses, so I can't say I've shared the experience.

Regardless of the probabilities, by using your understanding of the problem domain you have effectively cheated. I use the word 'cheated' because I can't think of a less acute word. Had you solely used the historical record, your chances would have been, at best, evens. This is exactly what I'm saying above: success comes from stepping outside the problem domain. In the hangman example, the first part relies almost entirely on the reader's ability to pattern-match the word 'elephant' and has, in essence, nothing to do with rigorous statistics - I just dressed it up that way.

Indeed. You've used your understanding of current climatic conditions to skew the bets. I agree with your spread here, but I'd go even less on average and up the cold one a bit. Even so, the basic underlying philosophy is that you expect the weather to change, and you expect it to change to warmer. I am overly certain, and in that respect I've cheated.

The entire set of possible weather combinations is so vast that it would take all of the computing power available now, and expected in the next millennia, to search the vast pattern space we call climate - and even then you'd need about a million years to wait for a result. (Using the top ten supercomputers in the world today, it would take longer than the expected lifetime of the universe.) You need to, as you have, step outside the problem domain and find shortcuts. There are very effective methods of reducing the pattern space - neural networks, genetic algorithms, simulated annealing, etc. - but the problem of weather, in my opinion, cannot simply be deterministically reduced until the underlying 'truth' is revealed, and then rebuilt into a model. The curve of current model building is reaching the point of diminishing returns (in terms of pure numerical forecasting), and modellers are now bringing in more subjective measures such as the probability of ENSO occurrences and volcanic eruptions (I say subjective because the probability of these events occurring is so poorly understood that it becomes a matter of opinion as to which study is right and which is wrong). If you need an example, telephone the MetO and ask them for 'near certain' predictions of where convective rainfall will occur.

More examples of my irrational ranting can be found here:
http://www.netweather.tv/forum/index.php?s...mp;#entry766861
http://www.netweather.tv/forum/index.php?s...mp;#entry753905

If we continue to use reductionist mathematics (breaking the problem domain down into smaller and smaller chunks until they're easy to solve) we'll end up at quantum mechanics, which is, I'm afraid, all about statistics. I wonder what the results would be if we analysed a few weather parameters, say Wp = {Precipitation, Sun, MaxT, MinT, Wind}, and predicted that each subsequent day will be within 5% of the previous day? I've done something similar to this and my results were >80% - I'll dig out my work and post it (a sketch of the idea is below). Does this mean I understand the problem domain better than the MetO? No! Does it mean I've discovered some hitherto unknown method of weather forecasting? No! It means I understand that weather systems normally take more than 24 hours to clear the UK.

In the same manner, although I accept that your spread is fairly consistent with the short-term outcomes - and I expect you to continue to win, at least for the next 2-3 years (win = you're right more often than others, not that the frequency matches your spread) - it cannot, by definition, hold for the future of our climate.
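Before I dig my old work out, here is a rough Python sketch of the persistence test I mean; the temperature series is randomly generated (mean-reverting around 12C) to stand in for real observations, so the exact hit rate is illustrative only.

    # Persistence 'forecast': call tomorrow the same as today, and score a
    # hit if tomorrow really is within 5% of today.
    import random

    random.seed(42)
    temps = [12.0]
    for _ in range(364):
        # AR(1)-style stand-in for a real MaxT series: sticky day to day
        temps.append(12.0 + 0.9 * (temps[-1] - 12.0) + random.gauss(0.0, 0.4))

    hits = sum(1 for today, tomorrow in zip(temps, temps[1:])
               if abs(tomorrow - today) <= 0.05 * abs(today))
    print(f"persistence hit rate: {100 * hits / (len(temps) - 1):.1f}%")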
  17. This weekend I was musing over the use (and abuse) of statistics, especially when talking of CETs and records being broken, when something occurred to me; picture me sitting enjoying a cup of Lady Grey and watching the clouds go by when a light-bulb appeared above my head . . .

We talk of CET averages in a pseudo-Bayesian fashion, especially when we try to forecast future CET values. It is common to look at the CET record and cross-correlate it with existing (and known) data to provide a hypothesis of why the CET will be average, or above or below the mean (there is a case for the median, but I'll leave that for another day). We look at what happened last winter, and at trends for highly differentiated periods such as seasons, and try to anticipate what the future holds for us. I contend that this approach could be wrong, and that the CET, with today's analysis skills, can only ever be considered a historic record.

Consider one million words of real English text. We perhaps all know that the letter 'e' is the most common letter in English; studies have shown that its frequency is around 13% of all letters used - sometimes more, sometimes less. If we choose to play a game of hangman with the hidden word _ _ _ _ _ _ _ _, our first guess would be the letter 'e', giving e _ e _ _ _ _ _. This gives us, at the moment, a guess/success hit-rate of 100%. We've identified the scope of the problem domain, analysed it, and currently we are enjoying success. The next most common letter is 't': this gives us e _ e _ _ _ _ t. The more observant among us will probably be able to guess successfully from this what the word (the outcome) actually is. Not only that: the next most common letter is 'a', which gives us e _ e _ _ a _ t, which only goes to confirm our theory. There are enough letters to predict the outcome successfully without all of the information. We have 100% success.

However, if we were to consider an approach that is, dare I say it, more rigorous, and less reliant on our unique human pattern-matching skills and hunches, the success rate goes significantly down. What happens if we start with the first letter and then iterate successively across the word? In the above example we used statistics to make five attempts where, for the majority of people, the sixth attempt is the whole word, 'elephant'. The success rate for the first letter is, once again, 100%, but it obviously goes downhill quickly. The letter 'l', for instance, is only the 11th most used letter, so out of twelve attempts we now have only 1 hit - an appalling success rate. Of course, it goes up to 2/13 with the next letter, but subsequent letters pull the success rate down and down until the method is useless. (Both strategies are sketched in code below.)

So what's the point of all this rambling? Well, it was analogous to using the CET, and trends of the CET, in what can only be considered a logical statistical manner in order to predict future values of the same. If you accept that this analogy is good, then you'll agree that the method is a significant factor in success. In the hangman case it is so significant that it denotes success or failure. The data are the same, and the analysis of the domain is just as excellent; it's how you use the data, and the conclusions of the analysis, that count. It may well be - and I'm convinced that it is - that numerical computation and analogous forecasting of future trends is doomed to fail. Always. There is something more special about us as human beings. We are expert pattern matchers; we can compute vast amounts of information with sub-second precision. A human weather forecaster, given the data for the domain in question, is a far more effective tool than anything technology can currently create. And this probably makes no sense at all. Oxymoronic, huh?
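For the curious, both hangman strategies can be sketched in a few lines of Python; the letter-frequency order is the commonly published one ('etaoin...'), and the word is the post's own example.

    WORD = "elephant"
    FREQ = "etaoinshrdlcumwfgypbvkjxqz"  # common published frequency order

    # Strategy 1: guess by overall letter frequency, watch the pattern form.
    guessed = set()
    for letter in "eta":
        guessed.add(letter)
        print(f"after '{letter}':", " ".join(c if c in guessed else "_" for c in WORD))

    # Strategy 2: walk the word position by position, guessing in frequency
    # order at each slot, and count the guesses burned per hit.
    attempts = hits = 0
    for target in WORD:
        for letter in FREQ:
            attempts += 1
            if letter == target:
                hits += 1
                break
    print(f"positional: {hits} hits from {attempts} guesses "
          f"({100 * hits / attempts:.0f}% success)")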
  18. I think that this is a valid point. What weather are we expecting with a predicted temperature increase that is less than a normal diurnal range in the UK?
  19. I agree that what you see is indeed possible, but the weight of (recent) evidence shows that the expectation of snow recedes dramatically as we stroll through the winter months - usually at an all-time low when we're all nursing our New Year hangovers. We can all go back to the past and say 'Look, this has happened before', but that's an arbitrary trick designed to foreclose on the less, urrmm, aware; as far as I know, and I admit that that's very little, I can find little evidence that analogous forecasting actually produces significant results. I can go back 10k years and show the world how cold it can be; I can go back 4.5 billion years and show a world many times hotter than boiling water. The argument, of course, is that there were very specific circumstances behind why the climate was that way inclined in the periods mentioned (ice age, and the creation of the Earth). I agree that these are somewhat extreme examples, but if you wish to tackle that then you must be prepared to tell me where the boundaries of the norms for analogy actually lie - i.e. exactly how far back am I allowed to go to present an analogy? If you accept the notion that very specific events can cause exaggerated climatic changes (hey - I'm being politically aware, here!) then why is it so difficult to accept that there might be a forcing that we, as a species, have induced, whether or not it is temporary in nature? As for my opinion of the weather this winter: I'm working on a method and hope to publish it here by the end of October.
  20. Just stirring, I'm afraid; I haven't even read the paper, to be honest - just spreading my brand of humour around for the afternoon. My wife says it reminds her of a five-year-old :blush: