Posts posted by SnowBallz

  1. CFS keeping the dream alive with a sustained cold and snowy blast from the east via a huge Scandinavian high http://www.meteociel...0&mode=0&run=10

    Followed by an Atlantic ridge and channel low

    http://www.meteociel...0&mode=0&run=10

    Trend setter!

    I do love the CFS. It's just so bonkers, yet never boring; like a bedtime story from your childhood, with wizards and dragons and magic potions.

    The CFS was really on LSD yesterday with regard to March. Today it's probably showing a heatwave.

  2. Actually, Piers has been nearer the mark this winter than many of our so-called experts on here. It appears Piers is easy to make fun of here, but his record is as good as anyone's; sure, he gets it wrong at times, but look at the Net-weather winter forecast - busted in December.


    Yeah, but even a broken clock is right twice a day.

    As Chio rightly states: it's not even so much about whether he's 'right' or 'wrong' - it's the gibberish which he wraps around his sage prophecies that is just plain garbage. He shows himself up to be a complete and utter charlatan, as he doesn't even understand the basic elements of the science of which he claims to be an expert witness.

    It's a commercially minded venture though, so his business model is very easy to understand; The Sun doesn't have to be correct, it just needs to grab enough dullards' attention.

  3. Look at the low heights across S Europe though with the trough.

    The outlook is so simple to figure out I'm struggling to understand why some can't see it. Rather than using just the models, we should have more members using their instinct. The 06Z is another clear example of what I've been saying since yesterday morning: NW'ly outbreaks followed by a prolonged cold spell beginning around the 10th, as we see the PV weaken.

    Oh, well I couldn't agree more TEITS.

    In that case, I take it you've retired from confidently and persistently dismissing the idea of cold returning to our shores - simply because it isn't showing in low-res GFS. Moreover, I'm heartened that you have some newly acquired respect for Chio et al, who advised you not to place so much trust in, and base your 'predictions' on, the Mickey Mouse low-res output.

    Nice to see you finally join the party; it's never too late and welcome aboard.

    Thank you for taking the time to provide this very interesting and informative reply.

    I do have a question, though, that I'd like to put out there. Are the models programmed with all the currently known variables that could affect the weather, i.e. SSW, MJO, GWO, solar influence (Sun activity), NAO, AO etc., so that whenever any of these variables change they are reflected in the next model run's output?

    Also, is a lack of data variables in some models/runs the reason for such wild differences in output, and if so, which model has the most variable input?

    No worries.

    Many of those indices (NAO, AO, MJO, GWO) are a calculated product of the output, i.e. they're not strictly a 'variable' as such in themselves, more a measure. I don't believe there's a cyclical measure for 'SSW' - only a retrospective measure (NAM, I think it's termed? ~ Chio?) No idea about solar influence; I think the link there is still tenuous, therefore I doubt it's included as a variable. It wouldn't surprise me if a view is taken from a space meteorologist though.
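
    To illustrate that 'calculated product' point, here's a toy sketch (my own simplification - the station values are made up, and real NAO definitions use proper station or EOF-based climatologies): an NAO-like index as the standardised difference between Azores-region and Iceland-region pressure.

```python
import statistics

def crude_nao(azores_mslp, iceland_mslp):
    """Very crude NAO-like index: the standardised (z-scored) difference
    between Azores-region and Iceland-region sea-level pressure."""
    diffs = [a - i for a, i in zip(azores_mslp, iceland_mslp)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return [(d - mean) / sd for d in diffs]

# Made-up daily MSLP values (hPa), purely for illustration
azores = [1015.0, 1020.0, 1030.0, 1020.0]
iceland = [1005.0, 1000.0, 1000.0, 1000.0]
print([round(x, 2) for x in crude_nao(azores, iceland)])
```

    Positive values mean a stronger-than-usual Azores-Iceland gradient (mild, zonal flow for the UK); negative values mean a weakened gradient, which is the blocked, often colder pattern.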

    The simple answer to the above indices is 'no' - historical values are not used, only the most current actual observations. It's those new observations which then calculate the latest, revised AO, NAO, MJO etc... I think there's future scope to build a level of 'feedback' into models, so that you can identify growth rates and patterns - but that's going to be quite some years off, considering the 10 billion or so data points which are fed in at initialisation. Introducing feedback would double the processing workload, and that's before you've even begun to think about writing an algorithm that knew how much weight to apply to any particular pattern. Talking 10yrs away at least, I'd say.

    Lots of factors will affect variability, but in the main you're going to encounter more variance when you're more reliant on an algorithm to guess (normalise) parameters. You'll have to do this if your model has a low vertical or horizontal resolution; therefore, the more atmospheric layers or the tighter the ground coverage, the better your model will be. The best measure of performance is the verification statistics, and from these you'll see that - of the public models - ECMWF and UKMO tend to hold sway. I've heard mention that MOGREPS has greater verification than both, but there's no way to verify that until we can see some stats to that effect.
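
    For the curious, the kind of score those verification statistics are built from can be sketched in a few lines - this is a generic anomaly correlation coefficient with invented numbers, not any centre's actual scoring code:

```python
import math

def anomaly_correlation(forecast, observed, climatology):
    """Toy anomaly correlation coefficient (ACC): the correlation between
    forecast anomalies and observed anomalies relative to climatology."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = math.sqrt(sum(f * f for f in fa) * sum(o * o for o in oa))
    return num / den

# Made-up 850hPa temperature fields (degC), purely for illustration
clim = [5.0, 6.0, 4.0, 5.5]
obs = [3.0, 7.5, 2.0, 6.5]
fcst = [3.5, 7.0, 2.5, 6.0]
print(round(anomaly_correlation(fcst, obs, clim), 3))  # high: anomalies line up well
```

    A score of 1.0 would mean the forecast anomalies matched the observed ones perfectly; centres track how far into the forecast this stays above a threshold such as 0.8.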

    As simplistic as this might sound, it always amazes me when I look up into the sky and think that we - as human beings - can actually, and fairly accurately, predict what it's going to do over the next few days - it just looks like air! But therein the beauty of CFD reveals itself.

  5. Excellent post overall, and not too wordy at all - you write a hell of a lot better than most mathematicians I have met! However, I won't be following your advice as to my friend's wife and her position at the Met... :-)

    Do you think there is a theoretical ceiling to the ability of these ever-tuned algorithms to accurately forecast weather, or are we talking constant evolution to the point that chaos weather theory is a thing of the past?

    Ta, I do try.

    It's a very interesting question. In principle, chaos is ever present, and - by definition - it is very difficult to predict. But 'chaos theory', I think, is perhaps too loose a term accredited to a phenomenon which - at present - we just don't understand enough about. Indeed, if the weather were truly and absolutely chaotic, i.e. if it had no discernible pattern, then it's hard to see how you'd improve a verification score. So, it's chaotic to a degree; the degree being, perhaps, our understanding of the variables at play.
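
    A toy illustration of that sensitivity (nothing to do with real NWP code - the 'doubling map' is just a textbook stand-in for a chaotic system): a perturbation of roughly one part in a billion doubles every step, so within about 30 steps it has swamped the 'forecast' entirely.

```python
from fractions import Fraction

def doubling(x):
    """One step of the doubling map x -> 2x mod 1: a textbook chaotic
    system in which any initial error grows by a factor of two per step."""
    return (2 * x) % 1

# Two initial states differing by 2**-30 (about one part in a billion);
# Fractions keep the arithmetic exact, so the divergence is not a float artefact
a = Fraction(1, 3)
b = Fraction(1, 3) + Fraction(1, 2**30)
for _ in range(29):
    a, b = doubling(a), doubling(b)

print(float(abs(a - b)))  # the ~1e-9 perturbation has grown to 0.5
```

    The same doubling arithmetic is why halving your initial-condition error only buys you one extra step of predictability - a big part of why forecast range improves so slowly despite huge gains in observation and computing.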

    The ceiling is an interesting idea: is there one? Impossible to tell. Some mathematicians would say that there is, whereas other theoretical physicists would disagree and align themselves more towards a constant intellectual evolution; one which continually raises the ceiling, if you will.

    In terms of extending the window (growing the 80% and 20%) I think we'll see more atmospherically encompassing forecasts, where long-drain phenomena (such as SSW) extend the range of sight. More powerful supercomputers, satellite ranging and observation density will drive this forward, as it always has. This crosses over into the realms of climatology, so a marriage of sorts is close on the horizon I believe.

    My personal view is that we'll come to see more accuracy/detail within the 5-7 day range; the sort of accuracy which we have come to expect from, say, a 48hr forecast. This in itself would be quite an achievement: a narrowing of uncertainty in, for instance, mesoscale modelling of the depth and path of features (precipitation intensity, pressure track, the general behaviour of cyclogenesis).

    Hope that helps?

    SB

  6. The irony however is that, having said all of that, they're still unable to forecast with any great degree of accuracy beyond 4 days at best. In the 1950s the accuracy was at 2 days.

    The question is, if you take away the computer model, can a forecaster forecast? If they are presented with a chart at 12z today and asked to provide a forecast for 1, 2, 3 days ahead without the use of a computer, can they do it? Half the people in here can interpret the models to provide a forecast, and how many of those are expert mathematicians, physicists etc.?

    All the lay person wants to know is will it be wet, warm, dry or cold, not be baffled with b******t. It's about the end product - an accurate weather forecast. From what I can see, all these advancements in technology, the sciences etc., and for what... to extend the forecasting accuracy by 2 days!

    Just seen your reply.

    If we look at some verification data from ECMWF - the world-renowned pacesetters in mid-range forecasting - you'll see that there is an almost linear growth in forecast verification over time:

    [ECMWF verification chart: lead time at which the HR forecast reaches the 80% threshold, NHem extratropics]

    What that chart illustrates is that, back in 1998, the 80% threshold was around 5 days, or 120hrs (so, incidentally, I've no idea where you get '4 days at best' from). However, by mid-2012, that 80% threshold had been pushed out to 6.5 days, which is 156hrs.
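
    The 'days until the score drops to 80%' measure behind that chart can be sketched by interpolating a score-versus-lead-time curve. The numbers below are invented for illustration - not ECMWF data:

```python
def forecast_reach(scores, threshold=0.80, step_hours=24):
    """Find the lead time (in hours) at which a daily verification score
    first drops below the threshold, interpolating linearly between days.
    scores[i] is the score at lead time i * step_hours."""
    for i in range(1, len(scores)):
        if scores[i] < threshold <= scores[i - 1]:
            # Linear interpolation between the two bracketing lead times
            frac = (scores[i - 1] - threshold) / (scores[i - 1] - scores[i])
            return (i - 1 + frac) * step_hours
    return None  # score never drops below the threshold in this window

# Made-up anomaly-correlation curve by forecast day (day 0..8)
acc = [0.99, 0.97, 0.95, 0.92, 0.88, 0.84, 0.78, 0.70, 0.61]
print(forecast_reach(acc))  # crosses 0.80 between day 5 and day 6
```

    Push the whole curve up slightly year on year and the crossing point creeps outward - which is exactly the slow rightward march those verification charts show.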

    What's also illustrated in that chart, is the tailing-off of growth since around the end of 2009 - with a plateau clearly evident. The reason for this is very simple: ECMWF has exhausted the capability which current supercomputing resources afford them.

    There is a clear need to upgrade the hardware, rather than refining current algorithms. However, it is important to finesse algorithms prior to an upgrade, and I note with interest that ECMWF will be introducing further vertical resolution this year; therein - or at least one would imagine - delivering greater depth of data which the refined algorithms can compute.

    Even the lowest level of confidence (25%) grows in tandem - now out to around 9 days (216hrs):

    [ECMWF verification chart: EPS T850 CRPSS forecast reach, NHem extratropics]

    Again, that chart illustrates how the bar continually moves forward; 5.5 days (132hrs) in 1998 has now risen to around 8.5 days (204hrs). Again, though, we see that tail-off towards the end of 2009. It could plausibly be argued that, as you test more exploratory algorithms, you will naturally increase the error rate, as it's a given that not all algorithm refinement will actually help (indeed, it may hinder - which is what I suspect this plateau effect represents).

    For me, I always look for the stories within stories. For example, my interest from these kinds of verification charts is drawn more to why model performance appears to drop off markedly in the summer season, relative to its winter performance. A very clear pattern there. Overall, the trajectory is positive, but the hidden pattern still needs addressing (in my opinion, anyway).

    As you ask (you might've guessed anyway): I am indeed an expert mathematician - yet that doesn't mean I'm able to, for instance, heavily modify a computer-generated forecast in the way that the professional meteorologists at Exeter frequently do. So the inference that there is sole reliance by the Met Office on computers to forecast the weather is flawed and without foundation.

    We do have some incredibly knowledgeable members on here (GP, Chio to name but two) who obviously maintain a deeper, more scientific interest in meteorology - indeed, some might say, pushing the boundaries beyond merely looking at the prima facie output. Others don't share that deep interest or understanding; they just want to know if it's gone to snow or not. Which is fair enough, but I think there is enough space for everyone to contribute and feel valued, whatever their persuasion.

    I've tried to make this post less 'wordy', without it sounding condescending. The truth is: mathematics is playing a vital part in our understanding of the physical world, and its application is only going to increase over the coming years. Technology is ever advancing; for instance, I read yesterday how scientists have now developed a means to encode data in DNA. The mind-blowing significance is that they could store all the media the world has ever created - in HD - in barely a cupful of it.

    SB

  7. Interesting point regarding the MetO - may or may not have any relevance, I suppose... Anyway, I now work with a guy whose wife works at the MetO in the media section - publicity, basically. When she got the job there was a welcoming party, and a lot of the bigwigs down there came out for a meal as part of the welcome. He is a mathematician and was telling me this week how amazed he was at the people he met because, as far as he could work out, they were almost all mathematicians and statisticians and computer experts. He said there didn't seem to be a "forecaster" amongst them. When he asked them about upcoming weather they didn't have a clue really - all they did was interpret data.

    Might this suggest that the MetO respond much more directly to computer model output than some of our own more teleconnective and "gut" forecasters here? In one regard it gives them a massive head start as I am sure that the modelling they work with down there is state of the art and possibly second to none globally... but it may also leave them vulnerable to processes that require a bit more human interpretation and input.

    I think GP and RJS are still quite bullish about cold in February...

    With respect, you can only "interpret data" if you know the principles against which to interpret it.

    It's always been true that a robust mathematical grounding will stand you in good stead in meteorology - indeed, in just about all sciences. A natural dovetail to that is a deep understanding of physics, to a fairly respectable level. To a lay person, I'm sure such people would appear to be merely mathematicians, whereas - in actual fact - they're professionals who use their understanding of applied physics and mathematics to abstract an understanding of meteorology.

    With regard to computer experts (probably core programmers), you're naturally going to encounter such people when you're developing thousands of lines of raw code which must be overlaid onto a multimillion-pound supercomputer. For what it's worth, a programmer doesn't need to be a meteorologist; he/she merely needs to be able to write code which instructs an algorithm to calculate in line with meteorological principles, the majority of which - at their core - rely on mathematics.

    Media people are very airy-fairy; they like to consider themselves to be life's great creatives; they don't understand physics, in the same way that they don't understand finance. In truth, they don't understand anything vaguely intellectual or remotely tangible. I'd hate for that to sound like a slight on your friend's wife, but as she sees fit to cast doubt on the credibility of professional forecasters who work for the Met Office, I think it's fair game. She should stick to difficult things, like designing pretty leaflets or having several meetings over font size.

  8. This morning's ECM ensembles:

    [ECM ensemble plume and 15-day charts]

    In the extended ENS it's fairly pronounced: we can see signs of gradual cooling down, following the brief mild incursion; clustering groups more away from 10C and towards 5C. Somewhat déjà vu here, as I recall similar drop-off tight clustering (mean convergence) appearing in the extended ENS in late Dec and into early Jan. Thereafter, the signal became more and more amplified and we all saw a sharp drop in the ENS, to sub-zero indices.

    For me, that's enough to give me confidence that NWP is toying with the idea of moving back towards another outbreak of cold. It would be an entirely different picture were the spaghetti scatter favourably mild or, at best, broadly neutral - but this isn't the case. Where do we go from here? Personally, I'll be looking to see whether this MR-LR pattern remains consistent, and moreover whether it begins to amplify à la late Dec.

    If I recall correctly, the guidance was for a transient return to mild, before a return to colder conditions set in thereafter. If we look objectively at the ENS - as a 15-day story - this guidance is quite well illustrated. It's always good to take a step back and look at the broader picture, because often the biggest changes in pattern are not immediately obvious when analysing at the micro level; they become lost in the obvious background noise which is evident in a stochastic algorithm.

    What will be of increasing interest is whether more perturbations within the EC32 trend favourably towards colder evolutions; I think Matt informed us yesterday that 26 out of 52 were of this inclination, so precisely a 50:50 split. If we take guidance from the 15-day ENS of a move towards a colder evolution, then you'd logically expect to see growth in those EC32 perturbations, possibly to 32/52 (62%) when the next cycle is run. Obviously, the probability would remain just short of two-thirds, but it's the growth of a trend which is important here; again, micro v macro.
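
    Put as bare arithmetic (only the 26/52 count was reported; the later counts are hypothetical, which is rather the point of watching for a trend):

```python
def cold_fraction(cold_members, total_members=52):
    """Share of EC32-style ensemble members favouring the colder evolution."""
    return cold_members / total_members

# First count is the reported 26/52; the later counts are invented to show
# what a strengthening cold signal would look like cycle-on-cycle
cycles = [26, 29, 32]
print([round(cold_fraction(n), 2) for n in cycles])  # 0.5 -> 0.56 -> 0.62
```

    No single cycle's fraction proves anything; it's the direction of travel across successive cycles that carries the signal.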

    For me, the story is not what's immediately in the road in front of our car, but more over the hill and into the horizon. We need to widen our field of vision if we want to spot the next cold spell of weather coming. An interesting 5 days or so ahead...

  9. 'Long way back to cold from here' is a phrase I often see trotted out, yet - time and time again - it feels like those who use it are setting themselves up. This season we've seen record levels of entropy - confirmed by the professionals - and yet, even against that background uncertainty, we see such certain, absolute and conclusive statements from amateurs?

    My personal view is that I'm not convinced NWP is entirely or, at the very least, convincingly representative of background signals; signals which have the potential to significantly influence the output. My belief is that the current algorithms do not (yet) have sufficient capability to determine a variable which would be so at odds with what current NWP algorithms would logically and scientifically propose.

    Therein, it's difficult for me to place any great deal of confidence in the current output; therefore I find it futile to make any claims - one way or t'other - regarding how patterns will evolve many days hence. That may seem to some like a boring position to take, but I just don't see how anyone could reasonably make a confident call, considering what we know to be a highly volatile situation.

    By now I should hope many know my position; I'm all about the science, and my view of SSWs isn't determined by whether we get cold or not - it's about testing the fundamentals of the hypothesis, so that we can better understand the atmospheric influence. To that end, in all honesty, I'm not the least bit bothered whether the net result is blazing BBQs or igloo building - irrelevant to me; what's important is developing algorithms which can more tightly model cause and effect through the atmosphere.

  10. Have long-range forecasts become the emperor's new clothes?

    All the supercomputers, the uni degrees etc., breaking down every single model run with fancy reasons for this and that, when the truth of the matter is that we can't predict the weather in this country beyond 120 hours. Anything over that is guesswork - granted, some of it educated - but guesswork nevertheless.

    It doesn't matter if it's GP, the MetO or a newbie like me. At that range we'll be right some of the time and miles out most of the time.

    The irony being that, 20yrs ago, there would've been a similar person pontificating that 'the truth of the matter is that we can't predict the weather in this country beyond 24 hours. Anything over that is guesswork - granted, some of it educated - but guesswork nevertheless.'

    You obviously take it for granted that there is an ability to forecast the weather out to 120hrs. It's those exact supercomputers and university degrees - which you refer to so flippantly - that have given us that capability. Take a phenomenon like Sudden Stratospheric Warming: potentially that's a new frontier in atmospheric understanding; one which could further extend the range of our confidence in a forecast. If you're really into meteorology, then that's exciting stuff.

    Ensemble analysis, higher-resolution layering, feedback cycling, and ever more powerful supercomputing resources ARE the future of NWP - because they have to be. Every year there are hundreds, if not thousands, of experiments conducted by universities or research institutions into various aspects of the weather and/or climate. It's the nuggets from those experiments - the little gems of knowledge, or of a relationship - that can help to refine what are inherently volatile algorithms.

    It's really not about being 'fancy' either. Meteorology is a very, very complex field of science, with an almost unimaginable level of complexity. But were we - as scientists - to approach that complexity with such a defeatist attitude, then we wouldn't have that confidence in 120hr computer-generated forecasts. There's an education to be had for anyone who is willing to take the time and effort to understand the weather, rather than it being just some random pretty pictures.

  11. Just an observation, but some of you seem to think that, just because NWP doesn't show HLB right this very minute, then that's it: game over, not going to happen. Okay, but when has it EVER been as black and white as some of you seem to think it is? You're placing unquestionable faith in algorithms, which is a bit foolish, as they are fraught with error. They are also bereft of objectivity; they will follow pre-determined 'rules', even against a background which is entirely at odds. It's the skill of a human which can weigh up plausibility and consider background signals; the strength of which you may know will amplify, yet which is very difficult to write an algorithm to interpret.

    I've taken the time to do my own research around what Stewart proposes (partially why I haven't posted for nigh-on a week) and I'm finding it increasingly difficult to disagree with him. That's not to say his thoughts are absolutely and categorically right, but - as he rightly says - there is strong foundation for what he forecasts.

    Oh and, forecasts, by the way - not after-timing, as is the habit of some 'forecasters' on this thread.

    I have absolute respect for Chio and GP, because their results stand as evidence that they are two of (there are a couple more) the greatest minds I've ever had the pleasure to read on here. Each to their own, but I'm more than happy to side with GP and Chio on their thoughts over the next 14-21 days or so.

    The 18z will be imminently spat out, and no doubt with it, many more dummies too.

  12. Before Ian (Fergieweather) comes on, I've been lucky enough to receive a reply from him. He states the following:

    no suggestion of any significant change. V good continuity in broad sense. Story consistent. GM & UK4 keep same sorts of totals.

    Based on what he's saying, I think any westward correction vs the 12z is likely to be discarded by the Met Office, although not completely overlooked. This could also mean that Monday's event could be of some significance as hinted at by several models in the medium time-frame.

    To be honest, I think once 'events' are within range of the super-high-resolution UKV (1.5km), I seriously doubt whether Exeter are the slightest bit interested in whatever any other model - especially the relatively low-res GFS - comes up with.

    It's the same with the NAE; it's considered 'high-res' by amateurs but, actually, compared to the UKV it's like a dot-matrix printer versus a laser copier. I know that a recent focus of research by the Met Office was to improve the snow field of the UKV, and this is now well incorporated into its output. I've seen presentational material which clearly shows the chalk-and-cheese difference between the NAE and UKV, and also how - in terms of verification - the UKV completely blows it out of the water.

    Personally I've no interest in this 'event' - doesn't reach London, so will matter little to me what happens.

  13. I strongly suspect the 12z ECM will be somewhat lacking support from its ensembles. Perhaps not an outlier - but definitely on the marginal side of plausible; I think it'll be isolated, with the majority diverging towards an entirely different solution.

    It's not about 'why, because it doesn't show what you want it to?' - more a case of: i) it being far too progressive (underestimating, perhaps) the synoptic pattern to the East, and ii) the fact that such progression is very much at odds with other outputs. Personally, as soon as I see these inconsistencies, I don't bother with the rest of the run.

  14. ARPEGE evolution and 850's support UKMO-GM almost exactly up to 00z Fri. Colder air somewhat further west compared to GFS.

    Wow. Thanks Ian for sharing that with us good.gif

    I did mention earlier that I thought we were approaching a tipping point in this saga, and I think Ian's confirmation of solid ARPEGE support for the UKMO-GM is representative of this.

    Excellent developments, and sets us up nicely for the week ahead...

  15. I personally don't believe in all this talk of undercutting and blizzards. I think the progression of the Atlantic is going to be too strong for the block, and things will end up as a messy mush for most, ending very quickly into the weekend. I expect the ECM to confirm this later. I do hope I'm wrong, but I feel some of the comments on here are based on unfounded optimism, and a reality check is needed concerning these dubious outputs.

    I have to say, I'm a bit miffed at the bullish nature.

    If anything, it could be said that - prior to the output of the last 24-36hrs - talk of a strong block was "unfounded" or "optimistic". However, the most recent NWP output has moved towards a general broad juxtaposition which agrees with a stronger NE blocking signal.

    We've seen strong consistency from the UKMO model, and this was further added to with the 12z - albeit slightly watered down, yet broadly in line with previous outputs. It's very much sticking to its guns - even though that UKMO was considered so extreme that Exeter heavily modified it.

    For what it's worth, I really don't think anything will be decided for this weekend in the current outputs; that would belie confidence in the current NWP which, as we know, is currently plagued with record variance. Personally, I imagine the 12z ECM will - if anything - move closer to the UKMO, albeit with very subtle changes.

  16. This point is much discussed, and may apply at longer range, but I don't buy it for output up to 144hrs.

    Regardless of the balloon-data issue, and in this fast-moving situation, this has 6hrs-newer data; with key detail showing as early as T96hrs, you can make comparisons with the rest of today's output.

    Nick, I quite agree.

    The idea that runs should only be compared against the equivalent previous output (0z to 0z, 12z to 12z), due to infinitesimally small (tenths of a percent) idiosyncrasies in data, is laughable. Even if there were data blind spots, you either run algorithms to blend and normalise the data, or you backfill with prior data. The overriding error correction - as you quite rightly state - is the updated observational data, which is the precursor to every initialisation.

    Therein, to discount any run even though it contains perhaps 98% of all operational data is utter nonsense. I could sympathise with such a view if data blind spots brought the coverage down to below 85-90%, but that simply isn't the case. I often find intra-run variance to be, in the main, anecdotal; for example, verification against the GFS suite (0z, 6z, 12z, 18z) doesn't actually tend to favour any one particular initialisation - they all have, more or less, periods of better performance over each other - which, to be fair, is exactly what you'd expect from a stochastic model.
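
    The 'backfill with prior data' idea is simple enough to sketch - a crude stand-in, of course, for what real data-assimilation schemes do, and the station values here are made up:

```python
def backfill(current_obs, prior_run):
    """Fill gaps (None) in the latest observations with values carried
    forward from the previous cycle, so the initialisation still has
    near-complete coverage."""
    return [prior if obs is None else obs
            for obs, prior in zip(current_obs, prior_run)]

# Hypothetical station MSLP reports (hPa); None marks a missing report
prior = [1012.0, 1008.5, 1003.2, 998.7]
current = [1011.2, None, 1002.8, None]
print(backfill(current, prior))  # [1011.2, 1008.5, 1002.8, 998.7]
```

    Two missing reports out of four is far worse than the real-world case being argued about; at 98% coverage, the handful of carried-forward values makes next to no difference to the initialisation.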

    I think the next big step for NWP, will be incorporating feedback cycles into the algorithms as - at present - my understanding is that all initialisations are run in isolation from one another. I think it's very difficult for conventional NWP to accurately model a long drain phenomenon (like SSW) if, for instance, it is unaware of a temperature growth pattern in the stratosphere. I think we're many years away from feedback-cycling though, as that really would ramp the error rate up!

    Back to current assessments...

    I see no reason why members shouldn't be optimistic about the current outputs. If you take a wider, more encompassing view (5-7 days or so), what we have seen is an underlying trend towards cold (as opposed to milder/zonal) in the first instance, but also a growing and generally consistent NWP consensus towards amplifying blocking strength to our NE. These are factors very much in our favour, and it is this wider window in which I personally tend to view NWP (not so much bothered about the intra-run variance).

    Background signals are teleconnectively conducive not only to maintaining this relative consistency, but to building further strength into what is an emerging cross-model pattern. I think this context is vitally important when, for instance, you might see the GFS resort to its default zonal modelling. There obviously remains a large degree of uncertainty, but I think this week is when we will come to identify a growing momentum for the blocking signal to be favourable across NWP outputs going forward.

    I think we may be reaching a tipping point here. Exeter had little confidence in the 'extreme' 12z UKMO output as it had little support, and therefore heavily modified it. Okay, that's fair enough. However, the 0z is almost a replication, in broad terms anyway. Moreover, tenuous signs of other NWP beginning to side with the UKMO would - to me anyway - be a sign that Exeter will be taking a different view of the 'extreme' UKMO output. If we see another 'extreme' 12z later this afternoon, then there's respectable consistency behind that output, and you'd really have to lean towards it.

    Experience would suggest a moderation in the output, but let's see later...

    SB

  17. Not disagreeing there, but lots of non-public-access things eventually find the light of day.

    Oh, absolutely. But there's a world of difference between a rogue member of staff recklessly divulging commercially sensitive information, and that information being available on a commercial basis.

    Then again, Twitter is awash with people who get a little bit carried away with themselves; it's an attention-seeking contest, therefore there are those on there who might embellish the truth somewhat in order to grow their flock.

  18. I'd like to know how they have access to it, to be honest, as I know of no other company in the UK that has this information from the UKMO... which would probably cost a fortune, as per ECMWF information.

    Matt.

    To be honest with you Matt, I am very dubious that they have access to MOGREPS.

    My understanding - and it's pretty well founded - is that this is a non-public access model.

  19. Hi Folks

    A quick question from another lurker and newbie to the model thread! Would it be fair to say that if we are, as Ian F has alluded to, in an unprecedented situation, and we were to see, say, a repeat of a winter like 1947, that this would be the first time our supercomputers have modelled such a situation, and therefore we should expect to see discrepancies? Or should we expect better, given the monies invested?

    Apologies mods if this is deemed off topic!

    Cracking thread, but all this info takes some digesting and getting used to, not sure if I'll ever get a complete grasp of it!

    I would say that boils down to the sheer matter of probability.

    The governing algorithms will have a set of values that influences synoptic probability, in much the same way that you'll need to instruct an algorithm as to how the Earth rotates on its axis, and therefore what the prevailing (W~>E) pattern is. There'll then be thresholds which adjust prevailing probability, based on phenomena such as SSWs.

    However, I think understanding of SSWs themselves perhaps isn't established enough yet. In order to model accurately using computational fluid dynamics, you'll really need to be quite sure of the data relationships involved, as - in an algorithm - a high degree of doubt will spoil things exponentially. I would imagine that the boffins at Reading (ECMWF) are busy running developmental code which would layer phenomena such as SSWs into the operational algorithm. But there really does need to be a lot of research done into cause and effect before you unleash a variable which - potentially - is very influential.

    I wouldn't say it's so much a matter of money, but more a matter of scientific knowledge and geophysical understanding. Some of these relational processes and phenomena are relatively new, and it takes time to carry out a proper audit of their scientific plausibility. What is important, though, is the supercomputing ratio between Operational and Development. My understanding is that ECMWF are really exhausting their Developmental capacity as Operational introduces more and more variables - the net result being less time to test and develop code. Ideally, you'd want a 70:30 ~ 80:20 split, but apparently it's now at a fairly critical 90:10.

    All of that points to a fairly major upgrade in supercomputing power in the very near future, and - considering the high degree of success of ECMWF's NWP models - they make a very strong case for it. The JMA invested a staggering amount of money recently in a new supercomputer, and that's partly why the UKMO have a high degree of respect for them, aligned to the fact that it runs the UK-UM base platform. The next UKMO upgrade is circa 2015, and I wouldn't be surprised if there were even closer tie-in with ECMWF when this comes about.

    Hope that's helpful?

    SB

  20. To be honest, going by that graph, Matt got the rate of warming totally wrong; he predicted it would take until the later parts of January to warm to that degree, whereas - in reality - it only took a matter of days. One could call that hypercritical, but it's a solid observation if one is proposing their forecast is/was right, and I think in terms of sheer punch, the rate of warming is just as important as the degree itself.
