Posts posted by Evo

  1. Indeed Scilly may seem a good choice, but it surely ranks poorly in the expectation management business. Those Scillyans (what the hell are people from Scilly called?) probably never expect to get snow, and are hence never disappointed.

    Unless they're idiots, of course, though I would never be so bold as to suggest that.

    It could be argued that The Enforcer never expected to get any snow in Abingdon, but we all know how acutely disappointed he was.

    Hmmm, I think this very important subject could be reason for a SATSIGS EGM. I nominate the RAC club in London, at Stratos Ferric's expense. All those in favour say "aye".

  2. Bournemouth (though to be fair much of the South coast could apply) should be on the list of applicants for this rota. Totally snowless this winter. The last snow that resulted in anything settling was a brief dusting last year, previously the thundersnow event, previously 1996, previously 1980-something (1985 or 1986 I think).

    I will personally continue to use the appointed SI* unit of Abingdons. I'm doing that based on the theory that even if Lord Kelvin had later been found to be an axe-wielding maniac, his scale would still be used today, maybe even more so.

    * On the rambling topic of SI measurements, I'm quite sure that Abingdon would never have a sniff as an SI unit. It would have to be a decidéception or something equally unfathomable.

  3. Then we have the North Atlantic showing a tripole suggestive of -ve NAO conditions (cold anomalies to the north - warm central North Atlantic - cold to the south):

    http://weather.unisys.com/surface/sst_anom.html

    Then the GFS and ECM are starting to pick out the development of a classic Feb weak El Nino / -ve NAO / -AO pattern, not least through tonight's t168-240 resulting in a possible re-run of the last 48 hours:

    http://www.meteo.psu.edu/~gadomski/ECMWF_12z/hgtcomp.html

    http://www.ecmwf.int/products/forecasts/d/...0912!!/

    http://www.ecmwf.int/products/forecasts/d/...0912!!/

    http://www.wetterzentrale.de/pics/Rtavn2281.html

    Give it another day or so, and I will start to get very hawkish about this one...

    GP

    Great post as ever GP!

    I've been watching that tripole develop in the Atlantic too and I've also been keeping an eye on the PSU Height Comparisons. It seems to me that the signal now is for a -ve NAO-like pattern but twisted towards the Eastern US coast, with blocking over Greenland and a dominant West Atlantic trough - is that right?

    Looking at the mean height comparisons, the GFS seems to be hinting at an omega block of sorts. Also interesting are the low mean heights progged over much of Russia, in fact GFS has no mean ridge from 0 to 180E :)

  4. Interesting to see a massive difference between Metcheck's forecast for BH23 (Bournemouth Airport) and Net Weather's forecast for today and tomorrow. Both claim to be derived from the 0z GFS run - is this down to the resolution difference, or do Net Weather factor in a UHI calculation?

    Going by the observations, it would seem that Metcheck's forecast is closer to the mark so far today. Has anybody else noticed big differences like this?

    Metcheck:

    post-2410-1171015508_thumb.jpg

    Net Weather Today:

    post-2410-1171015617_thumb.jpg

    Net Weather Tomorrow:

    post-2410-1171015642_thumb.jpg

    Meteogram:

    post-2410-1171015690_thumb.png

  5. I would tend towards the current outputs, even the seemingly woeful 6z, given that the latest runs seem to be realising that there never was any Scandi development. The signals were obviously there, but for whatever reason it just never happened. Without the block to the East, the Atlantic will waltz straight back in uncontested, and that is what we are being shown at the moment.

    Also note that the 6z doesn't develop those shortwaves that we were watching trundle along the Channel yesterday.

    The ECM remains a puzzle: is it ahead of the game or behind it? That is the question at the moment. It's just possible that the ECM is actually ahead, even though it appears to be lagging behind.

  6. AFTER DAY 3... MODELS SHOW MEANINGFUL DIFFS WITH INDIVIDUAL SHRTWVS WITHIN THE NERN CONUS AND ERN PAC MEAN TROFS. HIGH UNCERTAINTY WITH THESE FEATURES... LIKELY NOT TO BE RESOLVED SATISFACTORILY UNTIL THE SHORT RANGE TIME FRAME...

    Translated into English this is:

    Models show meaningful differences with individual shortwaves within the northern Continental United States and Eastern Pacific mean troughs. High uncertainty with these features, not likely to be resolved satisfactorily until the short range time frame.

    Not sure this bit really applies to us yet, though their thoughts about the Atlantic definitely do, confirming the 12z to be too quick to develop the shortwaves.

  7. Thanks for the kind words Richard & John, I hope to come back to the subject when this has all died down a bit.

    Thanks again Evo for all that work, we could make a forecaster of you yet?

    John

    Steady on now :cc_confused:

    Back to the here and now, I must confess that what little I've seen today is certainly very interesting, though I feel on balance that the 12z GFS was probably an outlier.

    Picking up on what Nick is saying above, I've put the other Meteociel 850 charts for T96 into one image for comparison.

    post-2410-1170619130_thumb.png

    From right to left, top to bottom, ECMWF, GEM, GFS, JMA. Meteociel doesn't have the Met O. 850s sadly.

    There is broad agreement on balance. As is typical with the UK, the small differences between the models seem to be concentrated on our shores. It looks to me as though the other models really want to push the milder air further into the British Isles than the GFS does.

    Also, I'm mindful that the 12z GFS has a reputation for being the most progressive of the GFS runs, and I'm wondering how this will relate to its positioning of the shortwaves, or indeed their existence full stop.

    So, if we look at the 12z z500s:

    post-2410-1170620283_thumb.png

    From right to left, top to bottom, Met O, GFS, ECMWF, GEM, NOGAPS, JMA.

    Met O and the GFS have very similar uppers, with the GFS having more trough development. ECM and GEM are very similar, with the GEM being the more progressive of the two, and NOGAPS and JMA also look very similar. All of the models seem consistent in terms of the location of the polar vortices; the difference really comes down to the jet, I would guess, with the location and intensity of the jet streak probably making the difference.

    I suppose more questions than answers, but I'm definitely sitting on the conservative side of the fence at this stage in terms of predicting snow distribution for later in the week.

  8. Well, whilst the ECM continues to churn out these operational runs we can't ignore it, as much as at present I'd like to nuke it! :yahoo:

    I think what happens next week will test out DT's medium range forecasting rule number 3:

    When the model consensus or majority cluster begins to break down… WATCH THE SHIFT TOWARDS the Model A or M.O. solution. Incremental changes towards the M.O. solution will often lead a forecaster to make incremental changes in the forecast, usually because of concerns over model uncertainty and consistency issues. This is often a mistake. Once there is a discernible shift towards the Model A / M.O. solution… it is often wise to make large changes in the forecast that are reflective of the Model A OUTLIER solution.
    http://www.wxrisk.com/Meteorology/MRforecasting.htm

    This rule has been wheeled out a couple of times, but usually when the consensus is for mild.

    IF, and this is a big IF, the model consensus (which at the moment is fairly close) begins to break down, expect something more akin to the ECM solution.

    To be fair, I've not even looked at the ECM so I'm relying on how folks are describing it (i.e. rubbish).

    I agree with the comments regarding not getting too carried away down South. It is going to be marginal for us southerners, especially those like me at 0m ASL!

    I think the LP was progged too low due to the over-progressive nature of the 12z. My guess is that the 18z will have it back further north again. Nice to be proved wrong though ;)

    Experience has taught us not to get too excited down here, hasn't it!

    Technically speaking, the less progressive 18z should have the shortwaves further South or cancel them altogether, though I'm still playing catch up with it all today.

  9. Again I think that the snow potential thing is risk/reward tbh. The big problem with wanting the lows to track further north is that it is inviting the milder air quickly back in. For me a snow to rain event and a return to milder weather is a waste of time.

    Afternoon everyone, trying to stay up to date in between doing boring work, but what an interesting 12z indeed.

    With all the talk of these lows going too far South, I would point out that for us folks on or near the South coast, the first low is still going too far North for snow to low levels on the coast from what I can see. I thought I would point that out seeing as some folks might think us lot are stealing all the snow :cold: .

    Also, given that the 12z is seemingly the most progressive run of the GFS suite, I'm wondering whether we should consider this when looking at the positioning of the little lows running down the Channel.

    With the info I've seen so far, I'd not like to make too many predictions at this stage :D

    Oh well back to work...

  10. Should we use 850hPa temperatures to determine surface temperatures?

    I've noticed several people over the last couple of days refer to the 850hPa temperature and use it as the basis of surface temperature and precipitation type forecasts.

    850hPa temperatures are indeed useful for estimating front locations, and therefore air masses.

    The link below claims that 850hPa temperatures were historically used by forecasters to estimate surface temperatures because of the poor surface modelling at the time. 10 to 15 degrees would typically be added to obtain a mean surface temperature, which would then be adjusted for the particular location using a fudge factor based on experience and the topography of the region. If the surface layer is saturated, a value of 7 to 10 degrees is used instead. However, it is implied that this is generally no longer the case.

    Models such as the GFS do produce surface temperature predictions, which most seem to treat with scorn. I'm not sure if this is based on experience or suspicion. I'd like to hear Paul's input on this - do you factor any adjustments for your automated site forecasts?

    I remember Nick F mentioning that using the hydrostatic equation, it is possible to forecast the mean temperature for a layer of the atmosphere. Perhaps it would make more sense when looking at snow potential to look at the mean temperature of the lower 1500m or so of the atmosphere by using the 1000 and 850 heights?

    It's possible to use the hydrostatic equation:

    Z2 - Z1 = (R * T * ln(p1/p2)) / g

    Where:

    p1 and p2 are the lower and upper levels respectively

    Z1 and Z2 are the heights of the isobaric surfaces p1,p2 (in metres)

    R is the specific gas constant for dry air

    g is the acceleration due to gravity

    Which can be rearranged as:

    T = ((Z2 - Z1) * g) / (R * ln(p1/p2))

    This works fine for 500-1000 thicknesses, for example:

    p1 = 1000hPa

    p2 = 500hPa

    Z2 - Z1 = 5280m (528 decametres or 528dam)

    R = 287

    g = 9.81

    T = (5280 * 9.81) / (287 * ln(1000/500))

    T = 51796.8 / 198.9332

    T = 260K or -13°C

    So the mean temperature of the layer between 1000hPa and 500hPa is -13°C.
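
    Just to make the arithmetic reproducible, here's that rearranged equation as a quick Python sketch (the function name and constants are my own, picked to match the worked example above):

    import math

    R_DRY = 287.0   # specific gas constant for dry air, J/(kg K)
    G = 9.81        # acceleration due to gravity, m/s^2

    def mean_layer_temperature(thickness_m, p_lower_hpa, p_upper_hpa):
        """Mean temperature (K) of the layer between two pressure levels, given its thickness in metres."""
        return (thickness_m * G) / (R_DRY * math.log(p_lower_hpa / p_upper_hpa))

    # The 528dam example above: a 1000-500hPa thickness of 5280m.
    t_mean = mean_layer_temperature(5280.0, 1000.0, 500.0)
    print(round(t_mean, 1), round(t_mean - 273.15, 1))   # roughly 260K / -13°C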

    However, when using this on the 1000hPa to 850hPa layer, we need to know the 1000hPa to 850hPa thickness, which I can't find published on any normal chart. We are given the 850hPa geopotential height, though, so we could find the thickness if we had the forecast mslp and the forecast surface temperature:

    Thickness = Z1 - Z1000, where Z1000 = (R * Tsfc / g) * ln(pmsl / 1000)

    Z1 = 850hPa height

    pmsl = Sea level pressure

    g = 9.81

    Tsfc = Surface temperature

    But surely this defeats the object (needing to know the predicted surface temperature)? Can anyone add anything to this?

    Ideally, we need to know the 1000hPa height and then this would allow us to calculate the thicknesses.
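
    As a rough illustration of that workaround, here's a small Python sketch (my own function names and made-up example numbers; it assumes the surface temperature is representative of the shallow layer between sea level and the 1000hPa surface, which is exactly the circularity I'm complaining about):

    import math

    R_DRY = 287.0   # specific gas constant for dry air, J/(kg K)
    G = 9.81        # acceleration due to gravity, m/s^2

    def z1000_from_mslp(pmsl_hpa, t_sfc_k):
        """Approximate height (m) of the 1000hPa surface above sea level."""
        return (R_DRY * t_sfc_k / G) * math.log(pmsl_hpa / 1000.0)

    def thickness_1000_850(z850_m, pmsl_hpa, t_sfc_k):
        """Approximate 1000-850hPa thickness (m)."""
        return z850_m - z1000_from_mslp(pmsl_hpa, t_sfc_k)

    # Illustrative figures only: 850hPa height of 1380m, mslp of 1012hPa, surface temp of 2°C.
    print(round(thickness_1000_850(1380.0, 1012.0, 275.15)))   # about 1284m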

    Anyway, moving on, the reason for wanting to use this method is the research outlined by Martin Rowley in the link below.

    He states that statistical analysis has shown the following probabilities for snow to fall, given the 1000-850 thicknesses:

    Probability:              90%      70%      50%      30%      10%
    850-1000 hPa (gpm):      1279     1287     1293     1297     1302   (unadjusted)
    850-1000 hPa (gpm):      1281     1290     1293     1298     1303   (adjusted - see below)

    Boyden's work in the 60s determined that the thickness values should be adjusted to take into account the height above sea level of a particular area when using the thickness values to forecast snow chances (amongst other things).

    (Z-h)/30

    Z = 1000hPa geopotential height

    h = height of ground above sea level

    This factor is applied to the thickness value and then the corrected thickness is used in the table above.
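
    To show how the height correction and the table might be strung together, here's a hedged Python sketch. The thresholds are the adjusted row of the table above; the way the (Z-h)/30 factor is applied (added to the thickness) is just my reading of the description, so treat that part as an assumption:

    SNOW_TABLE = [          # (adjusted 1000-850hPa thickness in gpm, probability of snow)
        (1281.0, 0.90),
        (1290.0, 0.70),
        (1293.0, 0.50),
        (1298.0, 0.30),
        (1303.0, 0.10),
    ]

    def snow_probability(thickness_gpm, z1000_m=0.0, station_height_m=0.0):
        """Crude snow probability from the 1000-850hPa thickness (gpm)."""
        corrected = thickness_gpm + (z1000_m - station_height_m) / 30.0   # assumed sign/application
        if corrected <= SNOW_TABLE[0][0]:
            return SNOW_TABLE[0][1]
        if corrected >= SNOW_TABLE[-1][0]:
            return SNOW_TABLE[-1][1]
        # Linear interpolation between the two nearest table thresholds.
        for (t0, p0), (t1, p1) in zip(SNOW_TABLE, SNOW_TABLE[1:]):
            if t0 <= corrected <= t1:
                return p0 + (p1 - p0) * (corrected - t0) / (t1 - t0)

    print(round(snow_probability(1291.0), 2))   # somewhere between 70% and 50%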

    Additionally, Rowley outlines research by Callen and Prescott to determine surface maxima from 1000-850 thickness values as follows:

    T = -192.65 + (0.156 * Thickness)

    Note the thickness value above is the 1000-850 thickness.

    This value is then adjusted using the following table, given the expected cloud cover:

    Class 0: Low and medium cloud generally less than half cover. High cloud not overcast. Fog only around dawn, if at all.

    Class 1: Roughly 50% cloudiness. If fog occurs, it clears slowly during the morning.

    Class 2: Mainly cloudy. If fog occurs, it clears by midday, but slowly.

    Class 3: Overcast with rain/snow etc. Persistent Fog.

    Month   Class 0   Class 1   Class 2   Class 3
    JAN       -4        -4        -5        -5
    FEB       -3        -3        -4        -5
    MAR       -1        -2        -3        -4
    APR       +1         0        -1        -2
    MAY       +2        +1         0        -1
    JUN       +4        +3        +1         0
    JUL       +4        +3        +1         0
    AUG       +3        +2        +1         0
    SEP       +1         0        -1        -1
    OCT       -1        -1        -2        -3
    NOV       -2        -3        -4        -4
    DEC       -4        -4        -5        -5

    Obviously these figures were derived empirically rather than theoretically, so they are not perfect.

    So, although I've not really shown the best way to predict surface maxima, I hope I have shown that it is far more complicated than just looking at an 850hPa temperature and adding 10°C onto it.
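
    For completeness, here are the Callen and Prescott relationship and the monthly cloud-class corrections bundled into one Python sketch (names and structure are mine; a toy illustration of the tables rather than a forecasting tool):

    MAXIMA_CORRECTION = {   # month -> corrections (°C) for cloud classes 0-3
        "JAN": (-4, -4, -5, -5), "FEB": (-3, -3, -4, -5), "MAR": (-1, -2, -3, -4),
        "APR": (+1,  0, -1, -2), "MAY": (+2, +1,  0, -1), "JUN": (+4, +3, +1,  0),
        "JUL": (+4, +3, +1,  0), "AUG": (+3, +2, +1,  0), "SEP": (+1,  0, -1, -1),
        "OCT": (-1, -1, -2, -3), "NOV": (-2, -3, -4, -4), "DEC": (-4, -4, -5, -5),
    }

    def surface_maximum(thickness_1000_850_gpm, month, cloud_class):
        """Estimated surface maximum (°C) from the 1000-850hPa thickness."""
        t_max = -192.65 + 0.156 * thickness_1000_850_gpm
        return t_max + MAXIMA_CORRECTION[month][cloud_class]

    # Example: 1293 gpm thickness in February under mainly cloudy skies (class 2).
    print(round(surface_maximum(1293.0, "FEB", 2), 1))   # about 9.1 - 4 = 5.1°C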

    Sources:

    http://homepage.ntlworld.com/booty.weather...topics.htm#850T

    http://homepage.ntlworld.com/booty.weather/tthkfaq.htm

    http://www.metoffice.gov.uk/research/nwp/p...381/FRTR381.pdf (Met Office NWP Forecast Techniques Review Board)

  11. As Ian states, they've got most bases covered there. I'm certain this is as a result of the current fads of litigation-awareness and accountability in public office. It's no longer acceptable for someone in a public organisation such as the Met Office to have an opinion which could possibly turn out wrong. I don't blame them, I blame others further up the pecking order...

    I also wonder if their statistical analysis continues to throw up oddball outcomes as analogues (such as 1962/3, with which there is plenty to correlate with what's going on now). How do you handle that? There's no way you could say that in January it's going to be like 1963. Then you have confusing signals, with the NH El Nino reaction in the last couple of weeks being stronger than it perhaps should have been - will it continue?

    I think their call of "average" is all they can do in their position, probably remembering that "the trend is your friend" (long term trend that is).

  12. Having had a look at the data over the trial period I suggest that the 10pts / day (or part thereof) be applied for latecomers (and late changes), with a final cut-off of the 3rd of the month.

    Yep, I concur with this. Also, Paul should have final say on any rules IMO, as he will be the nearest thing we have to an impartial judge (plus he is ponying up the dough for the prizes).

    Edit: the other thing I meant to mention is that it's best for all if the rules are firmly fixed and applied. This makes it fairer on everyone and also, whilst this may now be a bit of fun, it'd be a shame if it descended into a general free-for-all at some point in the future over "fluffy" rules.

  13. Did you know there's a Bristol scale?

    Just so long as it doesn't involve Kerry Katona.

    That has me thinking: I wonder which met station in the UK has recorded the lowest mean wind speeds this year?
