Analysis of pwsFWI prediction

  • HansR 

Introduction

In my previous blog I described why and how the predictions on pwsFWI were implemented. I also promised a short analysis:

We’re now at version 1.8.3 and everything seems to be working OK. The testing is back on the meteorological level: trying to find out how much a two-day-ahead prediction differs from the actual calculation once the day has passed.

In this blog I will show the results and try to interpret them.
Please note that this is not a scientific or even a full analysis, as I lack the resources and the data for that.

The result

I received four sheets out of a possible seven (not counting my own) and reworked those into a single Excel sheet so all data would be presented in the same format. The sheet can be downloaded here. The first column gives the predictions two days ahead. On the first of November, the second of November is still a prediction (see e.g. the page on the weather site), so on the first of November, the value for the second of November is noted as a prediction (two days ahead). Then, when that day has passed, on the third of November we can register the calculated value, which you can read in the second column of the Excel sheet. Finally, the differences between the prediction and the calculated value are calculated and displayed as a percentage of the calculated value:

 

\frac{CalculatedValue - PredictedValue}{CalculatedValue} \times 100\%
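
Expressed in code, the comparison made in the sheet boils down to the following minimal sketch (the function name is mine, not part of the spreadsheet):

```python
def pct_difference(calculated, predicted):
    """Signed difference between prediction and realisation,
    expressed as a percentage of the calculated (realised) value."""
    return (calculated - predicted) / calculated * 100.0

# Example: predicted temperature 11 degrees, realised 10 degrees
print(pct_difference(10.0, 11.0))  # -10.0
```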

 

The following remarks can be made:

  1. Wind and rain are the parameters with the largest deviation from the prediction. Such differences can be expected to have a large impact on the realised pwsFWI values for wind, much less so for rain;
    1. Wind easily differs 50 – 100%;
    2. Rain easily differs 100% or more, with a range up to over 1000%.
  2. Relative humidity usually shows around 10 – 15% difference;
  3. Temperature is generally within 5% deviation and appears to be very good;
  4. The pwsFWI values remain within a 30% deviation.

 

Additional remarks

  1. The dataset is of course too small for real statistics – also because of the large geo-spatial spread of the stations – so what we have are some numbers, which can only be taken as an indication and possibly as a starting point for some real research.
  2. The representation of the differences in percent may well lead to wrong interpretations. If a prediction gives 4 mm of rain and you get only 2 mm, that is a 100% difference, but the actual effect would not be large, and on the pwsFWI the effect would be nil. You see a similar effect with temperature, where a prediction of 10 degrees and a realisation of 11 leads to a 10% deviation, while around 20 degrees the same absolute deviation would be only 5%. The influence of temperature on the pwsFWI, however, is big and non-linear. I am not happy with expressing the differences as percentages, as it actually does not say much. The pwsFWI itself needs a performance analysis on the variation of its parameters to evaluate the prediction. That is for the ToDo-list.
  3. The rain deviations of Wagenborgen and Komoka are very big. In general the rain prediction is inaccurate; however, since rain enters the pwsFWI equations not as exact numbers but in ranges, its unpredictability does not have that much influence.
  4. I also conclude that the wind speeds – especially at my own station – are much lower than the prediction. In this case it has to do with the setup of the anemometer, which is mounted too low and is too sheltered from the east and west directions. I am working on it, but this may take some time. Generally, wind measurements at personal weather stations often do not meet professional standards and are difficult to judge because the influence of the installation and the height above the ground is not known. Wind has a big influence on the pwsFWI – at low humidity – as you can see in the erroneous data from ‘t Zandt.
  5. The predicted values appear to have a tendency to be too high.
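
Remark 2 can be made concrete with a few hypothetical numbers, only meant to illustrate how the same absolute error scales into different percentages:

```python
def pct_difference(calculated, predicted):
    """Difference as a percentage of the realised value."""
    return abs(calculated - predicted) / calculated * 100.0

# The same absolute error of 1 degree looks twice as bad at 10 degrees
# as it does at 20 degrees:
print(pct_difference(10.0, 11.0))  # 10.0
print(pct_difference(20.0, 21.0))  # 5.0

# A rain prediction of 4 mm against a realised 2 mm is a 100% difference,
# even though both amounts may fall in the same pwsFWI rain range:
print(pct_difference(2.0, 4.0))    # 100.0
```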

An anomaly was observed at Weerstation ‘t Zandt: after several days I realised the wind was exceptionally high over there because the station registers wind in m/s, while the interface always assumed a unit of km/h. If the user has registered deviating units for the pws in Cumulus, an automatic conversion to SI units is applied when entering the pwsFWI calculation, because the equations require SI. However, since the API already delivers SI units, this meant converting an already correct figure. So, although not planned, I implemented a double conversion: from the API unit to the registered unit and then, when entering the calculation, from the registered unit back to the SI unit (== the API unit). All units handled by Cumulus can now be entered into the pwsFWI calculation and prediction. This measure came too late for ‘t Zandt, so that station is ignored with respect to the actual pwsFWI calculation.
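
The double conversion can be sketched as follows. This is a minimal illustration under my own naming; the actual Cumulus/pwsFWI code will differ, and the conversion factors are standard ones, not taken from the program:

```python
# Conversion factors from a registered wind unit to SI (m/s).
TO_SI = {
    "m/s": 1.0,
    "km/h": 1.0 / 3.6,
    "mph": 0.44704,
    "kts": 0.514444,
}

def api_to_registered(value_si, unit):
    """First step: the API delivers SI (m/s); express the value in the
    unit the user registered for the station."""
    return value_si / TO_SI[unit]

def registered_to_si(value, unit):
    """Second step: convert back to SI when entering the pwsFWI equations."""
    return value * TO_SI[unit]

# Round trip: an API value of 5 m/s survives unchanged for a km/h station.
shown = api_to_registered(5.0, "km/h")  # displayed as 18 km/h
used = registered_to_si(shown, "km/h")  # back to 5 m/s for the equations
```

The point of the round trip is that a value which arrives in SI from the API is never distorted by the unit the user happens to have registered.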

 

Conclusion

If there is a conclusion to draw I would formulate it as follows:

“Numerical weather predictions are highly volatile, changing from day to day. A two-day-ahead prediction of a single measurement (wind, temperature, rain or humidity) can easily differ 50% from the observed quantity. However, the resulting pwsFWI prediction hardly differs more than 30% from the actually realised value. Because of this last observation I would say that predicting the pwsFWI is useful, but full of riddles.”

Owners of personal weather stations are invited to calibrate their equipment and especially to pay attention to the positioning of the anemometer, as wind is a strong contributor to the pwsFWI because of its impact on the drying process.

  1. Within the prediction uncertainty of around 30% on the pwsFWI, I would say the prediction is useful.
  2. A performance analysis of the equations with respect to variations of the input would be a useful exercise.
  3. Please note my message on the forum of this morning, indicating a pwsFWI prediction of a purple warning level in New South Wales, Australia. The photo in the tweet below (let’s call this Phil’s Backyard) says it all!
