Forecast verification is a crucial process for ensuring the quality of forecasts. Accurate and adequate observations play a key role in making this process successful. Comparing our forecasts to observations enables us to fine-tune our services by selecting the right models and eliminating structural forecast errors. Additionally, we can provide our customers with insights into the quality of our forecasts through monthly verification reports. Through these efforts, we aim to bring more value in guiding you to the decision point.
At Infoplaza, we generate hundreds of forecasts every day. Our goal is to deliver the most accurate and precise forecasts to our customers, achieved by using the best combination of weather models, complemented by satellite and radar imagery, along with observations of several weather and sea state parameters. It is crucial for us to ensure that the services we provide align with the actual observed conditions, meeting the expectations of our customers.
The initial form of our verification involves examining real-time observations (when available) to fine-tune the forecast for the first few hours, if needed. This adjustment occurs just before issuing the forecast to the customer, resulting in an immediate enhancement of forecast quality. Maintaining a log of the changes provides valuable insights into location-specific challenges.
A second type of verification is conducted at a later stage. We produce monthly verification reports, comparing a month of daily forecasts with a month of observational data (either provided by the customer or through open sources). Unlike the first type of verification, this method doesn't provide an immediate quality boost but rather offers an overview of forecast errors. It helps identify potential patterns, such as a consistent overestimation of wave height in a particular location or an underestimation of wind from a specific direction. This information is important for our decision-making as well, guiding us in selecting the appropriate models and fine-tuning existing ones for specific areas.
In our monthly verification reports, we assess forecast accuracy both qualitatively and quantitatively. A first examination of forecast accuracy is presented in a simple graph (qualitative; see Figure 2). Does the forecast align well with observations? A more comprehensive analysis is conducted using various statistical methods or parameters (quantitative; see Figure 3). What are the exact forecast errors and what do they mean?
Figure 2: The graph shows forecasted significant wave height (blue) plotted against observations (pink). At first glance the forecast matches the observations quite well.
The statistical parameters that are used in our reports include (see Figure 3):
- Hit ratio (HR): this metric shows the percentage of forecasts that remain within a pre-defined threshold of the observations. It ranges from 0 to 100%; the perfect score is 100%.
- Mean Error (ME): this metric is also called the bias. It shows the mean systematic error. The perfect score is 0. A score below 0 means that the forecast underestimates the observations; the opposite is true for values above 0.
- Mean Absolute Error (MAE): this metric indicates the average magnitude of the absolute forecast error; an indicator of average precision.
- Mean Square Error (MSE): this measures the average of the squared forecast errors and incorporates both the variance and the bias of the forecast. It is more sensitive to outliers in the data than the MAE.
- Root Mean Square Error (RMSE): this is a frequently used measure in verification. It is the square root of the MSE, which expresses the error in the same units as the forecast parameter, making it a good measure of precision.
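As a minimal sketch, the five metrics above can be computed from paired forecast and observation series as follows. The data and the hit threshold here are purely illustrative; in practice the threshold depends on the parameter and on what is agreed with the customer.

```python
import math

def verification_metrics(forecast, observed, hit_threshold):
    """Compute HR, ME, MAE, MSE and RMSE for paired forecast/observation lists."""
    errors = [f - o for f, o in zip(forecast, observed)]
    n = len(errors)
    mse = sum(e ** 2 for e in errors) / n
    return {
        # percentage of forecasts within the threshold of the observation
        "HR": 100.0 * sum(abs(e) <= hit_threshold for e in errors) / n,
        # bias: below 0 = underestimation, above 0 = overestimation
        "ME": sum(errors) / n,
        # average magnitude of the forecast error
        "MAE": sum(abs(e) for e in errors) / n,
        # penalises large errors (outliers) more heavily than the MAE
        "MSE": mse,
        # MSE expressed back in the original units of the parameter
        "RMSE": math.sqrt(mse),
    }

# Hypothetical significant wave heights in metres, threshold of 0.25 m:
scores = verification_metrics([1.2, 1.5, 2.1], [1.0, 1.6, 1.8], hit_threshold=0.25)
```

Because the MSE squares each error before averaging, a single badly missed forecast (for instance during a typhoon passage) raises the MSE and RMSE far more than it raises the MAE, which is why the reports show both.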
To assess whether the forecast improves as the time of interest approaches, we utilize the 00-24h, 24-48h, and 48-72h forecasts, plotting them against the corresponding observations. This approach not only helps gauge forecast accuracy over different time frames but is also useful in identifying situations with low confidence, such as when low pressure systems move over the forecast location (see Figure 4).
Figure 4: On the 3rd and 4th of September 2023 a low confidence situation occurred for this location just west of Taiwan. Typhoon Haikui tracked from east to west across southern Taiwan and resulted in a highly inaccurate wave height forecast.
In addition to significant wave height, as shown in Figure 2 and 4, we provide verifications for several other weather parameters, including temperature, wind speed and direction, air pressure, sea height, swell height, and wave period. The availability of these verifications depends on the observations that are accessible.
After discussing this issue, the customer agreed to accept a slight overestimation of waves, emphasizing the importance of prioritizing safety. This is an excellent example of how discussing verification reports helps customers understand the performance of our forecasts and make informed decisions regarding their operations.