Forecast Verification for Applications in Ecology

Anna K. Miesner2, Dr Mark R. Payne1

1Technical University Of Denmark (DTU), Kgs. Lyngby, Denmark, 2Helmholtz-Zentrum Geesthacht, Centre for Materials and Coastal Research, Geesthacht, Germany

While ecological forecasting is still a field in its infancy, meteorologists have been thinking about how to check the validity of their forecasts for more than 50 years. We have reviewed the forecast verification literature from this field and here we summarise the key lessons in a practical form that can be readily taken up for use in predictive ecological applications. We first discuss what makes a forecast “good”, and examine the tools and techniques that can be used to assess predictive skill. We then highlight different ways that skill can be measured, both numerically and visually, for binary, categorical, probabilistic and continuous responses, and how to choose between these metrics. We emphasize the importance of reference forecasts, and of viewing forecast skill from many different perspectives, illustrating these points with examples from marine ecological forecasting applications. Open source software for calculating these metrics and worked examples will also be presented. Finally, we examine some of the wider issues around forecast skill and how they manifest themselves in skill metrics, including overfitting and model misspecification.
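To illustrate the role of a reference forecast, the sketch below (not the authors' software; all values are illustrative) computes a simple mean-squared-error skill score against a climatology baseline, one common way to express continuous forecast skill relative to a reference.

```python
import numpy as np

def mse(forecast, observed):
    """Mean squared error between forecasts and observations."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.mean((forecast - observed) ** 2)

def skill_score(forecast, observed, reference):
    """MSE-based skill score: 1 is a perfect forecast, 0 matches the
    reference forecast, negative values are worse than the reference."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

# Illustrative data: observations, a model forecast, and a climatology
# reference (the long-term mean repeated for every time step).
observed = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
forecast = np.array([2.4, 3.1, 3.0, 3.6, 3.3])
climatology = np.full_like(observed, observed.mean())

print(f"Skill score vs climatology: {skill_score(forecast, observed, climatology):.2f}")
```

A skill score of this form makes the choice of reference explicit: the same forecast can look skilful against climatology but unskilful against a better baseline such as persistence.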


Biography:

Mark R. Payne is a Senior Researcher at the Technical University of Denmark (DTU-Aqua) in Copenhagen, Denmark, whose research examines the impacts of climate change and climate variability on life in the ocean. His work pioneers the development of climate services for monitoring and managing marine life in Europe, coupling biological knowledge to climate models to produce predictions of direct relevance to end-users.
