Task #419


Assessment of results in the context of the literature: Literature review and accuracy

Added by Benoit Parmentier over 12 years ago. Updated over 12 years ago.

Status: In Progress
Priority: Normal
Category: Climate
Start date: 05/25/2012
Due date:
% Done: 90%
Estimated time: 70.00 h
Activity type: Other

Description

Documenting the new products will require assembling a list of references on climate interpolation in order to place the results in the context of existing work. As part of this compilation, a summary table of existing climate products and their accuracies will be produced. The list and table will be used for the upcoming papers.



Actions #1

Updated by Benoit Parmentier over 12 years ago

  • Status changed from New to In Progress
  • Estimated time changed from 40.00 h to 70.00 h

The basic structure of the paper, including the accuracy section, is roughly done. I aim to share the first draft next week along with some methods slides.

Actions #2

Updated by Benoit Parmentier over 12 years ago

  • % Done changed from 0 to 90

Actions #3

Updated by Benoit Parmentier over 12 years ago

The first draft includes three PDF presentations that contain additional notes and comments. I am still working on the references and the summary table; further updates will follow.

Actions #4

Updated by Adam Wilson over 12 years ago

I just read through your literature review, great job! I think there is definitely a place in the literature for a summary/review like this. I've attached my comments... If you want to pursue publishing it (and I hope you do), here are a few additional points that I think we'll need to work on:

There should be a section on satellite observations. While the overall focus should be on interpolation methods (as you have done), I think we need at least a short section pointing to attempts to incorporate satellite data in various ways... I've included a short list of potential papers to include. Essentially, I think we need to mention the strengths and weaknesses of existing methods and point to the opportunity for 'fusing' satellite data with station data to improve predictions.

I think the paper would be strengthened by expanding and shaping the conclusion into more of a 'suggested best practices' section rather than an overview of what others have done. That is, if there is no single best method, what is your suggestion for completing a project like this? Pick some subset of models, compare them in different places, and choose the single best one? Or fit a family of models and choose the best locally? Choose the best using a suite of validation metrics (which ones?)? If you're explicit here about what you think we should do given what you've learned, it will be a much more powerful and useful paper, and it would set up the temperature/precipitation methodology paper(s) nicely (we could reference this one when describing our methods).
