planning/timeline/timeline.2013.xls: reduced the width of the Milestone column to fit an additional week on the printed page
planning/timeline/timeline.2013.xls: attribution/conditions of use: removed "(Brad/Brian/Bob/etc.)" because this information comes from everyone who provided or obtained data, not just Brad/Brian/Bob
planning/timeline/timeline.2013.xls: rescheduled tasks to accommodate the separate non-critical feature requests subtasks
planning/timeline/timeline.2013.xls: datasource validations: split "fix feature requests" into separate "fix critical feature requests" and "fix non-critical feature requests" tasks. rescheduled non-critical feature requests until after the other validation tasks have been completed.
planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: moved up to next week because we would like to get this done by the 10/31 deadline
planning/timeline/timeline.2013.xls: updated for progress
planning/timeline/timeline.2013.xls: moved "data provider metadata" before "datasource validations (spot-checking)" because conditions of use are necessary for scientists who want to publish papers based on the data (which is a key use case)
planning/timeline/timeline.2013.xls: moved "usability testing" before "datasource validations (spot-checking)" because this is most important towards reaching our goal of a useful information resource
planning/timeline/timeline.2013.xls: moved "geoscrubbing re-run", "add globally-unique occurrenceID" back under "usability testing" > "add missing columns" because these are in fact part of the usability testing
planning/timeline/timeline.2013.xls: "flatten the datasources to a common schema": moved to later column because the complex tasks "switching to new-style import" and "create interactive scripts for each import step" are also scheduled then. (it's unlikely we would have much time over winter break anyway, considering that there is ~1 week's worth of holidays then.)
planning/timeline/timeline.2013.xls: scheduled "simplify import process for easier maintainability"
planning/timeline/timeline.2013.xls: tasks performed by someone else (geoscrubbing re-run): changed solid check marks ✓ to open check marks ☑ to match the solid • vs. open ◦ dot convention
planning/timeline/timeline.2013.xls: documentation testing: added supertask dots. removed later dots for scheduled tasks.
planning/timeline/timeline.2013.xls: scheduled "documentation testing"
planning/timeline/timeline.2013.xls: scheduled "simplify process of mapping/adding a new datasource"
planning/timeline/timeline.2013.xls: "add globally-unique occurrenceID": moved it up to the first week when we're no longer fixing existing issues in datasources, since this has similar priority to adding missing columns discovered during usability testing (which is scheduled as an ongoing task)
planning/timeline/timeline.2013.xls: usability testing: did task breakdown (find scientists who want to use BIEN3 data, etc.) and scheduled subtasks
planning/timeline/timeline.2013.xls: moved "add missing columns" to its own supertask. used outline check mark ☑ (analogous to open circle ◦) to mark supertasks as completed which were split up into subtasks.
planning/timeline/timeline.2013.xls: later column: removed dots from scheduled items
planning/timeline/timeline.2013.xls: moved "switching to new-style import"-related steps (other than for CVS) to separate "simplify import process for easier maintainability" supertask, since this is not part of the "simplify process of mapping/adding a new datasource" task
planning/timeline/timeline.2013.xls: add any missing columns: added and scheduled step to add globally-unique occurrenceID
planning/timeline/timeline.2013.xls: geoscrubbing re-run: added dots ◦ for the time when this can be worked on asynchronously by Paul Sarando
planning/timeline/timeline.2013.xls: data provider metadata: added dots ◦ for the portion of "attribution and conditions of use" that can be worked on asynchronously by Brad/Brian/Bob
planning/timeline/timeline.2013.xls: scheduled "aggregated validations" during the last 2 weeks of "datasource validations (spot-checking)", because these weeks are only spent fixing issues uncovered in the remaining datasources, so there may be extra time then
planning/timeline/timeline.2013.xls: scheduled other tasks after "datasource validations (spot-checking)" is complete
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): each datasource's validation supertask: added open circles ◦ spanning the length of the subtasks
planning/timeline/timeline.2013.xls: use an open circle ◦ instead of a bullet • for supertasks that have been fully split into subtasks (not just itemizing a few subtasks), so that these don't count towards the bullets (estimated workload) in each week
planning/timeline/timeline.2013.xls: use an open circle ◦ instead of a bullet • for tasks that are performed by someone other than me, so that these don't count towards the bullets (estimated workload) in each week
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): split each datasource into subtasks and scheduled them
planning/timeline/timeline.2013.xls: moved "move denormalized validations to stage II", "move stage III validations to stage II" outside of "switching to new-style import" because the "switching to new-style import" step refers just to the per-datasource switching steps, not to the additional refactorings that would be needed to avoid dependency on the complex XPath mappings (mappings/VegCore-VegBIEN.csv)
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): added subtasks for each of the remaining datasources (wiki.vegpath.org/2013-10-17_conference_call#validation-order)
planning/timeline/timeline.2013.xls: moved non-validation-related tasks after the 10/31 deadline so that these are not taking time away from the validation
planning/timeline/timeline.2013.xls: moved "flatten the datasources to a common schema" under "simplify process of mapping/adding a new datasource" because this is also needed separately for datasources where the left-joining is not part of the validation
planning/timeline/timeline.2013.xls: extended "revisions to VegBIEN schema" to length of "datasource validations (spot-checking)" because schema changes are expected as we add missing fields
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks ("find out amount remaining in BIEN3 budget")
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): extended through the end of November because data providers' fixes on the remaining 10 datasources (wiki.vegpath.org/2013-10-17_conference_call#validation-order) are likely to add significantly to the issues and feature requests associated with these datasources (e.g. the 2nd-round VegBank validation added 4 issues and 5 feature requests). there is also expected to be wait time while data providers are responding (most likely in multiple rounds of feedback).
planning/timeline/timeline.2013.xls: data provider metadata: removed "iPlant can do" because this actually requires Brad/Brian/Bob/other data providers to provide this info. however, this info may be findable on the web for some datasources.
planning/timeline/timeline.2013.xls: moved "data provider metadata" right after "datasource validations" because this is part of the completed database itself rather than the tools to maintain it
planning/timeline/timeline.2013.xls: split "revisions to schema" into "revisions to VegBIEN schema" (part of datasource validations) and "revisions to normalized VegCore" (part of documentation)
bin/import_all: use just import_scrub, not reimport_scrub, because import_scrub now automatically publishes the datasource's import (i.e. removes the temp suffix)
bugfix: inputs/input.Makefile: import: remove the temp suffix once the import is done, so that the full database import doesn't leave the suffix attached to the datasources that import_all didn't import with reimport. removed the unused import_publish target (use import_temp instead to run the import without the temp-suffix removal).
planning/timeline/timeline.2013.xls: moved part of "switching to new-style import" under "datasource validations (spot-checking)" because this is necessary to validate CVS
planning/timeline/timeline.2013.xls: moved "simplify process of mapping/adding a new datasource" and "documentation testing" after "usability testing" because these tasks were there to make it possible for people other than me to reload/add to the database, which we have now decided is a lower priority than creating the validated database itself
planning/timeline/timeline.2013.xls: added weeks through the end of the year (12/31)
schemas/VegBIEN/attribution/BIEN 3 data use and attribution.docx: changed dataset definition to the definition in normalized VegCore ("a collection of records from the same place, with the same attribution requirements"), following discussion with Ramona
schemas/VegBIEN/attribution/BIEN 3 data use and attribution.docx: updated to Ramona's commented version
inputs/CVS/plot_/map.csv: realLatitude, realLongitude: remapped to UNUSED because these columns are actually empty
inputs/CVS/taxonObservation_/map.csv: collector_ID: remapped it to UNUSED and removed the join to party via it, like in VegBank
inputs/CVS/: deleted stemLocation_, because the CVS stemLocation table is empty (unlike VegBank)
inputs/CVS/import_order.txt: added plantConcept_/ so it would get automapped after switching to new-style import
inputs/CVS/taxonObservation_/map.csv: denorm_{tri,quad}*: mapped to infraspecificRank*, infraspecificEpithet*
inputs/CVS/taxonObservation_/map.csv: infraspecific ranks: remapped to EQUIV#to:species (which is the speciesBinomial), because these actually contain the full taxonomic name at that rank, like VegBank
inputs/CVS/taxonObservation_/map.csv: genus: documented that, unlike VegBank's, this does not include the genus author
inputs/CVS/taxonObservation_/map.csv: denorm_* terms _alt-ed with normalized terms: use DUPLICATE#of instead where possible. documented where and why _alt was necessary (this applies to a few rows for division, genus).
bugfix: inputs/CVS/taxonObservation_/map.csv: species: remapped to speciesBinomial, not specificEpithet (like for VegBank). however, note that denorm_species is in fact the epithet, unlike VegBank.
fix: inputs/CVS/taxonObservation_/postprocess.sql: removed {} around denorm_genus to match the normalized genus
inputs/CVS/taxonObservation_/map.csv: removed unnecessary alts for terms that don't have a duplicate denorm* or hierarchical field
fix: inputs/CVS/taxonObservation_/postprocess.sql: fix 1 row that has denorm_kingdom != Kingdom (i.e. both NOT NULL but not the same)
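a sketch of how such a mismatch can be located before fixing it by hand (the vegbien database and the CVS staging-schema/table names are assumptions; the actual fix is in postprocess.sql):

    psql vegbien <<'SQL'
    -- find rows where both fields are set but disagree; the single affected row
    -- can then be corrected by hand (which value wins is decided case by case)
    SELECT "denorm_kingdom", "Kingdom"
    FROM "CVS"."taxonObservation_"
    WHERE "denorm_kingdom" IS NOT NULL
      AND "Kingdom" IS NOT NULL
      AND "denorm_kingdom" <> "Kingdom";
    SQL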
bugfix: lib/common.Makefile: $(subMake): don't enclose the target in "" because sometimes the target is empty (i.e. `all`), and nothing should be passed to the sub-make
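the underlying shell behavior, as a minimal standalone demo (independent of the project Makefile):

    target=''
    make "$target"  # quoted empty var becomes an explicit empty goal:
                    #   make: *** empty string invalid as file name.  Stop.
    make $target    # unquoted empty var expands to no words at all,
                    # so make just runs its default goal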
inputs/VegBank/plot_/create.sql: documented runtime (5 min)
bugfix: inputs/CVS/plot_/create.sql: like for VegBank, need to compare place.*PLOT_ID*, not PLOTPLACE_ID, with plot.PLOT_ID
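a sketch of the corrected join, assuming the vegbien database and a CVS staging schema (table/column names taken from this entry):

    psql vegbien <<'SQL'
    SELECT plot.*
    FROM "CVS".plot
    JOIN "CVS".place ON place."PLOT_ID" = plot."PLOT_ID";  -- was: place."PLOTPLACE_ID"
    SQL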
/README.TXT: Single datasource import: added pointer to instructions to remake the analytical DB (also required after single datasource import)
inputs/VegBank/verify/input_cols.txt, inputs/VegBank/+taxon_observation.**.sample/create.sql: updated to match taxon_observation.** columns
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: run all sync_uploads on the svn working copy using --size-only, because the mtimes are based on when the files were last updated by svn and are not meaningful
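sync_uploads is one of the project's wrapper scripts; the relevant rsync semantics, with hypothetical paths:

    # --size-only: treat files as up to date when their sizes match, ignoring mtimes
    # (an svn checkout sets mtimes to the checkout/update time, so they carry no signal)
    rsync --size-only -av inputs/ vegbiendev:~/svn/inputs/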
/README.TXT: Full database import: On local machine: do steps under Maintenance > "to synchronize vegbiendev, jupiter, and your local machine": removed the no-longer-accurate note that these steps are located above Full database import, since Full database import is now at the beginning of the file
bugfix: inputs/VegBank/+taxon_observation.**.sample/: renamed to ^taxon_observation.**.sample because a leading + has a special meaning to bash (it indicates a shell option, and you will get an error "invalid option name"), as well as to make (it indicates that a recipe command invokes make recursively)
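a minimal demo of the bash hazard (the exact failure depends on where the name ends up; `set` shows the quoted error):

    # in option position, a leading "+" is parsed as an option toggle, not a name:
    set +o taxon_observation   # bash: set: taxon_observation: invalid option name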
bugfix: inputs/VegBank/taxon_observation.**/header.csv: updated for the observation_/map.csv bugfix, which added the new hasobservationsynonym field. this fixes a strange test bug caused by the taxon_observation.**/map.csv column list being misaligned with what was in the underlying tables. (column mismatches often cause inexplicable errors in unrelated sections of code, much as buffer overflows do in C++.)
bugfix: inputs/VegBank/taxon_observation.**.sample/: renamed to +taxon_observation.**.sample so that the glob expansion of taxon_observation.* doesn't add taxon_observation.**.sample (which would cause make to attempt to install taxon_observation.**.sample before taxon_observation.** is installed)
bugfix: *Makefile: recursive invocation of $(MAKE): enclose targets in "" in case they contain *
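demo of why the quotes matter (plain bash; the same expansion happens in the shell that runs a make recipe):

    target='taxon_observation.**'
    echo $target    # if any files match the pattern, the shell glob-expands it first
    echo "$target"  # quoted: always the literal string, which is what the sub-make should see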
bugfix: lib/runscripts/table.run: load_data(): pass $is_view through to `make reinstall` so that DROP VIEW will be used instead of DROP TABLE where applicable
bugfix: inputs/input.Makefile: %/uninstall: allow user to set is_view=1 flag to use DROP VIEW instead of DROP TABLE
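hypothetical invocation (the datasource/table name is only an example of a view-backed input):

    make 'inputs/VegBank/taxon_observation.**/uninstall' is_view=1   # uses DROP VIEW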
lib/sh/util.sh: added instructions for making an export only visible locally
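likely along the lines of these standard bash idioms (a sketch; the actual instructions are in util.sh, and PGDATABASE is just an example variable):

    f() {
        local -x PGDATABASE=vegbien   # exported, but only for this function's duration
        psql -c 'SELECT 1'
    }
    PGDATABASE=vegbien psql -c 'SELECT 1'   # or: exported for a single command only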
bugfix: inputs/VegBank/observation_/header.csv, map.csv: updated for refresh, which inserts hasobservationsynonym at the end of the observation table
inputs/VegBank/taxon_observation.**.sample/create.sql: reordered columns in the same order as analytical_plot, for easier validation
bugfix: lib/runscripts/table.run: load_data(): in remaking mode, need to remake header.csv in case the columns have changed
web/links/index.htm: updated to Firefox bookmarks. updated favicons.
web/links/index.htm: updated to Firefox bookmarks. sudo: added instructions to turn off incorrect password e-mails.
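the sudoers flag involved is presumably mail_badpass (an assumption; the actual instructions live in the bookmarks page):

    sudo visudo   # then add this line to stop e-mails about mistyped passwords:
    #   Defaults !mail_badpass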
inputs/VegBank/taxon_observation.**.sample/create.sql: include only the subset of columns that is imported to VegBIEN
inputs/VegBank/taxon_observation.**.sample/test.xml.ref: updated inserted row count (which was most likely generated before the output column names had been set to the input column names)
added inputs/VegBank/verify/input_cols.include.txt, with runscript to generate it
inputs/VegBank/verify/input_cols.unmapped.txt*: renamed to input_cols.exclude.txt* because this now includes mapped columns as well
inputs/VegBank/verify/input_cols.unmapped.txt.run: remove unmapped join columns, since these would be included in the extract
inputs/VegBank/verify/input_cols.unmapped.txt.run: take input directly from input_cols.txt to avoid needing to first copy and paste it into input_cols.unmapped.txt
inputs/VegBank/verify/input_cols.unmapped.txt.run: added back deliberately excluded columns (DUPLICATE#of:..., etc.) so that the # of rows in the file can be subtracted from the total # of columns to get the # of input columns that would be included in the extract
bugfix: inputs/input.Makefile: %/VegBIEN.csv: `ln -s` to create VegBIEN.csv: enclose the filenames in "" since they may contain * (e.g. taxon_observation.**)
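the fixed recipe is plausibly of this shape (the link source is illustrative):

    table='taxon_observation.**'
    ln -s "map.csv" "$table/VegBIEN.csv"   # quotes keep the ** from being glob-expanded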
added inputs/VegBank/verify/input_cols.txt, input_cols.unmapped.txt (with runscript to filter input_cols.unmapped.txt)
schemas/VegBIEN/attribution/BIEN 3 data use and attribution.docx: made Ramona's corrections with track changes turned on. note that you have to use MS Word for this, not LibreOffice, because LibreOffice can't save the table of contents properly in .docx or .doc format (although it can save it in .odt format).
inputs/VegBank/stratum/postprocess.sql: added pkey
inputs/VegBank/taxonobservation_/postprocess.sql: added __parent index on locationID to facilitate the LEFT JOINs used to create the validation input
inputs/VegBank/observation_/postprocess.sql: added __parent index on locationID to facilitate the LEFT JOINs used to create the validation input
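a sketch of the two indexes added above, assuming the vegbien database, a VegBank staging schema, and the __parent naming from these entries:

    psql vegbien <<'SQL'
    CREATE INDEX "observation_.__parent"      ON "VegBank".observation_      ("locationID");
    CREATE INDEX "taxonobservation_.__parent" ON "VegBank".taxonobservation_ ("locationID");
    SQL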
inputs/VegBank/import_order.txt: added taxon_observation.**.sample so it will automatically be kept up to date
inputs/VegBank/taxon_observation.**.sample/create.sql: set runtime (1 s)
inputs/VegBank/: added taxon_observation.**.sample subset of plots to use in the validation. this avoids the need to import all of VegBank just to validate a few of the plots.