schemas/VegCore/ERD/VegCore.ERD.mwb: place.rank: made it required, because every place should have some kind of rank indicating what type of place it is, including lower ranks (e.g. plot, individual)
schemas/VegCore/ERD/VegCore.ERD.mwb: place: added unique constraint on parent, rank, name
schemas/VegCore/ERD/VegCore.ERD.mwb: place.locality: moved to geopath, because this is actually a rank of place (i.e. below municipality) rather than a field that every place could have
schemas/VegCore/ERD/VegCore.ERD.mwb: geoplace.official_name: renamed to name to merge with inherited field from place. documented that for geoplaces, this is the official, scrubbed name.
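for illustration, a rough SQL rendering of these place constraints (the ERD is authoritative; the parent column name below is an assumption):

    psql -c "
        ALTER TABLE place ALTER COLUMN rank SET NOT NULL;  -- every place has a rank, even low ranks (plot, individual)
        ALTER TABLE place ADD CONSTRAINT place_parent_rank_name_key UNIQUE (parent_id, rank, name);
    "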
inputs/.geoscrub/geoscrub_output/postprocess.sql: added geovalid derived column, for use by analytical_stem_view
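a minimal sketch of what such a derived column could look like (the validity-score column names and the expression are placeholders, not necessarily what postprocess.sql uses):

    psql -c "
        ALTER TABLE geoscrub_output ADD COLUMN geovalid boolean;
        UPDATE geoscrub_output SET geovalid = (latlonvalidity > 0 AND countryvalidity > 0);  -- placeholder expression
    "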
bin/with_all: $all: renamed to $hidden_srcs for clarity, since this now just adds the hidden (.*) datasources, rather than always using all datasources
bugfix: bin/with_all: in $all mode, just prepend the .* datasources to the user-selected (or default) @inputs, so that using $all to add these datasources doesn't inadvertently cause the action to be performed for all datasources
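the idea, roughly (a sketch of the intent, not the actual with_all code):

    inputs=("$@"); test $# -gt 0 || inputs=(inputs/*/)            # user-selected or default datasources
    test -z "${all:-}" || inputs=(inputs/.[!.]*/ "${inputs[@]}")  # $all only *prepends* the hidden (.*) datasources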
web/links/index.htm: updated to the current Firefox bookmarks. PostgreSQL: ALTER TABLE: added documentation about disabling foreign key triggers, which only the superuser can do. note that marking a foreign key constraint as NOT VALID does not disable its trigger, so NOT VALID cannot be used for this purpose. disabling the triggers would allow adding fkeys from core VegBIEN tables to validation results tables, such as the geoscrubbing results, without importing the validation results directly into core VegBIEN (which is time-consuming and currently must happen before input data is loaded, so adding geoscrubbing results would otherwise require a datasource reload).
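the relevant commands, for reference (table, column, and constraint names below are placeholders):

    psql -c "
        -- superuser only: this disables all triggers, including the internal ones that enforce foreign keys
        ALTER TABLE geoscrub_output DISABLE TRIGGER ALL;
        -- NOT VALID only skips checking the existing rows; new rows are still checked, so the trigger stays active
        ALTER TABLE analytical_stem ADD CONSTRAINT analytical_stem_geoscrub_fkey
            FOREIGN KEY (geoscrub_output_id) REFERENCES geoscrub_output (id) NOT VALID;
    "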
bin/import_all: usage: documented that this can now be run with a custom datasources list (each of the form inputs/src/)
bin/with_all: added support for providing a custom list of inputs to run the command on
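for example (the datasource names here are just examples):

    bin/import_all inputs/CVS/ inputs/FIA/    # run the import on just these datasources instead of all of inputs/*/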
inputs/.geoscrub/geoscrub_output/postprocess.sql, run: updated runtimes
inputs/.geoscrub/geoscrub_output/run: documented full load_data() runtime (9 min @starscream)
inputs/.geoscrub/geoscrub_output/postprocess.sql: updated runtimes for refreshed data, which now has 4x as many rows (1,707,970->6,747,650)
inputs/.geoscrub/geoscrub_output/: refreshed geoscrub data. removed +header.csv because the extract now contains the header in the first row of the file.
bugfix: lib/sh/local.sh: psql(): $is_root: use backticks (``) around the case statement instead of $(), because the case statement contains an embedded unbalanced )
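e.g. (illustrative; not the exact local.sh code):

    is_root=`case "$EUID" in 0) echo 1;; esac`    # backticks tolerate the unbalanced ) in the case pattern
    # the same command wrapped in $( ) can trip up the parser in some shells because of that unpaired ):
    # is_root=$(case "$EUID" in 0) echo 1;; esac)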
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: include only the columns that Jim provided in his extract (the geoscrub table contains additional internal columns that are not part of the geovalidation data for VegBIEN). documented runtime (30 s) and upload time (1.5 min).
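a minimal sketch of the kind of export this performs (the column list below is a placeholder, not Jim's actual list):

    psql -c "COPY (SELECT decimallatitude, decimallongitude, country, stateprovince, county
        FROM geoscrub) TO STDOUT CSV HEADER" >geoscrub.csv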
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: removed no longer needed setting of $local_server, $local_user (and use of $local_pg_database instead of $database) because the use_local bug in local.sh has been fixed
bugfix: lib/sh/local.sh: psql(): don't default the connection vars using use_local if running as the postgres user. in that case, connection must happen via a socket, with server="", and as the user running the command (postgres), with user="".
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: need to manually set local_server, local_user to "" so that they do not default to their bien-user values
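roughly, the intended behavior (a sketch of the intent, not the exact local.sh logic):

    if test "$(whoami)" = postgres; then
        server= user=                                          # connect via the socket, as the invoking (postgres) user
    else
        : "${server:=$local_server}" "${user:=$local_user}"    # otherwise default the connection vars via use_local
    fi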
bugfix: lib/sh/db.sh: avoid outputting to /dev/fd/# when running under sudo on Linux, because this causes a "Permission denied" error (the /dev/fd/# file is owned by a different user than the one the command runs as). this is not a problem with normal redirects (>&#), because they duplicate the already-open descriptor rather than re-opening a /dev/fd/# file, so no permission check applies.
bugfix: lib/runscripts/util.run: to_top_file(): need to pass "$@" to to_file
lib/runscripts/util.run: to_top_file: added a function version of this (in addition to the alias), so that it can be run via sudo in a wrap_fn command
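a guess at the shape of the fix (to_file's real interface, and whatever else to_top_file sets up, may differ):

    to_top_file() { to_file "$@"; }    # "$@" forwards the wrapped command and its arguments on to to_file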
lib/sh/db.sh: pg_as_root(): run sudo with echo_run to help debug
bugfix: lib/sh/db.sh: pg_cmd(): only set PG* connection/login env vars when the corresponding var is non-empty. there are some situations in which these must be unset (so the default value is used), and others in which the var must be set, even if only to "", to avoid it being defaulted to a value in local.sh > connection vars.
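roughly, the intended behavior (the shell-var-to-PG*-var mapping here is an assumption about pg_cmd()):

    test -z "${server:-}"   || export PGHOST="$server"        # set each PG* var only when the corresponding
    test -z "${user:-}"     || export PGUSER="$user"          # shell var is non-empty; otherwise leave it unset
    test -z "${database:-}" || export PGDATABASE="$database"  # so libpq falls back to its own default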
backups/TNRS.backup.md5: updated
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: need to set $local_pg_database instead of $database because use_local (in psql()) does not currently avoid clobbering already-set versions of the applicable env vars
bugfix: lib/sh/local.sh: pg_as_root(): need to use -E (preserve environment) option to sudo, so that $schema, $table get passed through
bugfix: lib/sh/local.sh: psql(): only \set schema, table if $schema, $table are non-empty, because otherwise, you will get a "zero-length delimited identifier" error
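putting the two fixes together, approximately (the function bodies here are guesses, not the actual lib/sh code):

    pg_as_root() { sudo -E "$@"; }    # -E (preserve environment) so $schema, $table reach the wrapped command
    psql() {
        local set_args=()
        test -z "${schema:-}" || set_args+=(--set=schema="$schema")   # pass schema/table only when non-empty,
        test -z "${table:-}"  || set_args+=(--set=table="$table")     # avoiding "zero-length delimited identifier"
        command psql "${set_args[@]}" "$@"
    }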
added inputs/.geoscrub/geoscrub_output/geoscrub.csv.run to export the geoscrub table (must be run on vegbiendev)
lib/sh/local.sh: added require_remote()
lib/sh/db.sh: added pg_as_root()
lib/runscripts/util.run: added $wrap_fn to run any function via sudo, etc.
Added instructions for dependencies in the README.
Added indexes to speed up geonames-to-gadm.sql.
Without these indexes, these queries could take hours to complete. With them, the times more closely matched the times Jim noted in the SQL comments.
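for example (table and column names below are placeholders; the real ones are whatever geonames-to-gadm.sql joins on):

    psql -c "
        CREATE INDEX gadm2_names_idx    ON gadm2    (name_0, name_1);
        CREATE INDEX geonames_names_idx ON geonames (country_code, admin1_code);
    "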
Fixed a couple of syntax errors in geovalidate.sh.
Fixed a SQL syntax error and a bash syntax error on the next line.
planning/timeline/timeline.2013.xls: "geoscrubbing automated pipeline": scheduled for after Paul's current set of tasks on the geoscrubbing re-run is complete. i'm budgeting several weeks for this since my understanding is that Paul is doing this part-time.
planning/timeline/timeline.2013.xls: moved "geoscrubbing automated pipeline" under "simplify import process for easier maintainability"
planning/timeline/timeline.2013.xls: geoscrubbing re-run: added subtask to spot-check reloaded geoscrubbing data
planning/timeline/timeline.2013.xls: geoscrubbing re-run: added separate subtask for "geoscrubbing data reload", since it was apparently not clear that the new data will of course need to be imported into VegBIEN before the results of the re-run are available. this is currently scheduled to happen in the next full-database import, during the week of 10/28, in order to include further validation fixes.
planning/timeline/timeline.2013.xls: CVS validation: use timespan dot ◦ for supertask
planning/timeline/timeline.2013.xls: CVS validation: added subtasks similar to those for FIA validation (create validation subset, create extract)
planning/timeline/timeline.2013.xls: FIA validation: split apart into subtasks, including "decide which columns to validate", which has to happen ahead of time before the extract can be generated
planning/timeline/timeline.2013.xls: fixed check marks for past (hidden) weeks, which had gotten duplicated when rows were copied together with their check marks
planning/timeline/timeline.2013.xls: fixed line heights
planning/timeline/timeline.2013.xls: fixed column width so the dates display properly in MS Excel
planning/timeline/timeline.2013.xls: right-aligned legend so it isn't too close to the "During week of:" label
planning/timeline/timeline.2013.xls: added legend: • task, ◦ timespan, ✓ task progress, ☑ timespan progress
planning/timeline/timeline.2013.xls: attribution/conditions of use: made it a subtask of "add missing columns" because this is related to data needed for published analyses. added dots because this is an ongoing task that depends on data providers getting their use conditions to us.
planning/timeline/timeline.2013.xls: reload core & analytical database: moved next reload ahead to last week of October so that we can include the updated geovalidation data for the 10/31 deadline. added additional reload so that they are spaced <= 1 month apart.
planning/timeline/timeline.2013.xls: receive feedback from documentation tester: added an extra week to receive additional feedback from them in response to documentation fixes made
planning/timeline/timeline.2013.xls: attribution/conditions of use: made this a top-level task instead of a subtask of "data provider metadata", to avoid including lower-priority tasks (i.e. in the later column) in the same section as higher-priority tasks
planning/timeline/timeline.2013.xls: datasource validations: regrouped by subtask instead of by datasource, so that the high-priority subtasks get done for all datasources before moving on to lower-priority subtasks for any datasources
planning/timeline/timeline.2013.xls: reduced width of Milestone column to make room to fit an additional week on the printed page
planning/timeline/timeline.2013.xls: attribution/conditions of use: removed "(Brad/Brian/Bob/etc.)" because these are from everyone who provided or obtained data, not just Brad/Brian/Bob
planning/timeline/timeline.2013.xls: rescheduled tasks to accommodate the separate non-critical feature requests subtasks
planning/timeline/timeline.2013.xls: datasource validations: split "fix feature requests" into separate "fix critical feature requests" and "fix non-critical feature requests" tasks. rescheduled non-critical feature requests until after the other validation tasks have been completed.
planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: moved up to next week because we would like to be able to get this done for the 10/31 deadline
planning/timeline/timeline.2013.xls: updated for progress
planning/timeline/timeline.2013.xls: moved "data provider metadata" before "datasource validations (spot-checking)" because conditions of use are necessary for scientists who want to publish papers based on the data (which is a key use case)
planning/timeline/timeline.2013.xls: moved "usability testing" before "datasource validations (spot-checking)" because this is most important towards reaching our goal of a useful information resource
planning/timeline/timeline.2013.xls: moved "geoscrubbing re-run", "add globally-unique occurrenceID" back under "usability testing" > "add missing columns" because these are in fact part of the usability testing
planning/timeline/timeline.2013.xls: "flatten the datasources to a common schema": moved to later column because the complex tasks "switching to new-style import" and "create interactive scripts for each import step" are also scheduled then. (it's unlikely we would have much time over winter break anyway, considering that there is ~1 week's worth of holidays then.)
planning/timeline/timeline.2013.xls: scheduled "simplify import process for easier maintainability"
planning/timeline/timeline.2013.xls: tasks performed by someone else (geoscrubbing re-run): changed solid check marks ✓ to open check marks ☑ to match the solid • vs. open ◦ dot convention
planning/timeline/timeline.2013.xls: documentation testing: added supertask dots. removed later dots for scheduled tasks.
planning/timeline/timeline.2013.xls: scheduled "documentation testing"
planning/timeline/timeline.2013.xls: scheduled "simplify process of mapping/adding a new datasource"
planning/timeline/timeline.2013.xls: "add globally-unique occurrenceID": moved it up to the first week when we're no longer fixing existing issues in datasources, since this has similar priority to adding missing columns discovered during usability testing (which is scheduled as an ongoing task)
planning/timeline/timeline.2013.xls: usability testing: did task breakdown (find scientists who want to use BIEN3 data, etc.) and scheduled subtasks
planning/timeline/timeline.2013.xls: moved "add missing columns" to its own supertask. used outline check mark ☑ (analogous to open circle ◦) to mark supertasks as completed which were split up into subtasks.
planning/timeline/timeline.2013.xls: later column: removed dots from scheduled items
planning/timeline/timeline.2013.xls: moved "switching to new-style import"-related steps (other than for CVS) to separate "simplify import process for easier maintainability" supertask, since this is not part of the "simplify process of mapping/adding a new datasource" task
planning/timeline/timeline.2013.xls: add any missing columns: added and scheduled step to add globally-unique occurrenceID
planning/timeline/timeline.2013.xls: geoscrubbing re-run: added dots ◦ for this for the time when it can be worked on asynchronously by Paul Sarando
planning/timeline/timeline.2013.xls: data provider metadata: added dots ◦ for the portion of "attribution and conditions of use" that can be worked on asynchronously by Brad/Brian/Bob
planning/timeline/timeline.2013.xls: scheduled "aggregated validations" during the last 2 weeks of "datasource validations (spot-checking)", because these weeks are only spent fixing issues uncovered in the remaining datasources, so there may be extra time then
planning/timeline/timeline.2013.xls: scheduled other tasks after "datasource validations (spot-checking)" is complete
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): each datasource's validation supertask: added open circles ◦ spanning the length of the subtasks
planning/timeline/timeline.2013.xls: use an open circle ◦ instead of a bullet • for supertasks that have been fully split into subtasks (not just itemizing a few subtasks), so that these don't count towards the bullets (estimated workload) in each week
planning/timeline/timeline.2013.xls: use an open circle ◦ instead of a bullet • for tasks that are performed by someone other than me, so that these don't count towards the bullets (estimated workload) in each week
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): split each datasource into subtasks and scheduled them
planning/timeline/timeline.2013.xls: moved "move denormalized validations to stage II", "move stage III validations to stage II" outside of "switching to new-style import" because the "switching to new-style import" step refers just to the per-datasource switching steps, not to the additional refactorings that would be needed to avoid dependency on the complex XPath mappings (mappings/VegCore-VegBIEN.csv)
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): added subtasks for each of the remaining datasources (wiki.vegpath.org/2013-10-17_conference_call#validation-order)
planning/timeline/timeline.2013.xls: moved non-validation-related tasks after the 10/31 deadline so that these are not taking time away from the validation
planning/timeline/timeline.2013.xls: moved "flatten the datasources to a common schema" under "simplify process of mapping/adding a new datasource" because this is also needed separately for datasources where the left-joining is not part of the validation
planning/timeline/timeline.2013.xls: extended "revisions to VegBIEN schema" to length of "datasource validations (spot-checking)" because schema changes are expected as we add missing fields
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks ("find out amount remaining in BIEN3 budget")
planning/timeline/timeline.2013.xls: datasource validations (spot-checking): extended through the end of November because data providers' fixes on the remaining 10 datasources (wiki.vegpath.org/2013-10-17_conference_call#validation-order) are likely to add significantly to the issues and feature requests associated with these datasources (e.g. the 2nd-round VegBank validation added 4 issues and 5 feature requests). there is also expected to be wait time while data providers are responding (most likely in multiple rounds of feedback).
planning/timeline/timeline.2013.xls: data provider metadata: removed "iPlant can do" because this actually requires Brad/Brian/Bob/other data providers to provide this info. however, this info may be findable on the web for some datasources.
planning/timeline/timeline.2013.xls: moved "data provider metadata" right after "datasource validations" because this is part of the completed database itself rather than the tools to maintain it
planning/timeline/timeline.2013.xls: split "revisions to schema" into "revisions to VegBIEN schema" (part of datasource validations) and "revisions to normalized VegCore" (part of documentation)
bin/import_all: use just import_scrub, not reimport_scrub, because import_scrub now automatically publishes the datasource's import (i.e. removes the temp suffix)
bugfix: inputs/input.Makefile: import: remove the temp suffix once the import is done, so that the full-database import doesn't leave the suffix attached to the datasources that import_all didn't import with reimport. removed the unused import_publish target (use import_temp instead to invoke just the import, without the temp-suffix removal).
planning/timeline/timeline.2013.xls: moved part of "switching to new-style import" under "datasource validations (spot-checking)" because this is necessary to validate CVS
planning/timeline/timeline.2013.xls: moved "simplify process of mapping/adding a new datasource" and "documentation testing" after "usability testing" because these tasks were there to make it possible for people other than me to reload/add to the database, which we have now decided is a lower priority than creating the validated database itself
planning/timeline/timeline.2013.xls: added weeks through the end of the year (12/31)
schemas/VegBIEN/attribution/BIEN 3 data use and attribution.docx: changed dataset definition to the definition in normalized VegCore ("a collection of records from the same place, with the same attribution requirements"), following discussion with Ramona