bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced with taxon_observation.**
(for r11396) fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
/README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
/README.TXT: added instructions for connecting to vegbiendev
mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
bugfix: mappings/VegCore-VegBIEN.csv: main taxondetermination: use [!isoriginal=true] instead of [!isoriginal] so that adding a manual isoriginal field does not prevent this selector from matching
inputs/VegBank/taxonobservation_/map.csv: originalinterpretation, currentinterpretation: removed table name prefix so these would automap
mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
planning/timeline/timeline.2013.xls: geoscrubbing automated pipeline: split into subtasks "build pipeline", "test pipeline", and "integrate pipeline into import process"
planning/timeline/timeline.2013.xls: geoscrubbing re-run: moved recent checkmarks to "geoscrubbing automated pipeline" since the work on these actually relates to automating the geoscrubbing, not the one-time reload (which was already completed)
planning/timeline/timeline.2013.xls: geoscrubbing: made "geoscrubbing re-run" a subtask of the main geoscrubbing task, instead of geoscrubbing re-run being the supertask. updated for Paul's progress.
schemas/vegbien.sql: taxondetermination_set_iscurrent(): include new iscurrent__verbatim, so that taxondeterminations the datasource marks as current are always considered first. this currently applies to VegBank and CVS.
schemas/vegbien.sql: taxondetermination.isoriginal: made it nullable like iscurrent__verbatim, because this is populated from the datasource. taxondetermination_set_iscurrent() now supports isoriginal=NULL, so this is not a problem.
schemas/vegbien.sql: taxondetermination.is_datasource_current: renamed to iscurrent__verbatim and made it nullable, so that this can be used to store the verbatim iscurrent status
schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed setting of is_datasource_current (which is now the same as iscurrent), so that this can be used to store the verbatim iscurrent status
schemas/vegbien.sql: taxondetermination_set_iscurrent(): isoriginal: make sure it is always either true or false, so that if the NOT NULL constraint on this is ever removed you don't end up with the incorrect sort order false, true, NULL (it should be false=NULL, true)
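a minimal demonstration of the sort-order pitfall this entry guards against (standalone literals, not the function's actual query; assumes a reachable local Postgres):

    psql --no-psqlrc <<'SQL'
    -- default ascending sort puts NULLs last: false, true, NULL
    SELECT x FROM (VALUES (false), (true), (NULL)) AS t (x) ORDER BY x;
    -- coalescing groups NULL with false, the order required above
    SELECT x FROM (VALUES (false), (true), (NULL)) AS t (x)
        ORDER BY COALESCE(x, false);
    SQL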
schemas/vegbien.sql: use plain taxondetermination.iscurrent instead of is_datasource_current since these are now the same
schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current: set to the same value as iscurrent, since these now have the same formula
schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed no longer used accepted, matched determinationtypes (for these determinations, left-join to TNRS.ScrubbedTaxon)
Updated biengeo README with new script workflow.
Split geovalidate.sh into install and update scripts.
Split geovalidate.sh into install.sh and update_gadm_data.sh scripts. The install.sh script creates the database and uses the install sql scripts to create all required tables. The update_gadm_data.sh script downloads the GADM data and creates the...
Renamed geonames.sh to update_geonames_data.sh.
Renamed geonames.sh to update_geonames_data.sh and moved many of the SQL statements from the bash script into supporting update and truncate sql scripts. These sql and update_geonames_data.sh scripts now assume all required...
Split up geonames-to-gadm.sql into 3 scripts.
Each script only operates on one table within a transaction. These scripts now assume the tables have already been created (by install scripts added in a previous commit), and each starts out by truncating the table it will update with new data.
Added geoscrub.sh script.
This script runs the load-geoscrub-input.sh, geonames.sql, and geovalidate.sql scripts in order to load and scrub vegbien input data. Updated README to explain the new script. Minor updates to load-geoscrub-input.sh.
inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for: documented that most of the 13139 removed plots are from duplicates (where we have direct data). this leaves only 560 of SALVIAS's original 13699 plots.
inputs/SALVIAS/projects/postprocess.sql: remove example data
inputs/SALVIAS/projects/postprocess.sql: remove private data that should not be publicly visible (this was probably already removed by the plotMetadata.AccessCode filter in salvias_plots.~.clean_up.sql)
inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for (Madidi, VegBank)
bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters__from_fuzzing: need to convert km to m in the fuzzing radii. updated derived cols runtimes.
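the unit conversion above, on a stand-in value (the actual column expressions in postprocess.sql may differ):

    psql --no-psqlrc <<'SQL'
    -- VegBank fuzzing radii arrive in km; coordinateUncertaintyInMeters needs m
    SELECT 1.5 * 1000 AS "coordinateUncertaintyInMeters__from_fuzzing";  -- 1500
    SQL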
inputs/VegBank/plot_/postprocess.sql: remove duplicated CVS plots (2323 of 7079 CVS plots are removed by this)
planning/timeline/timeline.2013.xls: updated for progress
added exports/2013-7-10.Naia.range_limiting_factors.csv.run
bugfix: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: do not override the table to analytical_stem, because the extract-specific view should be used instead. this was actually benign, because extract.run export_() always sets $table to the extract-specific view.
schemas/vegbien.sql: added 2013-7-10.Naia.range_limiting_factors
schemas/vegbien.sql: sync_analytical_stem_to_view(): row_num: renamed to taxon_occurrence__pkey because previous taxon determinations have been removed, so each row is in fact a taxon_occurrence (~= VegCore.vegpath.org?ERD.taxon_occurrence)
fix: schemas/vegbien.sql: analytical_stem_view: don't ORDER BY datasource, because this requires a slow full-table sort after the hash joins. (when selecting a subset of analytical_stem_view, nested loops are used automatically without needing an ORDER BY to force this.) to get the datasource-sorted order (plus a sort-order guarantee), you can still add a manual `ORDER BY datasource`, which will use a fast index scan on one of the datasource indexes.
schemas/vegbien.sql: analytical_stem: added row_num, which can serve as the taxon_observation ID (DwC occurrenceID)
Updated load-geoscrub script with configurable db.
load-geoscrub-input.sh now uses a variable with the db name defined at the top of the script. Updated the default db host to 'localhost' for this script.
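a sketch of the pattern (variable values are illustrative, not the script's actual ones):

    #!/bin/bash
    db_name=vegbien_geoscrub  # configurable db name, defined at the top
    db_host=localhost         # new default host
    psql -h "$db_host" -d "$db_name" -f geonames.sql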
schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
schemas/vegbien.sql: analytical_stem_view: use plot.** to obtain plot-related fields, so that the same code does not need to be maintained in both analytical_stem_view and plot.**
schemas/vegbien.sql: analytical_stem_view: moved specimen-specific fields to occurrence section
schemas/vegbien.sql: analytical_stem_view, plot.**: added separate location__cultivated__bien
schemas/vegbien.sql: added separate eventDate, in addition to dateCollected
fix: schemas/vegbien.sql: dateCollected: use aggregateoccurrence.collectiondate before locationevent.obsstartdate rather than after, because this is more accurate. it was previously the other way around to allow dateCollected to be the pkey for the row's locationevent (for plots data).
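how the coalesce order plays out, on stand-in dates (not the actual analytical_stem_view SQL):

    psql --no-psqlrc <<'SQL'
    -- COALESCE returns its first non-NULL argument; with collectiondate first,
    -- it wins whenever the datasource provides it
    SELECT COALESCE(NULL::date /* collectiondate */,
                    '2013-07-01' /* obsstartdate */) AS "dateCollected";
    SQL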
schemas/vegbien.sql: analytical_stem_view, plot.**: locationevent__pkey: moved to right before the locationevent-related fields
schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
schemas/vegbien.sql: plot.**: added locationevent__pkey so that this view can be joined to other VegBIEN tables, which require the internal pkey
derived/biengeo/README.txt: geoscrub new data: geovalidate.sql: added runtime from Paul
schemas/vegbien.sql: sync_analytical_stem_to_view(): speciesBinomialWithMorphospecies index: documented runtime (1 h)
schemas/vegbien.sql: plot.**: updated to use the same column formulas as analytical_stem_view
planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: removed "globally-unique" because Naia is actually OK with this being numeric (i.e. unique within our DB)
lib/runscripts/import_subset.run: $version: use the new $extract_view, which is set to the same value that $version previously was
lib/runscripts/extract.run: use the extract-specific view instead of all of analytical_stem
schemas/vegbien.sql: added 2013-10-18.Brian_Enquist.Canadensys view
schemas/vegbien.sql: sync_analytical_stem_to_view(): added index on speciesBinomialWithMorphospecies for Brian Enquist's Canadensys request
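a sketch of the index (the column name is from this entry; the statement actually emitted by sync_analytical_stem_to_view() may differ):

    psql vegbien <<'SQL'
    CREATE INDEX ON analytical_stem ("speciesBinomialWithMorphospecies");
    SQL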
exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: documented runtime (35 min, now that bugs have been fixed)
bugfix: bin/with_all: @inputs default value: use `local`, so that the default value is only set for the current function and doesn't leak back out into the caller. this fixes a bug in subset imports where import_all's Source/import call to with_all would add the .* datasources, but these would then stay in for the import_scrub call, causing extra .* datasources to incorrectly be imported.
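a minimal reproduction of the bug class (names are illustrative, not with_all's actual code):

    #!/bin/bash
    set_default()       { inputs=("all datasources"); }        # leaks to caller
    set_default_local() { local inputs=("all datasources"); }  # stays local

    inputs=("just Source")
    set_default;       echo "${inputs[@]}"  # prints: all datasources (leaked)
    inputs=("just Source")
    set_default_local; echo "${inputs[@]}"  # prints: just Source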
planning/timeline/timeline.2013.xls: usability testing: added additional subtask to validate the scientists' extracts (i.e. check that the extract fulfills their request)
planning/timeline/timeline.2013.xls: provide scientists with their requested data: added separate subtask for Brian Enquist's Canadensys extract
planning/timeline/timeline.2013.xls: updated for progress and revised schedule
bugfix: schemas/pg_hba.Mac.conf: made same change for Mac as was made for Linux in r11451
bugfix: schemas/pg_hba.conf: don't allow ident authentication for Unix socket connections, because this apparently prevents having normal, password-based ("md5") connections. note that just switching the order of the ident and md5 entries is not useful, because whichever authentication type comes second will be ignored completely. this problem was previously worked around by just not using Unix socket connections at all, and always specifying "localhost" as the host to force a hostname-based connection. this does not affect the postgres superuser, which has its own ident line in pg_hba.conf.
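illustrative pg_hba.conf entries for the final state described above (the first matching line wins, which is why merely reordering two all-users lines cannot work; the actual file may differ):

    # TYPE  DATABASE  USER      METHOD
    local   all       postgres  ident   # superuser keeps its own ident line
    local   all       all       md5     # all other Unix socket connections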
Added db user and host to load-geoscrub-input.sh
The psql commands in load-geoscrub-input.sh will now connect with a specific user on a specific host. Updated the 'COPY' sql statement to a '\COPY' statement, so that the psql user does not have to be a PostgreSQL superuser.
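a sketch of the resulting command (connection values and the table name are illustrative):

    # \COPY runs the copy client-side, reading the file as the psql user,
    # so the database role does not need to be a PostgreSQL superuser
    psql -h localhost -U bien -d geoscrub \
        -c "\copy geoscrub_input from 'geoscrub-input.csv' csv header"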
derived/biengeo/README.txt: geoscrub new data: steps that use .sql scripts: added the psql commands to run these
Updated install instructions in the README.
derived/biengeo/README.txt: geoscrub new data: noted that this now deletes any previous geoscrubbing results
derived/biengeo/README.txt: added steps to set the working dir for each set of steps
derived/biengeo/README.txt: added section on obtaining source code, including path to Paul's in-progress files on vegbiendev (not sure whether the in-progress files are needed to run the core scripts in steps 1-6)
derived/biengeo/README.txt: moved commands to run to the top of the README. flagged command sections with *** and an identifying label.
Initial checkin of geoscrub install SQL files.
Added install.*.sql files that will do initial table creation for all required tables. Added a truncate.vegbien_geoscrub.sql script that will clear tables related to data downloaded in load-geoscrub-input.sh....
Update load-geoscrub-input.sh to download from URL.
Removed the logic that dumped input data directly from the vegbien database; the input is now downloaded from a URL provided by AMK instead. Also updated this script to download the file into an input data directory, rather than just into the current working directory.
planning/timeline/timeline.2013.xls: reload core & analytical database scheduled for this week: postponed to give us additional time to do datasource validations
inputs/input.Makefile: added %/import_temp alias for %/import, to mirror the presence of import_temp for import
fix: inputs/VegBank/taxonobservation_/map.csv: remapped authorplantname to OMIT because these are not specific to the taxoninterpretation row (this is in a separate taxoninterpretation for the original determination instead). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
fix: inputs/VegBank/taxonobservation_/map.csv: remapped int_* to OMIT because these are not specific to the taxoninterpretation row (this is in a separate taxoninterpretation for the original determination instead). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: inherit from new import_subset.run (which uses extract.run)
added lib/runscripts/import_subset.run, extract.run
added exports/2013-10-18.Brian_Enquist.Canadensys.csv.run
bin/make_analytical_db: removed no longer needed setting of $schema to $public, because this is now done by psql()
lib/sh/local.sh: psql(): also accept $public as the $schema param, since this is used by a lot of import scripts
lib/sh/util.sh: added require_dot_script()
bugfix: lib/sh/util.sh: $top_script: use @BASH_SOURCE instead of $0, because this is also defined for .-scripts
bugfix: bin/import_all: restore the working dir when main() is done, in case it started as something other than the root dir
bin/after_import: support turning off the end-of-import backup for imports that are not the full database
bugfix: lib/runscripts/util.run: `trap on_exit EXIT`: only set this if the script is not a dot script, because if it is a dot script, on_exit() will not be invoked until the calling shell exits, which may be much later than when the script is run. previously, this was handled by canceling the EXIT trap if on_exit() is run manually, but this would not work correctly if a load-time error prevented on_exit() from running and canceling the trap.
bugfix: lib/runscripts/util.run: if is_dot_script, fix $@ when no args cause it to incorrectly contain the script name. use is_dot_script rather than the presence of $@ args to decide whether to use @BASH_ARGV, because @BASH_ARGV is actually wrong when run as a .-script (it contains the script name).
bugfix: lib/sh/util.sh: is_dot_script(): need to subtract 1 from ${#BASH_LINENO[@]}, because this is the array length rather than the index of the last element as in Perl
lib/sh/util.sh: added is_dot_script()
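a plausible reconstruction of the dot-script entries above (the repo's actual code may differ; the zero-vs-nonzero bottom-frame test is an assumption about how .-inclusion is detected):

    is_dot_script() {
        # the last index is length - 1, unlike Perl's $#array, which is
        # already the last index (the off-by-one fixed above)
        local last=$((${#BASH_LINENO[@]} - 1))
        # when a file is executed, the bottom call-frame line is 0; when it
        # is .-included, it is the (nonzero) line number in the caller
        test "${BASH_LINENO[$last]}" != 0
    }
    on_exit() { echo "cleaning up"; }
    # register the EXIT trap only when executed, so on_exit doesn't fire at
    # the *caller's* exit long after a .-script finishes running
    if ! is_dot_script; then trap on_exit EXIT; fi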
bugfix: schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current (used by analytical_stem_view): need to separately check if `determinationtype IS NULL`, because `determinationtype NOT IN (accepted, matched)` will return NULL (treated as false) if determinationtype is NULL, causing no match
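the three-valued-logic gotcha behind this fix, runnable standalone (literals stand in for determinationtype):

    psql --no-psqlrc <<'SQL'
    -- NOT IN yields NULL when its left side is NULL, and WHERE treats NULL
    -- as false, so such rows silently never match:
    SELECT NULL NOT IN ('accepted', 'matched');
    -- the separate IS NULL check restores the match:
    SELECT NULL IS NULL OR NULL NOT IN ('accepted', 'matched');
    SQL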
bugfix: bin/make_analytical_db: when running into a public schema other than "public", also pass this to `/run export_` (which currently uses $schema instead of $public)
bugfix: bin/import_all: fix $@ when .-included without args (which causes bash to put the wrong values in $@ instead of leaving it empty)
bin/import_all: `make schemas/$version/install`: reinstall instead to allow re-running the import to the same custom schema (e.g. 2013-10-18.Brian_Enquist.Canadensys)
bin/import_all: `make schemas/$version/install`: ignore errors if schema exists, to support running with -e