fix: inputs/VegBank/taxonobservation_/map.csv: remapped int_* to OMIT because these are not specific to the taxoninterpretation row (this information is instead in a separate taxoninterpretation for the original determination). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: inherit from new import_subset.run (which uses extract.run)
added lib/runscripts/import_subset.run, extract.run
added exports/2013-10-18.Brian_Enquist.Canadensys.csv.run
bin/make_analytical_db: removed no longer needed setting of $schema to $public, because this is now done by psql()
lib/sh/local.sh: psql(): also accept $public as the $schema param, since this is used by a lot of import scripts
lib/sh/util.sh: added require_dot_script()
bugfix: lib/sh/util.sh: $top_script: use @BASH_SOURCE instead of $0, because @BASH_SOURCE is also set correctly for .-scripts (where $0 is the calling shell instead)
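    illustration (file name is hypothetical, not from the repo): when a script is .-included, $0 names the caller, but BASH_SOURCE still names the script itself:
        # my_script.sh, run via `. my_script.sh`
        echo "$0"                       # the calling shell or outer script, not this file
        echo "${BASH_SOURCE[0]}"        # still my_script.sh
        top_script="${BASH_SOURCE[0]}"  # correct whether executed or .-included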
bugfix: bin/import_all: restore the working dir when main() is done, in case it started as something other than the root dir
bin/after_import: support turning off the end-of-import backup for imports that are not the full database
bugfix: lib/runscripts/util.run: `trap on_exit EXIT`: only set this if the script is not a dot script, because if it is a dot script, on_exit() will not be invoked until the calling shell exits, which may be much later than when the script is run. previously, this was handled by canceling the EXIT trap if on_exit() is run manually, but this would not work correctly if a load-time error prevented on_exit() from running and canceling the trap.
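    schematically (a sketch, not the exact util.run code; is_dot_script() is the helper added below):
        # for a .-script, an EXIT trap would not fire until the *calling* shell
        # exits, so the caller has to invoke on_exit manually instead
        if ! is_dot_script; then trap on_exit EXIT; fi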
bugfix: lib/runscripts/util.run: if is_dot_script, fix $@ when there are no args, which incorrectly causes it to contain the script name. use is_dot_script rather than the presence of $@ args to decide whether to use @BASH_ARGV, because @BASH_ARGV is actually wrong when run as a .-script (it contains the script name).
bugfix: lib/sh/util.sh: is_dot_script(): need to subtract 1 from ${#BASH_LINENO[@]}, because this is the array length rather than the index of the last element as in Perl
lib/sh/util.sh: added is_dot_script()
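    a sketch of what such a check can look like (the test condition itself is an assumption; the part confirmed above is the length-minus-1 indexing):
        is_dot_script()
        {
            local last_idx=$((${#BASH_LINENO[@]} - 1)) # ${#array[@]} is the length, so the last index is length - 1
            # assumed heuristic: the bottom-of-stack line number is 0 only when
            # the top-level script was executed rather than .-included
            test "${BASH_LINENO[$last_idx]}" != 0
        }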
bugfix: schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current (used by analytical_stem_view): need to separately check if `determinationtype IS NULL`, because `determinationtype NOT IN (accepted, matched)` will return NULL (treated as false) if determinationtype is NULL, causing no match
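    the NULL behavior is easy to reproduce in psql (assumes a reachable default database; the queries are illustrations, not vegbien.sql code):
        psql --no-psqlrc -c "SELECT NULL::text NOT IN ('accepted', 'matched') AS not_in_result"
        # returns NULL (which WHERE treats as false), so the NULL case must be OR'd in separately:
        psql --no-psqlrc -c "SELECT d IS NULL OR d NOT IN ('accepted', 'matched') AS is_current FROM (VALUES (NULL::text)) AS _ (d)"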
bugfix: bin/make_analytical_db: when running into a public schema other than "public", also pass this to `/run export_` (which currently uses $schema instead of $public)
bugfix: bin/import_all: fix $@ when .-included without args (which causes bash to put the wrong values in $@ instead of leaving it empty)
bin/import_all: `make schemas/$version/install`: reinstall instead to allow re-running the import to the same custom schema (e.g. 2013-10-18.Brian_Enquist.Canadensys)
bin/import_all: `make schemas/$version/install`: ignore errors if schema exists, to support running with -e
bugfix: bin/import_all: removing inputs/.TNRS/tnrs/tnrs.make.lock: use `"rm" -f` instead of plain "rm" to avoid having an error exit status, which will abort the script if run with the -e flag (as runscripts are)
lib/runscripts/util.run: run script template: changed sample command name to all() because each runscript requires this in order to be run without args
lib/runscripts/util.run: support scripts that are run as shell-includes (with leading "."), by allowing the calling script to manually invoke on_exit() without it then being invoked twice (the end of a shell-include does not trigger the EXIT trap)
bin/*_all: *_main(): renamed to just main(): it does not matter that other shell-includes' main() functions will clobber this, because it is only executed once
bugfix: bin/import_all: Source tables: use .../import instead of import_temp, because import_temp is only needed when importing all tables (to prevent the temp suffix from being removed prematurely)
lib/runscripts/util.run: support scripts that are run as shell-includes (with leading "."), by also accepting $@ args that are passed along in the util.run include, in addition to @BASH_ARGV
bugfix: lib/sh/util.sh: alias_append(): need to enclose $(alias) call in "" because its result may contain separator chars (i.e. whitespace) that will be parsed incorrectly. this appears to only be a bug when runscripts are run as shell-includes, with a leading ".".
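    a minimal illustration of the quoting issue (helper and alias names are made up, not the actual util.sh code):
        count_args() { echo "$#"; }
        alias my_cmd='ls -l --color=auto'
        count_args $(alias my_cmd)      # 4: the printed definition gets split on its spaces
        count_args "$(alias my_cmd)"    # 1: the whole `alias my_cmd='ls -l --color=auto'` line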
schemas/VegCore/ERD/VegCore.ERD.mwb: connecting lines: inherits from traceable: added arrow to indicate what this label refers to
schemas/VegCore/ERD/VegCore.ERD.mwb: regenerated exports and updated image map
schemas/VegCore/ERD/VegCore.ERD.mwb: HAS-A/IS-A box: renamed to "connecting lines" for clarity
schemas/VegCore/ERD/VegCore.ERD.mwb: relationships: HAS-A: added HAS-MANY going in the opposite direction, because every HAS-A has an opposite HAS-MANY
schemas/VegCore/ERD/VegCore.ERD.mwb: relationships: IS-A, HAS-A: added directional arrows
schemas/VegCore/ERD/VegCore.ERD.mwb: field order box: removed spacing between top of text box and bottom of outer box label
schemas/VegCore/ERD/VegCore.ERD.mwb: reordered columns according to the field order convention
schemas/VegCore/ERD/VegCore.ERD.mwb: added label documenting the field order convention: 1) inherited, 2) required, 3) identifying, 4) foreign key, 5) extenders, 6) others
web/links/index.htm: updated to Firefox bookmarks. added links for EER models, data management plans. put PostgreSQL before MySQL because we have found PostgreSQL to be a much more capable database system, even though it lacks some of MySQL's user-friendly features.
planning/timeline/timeline.2013.xls: updated for progress
fix: schemas/vegbien.sql: analytical_stem_view: renamed specimens columns to use the VegCore names, where these differ from DwC, so that the now-VegCore staging table column names are the same as the analytical_stem_view column names
schemas/vegbien.sql: regenerated using `make schemas/remake`. note that analytical_stem_view column renamings need this step after a search-and-replace of the column names, in order to remove excess "" around all-lowercase names and reset generated index names.
added planning/goals/web_interface/phpPgAdmin.select_interface.png for use at wiki.vegpath.org/Proposed_enhancements
inputs/CVS/_src/: added refresh from Mike Lee
fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
planning/timeline/timeline.2013.xls: usability testing: added subtask to provide scientists with their requested data
bugfix: bin/import_all: need to publish datasources that won't be published by `make .../import`, so that the per-datasource import XPaths that refer to TNRS/geoscrub will link up with the TNRS/geoscrub source entry instead of creating a new entry without the metadata (because the entry with the metadata was named TNRS.new/geoscrub.new)
schemas/vegbien.sql: datasource_publish(): use parameter names instead of $1, $2, etc., because this is a PL/pgSQL function (which supports named parameters)
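    for context, a toy example of the named-parameter point (add_one() is a stand-in, not the actual datasource_publish() body; assumes a reachable default database):
        psql --no-psqlrc -c "CREATE OR REPLACE FUNCTION add_one(n integer) RETURNS integer AS \$\$ BEGIN RETURN n + 1; END; \$\$ LANGUAGE plpgsql"
        psql --no-psqlrc -c "SELECT add_one(41)"  # 42; the body refers to the parameter as n rather than $1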
bugfix: schemas/vegbien.sql: datasource_publish(): if the datasource to publish already has the published name, don't datasource_rm() it
bin/import_all: removed no longer needed import of geoscrub data, because analytical_stem_view is now joined to the geoscrub_output table directly, instead of using the imported canon_place entries
schemas/vegbien.sql: analytical_stem_view: join to the geoscrub_output table directly, instead of using the imported canon_place entries. this avoids the need to import geoscrub_output into VegBIEN (which is expected to take 2+ hours after the refresh), as well as the need to then refresh any datasources whose geoscrubbing input data has changed.
inputs/.geoscrub/geoscrub_output/postprocess.sql: added nullable unique index on the inputs, for use by analytical_stem_view. note that it must be nullable in order to create a match when not all of the input fields are populated. this uses array[] to create a nullable index, which is much better than column-based import and VegBIEN's use of COALESCE because the expression is the same for every type and no NULL sentinel value is needed.
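    roughly what such an index looks like (the exact column list is an assumption; the real geoscrub_output inputs may differ): array values are never NULL themselves, and NULL elements compare as equal in array comparisons, so rows with missing inputs still participate in the index and can still match:
        psql --no-psqlrc -c "
            CREATE UNIQUE INDEX geoscrub_output_inputs ON geoscrub_output (
                (array[decimallatitude, decimallongitude]),
                (array[country, stateprovince, county])
            )"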
schemas/VegCore/VegCore.ERD.mwb: fixed lines
schemas/VegCore/ERD/VegCore.ERD.mwb: person: allow to have multiple organizations
schemas/VegCore/ERD/VegCore.ERD.mwb: split "2b. GNRS" label into two labels, one for each table GNRS is applied to
schemas/VegCore/ERD/VegCore.ERD.mwb: georeferencing: merged into geoplace, since this is actually information attached to a specific plot, etc. relating to the coordinates used in its geoplace subclass
schemas/VegCore/ERD/VegCore.ERD.mwb: geovalidatable_place: changed parent geoplace pointer to parent_boundary_WKT, since the immediate parent may not have an associated boundary to use for geovalidation (i.e. it may not be an official GADM geoplace), although ancestors further up likely will be
schemas/VegCore/ERD/VegCore.ERD.mwb: place.name: made it required because it's needed for the unique constraint to be populated properly (including for subclasses such as geoplace, which need to generate this from the coordinates)
schemas/VegCore/ERD/VegCore.ERD.mwb: place.rank: made it required, because every place should have some kind of rank indicating what type of place it is, including lower ranks (e.g. plot, individual)
schemas/VegCore/ERD/VegCore.ERD.mwb: place: added unique constraint on parent, rank, name
schemas/VegCore/ERD/VegCore.ERD.mwb: place.locality: moved to geopath, because this is actually a rank of place (i.e. below municipality) rather than a field that every place could have
schemas/VegCore/ERD/VegCore.ERD.mwb: geoplace.official_name: renamed to name to merge with inherited field from place. documented that for geoplaces, this is the official, scrubbed name.
inputs/.geoscrub/geoscrub_output/postprocess.sql: added geovalid derived column, for use by analytical_stem_view
bin/with_all: $all: renamed to $hidden_srcs for clarity, since this now just adds the hidden (.*) datasources, rather than always using all datasources
bugfix: bin/with_all: in $all mode, just prepend the .* datasources to the user-selected (or default) @inputs, so that using $all to add these datasources doesn't inadvertently cause the action to be performed for all datasources
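    schematically (array and variable names follow the entries above, using the newer $hidden_srcs name; this is not the actual with_all code):
        # prepend the hidden (.*) datasources to whatever the user selected;
        # .[!.]* skips the . and .. entries
        if test -n "$hidden_srcs"; then inputs=(inputs/.[!.]*/ "${inputs[@]}"); fi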
web/links/index.htm: updated to Firefox bookmarks. PostgreSQL: ALTER TABLE: added documentation about disabling of foreign key triggers, which is only possible by the superuser. note that marking a foreign key constraint as NOT VALID does not disable the trigger, so NOT VALID cannot be used for this purpose. this would be used to add fkeys from core VegBIEN tables to validation results tables such as the geoscrubbing results, without needing to import the validation results directly into core VegBIEN (which is time-consuming and currently must be done before input data is loaded, requiring a datasource reload to add geoscrubbing results).
bin/import_all: usage: documented that this can now be run with a custom datasources list (each of the form inputs/src/)
bin/with_all: added support for providing a custom list of inputs to run the command on
inputs/.geoscrub/geoscrub_output/postprocess.sql, run: updated runtimes
inputs/.geoscrub/geoscrub_output/run: documented full load_data() runtime (9 min @starscream)
inputs/.geoscrub/geoscrub_output/postprocess.sql: updated runtimes for refreshed data, which now has 4x as many rows (1,707,970->6,747,650)
inputs/.geoscrub/geoscrub_output/: refreshed geoscrub data. removed +header.csv because the extract now contains the header in the first row of the file.
bugfix: lib/sh/local.sh: psql(): $is_root: use `` around case statement instead of $(), because it contains an embedded unbalanced )
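    for example (a sketch; the real $is_root test in local.sh may differ):
        # the lone ) in the case pattern is the embedded unbalanced ) mentioned
        # above; backticks avoid the $() parsing problem
        is_root=`case "$USER" in root|postgres) echo 1;; esac`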
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: include only the columns that Jim provided in his extract (the geoscrub table contains additional internal columns that are not part of the geovalidation data for VegBIEN). documented runtime (30 s) and upload time (1.5 min).
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: removed no longer needed setting of $local_server, $local_user (and use of $local_pg_database instead of $database) because the use_local bug in local.sh has been fixed
bugfix: lib/sh/local.sh: psql(): don't default the connection vars using use_local if running as the postgres user. in that case, connection must happen via a socket, with server="", and as the user running the command (postgres), with user="".
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: need to manually set local_server, local_user to "" so that they do not default to their bien-user values
bugfix: lib/sh/db.sh: avoid outputting to /dev/fd/# when running as sudo on Linux, because this causes a "Permission denied" error (due to the /dev/fd/# file being owned by a different user). this is not a problem with normal redirects (>&#), because they do not use /dev/fd/# files which can have access permissions.
bugfix: lib/runscripts/util.run: to_top_file(): need to pass "$@" to to_file
lib/runscripts/util.run: to_top_file: added function for this (in addition to alias), so that this can be run from sudo in a wrap_fn command
lib/sh/db.sh: pg_as_root(): run sudo with echo_run to help debug
bugfix: lib/sh/db.sh: pg_cmd(): only set PG* connection/login env vars when the corresponding var is non-empty. there are some situations in which these must be unset (in order to use the default value), and other situations when the var must be set to something (i.e. "") to avoid it being defaulted to a value in local.sh > connection vars.
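    a schematic of the rule (a sketch, not the actual db.sh code; PGHOST/PGUSER/PGDATABASE/PGPASSWORD are the standard libpq variables):
        pg_cmd()
        {
            local -a env_=()
            test -z "$server"   || env_+=("PGHOST=$server")
            test -z "$user"     || env_+=("PGUSER=$user")
            test -z "$database" || env_+=("PGDATABASE=$database")
            test -z "$password" || env_+=("PGPASSWORD=$password")
            env "${env_[@]}" "$@" # vars left empty stay unset, so libpq defaults apply
        }
        # usage: pg_cmd psql -c "SELECT 1"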
backups/TNRS.backup.md5: updated
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: need to set $local_pg_database instead of $database because use_local (in psql()) does not currently avoid clobbering already-set versions of the applicable env vars
bugfix: lib/sh/local.sh: pg_as_root(): need to use -E (preserve environment) option to sudo, so that $schema, $table get passed through
bugfix: lib/sh/local.sh: psql(): only \set schema, table if $schema, $table are non-empty, because otherwise, you will get a "zero-length delimited identifier" error
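    the guard can be pictured like this (a sketch using psql's --set flag; local.sh's actual psql() differs in detail):
        psql_with_vars() # hypothetical wrapper
        {
            local -a args=()
            # an empty value would make :"schema" expand to "", giving the
            # `zero-length delimited identifier` error
            test -z "$schema" || args+=("--set=schema=$schema")
            test -z "$table"  || args+=("--set=table=$table")
            psql "${args[@]}" "$@"
        }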
added inputs/.geoscrub/geoscrub_output/geoscrub.csv.run to export the geoscrub table (must be run on vegbiendev)
lib/sh/local.sh: added require_remote()
lib/sh/db.sh: added pg_as_root()
lib/runscripts/util.run: added $wrap_fn to run any function via sudo, etc.
Added instructions for dependencies in the README.
Added indexes to speed up geonames-to-gadm.sql.
Without these indexes, these queries could take hours to complete. With them, the times more closely matched the times Jim noted in the SQL comments.
Fixed a couple of syntax errors in geovalidate.sh.
Fixed an SQL syntax error and a bash syntax error on the next line.
planning/timeline/timeline.2013.xls: "geoscrubbing automated pipeline": scheduled for after Paul's current set of tasks on the geoscrubbing re-run is complete. i'm budgeting several weeks for this since my understanding is that Paul is doing this part-time.
planning/timeline/timeline.2013.xls: moved "geoscrubbing automated pipeline" under "simplify import process for easier maintainability"
planning/timeline/timeline.2013.xls: geoscrubbing re-run: added subtask to spot-check reloaded geoscrubbing data
planning/timeline/timeline.2013.xls: geoscrubbing re-run: added separate subtask for "geoscrubbing data reload", since apparently it was not clear that of course the new data will need to be imported into VegBIEN before the results of the re-run are available. this is currently scheduled to happen in the next full-database import, which is the week of 10/28 in order to include further validations fixes.
planning/timeline/timeline.2013.xls: CVS validation: use timespan dot ◦ for supertask
planning/timeline/timeline.2013.xls: CVS validation: added subtasks similar to those for FIA validation (create validation subset, create extract)