bugfix: lib/runscripts/datasrc_dir.run: import(): can no longer run `datasrc_make reinstall`, because that target now defers to the runscript for new-style import datasources (a change made so that `make .../install` properly reinstalls all the datasources). instead, call the two applicable make targets directly.
inputs/FIA/TREE/run: documented import() runtime (1.5 h), which includes table cleanup runtime (1 h)
bugfix: bin/pg_dump_limit: support errexit by ignoring the nonzero exit status that grep returns when it doesn't match anything
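  a minimal sketch of the pattern (grep arguments assumed; not the script's exact code):
      # tolerate "no match" (status 1) but still fail on real grep errors (status > 1)
      grep "$pattern" || test "$?" = 1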
inputs/GBIF/raw_occurrence_record_plants/run: updated import() runtime (unchanged), documented table cleanup runtime (1.5 h)
inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: CREATE INDEX ... specimenHolderInstitutions: documented runtime (45 min)
inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: Remove institutions that we have direct data for: documented runtime (3.5 min)
/README.TXT: Datasource setup: added steps to back up e-mails
bugfix: inputs/CTFS/import_order.txt: added *.src so that these would be installed under new-style import as well. this means that their columns will now be automapped, requiring the names to be renamed to VegCore names in */create.sql. note that VegCore taxonOccurrenceID has been renamed to taxonObservationID since this was last run.
inputs/.geoscrub/run: documented import() runtime (20 min)
bugfix: inputs/.NCBI/import_order.txt: added nodes.src, names.src so that these would be installed under new-style import as well. this means that their columns will now be automapped, requiring the names to be renamed to VegCore names in nodes/create.sql.
fix: /Makefile: inputs/reinstall: commented out to avoid a cascade of "overriding commands for target" warnings. this target now reverts to the default uninstall-then-install sequence rather than the simultaneous-reinstall optimization (which can still be invoked manually).
lib/sh/local.sh: public_schema_exists(): use a higher log_level for pg_schema_exists, to avoid all the verbose output involved in running the query
bugfix: lib/sh/local.sh: public_schema_exists(): can no longer use psql_script_vegbien for this, because using `SET search_path` (called by psql_script_vegbien) with a schema that does not exist no longer produces an error. instead, use new pg_schema_exists(), which uses a different command that does produce an error if the schema does not exist.
lib/sh/db.sh: added pg_require_schema()
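  a minimal sketch of how this pair can fit together (not the repo's exact code; the catalog probe below stands in for the erroring command described above, and the env-var calling convention follows lib/sh/db.sh):
      pg_schema_exists() # usage: schema=... pg_schema_exists
      {
          psql --quiet --no-align --tuples-only \
              --command "SELECT 1 FROM pg_namespace WHERE nspname = '$schema'" \
              vegbien | grep --quiet 1
      }
      pg_require_schema() # usage: schema=... pg_require_schema
      { pg_schema_exists || { echo "schema $schema does not exist" >&2; return 1; }; }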
lib/sh/util.sh: stderr2stdout(): documented that this redirects fd 2->1 and log_fd (but not back to 2)
bugfix: lib/sh/util.sh: stderr2stdout(): use `command` before tee, which re-filters log_fd so that stderr itself is also filtered. this allows log-filtering out an otherwise-confusing benign error when using e.g. stderr_matches().
lib/sh/util.sh: added not(), for use in prefixing wrapped commands
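  a minimal sketch, assuming not() simply inverts the wrapped command's exit status:
      not() { ! "$@"; } # usage: not cmd..., e.g. `if not stderr_matches ...`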
lib/sh/db.sh: added pg_schema_exists()
lib/sh/util.sh: added stderr_matches()
lib/sh/util.sh: documented that fds 2x/3x should not be used because we use them ourselves, whereas the 1x fds are used internally by the shell
lib/sh/util.sh: added stdout_contains()
lib/sh/util.sh: added stderr2stdout()
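  minimal sketches of how these helpers can compose (simplified: the real stderr2stdout also tees to log_fd, per the entries above; $pattern is an env-var param):
      stderr2stdout() { "$@" 2>&1; } # usage: stderr2stdout cmd...
      stdout_contains() # usage: pattern=... stdout_contains cmd...
      { "$@" | grep --quiet -- "$pattern"; }
      stderr_matches() # usage: pattern=... stderr_matches cmd...
      { stdout_contains stderr2stdout "$@"; }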
fix: lib/sh/db.sh: pg_table_exists(): usage: documented that $table is actually required for this function
bugfix: inputs/input.Makefile: install: for new-style datasources, use the associated runscript instead (the old-style install target will not do everything that's needed for a new-style datasource)
bugfix: /Makefile: moved inputs/reinstall to end so it overrides the corresponding subdir forwarding target
bugfix: /Makefile: inputs/install: don't run bin/reinstall_all here, because /install targets are supposed to be idempotent, forward-only actions that don't first remove existing data
bugfix: /Makefile: postgres-Darwin: don't prepend $(MAKE) to $(postgresReload-Darwin), because this is now a list of commands
bugfix: /Makefile: config: ignore errors if ~/bin/make exists
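  the likely shape of the fix, in shell terms (hypothetical; the actual recipe and symlink target are in /Makefile):
      # skip (rather than error on) the symlink when ~/bin/make already exists
      test -e ~/bin/make || ln -s "$(which make)" ~/bin/make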
inputs/FIA/COND/postprocess.sql: filtering formula: documented that this was created by Brad, and provided the URL to it on nimoy
inputs/CVS/cvs.~.clean_up.sql: remove plot.realLatitude/realLongitude, since this is private data that should not be publicly visible
bin/make_analytical_db: don't regenerate family_higher_plant_group from the NCBI data because the lookup table is now prepopulated as part of the schema
bin/import_all: don't import NCBI because the lookup table is now prepopulated as part of the schema
schemas/vegbien.sql: include the family_higher_plant_group lookup table values so that these don't need to be regenerated from the NCBI nodes whenever the DB is reloaded
schemas/vegbien.sql: taxonlabel_update_ancestors(): don't do an index scan if the value being scanned for is NULL. this supports testing the function without the indexes in place, without extra full-table scans for NULL values skewing the results: one can check whether the function is actually using the indexes by turning them off and seeing whether the runtime changes.
schemas/util.sql: explain2table(): documented usage: `PERFORM util.explain2table($$query$$);`
schemas/util.sql: explain2table(): by default, use the util.explain table
schemas/util.sql: added explain table
schemas/util.sql: added explain2notice()
schemas/util.sql: added explain2str()
schemas/util.sql: added explain2table()
schemas/util.sql: added explain()
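  a self-contained usage example, per the usage documented above (query and DB name illustrative):
      psql vegbien <<'EOF'
      -- log a sample query's plan to the default util.explain table
      DO $do$ BEGIN
          PERFORM util.explain2table($q$ SELECT * FROM pg_class $q$);
      END $do$;
      SELECT * FROM util.explain; -- view the logged plan lines
      EOF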
schemas/vegbien.sql: taxonlabel_update_ancestors(): don't create a performance-intensive nested transaction (EXCEPTION block) for each INSERT, because there should no longer be duplicate ancestors, so it's OK to abort the whole transaction if this assertion fails
bugfix: schemas/vegbien.sql: taxonlabel_update_ancestors_on_{insert,update}(): only use either the matched taxon's ancestors or the parent's ancestors, to avoid issues related to duplication between these two ancestors lists. this also fixes a bug where the 2nd taxonlabel_update_ancestors() call assumes that the existing ancestors are for the old parent, when in fact they have actually just been set to those for the new matched taxon (which horribly confuses taxonlabel_update_ancestors()).
schemas/vegbien.sql: _taxonlabel_set_parent_id(): just use a plain UPDATE statement, to avoid the significant parsing and stringification overhead of EXECUTE and quote_nullable(). it is not clear that EXECUTE is actually necessary to avoid caching the query plan, because the cache should be invalidated automatically when the table's ANALYZE statistics are regenerated.
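  a hypothetical before/after of this pattern (sketch only; the body and names below are illustrative, not the function's actual DDL):
      psql vegbien <<'EOF'
      CREATE FUNCTION set_parent_id_sketch(label_id integer, new_parent integer)
      RETURNS void AS $body$
      BEGIN
          -- was, conceptually: EXECUTE 'UPDATE taxonlabel SET parent_id = '
          --     || quote_nullable(new_parent) || ' WHERE taxonlabel_id = ' || label_id;
          -- now a plain statement: PL/pgSQL caches its plan, and PostgreSQL
          -- invalidates the cache when the table's ANALYZE statistics are regenerated
          UPDATE taxonlabel SET parent_id = new_parent
          WHERE  taxonlabel_id = label_id;
      END $body$ LANGUAGE plpgsql;
      EOF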
schemas/vegbien.sql: removed unused function _taxonlabel_set_matched_label_id(), which refers to obsolete fields
schemas/vegbien.sql: synced to DB (the view renderer apparently changed the text of a view)
backups/TNRS.backup: saved a copy as backups/TNRS.2013-11-18.backup
bugfix: bin/import_all: run in errexit mode, so that if the user cancels the reinstall of the import schema, the script aborts instead of continuing and using the wrong schema
bugfix: schemas/Makefile: %/uninstall: always confirm before removing an existing schema, not just for public and r*, because an auxiliary schema might also be used as $version and reinstalled by bin/import_all
schemas/vegbien.sql: analytical_stem_view: scrubbed_author: removed empty COALESCE around value (left over from when multiple values needed to be combined for many TNRS fields)
inputs/CVS/^taxon_observation.**.sample/create.sql: uncommented identifiedBy since this is now part of taxonObservation_
fix: inputs/CVS/observation_community/create.sql: communityName: populate from commConcept.commName instead, because commInterpretation.commname is not always populated. this requires left-joining to commConcept.
inputs/CVS/observation_community/map.csv: updated output column names to new input column names, to avoid later output column collisions
inputs/CVS/observation_community/header.csv, map.csv: updated input column names for cvs.~.clean_up.sql renamings
schemas/vegbien.sql: provider_count_view: source totals: use the much faster query developed for Brad (wiki.vegpath.org/VegBIEN_FAQ#from-Brad-on-2013-12-4), which avoids the need to do a GROUP BY on all of analytical_stem. eventually, we will want to apply the same optimization to the first publisher subtotals.
inputs/CVS/cvs.~.clean_up.sql: commClass, commConcept fields: prepend table name to avoid inter-table collisions upon join
added inputs/CVS/observation_community/, as for VegBank
inputs/CVS/cvs.~.clean_up.sql: commClass.dba_src_ID: prepend table name to avoid inter-table collisions upon join
added inputs/CVS/observationContributor_/, which adds the people collecting the plot
inputs/CVS/cvs.~.clean_up.sql: observationContributor.dba_src_ID: prepended table name to avoid collision when left-joining to party
bugfix: inputs/input.Makefile: %/header.csv: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
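  the underlying shell pattern (illustrative; produce_header stands in for the recipe's actual pipeline):
      # pipefail makes a pipeline's exit status that of its first failing stage,
      # so errexit can stop the recipe instead of letting the error scroll by
      set -o errexit -o pipefail
      produce_header "$table" | tr -d '\r' >"$table/header.csv"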
fix: inputs/CVS/taxonObservation_/create.sql: mapped identifiedBy, which involves joining to party
inputs/CVS/cvs.~.clean_up.sql: don't rename taxonInterpretation.PARTY_ID, so that this can be USING-joined to party in inputs/CVS/taxonObservation_/create.sql
schemas/vegbien.ERD.mwb: regenerated exports
bin/map: support param start="", which indicates the default value. this fixes a bug in inputs/input.Makefile $(restart_row), which outputs "" if an explicit starting row is not found.
inputs/CVS/^taxon_observation.**.sample/map.csv: synced output columns to input columns (which removes the extra *s)
fix: inputs/CVS/plot_/postprocess.sql: locality: include the site name (authorLocation), because this is part of the unique specification of the place that was sampled, and Bob wants this to be included in VegBIEN
inputs/CVS/^taxon_observation.**.sample/create.sql: removed parentLocationID, since this is unused in CVS
bugfix: inputs/input.Makefile: `%/install: %/create.sql`: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
inputs/VegBank/plot/postprocess.sql: locality: include the site name (authorlocation), because this is part of the unique specification of the place that was sampled
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: run the two commands in errexit mode so that the datasource does not incorrectly have the temp suffix removed if the import command exited with an error
fix: inputs/CVS/taxon_observation.**/map.csv: omit authorPlantName because it is not specific to the taxonInterpretation row (this is in a separate taxonInterpretation for the original determination instead)
web/links/index.htm: updated to match the current Firefox bookmarks. PostgreSQL: added links for troubleshooting out-of-memory errors, which show up (cryptically) as "The database system is in recovery mode" errors in processes running at the time the out-of-memory condition occurred.
schemas/postgresql.conf: work_mem: documented that this seemingly small value is multiplied by max_connections, i.e. 256 MB * 100 = ~26 GB, which approaches total memory (32 GB)
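  as an illustrative excerpt (values from this entry; note that work_mem is actually per sort/hash operation, so a single connection can use several multiples of it):
      # postgresql.conf
      max_connections = 100
      work_mem = 256MB    # 256 MB * 100 connections = ~26 GB, of 32 GB total RAM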
fix: inputs/CVS/plot_/map.csv: PARENT_ID: remapped to UNUSED, to clarify that subplots are not implemented through this field
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: added command to remove the temp suffix from the source table entry, which is not automatic for importing a specific table (only for importing the entire datasource, at the end of which the datasource is considered completely imported and ready to overwrite any previous import)
inputs/input.Makefile: scrub: clarified that using & (background process) also ignores TNRS errors (the primary purpose of &, of course, is to run asynchronously)
bugfix: schemas/Makefile: $(confirmRmPublicSchema): only prompt to delete the schema if it actually exists. this avoids prompting to remove a non-existent schema at the beginning of bin/import_all, which requires user attention. since bin/import_all is often run with a delayed start (e.g. to wait for a staging table reinstall to complete), the user may not be at the terminal when this message is displayed, and without this fix, the import would be prevented from running until they return.
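  a shell sketch of the guard (hypothetical; the real check lives in schemas/Makefile's $(confirmRmPublicSchema)):
      # prompt only when the $schema actually exists, so an unattended run (e.g. a
      # delayed-start bin/import_all) is not blocked waiting on a no-op prompt
      if pg_schema_exists; then # reuses the sketch above; $schema set by caller
          read -p "remove schema $schema? (y/n) " response
          test "$response" = y # under errexit, aborts unless confirmed
      fi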
inputs/.geoscrub/geoscrub_output/run: import() runtime: added starscream runtime (20 min)
planning/timeline/timeline.2013.xls: updated for progress
inputs/.geoscrub/geoscrub_output/run: documented import() runtime (15 min)
inputs/.geoscrub/Source/map.csv: source__modified_date: updated for current run
**/new_terms.csv, unmapped_terms.csv updated (using `make missing_mappings`)
/README.TXT: Full database import: documented that `make schemas/reinstall` requires sudo access
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: updated upload time (30 s)
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: export_(): updated runtime (25 s)
lib/sh/util.sh: import_vars: don't overwrite vars that are already defined, to allow the caller to specify their own values for the vars to create. callers that relied on the overwriting behavior must reverse the order of their use_* commands, so that the higher-precedence use_* is applied first and the later one only supplies defaults for it.
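  a minimal sketch of the no-overwrite rule only (the real import_vars takes its inputs differently; the name=value convention here is assumed):
      import_vars() # usage: import_vars var1=value1 var2=value2...
      {
          local assignment var
          for assignment in "$@"; do
              var="${assignment%%=*}"
              # ${!var+isset} expands non-empty iff $var is already defined:
              # keep the caller's value and use ours only as a default
              test "${!var+isset}" || eval "$assignment"
          done
      }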
derived/biengeo/README.txt: updated geoscrub.sh runtime
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: make(): derived/biengeo/geoscrub.sh: documented runtime (2.5 h)
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: don't connect to DB as the root user, because this is not needed now that the geoscrub schema is owned by the bien user. this avoids a sudo password prompt at the end of the geoscrubbing run.
planning/timeline/timeline.2013.xls: rescheduled tasks
bugfix: inputs/input.Makefile: $(import): except in a full-database import, errexit so that the import will stop on an error and not let it scroll by
added inputs/CVS/^taxon_observation.**.sample/, used for the extract. note that the column list is slightly different than for VegBank.
inputs/CVS/taxonObservation_/map.csv: removed taxonObservation_-- prefix from terms that do not need to be table-specific (like for VegBank)
fix: inputs/CVS/taxonObservation_/map.csv: plantConcept_ columns: synced input and output column names to their names in plantConcept_