updated inputs/datasource_release_status.xlsx
added inputs/datasource_release_status.xlsx, an export of the Google spreadsheet at https://docs.google.com/spreadsheet/ccc?key=0ArZXrTAXd-TYdDRRb2RxYi11TWZrQVh5bVdKOURCeFE
planning/timeline/timeline.2013.xls: updated for progress
bugfix: schemas/vegbien.sql: location: use the place_id from the parent location when no place_id is specified. this fixes a bug in analytical_stem_view, which used the parent location's place_id because the sublocation's was sometimes missing; however, when sublocations each had their own place information, it was the parent's place_id that was missing instead. this way, place_id is always available directly on the sublocation, populated from the parent location if needed.
bugfix: schemas/vegbien.sql: location: added place_id, which is autopopulated from the current locationplace. join on this in plot.**, to avoid a 1:many join when a location has multiple locationplaces.
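a hypothetical sketch of the fallback described in the two location entries above (function, table, and column names are assumptions; see schemas/vegbien.sql for the real trigger):
    CREATE OR REPLACE FUNCTION location_set_place_id() RETURNS trigger
        LANGUAGE plpgsql AS $$
    BEGIN
        IF new.place_id IS NULL THEN
            -- fall back to the parent location's place_id so that it is always
            -- available directly on the sublocation
            SELECT place_id INTO new.place_id
            FROM location WHERE location_id = new.parent_location_id;
        END IF;
        RETURN new;
    END;
    $$;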
bugfix: schemas/vegbien.sql: locationevent_unique_within_parent_by_location unique index: need COALESCE around location_id since it's nullable
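the pattern, sketched with an abbreviated column list (the real index has more columns, and the column names and sentinel value here are assumptions). NULLs never compare equal in a unique index, so without COALESCE, rows with a NULL location_id would never collide with each other:
    CREATE UNIQUE INDEX locationevent_unique_within_parent_by_location
        ON locationevent (parent_location_id, (COALESCE(location_id, 2147483647)));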
fix: inputs/CVS/^taxon_observation.**.sample/: added _no_import because this table duplicates part of what's imported from taxon_observation.**
bugfix: inputs/VegBank/plot/: added _no_import because this table is left-joined and should not be imported separately
bugfix: inputs/{.NCBI,CTFS}/*.src/: added _no_import because these tables are left-joined and should not be imported separately
inputs/import.stats.xls: removed table names from datasources where only one table is imported
fix: inputs/import.stats.xls: removed deleted tables from current import
inputs/import.stats.xls: updated import times
updated backups/TNRS.backup.md5
added backups/vegbien.r11786.backup.md5
/README.TXT: Full database import: backups: added step to download backup to local machine
bugfix: /Makefile: install: need to run inputs/download in live mode so that the flat files are actually downloaded
lib/common.Makefile: added %/live, for use with `make inputs/download`
planning/timeline/timeline.2013.xls: rescheduled tasks
/README.TXT: Full database import: In PostgreSQL: documented that the tables to check are located in the r# schema, not public
planning/timeline/timeline.2013.xls: datasource validations: reordered datasources according to Brian Enquist's new validation order (wiki.vegpath.org/Spot-checking_validation_order)
fix: schemas/vegbien.sql: analytical_specimen: added specimens-related columns that are in analytical_plot
inputs/GBIF/raw_occurrence_record_plants/map.csv: row_num: remapped to plain *row_num, like the other datasources that have this field
inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: Remove institutions that we have direct data for: rerun time: noted that this is only fast after manual vacuuming of the table (to remove the deleted rows from the index). autovacuum apparently does not run, although it should.
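e.g. (assuming the staging schema and table keep the datasource/subdirectory names):
    VACUUM "GBIF".raw_occurrence_record_plants; -- prunes the deleted rows from the index so the rerun stays fast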
planning/timeline/timeline.2013.xls: hid previous weeks
planning/timeline/timeline.2013.xls: added timespan dots ◦ for supertasks
planning/timeline/timeline.2013.xls: legend: changed to a movable text box to avoid needing to erase and repopulate the header columns with the legend cells
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
inputs/GBIF/raw_occurrence_record_plants/test.xml.ref: reran test, which added yearCollected/monthCollected/dayCollected
inputs/CVS/plantConcept_/create.sql: documented runtime (3 min)
inputs/CTFS/*.src/: added test.xml.ref
inputs/CTFS/*.src/: added VegBIEN.csv
bugfix: inputs/CTFS/TaxonOccurrence*/map.csv: things mapped to taxonObservationID: remapped to taxonOccurrenceID since taxonObservationID is not mapped to anything in VegBIEN (denormalized VegCore doesn't distinguish between taxon occurrences and taxon observations of them)
bugfix: inputs/ARIZ/~.clean_up.sql: prevent "column already exists" errors when there is an input column with the same name as an output column
bugfix: lib/runscripts/datasrc_dir.run: import(): don't run `sql/install` if the schema already exists, because this will try to rerun all the schema-creation queries. note that this idempotent functionality was not provided by the `make .../install` target that was previously used (idempotency is new with new-style import).
bugfix: schemas/vegbien.sql: updated for renamed county_centroids column names
inputs/.geoscrub/import_order.txt: added county_centroids so that it would be installed by new-style import
bugfix: lib/runscripts/datasrc_dir.run: import(): can't run `datasrc_make reinstall` anymore because this now defers to the runscript for new-style import datasources (which was done so that `make .../install` properly reinstalls all the datasources). instead, call the applicable make targets manually (there are just 2 of them).
inputs/FIA/TREE/run: documented import() runtime (1.5 h), which includes table cleanup runtime (1 h)
bugfix: bin/pg_dump_limit: support errexit by ignoring the nonzero exit status that grep returns when it doesn't match anything
inputs/GBIF/raw_occurrence_record_plants/run: updated import() runtime (same), documented table cleanup runtime (1.5 h)
inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: CREATE INDEX ... specimenHolderInstitutions: documented runtime (45 min)
inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: Remove institutions that we have direct data for: documented runtime (3.5 min)
/README.TXT: Datasource setup: added steps to back up e-mails
bugfix: inputs/CTFS/import_order.txt: added *.src so that these would be installed under new-style import as well. this means that their columns will now be automapped, requiring the names to be renamed to VegCore names in */create.sql. note that VegCore taxonOccurrenceID has been renamed to taxonObservationID since this was last run.
inputs/.geoscrub/run: documented import() runtime (20 min)
bugfix: inputs/.NCBI/import_order.txt: added nodes.src, names.src so that these would be installed under new-style import as well. this means that their columns will now be automapped, requiring the names to be renamed to VegCore names in nodes/create.sql.
fix: /Makefile: inputs/reinstall: commented out to avoid a cascade of "overriding commands for target" warnings. this target will now revert to the default uninstall-then-install sequence rather than the simultaneous-reinstall optimization (which can still be invoked manually).
lib/sh/local.sh: public_schema_exists(): use a higher log_level for pg_schema_exists, to avoid all the verbose output involved in running the query
bugfix: lib/sh/local.sh: public_schema_exists(): can no longer use psql_script_vegbien for this, because using `SET search_path` (called by psql_script_vegbien) with a schema that does not exist no longer produces an error. instead, use new pg_schema_exists(), which uses a different command that does produce an error if the schema does not exist.
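one query with that property, offered as an assumption about what pg_schema_exists() might run (the actual command in lib/sh/db.sh may differ):
    -- errors out with `schema "some_schema" does not exist` when the schema is missing,
    -- unlike SET search_path, which now succeeds silently
    SELECT has_schema_privilege('some_schema', 'USAGE');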
lib/sh/db.sh: added pg_require_schema()
lib/sh/util.sh: stderr2stdout(): documented that this redirects fd 2->1 and log_fd (but not back to 2)
bugfix: lib/sh/util.sh: stderr2stdout(): use `command` before tee, which re-filters log_fd so that stderr itself is also filtered. this allows log-filtering out an otherwise-confusing benign error when using e.g. stderr_matches().
lib/sh/util.sh: added not(), for use in prefixing wrapped commands
lib/sh/db.sh: added pg_schema_exists()
lib/sh/util.sh: added stderr_matches()
lib/sh/util.sh: documented that fds 2x/3x should not be used because our scripts use these, unlike 1x, which is used internally by the shell
lib/sh/util.sh: added stdout_contains()
lib/sh/util.sh: added stderr2stdout()
fix: lib/sh/db.sh: pg_table_exists(): usage: documented that $table is actually required for this function
bugfix: inputs/input.Makefile: install: for new-style datasources, use the associated runscript instead (the old-style install target will not do everything that's needed for a new-style datasource)
bugfix: /Makefile: moved inputs/reinstall to end so it overrides the corresponding subdir forwarding target
bugfix: /Makefile: inputs/install: don't run bin/reinstall_all here, because /install targets are supposed to be idempotent, forward-only actions that don't first remove existing data
bugfix: /Makefile: postgres-Darwin: don't prepend $(MAKE) to $(postgresReload-Darwin), because this is now a list of commands
bugfix: /Makefile: config: ignore errors if ~/bin/make exists
inputs/FIA/COND/postprocess.sql: filtering formula: documented that this was created by Brad, and provided the URL to it on nimoy
inputs/CVS/cvs.~.clean_up.sql: remove plot.realLatitude/realLongitude, since this is private data that should not be publicly visible
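a minimal sketch, assuming the staging table is named plot:
    ALTER TABLE plot
        DROP COLUMN "realLatitude",
        DROP COLUMN "realLongitude"; -- private coordinates must not reach the public DB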
bin/make_analytical_db: don't regenerate family_higher_plant_group from the NCBI data because the lookup table is now prepopulated as part of the schema
bin/import_all: don't import NCBI because the lookup table is now prepopulated as part of the schema
schemas/vegbien.sql: include the family_higher_plant_group lookup table values so that these don't need to be regenerated from the NCBI nodes whenever the DB is reloaded
schemas/vegbien.sql: taxonlabel_update_ancestors(): don't do an index scan if the value being scanned for is NULL. this supports testing this function without the indexes in place, since extra full-table scans for NULL values would otherwise skew the results: one can determine whether the function is actually using the indexes by turning them off and seeing if the runtime changes.
schemas/util.sql: explain2table(): documented usage: `PERFORM util.explain2table($$query$$);`
schemas/util.sql: explain2table(): by default, use the util.explain table
schemas/util.sql: added explain table
schemas/util.sql: added explain2notice()
schemas/util.sql: added explain2str()
schemas/util.sql: added explain2table()
schemas/util.sql: added explain()
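a minimal sketch of what the simplest of these helpers might look like (an assumption, not the actual util.sql definitions):
    CREATE OR REPLACE FUNCTION util.explain(sql text) RETURNS SETOF text
        LANGUAGE plpgsql AS $$
    BEGIN
        RETURN QUERY EXECUTE 'EXPLAIN '||sql;
    END;
    $$;
    -- usage: SELECT * FROM util.explain($$SELECT 1$$);
    -- explain2str()/explain2notice()/explain2table() would then wrap this to
    -- concatenate, RAISE NOTICE, or INSERT the plan lines, respectively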
schemas/vegbien.sql: taxonlabel_update_ancestors(): don't create a performance-intensive nested transaction (EXCEPTION block) for each INSERT, because there should no longer be duplicate ancestors, so it's OK to abort the whole transaction if this assertion fails
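roughly the pattern that was removed (a fragment of the PL/pgSQL body; table and column names are assumptions):
    -- before: each ancestor INSERT ran in its own subtransaction
    --     BEGIN
    --         INSERT INTO taxonlabel_relationship (descendant_id, ancestor_id) VALUES (...);
    --     EXCEPTION WHEN unique_violation THEN NULL; -- silently skip duplicate ancestors
    --     END;
    -- after: a plain INSERT; a duplicate ancestor now aborts the whole transaction
    INSERT INTO taxonlabel_relationship (descendant_id, ancestor_id)
    VALUES (new.taxonlabel_id, ancestor.taxonlabel_id);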
bugfix: schemas/vegbien.sql: taxonlabel_update_ancestors_on_{insert,update}(): only use either the matched taxon's ancestors or the parent's ancestors, to avoid issues related to duplication between these two ancestors lists. this also fixes a bug where the 2nd taxonlabel_update_ancestors() call assumes that the existing ancestors are for the old parent, when in fact they have actually just been set to those for the new matched taxon (which horribly confuses taxonlabel_update_ancestors()).
schemas/vegbien.sql: _taxonlabel_set_parent_id(): just use a plain UPDATE statement, to avoid the significant parsing and stringification overhead of EXECUTE and quote_nullable(). it is not clear that EXECUTE is actually necessary to avoid caching the query plan, because the cache should be invalidated automatically when the table's ANALYZE statistics are regenerated.
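the gist of the change, with assumed variable and column names (a fragment of the PL/pgSQL body, not the actual function):
    -- before: dynamic SQL, re-stringified and re-parsed on every call
    --     EXECUTE 'UPDATE taxonlabel SET parent_id = '||quote_nullable(new_parent_id)
    --         ||' WHERE taxonlabel_id = '||quote_nullable(label_id);
    -- after: a plain statement; PostgreSQL invalidates the cached plan when the
    -- table's ANALYZE statistics are regenerated
    UPDATE taxonlabel SET parent_id = new_parent_id WHERE taxonlabel_id = label_id;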
schemas/vegbien.sql: removed unused function _taxonlabel_set_matched_label_id(), which refers to obsolete fields
schemas/vegbien.sql: synced to DB (the view renderer apparently changed the text of a view)
backups/TNRS.backup: saved a copy as backups/TNRS.2013-11-18.backup
bugfix: bin/import_all: run in errexit mode, so that if the user cancels reinstalling of the import schema, the script will then abort instead of continuing and using the wrong schema
bugfix: schemas/Makefile: %/uninstall: always confirm before removing an existing schema, not just for public and r*, because an auxiliary schema might also be used as $version and reinstalled by bin/import_all
schemas/vegbien.sql: analytical_stem_view: scrubbed_author: removed empty COALESCE around value (left over from when multiple values needed to be combined for many TNRS fields)
inputs/CVS/^taxon_observation.**.sample/create.sql: uncommented identifiedBy since this is now part of taxonObservation_
fix: inputs/CVS/observation_community/create.sql: communityName: populate from commConcept.commName instead, because commInterpretation.commname is not always populated. this requires left-joining to commConcept.
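roughly (identifier names are assumptions; the actual create.sql differs):
    SELECT "commInterpretation".*, "commConcept"."commName" AS "communityName"
    FROM "commInterpretation"
    LEFT JOIN "commConcept"
        ON "commConcept"."COMMCONCEPT_ID" = "commInterpretation"."COMMCONCEPT_ID";
    -- LEFT JOIN rather than JOIN so that rows without a matching commConcept are kept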
inputs/CVS/observation_community/map.csv: updated output column names to new input column names, to avoid later output column collisions
inputs/CVS/observation_community/header.csv, map.csv: updated input column names for cvs.~.clean_up.sql renamings
schemas/vegbien.sql: provider_count_view: source totals: use the much faster query developed for Brad (wiki.vegpath.org/VegBIEN_FAQ#from-Brad-on-2013-12-4), which avoids the need to do a GROUP BY on all of analytical_stem. eventually, we will want to apply the same optimization to the first publisher subtotals.
inputs/CVS/cvs.~.clean_up.sql: commClass, commConcept fields: prepend table name to avoid inter-table collisions upon join
added inputs/CVS/observation_community/, as for VegBank
inputs/CVS/cvs.~.clean_up.sql: commClass.dba_src_ID: prepend table name to avoid inter-table collisions upon join
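e.g. (whether the prefix uses a dot or an underscore, and the exact quoting, are assumptions):
    ALTER TABLE "commClass" RENAME COLUMN "dba_src_ID" TO "commClass_dba_src_ID";
    -- without the prefix, joining commClass to commConcept would produce two
    -- identically named output columns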
added inputs/CVS/observationContributor_/, which adds the people who collected each plot