fix: inputs/CVS/taxonObservation_/create.sql: mapped identifiedBy, which involves joining to party
inputs/CVS/cvs.~.clean_up.sql: don't rename taxonInterpretation.PARTY_ID, so that this can be USING-joined to party in inputs/CVS/taxonObservation_/create.sql
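For illustration, a USING join requires the join column to have the identical name in both tables, which is why PARTY_ID must keep its original name here. A minimal sketch (the party column selected is hypothetical):

    SELECT ti.*, p."PARTYNAME" AS "identifiedBy"   -- hypothetical name column in party
    FROM "taxonInterpretation" ti
    LEFT JOIN "party" p USING ("PARTY_ID");        -- works only if both tables still have PARTY_ID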
schemas/vegbien.ERD.mwb: regenerated exports
bin/map: support param start="", which indicates the default value. this fixes a bug in inputs/input.Makefile $(restart_row), which outputs "" if an explicit starting row is not found.
inputs/CVS/^taxon_observation.**.sample/map.csv: synced output columns to input columns (which removes the extra *s)
fix: inputs/CVS/plot_/postprocess.sql: locality: include the site name (authorLocation), because this is part of the unique specification of the place that was sampled, and Bob wants this to be included in VegBIEN
inputs/CVS/^taxon_observation.**.sample/create.sql: removed parentLocationID, since this is unused in CVS
bugfix: inputs/input.Makefile: `%/install: %/create.sql`: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
inputs/VegBank/plot/postprocess.sql: locality: include the site name (authorlocation), because this is part of the unique specification of the place that was sampled
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: run the two commands in errexit mode so that the datasource does not incorrectly have the temp suffix removed if the import command exited with an error
fix: inputs/CVS/taxon_observation.**/map.csv: omit authorPlantName because it is not specific to the taxonInterpretation row (this is in a separate taxonInterpretation for the original determination instead)
web/links/index.htm: updated to Firefox bookmarks. PostgreSQL: added links for troubleshooting out-of-memory errors, which show up (cryptically) as "The database system is in recovery mode" errors in processes running at the time the out-of-memory condition occurred.
schemas/postgresql.conf: work_mem: documented that this seemingly small number is multiplied by max_connections, i.e. 256 MB * 100 ≈ 26 GB, which approaches total memory (32 GB)
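To sanity-check that arithmetic on a running server, the standard SHOW commands report both settings:

    SHOW work_mem;         -- e.g. 256MB, allocated per sort/hash operation
    SHOW max_connections;  -- e.g. 100, so the worst case is roughly 256 MB * 100 ≈ 26 GB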
fix: inputs/CVS/plot_/map.csv: PARENT_ID: remapped to UNUSED, to clarify that subplots are not implemented through this field
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: added command to remove the temp suffix from the source table entry, which is not automatic for importing a specific table (only for importing the entire datasource, at the end of which the datasource is considered completely imported and ready to overwrite any previous import)
inputs/input.Makefile: scrub: clarified that using & (background process) also causes TNRS errors to be ignored (the primary purpose of &, of course, is to run the command asynchronously)
bugfix: schemas/Makefile: $(confirmRmPublicSchema): only prompt to delete the schema if it actually exists. this avoids prompting to remove a non-existent schema at the beginning of bin/import_all, which requires user attention. since bin/import_all is often run with a delayed start (e.g. to wait for a staging table reinstall to complete), the user may not be at the terminal when this message is displayed, and without this fix, the import would be prevented from running until they return.
inputs/.geoscrub/geoscrub_output/run: import() runtime: added starscream runtime (20 min)
planning/timeline/timeline.2013.xls: updated for progress
inputs/.geoscrub/geoscrub_output/run: documented import() runtime (15 min)
inputs/.geoscrub/Source/map.csv: source__modified_date: updated for current run
**/new_terms.csv, unmapped_terms.csv updated (using `make missing_mappings`)
/README.TXT: Full database import: documented that `make schemas/reinstall` requires sudo access
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: updated upload time (30 s)
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: export_(): updated runtime (25 s)
lib/sh/util.sh: import_vars: don't overwrite vars that are already defined, to allow the caller to specify their own values for the vars to create. this requires callers that relied on the overwriting behavior to reverse the order in which they run use_* commands, so that the higher-precedence use_* is applied first and the other one then only supplies default values for it.
derived/biengeo/README.txt: updated geoscrub.sh runtime
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: make(): derived/biengeo/geoscrub.sh: documented runtime (2.5 h)
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: don't connect to DB as the root user, because this is not needed now that the geoscrub schema is owned by the bien user. this avoids a sudo password prompt at the end of the geoscrubbing run.
planning/timeline/timeline.2013.xls: rescheduled tasks
bugfix: inputs/input.Makefile: $(import): except in a full-database import, errexit so that the import will stop on an error and not let it scroll by
added inputs/CVS/^taxon_observation.**.sample/, used for the extract. note that the column list is slightly different from VegBank's.
inputs/CVS/taxonObservation_/map.csv: removed taxonObservation_-- prefix from terms that do not need to be table-specific (like for VegBank)
fix: inputs/CVS/taxonObservation_/map.csv: plantConcept_ columns: synced input and output column names to their names in plantConcept_
inputs/CVS/plantConcept_/map.csv: removed plantConcept_-- prefix from terms that do not need to be table-specific (like for VegBank)
lib/sh/db.sh: pg_table_exists(): use `SELECT NULL` instead of `SELECT *` to avoid a long column list cluttering up the log output
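The underlying idea is that an existence probe does not need any column values; a minimal sketch of that kind of query (the actual shell wrapper lives in lib/sh/db.sh):

    -- errors if the table is missing, otherwise returns nothing worth logging
    SELECT NULL FROM "some_schema"."some_table" LIMIT 0;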
lib/runscripts/table.run: table_make_install(): simplified the setting of $noclobber, since a separate command is no longer needed for the case when the log already exists
bugfix: lib/runscripts/table.run: need to errexit the make target, so that errors in the SQL install scripts are not suppressed. this requires pre-checking if the table exists (using new pg_table_exists), so that the install target's errexit does not then need to be suppressed for cases when the table already exists.
lib/sh/db.sh: added pg_table_exists()
planning/timeline/timeline.2013.xls: added timespan dots ◦ for supertasks
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
planning/timeline/timeline.2013.xls: hid previous weeks
planning/timeline/timeline.2013.xls: consolidated legend to take up fewer columns and avoid repeating labels
bugfix: inputs/CVS/import_order.txt: added taxon_observation.**. rescheduled tasks.
bugfix: inputs/CVS/import_order.txt: added taxon_observation.**
inputs/CVS/: don't import joined tables, because they are now imported in the taxon_observation.** left-join instead
inputs/CVS/: added taxon_observation.** left-join of the tables, using the steps at http://wiki.vegpath.org/Left-joining_a_datasource. this involves renaming taxonOccurrenceID->taxonOccurrenceID__overall_plot so that it can then be combined with aggregateOrganismObservationID to create the full taxonOccurrenceID (as in VegBank).
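A rough sketch of the kind of left-join and ID recombination described (table and column names are illustrative, not the actual create.sql):

    SELECT
        "taxonOccurrenceID__overall_plot"||'.'||"aggregateOrganismObservationID"
            AS "taxonOccurrenceID",   -- rebuild the full ID, as in VegBank
        *
    FROM "taxonObservation_"
    LEFT JOIN "stemCount_" USING ("TAXONOBSERVATION_ID");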
inputs/CVS/stemCount_/map.csv: remapped stratum_ID->*STRATUM_ID so it would match up with stratum.*STRATUM_ID
inputs/CVS/taxonObservation_/map.csv: mapped TAXONINTERPRETATION_ID to identificationID
added inputs/CVS/stratum/
added inputs/CVS/stratumType/
inputs/CVS/: prepended the table name to each column name to prevent column collisions, using the steps at http://wiki.vegpath.org/Left-joining_a_datasource
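Whether done in the staging-table SQL or via map.csv renames, the prefixing amounts to renames like the following (column shown is illustrative; the prefix style matches the taxonObservation_-- prefix mentioned above):

    ALTER TABLE "taxonObservation_" RENAME COLUMN "AUTHORPLANTNAME"
        TO "taxonObservation_--AUTHORPLANTNAME";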
bugfix: inputs/CVS/plantConcept_/map.csv: PLANTCONCEPT_ID: remapped without * prefix so that the USING join in inputs/CVS/taxonObservation_/create.sql would continue to work
inputs/CVS/taxonObservation_/header.csv, map.csv: updated to use plantConcept_ renamed columns
inputs/CVS/: switched to new-style import, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource
inputs/CVS/taxonObservation_/map.csv: updated for CVS refresh
inputs/CVS/taxonObservation_/map.csv: updated input column names to plantConcept_ renamings
inputs/CVS/plantConcept_/header.csv, map.csv: updated for CVS refresh
fix: inputs/CVS/plot_/map.csv: removed filter-less collisions. note that the name county_ is assigned in plot_/create.sql, not cvs.~.clean_up.sql as one might expect, because this is a generated column.
fix: inputs/CVS/plot_/map.csv: removed filter-less collisions
fix: inputs/CVS/taxonObservation_/map.csv: moved inherited derived columns to right after the other columns, because for this table, these are actually real input columns rather than appended derived columns. the column order must match header.csv to avoid mis-renamings.
inputs/CVS/taxonObservation_/map.csv: removed filter functions, which are now performed in plantConcept_
inputs/CVS/taxonObservation_/postprocess.sql: added _parent index to facilitate joins
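A minimal sketch of such an index (the actual parent-key column is whatever the join uses; the name here is hypothetical):

    CREATE INDEX ON "taxonObservation_" ("OBSERVATION_ID");  -- hypothetical parent-key column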
fix: inputs/CVS/taxonObservation_/header.csv, map.csv: updated for CVS refresh and addition of plantConcept_ derived columns
inputs/CVS/stemCount_/: translated filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#1-Translate-filters-to-postprocessing-derived-columns. note that the inserted row count changes, because there is now a primary key (which the table is auto-sorted by) where previously there was none.
web/links/index.htm: updated to Firefox bookmarks. added API writing links, including the best quotes from a Google developer's PowerPoint on the topic.
schemas/vegbien.sql: collected_dates: documented runtime (2.5 min)
schemas/vegbien.sql: collected_date_min: replaced with collected_dates view that lists all dates we have, so that we can determine which of these may be valid. it turns out that we have data collected from very far back (as early as the year 1); these cannot merely be misparsed 2-digit years, because PostgreSQL only parses such early years when they are written with 4 digits.
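A minimal sketch of what such a view can look like (source relation and column names are illustrative; the real definition is in schemas/vegbien.sql):

    CREATE VIEW collected_dates AS
    SELECT datasource, collectiondate    -- hypothetical columns
    FROM source_collection_dates         -- hypothetical source relation
    ORDER BY collectiondate;             -- year-1 and other suspect dates sort to the top for review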
added planning/publication/KNB/submission.published.old_site.maff, submission.published.eml.xml from old KNB site
added planning/publication/KNB/submission.*
bugfix: schemas/vegbien.sql: collected_date_min: exclude invalid dates < 1000-01-01
bugfix: schemas/vegbien.sql: collected_date_min: exclude -infinity
schemas/vegbien.sql: added collected_date_min view
inputs/CVS/plot_/: translated column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#1-Translate-filters-to-postprocessing-derived-columns
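Translating a filter to a postprocessing derived column generally means materializing the filter's output as an extra column after the staging table is loaded, e.g. (made-up column and conversion, not the actual plot_ filters):

    ALTER TABLE "plot_" ADD COLUMN "area_ha" double precision;
    UPDATE "plot_" SET "area_ha" = "AREA"::double precision/10000;  -- hypothetical m^2 -> ha conversion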
/README.TXT: Full database import: verifying import: In PostgreSQL: don't include current values of the datasource counts, etc., because these may change and should always be re-checked at wiki.vegpath.org/VegBIEN_contents
inputs/CVS/plot_/postprocess.sql: added pkey from the primary joined table
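A minimal sketch of adding that pkey (the key column taken from the primary joined table is hypothetical here):

    ALTER TABLE "plot_" ADD PRIMARY KEY ("PLOT_ID");  -- hypothetical pkey column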
inputs/CVS/plot_/map.csv: documented assumptions about the units of fields
inputs/CVS/plot_/map.csv: documented assumptions about the units and meaning of numeric codes for fields
inputs/CVS/plantConcept_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#1-Translate-filters-to-postprocessing-derived-columns
web/links/index.htm: updated to Firefox bookmarks. BIEN: added DataONE compatibility links.
inputs/CVS/plantConcept_/postprocess.sql: added pkey from the primary joined table
inputs/CVS/observation_/postprocess.sql: added pkey from the primary joined table. added _parent index to facilitate joins.
fix: inputs/input.Makefile: $(svnFilesGlob): removed schema and PDF files, since these are owned by the data provider and should not be in the repository that gets open-sourced
bugfix: inputs/CVS/observation_/create.sql: only include one soilObs for each observation (using DISTINCT ON), rather than just left-joining them
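DISTINCT ON keeps exactly one row per observation instead of multiplying rows by the number of matching soilObs; a hedged sketch (names illustrative):

    SELECT DISTINCT ON (o."OBSERVATION_ID") *
    FROM "observation" o
    LEFT JOIN "soilObs" USING ("OBSERVATION_ID")
    ORDER BY o."OBSERVATION_ID";  -- add secondary sort keys to control which soilObs row wins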
inputs/: removed SALVIAS-CSV, because this is a sample datasource which was only there to test the mapping process. it should not be adding records that duplicate SALVIAS, nor should it take up maintenance effort (switching to new-style import, updating to match SALVIAS, etc.).
planning/timeline/timeline.2013.xls: removed the weeks of 12/23, 12/30 because these are during winter break. rescheduled tasks.
inputs/.TNRS/schema.sql: updated runtime (30 min) and rowcount (+2 million)
fix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: set this to false if Taxonomic_status is Invalid
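The check itself is just a comparison on the TNRS Taxonomic_status field; conceptually something like (a simplified illustration, not the actual function body; the tnrs table name is assumed):

    -- simplified: tnrs_populate_fields() combines this with its other match criteria
    SELECT ("Taxonomic_status" <> 'Invalid') AS is_valid_match FROM tnrs;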
schemas/vegbien.sql: analytical_stem_view: added taxonomic_status. notice that PostgreSQL 9.3 puts each view column on a separate line, making it much easier to review the svn diff!