import_times: Separate out the postprocessing logs (e.g. public.unscrubbed_taxondetermination_view), as the import times in these logs are not aggregated together (each input has its own run of the postprocessing script)
root Makefile: Datasources: import: Use new import_scrub instead of import (input.Makefile)
import_all: Use new import_scrub (input.Makefile) instead of import, which avoids needing to start background processes for tnrs-remake and scrub-remake
inputs/.TNRS/public.unscrubbed_taxondetermination_view/scrub.make: Fixed bug where tnrs.make's lockfile needs to be used instead, because importing can't happen while tnrs.make is scrubbing. tnrs.make leaves tnrs in an incomplete state while running, because the accepted names are parsed after their matched names, so using a separate lockfile would cause some accepted names to be missing.
input.Makefile: Import to VegBIEN: Added import_scrub, which runs `make scrub` after the import
root Makefile: Datasources: Added scrub, which runs tnrs-remake and scrub-remake
inputs/.TNRS/*/*.make: Only allow one instance of the script to be running at any time, by using new waitself
waitpid, lockfile: Changed $interval default to 5s to work with smaller imports, where less waiting is needed
Added waitself
bin/lockfile: Include the PID in the lockfile to avoid the need to manually remove lockfiles. On Mac, this requires using shlock instead of lockfile.
Added bin/lockfile
Added pid2name
Added name2pids
waitpid: Use `ps` instead of /proc to also work on Mac
inputs/.TNRS/tnrs/tnrs.make: Fixed bug where special handling is needed to support being run as a .make script
inputs/.geoscrub/_src/README.TXT: Added dates for e-mails from Jim
inputs/.geoscrub/_src/README.TXT: Added e-mail from Jim about repository with scripts to generate the geoscrub_output table
schemas/vegbien.sql: unscrubbed_taxondetermination_view: Fixed bug where tnrs_accepted.Name_submitted IS NOT NULL needs to be used rather than tnrs_accepted.* IS NOT NULL, because tnrs_accepted.* (which PostgreSQL expands a plain tnrs_accepted reference to) checks each field of the tnrs_accepted tuple rather than whether the tuple itself is NULL
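    Illustration of the semantics (standalone queries, not the view SQL):
        SELECT ROW(1, NULL) IS NOT NULL; -- false: IS NOT NULL tests every field
        SELECT ROW(1, NULL) IS NULL;     -- also false: IS NULL tests every field
    So for a left-joined row variable, test a column that is non-NULL in every
    matched row (here Name_submitted) instead of the row variable itself.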
inputs/.TNRS/schema.sql: Added back tnrs+accepted view, which is useful for debugging the import of the TNRS results
inputs/REMIB/Specimen/postprocess.sql: Added back ARIZ, NY because some REMIB specimens for these datasources are not yet in the datasources themselves
Added inputs/REMIB/Specimen/postprocess.sql to remove institutions that we have direct data for
Placed inputs/REMIB/_archive/ under version control
Added inputs/SpeciesLink/Specimen/postprocess.sql to remove institutions that we have direct data for
Placed inputs/SpeciesLink/_archive/ under version control
input.Makefile: $(import?): Renamed $public_import option to $full_import because it applies to any import of all datasources, not just a public import on vegbiendev
schemas/vegbien.sql: analytical_stem_view: Changed `WHERE COALESCE` to a join condition to enable using the taxondetermination_single_current_determination index, which produces the filtered rows directly. Note that this index will not be used for full-database imports, because the query planner uses hash joins everywhere instead of nested loops.
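    Sketch of the pattern (simplified, hypothetical columns; not the actual view SQL):
        -- before: the COALESCE in WHERE keeps the filter out of the planner's reach
        SELECT count(*)
        FROM taxonoccurrence
        LEFT JOIN taxondetermination USING (taxonoccurrence_id)
        WHERE COALESCE(taxondetermination.iscurrent, true);
        -- after: as a join condition, the partial index
        -- taxondetermination_single_current_determination can supply the
        -- filtered rows directly
        SELECT count(*)
        FROM taxonoccurrence
        LEFT JOIN taxondetermination
            ON taxondetermination.taxonoccurrence_id
                = taxonoccurrence.taxonoccurrence_id
            AND taxondetermination.iscurrent;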
db_xml.py: put_table(): Fixed bug where, for views, start (the OFFSET clause) should not be advanced after each chunk, because views are typically dynamic and will contain a new set of rows after the first set is imported
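    Sketch of why, using the view above:
        SELECT * FROM unscrubbed_taxondetermination_view LIMIT 1000 OFFSET 0;
        -- after these rows are imported, the view no longer contains them
        SELECT * FROM unscrubbed_taxondetermination_view LIMIT 1000 OFFSET 0;
        -- the next chunk is again at OFFSET 0, because the remaining rows
        -- shifted to the front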
sql.py: Added view_exists()
inputs/.TNRS/schema.sql: Removed no longer used tnrs_canon. unscrubbed_taxondetermination_view uses its definition directly instead.
schemas/vegbien.sql: unscrubbed_taxondetermination_view: Added comment from tnrs_canon
schemas/vegbien.sql: unscrubbed_taxondetermination_view: Do the tnrs_canon joins manually instead of using tnrs_canon, to allow PostgreSQL to use a nested loop join on just the needed tnrs rows instead of a hash self-join of all tnrs rows. The query planner is not yet advanced enough to automatically integrate the select on the view into the top-level joins list, which would make this change automatically.
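    Sketch of the rewrite (assumed, simplified join condition):
        -- through the view: a hash self-join of all tnrs rows
        SELECT * FROM tnrs_canon WHERE "Name_submitted" = 'Poa annua';
        -- joins written out: the planner can nested-loop from just the matching rows
        SELECT tnrs.*, tnrs_accepted.*
        FROM tnrs
        LEFT JOIN tnrs tnrs_accepted
            ON tnrs_accepted."Name_submitted" = tnrs."Accepted_name"
        WHERE tnrs."Name_submitted" = 'Poa annua';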
inputs/.TNRS/public.unscrubbed_taxondetermination_view/scrub.make: rowsAdded(): Look at the last 100 rows instead of the last 10, because rows are added to the log file each time the script waits, and the "Inserted # new rows" message must still fall within the tailed rows
inputs/.TNRS/public.unscrubbed_taxondetermination_view/scrub.make: rowsAdded(): Fixed bug where the log file needs to be tested for existence before it's passed to tail, because if tail fails and causes rowsAdded() to return false, that error exit status is indistinguishable from the false that means no rows were added, and the script will keep going
inputs/.TNRS/public.unscrubbed_taxondetermination_view/scrub.make: Fixed bug where special handling is needed to support being run as a .make script
input.Makefile: Editing import: Added unscrub to remove TNRS taxondeterminations
psql_script_vegbien: Added no_query_results option to hide results of calls to void functions
schemas/vegbien.sql: Added delete_scrubbed_taxondeterminations()
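    A minimal sketch of the idea (hypothetical body; the actual definition is
    in schemas/vegbien.sql):
        CREATE FUNCTION delete_scrubbed_taxondeterminations() RETURNS void
            LANGUAGE sql AS $$
            DELETE FROM taxondetermination
            WHERE source_id =
                (SELECT source_id FROM source WHERE shortname = 'TNRS');
        $$;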
root Makefile: python-Darwin: Added instructions to install dateutil for Python 3 as well as Python 2, for use in PL/Python functions
root Makefile: python-Darwin: Added note that Python 2 comes preinstalled
Added inputs/GBIF/Specimen/postprocess.sql to remove institutions that we have direct data for
import_all: Run disown_all after background processes have been created, so that they will not be aborted if the shell exits (e.g. due to a broken connection). Note that with_all processes are automatically disowned as they are created, but other processes, such as after_import, were not.
inputs/.TNRS/schema.sql: Removed no longer used array_to_string(). The IMMUTABLE wrapper is only needed for index conditions and other places that require an IMMUTABLE function.
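    The wrapper pattern, for context (a sketch; names assumed):
        -- the built-in array_to_string() is only STABLE, but expression
        -- indexes require IMMUTABLE functions, hence the wrapper:
        CREATE FUNCTION array_to_string_immutable(text[], text) RETURNS text
            LANGUAGE sql IMMUTABLE
            AS $$ SELECT array_to_string($1, $2) $$;
        -- usable in an index condition:
        -- CREATE INDEX ... ON t ((array_to_string_immutable(col, '; ')));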
input.Makefile: Maps validation: %/new_terms.csv: Filter out terms that map to UNUSED, because these are not mappings that are useful as VegCore synonyms
README.TXT: Data import: Checking free disk space: Updated import schema size to 110GB
Added inputs/Madidi/_README.TXT
new_terms.csv: Regenerated
inputs/Madidi/new_terms.csv: Regenerated
inputs/Madidi/_archive/2010-1-2/: Set svn:ignore
inputs/Madidi/_README.TXT: Archived to _archive/2010-1-2/
inputs/Madidi/: Refreshed. Note that new export has a completely new schema.
mappings/VegCore-VegBIEN.csv: fieldNumber (authorEventCode): Fixed bug where locationevent.authorlocationcode should be authoreventcode
Added inputs/Madidi/map.csv, created from new_terms.csv
inputs/Madidi/_archive/: Set svn:ignore
csvs.py: sniff(): TSVs: Don't turn off quoting, because some TSVs (such as Madidi.IndividualObservation) do quote fields
csvs.py: TsvReader: Use csv.reader.next() when possible to support quoted fields, such as in Madidi.IndividualObservation
input.Makefile: Configuration: $(exts): Added .dat, which the new Madidi files use
mappings/Makefile: VegCore.tables.csv: Removed no longer needed removal of Namespaces table, which is now marked as just a section, not a table
mappings/VegCore.csv: Regenerated from wiki
Added to_do/timeline.2013.xls (from Brad, converted to .xls)
to_do/timeline.doc: Renamed to timeline.2012.doc to allow for a separate 2013 timeline
README.TXT: Data import: Deleting imports before the last: Added instructions to keep a previous import instead of deleting it
input.Makefile: Staging tables installation: $(logInstall): Always log the installation, regardless of the $log env var, because $log is set by default on development machines but an install log should still be created
schemas/vegbien.ERD.mwb: Regenerated exports
schemas/vegbien.sql: unscrubbed_taxondetermination_view: Fixed bug where need to handle the case where (SELECT source.source_id FROM source WHERE source.shortname = 'TNRS') is NULL because no TNRS names have been imported yet
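    The failure mode in miniature (standalone queries; the actual fix is in
    the view definition):
        SELECT 2 <> (SELECT 1 WHERE false);
            -- NULL: an empty subselect compares as NULL, so a WHERE clause
            -- silently drops every row
        SELECT 2 IS DISTINCT FROM (SELECT 1 WHERE false);
            -- true: one way to handle the NULL case explicitly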
*/new_terms.csv, */unmapped_terms.csv: Regenerated using `make missing_mappings`
mappings/VegCore-VegBIEN.csv: morphoname: Remapped to the original rather than current taxondetermination because this is the original name applied by the author
inputs/SALVIAS*/Organism/map.csv: Remapped voucher_string/coll_number to recordNumber instead of catalogNumber, because this number is actually applied by the collector rather than by a herbarium
mappings/VegCore-VegBIEN.csv: Mapped recordNumber to new specimenreplicate.collectionnumber
mappings/VegCore-VegBIEN.csv: Also map recordNumber (collectionnumber) to the indirect voucher's specimenreplicate
inputs/*/*/map.csv: Remapped recordNumber to new individualCode where applicable
mappings/VegCore-VegBIEN.csv: Mapped individualCode. authortaxoncode: Prefer tag over recordNumber (collectionnumber), because this applies to the plant rather than the specimen.
mappings/VegCore-VegBIEN.csv: Mapped morphoname
schemas/vegbien.sql: taxonverbatim: Added morphoname (which is different from the morphospecies suffix)
schemas/vegbien.sql: plantobservation: Renamed collectionnumber to authorplantcode since this number, which identifies the plant, is actually different from the collectionnumber that identifies the specimen collected from it. This distinction is meaningful for plots data, but generally not for specimens data.
schemas/vegbien.sql: specimenreplicate: Added collectionnumber
schemas/vegbien.sql: taxonlabel: Removed no longer used matched_label_fit_fraction. Use taxondetermination.taxonfit instead.
inputs/*/*/test.xml.ref: Restored inserted row counts, which had gotten auto-accepted from a test run on a non-empty DB
schemas/vegbien.ERD.mwb: Expanded analytical_stem to fit the width of all fields
schemas/vegbien.sql: taxondetermination: taxondetermination_computer_min_fit CHECK constraint: Fixed bug where CASE needs to be used instead of OR when a branch of an OR shouldn't be evaluated, because PostgreSQL doesn't guarantee short-circuit evaluation of OR
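    Sketch of the pattern (hypothetical columns):
        -- OR branches may be evaluated in any order, so the guard doesn't
        -- reliably protect the division:
        --   CHECK (n = 0 OR fit / n >= 0.5)
        -- CASE evaluates its branches in order, so the guard always runs first:
        CREATE TEMP TABLE example (fit numeric, n numeric,
            CHECK (CASE WHEN n = 0 THEN true ELSE fit / n >= 0.5 END));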
README.TXT: Debugging: Added instructions for "binary chop" debugging, which requires syncing the DB schema to the svn working copy
mappings/VegCore-VegBIEN.csv: Removed no longer used mappings for verbatimScientificName in _if conditions
inputs/.NCBI/nodes/test.xml.ref: Restored inserted row counts, which had gotten auto-accepted from a test run on a non-empty DB
sql_io.py: put_table(): DuplicateKeyException: Uniquifying input table to avoid internal duplicate keys: Also filter out duplicate rows in the out_table, so that they don't create duplicate key errors and the resulting index holes
sql.py: distinct_table(): Added support for custom joins used in creating the new table. This can then be used by sql_io.put_table() to filter out duplicate rows in the out_table, so that they don't create duplicate key errors and the resulting index holes.
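    Sketch of the generated SQL (hypothetical table and column names):
        -- uniquify the input and anti-join to out_table, so keys that already
        -- exist there are dropped before insertion:
        CREATE TEMP TABLE in_table_distinct AS
        SELECT DISTINCT ON (key) in_table.*
        FROM in_table
        LEFT JOIN out_table USING (key)
        WHERE out_table.key IS NULL;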
README.TXT: Documentation: Redmine-formatted list of steps for column-based import: Added step to reinstall public schema first, to reset the sequences so that they don't create a diff when the new steps.by_col.log.sql is committed
Added inputs/ACAD/Specimen/logs/steps.by_col.log.sql
sql_gen.py: Join: Added support for mapping values which are lists, for use in USING joins
inputs/SALVIAS/*/test.xml.ref: Restored SALVIAS* inserted row counts, which had gotten auto-accepted from a test run on a non-empty DB
schemas/vegbien.sql: analytical_stem: Added locationName (authorPlotCode), subplot, individualCode (authorPlantCode) for use in validation
schemas/vegbien.sql: sync_analytical_stem_to_view(): Drop and re-create dependent objects, to avoid errors that analytical_stem can't be dropped because other objects depend on it
schemas/vegbien.sql: sync_analytical_stem_to_view(): Changed to PL/pgSQL function to allow adding PL/pgSQL commands
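    A sketch of the pattern (simplified; not the actual function body):
        CREATE OR REPLACE FUNCTION sync_analytical_stem_to_view() RETURNS void
            LANGUAGE plpgsql AS $$
        BEGIN
            -- drop dependents first so analytical_stem itself can be dropped
            DROP VIEW IF EXISTS dependent_view; -- hypothetical dependent
            DROP TABLE IF EXISTS analytical_stem;
            -- rebuild the table from the view
            CREATE TABLE analytical_stem AS
                SELECT * FROM analytical_stem_view;
            -- re-create the dependents against the new table
            CREATE VIEW dependent_view AS SELECT * FROM analytical_stem;
        END;
        $$;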
schemas/vegbien.ERD.mwb: Moved family_higher_plant_group to leave room for analytical_stem to expand