make_analytical_db: Use new psql_verbose_vegbien
Added psql_verbose_vegbien
make_analytical_db: Use psql_script_vegbien, which contains the necessary psql options, instead of setting those options manually
make_analytical_db: Run the SQL commands directly with psql instead of using the make_analytical_db() function. This provides incremental results and avoids running all commands in one transaction, thus preventing pgAdmin from freezing when the user attempts to access a table used in analytical DB creation (because the TRUNCATE statement fully locks the table until the entire analytical DB is built).
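    For background: TRUNCATE takes an ACCESS EXCLUSIVE lock that is held until the enclosing transaction commits, so one long transaction blocks every reader of the truncated tables. A minimal sketch, assuming analytical_stem is rebuilt from analytical_stem_view (the actual statements differ):
        BEGIN;
        TRUNCATE analytical_stem;              -- ACCESS EXCLUSIVE lock acquired here
        INSERT INTO analytical_stem            -- long-running rebuild
            SELECT * FROM analytical_stem_view;
        COMMIT;                                -- lock released only now; until then, pgAdmin SELECTs on analytical_stem hang
    Running each statement as a separate psql command commits after every step, so each table becomes readable again as soon as it is repopulated.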
schemas/vegbien.sql: make_analytical_db(): Added make_family_higher_plant_group()
inputs/import.stats.xls: Updated import times. Fixed input row counts and import times to include derived data, such as TNRS and geoscrub, which adds to the import time and therefore should be considered in the import's speed. (TNRS was already being included in the import time for some, but not all, imports.)
schemas/vegbien.sql: analytical_*: Added higherPlantGroup
pg_dump_vegbien: When not in data mode, use --inserts so that the generated INSERT statements can be run directly in pgAdmin and translated directly to MySQL, and so that an empty COPY FROM statement isn't generated for every table
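    For comparison, a sketch of the two data formats pg_dump can emit (table and column names here are illustrative only):
        -- default format: COPY blocks, emitted even for empty tables and not runnable as plain SQL
        COPY family_higher_plant_group (family, higher_plant_group) FROM stdin;
        \.
        -- with --inserts: plain INSERT statements, runnable in pgAdmin and easy to translate to MySQL
        INSERT INTO family_higher_plant_group (family, higher_plant_group) VALUES ('Poaceae', 'monocots');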
schemas/vegbien.sql: Added family_higher_plant_group lookup table, generated with make_family_higher_plant_group() using the contents of higher_plant_group_nodes
mappings/VegCore-VegBIEN.csv: Always map taxonNameOrEpithet to taxonomicname, now that it's globally unique at all ranks in the datasource that provides it (NCBI)
inputs/.NCBI/nodes/create.sql: Make name_txt completely globally unique by removing all duplicates, not just duplicate genera
inputs/.NCBI/nodes/create.sql: Make name_txt (mostly) globally unique by removing several other kingdoms/superkingdoms, not just Animalia
inputs/.NCBI/nodes/create.sql: Making genus globally unique: Moved comment with kingdom name to line with DELETE, and put "delete cascades to descendants" comment on its own line
schemas/vegbien.sql: Added higher_plant_group_nodes lookup table
schemas/Makefile: DDLs: $(pg_dump): Turn off schema-only mode so that lookup table contents are included as well
pg_dump_vegbien: Except in (raw) data mode, filter out pg_catalog.setval() statements (only lookup table contents should be preserved)
lib/PostgreSQL-MySQL.csv: Remove SELECT pg_catalog.setval() statements
lib/PostgreSQL-MySQL.csv: Remove COPY statements
pg_dump_vegbien: The $schema flag defaults to on, so only the schema is exported by default
pg_dump_vegbien: Fixed bug where the schema name var couldn't be named $schema because that would conflict with the $schema flag
root Makefile: VegBIEN DB: DB and bien user: db: Run inputs/.TNRS/install before schemas/install because public.tnrs_input_name depends on TNRS.tnrs
pg_dump_vegbien: Include only the schema when the new $schema flag is set. This causes constant lookup tables to be exported automatically for DB schemas, which the $data flag did not fully support because it also forced $owners to be enabled.
schemas/vegbien.sql: Added higher_plant_group enum
mappings/VegCore.csv: Added higherPlantGroup
schemas/vegbien.sql: analytical_*: Added isNewWorld
mappings/VegCore.csv: Added isNewWorld
inputs/newWorld/: Added geoscrub.schema.~.changes.sql to add a unique constraint on countryNameStd and change isNewWorld's type to boolean
Added inputs/newWorld/
README.TXT: Datasource setup: Moved "Auto-create the map spreadsheets" after "Install the staging tables", because for DB-only tables, creating the map spreadsheets requires the header.csv generated by the staging tables install
input.Makefile: SVN: add: Also set _MySQL/'s svn:ignore
inputs/SALVIAS/_MySQL/salvias_plots.*.sql.make: Use new my2pg_export
inputs/.geoscrub/_MySQL/geoscrub.*.sql.make: Use new my2pg_export
Added my2pg_export
Regenerated vegbien.ERD exports
schemas/vegbien.ERD.mwb: Added analytical_stem, analytical_aggregate to ERD
schemas/vegbien.sql: analytical_*: Added georeferenceValid, distanceToCountry_km, distanceToStateProvince_km
schemas/vegbien.sql: Added _m_to_km()
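    A minimal sketch of what such a conversion function might look like (the actual definition in schemas/vegbien.sql may differ in signature and null handling):
        CREATE OR REPLACE FUNCTION _m_to_km(value double precision) RETURNS double precision
            LANGUAGE sql IMMUTABLE
            AS $$ SELECT $1 / 1000 $$;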
inputs/.geoscrub/geoscrub_cleaned_unique/map.csv: Removed geoscrubID->locationID mapping because it allowed multiple occurrences of the same placename/coordinates combination, which was causing problems when datasources tried to link up to the geoscrub places (and ended up matching multiple output rows for each input row)
sql_io.py: put_table(): Removed assertion that into's row count be at least full_in_table's row count, because now that DISTINCT ON is used to satisfy the into table pkey, this is no longer necessarily true
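    Illustration of the DISTINCT ON pattern now relied on (table and column names are hypothetical):
        CREATE TEMP TABLE joined_rows (in_pkey int, out_pkey int);
        INSERT INTO joined_rows VALUES (1, 10), (1, 11), (2, 20);  -- input row 1 matches two output rows
        SELECT DISTINCT ON (in_pkey) in_pkey, out_pkey
        FROM joined_rows
        ORDER BY in_pkey, out_pkey;  -- one row per input pkey, so the into table's pkey holds, but its row count can be less than full_in_table's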
schemas/vegbien.sql: method: method_unique: Added reference_id to make method unique within the datasource instead of globally unique within VegBIEN
mappings/VegCore.csv: Added samplingProtocolID
mappings/VegCore-VegBIEN.csv: subplot locationevent: Only populate parent locationevent's location unique IDs if a subplot #/subplotID is actually specified. (The lack of a location unique ID will cause the parent locationevent's location to be removed, as well as the parent locationevent itself if there is no parent locationevent unique ID.) This fixes a bug where top-level plots in datasources that provide a nullable subplot #/subplotID were incorrectly getting connected to parent locationevents.
mappings/verify.plots.sql: Added # locations, # location events, which also include subplots
inputs/SALVIAS/verify/plots.ref.sql: Standardized line wrapping to put ; on its own line in multi-line queries and at the end of the line in one-line queries
mappings/verify.plots.sql: Added "top-level" to # locations, # location events names because these queries do not include all locations
inputs/SALVIAS/verify/plots.ref.sql: # locations: Fixed count to include all plotMetadata entries instead of just unique coordinates, because each plotMetadata entry now gets its own location
mappings/VegCore-VegBIEN.csv: matched place: When any of the accepted* names is specified, set matched_place_id=0 (self-reference) to indicate that the matched place is an accepted (scrubbed) place
schemas/vegbien.sql: analytical_stem_view: Use the canonical (scrubbed) place instead of the datasource's place where available. Note that canon_place_id is always populated (with a self-reference if needed), to enable using just a LEFT JOIN for this.
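    Sketch of the resulting join (column names are illustrative; the real view joins many more tables):
        SELECT canon_place.country, canon_place.stateprovince
        FROM place datasource_place
        LEFT JOIN place canon_place ON canon_place.place_id = datasource_place.canon_place_id;
    Because canon_place_id is never NULL, the LEFT JOIN always finds a row, falling back to the datasource's own place via the self-reference when no scrubbed place exists.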
schemas/vegbien.sql: analytical_stem_view: Use datasource_* prefix instead of reference_* prefix that tables had gotten when datasource_id was renamed to reference_id
inputs/SALVIAS/verify/plots.ref: Regenerated on vegbiendev using the PostgreSQL 9.1 pg_catalog.default collation
sql_io.py: put_table(): Setting pkeys of missing rows: Fixed bug in column-based import where, when input rows match multiple output rows in one of this iteration's input tables, the into table's pkey constraint was violated because full_in_table contained multiple entries for an input pkey
schemas/vegbien.sql: place: Added index on matched_place_id to facilitate looking up places by matched_place_id in place_set_canon_place_id()
schemas/vegbien.sql: Added geoscrub_input derived table with associated view. Build geoscrub_input as part of make_analytical_db().
sql.py: Added table_has_pkey()
sql.py: table_pkey_col(): For PostgreSQL DBs, use pg_catalog via index_cols() and table_pkey_index(), in order to use the search_path to look up the table. This fixes a bug where the pkey would be selected from information_schema.table_constraints in random order, and this order sometimes returned the corresponding table in the public schema but sometimes in other schemas, such as VegBank. This became a problem now that VegBIEN has a place table, which conflicts with VegBank's place table. (Most other VegBank tables that are mapped to have been renamed in VegBIEN.)
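    The pg_catalog route can resolve the table name through the search_path via a regclass cast; a sketch of the kind of query this enables (the actual queries are in index_cols()/table_pkey_index()):
        SELECT a.attname
        FROM pg_index i
        JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY (i.indkey)
        WHERE i.indrelid = 'place'::regclass  -- resolved via search_path, so public.place wins over VegBank's place
            AND i.indisprimary;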
sql.py: add_pkey_or_index(): Use new add_pkey_index()
sql.py: Added add_pkey_index()
inputs/import.stats.xls: Updated import times
import_times: Use $'' quoting to expand tab, in order to also work on Mac OS X
schemas/vegbien.sql: place: Added place_set_canon_place_id_on_*() triggers, analogous to taxonlabel.taxonlabel_2_set_canon_label_id_on_*()
schemas/vegbien.sql: place: Added canon_place_id
mappings/verify.plots.sql, inputs/SALVIAS/verify/plots.ref.sql: Renamed project column to projectname to avoid confusion with the project table
input.Makefile: Verification of import: Fixed bug where .ref, rather than verify/.ref, needed to be made .PRECIOUS
inputs/SALVIAS/verify/plots.ref.sql: Use project_name instead of project_id everywhere, not just as the ID to list for each project, to match the corresponding VegBIEN queries
input.Makefile: Verification of import: Make verify/%.ref .PRECIOUS because there must always be a .ref for the make rules to work
inputs/SALVIAS/verify/plots.ref.sql: Use project_name instead of project_id as the ID to list for each project, since the SALVIAS projects table is now mapped to the DB and the project_name is used as project.projectname
mappings/VegCore-VegBIEN.csv: subplots: Also complete the locationevent/location diamond (subplot event -> {subplot location, parent plot event} -> parent plot location) when an eventDate or range is specified, as this is also an identifying field for locationevent. This fixes a bug where subplot data without explicit plot events (such as SALVIAS and TEAM) was not being connected to the appropriate parent plot event as well as parent plot location. This should fix the SALVIAS verification's # location events count, which should include only parent plots' locationevents (to correspond with # locations, which includes only parent plots' locations) and which uses a NULL locationevent.parent_id to determine what is a parent plot event.
input.Makefile: Verification of import: $(verify): Use $(inDatasrc) so that the full standard search_path, including any custom public schema, is made available to the VegBIEN-side verification script
mappings/verify.plots.sql: # locations: Fixed bug where location needed to be used instead of locationevent
mappings/verify.specimens.sql: Updated for current schema
input.Makefile: Verification of import: verify/%.out: Made the target .PRECIOUS so that partial output will be saved in case of error to help debugging
mappings/verify.plots.sql: Updated for current schema
Updated validation/BIEN2_Analytical_DB_overview.docx with Brad's revision
schemas/vegbien.sql: taxonlabel_set_canon_label_id(): Fixed bug where canon_label_id always needs to be set on insert (because it must be a self-reference rather than NULL when there is no matched_label_id) but wasn't being set when there was no matched_label_id, because the IS DISTINCT FROM check returned false
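    A simplified sketch of the corrected pattern, as a BEFORE INSERT OR UPDATE trigger function (the actual definition in schemas/vegbien.sql differs; column names other than matched_label_id/canon_label_id are assumed):
        CREATE OR REPLACE FUNCTION taxonlabel_set_canon_label_id() RETURNS trigger
        LANGUAGE plpgsql AS $$
        BEGIN
            IF TG_OP = 'UPDATE' THEN
                IF NEW.matched_label_id IS NOT DISTINCT FROM OLD.matched_label_id THEN
                    RETURN NEW;  -- unchanged on UPDATE: nothing to recompute
                END IF;
            END IF;
            -- on INSERT, always set it: self-reference when there is no matched label
            NEW.canon_label_id = COALESCE(NEW.matched_label_id, NEW.taxonlabel_id);
            RETURN NEW;
        END;
        $$;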
mappings/VegCore-VegBIEN.csv: decimalLatitude/Longitude->geoscrub input coordinates: Also set to NULL if 0 here, not just for the coordinates linked to the datasource's place instance
sql_io.py: put_table(): Switched back to using run_query_into()'s add_pkey_ option now that it uses sql.add_pkey_or_index() instead of sql.add_pkey()
sql.py: run_query_into(): add_pkey_: Use add_pkey_or_index() instead of add_pkey(). This should prevent errors for input rows which match multiple output rows in VegBIEN.
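    The behavior the name implies, sketched at the SQL level (the actual implementation is Python in sql.py; table and column names are hypothetical):
        CREATE TEMP TABLE pkeys_temp (in_pkey int, out_pkey int);
        INSERT INTO pkeys_temp VALUES (1, 10), (1, 11);  -- an input row matching two output rows would violate a pkey
        DO $$
        BEGIN
            ALTER TABLE pkeys_temp ADD PRIMARY KEY (in_pkey);
        EXCEPTION WHEN unique_violation THEN
            CREATE INDEX ON pkeys_temp (in_pkey);  -- fall back to a plain (non-unique) index
        END;
        $$;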
README.TXT: Data import: After import, removed steps to reinstall tnrs_accepted, tnrs_other because this would overwrite the versions of these staging tables that were used to build the current version of the database. (The backup should store a snapshot of the database's input data as it was used to build the database.) Note that these tables will still be reinstalled when the next import starts.
Added validation/BIEN2_Analytical_DB_overview.docx
inputs/.TNRS/tnrs_*/create.sql: Fixed bug where the WHERE condition needs to be COALESCEd with false to ensure that NOT negates a boolean value rather than NULL. Otherwise, tnrs_other's query would exclude rows for which the condition returned NULL rather than false.
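    The three-valued-logic pitfall being fixed, in miniature:
        SELECT NOT NULL::boolean;                     -- NULL, which a WHERE clause treats like false: the row is silently dropped
        SELECT NOT COALESCE(NULL::boolean, false);    -- true: rows whose condition evaluates to NULL are now kept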
schemas/vegbien.sql: make_analytical_db(): Truncate tables before (re-)populating them in case make_analytical_db() has already been run
schemas/vegbien.sql: analytical_stem_view: Updated to use reference instead of party to store the datasource name
schemas/vegbien.sql: Renamed make_analytical_stem() back to make_analytical_db()
import_all: After starting geoscrub import in the background, wait for make commands to scroll by before starting NCBI import
mappings/VegCore-VegBIEN.csv: matched place: Also map verbatim place's geoscrub-related fields to the matched place, to link up with geoscrub's corresponding input place
import_all: Removed explicit by_col=1 from datasources that don't require it for proper import. (It will still be set if the user provides it on the command line.)
schemas/functions.sql: Removed duplicate previous version of _and() with fewer parameters
schemas/vegbien.sql: method: Made reference_id NOT NULL so the method table would automatically be datasource-scoped
mappings/VegCore-VegBIEN.csv: Renamed creator_ids to reference_id since they are now fkeys to reference
schemas/vegbien.sql: Made creator_ids an fkey to reference instead of party, so that datasources are stored separately from people and to allow adding reference-type metadata (URL, copyright, etc.) for each datasource
schemas/vegbien.sql: reference: Added unique index
mappings/VegCore-VegBIEN.csv: Remapped accordingTo to taxonconcept.concept_reference_id
mappings/VegCore-VegBIEN.csv: Don't populate taxonlabel.creator_id using identifiedBy, because that was meant for creating a taxonconcept when the accordingTo was not specified, but taxonconcepts are now modeled differently using a taxonconcept subclass of taxonlabel with a concept_reference_id
schemas/vegbien.sql: taxonverbatim: Renamed binomial to taxonname because this field is actually the taxonomic name without the author, not just the genus+specific epithet binomial
mappings/VegCore.csv: Renamed *Binomial to *TaxonName because this field can store more ranks than just the genus+specificEpithet binomial (that goes in speciesBinomial)
mappings/VegCore.csv: Renamed taxonName to taxonNameOrEpithet for clarity