schemas/vegbien.sql: Added reference_by_shortname(). Using this function instead of the manual query should force the query planner to evaluate this expression first, rather than reordering the joins in a complex way that turns this nested select into a filter condition.
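A minimal sketch of such a lookup function, assuming the reference table exposes a shortname column and an integer reference_id (the actual definition in schemas/vegbien.sql may differ); writing it in PL/pgSQL keeps the planner from inlining the lookup back into the caller's join tree:

    CREATE FUNCTION reference_by_shortname(shortname_ text) RETURNS integer
      LANGUAGE plpgsql STABLE AS $$
    BEGIN
        -- evaluated once per call, not merged into the calling query's joins
        RETURN (SELECT reference_id FROM reference WHERE shortname = shortname_);
    END
    $$;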
schemas/vegbien.sql: Added threatened_taxonlabel derived table with generating view threatened_taxonlabel_view
Updated inputs/UNCC/Specimen/test.xml.ref inserted row count
mappings/VegCore.csv: Added threatened
inputs/VegBank/vegbank.~.clean_up.sql: Remove private columns (plot.reallatitude, reallongitude) that should not be publicly visible
inputs/CVS/Organism/map.csv: Removed now-dropped realLatitude, realLongitude
Added inputs/CVS/Organism/postprocess.sql to drop private realLatitude, realLongitude columns
input.Makefile: Staging tables installation: Added back postprocess.sql, which is now used for one-time dropping of private columns that should not be publicly visible
input.Makefile: Maps building: %/.map.csv.last_cleanup: $(dict) canon/translate: Use new $(translate?)
input.Makefile: Maps building: %/.map.csv.last_cleanup: Added $(srcDict) as a prerequisite, so that .last_cleanup will be re-run if it changes. Added empty $(srcDict) target in case it doesn't exist.
inputs/bien_web/observation/map.csv: Omit *_index because they are placeholder columns created by the MySQL to PostgreSQL translation
input.Makefile: Maps building: %/.map.csv.last_cleanup: Fixed bug where $(srcDict) can only be used for canon/translate if it exists for the datasource
inputs/bien_web/observation/: Regenerated from actual bien_web.observation schema on nimoy, which has additional columns
input.Makefile: SVN: $(svnFilesGlob): Added top-level map.csv, which can be used to apply a datasource-global data dictionary to all tables
input.Makefile: Maps building: %/.map.csv.last_cleanup: Also apply any map.csv at the top level of the datasource directory. This can be used to apply a datasource-global data dictionary to all tables.
my2pg: Also remove column comments. Note that these cannot be translated by sed, because PostgreSQL only allows setting column comments in a separate statement, not inline with the column's entry in the CREATE TABLE statement, and sed can only make replacements within the input line.
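For example (hypothetical table and comment text), MySQL accepts the comment inline in the column definition, whereas PostgreSQL requires a separate statement:

    -- MySQL: CREATE TABLE observation (taxon VARCHAR(255) COMMENT 'scrubbed name');
    -- PostgreSQL equivalent needs two statements:
    CREATE TABLE observation (taxon varchar(255));
    COMMENT ON COLUMN observation.taxon IS 'scrubbed name';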
mappings/VegCore.csv: Removed incorrect duplicate entry for verbatimSubgenus
schemas/vegbien.sql: _taxon_family_require_std(): Fixed bug where the name needs to be lowercased before checking whether it ends in -aceae, to support family names that are in uppercase. Note that the bug had caused the family not to be prepended to the TNRS input name for datasources with uppercase family names, so the next DB import will likely produce a number of unscrubbed TNRS input names that now have the uppercase family prepended.
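A minimal sketch of the fix, assuming the function receives the family name as text and returns it only when it looks like a standard family name (the actual _taxon_family_require_std() in schemas/vegbien.sql may differ):

    CREATE FUNCTION _taxon_family_require_std(family text) RETURNS text
      LANGUAGE sql IMMUTABLE AS $$
    -- lower() makes the -aceae suffix check work for uppercase names such as 'ASTERACEAE'
    SELECT CASE WHEN lower($1) LIKE '%aceae' THEN $1 ELSE NULL END
    $$;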
inputs/.TNRS/tnrs/tnrs.make: Fixed bug where the log file path needs to be referenced relative to the make script itself, because otherwise the log file would go in inputs/.TNRS/logs/tnrs.make.log.sql
inputs/.TNRS/tnrs/tnrs.make: Fixed bug where just the basename of $0 needs to be used as the log file name
Added inputs/IUCN/
input.Makefile: SVN: add: Added _src/ (when it exists). $(_svnFilesGlob): Added .url, .pdf files in _src/.
psql_vegbien: Use bash because it supports substitutions
psql_vegbien: Set the search_path to $out_schemas set by vegbien_dest. This will enable running any psql_vegbien script on a schema other than public.
schemas/vegbien.sql: analytical_stem_view: Changed inner joins on non-datasource taxonlabels to LEFT JOINs, to ensure that an entry is included in the analytical DB even if there was no matched taxonlabel. In theory, this shouldn't be necessary: (1) every taxonlabel should have a canonical taxonlabel, since canon_label_id is auto-populated to the taxonlabel_id if there is no matched_label_id; (2) there should be a taxonverbatim for every datasource and accepted taxonlabel, because datasources link to taxonlabel via taxonverbatim and TNRS populates a taxonverbatim for every accepted taxonlabel; and (3) there should be a parsed taxonlabel for every datasource taxonlabel, because the mappings populate it.
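A simplified illustration of the change, with assumed aliases and only the relevant columns (the real analytical_stem_view has many more joins): an inner join drops rows whose datasource taxonlabel has no canonical match, whereas a LEFT JOIN keeps them with NULL canonical columns.

    SELECT tv.taxonomicname, canon.taxonlabel_id AS canon_taxonlabel_id
    FROM taxonverbatim tv
    JOIN taxonlabel datasource ON datasource.taxonlabel_id = tv.taxonlabel_id
    -- was: JOIN taxonlabel canon ON ...
    LEFT JOIN taxonlabel canon ON canon.taxonlabel_id = datasource.canon_label_id;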
schemas/vegbien.sql: analytical_stem_view: Removed join on specimenreplicate, because it isn't used in the analytical DB. Each specimen will still get an entry in analytical_*, because it gets its own location.
README.TXT: Data import: Before starting import, added step to run `make inputs/upload` and reinstall newly-uploaded datasources
README.TXT: Schema changes: Remember to update the following files with any renamings: Added mappings/verify.*.sql
README.TXT: Data import: make_analytical_db: Documented how to view progress in log file
make_analytical_db: Run all commands synchronously so the log file output doesn't become jumbled
make_analytical_db: Fixed bug where log file needed to be appended to instead of overwritten
make_analytical_db: Wrap each individual command in `set -x` to avoid echoing low-level commands such as sleep, wait
make_analytical_db: Moved log file to inputs/analytical_db/logs/make_analytical_db.log.sql so it would be synced along with the other import logs
inputs/.TNRS/tnrs/tnrs.make: Output the time at which it's run, so this is included in the log file
inputs/.TNRS/tnrs/tnrs.make: Moved log file to logs/tnrs.make.log.sql so it would automatically be synced along with the other import logs
inputs/Makefile: Import logs: $(rsyncLogs): Always download the TNRS daemon log, rather than requiring tnrs_log=1 to be specified to download it
make_analytical_db: Output the time at which it's run, so this is included in the log file
make_analytical_db: Store output in schemas/make_analytical_db.log
schemas/vegbien.sql: Removed no longer used make_analytical_db(). Use bin/make_analytical_db instead.
make_analytical_db: Use new psql_verbose_vegbien
Added psql_verbose_vegbien
make_analytical_db: Use psql_script_vegbien, which contains the necessary psql options, instead of setting those options manually
make_analytical_db: Run the SQL commands directly with psql instead of using the make_analytical_db() function. This provides incremental results and avoids running all commands in one transaction, thus preventing pgAdmin from freezing when the user attempts to access a table used in analytical DB creation (because the TRUNCATE statement fully locks the table until the entire analytical DB is built).
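An illustration of why the single-transaction approach blocked readers; the table and view names come from this schema, but the exact rebuild statements are assumed:

    -- inside one function/transaction, TRUNCATE's ACCESS EXCLUSIVE lock is held
    -- until COMMIT, so pgAdmin queries against the table block for the whole build
    BEGIN;
    TRUNCATE analytical_stem;
    INSERT INTO analytical_stem SELECT * FROM analytical_stem_view;  -- long-running
    COMMIT;  -- lock released only here
    -- running each statement as a separate psql command commits the TRUNCATE
    -- immediately and lets the table be read while it is being repopulated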
schemas/vegbien.sql: make_analytical_db(): Added make_family_higher_plant_group()
inputs/import.stats.xls: Updated import times. Fixed input row counts and import times to include derived data, such as TNRS and geoscrub, which adds to the import time and therefore should be considered in the import's speed. (TNRS was already being included in the import time for some, but not all, imports.)
schemas/vegbien.sql: analytical_*: Added higherPlantGroup
pg_dump_vegbien: When not in data mode, use --inserts so that the generated INSERT statements can be run directly in pgAdmin and translated directly to MySQL, and so that an empty COPY FROM statement isn't generated for every table
schemas/vegbien.sql: Added family_higher_plant_group lookup table, generated with make_family_higher_plant_group() using the contents of higher_plant_group_nodes
mappings/VegCore-VegBIEN.csv: Always map taxonNameOrEpithet to taxonomicname, now that it's globally unique at all ranks in the datasource that provides it (NCBI)
inputs/.NCBI/nodes/create.sql: Make name_txt completely globally unique by removing all duplicates, not just duplicate genera
inputs/.NCBI/nodes/create.sql: Make name_txt (mostly) globally unique by removing several other kingdoms/superkingdoms, not just Animalia
inputs/.NCBI/nodes/create.sql: Making genus globally unique: Moved comment with kingdom name to line with DELETE, and put "delete cascades to descendants" comment on its own line
schemas/vegbien.sql: Added higher_plant_group_nodes lookup table
schemas/Makefile: DDLs: $(pg_dump): Turn off schema-only mode so that lookup table contents are included as well
pg_dump_vegbien: Except in (raw) data mode, filter out pg_catalog.setval() statements (only lookup table contents should be preserved)
lib/PostgreSQL-MySQL.csv: Remove SELECT pg_catalog.setval() statements
lib/PostgreSQL-MySQL.csv: Remove COPY statements
pg_dump_vegbien: The $schema flag now defaults to on, to export only the schema
pg_dump_vegbien: Fixed bug where the schema name variable can't be named $schema, because that would conflict with the $schema flag
root Makefile: VegBIEN DB: DB and bien user: db: Run inputs/.TNRS/install before schemas/install because public.tnrs_input_name depends on TNRS.tnrs
pg_dump_vegbien: Include just the schema only if the new $schema flag is set. This causes constant lookup tables to automatically be exported for DB schemas, which was not fully supported by the $data flag because it also forced $owners to be enabled.
schemas/vegbien.sql: Added higher_plant_group enum
mappings/VegCore.csv: Added higherPlantGroup
schemas/vegbien.sql: analytical_*: Added isNewWorld
mappings/VegCore.csv: Added isNewWorld
inputs/newWorld/: Added geoscrub.schema.~.changes.sql to add a unique constraint on countryNameStd and change isNewWorld's type to boolean
Added inputs/newWorld/
README.TXT: Datasource setup: Moved "Auto-create the map spreadsheets" after "Install the staging tables", because for DB-only tables, creating the map spreadsheets requires the header.csv generated by the staging tables install
input.Makefile: SVN: add: Also set _MySQL/'s svn:ignore
inputs/SALVIAS/_MySQL/salvias_plots.*.sql.make: Use new my2pg_export
inputs/.geoscrub/_MySQL/geoscrub.*.sql.make: Use new my2pg_export
Added my2pg_export
Regenerated vegbien.ERD exports
schemas/vegbien.ERD.mwb: Added analytical_stem, analytical_aggregate to ERD
schemas/vegbien.sql: analytical_*: Added georeferenceValid, distanceToCountry_km, distanceToStateProvince_km
schemas/vegbien.sql: Added _m_to_km()
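A minimal sketch of the conversion helper (the actual definition in schemas/vegbien.sql may differ):

    CREATE FUNCTION _m_to_km(value double precision) RETURNS double precision
      LANGUAGE sql IMMUTABLE STRICT AS $$
    SELECT $1/1000
    $$;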
inputs/.geoscrub/geoscrub_cleaned_unique/map.csv: Removed geoscrubID->locationID mapping because it allowed multiple occurrences of the same placename/coordinates combination, which was causing problems when datasources tried to link up to the geoscrub places (and ended up matching multiple output rows for each input row)
sql_io.py: put_table(): Removed assertion that into's row count be at least full_in_table's row count, because now that DISTINCT ON is used to satisfy the into table pkey, this is no longer necessarily true
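An illustration, with hypothetical table and column names, of how DISTINCT ON keeps a single output row per input pkey so the into table's pkey constraint is satisfied (sql_io.py generates the actual query):

    INSERT INTO into_table (in_pkey, out_pkey)
    SELECT DISTINCT ON (in_pkey) in_pkey, out_pkey
    FROM joined_rows
    ORDER BY in_pkey;  -- DISTINCT ON requires the leading ORDER BY column(s) to match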
schemas/vegbien.sql: method: method_unique: Added reference_id to make method unique within the datasource instead of globally unique within VegBIEN
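A sketch of the constraint change; reference_id comes from the entry above, but the other unique column is an assumption:

    ALTER TABLE method DROP CONSTRAINT method_unique;
    ALTER TABLE method ADD CONSTRAINT method_unique UNIQUE (reference_id, name);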
mappings/VegCore.csv: Added samplingProtocolID
mappings/VegCore-VegBIEN.csv: subplot locationevent: Only populate parent locationevent's location unique IDs if a subplot #/subplotID is actually specified. (The lack of a location unique ID will cause the parent locationevent's location to be removed, as well as the parent locationevent itself if there is no parent locationevent unique ID.) This fixes a bug where top-level plots in datasources that provide a nullable subplot #/subplotID were incorrectly getting connected to parent locationevents.
mappings/verify.plots.sql: Added # locations, # location events, which also include subplots
inputs/SALVIAS/verify/plots.ref.sql: Standardized line wrapping to put ; on its own line in multi-line queries and at the end of the line in one-line queries
mappings/verify.plots.sql: Added "top-level" to the # locations and # location events names, because these queries do not include all locations
inputs/SALVIAS/verify/plots.ref.sql: # locations: Fixed count to include all plotMetadata entries instead of just unique coordinates, because each plotMetadata entry now gets its own location
mappings/VegCore-VegBIEN.csv: matched place: When any of the accepted* names is specified, set matched_place_id=0 (self-reference) to indicate that the matched place is an accepted (scrubbed) place
schemas/vegbien.sql: analytical_stem_view: Use the canonical (scrubbed) place instead of the datasource's place where available. Note that canon_place_id is always populated (with a self-reference if needed), to enable using just a LEFT JOIN for this.
schemas/vegbien.sql: analytical_stem_view: Use datasource_* prefix instead of reference_* prefix that tables had gotten when datasource_id was renamed to reference_id
inputs/SALVIAS/verify/plots.ref: Regenerated on vegbiendev using the PostgreSQL 9.1 pg_catalog.default collation
sql_io.py: put_table(): Setting pkeys of missing rows: Fixed bug in column-based import where, when input rows match multiple output rows in one of this iteration's input tables, the into table's pkey constraint is violated because full_in_table contains multiple entries for an input pkey
schemas/vegbien.sql: place: Added index on matched_place_id to facilitate looking up places by matched_place_id in place_set_canon_place_id()
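A minimal sketch of the index (the actual index name in schemas/vegbien.sql may differ):

    CREATE INDEX place_matched_place_id_idx ON place (matched_place_id);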
schemas/vegbien.sql: Added geoscrub_input derived table with associated view. Build geoscrub_input as part of make_analytical_db().
sql.py: Added table_has_pkey()
sql.py: table_pkey_col(): For PostgreSQL DBs, use pg_catalog via index_cols() and table_pkey_index(), in order to use the search_path to look up the table. This fixes a bug where the pkey would be selected from information_schema.table_constraints in random order, and this order sometimes returned the corresponding table in the public schema but sometimes in other schemas, such as VegBank. This became a problem now that VegBIEN has a place table, which conflicts with VegBank's place table. (Most other VegBank tables that are mapped to have been renamed in VegBIEN.)
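A sketch of a search_path-aware primary-key lookup through pg_catalog ('place' is an example table name); resolving the table with ::regclass honors the search_path, unlike scanning information_schema.table_constraints across all schemas:

    SELECT a.attname
    FROM pg_index i
    JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY (i.indkey)
    WHERE i.indrelid = 'place'::regclass AND i.indisprimary;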
sql.py: add_pkey_or_index(): Use new add_pkey_index()
sql.py: Added add_pkey_index()