root Makefile: VegBIEN DB: Schemas: Added schemas/%.backup/uninstall so that a schema can be removed by its backup file name (with extension) as well as its name
README.TXT: Documented that files and directories can easily be remade with `make <file>-remake` and `make <dir>/remake`, which do the make clean or rm automatically
pg_dump_vegbien: Added plain env var which produces plain-text SQL output instead of the default dump format, for use in debugging and validation
input.Makefile: Mapping: $(map): Fixed bug where use_staged caused in_schema, in_database, etc. to be set even when not outputting to vegbien, which made bin/map try to output to a DB when it should output to stdout. Fixed by moving the use_staged test to $(map2db).
sql_io.py: put_table(): Creating full_in_table: If out_table is a SQL function, just use in_table because in_table is not modified
input.Makefile: Documentation: import/steps.by_col.sql: $(steps): Fixed bug where stderr needed to be redirected to stdout so that it would be passed to debug2redmine and included in the file
root Makefile: Datasources: import: Don't pause after "To import all inputs at once" message because this prevents running sequential import with & (as `make import &`)
sql.py: create_table(): Only version temp tables, so that e.g. staging tables are not created with a version # if they already exist. This will help make `make install` idempotent.
input.Makefile: Import to VegBIEN: $(import): Only redirect stderr to stdout if outputting to a log file. This allows unlogged imports to redirect the "Inserted # new rows into database" message, which is output to stdout rather than stderr for use by validation and automated testing, to a separate location.
README.TXT: Documented how to remove an archived import. Moved archived imports commands from Data import to new section Maintenance of archived imports. Changed "Empty the DB" to "Remove the last import" for clarity, and made the command apply just to the public schema and not also to the functions schemas (reinstalling those is documented under Schema changes).
root Makefile: $(rmSchemaCmd) for uninstalling schemas: Escape schema name with "" so that schemas/%/uninstall can be used to remove archived imports (as well as aborted restores of them)
root Makefile: schemas/%.backup: Removed the explicit check for whether the file exists: now that the target name matches the name of the file created, make will rebuild the file only if it doesn't exist
schemas: Ignore log files (from restore operations) for svn
main Makefile: Installation: install: Download input data using new inputs/download target
main Makefile: VegBIEN DB: Schemas: schemas/%/install, schemas/py_functions/install: Ignore errors if schema exists, so that install can be idempotent
inputs/Makefile: Added a download target that downloads input data from vegbiendev
main Makefile: VegBIEN DB: Removing database warning message: Changed "public schema versions" to "archived imports" for clarity
main Makefile: VegBIEN DB: Schemas: Added schemas/%.backup/restore and documented it in README.TXT
inputs/import.stats.xls: Updated with stats from latest import. Marked partial imports affecting the slowest datasource with a comment when they might otherwise appear to be comparable with full imports due to having a similar # of rows.
root Makefile: schemas/%.backup: Removed _always prerequisite because an existing backup should not be overwritten by running `make schemas/%.backup`
README.TXT: Updated warning messages to match what's actually output by the commands in question
README.TXT: Documented how to back up an archived import
root Makefile: VegBIEN DB: Schemas: schemas/%/backup: Time the backup
root Makefile: VegBIEN DB: Schemas: schemas/%/backup: Renamed to the name of the file created (schemas/%.backup) so that the file will automatically be removed by make if you cancel the backup. If this didn't happen, the existing partial file would prevent make from backing it up again.
schemas: Ignore *.backup files for svn
root Makefile: VegBIEN DB: Schemas: schemas/%/backup: Documented that it can't be used for the current (unrotated) public schema because pg_dump doesn't back up the CREATE SCHEMA statement for it
root Makefile: VegBIEN DB: Schemas: schemas/%/backup: Changed file extension to .backup to match what pgAdmin3 expects
inputs/SALVIAS/maps/VegX.stems.csv: Fixed PlotObsID mapping to point to sourceAccessionCode so that it would map to the same place as organisms' PlotObsID and the entries would be merged. This will hopefully also fix a SALVIAS stems import bug where the specimenreplicate join on existing/inserted rows matched multiple rows, in spite of using the specimenreplicate_unique_catalognumber UNIQUE INDEX.
root Makefile: VegBIEN DB: Schemas: Added schemas/%/backup for backing up an import's schema
pg_dump_vegbien: Added data option for backing up an import's schema
sql_io.py: cast(): Don't output warnings for errors that are stored in the errors table, because this clutters the log output unnecessarily. Warnings are still output when there is no errors table.
schemas/vegbien.sql: party: party_unique UNIQUE INDEX: Moved surname before givenname, middlename to enable the index to be used when just the person's last name is known. (Note that if the organization is not known, an explicit NULL would need to be put in for that field to use the index.)
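A hedged sketch of the reordered index; column names other than surname, givenname, and middlename (and the columns around them) are illustrative, not the actual party_unique definition:

    CREATE UNIQUE INDEX party_unique ON party (
        datasource_id,     -- illustrative
        organizationname,  -- illustrative name for the organization field
        surname,           -- now ahead of givenname and middlename
        givenname,
        middlename
    );
    -- a lookup that knows only the person's last name can use the index prefix,
    -- provided the preceding fields are also constrained (an explicit NULL if
    -- the organization is unknown)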
sql_io.py: put_table(): DuplicateKeyException: Uniquifying input table to avoid internal duplicate keys: Include all join columns (including literal values) in the unique index, not just table columns, to encourage PostgreSQL to use the index in a merge join.
sql.py: distinct_table(): Support literal values as column expressions. Filter out untyped NULLs because the type unknown can't be indexed.
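For context, a minimal illustration of the problem with untyped NULLs (behavior on the PostgreSQL versions in use at the time):

    -- a bare NULL literal has the pseudo-type "unknown", so a column built from
    -- it cannot get a btree index:
    CREATE TEMP TABLE t AS SELECT NULL AS col;   -- col is of type "unknown"
    -- CREATE INDEX ON t (col);   -- would fail: no operator class for "unknown"
    -- casting the literal gives it an indexable type:
    CREATE TEMP TABLE t2 AS SELECT NULL::text AS col;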
sql.py: add_index(): Support literal values as expressions
sql_gen.py: ensure_not_null(): Support Literal value inputs, which are passed through if not nullable
sql_gen.py: Added cast_literal()
sql.py: Added simplify_expr() and use it in parse_expr_col(). This will remove extraneous information from the condition that gets included in the errors table, so it's easy for the data provider to see what constraint is being violated as it applies to their data.
sql_io.py: put_table(): Handle CheckExceptions. This fixes a REMIB bug where some rows violated the specimenreplicate.specimenreplicate_required_key CHECK constraint, but there was previously no way to filter out just these rows so all rows were removed instead.
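A rough sketch of the filtering this enables, assuming a hypothetical form of the CHECK condition (the real specimenreplicate_required_key may differ), with the condition mapped onto the input columns:

    -- if the constraint were, say:
    --   CHECK (catalognumber_dwc IS NOT NULL OR sourceaccessioncode IS NOT NULL)
    -- then only the violating rows are removed before retrying the insert:
    DELETE FROM insert_in_table
    WHERE NOT (catalognumber_dwc IS NOT NULL OR sourceaccessioncode IS NOT NULL);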
sql.py: Added map_expr()
sql_gen.py: Added NotCond
sql.py: run_query(): ConstraintException: Get and include the constraint condition in the exception
sql.py: Added constraint_cond()
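Roughly what constraint_cond() needs to do, fetching the condition from the system catalogs (table and constraint names are placeholders):

    SELECT pg_get_constraintdef(oid)
    FROM pg_constraint
    WHERE conrelid = 'some_table'::regclass
      AND conname  = 'some_table_required_key';
    -- returns the full definition, e.g. "CHECK ((col IS NOT NULL))", from which
    -- the bare condition can be parsed out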
sql.py: run_query(): ConstraintException: Include table in the constraint name because constraints are scoped by table
sql.py: Added ConstraintException and parse it in run_query()
sql.py: ConstraintException: Also store the constraint's condition, if any
schemas/vegbien.sql: specimenreplicate: specimenreplicate_unique_catalognumber UNIQUE INDEX: Limited to rows where sourceaccessioncode IS NULL, so that it will only be used as an alternate if the datasource does not specify a sourceaccessioncode. This should fix the SpeciesLink bug where input rows with catalognumber_dwc IS NULL match multiple output rows when compared on the specimenreplicate_unique_catalognumber UNIQUE INDEX because they were excluded from the uniquifying by the index filter.
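An illustrative form of the change; columns other than sourceaccessioncode and the catalog number are guesses at the real index definition:

    CREATE UNIQUE INDEX specimenreplicate_unique_catalognumber
        ON specimenreplicate (datasource_id, catalognumber_dwc)
        WHERE sourceaccessioncode IS NULL;
    -- rows that do provide a sourceaccessioncode are excluded from this index,
    -- so they are matched on that code rather than on the catalog number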
schemas/vegbien.ERD.mwb: Synced with schema
schemas/vegbien.sql: specimenreplicate: Added UNIQUE INDEX on plantobservation_id for direct vouchers in plots data
inputs/import.stats.xls: Updated with stats from latest import
sql.py: Removed no longer used constraint_cols(). index_cols() can always be used for a UNIQUE constraint because a UNIQUE index is automatically created for it.
bin/map: Fixed bug where errors table indexes could not be looked up using index_cols() because their schema was not in the search_path, by explicitly adding the in_schema at the end of the search_path. This is the only reason the in_schema needs to be in the search_path, but it's unavoidable because the "duplicate key value violates unique constraint" error does not include the constraint's schema.
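Roughly the fix, with a placeholder input-schema name:

    -- append the input schema so its errors-table indexes can be resolved by
    -- index_cols(); the error message names only the constraint, not its schema
    -- ("public" here stands in for the existing search_path entries)
    SET search_path TO public, "my_input_schema";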
sql.py: run_query(): DuplicateKeyException: Don't parse table name out of constraint name because it isn't used and can't reliably be determined for table names containing "_"
mappings/VegX-VegBIEN.stems.csv: Reattached location.datasource_id :[] mappings directly to the location itself, rather than to the location via the locationevent
mappings/VegX-VegBIEN.stems.csv: Mapped to taxonoccurrence.sourceaccessioncode to avoid underconstraining taxonoccurrence
xml_func.py: process(): Merge mergeable siblings recursively so that newly-mergeable children are also merged
xml_dom.py: Removed no longer used merge_same_name()
xml_func.py: process(): Recombining pieces of nodes that were split apart in the mappings: Only combine nodes that are explicitly marked as mergeable, to avoid unwanted merges. Refactored to use new xml_dom.merge_by_name().
mappings/VegX-VegBIEN.stems.csv: Marked aggregateoccurrence as mergeable, in preparation for switching to explicit merging only to avoid unwanted merges
xml_dom.py: Moved merge_adjacent() recursion and name checking into merge() so that other callers of merge() can take advantage of it, too. Added merge_by_name() and use it in merge_same_name(). Removed now-unused merge_adjacent().
xpath.py: get(): Support checking all children when the elem name is '*'
xml_dom.py: replace(): Assert that the old node has not been removed from the parent tree
xpath.py: Added is_all()
xml_func.py: process(): Materialize the xml_dom.NodeElemIter iterator into a list so that it won't be affected by concurrent changes to the DOM tree
xml_func.py: process(): Merging children: Support merging non-adjacent children by using new xml_dom.merge_same_name(). This should eliminate the need to sort children of the same name next to each other in the mappings so that they will be merged.
xml_dom.py: Added merge_same_name()
bin/map: ex_tracker: Don't add row_ct to iters count in column-based import (by_col) because errors are not tracked per row, so a % of rows affected is not meaningful
sql_io.py: put_table(): NullValueException: Missing mapping for NOT NULL column: Raise error if there is no default input column to fill in the rows with. Missing mappings for required columns are a normal circumstance with path mappings (plantname, namedplace) because the mappings include intervening levels that may not be specified by the datasource. These mappings have default input columns for the intervening levels, so for them, this will continue to not be flagged as an error.
sql.py: index_cols(): Fixed bug where index columns were sorted by their position in the table, not their position in the index. This was causing the order of columns in covering indexes not to match their order in the corresponding output table index, preventing merge joins and slowing down joins because of the need to do a seq scan. Refactored to use pg_get_indexdef() on each column position index and parsing the result with parse_expr_col(), which greatly simplifies the query and automatically provides the correct sort order.
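The per-column lookup is roughly as below ('some_index' is a placeholder):

    -- one row per index column, in index (not table) order
    SELECT i, pg_get_indexdef('some_index'::regclass, i, true) AS col_expr
    FROM generate_series(1,
        (SELECT indnatts FROM pg_index
         WHERE indexrelid = 'some_index'::regclass)) AS i;
    -- each col_expr is then parsed with parse_expr_col()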
sql.py: Added parse_expr_col()
sql_gen.py: Added unesc_name()
sql.py: index_cols(): Removed unneeded table param, since the index name is unique within its schema. (The schema is looked up in the search_path if needed, by using ::regclass.)
sql.py: index_cols(): Fixed bug where index was looked up only by name, not by schema-qualified name, causing columns from other schemas (such as previous versions of public) to be included in the index columns list. This problem is only noticeable when a column in an index is deleted, as locationevent.datasource_id in locationevent_unique_accessioncode was recently.
input.Makefile: Name logs with extension .log.sql so they are syntax-highlighted for the SQL statements they contain
schemas/vegbien.sql: taxonoccurrence: Added sourceaccessioncode so that taxonoccurrences can be uniquely identified in plots data
schemas/vegbien.sql: locationevent: Removed datasource_id because locationevents are now scoped by their required location, which itself is scoped by datasource
mappings/VegX-VegBIEN.stems.csv: Removed locationevent.datasource_id mappings because locationevents are now scoped by their required location, which itself is scoped by datasource
sql.py: distinct_table(): Don't sort the inserted rows by pkey because they should stay in the table order that they were in. (The select order with no ORDER BY should be the table order. Even if it isn't, it doesn't matter what order they are in for our current application.)
sql_io.py: put_table(): Creating an empty pkeys table: Don't sort the inserted result by pkey because it's empty (limit=0)
sql_io.py: put_table(): ignore(): Fixed bug where in_col's table needed to be changed to insert_in_table, because it's insert_in_table's rows that are being modified but mapping (which in_col comes from) qualifies columns by in_table
sql_io.py: put_table(): ignore(): Also add an index on in_col if mapping the value to NULL
sql_io.py: put_table(): ignore(): Only delete from the insert_in_table, because the invalid rows only need to be removed from the rows that are actually being inserted into the DB. If there are invalid rows in the full (not uniquified) in_table, that's OK, as they can still get a valid output pkey if the first copy of a row they were considered a duplicate of is valid (this is a very unusual situation, so this change should not affect most real data).
sql_io.py: put_table(): ignore(): Merged filter_ var into sql.delete() call because that's the only place it's used
sql_io.py: put_table(): insert_into_pkeys(): Removed no longer used distinct param
sql_io.py: put_table(): Getting output table pkeys of existing/inserted rows: Don't DISTINCT ON the joined rows by input pkey, because this adds sorting overhead. This should not be needed because there generally should not be any duplicate rows for the columns in a unique index (if there are, this is an index configuration problem and should be fixed in the schema). It's possible that partial indexes (with a filter condition) were causing the duplicates, but testing with the DISTINCT ON removed will be needed to determine the cause.
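Schematically (table and column names are placeholders):

    -- before (roughly): one output pkey per input pkey, at the cost of a sort
    --   SELECT DISTINCT ON (in_pkey) in_pkey, out_pkey
    --   FROM joined ORDER BY in_pkey;
    -- after: plain join output; a correctly configured unique index should
    -- already guarantee at most one matching output row per input row
    SELECT in_pkey, out_pkey FROM joined;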
sql.py: flatten(): Auto-add a pkey on the created temp table. This should be standard practice for most temp tables, and for sql_io.put_table() especially this will be useful if we ever want to add back sorting the in_table by row_num (possibly by CLUSTERing on the pkey to avoid pkey index scans).
sql.run_query_into() calls: Use new add_pkey_ param instead of manually calling sql.add_pkey()
sql.py: run_query_into(): Changed add_indexes_ param to add_pkey_ and add just a pkey if it's set. It's no longer necessary to create indexes on every column of a temp table, because the covering indexes for the join columns have been fixed to have columns in the same order as the output table's corresponding index so that they can be used for a merge join.
sql_io.py: put_table(): Add pkey on pkeys table right when it's created, so that any duplicates are detected right away instead of at the end of the iteration. (Duplicates are created as a result of joins matching multiple rows, which often indicates a database misconfiguration.)
sql_io.py: put_table(): Adding pkey on pkeys table: Removed log message because adding an index is considered a low-level operation, which isn't included in the Redmine SQL
schemas/tree_cross-links.sql: Ancestors table: Synced with current definition, which removes unneeded fki_* indexes. Note that the index on ancestor_id might be needed in the future if we ever want to get all the descendants of a plantname/namedplace or perform deletions on plantname/namedplace (which cascade to *_ancestor). For getting all the plantnames/namedplaces (of any rank) for a plantconcept/locationdetermination, though, the *_ancestor_pkey index is sufficient because plantname_id/namedplace_id is the first column in it.
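For illustration, assuming the table is plantname_ancestor with primary key (plantname_id, ancestor_id):

    -- all ancestors (of any rank) for one plantname:
    SELECT ancestor_id
    FROM plantname_ancestor
    WHERE plantname_id = $1;   -- can use plantname_ancestor_pkey (leading column)
    -- an index on ancestor_id alone would only be needed to find descendants or
    -- to speed up deletes that cascade into *_ancestor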
schemas/vegbien.sql: {plantname,namedplace}_update_ancestors(): Fixed slowdown due to removed index on {plantname,namedplace}.parent_id by adding COALESCE to enable using the plantname_unique index for the lookup instead
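A hedged sketch of the idea; the sentinel value and the exact expression in the unique index are assumptions:

    -- if plantname_unique includes an expression such as COALESCE(parent_id, 0),
    -- a lookup on parent_id can only match that index expression when it is
    -- written the same way:
    SELECT plantname_id
    FROM plantname
    WHERE COALESCE(parent_id, 0) = COALESCE($1, 0);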
sql.mk_select() calls: Removed no longer needed start=0 to turn off missing WHERE, LIMIT, or OFFSET clause warning
sql.py: mk_select(): Don't output warning if there is no WHERE, LIMIT, or OFFSET clause, because column-based import has many queries where this is the case and it's annoying to need to specify start=0 to turn off this warning
sql.py: flatten(): Don't sort the input tables by the pkey because it doesn't matter what order the datasource's rows are inserted in. Note that PostgreSQL doesn't guarantee the order of rows in a table, so it is possible that the rows were being inserted in an unknown order before this change, as well.
sql.py: delete(): Cache deletes by default
sql_io.py: put_table(): Merged remove_rows() and invalid2null() into one ignore() function that chooses the action (map to NULL or delete) depending on the value and whether NULLs have been filtered out of the column
sql_io.py: put_table(): remove_rows(): Delete the rows containing the invalid value instead of filtering them out of each select, so that the filtering can be profiled separately from the insertion. This also requires deleting rows with invalid non-NULL values instead of mapping them to NULL if NULLs have already been filtered out of the column in question.
sql_io.py: put_table(): Main insert: Don't run it inside an extra savepoint, because this will cause the creation of any helper SQL functions to be rolled back if an exception is thrown. If those functions are later re-used, the cache will think they exist when they no longer do. (Calling a function on input rows is now run in recover mode, so that it doesn't need the outer savepoint anymore.)
sql_io.py: put_table(): main_insert(): Moved code that is not part of the main query outside the function, so it wouldn't be subject to the exception handling. Preparing to insert new rows: Only do the preparation code for insert_select() if the out_table is not a function.