mappings/VegX-VegBIEN.stems.csv: Marked aggregateoccurrence as mergeable, in preparation for switching to explicit merging only to avoid unwanted merges
xml_dom.py: Moved merge_adjacent() recursion and name checking into merge() so that other callers of merge() can take advantage of it, too. Added merge_by_name() and use it in merge_same_name(). Removed now-unused merge_adjacent().
xpath.py: get(): Support checking all children when the elem name is '*'
xml_dom.py: replace(): Assert that old node not removed from parent tree
xpath.py: Added is_all()
xml_func.py: process(): Materialize the xml_dom.NodeElemIter iterator into a list so that it won't be affected by concurrent changes to the DOM tree
xml_func.py: process(): Merging children: Support merging non-adjacent children by using new xml_dom.merge_same_name(). This should eliminate the need to sort children of the same name next to each other in the mappings so that they will be merged.
xml_dom.py: Added merge_same_name()
bin/map: ex_tracker: Don't add row_ct to the iters count in column-based import (by_col), because errors are not tracked per row, so a percentage of rows affected would not be meaningful
sql_io.py: put_table(): NullValueException: Missing mapping for NOT NULL column: Raise an error if there is no default input column to fill the rows in with. Missing mappings for required columns are a normal circumstance with path mappings (plantname, namedplace), because the mappings include intervening levels that the datasource may not specify. Those mappings have default input columns for the intervening levels, so they will continue not to be flagged as errors.
sql.py: index_cols(): Fixed bug where index columns were sorted by their position in the table, not their position in the index. This was causing the order of columns in covering indexes not to match their order in the corresponding output table index, preventing merge joins and slowing down joins because of the need to do a seq scan. Refactored to use pg_get_indexdef() on each column position index and parsing the result with parse_expr_col(), which greatly simplifies the query and automatically provides the correct sort order.
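A sketch of the refactored lookup (index name hypothetical): pg_get_indexdef() with a nonzero column number returns the definition of just that index column, in index order.

    -- one row per index column, in index (not table) order:
    SELECT i, pg_get_indexdef('plantname_unique'::regclass, i, true) AS col_def
    FROM generate_series(1, 2) AS i;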
sql.py: Added parse_expr_col()
sql_gen.py: Added unesc_name()
sql.py: index_cols(): Removed unneeded table param, since the index name is unique within its schema. (The schema is looked up in the search_path if needed, by using ::regclass.)
sql.py: index_cols(): Fixed bug where index was looked up only by name, not by schema-qualified name, causing columns from other schemas (such as previous versions of public) to be included in the index columns list. This problem is only noticeable when a column in an index is deleted, as locationevent.datasource_id in locationevent_unique_accessioncode was recently.
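The ::regclass behavior that the previous two entries rely on, sketched (index name hypothetical):

    SELECT 'plantname_unique'::regclass;         -- resolved via the search_path
    SELECT 'public.plantname_unique'::regclass;  -- pinned to a specific schema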
input.Makefile: Name logs with extension .log.sql so they are syntax-highlighted for the SQL statements they contain
schemas/vegbien.sql: taxonoccurrence: Added sourceaccessioncode so that taxonoccurrences can be uniquely identified in plots data
schemas/vegbien.sql: locationevent: Removed datasource_id because locationevents are now scoped by their required location, which itself is scoped by datasource
mappings/VegX-VegBIEN.stems.csv: Removed locationevent.datasource_id mappings because locationevents are now scoped by their required location, which itself is scoped by datasource
sql.py: distinct_table(): Don't sort the inserted rows by pkey because they should stay in the table order that they were in. (The select order with no ORDER BY should be the table order. Even if it isn't, it doesn't matter what order they are in for our current application.)
sql_io.py: put_table(): Creating an empty pkeys table: Don't sort the inserted result by pkey because it's empty (limit=0)
sql_io.py: put_table(): ignore(): Fixed bug where in_col's table needed to be changed to insert_in_table: it is insert_in_table's rows that are being modified, but mapping (which in_col comes from) qualifies its columns with in_table
sql_io.py: put_table(): ignore(): Also add an index on in_col if mapping the value to NULL
sql_io.py: put_table(): ignore(): Only delete from the insert_in_table, because the invalid rows only need to be removed from the rows that are actually being inserted into the DB. If there are invalid rows in the full (not uniquified) in_table, that's OK, as they can still get a valid output pkey if the first copy of a row they were considered a duplicate of is valid (this is a very unusual situation, so this change should not affect most real data).
sql_io.py: put_table(): ignore(): Merged filter_ var into sql.delete() call because that's the only place it's used
sql_io.py: put_table(): insert_into_pkeys(): Removed no longer used distinct param
sql_io.py: put_table(): Getting output table pkeys of existing/inserted rows: Don't DISTINCT ON the joined rows by input pkey, because this adds sorting overhead. It should not be needed, because there generally should not be any duplicate rows for the columns in a unique index (if there are, this is an index configuration problem and should be fixed in the schema). It's possible that partial indexes (with a filter condition) were causing the duplicates, but testing without the DISTINCT ON in place will be needed to determine the cause.
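For context, the DISTINCT ON pattern that was removed forces a sort of the entire join result; roughly (all names hypothetical):

    SELECT DISTINCT ON (in_pkey) in_pkey, out_pkey
    FROM joined_rows
    ORDER BY in_pkey;  -- DISTINCT ON requires this ORDER BY, i.e. a full sort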
sql.py: flatten(): Auto-add a pkey on the created temp table. This should be standard practice for most temp tables, and for sql_io.put_table() especially this will be useful if we ever want to add back sorting the in_table by row_num (possibly by CLUSTERing on the pkey to avoid pkey index scans).
sql.run_query_into() calls: Use new add_pkey_ param instead of manually calling sql.add_pkey()
sql.py: run_query_into(): Changed add_indexes_ param to add_pkey_ and add just a pkey if it's set. It's no longer necessary to create indexes on every column of a temp table, because the covering indexes for the join columns have been fixed to have columns in the same order as the output table's corresponding index so that they can be used for a merge join.
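Why the column order matters, sketched (all names hypothetical): a merge join needs both inputs sorted identically, which two index scans can provide only if the indexes list their columns in the same order.

    -- unique index on the output table:
    CREATE UNIQUE INDEX out_table_unique ON out_table (col_a, col_b);
    -- covering index on the temp table, with its columns in the same order, so
    -- the planner can merge-join two index scans without an extra sort step:
    CREATE INDEX in_table_cover ON in_table (col_a, col_b);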
sql_io.py: put_table(): Add pkey on pkeys table right when it's created, so that any duplicates are detected right away instead of at the end of the iteration. (Duplicates are created as a result of joins matching multiple rows, which often indicates a database misconfiguration.)
sql_io.py: put_table(): Adding pkey on pkeys table: Removed log message because adding an index is considered a low-level operation, which isn't included in the SQL log posted to Redmine
schemas/tree_cross-links.sql: Ancestors table: Synced with current definition, which removes unneeded fki_* indexes. Note that the index on ancestor_id might be needed in the future if we ever want to get all the descendants of a plantname/namedplace or perform deletions on plantname/namedplace (which cascade to *_ancestor). For getting all the plantnames/namedplaces (of any rank) for a plantconcept/locationdetermination, though, the *_ancestor_pkey index is sufficient because plantname_id/namedplace_id is the first column in it.
schemas/vegbien.sql: {plantname,namedplace}_update_ancestors(): Fixed the slowdown caused by the removed index on {plantname,namedplace}.parent_id by adding COALESCE to the lookup, enabling it to use the corresponding *_unique index instead
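The principle behind the fix, sketched on a generic table (definitions hypothetical, not the actual plantname_unique): an index on an expression is only usable when the query repeats that expression.

    CREATE TABLE eg (id serial PRIMARY KEY, parent_id integer, name text);
    CREATE UNIQUE INDEX eg_unique ON eg (COALESCE(parent_id, 0), name);

    -- can use eg_unique:
    SELECT id FROM eg WHERE COALESCE(parent_id, 0) = 42 AND name = 'x';
    -- cannot use it; the bare column does not match the indexed expression:
    SELECT id FROM eg WHERE parent_id = 42 AND name = 'x';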
sql.mk_select() calls: Removed the no-longer-needed start=0, which was used to turn off the warning about a missing WHERE, LIMIT, or OFFSET clause
sql.py: mk_select(): Don't output warning if there is no WHERE, LIMIT, or OFFSET clause, because column-based import has many queries where this is the case and it's annoying to need to specify start=0 to turn off this warning
sql.py: flatten(): Don't sort the input tables by the pkey because it doesn't matter what order the datasource's rows are inserted in. Note that PostgreSQL doesn't guarantee the order of rows in a table, so it is possible that the rows were being inserted in an unknown order before this change, as well.
sql.py: delete(): Cache deletes by default
sql_io.py: put_table(): Merged remove_rows() and invalid2null() into one ignore() function that chooses the action (map to NULL or delete) depending on the value and whether NULLs have been filtered out of the column
sql_io.py: put_table(): remove_rows(): Delete the rows containing the invalid value instead of filtering them out of each select, so that the filtering can be profiled separately from the insertion. This also requires deleting rows with invalid non-NULL values instead of mapping them to NULL if NULLs have already been filtered out of the column in question.
sql_io.py: put_table(): Main insert: Don't run it inside an extra savepoint, because this will cause the creation of any helper SQL functions to be rolled back if an exception is thrown. If those functions are later re-used, the cache will think they exist when they no longer do. (Calling a function on input rows is now run in recover mode, so that it doesn't need the outer savepoint anymore.)
sql_io.py: put_table(): main_insert(): Moved code that is not part of the main query outside the function, so that it isn't subject to the exception handling. Preparing to insert new rows: Only run the preparation code for insert_select() if out_table is not a function.
sql_io.py: put_table(): main_insert(): is_function: Run insert_into_pkeys() with recover=True so that errors in the function are properly rolled back
sql_io.py: put_table(): insert_into_pkeys(): Support custom query kw_args, such as recover
sql_io.py: put_table(): Use full_in_table in the into-table row count assertion, since in_table may have had rows deleted
sql_io.py: put_table(): Save default values for all rows in a new temp table, full_in_table, since in_table may have had rows deleted
sql.py: Added mk_delete() and delete()
sql_io.py: put_table(): mk_main_select(): Turned off unnecessary ORDER BY to avoid sorting the entire table every time it's used. (PostgreSQL has no concept of reordering a table and re-using that ordering, so it just re-sorts the table each time. Index scans on the pkey do not appear to be used in practice, according to EXPLAIN results from live imports.) Document that we instead assume that identical SELECT queries retrieve rows in the same order.
schemas/vegbien.sql: taxondetermination: Fixed bug where taxondetermination_taxonoccurrence_id_fkey trigger was applied before the NOT NULL constraint on taxonoccurrence_id was checked, causing the trigger to fail on NULL taxonoccurrence_ids, by making it an AFTER trigger. (An AFTER trigger will still roll back the entire insert if it fails, even though it runs after the insert itself.)
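A sketch of the fix (exact trigger signature assumed): declaring the trigger AFTER lets built-in checks such as NOT NULL run first, while a failure in the trigger body still aborts the whole insert.

    CREATE TRIGGER taxondetermination_taxonoccurrence_id_fkey
        AFTER INSERT OR UPDATE ON taxondetermination
        FOR EACH ROW
        EXECUTE PROCEDURE taxondetermination_taxonoccurrence_id_fkey();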
schemas/vegbien.sql: specimenreplicate: institution_id: Fixed typo in comment
inputs/import.stats.xls: Fixed date for most recent import
sql.py: DbConn.run_query(): Put the data source comment on a separate line in the log file instead of using a carriage return. The carriage return sometimes had the desired effect of overwriting the src comment with the first line of the query, but when the line lengths didn't match there wasn't enough overlap.
schemas/vegbien.ERD.mwb: Synced with schema
schemas/vegbien.sql: Removed per-column indexes, which are no longer needed by either row-based or column-based import because they are able to do a merge join or lookup using the table's UNIQUE INDEX. Instead of forcing the database to build and maintain large indexes (15+ GB!) that are not used, optimization-only (non-UNIQUE) indexes should be added as needed only once the database is actually used for queries. In most cases it will not even be necessary to add additional indexes then, because most UNIQUE indexes can be reused for broad lookups (rather than just duplicate elimination). Even the foreign key covering indexes (fki_*) are not needed because we virtually never delete rows in the DB, and even if we were to start doing that regularly, the cost of maintaining the indexes on import is most likely not worth the speed improvements for cascading deletes.
schemas/py_functions.sql: Removed per-column indexes on relational functions, which are no longer needed by row-based import because it is able to do a merge join-style lookup using the table's UNIQUE INDEX. (Note that column-based import doesn't use the (slower) relational functions at all anymore, and instead calls the corresponding SQL function directly using named arguments.)
schemas/functions.sql: Removed per-column indexes on relational functions, which are no longer needed by row-based import because it is able to do a merge join-style lookup using the table's UNIQUE INDEX. (Note that column-based import doesn't use the (slower) relational functions at all anymore, and instead calls the corresponding SQL function directly using named arguments.)
schemas/vegbien.sql: plantname: plantname_unique UNIQUE INDEX: Moved scope_id to the back so that the index can easily be used for lookup queries (not just column-based import) without having to explicitly specify NULL for that field. This takes advantage of a btree sorting feature where a broader lookup can be done using just the first n columns of the index.
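The btree feature being exploited, sketched (column list hypothetical and simplified): a query on a leading prefix of the index columns can use the index without constraining the trailing columns.

    CREATE UNIQUE INDEX plantname_unique ON plantname (plantname, rank, scope_id);
    -- uses the index via its first two columns, without specifying scope_id:
    SELECT plantname_id FROM plantname
    WHERE plantname = 'Quercus' AND rank = 'genus';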
schemas/vegbien.sql: locationevent, specimenreplicate: Turned UNIQUE CONSTRAINTs and UNIQUE INDEXes with nullable fields into partial UNIQUE INDEXes with IS NOT NULL filter conditions, in order to work automatically with sql_gen without requiring a separate covering lookup index. Removed no longer needed covering lookup indexes.
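A sketch of the transformation (index name and column list hypothetical):

    -- before: a UNIQUE CONSTRAINT over nullable columns plus a separate
    -- covering lookup index; after: one partial UNIQUE INDEX serves both:
    CREATE UNIQUE INDEX specimenreplicate_unique_catalognumber
        ON specimenreplicate (datasource_id, catalognumber_dwc)
        WHERE catalognumber_dwc IS NOT NULL;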
sql_io.py: put_table(): DuplicateKeyException: Fixed bug where combining multiple unique constraints was incorrectly allowed, when in fact the constraints need to be separately applied to the different rows that violate them, which is not currently supported
sql.py: DbConn.with_savepoint(): Log transaction profiling info with level=4 like the rest of the transaction commands, so that it isn't output when the transaction itself should be hidden (e.g. for name versioning or internal commands)
sql.py: DbConn.with_savepoint(): Profile (nested) transactions so that the run time for groups of commands (e.g. csv2db INSERTs) is known
csv2db: verbosity defaults to 3 so that detailed queries with profiling stats are included in the log file, to assist in optimization
csv2db: Don't cache per-row INSERT queries because this bloats the cache (there aren't repeated identical INSERTs that shouldn't be re-run like in row-based import)
sql.py: with_explain_comment(), DbConn: Fixed bug where with_explain_comment() was run even in per-row imports (row-based import, and csv2db with INSERT), causing the overhead of an EXPLAIN query for every single INSERT and filling up the cache with EXPLAIN query results. Fixed by adding an autoexplain mode, running with_explain_comment() only in autoexplain mode, and enabling autoexplain mode only for column-based import.
db_xml.py: put_table(): Turn on autoanalyze mode to help the query planner avoid sequential scans on tables that now contain data. (Don't do this in row-based import because it creates too much overhead per insert.)
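At the SQL level, autoanalyze mode amounts to refreshing the planner's statistics after bulk changes (table name hypothetical), so plans are not based on stale empty-table stats:

    ANALYZE locationevent;  -- update row-count/distribution stats for the planner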
sql.py: Run all EXPLAIN queries with log_level=4 since the EXPLAIN information is now usually generated when the query is generated rather than when it's run, so the log_level is not known
sql.py: Added with_explain_comment() to query generating functions so that nested queries will also have EXPLAIN information
sql.py: Added with_explain_comment() and use it in run_query()
sql.py: run_query(): EXPLAIN output: Run explain() with log_level 1 higher than the query's log_level, so that low-level queries' EXPLAIN queries are not output when the queries themselves are not output. This also ensures that only level 2 (major) queries have the EXPLAIN logged (to introduce the query that is being run), to avoid cluttering the log output.
sql.py: explain(): Support custom log_level
schemas/vegbien.sql: taxondetermination: taxondetermination_taxonoccurrence_id_fkey manual fkey constraint: Fixed bug where foreign_key_violation needed to be raised instead of unique_violation
inputs/import.stats.xls: Updated with stats from latest import
debug2redmine.csv: Remove newline before EXPLAIN comment
debug2redmine.csv: Filter out EXPLAIN comments
sql.py: run_query(): EXPLAIN all explainable queries before they are run, to provide query plans for later profiling and index analysis. At verbosity 3+, this also effectively allows the user to see what query is being run before it's executed.
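The mechanism, sketched: each explainable query is first submitted under EXPLAIN and the resulting plan is logged with it (query and plan illustrative only).

    EXPLAIN SELECT plantname_id FROM plantname WHERE plantname = 'Quercus';
    -- Index Scan using plantname_unique on plantname  (cost=0.00..8.27 rows=1 ...)
    --   Index Cond: (plantname = 'Quercus'::text)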
sql.py: is_explainable(): Fixed bug where the r'' (raw string) syntax was needed to keep the \ in \b from being treated as a string escape
sql.py: Added explain() and is_explainable()
strings.py: Added join_lines()
mk_rm_indexes: Also include the search_path in the outputted commands
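Including the search_path makes the emitted script runnable as-is; a sketch (schema and index names hypothetical):

    SET search_path TO "2012-7-5", public;
    DROP INDEX fki_locationevent_parent_id;  -- resolved via the search_path above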
schemas/vegbien.sql: commclass: Fixed bug where commclass_unique needed to be a UNIQUE INDEX
schemas/vegbien.sql: plantname: Removed unneeded indexes on plantname and rank (plantname_unique takes care of joins)
pg_dump_vegbien: Enclose the schema name in "" because pg_dump requires this for schema names with special characters
inputs/import.stats.xls: Updated with stats from 2012-7-3 and 2012-7-5 imports. Note that the 2012-7-5 import was partial, so its stats can't be directly compared.
root Makefile: VegBIEN DB: Schemas: Added schemas/%/rm_indexes
Added mk_rm_indexes
sql.py: Added drop() and use it in drop_table()
debug2redmine: Remove profiling information from the logging output
sql.py: DbConn.run_query(): Only print notices in debug mode, because they are output at a log level higher than the debug verbosity threshold; skipping them otherwise avoids unnecessary overhead
sql.py: DbConn: Added profile_row_ct setting, which is passed to profiler.stop() in run_query()
bin/map: Logging: Raised debug-mode verbosity threshold to 1.5 so that in row-based imports, which have a default verbosity of 1.1, sql.DbConn.run_query() will not profile the query, to avoid unnecessary overhead
sql.py: DbConn.run_query(): Only profile queries in debug mode, to avoid unnecessary overhead when the run time will not be displayed
sql.py: DbConn.run_query(): Profile using the profiling.ItersProfiler class, which pretty-prints the run time
sql.py: DbConn.run_query(): Added profiling of query execution, which is logged with the query
sql.py: DbConn.run_query(): Move log_msg() to where it's used, so that it runs after the query is run and can refer to profiling variables
sql.py: DbConn.run_query(): Use else blocks to avoid applying exception handling to commands run after the main command
sql.py: DbConn.run_query(): Always output or return the log message after the query is run, so that it can be output with profiling statistics in the log message header
sql.py: run_query(): Always output the log message after the query is run, so that it can be output with profiling statistics in the log message header
Regenerated vegbien.ERD exports
schemas/vegbien.sql: locationevent: Added covering lookup indexes on the unique constraints to enable fast merge joins in column-based import. Removed no longer needed individual-column lookup indexes because the constraint-covering lookup indexes now handle lookups. This also avoids index bloat.
schemas/vegbien.sql: specimenreplicate: Removed no longer needed individual-column lookup indexes because the constraint-covering lookup indexes now handle lookups. This also avoids index bloat.