sql.py: run_query_into(): Documented why analyze() must be run manually on newly populated temp tables
sql.py: DbConn: Added autoanalyze mode. Added autoanalyze() which runs analyze() only if in autoanalyze mode. Use new autoanalyze() in functions that change a table's contents.
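
A rough sketch of the new behavior, assuming names like db.autoanalyze and the codebase's run_query() (signatures here are guesses, not the actual sql.py code):

    def analyze(db, table):
        # refresh the query planner's stats for the table
        run_query(db, 'ANALYZE '+table)

    def autoanalyze(db, table):
        # no-op unless this connection opted in to autoanalyze mode
        if db.autoanalyze: analyze(db, table)
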
sql.py: run_query_into(): analyze() the created table to ensure the query planner's initial stats are accurate
inputs/SpeciesLink/src: Added custom header that overwrites existing header so that column names will not be too long for the staging table
cat_csv: Support overwriting the existing header using a separate header file
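
Roughly, the override works like this (a sketch with made-up parameter names, not the actual cat_csv code):

    def cat_csv(in_, out, header=None):
        if header is not None:
            out.write(header)  # use the replacement header line
            in_.readline()  # discard the input's own header
        for line in in_: out.write(line)
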
schemas/vegbien.sql: Added location.location_coords index to speed up large imports by providing an index for merge joins
csv2db: Reanalyze table, so that query planner stats are up to date even though the table doesn't need to be vacuumed anymore
sql.py: Added analyze()
csv2db: Removed no longer needed table vacuum (cleanup_table() now avoids creating dead rows)
sql.py: cleanup_table(): Use update()'s new in_place mode to avoid needing to vacuum the table
sql.py: mk_update(): in_place: Support updating multiple columns at once
sql.py: update() calls: Use in_place where possible to avoid creating dead rows, which bloats table size
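
For context, one way an in-place update avoids dead rows is PostgreSQL's ALTER COLUMN ... TYPE ... USING, which rewrites the table directly instead of inserting new row versions. This sketch (with made-up table/column names) shows the multi-column form; the SQL mk_update() actually generates may differ:

    run_query(db, '''\
    ALTER TABLE plants
    ALTER COLUMN height TYPE double precision USING coalesce(height, 0),
    ALTER COLUMN name   TYPE text             USING trim(name)
    ''')
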
sql.py: DbConn.col_info(): Support user-defined types
sql_gen.py: Added Nullif
sql_gen.py: Added Coalesce class and use it in EnsureNotNull
sql_gen.py: Added coalesce and use it in EnsureNotNull
sql.py: DbConn.col_info(): Don't cache the structure query because the column type, etc. may change between calls. This fixes a bug in mk_update() where the column type would be retrieved before a NOT NULL constraint was added, causing the NOT NULL constraint not to appear in the cached info about the column.
sql.py: mk_update(): Implemented in_place mode
sql.py: mk_update(): Factored out filtering of input values so only `.to_str(db)` is used inline in the creation of the query
sql.py: mk_update(): Added in_place param
csvs.py: TsvReader: Prevent "new-line character seen in unquoted field" errors by replacing '\r' with '\n'
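
The fix amounts to something like this sketch (not the actual TsvReader code):

    def tsv_rows(stream):
        for line in stream:
            # a bare '\r' mid-field makes the csv module raise "new-line
            # character seen in unquoted field"; normalize it to '\n'
            # and split on tabs ourselves
            yield line.replace('\r', '\n').rstrip('\n').split('\t')
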
csv2db: Adding indexes: Fixed bug where the sql.add_index() call still used the old name of the ensure_not_null param, which has been renamed to ensure_not_null_
sql.py: cast(): columns values clause: Use start=0 to avoid "SELECT statement missing a WHERE, LIMIT, or OFFSET clause" warning
inputs/import.stats.xls: Updated for most recent run
sql.py: Removed no longer needed mk_track_data_error()
sql.py: track_data_error(): Use for loop and insert() (ignoring DuplicateKeyException) to insert entries into the errors table, to get the same optimization benefits this change provides in other filter-out contexts, and to improve clarity
sql.py: cast(): Use FOR loop with EXCEPTION block instead of CROSS JOIN with LEFT JOIN to insert entries into the errors table, to get the same optimization benefits this change provides in other filter-out contexts, and to improve clarity
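
At the Python level, the filter-out pattern both entries describe looks roughly like this (insert() and DuplicateKeyException are the codebase's names; the exact signatures here are guesses):

    for row in rows:
        try: insert(db, errors_table, row)
        except DuplicateKeyException: pass  # already recorded; skip it
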
sql_gen.py: NamedValues: Support None cols param for no named columns
sql.py: ensure_not_null(): Made the use of index columns configurable based on the # of rows in the table, because for small datasources, they seem to add 6-25% to the total import time
xml_func.py: _noCV: Fixed bug that assumed items was an iterator, when it's now a list
sql.py: add_index_col(), cast_temp_col(): Cache add_col() by providing new comment param to distinguish new columns of the same (colliding) suffixed name but from different source columns
sql.py: add_index_col(), cast_temp_col(): Cache the update that fills in the new column, since it's idempotent
sql.py: update(): Pass cacheable to run_query()
sql.py: add_col(): Added comment param which can be used to distinguish columns of the same name from each other when they contain different data, to allow the ADD COLUMN query (and related queries, such as adding indexes) to be cached
sql_gen.py: Added esc_comment()
sql.py: DbConn.DbCursor.execute(): Allow ADD COLUMN to be cached if it has a distinguishing comment, because then the rest of the query will be unique in the face of name collisions
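
A sketch of the mechanism (illustrative only, aside from esc_comment()): since the cache key is the query text, appending an escaped SQL comment that names the source column makes otherwise identical ADD COLUMN queries distinct:

    def add_col(db, table, col_def, comment=None):
        query = 'ALTER TABLE '+table+' ADD COLUMN '+col_def
        if comment is not None: query += ' -- '+esc_comment(comment)
        # only cacheable when a comment disambiguates colliding names
        run_query(db, query, cacheable=comment is not None)
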
sql.py: add_col(): Document that name may be versioned, so caller needs to propagate any renaming back to any source column for the TypedCol
sql.py: add_col() callers: Removed column name versioning because that is now handled by add_col()
sql.py: add_col() callers: Fixed bug where needed to propagate any renaming of typed column back to regular column
sql.py: add_col(): Version column names to avoid collisions. (Previously, callers were required to do this themselves.)
sql.py: cast_temp_col(): Handle column name collisions like add_index_col()
sql.py: mk_insert_select(): INSERT IGNORE: Switched to using FOR loop rather than cursors because cursors are only needed if you want to process multiple rows in the same EXCEPTION block (which you can't do because then all the previous inserts will be rolled back if one row is a duplicate key)
sql.py: mk_insert_select(): INSERT IGNORE: Moved FETCH FROM cursor outside the EXCEPTION block since only the insert needs unique_violation errors ignored
sql.py: put_table(): Removed no longer needed EXCLUSIVE lock on the output table
sql.py: add_index_col(): Run all operations with log_level=3 because this function is the equivalent of creating an index, and creating an index has log_level=3 so it's excluded from the primary algorithm steps. add_not_null(): Run query with log_level=3 because it's an internal operation, used by add_index_col() above.
sql.py: update(): Pass any log_level to run_query()
xml_func.py: process(): Added pass-through optimization for aggregating functions with one arg
xml_func.py: process(): Call list() on the items' iterator so that its length can be determined wherever it's used without needing to call list() on it separately
sql.py: mk_select(): Put '*' on same line as 'SELECT' if there is no DISTINCT ON, to avoid extra newlines
sql_gen.py: NamedTable.to_str(): Put AS clause on same line as code being named if it didn't contain a newline, to avoid extra newlines
sql.py: run_query(): Always parse exceptions, whether recover is set or not, to avoid making the caller set recover just to parse exceptions. If recover is not set, just don't run any queries when generating the parsed exception and return an empty value for the needed information. (A partial parsed exception is better than an unparsed one.)
sql_gen.py: is_temp_col(): Use is_table_col() so that the type of the parameter is also checked
inputs/import.stats.xls: Bolded the ms/row totals. Added note that non-bolded totals are not directly comparable because different methods were run with different numbers of rows.
sql.py: put_table(): Inserting new rows: Removed no longer needed filter-out LEFT JOIN, since the INSERT IGNORE replacement handles duplicate keys
sql.py: put_table(): Inserting new rows: Use insert_select()'s (mk_insert_select()'s) new ignore option to ignore duplicate keys
sql_gen.py: to_name_only_col(): Consider any non-table column, including NamedCol, to be already name-only. This fixes a bug in sql.mk_insert_select() where the value of a NamedCol was removed by to_name_only_col() even though it was needed.
sql.py: mk_insert_select(): INSERT IGNORE: Use an EXCEPTION block for each individual row because "When an error is caught by an EXCEPTION clause, [...] all changes to persistent database state within the block are rolled back." (http://www.postgresql.org/docs/8.3/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING). Documented that cursor stays at current position in spite of automatic ROLLBACK on EXCEPTION.
sql.py: mk_insert_select(): INSERT IGNORE: Use RETURN QUERY on the insert query instead of RETURN NEXT on a local var to forward the RETURNING value
sql.py: mk_insert_select(): INSERT IGNORE: Support RETURNING clause. Always return something to set the correct rowcount.
sql_gen.py: Use an as_*() function instead of manually checking the type wherever possible
sql.py: mk_insert_select(): INSERT IGNORE: Added duplicate key handling by using EXCEPTION block to catch unique_violation
sql.py: mk_insert_select(): INSERT IGNORE: Fixed bug where user-defined types were not supported correctly, by referencing the column's type directly in PL/pgSQL instead of retrieving it from the information_schema. Use a row variable of the output table's row type to store the select query data, so that you don't need to create a separate local var for each output column.
inputs/import.stats.xls: Changed "/row (ms)" to "ms/row"
sql.py: mk_insert_select(): INSERT IGNORE: Used typed vars to store each column of the select query, because the select query's column names are not known, so it would be unknown which record var column to use for each output column. (The explicit types also ensure there are no unknown-type values that PostgreSQL will say can't be cast.)
sql.py: insert_select(): In ignore mode, always recover from errors
sql.py: mk_insert_select(): INSERT IGNORE: Implemented simple cursor loop that just inserts each row, without ignoring duplicate keys
sql.py: mk_insert_select(): Added ignore param to do an INSERT IGNORE equivalent operation. Added basic structure for INSERT IGNORE. Support embeddable queries without RETURNING columns (which is the case for most INSERT IGNORE queries).
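
Putting the INSERT IGNORE entries above together, the generated PL/pgSQL is roughly as follows (table, column, and function names are made up; the real generated code differs):

    run_query(db, '''\
    CREATE FUNCTION insert_ignore() RETURNS SETOF integer AS $$
    DECLARE
        row_var plants%ROWTYPE;  -- row var of the output table's row type
    BEGIN
        FOR row_var IN SELECT * FROM plants_staging LOOP
            BEGIN  -- per-row block: an error rolls back only this row
                RETURN QUERY
                INSERT INTO plants VALUES (row_var.*) RETURNING plants_id;
            EXCEPTION
                WHEN unique_violation THEN NULL;  -- ignore duplicate keys
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;
    ''')
    # returns the pkeys of just the rows that were actually inserted
    rows = run_query(db, 'SELECT * FROM insert_ignore()')
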
sql.py: run_query_into(): Always set into.schema to None, so that table is created in the temp schema in debug_temp mode
sql.py: DbConn.DbCursor.execute(): Don't cache ADD COLUMN queries because the rest of the query must be unique in order to use caching
sql.py: add_col(): Don't ignore already-existing columns because sometimes name truncation causes collisions, requiring the caller to version the name. add_index_col(): Version the column name to avoid collisions.
sql.py: add_index(): Version index names to avoid collisions
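
The versioning pattern is roughly this (the '#n' suffix, exception name, and recover param are guesses):

    version = 1
    name = base_name
    while True:
        try:
            run_query(db, 'CREATE INDEX "'+name+'" ON '+table+' ('+col+')',
                recover=True)
            break
        except DuplicateException:  # name taken: try a versioned name
            name = base_name+'#'+str(version)
            version += 1
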
sql.py: DbConn.DbCursor.execute(): Always cache structural changes, not just if they throw an exception
sql.py: add_index(): Fixed bug where expr could not be deep-copied until ensure_not_null() had been run on it, because ensure_not_null() modifies the table's index_cols and that modification needs to happen on the original table
sql_gen.py: EnsureNotNull: Added to_str() which uses the column's index col if available. This works around a PostgreSQL bug where indexes on expressions are sometimes not used, but indexes on plain columns are used.
sql_gen.py: Store index cols in the table instead of in the column, because columns are often recreated from the table and name but tables are generally not copied
sql_gen.py: Added is_indexed_col()
sql.py: add_index(): Use new sql.ensure_not_null(), which creates a separate column to store the index expr where possible
sql.py: Added add_not_null(), add_index_col(), ensure_not_null()
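
A sketch of what ensure_not_null()/add_index_col() amount to (made-up names and sentinel): rather than indexing the expression coalesce(name, ...) directly, which the planner sometimes declines to use (see EnsureNotNull.to_str() above), materialize it as a plain column and index that:

    run_query(db, 'ALTER TABLE plants ADD COLUMN name_notnull text')
    run_query(db, "UPDATE plants SET name_notnull = coalesce(name, '[null]')")
    run_query(db, 'ALTER TABLE plants ALTER COLUMN name_notnull SET NOT NULL')
    # the planner reliably uses an index on a plain column
    run_query(db, 'CREATE INDEX plants_name_notnull ON plants (name_notnull)')
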
sql_gen.py: Added is_temp_col()
sql_gen.py: concat(): Also preserve cast suffix
sql_gen.py: Col: Added index_col attr
sql_gen.py: Added suffixed_col()
sql.py: run_query_into(): Set the into table to be a temp table
sql_gen.py: Table: Store whether table is temp table
sql_gen.py: concat(): Also preserve appended column names
sql_gen.py: Col.to_str(): Take a for_str param, which does both concat() and str()/clean_name(), instead of a use_concat param, which didn't remove quotes from the strings before concatenating, causing strings to be incorrectly truncated
sql_gen.py: EnsureNotNull: Cast the null sentinel to the type it's the sentinel for, so that COALESCE returns the same type as the provided type, instead of potentially incorrectly assuming the constant's type. This fixes a bug where COALESCE returned an incorrect type.
sql.py: cast(): Use new sql_gen.Cast
sql_gen.py: Added Cast
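
To illustrate the EnsureNotNull fix above (made-up names; the sentinel is arbitrary): COALESCE's result type is inferred from its arguments, so a bare sentinel constant can drag the expression to the wrong type, while casting the sentinel pins the result type:

    # without the cast, COALESCE may assume the constant's type instead
    # of the column's; the cast makes the result numeric for certain
    run_query(db, "SELECT coalesce(price, CAST('-1' AS numeric)) FROM plants")
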
sql_gen.py: concat(): Consider ")"s part of the existing suffix, so that parentheses match up
Removed extra trailing whitespace
main Makefile: Added explicit schemas/temp/reinstall target because vegbiendev (Ubuntu 12.04) seems to ignore %/reinstall
vegbien_dest: Added temp to schemas list so the intermediate tables will be created there, where they are easy to remove with `make schemas/temp/reinstall`
main Makefile: Added %/reinstall, which calls the corresponding uninstall and install targets
Added temp schema to store intermediate tables permanently when desired for debugging. It is not part of schemas/all because it should not be remade to include current intermediate tables when remaking other schema exports.
schemas: Moved *.sql.make into Makefile to take advantage of % pattern matching
Removed no longer used schemas/vegbien_empty.sql
main Makefile: Removed empty_db, because `make schemas/reinstall` has the same effect and is simpler
README.TXT: Changed documentation to use `make schemas/reinstall` to empty the DB, since that command is simpler. Added how to archive the last import.