sql.py: DbConn._db(): Record that a transaction is already open before setting the search_path, so that a query is never run with a _savepoint value less than 1 (manual transactions are not supported yet)
sql.py: DbConn.with_savepoint(): Increment _savepoint before running queries so they don't get autocommitted
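A minimal sketch of the savepoint bookkeeping these two entries describe, assuming a simplified run_query(); the real methods also handle caching, logging, and error recovery:

    import contextlib

    class DbConn:
        def __init__(self, db):
            self.db = db  # DB-API connection; the driver is not in autocommit
            self._savepoint = 1  # mode, so a transaction is already open

        def run_query(self, query):
            cur = self.db.cursor()
            cur.execute(query)
            if self._savepoint == 0: self.db.commit()  # autocommit only outside transactions
            return cur

        @contextlib.contextmanager
        def with_savepoint(self):
            name = 'savepoint_%d' % self._savepoint
            self.run_query('SAVEPOINT '+name)
            self._savepoint += 1  # increment *before* running queries so they aren't autocommitted
            try: yield
            except:
                self._savepoint -= 1
                self.run_query('ROLLBACK TO SAVEPOINT '+name)
                raise
            else:
                self._savepoint -= 1
                self.run_query('RELEASE SAVEPOINT '+name)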
sql.py: empty_temp(): Empty temp tables even in debug_temp mode, so that it's visible which tables have been garbage collected and disk space leaks can be detected. This will not affect the external re-runnability of slow queries in debug_temp mode, as long as the user aborts the debug_temp import while the slow query is still running.
sql_gen.py: ColDict: Use OrderedDict so that order of keys in input dict (if ordered) will be preserved. This should ensure that temp table unique indexes have their columns in the same order as the output table, so that a merge join can be used.
util.py: dict_subset(): Use OrderedDict so that order of keys in input dict (if ordered) will be preserved
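A sketch of the OrderedDict version of dict_subset() (exact signature assumed); on Python 2.6, OrderedDict comes from the ordereddict backport added in the next entry:

    try: from collections import OrderedDict  # Python 2.7+
    except ImportError: from ordereddict import OrderedDict  # 2.6 backport

    def dict_subset(dict_, keys):
        '''Result keys keep the order of `keys`, so an ordered input dict's
        order is preserved when its keys are passed through'''
        return OrderedDict((k, dict_[k]) for k in keys if k in dict_)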
main Makefile: python-Darwin: Added pip installation instructions. python-Linux: Added ordereddict.
sql.py: DbConn.col_info(): cacheable param defaults to True now that callers explicitly turn off cacheable when needed
sql.py: add_index_col(): Explicitly set update()'s col_info caching depending on whether col_info will be changed later by add_not_null()
sql.py: mk_update(): Renamed cacheable param to cacheable_ so it wouldn't conflict with update()'s cacheable param
sql.py: mk_update(): Added cacheable param to set whether column structure information used to generate the query can be cached
sql.py: add_index_col(): Explicitly set col_info()'s caching depending on whether col_info will be changed later by add_not_null()
sql.py: DbConn.col_info(): Allow caller to specify whether query is cacheable
csv2db: Fixed bug where CREATE TABLE statement was cached, causing it not to be re-executed after a rollback due to a failed COPY FROM. Avoid re-creating the table after a failed COPY FROM, and instead just remove any existing rows.
sql.py: add_index(): Don't generate a unique name for the index because the database does that automatically
csv2db: Vacuum table instead of just reanalyzing it because for some reason reanalyzing it isn't enough to fix the cached row count (causing pgAdmin3 to report that the table needs to be vacuumed)
csv2db: Don't add indexes on the created table because they use up more disk space than the table itself and currently aren't used. (The import process adds indexes on each iteration's column subset instead.)
db_xml.py: partition_size: Turned partitioning back on (with a larger limit), since the largest datasources' temp tables are still too big
sql_io.py: put_table(): Fixed bug where if there were multiple unique constraints that were violated, only the distinct temp table for the last one would get garbage-collected
db_xml.py: partition_size: Set to sys.maxint to disable partitioning. The last bugfix, which avoided returning a large result set to the client which was never read, seems to have fixed the disk space leak, so it's worth reattempting a full simultaneous import.
db_xml.py: put_table(): Subsetting in_table: Truncate in_table when finished with it, to avoid temp table disk space leaks
sql.py: insert_select(): If caller is only interested in the rowcount (if returning == None), keep the NULL rows for each insert on the server using CREATE TABLE AS. (CREATE TABLE AS sets rowcount to # rows in query, so rowcount will still be set correctly.)
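In effect, when returning == None the generated query is wrapped so that its rows are materialized server-side rather than shipped to the client (wrapper and table name hypothetical):

    def rowcount_only(db, select_query):
        # the SELECT performs the actual inserts (via a per-row insert function)
        # and yields a NULL row per insert; CREATE TABLE AS keeps those rows on
        # the server, yet still sets rowcount to the # of rows in the query
        cur = db.run_query('CREATE TEMP TABLE unused AS\n'+select_query)
        return cur.rowcount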
top-level map: Added support for custom public schema, to be able to run imports and tests simultaneously (e.g. on a dev machine)
csv2db: Fixed bug where table needed to be a sql_gen.Table object with the proper schema, so that errors_table would be created in the correct schema. Removed no longer needed changing of the search_path.
sql.py: DbConn.with_savepoint(): Open a new transaction if one is not already open
sql.py: DbConn: _savepoint starts at 1 because the driver is not in autocommit mode, so a transaction is already open
csv2db: Create errors table first, so that imports can start using it right away
input.Makefile: Added import/steps.by_col.sql to generate a Redmine-formatted list of steps for column-based import
bin/map: Optimized default verbosities for each mode: automated tests should not be verbose, column-based import should show all queries to assist profiling, and row-based import should just show row progress
sql_io.py: put(): Run data import queries with log_level=3.5 so they don't clutter the output at the normal import verbosity of 3
db_xml.py: put_table(): Work around PostgreSQL's temp table disk space leak by reconnecting to the DB after every partition
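Schematically (put_table_(), reconnect(), and their signatures are assumptions, not the real API):

    def import_partitioned(db, total_rows, partition_size):
        start = 0
        while start < total_rows:
            # adjust the last partition's bounds to the actual row #s included
            limit = min(partition_size, total_rows - start)
            put_table_(db, limit=limit, start=start)  # import one partition
            start += limit
            # reopening the connection frees the temp-table disk space that
            # PostgreSQL leaks within a session
            db = reconnect(db)
        return db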
sql.py: mk_select(): Also support limit and start values of type long
sql_gen.py: suffixed_table(): Fixed bug where all table attrs, such as is_temp status, needed to be copied
sql.py: create_table(): Fixed bug where the query needed to be run in recover mode, in case the table exists but was created before the current connection, in which case the CREATE TABLE statement would not have been cached
sql.py: create_table(): Removed final newline after query because that's added by the logging mechanism
sql.py: Added reconnect()
sql.py: DbConn._reset(): Assert that _savepoint is 0 instead of setting it to 0
db_xml.py: put_table(): put_table_(): Removed no longer used limit, start params
db_xml.py: put_table(): Merged partitioning and subsetting into same section for simplicity, to avoid creating extra temp tables, and to later allow the connection to be closed and reopened between partitions. partition_size: Expressed value without exponent notation to ensure that it's an integer.
db_xml.py: put_table(): Partitioning in_table: Adjust bounds of last partition to actual row #s included
sql.py: DbConn: Added _ to reset() to indicate that it's a protected method and users should not call it directly
sql.py: DbConn.close(): Reset the connection completely using reset()
sql.py: DbConn: Added clear_cache() and reset() and use reset() in __init__()
bin/map: Use new DbConn.close()
sql.py: DbConn: Added close()
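Taken together, these entries suggest a DbConn lifecycle roughly like this (internals hypothetical):

    class DbConn:
        def clear_cache(self): self.query_results = {}  # cached query results

        def _reset(self):  # leading _: protected; users should not call it directly
            self.clear_cache()

        def close(self):
            self._reset()  # reset the connection completely
            self.db.close()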
db_xml.py: partition_size: Set to just more than the size of the largest data source that was successfully imported in simultaneous import
db_xml.py: put_table(): Partition in_table if larger than a threshold. The threshold is initially set to disable partitioning. Partitioning will hopefully eliminate the excessive disk usage for large input tables, which has caused the system to run out of disk space due to what may be a bug in PostgreSQL.
db_xml.py: put_table(): Set in_table's default srcs to in_table itself instead of sql_gen.src_self, so that any copies of in_table will inherit the same srcs instead of being treated as a top-level table. This ensures that the top-level table's errors table will always be used.
sql_io.py: cast(): Always convert exceptions to warnings if the input is a column or expression, even if there is no place to save the errors, so that invalid data does not need to be handled by the caller in a (much slower) extra exception-handling loop
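The warning conversion presumably hinges on a generated PL/pgSQL wrapper along these lines (function naming and exact SQL are assumptions, not the actual generated code):

    def mk_safe_cast(db, type_):
        '''Returns the name of a function that casts text to type_, turning
        invalid values into warnings + NULL instead of aborting the statement'''
        func = 'try_cast_to_'+type_  # hypothetical naming scheme
        db.run_query('''
    CREATE OR REPLACE FUNCTION %(f)s(value text) RETURNS %(t)s AS $$
    BEGIN
        RETURN value::%(t)s;
    EXCEPTION WHEN data_exception THEN
        RAISE WARNING 'invalid input value: %%', value; -- warn instead of aborting
        RETURN NULL;
    END
    $$ LANGUAGE plpgsql;
    ''' % dict(f=func, t=type_))
        return func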
sql_io.py: put_table(): MissingCastException: When casting, handle InvalidValueException by filtering out invalid values with invalid2null() in a loop
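The retry loop, roughly (run_query()'s recover param and esc_value() are assumed; InvalidValueException per the entries below):

    import sql

    def insert_with_invalid2null(db, insert_query, in_table, col):
        while True:
            try:
                db.run_query(insert_query, recover=True)
                break
            except sql.InvalidValueException as e:
                # NULL out the offending value and retry; this loops once per
                # distinct invalid value rather than once per row, so it's far
                # faster than a row-by-row exception-handling loop
                db.run_query('UPDATE '+in_table+' SET '+col+' = NULL WHERE '
                    +col+' = '+db.esc_value(e.value))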
sql_io.py: cast_temp_col(): Run sql.update() in recover mode in case expr produces errors. Don't cache sql.update() in case this function will be called again after error recovery.
sql.py: Generalized FunctionValueException to InvalidValueException so that it will match all invalid-value errors, not just those occurring in user-defined functions
sql_io.py: put_table(): Removed no longer used sql.FunctionValueException handling, because type casting functions now do their own invalid value handling
db_xml.py: put_table(): Subsetting in_table: Call put_table() recursively using put_table_() to ensure that limit and start are reset to their default values, in case the table gets partitioned (which needs up-to-date limit and start values)
sql_io.py: put_table(): mk_main_select(): Fixed bug where the table of each cond needed to be changed to insert_in_table because mk_main_select() uses the distinct table rather than the full input table
sql_gen.py: with_table(): Support columns that are wrapped in a FunctionCall object
sql_gen.py: index_cols: Store just the name of the index column, and add the table in index_col(), in case the table is ever copied and renamed
Moved error tracking from sql.py to sql_io.py
sql_io.py: put_table(): Use sql.distinct_table() to uniquify input table, instead of DISTINCT ON. This avoids letting PostgreSQL create a sort temp table to store the output of the DISTINCT ON, which is not automatically removed until the end of the connection, causing database bloat that can use up the available disk space.
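distinct_table() presumably reduces to something like this (helper signatures assumed; see the related entries below): copy the structure, add a unique index on the distinct_on cols, and run an ignore-mode insert, so no hidden sort temp table outlives the query:

    def distinct_table(db, table, distinct_on):
        new_table = table+'_distinct'  # a new table, not a rename (see entry below)
        copy_table_struct(db, table, new_table)
        add_index(db, new_table, distinct_on, unique=True)
        # ignore mode skips rows that would violate the unique index
        insert_select(db, new_table, None, 'SELECT * FROM '+table, ignore=True)
        return new_table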
sql_gen.py: suffixed_table(): Use concat()
sql_gen.py: with_default_table(): Remove no longer used overwrite param
sql.py: distinct_table(): Return a new table instead of renaming the input table, so that columns that use the input table will continue to work correctly
sql_gen.py: Moved NamedCol check from with_default_table() to with_table()
sql.py: distinct_table(): Fixed bug where empty distinct_on cols needed to create a table with one sample row, instead of returning the original table, because empty distinct_on cols indicate that the full set of distinct_on columns are all literal values, which should occur only once
sql.py: run_query(): DuplicateKeyException: Fixed bug where only constraint names matching a certain format were interpreted as DuplicateKeyExceptions. Support constraint names with the name and table separated by ".", not just "_".
sql.py: run_query(): Exception parsing: Match patterns only at the beginning of the exception message to avoid matching embedded messages in causes and literal values
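The parsing change, in effect (pattern abbreviated; the real code handles more error types):

    import re

    def parse_duplicate_key(msg):
        # re.match() anchors at the start of the message, so embedded messages
        # in causes and literal values can't produce a false match
        m = re.match(r'duplicate key value violates unique constraint "(.+?)"', msg)
        if m is None: return None
        # the constraint's name and table may be joined by "." as well as "_"
        return re.split(r'[._]', m.group(1), 1)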
sql.py: Added distinct_table()
sql_gen.py: Added with_table() and use it in with_default_table()
sql.py: mk_insert_select(): ignore mode: Support inserting all columns when cols == None
sql_gen.py: Col, Table: Support non-string names
sql_gen.py: row_count: Use new all_cols
sql_gen.py: Added all_cols
sql_gen.py: Use new as_Name() instead of db.esc_name()
sql_gen.py: Name: Truncate the input name
sql_gen.py: Added Name class and associated functions
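A sketch of the Name class (details assumed): PostgreSQL silently truncates identifiers to 63 bytes (NAMEDATALEN - 1), so Name truncates the input name up front:

    class Name:
        max_len = 63  # PostgreSQL's NAMEDATALEN - 1

        def __init__(self, name):
            self.name = name[:self.max_len]  # truncate the input name

        def to_str(self):
            # double embedded quotes and wrap in "", as db.esc_name() used to do
            return '"'+self.name.replace('"', '""')+'"'

    def as_Name(value):
        if isinstance(value, Name): return value
        return Name(value)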
sql.py: create_table(): Support creating temp tables. This fixes a bug in copy_table_struct() where the created table was not a temp table if the source table was. copy_table_struct(): Removed no longer needed versioning because that is now handled by create_table().
sql.py: Added copy_table_struct()
sql.py: Moved add_indexes() to Indexes subsection
sql.py: create_table(): Support LIKE table
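copy_table_struct() presumably reduces to a LIKE-based CREATE TABLE (exact options assumed):

    def copy_table_struct(db, src_table, new_table, temp=False):
        # LIKE copies the column definitions of the source table; create_table()
        # now handles the TEMP keyword and versioning itself
        db.run_query('CREATE '+('TEMP ' if temp else '')+'TABLE '+new_table
            +' (LIKE '+src_table+')')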
Moved Data cleanup from sql.py to sql_io.py
sql.py: Organized Database structure introspection and Structural changes functions into subsections
Moved Heuristic queries from sql.py to new sql_io.py
Added top-level analysis dir for range modeling
sql.py: run_query_into(): Documented why analyze() must be run manually on newly populated temp tables
sql.py: DbConn: Added autoanalyze mode. Added autoanalyze() which runs analyze() only if in autoanalyze mode. Use new autoanalyze() in functions that change a table's contents.
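Roughly (placement as module-level functions assumed):

    def analyze(db, table): db.run_query('ANALYZE '+table)  # refresh planner stats

    def autoanalyze(db, table):
        # analyze after each table change only when the connection opts in,
        # since an unconditional ANALYZE on every change would be too costly
        if db.autoanalyze: analyze(db, table)

    # called at the end of functions that change a table's contents, e.g.
    # insert_select(), update(), etc.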
sql.py: run_query_into(): analyze() the created table to ensure the query planner's initial stats are accurate
inputs/SpeciesLink/src: Added custom header that overwrites existing header so that column names will not be too long for the staging table
cat_csv: Support overwriting the existing header using a separate header file
schemas/vegbien.sql: Added location.location_coords index to speed up large imports by providing an index for merge joins
csv2db: Reanalyze table, so that query planner stats are up to date even though the table doesn't need to be vacuumed anymore
sql.py: Added analyze()
csv2db: Removed no longer needed table vacuum (cleanup_table() now avoids creating dead rows)
sql.py: cleanup_table(): Use update()'s new in_place mode to avoid needing to vacuum the table
sql.py: mk_update(): in_place: Support updating multiple columns at once
sql.py: update() calls: Use in_place where possible to avoid creating dead rows, which bloats table size
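in_place mode presumably rewrites the column with ALTER TABLE rather than UPDATE (exact SQL assumed): under MVCC, an UPDATE leaves a dead copy of every modified row behind, while ALTER TABLE ... TYPE ... USING rewrites the table with no dead rows:

    def update_in_place(db, table, changes):
        '''changes: list of (col, new_type, expr) tuples; multiple columns are
        updated at once by comma-separating the ALTER COLUMN clauses'''
        db.run_query('ALTER TABLE '+table+' '+', '.join(
            'ALTER COLUMN '+col+' TYPE '+type_+' USING '+expr
            for col, type_, expr in changes))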
sql.py: DbConn.col_info(): Support user-defined types
sql_gen.py: Added Nullif