DbConn: autocommit mode defaults to True so that all scripts get the benefit of automatic commits
input.Makefile: Staging tables: import/install-%: Include the table name in the log file name so that successive tables for the same datasource don't overwrite one another's log files
sql.py: DbConn: Don't always autocommit in debug_temp mode, because this could cause autocommit mode to be turned on when the user does not expect it
bin/map: connect_db(): Autocommit in commit mode to avoid the need for manual commits. This should also reduce the time that table locks are held, to avoid unnecessary contention when multiple processes are trying to insert into the same output table. (The program always uses nested transactions to support rollbacks, so there is no problem autocommitting whenever a top-level nested transaction or top-level query completes.)
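A minimal sketch of how that can work, assuming a savepoint-based wrapper (the class and method names below are invented, not the project's actual DbConn API): nesting is emulated with savepoints, and the commit fires only when the savepoint stack empties, i.e. when a top-level nested transaction or query completes:

```python
# Sketch only: savepoint-emulated nested transactions with commit-on-top-level.
# 'conn' is an open psycopg2 connection; all names below are illustrative.
class AutocommitConn:
    def __init__(self, conn):
        self.conn = conn
        self.depth = 0  # current savepoint nesting depth

    def begin_nested(self):
        self.depth += 1
        self.conn.cursor().execute('SAVEPOINT sp%d' % self.depth)

    def rollback_nested(self):  # rollbacks stay supported
        self.conn.cursor().execute('ROLLBACK TO SAVEPOINT sp%d' % self.depth)
        self.depth -= 1

    def end_nested(self):
        self.conn.cursor().execute('RELEASE SAVEPOINT sp%d' % self.depth)
        self.depth -= 1
        if self.depth == 0:
            self.conn.commit()  # top level finished: commit, releasing locks early
```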
sql_gen.py: Removed TempFunction because that functionality is now provided by DbConn.TempFunction()
sql.py: Use new DbConn.TempFunction()
sql.py: DbConn: Added TempFunction()
sql.py: Use new DbConn.debug_temp config option to control whether temporary objects should instead be permanent
sql.py: DbConn: Added config option debug_temp
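A speculative sketch of how debug_temp and TempFunction() might fit together (the real signatures may differ): with debug_temp on, the "temporary" function is created as a permanent object so it survives for inspection; otherwise it lives in the session-local pg_temp schema:

```python
class DbConn:  # reduced to the two members discussed above
    def __init__(self, debug_temp=False):
        self.debug_temp = debug_temp  # keep temp objects permanent for debugging

    def TempFunction(self, name):
        if self.debug_temp:
            return name  # permanent: outlives the session, inspectable afterwards
        return 'pg_temp.' + name  # dropped automatically at session end
```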
sql.py: function_exists(): Fixed bug where trigger functions needed to be excluded, since they cannot be called directly
sql.py: Added function_exists()
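One plausible way to implement the check (this exact query is an assumption, not necessarily the project's): trigger functions are identifiable by their return type, so they can be filtered out in pg_proc:

```python
def function_exists(cur, name):
    # exclude trigger functions: they return type 'trigger' and can only be
    # invoked by the trigger machinery, never called directly
    cur.execute("""
        SELECT EXISTS (
            SELECT 1
            FROM pg_proc p
            JOIN pg_type t ON t.oid = p.prorettype
            WHERE p.proname = %s AND t.typname <> 'trigger'
        )
    """, (name,))
    return cur.fetchone()[0]
```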
sql_gen.py: Made Function an alias of Table so that isinstance(..., Function) will always work correctly
sql_gen.py: Added as_Function()
sql.py: put_table(): Lock the output table in EXCLUSIVE mode before getting its pkey so that an ACCESS SHARE lock is not acquired before EXCLUSIVE (causing a lock upgrade and deadlock). This race condition may not have been previously noticeable because pkey() is cached, so calling it doesn't necessarily execute a query or acquire an ACCESS SHARE lock.
sql.py: put_table(): Document that it must be run at the beginning of a transaction
sql.py: put_table(), mk_select(): Switched back to having put_table() acquire the EXCLUSIVE locks, but right at the beginning of the transaction, in order to avoid lock upgrades which cause deadlocks
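The lock-ordering rule in miniature (table and column names are illustrative): take the strongest lock first, before any query that would grab ACCESS SHARE, so no session ever has to upgrade a lock:

```python
cur = conn.cursor()
# EXCLUSIVE first; a read-then-lock order would hold ACCESS SHARE while
# waiting on EXCLUSIVE, and two sessions doing that concurrently deadlock
cur.execute('LOCK TABLE locationevent IN EXCLUSIVE MODE')
cur.execute('SELECT locationevent_id FROM locationevent LIMIT 0')  # pkey probe
conn.commit()
```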
sql.py: with_autocommit(): Only allow turning autocommit on, because the opposite is not meaningful and may conflict with the session-global isolation level
sql.py: DbConn: Set the transaction isolation level to READ COMMITTED using set_isolation_level() so that the isolation level affects all transactions in the session, not just the current one
sql.py: DbConn: Always set the transaction isolation level to READ COMMITTED so that when a table is locked for update, its contents are frozen at that point rather than earlier. This ensures that no concurrent duplicate keys were inserted between the time the table was snapshotted (at the beginning of the transaction for SERIALIZABLE) and the time it was locked for update.
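set_isolation_level() is psycopg2's real connection-level API, so the session-wide setting presumably looks something like:

```python
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(dbname='vegbien')  # connection parameters assumed
# applies to every transaction this session runs, not just the current one
conn.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED)
```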
sql.py: put_table(): Removed locking output tables to prevent concurrent duplicate keys because that is now done automatically by mk_select()
sql.py: mk_select(): Filtering on no match: Lock the joined table in EXCLUSIVE mode to prevent concurrent duplicate keys when used with INSERT SELECT
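Schematically (table and column names are placeholders), the no-match filter plus the lock look like this; EXCLUSIVE mode still allows concurrent reads but blocks other writers, so no duplicate key can appear between the existence check and the insert:

```python
cur.execute('LOCK TABLE out_table IN EXCLUSIVE MODE')
cur.execute("""
    INSERT INTO out_table (name)
    SELECT in_table.name
    FROM in_table
    LEFT JOIN out_table USING (name)
    WHERE out_table.name IS NULL  -- filtering on no match
""")
```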
sql_gen.py: Added underlying_table() and use it in underlying_col()
main Makefile: schemas/rotate: Fixed bug where schemas/public/install, not the full schemas/install, needed to be run after renaming the public schema
sql.py: put_table(): Lock output tables to prevent concurrent duplicate keys
sql.py: Added lock_table()
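A guess at lock_table()'s shape (both the actual signature and the run_query() helper are assumptions):

```python
def lock_table(db, table, mode='EXCLUSIVE'):
    # table is assumed to be already quoted/escaped by the caller here
    db.run_query('LOCK TABLE ' + table + ' IN ' + mode + ' MODE')
```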
bin/map: connect_db(): Only use autocommit mode if verbosity > 3, to avoid accidentally activating it if you want debug output in normal import mode
bin/map: connect_db(): Only use autocommit mode if verbosity > 2, because it causes the intermediate tables to be created as permanent tables, which you don't want unless you're actually debugging (verbosity = 2 is normal for column-based import)
sql.py: put_table(): remove_all_rows(): Changed log message to "Ignoring all rows" because NULL is not necessarily the pkey value that will be returned for the rows
sql.py: put_table(): Don't add index on columns that will have values filtered out, because indexes have already been added on all columns in the iteration's input table by flatten()
sql.py: DbConn._db(): Setting serializable isolation level: Always set this (if self.serializable is set), even in autocommit mode, because autocommit mode is implemented by manual commits in the DbConn wrapper object rather than using the underlying connection's autocommit mode (which does not allow setting the isolation level)
sql.py: DbConn._db(): Setting search_path: Use `SET search_path` and `SHOW search_path` instead of combining the old and new search_paths in SQL itself using `SELECT set_config('search_path', ...)`
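Roughly (the schema name is a placeholder), the new approach reads the old path with SHOW, combines in Python, and applies the result with SET:

```python
cur.execute('SHOW search_path')
old_path = cur.fetchone()[0]  # e.g. '"$user", public'
cur.execute('SET search_path TO my_schema, ' + old_path)  # my_schema: placeholder
```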
csv2db: ProgressInputStream: Use default progress message 'Read %d line(s)' because there is not necessarily one CSV row per line, due to embedded newlines
input.Makefile: Staging tables: import/install-%: Only output to the log file if log option is non-empty (which it is by default)
csv2db: Support reinstalling just the errors table using new errors_table_only option
sql.py: Added drop_table()
schemas/vegbien.sql: method: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/vegbien.sql: specimenreplicate: Added indexes using COALESCE to match what sql_gen does
schemas/vegbien.sql: locationevent: Added indexes using COALESCE to match what sql_gen does
schemas/vegbien.ERD.mwb: Synced with schema
schemas/vegbien.sql: party: Changed indexes to use `COALESCE` to match what sql_gen now does
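For illustration, the COALESCE-style index expression these schema entries now match looks like the following (the column is plausible but unverified, and the sentinel is a placeholder; the real per-type sentinels live in sql_gen's null_sentinels):

```python
cur.execute("""
    CREATE INDEX party_organizationname
        ON party (COALESCE(organizationname, 'NULL_SENTINEL'))
""")
```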
Wrap sys.stderr.write() calls in strings.to_raw_str() to avoid UnicodeEncodeErrors when stderr is redirected to a file and the default encoding is ASCII
strings.py: Added to_raw_str()
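A plausible to_raw_str() under the Python 2 semantics this codebase appears to use (the error-handling policy is an assumption): encode unicode down to bytes so that writing to a non-tty stderr, whose default codec is ASCII, cannot raise:

```python
def to_raw_str(s, encoding='utf-8'):
    if isinstance(s, unicode):  # Python 2 text type
        return s.encode(encoding, 'backslashreplace')  # error policy assumed
    return s
```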
bin/map: When logging the row # being processed, add 1 because the row # is internally 0-based, but 1-based to the user
bin/map: Log the row # being processed with level=1.1 so that the user can see a status report if desired
exc.py: str_(): Fixed bug where UnicodeEncodeError would be raised when msg contains non-ASCII chars, by wrapping e.args[0] in strings.ustr()
exc.py: print_ex(): Wrap msg in strings.to_unicode() to try to avoid UnicodeEncodeError when msg contains non-ASCII chars
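strings.ustr() presumably performs the inverse coercion; a sketch under the same Python 2 assumption:

```python
def ustr(value, encoding='utf-8'):
    if isinstance(value, unicode):
        return value
    if isinstance(value, str):  # byte string: decode rather than up-convert
        return value.decode(encoding, 'replace')
    return unicode(value)
```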
sql.py: create_table(): Don't set pkey.nullable to False because the caller should make sure the pkey has the appropriate type
csv2db: Use sql_gen.TypedCol.nullable instead of manually adding 'NOT NULL' to the type. Ensure that pkeys are properly NOT NULL.
csv2db: Adding indexes: Create plain indexes using ensure_not_null=False because the indexes will primarily be used by the user to search for specific values, rather than by the mapping script, which uses the ensure_not_null wrapping
sql.py: DbConn.col_info(): Run query with log_level=4 because it gathers information about database structure, and should have the same log_level as other queries that do that
csv2db: Adding indexes: Fixed bug where col.to_Col() could not be used because sql.add_index() does not support name-only columns (plain strings are OK, though)
sql.py: create_table(): has_pkey: Use new TypedCol.constraints to store 'PRIMARY KEY'
sql_gen.py: TypedCol: Added constraints instance var
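A guessed rendering of TypedCol with the new instance var (attribute names may differ from the real class), plus the kind of column definition create_table() could emit from it:

```python
class TypedCol:
    def __init__(self, name, type_, nullable=True, constraints=None):
        self.name = name
        self.type = type_
        self.nullable = nullable
        self.constraints = constraints  # e.g. 'PRIMARY KEY'

    def to_str(self):
        parts = [self.name, self.type]
        if not self.nullable:
            parts.append('NOT NULL')
        if self.constraints:
            parts.append(self.constraints)
        return ' '.join(parts)

# TypedCol('row_num', 'serial', nullable=False, constraints='PRIMARY KEY')
#     .to_str() -> 'row_num serial NOT NULL PRIMARY KEY'
```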
sql_gen.py: EnsureNotNull: Made coalesce() all uppercase to match how pg_dump spells it
schemas/vegbien.sql: namedplace: Fixed bug where parent_id needed to be included in the UNIQUE CONSTRAINT (now a UNIQUE INDEX), since there can be more than one place of the same name (e.g. cities in different states)
schemas/vegbien.sql: plantname: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/py_functions.sql: _dateRangeStart, _dateRangeEnd: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/py_functions.sql: _namePart: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/functions.sql: _nullIf: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/functions.sql: _nullIf: Require a non-NULL null-equivalent value
schemas/functions.sql: _label: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/functions.sql: _label: Require a non-NULL label
sql_gen.py: null_sentinels: Removed types where a sentinel doesn't make sense (unknown types, boolean) because types with no sentinel are now handled gracefully by users of ensure_not_null()
sql.py: add_index(): ensure_not_null: Handle unknown types gracefully
sql_gen.py: MockDb: Added col_info()
sql_gen.py: Use as_*() functions where the auto-wrapping was previously done manually
sql_gen.py: CompareCond.to_str(): Use ensure_not_null()'s new type_ param to apply same function to both sides but not if the right side is already NOT NULL
sql_gen.py: null_sentinels: Added value for character varying type
sql_gen.py: ensure_not_null(): Support non-column inputs if type_ is set
sql_gen.py: null_sentinels: Added value for USER-DEFINED type
sql.py: mk_select(): Joins: Filtering on no match: Use '~=' sql_gen.CompareCond operator so that IS NULL is always used, regardless of the not-null column's nullability
sql_gen.py: null_sentinels: Added value for boolean type
sql_gen.py: ensure_not_null(): Added type_ param to override the underlying column's type
sql_gen.py: EnsureNotNull: Take a type param instead of a null param so that the EnsureNotNull object stores the underlying column's type
sql_gen.py: underlying_col(): Support non-Col inputs
sql_gen.py: EnsureNotNull: Removed default value for the null param to remind the user that the default value depends on the value's type and will not always be a string
sql.py: add_index(): Added ensure_not_null param to disable the ensure_not_null functionality to force a plain index
sql.py: flatten(): Add indexes on the created table so its columns can be used in an O(n) merge join
sql_gen.py: null_sentinels: Added value for integer type
sql_gen.py: CompareCond.to_str(): Always wrap the left-side column if it's nullable. Wrap the right-side value if the left side was wrapped, rather than if both the left and right side are nullable. This causes coalesce() indexes to be used to look up NULL values using the value NULL gets coalesced to, rather than doing a sequential scan.
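A toy rendition of that wrapping rule (the sentinel values are guesses; the real mapping is sql_gen's null_sentinels): wrap a nullable left side in COALESCE, and if the left was wrapped, mirror the same function onto the right so the COALESCE index, not a sequential scan, answers lookups for NULL:

```python
null_sentinels = {'integer': '-1', 'character varying': r'\N'}  # values guessed

def ensure_not_null(col_sql, type_):
    return "COALESCE(%s, '%s')" % (col_sql, null_sentinels[type_])

def compare(left_sql, right_sql, type_, left_nullable):
    if left_nullable:  # wrap the left, then apply the same wrapper on the right
        left_sql = ensure_not_null(left_sql, type_)
        right_sql = ensure_not_null(right_sql, type_)
    return '%s = %s' % (left_sql, right_sql)
```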
sql_gen.py: Run truncate() on all identifiers so that literal-string-based lookups for an identifier (such as in db.col_info()) don't use the untruncated value
sql_gen.py: Added truncate()
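A plausible truncate(), resting on a fact about the server rather than the project: PostgreSQL silently truncates identifiers to 63 bytes (NAMEDATALEN - 1), so truncating in Python keeps literal-string lookups consistent with what the server actually stored:

```python
def truncate(identifier, max_len=63):  # 63 = NAMEDATALEN - 1
    return identifier[:max_len]
```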
sql.py: put_table(): Resolving default value column: Fixed bug where default value of None was used as a key for mapping, even though this is an invalid Col name
sql_gen.py: ensure_not_null(): If the input column cannot be ensured to be NOT NULL, pass any raised exception through rather than suppressing it and leaving the column in a nullable state
schemas/functions.sql: _merge: Changed indexes to use `COALESCE` to match what sql_gen now does
schemas/functions.sql: _alt: Changed indexes to use `COALESCE` to match what sql_gen now does
sql_gen.py: CompareCond.to_str(): Handle nullable columns using ensure_not_null()
sql_gen.py: ensure_not_null(): Raise NoUnderlyingTableException if can't ensure not null for that reason
sql_gen.py: is_underlying_table(): Support non-Table inputs
sql_gen.py: NamedValues: Call set_cols_table() with the created table, not just the name, so that is_underlying_table() works properly
sql_gen.py: underlying_col(): If no underlying table, raise NoUnderlyingTableException
sql_gen.py: Added is_underlying_table()
sql_gen.py: ensure_not_null(): Call underlying_col() on the column to remove all renamings
sql_gen.py: Added underlying_col()
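A heavily simplified skeleton of these helpers, with the wrapper/marker attributes invented purely for illustration:

```python
class NoUnderlyingTableException(Exception):
    pass

def is_underlying_table(table):
    # tolerate non-Table inputs; only real tables (not e.g. NamedValues
    # aliases) count as underlying -- the marker attribute is invented
    return getattr(table, 'is_real_table', False)

def underlying_col(col):
    while getattr(col, 'wrapped', None) is not None:
        col = col.wrapped  # strip renamings/wrappers one layer at a time
    if not is_underlying_table(getattr(col, 'table', None)):
        raise NoUnderlyingTableException(col)
    return col
```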
sql_gen.py: Join.to_str(): join(): Removed no longer needed `*_table = *_table.to_Table()`
sql_gen.py: Col: Support Table objects that are not just names, by calling `.to_Table()` on the table before stringifying it
sql_gen.py: ensure_not_null(): Added ignore_unknown_type param
sql_gen.py: CompareCond.to_str(): Put handling nullable columns as a separate step so it can be expanded
csv2db: Errors table: Removed no longer needed sql_gen.EnsureNotNull() because this is now added automatically