sql.py: cast(): Support string column name inputs
sql.py: DbConn: Renamed col_default() to col_info(), which now returns a sql_gen.TypedCol object containing all of the column's type info (type, default, nullability), not just its default value
sql_gen.py: TypedCol: Added default and nullable params
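A minimal sketch of what the extended TypedCol might look like; only the `default` and `nullable` params come from the entries above, the rest is assumed:

```python
class TypedCol:
    '''Sketch only, not the actual sql_gen.TypedCol: a column name paired with
    its type, plus the new default and nullable params described above.'''
    def __init__(self, name, type_, default=None, nullable=True):
        self.name = name
        self.type = type_
        self.default = default    # default value (e.g. a sql_gen code object), or None
        self.nullable = nullable  # False would render as NOT NULL in DDL
```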
dicts.py: Import util after items that util depends on have been defined, to avoid unsatisfied circular dependency
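The deferred-import pattern referenced here, shown with placeholder modules (a.py stands in for dicts.py, b.py for util.py; the specific names involved are assumptions):

```python
# a.py -- plays the role of dicts.py
def helper():
    return 42

import b    # deferred: b uses a.helper() at import time, so it must already exist

# b.py -- plays the role of util.py
import a
HELPER_DEFAULT = a.helper()    # evaluated at import time; works because a.helper()
                               # was defined before a.py imported b
```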
sql.py: DbConn.col_default(): Pass the connection to sql_gen.as_Code() so it fixes the syntax on values returned by PostgreSQL
sql_gen.py: as_Code(): Added optional db param, which causes the function to run db.std_code() on the value to fix the syntax
sql.py: DbConn: Added std_code()
db_xml.py: Removed into_table_name() because this functionality is now handled by sql.into_table_name()
sql.py: into_table_name(): Also parse hierarchical tables (mappings with a rank column) using a special syntax
sql.py: put_table(): Fixed bug where distinct_on included columns that were not in the input table, and were thus incorrectly taken from the LEFT JOINed output table
sql.py: track_data_error(): Do nothing if cols are empty, because mk_track_data_error() requires at least one col. mk_track_data_error(): Assert that cols are not empty because VALUES clause requires at least one row.
bin/map: by_col: Pass an on_error callback that calls ex_tracker.track() to db_xml.put_table()
sql.py: put_table(): No handler for exception: Pass the exception to on_error() instead of raising a warning, so that the error message can be formatted
db_xml.py: put_table(): Pass on_error to sql.put_table()
db_xml.py: put_table(): Take on_error param like row-based put()
sql.py: put_table(): Take on_error param
sql.py: get_cur_query(): Removed no longer used input_params parameter
sql.py: Removed unused mogrify()
sql.py: DbConn.DbCursor.execute(): Removed no longer used params parameter
sql.py: with_autocommit(): Use isolation_level attr and set_isolation_level() method of connection instead of autocommit attr to support older versions of psycopg2
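A simplified sketch of the psycopg2 calls this relies on (the real with_autocommit() signature is not reproduced here; the isolation-level round trip is the point):

```python
import contextlib
import psycopg2.extensions

@contextlib.contextmanager
def with_autocommit(conn):
    '''Simplified sketch: temporarily switch a psycopg2 connection to
    autocommit using the isolation-level API, which exists in older psycopg2
    versions that predate the `autocommit` attribute.'''
    prev_level = conn.isolation_level
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    try:
        yield
    finally:
        conn.set_isolation_level(prev_level)
```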
sql.py: DbConn.DbCursor.execute(): Only fetch all rows for empty SELECT query, to support older versions of Python that would give a "no results to fetch" error for other types of queries
csv2db: When reraising exception, use `raise` instead of `raise e` to preserve whole stack trace
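The distinction, for reference (risky_operation() and log_error() are placeholders):

```python
try:
    risky_operation()
except Exception as e:
    log_error(e)
    raise    # re-raises with the original traceback intact
    # `raise e` would restart the traceback at this line, hiding where the
    # error actually occurred
```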
sql.py: Removed no longer used _query_lookup()
sql.py: DbConn: Cache queries without params, as params are no longer used
sql.py: DbConn.is_cached(): Removed no longer used params parameter
sql.py: Removed no longer used run_raw_query()
sql.py: run_query(): Call db.run_query() directly instead of via run_raw_query()
sql.py: DbConn.run_query(): Removed no longer used params parameter
sql.py: DbConn._db(): Setting search_path: Use esc_value() instead of params
sql.py: run_query(): Removed no longer used params parameter
sql.py: run_query_into(): Moved the main case (into != None) out of the if statement, because the special-case branch (into == None) contains a `return`
sql.py: run_query_into(): Removed no longer used params parameter
sql.py: mk_insert_select(): Removed no longer used params parameter
sql.py: mk_insert_select(): Return just the query instead of the query plus empty params
sql.py: mk_select(): Return just the query instead of the query plus empty params
sql.py: tables(): Use select() instead of a custom run_query() to avoid using params, which will be deprecated to make it easier to support old versions of Python
sql.py: DbConn.DbCursor.execute(): Require that params are empty, to ensure that code uses db.esc_value() instead. This keeps literal values in the same place as the rest of the query, so that they do not need to be maintained and passed around separately in a params list.
sql.py: constraint_cols(): Use db.esc_value() instead of params
sql.py: index_cols(): Use db.esc_value() instead of params
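A sketch of what esc_value() might do (assumed implementation; psycopg2 provides the quoting):

```python
from psycopg2.extensions import adapt

def esc_value(value):
    '''Assumed sketch: quote a Python value as a SQL literal so it can be
    embedded directly in the query string, replacing the old params lists.'''
    return adapt(value).getquoted()    # returns a str in Python 2

# e.g. constraint_cols()/index_cols() can then build their catalog queries as
# plain strings: "... WHERE conname = " + esc_value(constraint_name)
```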
sql.py: add_pkey(): Use simpler `ADD PRIMARY KEY` syntax to avoid having to create a name for the primary key
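A sketch of the simpler form; the parameter names here are assumptions:

```python
def add_pkey(db, table, col):
    '''Sketch: the unnamed form lets PostgreSQL generate the constraint name
    itself (<table>_pkey), instead of requiring
    ADD CONSTRAINT <name> PRIMARY KEY (...).'''
    db.run_query('ALTER TABLE '+table+' ADD PRIMARY KEY ('+col+')')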
db_xml.py: put_table(): Subsetting in_table: Add pkey to created temp table to facilitate joining it with intermediate tables
schemas/postgresql.nimoy.conf: shared_buffers: Fixed syntax error caused by using a decimal value, which this setting does not support
sql.py: truncate(): Re-added support for string tables using sql_gen.as_Table(). This fixes empty_db(), which relied on this functionality.
sql_gen.py: as_Table(): Added schema param to use as default schema
inputs/SALVIAS: Switched to using CSV exports of the DB, so that staging tables could be created for column-based import
sql.py: run_query_into(): Added add_indexes_ param which causes the function to add indexes on the created table
sql.py: create_table(): Use new add_indexes()
sql.py: Added add_indexes()
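A sketch of the assumed add_indexes() behavior (the real version presumably goes through sql_gen objects and an existing index helper):

```python
def add_indexes(db, table, cols):
    '''Assumed sketch: create a plain (non-unique) index on each given column,
    as used by create_table() and run_query_into(add_indexes_=True) above.'''
    for col in cols:
        db.run_query('CREATE INDEX "'+table+'_'+col+'" ON "'+table+'" ("'+col+'")')
```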
sql.py: get_cur_query(): Fixed bug where strings.ustr() needed to be used instead of str() when ensuring that get_cur_query() returns a string
sql.py: cast(): Removed conditional checks for save_errors, since it's now always true once the function has gotten past the `not save_errors` special case
sql.py: cast(): Only convert errors to warnings if errors will be saved in errors_table, so that import will always be aborted if user supplied invalid values in the mappings, even if these values are passed through a relational function
sql.py: put_table(): Support inserting tables with all default values, by providing the pkey's default value for all rows so that the SELECT query has at least one column
sql_gen.py: is_table_col(): Check that input is a Col object
sql.py: put_table(): Assert that mapping is non-empty
sql.py: mk_select(): Assert that fields list is non-empty
sql.py: DbConn.DbCursor.execute(): Set _is_insert only if the query starts with INSERT, so that function definitions containing INSERT are not themselves cached as INSERT statements (exceptions only)
sql.py: DbConn.DbCursor.execute(): Fixed bug where params == None apparently turned off the mogrifier completely, causing "%"s to be excessively escaped; fixed by setting params to None whenever it was [] or () and not using strings.esc_for_mogrify() at all
sql.py: DbConn.DbCursor.execute(): If not using params, escape the query using strings.esc_for_mogrify() in case any literals contained "%"s
strings.py: Added esc_for_mogrify()
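The assumed implementation is a one-liner: psycopg2 treats "%" as a placeholder marker whenever params are passed, so literal percent signs must be doubled.

```python
def esc_for_mogrify(query):
    '''Assumed sketch: double literal "%"s so psycopg2's parameter
    interpolation does not treat them as placeholder markers.'''
    return query.replace('%', '%%')
```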
sql.py: create_table(): Add indexes on all non-pkey columns, unless turned off or deferred using new param col_indexes
csv2db: Add column indexes on errors table. Use typed_cols and `.to_Col()` to iterate over columns to add indexes on, for the main and errors tables.
sql.py: Added track_data_error(). put_table(): ignore(): Take extra e param for the exception. Use track_data_error() to store the invalid value in the errors table.
sql_gen.py: Join.to_str(): Add newline before and after right table if it's been renamed (and therefore takes up multiple lines)
exc.py: ExceptionWithCause: Store the cause in an instance variable for later use
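A sketch of the change (the real ExceptionWithCause presumably does more, e.g. message handling):

```python
class ExceptionWithCause(Exception):
    '''Sketch: keep a reference to the underlying exception so later handlers
    (e.g. put_table()'s ignore()) can inspect the original error.'''
    def __init__(self, msg, cause=None):
        Exception.__init__(self, msg)
        self.cause = cause    # stored in an instance variable for later use
```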
sql.py: mk_track_data_error(): Rename the errors_table to make the generated SQL less verbose
sql.py: mk_insert_select(): Run sql_gen.remove_table_rename() on table to get just the actual name in the DB
sql_gen.py: Added remove_table_rename()
sql_gen.py: Col: Run `.to_Table()` on table to get just the reference to the table, not any SQL code that defines it
sql.py: Added mk_track_data_error() and use it in cast(). This also ensures that if only one source column's row in the CROSS JOIN violates a unique constraint, other source columns' rows are still inserted.
sql_gen.py: with_default_table(): Added overwrite param to overwrite the table (if it isn't a NamedCol)
sql_gen.py: Join.to_str(): join(): Get just the table name of left_table and right_table using `.to_Table()`. Moved the table order switching inside join(), because the order reversal only applies to an individual condition.
sql_gen.py: Renamed set_default_table() to with_default_table(), and copy the col before modifying it so that the input is not modified
sql_gen.py: Added set_default_table(). as_ValueCond(): Use set_default_table() instead of as_Col() so that any name-only column also gets its table set. Join.to_str(): Parse left side using set_default_table() instead of as_Col() so that any name-only column also gets its table set.
sql_gen.py: Join: mapping param defaults to {} for e.g. CROSS JOINs. to_str(): Omit join_cond if mapping is empty, rather than if join is a specific type.
sql_gen.py: NamedValues: Change cols to Col objects with the table set to `name`
sql_gen.py: Added set_cols_table()
sql.py: mk_insert_select(): returning: Use sql_gen.to_name_only_col()
sql_gen.py: NamedTable: cols: Use sql_gen.Col objects or name strings instead of pre-rendered SQL code
sql_gen.py: NamedTable: Wrap nested code in Expr if needed
sql_gen.py: Added NamedValues
sql_gen.py: Values: Support multiple rows
sql.py: insert(): Use new sql_gen.Values
sql_gen.py: Added Values and default
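A sketch of the assumed Values interface after the multi-row change (each value is assumed to be a sql_gen code object with a to_str(db) method):

```python
class Values:
    '''Assumed sketch: renders one or more rows as a SQL VALUES clause.
    A single row gives VALUES (a, b); a list of rows gives VALUES (a, b), (c, d).'''
    def __init__(self, values):
        if values and not isinstance(values[0], (list, tuple)):
            values = [values]    # normalize a single row to a list of rows
        self.rows = values

    def to_str(self, db):
        def row_str(row):
            return '('+', '.join(v.to_str(db) for v in row)+')'
        return 'VALUES '+', '.join(row_str(r) for r in self.rows)
```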
sql_gen.py: Join.to_str(): Don't add join condition for CROSS JOINs
sql.py: put_table(): Factored out errors_table name setting so it can be used by ignore()
bin/map: If doing full import, clear errors table
sql.py: truncate(): Support sql_gen.Table objects
sql.py: Moved truncate() to Database structure queries section
sql.py: tables(): Run query with log_level=4 because it's a low-level structure-determining query
sql.py: table_exists(): Use new tables() exact param so that LIKE special chars in table name are not interpreted specially
sql.py: tables(): Added exact param to check for exact matches only
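A simplified sketch of the exact-match behavior (PostgreSQL-only here; the real tables() builds its query via select(), per the entry above):

```python
def tables(db, schema='public', table_like='%', exact=False):
    '''Assumed sketch: with exact=True, compare with = instead of LIKE so that
    LIKE special chars (% and _) in the name are matched literally, which is
    what table_exists() needs.'''
    if exact: compare = '='
    else: compare = 'LIKE'
    return db.run_query('SELECT tablename FROM pg_tables WHERE schemaname = '
        +db.esc_value(schema)+' AND tablename '+compare+' '+db.esc_value(table_like))
```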
sql.py: put_table(): MissingCastException: Use new errors_table()
csv2db: Use new sql.errors_table()
sql.py: Added table_exists() and errors_table()
sql.py: DbConn.print_notices(): Fixed bug by making it do nothing for a MySQL connection, because MySQL doesn't store notices the way PostgreSQL does
sql.py: put_table(): MissingCastException: Debug message: Added Redmine formatting
schemas/functions.sql, vegbien.sql: Removed no longer needed cast functions, which are now created on the fly by column-based import
schemas/functions.sql: _nullIf(): Ignore uncastable value, because a value that's invalid for the given type is still well-defined as not matching the nullif() criterion
sql.py: put_table(): MissingCastException: Debug message: Removed "'s" so it wouldn't mess up syntax highlighting when pasting debug output into a SQL file