sql.py: select(): Pass log_level to run_query()
sql.py: DbConn.run_query(): Added log_level param and pass it to self.log_debug(). run_query(): Pass extra kw_args to DbConn.run_query() (via run_raw_query()) so that caller can specify log_level.
sql.py: run_query_into(): Fixed bug that caused the error "temporary tables cannot specify a schema name"
bin/map: Switched to verbosity-level-based system of logging. verbose is now an integer, and debug sets the minimum verbosity to 2.
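    For illustration, a minimal sketch of the verbosity scheme (the flag handling and helper below are hypothetical, not bin/map's actual code):

        # verbose is an integer count (e.g. one per -v flag); a debug flag
        # guarantees a minimum verbosity of 2.
        verbose = 0
        debug = True
        if debug: verbose = max(verbose, 2)

        def log(msg, level=1):
            '''Print msg only if its level is within the current verbosity.'''
            if level <= verbose: print(msg)

        log('normal status message', level=1)
        log('detailed debugging output', level=2)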
input.Makefile: Configuration: Removed debug var since it's not used in the Makefile
db_xml.py: put_table(): put_table_(): Fixed bug where row_ins_ct_ref needed to be passed recursively to put_table() as a keyword arg, because in_row_ct_ref is not passed recursively
db_xml.py: put_table(): _simplifyPath: Parse "next" XPath param to extract col name of next level's pkey
bin/map: by_col: xml_func.strip(): Don't remove _simplifyPath because it is now handled by db_xml.put_table()
db_xml.py: put_table(): Added basic special handling for structural XML functions, which for now just skips the function
xml_func.py: strip(): Added preserve param for XML functions not to remove
db_xml.py: put_table(): Handle forward pointers in translation-to-sql_gen step instead of in XML-tree-parsing step, so that special handling for structural XML functions can use the parsed tree before any sql.put_table() processing takes place
xml_dom.py: Added is_node()
sql.py: table_row_count(): Pass start=0 to mk_select() to avoid "SELECT statement missing a WHERE, LIMIT, or OFFSET clause" warnings
sql.py: put_table(): Handle unknown exceptions by returning NULL for all rows. Refactored the "Missing mapping for NOT NULL column" handling to use new helper function remove_all_rows().
sql.py: put_table(): Assert that insert_out_pkeys and insert_in_pkeys have same row count. Assert that pkeys and in_table have same row count.
db_xml.py: put_table(): Use new sql.table_row_count()
sql.py: Added table_row_count()
db_xml.py: put_table(): Use new sql_gen.row_count
sql_gen.py: Added row_count
db_xml.py: put_table(): Count # rows and update in_row_ct_ref once all columns have been processed. Don't pass in_row_ct_ref to recursive calls because it should only be increased once.
db_xml.py: put_table(): Added in_row_ct_ref param to store the # of input rows processed. Renamed row_ct_ref param to row_ins_ct_ref to distinguish it from new in_row_ct_ref param.
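    For context, the *_ct_ref params follow a pass-by-reference convention; a hypothetical sketch, assuming each ref is a one-element list whose first element is incremented (the real code may differ):

        def put_rows(rows, row_ins_ct_ref=None, in_row_ct_ref=None):
            # row_ins_ct_ref counts rows actually inserted;
            # in_row_ct_ref counts input rows processed.
            for row in rows:
                inserted = True  # placeholder for the real insert logic
                if inserted and row_ins_ct_ref is not None: row_ins_ct_ref[0] += 1
                if in_row_ct_ref is not None: in_row_ct_ref[0] += 1

        ins_ct, in_ct = [0], [0]
        put_rows(['r0', 'r1'], row_ins_ct_ref=ins_ct, in_row_ct_ref=in_ct)
        assert ins_ct[0] == 2 and in_ct[0] == 2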
sql_gen.py: MockDb.esc_name(): Don't use sql.esc_name_by_module() to avoid circular dependency on sql module
sql.py: put_table(): Factored the mk_select() calls inside calls to run_query_into_pkeys() into new helper function insert_into_pkeys()
sql.py: put_table(): run_query_into_pkeys() calls use order_by=None in their select statements because there is a pkey, so order (row #) does not matter
db_xml.py: put_table(): Subset in_table if limit != None or start != 0. start param defaults to 0 again to avoid subsetting the table when starting from row 0 (with no limit).
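    A sketch of the subsetting step described above (the generated SQL and temp-table name are illustrative only):

        def mk_subset_query(in_table, limit=None, start=0):
            # Only subset when a window was actually requested (limit given or
            # start != 0), mirroring the condition above.
            if limit is None and start == 0: return None
            query = 'SELECT * FROM '+in_table
            if limit is not None: query += ' LIMIT '+str(limit)
            if start != 0: query += ' OFFSET '+str(start)
            return 'CREATE TEMP TABLE in_table_subset AS '+query

        print(mk_subset_query('specimens', limit=100, start=50))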
db_xml.py: put_table(): Don't pass limit, start recursively, because the table subsetting will happen only once in the first invocation of the function. Moved limit, start params to end since they are not passed recursively. start param no longer defaults to 0 because this is not needed since sql.put_table() now sets start to 0 where needed.
sql.py: put_table(): Removed limit and start params because they were never fully implemented, and because it's simpler to just have the caller subset their input table
lists.py: Added uniqify()
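    A minimal order-preserving implementation of such a helper (the actual lists.uniqify() may differ, e.g. in how it handles unhashable items):

        def uniqify(items):
            '''Remove duplicates while preserving the original order.'''
            seen = set()
            unique = []
            for item in items:
                if item not in seen:
                    seen.add(item)
                    unique.append(item)
            return unique

        assert uniqify(['a', 'b', 'a', 'c', 'b']) == ['a', 'b', 'c']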
sql.py: Moved mk_flatten_mapping(), flatten() to Basic queries section since they don't involve database structure info
sql.py: put_table(): Use single quotes rather than double quotes around strings where possible
schemas/functions.sql, vegbien.sql: Changed CAST-related relational functions to return NULL on data exceptions and convert the exceptions to warnings. This helps column-based import by mapping invalid values to NULL instead of aborting the whole query on the first invalid value.
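    For illustration, a PostgreSQL function following this pattern, written here as a SQL string in Python (the function name and signature are made up; the real functions are in schemas/functions.sql):

        # Catch data exceptions in a cast, downgrade them to warnings, and
        # return NULL instead of aborting the whole query.
        create_try_cast = '''
        CREATE OR REPLACE FUNCTION try_cast_to_double(value text)
        RETURNS double precision AS $$
        BEGIN
            RETURN value::double precision;
        EXCEPTION
            WHEN data_exception THEN
                RAISE WARNING 'invalid value: %', value;
                RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;
        '''
        # db.run_query(create_try_cast)  # run via the project's DbConn or psycopg2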
sql.py: index_col(): Cache the query so it doesn't try to add an index on the same column multiple times
sql.py: mk_select(), sql_gen.py: Join.to_str(): Fixed bug where conditions needed to be wrapped in () before being AND-ed together to ensure the proper operator precedence
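    The precedence issue in miniature (mk_select()'s real logic is more involved):

        def and_conds(conds):
            # Wrap each condition in () so embedded ORs keep their intended
            # precedence when the conditions are AND-ed together.
            return ' AND '.join('('+cond+')' for cond in conds)

        conds = ['a = 1 OR a = 2', 'b = 3']
        # Without the parentheses this would parse as: a = 1 OR (a = 2 AND b = 3)
        assert and_conds(conds) == '(a = 1 OR a = 2) AND (b = 3)'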
sql.py: put_table(): Add index on columns with invalid values to enable fast filtering
sql.py: Added index_col()
sql.py: put_table(): Add pkey on returned pkeys table to enable fast joins
sql.py: Added index_pkey()
sql.py: mk_update(): When running sql_gen.to_name_only_col(), check that the col's table is the table being updated
sql.py: put_table(): Renamed pkeys to insert_pkeys to distinguish them from the full set of pkeys on the input table
sql.py: put_table(): FunctionValueException: Change invalid values to NULL using UPDATE instead of filtering them out using WHERE, to avoid adding lots of conditions to the SELECT statement
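    A sketch of the UPDATE-based approach (table and column names are placeholders; the real code builds the statement via mk_update() and sql_gen):

        def mk_null_out_invalid(table, col):
            # Set the invalid value to NULL in place, instead of adding a WHERE
            # condition to every subsequent SELECT to filter those rows out.
            return 'UPDATE '+table+' SET '+col+' = NULL WHERE '+col+' = %s'

        query = mk_null_out_invalid('in_table', 'datecollected')
        # db.run_query(query, ['not-a-date'])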
sql.py: Added mk_update() and update()
sql_gen.py: Added to_name_only_col()
sql_gen.py: Added as_Value()
sql.py: mk_select(): conds: Use new sql_gen.ColValueCond instead of sql_gen.as_ValueCond(). Documented that Code and ValueCond are sql_gen objects.
sql_gen.py: Added ColValueCond
sql.py: mk_flatten_mapping(): Filter str(col) through clean_name() to remove quotes, etc.
sql.py: Added clean_name()
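    A plausible implementation, assuming clean_name() just strips identifier quoting characters (the real helper may handle more cases):

        def clean_name(name):
            '''Strip quoting characters so the string can be embedded in a new,
            unquoted identifier.'''
            for quote in '"`':
                name = name.replace(quote, '')
            return name

        assert clean_name('"specimens"."collectionCode"') == 'specimens.collectionCode'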
sql.py: put_table(): Join the input tables together into a new table, both for speed and so that the input is not modified if values are edited
sql.py: mk_flatten_mapping(): Take as_items param to return a list of dict items instead of a dict. Sort preserve cols before other cols. flatten(): Turn on as_items so that cols list is sorted in input order, with preserve cols first. This ensures that if a pkey is provided in preserve, it will be the first col in the generated table.
sql.py: mk_flatten_mapping(), flatten(): Take list of cols to select instead of using all cols in all tables to join
sql.py: mk_flatten_mapping(), flatten(): Renamed flat_table param to into to be consistent with run_query_into() and put it first because it is the output param
sql.py: Added flatten()
sql.py: mk_flatten_mapping(): Col objects in preserve have their tables changed to flat_table so that they work with the flattened table
sql.py: mk_flatten_mapping(): Added preserve param for list of columns not to rename
sql.py: esc_name_by_module(): Support module value None, and use default module psycopg2 for it
sql.py: put_table(): Renamed *pkeys_ref to *pkeys to reflect that they are now objects rather than array-based references
sql.py: run_query_into(): Renamed into_ref param to into to reflect that it's now an object rather than an array-based reference
sql.py: run_query_into(): Made into_ref a sql_gen.Table instead of an array containing a table name to improve flexibility and clarity
dicts.py: Added join()
sql.py: Added mk_flatten_mapping()
sql.py: put_table(): Renamed the copy of in_tables that gets modified to in_tables_, so that the original list can eventually be reused in joining together the input tables into a temp table
sql.py: run_query(): FunctionValueException: Also match "date/time field value out of range" errors
sql.py: put_table(): conds: Use a set instead of a list for faster checking of the "cond not in conds" assertion
sql.py: mk_select(): conds: Support containers of any iterable type
sql.py: put_table(): Made conds a list so that there can be multiple conditions on the same column
sql.py: mk_select(): conds is list of (key, value) tuples instead of dict (dict still supported for compatibility), so that there can be multiple conditions on the same column
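    A sketch of how the dict and item-list forms of conds can coexist (the condition values here are simple stand-ins for the sql_gen condition objects):

        def norm_conds(conds):
            # Accept either a dict or an iterable of (col, cond) pairs and return
            # a list of pairs, so a column can appear in multiple conditions.
            if isinstance(conds, dict): conds = conds.items()
            return list(conds)

        conds = [('year', '>= 1990'), ('year', '< 2000')]  # same col twice
        assert norm_conds(conds) == conds
        assert norm_conds({'year': '= 1990'}) == [('year', '= 1990')]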
util.py: NamedTuple inherits from objects.BasicObject so that it's comparable and hashable. This fixes a bug in dicts.make_hashable() where the NamedTuple created for a dict would appear to be hashable but would always compare as unequal.
sql.py: DbConn.esc_value(): Run strings.to_unicode() on the generated string so that if it contains unescaped non-ASCII characters, these will not cause problems when concatenated with plain strings
sql.py: run_query(): FunctionValueException: Unpack match.groups() into vars to make code clearer
exc.py: str_(): Avoid traceback exception-formatting functions when possible because they escape non-ASCII characters
sql.py: get_cur_query(): If no raw query: Use strings.ustr() instead of repr() to ensure that if the exception is parsed, embedded quotes will not be double-escaped. Prefix the query with [input] to show that it's not the raw query.
sql_gen.py: Non-Code objects: str() passes informative placeholder string to self.to_str() instead of empty string
sql.py: ExceptionWithNameValue: Use repr() instead of strings.ustr() on the value
sql.py: run_query(): Exception parsing: Use non-greedy qualifier "?" in regexps wherever possible to avoid matching closing quotes later in the error message
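    The greedy vs. non-greedy difference when parsing an error message (the message text is made up):

        import re

        msg = 'invalid value "abc" in query "SELECT * FROM t WHERE x = 1"'
        # Greedy: the match runs through the last double quote in the message
        greedy = re.search(r'value "(.*)"', msg).group(1)
        # Non-greedy: stops at the first closing quote, which is what we want
        lazy = re.search(r'value "(.*?)"', msg).group(1)
        assert lazy == 'abc'
        assert greedy == 'abc" in query "SELECT * FROM t WHERE x = 1'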
sql_gen.py: MockDb.esc_value(): Use repr() instead of strings.ustr() so the quotes around the value are included
sql_gen.py: ValueCond and Join class hierarchies inherit from objects.BasicObject like Code does
sql.py: put_table(): ignore(): Fixed bug where value needed to be filtered through repr(). NullValueException: Fixed bug where value passed to ignore() was the string 'NULL' instead of the value None.
mappings/DwC2-VegBIEN.specimens.csv: plantname.rank: Filter through _toTaxonrank
sql.py: put_table(): ignore(): Avoid infinite loops by asserting that in_col is not in conds
objects.py: BasicObject: Fixed bug where util needed to be imported. Added __eq__() and __hash__().
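    A sketch of value-style equality and hashing for BasicObject (the actual implementation in objects.py may differ):

        class BasicObject(object):
            '''Objects with value semantics: equality and hashing are based on
            the instance's attributes rather than its identity.'''
            def __eq__(self, other):
                return type(self) == type(other) and self.__dict__ == other.__dict__
            def __ne__(self, other): return not self == other
            def __hash__(self):
                # assumes attribute values have stable repr()s
                return hash(repr(sorted(self.__dict__.items())))

        class Point(BasicObject):
            def __init__(self, x, y): self.x, self.y = x, y

        assert Point(1, 2) == Point(1, 2)
        assert hash(Point(1, 2)) == hash(Point(1, 2))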
strings.py: Removed no longer used DebugPrintable (that functionality is now in objects.BasicObject)
sql_gen.py: Code: Inherit from new objects.BasicObject
Added objects.py
sql.py: put_table(): Renamed log_ignore() to ignore() and factored common conds-modifying code into it
sql.py: put_table(): Moved post-insert code outside while loop because it will now always be run (there are no longer special cases where the postprocessing doesn't happen)
sql.py: put_table(): Missing mapping for NOT NULL column: Just create an empty pkeys table, since the missing rows' pkeys will be set to NULL later
sql.py: put_table(): Joining together output and input pkeys: Use new sql_gen.join_same_not_null
sql.py: put_table(): Setting missing rows' pkeys to NULL: Use new sql_gen.join_same_not_null
sql_gen.py: Join: Added join_same_not_null. to_str(): Refactored to switch order of left and right tables and cols because left_table is on the right in the comparison, and using the sides of the comparison instead of the sides of the join makes the code clearer.
sql_gen.py: Renamed join_using to join_same to reflect that it can also be used without USING
sql.py: put_table(): Set missing rows' pkeys to NULL
sql.py: put_table(): NullValueException: no mapping for missing col: Fixed bug where run_query_into_pkeys() was still using insert_joins instead of input_joins
sql_gen.py: Added MockDb. All str() methods: Use self.to_str() with mockDb.
sql_gen.py: Use db.esc_name() instead of sql.esc_name(db, ...) so passed-in db can be a mock object
sql.py: DbConn: Added esc_name()
db_xml.py: put_table(): Debug-print which columns are being put
sql.py: ConstraintException, NullValueException: Improved error messages
sql.py: put_table(): FunctionValueException: Fixed bug where out_table was still assumed to be an escaped string, but is now a Table object
sql.py: mk_select(): joins: Use new table_not_null_col() instead of pkey() to get a non-NULL column to filter out on