main Makefile: VegBIEN DB: Added functions schema targets
Makefile: $(confirm): Support an additional message line displayed outside of the highlighted line. Include the "Continue?" prompt in the macro since all prompts include it.
Makefile: VegBIEN DB: Display different warning message depending on whether entire DB or just current public schema is being deleted
db_xml.py: put_table(): Recurse into forward pointers
sql.py: put_table(): Take multiple in_tables. Initial implementation just used the first in_table.
sql.py: Added add_row_num(). put_table(): Add row_num to pkeys_table, so it can be joined with in_table's pkeys.
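    A minimal sketch of what add_row_num() might do, assuming a psycopg2-style connection; the serial-column approach and the exact SQL are assumptions, not necessarily the project's implementation:
        def add_row_num(db, table):
            # A serial column assigns sequential numbers in insertion order,
            # which lets pkeys_table be joined row-for-row with in_table's pkeys.
            db.cursor().execute('ALTER TABLE '+table
                +' ADD COLUMN row_num serial NOT NULL PRIMARY KEY')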
sql.py: Added run_query_into() and use it in insert_select()
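    Presumably run_query_into() runs a query and optionally materializes its results into a table; a sketch under that assumption (the signature and the CREATE TEMP TABLE wrapping are guesses):
        def run_query_into(db, query, params, into=None):
            if into is not None:
                # materialize the results so later queries can join against them
                query = 'CREATE TEMP TABLE '+into+' AS '+query
            cur = db.cursor()
            cur.execute(query, params)
            return cur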
sql.py: pkey(): Support escaped table names
sql.py: mk_insert_select(): embeddable: Name the function alias "f" since it will just be wrapped in a nested SELECT, so the exact name doesn't matter (and won't be visible outside the nested SELECT anyway)
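    One way the "embeddable" wrapping could work, since PostgreSQL does not accept INSERT ... RETURNING directly as a subquery, is to wrap the INSERT in a session-local SQL function and SELECT from it; the pg_temp trick below is an assumption about the approach, not the exact query the code produces:
        def mk_embeddable(insert_returning_query, pkey_type):
            setup = ('CREATE FUNCTION pg_temp.put() RETURNS SETOF '+pkey_type
                +' AS $$ '+insert_returning_query+' $$ LANGUAGE sql')
            # the alias "f" is arbitrary: it is only visible inside the nested
            # SELECT that wraps this call
            select = 'SELECT * FROM pg_temp.put() AS f'
            return setup, select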
db_xml.py: put_table(): Return the (table, col) where the pkeys are made available, now that this information is available from sql.put_table()
sql.py: put_table(): Return just the name of the table where the pkeys are made available, since the column name in that table now equals the pkey name
sql.py: mk_insert_select(): embeddable: Make the column returned by the function have the same name as the returning column
db_xml.py: put_table(): Use new sql.put_table()
sql.py: Added put_table()
sql.py: Added clean_name(). Use it where needed to make an escaped name appendable as a string.
sql.py: Added with_parsed_errors() and use it in try_insert()
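    A hedged sketch of the with_parsed_errors() pattern (the exception class and the matched message are placeholders; the real parsing is surely more thorough):
        class DuplicateKeyException(Exception): pass  # placeholder for the real class

        def with_parsed_errors(db, func):
            '''Runs a DB operation and translates low-level errors into more
            specific exceptions that callers such as try_insert() can catch.'''
            try:
                return func()
            except Exception as e:
                if 'duplicate key value violates unique constraint' in str(e):
                    raise DuplicateKeyException(str(e))
                raise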
sql.py: insert_select(): into != None: Fixed bug where cacheable was not passed through to DROP TABLE's run_query(), even though it was passed through to CREATE TABLE AS's run_query()
db_xml.py: put_table(): Place pkeys in temp table
sql.py: mk_insert_select(): Document that embeddable will cause the query to be fully cached, not just if it raises an exception. insert_select(): into != None: Pass recover and cacheable through to each run_query()
sql.py: insert_select(): Support placing RETURNING values in temp table
db_xml.py: put_table(): Support returning pkey from INSERT SELECT
sql.py: mk_insert_select(): Support using an INSERT RETURNING statement as a nested SELECT
sql.py: mk_insert_select(): Removed unused params recover and cacheable
sql.py: Added mogrify()
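    mogrify() most likely delegates to psycopg2's cursor.mogrify(), which binds params into the query text without executing it (useful for logging and for embedding one query inside another); a minimal sketch, assuming a psycopg2 connection:
        def mogrify(db, query, params):
            return db.cursor().mogrify(query, params)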
db_xml.py: put_table(): Corrected @return doc
sql.py: Added mk_insert_select() and use it in insert_select()
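    The basic query shape mk_insert_select() builds is presumably INSERT INTO ... SELECT ...; a simplified sketch (name escaping, RETURNING, and the embeddable mode are omitted):
        def mk_insert_select(table, cols, select_query):
            return 'INSERT INTO '+table+' ('+', '.join(cols)+') '+select_query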
db_xml.py: put_table(): Use new insert_select()
sql.py: insert_select(): Changed order of cols and params arguments so select_query and params would be together
sql.py: Added insert_select() and use it in insert()
Calls to sql.esc_name*(): Removed preserve_case=True because it is now the default
sql.py: esc_name_by_module(): Changed preserve_case to ignore_case, which defaults to False
sql.py: esc_name_by_module(): preserve_case defaults to True
sql.py: mk_select(): Escape all names used (table, column, cond, etc.)
sql.py: esc_name_by_module(): If not enclosing name in quotes, call check_name() on it
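    A sketch of the escaping behavior described in these entries (the module detection and check_name() below are simplified stand-ins for the real code):
        import re

        def check_name(name):
            # stand-in: require a plain identifier when the name is left unquoted
            if not re.match(r'^\w+$', name): raise Exception('unsafe name: '+name)

        def esc_name_by_module(module, name, ignore_case=False):
            if ignore_case:  # leave the name unquoted so the DB case-folds it
                check_name(name)
                return name
            quote = '"' if module == 'psycopg2' else '`'  # PostgreSQL vs. MySQLdb
            return quote+name.replace(quote, quote*2)+quote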
sql.py: mk_select(): Support literal values in the list of cols to select
sql.py: mk_select(): Don't escape the table name, because it will either be check_name()d or it's already been escaped
sql.py: Added mk_select(), and use it in select()
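    A minimal sketch of mk_select()'s shape (the real function also escapes names, supports literal values in cols, and binds params for the conditions):
        def mk_select(table, cols, conds=(), limit=None):
            query = 'SELECT '+', '.join(cols)+' FROM '+table
            if conds: query += ' WHERE '+' AND '.join(conds)
            if limit is not None: query += ' LIMIT '+str(limit)
            return query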
bin/map: Always pass qual_name(table) to sql.select(). This is possible now that qual_name() can handle None schemas.
db_xml.py: put_table(): Take separate in_table and in_schema names, instead of in_table and table_is_esc, because the in_schema is needed to scope the temp tables appropriately
sql.py: qual_name(): If schema is None, don't prepend schema
bin/map, sql.py: Turned SQL query caching back on because benchmarks comparing just caching on vs. off show that it does reduce processing time significantly. However, a separate slowdown was introduced between when caching was added and when the same XML tree started being reused for each node, which falsely made it appear that the caching was responsible.
bin/map: Turn SQL query caching off by default
bin/map: Added cache_sql env var to enable SQL query caching
sql.py: Made DbConn's query caching switchable on or off. Turn caching off by default because recent benchmarks (n=1000) showed that it slows things down.
bin/map: Added new verbose_errors mode, enabled in test mode and off otherwise, which controls whether the output row and tracebacks are included in error messages. Having this off in import mode will reduce the size of error logs so they don't fill up the vegbiendev hard disk as quickly.
exc.py: print_ex(): Added detail option to turn off traceback
bin/map: Turn parallel processing off by default. This should fix "Cannot allocate memory" errors in large imports.
bin/map: in_is_db: Don't cache the main SELECT query
bin/map: by_col: Use the created template, which already has the column names in it, instead of mapping a sample row
bin/map: Fixed bug where db_xml could not be imported twice, or it was treated as an undefined variable for some reason
bin/map: map_table(): Make each column a db_xml.ColRef instead of a bare index, so that it will appear as the column name when converted to a string. This will provide better debugging info in the template tree and also avoid needing to create a separate sample row in by_col.
db_xml.py: Added ColRef
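    ColRef presumably pairs a column index with its name so that str() produces the name; a guess at its shape:
        class ColRef:
            '''Wraps a column index so debugging output shows the column name.'''
            def __init__(self, name, idx):
                self.name = name
                self.idx = idx
            def __str__(self): return self.name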
bin/map: Fixed bug where the row count was off by one if all rows in the input were exhausted, because the row that raises StopIteration was being counted as a row
main Makefile: VegBIEN DB: mk_db: Use template1 because it has PROCEDURAL LANGUAGE plpgsql already installed and we aren't using an encoding other than UTF8
Moved "CREATE PROCEDURAL LANGUAGE plpgsql" to main Makefile so that it would only run when the DB is created, not when the public schema is reinstalled. This is only relevant on PostgreSQL < 9.x, where the plpgsql language is not part of template0.
Renamed parallel.py to parallelproc.py to avoid conflict with new system parallel module on vegbiendev
Makefile: VegBIEN DB: public schema: Added schemas/rotate
bin/map: Fixed bug in input rows processed count where the count would be off by 1, because the for loop would leave i at the index of the last row instead of one-past-the-last
bin/map: Use the same XML tree for each row in DB outputs, to eliminate time spent creating the tree from the XPaths for each row
bin/map: map_table(): Resolve each prefix into a separate mapping, which is collision-eliminated, instead of resolving values from multiple prefixes when each individual row is mapped
bin/map: Moved collision-prevention code to map_rows() so it would only run if there were mappings, and so that it would run after any mappings preprocessing by map_table() that creates more collisions
bin/map: Prevent collisions when multiple inputs map to the same output
mappings/DwC1-DwC2.specimens.csv: Mapped collectorNumber and recordNumber to recordNumber with _alt so they wouldn't collide now that every input column, even empty ones, is created in the XML tree
bin/map: If out_is_db, in debug mode, print each row's XML tree and each value that it's putting
bin/map: If out_is_db, in debug mode, print the template XML tree used to insert a sample row into the DB
bin/map: map_table(): When translating mappings to column indexes, use appends to a new list instead of deletions from an existing list to simplify the algorithm
union: Omit mappings that are mapped to in the input map, in addition to mappings that were overridden. This prevents outputs from being created for both the renamed and original mappings, which would cause duplicate output nodes when one XML tree is used for all rows.
input.Makefile: Maps building: Via maps cleanup: subtract: Include comment column so commented mappings are never removed
subtract: Support "ragged rows" that have fewer columns than the specified column numbers
util.py: list_subset(): Added default param to specify the value to use for invalid indexes (if any)
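    A sketch of list_subset() with the new default param (the signature is assumed), which is what lets subtract tolerate ragged rows:
        def list_subset(list_, indexes, default=None):
            subset = []
            for i in indexes:
                try: subset.append(list_[i])
                except IndexError: subset.append(default)  # invalid index -> default
            return subset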
mappings/VegX-VegBIEN.stems.csv: Mappings with multiple inputs for the same output: Use _alt, etc. to map the multiple inputs to different places in the XML tree, so that when using a pregenerated tree, the empty leaves for each input will not collide with each other
mappings/VegX-VegBIEN.stems.csv: Changed XPath references (using "$") to XML function references using _ref where needed to make them work even on a pre-made XML tree used by all rows
xml_func.py: Added _ref to retrieve a value from another XML node
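    Roughly how an xml_func like _ref could work, assuming (per the next entry) that each function receives its child (name, value) items plus the func node itself; the item name "path" and the lookup helpers' signatures are assumptions about the project's xpath/xml_dom APIs:
        import xml_dom, xpath  # the project's own helper modules

        def _ref(items, node):
            params = dict(items)
            # look up the referenced node in the same document and return its value
            target = xpath.get_1(node.ownerDocument, params['path'])
            return xml_dom.value(target)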
xml_func.py: Made all functions take a 2nd node param, which contains the func node itself
bin/map: If outputting to a DB, also create output XML elements for NULL input values. This will help with the transition to using the same XML tree for all rows.
xml_func.py: _label: return None on empty input
mappings/VegX-VegBIEN.stems.csv: Added _collapse around subtrees that need to be removed if they are created around a NULL value
xml_func.py: Added _collapse to collapse a subtree if the "value" element in it is NULL
schemas/vegbien.sql: definedvalue: Made definedvalue nullable so that each row of a datasource can have a uniform structure in VegBIEN, and to support reusing the same XML DOM tree for each row
xpath.py: Added is_xpath()
xml_dom.py: set_value(): If value is None and node is Element, remove value node entirely instead of setting node's value to None
xml_dom.py: Added value_node(). Use new value_node() in value() and set_value(). set_value(): If the node already has a value node, reuse it instead of appending a new value node.
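    A guess at value_node()'s behavior on a minidom-style node: return the child text node that holds the element's value, if there is one:
        def value_node(node):
            for child in node.childNodes:
                if child.nodeType == child.TEXT_NODE: return child
            return None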
xpath.py: put_obj(): Return the id_attr_node using get_1() because it should only be one node
xml_func.py: _simplifyPath: Also treat the elem as empty if the required node exists but is empty
db_xml.py: put_table(): Added part of put() code that should be common to both functions
xpath.py: put_obj(): Return a tuple of the inserted node and the id attr node
xpath.py: set_id(): When creating the id_path, use obj() (which deepcopy()s the entire path) because it prevents pointers w/o targets
xpath.py: set_id(): When creating the id_path, deepcopy() the id_elem because its keys will change in the main copy
xpath.py: set_id(): Return the path to the ID attr, which can be used to change the ID
xpath.py: put_obj(): Return the inserted node so it can be used to change the inserted value
main Makefile: Maps validation: Fixed bug where there would be infinite recursion, by placing the Maps validation section before the Subdir forwarding section (it's unknown why this ordering is necessary)
db_xml.py: put_table(): Added commit param to specify whether to commit after each query
bin/map: in_is_db: by_col: Use new put_table() (defined but not implemented yet)
db_xml.py: Added put_table() (without implementation)
xml_func.py: strip(): Remove _ignore XML funcs completely instead of replacing them with their values
bin/map: in_is_db: by_col: Prefix each input column name by "$"
bin/map: in_is_db: by_col: Strip off XML functions
xml_func.py: Added strip(). pop_value(): Support custom name of value param.