sql.py: esc_name_by_module(): preserve_case defaults to True
sql.py: mk_select(): Escape all names used (table, column, cond, etc.)
sql.py: esc_name_by_module(): If not enclosing name in quotes, call check_name() on it
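A minimal sketch of the quote-vs-check behavior described above. The names `esc_name` and `check_name` come from the log entries, but the signatures and the identifier whitelist are assumptions, not the project's actual code:

```python
import re

def check_name(name):
    # Hypothetical whitelist: an unquoted name may only contain
    # safe identifier characters
    if not re.match(r'^[A-Za-z_][A-Za-z0-9_]*$', name):
        raise ValueError('unsafe name: ' + repr(name))

def esc_name(name, quote='"', preserve_case=True):
    if preserve_case:
        # Quoting preserves case; double any embedded quote chars
        return quote + name.replace(quote, quote * 2) + quote
    check_name(name)  # not quoting, so validate the bare name instead
    return name
```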
sql.py: mk_select(): Support literal values in the list of cols to select
sql.py: mk_select(): Don't escape the table name, because it will either be check_name()d or it's already been escaped
sql.py: Added mk_select(), and use it in select()
bin/map: Always pass qual_name(table) to sql.select(). This is possible now that qual_name() can handle None schemas.
db_xml.py: put_table(): Take separate in_table and in_schema names, instead of in_table and table_is_esc, because the in_schema is needed to scope the temp tables appropriately
sql.py: qual_name(): If schema is None, don't prepend schema
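The None-schema behavior might look like this sketch (the signature is an assumption):

```python
def qual_name(table, schema=None):
    # If schema is None, return the table name alone instead of
    # prepending "None."
    if schema is None:
        return table
    return schema + '.' + table
```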
bin/map, sql.py: Turned SQL query caching back on because benchmarks of just the caching on vs. off reveal that it does reduce processing time significantly. However, there is a slowdown that was introduced between the time caching was added and the time the same XML tree was used for each node, which gave the false impression that the slowdown was due to the caching.
bin/map: Turn SQL query caching off by default
bin/map: Added cache_sql env var to enable SQL query caching
sql.py: Make DbConn query caching toggleable. Turn caching off by default because recent benchmarks (n=1000) showed that it slows things down.
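A toggleable caching connection wrapper could be sketched as follows. The class name `DbConn` is from the log entry, but this interface (`run_query`, the `caching` flag, the internal dict) is a hypothetical illustration, not the project's implementation:

```python
import sqlite3

class DbConn:
    """Sketch of a DB connection wrapper whose query caching can be
    turned on or off per connection."""
    def __init__(self, conn, caching=False):
        self.conn = conn
        self.caching = caching
        self._cache = {}  # query string -> fetched rows

    def run_query(self, query):
        if self.caching and query in self._cache:
            return self._cache[query]  # cache hit: skip the DB round trip
        cur = self.conn.cursor()
        cur.execute(query)
        rows = cur.fetchall()
        if self.caching:
            self._cache[query] = rows
        return rows
```

This also illustrates why individual queries (like bin/map's main SELECT, below) may need to opt out: any query whose results change between calls must bypass the cache.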
bin/map: Added new verbose_errors mode, enabled in test mode and off otherwise, which controls whether the output row and tracebacks are included in error messages. Having this off in import mode will reduce the size of error logs so they don't fill up the vegbiendev hard disk as quickly.
exc.py: print_ex(): Added detail option to turn off traceback
bin/map: Turn parallel processing off by default. This should fix "Cannot allocate memory" errors in large imports.
bin/map: in_is_db: Don't cache the main SELECT query
bin/map: by_col: Use the created template, which already has the column names in it, instead of mapping a sample row
bin/map: Fixed bug where db_xml could not be imported twice, or it was treated as an undefined variable for some reason
bin/map: map_table(): Make each column a db_xml.ColRef instead of a bare index, so that it will appear as the column name when converted to a string. This will provide better debugging info in the template tree and also avoid needing to create a separate sample row in by_col.
db_xml.py: Added ColRef
bin/map: Fixed bug where the row count was off by one when all rows in the input were exhausted, because the row that raises StopIteration was being counted as a row
main Makefile: VegBIEN DB: mk_db: Use template1 because it has PROCEDURAL LANGUAGE plpgsql already installed and we aren't using an encoding other than UTF8
Moved "CREATE PROCEDURAL LANGUAGE plpgsql" to main Makefile so that it would only run when the DB is created, not when the public schema is reinstalled. This is only relevant on PostgreSQL < 9.x, where the plpgsql language is not part of template0.
Renamed parallel.py to parallelproc.py to avoid conflict with new system parallel module on vegbiendev
Makefile: VegBIEN DB: public schema: Added schemas/rotate
bin/map: Fixed bug in input rows processed count where the count would be off by 1, because the for loop would leave i at the index of the last row instead of one-past-the-last
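The corrected counting pattern for both off-by-one fixes above might look like this sketch (names hypothetical): count rows that were actually processed, rather than reusing a loop index that stops at the last row's index:

```python
def count_processed(rows, process):
    # Using `i` from `enumerate` would leave the count one short,
    # because it holds the index of the last row, not one past it;
    # and the iteration that raises StopIteration must not count.
    count = 0
    for row in rows:
        process(row)
        count += 1  # increment only after a row is fully processed
    return count
```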
bin/map: Use the same XML tree for each row in DB outputs, to eliminate time spent creating the tree from the XPaths for each row
bin/map: map_table(): Resolve each prefix into a separate, collision-eliminated mapping, instead of resolving values from multiple prefixes each time an individual row is mapped
bin/map: Moved collision-prevention code to map_rows() so it would only run if there were mappings, and so that it would run after any mappings preprocessing by map_table() that creates more collisions
bin/map: Prevent collisions when multiple inputs map to the same output
mappings/DwC1-DwC2.specimens.csv: Mapped collectorNumber and recordNumber to recordNumber with _alt so they wouldn't collide when every input column, even empty ones, is created in the XML tree
bin/map: If out_is_db, in debug mode, print each row's XML tree and each value that it's putting
bin/map: If out_is_db, in debug mode, print the template XML tree used to insert a sample row into the DB
bin/map: map_table(): When translating mappings to column indexes, use appends to a new list instead of deletions from an existing list to simplify the algorithm
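The append-instead-of-delete simplification described above could be sketched like this (function and parameter names are assumptions):

```python
def translate_mappings(mappings, col_idxs):
    # Build a new list of (column index, output) pairs, skipping
    # mappings whose input column is missing, instead of deleting
    # entries from the list being iterated over
    new_mappings = []
    for in_col, out in mappings:
        if in_col in col_idxs:
            new_mappings.append((col_idxs[in_col], out))
    return new_mappings
```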
union: Omit mappings that are mapped to in the input map, in addition to mappings that were overridden. This prevents multiple outputs from being created for both the renamed and original mappings, which would cause duplicate output nodes when one XML tree is used for all rows.
input.Makefile: Maps building: Via maps cleanup: subtract: Include comment column so commented mappings are never removed
subtract: Support "ragged rows" that have fewer columns than the specified column numbers
util.py: list_subset(): Added default param to specify the value to use for invalid indexes (if any)
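The new `default` option might work as in this sketch (the exact signature is assumed):

```python
def list_subset(list_, idxs, default=None):
    # Pick the elements at the given indexes; an invalid
    # (out-of-range) index yields `default` instead of raising
    subset = []
    for i in idxs:
        try:
            subset.append(list_[i])
        except IndexError:
            subset.append(default)
    return subset
```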
mappings/VegX-VegBIEN.stems.csv: Mappings with multiple inputs for the same output: Use _alt, etc. to map the multiple inputs to different places in the XML tree, so that when using a pregenerated tree, the empty leaves for each input will not collide with each other
mappings/VegX-VegBIEN.stems.csv: Changed XPath references (using "$") to XML function references using _ref where needed to make them work even on a pre-made XML tree used by all rows
xml_func.py: Added _ref to retrieve a value from another XML node
xml_func.py: Made all functions take a 2nd node param, which contains the func node itself
bin/map: If outputting to a DB, also create output XML elements for NULL input values. This will help with the transition to using the same XML tree for all rows.
xml_func.py: _label: Return None on empty input
mappings/VegX-VegBIEN.stems.csv: Added _collapse around subtrees that need to be removed if they are created around a NULL value
xml_func.py: Added _collapse to collapse a subtree if the "value" element in it is NULL
schemas/vegbien.sql: definedvalue: Made definedvalue nullable so that each row of a datasource can have a uniform structure in VegBIEN, and to support reusing the same XML DOM tree for each row
xpath.py: Added is_xpath()
xml_dom.py: set_value(): If value is None and node is Element, remove value node entirely instead of setting node's value to None
xml_dom.py: Added value_node(). Use new value_node() in value() and set_value(). set_value(): If the node already has a value node, reuse it instead of appending a new value node.
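The `value_node()`/`set_value()` behavior from the two entries above (remove the value node when value is None; reuse an existing value node instead of appending a new one) might be sketched like this with `xml.dom.minidom`; the real xml_dom.py code differs:

```python
from xml.dom import minidom

def value_node(node):
    # Return the node's first text-node child, if any
    for child in node.childNodes:
        if child.nodeType == child.TEXT_NODE:
            return child
    return None

def set_value(doc, node, value):
    child = value_node(node)
    if value is None:
        # Remove the value node entirely instead of storing "None"
        if child is not None:
            node.removeChild(child)
    elif child is not None:
        child.data = value  # reuse the existing value node
    else:
        node.appendChild(doc.createTextNode(value))
```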
xpath.py: put_obj(): Return the id_attr_node using get_1() because there should be only one node
xml_func.py: _simplifyPath: Also treat the elem as empty if the required node exists but is empty
db_xml.py: put_table(): Added part of put() code that should be common to both functions
xpath.py: put_obj(): Return a tuple of the inserted node and the id attr node
xpath.py: set_id(): When creating the id_path, use obj() (which deepcopy()s the entire path) because it prevents pointers w/o targets
xpath.py: set_id(): When creating the id_path, deepcopy() the id_elem because its keys will change in the main copy
xpath.py: set_id(): Return the path to the ID attr, which can be used to change the ID
xpath.py: put_obj(): Return the inserted node so it can be used to change the inserted value
main Makefile: Maps validation: Fixed bug where there would be infinite recursion with the Maps validation section before the Subdir forwarding section (it's unknown why this is necessary)
db_xml.py: put_table(): Added commit param to specify whether to commit after each query
bin/map: in_is_db: by_col: Use new put_table() (defined but not implemented yet)
db_xml.py: Added put_table() (without implementation)
xml_func.py: strip(): Remove _ignore XML funcs completely instead of replacing them with their values
bin/map: in_is_db: by_col: Prefix each input column name by "$"
bin/map: in_is_db: by_col: Strip off XML functions
xml_func.py: Added strip(). pop_value(): Support custom name of value param.
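The custom-name support in `pop_value()` might look like this sketch, with a list of (label, value) pairs standing in for a func node's children (the real function operates on DOM nodes):

```python
def pop_value(items, name='value'):
    # Pop and return the entry whose label matches `name`;
    # the default preserves the old behavior of popping "value"
    for i, (label, value) in enumerate(items):
        if label == name:
            del items[i]
            return value
    return None
```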
bin/map: in_is_db: by_col: Create XML tree of sample row, with the input column names as the values. This tree will guide the sequencing and creation of the column-based queries.
input.Makefile: use_staged env var: defaults to on if by_col is on
bin/map: Only turn on by_col optimization if mapping to same DB, rather than requiring each place that checks by_col to also check whether mapping to same DB
input.Makefile: Testing: Don't abort the tester if only the staging test fails, in case the staging table is missing
input.Makefile: Testing: When cleaning up test outputs, remove everything that doesn't end in .ref
input.Makefile: Testing: Added test/import.%.staging.out test to test the staging tables. Sources: cat: Updated Usage comment to include the "inputs/<datasrc>/" prefix the user would need to add when running make.
bin/map: Fixed bug where mapping to same DB wouldn't work because by-column optimization wasn't implemented yet, by turning it off by default and allowing it to be enabled with an env var
bin/map: DB inputs: Use by-column optimization if mapping to same DB (with skeleton code for optimization's implementation)
input.Makefile: Mapping: Use the staging tables instead of any flat files if use_staged is specified
bin/map: Support custom schema name. Support input table/schema override via env vars, in case the map spreadsheet was written for a different input format.
sql.py: qual_name(): Fixed bugs where esc_name() nested func couldn't have same name as outer func, and esc_name() needed to be invoked without the module name because it's in the same module. select(): Support already-escaped table names.
main Makefile: $(psqlAsAdmin): Tell sudo to preserve env vars so PGOPTIONS is passed to psql
root map: Fill in defaults for inputs from VegBIEN, as well as outputs to it
disown_all: Updated to use main function, local vars, $self, etc. like other bash scripts run using "."
vegbien_dest: Fixed bug where it would give a usage error if run from a makefile rule, because the BASH_LINENO would be 0, by also checking whether ${BASH_ARGV[0]} is ${BASH_SOURCE[0]}
postgres_vegbien: Fixed bug where interpreter did not match vegbien_dest's new required interpreter of /bin/bash
vegbien_dest: Changed interpreter to /bin/bash. Removed comment that it requires var bien_password.
postgres_vegbien: Removed no longer needed retrieval of bien_password
vegbien_dest: Get bien_password by searching relative to $self, which we now have a way to get in a bash script (${BASH_SOURCE[0]}), rather than requiring the caller to set it. Provide a usage error if run without the initial ".".
input.Makefile: Staging tables: import/install-%: Use new quiet option to determine whether to tee output to terminal. Don't use log option because that's always set to true except in test mode, which doesn't apply to installs.
main Makefile: PostgreSQL: Edit /etc/phppgadmin/apache.conf to replace "deny from all" with "allow from all", instead of uncommenting an "allow from all" that may not be there
input.Makefile: Sources: Fixed bug where cat was defined before $(tables), by moving Sources after Existing maps discovery and putting just $(inputFiles) and $(dbExport) from Sources at the beginning of Existing maps discovery
sql.py: Made truncate(), tables(), empty_db() schema-aware. Added qual_name(). tables(): Added option to filter tables by a LIKE pattern.
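The schema-aware listing with an optional LIKE filter might be built like this sketch (the query text and psycopg2-style `%s` placeholders are assumptions):

```python
def mk_tables_query(schema, like=None):
    # List tables in a schema, optionally filtered by a LIKE pattern
    query = ('SELECT table_name FROM information_schema.tables'
             ' WHERE table_schema = %s')
    params = [schema]
    if like is not None:
        query += ' AND table_name LIKE %s'
        params.append(like)
    return query, params
```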
main Makefile: VegBIEN DB: Install public schema in a separate step, so that it can be dropped without dropping the entire DB (which also contains staging tables that shouldn't be dropped when there is a schema change). Added schemas/install, schemas/uninstall, implicit schemas/reinstall to manage the public schema separately from the rest of the DB. Moved Subdir forwarding to the bottom so overridden targets are not forwarded. README.TXT: Since `make reinstall_db` would drop the entire DB, tell user to run new `make schemas/reinstall` instead to reinstall (main) DB from schema.
schemas/postgresql.Mac.conf: Set unix_socket_directory to the new dir it seems to be using, which is now /tmp
csv2db: Fixed bug where extra columns were not truncated in INSERT mode. Replace empty column names with the column # to avoid errors with CSVs that have trailing ","s, etc.
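The empty-header-name replacement could be sketched as follows (the helper name is hypothetical):

```python
def clean_col_names(names):
    # Replace empty column names (e.g. from trailing ","s in the CSV
    # header) with the 1-based column number
    return [name if name != '' else str(i + 1)
            for i, name in enumerate(names)]
```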
streams.py: StreamIter: Define readline() as a separate method so it can be overridden, and all calls to self.next() will use the overridden readline(). This fixes a bug in ProgressInputStream where incremental counts would not be displayed and it would end with "not all input read" if the StreamIter interface was used instead of readline().
csv2db: Fall back to manually inserting each row (autodetecting the encoding for each field) if COPY FROM doesn't work
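The fallback strategy above, sketched with injected helpers (`bulk_load` standing in for COPY FROM, `insert_row` for the per-row path; both names are hypothetical):

```python
def load_rows(bulk_load, insert_row, rows):
    # Try the fast bulk path first; if it fails, insert each row
    # individually, where per-field encoding autodetection can happen
    try:
        bulk_load(rows)
    except Exception:
        for row in rows:
            insert_row(row)
```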
streams.py: FilterStream: Inherit from StreamIter so that all descendants automatically have StreamIter functionality
sql.py: insert(): Support using the default value for columns designated with the special value sql.default
sql.py: insert(): Support rows that are just a list of values, with no columns. Support already-escaped table names.
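The `sql.default` sentinel from the entries above might be rendered into an INSERT statement like this sketch (the sentinel, function name, and `%s` parameter style are assumptions):

```python
default = object()  # sentinel meaning "use the column's DB default"

def mk_insert(table, cols, row):
    # Values equal to the `default` sentinel become the SQL DEFAULT
    # keyword; everything else becomes a query parameter
    rendered = ['DEFAULT' if v is default else '%s' for v in row]
    params = [v for v in row if v is not default]
    query = ('INSERT INTO %s (%s) VALUES (%s)'
             % (table, ', '.join(cols), ', '.join(rendered)))
    return query, params
```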