mappings/VegCore-VegBIEN.csv: Remapped people's name fields, previously split apart into name components in party, to the new party.fullname, which does not require splitting and makes no assumptions about how many people may be listed in a particular name field or which components of their name(s) are present
schemas/vegbien.sql: party: Added fullname
mappings/VegCore.csv: Added accordingTo
inputs/.TNRS/tnrs/map.csv: Mapped Name_matched_url to scientificNameID, since the URL uniquely identifies the matched taxonconcept
schemas/vegbien.sql: taxonconcept: Renamed taxonname to taxonepithet for clarity and to be consistent with TCS's use of "epithet" to denote what the taxonname was intended to be (http://www.tdwg.org/standards/117/download/#/UserGuidev_1.3.pdf)
schemas/vegbien.sql: taxonconcept.creator_id: Documented that this is the concept reference for a taxon concept with an "according to", or the identifier's name for a nominal concept, and is equivalent to "Name sec. x"
sql_io.py: import_csv(): Added a row_num column at the beginning of the table, autopopulated by csvs.RowNumFilter (it cannot be autopopulated by the serial datatype, because serial does not support COPY FROM with a NULL-equivalent value in the serial field). This fixes a bug in csv2db where rows would not stay in inserted order upon querying the table, and would be returned in a different order on each query, which prevented LIMIT/OFFSET-based subsetting from returning consistent, nonoverlapping results. This occurs because PostgreSQL unfortunately does not return rows in inserted order (or any stable order: "If sorting is not chosen, the rows will be returned in an unspecified order [which] must not be relied on" <http://www.postgresql.org/docs/8.3/static/queries-order.html>), so an explicit ORDER BY is always needed to ensure staging table rows are retrievable in the order they were inserted.
csvs.py: Added RowNumFilter, which adds a row # column at the beginning of each row
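A minimal sketch of such a filter, assuming a wrapped reader with the Python 2 iterator protocol (the real csvs.RowNumFilter builds on the module's own Filter infrastructure, so details will differ):

    class RowNumFilter:
        '''Prepends a 1-based row # to each row of the wrapped CSV reader'''
        def __init__(self, reader):
            self.reader = reader
            self.row_num = 0 # incremented to 1 upon reading the first row
        
        def __iter__(self): return self
        
        def next(self): # Python 2 iterator protocol, matching the codebase
            row = self.reader.next() # raises StopIteration at EOF
            self.row_num += 1
            return [self.row_num] + row

COPY FROM can then load this extra leading column directly into the integer row_num column, avoiding the serial-datatype limitation described in the import_csv() entry above.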
streams.py: LineCountStream, LineCountInputStream: Fixed bug where line_num was 1 too high because it started at 1 and was incremented before each line was returned. line_num now starts at 0 and increments to 1 upon encountering the first line, so it properly equals 1 while the first line is being processed. The old off-by-one behavior may have been needed for code that associates an error message with a line #, but such code should instead add 1 to line_num to get the line # of the error when the error prevents the next line from being read by the LineCount*Stream.
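The corrected counting logic, sketched (the real streams classes wrap existing stream objects; names and details here are illustrative):

    class LineCountInputStream:
        '''Wraps an input stream, tracking the # of the line being processed.
        line_num starts at 0 and increments to 1 when the first line is read.'''
        def __init__(self, stream):
            self.stream = stream
            self.line_num = 0
        
        def readline(self):
            line = self.stream.readline()
            if line != '': self.line_num += 1 # don't count the EOF pseudo-line
            return line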
sql_io.py: import_csv(): Take a reader and header rather than a stream to allow callers to pass in a wrapped CSV reader for filtering, etc.
sql_io.py: append_csv(): Take a reader and header rather than a stream_info and stream to allow callers to use the simpler csvs.reader_and_header() function. This also allows callers to pass in a wrapped CSV reader for filtering, etc.
csv2db, tnrs_db: Removed ProgressInputStream wrapper around input stream, which is no longer needed (and causes overlapping output) now that sql_io.append_csv() prints # rows read
sql_io.py: append_csv(): Wrap input stream in a ProgressInputStream that reports rows (rather than lines) read
csvs.py: InputRewriter: Use new StreamFilter to translate StopIteration EOF to ''
csvs.py: Added StreamFilter
csvs.py: InputRewriter: Also support stream inputs which report EOF as '' instead of StopIteration
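A sketch of the EOF normalization the StreamFilter entries above describe, assuming a readline()-style stream interface (the actual csvs.StreamFilter may be structured differently):

    class StreamFilter:
        '''Wraps a stream so that EOF is always reported as '', even if the
        underlying object signals EOF by raising StopIteration'''
        def __init__(self, stream): self.stream = stream
        
        def readline(self):
            try: return self.stream.readline()
            except StopIteration: return '' # normalize EOF to the stream convention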
sql_io.py: append_csv(): Removed no longer used INSERT mode, since all callers now use the default COPY FROM
sql_io.py: import_csv(): Removed no longer needed manual setting of use_copy_from, which defaults to True in append_csv()
csv2db: Removed no longer needed manual setting of use_copy_from, which defaults to True in sql_io.import_csv()
csv2db: Removed no longer needed separate handling of sql.DatabaseErrors, because all recoverable errors caused by COPY FROM (EncodingException and ragged rows) are now handled or avoided
csv2db: Handle EncodingException separately by changing the connection encoding to LATIN1 and retrying
sql.py: DbConn: Added set_encoding()
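Roughly how the csv2db retry might look (a sketch only: function signatures are approximate, and the real code re-reads the input from the beginning before retrying):

    def load_csv(db, table, reader, header):
        '''On an EncodingException from COPY FROM (see the sql.parse_exception()
        entries below), switch the connection to LATIN1 and retry. LATIN1 maps
        every byte to a character, so any byte sequence decodes without error.'''
        try: sql_io.append_csv(db, table, reader, header)
        except sql.EncodingException:
            db.set_encoding('LATIN1')
            sql_io.append_csv(db, table, reader, header) # real code re-reads input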
sql_io.py: append_csv(): Parse any exceptions generated by the COPY FROM using new sql.parse_exception()
sql.py: run_query(): Factored exception parsing out into new parse_exception()
sql.py: Added EncodingException and parse it in run_query()
sql.py: Removed no longer used NameException
csvs.py: Filter: Added empty close() method to support using it as a stream (such as with streams.ProgressInputStream)
sql_io.py: append_csv(): Don't disable COPY FROM for TSVs, which are now supported using csvs.InputRewriter
sql_io.py: append_csv(): COPY FROM: Wrap provided stream in standardizing stream to fix ragged rows (with unequal # columns) and nonstandard CSV dialects (such as TSV with \-escaped newlines)
csvs.py: Added InputRewriter, which wraps a reader, writing each row back to CSV
csvs.py: Added ColCtFilter, which gives all rows the same # columns
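Sketches of the two pieces described in the entries above, assuming Python 2 (the real classes in csvs.py integrate with the module's Filter framework):

    import csv
    import StringIO
    
    class ColCtFilter:
        '''Gives all rows the same # of columns: long rows are truncated,
        short rows padded with None'''
        def __init__(self, reader, cols_ct):
            self.reader = reader
            self.cols_ct = cols_ct
        
        def __iter__(self): return self
        
        def next(self):
            row = self.reader.next()[:self.cols_ct] # truncate extra columns
            return row + [None]*(self.cols_ct - len(row)) # pad short rows
    
    class InputRewriter:
        '''Wraps a reader as a stream: each row read is rewritten as standard
        CSV, so COPY FROM always sees clean, rectangular input'''
        def __init__(self, reader): self.reader = reader
        
        def readline(self):
            try: row = self.reader.next()
            except StopIteration: return '' # stream EOF convention
            line = StringIO.StringIO()
            csv.writer(line).writerow(row)
            return line.getvalue()
        
        def close(self): pass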
sql_io.py: row_num_col_def: Changed type to integer so the row_num can be populated directly by the insert process
sql_io.py: Added row_num_col_def for use by import_csv(). The row_num column will be necessary again because PostgreSQL unfortunately does not return rows in inserted order (or any stable order: "If sorting is not chosen, the rows will be returned in an unspecified order [which] must not be relied on" <http://www.postgresql.org/docs/8.3/static/queries-order.html>), so an explicit ORDER BY is always needed to ensure staging table rows are retrievable in the order they were inserted.
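For example, paginated reads of a staging table are only guaranteed to produce disjoint, complete windows when an explicit ORDER BY fixes the row order (a hypothetical illustration; run_query()'s actual signature is in sql.py):

    def read_partitions(db, table, row_ct, partition_size):
        '''Yields the table's rows in consistent, nonoverlapping windows'''
        for offset in xrange(0, row_ct, partition_size):
            yield sql.run_query(db, 'SELECT * FROM '+table
                +' ORDER BY row_num LIMIT %d OFFSET %d'
                % (partition_size, offset))

Without the ORDER BY, PostgreSQL is free to return each window in a different underlying order, so consecutive windows can overlap or skip rows.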
mappings/VegCore.csv: Removed unit-ambiguous height. Use height_m, height_ft instead.
mappings/Veg+-VegCore.csv: Added height
mappings/VegCore-VegBIEN.csv: Removed no longer used height mapping. Use height_m, height_ft instead.
README.TXT: Data import: import_all: Added NCBI backbone to note about import_all not immediately returning control to the shell
inputs/FIA/Organism/map.csv: Height: Remapped to height_ft, assuming units based on the range of values, the height of the tallest tree, and location inside the U.S.
inputs/FIA/Organism/test.xml.ref: Accepted new inserted row count
mappings/VegCore-VegBIEN.csv: Mapped height_ft
schemas/functions.sql: Added _ft_to_m()
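The conversion itself is exact (a sketch of the arithmetic only; the production version is the SQL function _ft_to_m() in schemas/functions.sql):

    def ft_to_m(height_ft):
        '''1 international foot = 0.3048 m exactly'''
        return height_ft*0.3048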
mappings/VegCore.csv: Added height_ft
inputs/SALVIAS/stems/map.csv: stem_height_m: Remapped to height_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
inputs/SALVIAS-CSV/Organism/map.csv: stem_height_m: Re-sourced units to the stem_height_m definition rather than the height_m definition in the SALVIAS data dictionary
Regenerated vegbien.ERD exports
schemas/vegbien.sql: taxonconcept: taxonconcept_update_ancestors() trigger: Fixed bug where matched_concept_id needed to be changed to NULL when equal to taxonconcept_id, to avoid including the node itself with its parent's ancestors (which would violate the taxonconcept_ancestor pkey)
sql_io.py: put_table(): Ensuring into's out_pkey is different from in_pkey: Prepend "out." instead of out_table to avoid long column names for the output pkey
sql_gen.py: concat(): Allow multiple "column" suffixes with "." when matching the existing suffix
schemas/vegbien.sql: taxonconcept: taxonconcept_update_ancestors() trigger: Corrected comment explaining why we don't need an ON DELETE trigger to say that this is because the foreign key for taxonconcept_ancestor.ancestor_id, not taxonconcept.parent_id, is ON DELETE CASCADE. The auto-deletion will also occur if taxonconcept.parent_id is ON DELETE CASCADE, because taxonconcept_ancestor.taxonconcept_id is ON DELETE CASCADE, but it is not actually necessary to have cascading deletes on taxonconcept.parent_id (and SET NULL may in fact sometimes be more appropriate).
schemas/tree_cross-links.sql: Removed header comments added by pgAdmin
schemas/tree_cross-links.sql: Updated for new taxonconcept_update_ancestors() trigger
schemas/vegbien.sql: taxonconcept: Rewrote the taxonconcept_update_ancestors() trigger to avoid completely reinserting the taxonconcept_ancestor entries of all descendants every time taxonconcept changes, and to avoid using trigger recursion to find descendants. Instead, just delete the old parent's ancestors from, and add the new parent's ancestors to, each descendant, using taxonconcept_ancestor itself (with the new taxonconcept_ancestor_descendants index) to find all descendants. As an additional optimization, only update taxonconcept_ancestor if the parent_id or matched_concept_id has actually changed. This fixes a bug in the NCBI import where inserting taxonconcepts out of dependency order caused taxonconcept_ancestor entries to be repeatedly regenerated, slowing the import down to a crawl.
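The set logic, illustrated in Python (a rough sketch only; the real trigger is PL/pgSQL operating on taxonconcept_ancestor rows, and finds descendants via the taxonconcept_ancestor_descendants index):

    def update_ancestors(ancestors, concept, old_parent, new_parent):
        '''ancestors: dict mapping each concept to its set of ancestor concepts'''
        if old_parent == new_parent: return # optimization: nothing changed
        removed = (set() if old_parent is None
            else ancestors.get(old_parent, set()) | set([old_parent]))
        added = (set() if new_parent is None
            else ancestors.get(new_parent, set()) | set([new_parent]))
        # use the ancestors index itself to find all descendants
        descendants = [c for c, a in ancestors.items() if concept in a]
        for c in [concept] + descendants:
            # ancestors shared by both parents are removed, then restored by the union
            ancestors[c] = (ancestors[c] - removed) | added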
schemas/vegbien.sql: taxonconcept: Added taxonconcept_3_parent_id_avoid_self_ref() trigger to avoid recursive references in root taxonconcepts (taxonconcepts with no parent). This will simplify the new taxonconcept_update_ancestors() trigger.
schemas/vegbien.sql: taxonconcept_ancestor: Added taxonconcept_ancestor_descendants index to support looking up all the descendants for a taxonconcept. This will be used by the new taxonconcept_update_ancestors() trigger, which will support inserting taxonconcepts out of dependency order (such as for NCBI).
schemas/vegbien.sql: *_update_ancestors(): Made trigger deferred, so that it would run after all rows have been inserted in a bulk insert, such as during column-based import. This ensures that ancestors lists are not populated until all parents are inserted, which may occur out of order for datasources (such as NCBI) whose nodes are not in dependency order. (A node that newly acquires a parent will have to update all its descendants, which will then be updated again when its parent acquires its own parent.)
lib/PostgreSQL-MySQL.csv: Also filter out constraint triggers in addition to regular triggers
inputs/Madidi/Organism/map.csv: Total height: Remapped to height_m, assuming units based on the range and precision of values
inputs/VegBank/stemcount/map.csv: stemheight: Remapped to height_m using units from <http://vegbank.org/vegbank/views/dba_tabledescription_detail.jsp?view=detail&wparam=stemcount&entity=dba_tabledescription&where=where_tablename>
inputs/SALVIAS/plotObservations/map.csv, inputs/SALVIAS-CSV/Organism/map.csv: height_m, stem_height_m: Remapped to height_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
mappings/VegCore-VegBIEN.csv: Mapped height_m
mappings/VegCore.csv: Added height_m
mappings/VegCore.csv, VegCore-VegBIEN.csv: Removed no longer used and unit-ambiguous organismX, organismY. Use organismX_m, organismY_m instead.
inputs/VegBank/stemlocation/map.csv: stemxposition, stemyposition: Remapped to organismX_m/organismY_m using units from <http://vegbank.org/vegbank/views/dba_tabledescription_detail.jsp?view=detail&wparam=stemlocation&entity=dba_tabledescription&where=where_tablename>
inputs/TEAM/*/map.csv: 1ha Plot X Coordinate, 1ha Plot Y Coordinate: Remapped to organismX_m/organismY_m using units from <https://projects.nceas.ucsb.edu/nceas/projects/bien/repository/raw/inputs/TEAM/_src/TEAM-DataPackage-20120920191251_3859/Vegetation+-+Trees+&+Lianas/Vegetation-Tree-and-Liana-Metadata-1.5.pdf>
inputs/SALVIAS/plotObservations/map.csv, inputs/SALVIAS-CSV/Organism/map.csv: x_position, y_position: Remapped to organismX_m/organismY_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
inputs/Madidi/Organism/map.csv: Subplot X, Subplot Y: Remapped to organismX_m/organismY_m, assuming units based on the size of values relative to the plot area, which has units of ha
inputs/CTFS/StemObservation/map.csv: x, y: Remapped to organismX_m/organismY_m, assuming units based on the size of values relative to plot area, which has units of ha
mappings/VegCore-VegBIEN.csv: Mapped organismX_m, organismY_m
mappings/VegCore.csv: Added organismX_m, organismY_m
sql_io.py: put_table(): full_in_table: Create it using new sql.copy_table() instead of sql.run_query_into()
sql.py: Added copy_table()
sql.mk_select() calls: Removed no longer needed order_by=None when limit=0
sql.py: mk_select(): Set order_by to None if limit == 0
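The rationale: a LIMIT 0 query returns no rows, so any ORDER BY is wasted work, and callers previously had to suppress it by hand. A sketch of just this interaction (mk_select()'s real signature has many more parameters):

    order_by_pkey = object() # sentinel meaning "order by the table's pkey"
    
    def mk_select(table, pkey, order_by=order_by_pkey, limit=None):
        if limit == 0: order_by = None # LIMIT 0 returns no rows, so don't sort
        query = 'SELECT * FROM '+table
        if order_by is order_by_pkey: order_by = pkey
        if order_by is not None: query += ' ORDER BY '+order_by
        if limit is not None: query += ' LIMIT %d' % limit
        return query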
inputs/.TNRS/schema.sql: Documented that accepted names must be processed before any names that resolve to them, because the entry for the accepted name contains all the ranks parsed out, while the entry for a name that resolves to it contains just some ranks plus the taxonomic name. Column-based import will do this automatically when the total # of rows is <= the partition_size (because _taxonconcept_set_matched_concept_id()'s accepted taxonconcept is created after the main taxonconcept), but TNRS has more rows than this, so sorting is needed to ensure that all the accepted names are processed in the first partitions.
sql.py: table_order_by(): Cache the order_by in table.order_by and propagate it when a LIKE table is created
sql_gen.py: Table: Added order_by attr to cache the results of table_order_by()
sql.select() calls: Removed order_by=None everywhere that a stable row order is required (i.e. consistent between selects, or consistent between table transformations). This causes several tests to return different inserted row counts, because the input table is now being accessed in pkey order instead of in table order. This fixes a bug where tables with more than ~100 rows would return different results for repeated calls of the same non-ordered select.
sql.py: mk_select(): Use table_order_by() instead of table_pkey_col() to determine what column(s) to order by if order_by is set to order_by_pkey
sql.py: Added table_pkey_index(), index_order_by(), table_cluster_on(), table_order_by()
sql.py: Added index_exprs() and use it in index_cols()
README.TXT: Data import: On local machine: Added `make inputs/.TNRS/cleanup`, which is necessary because the PostgreSQL collation may differ between vegbiendev's DB and your own
schemas/vegbien.sql: taxonconcept: taxonconcept_update_ancestors(): Use matched_concept_id's ancestors instead if available. (Recursively applied, this will use the ancestors of the accepted concept.) This facilitates finding all children of and matches to an accepted concept, which will all have an entry for that concept in taxonconcept_ancestor. Note that the concept's own parents will not be indexed in taxonconcept_ancestor, because only accepted ancestors are now stored in taxonconcept_ancestor. Documented that taxonconcept_ancestor now stores the accepted ancestors of a taxonconcept.
schemas/vegbien.sql: taxonconcept: taxonconcept_2_propagate_accepted_concept_id(): Also update accepted_concept_id on concepts that resolve to this concept, which may have been created before this concept was marked as accepted if concepts are not imported in dependency order (accepted concepts first). Added index on matched_concept_id to speed up finding concepts that resolve to this concept.
sql.py: mk_select(): order_by is order_by_pkey: Only order by the table's actual pkey if it has one, rather than falling back to the first column if it doesn't
inputs/.TNRS/tnrs/test.xml.ref: Updated inserted row count
db_xml.py: partition_size: Increased to 1,000,000 (>= NCBI.higher_taxa's size) so NCBI.higher_taxa can be imported completely in one partition. This is necessary because NCBI's taxonconcepts are not in dependency order (parents first), so a later partition cannot rely on the parents of its taxonconcepts having already been imported. Instead, all taxonconcepts must be imported at once, and then the parents of all taxonconcepts must be set separately.
mappings/VegCore-VegBIEN.csv: taxonconcept.parent_id when explicit parent provided: Set taxonconcept.parent_id using the new _taxonconcept_set_parent_id() after creating the child taxonconcept, so that the parent_id will point to the already-inserted parent taxonconcept instead of creating a new, empty parent taxonconcept. This creates a two-step import, where first the taxonconcepts are imported and then the parent_ids are matched up. This is necessary for column-based import because all the parent taxonconcepts, which contain only their sourceaccessioncode, are imported in a separate iteration from the child taxonconcepts, and this iteration must occur after the child-taxonconcept iteration in order to match up with fully-populated taxonconcepts. Row-based import, on the other hand, does not require _taxonconcept_set_parent_id(), but does require the taxonconcepts to be provided in dependency order (parents first), which is unfortunately not the case for NCBI.
schemas/vegbien.sql: *_update_ancestors(): Telling immediate children to update their ancestors lists: Exclude self to avoid infinite recursion
schemas/vegbien.sql: Added _taxonconcept_set_parent_id()
schemas/vegbien.sql: Renamed _set_matched_taxonconcept() to _taxonconcept_set_matched_concept_id() so that the function name is prefixed with the table it applies to
db_xml.py: put(): Treat a child node which is a function (starts with _) as a child with fkey to parent rather than as a field in the table. Such a function accepts the table's pkey as one of its arguments.
sql_gen.py: map_expr(): Don't replace an unquoted name when followed by ",", as it would be in an into table name for a function with multiple arguments (e.g. family in "_join_words(1=Field family, 2=Field name)")
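A simplified illustration of the guard (the real sql_gen.map_expr() also handles quoted names and other edge cases):

    import re
    
    def map_expr(expr, mapping):
        '''Replaces unquoted names in a SQL expression, skipping occurrences
        immediately followed by "," (which are part of a generated into-table
        name rather than column references)'''
        for name, repl in mapping.items():
            expr = re.sub(r'\b'+re.escape(name)+r'\b(?!,)', repl, expr)
        return expr
    
    # 'family' inside the into-table name is left alone:
    # map_expr('_join_words(1=Field family, 2=Field name)', {'family': 'f'})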
schemas/vegbien.sql: locationevent: Moved obsstartdate, obsenddate to top of table so they would be visible in the ERD
sql_io.py: put_table(): ensure_cond(): track_data_error(): Concatenate the columns in the constraint together using ",", rather than adding a separate entry for each column, because the constraint applies to all the columns together rather than to each column separately
sql_io.py: put_table(): Renamed ignore_cond() to ensure_cond() for clarity
import_all: Also import the NCBI tree of life, before the TNRS names
mappings/VegCore-VegBIEN.csv: Also map acceptedFamily to the corresponding NCBI family
lib/PostgreSQL-MySQL.csv: custom types: Also exclude time. Reordered excluded (built-in) types by name.