mappings/VegCore.csv: Added taxonomicStatus
schemas/vegbien.sql: taxonlabel: Added taxonstatus, with taxonomic_status enum
schemas/vegbien.sql: taxonlabel.creator_id comment: Removed no longer accurate comment that this is the "according to" and "Name sec. x", which is now stored in concept_reference_id
schemas/vegbien.sql: taxonlabel: Added concept_reference_id, which is the entity that defined the taxon concept (who the taxon label is according to)
schemas/vegbien.ERD.mwb: Moved taxonlabel_relationship to the right of taxonlabel to provide room for taxonlabel to grow
Regenerated vegbien.ERD exports
mappings/VegCore-VegBIEN.csv: Remapped morphospecies to new taxonlabel.morphospecies per today's conference call
schemas/vegbien.sql: taxonlabel: Added separate morphospecies field per today's conference call, where it was decided it could not go in taxonepithet (the lowest-rank component of the name)
schemas/vegbien.sql: Deleted taxonusage table per today's conference call, where it was decided that it was not needed
schemas/vegbien.sql: Renamed taxonlabel_ancestor to taxonlabel_relationship per today's conference call, where it was decided that it would eventually contain asserted relationships (such as synonym and parent) in addition to autopopulated ancestor relationships
schemas/vegbien.sql: Renamed taxonconcept to taxonlabel per today's conference call, where it was decided that taxonconcept contained too many unrelated fields to be purely a taxon concept
inputs/import.stats.xls: Updated import times
inputs/test_taxonomic_names/_scrub/public.sql, TNRS.sql: Regenerated with schema changes
schemas/vegbien.ERD.mwb: Fixed lines
schemas/vegbien.sql: taxonconcept_ancestor: Renamed taxonconcept_id to descendant_id to emphasize the direction of the relationship between the two taxonconcepts
schemas/vegbien.ERD.mwb: Added taxonconcept_ancestor to the diagram since it is now a core table for storing taxonomic information
mappings/VegCore-VegBIEN.csv: Mapped accordingTo to taxonconcept.creator_id, having it take the place of identifiedBy when both are present
mappings/VegCore-VegBIEN.csv: Remapped people's names that had been split apart into name components in party to the new party.fullname, which does not require splitting and makes no assumptions about the number of people listed in a particular name field or which components of their name(s) are present
schemas/vegbien.sql: party: Added fullname
mappings/VegCore.csv: Added accordingTo
inputs/.TNRS/tnrs/map.csv: Mapped Name_matched_url to scientificNameID, since the URL uniquely identifies the matched taxonconcept
schemas/vegbien.sql: taxonconcept: Renamed taxonname to taxonepithet for clarity and to be consistent with TCS's use of "epithet" to denote what the taxonname was intended to be (http://www.tdwg.org/standards/117/download/#/UserGuidev_1.3.pdf)
schemas/vegbien.sql: taxonconcept.creator_id: Documented that this is the concept reference for a taxon concept with an "according to", or the identifier's name for a nominal concept, and is equivalent to "Name sec. x"
sql_io.py: import_csv(): Added a row_num column at the beginning of the table, which is autopopulated by csvs.RowNumFilter (it cannot be autopopulated by the serial datatype, because serial does not support COPY FROM with a NULL-equivalent value in the serial field). This fixes a bug in csv2db where rows would not stay in inserted order upon querying the table, and would be returned in a different order on each query, which prevented LIMIT/OFFSET-based subsetting from returning consistent, nonoverlapping results. This occurs because PostgreSQL unfortunately does not return rows in inserted order (or any stable order: "If sorting is not chosen, the rows will be returned in an unspecified order [which] must not be relied on" <http://www.postgresql.org/docs/8.3/static/queries-order.html>), so an explicit ORDER BY is always needed to ensure staging table rows are retrievable in the order they were inserted.
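For illustration, a minimal sketch of why the explicit ORDER BY matters (the table name is hypothetical): without it, consecutive "pages" may overlap or skip rows between queries; ordering by row_num partitions the staging table deterministically.

    # hypothetical staging table; row_num is the column added by import_csv()
    page1 = 'SELECT * FROM staging ORDER BY row_num LIMIT 100 OFFSET 0'
    page2 = 'SELECT * FROM staging ORDER BY row_num LIMIT 100 OFFSET 100'
    # without the ORDER BY, the same two queries could return
    # overlapping or incomplete subsets on each run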
csvs.py: Added RowNumFilter, which adds a row # column at the beginning of each row
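A minimal sketch of the idea (the actual csvs.RowNumFilter integrates with the module's Filter class; this standalone generator is illustration only):

    def row_num_filter(rows, start=1):
        '''Yield each row with its 1-based row # prepended.'''
        for row_num, row in enumerate(rows, start):
            yield [row_num] + list(row)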
streams.py: LineCountStream, LineCountInputStream: Fixed bug where line_num was 1 too high, because it started at 1 and was incremented before each line was returned. The counter now starts at 0 and increments to 1 upon encountering the first line, so line_num properly starts at 1 for the first line. The off-by-one behavior may have been needed by code that associates an error message with a line #, but such code should instead add 1 to line_num to get the line # of the error when the error prevents the next line from being read by the LineCount*Stream.
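The corrected counting behaves roughly like this sketch (simplified; the real classes wrap arbitrary streams):

    class LineCounter(object):
        '''Sketch: line_num is the # of the last line read (0 before any).'''
        def __init__(self, stream):
            self.stream = stream
            self.line_num = 0  # increments to 1 upon reading the first line
        def readline(self):
            line = self.stream.readline()
            if line != '': self.line_num += 1  # count only lines actually read
            return line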
sql_io.py: import_csv(): Take a reader and header rather than a stream to allow callers to pass in a wrapped CSV reader for filtering, etc.
sql_io.py: append_csv(): Take a reader and header rather than a stream_info and stream to allow callers to use the simpler csvs.reader_and_header() function. This also allows callers to pass in a wrapped CSV reader for filtering, etc.
csv2db, tnrs_db: Removed ProgressInputStream wrapper around input stream, which is no longer needed (and causes overlapping output) now that sql_io.append_csv() prints # rows read
sql_io.py: append_csv(): Wrap input stream in a ProgressInputStream that reports rows (rather than lines) read
csvs.py: InputRewriter: Use new StreamFilter to translate StopIteration EOF to ''
csvs.py: Added StreamFilter
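The EOF translation amounts to something like this sketch (the actual StreamFilter may wrap additional stream methods):

    class StreamFilter(object):
        '''Sketch: present an iterator-style stream as a ''-at-EOF stream.'''
        def __init__(self, stream): self.stream = stream
        def readline(self):
            try: return self.stream.readline()
            except StopIteration: return ''  # translate iterator-style EOF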
csvs.py: InputRewriter: Also support stream inputs which report EOF as '' instead of StopIteration
sql_io.py: append_csv(): Removed no longer used INSERT mode, since all callers now use the default COPY FROM
sql_io.py: import_csv(): Removed no longer needed manual setting of use_copy_from, which defaults to True in append_csv()
csv2db: Removed no longer needed manual setting of use_copy_from, which defaults to True in sql_io.import_csv()
csv2db: Removed no longer needed separate handling of sql.DatabaseErrors, because all recoverable errors caused by COPY FROM (EncodingException and ragged rows) are now handled or avoided
csv2db: Handle EncodingException separately by changing the connection encoding to LATIN1 and retrying
sql.py: DbConn: Added set_encoding()
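The retry logic is approximately this sketch (do_copy and the DB-API cursor are stand-ins for the actual csv2db/sql.py plumbing; EncodingException is per the sql.py entries below):

    def copy_with_encoding_retry(cur, do_copy):
        '''Sketch: on an encoding error, fall back to LATIN1 and retry.'''
        try: do_copy()
        except EncodingException:
            # every byte sequence is valid LATIN1, so the retry
            # cannot hit another encoding error
            cur.execute("SET client_encoding TO 'LATIN1'")
            do_copy()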
sql_io.py: append_csv(): Parse any exceptions generated by the COPY FROM using new sql.parse_exception()
sql.py: run_query(): Factored exception parsing out into new parse_exception()
sql.py: Added EncodingException and parse it in run_query()
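A plausible shape for the parsing (the regexp matches PostgreSQL's actual error wording; everything else is a sketch):

    import re

    class EncodingException(Exception): pass

    def parse_exception(e):
        '''Sketch: map a recognized DB error message to a typed exception.'''
        match = re.search(r'invalid byte sequence for encoding "(.+?)"', str(e))
        if match: raise EncodingException(match.group(1))
        raise e  # unrecognized: re-raise as-is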
sql.py: Removed no longer used NameException
csvs.py: Filter: Added empty close() method to support using it as a stream (such as with streams.ProgressInputStream)
sql_io.py: append_csv(): Don't disable COPY FROM for TSVs, which are now supported using csvs.InputRewriter
sql_io.py: append_csv(): COPY FROM: Wrap provided stream in standardizing stream to fix ragged rows (with unequal # columns) and nonstandard CSV dialects (such as TSV with \-escaped newlines)
csvs.py: Added InputRewriter, which wraps a reader, writing each row back to CSV
csvs.py: Added ColCtFilter, which gives all rows the same # columns
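Sketches of the two pieces (simplified; the real classes integrate with csvs.py's Filter and dialect machinery):

    import csv, io

    def col_ct_filter(rows, col_ct):
        '''Sketch: pad or truncate every row to exactly col_ct columns.'''
        for row in rows:
            row = list(row[:col_ct])
            yield row + ['']*(col_ct - len(row))

    class InputRewriter(object):
        '''Sketch: wrap a row reader as a file whose contents are clean CSV.'''
        def __init__(self, reader):
            self.reader, self.buf = reader, ''
        def read(self, n):
            while len(self.buf) < n:  # refill the buffer one row at a time
                try: row = next(self.reader)
                except StopIteration: break  # EOF: return whatever is left
                line = io.StringIO()
                csv.writer(line).writerow(row)
                self.buf += line.getvalue()
            result, self.buf = self.buf[:n], self.buf[n:]
            return result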
sql_io.py: row_num_col_def: Changed type to integer so the row_num can be populated directly by the insert process
sql_io.py: Added row_num_col_def for use by import_csv(). The row_num column will be necessary again because PostgreSQL unfortunately does not return rows in inserted order (or any stable order: "If sorting is not chosen, the rows will be returned in an unspecified order [which] must not be relied on" <http://www.postgresql.org/docs/8.3/static/queries-order.html>), so an explicit ORDER BY is always needed to ensure staging table rows are retrievable in the order they were inserted.
mappings/VegCore.csv: Removed unit-ambiguous height. Use height_m, height_ft instead.
mappings/Veg+-VegCore.csv: Added height
mappings/VegCore-VegBIEN.csv: Removed no longer used height mapping. Use height_m, height_ft instead.
README.TXT: Data import: import_all: Added NCBI backbone to note about import_all not immediately returning control to the shell
inputs/FIA/Organism/map.csv: Height: Remapped to height_ft, assuming units based on the range of values, the height of the tallest tree, and location inside the U.S.
inputs/FIA/Organism/test.xml.ref: Accepted new inserted row count
mappings/VegCore-VegBIEN.csv: Mapped height_ft
schemas/functions.sql: Added _ft_to_m()
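A plausible definition, sketched as the SQL the schema might contain (the exact signature in schemas/functions.sql may differ):

    # 1 ft = 0.3048 m exactly (international foot)
    ft_to_m_sql = '''
    CREATE OR REPLACE FUNCTION _ft_to_m(value double precision)
        RETURNS double precision LANGUAGE sql IMMUTABLE
        AS $$ SELECT $1 * 0.3048 $$;
    '''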
mappings/VegCore.csv: Added height_ft
inputs/SALVIAS/stems/map.csv: stem_height_m: Remapped to height_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
inputs/SALVIAS-CSV/Organism/map.csv: stem_height_m: Re-sourced units to the stem_height_m definition rather than the height_m definition in the SALVIAS data dictionary
schemas/vegbien.sql: taxonconcept: taxonconcept_update_ancestors() trigger: Fixed bug where matched_concept_id needed to be changed to NULL when equal to taxonconcept_id, to avoid including the node itself with its parent's ancestors (which would violate the taxonconcept_ancestor pkey)
sql_io.py: put_table(): Ensuring into's out_pkey is different from in_pkey: Prepend "out." instead of out_table to avoid long column names for the output pkey
sql_gen.py: concat(): Allow multiple "."-separated "column" suffixes when matching the existing suffix
schemas/vegbien.sql: taxonconcept: taxonconcept_update_ancestors() trigger: Corrected the comment explaining why an ON DELETE trigger is not needed: it is because the foreign key on taxonconcept_ancestor.ancestor_id, not the one on taxonconcept.parent_id, is ON DELETE CASCADE. The auto-deletion would also occur if taxonconcept.parent_id were ON DELETE CASCADE (because taxonconcept_ancestor.taxonconcept_id is ON DELETE CASCADE), but cascading deletes on taxonconcept.parent_id are not actually necessary (and SET NULL may in fact sometimes be more appropriate).
schemas/tree_cross-links.sql: Removed header comments added by pgAdmin
schemas/tree_cross-links.sql: Updated for new taxonconcept_update_ancestors() trigger
schemas/vegbien.sql: taxonconcept: Rewrote taxonconcept_update_ancestors() trigger to avoid completely reinserting the taxonconcept_ancestor entries of all descendants every time taxonconcept changes, and to avoid using trigger recursion to find descendants. Instead, just delete the old parent's ancestors from, and add the new parent's ancestors to, each descendant, using taxonconcept_ancestor itself (with the new taxonconcept_ancestor_descendants index) to find all descendants. As an additional optimization, only update taxonconcept_ancestor if the parent_id or matched_concept_id has actually changed. This fixes a bug in NCBI where inserting taxonconcepts out of dependency order caused taxonconcept_ancestor entries to be repeatedly regenerated, slowing the import down to a crawl.
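In outline, the new strategy amounts to SQL like this sketch (:changed_id and :old_parent_id are placeholders for the trigger's OLD/NEW values; the actual trigger body differs):

    update_ancestors_sketch = '''
    -- remove the old parent + its ancestors from the changed node and from
    -- all of its descendants, which are found via the new
    -- taxonconcept_ancestor_descendants index rather than trigger recursion
    DELETE FROM taxonconcept_ancestor
    WHERE taxonconcept_id IN (
            SELECT :changed_id
            UNION
            SELECT taxonconcept_id FROM taxonconcept_ancestor
            WHERE ancestor_id = :changed_id
        )
        AND ancestor_id IN (
            SELECT :old_parent_id
            UNION
            SELECT ancestor_id FROM taxonconcept_ancestor
            WHERE taxonconcept_id = :old_parent_id
        );
    -- then add the new parent + its ancestors to the same set of
    -- descendants (analogous INSERT ... SELECT, omitted)
    '''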
schemas/vegbien.sql: taxonconcept: Added taxonconcept_3_parent_id_avoid_self_ref() trigger to avoid recursive references in root taxonconcepts (taxonconcepts with no parent). This will simplify the new taxonconcept_update_ancestors() trigger.
schemas/vegbien.sql: taxonconcept_ancestor: Added taxonconcept_ancestor_descendants index to support looking up all the descendants for a taxonconcept. This will be used by the new taxonconcept_update_ancestors() trigger, which will support inserting taxonconcepts out of dependency order (such as for NCBI).
schemas/vegbien.sql: *_update_ancestors(): Made trigger deferred, so that it would run after all rows have been inserted in a bulk insert, such as during column-based import. This ensures that ancestors lists are not populated until all parents are inserted, which may occur out of order for datasources (such as NCBI) whose nodes are not in dependency order. (A node that newly acquires a parent will have to update all its descendants, which will then be updated again when its parent acquires its own parent.)
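Deferring a row-level trigger to transaction end uses a constraint trigger, roughly as follows (sketch; the actual definition is in schemas/vegbien.sql, and constraint triggers are why lib/PostgreSQL-MySQL.csv must now filter them out, per the next entry):

    deferred_trigger_sql = '''
    CREATE CONSTRAINT TRIGGER taxonconcept_update_ancestors
        AFTER INSERT OR UPDATE ON taxonconcept
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW EXECUTE PROCEDURE taxonconcept_update_ancestors();
    '''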
lib/PostgreSQL-MySQL.csv: Also filter out constraint triggers in addition to regular triggers
inputs/Madidi/Organism/map.csv: Total height: Remapped to height_m, assuming units based on the range and precision of values
inputs/VegBank/stemcount/map.csv: stemheight: Remapped to height_m using units from <http://vegbank.org/vegbank/views/dba_tabledescription_detail.jsp?view=detail&wparam=stemcount&entity=dba_tabledescription&where=where_tablename>
inputs/SALVIAS/plotObservations/map.csv, inputs/SALVIAS-CSV/Organism/map.csv: height_m, stem_height_m: Remapped to height_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
mappings/VegCore-VegBIEN.csv: Mapped height_m
mappings/VegCore.csv: Added height_m
mappings/VegCore.csv, VegCore-VegBIEN.csv: Removed no longer used and unit-ambiguous organismX, organismY. Use organismX_m, organismY_m instead.
inputs/VegBank/stemlocation/map.csv: stemxposition, stemyposition: Remapped to organismX_m/organismY_m using units from <http://vegbank.org/vegbank/views/dba_tabledescription_detail.jsp?view=detail&wparam=stemlocation&entity=dba_tabledescription&where=where_tablename>
inputs/TEAM/*/map.csv: 1ha Plot X Coordinate, 1ha Plot Y Coordinate: Remapped to organismX_m/organismY_m using units from <https://projects.nceas.ucsb.edu/nceas/projects/bien/repository/raw/inputs/TEAM/_src/TEAM-DataPackage-20120920191251_3859/Vegetation+-+Trees+&+Lianas/Vegetation-Tree-and-Liana-Metadata-1.5.pdf>
inputs/SALVIAS/plotObservations/map.csv, inputs/SALVIAS-CSV/Organism/map.csv: x_position, y_position: Remapped to organismX_m/organismY_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
inputs/Madidi/Organism/map.csv: Subplot X, Subplot Y: Remapped to organismX_m/organismY_m, assuming units based on the size of values relative to the plot area, which has units of ha
inputs/CTFS/StemObservation/map.csv: x, y: Remapped to organismX_m/organismY_m, assuming units based on the size of values relative to plot area, which has units of ha
mappings/VegCore-VegBIEN.csv: Mapped organismX_m, organismY_m
mappings/VegCore.csv: Added organismX_m, organismY_m
sql_io.py: put_table(): full_in_table: Create it using new sql.copy_table() instead of sql.run_query_into()
sql.py: Added copy_table()
sql.mk_select() calls: Removed no longer needed order_by=None when limit=0
sql.py: mk_select(): Set order_by to None if limit == 0
inputs/.TNRS/schema.sql: Documented that accepted names must be processed before any names that resolve to them, because the entry for the accepted name contains all the ranks parsed out, while the entry for a name that resolves to it contains just some ranks plus the taxonomic name. Column-based import will do this automatically when the total # of rows is <= the partition_size (because _taxonconcept_set_matched_concept_id()'s accepted taxonconcept is created after the main taxonconcept), but TNRS has more rows than this, so sorting is needed to ensure that all the accepted names are processed in the first partitions.
sql.py: table_order_by(): Cache the order_by in table.order_by and propagate it when a LIKE table is created
sql_gen.py: Table: Added order_by attr to cache the results of table_order_by()
sql.select() calls: Removed order_by=None everywhere that a stable row order is required (i.e. consistent between selects, or consistent across table transformations). This causes several tests to return different inserted row counts, because the input table is now being accessed in pkey order instead of in table order. This fixes a bug where tables with more than ~100 rows would return different results for repeated calls of the same non-ordered select.
sql.py: mk_select(): Use table_order_by() instead of table_pkey_col() to determine what column(s) to order by if order_by is set to order_by_pkey
sql.py: Added table_pkey_index(), index_order_by(), table_cluster_on(), table_order_by()
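A sketch of how these helpers might fit together (hypothetical signatures; the actual fallback logic may differ):

    def table_order_by(db, table):
        '''Sketch: prefer the CLUSTER index's order, else the pkey index's.'''
        if table.order_by is None:  # cached per the sql_gen.Table entry above
            index = table_cluster_on(db, table) or table_pkey_index(db, table)
            if index is not None: table.order_by = index_order_by(index)
        return table.order_by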
sql.py: Added index_exprs() and use it in index_cols()
README.TXT: Data import: On local machine: Added `make inputs/.TNRS/cleanup`, which is necessary because the PostgreSQL collation may differ between vegbiendev's DB and yours