lib/PostgreSQL-MySQL.csv: Filter out constraint triggers in addition to regular triggers
inputs/Madidi/Organism/map.csv: Total height: Remapped to height_m, assuming units based on the range and precision of values
inputs/VegBank/stemcount/map.csv: stemheight: Remapped to height_m using units from <http://vegbank.org/vegbank/views/dba_tabledescription_detail.jsp?view=detail&wparam=stemcount&entity=dba_tabledescription&where=where_tablename>
inputs/SALVIAS/plotObservations/map.csv, inputs/SALVIAS-CSV/Organism/map.csv: height_m, stem_height_m: Remapped to height_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
mappings/VegCore-VegBIEN.csv: Mapped height_m
mappings/VegCore.csv: Added height_m
mappings/VegCore.csv, VegCore-VegBIEN.csv: Removed the no-longer-used, unit-ambiguous organismX and organismY. Use organismX_m, organismY_m instead.
inputs/VegBank/stemlocation/map.csv: stemxposition, stemyposition: Remapped to organismX_m/organismY_m using units from <http://vegbank.org/vegbank/views/dba_tabledescription_detail.jsp?view=detail&wparam=stemlocation&entity=dba_tabledescription&where=where_tablename>
inputs/TEAM/*/map.csv: 1ha Plot X Coordinate, 1ha Plot Y Coordinate: Remapped to organismX_m/organismY_m using units from <https://projects.nceas.ucsb.edu/nceas/projects/bien/repository/raw/inputs/TEAM/_src/TEAM-DataPackage-20120920191251_3859/Vegetation+-+Trees+&+Lianas/Vegetation-Tree-and-Liana-Metadata-1.5.pdf>
inputs/SALVIAS/plotObservations/map.csv, inputs/SALVIAS-CSV/Organism/map.csv: x_position, y_position: Remapped to organismX_m/organismY_m using units from <http://salvias.net/Documents/salvias_data_dictionary.html#Plot+data>
inputs/Madidi/Organism/map.csv: Subplot X, Subplot Y: Remapped to organismX_m/organismY_m, assuming units based on the size of values relative to the plot area, which has units of ha
inputs/CTFS/StemObservation/map.csv: x, y: Remapped to organismX_m/organismY_m, assuming units based on the size of values relative to plot area, which has units of ha
mappings/VegCore-VegBIEN.csv: Mapped organismX_m, organismY_m
mappings/VegCore.csv: Added organismX_m, organismY_m
sql_io.py: put_table(): full_in_table: Create it using new sql.copy_table() instead of sql.run_query_into()
sql.py: Added copy_table()
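A minimal sketch of what a copy_table() helper might look like (the actual sql.py implementation is not shown in this log; a psycopg2-style connection is assumed):

    import psycopg2  # assumed driver; the real sql.py has its own DB wrapper

    def copy_table(conn, src, dst):
        '''Materialize an exact copy of src as dst, instead of running an
        arbitrary query through run_query_into().'''
        with conn.cursor() as cur:
            # identifier quoting omitted for brevity; real code must escape names
            cur.execute('CREATE TABLE %s AS SELECT * FROM %s' % (dst, src))
        conn.commit()
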
sql.mk_select() calls: Removed the no-longer-needed order_by=None when limit=0
sql.py: mk_select(): Set order_by to None if limit == 0
inputs/.TNRS/schema.sql: Documented that accepted names must be processed before any names that resolve to them, because the entry for an accepted name contains all of the ranks parsed out, while the entry for a name that resolves to it contains only some ranks plus the full taxonomic name. Column-based import does this automatically when the total # of rows is <= the partition_size (because _taxonconcept_set_matched_concept_id()'s accepted taxonconcept is created after the main taxonconcept), but TNRS has more rows than this, so sorting is needed to ensure that all the accepted names are processed in the first partitions.
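A sketch of the required sort, assuming an in-memory list of TNRS result rows keyed by illustrative Name_submitted/Accepted_name fields (not necessarily the actual column names):

    def partitions(rows, partition_size):
        # an accepted name resolves to itself, so sorting self-matches first
        # guarantees all accepted names land in the first partitions
        rows = sorted(rows,
            key=lambda r: r['Name_submitted'] != r['Accepted_name'])
        for i in range(0, len(rows), partition_size):
            yield rows[i:i + partition_size]
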
sql.py: table_order_by(): Cache the order_by in table.order_by and propagate it when a LIKE table is created
sql_gen.py: Table: Added order_by attr to cache the results of table_order_by()
sql.select() calls: Removed order_by=None everywhere that a stable row order is required (i.e. consistent between selects, or consistent across table transformations). This causes several tests to return different inserted row counts, because the input table is now accessed in pkey order instead of in table order. It also fixes a bug where tables with more than ~100 rows would return different results on repeated runs of the same non-ordered select.
sql.py: mk_select(): Use table_order_by() instead of table_pkey_col() to determine what column(s) to order by if order_by is set to order_by_pkey
sql.py: Added table_pkey_index(), index_order_by(), table_cluster_on(), table_order_by()
sql.py: Added index_exprs() and use it in index_cols()
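A simplified sketch of the caching pattern in the entries above (this Table class and the pkey lookup are stand-ins for the real sql_gen.py/sql.py code):

    class Table(object):
        def __init__(self, name, pkey_cols=()):
            self.name = name
            self.pkey_cols = tuple(pkey_cols)
            self.order_by = None  # caches table_order_by()'s result

    def table_order_by(table):
        # the real code derives this from the pkey index (table_pkey_index())
        # or the CLUSTER index (table_cluster_on()), then caches it
        if table.order_by is None:
            table.order_by = table.pkey_cols
        return table.order_by

    def copy_like(table, new_name):
        like = Table(new_name, table.pkey_cols)
        like.order_by = table.order_by  # propagate the cache to the LIKE table
        return like
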
README.TXT: Data import: On local machine: Added `make inputs/.TNRS/cleanup`, which is necessary because the PostgreSQL collation may differ between vegbiendev's DB and yours
schemas/vegbien.sql: taxonconcept: taxonconcept_update_ancestors(): Use matched_concept_id's ancestors instead if available. (Recursively applied, this will use the ancestors of the accepted concept.) This facilitates finding all children of and matches to an accepted concept, which will all have an entry for that concept in taxonconcept_ancestor. Note that the concept's own parents will not be indexed in taxonconcept_ancestor, because only accepted ancestors are now stored in taxonconcept_ancestor. Documented that taxonconcept_ancestor now stores the accepted ancestors of a taxonconcept.
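A simplified sketch of the lookup described above, written as trigger-body SQL (the actual recursive trigger is in schemas/vegbien.sql; the taxonconcept_ancestor column names are assumptions):

    update_ancestors_sql = '''
    -- prefer the matched concept's (recursively, the accepted concept's)
    -- ancestors, so children of and matches to an accepted concept all get
    -- taxonconcept_ancestor entries for it
    INSERT INTO taxonconcept_ancestor (taxonconcept_id, ancestor_id)
    SELECT new.taxonconcept_id, ancestor_id
    FROM taxonconcept_ancestor
    WHERE taxonconcept_id = coalesce(new.matched_concept_id, new.parent_id)
    '''
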
schemas/vegbien.sql: taxonconcept: taxonconcept_2_propagate_accepted_concept_id(): Also update accepted_concept_id on concepts that resolve to this concept, which may have been created before this concept was marked as accepted if concepts are not imported in dependency order (accepted concepts first). Added index on matched_concept_id to speed up finding concepts that resolve to this concept.
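A simplified sketch of the trigger body and the new index (not the actual schemas/vegbien.sql definitions):

    propagate_sql = '''
    CREATE FUNCTION taxonconcept_2_propagate_accepted_concept_id()
    RETURNS trigger AS $$
    BEGIN
        -- concepts that resolve to this concept may predate its being marked
        -- accepted, so push the accepted_concept_id down to them as well
        UPDATE taxonconcept SET accepted_concept_id = new.accepted_concept_id
        WHERE matched_concept_id = new.taxonconcept_id;
        RETURN new;
    END;
    $$ LANGUAGE plpgsql;

    -- speeds up finding the concepts that resolve to a given concept
    CREATE INDEX taxonconcept_matched_concept_id
        ON taxonconcept (matched_concept_id);
    '''
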
sql.py: mk_select(): order_by is order_by_pkey: Only order by the table's actual pkey, if it has one, rather than using the first column if it doesn't
inputs/.TNRS/tnrs/test.xml.ref: Updated inserted row count
db_xml.py: partition_size: Increased to 1,000,000 (>= NCBI.higher_taxa's size) so NCBI.higher_taxa can be imported completely in one partition. This is necessary because NCBI's taxonconcepts are not in dependency order (parents first), so a later partition cannot rely on the parents of its taxonconcepts having already been imported. Instead, all taxonconcepts must be imported at once, and then the parents of all taxonconcepts must be set in a separate step.
mappings/VegCore-VegBIEN.csv: taxonconcept.parent_id when an explicit parent is provided: Set taxonconcept.parent_id using the new _taxonconcept_set_parent_id() after creating the child taxonconcept, so that parent_id points to the already-inserted parent taxonconcept instead of creating a new, empty parent taxonconcept. This creates a two-step import: first the taxonconcepts are imported, and then the parent_ids are matched up. This is necessary for column-based import because the parent taxonconcepts are imported, with only their sourceaccessioncode, in a separate iteration from the child taxonconcepts, and that iteration must run after the child iteration in order to match up against fully populated taxonconcepts. Row-based import, on the other hand, does not require _taxonconcept_set_parent_id(), but does require the taxonconcepts to be provided in dependency order (parents first), which is unfortunately not the case for NCBI.
schemas/vegbien.sql: *_update_ancestors(): Telling immediate children to update their ancestors lists: Exclude self to avoid infinite recursion
schemas/vegbien.sql: Added _taxonconcept_set_parent_id()
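A hypothetical sketch of the function's shape (the real definition is in schemas/vegbien.sql); it runs after both concepts exist, so parent_id can point at the already-inserted parent:

    set_parent_sql = '''
    CREATE FUNCTION _taxonconcept_set_parent_id(integer, integer)
    RETURNS void AS $$
        -- $1 = the child taxonconcept's pkey (passed in by db_xml.put()),
        -- $2 = the parent taxonconcept's pkey
        UPDATE taxonconcept SET parent_id = $2 WHERE taxonconcept_id = $1
    $$ LANGUAGE sql;
    '''
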
schemas/vegbien.sql: Renamed _set_matched_taxonconcept() to _taxonconcept_set_matched_concept_id() so that the function name is prefixed with the table it applies to
db_xml.py: put(): Treat a child node that is a function (name starts with _) as a child with an fkey to the parent, rather than as a field in the table. Such a function accepts the table's pkey as one of its arguments.
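A simplified sketch of the dispatch (db.call_function()/db.set_field() are illustrative stand-ins for the real put() machinery):

    def put_child(db, table, parent_pkey, name, value):
        if name.startswith('_'):  # function node, e.g. _taxonconcept_set_parent_id
            # call it with the parent row's pkey as one of its arguments,
            # rather than storing the value as a field of the parent table
            db.call_function(name, parent_pkey, value)
        else:
            db.set_field(table, parent_pkey, name, value)  # ordinary field
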
sql_gen.py: map_expr(): Don't replace an unquoted name when it is followed by ",", as it would be in an into-table name for a function with multiple arguments (e.g. family in "_join_words(1=Field family, 2=Field name)")
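A sketch combining this guard with the name-inside-name fix noted further down (under "sql_gen.py: map_expr(): Fixed bug where names were being replaced when they were inside another name"); the regex details are illustrative, not the actual sql_gen.py code:

    import re

    def map_expr(expr, old, new):
        # don't match inside a longer name (the lookbehind and the [\w.]
        # lookahead), and don't match when followed by ",", as in the
        # into-table name "_join_words(1=Field family, 2=Field name)"
        pattern = r'(?<![\w.])' + re.escape(old) + r'(?![\w.]|\s*,)'
        return re.sub(pattern, new, expr)
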
schemas/vegbien.sql: locationevent: Moved obsstartdate, obsenddate to top of table so they would be visible in the ERD
sql_io.py: put_table(): ensure_cond(): track_data_error(): Concatenate the constraint's columns together with "," rather than adding a separate entry for each column, because the constraint applies to all the columns together rather than to each column separately
sql_io.py: put_table(): Renamed ignore_cond() to ensure_cond() for clarity
import_all: Also import the NCBI tree of life, before the TNRS names
mappings/VegCore-VegBIEN.csv: Also map acceptedFamily to the corresponding NCBI family
lib/PostgreSQL-MySQL.csv: custom types: Also exclude time. Reordered excluded (built-in) types by name.
inputs/import.stats.xls: Updated import times
schemas/vegbien.sql: Changed `timestamp with time zone` fields to `date`, because no time information is stored in these fields and it's confusing to have an arbitrary timezone (the server's) and an arbitrary time (midnight) attached to input data that is only precise to the nearest day
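Illustrative DDL for the change, using the locationevent date columns mentioned elsewhere in this log (whether these exact columns were converted is an assumption):

    alter_sql = '''
    ALTER TABLE locationevent
        ALTER COLUMN obsstartdate TYPE date,
        ALTER COLUMN obsenddate TYPE date
    '''
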
sql_gen.py: null_sentinels: Added entry for date
lib/PostgreSQL-MySQL.csv: custom types: Also exclude date, datetime
README.TXT: Documentation: To import and scrub just the test taxonomic names: Run `make backups/TNRS.backup/restore` in the background because it takes a while
mappings/VegCore.csv: Re-sourced TaxonomicRankEnum fields to the official TCS schema rather than the TCS version in VegX
schemas/vegbien.sql: taxonrank: Updated source to the TCS schema (rather than VegBank) for the new, expanded list. Note that although the list itself was compiled from the TCS version in VegX, the official TCS download does not differ from the VegX TCS in the TaxonomicRankEnum fields (the xs: namespace has just been replaced with xsd: by VegX).
schemas/vegbien.sql: analytical_db_view: taxonconcept: Join again on the accepted_concept_id in order to use the accepted taxonconcept rather than the verbatim taxonconcept from the datasource
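A simplified sketch of the self-join (not the actual view definition):

    accepted_join_sql = '''
    SELECT accepted.*  -- expose the accepted concept's fields in the view
    FROM taxonconcept datasource_concept
    JOIN taxonconcept accepted
        ON accepted.taxonconcept_id = datasource_concept.accepted_concept_id
    '''
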
schemas/: svn:ignore log files
Added inputs/.NCBI/. This uses many of the new schema and mappings features, such as taxonconcept.sourceaccessioncode and parentTaxonID
mappings/VegCore-VegBIEN.csv: identifyingtaxonomicname: Don't create it if the taxonconcept has an explicit parent, because the taxonName (which is generally only a component of the full taxonomic name, e.g. the specificEpithet) is not globally unique. Datasources whose name components at or below the family level can't simply be concatenated currently cannot receive an identifyingtaxonomicname for input to TNRS.
mappings/VegCore-VegBIEN.csv: taxonName->identifyingtaxonomicname: Don't include the rank with the taxonName, because TNRS only allows a rank in the taxonomic name when it's infraspecific (otherwise it returns no match, or an invalid one, because it treats the rank as an invalid term or as a stray name component)
mappings/VegCore-VegBIEN.csv: Mapped taxonName to the TNRS input taxonconcept's identifyingtaxonomicname
mappings/VegCore-VegBIEN.csv: Only forward taxonRank to the parent taxonconcept (which stores the infraspecific taxonconcept when the infraspecificEpithet is provided) if there is no explicit parent provided via parentTaxonID/etc.
mappings/VegCore-VegBIEN.csv: Mapped parentScientificNameID, parentTaxonConceptID, parentTaxonID
mappings/VegCore.csv: Added parentScientificNameID, parentTaxonConceptID, parentTaxonID
input.Makefile: $(inDatasrc): Also include the vegbien_dest $schemas in the search_path, so that the datasource's SQL scripts (create.sql, etc.) can use VegBIEN functions and types
lib/common.Makefile: Added $(comma)
inputs/test_taxonomic_names/_scrub/public.sql: Regenerated with schema changes
input.Makefile: Maps building: %/.map.csv.last_cleanup: Fixed bug where $(coreMap) needed to be included as a prerequisite: even though it is not used directly in this target's recipe, it is used by targets invoked via recursive make after the main recipe runs. In general, whenever a target forwards commands to a recursive make target, it also needs to forward that recursive target's prerequisites by including them in its own prerequisites list.
mappings/VegCore-VegBIEN.csv: Mapped taxonConceptID, taxonID, scientificNameID to taxonconcept.sourceaccessioncode. Note that taxonconcept stores all of these taxonomic entities, using creator_id+creationdate, taxonname+rank+parent_id, and identifyingtaxonomicname, respectively.
mappings/VegCore-VegBIEN.csv: Mapped taxonName
mappings/VegCore.csv: Added taxonName
schemas/vegbien.ERD.mwb: Fixed lines
schemas/vegbien.sql: Copied the functions in the functions schema that are also used by the public schema into the public schema, so that reinstalling the functions schema does not cascade-delete anything that depends on one of its functions. Currently, this just affects analytical_db_view, which uses _fraction_to_percent().
schemas/vegbien.sql: taxonconcept: Added taxonconcept_2_propagate_accepted_concept_id() trigger to auto-populate the accepted_concept_id
schemas/vegbien.sql: taxonconcept.sourceaccessioncode: Added descriptive comment
schemas/vegbien.sql: taxonconcept.accepted_concept_id: Added descriptive comment
Regenerated vegbien.ERD exports
schemas/vegbien.sql: taxonconcept: Added sourceaccessioncode, and allow it to scope the taxonconcept when provided
schemas/vegbien.sql: taxonconcept: Renamed canon_concept_id to matched_concept_id, because this is actually the closest-match taxonconcept in the match hierarchy (datasource concept -> parsed concept -> matched concept -> accepted concept) rather than the accepted synonym, which goes in accepted_concept_id
schemas/vegbien.sql: taxonconcept: Added accepted_concept_id
schemas/vegbien.sql: taxonconcept.canon_concept_id: comment: Changed "accepted synonym" to "closest match", since canon_concept_id is actually a hierarchy from datasource concept -> parsed concept -> matched concept -> accepted concept
schemas/vegbien.sql: taxonconcept: Added an order # to trigger names so they run in a defined order (PostgreSQL fires same-event triggers in alphabetical order by name)
README.TXT: Use new revision # in log filenames to get all the logs for an import. Changed <datetime> to <version> because the rotated public schema now also includes the svn revision.
lib/common.Makefile: $(version): Include both the svn revision when make was started and the svn revision when the command is actually run (when these differ), in case svn was updated between the time an import was started and the time a particular table started being imported. Because tables within a datasource are imported sequentially, an update could have happened before the last table started importing.
Makefile: Moved setting of $(root) before include of lib/common.Makefile because it's used by lib/common.Makefile
Factored OS section out from Makefile, input.Makefile into lib/common.Makefile
Makefile, input.Makefile: Use new $(version), which unlike $(date) also includes the svn revision, to version log files, etc. This way, the working copy can be put back to the way it was at the time of a given import (excluding changes to nonversioned files). This also makes it easier to get all the log files for a particular import when different tables' imports started at different times.
Makefile: Added $(root) for use with $(rootRevision)
lib/common.Makefile: Added $(version), to replace $(date) for versioning log files, etc., and helper function $(rootRevision)
lib/common.Makefile: Added $(revision)
input.Makefile: Removed no longer used $(SED)
lib/common.Makefile: Added $(sed)
Factored $(date) out from Makefile, input.Makefile into lib/common.Makefile
sql_io.py: put_table(): DuplicateKeyException: Fixed bug where indexes with conditions needed to have the input rows filtered by the condition, to prevent trying to retrieve an existing/inserted row using a join on the index columns when the index in fact does not apply. This fixes a bug in the import of taxonconcept where the taxonconcept_0_unique_identifying_name unique index has a condition which was not satisfied for input rows with no identifyingtaxonomicname, causing any input row with NULL in this column to match all taxonconcepts with a NULL identifyingtaxonomicname. This uses ignore_cond()'s new support for constraints that did not fail at least once.
sql_io.py: put_table(): ignore_cond(): Added support for constraints that did not fail at least once, and therefore should not be required to simplify to a non-false value. As part of this, only track the failed constraint in the errors table if it actually failed at least once based on the deleted row count or the `failed` param.
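A sketch of the fix in the two entries above (helper and attribute names are illustrative): before joining input rows against a unique index's columns, filter them by the index's condition so rows the partial index does not cover cannot spuriously match:

    def filter_by_index_cond(db, in_table, index):
        if index.cond is None: return  # not a partial index; nothing to filter
        # e.g. taxonconcept_0_unique_identifying_name's condition is only
        # satisfied when identifyingtaxonomicname is non-NULL, so NULL input
        # rows must not be joined against it
        db.run_query('DELETE FROM %s WHERE (%s) IS NOT TRUE'
            % (in_table, index.cond))
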
sql_gen.py: map_expr(): Fixed bug where names were being replaced when they were inside another name. This occurred with combined names created by sql_io.into_table_name().
sql.py: ConstraintException: message: Wrap condition in strings.as_tt()
sql.py: run_query(): DuplicateKeyException: Also retrieve the index's condition using new index_cond()
sql.py: Added index_cond()
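A hypothetical sketch of index_cond() (the real sql.py version is not shown; a psycopg2-style cursor is assumed):

    def index_cond(cur, index_name):
        # a partial index's predicate lives in pg_index.indpred; pg_get_expr()
        # deparses it, and returns NULL for a non-partial index
        cur.execute('''\
    SELECT pg_get_expr(indpred, indrelid) FROM pg_index
    WHERE indexrelid = %s::regclass''', (index_name,))
        return cur.fetchone()[0]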