bugfix: inputs/VegBank/stemlocation_/map.csv: put columns in table order, which is needed by new-style import
inputs/VegBank/stemlocation_/: translated one-to-many mappings to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
bugfix: inputs/VegBank/taxonobservation_/map.csv: put columns in table order, which is needed by new-style import
bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters: need to use GREATEST instead of _alt() to handle cases where the coordinate uncertainty is greater than the fuzzing uncertainty, in which case you wouldn't want to just use the smaller fuzzing uncertainty
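	a minimal sketch of the GREATEST fix (fuzzing_uncertainty_m is a hypothetical column name, and _alt() is presumed here to simply take its first non-NULL argument):

	    -- first-non-NULL semantics would let the fuzzing radius replace a larger
	    -- provided uncertainty; GREATEST ignores NULLs and keeps the larger radius
	    UPDATE plot_
	    SET "coordinateUncertaintyInMeters" =
	        GREATEST("coordinateUncertaintyInMeters", fuzzing_uncertainty_m);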
inputs/VegBank/plot_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
inputs/VegBank/plot_/postprocess.sql: map_*() derived cols: updated runtime
inputs/VegBank/plot_/: translated single-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
inputs/VegBank/stemcount_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
inputs/VegBank/stemlocation_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
inputs/VegBank/taxonobservation_/postprocess.sql: scientificName: recorded runtime (15 s)
inputs/VegBank/taxonobservation_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
inputs/FIA/occurrence_all/postprocess.sql: use much simpler LEFT JOINs instead of nested RIGHT JOINs, which required many parentheses to force the joins to happen in the right order. note that the columns are now provided in reverse rather than forward path order, but this is still much clearer than the nested mess of RIGHT JOINs. this approach can also be used to simplify VegBank's joins.
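	a sketch of the two styles, using made-up table and key names rather than the actual FIA tables:

	    -- nested RIGHT JOINs: parentheses are needed to force the join order
	    SELECT *
	    FROM plot p
	    RIGHT JOIN (subplot s RIGHT JOIN tree t ON s.id = t.subplot_id)
	        ON p.id = s.plot_id;

	    -- equivalent LEFT JOINs: read linearly, starting from the innermost table,
	    -- so the tables appear in reverse path order (tree -> subplot -> plot)
	    SELECT *
	    FROM tree t
	    LEFT JOIN subplot s ON s.id = t.subplot_id
	    LEFT JOIN plot    p ON p.id = s.plot_id;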
bugfix: lib/runscripts/view.run: remake_VegBIEN_mappings(): also need to remake header.csv, not just map.csv (which is sufficient for tables), because view columns may change when the view is regenerated
schemas/VegCore/VegCore.ERD.mwb: specimen: changed definition to "something collected from a plant" rather than just "a physical part of a plant", to support using this table for identifying pictures and descriptions of a plant (as DwC does)
schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
schemas/VegCore/VegCore.ERD.mwb: reobservable_presence: allow it to be vouchered by any reobservable element (including a tagged individual), not just a specimen
schemas/VegCore/VegCore.ERD.mwb: specimen.defining_data: clarified that the observations in this field are actually a subset of individual_observation.traits (specifically, the subset that can be used to make a taxonomic redetermination). information in this field should therefore always also be stored in individual_observation.traits.
schemas/VegCore/VegCore.ERD.mwb: specimen: added specimen_unique_in_individual_observation unique constraint, analogous to specimen_unique_in_individual
schemas/VegCore/VegCore.ERD.mwb: specimen: added defining_data, which, for a digital-only specimen, stores the information that comprises the specimen. note that a taxon_presence without a physical voucher can still qualify as reobservable if a detailed description of it is provided in this field, on which taxonomic redeterminations can be made. for datasources like VegBank, which incorrectly allow multiple taxon_determinations for any type of taxon_observation, their taxonomic redeterminations would actually be considered invalid if made on a purely taxon_presence observation (i.e. just a taxon name) without a detailed description that could be used to make a redetermination. this is different from the scrubbing of a taxon name, which relates a taxon name to another taxon name, rather than a taxon_observation to a completely different taxon name.
bugfix: lib/sh/util.sh: set_fds(): don't add surrounding quotes to empty redirect dest
bugfix: lib/sh/util.sh: set_fds(): need to check if redirect is empty before escaping it with `printf %q`, which may add surrounding quotes to an empty string
planning/timeline/timeline.2013.xls: attribution and conditions of use: documented that Brad/Brian/Bob should work on this, as decided in the conference call (wiki.vegpath.org/2013-09-12_conference_call#data-provider-metadata)
planning/timeline/timeline.2013.xls: reformatted to fit all rows and all per-week columns on one page
planning/timeline/timeline.2013.xls: streamline process of mapping and adding a new datasource: added subtask to create interactive scripts for each import step
planning/timeline/timeline.2013.xls: improve and complete data provider metadata: moved to end because this can also be added manually to the source table, and does not have to be in place before running column-based import
planning/timeline/timeline.2013.xls: flatten the datasources to a common schema: added subtask to left-join unvalidated datasources since they need the flattening in order to validate them properly
planning/timeline/timeline.2013.xls: rebalanced dots
planning/timeline/timeline.2013.xls: moved items marked "later" to a separate section at the bottom
planning/timeline/timeline.2013.xls: moved revisions to schema under datasource validations because schema changes are largely driven by problems uncovered during validation
planning/timeline/timeline.2013.xls: split tasks into weeks
planning/timeline/timeline.2013.xls: updated for progress
planning/timeline/timeline.2013.xls: split months into (currently identical) weeks
planning/timeline/timeline.2013.xls: added "During month of" label above the months
planning/timeline/timeline.2013.xls: switched to portrait mode to better fit the new format, which hides columns for past months
planning/timeline/timeline.2013.xls: hid crossed-out rows to show just the remaining tasks
planning/timeline/timeline.2013.xls: crossed out "avoid DB restructuring when ingesting a new datasource", because FIA (which is flattened before import) does properly support optional subplots and diamond linking of subplots to parent plot events, which were necessary to ingest an arbitrary flattened plots datasource
planning/timeline/timeline.2013.xls: crossed out fully-completed tasks. rebalanced dots.
planning/timeline/timeline.2013.xls: moved "switching to new-style import" to the top of "streamline process of mapping and adding a new datasource" because this puts all the datasource adding steps (except filling in the mappings) into one rerunnable script
planning/timeline/timeline.2013.xls: hid columns for past months so that the current and future months are right next to each task
planning/timeline/timeline.2013.xls: moved "streamline process of mapping and adding a new datasource" before "documentation testing" because this will assist the documentation tester in running the import process
planning/timeline/timeline.2013.xls: moved "geoscrubbing re-run" under "add any missing columns" because this is needed to fully populate the geoscrubbing columns
planning/timeline/timeline.2013.xls: added "documentation testing" and "usability testing" priority tasks (wiki.vegpath.org/Priority_tasks). lowercased tasks for consistency with the wiki and to avoid needing to sentence-case new subtasks.
planning/timeline/timeline.2013.xls: moved Flatten the datasources to a common schema under Datasource validations because the query left-joining the tables is needed for validation, and it is much easier to validate datasources when there is only one input table to validate
added derived/biengeo/Geovalidation_and_geoscrubbing_update.presentation.url
added BIEN2/traits_observation_counts.xls
/README.TXT: Single datasource import: removed rescrub step because this is not needed by the current TNRS process
web/links/index.htm: updated to match Firefox bookmarks. MySQL: added steps to add a user if you are not root but have sudo access.
BIEN2/country_species/: svn:ignore the .tsv exports
BIEN2/country_species/run: documented runtime (1 min)
added BIEN2/country_species/run, which exports each BIEN2 country's species list
bugfix: lib/sh/util.sh: set_fds(): need to escape redirect destinations which are files, because they may contain special shell characters
lib/sh/util.sh: added rm_prefix()
lib/sh/db.sh: mysql_cmd(): added caller usage with connection/login opts
lib/sh/db.sh: mysql(), mysql_export(): usage: added database=...
planning/timeline/timeline.2013.xls: Data provider validations: renamed to Datasource validations to clarify that this is a validation of the datasources, but not necessarily by the data providers
/README.TXT: Full database import: added "Running individual steps separately" label for the section that is not part of the main import, but is useful if the import is aborted partway through
/README.TXT: moved Single datasource import and Datasource setup to the top since these are the most important howtos
bugfix: schemas/Makefile: enclose schema names in "" so that they won't be lowercased
bugfix: schemas/Makefile, lib/common.Makefile: enclose schema names in "" so that they won't be lowercased
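	the reason the quoting is needed (a generic psql illustration, not the actual Makefile rule):

	    -- PostgreSQL folds unquoted identifiers to lowercase:
	    CREATE SCHEMA VegBank;    -- creates a schema named "vegbank"
	    CREATE SCHEMA "VegBank";  -- double quotes preserve the mixed-case name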
/run: geoscrub_input/make(): updated runtime (20 s)
planning/timeline/timeline.2013.xls: Data provider validations (spot-checking): moved ahead of Individual datasource refresh as decided in conference call
schemas/vegbien.sql: analytical_plot: added aggregateOrganismObservationID from analytical_stem
planning/timeline/timeline.2013.xls: Data provider validations: added subtask for Aggregated validations (counts)
inputs/import.stats.xls: analytical DB: updated rowcount
inputs/import.stats.xls: updated import times
inputs/input.Makefile: reimport: don't remove the existing import first, because it will instead be removed by the publish step. this ensures there is always one complete copy of the datasource in the DB.
added backups/vegbien.r10848.backup.md5
backups/TNRS.backup.md5: updated
bugfix: bin/import_all: use reimport_scrub instead of import_scrub so that the temp suffix of the datasource name is removed
inputs/input.Makefile: reimport: use import_publish instead of import so that the reimport replaces the previous import
inputs/input.Makefile: added import_publish, which removes the temp suffix when the import is done
bugfix: bin/after_import: run backups/fix_perms right after the backup files are created to make them private
bugfix: backups/fix_perms: just make the backups themselves private, since the other files are in svn, and their permissions should match their accessibility through Redmine
inputs/*/*/test.xml.ref: updated source.shortname for the new datasource name, which now starts out with a .new suffix
bugfix: bin/make_analytical_db: `/run export_`: don't take input from the terminal, because this causes rm to prompt the user (from a background task) about overwriting the previous export
/README.TXT: Full database import: Publish the new import: added runtime (1 min)
inputs/input.Makefile: $(map2db): import to datasrc.new instead of plain datasrc, so that the current import of the datasrc is not overwritten
inputs/input.Makefile: added publish (`make inputs/src/publish`)
bugfix: schemas/vegbien.sql: source: removed testing row that had gotten in during `make schemas/remake`
inputs/input.Makefile: added %/publish (`make inputs/src/src.version/publish`)
bugfix: schemas/vegbien.sql: datasource_publish(): need to remove the current live datasource instead of the datasource to publish. note that datasource_rename() does not currently generate an error if the specified datasource doesn't exist.
bugfix: schemas/vegbien.sql: datasource_publish(): run it in a nested transaction so that there is always one published copy of the datasource. (note that a nested transaction is not automatically created for each function, http://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures?In_PG_your_procedures_aren%27t_separate_transactions#answer-6283201 .)
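	a rough sketch of the pattern only (the signature, helper calls, and body below are assumptions, not the actual schemas/vegbien.sql code): a PL/pgSQL BEGIN/EXCEPTION block opens a subtransaction, so removing the live copy and renaming the new copy succeed or fail together:

	    CREATE FUNCTION datasource_publish(new_name text) RETURNS void AS $$
	    BEGIN
	        BEGIN  -- BEGIN/EXCEPTION creates a subtransaction
	            -- remove the current live datasource (datasource_rm is a hypothetical helper)
	            PERFORM datasource_rm(rm_version_suffix(new_name));
	            -- then rename the new import to the live name
	            PERFORM datasource_rename(new_name, rm_version_suffix(new_name));
	        EXCEPTION WHEN OTHERS THEN
	            RAISE;  -- the subtransaction is rolled back, so the live copy survives
	        END;
	    END;
	    $$ LANGUAGE plpgsql;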
schemas/vegbien.sql: added datasource_publish()
schemas/vegbien.sql: added datasource_rename()
schemas/vegbien.sql: added rm_version_suffix()
bin/map: allow user to override the source env var, which is used as the source.shortname value in the DB
exports/: svn:ignore *.zip
inputs/WIN/Specimen/unmapped_terms.csv: updated
/README.TXT: Full database import: time to wait for the import to finish: updated to time in inputs/import.stats.xls
bugfix: bin/import_all: `rm inputs/.TNRS/tnrs/tnrs.make.lock`: need to use `"rm"` instead of `rm` so that we don't use any rm alias the user might have in their shell (import_all is run in the calling shell so that the jobs are owned by the calling shell)
bugfix: mappings/VegCore-VegBIEN.csv: don't map datasetURL to source.url for taxa-only data (this mapping should only occur for Source tables)
bin/import_all: added step to remove any leftover TNRS lockfile (previously done manually)
bugfix: lib/sql_io.py: put_table(): Getting output table pkeys of existing/inserted rows: need to include the index cond in the join condition here, too (using var join_custom_cond), so that an index scan can be used instead of a much slower full-table sort
bugfix: schemas/vegbien.sql: locationevent: locationevent_unique_within_location unique index: added COALESCE expression around location_id since it is nullable; the left and right sides of the join must use the exact same expression for the planner to choose an index scan
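	the general pattern, with simplified (assumed) columns; the real index in schemas/vegbien.sql has a different column list:

	    -- wrap the nullable column so NULLs compare as equal and the expression is indexable
	    CREATE UNIQUE INDEX locationevent_unique_within_location
	        ON locationevent (datasource_id, COALESCE(location_id, 0), obsstartdate);

	    -- the join must repeat the exact indexed expression on both sides
	    -- for the planner to match the expression index and use an index scan:
	    SELECT e.locationevent_id
	    FROM   staging s
	    JOIN   locationevent e
	        ON e.datasource_id = s.datasource_id
	       AND COALESCE(e.location_id, 0) = COALESCE(s.location_id, 0)
	       AND e.obsstartdate = s.obsstartdate;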
bugfix: lib/sql_io.py: put_table(): DuplicateKeyException: need to include any index cond in the join condition, so that an index scan can be used instead of a much slower full-table sort (otherwise the query planner will not know that it can restrict results to rows satisfying the index cond)
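	the same principle applies when the index is a partial (conditional) index, sketched below with made-up table/column names rather than the actual put_table() query:

	    CREATE INDEX out_table_pkeys ON out_table (col1, col2) WHERE not_deleted;

	    -- without "AND o.not_deleted" in the ON clause the partial index is unusable
	    -- and the planner falls back to a full-table sort/merge or hash join:
	    SELECT o.pkey
	    FROM   in_table i
	    JOIN   out_table o
	        ON o.col1 = i.col1 AND o.col2 = i.col2 AND o.not_deleted;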