inputs/.TNRS/schema.sql: schema comment: added steps to determine what changes need to be made on vegbiendev
inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtimes (~same)
inputs/.TNRS/schema.sql: tnrs: moved instructions to apply schema changes on vegbiendev to the TNRS schema, because this applies to all elements in the TNRS schema, not just the tnrs table
inputs/.TNRS/schema.sql: score_ok(): don't make it STRICT because this prevents it from being inlined
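(context: PostgreSQL can inline a simple SQL-language function into the calling query, but declaring it STRICT generally prevents that inlining, so the NULL handling that STRICT would provide has to live in the body instead. a minimal sketch, with a placeholder body and threshold rather than the actual score_ok() definition:)

```sql
-- sketch only: the real score_ok() body and threshold may differ
CREATE OR REPLACE FUNCTION score_ok(score double precision)
  RETURNS boolean
  LANGUAGE sql
  IMMUTABLE -- but *not* STRICT, so the planner can inline calls to it
AS $$
SELECT $1 >= 0.8 -- a NULL score simply yields NULL, no STRICT needed
$$;
```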
inputs/.TNRS/schema.sql: tnrs: removed no longer used tnrs_score_ok index. use tnrs__valid_match instead.
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: documented that this excludes homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is not meaningful because it is ambiguous)
bugfix: inputs/.TNRS/schema.sql: ValidMatchedTaxon: exclude inter-kingdom homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is not meaningful because it is ambiguous). this uses taxon_scrub__is_valid_match instead of score_ok() to determine whether the result should be included.
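(a hedged sketch of the view-level change; the SELECT list and the MatchedTaxon source are assumptions:)

```sql
-- assumed shape: keep only unambiguous matches, using the flag column
-- rather than the old score_ok() test
CREATE OR REPLACE VIEW "ValidMatchedTaxon" AS
SELECT *
FROM "MatchedTaxon"
WHERE "taxon_scrub__is_valid_match";
```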
inputs/.TNRS/schema.sql: ValidMatchedTaxon: synced to MatchedTaxon
inputs/.TNRS/schema.sql: MatchedTaxon: added is_valid_match
inputs/.TNRS/schema.sql: tnrs: added tnrs__valid_match index to facilitate joining to only valid matches
inputs/.TNRS/schema.sql: tnrs: added is_valid_match derived column, to make it easier to select from only those TNRS results that can safely be used as a scrubbed name
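(the likely shape of the two entries above, sketched; the indexed key column is an assumption:)

```sql
-- derived column, populated by tnrs_populate_fields()
ALTER TABLE tnrs ADD COLUMN is_valid_match boolean;

-- partial index, replacing the removed tnrs_score_ok index:
-- joins that keep only valid matches can use this directly
CREATE INDEX tnrs__valid_match ON tnrs ("Name_submitted")
  WHERE is_valid_match;
```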
lib/sh/util.sh: already_exists_msg(): added instructions on how to force-remake when the file already exists (prepend `rm=1` to the command)
inputs/VegBank/^taxon_observation.**.sample/test.xml.ref: updated inserted row count, now that CVS plots have been removed
bugfix: lib/runscripts/view.run: don't do anything in load_data(), to avoid trying to remake header.csv before the view is created. (for views, this instead happens in postprocess().)
lib/runscripts/table.run: reordered functions in the order they are called by import()
bugfix: inputs/VegBank/: need to remove inter-datasource duplicates from plot instead of from the left-joined plot_ table, because the fkeys needed to do the cascading deletes all point to the plot table. this requires doing the column renaming and postprocessing on plot before it is left-joined.
inputs/VegBank/plot_/create.sql: updated runtime (5 s) for previous bugfix
exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime and rowcount (~ the same)
bugfix: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: coordinateUncertaintyInMeters filter: assume true for rows with no coordinateUncertaintyInMeters
schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: filter by coordinateUncertaintyInMeters <= 10 km
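(the fixed filter presumably has this shape; the table/column naming follows analytical_stem and is an assumption:)

```sql
SELECT *
FROM analytical_stem
WHERE "coordinateUncertaintyInMeters" <= 10000 -- 10 km
   OR "coordinateUncertaintyInMeters" IS NULL; -- no value given: assume it passes
```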
planning/timeline/timeline.2013.xls: updated for progress
inputs/.geoscrub/geoscrub_output/run: load_data(): updated runtime (4 min)
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: invoking derived/biengeo/geoscrub.sh: need to split the input file into separate dir and filename parts, because $DATAFILE is actually just the filename, not the entire path, and will otherwise get prepended with the default value of $DATADIR
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: also run geoscrub.sh. added export_() target to run just the export of the result table separately.
derived/biengeo/load-geoscrub-input.sh: allow the caller to override $DATAFILE in the environment, to use a file named other than "geoscrub-corpus.csv"
/run: use new exports/geoscrub_input.csv.run
added exports/geoscrub_input.csv.run
bugfix: lib/sh/make.sh: $remake: need to explicitly propagate this to invoked commands if it was set from $rm
derived/biengeo/load-geoscrub-input.sh: updated $DATA_URL for new input filename
/run geoscrub_input/make(): include a header on the CSV file, so that the column names don't risk getting separated from the data (and to shorten the CSV filename, which previously had to contain the column names instead). this requires changing the geoscrubbing scripts to accept a CSV header.
exports/2013-7-10.Naia.range_limiting_factors.csv.run: added rowcount (40 million of 80 million observations, filtered w/ cultivated, geovalid, and various fields NOT NULL)
exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime
schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: don't sort the results by occurrence_id, because this is not a meaningful ordering and prevents incremental output from the query
schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: also filter out rows without species
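(sketch of the resulting query shape: with no trailing ORDER BY, PostgreSQL can stream rows to the client as they are produced, rather than materializing and sorting the whole result first. the species column name is an assumption:)

```sql
SELECT *
FROM analytical_stem
WHERE "speciesBinomial" IS NOT NULL; -- also exclude rows without species
-- note: no ORDER BY occurrence_id, so output can be incremental
```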
exports/2013-7-10.Naia.range_limiting_factors.csv.run: export_(): documented runtime (10 min)
lib/sh/db.sh: mk_select(): usage: documented that this also takes a $limit/$n param
lib/sh/db.sh: limit(): also support using $n as the limit param, since this var name is used by other parts of the import process
added backups/vegbien.r11549.backup.md5
lib/sh/db.sh: limit(): usage: documented that this also needs a $limit param
backups/TNRS.backup.md5: updated
lib/runscripts/extract.run: added export_sample()
/README.TXT: Full database import: after import: record the import times in inputs/import.stats.xls: documented that this should be run on the local machine, because it needs the Mac filename ordering
inputs/import.stats.xls: updated import times
/README.TXT: Full database import: after import: removed step to install analytical_stem on nimoy because the import mechanism is not set up to do this (we don't generate CSV exports of the full analytical_stem table because they take up a lot of space and are not currently used for anything)
/README.TXT: Full database import: after import: In PostgreSQL: added step to check that analytical_stem contains the expected # of rows
/README.TXT: Full database import: after import: In PostgreSQL: added specific instructions for determining which/how many datasources are expected to be included in the provider_count and source tables
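(hedged examples of the kind of checks meant here; the expected values would come from the previous import's stats and the datasource list:)

```sql
-- rowcount should be near the previous import's figure
SELECT count(*) FROM analytical_stem;

-- one entry per datasource that was actually imported
-- (datasources marked _no_import should be absent)
SELECT count(*) FROM source;
SELECT count(*) FROM provider_count;
```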
added inputs/analytical_db/_archive/
inputs/analytical_db/: removed import-related files (Source/, etc.), since this is actually just a folder used to store make_analytical_db.log.sql, so that it will be synced along with the other logs
inputs/analytical_db/: added _no_import to prevent this from incorrectly being included in the source table
inputs/input.Makefile: $(_svnFilesGlob): also svn-add _no_import in the top-level datasrc dir. (this requires using add!, because the presence of a _no_import file there will normally turn off adding by svnFilesGlob.)
Added an output CSV file option to geoscrub.sh.
Added notes on running biengeo scripts to README.
Added biengeo script options for data directories.
Added GADM and geonames.org data dir options to update_validation_data.sh scripts. Added geoscrub input data dir option to geoscrub.sh scripts.
Added update options to the biengeo update_validation_data.sh script.
Added options to update only GADM data, only Geonames.org data, or neither. In every case, the geonames-to-gadm scripts are always run.
Added cmd-line options to biengeo bash scripts.
All biengeo bash scripts now accept command line options to specify psql user, host, and database values. These options are the same as those defined by the psql command. If an invalid option is given to a script, a usage message is printed...
Fix biengeo script password prompt for postgres user.
Changed the DB_HOST variables in the biengeo bash scripts to a DB_HOST_OPT variable that is blank by default. Updated all psql calls that used "-h $DB_HOST" to use just $DB_HOST_OPT instead. This means that to specify a different db host, the DB_HOST_OPT...
Fixed TRUNCATE statement in truncate.geonames.sql.
Fixed the biengeo truncate.geonames.sql script to include, in one TRUNCATE statement, all tables that have foreign-key references to the geonames and country tables.
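(context: PostgreSQL refuses to truncate a table that other tables reference by foreign key unless every referencing table appears in the same statement, or CASCADE is used. a sketch, with the referencing table name as a placeholder:)

```sql
-- every table with an fkey to geonames or country must be listed too;
-- "referencing_table" is a placeholder for those tables
TRUNCATE geonames, country, referencing_table;
-- (equivalently: TRUNCATE geonames, country CASCADE;)
```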
Added more approx. runtimes to biengeo README.
Renamed biengeo install scripts to setup scripts.
It seems to make more sense to call these setup scripts, since they are only setting up the database and tables, and not actually installing any files anywhere on the OS.
planning/timeline/timeline.2013.xls: datasource validations: CVS: left-join it: moved under "fix issues and critical feature requests" instead of "prepare 1st-round extracts" because the left-joining is actually part of getting it in the same format as VegBank
inputs/CTFS/StemObservation/test.xml.ref: updated inserted row count
planning/timeline/timeline.2013.xls: datasource validations: rescheduled CVS before other datasources, as decided in the conference call
schemas/Makefile: $(confirmRmPublicSchema0): use "any ... schema" instead of "the ... schema" because the schema in question may not exist
planning/timeline/timeline.2013.xls: datasource validations: rescheduled tasks for new order
planning/timeline/timeline.2013.xls: datasource validations: reordered to put plots before specimens, as requested by Brad (wiki.vegpath.org/2013-10-25_conference_call#validation-order)
planning/timeline/timeline.2013.xls: hid previous weeks
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
fix: planning/timeline/timeline.2013.xls: datasource validations: prepare 2nd-round extracts: VegBank: corrected check mark week, based on date of extract
planning/timeline/timeline.2013.xls: datasource validations: added "prepare 3rd-round extracts" subtask, which currently applies to VegBank. updated for progress.
planning/timeline/timeline.2013.xls: "datasource validations (spot-checking)": renamed to just "datasource validations" because that's what we've been calling it
planning/timeline/timeline.2013.xls: datasource validations: CVS: added "VegBank-related changes" subtask
planning/timeline/timeline.2013.xls: updated for progress and revised schedule
bugfix: inputs/VegBank/import_order.txt: updated name of ^taxon_observation.**.sample table
fix: inputs/VegBank/^taxon_observation.**.sample/create.sql: moved continent before country
inputs/VegBank/^taxon_observation.**.sample/create.sql: added missing columns that were recently mapped to VegBIEN (identifiedBy)
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the join_words() of identifiedBy_first, etc.)
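(sketch of the mapping; join_words() is presumably the repo's space-separated, NULL-skipping concatenation helper, and every column after identifiedBy_first below is a placeholder for the list elided above:)

```sql
-- placeholder column names after identifiedBy_first
UPDATE taxonobservation_
SET "identifiedBy" = join_words("identifiedBy_first", "identifiedBy_middle", "identifiedBy_last");
```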
fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed derived columns that are not part of the validation
fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed internal ID columns that are not part of the validation
schemas/vegbien.sql: analytical_plot: removed derived columns that should not be validated by data providers
schemas/vegbien.sql: analytical_specimen: synced to analytical_stem
schemas/vegbien.sql: analytical_plot: documented that this contains all of the analytical_stem columns, minus specimenHolderInstitutions, collection, accessionNumber, occurrenceID
schemas/vegbien.sql: analytical_plot: synced to analytical_stem
schemas/vegbien.sql: analytical_stem_view: added individualCount
schemas/vegbien.sql: plot.**, analytical_stem_view: added slopeAspect, slopeGradient
schemas/VegCore/ERD/VegCore.ERD.mwb: traceable.id_by_source: support multiple ids_by_source per traceable, because the same entity may be present in multiple datasources (e.g. if one got data from the other), and we would like to remove that duplicate
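(one relational reading of this ERD change, as DDL; all names and types here are assumptions about the VegCore model, not its actual definitions:)

```sql
-- id_by_source becomes a child table: one traceable, many (source, id) pairs
CREATE TABLE id_by_source (
  traceable_id bigint NOT NULL REFERENCES traceable (traceable_id),
  source_id    bigint NOT NULL REFERENCES source (source_id),
  id_in_source text   NOT NULL,
  PRIMARY KEY (source_id, id_in_source) -- each source names an entity once
);
```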
inputs/VegBank/taxonobservation_/create.sql: also join party_id to get the identifiedBy (not mapped yet). note that the inserted row count changes, because taxonobservation_ does not yet have a pkey to do a stable ordering with.
bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
inputs/VegBank/vegbank.~.clean_up.sql: taxoninterpretation.party_id: don't rename to taxoninterpretation_party_id, so that this can be used directly in taxonobservation_/create.sql with a USING join
inputs/VegBank/taxonobservation_/create.sql: join taxonobservation to taxoninterpretation (as in CVS) instead of vice versa, since taxonobservation is the primary, operative table. having VegBank and CVS do things the same way helps ensure that fixes in one can transfer easily to the other.
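(the presumed join shape after this change; VegBank key names are assumptions based on the entries above:)

```sql
-- taxonobservation is the primary table; party supplies identifiedBy
-- via taxoninterpretation's unrenamed party_id
SELECT *
FROM taxonobservation
LEFT JOIN taxoninterpretation USING (taxonobservation_id)
LEFT JOIN party USING (party_id);
```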
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced with taxon_observation.**
(for r11396) fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error