inputs/analytical_db/: removed import-related files (Source/, etc.), since this is actually just a folder used to store make_analytical_db.log.sql, so that it will be synced along with the other logs
inputs/analytical_db/: added _no_import to prevent this from incorrectly being included in the source table
inputs/input.Makefile: $(_svnFilesGlob): also svn-add _no_import in the top-level datasrc dir. (this requires using add!, because the presence of a _no_import file there will normally turn off adding by svnFilesGlob.)
Added an output CSV file option to geoscrub.sh.
Added notes on running biengeo scripts to README.
Added biengeo script options for data directories.
Added GADM and geonames.org data dir options to update_validation_data.sh scripts. Added geoscrub input data dir option to geoscrub.sh scripts.
Added update options to biengeo update_validation_data.sh
Added options to update only GADM data, only Geonames.org data, or neither. In every case, the geonames-to-gadm scripts are always run.
Added cmd-line options to biengeo bash scripts.
All biengeo bash scripts now accept command line options to specify psql user, host, and database values. These options are the same as those defined by the psql command. If an invalid option is given to a script, a usage message is printed...
Fix biengeo script password prompt for postgres user.
Changed the DB_HOST variables in the biengeo bash scripts to a DB_HOST_OPT variable that is blank by default. Updated all psql calls that used "-h $DB_HOST" to use just $DB_HOST_OPT instead. This means that to specify a different db host, the DB_HOST_OPT...
Fixed TRUNCATE statement in truncate.geonames.sql.
Fixed the biengeo truncate.geonames.sql script to include, in one TRUNCATE statement, all tables that have foreign-key references to the geonames and country tables.
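For illustration, a minimal sketch of the single-statement form (the table names here are assumptions, not the actual truncate.geonames.sql contents):

    -- listing the referenced tables together with every table that has a
    -- foreign key to them lets one TRUNCATE succeed without CASCADE
    TRUNCATE geonames, country, geonames_alternate_names;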
Added more approx. runtimes to biengeo README.
Renamed biengeo install scripts to setup scripts.
It seems to make more sense to call these setup scripts, since they are only setting up the database and tables, and not actually installing any files anywhere on the OS.
planning/timeline/timeline.2013.xls: updated for progress
planning/timeline/timeline.2013.xls: datasource validations: CVS: left-join it: moved under "fix issues and critical feature requests" instead of "prepare 1st-round extracts" because the left-joining is actually part of getting it in the same format as VegBank
inputs/CTFS/StemObservation/test.xml.ref: updated inserted row count
planning/timeline/timeline.2013.xls: datasource validations: rescheduled CVS before other datasources, as decided in the conference call
schemas/Makefile: $(confirmRmPublicSchema0): use "any ... schema" instead of "the ... schema" because the schema in question may not exist
planning/timeline/timeline.2013.xls: datasource validations: rescheduled tasks for new order
planning/timeline/timeline.2013.xls: datasource validations: reordered to put plots before specimens, as requested by Brad (wiki.vegpath.org/2013-10-25_conference_call#validation-order)
planning/timeline/timeline.2013.xls: hid previous weeks
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
fix: planning/timeline/timeline.2013.xls: datasource validations: prepare 2nd-round extracts: VegBank: corrected check mark week, based on date of extract
planning/timeline/timeline.2013.xls: datasource validations: added "prepare 3rd-round extracts" subtask, which currently applies to VegBank. updated for progress.
planning/timeline/timeline.2013.xls: "datasource validations (spot-checking)": renamed to just "datasource validations" because that's what we've been calling it
planning/timeline/timeline.2013.xls: datasource validations: CVS: added "VegBank-related changes" subtask
planning/timeline/timeline.2013.xls: updated for progress and revised schedule
bugfix: inputs/VegBank/import_order.txt: updated name of ^taxon_observation.**.sample table
fix: inputs/VegBank/^taxon_observation.**.sample/create.sql: moved continent before country
inputs/VegBank/^taxon_observation.**.sample/create.sql: added missing columns that were recently mapped to VegBIEN (identifiedBy)
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the join_words() of identifiedBy_first, etc.)
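Roughly, the postprocess.sql step amounts to something like the following (join_words() is the repo's own helper; the _middle/_last column names are assumptions for illustration):

    -- concatenate the verbatim name parts, skipping NULLs, into identifiedBy
    UPDATE taxonobservation_
    SET "identifiedBy" = join_words(identifiedby_first, identifiedby_middle, identifiedby_last);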
fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed derived columns that are not part of the validation
fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed internal ID columns that are not part of the validation
schemas/vegbien.sql: analytical_plot: removed derived columns that should not be validated by data providers
schemas/vegbien.sql: analytical_specimen: synced to analytical_stem
schemas/vegbien.sql: analytical_plot: documented that this contains all of the analytical_stem columns, minus specimenHolderInstitutions, collection, accessionNumber, occurrenceID
schemas/vegbien.sql: analytical_plot: synced to analytical_stem
schemas/vegbien.sql: analytical_stem_view: added individualCount
schemas/vegbien.sql: plot.**, analytical_stem_view: added slopeAspect, slopeGradient
schemas/VegCore/ERD/VegCore.ERD.mwb: traceable.id_by_source: support multiple ids_by_source per traceable, because the same entity may be present in multiple datasources (e.g. if one got data from the other), and we would like to remove that duplicate
inputs/VegBank/taxonobservation_/create.sql: also join party_id to get the identifiedBy (not mapped yet). note that the inserted row count changes, because taxonobservation_ does not yet have a pkey to do a stable ordering with.
bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
inputs/VegBank/vegbank.~.clean_up.sql: taxoninterpretation.party_id: don't rename to taxoninterpretation_party_id, so that this can be used directly in taxonobservation_/create.sql with a USING join
inputs/VegBank/taxonobservation_/create.sql: join taxonobservation to taxoninterpretation (as in CVS) instead of vice versa, since taxonobservation is the primary, operative table. having VegBank and CVS do things the same way helps ensure that fixes in one can transfer easily to the other.
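In sketch form (the join columns are assumptions, not the actual create.sql), the query now reads:

    -- taxonobservation is the primary table; taxoninterpretation is joined to
    -- it, and the un-renamed party_id makes the USING join to party possible
    SELECT *
    FROM taxonobservation
    LEFT JOIN taxoninterpretation USING (taxonobservation_id)
    LEFT JOIN party USING (party_id); -- supplies identifiedBy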
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced with taxon_observation.**
(for r11396) fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
/README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
/README.TXT: added instructions for connecting to vegbiendev
mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
bugfix: mappings/VegCore-VegBIEN.csv: main taxondetermination: use [!isoriginal=true] instead of [!isoriginal] so that adding a manual isoriginal field does not prevent this selector from matching
inputs/VegBank/taxonobservation_/map.csv: originalinterpretation, currentinterpretation: removed table name prefix so these would automap
mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
planning/timeline/timeline.2013.xls: geoscrubbing automated pipeline: split into subtasks "build pipeline", "test pipeline", and "integrate pipeline into import process"
planning/timeline/timeline.2013.xls: geoscrubbing re-run: moved recent checkmarks to "geoscrubbing automated pipeline" since the work on these actually relates to automating the geoscrubbing, not the one-time reload (which was already completed)
planning/timeline/timeline.2013.xls: geoscrubbing: made "geoscrubbing re-run" a subtask of the main geoscrubbing task, instead of geoscrubbing re-run being the supertask. updated for Paul's progress.
schemas/vegbien.sql: taxondetermination_set_iscurrent(): include new iscurrent__verbatim, so that taxondeterminations the datasource marks as current are always considered first. this currently applies to VegBank and CVS.
schemas/vegbien.sql: taxondetermination.isoriginal: made it nullable like iscurrent__verbatim, because this is populated from the datasource. taxondetermination_set_iscurrent() now supports isoriginal=NULL, so this is not a problem.
schemas/vegbien.sql: taxondetermination.is_datasource_current: renamed to iscurrent__verbatim and made it nullable, so that this can be used to store the verbatim iscurrent status
schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed setting of is_datasource_current (which is now the same as iscurrent), so that this can be used to store the verbatim iscurrent status
schemas/vegbien.sql: taxondetermination_set_iscurrent(): isoriginal: make sure it is always either true or false, so that if the NOT NULL constraint on this is ever removed you don't end up with the incorrect sort order false, true, NULL (it should be false=NULL, true)
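A hypothetical sketch of the ordering this implies (the sort directions are assumptions, not the actual function body):

    -- determinations the datasource marks as current are considered first;
    -- COALESCE makes a NULL isoriginal collate with false: (false=NULL), true
    SELECT taxondetermination_id
    FROM taxondetermination
    ORDER BY iscurrent__verbatim DESC NULLS LAST,
             COALESCE(isoriginal, false);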
schemas/vegbien.sql: use plain taxondetermination.iscurrent instead of is_datasource_current since these are now the same
schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current: set to the same value as iscurrent, since these now have the same formula
schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed no longer used accepted, matched determinationtypes (for these determinations, left-join to TNRS.ScrubbedTaxon)
Updated biengeo README with new script workflow.
Split geovalidate.sh into install and update scripts.
Split geovalidate.sh into install.sh and update_gadm_data.sh scripts. The install.sh script creates the database and uses the install sql scripts to create all required tables. The update_gadm_data.sh script downloads the GADM data and creates the...
Refactored geonames.sh to update_geonames_data.sh
Renamed geonames.sh to update_geonames_data.sh and moved many of the SQL statements from the bash script into supporting update and truncate sql scripts. These sql and update_geonames_data.sh scripts now assume all required...
Split up geonames-to-gadm.sql into 3 scripts.
Each script only operates on one table within a transaction. These scripts now assume the tables have already been created (by install scripts added in a previous commit), and each starts out by truncating the table it will update with new data.
Added geoscrub.sh script.
This script runs the load-geoscrub-input.sh, geonames.sql, and geovalidate.sql scripts in order to load and scrub vegbien input data. Updated README to explain the new script. Minor updates to load-geoscrub-input.sh.
inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for: documented that most of the 13139 removed plots are from duplicates (where we have direct data). this leaves only 560 of SALVIAS's original 13699 plots.
inputs/SALVIAS/projects/postprocess.sql: remove example data
inputs/SALVIAS/projects/postprocess.sql: remove private data that should not be publicly visible (this was probably already removed by the plotMetadata.AccessCode filter in salvias_plots.~.clean_up.sql)
inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for (Madidi, VegBank)
bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters__from_fuzzing: need to convert km to m in the fuzzing radii. updated derived cols runtimes.
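A hypothetical sketch of the fix (fuzzing_radius_km is an invented column name for illustration):

    -- the fuzzing radii are stored in km, so scale by 1000 for the
    -- coordinateUncertaintyInMeters__from_fuzzing derived column
    UPDATE plot_
    SET "coordinateUncertaintyInMeters__from_fuzzing" = fuzzing_radius_km * 1000;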
inputs/VegBank/plot_/postprocess.sql: remove duplicated CVS plots (2323 of 7079 CVS plots are removed by this)
added exports/2013-7-10.Naia.range_limiting_factors.csv.run
bugfix: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: do not override the table to analytical_stem, because the extract-specific view should be used instead. this was actually benign, because extract.run export_() always sets $table to the extract-specific view.
schemas/vegbien.sql: added 2013-7-10.Naia.range_limiting_factors
schemas/vegbien.sql: sync_analytical_stem_to_view(): row_num: renamed to taxon_occurrence__pkey because previous taxon determinations have been removed, so each row is in fact a taxon_occurrence (~= VegCore.vegpath.org?ERD.taxon_occurrence)
fix: schemas/vegbien.sql: analytical_stem_view: don't ORDER BY datasource, because this requires a slow full-table sort after the hash joins. (when selecting a subset of analytical_stem_view, nested loops are used automatically without needing an ORDER BY to force this.) to get the datasource-sorted order (plus a sort-order guarantee), you can still add a manual `ORDER BY datasource`, which will use a fast index scan on one of the datasource indexes.
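For example (hedged; assumes one of the datasource indexes on the underlying analytical_stem table):

    -- an explicit sort restores the datasource order via a fast index scan,
    -- instead of a slow full-table sort after the hash joins
    SELECT * FROM analytical_stem ORDER BY datasource;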
schemas/vegbien.sql: analytical_stem: added row_num, which can serve as the taxon_observation ID (DwC occurrenceID)
Updated load-geoscrub script with configurable db.
load-geoscrub-input.sh now uses a variable with the db name defined at the top of the script. Updated the default db host to 'localhost' for this script.
schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
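A hedged sketch only, since the real index name and full column list are elided above:

    -- eventDate, not dateCollected, now identifies the locationevent
    CREATE INDEX analytical_stem_locationid_eventdate
        ON analytical_stem ("locationID", "eventDate");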
schemas/vegbien.sql: analytical_stem_view: use plot.** to obtain plot-related fields, so that the same code does not need to be maintained in both analytical_stem_view and plot.**
schemas/vegbien.sql: analytical_stem_view: moved specimen-specific fields to occurrence section
schemas/vegbien.sql: analytical_stem_view, plot.**: added separate location__cultivated__bien
schemas/vegbien.sql: added separate eventDate, in addition to dateCollected
fix: schemas/vegbien.sql: dateCollected: use aggregateoccurrence.collectiondate before locationevent.obsstartdate rather than after, because this is more accurate. it was previously the other way around to allow dateCollected to be the pkey for the row's locationevent (for plots data).
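In sketch form (the join path between the two tables is assumed for illustration):

    -- more accurate: prefer the aggregateoccurrence collection date, falling
    -- back to the locationevent start date
    SELECT COALESCE(ao.collectiondate, le.obsstartdate) AS "dateCollected"
    FROM aggregateoccurrence ao
    LEFT JOIN locationevent le USING (locationevent_id);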
schemas/vegbien.sql: analytical_stem_view, plot.**: locationevent__pkey: moved to right before the locationevent-related fields
schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
schemas/vegbien.sql: plot.**: added locationevent__pkey so that this view can be joined to other VegBIEN tables, which require the internal pkey
derived/biengeo/README.txt: geoscrub new data: geovalidate.sql: added runtime from Paul