planning/timeline/timeline.2013.xls: updated for progress
planning/timeline/timeline.2013.xls: hid previous weeks
planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
fix: planning/timeline/timeline.2013.xls: datasource validations: prepare 2nd-round extracts: VegBank: corrected check mark week, based on date of extract
planning/timeline/timeline.2013.xls: datasource validations: added "prepare 3rd-round extracts" subtask, which currently applies to VegBank. updated for progress.
planning/timeline/timeline.2013.xls: "datasource validations (spot-checking)": renamed to just "datasource validations" because that's what we've been calling it
planning/timeline/timeline.2013.xls: datasource validations: CVS: added "VegBank-related changes" subtask
planning/timeline/timeline.2013.xls: updated for progress and revised schedule
bugfix: inputs/VegBank/import_order.txt: updated name of ^taxon_observation.**.sample table
fix: inputs/VegBank/^taxon_observation.**.sample/create.sql: moved continent before country
inputs/VegBank/^taxon_observation.**.sample/create.sql: added missing columns that were recently mapped to VegBIEN (identifiedBy)
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the join_words() of identifiedBy_first, etc.)
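    (illustrative sketch of that mapping, assuming join_words() concatenates its
    non-NULL arguments with spaces; the _middle/_last columns are hypothetical
    stand-ins for the "etc.":
        ALTER TABLE taxonobservation_ ADD COLUMN "identifiedBy" text;
        UPDATE taxonobservation_
        SET "identifiedBy" = join_words("identifiedBy_first",
            "identifiedBy_middle", "identifiedBy_last");
    )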
fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed derived columns that are not part of the validation
fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed internal ID columns that are not part of the validation
schemas/vegbien.sql: analytical_plot: removed derived columns that should not be validated by data providers
schemas/vegbien.sql: analytical_specimen: synced to analytical_stem
schemas/vegbien.sql: analytical_plot: documented that this contains all of the analytical_stem columns, minus specimenHolderInstitutions, collection, accessionNumber, occurrenceID
schemas/vegbien.sql: analytical_plot: synced to analytical_stem
schemas/vegbien.sql: analytical_stem_view: added individualCount
schemas/vegbien.sql: plot.**, analytical_stem_view: added slopeAspect, slopeGradient
schemas/VegCore/ERD/VegCore.ERD.mwb: traceable.id_by_source: support multiple ids_by_source per traceable, because the same entity may be present in multiple datasources (e.g. if one got data from the other), and we would like to remove that duplicate
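    (hedged sketch of the revised shape, with all names assumed from the
    description; the actual definitions live in the .mwb ERD:
        CREATE TABLE id_by_source (
            id_by_source_id  serial  PRIMARY KEY,
            traceable_id     integer NOT NULL REFERENCES traceable (traceable_id),
            source_id        integer NOT NULL REFERENCES source (source_id),
            id_within_source text    NOT NULL,
            UNIQUE (source_id, id_within_source) -- a given source ID resolves to one traceable
        );
        -- several rows may now share a traceable_id: one per datasource that
        -- contains the same entity, which is what lets the duplicate be removed
    )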
inputs/VegBank/taxonobservation_/create.sql: also join party (on party_id) to get the identifiedBy (not mapped yet). note that the inserted row count changes, because taxonobservation_ does not yet have a pkey to do a stable ordering with.
bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
inputs/VegBank/vegbank.~.clean_up.sql: taxoninterpretation.party_id: don't rename to taxoninterpretation_party_id, so that this can be used directly in taxonobservation_/create.sql with a USING join
inputs/VegBank/taxonobservation_/create.sql: join taxonobservation to taxoninterpretation (as in CVS) instead of vice versa, since taxonobservation is the primary, operative table. having VegBank and CVS do things the same way helps ensure that fixes in one can transfer easily to the other.
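    (the join shape described in the last few entries, sketched with assumed
    VegBank column names; the real query is in taxonobservation_/create.sql:
        SELECT *
        FROM taxonobservation
        LEFT JOIN taxoninterpretation USING (taxonobservation_id)
        LEFT JOIN party USING (party_id); -- party supplies identifiedBy
    )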
inputs/VegBank/^taxon_observation.**.sample/create.sql: synced with taxon_observation.**
(for r11396) fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
/README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
/README.TXT: added instructions for connecting to vegbiendev
mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
bugfix: mappings/VegCore-VegBIEN.csv: main taxondetermination: use [!isoriginal=true] instead of [!isoriginal] so that adding a manual isoriginal field does not prevent this selector from matching
inputs/VegBank/taxonobservation_/map.csv: originalinterpretation, currentinterpretation: removed table name prefix so these would automap
mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
planning/timeline/timeline.2013.xls: geoscrubbing automated pipeline: split into subtasks "build pipeline", "test pipeline", and "integrate pipeline into import process"
planning/timeline/timeline.2013.xls: geoscrubbing re-run: moved recent checkmarks to "geoscrubbing automated pipeline" since the work on these actually relates to automating the geoscrubbing, not the one-time reload (which was already completed)
planning/timeline/timeline.2013.xls: geoscrubbing: made "geoscrubbing re-run" a subtask of the main geoscrubbing task, instead of geoscrubbing re-run being the supertask. updated for Paul's progress.
schemas/vegbien.sql: taxondetermination_set_iscurrent(): include new iscurrent__verbatim, so that taxondeterminations the datasource marks as current are always considered first. this currently applies to VegBank and CVS.
schemas/vegbien.sql: taxondetermination.isoriginal: made it nullable like iscurrent__verbatim, because this is populated from the datasource. taxondetermination_set_iscurrent() now supports isoriginal=NULL, so this is not a problem.
schemas/vegbien.sql: taxondetermination.is_datasource_current: renamed to iscurrent__verbatim and made it nullable, so that this can be used to store the verbatim iscurrent status
schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed setting of is_datasource_current (which is now the same as iscurrent), so that this can be used to store the verbatim iscurrent status
schemas/vegbien.sql: taxondetermination_set_iscurrent(): isoriginal: make sure it is always either true or false, so that if the NOT NULL constraint on this is ever removed you don't end up with the incorrect sort order false, true, NULL (it should be false=NULL, true)
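    (minimal sketch of the ordering concern, assuming the function ranks
    determinations with an ORDER BY of roughly this shape; priorities
    simplified, the real ranking is in the function body:
        SELECT taxondetermination_id
        FROM taxondetermination
        ORDER BY iscurrent__verbatim DESC NULLS LAST, -- datasource-marked current considered first
            COALESCE(isoriginal, false); -- pins NULL with false: false=NULL, true
    )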
schemas/vegbien.sql: use plain taxondetermination.iscurrent instead of is_datasource_current since these are now the same
schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current: set to the same value as iscurrent, since these now have the same formula
schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed the no-longer-used accepted and matched determinationtypes (to obtain these determinations, instead left-join to TNRS.ScrubbedTaxon)
Updated biengeo README with new script workflow.
Split geovalidate.sh into install and update scripts.
Split geovalidate.sh into install.sh and update_gadm_data.sh scripts. The install.sh script creates the database and uses the install SQL scripts to create all required tables. The update_gadm_data.sh script downloads the GADM data and creates the...
Refactored geonames.sh into update_geonames_data.sh
Renamed geonames.sh to update_geonames_data.sh and moved many of the SQL statements from the bash script into supporting update and truncate SQL scripts. These SQL and update_geonames_data.sh scripts now assume all required...
Split up geonames-to-gadm.sql into 3 scripts.
Each script only operates on one table within a transaction. These scripts now assume the tables have already been created (by install scripts added in a previous commit), and each starts out by truncating the table it will update with new data.
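    (the per-script pattern this describes, as a hypothetical sketch; table and
    column names invented for illustration:
        BEGIN;
        TRUNCATE geonames_gadm;   -- table assumed to exist, created by the install scripts
        INSERT INTO geonames_gadm -- repopulate with new data
        SELECT g.geonameid, a.gadm_id
        FROM geonames g
        JOIN gadm a ON a.name = g.name;
        COMMIT;
    )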
Added geoscrub.sh script.
This script runs the load-geoscrub-input.sh, geonames.sql, and geovalidate.sql scripts in order to load and scrub vegbien input data. Updated README to explain the new script. Minor updates to load-geoscrub-input.sh.
inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for: documented that most of the 13139 removed plots are from duplicates (where we have direct data). this leaves only 560 of SALVIAS's original 13699 plots.
inputs/SALVIAS/projects/postprocess.sql: remove example data
inputs/SALVIAS/projects/postprocess.sql: remove private data that should not be publicly visible (this was probably already removed by the plotMetadata.AccessCode filter in salvias_plots.~.clean_up.sql)
inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for (Madidi, VegBank)
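    (these removals share a common shape; hypothetical sketch only, the real
    predicates and column names are in postprocess.sql:
        -- drop plots from institutions we have direct data for
        DELETE FROM projects
        WHERE project_label IN ('Madidi', 'VegBank');
    )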
bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters__from_fuzzing: need to convert km to m in the fuzzing radii. updated derived cols runtimes.
inputs/VegBank/plot_/postprocess.sql: remove duplicated CVS plots (2323 of 7079 CVS plots are removed by this)
added exports/2013-7-10.Naia.range_limiting_factors.csv.run
bugfix: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: do not override the table to analytical_stem, because the extract-specific view should be used instead. this was actually benign, because extract.run export_() always sets $table to the extract-specific view.
schemas/vegbien.sql: added 2013-7-10.Naia.range_limiting_factors
schemas/vegbien.sql: sync_analytical_stem_to_view(): row_num: renamed to taxon_occurrence__pkey because previous taxon determinations have been removed, so each row is in fact a taxon_occurrence (~= VegCore.vegpath.org?ERD.taxon_occurrence)
fix: schemas/vegbien.sql: analytical_stem_view: don't ORDER BY datasource, because this requires a slow full-table sort after the hash joins. (when selecting a subset of analytical_stem_view, nested loops are used automatically without needing an ORDER BY to force this.) to get the datasource-sorted order (plus a sort-order guarantee), you can still add a manual `ORDER BY datasource`, which will use a fast index scan on one of the datasource indexes.
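    (e.g., per the note above, the sorted order is still available on demand:
        SELECT * FROM analytical_stem_view ORDER BY datasource; -- fast index scan
    )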
schemas/vegbien.sql: analytical_stem: added row_num, which can serve as the taxon_observation ID (DwC occurrenceID)
Updated load-geoscrub script with configurable db.
load-geoscrub-input.sh now uses a variable with the db name defined at the top of the script. Updated the default db host to 'localhost' for this script.
schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
schemas/vegbien.sql: analytical_stem_view: use plot.** to obtain plot-related fields, so that the same code does not need to be maintained in both analytical_stem_view and plot.**
schemas/vegbien.sql: analytical_stem_view: moved specimen-specific fields to occurrence section
schemas/vegbien.sql: analytical_stem_view, plot.**: added separate location__cultivated__bien
schemas/vegbien.sql: added separate eventDate, in addition to dateCollected
fix: schemas/vegbien.sql: dateCollected: use aggregateoccurrence.collectiondate before locationevent.obsstartdate rather than after, because this is more accurate. it was previously the other way around to allow dateCollected to be the pkey for the row's locationevent (for plots data).
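    (sketch of the new precedence; the column formula is assumed to be
    COALESCE-based:
        COALESCE(aggregateoccurrence.collectiondate, -- more accurate, so tried first
            locationevent.obsstartdate) AS "dateCollected"
    )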
schemas/vegbien.sql: analytical_stem_view, plot.**: locationevent__pkey: moved to right before the locationevent-related fields
schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
schemas/vegbien.sql: plot.**: added locationevent__pkey so that this view can be joined to other VegBIEN tables, which require the internal pkey
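    (e.g., assuming the view is named literally "plot.**" and locationevent's
    pkey column is locationevent_id:
        SELECT *
        FROM "plot.**"
        JOIN locationevent
            ON locationevent.locationevent_id = "plot.**".locationevent__pkey;
    )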
derived/biengeo/README.txt: geoscrub new data: geovalidate.sql: added runtime from Paul
schemas/vegbien.sql: sync_analytical_stem_to_view(): speciesBinomialWithMorphospecies index: documented runtime (1 h)
schemas/vegbien.sql: plot.**: updated to use the same column formulas as analytical_stem_view
planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: removed "globally-unique" because Naia is actually OK with this being numeric (i.e. unique within our DB)
lib/runscripts/import_subset.run: $version: use new $extract_view, which is set to the same value that this was
lib/runscripts/extract.run: use the extract-specific view instead of all of analytical_stem
schemas/vegbien.sql: added 2013-10-18.Brian_Enquist.Canadensys view
schemas/vegbien.sql: sync_analytical_stem_to_view(): added index on speciesBinomialWithMorphospecies for Brian Enquist's Canadensys request
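    (hedged sketch of such an index; the actual name and definition are in
    sync_analytical_stem_to_view():
        CREATE INDEX "analytical_stem_speciesBinomialWithMorphospecies_idx"
            ON analytical_stem ("speciesBinomialWithMorphospecies");
    )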
exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: documented runtime (35 min, now that bugs have been fixed)
bugfix: bin/with_all: @inputs default value: use `local`, so that the default value is only set for the current function and doesn't leak back out into the caller. this fixes a bug in subset imports where import_all's Source/import call to with_all would add the .* datasources, but these would then stay in for the import_scrub call, causing extra .* datasources to incorrectly be imported.
planning/timeline/timeline.2013.xls: usability testing: added additional subtask to validate the scientists' extracts (i.e. check that the extract fulfills their request)
planning/timeline/timeline.2013.xls: provide scientists with their requested data: added separate subtask for Brian Enquist's Canadensys extract
bugfix: schemas/pg_hba.Mac.conf: made same change for Mac as was made for Linux in r11451
bugfix: schemas/pg_hba.conf: don't allow ident authentication for Unix socket connections, because this apparently prevents having normal, password-based connections ("md5"). note that just switching the order of the ident and md5 entries is not useful, because whichever authentication type comes second will be ignored completely. this problem was previously worked around by just not using Unix socket connections at all, and always specifying "localhost" as the host to force a hostname-based connection. this does not affect the postgres superuser, because they have their own ident line in pg_hba.conf.
Added db user and host to load-geoscrub-input.sh
The psql commands in load-geoscrub-input.sh will now connect with a specific user on a specific host. Updated the 'COPY' SQL statement to a '\COPY' statement, so that the psql user does not have to be a PostgreSQL superuser.
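    (sketch of the client-side copy, with a hypothetical table and file name;
    unlike server-side COPY, \copy reads the file with the psql user's own
    permissions, so no superuser is needed:
        \copy geoscrub_input from 'geoscrub-input.csv' with csv header
    )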
derived/biengeo/README.txt: geoscrub new data: steps that use .sql scripts: added the psql commands to run these
Updated install instructions in the README.