lib/sh/util.sh: added split()
lib/sh/util.sh: auto-echo common external commands: added `which`
lib/sh/util.sh: auto-echo common external commands: use simpler echo_run instead of command since logging handling is not needed
added backups/vegbien.r9897.backup.md5
lib/sh/sync.sh: upload(): documented that each --include path is relative to the currdir, not the root dir of the upload ($local_dir). this feature, although previously unintended, is actually better because the user can change to a subdir of the root dir and specify upload paths relative to the dir they are in. however, when invoking upload() from a script with --include paths specified, this means you need to use an absolute path (e.g. "$(dirname "${BASH_SOURCE[0]}")"/...; or the value that will become $local_dir, which for sync_upload() is $root_dir).
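
a minimal sketch of the workaround described above; upload()'s exact calling convention isn't shown in this log, so the argument passing here is an assumption:

    # anchor the --include path at the script's own dir instead of the currdir
    script_dir="$(dirname "${BASH_SOURCE[0]}")"
    upload --include="$script_dir"/subdir/   # hypothetical invocation
    # or anchor it at the value that will become $local_dir (for sync_upload(), $root_dir):
    upload --include="$root_dir"/subdir/
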
backups/.rsync_ignore: replaced with .rsync_filter.upload to allow uploading new backups but not deleting existing backups if they don't exist on the local (rsync-invoking) side; and .rsync_filter.download to avoid downloading backups to the local side. this allows storing older backups just on jupiter, where there is much more disk space. note that this change must be made on the remote side (jupiter) for it to be effective, because these are remote-side rules and are only processed by the remote-side rsync instance.
lib/sh/sync.sh: upload(): use directional .rsync_filter to supplement .rsync_ignore with all kinds of --filter rules. separate .rsync_filters are needed for the upload (swap=) and download (swap=1) directions because the sender and the receiver are reversed, causing asymmetric rules like protect/hide to change meaning.
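
to make the asymmetry concrete (the contents below are hypothetical, not the actual backups/.rsync_filter.* rules): rsync's protect rule acts on the receiving side and hide on the sending side, so a single shared rule file would change meaning between the two directions.

    # hypothetical jupiter-side filter files, for illustration only
    cat > .rsync_filter.upload <<'EOF'
    protect *.backup
    EOF
    # 'protect' acts on the receiver: during an upload jupiter receives, so
    # remote-only backups are shielded from --delete
    cat > .rsync_filter.download <<'EOF'
    hide *.backup
    EOF
    # 'hide' acts on the sender: during a download jupiter sends, so existing
    # backups are not transferred to the local side
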
updated backups/TNRS.backup.md5
added backups/TNRS.2013-6-17.backup.md5, TNRS.2013-6-22.backup.md5
/README.TXT: Backups: TNRS cache: Back up/Restore: added runtimes (3 min/5.5 min)
lib/sh/sync.sh: upload(): usage: documented put's swap=1 flag, which downloads instead of uploads
added inputs/GBIF/raw_occurrence_record_plants/.rsync_ignore with filters that previously had to be added manually whenever `make inputs/upload` was run
added inputs/GBIF/_MySQL/.rsync_ignore with filters from /README.TXT > Maintenance > to synchronize vegbiendev, jupiter, and your local machine. these filters will now be used with bin/sync_upload in addition to the periodic backup commands.
added inputs/VegBIEN/TWiki/.rsync_ignore with filters from /README.TXT > Maintenance > to synchronize vegbiendev, jupiter, and your local machine. these filters will now be used with bin/sync_upload in addition to the periodic backup commands.
added inputs/.rsync_ignore with filters from inputs/Makefile $(rsyncSrcs). these filters will now be used with bin/sync_upload in addition to `make inputs/upload`.
added bin/.rsync_ignore with filters from /README.TXT > Maintenance > to synchronize vegbiendev, jupiter, and your local machine. these filters will now be used with bin/sync_upload in addition to the periodic backup commands.
added backups/.rsync_ignore with filters from /README.TXT > Maintenance > to synchronize vegbiendev, jupiter, and your local machine. these filters will now be used with bin/sync_upload in addition to the periodic backup commands.
/.rsync_ignore: added *.pyc
added /.rsync_ignore with filters from lib/common.Makefile $(rsync). these filters will now be used with bin/sync_upload in addition to `make inputs/upload`.
lib/sh/sync.sh: upload(): use --exclude filters from per-dir .rsync_ignore. note that --exclude-from can't be used for this, because it is relative to the currdir, not the rsync root, and therefore also requires the .rsync_ignore to exist rather than using it only if it exists.
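
for comparison, rsync's own per-dir exclude mechanism behaves this way (whether sync.sh uses exactly this flag is not shown here):

    # ':-' is shorthand for a dir-merge, exclude-only filter file: each dir's
    # .rsync_ignore is read as exclude patterns, looked up relative to the dirs
    # being transferred, and silently skipped if it doesn't exist
    rsync -av --filter=':- .rsync_ignore' src/ host:dest/
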
bin/tnrs_db: documented total runtime (10 days)
bin/tnrs_db: documented current runtime (162 ms/name)
web/links/index.htm: updated to Firefox bookmarks. sorted NCEAS bookmarks to put homepage and support pages first.
/README.TXT: Full database import: To run TNRS, etc. after the main import: clarified that you should only run `export version=<version>` if the import is named something other than public (i.e. it has not yet replaced the previous public schema)
/README.TXT: Full database import: To run TNRS: removed `by_col=1` because by-column mode is not applicable to running TNRS. it is, however, needed when running import_scrub (i.e. `make inputs/<datasrc>/reimport_scrub by_col=1`).
inputs/.TNRS/schema.sql: tnrs: vegbiendev update steps: added `make backups/TNRS.backup-remake` to back up TNRS before making changes to it. this provides a more recent restore point than the last import in case the changes mess things up. (however, the last import's backup is usually sufficient unless TNRS has been run since then.)
inputs/.TNRS/schema.sql: tnrs_populate_fields(): added VACUUM ANALYZE and runtime (50 s)
inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtime (16 min)
schemas/VegBIEN/taxonomy/higherPlantGroup.xlsx.src.txt: added Brad's comment that there are some holes in the Embryophyte subclasses list, and we need to validate it
inputs/.TNRS/schema.sql: tnrs: documented that when changing this table's schema, you must also make the same changes on vegbiendev. included sample util.set_col_types() call with runtime (4 min).
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): need to schema-qualify invoked functions
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): Is_homonym: use the *_is_homonym flag for whichever of genus or family (in that order) is NOT NULL, rather than horizontal-ORing potentially NULL values together
bugfix: inputs/.TNRS/schema.sql: family_is_homonym(), genus_is_homonym(): need to return NULL instead of false when input family/genus is NULL. EXISTS does not support this, so STRICT is used to provide this functionality automatically.
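
a self-contained sketch of the STRICT behavior (the real functions query IRMNG-derived homonym tables; the table and function below are stand-ins):

    # assumes a reachable default Postgres database
    psql -v ON_ERROR_STOP=1 <<'SQL'
    CREATE TEMP TABLE homonym_epithets (epithet text);
    INSERT INTO homonym_epithets VALUES ('Pieris');
    CREATE FUNCTION pg_temp.is_homonym_sketch(text) RETURNS boolean
        LANGUAGE sql STABLE
        STRICT  -- NULL input => NULL result, without evaluating the EXISTS at all
        AS $$ SELECT EXISTS(SELECT 1 FROM homonym_epithets WHERE epithet = $1) $$;
    SELECT pg_temp.is_homonym_sketch('Pieris');  -- true
    SELECT pg_temp.is_homonym_sketch(NULL);      -- NULL (a non-STRICT version would return false)
    SQL
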
inputs/.TNRS/schema.sql: added family_is_homonym(), genus_is_homonym() and use them in tnrs_populate_fields()
inputs/.TNRS/schema.sql: score_ok(): changed to IMMUTABLE and STRICT
inputs/.TNRS/schema.sql: tnrs_populate_fields(): never_homonym: use Author_score threshold to exclude matches that are too fuzzy to confirm the presence of a plant name author
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): *_is_homonym: also need to check that there was no Author_matched (i.e. that it could be a homonym). Is_homonym: use new never_homonym var.
inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtime (18 min)
inputs/import.stats.xls: Updated import times
planning/workflow/bien3_architecture.pptx: stage II: removed Step prefix before stage #, which the other slides don't have
added planning/workflow/bien3_architecture/stages.png
added planning/workflow/bien3_architecture/stage_*.png
added planning/workflow/bien3_architecture.pptx
inputs/.TNRS/schema.sql: tnrs_populate_fields(): when changing this function: UPDATE statement: include TNRS schema since it may not be in the search_path
inputs/.TNRS/schema.sql: tnrs_populate_fields(): Is_plant: also consider homonyms using new family_is_homonym, genus_is_homonym (see wiki.vegpath.org/Result_filtering#taxon_is_plant)
inputs/.TNRS/schema.sql: tnrs: added Is_homonym derived col (uses IRMNG.family_homonym_epithet, genus_homonym_epithet)
schemas/vegbien.sql: re-ran `make schemas/public/reinstall; make schemas/remake` cycle, which apparently changed sort order of statements
/README.TXT: Full database import: disk space check: updated minimum (to 300GB) for new import schema size. note that most of the space (166GB) is indexes, and even of the 87GB of data, only 20GB is from GBIF and 15GB from FIA (so most of it is duplication).
added inputs/IRMNG/*_homonym_epithet/map.csv, etc. (created by */run)
bugfix: inputs/input.Makefile: `%/install %/header.csv: %/create.sql`: in noclobber mode, mark %/header.csv as .PRECIOUS so the existing file won't be deleted if the table already exists (causing an error exit)
bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): run yes using piped_cmd() so the SIGPIPE doesn't cause an errexit
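
why this is needed: under `set -o errexit -o pipefail`, `yes` exits 141 (128+SIGPIPE) as soon as its reader stops, which would abort the runscript. a minimal sketch of such a wrapper (the actual piped_cmd() in lib/sh/util.sh may be implemented differently):

    set -o errexit -o pipefail
    piped_cmd() { "$@" || test $? -eq 141; }   # treat exit-by-SIGPIPE as success
    piped_cmd yes | head -n1   # without the wrapper, the 141 status would trigger errexit
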
added inputs/IRMNG/{genus_homonym_epithet,family_homonym_epithet}/run, which inherit from ../table.run so that load_data() (which runs create.sql) is invoked
added inputs/IRMNG/species_homonyms/new_terms.csv
bugfix: added no-op inputs/IRMNG/Source/run so inputs/IRMNG/run would have something to invoke for it
inputs/IRMNG/run: use lib/runscripts/datasrc_dir.run, which now provides import() and $subdirs
lib/runscripts/datasrc_dir.run: extend import.run and provide an import() implementation that runs all the runscripts for import_order.txt subdirs
lib/csvs.py: sniff(): support single-column spreadsheets by defaulting to the Excel dialect when the delimiter can't be determined
inputs/IRMNG/: added family_homonym_epithet, genus_homonym_epithet lookup tables, which use util.all_same() to filter out internal Plantae homonyms
schemas/util.sql: added all_same() aggregate
schemas/util.sql: added not_empty(anyarray)
schemas/util.sql: added not_null() (usable as an aggregate's FINALFUNC)
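
the actual util.all_same()/util.not_null() definitions are in schemas/util.sql; the stand-in below just shows the SFUNC/STYPE/FINALFUNC shape such an aggregate takes:

    # assumes a reachable default Postgres database
    psql -v ON_ERROR_STOP=1 <<'SQL'
    -- state function: collect each distinct value into an array
    CREATE FUNCTION pg_temp.distinct_accum(int[], int) RETURNS int[]
        LANGUAGE sql IMMUTABLE STRICT
        AS $$ SELECT CASE WHEN $2 = ANY($1) THEN $1 ELSE $1 || $2 END $$;
    -- final function: true if at most one distinct value was seen
    CREATE FUNCTION pg_temp.at_most_one(int[]) RETURNS boolean
        LANGUAGE sql IMMUTABLE
        AS $$ SELECT coalesce(array_length($1, 1), 0) <= 1 $$;
    CREATE AGGREGATE pg_temp.all_same_sketch(int) (
        SFUNC = pg_temp.distinct_accum, STYPE = int[], INITCOND = '{}',
        FINALFUNC = pg_temp.at_most_one);
    SELECT pg_temp.all_same_sketch(x) FROM (VALUES (1), (1), (1)) v(x);  -- true
    SQL
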
bugfix: inputs/IRMNG/import_order.txt: need to specify order so that Source is first
bugfix: inputs/IRMNG/*/map.csv: remapped Authority to scientificNameAuthorship instead of authors (now data_authors <VegCore.vegpath.org?data_authors> for clarity)
inputs/IRMNG/map.csv: updated to scrubbed output names from */map.csv (/map.csv does not currently get scrubbed)
bugfix: inputs/IRMNG/species_homonyms/header.csv, map.csv: reset input columns to the DSV (delimiter-separated values) header. they had gotten changed to the output names when map.csv was run with remake=1, causing it to be remade from the (renamed) staging tables.
inputs/input.Makefile: $(_svnFilesGlob): added *Makefile
/README.TXT: `make inputs/{upload,download}`: first run with test=1 to see what the diffs will be
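
as a concrete sequence (same targets and flag as named above):

    make inputs/upload test=1     # preview the diffs first
    make inputs/upload            # then run for real; likewise for inputs/download
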
added inputs/IRMNG/, including runscripts to download the names. this is now the 2nd datasource after GBIF to use runscripts, and the 3rd after FIA/GBIF to use new-style import.
inputs/input.Makefile: $(_svnFilesGlob): added *run (runscripts)
lib/runscripts/table.run: import(): also run remake_VegBIEN_mappings() to accept the test output. this function was previously unused, but was left in for future use when lib/import.sh was translated to lib/runscripts/table.run (it was used in its import.sh form in inputs/FIA/occurrence_all/import).
bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): need to change to $top_dir before running `rm header.csv map.csv`
lib/sh/util.sh: added in_top_dir()
lib/runscripts/table.run: remake_VegBIEN_mappings(): only remake header.csv, map.csv if this target is being run directly, to avoid needing to remake them every time. for tables that are views, this instead requires them to be explicitly remade when the view columns change.
bugfix: lib/runscripts/subdir.run: subdir_make(): only remake if $remake has been explicitly propagated to subdir_make() by using self_make
lib/sh/make.sh: added deferred_check_target_exists alias and use it in check_fake_target_exists
added lib/sh/web.sh with curl wrapper
lib/sh/make.sh: added check_wildcard_target_exists alias
lib/sh/util.sh: added wildcard1 alias
lib/sh/util.sh: added echo1()
lib/runscripts/table.run: load_data(): first make sure schema is installed
lib/runscripts/table.run: added datasrc_make_install()
lib/runscripts/table.run: table_make_install(): take $install_log as an overridable kw param to support install logs in different locations
lib/runscripts/table.run: load_data(): split noclobber functionality into separate table_make_install() function, which can be used by other install-related targets
added schemas/VegBIEN/taxonomy/higherPlantGroup.xlsx.src.txt with Brad's description of how the names were chosen
added schemas/VegBIEN/taxonomy/higherPlantGroup.xlsx
schemas/VegBIEN/planning/taxonomy/: moved non-VegBIEN-specific resources to planning/resources/taxonomy/. this includes Brad's all-important Nomenclature_excerpt.ppt with the Latin taxonomic hierarchy suffixes on slide 5.
bugfix: schemas/vegbien.sql: taxon_trait_view: use the TNRS-scrubbed name from ScrubbedTaxon when available
schemas/vegbien.sql: split geoscrub_input_view's new-row-only filtering into separate view geoscrub_input_new, so that the full geoscrub_input rows are still available. the reduction in geoscrub_input from eliminating the already-scrubbed rows was only 280,000 (5,076,500 - 4,799,173) out of a possible 1.7 million (1,707,970), so it makes sense to just run geoscrubbing on the full input. (the lower-than-expected reduction is most likely due to rows from pre-refresh data being present in the original geoscrub_output table, which have been replaced by different, post-refresh input rows.)
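
a self-contained sketch of the anti-join used for the new-row-only filtering (the real views live in schemas/vegbien.sql; the tables and columns below are simplified stand-ins):

    # assumes a reachable default Postgres database
    psql -v ON_ERROR_STOP=1 <<'SQL'
    CREATE TEMP TABLE geo_input  (country text, decimallatitude double precision, decimallongitude double precision);
    CREATE TEMP TABLE geo_output (country text, decimallatitude double precision, decimallongitude double precision);
    INSERT INTO geo_input  VALUES ('MX', 19.4, -99.1), ('BR', -15.8, -47.9);
    INSERT INTO geo_output VALUES ('MX', 19.4, -99.1);  -- already geoscrubbed
    -- anti-join: keep only input rows with no matching output row
    SELECT i.* FROM geo_input i
    WHERE NOT EXISTS (
        SELECT 1 FROM geo_output o
        WHERE (o.country, o.decimallatitude, o.decimallongitude)
              IS NOT DISTINCT FROM (i.country, i.decimallatitude, i.decimallongitude));
    -- returns only the BR row
    SQL
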
added exports/_archive/
mappings/VegCore-VegBIEN.csv: genus->taxonlabel.taxonomicname: use new _filter_genus() (see r9882)
backups/TNRS.backup.md5: updated
bin/make_analytical_db: use new mk_table() instead of TRUNCATE/INSERT
bin/make_analytical_db: added mk_table() and use it in mk_analytical_table()
schemas/vegbien.sql: higher_plant_group_nodes: ferns and allies: added Lycopodiophyta node, as requested by Brad in the conference call (wiki.vegpath.org/2013-06-13_conference_call)
schemas/vegbien.sql: geoscrub_input_view: exclude rows that have already been geoscrubbed, by anti-joining on geoscrub_output
inputs/.geoscrub/geoscrub_output/postprocess.sql: set decimallatitude, decimallongitude types to double precision to facilitate joining with other double precision values
inputs/.geoscrub/geoscrub_output/postprocess.sql: coords index: added rest of input columns so this can be used to check the existence of a result by input. added runtime (55 s). use idempotent create_if_not_exists().