added bin/.rsync_ignore with filters from /README.TXT > Maintenance > to synchronize vegbiendev, jupiter, and your local machine. these filters will now be used with bin/sync_upload in addition to the periodic backup commands.
added backups/.rsync_ignore with filters from /README.TXT > Maintenance > to synchronize vegbiendev, jupiter, and your local machine. these filters will now be used with bin/sync_upload in addition to the periodic backup commands.
/.rsync_ignore: added *.pyc
added /.rsync_ignore with filters from lib/common.Makefile $(rsync). these filters will now be used with bin/sync_upload in addition to `make inputs/upload`.
lib/sh/sync.sh: upload(): use --exclude filters from per-dir .rsync_ignore. note that --exclude-from can't be used for this, because its path is relative to the current dir, not the rsync root, and it also requires the .rsync_ignore to exist rather than using it only if it exists.
bin/tnrs_db: documented total runtime (10 days)
bin/tnrs_db: documented current runtime (162 ms/name)
web/links/index.htm: updated to Firefox bookmarks. sorted NCEAS bookmarks to put homepage and support pages first.
/README.TXT: Full database import: To run TNRS, etc. after the main import: clarified that you should only run `export version=<version>` if the import is named something other than public (i.e. it has not yet replaced the previous public schema)
/README.TXT: Full database import: To run TNRS: removed `by_col=1` because by-column mode is not applicable to running TNRS. it is, however, needed when running import_scrub (i.e. `make inputs/<datasrc>/reimport_scrub by_col=1`).
inputs/.TNRS/schema.sql: tnrs: vegbiendev update steps: added `make backups/TNRS.backup-remake` to back up TNRS before making changes to it. this provides a more recent restore point than the last import in case the changes mess things up. (however, the last import's backup is usually sufficient unless TNRS has been run since then.)
inputs/.TNRS/schema.sql: tnrs_populate_fields(): added VACUUM ANALYZE and runtime (50 s)
inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtime (16 min)
schemas/VegBIEN/taxonomy/higherPlantGroup.xlsx.src.txt: added Brad's comment that there are some holes in the Embryophyte subclasses list, and we need to validate it
inputs/.TNRS/schema.sql: tnrs: documented that when changing this table's schema, you must also make the same changes on vegbiendev. included sample util.set_col_types() call with runtime (4 min).
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): need to schema-qualify invoked functions
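for illustration, a minimal sketch of the difference (score_ok()'s argument here is an assumption, not its actual signature):

    SELECT "TNRS".score_ok(0.9); -- schema-qualified: works even when "TNRS" is not on the search_path
    SELECT score_ok(0.9);        -- unqualified: errors out unless "TNRS" happens to be on the search_path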
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): Is_homonym: use the *_is_homonym flag for whichever of genus or family (in that order) is NOT NULL, rather than horizontal-ORing potentially NULL values together
bugfix: inputs/.TNRS/schema.sql: family_is_homonym(), genus_is_homonym(): need to return NULL instead of false when input family/genus is NULL. EXISTS does not support this, so STRICT is used to provide this functionality automatically.
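a minimal sketch of the pattern (the epithet column name and schema qualifications are assumptions; the IRMNG lookup table is the one named in the Is_homonym entry below):

    CREATE FUNCTION "TNRS".family_is_homonym(family text)
        RETURNS boolean
        LANGUAGE sql STABLE
        STRICT -- a NULL family short-circuits to a NULL result, which EXISTS by itself can't produce
    AS $$
    SELECT EXISTS(
        SELECT 1 FROM "IRMNG".family_homonym_epithet WHERE epithet = $1
    );
    $$;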
inputs/.TNRS/schema.sql: added family_is_homonym(), genus_is_homonym() and use them in tnrs_populate_fields()
inputs/.TNRS/schema.sql: score_ok(): changed to IMMUTABLE and STRICT
inputs/.TNRS/schema.sql: tnrs_populate_fields(): never_homonym: use Author_score threshold to exclude matches that are too fuzzy to confirm the presence of a plant name author
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): *_is_homonym: also need to check that there was no Author_matched (i.e. that it could be a homonym). Is_homonym: use new never_homonym var.
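a hedged sketch tying the never_homonym/Is_homonym entries together (the 0.6 threshold, column names, and sample values are purely illustrative):

    WITH m (author_matched, author_score, family_is_homonym, genus_is_homonym) AS (
        VALUES ('Sm.', 0.9::double precision, true, NULL::boolean)
    )
    SELECT CASE WHEN author_matched IS NOT NULL
                 AND author_score >= 0.6         -- never_homonym: author confirmed
                THEN false
                ELSE COALESCE(genus_is_homonym,  -- genus takes precedence,
                              family_is_homonym) -- then family
           END AS "Is_homonym"
    FROM m; -- returns false here: the author match rules a homonym out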
inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtime (18 min)
inputs/import.stats.xls: Updated import times
planning/workflow/bien3_architecture.pptx: stage II: removed Step prefix before stage #, which the other slides don't have
added planning/workflow/bien3_architecture/stages.png
added planning/workflow/bien3_architecture/stage_*.png
added planning/workflow/bien3_architecture.pptx
inputs/.TNRS/schema.sql: tnrs_populate_fields(): when changing this function: UPDATE statement: include TNRS schema since it may not be in the search_path
inputs/.TNRS/schema.sql: tnrs_populate_fields(): Is_plant: also consider homonyms using new family_is_homonym, genus_is_homonym (see wiki.vegpath.org/Result_filtering#taxon_is_plant)
inputs/.TNRS/schema.sql: tnrs: added Is_homonym derived col (uses IRMNG.family_homonym_epithet, genus_homonym_epithet)
schemas/vegbien.sql: re-ran `make schemas/public/reinstall; make schemas/remake` cycle, which apparently changed sort order of statements
/README.TXT: Full database import: disk space check: updated minimum (to 300GB) for new import schema size. note that most of the space (166GB) is indexes, and even of the 87GB of data, only 20GB is from GBIF and 15GB from FIA (so most of it is duplication).
added inputs/IRMNG/*_homonym_epithet/map.csv, etc. (created by */run)
bugfix: inputs/input.Makefile: `%/install %/header.csv: %/create.sql`: in noclobber mode, mark %/header.csv as .PRECIOUS so the existing file won't be deleted if the table already exists (causing an error exit)
bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): run yes using piped_cmd() so the SIGPIPE doesn't cause an errexit
added inputs/IRMNG/{genus_homonym_epithet,family_homonym_epithet}/run, which inherit from ../table.run so that load_data() (which runs create.sql) is invoked
added inputs/IRMNG/species_homonyms/new_terms.csv
bugfix: added no-op inputs/IRMNG/Source/run so inputs/IRMNG/run would have something to invoke for it
inputs/IRMNG/run: use lib/runscripts/datasrc_dir.run, which now provides import() and $subdirs
lib/runscripts/datasrc_dir.run: extend import.run and provide an import() implementation that runs all the runscripts for import_order.txt subdirs
lib/csvs.py: sniff(): support single-column spreadsheets by defaulting to the Excel dialect when the delimiter can't be determined
inputs/IRMNG/: added family_homonym_epithet, genus_homonym_epithet lookup tables, which use util.all_same() to filter out internal Plantae homonyms
schemas/util.sql: added all_same() aggregate
schemas/util.sql: added not_empty(anyarray)
schemas/util.sql: added not_null() (usable as an aggregate's FINALFUNC)
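a hedged sketch of how all_same() and not_null() could fit together (not_empty() not shown; the transition-function name and the NULL handling are illustrative, and the actual schemas/util.sql definitions may differ):

    -- not_null(): trivial test, usable as an aggregate's FINALFUNC
    CREATE FUNCTION util.not_null(state anyarray) RETURNS boolean
        LANGUAGE sql IMMUTABLE AS $$ SELECT $1 IS NOT NULL $$;

    -- transition function: remember the single value seen so far;
    -- poison the state with NULL as soon as a different value arrives
    CREATE FUNCTION util.all_same_transform(state anyarray, value anyelement)
        RETURNS anyarray
        LANGUAGE sql IMMUTABLE AS $$
    SELECT CASE
        WHEN $1 IS NULL     THEN ARRAY[$2] -- first row
        WHEN $1 = ARRAY[$2] THEN $1        -- still the same value
        ELSE NULL                          -- values differ
    END
    $$;

    CREATE AGGREGATE util.all_same(anyelement) (
        SFUNC     = util.all_same_transform,
        STYPE     = anyarray,
        FINALFUNC = util.not_null -- non-NULL final state means all rows agreed
    );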
bugfix: inputs/IRMNG/import_order.txt: need to specify order so that Source is first
bugfix: inputs/IRMNG/*/map.csv: remapped Authority to scientificNameAuthorship instead of authors (now data_authors <VegCore.vegpath.org?data_authors> for clarity)
inputs/IRMNG/map.csv: updated to scrubbed output names from */map.csv (/map.csv does not currently get scrubbed)
bugfix: inputs/IRMNG/species_homonyms/header.csv, map.csv: reset input columns to the DSV (delimiter-separated values) header. they had gotten changed to the output names when map.csv was run with remake=1, causing it to be remade from the (renamed) staging tables.
inputs/input.Makefile: $(_svnFilesGlob): added *Makefile
/README.TXT: `make inputs/{upload,download}`: first run with test=1 to see what the diffs will be
added inputs/IRMNG/, including runscripts to download the names. this is now the 2nd datasource after GBIF to use runscripts, and the 3rd after FIA/GBIF to use new-style import.
inputs/input.Makefile: $(_svnFilesGlob): added *run (runscripts)
lib/runscripts/table.run: import(): also run remake_VegBIEN_mappings() to accept the test output. this function was previously unused: it was kept for future use when lib/import.sh was translated to lib/runscripts/table.run (in its import.sh form, it was used by inputs/FIA/occurrence_all/import).
bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): need to change to $top_dir before running `rm header.csv map.csv`
lib/sh/util.sh: added in_top_dir()
lib/runscripts/table.run: remake_VegBIEN_mappings(): only remake header.csv, map.csv if this target is being run directly, to avoid needing to remake them every time. for tables that are views, this instead requires them to be explicitly remade when the view columns change.
bugfix: lib/runscripts/subdir.run: subdir_make(): only remake if $remake has been explicitly propagated to subdir_make() by using self_make
lib/sh/make.sh: added deferred_check_target_exists alias and use it in check_fake_target_exists
added lib/sh/web.sh with curl wrapper
lib/sh/make.sh: added check_wildcard_target_exists alias
lib/sh/util.sh: added wildcard1 alias
lib/sh/util.sh: added echo1()
lib/runscripts/table.run: load_data(): first make sure schema is installed
lib/runscripts/table.run: added datasrc_make_install()
lib/runscripts/table.run: table_make_install(): take $install_log as an overridable kw param to support install logs in different locations
lib/runscripts/table.run: load_data(): split noclobber functionality into separate table_make_install() function, which can be used by other install-related targets
added schemas/VegBIEN/taxonomy/higherPlantGroup.xlsx.src.txt with Brad's description of how the names were chosen
added schemas/VegBIEN/taxonomy/higherPlantGroup.xlsx
schemas/VegBIEN/planning/taxonomy/: moved non-VegBIEN-specific resources to planning/resources/taxonomy/. this includes Brad's all-important Nomenclature_excerpt.ppt with the Latin taxonomic hierarchy suffixes on slide 5.
bugfix: schemas/vegbien.sql: taxon_trait_view: use the TNRS-scrubbed name from ScrubbedTaxon when available
schemas/vegbien.sql: split geoscrub_input_view's new-row-only filtering into separate view geoscrub_input_new, so that the full geoscrub_input rows are still available. the reduction in geoscrub_input from eliminating the already-scrubbed rows was only 280,000 (5,076,500 - 4,799,173) out of a possible 1.7 million (1,707,970), so it makes sense to just run geoscrubbing on the full input. (the lower-than-expected reduction is most likely due to rows from pre-refresh data being present in the original geoscrub_output table, which have been replaced by different, post-refresh input rows.)
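for reference, a hedged sketch of the anti-join pattern (the source relation, key columns, and omitted schema qualification are assumptions):

    CREATE VIEW geoscrub_input_new AS
    SELECT i.*
    FROM geoscrub_input_view i
    WHERE NOT EXISTS ( -- anti-join: no geoscrub_output row yet for this input
        SELECT 1 FROM geoscrub_output o
        WHERE (o.country, o.stateprovince, o.county,
               o.decimallatitude, o.decimallongitude)
              IS NOT DISTINCT FROM -- NULL-safe row comparison
              (i.country, i.stateprovince, i.county,
               i.decimallatitude, i.decimallongitude)
    );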
added exports/_archive/
mappings/VegCore-VegBIEN.csv: genus->taxonlabel.taxonomicname: use new _filter_genus() (see r9882)
backups/TNRS.backup.md5: updated
bin/make_analytical_db: use new mk_table() instead of TRUNCATE/INSERT
bin/make_analytical_db: added mk_table() and use it in mk_analytical_table()
schemas/vegbien.sql: higher_plant_group_nodes: ferns and allies: added Lycopodiophyta node, as requested by Brad in the conference call (wiki.vegpath.org/2013-06-13_conference_call)
schemas/vegbien.sql: geoscrub_input_view: exclude rows that have already been geoscrubbed, by anti-joining on geoscrub_output
inputs/.geoscrub/geoscrub_output/postprocess.sql: set decimallatitude, decimallongitude types to double precision to facilitate joining with other double precision values
inputs/.geoscrub/geoscrub_output/postprocess.sql: coords index: added rest of input columns so this can be used to check the existence of a result by input. added runtime (55 s). use idempotent create_if_not_exists().
bugfix: schemas/vegbien.sql: higher_plant_group_nodes: removed ferns and allies nodes Anthocerotophyta, Marchantiophyta, Bryophyta, which were incorrectly said to be part of this clade in the BIEN2 analytical DB overview (/planning/workflow/validation/BIEN2_Analytical_DB_overview.docx > p. 13 bottom > last ¶). see http://wiki.vegpath.org/2013-06-13_conference_call#fix-higher_plant_group_nodes-mapping .
bugfix: /Makefile: postgres-Linux: phpPgAdmin: added steps to configure it for Apache 2.4
/run: geoscrub_input/make(): documented runtime (40 s)
bin/make_analytical_db: added `/run export_` to make the geoscrub_input CSV export
inputs/.TNRS/schema.sql: tnrs_populate_fields(): removed no longer needed casts of *_score to double precision
inputs/.TNRS/schema.sql: tnrs: *_score: changed type to double precision because these fields are always floats. this also avoids the need to manually cast them to double precision each time they are used.
lib/tnrs.py: HTTP requests: rewrapped lines
lib/tnrs.py: updated HTTP requests to match current web app
bugfix: lib/tnrs.py: download_request_template: changed dirty to true (to match the current web app), which is apparently needed to apply the source_sorting setting to the downloaded TSV in addition to the GUI results
lib/tnrs.py: retrieval_request_template: turned source_sorting back off, because it causes any match from the first source to always be used, even if it has a lower match score than the match from the other source. (Brad confirms that this should be off.) I think we had this on originally to ensure that only Tropicos results were used when available, rather than USDA, even when USDA was a better match. * note that due to a bug in the web app, this change will not actually be effective, because the source_sorting option is only applied to the GUI results, not the downloaded TSV. *
inputs/.TNRS/schema.sql: tnrs: Name_number: changed type to integer so it would sort numerically
inputs/.TNRS/schema.sql: added pkey on Time_submitted, Name_number
inputs/.TNRS/schema.sql: changed Name_submitted pkey to a unique constraint to allow adding a pkey on Time_submitted, Name_number instead
inputs/.TNRS/schema.sql: Time_submitted, Name_number: added NOT NULL constraints so that they can be used in a unique constraint
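taken together, the last few entries correspond roughly to DDL like this (constraint names and the USING cast are assumptions):

    ALTER TABLE "TNRS".tnrs
        ALTER COLUMN "Name_number"    TYPE integer USING "Name_number"::integer,
        ALTER COLUMN "Time_submitted" SET NOT NULL,
        ALTER COLUMN "Name_number"    SET NOT NULL;
    ALTER TABLE "TNRS".tnrs
        DROP CONSTRAINT tnrs_pkey, -- the old pkey on "Name_submitted"
        ADD CONSTRAINT tnrs_name_submitted_key UNIQUE ("Name_submitted"),
        ADD CONSTRAINT tnrs_pkey PRIMARY KEY ("Time_submitted", "Name_number");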