schemas/VegCore/mk_derived: use new lib/sh/local.sh instead of lib/import.sh (a precursor to util.sh etc. that is still used by inputs/FIA/)
bugfix: load_data(): verbosity_min: use verbosity_min='' so that csv2db's default verbosity (3) is used, instead of setting the verbosity directly to 3, which caused the log++ logging output from bin/make to be echoed at verbosity 3, creating cluttered output
lib/sh/util.sh: verbosity_min(): support value '', which sets verbosity=''
bugfix: inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: updated column names to match the renamings in map.csv, which are now performed on the staging table itself
lib/sh/util.sh: run_args_cmd(): time the command so that the runtime of the outer runscript target (i.e. the command run from the shell) is printed at the end of the output, like in bin/make
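A minimal sketch of the idea (the actual run_args_cmd() does more than just run its arguments):
    run_args_cmd() { time "$@"; }  # bash's `time` keyword prints real/user/sys after the command finishes, as the last lines of output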
bugfix: inputs/input.Makefile: %/install: don't run $(cleanup) if it has already been run by $(import_install_), so that it doesn't run twice
inputs/input.Makefile: %/postprocess: don't run postprocess.sql if it is supposed to be run by a runscript, because postprocess.sql may then depend on additional steps the runscript runs before it
lib/runscripts/table.run: import(): use self_make on load_data so that the remake status determines whether the table is reinstalled
bugfix: lib/runscripts/mysql.table.run: import(): added missing set_make_vars, needed by self_make
bugfix: lib/runscripts/table.run: load_data(): need to use $_remake instead of $remake when using set_make_vars
lib/runscripts/table.run: added set_make_vars to all make targets so $remake would be propagated appropriately
lib/runscripts/table.run: load_data(): also clobber install log if remaking, because the table will be reinstalled
lib/runscripts/table.run: load_data(): automatically select noclobber mode depending on whether the install log already exists. this removes the need for a separate load_data_first_run() function.
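A sketch of the selection logic described here ($install_log is a placeholder name; $_remake is the set_make_vars flag from the entries above):
    # preserve the install log on idempotent re-runs; clobber it on the first run or when remaking
    if test -e "$install_log" && test ! "$_remake"; then noclobber=1; else noclobber=; fi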
bugfix: lib/runscripts/table.run: load_data(): ignore errors if table already exists
lib/runscripts/table.run: load_data(): use noclobber=1 to avoid overwriting the install log when re-running the install target idempotently. load_data_first_run() is now available to preserve the output in the log on the first run.
inputs/input.Makefile: Staging tables installation: $(logInstall): don't output to the install log if $noclobber flag is set, to prevent overwriting the log when re-running the install target idempotently
bugfix: lib/runscripts/mysql.table.run: import(): move previous versions of table.tsv out of the main dir before loading staging tables, to prevent them from being considered a TSV segment file and prepended to table.tsv
lib/sh/util.sh: added mv2dir() and mv_glob(), which wrap mv
lib/sh/util.sh: added mkdir alias which adds -p to prevent errors if the dir already exists
lib/sh/util.sh: added wildcard alias, similar to make's $(wildcard) function
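Possible shapes of these two helpers (assumptions for illustration; the actual definitions in lib/sh/util.sh may differ):
    alias mkdir='mkdir -p'  # -p: also create parent dirs, and don't error if the dir already exists
    alias wildcard='shopt -s nullglob; echo'  # `wildcard *.csv` prints only existing matches, like make's $(wildcard)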
bugfix: inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: institution_code index: create it idempotently using create_if_not_exists() and an explicit index name, so that a duplicate index doesn't get added each time postprocess.sql is run
lib/sh/local.sh: psql(): don't put util in the search_path because psql scripts now add it themselves if they need it, using `SELECT util.search_path_append(util);`
inputs/GBIF/raw_occurrence_record_plants/postprocess.sql: add util to the search_path so that postprocess.sql will also work when run by inputs/input.Makefile, which only puts the datasource (GBIF) in the search_path
schemas/util.sql: added search_path_append()
schemas/util.sql: added eval() to allow running EXECUTE outside of a function (and to echo the command that is run)
inputs/GBIF/raw_occurrence_record_plants/run: added import() runtime (5 h)
inputs/GBIF/raw_occurrence_record_plants/run: table.tsv.gz/make() runtime: noted that this excludes the upload time
inputs/GBIF/raw_occurrence_record_plants/run: added table.tsv.gz/upload() runtime (15 min)
added lib/runscripts/mysql.table.run (general to all MySQL datasources) and use it in inputs/GBIF/table.run
inputs/GBIF/raw_occurrence_record_plants/run: table.tsv/make(): to view runtime when using `screen`: keys used to scroll: added Ctrl-B/Ctrl-F for page-at-a-time scrolling (there are a lot of pages of output for the import() target!)
bugfix: inputs/GBIF/table.run: table.tsv.gz/make(): don't run table.tsv.gz/upload in test mode, to avoid clobbering the backup of a full table.tsv with a partial, testing table.tsv
lib/sh/db.sh: set test mode when using a limited # of rows
bugfix: inputs/GBIF/table.run: table.tsv.gz/upload(): don't use inplace mode because it leaves a newer mtime when aborted, causing rsync to think that the partial upload is actually newer than the source. note that rsync's --partial-dir mode is just as capable of resuming an aborted upload (it will just use a file in .rsync-tmp instead). inplace mode is primarily designed for fixed-offset files which don't change much between edits, but this is not true for exports (or the gzips of them), which will change the file offsets of most data if even one row or column is added or removed.
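A sketch of the resulting transfer options (paths and the remote URL are placeholders; the real options live in lib/sh/sync.sh's upload()):
    # --inplace is dropped; --partial-dir still makes the upload resumable, via a temp file in .rsync-tmp
    rsync --partial --partial-dir=.rsync-tmp table.tsv.gz "$sync_remote_url"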
bugfix: inputs/GBIF/table.run: table.tsv.gz/make(): run table.tsv.gz/upload here instead of in table.tsv/make() because it should not run until table.tsv.gz is finished being made, which is not the case in table.tsv/make() because table.tsv.gz/make is run in the background
inputs/GBIF/table.run: table.tsv.gz/upload(): moved before table.tsv.gz/make() so it can be used by it
bugfix: inputs/GBIF/table.run: table.tsv.gz/upload(): need overwrite=1 because the mtime of an aborted inplace upload is newer
inputs/GBIF/table.run: table.tsv*/upload(): renamed to table.tsv.gz/upload() to upload only table.tsv.gz, not table.tsv, in order to save bandwidth
bugfix: lib/sh/sync.sh: also need to --include parent dirs for each --include path
lib/sh/util.sh: added path_parents()
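The reason: when everything else is excluded, rsync will not descend into a parent dir unless that dir is itself included, so each --include=a/b/c also needs --include=a/ and --include=a/b/. A possible shape for path_parents() (assumed, not the actual util.sh code):
    path_parents() # e.g. `path_parents a/b/c` prints "a/b/" then "a/" (assumed behavior; output order may differ)
    {
        local path="$1"
        while [[ "$path" == */* ]]; do path="${path%/*}"; echo "$path/"; done
    }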
*{.sh,run}: in comments, use ${array[@]} instead of @array for clarity
lib/sh/util.sh: foreach_arg(): moved `local a` to same line as for loop that uses it
bugfix: inputs/GBIF/table.run: table.tsv*/upload(): need to run put in live mode (live=1)
lib/sh/util.sh: foreach_arg(): echo_run the cmd at one log_level higher so it isn't printed as if it were an external command (log_level 1)
lib/sh/sync.sh: removed `pf upload` debug statement
bugfix: lib/sh/util.sh: set_fds(): localize $i so it doesn't overwrite any previous value
inputs/GBIF/table.run: table.tsv/make(): run table.tsv*/upload when the file make is done so that the file is backed up to jupiter
inputs/GBIF/table.run: added table.tsv*/upload()
lib/sh/local.sh: added sync_upload(), sync_download() with $sync_local_dir, $sync_remote_url config vars
added lib/sh/sync.sh with upload(), download()
lib/sh/util.sh: added foreach_arg()
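The general shape of an apply-per-argument helper (illustrative only; the actual foreach_arg() in util.sh works differently, e.g. it echo_runs the cmd as noted above):
    foreach_arg() # usage: foreach_arg cmd arg...  -> runs `cmd arg` once per arg
    {
        local cmd="$1"; shift
        local a; for a in "$@"; do "$cmd" "$a"; done
    }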
bugfix: lib/sh/util.sh: need to use `declare -p` instead of ${var+isset} because ${var+isset} returns not set for empty arrays
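The empty-array case that motivates this (standard bash behavior):
    a=()
    test -n "${a+isset}" || echo "looks unset"        # prints "looks unset": ${a+...} checks element 0, which an empty array lacks
    declare -p a >/dev/null 2>&1 && echo "is declared"  # succeeds: the array variable itself exists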
lib/sh/util.sh: added echo_vars() stub
lib/sh/util.sh: added echo_run() stub
lib/sh/util.sh: set_fds(): don't run (or echo) exec if no redirections are being made
bugfix: lib/sh/util.sh: added missing stub for indent alias (used by echo_func alias, which is a stub). without the stub, /usr/bin/indent would be used instead on Mac.
bugfix: lib/sh/local.sh: root_rel_path(): added echo_func
bugfix: lib/sh/local.sh: root_rel_path(): use canon_rel_path instead of rel_path because $1 may be absolute rather than relative to the current dir, so $root_dir needs to be made absolute (which requires $1 to be made absolute as well)
lib/sh/util.sh: support custom $base_dir, which will be run through realpath() to match $path ($PWD, which was used before, did not need to be realpath'd because it was already absolute)
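A rough sketch of what canonicalizing both sides looks like (illustrative only; names and details differ from the actual rel_path()/canon_rel_path() in util.sh):
    # both $base_dir and $path must be absolute before the common prefix can be stripped
    base_dir="$(realpath "${base_dir:-$PWD}")"
    path="$(realpath "$path")"
    echo "${path#"$base_dir"/}"  # the path relative to $base_dir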
lib/sh/util.sh: moved echo_func alias to stub because it must be embedded in its expanded alias form to work properly
lib/sh/util.sh: declare echo_func as a stub before it's defined, so that functions can use it even if they are defined before it (and its logging functionality will be enabled as soon as it's defined)
lib/sh/util.sh: rel_path(): don't log++ it, and instead only log++ applicable calls of it or its callers. this allows non-internal calls of rel_path() to be logged at the usual log_level.
lib/sh/local.sh: added root_rel_path()
lib/Firefox_bookmarks.reformat.csv: unescape HTML in the page's description, such as links to more info. this is necessary to properly render the persistent shells link in the screen > scrollback folder description.
bin/repl: added unescape_html() filter function, which can be specified as the replacement string
bin/repl: support Unicode characters in the matched portion of the string
web/links/index.htm: updated to Firefox bookmarks. `screen`: scrollback: added link to our wiki page on persistent shells, which are a better way to support reconnecting.
web/links/index.htm: updated to Firefox bookmarks. added bookmarks about the `screen` command, especially how to access the scrollback. resorted several folders alphabetically.
inputs/GBIF/raw_occurrence_record_plants/run: table.tsv/make(): documented how to view the runtime when using `screen` (press Ctrl-A [ , use up-arrow, and then press Esc to leave copy mode)
inputs/GBIF/raw_occurrence_record_plants/run: herbaria_filter/make(): use new ih_herbarium table instead of the herbaria_filter.ih.csv_ file directly
inputs/GBIF/raw_occurrence_record_plants/run: added ih_herbarium/make(), which stores the IH herbaria
bugfix: inputs/GBIF/raw_occurrence_record_plants/run: table/make(): also filter out rows with a non-plant family (as described at http://vegpath.org/wiki/2013-06-06_conference_call#GBIF-subsetting-fix-raw_occurrence_record-filter-formula), since some institutions have both animal and plant rows, even though they are in IH or in the 80% list. (note that NULL families are OK.)
*{.sh,run}: use mysql instead of mysql_ANSI because mysql is now an alias to mysql_ANSI (since ANSI mode still supports key MySQL features, like backtick-quoted identifiers)
inputs/GBIF/raw_occurrence_record_plants/run: table.tsv/make(): documented that incremental output is provided right away with --quick (unbuffered), but takes awhile to become visible in Macfusion sshfs. this can be tested with `while true; do stat inputs/GBIF/raw_occurrence_record_plants/table.tsv; sleep 2; done` running concurrently with `./inputs/GBIF/raw_occurrence_record_plants/run table.tsv/make` on vegbiendev:/home/bien/svn .
inputs/GBIF/raw_occurrence_record_plants/run: table.tsv/make(): use new raw_occurrence_record_plants view from table/make()
bugfix: inputs/GBIF/raw_occurrence_record_plants/run: table/make(): also make its prerequisites
bugfix: inputs/GBIF/raw_occurrence_record_plants/run: table/make(): don't reset $table to plant_fraction_for_herbaria_filter for commands that use $table
inputs/GBIF/raw_occurrence_record_plants/run: added table/make(), which makes the filter view
inputs/GBIF/raw_occurrence_record/: renamed to raw_occurrence_record_plants because it's actually only the plants in raw_occurrence_record, not all of raw_occurrence_record. also, this will allow us to create a separate raw_occurrence_record_plants view whose name matches the folder and does not collide with the raw_occurrence_record table.
inputs/GBIF/raw_occurrence_record/run: herbaria_filter/make(): added runtime, which is ~0 since it just needs to do CSV import and index scans
inputs/GBIF/raw_occurrence_record/run: herbaria_filter/make(): time the population of herbaria_filter
inputs/GBIF/raw_occurrence_record/run: plant_fraction/make(): updated runtime. added the rows-affected count to the runtime note, so that if the number of rows it relates to (in this case, institution_codes) changes, the runtime can be expected to change accordingly.
inputs/.TNRS/schema.sql: tnrs_populate_fields(): documented runtime (17 min)
bugfix: inputs/GBIF/raw_occurrence_record/run: plant_fraction/make(): plant_fraction column: COUNT counts non-NULL values rather than true values (so it counter-intuitively includes false, which is non-NULL), so NULLIF must be added around the boolean expression to turn it into a NULL-or-not expression. see http://vegpath.org/wiki/2013-06-06_conference_call#GBIF-subsetting-fix-plant_fraction-SQL-bug .
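Schematically (using the stock mysql client for illustration; is_plant_row stands in for the actual boolean filter expression from the wiki formula):
    # before (wrong): COUNT() counts every non-NULL value, so FALSE (0) rows were counted too
    mysql -e 'SELECT institution_code, COUNT(is_plant_row)/COUNT(*) FROM raw_occurrence_record GROUP BY institution_code'
    # after (fixed): NULLIF(expr, FALSE) maps FALSE to NULL, so COUNT() counts only TRUE rows
    mysql -e 'SELECT institution_code, COUNT(NULLIF(is_plant_row, FALSE))/COUNT(*) FROM raw_occurrence_record GROUP BY institution_code'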
inputs/.TNRS/schema.sql: tnrs_populate_fields(): documented that when changing this function, you must regenerate the derived cols using `UPDATE tnrs SET "Name_submitted" = "Name_submitted"`
inputs/.TNRS/schema.sql: tnrs_populate_fields(): Is_plant: must check the family match as Family_score = 1 (as discussed during the conference call vegpath.org/wiki/2013-05-30_conference_call#postprocess-TNRS-results-to-exclude-animals-with-genus-homonyms) rather than as Family_matched IS NOT NULL (as listed in Brad's formula at vegpath.org/wiki/Result_filtering#TNRS-results), because TNRS transforms animal families into plant families via fuzzy matching, so a Family_score check is needed to ensure an exact match to a plant family that was not transformed from an animal family
inputs/.TNRS/schema.sql: added Is_plant derived field, which is populated using the formula at vegpath.org/wiki/Result_filtering#TNRS-results . note that the homonym filtering is currently excluded until we determine whether we can get direct access to the IRMNG homonyms database (http://www.cmar.csiro.au/datacentre/irmng/homonyms.htm). note also that changes to the TNRS schema cannot be fully tested until any TNRS client bugs are fixed, because the data.sql updater requires a working TNRS client to regenerate the sample data.
inputs/.TNRS/schema.sql: updated for current TSV schema: renamed Accepted_species->Accepted_name_species, Accepted_family->Accepted_name_family
bugfix: schemas/vegbien.sql: tnrs_input_name: must anti-join against MatchedTaxon rather than ValidMatchedTaxon to ensure that all of TNRS.tnrs is excluded from the input names. this prevents duplicates from appearing in the TNRS results, which would break the TSV import into TNRS.tnrs. it also prevents no-match names from being scrubbed repeatedly because they were not properly filtered out of the input names.
inputs/.TNRS/schema.sql: fixed whitespace
inputs/.TNRS/schema.sql: added MatchedTaxon view, which now just renames the columns but does not filter the results, and use it in ValidMatchedTaxon
inputs/.TNRS/schema.sql: MatchedTaxon: renamed to ValidMatchedTaxon since this view actually contains only the names with a valid match
bugfix: lib/sql.py: parse_exception(): make_DuplicateKeyException(): handle nested exceptions (which should never be generated, but may be in the case of sql.py bugs such as wiki.vegpath.org/To_Do#Fixes > #1) by printing the nested exception and then rethrowing the original exception, so that the original exception is not lost and still ends up at the end of the program's output, to enable debugging
inputs/.TNRS/schema.sql: tnrs: documented that when changing this table's schema, you must regenerate data.sql using `inputs/test_taxonomic_names/test_scrub`
inputs/GBIF/raw_occurrence_record/run: table.tsv.gz/make(): documented runtime (35 min)
bugfix: schemas/vegbien.sql: analytical_stem_view: speciesBinomialWithMorphospecies: if accepted name not specified, use matched name (matched*) or Name_submitted (concatenatedScientificName), as described at http://wiki.vegpath.org/2013-05-30_conference_call#fix-TNRS-speciesBinomialWithMorphospecies-to-include-alternatives-when-no-accepted-name
lib/sh/make.sh: make(): time all invocations of make
lib/sh/make.sh: make(): at verbosity < 4, hide messages about making included Makefiles: use sed with a range expression (/.../,/.../) to also exclude all log messages between an opening "make ...Makefile" line and a closing "make[#]: ...Makefile" line
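The shape of such a sed range filter (the patterns here are schematic; the exact ones are in lib/sh/make.sh):
    # /open/,/close/d deletes every line from a match of /open/ through the next match of /close/
    sed '/^make .*Makefile/,/^make\[[0-9]*\]: .*Makefile/d'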
lib/sh/util.sh: log+ aliases: added clog++/-- aliases for cmds, which don't include log_local. these are useful when you can't just use "log++" because you need the command following it to be alias-expanded.
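This relies on a bash alias rule: the word after an alias is itself checked for alias expansion only when the alias's value ends in a space. A hypothetical shape (the real clog++ definition differs):
    alias clog++='log_level=$((log_level+1)) '  # trailing space: the wrapped command that follows still gets alias-expanded
    clog++ some_cmd arg  # if some_cmd is an alias, it is expanded here; a function wrapper would not allow that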
bugfix: lib/sh/make.sh: make(): use POSIX [:char_class:] expressions instead of \X character-class abbreviations, because the \ abbreviations are not supported on Linux
inputs/GBIF/table.run: table.tsv/make(): remake table.tsv.gz/make() after table.tsv is made