lib/profiling.py: Profiler: added add_time() and use it instead of `self.total +=`
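a minimal sketch of the change above, as a hypothetical illustration (only the Profiler class, add_time(), and self.total are named in the entry; everything else is assumed):

```python
class Profiler:
    """Hypothetical sketch; the real lib/profiling.py Profiler tracks more state."""
    def __init__(self):
        self.total = 0.0  # cumulative elapsed seconds

    def add_time(self, seconds):
        # callers add elapsed time here instead of mutating self.total directly
        self.total += seconds
```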
bin/tnrs_db: tnrs_profiler: renamed to cumulative_tnrs_profiler to distinguish it from the tnrs_profiler used by tnrs.tnrs_request(), which just profiles the current request
bugfix: bin/tnrs_db: cumulative profiler: use len(names) instead of this_ct (cur.rowcount) in case the actual # rows fetched differed from the rowcount
lib/tnrs.py: repeated_tnrs_request(): renamed to tnrs_request() since this is the function that should usually be used, to ensure that debugging information is output in the case of an error. (the TNRS request must be made again to output this information.)
lib/tnrs.py: tnrs_request(): renamed to single_tnrs_request() to distinguish it from repeated_tnrs_request()
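together, the two renamings above describe a retry-for-debugging wrapper. a hedged sketch of that shape (only the two function names come from the entries; the signature, debug flag, and exception handling are assumptions):

```python
def tnrs_request(names, debug=False):
    try:
        return single_tnrs_request(names, debug)
    except Exception:
        if debug: raise  # debugging output was already produced on the first try
        # the TNRS request must be made again to get the debugging output for the error
        return single_tnrs_request(names, debug=True)
```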
bin/tnrs_db: removed no longer used $wait flag (which caused tnrs_db to wait max_pause for new rows to be added), because tnrs_db is now invoked automatically after each import by the import_scrub target (in inputs/input.Makefile) and does not need to run as a daemon. note that when scrub is invoked, it is possible that a previous datasource's import has already scrubbed the names for this import, because tnrs_db runs until all rows in tnrs_input_name are scrubbed....
bin/tnrs_db: removed no longer needed explicit population of Time_submitted, which is now done automatically by the tnrs table. however, this requires starting the transaction before submitting data, so Time_submitted is correctly set to the submission time rather than the insertion time. the setting of the correct time can be tested by inserting `time.sleep(n_sec)` after the TNRS request and checking that Time_submitted is close to the time tnrs_db was run instead of n_sec seconds later.
bin/tnrs_db: start transaction before submitting data, so Time_submitted is correctly set to the submission time rather than the insertion time. these may differ by several minutes if TNRS is slow. the setting of the correct time can be tested by inserting `time.sleep(n_sec)` after the TNRS request, removing the explicit setting of Time_submitted, and checking that the Time_submitted is close to the time tnrs_db was run instead of n_sec seconds later.
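the test described in the two entries above might look roughly like this (a hypothetical sketch of bin/tnrs_db's flow; db.begin(), store_response(), and the parameter names are placeholders, not the script's actual API):

```python
import time

def submit_batch(db, names, n_sec=0):
    db.begin()  # start the transaction *before* the request, so the tnrs table's
                # Time_submitted default (= transaction start) reflects submission time
    response = tnrs_request(names)
    if n_sec: time.sleep(n_sec)  # temporarily inserted only to test Time_submitted
    store_response(db, response)  # Time_submitted is filled in by the table default
    db.commit()
    # with the sleep in place, Time_submitted should still be close to when tnrs_db
    # was run, not n_sec seconds later
```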
bugfix: bin/tnrs_db: wrap just the TNRS request and the storing of the response data in a function (undoing part of r9514), because the transaction start time for Time_submitted should not be until the TNRS request is actually made (it often takes several minutes to materialize the next set of input names on a full DB)
bin/tnrs_db: Iterate over unscrubbed verbatim taxonlabels: put loop body in a function (which returns whether or not the loop should continue), so that the loop body can easily be wrapped in a transaction using sql.with_savepoint()
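a hedged sketch of the loop structure described in the two entries above (sql.with_savepoint() is the helper named in the entry; fetch_next_unscrubbed_names() and submit_and_store() are hypothetical placeholders):

```python
def scrub_all(db):
    def do_batch():
        """loop body as a function: returns whether the loop should continue"""
        names = fetch_next_unscrubbed_names(db)  # can take minutes on a full DB,
        # so it runs *before* the transaction whose start time sets Time_submitted
        if not names: return False
        # only the TNRS request and the storing of the response are wrapped
        sql.with_savepoint(db, lambda: submit_and_store(db, names))
        return True

    while do_batch(): pass
```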
inputs/.TNRS/schema.sql: tnrs.Time_submitted: set default to now() (the timestamp of the start of the current transaction, http://www.postgresql.org/docs/9.1/static/functions-datetime.html) so that it is automatically populated when rows are added. note that because the start of the current transaction is used instead of the exact time at insertion, all rows inserted in the same transaction (e.g. as part of the same batch) will have the same Time_submitted value, linking them together.
inputs/.TNRS/schema.sql: tnrs_populate_derived_fields(): renamed to tnrs_populate_fields() so it can be used to populate other fields as well
bin/tnrs_db: removed no longer needed explicit appending of derived cols, and instead use append_csv()'s new support for importing CSVs whose columns are a subset of the full table
bin/tnrs_db: ColInsertFilters: use the simpler literal value option for the mk_value param
lib/csvs.py: ColInsertFilter: support using a literal value instead of a function for the mk_value param, since this is the most common use case
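a hedged sketch of the mk_value handling described in the two entries above (the real csvs.ColInsertFilter constructor differs; only the mk_value parameter name is from the entries):

```python
def normalize_mk_value(mk_value):
    """accept either a callable or a literal value; wrap literals in a constant function"""
    if callable(mk_value): return mk_value
    value = mk_value
    return lambda *args: value

# usage: a literal (the common case) and a function both work
col_value = normalize_mk_value('a constant')                # literal value
col_value = normalize_mk_value(lambda row: row[0].upper())  # function still supported
```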
lib/sql_io.py: append_csv(): support importing CSVs whose columns are a subset of the full table and/or in a different order. when the header exactly matches the columns, the explicit column list will still be omitted as an optimization. this uses code from r4927.
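a hedged sketch of the column-list decision described above (helper and parameter names are illustrative, not append_csv()'s actual API):

```python
def copy_columns_clause(csv_header, table_cols, esc_name):
    """return the explicit column list for the COPY/INSERT, or '' when the CSV header
    exactly matches the table's columns, so the list can be omitted as an optimization"""
    if csv_header == table_cols: return ''
    assert set(csv_header) <= set(table_cols)  # subset and/or different order is OK
    return ' ('+', '.join(esc_name(c) for c in csv_header)+')'
```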
bugfix: lib/runscripts/util.run: need to include sh/make.sh for all runscripts that use make-style commands
*{.sh,run}: use new top_make instead of `make --directory="$top_dir"`
lib/sh/make.sh: added top_make()
inputs/import.stats.xls: Postprocessing: populated entries for analytical DB for last 4 imports, and for backup, backup test for last import. note that the combined import time for the last import is 3.5 days, compared to 3 days for the column-based import portion.
inputs/import.stats.xls: Postprocessing: added (empty) entries for analytical DB, backup, backup test
inputs/GBIF/Specimen/postprocess.sql, inputs/REMIB/Specimen/postprocess.sql: updated for providers in r9459, which adds TEX
inputs/*/*/postprocess.sql: Remove institutions that we have direct data for: query to obtain list: updated for current schema
inputs/import.stats.xls: Updated import times. GBIF has been refreshed (with the range modeling column subset), and column-based import now takes 3 days for 88.4 million rows.
README.TXT: Full database import: added warning to perform every single step listed, to avoid breaking column-based import
README.TXT: Full database import: Publish the new import: added warning to be sure you have done every single verification step before proceeding. otherwise, a previous valid import could incorrectly be overwritten with a broken one.
bugfix: README.TXT: Full database import: To run TNRS/remake analytical DB: need to run `export version=<version>` before the command which uses it rather than after
added backups/*.md5
added backups/TNRS.2013-5-21.backup.md5
README.TXT: Datasource setup: For MySQL inputs: For .sql exports: added steps to grant privileges to the bien user. the privileges list excludes UPDATE, DELETE, ALTER, DROP to prevent bugs in the import scripts from accidentally deleting data.
inputs/.TNRS/schema.sql, data.sql: updated for new TNRS CSV columns (see bug at https://pods.iplantcollaborative.org/jira/browse/TNRS-183). note that these columns may eventually change back (comment by Naim at https://pods.iplantcollaborative.org/jira/browse/TNRS-183#comment-34444).
README.TXT: Full database import: added steps to check that TNRS ran successfully, and fix errors (due to column changes in the TNRS CSV) if it didn't
inputs/test_taxonomic_names/test_scrub: use sh's -e (errexit) mode so errors in an invoked script cause the script to abort instead of burying the error in more output
inputs/test_taxonomic_names/test_scrub: documented that `make schemas/"$public"/uninstall` removes the previous results (since it may be confusing why it's prompting the user to uninstall the schema that is an output of the program)
inputs/test_taxonomic_names/test_scrub: don't need to run the import twice anymore because the accepted names are now included in the tnrs_input_name view that TNRS runs on
inputs/test_taxonomic_names/test_scrub: updated for current TNRS schema
bugfix: inputs/test_taxonomic_names/test_scrub: unset $n so it doesn't limit the # rows. it is set to 2 in the default test environment, so must be unset for n-sensitive programs that should be unlimited.
inputs/GBIF/raw_occurrence_record/run: herbaria_filter.table/make(): also include the exported plant_fraction herbaria
inputs/GBIF/raw_occurrence_record/run: added herbaria_filter.plant_fraction.csv_/make(), which exports the plant_fraction herbaria whose plant_fraction >= 0.8
inputs/GBIF/raw_occurrence_record/run: added plant_fraction.table/make(), which contains the plant fraction for each herbarium
lib/sh/db.sh: added mk_drop()
lib/sh/util.sh: to_file(): log $stdout so users can tell which file is being created by the command. for some reason, can't use `local redirs=(">$stdout")` because the redirections don't seem to be applied. can't yet use `log+ -2 echo_vars stdout` because log+ does not yet support negative adjustments (they cause PS4 to be emptied out before being re-prepended to).
bugfix: lib/sh/util.sh: log+(): adjustment < 0: need to enclose -$1 in $(()) so it gets evaluated before being used as an array index
lib/sh/local.sh: psql(): documented that --output is actually for query results, not echoed statements (and thus must be redirected back to fd 1 while fd 1 with the statements gets sent to the logging port)
lib/sh/local.sh: psql(): documented why can't use fd 11
lib/sh/local.sh: use @redirs instead of manual redirection to set up --output fd, so that the redirection will be echoed along with the command. for some reason, this requires switching to fd 13 instead of 11, because fd 11 gives a "/dev/fd/11: Bad file descriptor" error when 11 is set with exec right before the command instead of on the subshell the command is executed in. (13 was chosen rather than 12 because 2 is for errors, while *3 (or 3) is for logging.)
bugfix: lib/sh/db.sh: pg_export_table_to_dir_no_header(): inlined $(pg_header) so setting $cols wouldn't affect pg_export_table_no_header(), which uses it as a kw param
bugfix: lib/sh/util.sh: to_file(): require_not_exists check: missing `test` in `if "$if_not_exists"`
lib/sh/util.sh: command(): log the function call using echo_func to assist debugging. (use a higher log_level because it's internal.)
lib/sh/util.sh: command(): support custom redirections, which will be echoed along with the command
lib/sh/util.sh: to_file(): reworded confusing || conditional for require_not_exists into an if statement
bugfix: inputs/GBIF/raw_occurrence_record/run: herbaria_filter.table/make(): need to use append=1 with mysql_import so the output table doesn't get re-truncated when additional parts are added
bugfix: lib/sh/db.sh: load new aliases before mk_select(), which uses mk_table_esc
lib/runscripts/table.run: include make.sh so runscripts based on it can use make-related utils
lib/sh/db.sh: added mk_select() and use it in mk_select_var
lib/sh/db.sh: added limit() and use it instead of `${limit:+LIMIT $limit}`
lib/sh/db.sh: added mysql_truncate() and use it instead of `mk_truncate|mysql_ANSI`
lib/sh/db.sh: truncate(): renamed to mk_truncate() because it actually just creates a TRUNCATE statement, rather than also executing it
lib/sh/db.sh: use_local/use_remote: unset $prefix after using it so it isn't unintentionally applied as a kw param for a later function
lib/sh/db.sh: mk_select: renamed to mk_select_var since it actually sets a var in the local context rather than returning a query
inputs/GBIF/raw_occurrence_record/run: herbaria_filter.table/make(): specify the different parts used to create the table in an array
inputs/GBIF/raw_occurrence_record/run: renamed herbaria_filter.csv_ to herbaria_filter.ih.csv_ to allow for other tables that get combined into herbaria_filter
bugfix: lib/sh/db.sh: mk_select: ensure newline before LIMIT clause, in case caller provided custom query which did not have trailing newline
bugfix: mappings/VegCore-VegBIEN.csv: place.geovalid: added missing /1 after _alt
bugfix: lib/sql.py: parse_exception(): typed_name_re: added back matching of names without "", since these are used by some error messages (ones that contain () after the function name)
bugfix: lib/sql.py: parse_exception(): typed_name_re: need to allow " within the matched name, since there are now "" around the entire identifier that was passed to Postgres, which may itself include " . always require "" around the matched name, to ensure that the whole name is matched by .+?, e.g. when followed by () for a function call. the version of Postgres we currently use apparently no longer has error messages without the "", so we don't need a separate regexp for quoted and unquoted names.
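a hedged illustration of the two name formats described in the entries above (this is not the actual typed_name_re, which likely also matches the type prefix in the message; it only shows quoted names that may contain `"` and unquoted names followed by `()`):

```python
import re

# a double-quoted name (which may itself contain "), anchored by the "(" of a
# function call, or an unquoted name immediately followed by ()
name_re = re.compile(r'"(.+?)"(?=\()|(\w+)(?=\(\))')

assert name_re.search('function "f""1"(text) does not exist').group(1) == 'f""1'
assert name_re.search('function f1() does not exist').group(2) == 'f1'
```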
lib/sh/db.sh: mysql_import(): automatically ensure the table is empty (i.e. using truncate()), unless append=1 is specified. extra calls to truncate() now that this happens automatically have also been removed.
bin/map: by_col: ensure verbosity is at least 2 in live mode (using new ints.set_min() instead of max() for clarity). documented that live column-based import MUST be run with verbosity 2+ (3 preferred) to provide debugging information for often-complex errors. without this, debugging is effectively impossible.
added lib/ints.py with renamings of max()->set_*min*(), min()->set_*max*(), to make the set-floor/set-ceiling use cases of max()/min() easier to understand
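a hedged sketch of what lib/ints.py provides (parameter names are assumptions), plus the verbosity floor from the bin/map entry above:

```python
def set_min(value, min_): return max(value, min_)  # set-floor: ensure value >= min_
def set_max(value, max_): return min(value, max_)  # set-ceiling: ensure value <= max_

verbosity = 1
verbosity = set_min(verbosity, 2)  # bin/map live mode: verbosity is now at least 2
```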
bin/map: Set default verbosity: by_col: documented that showing all queries is primarily to assist debugging, not profiling
lib/sh/util.sh: logging: named it `log++`
lib/sh/util.sh: logging: verbosities: level 0: documented that log++ also suppresses external command output for full support of cron jobs
lib/sh/util.sh: logging: documented `make` equivalents of the various verbosities, where available. (many of the verbosities, such as level 1, are sorely needed in make to avoid excessive output.)
lib/sh/util.sh: die_e(): benign errors: increase log_level so that a benign non-zero exit status will only be displayed at debug verbosities (2+) (it is confusing otherwise)
lib/sh/util.sh: try(): always run the command with benign_error=1 so that any die_e() doesn't prematurely indicate that a particular exit status was an error
lib/sh/util.sh: die_e(): support benign errors using $benign_error flag that should be logged as info messages instead of errors
lib/sh/util.sh: die(): documented that msg can't use $() (because it would reset $?)
inputs/bien_web/observation/VegBIEN.csv, unmapped_terms.csv: regenerated
lib/sh/util.sh: command(): 2>&$err_fd: add to _redirs after echoing command so it isn't echoed at the end of every command (since this redirection is frequently applied)
lib/sh/util.sh: sed: use case statement instead of test to determine flag letter, to easily allow matching multiple `uname` OSes or adding additional flag letters
lib/sh/util.sh: die(): documented that its msg can use $?, because it has not yet been overridden by another command
lib/sh/util.sh: die_e(): use die(), which performs the necessary save_e/rethrow. this requires using $? instead of $e for the exit status, because $e has not yet been set.
lib/sh/util.sh: inlined log_e() into die_e() because that's the only place it's used
lib/sh/util.sh: command(): print "command exited with error" message using new die_e() if command returns false. this requires removing manual die_e()/log_e() calls elsewhere.
lib/sh/util.sh: command(): moved increase of indent inside () so that error-handling statements after () will use the outer log_level
lib/sh/util.sh: added die_e(), which logs that a command exited with an error
lib/sh/util.sh: command(): determine redirections before echoing the command so they can be logged along with the command, instead of as separate exec statements. (these had a higher log_level to avoid cluttering the output with `exec` lines, which usually suppressed the redirections completely.) inline the command__set_fds() nested func so the redirections are all in one place.
lib/sh/util.sh: use simpler `if can_log; then indent; fi` instead of `can_log && indent || true`. however, the `&& indent || true` syntax is still required in aliases such as echo_func which need to allow prefixing the command with a wrapper command or kw param assignments.
inputs/GBIF/raw_occurrence_record/run: dynamically generate herbaria_filter.csv_ from herbaria.ih in new target herbaria_filter.csv_/make()
inputs/GBIF/raw_occurrence_record/run: store the herbaria filter in a MySQL table loaded from a CSV instead of getting it from a hardcoded list of IN (...) values
lib/sh/db.sh: added truncate()
lib/sh/make.sh: set_make_vars: set $target_stem
lib/sh/db.sh: added mysql_import()
lib/sh/db.sh: removed no longer used mk_esc_name()
lib/runscripts/table.run: don't mk_esc_name schema, table because these will be mk_esc_name'd by functions that use them
lib/sh/local.sh: psql(): use $schema_esc, $table_esc instead of just putting $schema, $table in ""
lib/sh/db.sh: mk_esc_name_alias(): don't overwrite an already-defined $*_esc, to allow the user to provide an already-escaped value (such as a schema-qualified table) directly
lib/sh/util.sh: rtrim(): increase the log_level of sed to 4+ instead of 2+ because it is usually run as part of a var assignment, and should therefore be logged less prominently (at a higher log_level) than echo_vars
lib/sh/db.sh: mk_esc_name_alias(): echo_vars the *_esc var when it's set