lib/csvs.py: InputRewriter: documented that this is also a stream (in addition to inheriting from StreamFilter)
bugfix: lib/csvs.py: InputRewriter: accept a reader, as would be expected, instead of a custom stream whose lines are tuples
fix: lib/sql_io.py: append_csv(): use new csvs.ProgressInputFilter instead of streams.ProgressInputStream(csvs.StreamFilter(__)), so that the input to csvs.InputRewriter is a reader, not a stream. this avoids the need for csvs.InputRewriter to accept a stream whose lines are tuples rather than the expected reader.
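    a minimal self-contained sketch of the reader-based interface described above (the real class in lib/csvs.py also inherits from StreamFilter and differs in details):

        import csv, io

        class InputRewriter:
            '''wraps a csv reader (whose rows are lists), exposing a
            file-like stream of re-serialized CSV lines, e.g. as input to
            psycopg2's copy_expert()'''
            def __init__(self, reader, dialect=csv.excel):
                self.reader = reader
                self.dialect = dialect

            def readline(self):
                try: row = next(self.reader)
                except StopIteration: return ''  # EOF: '' ends the copy
                buf = io.StringIO()
                csv.writer(buf, self.dialect).writerow(row)
                return buf.getvalue()

            # copy_expert() reads in chunks; returning whole lines is fine
            def read(self, size=-1): return self.readline()

        # usage:
        stream = InputRewriter(iter([['col0', 'col1'], ['1', '2']]))
        print(stream.readline(), end='')  # -> 'col0,col1\r\n'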
bugfix: inputs/input.Makefile: %/install: $(exportHeader) must come before postprocess because postprocess renames columns
exports/: svn:ignore: added *.gz
lib/csvs.py: added ProgressInputFilter, analogous to streams.ProgressInputStream
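    a sketch of what a filter analogous to streams.ProgressInputStream might look like; the constructor signature is an assumption:

        import sys

        class ProgressInputFilter:
            '''passes rows through from an underlying csv reader,
            periodically logging how many rows have been read so far'''
            def __init__(self, reader, log=sys.stderr,
                msg='read %d row(s)', n=100):
                self.reader, self.log, self.msg, self.n = reader, log, msg, n
                self.count = 0

            def __iter__(self): return self

            def __next__(self):
                row = next(self.reader)  # StopIteration propagates at EOF
                self.count += 1
                if self.count % self.n == 0:
                    self.log.write((self.msg % self.count)+'\n')
                return row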
lib/sql_io.py: added commented-out debug statement used to troubleshoot copy_expert() errors
lib/dicts.py: added pair_keys(), pair_values()
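    plausible one-line implementations of these (the actual code in lib/dicts.py may differ):

        def pair_keys(pairs):
            '''[(k, v), ...] -> [k, ...]'''
            return [k for k, v in pairs]

        def pair_values(pairs):
            '''[(k, v), ...] -> [v, ...]'''
            return [v for k, v in pairs]

        assert pair_keys([('a', 1), ('b', 2)]) == ['a', 'b']
        assert pair_values([('a', 1), ('b', 2)]) == [1, 2]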
bugfix: lib/streams.py: CaptureStream: end_idx must also be > start_idx
bugfix: inputs/input.Makefile: $(import_install_): need `set -o pipefail` so that errexit also applies to failures within pipelines
/README.TXT: to backup files not in Time Machine: don't need to review diff because command is unidirectional
fix: /README.TXT: to back up the local machine's hard drive: "repeat until only minimal changes" should refer to the first sync command
inputs/.geoscrub/geoscrub_output/run: documented postprocess() rm=1 runtime (6 min)
lib/tnrs.py: single_tnrs_request(): use_tnrs_export=False: need to obtain export columns
lib/csvs.py: added header(stream)
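    presumably along these lines (the real helper may also sniff the CSV dialect):

        import csv, io

        def header(stream):
            '''reads just the first line of stream and parses it as a CSV
            header row, leaving the rest of the stream unconsumed'''
            return next(csv.reader([stream.readline()]))

        assert header(io.StringIO('a,b\n1,2\n')) == ['a', 'b']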
fix: lib/tnrs.py: single_tnrs_request(): need to `assert name_ct >= 1`, because with no names, TNRS hangs indefinitely
bin/tnrs_client: added env var to configure use_tnrs_export
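    how the flag and the guard from the entries above might fit together; apart from use_tnrs_export, single_tnrs_request(), and name_ct, the names and the env-var parsing convention are assumptions:

        import os

        # bin/tnrs_client: read the flag from the environment
        use_tnrs_export = bool(int(os.getenv('use_tnrs_export', '1')))

        # lib/tnrs.py
        def single_tnrs_request(names, use_tnrs_export=True):
            name_ct = len(names)
            assert name_ct >= 1  # with no names, TNRS hangs indefinitely
            ...  # submit the request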
/README.TXT: to back up vegbiendev: use inplace=1 to speed stopping and resuming transfer
fix: /README.TXT: to back up the local machine's hard drive: removed --extended-attributes (after initial sync) because rsync apparently has to visit every file for this
fix: /README.TXT: to back up the local machine's hard drive: also need --extended-attributes
/README.TXT: to back up the local machine's hard drive: removed --delete-before now that that partition has been expanded
fix: /README.TXT: to back up vegbiendev: exclude /var/lib/mysql.bak,postgresql.bak because the local machine doesn't need 2 copies of this information
/README.TXT: to back up vegbiendev: removed no longer needed exclude of Dropbox subdir backup
fix: /README.TXT: to back up vegbiendev: also need to do steps under Maintenance > "to synchronize vegbiendev, jupiter, and your local machine" because /home/aaronmk/bien is not synced here
bugfix: /README.TXT: to back up vegbiendev: need `overwrite=1`
/README.TXT: to back up the version history: don't also need this on vegbiendev because it's already on jupiter and the local machine
bugfix: /README.TXT: to back up vegbiendev: need to include Postgres config files
/README.TXT: to back up the local machine's hard drive: don't back up temp files: added /.fseventsd/
fix: /README.TXT: to back up the local machine's hard drive: give the initial runtime as a range instead, because some of the later runtime might have been spent on the same files
/README.TXT: to back up the local machine's hard drive: updated initial runtime to include additional transferred files (17 h)
fix: /README.TXT: to back up the local machine's hard drive: need to use --delete-before because the backup partition is near capacity
/README.TXT: to back up the local machine's hard drive: don't back up temp files such as /private/var/vm/*
fix: /README.TXT: to back up the local machine's hard drive: back up most Dropbox/Postgres files before stopping processes, to minimize downtime
bugfix: /README.TXT: to back up the local machine's hard drive: can't use ~ with --exclude
fix: inputs/.geoscrub/geoscrub_output/postprocess.sql: map_geovalidity(): unscrubbable names should actually be geo*in*valid, not geovalid=NULL, according to Brad
/README.TXT: to back up the local machine's hard drive: back up the non-Dropbox, non-Postgres files separately to minimize the Dropbox and Postgres downtime
/README.TXT: to back up the vegbiendev databases: don't need to review diff for these as it's always unidirectional
/README.TXT: added instructions to back up vegbiendev
fix: /README.TXT: to back up the local machine's hard drive: also need to repeat backup command until only minimal changes
/README.TXT: to back up the local machine's hard drive: added step to stop Postgres
bugfix: /README.TXT: to back up the local machine's hard drive: also need to stop Dropbox
/README.TXT: to back up the local machine's settings: added step to remove .DS_Store
fix: /README.TXT: to back up the local machine's settings: Dropbox: should not run with `del=`, because the backup should be an exact replica
backups/TNRS.*: removed no longer needed old TNRS backups, which are part of the respective full-database backups in any case
added config/phpMyAdmin/ symlink to schemas/VegCore/phpMyAdmin/
bugfix: lib/sh/archives.sh: compress(): don't include dir prefix in zip archive
lib/sh/util.sh: cd(): use echo_run instead of a manual echo_cmd call
fix: lib/sh/util.sh: cd(): indent after running cd rather than before
lib/sh/util.sh: cd(): support rebasing path vars for the new dir
bugfix: lib/sh/archives.sh: compress(): need to use zip's path syntax to avoid the file in the archive being named "-"
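    for illustration, the same class of pitfall seen from Python's zipfile module: unless the entry name is given explicitly, it is inferred (here from the on-disk path, including the dir prefix):

        import zipfile

        with zipfile.ZipFile('out.zip', 'w') as zf:
            # without arcname, the entry would be named 'some/dir/file.csv'
            zf.write('some/dir/file.csv', arcname='file.csv')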
lib/tnrs.py: added option to avoid using TNRS's TSV export feature, which currently returns incorrect selected matches (vegpath.org/issues/943). this has been implemented up through the GWT/JSON decoding.
lib/tnrs.py: added gwt_decode()
lib/strings.py: added unesc_quotes() and helper functions
lib/strings.py: added json_decode()
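    plausible shapes for these helpers (the actual implementations in lib/strings.py may handle more escape forms):

        import json

        def unesc_quotes(str_, quote='"'):
            '''undoes backslash-escaping of the given quote char'''
            return str_.replace('\\'+quote, quote)

        def json_decode(str_):
            '''parses a JSON value (a thin wrapper, so callers need not
            import json directly)'''
            return json.loads(str_)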
/README.TXT: To re-run geoscrubbing: updated runtimes
exports/*_GBIF.csv.run: documented compress_() runtime (20 min-1 h)
lib/runscripts/extract.run: export_(): also compress created file
lib/sh/archives.sh: added compress(), expand(), which handle compression of individual files
bugfix: inputs/input.Makefile: sql/install: ";" separators for commands inside $(if) blocks need to be inside the $(if) block, too, because otherwise there will be a dangling ";" without a statement (bash does not support empty statements consisting of just ";")
/README.TXT: Full database import: converted database commands to command-line commands to make them easier to run
web/links/index.htm: updated to match Firefox bookmarks: added instructions for how to enable automatic restart on power loss for the UPS (which isn't accessible in the GUI)
fix: schemas/util.sql: contained_within_approx(point geocoord, region postgis.geography): use util.geography() instead of implicit cast to suppress "Coordinate values were coerced into range [-180 -90, 180 90] for GEOGRAPHY" NOTICEs
schemas/util.sql: added geography(util.geocoord), which suppresses "Coordinate values were coerced into range [-180 -90, 180 90] for GEOGRAPHY" NOTICEs
exports/native_status_resolver.csv.run: updated export_() runtime (5 min, now that we're using the narrower New World criterion)
fix: schemas/public_.sql: native_status_resolver: don't include rows with New World coordinates that don't also have New World country names, since the NSR only uses the country name
schemas/public_.sql: native_status_resolver: removed rows with is_geovalid NULL, at Brad's request. note that this removes valid rows with standardized country names.
exports/native_status_resolver.csv.run: updated export_() runtime (30 min)
fix: schemas/public_.sql: native_status_resolver: added country IS NOT NULL filter requested by Brad
fix: schemas/public_.sql: native_status_resolver: remove the id because this prevents SELECT DISTINCT from having the desired effect. instead, the results will be joined back using the other columns.
exports/native_status_resolver.csv.run: upload_(): documented runtime (2.5 min)
bugfix: exports/native_status_resolver.csv.run: upload_(): $live must be exported
exports/native_status_resolver.csv.run: upload_(): use `live=1` instead for consistency with other invocations of put
fix: exports/native_status_resolver.csv.run: upload_(): need `l=1`
exports/native_status_resolver.csv.run: documented export_() runtime (45 min)
exports/native_status_resolver.csv.run: added upload_() to get the file onto nimoy
added exports/native_status_resolver.csv.run
schemas/public_.sql: added native_status_resolver view, requested by Brad (wiki.vegpath.org/Data_requests)
inputs/publishable datasources.xlsx: updated
lib/tnrs.py: documentation about output of the retrieve step: added that this is also unusable because the array does not contain all the columns and contains no column names
removed no longer used web/BIEN3/Redmine/main/. use Redmine/!__ instead.
web/BIEN3/Redmine/issues/.htaccess: perform .. redirects using new ! prefix
web/BIEN3/Redmine/.htaccess: enable redirects that avoid using a subdir's .htaccess
web/BIEN3/Redmine/wiki/.htaccess: removed no longer needed ignore_fs, since the .htaccess does not have RewriteRules that would need this in a RewriteCond
web/BIEN3/Redmine/issues/.htaccess: main issues page: added default filter conditions
bugfix: web/BIEN3/Redmine/issues/.htaccess: need to redirect to separate URL for individual issues, because they are not located under the issues/ subdir in Redmine
added web/.issues symlink and dest dir (needed because Apache does not support dangling symlinks)
web/BIEN3/Redmine/wiki/.htaccess: documented that this dir is needed because Apache does not support dangling symlinks
bugfix: web/.htaccess: need to expand top-level symlinks to avoid RewriteBase issues
web/main.conf: added RewriteMap for readlink
added web/readlink
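    a guess at the shape of a RewriteMap `prg:` helper like web/readlink (the actual script may be shell): Apache writes one lookup key per line to the program's stdin and reads one resolved value per line from its stdout:

        #!/usr/bin/env python
        import os, sys

        for line in sys.stdin:
            path = line.rstrip('\n')
            try: target = os.readlink(path)
            except OSError: target = path  # not a symlink: pass through
            sys.stdout.write(target+'\n')
            sys.stdout.flush()  # Apache blocks waiting for each reply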
web/links/index.htm: updated to match Firefox bookmarks: updated favicons
web/BIEN3/Redmine/wiki/.htaccess: just use this dir as symlink dest, since the dir name is the same as the URL path within Redmine
web/.htaccess: don't rewrite existing files/dirs: allow forcing rewrite of existing things with %{ENV:ignore_fs}
web/BIEN3/Redmine/svn-web/.htaccess: use Redmine/ instead of main/ subdir
web/BIEN3/Redmine/.htaccess: point this to the Redmine root instead of to the wiki, to avoid the need to append /main
backups/vegbien.r14089.backup.md5: updated
inputs/.TNRS/schema.sql: taxon_match: added taxon_scrub_best_match_jerry_lu index to facilitate finding names affected by the match-picking bug (#943)