/README.TXT: to back up the version history: added steps to sync the git repository to jupiter and the local machine
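    e.g., mirroring the git repository might look like this (the remote name, user, and paths are hypothetical; the authoritative steps are in README.TXT):
        git remote add jupiter <user>@jupiter:<path to repo>.git
        git push --mirror jupiter
        git clone <user>@jupiter:<path to repo>.git   # on the local machine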
/README.TXT: added steps to back up the version history
/README.TXT: Notes on running programs: added warning that you should always start with a clean shell to avoid spurious bugs
/README.TXT: Testing: added pointer to development machine specs
moved everything into /trunk/ to create the standard svn layout, for use with tools that require this (e.g. git-svn). IMPORTANT: do NOT do an `svn up`. instead, re-use your working copy's existing files with `svn switch` (http://svnbook.red-bean.com/en/1.6/svn.ref.svn.c.switch.html).
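    e.g., from the root of the existing working copy (a sketch, assuming the working copy currently points at the repository root; the ^/ notation requires svn >= 1.6):
        svn switch ^/trunk .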
/README.TXT: added note that shell scripts should always be read-only, so that editing them while an import is in progress will not crash the import (see http://vegpath.org/links/#**%20modifying%20a%20running%20shell%20script)
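    e.g. (the placeholder stands for whichever scripts the import runs):
        chmod a-w <script>   # make the script read-only so an editor cannot change it mid-run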
/README.TXT: to synchronize a Mac's settings with my testing machine's: added step to remove the downloaded Spam folder, because spam e-mails often contain viruses that would trigger clamscan
/README.TXT: Full database import: documented that you should always start with a clean shell, which does not have changes to the env vars. (there have been inexplicable bugs that went away after closing and reopening the terminal window.) note that running `exec bash` is not sufficient to reset the env vars.
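    a quick way to see why `exec bash` is not sufficient (exported variables are part of the environment, which survives the exec):
        export version=<value>
        exec bash
        echo "$version"   # still set; close and reopen the terminal window instead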
/README.TXT: Full database import: backups: added step to download backup to local machine
/README.TXT: Full database import: In PostgreSQL: documented that the tables to check are located in the r# schema, not public
/README.TXT: Datasource setup: added steps to backup e-mails
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: run the two commands in errexit mode so that the datasource does not incorrectly have the temp suffix removed if the import command exits with an error
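    a sketch of the errexit pattern (the actual two commands are in README.TXT; <...> are placeholders):
        set -o errexit                       # abort if any command fails
        <import command for the table>
        <command to remove the temp suffix>  # now runs only if the import succeeded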
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: added command to remove the temp suffix from the source table entry, which is not automatic when importing a specific table (it happens only when importing the entire datasource, at the end of which the datasource is considered completely imported and ready to overwrite any previous import)
/README.TXT: Full database import: documented that `make schemas/reinstall` requires sudo access
/README.TXT: Full database import: verifying import: In PostgreSQL: don't include current values of the datasource counts, etc., because these may change and should always be re-checked at wiki.vegpath.org/VegBIEN_contents
bugfix: /README.TXT: to backup files not in Time Machine: PostgreSQL: need to run with `overwrite=1` so removed files are also deleted
/README.TXT: to backup files not in Time Machine: PostgreSQL: only stop PostgreSQL after all files have been copied, to minimize the time that the PostgreSQL server is down (the final copy just copies concurrent changes)
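    the pattern is roughly as follows (paths, the copy command, and the stop/start commands are stand-ins for the ones in README.TXT):
        rsync -a <PostgreSQL data dir>/ <backup location>/   # first pass while the server is still running
        <stop the PostgreSQL server>
        rsync -a <PostgreSQL data dir>/ <backup location>/   # final pass copies only the concurrent changes
        <start the PostgreSQL server>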
/README.TXT: updated to PostgreSQL 9.3
/README.TXT: Full database import: after import: record the import times in inputs/import.stats.xls: documented that this should be run on the local machine, because it needs the Mac filename ordering
/README.TXT: Full database import: after import: removed step to install analytical_stem on nimoy because the import mechanism is not set up to do this (we don't generate CSV exports of the full analytical_stem table because they take up a lot of space and are not currently used for anything)
/README.TXT: Full database import: after import: In PostgreSQL: added step to check that analytical_stem contains the expected # of rows
/README.TXT: Full database import: after import: In PostgreSQL: added specific instructions for determining which/how many datasources are expected to be included in the provider_count and source tables
/README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
/README.TXT: added instructions for connecting to vegbiendev
/README.TXT: Single datasource import: added pointer to instructions to remake the analytical DB (also required after single datasource import)
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: run all sync_uploads on the svn working copy using --size-only, because the mtimes are based on when the files were last updated by svn and are not meaningful
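    e.g. (assuming the flag is passed straight through to rsync):
        bin/sync_upload --size-only   # compare by size only; svn working-copy mtimes just reflect when files were last checked out/updated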
/README.TXT: Full database import: On local machine: do steps under Maintenance > "to synchronize vegbiendev, jupiter, and your local machine": removed the no-longer-accurate note that these steps are located above Full database import, since Full database import is now at the beginning of the file
/README.TXT: Datasource setup: added link to Example steps for a datasource (wiki.vegpath.org/Import_process_for_Madidi)
/README.TXT: Full database import: To remake analytical DB: added runtime (13 h)
/README.TXT: Datasource setup: additional steps for new-style datasources: added steps not present in http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource because they were performed all at once for all datasources
/README.TXT: Datasource setup: added additional steps for new-style datasources, from http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource
bugfix: /README.TXT: to backup files not in Time Machine: need to use -E option to sudo to preserve env, after installing the latest system update
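    e.g.:
        sudo -E <backup command>   # -E preserves the caller's environment (the kw-param env vars) across sudo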
/README.TXT: Single datasource import: removed rescrub step because this is not needed by the current TNRS process
/README.TXT: Full database import: added a "Running individual steps separately" label for the section that is not part of the main import but is useful if the import is aborted partway through
/README.TXT: moved Single datasource import, Datasource setup to top since these are the most important howtos
bugfix: bin/after_import: run backups/fix_perms right after the backup files are created to make them private
/README.TXT: Full database import: Publish the new import: added runtime (1 min)
/README.TXT: Full database import: time to wait for the import to finish: updated to time in inputs/import.stats.xls
bin/import_all: added step to remove any leftover TNRS lockfile (previously done manually)
bugfix: /README.TXT: on a live machine, you should put the following in your .profile: need to make svn files web-accessible, because these are used by fs.vegpath.org links (e.g. the link to the ERD). note that this does not affect unversioned files, because these get the right permissions on the local machine instead (see Testing > On a development machine, you should put the following in your .profile).
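    if the mechanism is a umask line in ~/.profile (an assumption; the exact line is in README.TXT), it would look something like:
        umask 022   # let the web server read versioned files, so fs.vegpath.org links work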
/README.TXT: to backup files not in Time Machine: added command to start the PostgreSQL server
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: don't upload ~/.profile, etc. to jupiter because these files are different on each machine. they can instead be synced manually.
/README.TXT: to backup files not in Time Machine: added command to stop the PostgreSQL server
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: noted that ./fix_perms should be run on all machines
bugfix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added step to run `make backups/TNRS.backup/download live=1`, because bin/sync_upload does not sync this due to filters in backups/.rsync_filter.download
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: added step to run ./fix_perms so that there are fewer permissions diffs to review
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: upload: `(cd ~/Dropbox/svn/; svn up)`: use `up` instead so that the needed --force option is applied
/README.TXT: Single datasource import: run commands in the background, since these are long-running commands
/README.TXT: Full database import: fixing TNRS errors: noted that inputs/test_taxonomic_names/test_scrub re-runs TNRS
/README.TXT: Full database import: fixing TNRS errors: updated instructions for new TNRS schema editing workflow
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to upload backups to jupiter
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to remake backups/TNRS.backup
/README.TXT: Full database import: min disk space: updated import schema size for last import
/README.TXT: Full database import: tailing inputs/analytical_db/logs/make_analytical_db.log.sql: increased # lines to 150 to include all lines for the last run
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: bin/after_import: need to run it in the background
/README.TXT: Full database import: To restart an aborted import for a specific table: added step to run bin/after_import
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: added by_col=1
/README.TXT: Full database import: added steps to restart an aborted import for a specific table
bin/import_all: use column-based import (by_col=1) by default instead of requiring the user to specify it explicitly; to use row-based import, turn it off explicitly (by_col=).
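    usage after this change:
        bin/import_all           # column-based import (by_col=1) is now the default
        by_col= bin/import_all   # turn it off explicitly for row-based import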
bugfix: /README.TXT: Full database import: To back up DB: after renaming current import to public: say to replace $version with the appropriate revision, because the $version env var should not be set (otherwise the backup will try to use a nonexistent import with the given revision #)
/README.TXT: Full database import: To back up DB: updated instructions to inline setting of $dump_opts, like in bin/import_all
/README.TXT: Full database import: don't exit the screen until after getting $version, which is defined within it
/README.TXT: Full database import: make test by_col=1: documented that if you encounter errors, they are most likely related to the PostgreSQL error parsing in /lib/sql.py parse_exception()
/README.TXT: Maintenance: added instructions for what to do if http://vegbiendev.nceas.ucsb.edu/phppgadmin/ goes down (sometimes displaying a Not found error)
/README.TXT: Maintenance: regenerate mappings/VegCore.csv: commit command: use single quotes ' instead of double quotes " to avoid needing to \-escape every special char (single quotes ' still need to be escaped)
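    e.g., assuming an svn commit (the message text is illustrative):
        svn ci -m 'regenerate mappings/VegCore.csv: <text with $, ", etc.>'   # no \-escaping needed inside ''
        # a literal single quote still has to be written as '\'' within the message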
/README.TXT: Maintenance: to backup files not in Time Machine: removed VirtualBox VMs because they are now in Time Machine, and do not need to be backed up separately
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added steps to upload just the VirtualBox VMs
bugfix: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added overwrite=1 so that old snapshots, etc. are also deleted
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: use the better bin/sync_upload instead of put
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: removed no longer needed inplace=1, because the VirtualBox VMs now all use a snapshot covering the full disk, so that the full disk is not altered (removing the need to optimize backing up a large file) and just the diff files need to be backed up each time
bugfix: /README.TXT: Maintenance: syncing ~/bien to ~/Dropbox/svn: added overwrite=1 so that perms transfer from the authoritative ~/bien regardless of relative mtimes
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added step to update mtimes/perms on ~/Dropbox/svn/ so that copying files back to ~/bien does not overwrite the permissions from what is on vegbiendev
/README.TXT: Maintenance: synchronization: fixed whitespace
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: removed filters that are now handled by .rsync_ignores
bugfix: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: sync ~/Dropbox/svn/ (the no-unversioned-files working copy) separately from the rest of the files, because .svn/ is now excluded by /.rsync_ignore, so that `svn up` needs to be used to keep the .svn/ dirs in sync. note that .svn/ should generally not be synced between machines, because they may use incompatible versions of the svn working copy format.
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: use new bin/sync_upload (with $sync_remote_subdir) so that per-dir .rsync_ignores are processed, and to use the default $sync_remote_url
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: use new bin/sync_upload instead of specifying all the filter patterns manually. this replaces several `put` commands, each with its own filters, with a single bin/sync_upload on vegbiendev and another on your machine (run in overwrite=1 mode to force a complete sync).
/README.TXT: removed unnecessary `env` before kw params, which are treated as such whenever they appear before a command name
bugfix: /README.TXT: updated `make backups/download` to `make backups/<file>/download`
backups/Makefile: upload: use bin/sync_upload
bugfix: /README.TXT: `make inputs/upload`, `make inputs/download`: added live=1 so that the sync operation runs rather than previewing what will be synced. removed test=1 because this flag is not used by put.
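    usage:
        make inputs/upload           # preview what will be synced
        make inputs/upload live=1    # actually run the sync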
/README.TXT: Backups: TNRS cache: Back up/Restore: added runtimes (3 min/5.5 min)
/README.TXT: Full database import: To run TNRS, etc. after the main import: clarified that you should only run `export version=<version>` if the import is named something other than public (i.e. it has not yet replaced the previous public schema)
/README.TXT: Full database import: To run TNRS: removed `by_col=1` because by-column mode is not applicable to running TNRS. it is, however, needed when running import_scrub (i.e. `make inputs/<datasrc>/reimport_scrub by_col=1`).
/README.TXT: Full database import: disk space check: updated minimum (to 300GB) for new import schema size. note that most of the space (166GB) is indexes, and even of the 87GB of data, only 20GB is from GBIF and 15GB from FIA (so most of it is duplication).
/README.TXT: `make inputs/{upload,download}`: first run with test=1 to see what the diffs will be
bugfix: /README.TXT: Full database import: added step to remove any leftover TNRS lockfile. usually the PID in it no longer exists, but sometimes it refers to a different, active process which blocks tnrs.make.
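    a sketch of the manual check/cleanup (the lockfile path is hypothetical; see README.TXT for the real one):
        cat <TNRS lockfile>   # shows the recorded PID
        ps -p <that PID>      # usually no such process, but sometimes an unrelated, active one
        rm <TNRS lockfile>    # safe once you've confirmed no tnrs.make is actually running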
/README.TXT: Full database import: On local machine: added step to do steps under Maintenance > "to synchronize vegbiendev, jupiter, and your local machine", which is needed in addition to `make inputs/upload` since that doesn't handle overwrites or deletions
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: added warning that you should pay careful attention to all files that will be deleted or overwritten (as the three machines are often out of sync)
/README.TXT: Full database import: make inputs/{upload,download}: run them first with `test=1` to see what the changes will be
/README.TXT: Full database import: `svn up`: use --force to avoid errors about existing files
bugfix: README.TXT: Full database import: screen: need to unset TMOUT, version after running `screen` rather than before so they take effect within the `screen` shell
README.TXT: Full database import: after running `screen`: run `set -o ignoreeof` to prevent Ctrl+D from exiting `screen`, so that attached jobs keep running
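    the combined sequence from this and the previous entry:
        screen
        # within the `screen` shell:
        unset TMOUT version   # so the unsets take effect inside `screen` (TMOUT would otherwise auto-exit an idle shell)
        set -o ignoreeof      # so a stray Ctrl+D does not exit `screen` and take the attached jobs with it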
README.TXT: updating TNRS CSV columns: use the entire "COPY tnrs ..." statement instead of just the body of it so that the explicit columns list is included. this way, the COPY statement will cause an error if the TNRS schema was changed but inputs/.TNRS/data.sql was not yet updated.
README.TXT: Full database import: added warning to perform every single step listed, to avoid breaking column-based import
README.TXT: Full database import: Publish the new import: added warning to be sure you have done every single verification step before proceeding. otherwise, a previous valid import could incorrectly be overwritten with a broken one.
bugfix: README.TXT: Full database import: To run TNRS/remake analytical DB: need to run `export version=<version>` before the command which uses it rather than after
README.TXT: Datasource setup: For MySQL inputs: For .sql exports: added steps to grant privileges to the bien user. the privileges list excludes UPDATE, DELETE, ALTER, DROP to prevent bugs in the import scripts from accidentally deleting data.
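    a hypothetical sketch (the exact privileges list and host are in README.TXT; the list deliberately excludes UPDATE, DELETE, ALTER, DROP):
        mysql --user=root --password -e "GRANT SELECT ON <datasrc db>.* TO 'bien'@'localhost';"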
README.TXT: Full database import: added steps to check that TNRS ran successfully, and fix errors (due to column changes in the TNRS CSV) if it didn't