/README.TXT: moved "to back up e-mails" and "to back up the version history" before settings backup so that the local backup of these is up to date when everything gets backed up
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: backups/TNRS.backup: do this before the general sync so that any reverse sync that's needed won't include it
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: backups/TNRS.backup: use bin/sync_upload now that this works for rsync-ignored files
fix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: run `up` on all machines, not just jupiter, because all must be up-to-date to avoid extraneous diffs
bugfix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: `svn up` on jupiter: need to use the `up` alias because it adds --force
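    the alias is presumably along these lines (exact definition assumed, not quoted in the entry):
        alias up='svn up --force'   # --force lets the update proceed despite unversioned obstructions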
bugfix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added `svn up` on jupiter: it needs to be run in the main dir (~/bien), not ~/Dropbox/svn/
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added `svn up` on jupiter to avoid extraneous diffs when rsyncing
/README.TXT: Schema changes: manually apply schema changes to the live public schema: moved under "update mappings and staging table column names" because this is a necessary part of that step
/README.TXT: Schema changes: changed "update staging table column names" to "update mappings and staging table column names"
/README.TXT: `make inputs/{NVS,SALVIAS,TEAM}/test`: updated runtime (1 min)
/README.TXT: calls to `inputs/run postprocess`: direct user to refer to inputs/run for this, so the runtime doesn't have to be updated in multiple places
/README.TXT: Schema changes: added steps to update staging table column names on the local machine and vegbiendev
/README.TXT: Maintenance: VegCore data dictionary: `make inputs/{NVS,SALVIAS,TEAM}/test`: recorded runtime (30 s)
/README.TXT: Maintenance: VegCore data dictionary: `make inputs/{NVS,SALVIAS,TEAM}/test`: prepended `time` to enable obtaining the runtime
/README.TXT: Maintenance: VegCore data dictionary: `inputs/run postprocess`: updated runtime (20 min)
inputs/run: postprocess(): documented runtime (30 min)
bugfix: /README.TXT: Maintenance: VegCore data dictionary: apply new data dict mappings: need to use postprocess rather than import runscript target, so that the command also works on an svn checkout without the flat files (the flat files are not needed for the staging table renaming)
bugfix: /README.TXT: Maintenance: VegCore data dictionary: apply new data dict mappings: need to use import rather than mappings runscript target, to rename the staging tables
bugfix: /README.TXT: Maintenance: VegCore data dictionary: also need to apply new data dict mappings on vegbiendev
fix: /README.TXT: Maintenance: VegCore data dictionary: added steps to apply the new data dictionary mappings to the datasource mappings and staging tables
/Makefile: added separate phppgadmin-Linux target to avoid needing to run the entire postgres-Linux target whenever http://vegbiendev.nceas.ucsb.edu/phppgadmin/ goes down (after some system updates)
/README.TXT: use full hostname for jupiter so the commands work outside of the NCEAS network as well
fix: /README.TXT: use exact ssh command needed to connect to vegbiendev/jupiter (eg. `ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk`) instead of vaguely referring to "on vegbiendev"/"on jupiter"
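    for reference, the quoted command annotated (annotations mine):
        ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
            # -t: allocate a tty so sudo can prompt for a password
            # exec: replace the remote login shell instead of nesting one
            # su -: start a login shell with aaronmk's own environment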
/README.TXT: Full database import: screen: run `unset TMOUT` first because it is the most important step, now that the remote servers set a TMOUT for extra security
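    a minimal sketch of the intended ordering inside the session:
        screen         # start (or reattach to) the import session
        unset TMOUT    # first command: disable the auto-logout timer the servers now set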
/README.TXT: to back up the version history: added steps to sync git to the local machine
/README.TXT: Schema changes: clarified that the staging tables should only be reinstalled if needed
/README.TXT: put ... around all uppercased text, for consistency
bugfix: /README.TXT: Full database import: Check that source contains [# datasources] rows up through XAL: added alternative verification method when this is not the case (some datasources may be near the end depending on import order)
/README.TXT: to back up the version history: added back `git svn fetch` so we keep the git export up-to-date, too
/README.TXT: to back up the version history: added runtimes (1.5 h for the initial svnsync)
/README.TXT: to back up the version history: added trailing /s to dirs
bugfix: /README.TXT: to back up the version history: fixed svn_repo/ path
/README.TXT: to back up the version history: use svnsync instead of `git svn fetch`, so that the backup is in a format that can be directly reimported into an svn repo
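    a hedged sketch of the svnsync mirror (local path and source URL hypothetical; the mirror's pre-revprop-change hook must also be enabled for svnsync to run):
        svnadmin create svn_repo                 # one-time: create the local mirror
        svnsync init file://"$PWD"/svn_repo <source repo URL>
        svnsync sync file://"$PWD"/svn_repo      # incremental; ~1.5 h for the initial run, per the runtime entry above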
/README.TXT: Full database import: `make test by_col=1`: documented runtime (20 min)
/README.TXT: Full database import: `. bin/import_all`: documented how to view progress
/README.TXT: to back up the version history: added steps to sync the git repository to jupiter and the local machine
/README.TXT: added steps to back up the version history
/README.TXT: Notes on running programs: added warning that you should always start with a clean shell to avoid spurious bugs
/README.TXT: Testing: added pointer to development machine specs
moved everything into /trunk/ to create the standard svn layout, for use with tools that require this (eg. git-svn). IMPORTANT: do NOT do an `svn up`. instead, re-use your working copy's existing files with `svn switch` (http://svnbook.red-bean.com/en/1.6/svn.ref.svn.c.switch.html).
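    the switch is presumably along these lines (run at the working-copy root; repo URL hypothetical):
        svn switch http://<repo host>/svn/<repo>/trunk .   # re-uses existing working-copy files; do NOT `svn up`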
/README.TXT: added note that shell scripts should always be read-only, so that editing them while an import is in progress will not crash the import (see http://vegpath.org/links/#**%20modifying%20a%20running%20shell%20script)
/README.TXT: to synchronize a Mac's settings with my testing machine's: added step to remove the downloaded Spam folder, because spam e-mails often contain viruses that would trigger clamscan
/README.TXT: Full database import: documented that you should always start with a clean shell, which does not have changes to the env vars. (there have been inexplicable bugs that went away after closing and reopening the terminal window.) note that running `exec bash` is not sufficient to reset the env vars.
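    one way to get a truly clean shell without closing the terminal window (an assumption, not the README's prescription; plain `exec bash` inherits the modified env vars):
        env -i HOME="$HOME" TERM="$TERM" bash -l   # scrub the env, then start a fresh login shell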
/README.TXT: Full database import: backups: added step to download backup to local machine
/README.TXT: Full database import: In PostgreSQL: documented that the tables to check are located in the r# schema, not public
/README.TXT: Datasource setup: added steps to back up e-mails
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: run the two commands in errexit mode so that the datasource does not incorrectly have its temp suffix removed if the import command exits with an error
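    a sketch of the errexit pattern (command names hypothetical):
        (set -e                       # errexit: abort the subshell on the first failure
         <import the table>           # if this exits nonzero...
         <remove the temp suffix>)    # ...this never runs, so the suffix stays in place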
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: added command to remove the temp suffix from the source table entry, which is not automatic for importing a specific table (only for importing the entire datasource, at the end of which the datasource is considered completely imported and ready to overwrite any previous import)
/README.TXT: Full database import: documented that `make schemas/reinstall` requires sudo access
/README.TXT: Full database import: verifying import: In PostgreSQL: don't include current values of the datasource counts, etc., because these may change and should always be re-checked at wiki.vegpath.org/VegBIEN_contents
bugfix: /README.TXT: to backup files not in Time Machine: PostgreSQL: need to run with `overwrite=1` so removed files are also deleted
/README.TXT: to backup files not in Time Machine: PostgreSQL: only stop PostgreSQL after all files have been copied, to minimize the time that the PostgreSQL server is down (the final copy just copies concurrent changes)
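    a minimal sketch of the copy-then-stop ordering (paths and stop command hypothetical):
        rsync -a /var/lib/postgresql/ /backup/postgresql/   # bulk copy while the server is still up
        pg_ctl stop                                         # downtime starts here
        rsync -a /var/lib/postgresql/ /backup/postgresql/   # short final pass: only concurrent changes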
/README.TXT: updated to PostgreSQL 9.3
/README.TXT: Full database import: after import: record the import times in inputs/import.stats.xls: documented that this should be run on the local machine, because it needs the Mac filename ordering
/README.TXT: Full database import: after import: removed step to install analytical_stem on nimoy because the import mechanism is not set up to do this (we don't generate CSV exports of the full analytical_stem table because they take up a lot of space and are not currently used for anything)
/README.TXT: Full database import: after import: In PostgreSQL: added step to check that analytical_stem contains the expected # of rows
/README.TXT: Full database import: after import: In PostgreSQL: added specific instructions for determining which/how many datasources are expected to be included in the provider_count and source tables
/README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
/README.TXT: added instructions for connecting to vegbiendev
/README.TXT: Single datasource import: added pointer to instructions to remake the analytical DB (also required after single datasource import)
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: run all sync_uploads on the svn working copy using --size-only, because the mtimes are based on when the files were last updated by svn and are not meaningful
/README.TXT: Full database import: On local machine: do steps under Maintenance > "to synchronize vegbiendev, jupiter, and your local machine": removed the no-longer-accurate note that these steps are located above Full database import, since Full database import is now at the beginning of the file
/README.TXT: Datasource setup: added link to Example steps for a datasource (wiki.vegpath.org/Import_process_for_Madidi)
/README.TXT: Full database import: To remake analytical DB: added runtime (13 h)
/README.TXT: Datasource setup: additional steps for new-style datasources: added steps not present in http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource because they were performed all at once for all datasources
/README.TXT: Datasource setup: added additional steps for new-style datasources, from http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource
bugfix: /README.TXT: to backup files not in Time Machine: need to use -E option to sudo to preserve env, after installing the latest system update
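    for reference (the wrapped command is hypothetical):
        sudo -E rsync -a src/ dest/   # -E preserves the caller's env vars, which sudo otherwise resets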
/README.TXT: Single datasource import: removed rescrub step because this is not needed by the current TNRS process
/README.TXT: Full database import: added Running individual steps separately label for the section that is not part of the main import, but is useful if the import is aborted part of the way through
/README.TXT: moved Single datasource import, Datasource setup to top since these are the most important howtos
bugfix: bin/after_import: run backups/fix_perms right after the backup files are created to make them private
/README.TXT: Full database import: Publish the new import: added runtime (1 min)
/README.TXT: Full database import: time to wait for the import to finish: updated to time in inputs/import.stats.xls
bin/import_all: added step to remove any leftover TNRS lockfile (previously done manually)
bugfix: /README.TXT: on a live machine, you should put the following in your .profile: need to make svn files web-accessible, because they are used by fs.vegpath.org links (such as the ERD). note that this does not affect unversioned files, because those get the right permissions on the local machine instead (see Testing > On a development machine, you should put the following in your .profile).
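    one plausible .profile line, assuming "web-accessible" means group/other-readable (the actual line is not quoted here):
        umask 022   # files svn creates become world-readable, so the web server can serve them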
/README.TXT: to backup files not in Time Machine: added command to start the PostgreSQL server
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: don't upload ~/.profile, etc. to jupiter because these files are different on each machine. they can instead be synced manually.
/README.TXT: to backup files not in Time Machine: added command to stop the PostgreSQL server
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: noted that ./fix_perms should be run on all machines
bugfix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added step to run `make backups/TNRS.backup/download live=1`, because bin/sync_upload does not sync this due to filters in backups/.rsync_filter.download
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: added step to run ./fix_perms so that there are fewer permissions diffs to review
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: upload: `(cd ~/Dropbox/svn/; svn up)`: use `up` instead so that the needed --force option is applied
/README.TXT: Single datasource import: run commands in the background, since these are long-running commands
/README.TXT: Full database import: fixing TNRS errors: noted that inputs/test_taxonomic_names/test_scrub re-runs TNRS
/README.TXT: Full database import: fixing TNRS errors: updated instructions for new TNRS schema editing workflow
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to upload backups to jupiter
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to remake backups/TNRS.backup
/README.TXT: Full database import: min disk space: updated import schema size for last import
/README.TXT: Full database import: tailing inputs/analytical_db/logs/make_analytical_db.log.sql: increased # lines to 150 to include all lines for the last run
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: bin/after_import: need to run it in the background
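    the background invocation presumably looks like (arguments omitted, as they are not given here):
        bin/after_import ... &   # & keeps the long-running step from blocking the shell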
/README.TXT: Full database import: To restart an aborted import for a specific table: added step to run bin/after_import
bugfix: /README.TXT: Full database import: To restart an aborted import for a specific table: added by_col=1
/README.TXT: Full database import: added steps to restart an aborted import for a specific table
bin/import_all: use column-based import (by_col=1) by default instead of requiring the user to specify it explicitly; for row-based import, turn it off explicitly (by_col=)
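    one idiomatic way to implement that default (whether import_all does exactly this is an assumption): `${var=default}` assigns only when the var is unset, so an explicit empty `by_col=` still disables column-based mode:
        : "${by_col=1}"   # default on; `by_col= bin/import_all` still turns it off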
bugfix: /README.TXT: Full database import: To back up DB: after renaming current import to public: say to replace $version with the appropriate revision, because the $version env var should not be set (otherwise the backup will try to use a nonexistent import with the given revision #)
/README.TXT: Full database import: To back up DB: updated instructions to inline setting of $dump_opts, like in bin/import_all
/README.TXT: Full database import: don't exit the screen until after getting $version, which is defined within it
/README.TXT: Full database import: make test by_col=1: documented that if you encounter errors, they are most likely related to the PostgreSQL error parsing in /lib/sql.py parse_exception()