inputs/WIN/Specimen/unmapped_terms.csv: updated
inputs/import.stats.xls: updated import times
/README.TXT: Full database import: time to wait for the import to finish: updated to the time in inputs/import.stats.xls
bugfix: bin/import_all: `rm inputs/.TNRS/tnrs/tnrs.make.lock`: need to use `"rm"` instead of `rm` so that we don't use any rm alias the user might have in their shell (import_all is run in the calling shell so that the jobs are owned by the calling shell)
bugfix: mappings/VegCore-VegBIEN.csv: don't map datasetURL to source.url for taxa-only data (this mapping should only occur for Source tables)
bin/import_all: added step to remove any leftover TNRS lockfile (previously done manually)
planning/timeline/timeline.2013.xls: updated for progress
bugfix: lib/sql_io.py: put_table(): Getting output table pkeys of existing/inserted rows: need to include the index cond in the join condition here, too (using var join_custom_cond), so that an index scan can be used instead of a much slower full-table sort
bugfix: schemas/vegbien.sql: locationevent: locationevent_unique_within_location unique index: added COALESCE expression around location_id since it is nullable; the left and right sides of the join must match the indexed expression exactly for an index scan to be used
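For example, a minimal sketch of this pattern (the sentinel value and the second index column are assumptions, not taken from the schema):

    -- the unique index must COALESCE the nullable column:
    CREATE UNIQUE INDEX locationevent_unique_within_location
        ON locationevent (COALESCE(location_id, 2147483647), obsstartdate);
    -- ...and a lookup can use an index scan only if it repeats that
    -- expression verbatim on both sides:
    SELECT locationevent_id FROM locationevent
    WHERE COALESCE(location_id, 2147483647) = COALESCE($1, 2147483647)
        AND obsstartdate = $2;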
bugfix: lib/sql_io.py: put_table(): DuplicateKeyException: need to include any index cond in the join condition, so that an index scan can be used instead of a much slower full-table sort (otherwise the query planner will not know that it can restrict results to rows satisfying the index cond)
lib/sql_gen.py: Join: added custom_cond param that can be used to add to the JOIN condition
lib/sql.py: distinct_table(): support custom filters on the distincting query
lib/sql_gen.py: ColValueCond: support conds that are just plain SQL (without separate left and right sides) using special custom_cond flag value
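To illustrate the index-cond entries above (a sketch; the tables, columns, and predicate are all hypothetical): a partial index is usable only when a query's conditions imply its predicate, which a bare equality join does not.

    CREATE INDEX out_table_col_idx ON out_table (col) WHERE NOT deleted;

    -- without the index cond, the planner cannot use the index and
    -- falls back to a much slower full-table sort:
    SELECT out_table.pkey FROM in_table JOIN out_table USING (col);

    -- with the index cond appended to the join (what join_custom_cond
    -- does), an index scan becomes available:
    SELECT out_table.pkey
    FROM in_table JOIN out_table
        ON out_table.col = in_table.col AND NOT out_table.deleted;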
bugfix: inputs/input.Makefile: %/test: in by_col mode, also need to run %/test.by_col.xml
lib/sql_io.py: ensure_cond(): documented meaning of passed, failed params (at least one row passed/failed the constraint)
fix: web/links/index.htm: PostgreSQL: vacuuming: moved info about autovacuum process being aborted to correct bookmark
web/links/index.htm: updated to match current Firefox bookmarks. PostgreSQL: added info on vacuuming and analyzing.
lib/runscripts/util.run: usage: documented that this usage also applies to all files that include this file
lib/runscripts/util.run: usage: clarified that the cmd to run is a function
added schemas/postgresql*.conf.run, which installs these config files and takes care of restarting the server
added lib/runscripts/pg.conf.run, which installs PostgreSQL config files
added lib/runscripts/install.run, analogous to import.run
fix: schemas/postgresql*.conf: turn on autovacuum logging (log_autovacuum_min_duration = 0) so we can verify if autovacuum is happening
bugfix: lib/db_xml.py: put_table(): turned off db.autoanalyze, since forcing an ANALYZE after every bulk insert is inefficient for small datasources. the default autovacuum settings in schemas/postgresql.conf should be fine; however, autovacuum's frequency may need to be increased (and/or its thresholds lowered) if it does not ANALYZE often enough to replace db.autoanalyze.
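If tuning does become necessary, per-table storage parameters are one option (a sketch; the table name and values are illustrative, not the project's settings):

    -- make autovacuum ANALYZE this table after ~2% of its rows change:
    ALTER TABLE some_large_table SET (
        autovacuum_analyze_scale_factor = 0.02,
        autovacuum_analyze_threshold    = 1000
    );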
/run: taxon_trait/make(): order by scientificName, measurementType as described in the taxon_trait table comment
lib/sh/db.sh: mk_select(): added support for ORDER BY
/run: added taxon_trait/make()
lib/sh/db.sh: added pg_export_table_to_dir(), analogous to pg_export_table_to_dir_no_header()
schemas/vegbien.sql: taxon_trait: added query to use to export
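The export query is presumably of this shape, given the ORDER BY documented above (the rest is an assumption):

    SELECT * FROM taxon_trait ORDER BY "scientificName", "measurementType";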
inputs/NVS/*/map.csv: Taxon Growth Form: mapped to VegBIEN.growthform enum, using http://www.fgdc.gov/standards/projects/FGDC-standards-projects/vegetation/NVCS_V2_FINAL_2008-02.pdf#page=83&section.page=76 . documented values used by each table.
lib/sh/util.sh: is_array(): handle unset vars (=false). this fixes a bug in pg_export_table_no_header, which produced the error "lib/sh/util.sh: line 290: declare: cols: not found".
fix: lib/sh/util.sh: join(): documented that delim must be a single char
bugfix: /README.TXT: on a live machine, you should put the following in your .profile: need to make svn files web-accessible, because these are used by fs.vegpath.org links (such as to the ERD, etc.). note that this does not affect unversioned files, because these get the right permissions on the local machine instead (see Testing > On a development machine, you should put the following in your .profile).
/README.TXT: to backup files not in Time Machine: added command to start the PostgreSQL server
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: don't upload ~/.profile, etc. to jupiter because these files are different on each machine. they can instead be synced manually.
/README.TXT: to backup files not in Time Machine: added command to stop the PostgreSQL server
/README.TXT: to synchronize vegbiendev, jupiter, and your local machine: noted that ./fix_perms should be run on all machines
removed unused dir analysis/ (originally requested by Jim)
bugfix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added step to run `make backups/TNRS.backup/download live=1`, because bin/sync_upload does not sync this due to filters in backups/.rsync_filter.download
/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: added step to run ./fix_perms so that there are fewer permissions diffs to review
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: upload: `(cd ~/Dropbox/svn/; svn up)`: use `up` instead so that the needed --force option is applied
inputs/VegBank/*/postprocess.sql: added primary keys to derived tables
schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
schemas/VegCore/VegCore.ERD.mwb: individual_observation.individual: documented that it is optional because an individual_observation cannot have an associated individual unless the individual is traceable to a specific plant
schemas/VegCore/VegCore.ERD.mwb: individual_observation.place_observed_at: made it optional because some individual_observations (e.g. of the plant a specimen was collected from) may be missing location information. however, an individual_observation cannot have an associated individual unless the individual is traceable to a specific plant.
schemas/VegCore/VegCore.ERD.mwb: specimen: added individual_observation, which stores observations about the plant the specimen was collected from. (some specimens may not be traceable to a reobservable individual, but will still have these plant observations.) specimen_observation: adjusted position to fully display the HAS-A connector to specimen.
planning/timeline/timeline.2013.xls: updated for progress. rebalanced dots.
planning/timeline/timeline.2013.xls: added separate task for Individual datasource refresh (separate from Individual datasource removal), because we also need to optimize the reload of datasources. the reload is most likely slow because rows are being added to very large tables.
/README.TXT: Single datasource import: run commands in the background, since they are long-running
planning/timeline/timeline.2013.xls: moved Attribution and conditions of use before Flatten the datasources as suggested in meeting with Mark
schemas/vegbien.sql: datasource_rm(): runtime: added runtime of MO (55 min, 0.85 ms/row), which has a much larger # of rows than ACAD (4 million instead of 45,000). updated GBIF runtime estimate (~13 h) with more accurate ms/row from MO.
schemas/vegbien.sql: datasource_rm(): estimated runtime for GBIF (~10 h). note that this is still significantly shorter than the import time (3.4 days).
schemas/vegbien.sql: datasource_rm(): documented how to calculate runtime
schemas/vegbien.sql: datasource_rm(): documented runtime for ACAD: 30 s; 0.61 ms/row
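The calculation reduces to per-row cost × row count; checking it against MO's figures above:

    SELECT 4000000 * 0.85 * interval '1 ms' AS est_runtime;
    -- ≈ 57 min, close to the observed 55 min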
inputs/input.Makefile: rm: use new datasource_rm(), which encapsulates the schema-specific aspects of removing a datasource
bugfix: schemas/vegbien.sql: datasource_rm(): set_config(): don't name the is_local param because it is not a named parameter
schemas/vegbien.sql: added datasource_rm(). this uses an internal schema-scoping parameter to ensure that the function always operates on tables in the schema it was defined in, rather than tables in the search_path. this ensures that when the public schema is renamed (e.g. from an imported version), the function will continue to operate on its own schema rather than whichever schema happens to be called public. this avoids any surprises if you are trying to remove a datasource in one schema, and don't want it to unintentionally be removed in another schema instead.
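A minimal sketch of that pattern (the body, column names, and helper call are assumptions pieced together from the surrounding entries): the anchor parameter's row type is resolved when the function is created, so its schema can be recovered at call time no matter what the schema is named by then.

    CREATE FUNCTION datasource_rm(datasource text,
        schema_anchor source = NULL) -- type `source` bound at CREATE time
        RETURNS void AS
    $$
    BEGIN
        -- is_local must be passed positionally (see the bugfix above)
        PERFORM set_config('search_path',
            util.schema_ident(pg_typeof(schema_anchor)), true);
        DELETE FROM source WHERE shortname = datasource;
    END;
    $$ LANGUAGE plpgsql;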
schemas/util.sql: added schema_ident()
schemas/util.sql: added schema(regtype), schema(anyelement)
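Plausible shapes for these helpers (sketches, not the actual util.sql definitions):

    CREATE FUNCTION util.schema(type regtype) RETURNS text AS
    $$ SELECT nspname::text FROM pg_type
       JOIN pg_namespace ON pg_namespace.oid = pg_type.typnamespace
       WHERE pg_type.oid = $1 $$ LANGUAGE sql STABLE;

    CREATE FUNCTION util.schema(value anyelement) RETURNS text AS
    $$ SELECT util.schema(pg_typeof($1)) $$ LANGUAGE sql STABLE;

    -- schema_ident(): presumably the same, quoted for embedding in SQL:
    CREATE FUNCTION util.schema_ident(type regtype) RETURNS text AS
    $$ SELECT quote_ident(util.schema($1)) $$ LANGUAGE sql STABLE;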
inputs/.TNRS/schema.sql: added covering indexes on foreign keys where needed. this enables rows to be cascadingly deleted without a full table scan.
schemas/vegbien.sql: added covering indexes on foreign keys where needed. this enables rows to be cascadingly deleted without a full table scan.
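The pattern (names are placeholders): PostgreSQL automatically indexes the referenced side of a foreign key (via its unique constraint) but not the referencing side, so each cascaded delete otherwise scans the whole child table.

    CREATE TABLE parent (parent_id integer PRIMARY KEY);
    CREATE TABLE child (
        child_id  integer PRIMARY KEY,
        parent_id integer REFERENCES parent ON DELETE CASCADE
    );
    -- the covering index; without it, every DELETE FROM parent
    -- triggers a sequential scan of child:
    CREATE INDEX child_parent_id_idx ON child (parent_id);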
planning/timeline/timeline.2013.xls: increased font size for better readability at 100% (which is also the printed size). note that the timeline is normally zoomed in, so you don't see the actual font size.
inputs/.TNRS/schema.sql: tnrs: instructions for when changing this table's schema: updated to use new `inputs/.TNRS/data.sql.run refresh`
inputs/.TNRS/data.sql.run: added refresh() target which runs inputs/test_taxonomic_names/test_scrub
inputs/test_taxonomic_names/test_scrub: added step to update inputs/.TNRS/data.sql to the now-refreshed TNRS sample data (this updating step is now automated)
inputs/.TNRS/schema.sql: tnrs: updated steps to run when changing this table's schema, to use new TNRS editing workflow
inputs/.TNRS/data.sql: re-ran TNRS using `inputs/test_taxonomic_names/test_scrub; rm=1 inputs/.TNRS/data.sql.run export_`
/README.TXT: Full database import: fixing TNRS errors: noted that inputs/test_taxonomic_names/test_scrub re-runs TNRS
/README.TXT: Full database import: fixing TNRS errors: updated instructions for new TNRS schema editing workflow
inputs/.TNRS/data.sql: generate from the DB using `rm=1 inputs/.TNRS/data.sql.run export_` instead of being a hand-edited file
added inputs/.TNRS/data.sql.run for syncing data.sql directly with the DB without needing to use inputs/test_taxonomic_names/test_scrub just to export the sample data. (however, when modifying the tnrs table, it may still be easier to generate new sample data using test_scrub rather than refactoring the table in place.)
added lib/runscripts/data.pg.sql.run (analogous to schema.pg.sql.run for data-only SQL scripts)
added lib/runscripts/file.pg.sql.run and use it in schema.pg.sql.run
added lib/runscripts/schema.pg.sql.run and use it in inputs/.TNRS/schema.sql.run
inputs/.TNRS/schema.sql: generate from the DB using `rm=1 inputs/.TNRS/schema.sql.run export_` instead of being a hand-edited file. this makes it much easier to edit the (now frequently-changing) TNRS schema directly in pgAdmin (which is graphical), rather than having to manually copy SQL changes from pgAdmin to the file.
inputs/.TNRS/schema.sql.run: export_(): added usage
added inputs/.TNRS/schema.sql.run, which syncs schema.sql with the DB
bugfix: lib/sh/db.sh: pg_dump(): don't default $struct flag to on, because both structure and data should be printed by default
lib/sh/db.sh: pg_dump(): added create_schema= flag to remove CREATE SCHEMA statements (useful if the schema already exists)
bugfix: lib/sh/util.sh: set_fds(): remove empty redirects resulting from using `redirs= cmd...` to clear the redirs and then using $redirs as an array
fix: lib/sh/util.sh: set_fds(): documented that it does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
bugfix: lib/sh/util.sh: stdout2fd(): don't add >&$fd redirect if the fd is 1, because redir does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
lib/sh/util.sh: filter_fd(): factored out >() subshell command into stdout2fd() for clarity
bugfix: lib/sh/util.sh: redir(): unset redirs so that you don't redirect again in the invoked command
fix: lib/sh/util.sh: filter_fd(): documented that ${redirs[@]} must not be set to an outer value
fix: inputs/ARIZ/omoccurrences/map.csv: occurrenceID: remapped to EQUIV#to:occid instead of DUPLICATE#of:occid since these are not exact duplicates
lib/runscripts/util.run: added to_top_file alias for use with $top_file
lib/sh/local.sh: added pg_dump_local()
lib/sh/db.sh: added pg_dump(), using the code in bin/pg_dump_vegbien with clarity improvements
lib/sh/db.sh: added pg_cmd() (analogous to mysql_cmd() for PostgreSQL), and use it in psql(), so that other PostgreSQL operations can use this to set the PG* connection/login vars
planning/timeline/timeline.2013.xls: updated dots for new priority order
planning/timeline/timeline.2013.xls: moved optimization of individual datasource removal before flattening the datasources to a common schema as suggested in meeting with Mark
lib/runscripts/datasrc_dir.run: include of import.run: use .rel instead of `. "$(dirname "${BASH_SOURCE[0]}")"/...`
lib/runscripts/datasrc_dir.run: moved commands related to any runscript in the datasrc dir to new in_datasrc_dir.run
inputs/*/Specimen/test.xml.ref with eventDate->dateCollected mappings: updated test outputs to match mapping
some inputs/*/*/unmapped_terms.csv: updated now that datasetURL is mapped (this does not affect the mappings because it is only mapped for Source tables)
bugfix: inputs/ARIZ/omoccurrences/map.csv: fixed one-to-many mapping for modified (created by the automapper?)
lib/sh/db.sh: pg_export(): added usage
inputs/.TNRS/schema.sql: moved source code comments to in-schema COMMENT ON comments so all the info in schema.sql is in the DB
inputs/.TNRS/schema.sql: views that use * as the column list: added comments to indicate that this is the case, so that the views can be updated in place rather than only by reinstalling the TNRS schema
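For example (the view name and comment wording are hypothetical):

    -- in-schema comments replace source-code comments:
    COMMENT ON TABLE tnrs IS 'to change this table''s schema: ...';
    -- views defined with * say so, so they can be updated in place
    -- instead of only by reinstalling the TNRS schema:
    COMMENT ON VIEW tnrs_accepted IS
        'columns: tnrs.* (update this view in place when tnrs changes)';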