/README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: added step to run ./fix_perms so that there are fewer permissions diffs to review
bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: upload: `(cd ~/Dropbox/svn/; svn up)`: use `up` instead so that the needed --force option is applied
inputs/VegBank/*/postprocess.sql: added primary keys to derived tables
schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
schemas/VegCore/VegCore.ERD.mwb: individual_observation.individual: documented that it is optional because an individual_observation cannot have an associated individual unless the individual is traceable to a specific plant
schemas/VegCore/VegCore.ERD.mwb: individual_observation.place_observed_at: made it optional because some individual_observations (e.g. of the plant a specimen was collected from) may be missing location information. however, an individual_observation cannot have an associated individual unless the individual is traceable to a specific plant.
schemas/VegCore/VegCore.ERD.mwb: specimen: added individual_observation, which stores observations about the plant the specimen was collected from. (some specimens may not be traceable to a reobservable individual, but will still have these plant observations.) specimen_observation: adjusted position to fully display the HAS-A connector to specimen.
planning/timeline/timeline.2013.xls: updated for progress. rebalanced dots.
planning/timeline/timeline.2013.xls: added separate task for Individual datasource refresh (separate from Individual datasource removal), because we also need to optimize the reload of datasources. the reload is most likely slow because rows are being added to very large tables.
/README.TXT: Single datasource import: run commands in the background, since these are long-running commands
planning/timeline/timeline.2013.xls: moved Attribution and conditions of use before Flatten the datasources as suggested in meeting with Mark
schemas/vegbien.sql: datasource_rm(): runtime: added runtime of MO (55 min, 0.85 ms/row), which has a much larger # of rows than ACAD (4 million instead of 45,000). updated GBIF runtime estimate (~13 h) with more accurate ms/row from MO.
schemas/vegbien.sql: datasource_rm(): estimated runtime for GBIF (~10 h). note that this is still significantly shorter than the import time (3.4 days).
schemas/vegbien.sql: datasource_rm(): documented how to calculate runtime
schemas/vegbien.sql: datasource_rm(): documented runtime for ACAD: 30 s; 0.61 ms/row
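For reference, a rough sketch of the runtime calculation these entries document (the table and column names below are illustrative, not the actual VegBIEN schema; the ms/row figures are the measured values above):

    -- estimate datasource_rm() runtime as (# of rows) * (measured ms/row):
    --   ACAD:    45,000 rows * 0.61 ms/row ~= 27 s   (measured: 30 s)
    --   MO:   4,000,000 rows * 0.85 ms/row ~= 57 min (measured: 55 min)
    SELECT count(*) * 0.85 /*ms/row*/ / 1000 / 60 AS est_minutes
    FROM taxonoccurrence        -- illustrative table
    WHERE datasource = 'GBIF';  -- illustrative column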
inputs/input.Makefile: rm: use new datasource_rm(), which encapsulates the schema-specific aspects of removing a datasource
bugfix: schemas/vegbien.sql: datasource_rm(): set_config(): don't name the is_local param because it is not a named parameter
schemas/vegbien.sql: added datasource_rm(). this uses an internal schema-scoping parameter to ensure that the function always operates on tables in the schema it was defined in, rather than tables in the search_path. this ensures that when the public schema is renamed (e.g. from an imported version), the function will continue to operate on its own schema rather than whichever schema happens to be called public. this avoids any surprises if you are trying to remove a datasource in one schema, and don't want it to unintentionally be removed in another schema instead.
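One way to implement this, as a hedged sketch (the actual function in schemas/vegbien.sql is more involved; the source table, its shortname column, and the schema_anchor parameter are illustrative): a parameter typed after a table in the defining schema is resolved to that table's type at CREATE FUNCTION time, so the function can look up its own schema at call time even after the schema is renamed.

    CREATE FUNCTION datasource_rm(datasource text, schema_anchor source DEFAULT NULL)
    RETURNS void AS $$
    BEGIN
        -- make the defining schema the effective search_path for this transaction,
        -- so the DELETE below hits that schema's tables rather than the search_path's
        PERFORM set_config('search_path',
            -- util.schema(anyelement) (added below) is assumed to return the schema
            -- of its argument's type
            quote_ident(util.schema(schema_anchor)),
            /*is_local=*/true);  -- is_local is positional, not named (see the bugfix above)
        DELETE FROM source WHERE shortname = datasource;  -- cascades to the datasource's rows
    END;
    $$ LANGUAGE plpgsql;

    -- usage (e.g. from inputs/input.Makefile's rm target):
    SELECT datasource_rm('ACAD');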
schemas/util.sql: added schema_ident()
schemas/util.sql: added schema(regtype), schema(anyelement)
inputs/.TNRS/schema.sql: added covering indexes on foreign keys where needed. this enables rows to be cascadingly deleted without a full table scan.
schemas/vegbien.sql: added covering indexes on foreign keys where needed. this enables rows to be cascadingly deleted without a full table scan.
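For example (illustrative names; PostgreSQL does not automatically index the referencing column of a foreign key, so without such an index each cascaded DELETE from the parent table must sequentially scan the child table):

    -- child.parent_id REFERENCES parent (id) ON DELETE CASCADE
    CREATE INDEX child__parent_id ON child (parent_id);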
planning/timeline/timeline.2013.xls: increased font size for better readability at 100% (which is also the printed size). note that the timeline is normally zoomed in, so you don't see the actual font size.
inputs/.TNRS/schema.sql: tnrs: instructions for when changing this table's schema: updated to use new `inputs/.TNRS/data.sql.run refresh`
inputs/.TNRS/data.sql.run: added refresh() target which runs inputs/test_taxonomic_names/test_scrub
inputs/test_taxonomic_names/test_scrub: added step to update inputs/.TNRS/data.sql to the now-refreshed TNRS sample data (this updating step is now automated)
inputs/.TNRS/schema.sql: tnrs: updated steps to run when changing this table's schema, to use new TNRS editing workflow
inputs/.TNRS/data.sql: re-ran TNRS using `inputs/test_taxonomic_names/test_scrub; rm=1 inputs/.TNRS/data.sql.run export_`
/README.TXT: Full database import: fixing TNRS errors: noted that inputs/test_taxonomic_names/test_scrub re-runs TNRS
/README.TXT: Full database import: fixing TNRS errors: updated instructions for new TNRS schema editing workflow
inputs/.TNRS/data.sql: generate from the DB using `rm=1 inputs/.TNRS/data.sql.run export_` instead of being a hand-edited file
added inputs/.TNRS/data.sql.run for syncing data.sql directly with the DB without needing to use inputs/test_taxonomic_names/test_scrub just to export the sample data. (however, when modifying the tnrs table, it may still be easier to generate new sample data using test_scrub rather than refactoring the table in place.)
added lib/runscripts/data.pg.sql.run (analogous to schema.pg.sql.run for data-only SQL scripts)
added lib/runscripts/file.pg.sql.run and use it in schema.pg.sql.run
added lib/runscripts/schema.pg.sql.run and use it in inputs/.TNRS/schema.sql.run
inputs/.TNRS/schema.sql: generate from the DB using `rm=1 inputs/.TNRS/schema.sql.run export_` instead of being a hand-edited file. this makes it much easier to edit the (now frequently-changing) TNRS schema directly in pgAdmin (which is graphical), rather than having to manually copy SQL changes from pgAdmin to the file.
inputs/.TNRS/schema.sql.run: export_(): added usage
added inputs/.TNRS/schema.sql.run, which syncs schema.sql with the DB
bugfix: lib/sh/db.sh: pg_dump(): don't default $struct flag to on, because both structure and data should be printed by default
lib/sh/db.sh: pg_dump(): added create_schema= flag to remove CREATE SCHEMA statements (useful if the schema already exists)
bugfix: lib/sh/util.sh: set_fds(): remove empty redirects resulting from using `redirs= cmd...` to clear the redirs and then using $redirs as an array
fix: lib/sh/util.sh: set_fds(): documented that it does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
bugfix: lib/sh/util.sh: stdout2fd(): don't add >&$fd redirect if the fd is 1, because redir does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
lib/sh/util.sh: filter_fd(): factored out >() subshell command into stdout2fd() for clarity
bugfix: lib/sh/util.sh: redir(): unset redirs so that you don't redirect again in the invoked command
fix: lib/sh/util.sh: filter_fd(): documented that ${redirs[@]} must not be set to an outer value
fix: inputs/ARIZ/omoccurrences/map.csv: occurrenceID: remapped to EQUIV#to:occid instead of DUPLICATE#of:occid since these are not exact duplicates
lib/runscripts/util.run: added to_top_file alias for use with $top_file
lib/sh/local.sh: added pg_dump_local()
lib/sh/db.sh: added pg_dump(), using the code in bin/pg_dump_vegbien with clarity improvements
lib/sh/db.sh: added pg_cmd() (analogous to mysql_cmd() for PostgreSQL), and use it in psql(), so that other PostgreSQL operations can use this to set the PG* connection/login vars
planning/timeline/timeline.2013.xls: updated dots for new priority order
planning/timeline/timeline.2013.xls: moved optimization of individual datasource removal before flattening the datasources to a common schema as suggested in meeting with Mark
lib/runscripts/datasrc_dir.run: include of import.run: use .rel instead of `. "$(dirname "${BASH_SOURCE[0]}")"/...`
lib/runscripts/datasrc_dir.run: moved commands related to any runscript in the datasrc dir to new in_datasrc_dir.run
inputs/*/Specimen/test.xml.ref with eventDate->dateCollected mappings: updated test outputs to match mapping
some inputs/*/*/unmapped_terms.csv: updated now that datasetURL is mapped (this does not affect the mappings because it is only mapped for Source tables)
bugfix: inputs/ARIZ/omoccurrences/map.csv: fixed one-to-many mapping for modified (created by the automapper?)
lib/sh/db.sh: pg_export(): added usage
inputs/.TNRS/schema.sql: moved source code comments to in-schema COMMENT ON comments so all the info in schema.sql is in the DB
inputs/.TNRS/schema.sql: views that use * as the column list: added comments to indicate that this is the case, so that the views can be updated in place rather than only by reinstalling the TNRS schema
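The added comments are presumably along these lines (hypothetical view name):

    COMMENT ON VIEW tnrs_populated_fields IS '
    trivial view: column list is *, so after changing tnrs''s columns this view can be
    updated in place with CREATE OR REPLACE VIEW instead of reinstalling the TNRS schema
    ';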
updated backups/TNRS.backup.md5
planning/timeline/timeline.2013.xls: clarified note about the purpose of the dots
added backups/vegbien.r10548.backup.md5
bugfix: backups/: svn:ignore: removed *.md5, which should be under version control
inputs/input.Makefile: scrub: documented that using & (background process) ignores TNRS errors, so that even if TNRS can't be run, TNRS bugs do not prevent the remaining tables from being imported
inputs/.TNRS/schema.sql: tnrs: util.set_col_types() runtime: updated for most recent ALTER COLUMN TYPE command (9 min)
inputs/.TNRS/schema.sql: tnrs.Time_submitted: renamed to batch and added fkey to batch.id. this requires including the batch table in inputs/.TNRS/data.sql, so that the fkey is satisfied (batch entries are already added by bin/tnrs_db).
updated backups/TNRS.backup
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to upload backups to jupiter
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to remake backups/TNRS.backup
bin/tnrs_db: add entry to new batch table
inputs/.TNRS/schema.sql: batch: reset name of id_by_time unique constraint since this field is now in the batch table
inputs/.TNRS/schema.sql: download_settings: renamed to batch_download_settings because this table is actually specific to the batch, and it does not make sense to have a download settings file without a batch
inputs/.TNRS/schema.sql: download_settings.id: added fkey to batch.id to create a 1:1 relationship with optional participation by download_settings. note that this relationship happens to be the same as SQL inheritance, as used in VegCore, but in this case, the 1:1 relationship is not related to inheritance.
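A minimal sketch of that 1:1 relationship, assuming text ids (see inputs/.TNRS/schema.sql for the real definitions):

    CREATE TABLE batch (
        id text PRIMARY KEY
    );
    CREATE TABLE download_settings (
        -- pkey doubling as fkey: at most one download_settings row per batch (1:1),
        -- with optional participation: a batch need not have a download_settings row
        id text PRIMARY KEY REFERENCES batch (id)
    );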
inputs/.TNRS/schema.sql: client_version: added table, column comments with info on how to retrieve each value
inputs/.TNRS/schema.sql: added client_version table for svn revisions, with fkey from batch
inputs/.TNRS/schema.sql: added batch table and moved download_settings.time_submitted, id_by_time to it since these are not related to the download_settings file
fix: planning/timeline/timeline.2013.xls: Switching to new-style import: updated hyperlink
planning/timeline/timeline.2013.xls: moved Individual datasource removal under Streamline process of mapping and adding a new datasource
planning/timeline/timeline.2013.xls: added note that the purpose of the dots is to show which tasks should be worked on. in some cases, they are also proportional to a task's complexity, but this may not hold if e.g. the task was given different priorities in different months, or was worked on in varying amounts.
fix: planning/timeline/timeline.2013.xls: matched supertask status to subtask status
planning/timeline/timeline.2013.xls: made Switching to new-style import a subtask of Streamline process of mapping and adding a new datasource because new-style import automates many of the datasource-mapping tasks that previously needed to be done by hand
planning/timeline/timeline.2013.xls: reordered for priorities and to-do assignments from last conference call (wiki.vegpath.org/2013-08-22_conference_call#Decisions-made)
planning/timeline/timeline.2013.xls: updated for August progress and recently-added tasks
inputs/.TNRS/schema.sql: added VegCore-style id column as the primary key, instead of using time_submitted directly. this enables always using the same name for the pkey. the pkey is now autopopulated from time_submitted in a trigger, using helper column id_by_time. the user is now also able to specify their own globally-unique ID that is not based on the time_submitted.
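A rough sketch of the autopopulation (simplified; assumes id and id_by_time are text columns of download_settings, where they lived before being moved to batch, above):

    -- id_by_time is assumed to be derived from time_submitted (e.g. by a column default);
    -- the trigger only fills in id when the user didn't supply their own globally-unique ID
    CREATE FUNCTION download_settings_id_default()
    RETURNS trigger AS $$
    BEGIN
        NEW.id = COALESCE(NEW.id, NEW.id_by_time);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER id_default BEFORE INSERT OR UPDATE ON download_settings
    FOR EACH ROW EXECUTE PROCEDURE download_settings_id_default();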
inputs/.TNRS/schema.sql: download_settings comment: changed name of button to Download settings, which had gotten auto-replaced to download_settings
inputs/.TNRS/schema.sql: Download settings table: renamed to download_settings because although Download settings is the verbatim name of the button that this info comes from, it is not necessary to name the table a particular way in order to match up data to it correctly, so we can just use the standard naming convention (wiki.vegpath.org/u-name#format) and avoid the need to enclose the name in ""
inputs/.TNRS/schema.sql: added Download settings table, which stores data from http://tnrs.iplantcollaborative.org/TNRSapp.html > Submit List > results section > Download settings > settings.txt
inputs/.TNRS/Source/map.csv: mapped datasetURL
inputs/.geoscrub/Source/map.csv: mapped datasetURL
mappings/VegCore-VegBIEN.csv: mapped datasetURL
fix: mappings/VegCore-VegBIEN.csv: source__modified_date: remapped to pubdate instead of datelastmodified because this is actually metadata for the source itself, rather than for the VegBIEN record of the source
fix: inputs/.geoscrub/Source/map.csv: source__modified_date: use the mtime of the CSV file instead, since this is closer to the actual version of the biengeo code at the time it was run
inputs/.geoscrub/Source/map.csv: mapped source__modified_date. note that the test must be run with inputs/.geoscrub/Source/run instead of `make inputs/.geoscrub/Source/test` to add these metadata columns to the staging table.
mappings/VegCore-VegBIEN.csv: mapped source__modified_date (different from vegcore.vegpath.org?modified, which is for the data record)
mappings/VegCore.htm: regenerated from wiki. added source__version (= edition), source__modified_date.
bugfix: schemas/util.sql: set_col_names_with_metadata(): rename any metadata cols rather than re-adding them with new names
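i.e. roughly (illustrative names):

    -- rename the existing metadata column in place instead of re-adding it under the new name:
    ALTER TABLE tbl RENAME COLUMN "old metadata col" TO "new metadata col";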
mappings/VegCore-VegBIEN.csv: mapped edition