inputs/test_taxonomic_names/test_scrub: added step to update inputs/.TNRS/data.sql to match the now-refreshed TNRS sample data (this updating step is now automated)
inputs/.TNRS/schema.sql: tnrs: updated the steps to run when changing this table's schema, to use the new TNRS schema-editing workflow
inputs/.TNRS/data.sql: re-ran TNRS using `inputs/test_taxonomic_names/test_scrub; rm=1 inputs/.TNRS/data.sql.run export_`
/README.TXT: Full database import: fixing TNRS errors: noted that inputs/test_taxonomic_names/test_scrub re-runs TNRS
/README.TXT: Full database import: fixing TNRS errors: updated instructions for the new TNRS schema-editing workflow
inputs/.TNRS/data.sql: now generated from the DB using `rm=1 inputs/.TNRS/data.sql.run export_` instead of being a hand-edited file
added inputs/.TNRS/data.sql.run for syncing data.sql directly with the DB without needing to use inputs/test_taxonomic_names/test_scrub just to export the sample data. (however, when modifying the tnrs table, it may still be easier to generate new sample data using test_scrub rather than refactoring the table in place.)
added lib/runscripts/data.pg.sql.run (analogous to schema.pg.sql.run, but for data-only SQL scripts)
added lib/runscripts/file.pg.sql.run and use it in schema.pg.sql.run
added lib/runscripts/schema.pg.sql.run and use it in inputs/.TNRS/schema.sql.run
inputs/.TNRS/schema.sql: now generated from the DB using `rm=1 inputs/.TNRS/schema.sql.run export_` instead of being a hand-edited file. this makes it much easier to edit the (now frequently-changing) TNRS schema directly in pgAdmin (which is graphical), rather than having to manually copy SQL changes from pgAdmin into the file.
inputs/.TNRS/schema.sql.run: export_(): added usage
added inputs/.TNRS/schema.sql.run, which syncs schema.sql with the DB
bugfix: lib/sh/db.sh: pg_dump(): don't default $struct flag to on, because both structure and data should be printed by default
lib/sh/db.sh: pg_dump(): added create_schema= flag to remove CREATE SCHEMA statements (useful if the schema already exists)
bugfix: lib/sh/util.sh: set_fds(): remove empty redirects resulting from using `redirs= cmd...` to clear the redirs and then using $redirs as an array
fix: lib/sh/util.sh: set_fds(): documented that it does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
bugfix: lib/sh/util.sh: stdout2fd(): don't add >&$fd redirect if the fd is 1, because redir does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
lib/sh/util.sh: filter_fd(): factored out >() subshell command into stdout2fd() for clarity
bugfix: lib/sh/util.sh: redir(): unset redirs so that you don't redirect again in the invoked command
fix: lib/sh/util.sh: filter_fd(): documented that ${redirs[@]} must not be set to an outer value
fix: inputs/ARIZ/omoccurrences/map.csv: occurrenceID: remapped to EQUIV#to:occid instead of DUPLICATE#of:occid since these are not exact duplicates
lib/runscripts/util.run: added to_top_file alias for use with $top_file
lib/sh/local.sh: added pg_dump_local()
lib/sh/db.sh: added pg_dump(), using the code in bin/pg_dump_vegbien with clarity improvements
lib/sh/db.sh: added pg_cmd() (analogous to mysql_cmd(), but for PostgreSQL), and use it in psql(), so that other PostgreSQL operations can also use it to set the PG* connection/login vars
planning/timeline/timeline.2013.xls: updated dots for new priority order
planning/timeline/timeline.2013.xls: moved optimization of individual datasource removal before flattening the datasources to a common schema as suggested in meeting with Mark
lib/runscripts/datasrc_dir.run: include of import.run: use .rel instead of `. "$(dirname "${BASH_SOURCE[0]}")"/...`
lib/runscripts/datasrc_dir.run: moved commands related to any runscript in the datasrc dir to new in_datasrc_dir.run
inputs/*/Specimen/test.xml.ref with eventDate->dateCollected mappings: updated test outputs to match mapping
some inputs/*/*/unmapped_terms.csv: updated now that datasetURL is mapped (this does not affect the mappings because it is only mapped for Source tables)
bugfix: inputs/ARIZ/omoccurrences/map.csv: fixed one-to-many mapping for modified (created by the automapper?)
lib/sh/db.sh: pg_export(): added usage
inputs/.TNRS/schema.sql: moved source code comments to in-schema COMMENT ON comments so all the info in schema.sql is in the DB
inputs/.TNRS/schema.sql: views that use * as the column list: added comments to indicate that this is the case, so that the views can be updated in place rather than only by reinstalling the TNRS schema
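for illustration, the in-DB documentation described in the two entries above might look like the following; the view name and the comment text are placeholders, not the actual TNRS objects:

    -- keep the documentation in the DB itself via COMMENT ON, rather than only
    -- in source-code comments in schema.sql
    COMMENT ON TABLE tnrs IS 'raw results returned by the TNRS web service';

    -- flag views whose column list is *, so it is clear that they can be
    -- re-created in place when the underlying table's columns change
    CREATE VIEW tnrs_example_view AS SELECT * FROM tnrs;
    COMMENT ON VIEW tnrs_example_view IS
    '* column list: re-create this view when tnrs''s columns change';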
updated backups/TNRS.backup.md5
planning/timeline/timeline.2013.xls: clarified note about the purpose of the dots
added backups/vegbien.r10548.backup.md5
bugfix: backups/: svn:ignore: removed *.md5, which should be under version control
inputs/input.Makefile: scrub: documented that using & (background process) ignores TNRS errors, so that TNRS bugs do not prevent the remaining tables from being imported even if TNRS can't be run
inputs/.TNRS/schema.sql: tnrs: util.set_col_types() runtime: updated for most recent ALTER COLUMN TYPE command (9 min)
inputs/.TNRS/schema.sql: tnrs.Time_submitted: renamed to batch and added an fkey to batch.id. this requires including the batch table in inputs/.TNRS/data.sql so that the fkey is satisfied (batch entries are already added by bin/tnrs_db).
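in DDL terms, the rename and fkey above amount to something like this (the constraint name and exact syntax are assumptions, not copied from schema.sql):

    ALTER TABLE tnrs RENAME COLUMN "Time_submitted" TO batch;
    ALTER TABLE tnrs ADD CONSTRAINT tnrs_batch_fkey
        FOREIGN KEY (batch) REFERENCES batch (id);
    -- inputs/.TNRS/data.sql must now also include the referenced batch rows,
    -- or loading the sample data would violate the fkey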
updated backups/TNRS.backup
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to upload backups to jupiter
/README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to remake backups/TNRS.backup
bin/tnrs_db: add entry to new batch table
inputs/.TNRS/schema.sql: batch: reset name of id_by_time unique constraint since this field is now in the batch table
inputs/.TNRS/schema.sql: download_settings: renamed to batch_download_settings because this table is actually specific to the batch, and it does not make sense to have a download settings file without a batch
inputs/.TNRS/schema.sql: download_settings.id: added fkey to batch.id to create a 1:1 relationship with optional participation by download_settings. note that this relationship happens to have the same structure as SQL inheritance (as used in VegCore), but here the 1:1 relationship is not related to inheritance.
inputs/.TNRS/schema.sql: client_version: added table, column comments with info on how to retrieve each value
inputs/.TNRS/schema.sql: added client_version table for svn revisions, with fkey from batch
inputs/.TNRS/schema.sql: added batch table and moved download_settings.time_submitted, id_by_time to it since these are not related to the download_settings file
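taken together, the batch-related entries above describe roughly the following layout (a hedged sketch: column types, constraint details, and the settings fields are assumptions, not a copy of schema.sql):

    CREATE TABLE client_version (
        id text PRIMARY KEY -- svn revision; column comments say how to retrieve it
    );

    CREATE TABLE batch (
        id text PRIMARY KEY,    -- VegCore-style pkey (see the id-column entry further down)
        id_by_time text UNIQUE, -- helper column used to autopopulate id
        time_submitted timestamp with time zone,
        client_version text REFERENCES client_version (id)
    );

    -- 1:1 with batch, with optional participation by batch_download_settings:
    -- the pkey doubles as the fkey (the same structure as SQL inheritance)
    CREATE TABLE batch_download_settings (
        id text PRIMARY KEY REFERENCES batch (id)
        -- ... fields from the Download settings > settings.txt file ...
    );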
fix: planning/timeline/timeline.2013.xls: Switching to new-style import: updated hyperlink
planning/timeline/timeline.2013.xls: moved Individual datasource removal under Streamline process of mapping and adding a new datasource
planning/timeline/timeline.2013.xls: added note that the purpose of the dots is to show which tasks should be worked on. in some cases, they are also proportional to the complexity of the task, but this may not hold if e.g. a task was given different priorities, or worked on by different amounts, in different months.
fix: planning/timeline/timeline.2013.xls: matched supertask status to subtask status
planning/timeline/timeline.2013.xls: made Switching to new-style import a subtask of Streamline process of mapping and adding a new datasource because new-style import automates many of the datasource-mapping tasks that previously needed to be done by hand
planning/timeline/timeline.2013.xls: reordered for priorities and to-do assignments from last conference call (wiki.vegpath.org/2013-08-22_conference_call#Decisions-made)
planning/timeline/timeline.2013.xls: updated for August progress and recently-added tasks
inputs/.TNRS/schema.sql: added VegCore-style id column as the primary key, instead of using time_submitted directly. this enables always using the same name for the pkey. the pkey is now autopopulated from time_submitted in a trigger, using helper column id_by_time. the user is now also able to specify their own globally-unique ID that is not based on the time_submitted.
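a minimal sketch of the autopopulated-pkey pattern this entry describes (the function/trigger names and the ID format are assumptions; the actual trigger in schema.sql may differ):

    CREATE FUNCTION batch_id_default() RETURNS trigger AS $$
    BEGIN
        -- derive a time-based ID from time_submitted via the helper column
        NEW.id_by_time := to_char(NEW.time_submitted, 'YYYY-MM-DD HH24:MI:SS');
        -- use it as the pkey unless the user supplied their own globally-unique ID
        IF NEW.id IS NULL THEN NEW.id := NEW.id_by_time; END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER batch_id_default BEFORE INSERT ON batch
        FOR EACH ROW EXECUTE PROCEDURE batch_id_default();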
inputs/.TNRS/schema.sql: download_settings comment: changed the name of the button back to Download settings, which had gotten auto-replaced with download_settings
inputs/.TNRS/schema.sql: Download settings table: renamed to download_settings. although Download settings is the verbatim name of the button this info comes from, the table does not need a particular name in order to match data up to it correctly, so we can just use the standard naming convention (wiki.vegpath.org/u-name#format) and avoid having to enclose the name in ""
inputs/.TNRS/schema.sql: added Download settings table, which stores data from http://tnrs.iplantcollaborative.org/TNRSapp.html > Submit List > results section > Download settings > settings.txt
inputs/.TNRS/Source/map.csv: mapped datasetURL
inputs/.geoscrub/Source/map.csv: mapped datasetURL
mappings/VegCore-VegBIEN.csv: mapped datasetURL
fix: mappings/VegCore-VegBIEN.csv: source__modified_date: remapped to pubdate instead of datelastmodified because this is actually metadata for the source itself, rather than for the VegBIEN record of the source
fix: inputs/.geoscrub/Source/map.csv: source__modified_date: use the mtime of the CSV file instead, since this is closer to the actual version of the biengeo code at the time it was run
inputs/.geoscrub/Source/map.csv: mapped source__modified_date. note that the test must be run with inputs/.geoscrub/Source/run instead of `make inputs/.geoscrub/Source/test` to add these metadata columns to the staging table.
mappings/VegCore-VegBIEN.csv: mapped source__modified_date (different from vegcore.vegpath.org?modified, which is for the data record)
mappings/VegCore.htm: regenerated from wiki. added source__version (= edition), source__modified_date.
bugfix: schemas/util.sql: set_col_names_with_metadata(): rename any metadata cols rather than re-adding them with new names
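the gist of the fix, in plain DDL (illustrative only; the table and column names are hypothetical and the real logic lives inside util.set_col_names_with_metadata()):

    -- if a metadata column already exists under its old name, rename it in place:
    ALTER TABLE example_staging_table RENAME COLUMN old_metadata_col TO new_metadata_col;
    -- previously it was re-added under the new name instead, e.g.
    --   ALTER TABLE example_staging_table ADD COLUMN new_metadata_col text DEFAULT 'value';
    -- which left the old column behind as a duplicate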
mappings/VegCore-VegBIEN.csv: mapped edition
bugfix: inputs/.geoscrub/{Source,geoscrub_output}/VegBIEN.csv: switched to the version needed for new-style datasources
inputs/.geoscrub/Source/map.csv: mapped edition (the version), using `svn info derived/biengeo/`
schemas/vegbien.sql: source.revision: renamed to import_revision for clarity
schemas/vegbien.my.sql: updated with `make schemas/remake`
schemas/vegbien.sql: source: datecreated, datelastmodified: default to now() like in VegBank (schemas/VegBank/vegbank.sql)
schemas/vegbien.sql: source: added datecreated, datelastmodified, etc. for source-level tracking of import and revision (wiki.vegpath.org/2013-08-22_conference_call#source-level-tracking-of-import-and-revision)
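a sketch of the datecreated/datelastmodified additions above (the column types and the full set of added columns are assumptions; schemas/vegbien.sql is authoritative):

    ALTER TABLE source
        ADD COLUMN datecreated      timestamp with time zone DEFAULT now(),
        ADD COLUMN datelastmodified timestamp with time zone DEFAULT now();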
added derived/biengeo/ from https://projects.nceas.ucsb.edu/nceas/projects/biengeo/repository/
added /derived
web/links/index.htm: updated to Firefox bookmarks. Gmvault: added steps to do a full backup and to back up only new e-mails
planning/timeline/timeline.2013.xls: flagged timeline issues that can be done by iPlant personnel: Attribution and conditions of use, Geoscrubbing re-run, Geoscrubbing automated pipeline, Improve and complete data provider metadata, Obtain any additional new data
web/links/index.htm: updated to Firefox bookmarks. Gmvault: added run instructions for Mac.
web/links/index.htm: updated to Firefox bookmarks. added link to Gmvault (Gmail backup), which wouldn't install for me on Mac (but that may be because I'm using 10.8, and Gmvault is for 10.7/10.6)
added planning/meetings/BIEN conference call availability.xlsx (backup of Google spreadsheet)
planning/timeline/timeline.2013.xls: updated for changes made in the conference call: moved Data provider validations (spot-checking) to the beginning, since it seems to have been decided that this is a higher priority than architectural changes
schemas/util.sql: combining functions that took anyelement params which could be text: now take a text param instead, so that other argument types (e.g. integer) are first implicitly cast to text instead of the function trying to concatenate integers directly. this fixes a bug in the _join() of two integer pkeys in VegBank.stemcount_/stemlocation_, which first needed to be cast to text. anyelement had been used so that other text-like types such as varchar could also be passed, but since varchar is implicitly castable to text, keeping anyelement should not be necessary.
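a hedged sketch of the signature change above (the real combining functions are in schemas/util.sql and differ in name and body; this only illustrates declaring the params as text rather than anyelement):

    -- before (simplified): CREATE FUNCTION _join(anyelement, anyelement) ...
    CREATE FUNCTION join_strs_example(text, text) RETURNS text AS $$
        SELECT $1 || '; ' || $2
    $$ LANGUAGE sql IMMUTABLE;

per the entry above, non-text arguments such as the VegBank integer pkeys are then cast to text before being joined, rather than the function attempting to concatenate integers directly.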
planning/timeline/timeline.2013.xls: added tasks to Avoid DB restructuring when ingesting a new datasource and Streamline process of mapping and adding a new datasource (not yet put in priority order)
inputs/VegBank/observation_/test.xml.ref: updated inserted row count
bugfix: inputs/VegBank/stemlocation_/map.csv: also join together taxonimportance_id, stemcount_id for aggregateOrganismObservationID so that the aggregateoccurrence pkeys match up with those imported from stemcount
bugfix: inputs/VegBank/stemcount_/map.csv: aggregateOrganismObservationID: prepend taxonimportance_id so that rows with only a taxonimportance entry (no stemcounts) will also have the required sourceaccessioncode
schemas/vegbien.sql: analytical_stem: synced with analytical_stem_view using sync_analytical_stem_to_view()
bugfix: schemas/vegbien.sql: sync_analytical_stem_to_view(): added re-creation of range_modeling_input view
schemas/vegbien.sql: analytical_stem_view: added aggregateOrganismObservationID (aggregateoccurrence.sourceaccessioncode) so aggregateoccurrences can be matched back up to their input rows (e.g. VegBank.stemcount)
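for context, a rough sketch of what the sync in these entries amounts to (this is not the actual body of sync_analytical_stem_to_view(), which handles indexes and dependent objects more carefully):

    -- CASCADE also drops dependent views such as range_modeling_input, which is
    -- why the function must re-create them afterwards (the bugfix noted above)
    DROP TABLE IF EXISTS analytical_stem CASCADE;
    CREATE TABLE analytical_stem AS
        SELECT * FROM analytical_stem_view LIMIT 0; -- copy the column list, no rows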
inputs/VegBank/taxonobservation_/map.csv: plantname: remapped to DUPLICATE#of:plantconcept_plantname because this is an exact duplicate
bugfix: inputs/VegBank/taxonobservation_/map.csv: updated input column names for renamings in inputs/VegBank/vegbank.~.clean_up.sql
inputs/VegBank/taxonobservation_/map.csv: Species and lower ranks: remapped to EQUIV#to:plantname because these contain the taxonomic name at specific ranks, but plantname contains the taxonomic name of the plant itself, which is longer and populated more often