Activity
From 08/15/2013 to 09/13/2013
09/12/2013
- 06:43 PM Revision 10944: inputs/VegBank/: prepended the table name to each column name to prevent column collisions, using the steps at http://wiki.vegpath.org/Left-joining_a_datasource
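  A sketch of what the collision-avoiding rename in r10944 looks like (hypothetical prefix format and column names; the actual steps are on the wiki page linked above):
      -- both plot and observation have a plot_id column, which would collide in a left join
      ALTER TABLE plot RENAME COLUMN plot_id TO "plot.plot_id";
      ALTER TABLE observation RENAME COLUMN plot_id TO "observation.plot_id";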
- 06:17 PM Revision 10943: inputs/VegBank/: switched to new-style import, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource
- 06:13 PM Revision 10942: bugfix: inputs/VegBank/stemlocation_/map.csv: put columns in table order, which is needed by new-style import
- 05:57 PM Revision 10941: inputs/VegBank/stemlocation_/: translated one-to-many mappings to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 05:49 PM Revision 10940: bugfix: inputs/VegBank/taxonobservation_/map.csv: put columns in table order, which is needed by new-style import
- 05:26 PM Revision 10939: bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters: need to use GREATEST() instead of _alt() to handle cases where the coordinate uncertainty is greater than the fuzzing uncertainty, in which case you wouldn't want to just use the smaller fuzzing uncertainty
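  Illustrative SQL for the r10939 fix (table and column names are placeholders, not the actual postprocess.sql):
      -- _alt() keeps the first non-NULL value, so a larger coordinate uncertainty could be
      -- replaced by the smaller fuzzing uncertainty; GREATEST() keeps whichever is larger
      SELECT GREATEST(coord_uncertainty_m, fuzzing_uncertainty_m)
          AS "coordinateUncertaintyInMeters"
      FROM plot_staging;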
- 05:20 PM Revision 10938: inputs/VegBank/plot_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 05:11 PM Revision 10937: inputs/VegBank/plot_/postprocess.sql: map_*() derived cols: updated runtime
- 05:10 PM Revision 10936: inputs/VegBank/plot_/: translated single-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 04:36 PM Revision 10935: inputs/VegBank/stemcount_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 04:31 PM Revision 10934: inputs/VegBank/stemlocation_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 04:30 PM Revision 10933: inputs/VegBank/taxonobservation_/postprocess.sql: scientificName: recorded runtime (15 s)
- 04:15 PM Revision 10932: inputs/VegBank/taxonobservation_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 04:14 PM Revision 10931: inputs/VegBank/taxonobservation_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 03:37 PM Revision 10930: inputs/FIA/occurrence_all/postprocess.sql: use much simpler LEFT JOINs instead of nested RIGHT JOINs, which required lots of () to get them to happen in the right order. note that the columns are now provided in reverse instead of forwards path order, but this is still much clearer than the nested mess of RIGHT JOINs. this approach can also be used to simplify VegBank's joins.
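  A schematic of the r10930 join rewrite (placeholder table names, not the actual FIA tables):
      -- before: nested RIGHT JOINs, needing () to force the evaluation order
      --   SELECT * FROM cond RIGHT JOIN (plot RIGHT JOIN subplot ON ...) ON ...
      -- after: a flat chain of LEFT JOINs, listed in reverse path order
      SELECT *
      FROM subplot
      LEFT JOIN plot ON plot.plot_id = subplot.plot_id
      LEFT JOIN cond ON cond.plot_id = plot.plot_id;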
- 03:34 PM Revision 10929: bugfix: lib/runscripts/view.run: remake_VegBIEN_mappings(): also need to remake header.csv, not just map.csv as for tables, because view columns may change when the view is regenerated
- 02:42 PM Revision 10928: schemas/VegCore/VegCore.ERD.mwb: specimen: changed definition to "something collected from a plant" rather than just "a physical part of a plant", to support using this table for identifying pictures and descriptions of a plant (as DwC does)
- 02:28 PM Revision 10927: schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
- 02:24 PM Revision 10926: schemas/VegCore/VegCore.ERD.mwb: reobservable_presence: allow it to be vouchered by any reobservable element (including a tagged individual), not just a specimen
- 02:01 PM Revision 10925: schemas/VegCore/VegCore.ERD.mwb: specimen.defining_data: clarified that the observations in this are actually a subset of individual_observation.traits (specifically, the subset that can be used to make a taxonomic redetermination). information in this field should therefore always also be stored in individual_observation.traits.
- 01:54 PM Revision 10924: schemas/VegCore/VegCore.ERD.mwb: specimen: added specimen_unique_in_individual_observation unique constraint, analogous to specimen_unique_in_individual
- 01:34 PM Revision 10923: schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
- 01:29 PM Revision 10922: schemas/VegCore/VegCore.ERD.mwb: specimen: added defining_data, which for a digital-only specimen, stores the information that comprises the specimen. note that a taxon_presence without a physical voucher can still qualify as reobservable if a detailed description of it is provided in this field, to make taxonomic redeterminations on. for datasources like VegBank, which incorrectly allow multiple taxon_determinations for *any* type of taxon_observation, their taxonomic redeterminations would actually be considered invalid if made on a purely taxon_presence observation (i.e. just a taxon name) without a detailed description that could be used to make a redetermination. this is different than the scrubbing of a taxon name, which relates a taxon name to another taxon name, rather than a taxon_observation to a completely different taxon name.
- 12:35 PM Revision 10921: bugfix: lib/sh/util.sh: set_fds(): don't add surrounding quotes to empty redirect dest
- 12:31 PM Revision 10920: bugfix: lib/sh/util.sh: set_fds(): need to check if redirect is empty *before* escaping it with `printf %q`, which may add surrounding quotes to an empty string
- 11:40 AM Revision 10919: planning/timeline/timeline.2013.xls: attribution and conditions of use: documented that Brad/Brian/Bob should work on this, as decided in the conference call (wiki.vegpath.org/2013-09-12_conference_call#data-provider-metadata)
- 05:47 AM Revision 10918: planning/timeline/timeline.2013.xls: reformatted to fit all rows and all per-week columns on one page
- 05:30 AM Revision 10917: planning/timeline/timeline.2013.xls: streamline process of mapping and adding a new datasource: added subtask to create interactive scripts for each import step
- 05:15 AM Revision 10916: planning/timeline/timeline.2013.xls: improve and complete data provider metadata: moved to end because this can also be added manually to the source table, and does not have to be in place before running column-based import
- 05:09 AM Revision 10915: planning/timeline/timeline.2013.xls: flatten the datasources to a common schema: added subtask to left-join unvalidated datasources since they need the flattening in order to validate them properly
- 04:21 AM Revision 10914: planning/timeline/timeline.2013.xls: rebalanced dots
- 04:15 AM Revision 10913: planning/timeline/timeline.2013.xls: moved items marked later to separate section at bottom
- 04:13 AM Revision 10912: planning/timeline/timeline.2013.xls: moved revisions to schema under datasource validations because schema changes are largely driven by validation problems uncovered
- 04:12 AM Revision 10911: planning/timeline/timeline.2013.xls: split tasks into weeks
- 03:47 AM Revision 10910: planning/timeline/timeline.2013.xls: updated for progress
- 03:35 AM Revision 10909: planning/timeline/timeline.2013.xls: split months into (currently identical) weeks
- 03:19 AM Revision 10908: planning/timeline/timeline.2013.xls: added "During month of" label above the months
- 03:09 AM Revision 10907: planning/timeline/timeline.2013.xls: switched to portrait mode to better fit the new format, which hides columns for past months
- 03:05 AM Revision 10906: planning/timeline/timeline.2013.xls: hid crossed out rows to show just the remaining tasks
- 03:03 AM Revision 10905: planning/timeline/timeline.2013.xls: crossed out avoid DB restructuring when ingesting a new datasource, because FIA (which is flattened before import) *does* properly support optional subplots and diamond linking of subplots to parent plot events, which were necessary to ingest an arbitrary flattened plots datasource
- 02:55 AM Revision 10904: planning/timeline/timeline.2013.xls: crossed out fully-completed tasks. rebalanced dots.
- 02:46 AM Revision 10903: planning/timeline/timeline.2013.xls: moved switching to new-style import to top of streamline process of mapping and adding a new datasource because this puts all the datasource adding steps (except filling in the mappings) into one rerunnable script
- 02:36 AM Revision 10902: planning/timeline/timeline.2013.xls: hid columns for past months so that the current and future months are right next to each task
- 02:31 AM Revision 10901: planning/timeline/timeline.2013.xls: moved streamline process of mapping and adding a new datasource before documentation testing because this will assist the documentation tester in running the import process
- 02:26 AM Revision 10900: planning/timeline/timeline.2013.xls: moved geoscrubbing re-run under add any missing columns because this is needed to fully populate the geoscrubbing columns
- 02:20 AM Revision 10899: planning/timeline/timeline.2013.xls: added documentation testing, usability testing priority tasks (wiki.vegpath.org/Priority_tasks). lowercased tasks for consistency with the wiki and to avoid needing to sentence case new subtasks.
- 01:53 AM Revision 10898: planning/timeline/timeline.2013.xls: moved Flatten the datasources to a common schema under Datasource validations because the query left-joining the tables is needed for validation, and it is much easier to validate datasources when there is only one input table to validate
09/11/2013
- 02:52 PM Revision 10897: added derived/biengeo/Geovalidation_and_geoscrubbing_update.presentation.url
09/09/2013
- 06:12 PM Revision 10896: added BIEN2/traits_observation_counts.xls
- 05:44 PM Revision 10895: /README.TXT: Single datasource import: removed rescrub step because this is not needed by the current TNRS process
- 02:04 PM Revision 10894: web/links/index.htm: updated to Firefox bookmarks. MySQL: added steps to add a user if you are not root but have sudo access.
09/07/2013
- 08:19 PM Revision 10893: BIEN2/country_species/: svn:ignore the .tsv exports
- 08:19 PM Revision 10892: BIEN2/country_species/run: documented runtime (1 min)
- 08:15 PM Revision 10891: added BIEN2/country_species/run, which exports each BIEN2 country's species list
- 08:14 PM Revision 10890: bugfix: lib/sh/util.sh: set_fds(): need to escape redirect destinations which are files, because they may contain special shell characters
- 08:10 PM Revision 10889: lib/sh/util.sh: added rm_prefix()
- 07:11 PM Revision 10888: lib/sh/db.sh: mysql_cmd(): added caller usage with connection/login opts
- 07:08 PM Revision 10887: lib/sh/db.sh: mysql(), mysql_export(): usage: added database=...
- 12:30 AM Revision 10886: planning/timeline/timeline.2013.xls: Data provider validations: renamed to Datasource validations to clarify that this is a validation *of the datasources*, but not necessarily *by the data providers*
09/05/2013
- 07:19 PM Revision 10885: /README.TXT: Full database import: added "Running individual steps separately" label for the section that is not part of the main import, but is useful if the import is aborted part of the way through
- 05:02 PM Revision 10884: /README.TXT: moved Single datasource import, Datasource setup to top since these are the most important howtos
- 04:14 PM Revision 10883: bugfix: schemas/Makefile: enclose schema names in "" so that they won't be lowercased
- 03:56 PM Revision 10882: bugfix: schemas/Makefile, lib/common.Makefile: enclose schema names in "" so that they won't be lowercased
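  The PostgreSQL identifier-folding behavior behind r10882/r10883:
      CREATE SCHEMA MySchema;    -- unquoted: folded to lowercase, creates "myschema"
      CREATE SCHEMA "MySchema";  -- quoted: case preserved, creates "MySchema"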
- 03:26 PM Revision 10881: /run: geoscrub_input/make(): updated runtime (20 s)
- 01:31 PM Revision 10880: planning/timeline/timeline.2013.xls: Data provider validations (spot-checking): moved ahead of Individual datasource refresh as decided in conference call
- 01:29 PM Revision 10879: schemas/vegbien.sql: analytical_plot: added aggregateOrganismObservationID from analytical_stem
- 08:38 AM Revision 10878: planning/timeline/timeline.2013.xls: updated for progress
- 08:37 AM Revision 10877: planning/timeline/timeline.2013.xls: Data provider validations: added subtask for Aggregated validations (counts)
- 01:17 AM Revision 10876: inputs/import.stats.xls: analytical DB: updated rowcount
- 01:14 AM Revision 10875: inputs/import.stats.xls: updated import times
- 01:01 AM Revision 10874: inputs/input.Makefile: reimport: don't remove the existing import first, because it will instead be removed by the publish step. this ensures there is always one complete copy of the datasource in the DB.
- 01:00 AM Revision 10873: added backups/vegbien.r10848.backup.md5
- 12:59 AM Revision 10872: backups/TNRS.backup.md5: updated
- 12:11 AM Revision 10871: bugfix: bin/import_all: use reimport_scrub instead of import_scrub so that the temp suffix of the datasource name is removed
- 12:02 AM Revision 10870: inputs/input.Makefile: reimport: use import_publish instead of import so that the reimport replaces the previous import
09/04/2013
- 11:59 PM Revision 10869: inputs/input.Makefile: added import_publish, which removes the temp suffix when the import is done
- 11:48 PM Revision 10868: bugfix: bin/after_import: run backups/fix_perms right after the backup files are created to make them private
- 11:32 PM Revision 10867: bugfix: backups/fix_perms: just make the backups themselves private, since the other files are in svn, and their permissions should match their accessibility through Redmine
- 11:06 PM Revision 10866: inputs/*/*/test.xml.ref: updated source.shortname for new datasource name, which now starts out with .new suffix
- 05:27 PM Revision 10865: bugfix: bin/make_analytical_db: `/run export_`: don't take input from the terminal, because this causes rm to prompt the user (from a background task) about overwriting the previous export
- 05:26 PM Revision 10864: /README.TXT: Full database import: Publish the new import: added runtime (1 min)
- 03:00 PM Revision 10863: inputs/input.Makefile: $(map2db): import to datasrc.new instead of plain datasrc, so that the current import of the datasrc is not overwritten
- 02:59 PM Revision 10862: inputs/input.Makefile: added publish (`make inputs/src/publish`)
- 02:55 PM Revision 10861: bugfix: schemas/vegbien.sql: source: removed testing row that had gotten in during `make schemas/remake`
- 02:43 PM Revision 10860: inputs/input.Makefile: added %/publish (`make inputs/src/src.version/publish`)
- 02:32 PM Revision 10859: bugfix: schemas/vegbien.sql: datasource_publish(): need to remove the *current* live datasource instead of the datasource to publish. note that datasource_rename() does not currently generate an error if the specified datasource doesn't exist.
- 02:27 PM Revision 10858: bugfix: schemas/vegbien.sql: datasource_publish(): run it in a nested transaction so that there is always one published copy of the datasource. (note that a nested transaction is not automatically created for each function, http://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures?In_PG_your_procedures_aren%27t_separate_transactions#answer-6283201 .)
- 01:57 PM Revision 10857: schemas/vegbien.sql: added datasource_publish()
- 01:53 PM Revision 10856: schemas/vegbien.sql: added datasource_rename()
- 01:51 PM Revision 10855: schemas/vegbien.sql: added rm_version_suffix()
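  The general PL/pgSQL subtransaction pattern referred to in r10858, as a sketch with illustrative table names (not the actual datasource_publish() body):
      CREATE FUNCTION publish_sketch() RETURNS void AS $$
      BEGIN
          BEGIN  -- inner block = subtransaction (savepoint)
              DELETE FROM live_copy;
              INSERT INTO live_copy SELECT * FROM new_copy;
          EXCEPTION WHEN OTHERS THEN
              -- only the inner block is rolled back, so live_copy keeps its old rows
              RAISE WARNING 'publish failed: %', SQLERRM;
          END;
      END;
      $$ LANGUAGE plpgsql;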
- 01:28 PM Revision 10854: bin/map: allow user to override the source env var, which is used as the source.shortname value in the DB
- 09:43 AM Revision 10853: exports/: svn:ignore *.zip
- 09:42 AM Revision 10852: inputs/WIN/Specimen/unmapped_terms.csv: updated
- 09:37 AM Revision 10851: inputs/import.stats.xls: updated import times
08/31/2013
- 07:47 PM Revision 10850: /README.TXT: Full database import: time to wait for the import to finish: updated to time in inputs/import.stats.xls
- 07:44 PM Revision 10849: bugfix: bin/import_all: `rm inputs/.TNRS/tnrs/tnrs.make.lock`: need to use `"rm"` instead of `rm` so that we don't use any rm alias the user might have in their shell (import_all is run in the calling shell so that the jobs are owned by the calling shell)
- 07:36 PM Revision 10848: bugfix: mappings/VegCore-VegBIEN.csv: don't map datasetURL to source.url for taxa-only data (this mapping should only occur for Source tables)
- 07:27 PM Revision 10847: bin/import_all: added step to remove any leftover TNRS lockfile (previously done manually)
- 06:46 PM Revision 10846: planning/timeline/timeline.2013.xls: updated for progress
- 06:32 PM Revision 10845: bugfix: lib/sql_io.py: put_table(): Getting output table pkeys of existing/inserted rows: need to include the index cond in the join condition here, too (using var join_custom_cond), so that an index scan can be used instead of a much slower full-table sort
- 06:01 PM Revision 10844: bugfix: schemas/vegbien.sql: locationevent: locationevent_unique_within_location unique index: added COALESCE(...) expression around location_id since it is nullable, and this is needed for the left and right sides of the join to exactly match up to use an index scan
- 05:52 PM Revision 10843: bugfix: lib/sql_io.py: put_table(): DuplicateKeyException: need to include any index cond in the join condition, so that an index scan can be used instead of a much slower full-table sort (otherwise the query planner will not know that it can restrict results to rows satisfying the index cond)
- 05:48 PM Revision 10842: lib/sql_gen.py: Join: added custom_cond param that can be used to add to the JOIN condition
- 01:02 AM Revision 10841: lib/sql.py: distinct_table(): support custom filters on the distincting query
- 01:01 AM Revision 10840: lib/sql_gen.py: ColValueCond: support conds that are just plain SQL (without separate left and right sides) using special custom_cond flag value
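  A minimal sketch of the pattern in r10843-r10845 above (illustrative names): the unique index is built on a COALESCE() expression over the nullable column, and the lookup join repeats that exact expression so the planner can use an index scan instead of a full-table sort:
      CREATE UNIQUE INDEX locationevent_unique_within_location_sketch
          ON locationevent (COALESCE(location_id, 0), obsstartdate);

      SELECT le.locationevent_id
      FROM staging_rows s
      JOIN locationevent le
          ON  COALESCE(le.location_id, 0) = COALESCE(s.location_id, 0)
          AND le.obsstartdate = s.obsstartdate;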
08/30/2013
- 11:18 PM Revision 10839: bugfix: inputs/input.Makefile: %/test: in by_col mode, also need to run %/test.by_col.xml
- 10:38 PM Revision 10838: lib/sql_io.py: ensure_cond(): documented meaning of passed, failed params (at least one row passed/failed the constraint)
- 06:28 PM Revision 10837: fix: web/links/index.htm: PostgreSQL: vacuuming: moved info about autovacuum process being aborted to correct bookmark
- 06:25 PM Revision 10836: web/links/index.htm: updated to Firefox bookmarks. PostgreSQL: added info on vacuuming and analyzing.
- 06:06 PM Revision 10835: lib/runscripts/util.run: usage: documented that this usage also applies to all files that include this file
- 06:06 PM Revision 10834: lib/runscripts/util.run: usage: clarified that the cmd to run is a function
- 06:03 PM Revision 10833: added schemas/postgresql*.conf.run, which installs these config files and takes care of restarting the server
- 06:02 PM Revision 10832: added lib/runscripts/pg.conf.run, which installs PostgreSQL config files
- 06:01 PM Revision 10831: added lib/runscripts/install.run, analogous to import.run
- 04:39 PM Revision 10830: fix: schemas/postgresql*.conf: turn on autovacuum logging (log_autovacuum_min_duration = 0) so we can verify if autovacuum is happening
- 04:35 PM Revision 10829: bugfix: lib/db_xml.py: put_table(): turned off db.autoanalyze, since forcing an ANALYZE after every bulk insert is inefficient for small datasources. the default autovacuum settings in schemas/postgresql.conf should be fine; however, the frequency and/or threshold may need to be increased if autovacuum does not ANALYZE frequently enough to replace db.autoanalyze.
- 03:37 PM Revision 10828: /run: taxon_trait/make(): order by scientificName, measurementType as described in the taxon_trait table comment
- 03:36 PM Revision 10827: lib/sh/db.sh: mk_select(): added support for ORDER BY
- 03:30 PM Revision 10826: /run: added taxon_trait/make()
- 03:28 PM Revision 10825: lib/sh/db.sh: added pg_export_table_to_dir(), analogous to pg_export_table_to_dir_no_header()
- 03:03 PM Revision 10824: schemas/vegbien.sql: taxon_trait: added query to use to export
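  The export query referenced in r10824/r10828 is presumably of this form (a sketch, assuming the camelCase columns are quoted):
      SELECT * FROM taxon_trait ORDER BY "scientificName", "measurementType";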
- 02:52 PM Revision 10823: inputs/NVS/*/map.csv: Taxon Growth Form: mapped to VegBIEN.growthform enum, using http://www.fgdc.gov/standards/projects/FGDC-standards-projects/vegetation/NVCS_V2_FINAL_2008-02.pdf#page=83&section.page=76 . documented values used by each table.
- 02:18 PM Revision 10822: lib/sh/util.sh: is_array(): handle unset vars (=false). this fixes a bug in pg_export_table_no_header, which produced the error "lib/sh/util.sh: line 290: declare: cols: not found".
- 02:06 PM Revision 10821: fix: lib/sh/util.sh: join(): documented that delim must be a single char
- 07:15 AM Revision 10820: bugfix: /README.TXT: on a live machine, you should put the following in your .profile: need to make svn files web-accessible, because these are used by fs.vegpath.org links (such as to the ERD, etc.). note that this does not affect unversioned files, because these get the right permissions on the local machine instead (see Testing > On a development machine, you should put the following in your .profile).
- 07:07 AM Revision 10819: /README.TXT: to backup files not in Time Machine: added command to start the PostgreSQL server
- 06:58 AM Revision 10818: bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: don't upload ~/.profile, etc. to jupiter because these files are different on each machine. they can instead be synced manually.
- 06:52 AM Revision 10817: /README.TXT: to backup files not in Time Machine: added command to stop the PostgreSQL server
- 06:49 AM Revision 10816: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: noted that ./fix_perms should be run on all machines
- 06:36 AM Revision 10815: removed unused dir analysis/ (originally requested by Jim)
- 06:31 AM Revision 10814: bugfix: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added step to run `make backups/TNRS.backup/download live=1`, because bin/sync_upload does not sync this due to filters in backups/.rsync_filter.download
- 06:11 AM Revision 10813: /README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: added step to run ./fix_perms so that there are fewer permissions diffs to review
- 06:07 AM Revision 10812: bugfix: /README.TXT: to synchronize a Mac's settings with my testing machine's: upload: `(cd ~/Dropbox/svn/; svn up)`: use `up` instead so that the needed --force option is applied
- 05:37 AM Revision 10811: inputs/VegBank/*/postprocess.sql: added primary keys to derived tables
- 05:15 AM Revision 10810: schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
- 05:11 AM Revision 10809: schemas/VegCore/VegCore.ERD.mwb: individual_observation.individual: documented that it is optional because an individual_observation cannot have an associated individual unless the individual is traceable to a specific plant
- 05:09 AM Revision 10808: schemas/VegCore/VegCore.ERD.mwb: individual_observation.place_observed_at: made it optional because some individual_observations (e.g. of the plant a specimen was collected from) may be missing location information. however, an individual_observation cannot have an associated individual unless the individual is traceable to a specific plant.
- 05:03 AM Revision 10807: schemas/VegCore/VegCore.ERD.mwb: specimen: added individual_observation, which stores observations about the plant the specimen was collected from. (some specimens may not be traceable to a reobservable individual, but will still have these plant observations.) specimen_observation: adjusted position to fully display the HAS-A connector to specimen.
- 03:44 AM Revision 10806: planning/timeline/timeline.2013.xls: updated for progress. rebalanced dots.
- 01:39 AM Revision 10805: planning/timeline/timeline.2013.xls: added separate task for Individual datasource refresh (separate from Individual datasource *removal*), because we also need to optimize the reload of datasources. the reload is most likely slow because rows are being added to very large tables.
- 01:21 AM Revision 10804: /README.TXT: Single datasource import: run commands in the background, since these are long-running commands
- 12:57 AM Revision 10803: planning/timeline/timeline.2013.xls: moved Attribution and conditions of use before Flatten the datasources as suggested in meeting with Mark
- 12:42 AM Revision 10802: schemas/vegbien.sql: datasource_rm(): runtime: added runtime of MO (55 min, 0.85 ms/row), which has a much larger # of rows than ACAD (4 million instead of 45,000). updated GBIF runtime estimate (~13 h) with more accurate ms/row from MO.
08/29/2013
- 11:19 PM Revision 10801: schemas/vegbien.sql: datasource_rm(): estimated runtime for GBIF (~10 h). note that this is still significantly shorter than the import time (3.4 days).
- 11:11 PM Revision 10800: schemas/vegbien.sql: datasource_rm(): documented how to calculate runtime
- 11:04 PM Revision 10799: schemas/vegbien.sql: datasource_rm(): documented runtime for ACAD: 30 s; 0.61 ms/row
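  For reference, the runtime formula documented in r10800 is rows × per-row rate, using the row counts given in r10802 above: ACAD, 45,000 rows × 0.61 ms/row ≈ 27 s (observed: 30 s); MO, 4 million rows × 0.85 ms/row ≈ 3,400 s ≈ 57 min (observed: 55 min).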
- 05:09 PM Revision 10798: inputs/input.Makefile: rm: use new datasource_rm(), which encapsulates the schema-specific aspects of removing a datasource
- 04:59 PM Revision 10797: bugfix: schemas/vegbien.sql: datasource_rm(): set_config(): don't name the is_local param because it is not a named parameter
- 04:50 PM Revision 10796: schemas/vegbien.sql: added datasource_rm(). this uses an internal schema-scoping parameter to ensure that the function always operates on tables in the schema it was *defined* in, rather than tables in the search_path. this ensures that when the public schema is renamed (e.g. from an imported version), the function will continue to operate on its own schema rather than whichever schema happens to be called public. this avoids any surprises if you are trying to remove a datasource in one schema, and don't want it to unintentionally be removed in another schema instead.
- 04:20 PM Revision 10795: schemas/util.sql: added schema_ident()
- 03:20 PM Revision 10794: schemas/util.sql: added schema(regtype), schema(anyelement)
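  The schema-pinning idea in r10796, sketched here with the standard per-function SET search_path clause rather than the internal set_config() parameter the revision describes (illustrative names):
      CREATE FUNCTION my_schema.datasource_rm_sketch(shortname text) RETURNS void AS $$
          DELETE FROM source WHERE source.shortname = $1;  -- always resolves to my_schema.source
      $$ LANGUAGE sql
      SET search_path = my_schema, pg_temp;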
- 02:07 PM Revision 10793: inputs/.TNRS/schema.sql: added covering indexes on foreign keys where needed. this enables rows to be cascadingly deleted without a full table scan.
- 01:58 PM Revision 10792: schemas/vegbien.sql: added covering indexes on foreign keys where needed. this enables rows to be cascadingly deleted without a full table scan.
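  A minimal example of the kind of covering index added in r10792/r10793 (illustrative names):
      -- without an index on the referencing column, each cascading delete from the parent
      -- table scans the whole child table; the index turns the cascade into a lookup
      CREATE INDEX child_parent_id_idx ON child_table (parent_id);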
08/27/2013
- 11:11 PM Revision 10791: planning/timeline/timeline.2013.xls: increased font size for better readability at 100% (which is also the printed size). note that the timeline is normally zoomed in, so you don't see the actual font size.
- 10:52 PM Revision 10790: inputs/.TNRS/schema.sql: tnrs: instructions for when changing this table's schema: updated to use new `inputs/.TNRS/data.sql.run refresh`
- 10:50 PM Revision 10789: inputs/.TNRS/data.sql.run: added refresh() target which runs inputs/test_taxonomic_names/test_scrub
- 10:34 PM Revision 10788: inputs/test_taxonomic_names/test_scrub: added step to update inputs/.TNRS/data.sql to the now-refreshed TNRS sample data (this updating step is now automated)
- 10:32 PM Revision 10787: inputs/.TNRS/schema.sql: tnrs: updated steps to run when changing this table's schema, to use new TNRS editing workflow
- 10:14 PM Revision 10786: inputs/.TNRS/data.sql: re-ran TNRS using `inputs/test_taxonomic_names/test_scrub; rm=1 inputs/.TNRS/data.sql.run export_`
- 10:13 PM Revision 10785: /README.TXT: Full database import: fixing TNRS errors: noted that inputs/test_taxonomic_names/test_scrub re-runs TNRS
- 10:12 PM Revision 10784: /README.TXT: Full database import: fixing TNRS errors: updated instructions for new TNRS schema editing workflow
- 09:53 PM Revision 10783: inputs/.TNRS/data.sql: generate from the DB using `rm=1 inputs/.TNRS/data.sql.run export_` instead of being a hand-edited file
- 09:50 PM Revision 10782: added inputs/.TNRS/data.sql.run for syncing data.sql directly with the DB without needing to use inputs/test_taxonomic_names/test_scrub just to export the sample data. (however, when modifying the tnrs table, it may still be easier to generate new sample data using test_scrub rather than refactoring the table in place.)
- 09:35 PM Revision 10781: added lib/runscripts/data.pg.sql.run (analogous to schema.pg.sql.run for data-only SQL scripts)
- 09:32 PM Revision 10780: added lib/runscripts/file.pg.sql.run and use it in schema.pg.sql.run
- 09:25 PM Revision 10779: added lib/runscripts/schema.pg.sql.run and use it in inputs/.TNRS/schema.sql.run
- 09:18 PM Revision 10778: inputs/.TNRS/schema.sql: generate from the DB using `rm=1 inputs/.TNRS/schema.sql.run export_` instead of being a hand-edited file. this makes it much easier to edit the (now frequently-changing) TNRS schema directly in pgAdmin (which is graphical), rather than having to manually copy SQL changes from pgAdmin to the file.
- 09:15 PM Revision 10777: inputs/.TNRS/schema.sql.run: export_(): added usage
- 09:12 PM Revision 10776: added inputs/.TNRS/schema.sql.run, which syncs schema.sql with the DB
- 09:07 PM Revision 10775: bugfix: lib/sh/db.sh: pg_dump(): don't default $struct flag to on, because both structure and data should be printed by default
- 09:02 PM Revision 10774: lib/sh/db.sh: pg_dump(): added create_schema= flag to remove CREATE SCHEMA statements (useful if the schema already exists)
- 08:59 PM Revision 10773: bugfix: lib/sh/util.sh: set_fds(): remove empty redirects resulting from using `redirs= cmd...` to clear the redirs and then using $redirs as an array
- 08:47 PM Revision 10772: fix: lib/sh/util.sh: set_fds(): documented that it does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
- 08:44 PM Revision 10771: bugfix: lib/sh/util.sh: stdout2fd(): don't add >&$fd redirect if the fd is 1, because redir does not currently support redirecting an fd to itself (due to bash bugs that require the dest fd to be closed before it can be reopened)
- 08:40 PM Revision 10770: lib/sh/util.sh: filter_fd(): factored out >() subshell command into stdout2fd() for clarity
- 08:33 PM Revision 10769: bugfix: lib/sh/util.sh: redir(): unset redirs so that you don't redirect again in the invoked command
- 08:29 PM Revision 10768: fix: lib/sh/util.sh: filter_fd(): documented that ${redirs[@]} must not be set to an outer value
- 07:41 PM Revision 10767: fix: inputs/ARIZ/omoccurrences/map.csv: occurrenceID: remapped to EQUIV#to:occid instead of DUPLICATE#of:occid since these are not exact duplicates
- 07:30 PM Revision 10766: lib/runscripts/util.run: added to_top_file alias for use with $top_file
- 07:10 PM Revision 10765: lib/sh/local.sh: added pg_dump_local()
- 07:09 PM Revision 10764: lib/sh/db.sh: added pg_dump(), using the code in bin/pg_dump_vegbien with clarity improvements
- 07:06 PM Revision 10763: lib/sh/db.sh: added pg_cmd() (analogous to mysql_cmd() for PostgreSQL), and use it in psql(), so that other PostgreSQL operations can use this to set the PG* connection/login vars
- 05:36 PM Revision 10762: planning/timeline/timeline.2013.xls: updated dots for new priority order
- 05:33 PM Revision 10761: planning/timeline/timeline.2013.xls: moved optimization of individual datasource removal before flattening the datasources to a common schema as suggested in meeting with Mark
- 04:00 PM Revision 10760: lib/runscripts/datasrc_dir.run: include of import.run: use .rel instead of `. "$(dirname "${BASH_SOURCE[0]}")"/...`
- 03:59 PM Revision 10759: lib/runscripts/datasrc_dir.run: moved commands related to any runscript in the datasrc dir to new in_datasrc_dir.run
- 03:57 PM Revision 10758: inputs/*/Specimen/test.xml.ref with eventDate->dateCollected mappings: updated test outputs to match mapping
- 03:52 PM Revision 10757: some inputs/*/*/unmapped_terms.csv: updated now that datasetURL is mapped (this does not affect the mappings because it is only mapped for Source tables)
- 03:43 PM Revision 10756: bugfix: inputs/ARIZ/omoccurrences/map.csv: fixed one-to-many mapping for modified (created by the automapper?)
- 02:38 PM Revision 10755: lib/sh/db.sh: pg_export(): added usage
- 01:54 PM Revision 10754: inputs/.TNRS/schema.sql: moved source code comments to in-schema COMMENT ON comments so all the info in schema.sql is in the DB
- 01:47 PM Revision 10753: inputs/.TNRS/schema.sql: views that use * as the column list: added comments to indicate that this is the case, so that the views can be updated in place rather than only by reinstalling the TNRS schema
- 01:20 PM Revision 10752: updated backups/TNRS.backup.md5
- 01:19 PM Revision 10751: planning/timeline/timeline.2013.xls: clarified note about the purpose of the dots
- 01:03 PM Revision 10750: added backups/vegbien.r10548.backup.md5
- 01:03 PM Revision 10749: bugfix: backups/: svn:ignore: removed *.md5, which should be under version control
- 12:55 PM Revision 10748: inputs/input.Makefile: scrub: documented that using & (background process) ignores TNRS errors, so that TNRS bugs do not prevent the remaining tables from being imported even if TNRS can't be run
- 12:49 PM Revision 10747: inputs/.TNRS/schema.sql: tnrs: util.set_col_types() runtime: updated for most recent ALTER COLUMN TYPE command (9 min)
- 12:25 PM Revision 10746: inputs/.TNRS/schema.sql: tnrs.Time_submitted: renamed to batch and added fkey to batch.id. this requires including the batch table in inputs/.TNRS/data.sql, so that the fkey is satisfied (batch entries are already added by bin/tnrs_db).
- 11:42 AM Revision 10745: updated backups/TNRS.backup
- 11:38 AM Revision 10744: /README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to upload backups to jupiter
- 11:30 AM Revision 10743: /README.TXT: Full database import: To back up DB (staging tables and last import) separately: added step to remake backups/TNRS.backup
08/26/2013
- 08:45 PM Revision 10742: bin/tnrs_db: add entry to new batch table
- 07:48 PM Revision 10741: inputs/.TNRS/schema.sql: batch: reset name of id_by_time unique constraint since this field is now in the batch table
- 07:46 PM Revision 10740: inputs/.TNRS/schema.sql: download_settings: renamed to batch_download_settings because this table is actually specific to the batch, and it does not make sense to have a download settings file without a batch
- 07:32 PM Revision 10739: inputs/.TNRS/schema.sql: download_settings.id: added fkey to batch.id to create a 1:1 relationship with optional participation by download_settings. note that this relationship happens to be the same as SQL inheritance, as used in VegCore, but in this case, the 1:1 relationship is not related to inheritance.
- 06:30 PM Revision 10738: inputs/.TNRS/schema.sql: client_version: added table, column comments with info on how to retrieve each value
- 06:28 PM Revision 10737: inputs/.TNRS/schema.sql: added client_version table for svn revisions, with fkey from batch
- 06:23 PM Revision 10736: inputs/.TNRS/schema.sql: added batch table and moved download_settings.time_submitted, id_by_time to it since these are not related to the download_settings file
- 05:04 PM Revision 10735: fix: planning/timeline/timeline.2013.xls: Switching to new-style import: updated hyperlink
- 05:02 PM Revision 10734: planning/timeline/timeline.2013.xls: moved Individual datasource removal under Streamline process of mapping and adding a new datasource
- 04:49 PM Revision 10733: planning/timeline/timeline.2013.xls: added note that the purpose of the dots is to show what tasks should be worked on. in some cases, they are also proportional to the complexity of the task, but this may not be the case if e.g. a task was given different priorities in different months, or worked on in different amounts.
- 04:38 PM Revision 10732: fix: planning/timeline/timeline.2013.xls: matched supertask status to subtask status
- 04:35 PM Revision 10731: planning/timeline/timeline.2013.xls: made Switching to new-style import a subtask of Streamline process of mapping and adding a new datasource because new-style import automates many of the datasource-mapping tasks that previously needed to be done by hand
- 04:33 PM Revision 10730: planning/timeline/timeline.2013.xls: reordered for priorities and to-do assignments from last conference call (wiki.vegpath.org/2013-08-22_conference_call#Decisions-made)
- 04:32 PM Revision 10729: planning/timeline/timeline.2013.xls: updated for August progress and recently-added tasks
- 01:49 PM Revision 10728: inputs/.TNRS/schema.sql: added VegCore-style id column as the primary key, instead of using time_submitted directly. this enables always using the same name for the pkey. the pkey is now autopopulated from time_submitted in a trigger, using helper column id_by_time. the user is now also able to specify their own globally-unique ID that is not based on the time_submitted.
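  A sketch of the trigger-based defaulting described in r10728 (hypothetical function name and column types, not the actual schema.sql):
      CREATE FUNCTION batch_id_from_time() RETURNS trigger AS $$
      BEGIN
          NEW.id_by_time := NEW.time_submitted::text;  -- helper column derived from time_submitted
          IF NEW.id IS NULL THEN
              NEW.id := NEW.id_by_time;  -- default the pkey, but allow a caller-supplied globally-unique id
          END IF;
          RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER batch_id_from_time BEFORE INSERT ON batch
          FOR EACH ROW EXECUTE PROCEDURE batch_id_from_time();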
08/25/2013
- 11:22 PM Revision 10727: inputs/.TNRS/schema.sql: download_settings comment: changed name of button to Download settings, which had gotten auto-replaced to download_settings
- 11:08 PM Revision 10726: inputs/.TNRS/schema.sql: Download settings table: renamed to download_settings because although Download settings is the verbatim name of the button that this info comes from, it is not necessary to name the table a particular way in order to match up data to it correctly, so we can just use the standard naming convention (wiki.vegpath.org/u-name#format) and avoid the need to enclose the name in ""
08/24/2013
- 06:00 PM Revision 10725: inputs/.TNRS/schema.sql: added Download settings table, which stores data from http://tnrs.iplantcollaborative.org/TNRSapp.html > Submit List > results section > Download settings > settings.txt
- 04:07 PM Revision 10724: inputs/.TNRS/Source/map.csv: mapped datasetURL
08/23/2013
- 11:43 PM Revision 10723: inputs/.geoscrub/Source/map.csv: mapped datasetURL
- 11:41 PM Revision 10722: mappings/VegCore-VegBIEN.csv: mapped datasetURL
- 11:38 PM Revision 10721: mappings/VegCore-VegBIEN.csv: mapped datasetURL
08/22/2013
- 06:12 PM Revision 10720: fix: mappings/VegCore-VegBIEN.csv: source__modified_date: remapped to pubdate instead of datelastmodified because this is actually metadata for the source itself, rather than for the VegBIEN record *of* the source
- 05:56 PM Revision 10719: fix: inputs/.geoscrub/Source/map.csv: source__modified_date: use the mtime of the CSV file instead, since this is closer to the actual version of the biengeo code at the time it was run
- 05:41 PM Revision 10718: inputs/.geoscrub/Source/map.csv: mapped source__modified_date. note that the test must be run with inputs/.geoscrub/Source/run instead of `make inputs/.geoscrub/Source/test` to add these metadata columns to the staging table.
- 05:38 PM Revision 10717: mappings/VegCore-VegBIEN.csv: mapped source__modified_date (different from vegcore.vegpath.org?modified, which is for the data record)
- 05:36 PM Revision 10716: mappings/VegCore.htm: regenerated from wiki. added source__version (= edition), source__modified_date.
- 05:33 PM Revision 10715: bugfix: schemas/util.sql: set_col_names_with_metadata(): *rename* any metadata cols rather than re-adding them with new names
- 04:38 PM Revision 10714: mappings/VegCore-VegBIEN.csv: mapped edition
- 04:36 PM Revision 10713: bugfix: inputs/.geoscrub/{Source,geoscrub_output}/VegBIEN.csv: switched to the version needed for new-style datasources
- 04:12 PM Revision 10712: inputs/.geoscrub/Source/map.csv: mapped edition (the version), using `svn info derived/biengeo/`
- 03:53 PM Revision 10711: schemas/vegbien.sql: source.revision: renamed to import_revision for clarity
- 03:52 PM Revision 10710: schemas/vegbien.my.sql: updated with `make schemas/remake`
- 03:44 PM Revision 10709: schemas/vegbien.sql: source: datecreated, datelastmodified: default to now() like in VegBank (schemas/VegBank/vegbank.sql)
- 03:29 PM Revision 10708: schemas/vegbien.sql: source: added datecreated, datelastmodified, etc. for source-level tracking of import and revision (wiki.vegpath.org/2013-08-22_conference_call#source-level-tracking-of-import-and-revision)
- 02:54 PM Revision 10707: added derived/biengeo/ from https://projects.nceas.ucsb.edu/nceas/projects/biengeo/repository/
- 02:50 PM Revision 10706: added /derived
- 01:04 PM Revision 10705: web/links/index.htm: updated to Firefox bookmarks. Gmvault: added steps to do full backup and to backup only new e-mails
- 12:48 PM Revision 10704: planning/timeline/timeline.2013.xls: flagged timeline issues that can be done by iPlant personnel: Attribution and conditions of use, Geoscrubbing re-run, Geoscrubbing automated pipeline, Improve and complete data provider metadata, Obtain any additional new data
- 11:12 AM Revision 10703: web/links/index.htm: updated to Firefox bookmarks. Gmvault: added run instructions for Mac.
- 10:50 AM Revision 10702: web/links/index.htm: updated to Firefox bookmarks. added link to Gmvault (Gmail backup), which wouldn't install for me on Mac (but that may be because I'm using 10.8, and Gmvault is for 10.7/10.6)
- 10:26 AM Revision 10701: added planning/meetings/BIEN conference call availability.xlsx (backup of Google spreadsheet)
- 10:10 AM Revision 10700: planning/timeline/timeline.2013.xls: updated for changes made in the conference call: moved Data provider validations (spot-checking) to beginning since that seems to have been decided to be a higher priority than architectural changes
08/21/2013
- 06:50 PM Revision 10699: schemas/util.sql: combining functions taking anyelement params which could be text: take text param instead, so that other argument types (e.g. integer) will first be implicitly cast to text instead of trying to concatenate integers directly. this fixes a bug in the VegBank.stemcount_,stemlocation_ _join() of two integer pkeys, which first needed to be cast to text. anyelement was previously used so that other text-like types such as varchar could also be used, but varchar is implicitly castable to text so keeping anyelement should not be necessary.
- 06:07 PM Revision 10698: planning/timeline/timeline.2013.xls: added tasks to Avoid DB restructuring when ingesting a new datasource and Streamline process of mapping and adding a new datasource (not yet put in priority order)
08/20/2013
- 08:49 PM Revision 10697: inputs/VegBank/observation_/test.xml.ref: updated inserted row count
- 02:40 PM Revision 10696: bugfix: inputs/VegBank/stemlocation_/map.csv: also _join together taxonimportance_id, stemcount_id for aggregateOrganismObservationID so that the aggregateoccurrence pkeys match up with those imported from stemcount_
- 01:40 PM Revision 10695: bugfix: inputs/VegBank/stemcount_/map.csv: aggregateOrganismObservationID: prepend taxonimportance_id so that rows with only a taxonimportance entry (no stemcounts) will also have the required sourceaccessioncode
- 12:59 PM Revision 10694: schemas/vegbien.sql: analytical_stem: synced with analytical_stem_view using sync_analytical_stem_to_view()
- 12:58 PM Revision 10693: bugfix: schemas/vegbien.sql: sync_analytical_stem_to_view(): added re-creation of range_modeling_input view
- 11:40 AM Revision 10692: schemas/vegbien.sql: analytical_stem_view: added aggregateOrganismObservationID (aggregateoccurrence.sourceaccessioncode) so aggregateoccurrences can be matched back up to their input rows (e.g. VegBank.stemcount)
- 10:31 AM Revision 10691: inputs/VegBank/taxonobservation_/map.csv: plantname: remapped to DUPLICATE#of:plantconcept_plantname because this is an exact duplicate
- 10:24 AM Revision 10690: bugfix: inputs/VegBank/taxonobservation_/map.csv: updated input column names for renamings in inputs/VegBank/vegbank.~.clean_up.sql
- 10:21 AM Revision 10689: inputs/VegBank/taxonobservation_/map.csv: Species and lower ranks: remapped to EQUIV#to:plantname because these contain the taxonomic name at specific ranks, but plantname contains the taxonomic name of the plant itself, which is longer and populated more often
08/19/2013
- 05:48 AM Revision 10688: inputs/VegBank/taxonobservation_/create.sql: also join to plantname, since plantconcept.plantname may not always be populated when plantname.plantname is
- 05:36 AM Revision 10687: fix: inputs/VegBank/taxonobservation_/map.csv: Species and below: remapped to _alts of scientificName, because these are actually the *full* taxonomic name at that rank, not just the epithet. Genus: documented that it includes the genus author.
- 05:28 AM Revision 10686: inputs/VegBank/vegbank.~.clean_up.sql: disambiguated plantconcept.plantname, plantname.reference_id to enable joining plantconcept_->plantname
- 05:13 AM Revision 10685: fix: inputs/VegBank/taxonobservation_/map.csv: also mapped plantname to scientificName, since int_currplantscifull is not always provided when this is. (it cannot replace int_currplantscifull, because when int_currplantscifull is also provided, this often leaves out lower ranks.) this should fill in taxonomic information for taxonobservations that are currently missing it.
- 03:27 AM Revision 10684: bugfix: schemas/vegbien.sql: analytical_stem_view: coordinates: use the coordinates from datasource_place instead of canon_place, because canon_place's coordinates are only the geoscrubbing output and do not contain datasource-specific information such as coordsaccuracy_m
- 01:53 AM Revision 10683: inputs/VegBank/taxonobservation_/map.csv: collector_id: remapped to UNUSED. removed LEFT JOIN collector_id->party since this field is never populated.
- 01:16 AM Revision 10682: inputs/VegBank/plot_/map.csv: area|country|territory, region|state|province (from place table): remapped to DUPLICATE, since these have the same data as, and are populated less often than, their country/stateprovince counterparts
08/18/2013
- 11:39 PM Revision 10681: bugfix: inputs/VegBank/plot_/create.sql: need to join place.*plot_id* to plot.plot_id instead of plotplace_id. this is the cause of the "State is wrong, not Wyoming, but Tennessee" and "County is incorrect (not Powell, but Orange)" bugs in the VegBank spot-checking (wiki.vegpath.org/Spot-checking#Great-Smoky-Mountains-National-Park).
- 10:42 PM Revision 10680: inputs/VegBank/taxonobservation_/map.csv: int_origplantscifull: remapped to EQUIV (to authorplantname). this is the scrubbed originalScientificName, but we do our own scrubbing.
- 10:23 PM Revision 10679: fix: inputs/VegBank/taxonobservation_/map.csv: authorplantname: remapped to originalScientificName because it includes the name author
- 10:21 PM Revision 10678: inputs/VegBank/taxonobservation_/map.csv: mapped int_origplant*, int_currplant* to *scientificName/*taxonName/etc.
- 09:48 PM Revision 10677: inputs/VegBank/plot_/map.csv: elevation: documented that it has only 5 decimal places of precision, with only 9s and random #s after that
- 09:47 PM Revision 10676: inputs/VegBank/plot_/test.xml.ref: update rowcount
- 09:15 PM Revision 10675: inputs/VegBank/stemlocation_/map.csv: stemcode, stemxposition, stemyposition: remapped to UNUSED. stemhealth is the only data field in this table that is populated, which means that VegBank does not provide data on reobservable stems even though the schema supports it.
- 09:12 PM Revision 10674: inputs/VegBank/observation_/postprocess.sql: added pkey
- 08:19 PM Revision 10673: schemas/VegCore/VegCore.ERD.mwb: regenerated exports and updated image map
- 08:18 PM Revision 10672: schemas/VegCore/ERD/index.htm: regenerated
- 08:05 PM Revision 10671: schemas/VegCore/VegCore.ERD.mwb: taxon_observation: added observation_in_parent_place, which points to the observation of the same taxon/individual in the parent place. this accounts for VegBank allowing multiple taxonImportances per taxonObservation, which only makes sense when each taxonImportance is from a different stratum and they point to a common taxonObservation for the parent plot.
- 07:58 PM Revision 10670: bugfix: web/.htaccess: auto-detect dotpath in query string: need to include the path ($0) in the replacement, to avoid reverting to the root dir. (mod_rewrite replacements are not like relative URLs, which would interpret ?... as being relative to the *current* path, not the root.)
- 07:56 PM Revision 10669: bugfix: web/.htaccess: auto-detect dotpath in query string: added missing $ at end of regexp