Activity
From 10/24/2013 to 11/22/2013
11/21/2013
- 05:20 PM Revision 11729: inputs/CVS/plot_/: translated column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#1-Translate-filters-to-postprocessing-derived-columns
- 04:59 PM Revision 11728: /README.TXT: Full database import: verifying import: In PostgreSQL: don't include current values of the datasource counts, etc., because these may change and should always be re-checked at wiki.vegpath.org/VegBIEN_contents
- 04:27 PM Revision 11727: inputs/CVS/plot_/postprocess.sql: added pkey from the primary joined table
- 04:11 PM Revision 11726: inputs/CVS/plot_/map.csv: documented assumptions about the units of fields
- 03:52 PM Revision 11725: inputs/CVS/plot_/map.csv: documented assumptions about the units and meaning of numeric codes for fields
- 03:01 PM Revision 11724: inputs/CVS/plantConcept_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#1-Translate-filters-to-postprocessing-derived-columns
- 02:54 PM Revision 11723: inputs/CVS/plantConcept_/: translated multi-column filters to postprocessing derived columns, using the steps at http://wiki.vegpath.org/Adding_new-style_import_to_a_datasource#1-Translate-filters-to-postprocessing-derived-columns
- 01:59 PM Revision 11722: web/links/index.htm: updated to Firefox bookmarks. BIEN: added DataONE compatibility links.
- 01:58 PM Revision 11721: inputs/CVS/plantConcept_/postprocess.sql: added pkey from the primary joined table
- 01:11 PM Revision 11720: inputs/CVS/observation_/postprocess.sql: added pkey from the primary joined table. added _parent index to facilitate joins.
- 01:08 PM Revision 11719: fix: inputs/input.Makefile: $(svnFilesGlob): removed schema and PDF files, since these are owned by the data provider and should not be in the repository that gets open-sourced
- 01:01 PM Revision 11718: bugfix: inputs/CVS/observation_/create.sql: only include one soilObs for each observation (using DISTINCT ON), rather than just left-joining them
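  A minimal sketch of the DISTINCT ON pattern this bugfix refers to (table and column names here are hypothetical, not the actual CVS staging-table definitions):
  ```sql
  -- keep only one soilObs row per observation, instead of letting a plain
  -- LEFT JOIN multiply the observation rows
  SELECT DISTINCT ON (o.observation_id)
         o.*, s.*
  FROM observation o
  LEFT JOIN soilobs s USING (observation_id)
  ORDER BY o.observation_id, s.soilobs_id;
  -- DISTINCT ON requires the leading ORDER BY columns to match its own;
  -- the second sort key determines which soilobs row is kept
  ```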
- 11:59 AM Revision 11717: inputs/: removed SALVIAS-CSV, because this is a sample datasource which was only there to test the mapping process. it should not be adding records that duplicate SALVIAS, nor should it take up maintenance effort (switching to new-style import, updating to match SALVIAS, etc.).
- 11:52 AM Revision 11716: planning/timeline/timeline.2013.xls: removed the weeks of 12/23, 12/30 because these are during winter break. rescheduled tasks.
- 11:08 AM Revision 11715: inputs/.TNRS/schema.sql: updated runtime (30 min) and rowcount (+2 million)
- 10:23 AM Revision 11714: planning/timeline/timeline.2013.xls: rescheduled tasks
- 10:16 AM Revision 11713: planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
- 10:14 AM Revision 11712: planning/timeline/timeline.2013.xls: updated for progress
- 09:04 AM Revision 11711: fix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: set this to false if Taxonomic_status is Invalid
- 08:53 AM Revision 11710: schemas/vegbien.sql: analytical_stem_view: added taxonomic_status. notice that PostgreSQL 9.3 puts each view column on a separate line, making it *much* easier to review the svn diff!
- 08:49 AM Revision 11709: inputs/.TNRS/schema.sql: added map_taxonomic_status()
- 08:48 AM Revision 11708: inputs/.TNRS/schema.sql, data.sql: updated for PostgreSQL 9.3
- 08:26 AM Revision 11707: bugfix: inputs/CVS/stemCount_/map.csv: ensure the aggregateoccurrence.sourceaccessioncode is always populated, because this is a required field when using sourceaccessioncodes. without it, the import will exclude rows which lack a value in this field because it cannot deduplicate on it for these rows, leading to the dropping of large numbers of occurrences. this shows up when comparing provider_count to the input table's row count, and produces the following error in the .errors table:
  ---
  ERROR: duplicate key value violates unique constraint "aggregateoccurrence_taxonoccurrence_1_to_1"
  DETAIL: Key ...
- 07:40 AM Revision 11706: fix: schemas/vegbien.sql: taxon_trait_view: include only TNRS-valid names
- 12:24 AM Revision 11705: copyright scrub: inputs/: removed data provider-owned schema and documentation files, which are not BIEN copyright and should not be part of what is submitted for open-sourcing. these files will remain accessible via the web interface (fs.vegpath.org), but will not be in the repository.
- 12:02 AM Revision 11704: added inputs/TEAM/_src/data_cart.tsv, containing the content extracted from data_cart.maff
11/20/2013
- 11:38 PM Revision 11703: web/links/index.htm: updated to Firefox bookmarks. BIEN: open-sourcing: added UArizona and iPlant IP policies, which are relevant to Brad's numerous documentation and schema-modeling contributions in our repository (most done while he was an iPlant employee).
- 10:49 PM Revision 11702: removed inputs/TEAM/_src/data_cart.pdf since this does not contain all the info in data_cart.maff
- 01:21 PM Revision 11701: added planning/legal/open-sourcing/request_to_open_source_software.orig.docx.url
- 01:18 PM Revision 11700: added planning/legal/open-sourcing/, which will contain the "request to open source software" form (this cannot be under version control due to copyright limitations stated in the form)
11/19/2013
- 09:21 PM Revision 11699: web/links/index.htm: updated to Firefox bookmarks. BIEN: open-sourcing: added potential licenses we could use (public domain/CC0, BSD, GNU Verbatim Copying License, *not* CC-BY because incompatible w/ GPL).
- 08:31 PM Revision 11698: web/links/index.htm: updated to Firefox bookmarks. BIEN: added links related to open-sourcing it, including the "Request to Open Source Software" form, the funding sources that need to be included in it, and part of the delegation of authority chain (from the UC Regents) that authorizes the open-sourcing.
11/18/2013
- 11:38 PM Revision 11697: backups/TNRS.backup.md5: updated
- 05:40 PM Revision 11696: schemas/vegbien.sql: sync_analytical_stem_to_view(): use new util.force_recreate() instead of manually dropping and re-creating every view that uses this. this avoids the need to add several lines to this function every time we add a new scientific view (of which we expect to have many), because force_recreate()'s error parsing handles this automatically. this makes it possible for a non-expert user to add scientific views without compromising the ability to add columns to analytical_stem_view, because they don't need to understand Postgres's dependency error messages when updating analytical_stem with this function.
- 05:32 PM Revision 11695: schemas/util.sql: added force_recreate(), for use by sync_analytical_stem_to_view(). this uses the new `GET STACKED DIAGNOSTICS` in PostgreSQL 9.3 to access the DETAIL section of the dependent_objects_still_exist error.
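  A hedged sketch of the error parsing this relies on (not the actual util.force_recreate() body): catch dependent_objects_still_exist and read its DETAIL, which lists the dependent views.
  ```sql
  DO $$
  DECLARE
      detail_ text;
  BEGIN
      DROP TABLE some_table;  -- hypothetical object that dependent views still use
  EXCEPTION WHEN dependent_objects_still_exist THEN
      GET STACKED DIAGNOSTICS detail_ = PG_EXCEPTION_DETAIL;
      RAISE NOTICE 'dependent objects: %', detail_;
      -- force_recreate() would presumably save these views' definitions,
      -- drop them, rerun the original DDL, and then re-create them
  END $$;
  ```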
- 12:10 PM Revision 11694: web/links/index.htm: updated to Firefox bookmarks. upgrading to PostgreSQL 9.3: added Linux pg_upgrade steps and install instructions. added Mac PostGIS, psycopg2 install steps. added note that after installing, you need to restore config values that the upgrade reset: in pgAdmin > Preferences > Query tool > Query editor, set Max characters per column back to -1 (to avoid cells being truncated). (this is *not* a bug in PostgreSQL, only in pgAdmin, and does *not* signal a need to downgrade.)
- 06:52 AM Revision 11693: planning/timeline/timeline.2013.xls: hid previous weeks
- 06:51 AM Revision 11692: planning/timeline/timeline.2013.xls: rescheduled tasks
- 06:45 AM Revision 11691: planning/timeline/timeline.2013.xls: added timespan checkmarks
- 06:44 AM Revision 11690: planning/timeline/timeline.2013.xls: hid completed tasks
- 06:43 AM Revision 11689: planning/timeline/timeline.2013.xls: updated for progress
- 06:23 AM Revision 11688: inputs/CVS/run: `make .../reinstall`: documented vegbiendev runtime (45 min)
- 05:35 AM Revision 11687: removed inputs/CVS/cvs-archive-2012-12-04.schema.sql, which has been replaced by cvs-eep-archive-2013-10-22-VegBIEN.schema.sql
- 05:05 AM Revision 11686: bugfix: /README.TXT: to backup files not in Time Machine: PostgreSQL: need to run with `overwrite=1` so removed files are also deleted
- 05:02 AM Revision 11685: /README.TXT: to backup files not in Time Machine: PostgreSQL: only stop PostgreSQL after all files have been copied, to minimize the time that the PostgreSQL server is down (the final copy just copies concurrent changes)
- 05:02 AM Revision 11684: /README.TXT: to backup files not in Time Machine: PostgreSQL: only stop PostgreSQL after all files have been copied, to minimize the time that the PostgreSQL server is down (the final copy just copies concurrent changes)
- 04:59 AM Revision 11683: /README.TXT: updated to PostgreSQL 9.3
- 04:54 AM Revision 11682: added inputs/CVS/_src/cvs-eep-archive-2013-10-22-VegBIEN.zip.url
- 04:54 AM Revision 11681: added inputs/CVS/cvs-eep-archive-2013-10-22-VegBIEN.schema.sql
- 04:52 AM Revision 11680: inputs/CVS/run: documented `make .../reinstall` runtime (25 min)
- 04:27 AM Revision 11679: inputs/VegBank/stemlocation_/header.csv: updated from reinstalling stemlocation_
- 04:26 AM Revision 11678: added inputs/CVS/_src/cvs-eep-archive-2013-10-22-VegBIEN.schema.sql
- 04:23 AM Revision 11677: added inputs/CVS/_src/cvs-eep-archive-2013-10-22-VegBIEN.schema.sql.run, which makes the SQL suitable for PostgreSQL
- 03:52 AM Revision 11676: bugfix: inputs/input.Makefile: sql/install: exit on error by using `set -o pipefail`
- 12:43 AM Revision 11675: fix: /Makefile: $(macPostgresLibs): added libpq.5, which is needed by PostgreSQL 9.3
- 12:29 AM Revision 11674: fix: /Makefile: postgres-Darwin: also need to install psycopg2
11/17/2013
- 11:27 PM Revision 11673: /Makefile: postgres-Linux: add the PostgreSQL 9.2 apt-src in case we ever need to downgrade to it
- 10:57 PM Revision 11672: bugfix: /Makefile: postgres-Linux: ignore errors if `sudo apt-get update` returns a non-zero exit status due to unreachable apt sources (which are likely unrelated to PostgreSQL, and should not prevent PostgreSQL configuration from continuing)
- 10:54 PM Revision 11671: bugfix: /Makefile: postgres-Linux: fixed command to create /etc/apt/sources.list.d/pgdg.list
11/15/2013
- 06:29 AM Revision 11670: schemas/*.conf: upgraded to PostgreSQL 9.3, which is needed for proper exception parsing in the auto-re-create-views functionality
- 04:29 AM Revision 11669: /Makefile: postgres-Linux: also install postgresql-#-postgis-scripts, which is used by derived/biengeo/
11/14/2013
- 02:36 PM Revision 11668: bugfix: schemas/vegbien.sql: plantobservation_aggregateoccurrence_count_1(): only default aggregateoccurrence.count to 1 for specimens data, because plots data may have any number of individuals in a taxon_presence record that has no explicit individual_count
- 02:32 PM Revision 11667: schemas/*.sql: updated for PostgreSQL 9.3. this reorders some functions, adds empty comment headers for omitted SEQUENCE SET commands, and (best of all) finally splits view columns onto multiple lines, so that changes in the columns are actually legible (and produce their own svn diff!)
- 01:00 PM Revision 11666: planning/timeline/timeline.2013.xls: added tasks "create high-level workflow diagram" and "load BIEN2 exports directly from raw data", as requested by Martha
- 07:19 AM Revision 11665: bugfix: lib/Firefox_bookmarks.reformat.csv: remove empty <DD> tags (which Firefox now adds for all bookmarks) so they don't create a blank space on the page
- 07:16 AM Revision 11664: bugfix: lib/Firefox_bookmarks.reformat.csv: don't prepend "page's description:" to empty <DD> tags, which Firefox now adds for all bookmarks, even if they don't have a description
- 07:06 AM Revision 11663: web/links/index.htm: updated to Firefox bookmarks. added instructions for upgrading PostgreSQL to 9.3, and some GBIF links.
- 06:44 AM Revision 11662: *Makefile, schemas/*.Mac.conf: upgraded to PostgreSQL 9.3, which is needed for proper exception parsing in the auto-re-create-views functionality. this also removes the Mac 10.8 Mountain Lion quirks, such as renaming the postgres user to _postgres (which messed everything up, but is now back to normal).
- 04:09 AM Revision 11661: /Makefile: postgres-Linux: added steps to install PostgreSQL 9.3, which is needed for proper exception parsing in the auto-re-create-views functionality
- 02:59 AM Revision 11660: schemas/util.sql: added save_drop_views()
- 02:37 AM Revision 11659: schemas/util.sql: added is_empty(anyarray)
- 02:17 AM Revision 11658: added inputs/GBIF/_src/0001000-131106143450413.zip.md5, GBIFPortalDB-2013-09-10.dump.gz.md5
- 02:16 AM Revision 11657: schemas/util.sql: added regexp_matches_group()
- 01:13 AM Revision 11656: schemas/util.sql: show_create_view(): also include GRANT statements, which are necessary to fully re-create the view
- 12:54 AM Revision 11655: schemas/util.sql: added show_grants_for(table_ regclass), for use by show_create_view()
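  One way such a helper could assemble GRANT statements, sketched from information_schema (assumed approach; the actual util.show_grants_for() may differ):
  ```sql
  SELECT format('GRANT %s ON %I.%I TO %I;',
                privilege_type, table_schema, table_name, grantee)
  FROM information_schema.role_table_grants
  WHERE table_schema = 'public'
    AND table_name   = 'some_view';  -- hypothetical view name
  ```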
- 12:49 AM Revision 11654: inputs/GBIF/_src/GBIFPortalDB-2013-09-10.dump.gz.url: documented download time (5.5 h for an 18 GB file)
- 12:40 AM Revision 11653: inputs/GBIF/_src/0001000-131106143450413.zip.url: documented download time (only 2 h for an 18 GB file)
11/13/2013
- 08:35 PM Revision 11652: schemas/util.sql: added save_drop_view()
- 08:33 PM Revision 11651: schemas/util.sql: added show_create_view()
- 07:14 PM Revision 11650: added inputs/GBIF/_src/0001000-131106143450413.zip.url (DwC-A export), GBIFPortalDB-2013-09-10.dump.gz.url (raw data), portal_26_feb_2013.war.url (raw data portal)
- 04:50 PM Revision 11649: web/.htaccess: mod_autoindex: show .* files which are normally hidden, because these are important parts of our codebase. (the leading . is not used for access controls.) .svn folders will remain hidden to avoid clutter.
- 04:16 PM Revision 11648: inputs/GBIF/: added LOA files: _src/use_conditions/LetterOfAgreement_template.doc, BIEN LoA agreement annex.docx
- 02:48 AM Revision 11647: inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtime (40 min)
- 01:07 AM Revision 11646: web/links/index.htm: updated to Firefox bookmarks. added links related to PostgreSQL plain-text pkeys and the GBIF data use agreement (which is apparently much less restrictive than the LoA we signed, and would even allow the data to be public). vegetation data: placed links into subfolders by datasource.
11/10/2013
- 07:09 PM Revision 11645: bugfix: schemas/vegbien.sql: scrubbed_morphospecies_binomial: only append the morphospecies suffix if there is not a scrubbed specific epithet
- 07:08 PM Revision 11644: bugfix: schemas/vegbien.sql: scrubbed_morphospecies_binomial: only populate this from the component ranks; do not put a full taxon name in here if it would otherwise be NULL
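  Roughly the combined effect of these two fixes, sketched with hypothetical column names (the real formula lives in schemas/vegbien.sql):
  ```sql
  SELECT CASE
           WHEN scrubbed_specific_epithet IS NOT NULL
             THEN scrubbed_genus || ' ' || scrubbed_specific_epithet
           WHEN scrubbed_genus IS NOT NULL   -- built only from component ranks
             THEN scrubbed_genus || ' ' || morphospecies
           ELSE NULL                         -- never fall back to a full taxon name
         END AS scrubbed_morphospecies_binomial
  FROM taxon_scrub;  -- hypothetical source and column names
  ```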
- 07:02 PM Revision 11643: inputs/.TNRS/schema.sql: tnrs: removed no longer used Accepted_scientific_name. use scrubbed_unique_taxon_name instead.
- 07:00 PM Revision 11642: inputs/.TNRS/schema.sql: MatchedTaxon, etc.: removed no longer used acceptedScientificName (from tnrs.Accepted_scientific_name). use scrubbed_unique_taxon_name instead.
- 06:43 PM Revision 11641: inputs/.TNRS/schema.sql: removed no longer used AcceptedTaxon. use taxon_scrub.scrubbed_unique_taxon_name.* instead.
- 06:38 PM Revision 11640: bugfix: schemas/vegbien.sql: tnrs_input_name: MatchedTaxon self-join: must use a NOT NULL column for a proper anti-join. this unfortunately requires the more verbose LEFT JOIN ON syntax (which allows using the pkey as the NOT NULL column) instead of NATURAL LEFT JOIN (which requires using another column, which are all nullable)
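  The same constraint in generic form (placeholder names): the IS NULL test of an anti-join must be on a column that is NOT NULL in the right-hand table, such as its pkey, which is why the explicit LEFT JOIN ON syntax is needed.
  ```sql
  SELECT l.*
  FROM left_table l
  LEFT JOIN right_table r ON r.join_col = l.join_col
  WHERE r.pkey IS NULL;  -- pkey is NOT NULL, so NULL can only mean "no match";
                         -- a nullable column could be NULL even on a matched
                         -- row, silently breaking the anti-join
  ```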
- 06:34 PM Revision 11639: schemas/vegbien.sql: tnrs_input_name: use plain UNION, which automatically removes duplicates, rather than UNION ALL with a manual EXCEPT-removal of rows in the first SELECT
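  In generic form (placeholder names), the rewrite is roughly the following; the two forms agree when duplicates are meant to be removed anyway, which appears to be the intent for tnrs_input_name.
  ```sql
  -- before: duplicates removed manually from the first SELECT
  (SELECT name FROM first_source
   EXCEPT
   SELECT name FROM second_source)
  UNION ALL
  SELECT name FROM second_source;

  -- after: plain UNION deduplicates across (and within) both SELECTs
  SELECT name FROM first_source
  UNION
  SELECT name FROM second_source;
  ```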
- 06:14 PM Revision 11638: schemas/vegbien.sql: tnrs_input_name: updated to use taxon_scrub.scrubbed_unique_taxon_name.*, to avoid further dependencies on AcceptedTaxon
- 05:55 PM Revision 11637: inputs/.TNRS/schema.sql: removed no longer used ScrubbedTaxon. use taxon_scrub instead.
- 05:54 PM Revision 11636: schemas/vegbien.sql: taxon_trait_view: updated to use new taxon_scrub
- 05:51 PM Revision 11635: schemas/vegbien.sql: analytical_stem_view: updated to use new taxon_scrub. this avoids the need to manually COALESCE() every accepted* and matched* field, and makes the formulas much clearer
- 04:11 PM Revision 11634: inputs/.TNRS/schema.sql: added taxon_scrub, which combines ValidMatchedTaxon with scrubbed_unique_taxon_name.* instead of AcceptedTaxon
- 03:38 PM Revision 11633: inputs/.TNRS/schema.sql: ValidMatchedTaxon: synced to MatchedTaxon
- 03:22 PM Revision 11632: fix: inputs/.TNRS/schema.sql: scrubbed_taxon_name_with_author: renamed to scrubbed_unique_taxon_name because this also contains the family, and is therefore different from just the taxon name with author
- 01:50 PM Revision 11631: inputs/.TNRS/schema.sql: MatchedTaxon: added scrubbed_taxon_name_with_author
- 01:23 PM Revision 11630: inputs/.TNRS/schema.sql: tnrs: removed Is_homonym, since this did not take into account the never_homonym status (when the author disambiguates) or the ability of a non-homonym at a lower rank to override a homonym at a higher rank. taking these into account just produces the value of is_valid_match.
- 01:19 PM Revision 11629: inputs/.TNRS/schema.sql: tnrs: removed Is_plant, since this functionality is now provided by is_valid_match. note that whether a name is a plant is not meaningful for TNRS, because it can match only plant names (thus a "non-plant" is actually a non-match).
- 01:06 PM Revision 11628: inputs/.TNRS/schema.sql: tnrs: added scrubbed_taxon_name_with_author derived column, which uses the matched name when an accepted name is not available
- 09:44 AM Revision 11627: inputs/.TNRS/schema.sql: tnrs: removed no longer used Max_score. use is_valid_match to determine validity instead.
- 12:09 AM Revision 11626: bugfix: lib/runscripts/file.pg.sql.run: export_(): exclude Source and related tables so that these will be re-created by the staging tables installation instead, ensuring that they are always in sync with the Source/ subdir
- 12:08 AM Revision 11625: inputs/.TNRS/data.sql: updated for new derived columns
- 12:04 AM Revision 11624: bugfix: lib/runscripts/file.pg.sql.run: export_(): exclude Source and related tables so that these will be re-created by the staging tables installation instead, ensuring that they are always in sync with the Source/ subdir
11/09/2013
- 10:22 PM Revision 11623: bugfix: schemas/vegbien.sql: analytical_stem_view: scrubbed_taxon_name_no_author, scrubbed_author: need to COALESCE() these to the matched* when no accepted* is available
- 10:02 PM Revision 11622: schemas/vegbien.sql: analytical_stem_view, etc.: renamed scrubbed fields with the scrubbed_* prefix, to clearly distinguish these from the equivalent fields for other taxon names
- 09:10 PM Revision 11621: bugfix: schemas/vegbien.sql: analytical_stem_view: family, genus: need to COALESCE() these to the matched* when no accepted* is available
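  The fallback pattern used by both of these bugfixes, sketched with generic column names (not the exact analytical_stem_view expressions):
  ```sql
  SELECT COALESCE(accepted_family, matched_family) AS scrubbed_family,
         COALESCE(accepted_genus,  matched_genus)  AS scrubbed_genus
  FROM tnrs_match;  -- hypothetical source table
  ```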
- 06:04 PM Revision 11620: backups/TNRS.backup.md5: updated
- 04:47 PM Revision 11619: inputs/.TNRS/schema.sql: removed no longer used score_ok(). use tnrs.Is_plant instead. (the threshold is still documented in tnrs_populate_fields().)
- 04:45 PM Revision 11618: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: don't consider Max_score because Is_plant will always be false when the Max_score is insufficient (<0.8)
- 04:20 PM Revision 11617: inputs/.TNRS/schema.sql: schema comment: added steps to remake schema.sql and back up the new TNRS schema. documented that these steps should be run on vegbiendev.
- 04:16 PM Revision 11616: inputs/.TNRS/schema.sql: schema comment: added steps to determine what changes need to be made on vegbiendev
- 04:01 PM Revision 11615: inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtimes (~same)
- 03:54 PM Revision 11614: inputs/.TNRS/schema.sql: tnrs: moved instructions to apply schema changes on vegbiendev to the TNRS schema, because this applies to all elements in the TNRS schema, not just the tnrs table
- 03:30 PM Revision 11613: inputs/.TNRS/schema.sql: score_ok(): don't make it STRICT because this prevents it from being inlined
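  A sketch of the trade-off (not the actual score_ok() source; the 0.8 threshold comes from the neighboring entries): a simple SQL function is only inlined into the calling query if, among other conditions, it is not declared STRICT, so NULL handling is left to the comparison itself.
  ```sql
  CREATE OR REPLACE FUNCTION score_ok_sketch(score double precision)
    RETURNS boolean
    LANGUAGE sql
    IMMUTABLE  -- but *not* STRICT, so the planner can inline the body
  AS $$ SELECT score >= 0.8 $$;  -- a NULL score still yields NULL, as STRICT would
  ```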
- 03:24 PM Revision 11612: inputs/.TNRS/schema.sql: tnrs: removed no longer used tnrs_score_ok index. use tnrs__valid_match instead.
- 03:09 PM Revision 11611: bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: documented that this excludes homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is not meaningful because it is not unambiguous)
- 03:07 PM Revision 11610: bugfix: inputs/.TNRS/schema.sql: ValidMatchedTaxon: exclude inter-kingdom homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is not meaningful because it is not unambiguous). this uses taxon_scrub__is_valid_match instead of score_ok() to determine whether the result should be included.
- 02:56 PM Revision 11609: inputs/.TNRS/schema.sql: ValidMatchedTaxon: synced to MatchedTaxon
- 02:55 PM Revision 11608: inputs/.TNRS/schema.sql: MatchedTaxon: added is_valid_match
- 02:52 PM Revision 11607: inputs/.TNRS/schema.sql: tnrs: added tnrs__valid_match index to facilitate joining to only valid matches
- 02:48 PM Revision 11606: inputs/.TNRS/schema.sql: tnrs: added is_valid_match derived column, to make it easier to select from only those TNRS results that can safely be used as a scrubbed name
- 02:02 PM Revision 11605: lib/sh/util.sh: already_exists_msg(): added instructions on how to force-remake when the file already exists (prepend `rm=1` to the command)
- 02:20 AM Revision 11604: inputs/VegBank/^taxon_observation.**.sample/test.xml.ref: updated inserted row count, now that CVS plots have been removed
11/08/2013
- 10:57 PM Revision 11603: bugfix: lib/runscripts/view.run: don't do anything in load_data(), to avoid trying to remake header.csv before the view is created. (for views, this instead happens in postprocess().)
- 10:51 PM Revision 11602: lib/runscripts/table.run: reordered functions in the order they are called by import()
- 10:28 PM Revision 11601: bugfix: inputs/VegBank/: need to remove inter-datasource duplicates from plot instead of the left-joined plot_ table, because the fkeys needed to do the cascading deletes are all to the plot table. this requires doing the column-renaming and postprocessing on plot *before* it's left-joined.
- 09:57 PM Revision 11600: inputs/VegBank/plot_/create.sql: updated runtime (5 s) for previous bugfix
- 07:50 PM Revision 11599: exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime and rowcount (~ the same)
- 04:26 PM Revision 11598: bugfix: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: filter by coordinateUncertaintyInMeters filter: assume true for rows with no coordinateUncertaintyInMeters
- 03:43 PM Revision 11597: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: filter by coordinateUncertaintyInMeters <= 10 km
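  Combined, the two revisions above amount to a filter along these lines (sketched against analytical_stem; quoting assumed). The explicit IS NULL branch matters because NULL <= 10000 evaluates to NULL rather than true, which would silently drop rows with no uncertainty value.
  ```sql
  SELECT *
  FROM analytical_stem
  WHERE "coordinateUncertaintyInMeters" <= 10000   -- 10 km
     OR "coordinateUncertaintyInMeters" IS NULL;   -- assume true when not provided
  ```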
11/07/2013
- 04:41 PM Revision 11596: planning/timeline/timeline.2013.xls: updated for progress
- 04:00 PM Revision 11595: inputs/.geoscrub/geoscrub_output/run: load_data(): updated runtime (4 min)
- 08:42 AM Revision 11594: planning/timeline/timeline.2013.xls: updated for progress
- 08:34 AM Revision 11593: bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: invoking derived/biengeo/geoscrub.sh: need to split the input file into separate dir and filename parts, because $DATAFILE actually is just the filename, not the entire path, and will otherwise get prepended with the default value for $DATADIR
11/06/2013
- 04:57 PM Revision 11592: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: also run geoscrub.sh. added export_() target to run just the export of the result table separately.
- 04:39 PM Revision 11591: derived/biengeo/load-geoscrub-input.sh: allow the caller to override $DATAFILE in the environment, to use a file named other than "geoscrub-corpus.csv"
- 02:41 PM Revision 11590: /run: use new exports/geoscrub_input.csv.run
- 02:40 PM Revision 11589: added exports/geoscrub_input.csv.run
- 02:39 PM Revision 11588: bugfix: lib/sh/make.sh: $remake: need to explicitly propagate this to invoked commands if it was set from $rm
- 12:34 PM Revision 11587: derived/biengeo/load-geoscrub-input.sh: updated $DATA_URL for new input filename
- 12:27 PM Revision 11586: /run geoscrub_input/make(): include a header on the CSV file, so that the column names don't risk getting spliced from the data (and to shorten the CSV filename, which had to contain the column names instead). this requires changing the geoscrubbing scripts to accept a CSV header.
- 11:22 AM Revision 11585: planning/timeline/timeline.2013.xls: updated for progress
- 10:14 AM Revision 11584: exports/2013-7-10.Naia.range_limiting_factors.csv.run: added rowcount (40 million of 80 million observations, filtered w/ cultivated, geovalid, and various fields NOT NULL)
- 01:46 AM Revision 11583: exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime
- 01:30 AM Revision 11582: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: don't sort the results by occurrence_id, because this is not a meaningful ordering and prevents incremental output from the query
- 01:09 AM Revision 11581: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: also filter out rows without species
- 12:59 AM Revision 11580: exports/2013-7-10.Naia.range_limiting_factors.csv.run: export_(): documented runtime (10 min)
11/05/2013
- 11:13 PM Revision 11579: lib/sh/db.sh: mk_select(): usage: documented that this also takes a $limit/$n param
- 11:12 PM Revision 11578: lib/sh/db.sh: limit(): also support using $n as the limit param, since this var name is used by other parts of the import process
- 11:08 PM Revision 11577: added backups/vegbien.r11549.backup.md5
- 11:06 PM Revision 11576: lib/sh/db.sh: limit(): usage: documented that this also needs a $limit param
- 11:06 PM Revision 11575: backups/TNRS.backup.md5: updated
- 10:47 PM Revision 11574: lib/runscripts/extract.run: added export_sample()
- 10:31 PM Revision 11573: /README.TXT: Full database import: after import: record the import times in inputs/import.stats.xls: documented that this should be run on the local machine, because it needs the Mac filename ordering
- 10:30 PM Revision 11572: planning/timeline/timeline.2013.xls: updated for progress
- 10:19 PM Revision 11571: inputs/import.stats.xls: updated import times
- 08:54 PM Revision 11570: /README.TXT: Full database import: after import: removed step to install analytical_stem on nimoy because the import mechanism is not set up to do this (we don't generate CSV exports of the full analytical_stem table because they take up a lot of space and are not currently used for anything)
- 08:32 PM Revision 11569: /README.TXT: Full database import: after import: In PostgreSQL: added step to check that analytical_stem contains the expected # of rows
- 08:16 PM Revision 11568: /README.TXT: Full database import: after import: In PostgreSQL: added specific instructions for determining which/how many datasources are expected to be included in the provider_count and source tables
- 08:05 PM Revision 11567: added inputs/analytical_db/_archive/
- 07:46 PM Revision 11566: inputs/analytical_db/: removed import-related files (Source/, etc.), since this is actually just a folder used to store make_analytical_db.log.sql, so that it will be synced along with the other logs
- 07:43 PM Revision 11565: inputs/analytical_db/: added _no_import to prevent this from incorrectly being included in the source table
- 07:27 PM Revision 11564: inputs/input.Makefile: $(_svnFilesGlob): also svn-add _no_import in the top-level datasrc dir. (this requires using add! , because the presence of a _no_import file there will normally turn off adding by svnFilesGlob.)
- 11:49 AM Revision 11563: Added an output CSV file option to geoscrub.sh.
11/04/2013
10/31/2013
- 05:35 PM Revision 11561: Added biengeo script options for data directories.
  Added GADM and geonames.org data dir options to update_validation_data.sh scripts.
  Added geoscrub input data dir opti...
- 05:35 PM Revision 11560: Added update options to biengeo update_validation_data.sh
  Added options to update only GADM data, only Geonames.org data, or neither. In every case, the geonames-to-gadm scrip...
- 05:35 PM Revision 11559: Added cmd-line options to biengeo bash scripts.
  All biengeo bash scripts now accept command line options to specify psql user, host, and database values.
  These optio...
- 05:35 PM Revision 11558: Fix biengeo script password prompt for postgres user.
  Changed the DB_HOST variables in the biengeo bash scripts to a DB_HOST_OPT variable that is blank by default.
  Updated...
- 05:35 PM Revision 11557: Fixed TRUNCATE statement in truncate.geonames.sql.
  Fixed the biengeo truncate.geonames.sql script to include all tables in one TRUNCATE statement that have foreign-key ...
- 05:35 PM Revision 11556: Added more approx. runtimes to biengeo README.
- 05:35 PM Revision 11555: Renamed biengeo install scripts to setup scripts.
  It seems to make more sense to call these setup scripts, since they are only setting up the database and tables, and ...
- 12:29 PM Revision 11554: planning/timeline/timeline.2013.xls: updated for progress
- 12:24 PM Revision 11553: planning/timeline/timeline.2013.xls: datasource validations: CVS: left-join it: moved under "fix issues and critical feature requests" instead of "prepare 1st-round extracts" because the left-joining is actually part of getting it in the same format as VegBank
- 11:12 AM Revision 11552: inputs/CTFS/StemObservation/test.xml.ref: updated inserted row count
- 10:30 AM Revision 11551: planning/timeline/timeline.2013.xls: datasource validations: rescheduled CVS before other datasources, as decided in the conference call
- 10:27 AM Revision 11550: schemas/Makefile: $(confirmRmPublicSchema0): use "any ... schema" instead of "the ... schema" because the schema in question may not exist
- 08:53 AM Revision 11549: planning/timeline/timeline.2013.xls: datasource validations: rescheduled tasks for new order
- 08:42 AM Revision 11548: planning/timeline/timeline.2013.xls: datasource validations: reordered to put plots before specimens, as requested by Brad (wiki.vegpath.org/2013-10-25_conference_call#validation-order)
- 08:24 AM Revision 11547: planning/timeline/timeline.2013.xls: updated for progress
- 08:21 AM Revision 11546: planning/timeline/timeline.2013.xls: hid previous weeks
- 08:20 AM Revision 11545: planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
- 08:17 AM Revision 11544: fix: planning/timeline/timeline.2013.xls: datasource validations: prepare 2nd-round extracts: VegBank: corrected check mark week, based on date of extract
- 08:14 AM Revision 11543: planning/timeline/timeline.2013.xls: datasource validations: added "prepare 3rd-round extracts" subtask, which currently applies to VegBank. updated for progress.
- 08:08 AM Revision 11542: planning/timeline/timeline.2013.xls: "datasource validations (spot-checking)": renamed to just "datasource validations" because that's what we've been calling it
- 08:08 AM Revision 11541: planning/timeline/timeline.2013.xls: datasource validations: CVS: added "VegBank-related changes" subtask
- 08:05 AM Revision 11540: planning/timeline/timeline.2013.xls: updated for progress and revised schedule
- 07:51 AM Revision 11539: bugfix: inputs/VegBank/import_order.txt: updated name of ^taxon_observation.**.sample table
- 07:16 AM Revision 11538: fix: inputs/VegBank/^taxon_observation.**.sample/create.sql: moved continent before country
- 06:54 AM Revision 11537: inputs/VegBank/^taxon_observation.**.sample/create.sql: added missing columns that were recently mapped to VegBIEN (identifiedBy)
- 06:52 AM Revision 11536: inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
- 06:49 AM Revision 11535: inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
- 06:47 AM Revision 11534: inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the _join_words() of identifiedBy__first, etc.)
- 06:22 AM Revision 11533: fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed derived columns that are not part of the validation
- 06:15 AM Revision 11532: fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed internal ID columns that are not part of the validation
- 05:46 AM Revision 11531: schemas/vegbien.sql: analytical_plot: removed derived columns that should not be validated by data providers
- 05:42 AM Revision 11530: schemas/vegbien.sql: analytical_specimen: synced to analytical_stem
- 05:36 AM Revision 11529: schemas/vegbien.sql: analytical_plot: documented that this contains all of the analytical_stem columns, minus specimenHolderInstitutions, collection, accessionNumber, occurrenceID
- 05:34 AM Revision 11528: schemas/vegbien.sql: analytical_plot: synced to analytical_stem
- 05:29 AM Revision 11527: schemas/vegbien.sql: analytical_stem_view: added individualCount
- 04:42 AM Revision 11526: schemas/vegbien.sql: plot.**, analytical_stem_view: added slopeAspect, slopeGradient
- 03:41 AM Revision 11525: schemas/VegCore/ERD/VegCore.ERD.mwb: traceable.id_by_source: support multiple ids_by_source per traceable, because the same entity may be present in multiple datasources (e.g. if one got data from the other), and we would like to remove that duplicate
- 02:46 AM Revision 11524: inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the _join_words() of identifiedBy__first, etc.)
- 02:34 AM Revision 11523: inputs/VegBank/taxonobservation_/create.sql: also join party_id to get the identifiedBy (not mapped yet). note that the inserted row count changes, because taxonobservation_ does not yet have a pkey to do a stable ordering with.
- 02:16 AM Revision 11522: bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
- 02:06 AM Revision 11521: inputs/VegBank/vegbank.~.clean_up.sql: taxoninterpretation.party_id: don't rename to taxoninterpretation_party_id, so that this can be used directly in taxonobservation_/create.sql with a USING join
- 01:52 AM Revision 11520: inputs/VegBank/taxonobservation_/create.sql: join taxonobservation to taxoninterpretation (as in CVS) instead of vice versa, since taxonobservation is the primary, operative table. having VegBank and CVS do things the same way helps ensure that fixes in one can transfer easily to the other.
- 01:52 AM Revision 11519: bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
- 01:30 AM Revision 11518: inputs/VegBank/^taxon_observation.**.sample/create.sql: synced with taxon_observation.**
- 01:22 AM Revision 11517: (for r11396) fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
- 12:50 AM Revision 11516: /README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
- 12:19 AM Revision 11515: /README.TXT: added instructions for connecting to vegbiendev
10/30/2013
- 11:03 PM Revision 11514: mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
- 09:49 PM Revision 11513: mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
- 09:46 PM Revision 11512: bugfix: mappings/VegCore-VegBIEN.csv: main taxondetermination: use [!isoriginal=true] instead of [!isoriginal] so that adding a manual isoriginal field does not prevent this selector from matching
- 09:07 PM Revision 11511: inputs/VegBank/taxonobservation_/map.csv: originalinterpretation, currentinterpretation: removed table name prefix so these would automap
- 09:06 PM Revision 11510: mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
- 09:02 PM Revision 11509: mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
- 08:07 PM Revision 11508: planning/timeline/timeline.2013.xls: geoscrubbing automated pipeline: split into subtasks "build pipeline", "test pipeline", and "integrate pipeline into import process"
- 08:04 PM Revision 11507: planning/timeline/timeline.2013.xls: geoscrubbing re-run: moved recent checkmarks to "geoscrubbing automated pipeline" since the work on these actually relates to *automating* the geoscrubbing, not the one-time reload (which was already completed)
- 08:02 PM Revision 11506: planning/timeline/timeline.2013.xls: geoscrubbing: made "geoscrubbing re-run" a subtask of the main geoscrubbing task, instead of geoscrubbing re-run being the supertask. updated for Paul's progress.
- 07:23 PM Revision 11505: schemas/vegbien.sql: taxondetermination_set_iscurrent(): include new iscurrent__verbatim, so that taxondeterminations the datasource marks as current are always considered first. this currently applies to VegBank and CVS.
- 07:17 PM Revision 11504: schemas/vegbien.sql: taxondetermination.isoriginal: made it nullable like iscurrent__verbatim, because this is populated from the datasource. taxondetermination_set_iscurrent() now supports isoriginal=NULL, so this is not a problem.
- 07:08 PM Revision 11503: schemas/vegbien.sql: taxondetermination.is_datasource_current: renamed to iscurrent__verbatim and made it nullable, so that this can be used to store the verbatim iscurrent status
- 07:04 PM Revision 11502: schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed setting of is_datasource_current (which is now the same as iscurrent), so that this can be used to store the verbatim iscurrent status
- 06:59 PM Revision 11501: schemas/vegbien.sql: taxondetermination_set_iscurrent(): isoriginal: make sure it is always either true or false, so that if the NOT NULL constraint on this is ever removed you don't end up with the incorrect sort order false, true, NULL (it should be false=NULL, true)
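  The sort-order concern, in isolation: booleans order false before true, and NULLs sort last by default, so a nullable isoriginal would order false, true, NULL instead of the intended false=NULL, true.
  ```sql
  -- default ordering of a nullable boolean: false, true, NULL (NULLs last)
  SELECT * FROM taxondetermination ORDER BY isoriginal;

  -- intended ordering, treating NULL like false: false/NULL first, then true
  SELECT * FROM taxondetermination ORDER BY COALESCE(isoriginal, false);
  ```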
- 06:42 PM Revision 11500: schemas/vegbien.sql: use plain taxondetermination.iscurrent instead of is_datasource_current since these are now the same
- 06:38 PM Revision 11499: schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current: set to the same value as iscurrent, since these now have the same formula
- 06:34 PM Revision 11498: schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed no longer used accepted, matched determinationtypes (for these determinations, left-join to TNRS.ScrubbedTaxon)
- 06:24 PM Revision 11497: Updated biengeo README with new script workflow.
- 06:24 PM Revision 11496: Split geovalidate.sh into install and update scripts.
  Split geovalidate.sh into install.sh and update_gadm_data.sh scripts.
  The install.sh script creates the database and u...
- 06:24 PM Revision 11495: Refactored geonames.sh to update_geonames_data.sh
  Renamed geonames.sh to update_geonames_data.sh and moved many of the SQL statements from the bash script into support...
- 06:24 PM Revision 11494: Split up geonames-to-gadm.sql into 3 scripts.
  Each script only operates on one table within a transaction.
  These scripts now assume the tables have already been cr...
- 06:24 PM Revision 11493: Added geoscrub.sh script.
  This script runs the load-geoscrub-input.sh, geonames.sql, and geovalidate.sql scripts in order to load and scrub veg...
- 06:03 PM Revision 11492: inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for: documented that most of the 13139 removed plots are from duplicates (where we have direct data). this leaves only 560 of SALVIAS's original 13699 plots.
- 05:53 PM Revision 11491: inputs/SALVIAS/projects/postprocess.sql: remove example data
- 05:48 PM Revision 11490: inputs/SALVIAS/projects/postprocess.sql: remove private data that should not be publicly visible (this was probably already removed by the plotMetadata.AccessCode filter in salvias_plots.~.clean_up.sql)
- 05:44 PM Revision 11489: inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for (Madidi, VegBank)
- 04:23 PM Revision 11488: bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters__from_fuzzing: need to convert km to m in the fuzzing radii. updated derived cols runtimes.
- 04:05 PM Revision 11487: inputs/VegBank/plot_/postprocess.sql: remove duplicated CVS plots (2323 of 7079 CVS plots are removed by this)
- 03:54 PM Revision 11486: planning/timeline/timeline.2013.xls: updated for progress
- 03:22 PM Revision 11485: added exports/2013-7-10.Naia.range_limiting_factors.csv.run
- 03:04 PM Revision 11484: bugfix: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: do not override the table to analytical_stem, because the extract-specific view should be used instead. this was actually benign, because extract.run export_() always sets $table to the extract-specific view.
- 02:57 PM Revision 11483: schemas/vegbien.sql: added 2013-7-10.Naia.range_limiting_factors
- 02:45 PM Revision 11482: schemas/vegbien.sql: sync_analytical_stem_to_view(): row_num: renamed to taxon_occurrence__pkey because previous taxon determinations have been removed, so each row is in fact a taxon_occurrence (~= VegCore.vegpath.org?ERD.taxon_occurrence)
- 02:20 PM Revision 11481: fix: schemas/vegbien.sql: analytical_stem_view: don't ORDER BY datasource, because this requires a slow full-table sort after the hash joins. (when selecting a subset of analytical_stem_view, nested loops are used automatically without needing an ORDER BY to force this.) to get the datasource-sorted order (plus a sort-order guarantee), you can still add a manual `ORDER BY datasource`, which will use a fast index scan on one of the datasource indexes.
- 01:58 PM Revision 11480: schemas/vegbien.sql: analytical_stem: added row_num, which can serve as the taxon_observation ID (DwC occurrenceID)
- 01:53 PM Revision 11479: Updated load-geoscrub script with configurable db.
  load-geoscrub-input.sh now uses a variable with the db name defined at the top of the script.
  Updated the default db ...
- 12:11 PM Revision 11478: schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
- 12:11 PM Revision 11477: schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
- 04:41 AM Revision 11476: schemas/vegbien.sql: analytical_stem_view: use plot.** to obtain plot-related fields, so that the same code does not need to be maintained in both analytical_stem_view and plot.**
- 04:32 AM Revision 11475: schemas/vegbien.sql: analytical_stem_view: moved specimen-specific fields to occurrence section
- 03:50 AM Revision 11474: schemas/vegbien.sql: analytical_stem_view, plot.**: added separate location__cultivated__bien
- 03:11 AM Revision 11473: schemas/vegbien.sql: added separate eventDate, in addition to dateCollected
- 02:59 AM Revision 11472: fix: schemas/vegbien.sql: dateCollected: use aggregateoccurrence.collectiondate *before* locationevent.obsstartdate rather than after, because this is more accurate. it was previously the other way around to allow dateCollected to be the pkey for the row's locationevent (for plots data).
- 02:38 AM Revision 11471: schemas/vegbien.sql: analytical_stem_view, plot.**: locationevent__pkey: moved to right before the locationevent-related fields
10/29/2013
- 06:53 PM Revision 11470: schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
- 06:52 PM Revision 11469: schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
- 06:46 PM Revision 11468: schemas/vegbien.sql: plot.**: added locationevent__pkey so that this view can be joined to other VegBIEN tables, which require the internal pkey
- 06:29 PM Revision 11467: derived/biengeo/README.txt: geoscrub new data: geovalidate.sql: added runtime from Paul
- 09:05 AM Revision 11466: schemas/vegbien.sql: sync_analytical_stem_to_view(): speciesBinomialWithMorphospecies index: documented runtime (1 h)
- 08:56 AM Revision 11465: schemas/vegbien.sql: plot.**: updated to use the same column formulas as analytical_stem_view
- 08:19 AM Revision 11464: planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: removed "globally-unique" because Naia is actually OK with this being numeric (i.e. unique within our DB)
- 08:19 AM Revision 11463: planning/timeline/timeline.2013.xls: updated for progress
- 07:46 AM Revision 11462: lib/runscripts/import_subset.run: $version: use new $extract_view, which is set to the same value that this was
- 07:45 AM Revision 11461: lib/runscripts/extract.run: use the extract-specific view instead of all of analytical_stem
- 07:42 AM Revision 11460: schemas/vegbien.sql: added 2013-10-18.Brian_Enquist.Canadensys view
- 06:51 AM Revision 11459: schemas/vegbien.sql: sync_analytical_stem_to_view(): added index on speciesBinomialWithMorphospecies for Brian Enquist's Canadensys request
- 06:19 AM Revision 11458: planning/timeline/timeline.2013.xls: updated for progress
- 04:16 AM Revision 11457: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: documented runtime (35 min, now that bugs have been fixed)
- 03:33 AM Revision 11456: bugfix: bin/with_all: @inputs default value: use `local`, so that the default value is only set for the current function and doesn't leak back out into the caller. this fixes a bug in subset imports where import_all's Source/import call to with_all would add the .* datasources, but these would then stay in for the import_scrub call, causing extra .* datasources to incorrectly be imported.
- 02:22 AM Revision 11455: planning/timeline/timeline.2013.xls: usability testing: added additional subtask to validate the scientists' extracts (i.e. check that the extract fulfills their request)
- 02:17 AM Revision 11454: planning/timeline/timeline.2013.xls: provide scientists with their requested data: added separate subtask for Brian Enquist's Canadensys extract
- 02:12 AM Revision 11453: planning/timeline/timeline.2013.xls: updated for progress and revised schedule
- 01:53 AM Revision 11452: bugfix: schemas/pg_hba.Mac.conf: made same change for Mac as was made for Linux in r11451
- 01:22 AM Revision 11451: bugfix: schemas/pg_hba.conf: don't allow ident authentication for Unix socket connections, because this apparently prevents having normal, password-based connections ("md5"). note that just switching the order of the ident and md5 entries is not useful, because whichever authentication type comes second will be ignored completely. this problem was previously worked around by just not using Unix socket connections at all, and always specifying "localhost" as the host to force a hostname-based connection. this does not affect the postgres superuser, because they have their own ident line in pg_hba.conf.
10/25/2013
- 06:15 PM Revision 11450: Added db user and host to load-geoscrub-input.sh
  The psql commands in load-geoscrub-input.sh will now connect with a specific user on a specific host.
  Updated the 'CO...
- 04:51 PM Revision 11449: derived/biengeo/README.txt: geoscrub new data: steps that use .sql scripts: added the psql commands to run these
- 04:22 PM Revision 11448: Updated install instructions in the README.
- 03:00 PM Revision 11447: derived/biengeo/README.txt: geoscrub new data: noted that this now deletes any previous geoscrubbing results
- 02:58 PM Revision 11446: derived/biengeo/README.txt: added steps to set the working dir for each set of steps
- 02:54 PM Revision 11445: derived/biengeo/README.txt: added section on obtaining source code, including path to Paul's in-progress files on vegbiendev (not sure whether the in-progress files are needed to run the core scripts in steps 1-6)
- 02:44 PM Revision 11444: derived/biengeo/README.txt: moved commands to run to the top of the README. flagged commands-sections with ***** and an identifying label.
- 02:04 PM Revision 11443: Initial checkin of geoscrub install SQL files.
  Added install.*.sql files that will do initial table creation for all required tables.
  Added a truncate.vegbien_geosc...
- 02:04 PM Revision 11442: Update load-geoscrub-input.sh to download from URL.
  Removed logic to dump input data directly from the vegbien database and to download the input from a URL provided by ...
- 11:56 AM Revision 11441: planning/timeline/timeline.2013.xls: reload core & analytical database scheduled for this week: postponed to give us additional time to do datasource validations
- 09:58 AM Revision 11440: inputs/input.Makefile: added %/import_temp alias for %/import, to mirror the presence of import_temp for import
- 09:24 AM Revision 11439: fix: inputs/VegBank/taxonobservation_/map.csv: remapped authorplantname to OMIT because these are not specific to the taxoninterpretation row (this is in a separate taxoninterpretation for the original determination instead). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
- 09:22 AM Revision 11438: fix: inputs/VegBank/taxonobservation_/map.csv: remapped int_* to OMIT because these are not specific to the taxoninterpretation row (this is in a separate taxoninterpretation for the original determination instead). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
10/24/2013
- 07:09 PM Revision 11437: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: inherit from new import_subset.run (which uses extract.run)
- 07:08 PM Revision 11436: added lib/runscripts/import_subset.run, extract.run
- 05:21 PM Revision 11435: added exports/2013-10-18.Brian_Enquist.Canadensys.csv.run
- 05:07 PM Revision 11434: bin/make_analytical_db: removed no longer needed setting of $schema to $public, because this is now done by psql()
- 05:06 PM Revision 11433: lib/sh/local.sh: psql(): also accept $public as the $schema param, since this is used by a lot of import scripts
- 04:24 PM Revision 11432: lib/sh/util.sh: added require_dot_script()
- 04:13 PM Revision 11431: bugfix: lib/sh/util.sh: $top_script: use @BASH_SOURCE instead of $0, because this is also defined for .-scripts
- 04:03 PM Revision 11430: bugfix: bin/import_all: restore the working dir when main() is done, in case it started as something other than the root dir
- 03:49 PM Revision 11429: bin/after_import: support turning off the end-of-import backup for imports that are not the full database
- 03:26 PM Revision 11428: bugfix: lib/runscripts/util.run: `trap on_exit EXIT`: only set this if the script is not a dot script, because if it is a dot script, on_exit() will not be invoked until the calling shell exits, which may be much later than when the script is run. previously, this was handled by canceling the EXIT trap if on_exit() is run manually, but this would not work correctly if a load-time error prevented on_exit() from running and canceling the trap.
- 03:21 PM Revision 11427: bugfix: lib/runscripts/util.run: if is_dot_script, fix $@ when no args causes this to incorrectly contain the script name. use is_dot_script rather than the presence of $@ args to decide whether to use @BASH_ARGV, because @BASH_ARGV is actually wrong when run as a .-script (it contains the script name).
- 03:17 PM Revision 11426: bugfix: lib/sh/util.sh: is_dot_script(): need to subtract 1 from ${#BASH_LINENO[@]}, because this is the array length rather than the index of the last element as in Perl
- 02:58 PM Revision 11425: lib/sh/util.sh: added is_dot_script()
- 01:15 PM Revision 11424: bugfix: schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current (used by analytical_stem_view): need to separately check if `determinationtype IS NULL`, because `determinationtype NOT IN (accepted, matched)` will return NULL (treated as false) if determinationtype is NULL, causing no match
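  The three-valued-logic pitfall in isolation (a generic sketch, not the actual taxondetermination_set_iscurrent() body):
  ```sql
  -- NOT IN yields NULL, not true, when the tested value is NULL:
  SELECT (NULL::text) NOT IN ('accepted', 'matched');  -- => NULL, treated as no match

  -- so the NULL case has to be checked separately:
  SELECT determinationtype IS NULL
      OR determinationtype NOT IN ('accepted', 'matched')
  FROM taxondetermination;
  ```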
- 01:11 PM Revision 11423: bugfix: bin/make_analytical_db: when running into a public schema other than "public", also pass this to `/run export_` (which currently uses $schema instead of $public)
- 01:10 PM Revision 11422: bugfix: bin/import_all: fix $@ when .-included without args (which causes bash to put the wrong values in $@ instead of leaving it empty)
- 01:09 PM Revision 11421: bin/import_all: `make schemas/$version/install`: reinstall instead to allow re-running the import to the same custom schema (e.g. 2013-10-18.Brian_Enquist.Canadensys)
- 01:07 PM Revision 11420: bin/import_all: `make schemas/$version/install`: ignore errors if schema exists, to support running with -e