/Makefile: postgres-Linux: also install postgresql-#-postgis-scripts, which is used by derived/biengeo/
bugfix: schemas/vegbien.sql: plantobservation_aggregateoccurrence_count_1(): only default aggregateoccurrence.count to 1 for specimens data, because plots data may have any number of individuals in a taxon_presence record that has no explicit individual_count
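
    A minimal sketch of the kind of guard this describes, assuming a trigger-style default and a hypothetical is_specimen flag (the committed trigger in vegbien.sql may distinguish specimens from plots data differently):

        -- sketch only, not the committed definition of this trigger function
        CREATE OR REPLACE FUNCTION plantobservation_aggregateoccurrence_count_1()
          RETURNS trigger LANGUAGE plpgsql AS $$
        BEGIN
            -- is_specimen is an assumed column standing in for however the
            -- schema actually distinguishes specimens data from plots data
            IF new.count IS NULL AND new.is_specimen THEN
                new.count := 1; -- a specimen record is exactly one individual
            END IF;
            RETURN new;
        END
        $$;
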
schemas/*.sql: updated for PostgreSQL 9.3. this reorders some functions, adds empty comment headers for omitted SEQUENCE SET commands, and (best of all) finally splits view columns onto multiple lines, so that changes in the columns are actually legible (and produce their own svn diff!)
planning/timeline/timeline.2013.xls: added tasks "create high-level workflow diagram" and "load BIEN2 exports directly from raw data", as requested by Martha
bugfix: lib/Firefox_bookmarks.reformat.csv: remove empty <DD> tags (which Firefox now adds for all bookmarks) so they don't create a blank space on the page
bugfix: lib/Firefox_bookmarks.reformat.csv: don't prepend "page's description:" to empty <DD> tags, which Firefox now adds for all bookmarks, even if they don't have a description
web/links/index.htm: updated to Firefox bookmarks. added instructions for upgrading PostgreSQL to 9.3, and some GBIF links.
Makefile, schemas/.Mac.conf: upgraded to PostgreSQL 9.3, which is needed for proper exception parsing in the auto-re-create-views functionality. this also removes the Mac 10.8 Mountain Lion quirks, such as renaming the postgres user to _postgres (which messed everything up, but is now back to normal).
/Makefile: postgres-Linux: added steps to install PostgreSQL 9.3, which is needed for proper exception parsing in the auto-re-create-views functionality
schemas/util.sql: added save_drop_views()
schemas/util.sql: added is_empty(anyarray)
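
    Presumably a small readability helper; a sketch of the obvious definition (the committed version may treat NULL arrays differently):

        -- sketch only: array_length() returns NULL for a zero-element array,
        -- so this reports both '{}' and NULL as empty
        CREATE OR REPLACE FUNCTION util.is_empty(arr anyarray)
          RETURNS boolean
          IMMUTABLE LANGUAGE sql AS
        $$ SELECT array_length($1, 1) IS NULL $$;
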
added inputs/GBIF/_src/0001000-131106143450413.zip.md5, GBIFPortalDB-2013-09-10.dump.gz.md5
schemas/util.sql: added regexp_matches_group()
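
    A sketch of what such a helper presumably wraps (the argument names and the default group number are assumptions, not the committed signature):

        -- sketch only: return capture group group_ of every match of re in str
        CREATE OR REPLACE FUNCTION util.regexp_matches_group(str text, re text, group_ integer DEFAULT 1)
          RETURNS SETOF text
          IMMUTABLE LANGUAGE sql AS
        $$ SELECT (regexp_matches($1, $2, 'g'))[$3] $$;
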
schemas/util.sql: show_create_view(): also include GRANT statements, which are necessary to fully re-create the view
schemas/util.sql: added show_grants_for(table_ regclass), for use by show_create_view()
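
    A hedged sketch of the idea behind show_grants_for(): rebuild GRANT statements from the catalog so a dropped-and-re-created view keeps its privileges (the committed definition may read the ACLs differently):

        -- sketch only, not the committed definition
        CREATE OR REPLACE FUNCTION util.show_grants_for(table_ regclass)
          RETURNS text
          STABLE LANGUAGE sql AS
        $$
        SELECT string_agg('GRANT '||privilege_type||' ON '||$1||' TO '
            ||quote_ident(grantee)||';', E'\n')
        FROM information_schema.role_table_grants
        WHERE (quote_ident(table_schema)||'.'||quote_ident(table_name))::regclass = $1
        $$;
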
inputs/GBIF/_src/GBIFPortalDB-2013-09-10.dump.gz.url: documented download time (5.5 h for an 18 GB file)
inputs/GBIF/_src/0001000-131106143450413.zip.url: documented download time (only 2 h for an 18 GB file)
schemas/util.sql: added save_drop_view()
schemas/util.sql: added show_create_view()
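
    These functions support the auto-re-create-views workflow (see the PostgreSQL 9.3 entries above): capture a view as CREATE VIEW SQL, drop it, and run the captured SQL afterwards to restore it. A conceptual sketch, not the committed definitions:

        -- sketch only; the real show_create_view() also appends the GRANT
        -- statements (see show_grants_for() above), since DROP VIEW loses them
        CREATE OR REPLACE FUNCTION util.save_drop_view(view_ text)
          RETURNS text LANGUAGE plpgsql AS $$
        DECLARE
            create_sql text;
        BEGIN
            create_sql := 'CREATE VIEW '||view_||' AS '
                ||pg_get_viewdef(view_::regclass, true);
            EXECUTE 'DROP VIEW '||view_;
            RETURN create_sql;
        END
        $$;
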
added inputs/GBIF/_src/0001000-131106143450413.zip.url (DwC-A export), GBIFPortalDB-2013-09-10.dump.gz.url (raw data), portal_26_feb_2013.war.url (raw data portal)
web/.htaccess: mod_autoindex: show .* files, which are normally hidden, because these are important parts of our codebase. (the leading . is not used for access controls.) .svn folders will remain hidden to avoid clutter.
inputs/GBIF/: added LOA files: _src/use_conditions/LetterOfAgreement_template.doc, BIEN LoA agreement annex.docx
inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtime (40 min)
web/links/index.htm: updated to Firefox bookmarks. added links related to PostgreSQL plain-text pkeys and the GBIF data use agreement (which is apparently much less restrictive than the LoA we signed, and would even allow the data to be public). vegetation data: placed links into subfolders by datasource.
bugfix: schemas/vegbien.sql: scrubbed_morphospecies_binomial: only append the morphospecies suffix if there is not a scrubbed specific epithet
bugfix: schemas/vegbien.sql: scrubbed_morphospecies_binomial: only populate this from the component ranks; do not put a full taxon name in here if it would otherwise be NULL
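
    An illustrative, runnable CASE expression for the behavior described in these two entries (column names are simplified; the expression in vegbien.sql differs in detail):

        -- the morphospecies suffix is used only when there is no scrubbed
        -- specific epithet, and nothing is emitted without component ranks
        SELECT genus, epithet, morphospecies,
            CASE
                WHEN epithet IS NOT NULL THEN genus||' '||epithet
                WHEN genus   IS NOT NULL THEN genus||' '||morphospecies
            END AS scrubbed_morphospecies_binomial
        FROM (VALUES
              ('Poa', 'annua', 'sp.1')
            , ('Poa', NULL   , 'sp.1')
            , (NULL , NULL   , 'sp.1')
        ) AS t (genus, epithet, morphospecies);
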
inputs/.TNRS/schema.sql: tnrs: removed no longer used Accepted_scientific_name. use scrubbed_unique_taxon_name instead.
inputs/.TNRS/schema.sql: MatchedTaxon, etc.: removed no longer used acceptedScientificName (from tnrs.Accepted_scientific_name). use scrubbed_unique_taxon_name instead.
inputs/.TNRS/schema.sql: removed no longer used AcceptedTaxon. use taxon_scrub.scrubbed_unique_taxon_name.* instead.
bugfix: schemas/vegbien.sql: tnrs_input_name: MatchedTaxon self-join: must use a NOT NULL column for a proper anti-join. this unfortunately requires the more verbose LEFT JOIN ON syntax (which allows using the pkey as the NOT NULL column) instead of NATURAL LEFT JOIN (which would require using one of the other columns, all of which are nullable)
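
    A self-contained illustration of the anti-join point (table and column names here are placeholders, not the actual tnrs_input_name definition):

        -- with NATURAL LEFT JOIN we could only test nullable columns for NULL,
        -- which cannot distinguish "no matching row" from "matching row whose
        -- column happens to be NULL"; LEFT JOIN ... ON lets us test the pkey
        WITH submitted (name) AS (VALUES ('Poa annua'), ('Quercus alba'))
           , matched (match_pkey, name) AS (VALUES (1, 'Poa annua'))
        SELECT s.name
        FROM submitted s
        LEFT JOIN matched m ON m.name = s.name
        WHERE m.match_pkey IS NULL; -- NOT NULL pkey => NULL only when unmatched
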
schemas/vegbien.sql: tnrs_input_name: use plain UNION, which automatically removes duplicates, rather than UNION ALL with a manual EXCEPT to remove rows already present in the first SELECT
schemas/vegbien.sql: tnrs_input_name: updated to use taxon_scrub.scrubbed_unique_taxon_name.*, to avoid further dependencies on AcceptedTaxon
inputs/.TNRS/schema.sql: removed no longer used ScrubbedTaxon. use taxon_scrub instead.
schemas/vegbien.sql: taxon_trait_view: updated to use new taxon_scrub
schemas/vegbien.sql: analytical_stem_view: updated to use new taxon_scrub. this avoids the need to manually COALESCE every accepted* and matched* field, and makes the formulas much clearer
inputs/.TNRS/schema.sql: added taxon_scrub, which combines ValidMatchedTaxon with scrubbed_unique_taxon_name.* instead of AcceptedTaxon
inputs/.TNRS/schema.sql: ValidMatchedTaxon: synced to MatchedTaxon
fix: inputs/.TNRS/schema.sql: scrubbed_taxon_name_with_author: renamed to scrubbed_unique_taxon_name because this also contains the family, and is therefore different from just the taxon name with author
inputs/.TNRS/schema.sql: MatchedTaxon: added scrubbed_taxon_name_with_author
inputs/.TNRS/schema.sql: tnrs: removed Is_homonym, since this did not take into account the never_homonym status (when the author disambiguates) or the ability of a non-homonym at a lower rank to override a homonym at a higher rank. taking these into account just produces the value of is_valid_match.
inputs/.TNRS/schema.sql: tnrs: removed Is_plant, since this functionality is now provided by is_valid_match. note that whether a name is a plant is not meaningful for TNRS, because it can match only plant names (thus a "non-plant" is actually a non-match).
inputs/.TNRS/schema.sql: tnrs: added scrubbed_taxon_name_with_author derived column, which uses the matched name when an accepted name is not available
inputs/.TNRS/schema.sql: tnrs: removed no longer used Max_score. use is_valid_match to determine validity instead.
bugfix: lib/runscripts/file.pg.sql.run: export_(): exclude Source and related tables so that these will be re-created by the staging tables installation instead, ensuring that they are always in sync with the Source/ subdir
inputs/.TNRS/data.sql: updated for new derived columns
bugfix: schemas/vegbien.sql: analytical_stem_view: scrubbed_taxon_name_no_author, scrubbed_author: need to COALESCE these to the matched* when no accepted* is available
schemas/vegbien.sql: analytical_stem_view, etc.: renamed scrubbed fields with the scrubbed_* prefix, to clearly distinguish these from the equivalent fields for other taxon names
bugfix: schemas/vegbien.sql: analytical_stem_view: family, genus: need to COALESCE these to the matched* when no accepted* is available
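
    The COALESCE-to-matched* pattern referred to in the two bugfix entries above, in runnable form (field names are illustrative):

        -- when TNRS returns a match but no accepted name, fall back to the
        -- matched name so the scrubbed_* fields are still populated
        SELECT COALESCE(accepted_family, matched_family) AS scrubbed_family
             , COALESCE(accepted_genus , matched_genus ) AS scrubbed_genus
        FROM (VALUES
              ('Poaceae', 'Poa', 'Poaceae' , 'Poa' )
            , (NULL     , NULL , 'Fabaceae', 'Inga')
        ) AS t (accepted_family, accepted_genus, matched_family, matched_genus);
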
backups/TNRS.backup.md5: updated
inputs/.TNRS/schema.sql: removed no longer used score_ok(). use tnrs.Is_plant instead. (the threshold is still documented in tnrs_populate_fields().)
inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: don't consider Max_score because Is_plant will always be false when the Max_score is insufficient (<0.8)
inputs/.TNRS/schema.sql: schema comment: added steps to remake schema.sql and back up the new TNRS schema. documented that these steps should be run on vegbiendev.
inputs/.TNRS/schema.sql: schema comment: added steps to determine what changes need to be made on vegbiendev
inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtimes (~same)
inputs/.TNRS/schema.sql: tnrs: moved the instructions for applying schema changes on vegbiendev from the tnrs table comment to the TNRS schema comment, because they apply to all elements in the TNRS schema, not just the tnrs table
inputs/.TNRS/schema.sql: score_ok(): don't make it STRICT because this prevents it from being inlined
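
    Background on the inlining point: the planner generally will not inline a STRICT SQL-language function into the calling query, so STRICT would turn this trivial comparison into a per-row function call. A sketch of the shape involved (the 0.8 threshold is the one documented in tnrs_populate_fields()):

        -- left non-STRICT so the comparison can be inlined into callers; for a
        -- NULL score it yields NULL, which a WHERE clause treats like false
        CREATE OR REPLACE FUNCTION score_ok(score double precision)
          RETURNS boolean
          IMMUTABLE LANGUAGE sql AS
        $$ SELECT $1 >= 0.8 $$;
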
inputs/.TNRS/schema.sql: tnrs: removed no longer used tnrs_score_ok index. use tnrs__valid_match instead.
bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: documented that this excludes homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is ambiguous and therefore not meaningful)
bugfix: inputs/.TNRS/schema.sql: ValidMatchedTaxon: exclude inter-kingdom homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is ambiguous and therefore not meaningful). this uses taxon_scrub__is_valid_match instead of score_ok() to determine whether the result should be included.
inputs/.TNRS/schema.sql: MatchedTaxon: added is_valid_match
inputs/.TNRS/schema.sql: tnrs: added tnrs__valid_match index to facilitate joining to only valid matches
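
    One plausible form for such an index: a partial index, so that joins which filter on is_valid_match scan only the valid rows (the indexed column is an assumption; the committed definition may differ):

        -- sketch only
        CREATE INDEX tnrs__valid_match ON tnrs ("Name_submitted")
            WHERE is_valid_match;
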
inputs/.TNRS/schema.sql: tnrs: added is_valid_match derived column, to make it easier to select from only those TNRS results that can safely be used as a scrubbed name
lib/sh/util.sh: already_exists_msg(): added instructions on how to force-remake when the file already exists (prepend `rm=1` to the command)
inputs/VegBank/^taxon_observation.**.sample/test.xml.ref: updated inserted row count, now that CVS plots have been removed
bugfix: lib/runscripts/view.run: don't do anything in load_data(), to avoid trying to remake header.csv before the view is created. (for views, this instead happens in postprocess().)
lib/runscripts/table.run: reordered functions in the order they are called by import()
bugfix: inputs/VegBank/: need to remove inter-datasource duplicates from plot instead of the left-joined plot_ table, because the fkeys needed to do the cascading deletes are all to the plot table. this requires doing the column-renaming and postprocessing on plot before it's left-joined.
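
    A minimal demonstration of why the delete has to target plot itself: the ON DELETE CASCADE fkeys reference plot, so the same delete run against a separate plot_ table would not trigger any cascades (all names and the duplicate-detection condition here are illustrative):

        CREATE TEMP TABLE plot (
            plot_id   integer PRIMARY KEY,
            source    text NOT NULL,
            plot_code text NOT NULL
        );
        CREATE TEMP TABLE taxon_observation (
            obs_id  serial PRIMARY KEY,
            plot_id integer NOT NULL REFERENCES plot ON DELETE CASCADE
        );
        -- removing the duplicate copy of a plot cascades to its observations;
        -- deleting from a derived plot_ table would not touch these fkeys
        DELETE FROM plot p
        USING plot keeper
        WHERE p.plot_code = keeper.plot_code -- same plot in two datasources
          AND p.source <> keeper.source
          AND p.source = 'CVS';              -- keep only one datasource's copy
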
inputs/VegBank/plot_/create.sql: updated runtime (5 s) for previous bugfix
exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime and rowcount (~ the same)
bugfix: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: coordinateUncertaintyInMeters filter: assume true for rows with no coordinateUncertaintyInMeters
schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: filter by coordinateUncertaintyInMeters <= 10 km
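
    The filter from these two entries as a runnable condition, with NULL uncertainty passing per the bugfix above (vegbien.sql may phrase the condition differently):

        SELECT *
        FROM (VALUES (500::double precision), (25000), (NULL))
            AS t (coordinateuncertaintyinmeters)
        WHERE COALESCE(coordinateuncertaintyinmeters <= 10000, true);
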
planning/timeline/timeline.2013.xls: updated for progress
inputs/.geoscrub/geoscrub_output/run: load_data(): updated runtime (4 min)
bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: invoking derived/biengeo/geoscrub.sh: need to split the input file into separate dir and filename parts, because $DATAFILE is actually just the filename, not the entire path, and would otherwise get the default value of $DATADIR prepended to it
inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: also run geoscrub.sh. added export_() target to run just the export of the result table separately.
derived/biengeo/load-geoscrub-input.sh: allow the caller to override $DATAFILE in the environment, to use a file named other than "geoscrub-corpus.csv"
/run: use new exports/geoscrub_input.csv.run
added exports/geoscrub_input.csv.run
bugfix: lib/sh/make.sh: $remake: need to explicitly propagate this to invoked commands if it was set from $rm
derived/biengeo/load-geoscrub-input.sh: updated $DATA_URL for new input filename
/run: geoscrub_input/make(): include a header on the CSV file, so that the column names don't risk getting separated from the data (and to shorten the CSV filename, which previously had to contain the column names instead). this requires changing the geoscrubbing scripts to accept a CSV header.
exports/2013-7-10.Naia.range_limiting_factors.csv.run: added rowcount (40 million of 80 million observations, filtered w/ cultivated, geovalid, and various fields NOT NULL)
exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime
schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: don't sort the results by occurrence_id, because this is not a meaningful ordering and prevents incremental output from the query
schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: also filter out rows without species
exports/2013-7-10.Naia.range_limiting_factors.csv.run: export_(): documented runtime (10 min)
lib/sh/db.sh: mk_select(): usage: documented that this also takes a $limit/$n param
lib/sh/db.sh: limit(): also support using $n as the limit param, since this var name is used by other parts of the import process
added backups/vegbien.r11549.backup.md5
lib/sh/db.sh: limit(): usage: documented that this also needs a $limit param
lib/runscripts/extract.run: added export_sample()
/README.TXT: Full database import: after import: record the import times in inputs/import.stats.xls: documented that this should be run on the local machine, because it needs the Mac filename ordering
inputs/import.stats.xls: updated import times
/README.TXT: Full database import: after import: removed step to install analytical_stem on nimoy because the import mechanism is not set up to do this (we don't generate CSV exports of the full analytical_stem table because they take up a lot of space and are not currently used for anything)