Activity

From 10/15/2013 to 11/13/2013

11/13/2013

08:35 PM Revision 11652: schemas/util.sql: added save_drop_view()
Aaron Marcuse-Kubitza
08:33 PM Revision 11651: schemas/util.sql: added show_create_view()
Aaron Marcuse-Kubitza
07:14 PM Revision 11650: added inputs/GBIF/_src/0001000-131106143450413.zip.url (DwC-A export), GBIFPortalDB-2013-09-10.dump.gz.url (raw data), portal_26_feb_2013.war.url (raw data portal)
Aaron Marcuse-Kubitza
04:50 PM Revision 11649: web/.htaccess: mod_autoindex: show .* files which are normally hidden, because these are important parts of our codebase. (the leading . is not used for access controls.) .svn folders will remain hidden to avoid clutter.
Aaron Marcuse-Kubitza
04:16 PM Revision 11648: inputs/GBIF/: added LOA files: _src/use_conditions/LetterOfAgreement_template.doc, BIEN LoA agreement annex.docx
Aaron Marcuse-Kubitza
02:48 AM Revision 11647: inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtime (40 min)
Aaron Marcuse-Kubitza
01:07 AM Revision 11646: web/links/index.htm: updated to Firefox bookmarks. added links related to PostgreSQL plain-text pkeys and the GBIF data use agreement (which is apparently much less restrictive than the LoA we signed, and would even allow the data to be public). vegetation data: placed links into subfolders by datasource.
Aaron Marcuse-Kubitza

11/10/2013

07:09 PM Revision 11645: bugfix: schemas/vegbien.sql: scrubbed_morphospecies_binomial: only append the morphospecies suffix if there is not a scrubbed specific epithet
Aaron Marcuse-Kubitza
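
The rule in r11645 can be sketched as a small function. This is a hypothetical reimplementation for illustration only (the actual column is computed in schemas/vegbien.sql, and the argument names are assumptions): the morphospecies suffix is used only when no scrubbed specific epithet exists.

```python
def scrubbed_morphospecies_binomial(genus, specific_epithet, morphospecies):
    """Hypothetical sketch of the r11645 rule: prefer the real specific
    epithet; fall back to the morphospecies suffix only when the epithet
    is missing; never fabricate a binomial without a genus."""
    if not genus:
        return None
    if specific_epithet:  # a scrubbed epithet exists: do not append the suffix
        return f"{genus} {specific_epithet}"
    if morphospecies:     # no epithet: the morphospecies suffix stands in
        return f"{genus} {morphospecies}"
    return genus
```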
07:08 PM Revision 11644: bugfix: schemas/vegbien.sql: scrubbed_morphospecies_binomial: only populate this from the component ranks; do not put a full taxon name in here if it would otherwise be NULL
Aaron Marcuse-Kubitza
07:02 PM Revision 11643: inputs/.TNRS/schema.sql: tnrs: removed no longer used Accepted_scientific_name. use scrubbed_unique_taxon_name instead.
Aaron Marcuse-Kubitza
07:00 PM Revision 11642: inputs/.TNRS/schema.sql: MatchedTaxon, etc.: removed no longer used acceptedScientificName (from tnrs.Accepted_scientific_name). use scrubbed_unique_taxon_name instead.
Aaron Marcuse-Kubitza
06:43 PM Revision 11641: inputs/.TNRS/schema.sql: removed no longer used AcceptedTaxon. use taxon_scrub.scrubbed_unique_taxon_name.* instead.
Aaron Marcuse-Kubitza
06:38 PM Revision 11640: bugfix: schemas/vegbien.sql: tnrs_input_name: MatchedTaxon self-join: must use a NOT NULL column for a proper anti-join. this unfortunately requires the more verbose LEFT JOIN ON syntax (which allows using the pkey as the NOT NULL column) instead of NATURAL LEFT JOIN (which requires using another column, which are all nullable)
Aaron Marcuse-Kubitza
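
The constraint behind r11640 can be demonstrated with a toy SQLite session (table and column names are simplified stand-ins, not the actual VegBIEN schema): an anti-join must test a column that is NOT NULL in the joined table, such as its pkey, because testing a nullable column cannot distinguish "no matching row" from "matching row whose column happens to be NULL". That forces the LEFT JOIN ... ON syntax, since NATURAL LEFT JOIN hides the join columns and leaves only nullable ones to test.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE input_name (name TEXT PRIMARY KEY);
    CREATE TABLE matched    (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO input_name VALUES ('Poa annua'), ('Quercus alba');
    INSERT INTO matched    VALUES (1, 'Poa annua');
""")
# anti-join: names with no match yet, detected via the joined table's
# NOT NULL pkey -- NULL here can only mean "no matching row existed"
rows = conn.execute("""
    SELECT i.name
    FROM input_name i
    LEFT JOIN matched m ON m.name = i.name
    WHERE m.id IS NULL
""").fetchall()
print(rows)  # [('Quercus alba',)]
```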
06:34 PM Revision 11639: schemas/vegbien.sql: tnrs_input_name: use plain UNION, which automatically removes duplicates, rather than UNION ALL with a manual EXCEPT-removal of rows in the first SELECT
Aaron Marcuse-Kubitza
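
The r11639 simplification can be illustrated with a toy SQLite example (hypothetical tables): plain UNION removes duplicates automatically, so it replaces the more verbose form that EXCEPT-removes the second SELECT's rows from the first and then appends them with UNION ALL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (name TEXT);
    CREATE TABLE b (name TEXT);
    INSERT INTO a VALUES ('x'), ('y');
    INSERT INTO b VALUES ('y'), ('z');
""")
# plain UNION: duplicates removed automatically
plain = conn.execute(
    "SELECT name FROM a UNION SELECT name FROM b ORDER BY name"
).fetchall()
# verbose equivalent: manually EXCEPT-remove b's rows from a, then UNION ALL.
# SQLite evaluates compound operators left to right, so this reads as
# (a EXCEPT b) UNION ALL b.
verbose = conn.execute(
    "SELECT name FROM a EXCEPT SELECT name FROM b "
    "UNION ALL SELECT name FROM b ORDER BY name"
).fetchall()
print(plain)  # [('x',), ('y',), ('z',)]
```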
06:14 PM Revision 11638: schemas/vegbien.sql: tnrs_input_name: updated to use taxon_scrub.scrubbed_unique_taxon_name.*, to avoid further dependencies on AcceptedTaxon
Aaron Marcuse-Kubitza
05:55 PM Revision 11637: inputs/.TNRS/schema.sql: removed no longer used ScrubbedTaxon. use taxon_scrub instead.
Aaron Marcuse-Kubitza
05:54 PM Revision 11636: schemas/vegbien.sql: taxon_trait_view: updated to use new taxon_scrub
Aaron Marcuse-Kubitza
05:51 PM Revision 11635: schemas/vegbien.sql: analytical_stem_view: updated to use new taxon_scrub. this avoids the need to manually COALESCE() every accepted* and matched* field, and makes the formulas much clearer
Aaron Marcuse-Kubitza
04:11 PM Revision 11634: inputs/.TNRS/schema.sql: added taxon_scrub, which combines ValidMatchedTaxon with scrubbed_unique_taxon_name.* instead of AcceptedTaxon
Aaron Marcuse-Kubitza
03:38 PM Revision 11633: inputs/.TNRS/schema.sql: ValidMatchedTaxon: synced to MatchedTaxon
Aaron Marcuse-Kubitza
03:22 PM Revision 11632: fix: inputs/.TNRS/schema.sql: scrubbed_taxon_name_with_author: renamed to scrubbed_unique_taxon_name because this also contains the family, and is therefore different from just the taxon name with author
Aaron Marcuse-Kubitza
01:50 PM Revision 11631: inputs/.TNRS/schema.sql: MatchedTaxon: added scrubbed_taxon_name_with_author
Aaron Marcuse-Kubitza
01:23 PM Revision 11630: inputs/.TNRS/schema.sql: tnrs: removed Is_homonym, since this did not take into account the never_homonym status (when the author disambiguates) or the ability of a non-homonym at a lower rank to override a homonym at a higher rank. taking these into account just produces the value of is_valid_match.
Aaron Marcuse-Kubitza
01:19 PM Revision 11629: inputs/.TNRS/schema.sql: tnrs: removed Is_plant, since this functionality is now provided by is_valid_match. note that whether a name is a plant is not meaningful for TNRS, because it can match only plant names (thus a "non-plant" is actually a non-match).
Aaron Marcuse-Kubitza
01:06 PM Revision 11628: inputs/.TNRS/schema.sql: tnrs: added scrubbed_taxon_name_with_author derived column, which uses the matched name when an accepted name is not available
Aaron Marcuse-Kubitza
09:44 AM Revision 11627: inputs/.TNRS/schema.sql: tnrs: removed no longer used Max_score. use is_valid_match to determine validity instead.
Aaron Marcuse-Kubitza
12:09 AM Revision 11626: bugfix: lib/runscripts/file.pg.sql.run: export_(): exclude Source and related tables so that these will be re-created by the staging tables installation instead, ensuring that they are always in sync with the Source/ subdir
Aaron Marcuse-Kubitza
12:08 AM Revision 11625: inputs/.TNRS/data.sql: updated for new derived columns
Aaron Marcuse-Kubitza
12:04 AM Revision 11624: bugfix: lib/runscripts/file.pg.sql.run: export_(): exclude Source and related tables so that these will be re-created by the staging tables installation instead, ensuring that they are always in sync with the Source/ subdir
Aaron Marcuse-Kubitza

11/09/2013

10:22 PM Revision 11623: bugfix: schemas/vegbien.sql: analytical_stem_view: scrubbed_taxon_name_no_author, scrubbed_author: need to COALESCE() these to the matched* when no accepted* is available
Aaron Marcuse-Kubitza
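
The COALESCE() pattern from r11623/r11621 can be sketched with a toy SQLite example (column names and the sample synonymy are illustrative, not the real TNRS schema): when TNRS returns no accepted name, the scrubbed field falls back to the matched name rather than staying NULL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tnrs (matched_name TEXT, accepted_name TEXT);
    INSERT INTO tnrs VALUES
        ('Carex canescens', 'Carex curta'),  -- accepted (synonym-resolved) name available
        ('Poa annua',       NULL);           -- matched only; no accepted name
""")
rows = conn.execute(
    "SELECT COALESCE(accepted_name, matched_name) AS scrubbed_name FROM tnrs"
).fetchall()
print(rows)  # [('Carex curta',), ('Poa annua',)]
```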
10:02 PM Revision 11622: schemas/vegbien.sql: analytical_stem_view, etc.: renamed scrubbed fields with the scrubbed_* prefix, to clearly distinguish these from the equivalent fields for other taxon names
Aaron Marcuse-Kubitza
09:10 PM Revision 11621: bugfix: schemas/vegbien.sql: analytical_stem_view: family, genus: need to COALESCE() these to the matched* when no accepted* is available
Aaron Marcuse-Kubitza
06:04 PM Revision 11620: backups/TNRS.backup.md5: updated
Aaron Marcuse-Kubitza
04:47 PM Revision 11619: inputs/.TNRS/schema.sql: removed no longer used score_ok(). use tnrs.Is_plant instead. (the threshold is still documented in tnrs_populate_fields().)
Aaron Marcuse-Kubitza
04:45 PM Revision 11618: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: don't consider Max_score because Is_plant will always be false when the Max_score is insufficient (<0.8)
Aaron Marcuse-Kubitza
04:20 PM Revision 11617: inputs/.TNRS/schema.sql: schema comment: added steps to remake schema.sql and back up the new TNRS schema. documented that these steps should be run on vegbiendev.
Aaron Marcuse-Kubitza
04:16 PM Revision 11616: inputs/.TNRS/schema.sql: schema comment: added steps to determine what changes need to be made on vegbiendev
Aaron Marcuse-Kubitza
04:01 PM Revision 11615: inputs/.TNRS/schema.sql: tnrs_populate_fields(): regenerate the derived cols: updated runtimes (~same)
Aaron Marcuse-Kubitza
03:54 PM Revision 11614: inputs/.TNRS/schema.sql: tnrs: moved instructions to apply schema changes on vegbiendev to the TNRS schema, because this applies to all elements in the TNRS schema, not just the tnrs table
Aaron Marcuse-Kubitza
03:30 PM Revision 11613: inputs/.TNRS/schema.sql: score_ok(): don't make it STRICT because this prevents it from being inlined
Aaron Marcuse-Kubitza
03:24 PM Revision 11612: inputs/.TNRS/schema.sql: tnrs: removed no longer used tnrs_score_ok index. use tnrs__valid_match instead.
Aaron Marcuse-Kubitza
03:09 PM Revision 11611: bugfix: inputs/.TNRS/schema.sql: tnrs_populate_fields(): is_valid_match: documented that this excludes homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is not meaningful because it is not unambiguous)
Aaron Marcuse-Kubitza
03:07 PM Revision 11610: bugfix: inputs/.TNRS/schema.sql: ValidMatchedTaxon: exclude inter-kingdom homonyms because these are not valid matches (i.e. TNRS provides a name, but the name is not meaningful because it is not unambiguous). this uses taxon_scrub__is_valid_match instead of score_ok() to determine whether the result should be included.
Aaron Marcuse-Kubitza
02:56 PM Revision 11609: inputs/.TNRS/schema.sql: ValidMatchedTaxon: synced to MatchedTaxon
Aaron Marcuse-Kubitza
02:55 PM Revision 11608: inputs/.TNRS/schema.sql: MatchedTaxon: added is_valid_match
Aaron Marcuse-Kubitza
02:52 PM Revision 11607: inputs/.TNRS/schema.sql: tnrs: added tnrs__valid_match index to facilitate joining to only valid matches
Aaron Marcuse-Kubitza
02:48 PM Revision 11606: inputs/.TNRS/schema.sql: tnrs: added is_valid_match derived column, to make it easier to select from only those TNRS results that can safely be used as a scrubbed name
Aaron Marcuse-Kubitza
02:02 PM Revision 11605: lib/sh/util.sh: already_exists_msg(): added instructions on how to force-remake when the file already exists (prepend `rm=1` to the command)
Aaron Marcuse-Kubitza
02:20 AM Revision 11604: inputs/VegBank/^taxon_observation.**.sample/test.xml.ref: updated inserted row count, now that CVS plots have been removed
Aaron Marcuse-Kubitza

11/08/2013

10:57 PM Revision 11603: bugfix: lib/runscripts/view.run: don't do anything in load_data(), to avoid trying to remake header.csv before the view is created. (for views, this instead happens in postprocess().)
Aaron Marcuse-Kubitza
10:51 PM Revision 11602: lib/runscripts/table.run: reordered functions in the order they are called by import()
Aaron Marcuse-Kubitza
10:28 PM Revision 11601: bugfix: inputs/VegBank/: need to remove inter-datasource duplicates from plot instead of the left-joined plot_ table, because the fkeys needed to do the cascading deletes are all to the plot table. this requires doing the column-renaming and postprocessing on plot *before* it's left-joined.
Aaron Marcuse-Kubitza
09:57 PM Revision 11600: inputs/VegBank/plot_/create.sql: updated runtime (5 s) for previous bugfix
Aaron Marcuse-Kubitza
07:50 PM Revision 11599: exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime and rowcount (~ the same)
Aaron Marcuse-Kubitza
04:26 PM Revision 11598: bugfix: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: filter by coordinateUncertaintyInMeters: assume true for rows with no coordinateUncertaintyInMeters

Aaron Marcuse-Kubitza
03:43 PM Revision 11597: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: filter by coordinateUncertaintyInMeters <= 10 km
Aaron Marcuse-Kubitza

11/07/2013

04:41 PM Revision 11596: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
04:00 PM Revision 11595: inputs/.geoscrub/geoscrub_output/run: load_data(): updated runtime (4 min)
Aaron Marcuse-Kubitza
08:42 AM Revision 11594: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
08:34 AM Revision 11593: bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: invoking derived/biengeo/geoscrub.sh: need to split the input file into separate dir and filename parts, because $DATAFILE actually is just the filename, not the entire path, and will otherwise get prepended with the default value for $DATADIR
Aaron Marcuse-Kubitza

11/06/2013

04:57 PM Revision 11592: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: also run geoscrub.sh. added export_() target to run just the export of the result table separately.
Aaron Marcuse-Kubitza
04:39 PM Revision 11591: derived/biengeo/load-geoscrub-input.sh: allow the caller to override $DATAFILE in the environment, to use a file named other than "geoscrub-corpus.csv"
Aaron Marcuse-Kubitza
02:41 PM Revision 11590: /run: use new exports/geoscrub_input.csv.run
Aaron Marcuse-Kubitza
02:40 PM Revision 11589: added exports/geoscrub_input.csv.run
Aaron Marcuse-Kubitza
02:39 PM Revision 11588: bugfix: lib/sh/make.sh: $remake: need to explicitly propagate this to invoked commands if it was set from $rm
Aaron Marcuse-Kubitza
12:34 PM Revision 11587: derived/biengeo/load-geoscrub-input.sh: updated $DATA_URL for new input filename
Aaron Marcuse-Kubitza
12:27 PM Revision 11586: /run: geoscrub_input/make(): include a header on the CSV file, so that the column names don't risk getting separated from the data (and to shorten the CSV filename, which previously had to contain the column names instead). this requires changing the geoscrubbing scripts to accept a CSV header.
Aaron Marcuse-Kubitza
11:22 AM Revision 11585: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
10:14 AM Revision 11584: exports/2013-7-10.Naia.range_limiting_factors.csv.run: added rowcount (40 million of 80 million observations, filtered w/ cultivated, geovalid, and various fields NOT NULL)
Aaron Marcuse-Kubitza
01:46 AM Revision 11583: exports/2013-7-10.Naia.range_limiting_factors.csv.run: updated export_() runtime
Aaron Marcuse-Kubitza
01:30 AM Revision 11582: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: don't sort the results by occurrence_id, because this is not a meaningful ordering and prevents incremental output from the query
Aaron Marcuse-Kubitza
01:09 AM Revision 11581: schemas/vegbien.sql: 2013-7-10.Naia.range_limiting_factors: also filter out rows without species
Aaron Marcuse-Kubitza
12:59 AM Revision 11580: exports/2013-7-10.Naia.range_limiting_factors.csv.run: export_(): documented runtime (10 min)
Aaron Marcuse-Kubitza

11/05/2013

11:13 PM Revision 11579: lib/sh/db.sh: mk_select(): usage: documented that this also takes a $limit/$n param
Aaron Marcuse-Kubitza
11:12 PM Revision 11578: lib/sh/db.sh: limit(): also support using $n as the limit param, since this var name is used by other parts of the import process
Aaron Marcuse-Kubitza
11:08 PM Revision 11577: added backups/vegbien.r11549.backup.md5
Aaron Marcuse-Kubitza
11:07 PM Revision 11576: lib/sh/db.sh: limit(): usage: documented that this also needs a $limit param
Aaron Marcuse-Kubitza
11:06 PM Revision 11575: backups/TNRS.backup.md5: updated
Aaron Marcuse-Kubitza
10:47 PM Revision 11574: lib/runscripts/extract.run: added export_sample()
Aaron Marcuse-Kubitza
10:31 PM Revision 11573: /README.TXT: Full database import: after import: record the import times in inputs/import.stats.xls: documented that this should be run on the local machine, because it needs the Mac filename ordering
Aaron Marcuse-Kubitza
10:30 PM Revision 11572: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
10:19 PM Revision 11571: inputs/import.stats.xls: updated import times
Aaron Marcuse-Kubitza
08:54 PM Revision 11570: /README.TXT: Full database import: after import: removed step to install analytical_stem on nimoy because the import mechanism is not set up to do this (we don't generate CSV exports of the full analytical_stem table because they take up a lot of space and are not currently used for anything)
Aaron Marcuse-Kubitza
08:32 PM Revision 11569: /README.TXT: Full database import: after import: In PostgreSQL: added step to check that analytical_stem contains the expected # of rows
Aaron Marcuse-Kubitza
08:16 PM Revision 11568: /README.TXT: Full database import: after import: In PostgreSQL: added specific instructions for determining which/how many datasources are expected to be included in the provider_count and source tables
Aaron Marcuse-Kubitza
08:05 PM Revision 11567: added inputs/analytical_db/_archive/
Aaron Marcuse-Kubitza
07:46 PM Revision 11566: inputs/analytical_db/: removed import-related files (Source/, etc.), since this is actually just a folder used to store make_analytical_db.log.sql, so that it will be synced along with the other logs
Aaron Marcuse-Kubitza
07:43 PM Revision 11565: inputs/analytical_db/: added _no_import to prevent this from incorrectly being included in the source table
Aaron Marcuse-Kubitza
07:27 PM Revision 11564: inputs/input.Makefile: $(_svnFilesGlob): also svn-add _no_import in the top-level datasrc dir. (this requires using `add!`, because the presence of a _no_import file there will normally turn off adding by svnFilesGlob.)
Aaron Marcuse-Kubitza
11:49 AM Revision 11563: Added an output CSV file option to geoscrub.sh.
Paul Sarando

11/04/2013

03:25 PM Revision 11562: Added notes on running biengeo scripts to README.
Paul Sarando

10/31/2013

05:35 PM Revision 11561: Added biengeo script options for data directories.
Added GADM and geonames.org data dir options to
update_validation_data.sh scripts.
Added geoscrub input data dir opti...
Paul Sarando
05:35 PM Revision 11560: Added update options to biengeo update_validation_data.sh
Added options to update only GADM data, only Geonames.org data, or
neither. In every case, the geonames-to-gadm scrip...
Paul Sarando
05:35 PM Revision 11559: Added cmd-line options to biengeo bash scripts.
All biengeo bash scripts now accept command line options to specify psql
user, host, and database values.
These optio...
Paul Sarando
05:35 PM Revision 11558: Fix biengeo script password prompt for postgres user.
Changed the DB_HOST variables in the biengeo bash scripts to a
DB_HOST_OPT variable that is blank by default.
Updated...
Paul Sarando
05:35 PM Revision 11557: Fixed TRUNCATE statement in truncate.geonames.sql.
Fixed the biengeo truncate.geonames.sql script to include all tables in
one TRUNCATE statement that have foreign-key ...
Paul Sarando
05:35 PM Revision 11556: Added more approx. runtimes to biengeo README.
Paul Sarando
05:35 PM Revision 11555: Renamed biengeo install scripts to setup scripts.
It seems to make more sense to call these setup scripts, since they are
only setting up the database and tables, and ...
Paul Sarando
12:29 PM Revision 11554: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
12:24 PM Revision 11553: planning/timeline/timeline.2013.xls: datasource validations: CVS: left-join it: moved under "fix issues and critical feature requests" instead of "prepare 1st-round extracts" because the left-joining is actually part of getting it in the same format as VegBank
Aaron Marcuse-Kubitza
11:12 AM Revision 11552: inputs/CTFS/StemObservation/test.xml.ref: updated inserted row count
Aaron Marcuse-Kubitza
10:30 AM Revision 11551: planning/timeline/timeline.2013.xls: datasource validations: rescheduled CVS before other datasources, as decided in the conference call
Aaron Marcuse-Kubitza
10:27 AM Revision 11550: schemas/Makefile: $(confirmRmPublicSchema0): use "any ... schema" instead of "the ... schema" because the schema in question may not exist
Aaron Marcuse-Kubitza
08:53 AM Revision 11549: planning/timeline/timeline.2013.xls: datasource validations: rescheduled tasks for new order
Aaron Marcuse-Kubitza
08:42 AM Revision 11548: planning/timeline/timeline.2013.xls: datasource validations: reordered to put plots before specimens, as requested by Brad (wiki.vegpath.org/2013-10-25_conference_call#validation-order)
Aaron Marcuse-Kubitza
08:24 AM Revision 11547: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
08:21 AM Revision 11546: planning/timeline/timeline.2013.xls: hid previous weeks
Aaron Marcuse-Kubitza
08:20 AM Revision 11545: planning/timeline/timeline.2013.xls: crossed out and hid completed tasks
Aaron Marcuse-Kubitza
08:17 AM Revision 11544: fix: planning/timeline/timeline.2013.xls: datasource validations: prepare 2nd-round extracts: VegBank: corrected check mark week, based on date of extract
Aaron Marcuse-Kubitza
08:14 AM Revision 11543: planning/timeline/timeline.2013.xls: datasource validations: added "prepare 3rd-round extracts" subtask, which currently applies to VegBank. updated for progress.
Aaron Marcuse-Kubitza
08:08 AM Revision 11542: planning/timeline/timeline.2013.xls: "datasource validations (spot-checking)": renamed to just "datasource validations" because that's what we've been calling it
Aaron Marcuse-Kubitza
08:08 AM Revision 11541: planning/timeline/timeline.2013.xls: datasource validations: CVS: added "VegBank-related changes" subtask
Aaron Marcuse-Kubitza
08:05 AM Revision 11540: planning/timeline/timeline.2013.xls: updated for progress and revised schedule
Aaron Marcuse-Kubitza
07:51 AM Revision 11539: bugfix: inputs/VegBank/import_order.txt: updated name of ^taxon_observation.**.sample table
Aaron Marcuse-Kubitza
07:16 AM Revision 11538: fix: inputs/VegBank/^taxon_observation.**.sample/create.sql: moved continent before country
Aaron Marcuse-Kubitza
06:54 AM Revision 11537: inputs/VegBank/^taxon_observation.**.sample/create.sql: added missing columns that were recently mapped to VegBIEN (identifiedBy)
Aaron Marcuse-Kubitza
06:52 AM Revision 11536: inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
Aaron Marcuse-Kubitza
06:49 AM Revision 11535: inputs/VegBank/^taxon_observation.**.sample/create.sql: synced column order to analytical_plot
Aaron Marcuse-Kubitza
06:47 AM Revision 11534: inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the _join_words() of identifiedBy__first, etc.)
Aaron Marcuse-Kubitza
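
_join_words() is a project helper whose definition isn't shown in this log; the sketch below is an assumption about its behavior, consistent with how r11534 uses it: space-join the non-empty name parts (identifiedBy__first, etc.) into a single identifiedBy value, yielding NULL when every part is empty.

```python
def join_words(*words):
    """Hypothetical reimplementation of _join_words(): concatenate the
    non-empty parts with single spaces; return None if nothing remains."""
    joined = " ".join(w for w in words if w)
    return joined or None

print(join_words("A.", None, "Gentry"))  # A. Gentry
```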
06:22 AM Revision 11533: fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed derived columns that are not part of the validation
Aaron Marcuse-Kubitza
06:15 AM Revision 11532: fix: schemas/vegbien.sql: analytical_plot, analytical_specimen: removed internal ID columns that are not part of the validation
Aaron Marcuse-Kubitza
05:46 AM Revision 11531: schemas/vegbien.sql: analytical_plot: removed derived columns that should not be validated by data providers
Aaron Marcuse-Kubitza
05:42 AM Revision 11530: schemas/vegbien.sql: analytical_specimen: synced to analytical_stem
Aaron Marcuse-Kubitza
05:36 AM Revision 11529: schemas/vegbien.sql: analytical_plot: documented that this contains all of the analytical_stem columns, minus specimenHolderInstitutions, collection, accessionNumber, occurrenceID
Aaron Marcuse-Kubitza
05:34 AM Revision 11528: schemas/vegbien.sql: analytical_plot: synced to analytical_stem
Aaron Marcuse-Kubitza
05:29 AM Revision 11527: schemas/vegbien.sql: analytical_stem_view: added individualCount
Aaron Marcuse-Kubitza
04:42 AM Revision 11526: schemas/vegbien.sql: plot.**, analytical_stem_view: added slopeAspect, slopeGradient
Aaron Marcuse-Kubitza
03:41 AM Revision 11525: schemas/VegCore/ERD/VegCore.ERD.mwb: traceable.id_by_source: support multiple ids_by_source per traceable, because the same entity may be present in multiple datasources (e.g. if one got data from the other), and we would like to remove that duplicate
Aaron Marcuse-Kubitza
02:46 AM Revision 11524: inputs/VegBank/taxonobservation_/map.csv, postprocess.sql: mapped identifiedBy (the _join_words() of identifiedBy__first, etc.)
Aaron Marcuse-Kubitza
02:34 AM Revision 11523: inputs/VegBank/taxonobservation_/create.sql: also join party_id to get the identifiedBy (not mapped yet). note that the inserted row count changes, because taxonobservation_ does not yet have a pkey to do a stable ordering with.
Aaron Marcuse-Kubitza
02:16 AM Revision 11522: bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
Aaron Marcuse-Kubitza
02:06 AM Revision 11521: inputs/VegBank/vegbank.~.clean_up.sql: taxoninterpretation.party_id: don't rename to taxoninterpretation_party_id, so that this can be used directly in taxonobservation_/create.sql with a USING join
Aaron Marcuse-Kubitza
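
The motivation in r11521 is that a USING join requires the join column to have the same name on both sides. A toy SQLite example (simplified stand-in schema, not the actual VegBank tables): keeping the name party_id on both tables allows the terser USING syntax, which also collapses the two join columns into one in the output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE taxoninterpretation (party_id INTEGER, taxon TEXT);
    CREATE TABLE party (party_id INTEGER, surname TEXT);
    INSERT INTO taxoninterpretation VALUES (1, 'Poa annua');
    INSERT INTO party VALUES (1, 'Gentry');
""")
# USING (party_id) works only because both tables kept the same column name;
# renaming one side to taxoninterpretation_party_id would force ON syntax
rows = conn.execute("""
    SELECT taxon, surname
    FROM taxoninterpretation JOIN party USING (party_id)
""").fetchall()
print(rows)  # [('Poa annua', 'Gentry')]
```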
01:52 AM Revision 11520: inputs/VegBank/taxonobservation_/create.sql: join taxonobservation to taxoninterpretation (as in CVS) instead of vice versa, since taxonobservation is the primary, operative table. having VegBank and CVS do things the same way helps ensure that fixes in one can transfer easily to the other.
Aaron Marcuse-Kubitza
01:51 AM Revision 11519: bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
Aaron Marcuse-Kubitza
01:30 AM Revision 11518: inputs/VegBank/^taxon_observation.**.sample/create.sql: synced with taxon_observation.**
Aaron Marcuse-Kubitza
01:22 AM Revision 11517: (for r11396) fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
Aaron Marcuse-Kubitza
12:50 AM Revision 11516: /README.TXT: for each task, documented which machine it's run on. for tasks run on vegbiendev, added pointer to "Connecting to vegbiendev" steps.
Aaron Marcuse-Kubitza
12:19 AM Revision 11515: /README.TXT: added instructions for connecting to vegbiendev
Aaron Marcuse-Kubitza

10/30/2013

11:03 PM Revision 11514: mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
Aaron Marcuse-Kubitza
09:49 PM Revision 11513: mappings/VegCore-VegBIEN.csv: mapped taxon_determination__is_current, taxon_determination__is_original
Aaron Marcuse-Kubitza
09:46 PM Revision 11512: bugfix: mappings/VegCore-VegBIEN.csv: main taxondetermination: use [!isoriginal=true] instead of [!isoriginal] so that adding a manual isoriginal field does not prevent this selector from matching
Aaron Marcuse-Kubitza
09:07 PM Revision 11511: inputs/VegBank/taxonobservation_/map.csv: originalinterpretation, currentinterpretation: removed table name prefix so these would automap
Aaron Marcuse-Kubitza
09:06 PM Revision 11510: mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
Aaron Marcuse-Kubitza
09:02 PM Revision 11509: mappings/VegCore.htm: regenerated from wiki. added taxon_determination__is_current, taxon_determination__is_original.
Aaron Marcuse-Kubitza
08:07 PM Revision 11508: planning/timeline/timeline.2013.xls: geoscrubbing automated pipeline: split into subtasks "build pipeline", "test pipeline", and "integrate pipeline into import process"
Aaron Marcuse-Kubitza
08:04 PM Revision 11507: planning/timeline/timeline.2013.xls: geoscrubbing re-run: moved recent checkmarks to "geoscrubbing automated pipeline" since the work on these actually relates to *automating* the geoscrubbing, not the one-time reload (which was already completed)
Aaron Marcuse-Kubitza
08:02 PM Revision 11506: planning/timeline/timeline.2013.xls: geoscrubbing: made "geoscrubbing re-run" a subtask of the main geoscrubbing task, instead of geoscrubbing re-run being the supertask. updated for Paul's progress.
Aaron Marcuse-Kubitza
07:23 PM Revision 11505: schemas/vegbien.sql: taxondetermination_set_iscurrent(): include new iscurrent__verbatim, so that taxondeterminations the datasource marks as current are always considered first. this currently applies to VegBank and CVS.
Aaron Marcuse-Kubitza
07:17 PM Revision 11504: schemas/vegbien.sql: taxondetermination.isoriginal: made it nullable like iscurrent__verbatim, because this is populated from the datasource. taxondetermination_set_iscurrent() now supports isoriginal=NULL, so this is not a problem.
Aaron Marcuse-Kubitza
07:08 PM Revision 11503: schemas/vegbien.sql: taxondetermination.is_datasource_current: renamed to iscurrent__verbatim and made it nullable, so that this can be used to store the verbatim iscurrent status
Aaron Marcuse-Kubitza
07:04 PM Revision 11502: schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed setting of is_datasource_current (which is now the same as iscurrent), so that this can be used to store the verbatim iscurrent status
Aaron Marcuse-Kubitza
06:59 PM Revision 11501: schemas/vegbien.sql: taxondetermination_set_iscurrent(): isoriginal: make sure it is always either true or false, so that if the NOT NULL constraint on this is ever removed you don't end up with the incorrect sort order false, true, NULL (it should be false=NULL, true)
Aaron Marcuse-Kubitza
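
The sort-order concern in r11501 can be demonstrated with a toy SQLite example (SQLite stores booleans as 0/1, and its ORDER BY placement of NULLs differs from PostgreSQL's, which is exactly why relying on the raw column is fragile): coalescing a nullable boolean to false before sorting yields the intended false=NULL, true ordering regardless of where the engine would otherwise place NULLs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE det (id INTEGER, isoriginal INTEGER)")  # 0/1/NULL boolean
conn.executemany("INSERT INTO det VALUES (?, ?)", [(1, None), (2, 1), (3, 0)])
# treat NULL as false when sorting, so NULL never lands after true
rows = conn.execute(
    "SELECT id FROM det ORDER BY COALESCE(isoriginal, 0), id"
).fetchall()
print(rows)  # [(1,), (3,), (2,)] -- false=NULL rows first, then true
```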
06:42 PM Revision 11500: schemas/vegbien.sql: use plain taxondetermination.iscurrent instead of is_datasource_current since these are now the same
Aaron Marcuse-Kubitza
06:38 PM Revision 11499: schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current: set to the same value as iscurrent, since these now have the same formula
Aaron Marcuse-Kubitza
06:34 PM Revision 11498: schemas/vegbien.sql: taxondetermination_set_iscurrent(): removed no longer used accepted, matched determinationtypes (for these determinations, left-join to TNRS.ScrubbedTaxon)
Aaron Marcuse-Kubitza
06:24 PM Revision 11497: Updated biengeo README with new script workflow.
Paul Sarando
06:24 PM Revision 11496: Split geovalidate.sh into install and update scripts.
Split geovalidate.sh into install.sh and update_gadm_data.sh scripts.
The install.sh script creates the database and u...
Paul Sarando
06:24 PM Revision 11495: Refactored geonames.sh to update_geonames_data.sh
Renamed geonames.sh to update_geonames_data.sh and moved many of the SQL
statements from the bash script into support...
Paul Sarando
06:24 PM Revision 11494: Split up geonames-to-gadm.sql into 3 scripts.
Each script only operates on one table within a transaction.
These scripts now assume the tables have already been cr...
Paul Sarando
06:24 PM Revision 11493: Added geoscrub.sh script.
This script runs the load-geoscrub-input.sh, geonames.sql, and
geovalidate.sql scripts in order to load and scrub veg...
Paul Sarando
06:03 PM Revision 11492: inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for: documented that most of the 13139 removed plots are from duplicates (where we have direct data). this leaves only 560 of SALVIAS's original 13699 plots.
Aaron Marcuse-Kubitza
05:53 PM Revision 11491: inputs/SALVIAS/projects/postprocess.sql: remove example data
Aaron Marcuse-Kubitza
05:48 PM Revision 11490: inputs/SALVIAS/projects/postprocess.sql: remove private data that should not be publicly visible (this was probably already removed by the plotMetadata.AccessCode filter in salvias_plots.~.clean_up.sql)
Aaron Marcuse-Kubitza
05:44 PM Revision 11489: inputs/SALVIAS/projects/postprocess.sql: remove institutions that we have direct data for (Madidi, VegBank)
Aaron Marcuse-Kubitza
04:23 PM Revision 11488: bugfix: inputs/VegBank/plot_/postprocess.sql: coordinateUncertaintyInMeters__from_fuzzing: need to convert km to m in the fuzzing radii. updated derived cols runtimes.
Aaron Marcuse-Kubitza
04:05 PM Revision 11487: inputs/VegBank/plot_/postprocess.sql: remove duplicated CVS plots (2323 of 7079 CVS plots are removed by this)
Aaron Marcuse-Kubitza
03:54 PM Revision 11486: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
03:22 PM Revision 11485: added exports/2013-7-10.Naia.range_limiting_factors.csv.run
Aaron Marcuse-Kubitza
03:04 PM Revision 11484: bugfix: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: do not override the table to analytical_stem, because the extract-specific view should be used instead. this was actually benign, because extract.run export_() always sets $table to the extract-specific view.
Aaron Marcuse-Kubitza
02:57 PM Revision 11483: schemas/vegbien.sql: added 2013-7-10.Naia.range_limiting_factors
Aaron Marcuse-Kubitza
02:45 PM Revision 11482: schemas/vegbien.sql: sync_analytical_stem_to_view(): row_num: renamed to taxon_occurrence__pkey because previous taxon determinations have been removed, so each row is in fact a taxon_occurrence (~= VegCore.vegpath.org?ERD.taxon_occurrence)
Aaron Marcuse-Kubitza
02:20 PM Revision 11481: fix: schemas/vegbien.sql: analytical_stem_view: don't ORDER BY datasource, because this requires a slow full-table sort after the hash joins. (when selecting a subset of analytical_stem_view, nested loops are used automatically without needing an ORDER BY to force this.) to get the datasource-sorted order (plus a sort-order guarantee), you can still add a manual `ORDER BY datasource`, which will use a fast index scan on one of the datasource indexes.
Aaron Marcuse-Kubitza
01:58 PM Revision 11480: schemas/vegbien.sql: analytical_stem: added row_num, which can serve as the taxon_observation ID (DwC occurrenceID)
Aaron Marcuse-Kubitza
01:53 PM Revision 11479: Updated load-geoscrub script with configurable db.
load-geoscrub-input.sh now uses a variable with the db name defined at
the top of the script.
Updated the default db ...
Paul Sarando
12:11 PM Revision 11478: schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
Aaron Marcuse-Kubitza
12:11 PM Revision 11477: schemas/vegbien.sql: analytical_stem: locationID... index: use eventDate instead of dateCollected since it's now eventDate that identifies the locationevent
Aaron Marcuse-Kubitza
04:41 AM Revision 11476: schemas/vegbien.sql: analytical_stem_view: use plot.** to obtain plot-related fields, so that the same code does not need to be maintained in both analytical_stem_view and plot.**
Aaron Marcuse-Kubitza
04:32 AM Revision 11475: schemas/vegbien.sql: analytical_stem_view: moved specimen-specific fields to occurrence section
Aaron Marcuse-Kubitza
03:50 AM Revision 11474: schemas/vegbien.sql: analytical_stem_view, plot.**: added separate location__cultivated__bien
Aaron Marcuse-Kubitza
03:11 AM Revision 11473: schemas/vegbien.sql: added separate eventDate, in addition to dateCollected
Aaron Marcuse-Kubitza
02:59 AM Revision 11472: fix: schemas/vegbien.sql: dateCollected: use aggregateoccurrence.collectiondate *before* locationevent.obsstartdate rather than after, because this is more accurate. it was previously the other way around to allow dateCollected to be the pkey for the row's locationevent (for plots data).
Aaron Marcuse-Kubitza
02:38 AM Revision 11471: schemas/vegbien.sql: analytical_stem_view, plot.**: locationevent__pkey: moved to right before the locationevent-related fields
Aaron Marcuse-Kubitza

10/29/2013

06:53 PM Revision 11470: schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
Aaron Marcuse-Kubitza
06:52 PM Revision 11469: schemas/vegbien.sql: analytical_stem_view: changed column order, etc. to match plot.**
Aaron Marcuse-Kubitza
06:46 PM Revision 11468: schemas/vegbien.sql: plot.**: added locationevent__pkey so that this view can be joined to other VegBIEN tables, which require the internal pkey
Aaron Marcuse-Kubitza
06:29 PM Revision 11467: derived/biengeo/README.txt: geoscrub new data: geovalidate.sql: added runtime from Paul
Aaron Marcuse-Kubitza
09:05 AM Revision 11466: schemas/vegbien.sql: sync_analytical_stem_to_view(): speciesBinomialWithMorphospecies index: documented runtime (1 h)
Aaron Marcuse-Kubitza
08:56 AM Revision 11465: schemas/vegbien.sql: plot.**: updated to use the same column formulas as analytical_stem_view
Aaron Marcuse-Kubitza
08:19 AM Revision 11464: planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: removed "globally-unique" because Naia is actually OK with this being numeric (i.e. unique within our DB)
Aaron Marcuse-Kubitza
08:19 AM Revision 11463: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
07:46 AM Revision 11462: lib/runscripts/import_subset.run: $version: use new $extract_view, which is set to the same value that this was
Aaron Marcuse-Kubitza
07:45 AM Revision 11461: lib/runscripts/extract.run: use the extract-specific view instead of all of analytical_stem
Aaron Marcuse-Kubitza
07:42 AM Revision 11460: schemas/vegbien.sql: added 2013-10-18.Brian_Enquist.Canadensys view
Aaron Marcuse-Kubitza
06:51 AM Revision 11459: schemas/vegbien.sql: sync_analytical_stem_to_view(): added index on speciesBinomialWithMorphospecies for Brian Enquist's Canadensys request
Aaron Marcuse-Kubitza
06:19 AM Revision 11458: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
04:16 AM Revision 11457: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: documented runtime (35 min, now that bugs have been fixed)
Aaron Marcuse-Kubitza
03:33 AM Revision 11456: bugfix: bin/with_all: @inputs default value: use `local`, so that the default value is only set for the current function and doesn't leak back out into the caller. this fixes a bug in subset imports where import_all's Source/import call to with_all would add the .* datasources, but these would then stay in for the import_scrub call, causing extra .* datasources to incorrectly be imported.
Aaron Marcuse-Kubitza
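A minimal sketch of the scoping rule behind this fix (not the actual with_all code): in bash, a "default value" assigned inside a function without `local` overwrites the caller's variable and persists after the function returns.

```shell
#!/usr/bin/env bash
# hypothetical demo of the bug class fixed in bin/with_all

leaky_default()  { val=default; }        # no `local`: clobbers the caller's $val
scoped_default() { local val=default; }  # `local`: stays inside the function

val=from_caller
leaky_default
echo "$val"    # the default leaked out: prints "default"

val=from_caller
scoped_default
echo "$val"    # the fix: prints "from_caller"
```

The same leak is how the `.*` datasources added for one with_all call stayed in `@inputs` for the next call.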
02:22 AM Revision 11455: planning/timeline/timeline.2013.xls: usability testing: added additional subtask to validate the scientists' extracts (i.e. check that the extract fulfills their request)
Aaron Marcuse-Kubitza
02:17 AM Revision 11454: planning/timeline/timeline.2013.xls: provide scientists with their requested data: added separate subtask for Brian Enquist's Canadensys extract
Aaron Marcuse-Kubitza
02:12 AM Revision 11453: planning/timeline/timeline.2013.xls: updated for progress and revised schedule
Aaron Marcuse-Kubitza
01:53 AM Revision 11452: bugfix: schemas/pg_hba.Mac.conf: made same change for Mac as was made for Linux in r11451
Aaron Marcuse-Kubitza
01:22 AM Revision 11451: bugfix: schemas/pg_hba.conf: don't allow ident authentication for Unix socket connections, because this apparently prevents having normal, password-based connections ("md5"). note that just switching the order of the ident and md5 entries is not useful, because whichever authentication type comes second will be ignored completely. this problem was previously worked around by just not using Unix socket connections at all, and always specifying "localhost" as the host to force a hostname-based connection. this does not affect the postgres superuser, because they have their own ident line in pg_hba.conf.
Aaron Marcuse-Kubitza
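A hypothetical pg_hba.conf sketch of the problem described above (not the project's actual file): entries are matched top-down and only the first matching line is used, so an `ident` line ahead of an `md5` line makes the `md5` line dead, and password logins over the Unix socket fail. Keeping `ident` only for the postgres superuser restores password authentication for everyone else:

```
# sketch only -- field layout: TYPE  DATABASE  USER  METHOD
local   all   postgres   ident   # superuser keeps its own ident line
local   all   all        md5     # everyone else authenticates by password
```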

10/25/2013

06:15 PM Revision 11450: Added db user and host to load-geoscrub-input.sh
The psql commands in load-geoscrub-input.sh will now connect with a
specific user on a specific host.
Updated the 'CO...
Paul Sarando
04:51 PM Revision 11449: derived/biengeo/README.txt: geoscrub new data: steps that use .sql scripts: added the psql commands to run these
Aaron Marcuse-Kubitza
04:22 PM Revision 11448: Updated install instructions in the README.
Paul Sarando
03:00 PM Revision 11447: derived/biengeo/README.txt: geoscrub new data: noted that this now deletes any previous geoscrubbing results
Aaron Marcuse-Kubitza
02:58 PM Revision 11446: derived/biengeo/README.txt: added steps to set the working dir for each set of steps
Aaron Marcuse-Kubitza
02:54 PM Revision 11445: derived/biengeo/README.txt: added section on obtaining source code, including path to Paul's in-progress files on vegbiendev (not sure whether the in-progress files are needed to run the core scripts in steps 1-6)
Aaron Marcuse-Kubitza
02:44 PM Revision 11444: derived/biengeo/README.txt: moved commands to run to the top of the README. flagged commands-sections with ***** and an identifying label.
Aaron Marcuse-Kubitza
02:04 PM Revision 11443: Initial checkin of geoscrub install SQL files.
Added install.*.sql files that will do initial table creation for all
required tables.
Added a truncate.vegbien_geosc...
Paul Sarando
02:04 PM Revision 11442: Update load-geoscrub-input.sh to download from URL.
Removed logic to dump input data directly from the vegbien database and
to download the input from a URL provided by ...
Paul Sarando
11:56 AM Revision 11441: planning/timeline/timeline.2013.xls: reload core & analytical database scheduled for this week: postponed to give us additional time to do datasource validations
Aaron Marcuse-Kubitza
09:58 AM Revision 11440: inputs/input.Makefile: added %/import_temp alias for %/import, to mirror the presence of import_temp for import
Aaron Marcuse-Kubitza
09:24 AM Revision 11439: fix: inputs/VegBank/taxonobservation_/map.csv: remapped authorplantname to OMIT because these are not specific to the taxoninterpretation row (this is in a separate taxoninterpretation for the original determination instead). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
Aaron Marcuse-Kubitza
09:22 AM Revision 11438: fix: inputs/VegBank/taxonobservation_/map.csv: remapped int_* to OMIT because these are not specific to the taxoninterpretation row (this is in a separate taxoninterpretation for the original determination instead). see wiki.vegpath.org/Spot-checking#2013-10-10 > Mike Lee's conference call feedback.
Aaron Marcuse-Kubitza

10/24/2013

07:09 PM Revision 11437: exports/2013-10-18.Brian_Enquist.Canadensys.csv.run: inherit from new import_subset.run (which uses extract.run)
Aaron Marcuse-Kubitza
07:08 PM Revision 11436: added lib/runscripts/import_subset.run, extract.run
Aaron Marcuse-Kubitza
05:21 PM Revision 11435: added exports/2013-10-18.Brian_Enquist.Canadensys.csv.run
Aaron Marcuse-Kubitza
05:07 PM Revision 11434: bin/make_analytical_db: removed no longer needed setting of $schema to $public, because this is now done by psql()
Aaron Marcuse-Kubitza
05:06 PM Revision 11433: lib/sh/local.sh: psql(): also accept $public as the $schema param, since this is used by a lot of import scripts
Aaron Marcuse-Kubitza
04:24 PM Revision 11432: lib/sh/util.sh: added require_dot_script()
Aaron Marcuse-Kubitza
04:13 PM Revision 11431: bugfix: lib/sh/util.sh: $top_script: use @BASH_SOURCE instead of $0, because this is also defined for .-scripts
Aaron Marcuse-Kubitza
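A small sketch of why `BASH_SOURCE` is the right variable here: when a file is run with `.` (sourced), `$0` still names the parent shell's script, while `${BASH_SOURCE[0]}` names the sourced file itself. The temp-file setup below is illustrative, not the project's code.

```shell
#!/usr/bin/env bash
# demo: source a throwaway script and compare $0 with BASH_SOURCE inside it
tmp=$(mktemp)
printf '%s\n' 'sourced_path=${BASH_SOURCE[0]}' 'dollar0=$0' >"$tmp"

. "$tmp"    # run as a .-script (shell-include)

echo "$sourced_path"   # the temp file's own path
echo "$dollar0"        # the *outer* shell's $0, not the temp file
rm -f "$tmp"
```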
04:03 PM Revision 11430: bugfix: bin/import_all: restore the working dir when main() is done, in case it started as something other than the root dir
Aaron Marcuse-Kubitza
03:49 PM Revision 11429: bin/after_import: support turning off the end-of-import backup for imports that are not the full database
Aaron Marcuse-Kubitza
03:26 PM Revision 11428: bugfix: lib/runscripts/util.run: `trap on_exit EXIT`: only set this if the script is not a dot script, because if it is a dot script, on_exit() will not be invoked until the calling shell exits, which may be much later than when the script is run. previously, this was handled by canceling the EXIT trap if on_exit() is run manually, but this would not work correctly if a load-time error prevented on_exit() from running and canceling the trap.
Aaron Marcuse-Kubitza
03:21 PM Revision 11427: bugfix: lib/runscripts/util.run: if is_dot_script, fix $@ when no args causes this to incorrectly contain the script name. use is_dot_script rather than the presence of $@ args to decide whether to use @BASH_ARGV, because @BASH_ARGV is actually wrong when run as a .-script (it contains the script name).
Aaron Marcuse-Kubitza
03:17 PM Revision 11426: bugfix: lib/sh/util.sh: is_dot_script(): need to subtract 1 from ${#BASH_LINENO[@]}, because this is the array length rather than the index of the last element as in Perl
Aaron Marcuse-Kubitza
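The off-by-one above comes from a bash/Perl difference worth spelling out: in bash, `${#arr[@]}` is the array's *length*, not the index of its last element (Perl's `$#arr`), so the last element lives at length minus 1. A minimal illustration:

```shell
#!/usr/bin/env bash
arr=(a b c)
len=${#arr[@]}               # 3: the length, unlike Perl's $#arr (which is 2)
last=${arr[$((len - 1))]}    # "c": last element is at index length-1
echo "$len $last"
```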
02:58 PM Revision 11425: lib/sh/util.sh: added is_dot_script()
Aaron Marcuse-Kubitza
01:15 PM Revision 11424: bugfix: schemas/vegbien.sql: taxondetermination_set_iscurrent(): is_datasource_current (used by analytical_stem_view): need to separately check if `determinationtype IS NULL`, because `determinationtype NOT IN (accepted, matched))` will return NULL (false) if determinationtype is NULL, causing no match
Aaron Marcuse-Kubitza
01:11 PM Revision 11423: bugfix: bin/make_analytical_db: when running into a public schema other than "public", also pass this to `/run export_` (which currently uses $schema instead of $public)
Aaron Marcuse-Kubitza
01:10 PM Revision 11422: bugfix: bin/import_all: fix $@ when .-included without args (which causes bash to put the wrong values in $@ instead of leaving it empty)
Aaron Marcuse-Kubitza
01:09 PM Revision 11421: bin/import_all: `make schemas/$version/install`: reinstall instead to allow re-running the import to the same custom schema (e.g. 2013-10-18.Brian_Enquist.Canadensys)
Aaron Marcuse-Kubitza
01:07 PM Revision 11420: bin/import_all: `make schemas/$version/install`: ignore errors if schema exists, to support running with -e
Aaron Marcuse-Kubitza

10/23/2013

11:10 PM Revision 11419: bugfix: bin/import_all: removing inputs/.TNRS/tnrs/tnrs.make.lock: use `"rm" -f` instead of plain "rm" to avoid having an error exit status, which will abort the script if run with the -e flag (as runscripts are)
Aaron Marcuse-Kubitza
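A sketch of the `set -e` interaction behind this fix (hypothetical lock path, not the real tnrs.make.lock): a plain `rm` on a missing file returns nonzero, which aborts an `-e` script, whereas `rm -f` treats a missing operand as success, so the cleanup can run unconditionally. (Quoting the command name, `"rm"`, additionally bypasses any `rm` alias.)

```shell
#!/usr/bin/env bash
set -e                           # runscripts run with -e: any failure aborts

lock=/tmp/example.$$.lock        # hypothetical path; does not exist

"rm" -f "$lock"                  # exit status 0 despite the missing file
reached=yes                      # only reached because -f suppressed the error
echo "still running"
```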
11:02 PM Revision 11418: lib/runscripts/util.run: run script template: changed sample command name to all() because each runscript requires this in order to be run without args
Aaron Marcuse-Kubitza
11:00 PM Revision 11417: lib/runscripts/util.run: support scripts that are run as shell-includes (with leading "."), by allowing the calling script to manually invoke on_exit() without it then being invoked twice (the end of a shell-include does not trigger the EXIT trap)
Aaron Marcuse-Kubitza
10:34 PM Revision 11416: bin/*_all: *_main(): renamed to just main() because it does not matter that other shell-includes' main() methods will clobber this, because it is only executed once
Aaron Marcuse-Kubitza
10:29 PM Revision 11415: bugfix: bin/import_all: Source tables: use .../import instead of import_temp because import_temp is only needed when importing all tables, to prevent the temp suffix from being removed yet
Aaron Marcuse-Kubitza
10:17 PM Revision 11414: lib/runscripts/util.run: support scripts that are run as shell-includes (with leading "."), by also accepting $@ args that are passed along in the util.run include, in addition to @BASH_ARGV
Aaron Marcuse-Kubitza
09:11 PM Revision 11413: bugfix: lib/sh/util.sh: alias_append(): need to enclose $(alias) call in "" because its result may contain separator chars (i.e. whitespace) that will be parsed incorrectly. this appears to only be a bug when runscripts are run as shell-includes, with a leading ".".
Aaron Marcuse-Kubitza
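The quoting rule behind this bugfix, in isolation (illustrative values, not the alias_append code): an unquoted `$(...)` result is word-split on whitespace, so output containing spaces — like that of `alias` — falls apart unless the substitution is wrapped in `""`.

```shell
#!/usr/bin/env bash
value='one  two'               # contains internal whitespace, like `alias` output

unquoted=( $(echo "$value") )  # word-split: becomes two separate elements
quoted=( "$(echo "$value")" )  # quoted: preserved as one element

echo "${#unquoted[@]} ${#quoted[@]}"   # prints: 2 1
```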
01:49 PM Revision 11412: schemas/VegCore/ERD/VegCore.ERD.mwb: connecting lines: inherits from traceable: added arrow to indicate what this label refers to
Aaron Marcuse-Kubitza
01:43 PM Revision 11411: schemas/VegCore/ERD/VegCore.ERD.mwb: regenerated exports and updated image map

Aaron Marcuse-Kubitza
01:37 PM Revision 11410: schemas/VegCore/ERD/VegCore.ERD.mwb: HAS-A/IS-A box: renamed to "connecting lines" for clarity
Aaron Marcuse-Kubitza

10/22/2013

10:22 PM Revision 11409: schemas/VegCore/ERD/VegCore.ERD.mwb: relationships: HAS-A: added HAS-MANY going in the opposite direction, because every HAS-A has an opposite HAS-MANY
Aaron Marcuse-Kubitza
10:17 PM Revision 11408: schemas/VegCore/ERD/VegCore.ERD.mwb: relationships: IS-A, HAS-A: added directional arrows
Aaron Marcuse-Kubitza
10:10 PM Revision 11407: schemas/VegCore/ERD/VegCore.ERD.mwb: field order box: removed spacing between top of text box and bottom of outer box label
Aaron Marcuse-Kubitza
10:04 PM Revision 11406: schemas/VegCore/ERD/VegCore.ERD.mwb: regenerated exports and updated image map
Aaron Marcuse-Kubitza
10:00 PM Revision 11405: schemas/VegCore/ERD/VegCore.ERD.mwb: reordered columns according to the field order convention
Aaron Marcuse-Kubitza
09:49 PM Revision 11404: schemas/VegCore/ERD/VegCore.ERD.mwb: added label documenting the field order convention:
1) inherited
2) required
3) identifying
4) foreign key
5) extenders
6) others
Aaron Marcuse-Kubitza
08:34 PM Revision 11403: web/links/index.htm: updated to Firefox bookmarks. added links for EER models, data management plans. put PostgreSQL before MySQL because we have found PostgreSQL to be a much more capable database system, even though it lacks some of MySQL's user-friendly features.
Aaron Marcuse-Kubitza
06:38 PM Revision 11402: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
06:21 PM Revision 11401: fix: schemas/vegbien.sql: analytical_stem_view: renamed specimens columns to use the VegCore names, where these differ from DwC, so that the now-VegCore staging table column names are the same as the analytical_stem_view column names
Aaron Marcuse-Kubitza
06:16 PM Revision 11400: schemas/vegbien.sql: regenerated using `make schemas/remake`. note that analytical_stem_view column renamings need this step after a search-and-replace of the column names, in order to remove excess "" around all-lowercase names and reset generated index names.
Aaron Marcuse-Kubitza
06:10 PM Revision 11399: fix: schemas/vegbien.sql: analytical_stem_view: renamed specimens columns to use the VegCore names, where these differ from DwC, so that the now-VegCore staging table column names are the same as the analytical_stem_view column names
Aaron Marcuse-Kubitza
01:20 PM Revision 11398: added planning/goals/web_interface/phpPgAdmin.select_interface.png for use at wiki.vegpath.org/Proposed_enhancements
Aaron Marcuse-Kubitza
09:39 AM Revision 11397: inputs/CVS/_src/: added refresh from Mike Lee
Aaron Marcuse-Kubitza

10/21/2013

07:14 PM Revision 11396: fix: bin/map: put template: comment out the "Put template:" label so that the output is valid XML, and displays properly in a browser rather than showing a syntax error
Aaron Marcuse-Kubitza
05:16 PM Revision 11395: planning/timeline/timeline.2013.xls: usability testing: added subtask to provide scientists with their requested data
Aaron Marcuse-Kubitza

10/20/2013

05:51 PM Revision 11394: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
05:21 PM Revision 11393: bugfix: bin/import_all: need to publish datasources that won't be published by `make .../import`, so that the per-datasource import XPaths that refer to TNRS/geoscrub will link up with the TNRS/geoscrub source entry instead of creating a new entry without the metadata (because the entry with the metadata was named TNRS.new/geoscrub.new)
Aaron Marcuse-Kubitza
05:09 PM Revision 11392: schemas/vegbien.sql: datasource_publish(): use parameter names instead of $# because this is a PL/pgSQL function
Aaron Marcuse-Kubitza
05:07 PM Revision 11391: bugfix: schemas/vegbien.sql: datasource_publish(): if the datasource to publish already has the published name, don't datasource_rm() it
Aaron Marcuse-Kubitza
04:55 PM Revision 11390: bin/import_all: removed no longer needed import of geoscrub data, because analytical_stem_view is now joined to the geoscrub_output table directly, instead of using the imported canon_place entries
Aaron Marcuse-Kubitza
04:52 PM Revision 11389: schemas/vegbien.sql: analytical_stem_view: join to the geoscrub_output table directly, instead of using the imported canon_place entries. this avoids the need to import geoscrub_output into VegBIEN (which is expected to take 2+ hours after the refresh), as well as the need to then refresh any datasources whose geoscrubbing input data has changed.
Aaron Marcuse-Kubitza
04:37 PM Revision 11388: inputs/.geoscrub/geoscrub_output/postprocess.sql: added nullable unique index on the inputs, for use by analytical_stem_view. note that it must be nullable in order to create a match when not all of the input fields are populated. this uses array[] to create a nullable index, which is much better than column-based import and VegBIEN's use of COALESCE() because the expression is the same for every type and no NULL sentinel value is needed.
Aaron Marcuse-Kubitza
03:50 PM Revision 11387: schemas/VegCore/VegCore.ERD.mwb: fixed lines
Aaron Marcuse-Kubitza
03:41 PM Revision 11386: schemas/VegCore/ERD/VegCore.ERD.mwb: regenerated exports and updated image map
Aaron Marcuse-Kubitza
03:34 PM Revision 11385: schemas/VegCore/ERD/VegCore.ERD.mwb: person: allow to have multiple organizations
Aaron Marcuse-Kubitza
03:28 PM Revision 11384: schemas/VegCore/ERD/VegCore.ERD.mwb: split "2b. GNRS" label into two labels, one for each table GNRS is applied to
Aaron Marcuse-Kubitza
03:23 PM Revision 11383: schemas/VegCore/ERD/VegCore.ERD.mwb: georeferencing: merged into geoplace, since this is actually information attached to a specific plot, etc. relating to the coordinates used in its geoplace subclass
Aaron Marcuse-Kubitza
02:59 PM Revision 11382: schemas/VegCore/ERD/VegCore.ERD.mwb: geovalidatable_place: changed parent geoplace pointer to parent_boundary_WKT, since the immediate parent may not have an associated boundary to use for geovalidation (i.e. it may not be an official GADM geoplace), although ancestors further up likely will be
Aaron Marcuse-Kubitza
02:40 PM Revision 11381: schemas/VegCore/ERD/VegCore.ERD.mwb: place.name: made it required because it's needed for the unique constraint to be populated properly (including for subclasses such as geoplace, which need to generate this from the coordinates)
Aaron Marcuse-Kubitza
02:30 PM Revision 11380: schemas/VegCore/ERD/VegCore.ERD.mwb: place.rank: made it required, because every place should have some kind of rank indicating what type of place it is, including lower ranks (e.g. plot, individual)
Aaron Marcuse-Kubitza
02:24 PM Revision 11379: schemas/VegCore/ERD/VegCore.ERD.mwb: place: added unique constraint on parent, rank, name
Aaron Marcuse-Kubitza
02:24 PM Revision 11378: schemas/VegCore/ERD/VegCore.ERD.mwb: place.locality: moved to geopath, because this is actually a rank of place (i.e. below municipality) rather than a field that every place could have
Aaron Marcuse-Kubitza
02:07 PM Revision 11377: schemas/VegCore/ERD/VegCore.ERD.mwb: place.locality: moved to geopath, because this is actually a rank of place (i.e. below municipality) rather than a field that every place could have
Aaron Marcuse-Kubitza
01:46 PM Revision 11376: schemas/VegCore/ERD/VegCore.ERD.mwb: geoplace.official_name: renamed to name to merge with inherited field from place. documented that for geoplaces, this is the official, scrubbed name.
Aaron Marcuse-Kubitza
12:48 PM Revision 11375: inputs/.geoscrub/geoscrub_output/postprocess.sql: added geovalid derived column, for use by analytical_stem_view
Aaron Marcuse-Kubitza

10/19/2013

06:56 PM Revision 11374: bin/with_all: $all: renamed to $hidden_srcs for clarity, since this now just adds the hidden (.*) datasources, rather than always using all datasources
Aaron Marcuse-Kubitza
06:50 PM Revision 11373: bugfix: bin/with_all: in $all mode, just prepend the .* datasources to the user-selected (or default) @inputs, so that using $all to add these datasources doesn't inadvertently cause the action to be performed for *all* datasources
Aaron Marcuse-Kubitza
04:33 PM Revision 11372: web/links/index.htm: updated to Firefox bookmarks. PostgreSQL: ALTER TABLE: added documentation about disabling of foreign key triggers, which is only possible by the superuser. note that marking a foreign key constraint as NOT VALID does *not* disable the trigger, so NOT VALID cannot be used for this purpose. this would be used to add fkeys from core VegBIEN tables to validation results tables such as the geoscrubbing results, without needing to import the validation results directly into core VegBIEN (which is time-consuming and currently must be done *before* input data is loaded, requiring a datasource reload to add geoscrubbing results).
Aaron Marcuse-Kubitza
02:15 PM Revision 11371: bin/import_all: usage: documented that this can now be run with a custom datasources list (each of the form inputs/src/)
Aaron Marcuse-Kubitza
02:02 PM Revision 11370: bin/with_all: added support for providing a custom list of inputs to run the command on
Aaron Marcuse-Kubitza
01:29 PM Revision 11369: inputs/.geoscrub/geoscrub_output/postprocess.sql, run: updated runtimes
Aaron Marcuse-Kubitza
01:13 AM Revision 11368: inputs/.geoscrub/geoscrub_output/run: documented full load_data() runtime (9 min @starscream)
Aaron Marcuse-Kubitza
01:12 AM Revision 11367: inputs/.geoscrub/geoscrub_output/postprocess.sql: updated runtimes for refreshed data, which now has 4x as many rows (1,707,970->6,747,650)
Aaron Marcuse-Kubitza
12:54 AM Revision 11366: inputs/.geoscrub/geoscrub_output/: refreshed geoscrub data. removed +header.csv because the extract now contains the header in the first row of the file.
Aaron Marcuse-Kubitza
12:52 AM Revision 11365: bugfix: lib/sh/local.sh: psql(): $is_root: use `` around case statement instead of $(), because it contains an embedded unbalanced )
Aaron Marcuse-Kubitza
12:27 AM Revision 11364: bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: include only the columns that Jim provided in his extract (the geoscrub table contains additional internal columns that are not part of the geovalidation data for VegBIEN). documented runtime (30 s) and upload time (1.5 min).
Aaron Marcuse-Kubitza

10/18/2013

10:33 PM Revision 11363: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: removed no longer needed setting of $local_server, $local_user (and use of $local_pg_database instead of $database) because the use_local bug in local.sh has been fixed
Aaron Marcuse-Kubitza
10:32 PM Revision 11362: bugfix: lib/sh/local.sh: psql(): don't default the connection vars using use_local if running as the postgres user. in that case, connection must happen via a socket, with server="", and as the user running the command (postgres), with user="".
Aaron Marcuse-Kubitza
09:55 PM Revision 11361: bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: need to manually set local_server, local_user to "" so that they do not default to their bien-user values
Aaron Marcuse-Kubitza
09:54 PM Revision 11360: bugfix: lib/sh/db.sh: avoid outputting to /dev/fd/# when running as sudo on Linux, because this causes a "Permission denied" error (due to the /dev/fd/# file being owned by a different user). this is not a problem with normal redirects (>&#), because they do not use /dev/fd/# files which can have access permissions.
Aaron Marcuse-Kubitza
09:52 PM Revision 11359: bugfix: lib/runscripts/util.run: to_top_file(): need to pass "$@" to to_file
Aaron Marcuse-Kubitza
08:17 PM Revision 11358: lib/runscripts/util.run: to_top_file: added function for this (in addition to alias), so that this can be run from sudo in a wrap_fn command
Aaron Marcuse-Kubitza
07:50 PM Revision 11357: lib/sh/db.sh: pg_as_root(): run sudo with echo_run to help debug
Aaron Marcuse-Kubitza
06:29 PM Revision 11356: bugfix: lib/sh/db.sh: pg_cmd(): only set PG* connection/login env vars when the corresponding var is *non-empty*. there are some situations in which these must be unset (in order to use the default value), and other situations when the var must be set to something (i.e. "") to avoid it being defaulted to a value in local.sh > connection vars.
Aaron Marcuse-Kubitza
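The pattern described above can be sketched as follows. The helper name `export_if_set` is hypothetical (the real code lives in lib/sh/db.sh): a `PG*` connection variable is exported only when the corresponding shell variable is non-empty, so that an empty value means "fall back to libpq's default" instead of clobbering it.

```shell
#!/usr/bin/env bash
unset PGHOST PGUSER            # start from libpq defaults

# export $1 with value $2 only when $2 is non-empty
export_if_set() {
    local name=$1 value=$2
    if [ -n "$value" ]; then export "$name=$value"; fi
}

server=db.example.com          # hypothetical host
user=""                        # empty: caller wants the libpq default

export_if_set PGHOST "$server" # set: PGHOST=db.example.com
export_if_set PGUSER "$user"   # empty: PGUSER stays unset

echo "${PGHOST-unset} ${PGUSER-unset}"   # prints: db.example.com unset
```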
06:27 PM Revision 11355: backups/TNRS.backup.md5: updated
Aaron Marcuse-Kubitza
06:13 PM Revision 11354: bugfix: inputs/.geoscrub/geoscrub_output/geoscrub.csv.run: need to set $local_pg_database instead of $database because use_local (in psql()) does not currently avoid clobbering already-set versions of the applicable env vars
Aaron Marcuse-Kubitza
06:11 PM Revision 11353: bugfix: lib/sh/local.sh: pg_as_root(): need to use -E (preserve environment) option to sudo, so that $schema, $table get passed through
Aaron Marcuse-Kubitza
06:05 PM Revision 11352: bugfix: lib/sh/local.sh: psql(): only \set schema, table if $schema, $table are non-empty, because otherwise, you will get a "zero-length delimited identifier" error
Aaron Marcuse-Kubitza
05:30 PM Revision 11351: added inputs/.geoscrub/geoscrub_output/geoscrub.csv.run to export the geoscrub table (must be run on vegbiendev)
Aaron Marcuse-Kubitza
05:29 PM Revision 11350: lib/sh/local.sh: added require_remote()
Aaron Marcuse-Kubitza
05:29 PM Revision 11349: lib/sh/db.sh: added pg_as_root()
Aaron Marcuse-Kubitza
05:28 PM Revision 11348: lib/runscripts/util.run: added $wrap_fn to run any function via sudo, etc.
Aaron Marcuse-Kubitza
05:23 PM Revision 11347: Added instructions for dependencies in the README.
Paul Sarando
05:23 PM Revision 11346: Added indexes to speed up geonames-to-gadm.sql.
Without these indexes, these queries could take hours to complete.
With them, the times more closely matched the time...
Paul Sarando
05:23 PM Revision 11345: Fixed a couple of syntax errors in geovalidate.sh.
Fixed a SQL syntax error and a bash syntax error in the next line.
Paul Sarando
04:03 PM Revision 11344: planning/timeline/timeline.2013.xls: "geoscrubbing automated pipeline": scheduled for after Paul's current set of tasks on the geoscrubbing re-run is complete. i'm budgeting several weeks for this since my understanding is that Paul is doing this part-time.
Aaron Marcuse-Kubitza
03:57 PM Revision 11343: planning/timeline/timeline.2013.xls: moved "geoscrubbing automated pipeline" under "simplify import process for easier maintainability"
Aaron Marcuse-Kubitza
03:53 PM Revision 11342: planning/timeline/timeline.2013.xls: geoscrubbing re-run: added subtask to spot-check reloaded geoscrubbing data
Aaron Marcuse-Kubitza
03:52 PM Revision 11341: planning/timeline/timeline.2013.xls: geoscrubbing re-run: added separate subtask for "geoscrubbing data reload", since apparently it was not clear that of course the new data will need to be imported into VegBIEN before the results of the re-run are available. this is currently scheduled to happen in the next full-database import, which is the week of 10/28 in order to include further validations fixes.
Aaron Marcuse-Kubitza
02:06 PM Revision 11340: planning/timeline/timeline.2013.xls: CVS validation: use timespan dot ◦ for supertask
Aaron Marcuse-Kubitza
02:05 PM Revision 11339: planning/timeline/timeline.2013.xls: CVS validation: added subtasks that are similar to for FIA validation (create validation subset, create extract)
Aaron Marcuse-Kubitza
02:03 PM Revision 11338: planning/timeline/timeline.2013.xls: FIA validation: split apart into subtasks, including "decide which columns to validate", which has to happen ahead of time before the extract can be generated
Aaron Marcuse-Kubitza
01:15 PM Revision 11337: planning/timeline/timeline.2013.xls: fixed check marks for past (hidden) weeks, which had gotten duplicated when rows were copied together with their check marks
Aaron Marcuse-Kubitza
01:04 PM Revision 11336: planning/timeline/timeline.2013.xls: fixed line heights
Aaron Marcuse-Kubitza
12:57 PM Revision 11335: planning/timeline/timeline.2013.xls: fixed column width so the dates display properly in MS Excel
Aaron Marcuse-Kubitza
12:48 PM Revision 11334: planning/timeline/timeline.2013.xls: right-aligned legend so it isn't too close to the "During week of:" label
Aaron Marcuse-Kubitza
12:46 PM Revision 11333: planning/timeline/timeline.2013.xls: added legend:
• task
◦ timespan
✓ task progress
☑ timespan progress
Aaron Marcuse-Kubitza
10:25 AM Revision 11332: planning/timeline/timeline.2013.xls: attribution/conditions of use: made it a subtask of "add missing columns" because this is related to data needed for published analyses. added dots because this is an ongoing task, that depends on data providers getting their use conditions to us.
Aaron Marcuse-Kubitza
10:08 AM Revision 11331: planning/timeline/timeline.2013.xls: reload core & analytical database: moved next reload ahead to last week of October so that we can include the updated geovalidation data for the 10/31 deadline. added additional reload so that they are spaced <= 1 month apart.
Aaron Marcuse-Kubitza
09:54 AM Revision 11330: planning/timeline/timeline.2013.xls: receive feedback from documentation tester: added an extra week to receive additional feedback from them in response to documentation fixes made
Aaron Marcuse-Kubitza
09:51 AM Revision 11329: planning/timeline/timeline.2013.xls: attribution/conditions of use: made this a top-level task instead of a subtask of "data provider metadata", to avoid including lower-priority tasks (i.e. in the later column) in the same section as higher-priority tasks
Aaron Marcuse-Kubitza
09:47 AM Revision 11328: planning/timeline/timeline.2013.xls: datasource validations: regrouped by subtask instead of by datasource, so that the high-priority subtasks get done for all datasources before moving on to lower-priority subtasks for any datasources
Aaron Marcuse-Kubitza
08:54 AM Revision 11327: planning/timeline/timeline.2013.xls: reduced width of Milestone column to make room to fit an additional week on the printed page
Aaron Marcuse-Kubitza
08:42 AM Revision 11326: planning/timeline/timeline.2013.xls: attribution/conditions of use: removed "(Brad/Brian/Bob/etc.)" because these are from everyone who provided or obtained data, not just Brad/Brian/Bob
Aaron Marcuse-Kubitza
08:40 AM Revision 11325: planning/timeline/timeline.2013.xls: rescheduled tasks to accommodate the separate non-critical feature requests subtasks
Aaron Marcuse-Kubitza
08:37 AM Revision 11324: planning/timeline/timeline.2013.xls: datasource validations: split "fix feature requests" into separate "fix critical feature requests" and "fix non-critical feature requests" tasks. rescheduled non-critical feature requests until after the other validation tasks have been completed.
Aaron Marcuse-Kubitza
08:11 AM Revision 11323: planning/timeline/timeline.2013.xls: add globally-unique occurrenceID: moved up to next week because we would like to be able to get this done for the 10/31 deadline
Aaron Marcuse-Kubitza
07:52 AM Revision 11322: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
07:48 AM Revision 11321: planning/timeline/timeline.2013.xls: moved "data provider metadata" before "datasource validations (spot-checking)" because conditions of use are necessary for scientists who want to publish papers based on the data (which is a key use case)
Aaron Marcuse-Kubitza
07:43 AM Revision 11320: planning/timeline/timeline.2013.xls: moved "usability testing" before "datasource validations (spot-checking)" because this is most important towards reaching our goal of a useful information resource
Aaron Marcuse-Kubitza
07:41 AM Revision 11319: planning/timeline/timeline.2013.xls: moved "geoscrubbing re-run", "add globally-unique occurrenceID" back under "usability testing" > "add missing columns" because these are in fact part of the usability testing
Aaron Marcuse-Kubitza
01:26 AM Revision 11318: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza

10/17/2013

11:59 PM Revision 11317: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
10:32 PM Revision 11316: planning/timeline/timeline.2013.xls: "flatten the datasources to a common schema": moved to later column because the complex tasks "switching to new-style import" and "create interactive scripts for each import step" are also scheduled then. (it's unlikely we would have much time over winter break anyway, considering that there is ~1 week's worth of holidays then.)
Aaron Marcuse-Kubitza
10:24 PM Revision 11315: planning/timeline/timeline.2013.xls: scheduled "simplify import process for easier maintainability"
Aaron Marcuse-Kubitza
10:20 PM Revision 11314: planning/timeline/timeline.2013.xls: tasks performed by someone else (geoscrubbing re-run): changed solid check marks ✓ to open check marks ☑ to match the solid • vs. open ◦ dot convention
Aaron Marcuse-Kubitza
10:17 PM Revision 11313: planning/timeline/timeline.2013.xls: documentation testing: added supertask dots. removed later dots for scheduled tasks.
Aaron Marcuse-Kubitza
10:15 PM Revision 11312: planning/timeline/timeline.2013.xls: scheduled "documentation testing"
Aaron Marcuse-Kubitza
10:02 PM Revision 11311: planning/timeline/timeline.2013.xls: scheduled "simplify process of mapping/adding a new datasource"
Aaron Marcuse-Kubitza
10:01 PM Revision 11310: planning/timeline/timeline.2013.xls: "add globally-unique occurrenceID": moved it up to the first week when we're no longer fixing existing issues in datasources, since this has similar priority to adding missing columns discovered during usability testing (which is scheduled as an ongoing task)
Aaron Marcuse-Kubitza
09:54 PM Revision 11309: planning/timeline/timeline.2013.xls: usability testing: did task breakdown (find scientists who want to use BIEN3 data, etc.) and scheduled subtasks
Aaron Marcuse-Kubitza
09:34 PM Revision 11308: planning/timeline/timeline.2013.xls: moved "add missing columns" to its own supertask. used outline check mark ☑ (analogous to open circle ◦) to mark supertasks as completed which were split up into subtasks.
Aaron Marcuse-Kubitza
09:29 PM Revision 11307: planning/timeline/timeline.2013.xls: later column: removed dots from scheduled items
Aaron Marcuse-Kubitza
09:06 PM Revision 11306: planning/timeline/timeline.2013.xls: moved "switching to new-style import"-related steps (other than for CVS) to separate "simplify import process for easier maintainability" supertask, since this is not part of the "simplify process of mapping/adding a new datasource" task
Aaron Marcuse-Kubitza
08:39 PM Revision 11305: planning/timeline/timeline.2013.xls: add any missing columns: added and scheduled step to add globally-unique occurrenceID
Aaron Marcuse-Kubitza
08:34 PM Revision 11304: planning/timeline/timeline.2013.xls: geoscrubbing re-run: added dots ◦ for this for the time when it can be worked on asynchronously by Paul Sarando
Aaron Marcuse-Kubitza
08:30 PM Revision 11303: planning/timeline/timeline.2013.xls: data provider metadata: added dots ◦ for the portion of "attribution and conditions of use" that can be worked on asynchronously by Brad/Brian/Bob
Aaron Marcuse-Kubitza
08:24 PM Revision 11302: planning/timeline/timeline.2013.xls: scheduled "aggregated validations" during the last 2 weeks of "datasource validations (spot-checking)", because these weeks are only spent fixing issues uncovered in the remaining datasources, so there may be extra time then
Aaron Marcuse-Kubitza
07:20 PM Revision 11301: planning/timeline/timeline.2013.xls: scheduled other tasks after "datasource validations (spot-checking)" is complete
Aaron Marcuse-Kubitza
07:14 PM Revision 11300: planning/timeline/timeline.2013.xls: datasource validations (spot-checking): each datasource's validation supertask: added open circles ◦ spanning the length of the subtasks
Aaron Marcuse-Kubitza
07:12 PM Revision 11299: planning/timeline/timeline.2013.xls: use an open circle ◦ instead of a bullet • for supertasks that have been fully split into subtasks (not just itemizing a few subtasks), so that these don't count towards the bullets (estimated workload) in each week
Aaron Marcuse-Kubitza
07:09 PM Revision 11298: planning/timeline/timeline.2013.xls: use an open circle ◦ instead of a bullet • for tasks that are performed by someone other than me, so that these don't count towards the bullets (estimated workload) in each week
Aaron Marcuse-Kubitza
07:04 PM Revision 11297: planning/timeline/timeline.2013.xls: datasource validations (spot-checking): split each datasource into subtasks and scheduled them
Aaron Marcuse-Kubitza
06:00 PM Revision 11296: planning/timeline/timeline.2013.xls: moved "move denormalized validations to stage II", "move stage III validations to stage II" outside of "switching to new-style import" because the "switching to new-style import" step refers just to the per-datasource switching steps, not to the additional refactorings that would be needed to avoid dependency on the complex XPath mappings (mappings/VegCore-VegBIEN.csv)
Aaron Marcuse-Kubitza
05:57 PM Revision 11295: planning/timeline/timeline.2013.xls: datasource validations (spot-checking): added subtasks for each of the remaining datasources (wiki.vegpath.org/2013-10-17_conference_call#validation-order)
Aaron Marcuse-Kubitza
05:43 PM Revision 11294: planning/timeline/timeline.2013.xls: moved non-validation-related tasks after the 10/31 deadline so that these are not taking time away from the validation
Aaron Marcuse-Kubitza
05:40 PM Revision 11293: planning/timeline/timeline.2013.xls: moved "flatten the datasources to a common schema" under "simplify process of mapping/adding a new datasource" because this is also needed separately for datasources where the left-joining is not part of the validation
Aaron Marcuse-Kubitza
05:30 PM Revision 11292: planning/timeline/timeline.2013.xls: extended "revisions to VegBIEN schema" to length of "datasource validations (spot-checking)" because schema changes are expected as we add missing fields
Aaron Marcuse-Kubitza
05:23 PM Revision 11291: planning/timeline/timeline.2013.xls: crossed out and hid completed tasks ("find out amount remaining in BIEN3 budget")
Aaron Marcuse-Kubitza
05:17 PM Revision 11290: planning/timeline/timeline.2013.xls: datasource validations (spot-checking): extended through the end of November because data providers' fixes on the remaining 10 datasources (wiki.vegpath.org/2013-10-17_conference_call#validation-order) are likely to add significantly to the issues and feature requests associated with these datasources (e.g. the 2nd-round VegBank validation added 4 issues and 5 feature requests). there is also expected to be wait time while data providers are responding (most likely in multiple rounds of feedback).
Aaron Marcuse-Kubitza
04:59 PM Revision 11289: planning/timeline/timeline.2013.xls: data provider metadata: removed "iPlant can do" because this actually requires Brad/Brian/Bob/other data providers to provide this info. however, this info may be findable on the web for some datasources.
Aaron Marcuse-Kubitza
04:53 PM Revision 11288: planning/timeline/timeline.2013.xls: moved "data provider metadata" right after "datasource validations" because this is part of the completed database itself rather than the tools to maintain it
Aaron Marcuse-Kubitza
04:47 PM Revision 11287: planning/timeline/timeline.2013.xls: split "revisions to schema" into "revisions to VegBIEN schema" (part of datasource validations) and "revisions to normalized VegCore" (part of documentation)
Aaron Marcuse-Kubitza
04:44 PM Revision 11286: bin/import_all: use just import_scrub, not reimport_scrub, because import_scrub now automatically publishes the datasource's import (i.e. removes the temp suffix)
Aaron Marcuse-Kubitza
04:43 PM Revision 11285: bugfix: inputs/input.Makefile: import: remove the temp suffix once the import is done, so that the full database import doesn't keep the suffix attached to the datasources that import_all didn't import with reimport. removed unused import_publish target (instead use import_temp to invoke just the import without the temp suffix removal).
Aaron Marcuse-Kubitza
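The publish step described in the two entries above (stripping the temp suffix from a datasource's import once it completes) can be sketched as a minimal shell function. This is an illustrative assumption, not the actual input.Makefile code: the `.new` suffix, the `publish_schema` name, and the `GBIF.new` schema name are all hypothetical.

```shell
# hypothetical sketch of a "publish" step: strip an assumed ".new" temp
# suffix from a schema name and emit the rename statement that would
# make the finished import the live version
publish_schema() {
    local temp="$1"
    local published="${temp%.new}"  # drop the assumed temp suffix
    echo "ALTER SCHEMA \"$temp\" RENAME TO \"$published\";"
}

publish_schema "GBIF.new"  # prints: ALTER SCHEMA "GBIF.new" RENAME TO "GBIF";
```

Doing the rename inside the import target (rather than in a separate import_publish target) is what keeps a full database import from leaving the suffix attached to datasources that were imported without reimport.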
04:27 PM Revision 11284: planning/timeline/timeline.2013.xls: moved part of "switching to new-style import" under "datasource validations (spot-checking)" because this is necessary to validate CVS
Aaron Marcuse-Kubitza
04:24 PM Revision 11283: planning/timeline/timeline.2013.xls: moved "simplify process of mapping/adding a new datasource" and "documentation testing" after "usability testing" because these tasks were there to make it possible for people other than me to reload/add to the database, which we have now decided is a lower priority than creating the validated database itself
Aaron Marcuse-Kubitza
04:14 PM Revision 11282: planning/timeline/timeline.2013.xls: added weeks through the end of the year (12/31)
Aaron Marcuse-Kubitza
02:00 PM Revision 11281: schemas/VegBIEN/attribution/BIEN 3 data use and attribution.docx: changed dataset definition to the definition in normalized VegCore ("a collection of records from the same place, with the same attribution requirements"), following discussion with Ramona
Aaron Marcuse-Kubitza
01:13 PM Revision 11280: planning/timeline/timeline.2013.xls: updated for progress
Aaron Marcuse-Kubitza
12:34 PM Revision 11279: schemas/VegBIEN/attribution/BIEN 3 data use and attribution.docx: updated to Ramona's commented version
Aaron Marcuse-Kubitza
12:19 AM Revision 11278: inputs/CVS/plot_/map.csv: realLatitude, realLongitude: remapped to UNUSED because these columns are actually empty
Aaron Marcuse-Kubitza
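A quick way to confirm that a staging-file column is actually empty before remapping it to UNUSED, as in the entry above, is to count its non-empty values. A minimal sketch (the `count_nonempty` helper and the file/column names are assumptions for illustration; it does naive comma splitting, so quoted CSV fields would need a real CSV parser):

```shell
# hypothetical helper: count non-empty values in a named column of a
# simple comma-separated file (no quoted fields handled)
count_nonempty() {  # usage: count_nonempty file.csv columnName
    awk -F, -v col="$2" '
        NR == 1 { for (i = 1; i <= NF; i++) if ($i == col) c = i; next }
        c && $c != "" { n++ }
        END { print n + 0 }
    ' "$1"
}

# count_nonempty plot_.csv realLatitude  # 0 would confirm the column is empty
```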
