Activity
From 06/27/2013 to 07/26/2013
07/26/2013
- 11:07 PM Revision 10455: schemas/VegCore/VegCore.ERD.mwb: collector, identified_by: allow multiple parties for these fields, using the new party_list array table
- 10:44 PM Revision 10454: schemas/VegCore/VegCore.ERD.mwb: party arrays: use new party_list array table instead of adding a separate many:many table for each table that uses a party array. this also allows using the party_list ID in a unique constraint, because it is now a first-class field.
- 10:06 PM Revision 10453: schemas/VegCore/VegCore.ERD.mwb: party: added party_list array table
- 09:45 PM Revision 10452: schemas/VegCore/VegCore.ERD.mwb: party: added optional fkey to organization
- 09:32 PM Revision 10451: schemas/VegCore/VegCore.ERD.mwb: geovalidation: renamed lat_long_in_ranks to lat_long_in_place_ranks for clarity
- 09:12 PM Revision 10450: schemas/VegCore/VegCore.ERD.mwb: individual: added tag_history hstore to store custom identity attributes
- 08:39 PM Revision 10449: schemas/VegCore/VegCore.ERD.mwb: taxon_string: documented that to get the parsed_taxon_assertion (TNRS result) for a taxon_string, you would join using the SQL dotpath taxon_string.string<-taxon_assertion(string)::parsed_taxon_assertion[source='TNRS.version'] (see wiki.vegpath.org/SQL_dotpaths). important how-to comments such as this one are now included in the version-controlled MySQL schema file itself, not just the .mwb file and the staging copy on vegbiendev.
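  (illustration only: one way the dotpath above could be written as plain SQL. this assumes parsed_taxon_assertion exposes the string and source fields inherited from taxon_assertion, which is not verified against the ERD:)
    SELECT ts.string, pta.*
    FROM taxon_string ts
    JOIN parsed_taxon_assertion pta ON pta.string = ts.string -- the backward fkey taxon_assertion(string)
    WHERE pta.source = 'TNRS.version';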
- 08:16 PM Revision 10448: bin/my2pg: use s!...!...! when either the regexp or the replacement contains / , to avoid unnecessary \-s
- 08:09 PM Revision 10447: bin/my2pg: commenting out table options: added explanatory comment, because it is not obvious from the regexp what this does
- 08:06 PM Revision 10446: lib/sh/db.sh: mysqldump(): don't use --compatible=postgresql when the table structure is being exported, because this removes the table options (which include the COMMENT attribute). --compatible=postgresql remains on in data-only mode because embedded ` in data cannot easily be distinguished from ` around column names, so ANSI_QUOTES is needed to do the translation to " (and data sections do not contain table options). note that all --compatible modes that offer ANSI_QUOTES unfortunately exclude the table options, and there is no way to run a SQL query to set the SQL mode before beginning the dump, so ANSI_QUOTES translation must be handled by my2pg instead.
- 06:35 PM Revision 10445: bin/my2pg: comment out table options (http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html#sqlmode_no_table_options) instead of removing them, because they include table COMMENTs, which contain important metadata such as table definitions. (note that table COMMENTs use a slightly different syntax than column COMMENTs, so the table COMMENTs will not be commented out twice.)
- 06:19 PM Revision 10444: bin/my2pg: comment out COMMENTs instead of removing them so that they will be included in the PostgreSQL translation. COMMENTs contain important metadata about columns, such as definitions and the meanings of integer flag values.
- 05:58 PM Revision 10443: inputs/{.,}*/*.schema.sql: regenerated using the instructions in bin/my2pg. this primarily replaces timestamp with text/*timestamp*/ (to preserve indefinite dates).
- 05:56 PM Revision 10442: bin/my2pg: added instructions for regenerating *.schema.sql whenever this script is changed
- 05:22 PM Revision 10441: bin/my2pg: COMMENT: also match COMMENTs with embedded ', because there will only be one COMMENT per line, so the contents of the COMMENT can just extend to the last ' on the line
- 05:16 PM Revision 10440: bugfix: lib/sh/util.sh: $sed_cmd: make output unbuffered, so that running e.g. bin/my2pg at the command line produces output as each line is read
- 04:29 PM Revision 10439: bin/my2pg: replace MySQL ` quotes with " quotes to support exports that were generated without ANSI_QUOTES mode. (this replacement only applies to schema exports, not data.) ANSI_QUOTES is only available with mysqldump --compatible modes that also include NO_TABLE_OPTIONS, which omits important table options such as comments. in particular, these comments are part of schemas/VegCore/VegCore.ERD.mwb but were not being included in VegCore.my.sql.
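  (hypothetical before/after fragment showing the kinds of translations described in the my2pg entries above; the table and columns are made up, and the exact comment-out syntax that bin/my2pg emits may differ:)
    -- MySQL input
    CREATE TABLE `specimen` (
      `id` int COMMENT 'surrogate key',
      `collected` timestamp
    ) ENGINE=InnoDB COMMENT='one row per specimen';
    -- PostgreSQL output: " quoting, COMMENTs and table options preserved as SQL comments,
    -- timestamp widened to text to keep indefinite dates
    CREATE TABLE "specimen" (
      "id" int /*COMMENT 'surrogate key'*/,
      "collected" text/*timestamp*/
    )/*ENGINE=InnoDB COMMENT='one row per specimen'*/;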
- 01:41 PM Revision 10438: schemas/VegCore/VegCore.ERD.mwb: taxon_string: removed parsed_taxon_assertion field, since there may be more than one parsing (TNRS result) for a given taxon_string. the parsing relationship can better be represented by adding a parsed_taxon_assertion whose taxon_assertion.string points to the parsed taxon_string. getting the parsed_taxon_assertion for a taxon_string now requires joining on parsed_taxon_assertion using a backwards instead of forwards fkey, and filtering the corresponding assertions to include only the ones for TNRS (of the desired TNRS version). documented that taxon_assertion.string was previously the concatenated matched name, but is now the TNRS input name. the concatenated matched name is still in parsed_taxon_assertion.matched_taxon_concept->:taxon_name.unique_name.
- 01:22 PM Revision 10437: schemas/VegCore/VegCore.my.sql: regenerated from .mwb schema, which apparently reverses the order of the fkeys (possibly a Linux MySQL bug?)
- 12:26 PM Revision 10436: inputs/SpeciesLink/Specimen/map.csv: remapped Darwin Core synonyms to DUPLICATE. this avoids the need to translate these to postprocessing derived columns for new-style import, and also speeds up column-based import because there are fewer automatic _alts to perform to resolve filter-less collisions. the svn diff was verified by replacing DUPLICATE#of:dwc_terms_<term>#... with <term>, removing the comment, and checking that this removes the diff (except where VegCore has renamed a DwC term).
- 12:17 PM Revision 10435: bugfix: inputs/SpeciesLink/Specimen/map.csv: *scientificName: remapped to scientificName instead of taxonName to match the DwC term's name (this is the same dwc_terms_scientificName mismapping that was fixed in r10434)
- 11:56 AM Revision 10434: bugfix: inputs/SpeciesLink/Specimen/map.csv: dwc_terms_scientificName: remapped to scientificName instead of taxonName to match that DwC term name, as well as the mappings of other *scientificName terms
- 11:06 AM Revision 10433: inputs/SpeciesLink/Specimen/map.csv: marked dwc_geospatial_VerbatimLatitude,Longitude as exact duplicates of dwc_terms_*
- 10:52 AM Revision 10432: inputs/SpeciesLink/Specimen/map.csv: remapped identical _alt-ed fields to DUPLICATE. this avoids the need to translate these to postprocessing derived columns for new-style import, and also speeds up column-based import because there are fewer automatic _alts to perform to resolve filter-less collisions.
- 10:06 AM Revision 10431: bugfix: inputs/SpeciesLink/Specimen/map.csv: *CollectorNumber: moved these to the same _alt group as recordNumber, because they are actually duplicates
- 09:43 AM Revision 10430: correction: inputs/SpeciesLink/Specimen/map.csv: *FieldNumber: fixed incorrect comment that these fields are identical to recordNumber, when in fact they have the same *meaning* but not the same values: values are stored under *either* of the two terms. the previous conclusion had been based on an incorrect query, which used != instead of the NULL-sensitive IS NOT DISTINCT FROM.
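  (the operator difference, on illustrative table/column names:)
    -- ordinary equality: a comparison involving NULL is NULL, so such rows drop out of WHERE
    SELECT count(*) FROM specimen WHERE "FieldNumber" = "recordNumber";
    -- NULL-sensitive equality: also counts rows where both values are NULL
    SELECT count(*) FROM specimen WHERE "FieldNumber" IS NOT DISTINCT FROM "recordNumber";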
07/25/2013
- 08:14 PM Revision 10429: planning/timeline/timeline.2013.xls: Adding derived columns: extended to overlap with all subtasks
- 08:12 PM Revision 10428: planning/timeline/timeline.2013.xls: Geoscrubbing: split into separate re-run and automated pipeline tasks
- 08:09 PM Revision 10427: planning/timeline/timeline.2013.xls: moved Data provider validations before Adding derived columns because ensuring that the source data is in the database is more important than the derived data, which can always be added later
- 08:00 PM Revision 10426: planning/timeline/timeline.2013.xls: Data provider validations: added dot in July because some amount of datasource-level validation happens when mapping issues are discovered during the refactoring
- 07:34 PM Revision 10425: bugfix: inputs/*/*/map.csv for specimen tables: remapped eventDate,day,month,year to *Collected, because a general date always applies to the observation itself rather than to any parent event (specimens don't have a parent event)
- 07:34 PM Revision 10424: inputs/*/*/map.csv for IndividualObservation tables: also mapped eventDate,day,month,year to *Collected, because a general date always applies to the observation itself in addition to any parent event which it may be a part of
- 06:27 PM Revision 10423: bugfix: inputs/XAL/Specimen/, NY/Ecatalog_all/: *JulianDay: remapped to dayOfYear instead of day (the day of the month)
- 05:08 PM Revision 10422: inputs/SpeciesLink/Specimen/map.csv: remapped *dayOfYear-related terms to UNUSED
- 04:53 PM Revision 10421: bugfix: inputs/SpeciesLink/Specimen/map.csv: remapped conceptual_darwin_2003_1_0_JulianDay, dwc_dwcore_DayOfYear to dayOfYear instead of day (the day of the month)
- 04:43 PM Revision 10420: mappings/VegCore.htm: regenerated from wiki. added dayOfYear (=julianDay), which is different from startDayOfYear/endDayOfYear.
- 01:59 PM Revision 10419: inputs/CTFS/: switched to new-style import, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource
- 01:50 PM Revision 10418: inputs/CTFS/StemObservation/: translated collisions (missing filters) to postprocessing derived columns, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 10:57 AM Revision 10417: planning/timeline/timeline.2013.xls: rebalanced tasks across the remaining months, taking into account priority changes made in the conference call (e.g. that we should not be handling people's individual data requests (Brad, wiki.vegpath.org/2013-07-25_conference_call#Decisions-made))
- 10:50 AM Revision 10416: planning/timeline/timeline.2013.xls: updated with additional tasks added in conference call: translate source-specific derived columns to plain SQL, flatten the datasources, automated geoscrubbing pipeline
- 08:43 AM Revision 10415: planning/goals/BIEN_3_derived_data_products_NormalizedDB_only.docx: removed BIEN species-level phylogeny, which Brad says is out of scope for the BIEN DB
- 08:24 AM Revision 10414: removed planning/workflow/bien3_architecture.odp because the current version is now in bien3_architecture.pptx
- 08:13 AM Revision 10413: added planning/workflow/validation/TNRS_results.ppt symlink to inputs/test_taxonomic_names/_scrub/TNRS_results.ppt
- 08:10 AM Revision 10412: inputs/test_taxonomic_names/_scrub/TNRS_results.ppt: highlighted the sample row and related rows
- 08:04 AM Revision 10411: inputs/test_taxonomic_names/_scrub/TNRS_results.xls: moved arrows to TNRS_results.ppt so they can be changed more easily
- 07:51 AM Revision 10410: inputs/test_taxonomic_names/_scrub/TNRS_results.ppt: TNRS.tnrs: added diagram labels for the various names and steps
- 07:32 AM Revision 10409: inputs/test_taxonomic_names/_scrub/TNRS_results.xls: use "Poa annua var. eriolepis"->"Poaceae Poa annua L." as the synonym example instead of "Poa annua fo. lanuginosa"->"Poaceae Poa annua var. annua" because the input name is simpler and it's closer to the beginning of the list
- 07:20 AM Revision 10408: inputs/test_taxonomic_names/_scrub/run: exports/make(): tnrs.csv: include Name_matched instead of Genus_matched+Specific_epithet_matched because this also contains lower ranks, which are used in the TNRS synonymizing
- 07:06 AM Revision 10407: inputs/test_taxonomic_names/_scrub/TNRS_results.ppt: added annotations explaining the import steps
- 06:36 AM Revision 10406: added inputs/test_taxonomic_names/_scrub/TNRS_results.ppt, containing the *.png screenshots with tables labeled
- 06:35 AM Revision 10405: added inputs/test_taxonomic_names/_scrub/*.png, screenshots of the TNRS_results.xls tabs (LibreOffice does not preserve the formatting when pasting a spreadsheet to a PowerPoint as a table, and the table editing options are limited)
- 06:31 AM Revision 10404: added inputs/test_taxonomic_names/_scrub/TNRS_results.xls with formatted versions of the *.csv tables
07/24/2013
- 05:15 PM Revision 10403: inputs/test_taxonomic_names/_scrub/run: exports/make(): subset the columns to include only the most important to demo how the data is represented
- 05:13 PM Revision 10402: lib/sh/db.sh: mk_select(): support passing $cols as array instead of SQL string, which is easier to enter in a shell script (less quotes, \ , etc.)
- 05:12 PM Revision 10401: lib/sh/db.sh: added cols2list()
- 05:10 PM Revision 10400: lib/sh/util.sh: added is_array()
- 04:38 PM Revision 10399: inputs/test_taxonomic_names/_scrub/run: exports/make(): allow specifying an explicit columns list for each table using cols=... (initially set to all columns)
- 04:09 PM Revision 10398: added inputs/test_taxonomic_names/_scrub/*.csv exports
- 04:09 PM Revision 10397: added inputs/test_taxonomic_names/_scrub/run, which exports the test_scrub-populated tables to CSV
- 04:08 PM Revision 10396: lib/sh/db_make.sh: added pg_export_table_to_dir(), pg_export_tables_to_dir(). unlike db.sh pg_export_table_to_dir_no_header(), these functions are make-aware and will not clobber an existing file.
- 03:15 PM Revision 10395: reran inputs/test_taxonomic_names/test_scrub, which generates the public.test_taxonomic_names sample schema
- 01:50 PM Revision 10394: inputs/CTFS/Plot/map.csv: DescriptionOfSite: remapped to locationRemarks, not locality
- 01:38 PM Revision 10393: inputs/CTFS/AggregateObservation/: translated multi-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 01:24 PM Revision 10392: schemas/vegbien.sql: geoscrub_input_new: updated for VegCore-renamed geoscrub_output column names
- 01:09 PM Revision 10391: schemas/util.sql: added ?>= operator with is_more_complete_than() function
- 12:44 PM Revision 10390: inputs/.geoscrub/: switched to new-style import, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource
- 12:15 PM Revision 10389: inputs/.geoscrub/geoscrub_output/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 11:18 AM Revision 10388: schemas/util.sql: SQL-language IMMUTABLE functions marked STRICT: removed STRICT to enable dynamic inlining, which speeds up the function by up to 7x. STRICT was not removed where the function was particularly complex and the STRICT optimization would likely be more significant than inlining.
- 11:07 AM Revision 10387: bugfix: inputs/BRIT/specimen_flat/postprocess.sql: diameterBreastHeight_cm, height_m: use newly NULL-mapped versions of columns instead of the *_verbatim columns
- 11:04 AM Revision 10386: inputs/BRIT/: switched to new-style import, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource
- 10:49 AM Revision 10385: inputs/BRIT/specimen_flat/: translated multi-column filters with _join() to postprocessing derived columns, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 10:43 AM Revision 10384: inputs/BRIT/specimen_flat/map.csv: Habitat_Summary: remapped to UNUSED
- 10:16 AM Revision 10383: inputs/BRIT/specimen_flat/postprocess.sql: diameterBreastHeight_cm, height_m: updated runtimes
- 10:15 AM Revision 10382: inputs/BRIT/specimen_flat/: DBH_*, Height_*: mapped NULL-equivalent values, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 09:27 AM Revision 10381: inputs/.../: translated multi-column filters with _avg() to postprocessing derived columns, using the steps at wiki.vegpath.org/Adding_new-style_import_to_a_datasource#Translating-filters-to-postprocessing-derived-columns
- 08:18 AM Revision 10380: inputs/BRIT/specimen_flat/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
07/20/2013
- 05:25 AM Revision 10379: /README.TXT: Maintenance: added instructions for what to do if http://vegbiendev.nceas.ucsb.edu/phppgadmin/ goes down (sometimes displaying a Not found error)
- 05:21 AM Revision 10378: schemas/util.sql: schema comment: added note that IMMUTABLE SQL-language functions should never be declared STRICT, because this prevents them from being inlined. inlining can create a significant speed improvement (7x+), by avoiding function calls and enabling additional constant folding.
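  (a minimal sketch of the rule, using a hypothetical function: leave off STRICT and write the NULL handling explicitly, so the planner can still inline the call:)
    CREATE OR REPLACE FUNCTION util.num_elems(a anyarray)
      RETURNS integer LANGUAGE sql IMMUTABLE -- note: no STRICT
    AS $$ SELECT CASE WHEN $1 IS NULL THEN NULL ELSE coalesce(pg_catalog.array_length($1, 1), 0) END $$;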
- 05:09 AM Revision 10377: inputs/REMIB/Specimen/postprocess.sql: map_nulls() derived cols: documented total runtime (7.5 min on vegbiendev)
- 05:07 AM Revision 10376: inputs/REMIB/Specimen/postprocess.sql: map_nulls() derived cols: updated runtimes for map_nulls() inlining, which created a speed improvement of *7x* for the numeric columns and *2.5x* for the text columns (292563.362->41929.772 ms and 83640.424->35690.797 ms, respectively). note that the map_nulls__coord__*() calls could be optimized further by combining the successive map_nulls() calls into one, with the hstores merged.
- 04:37 AM Revision 10375: schemas/util.sql: map_nulls(): documented that inputs/REMIB/Specimen/postprocess.sql > country also shows that inlining is now happening properly. note that the %-wise speed improvement due to inlining is smaller when util._map() is run on long strings instead of the short strings used in the initial profiling. this is because a greater % of the time is spent in system functions such as hstore->text, which are not affected by the inlining because they are run either way.
- 04:18 AM Revision 10374: schemas/util.sql: map_nulls(): use new nulls_map(). proper inlining (i.e. the same runtime before and after the change) has been verified with the following profiling query:
    SELECT util.map_nulls(array[1, 2, 3]::text[], v) FROM unnest(array_fill(1, array[100000])) f (v)
- 04:05 AM Revision 10373: schemas/util.sql: added nulls_map(), for use with _map()
- 03:39 AM Revision 10372: lib/runscripts/table.run: postprocess(): added remake action that calls trim_table()
- 03:37 AM Revision 10371: lib/runscripts/table.run: added trim_table(), which calls util.trim(regclass, regclass)
- 03:23 AM Revision 10370: lib/runscripts/table.run: map_table(): added remake action that calls reset_col_names()
- 03:21 AM Revision 10369: lib/runscripts/table.run: added reset_col_names(), which calls util.reset_col_names()
- 03:19 AM Revision 10368: bugfix: lib/runscripts/table.run: map_table(): moved $map_table to global var so it can be used by other functions
- 03:09 AM Revision 10367: bugfix: lib/runscripts/table.run: postprocess(): don't propagate $remake to remake_VegBIEN_mappings(), since this will cause map.csv to be remade, which is not related to the postprocessing.
- 03:08 AM Revision 10366: lib/runscripts/table.run: map_table(): util.set_col_names_with_metadata(): removed unnecessary cast to regclass, which is performed implicitly. this used to be needed when the polymorphic util.rename_cols() was used instead.
- 02:57 AM Revision 10365: schemas/util.sql: added trim(), which trims a table to include only original columns, as defined by a map table
- 02:53 AM Revision 10364: schemas/util.sql: added derived_cols(), which gets table_'s derived columns (all the columns not in the names table)
- 02:29 AM Revision 10363: schemas/util.sql: added eval2set()
- 02:14 AM Revision 10362: schemas/util.sql: added drop_column()
- 01:27 AM Revision 10361: inputs/REMIB/Specimen/postprocess.sql: map_nulls__*(): turned off STRICT to allow dynamic inlining, which speeds up the mk_derived_col() statements by *5x* (342799.823 ms -> 71533.252 ms (6 min -> 1 min) for latitude_sec)
07/19/2013
- 07:23 PM Revision 10360: inputs/REMIB/Specimen/postprocess.sql: runtimes: updated for vegbiendev, *before* dynamic inlining. the times are about twice as fast as on starscream, so vegbiendev is faster at whatever is the limiting speed factor (probably not CPU, based on other benchmarks).
- 07:05 PM Revision 10359: schemas/util.sql: map_nulls(): documented that due to dynamic inlining, this is just as fast as util._map() which it wraps. dynamic inlining now brings an overall *40x* speed improvement to map_nulls() (4000 ms -> 100 ms), and would likely bring a comparable improvement for other functions that are run repeatedly and call other user-defined functions.
- 06:35 PM Revision 10358: bugfix: schemas/util.sql: map_nulls(): updated to use hstore(text[], anyelement), which has replaced hstore(anyarray, anyelement)
- 06:30 PM Revision 10357: schemas/util.sql: removed hstore(anyarray, anyelement), which did not support dynamic inlining, to avoid confusion over which hstore() function to use. use new hstore(text[], anyelement) instead (with explicit cast on the keys array if needed).
- 06:23 PM Revision 10356: schemas/util.sql: added hstore(text[], anyelement), which dynamically inlines properly, unlike hstore(anyarray, anyelement). this can be selected by explicitly casting the keys array to text[], which now provides a 6x speed improvement (380 ms -> 60 ms) for map_nulls().
- 05:31 PM Revision 10355: schemas/util.sql: fix_array(): turned off STRICT to allow dynamic inlining, which speeds up util.map_nulls() by 3x (1500 ms -> 500 ms)
- 05:15 PM Revision 10354: schemas/util.sql: array_length(anyarray), array_length(anyarray, dimension integer): turned off STRICT to allow dynamic inlining, which speeds up util.map_nulls(). this requires adding a `CASE WHEN $1 IS NULL THEN NULL` statement to array_length(anyarray, dimension integer) to replace the functionality provided by STRICT.
- 04:41 PM Revision 10353: schemas/util.sql: map_nulls(): turned off STRICT to allow dynamic inlining, which causes a 2x speed improvement[1]. (see r10352 for an explanation of dynamic inlining.) note that turning off STRICT disables NULL-skipping (avoiding running a function when all its params are NULL), so it should only be used when the NULL-skipping optimization is needed less than dynamic inlining.
  [1] the profiling query
    SELECT util.map_nulls(array[v, 2, 3], v) FROM unnest(array_fill(1, array[10000])) f (v)
  has a...
- 04:23 PM Revision 10352: schemas/util.sql: inlinable IMMUTABLE functions: avoid using config params (e.g. `SET search_path TO util`) because these prevent dynamic inlining (i.e. inlining of a function call with *variable* instead of constant arguments, by substituting the arguments into the function's body). dynamic inlining can speed up function evaluation significantly, because a (slow) call to a user-defined SQL function is avoided.
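  (sketch of the same point with hypothetical functions: drop the SET clause and schema-qualify what the body references instead:)
    -- prevents inlining: CREATE FUNCTION ... LANGUAGE sql IMMUTABLE SET search_path TO util AS ...
    CREATE OR REPLACE FUNCTION util.half(x numeric)
      RETURNS numeric LANGUAGE sql IMMUTABLE
    AS $$ SELECT $1 / 2 $$;
    CREATE OR REPLACE FUNCTION util.half_plus_one(x numeric)
      RETURNS numeric LANGUAGE sql IMMUTABLE
    AS $$ SELECT util.half($1) + 1 $$; -- util.half qualified explicitly rather than relying on search_path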
- 04:15 PM Revision 10351: schemas/vegbien.my.sql: updated for new bin/repl text mode matching, which also affects non-regexps. this causes the replacement of a few more occurrences of PostgreSQL-only one-word typenames with their MySQL equivalents.
- 02:26 PM Revision 10350: inputs/REMIB/Specimen/postprocess.sql: runtimes: documented the machine the times are from
- 01:52 PM Revision 10349: inputs/REMIB/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
- 11:40 AM Revision 10348: bugfix: bin/repl: text mode: repurpose this to match SQL identifiers, for use by inputs/input.Makefile %/postprocess.sql. %/postprocess.sql is the only place currently using this mode, so this will not affect other scripts.
- 10:51 AM Revision 10347: bugfix: inputs/input.Makefile: %/postprocess.sql: need to run bin/repl in text mode (text=1) so that values to match are treated as literal strings rather than regular expressions. this difference is important for column names with spaces or special characters.
- 10:24 AM Revision 10346: bugfix: inputs/Madidi/LocationObservation/map.csv: resolved Notes, Notes 2 -> locationRemarks collision by _alt()ing them together. note that _alt() is fine because only one of these is ever populated.
- 09:54 AM Revision 10345: bugfix: schemas/util.sql: set_col_names(): need to generate error if destination column already exists (rather than suppressing it with try_create()), because this indicates a collision
- 09:30 AM Revision 10344: bugfix: inputs/Madidi/IndividualObservation/map.csv: removed derived column FieldFamilyFullName#originalFamily, which should not be in the map table because the map table can contain only columns that are initially in the table, before running postprocess.sql
- 09:23 AM Revision 10343: schemas/util.sql: map table: added unique constraint on the to column as well, because the destination names also need to be distinct in order to be a valid set of column names
- 09:14 AM Revision 10342: schemas/util.sql: map table: changed pkey to a unique constraint so pgAdmin would sort the entries in table order (matching the order they are in the staging table) instead of alphabetized by the pkey
- 08:56 AM Revision 10341: bugfix: inputs/REMIB/Specimen/map.csv: state: changed output column name to stateProvince_verbatim to match the renaming in postprocess.sql
- 08:40 AM Revision 10340: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: removed out-of-date rerun time, which applied to doing all the deletes in the same statement (however, the current rerun time is approximately the same). note that index scans are not actually used (as the previous comment incorrectly stated) because the conditions for this filter are prefix-less regexps.
- 08:32 AM Revision 10339: inputs/REMIB/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns". null-mapping filters now use wrappers around new util.map_nulls(). note that the verbatim columns input to the filters need to be renamed to avoid name collisions with their filtered columns, which must be VegCore terms for new-style import.
- 07:53 AM Revision 10338: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: also filter out non-numbers for long_sec, lat_min, lat_sec
- 07:18 AM Revision 10337: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: remove rows where long_min is not a number
- 07:15 AM Revision 10336: inputs/REMIB/Specimen/postprocess.sql: change E'' to regular '' to avoid the need to double \ (instead ' would be doubled). E'' used to be necessary in previous versions of PostgreSQL to avoid a warning about escape string syntax.
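  (the difference, in a generic snippet:)
    SELECT '3.5' ~ E'^\\d+\\.\\d+$'; -- escape string: every \ must be doubled
    SELECT '3.5' ~ '^\d+\.\d+$';     -- standard string: \ is literal; only ' needs doubling ('')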
- 07:09 AM Revision 10335: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: removed unnecessary () around `DELETE FROM :table WHERE long_deg ...`
- 07:03 AM Revision 10334: inputs/REMIB/Specimen/postprocess.sql: removed coll_year, country, long_deg indexes because the frameshift filter conditions on these columns do not use index scans (because their regexp patterns do not contain a fixed prefix). eventually, some regexp patterns may be able to be modified to use prefixes.
- 07:01 AM Revision 10333: bugfix: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: can't OR together conditions to determine rows to delete, because if any condition is NULL instead of true/false, this will NULL out the entire WHERE condition and prevent any other true conditions from causing a deletion. the best way to fix this is to use a separate DELETE statement for each condition, so that NULLs only impact that particular condition's DELETE. unlike using a modified, NULL-insensitive OR, which would prevent the use of index scans, this allows indexes to be used for conditions that support them.
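  (the shape of the per-condition form, with illustrative regexp conditions; the real patterns are in postprocess.sql:)
    DELETE FROM :table WHERE long_deg  !~ '^-?[0-9.]+$';
    DELETE FROM :table WHERE long_min  !~ '^[0-9.]+$';
    DELETE FROM :table WHERE coll_year !~ '^[0-9]{4}$';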
- 06:05 AM Revision 10332: inputs/REMIB/Specimen/postprocess.sql: removed duplicate CREATE INDEX for the acronym column
- 05:59 AM Revision 10331: bugfix: inputs/REMIB/Specimen/postprocess.sql: switched back to the input column names, since the renaming to *_verbatim is part of a later step
- 05:26 AM Revision 10330: inputs/REMIB/Specimen/create.sql: moved filtering out of frameshifted rows to postprocess.sql, where it can happen in an idempotent DELETE. this allows filters to remove additional rows to easily be added on top of the existing filters, without needing to remake Specimen (which takes a long time, because of the many stage I derived columns that get added). the logical inversion inherent in the DELETE condition has been factored through rather than wrapped in NOT (...), because removal of frameshifted rows is more accurately specified as the detection of specific patterns that indicate frameshifting rather than the validation of all fields.
- 03:13 AM Revision 10329: bugfix: schemas/util.sql: not_empty(anyarray): array_length() now refers to different functions, with different semantics, depending on whether util is in the search_path. this necessitates explicitly selecting util.array_length() and switching to its semantics (ARRAY[] -> 0 instead of NULL)
- 03:02 AM Revision 10328: schemas/util.sql: map_nulls(): support all datatypes, not just text
- 02:55 AM Revision 10327: schemas/util.sql: added hstore(keys anyarray, value anyelement) and => (anyarray, anyelement) operator to support other element types for hstore
07/18/2013
- 06:43 PM Revision 10326: inputs/REMIB/Specimen/create.sql: also remove frameshifted rows with invalid long_deg values
- 04:31 PM Revision 10325: schemas/util.sql: added map_nulls(), a common use case of _map()
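  (a hypothetical usage sketch, matching the call signature used in the profiling queries above (nulls array first, value second); the table and column names are illustrative:)
    -- treat '', '-', and 'unknown' as NULL when deriving the cleaned column
    SELECT util.map_nulls(array['', '-', 'unknown']::text[], "stateProvince_verbatim") AS "stateProvince"
    FROM specimen;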
- 04:29 PM Revision 10324: bugfix: schemas/util.sql: hstore(keys text[], value text): use new fix_array() so that an empty keys array is made 1-dimensional to match up with the array generated by array_fill()
- 04:26 PM Revision 10323: schemas/util.sql: added fix_array(), which ensures that the array will always have proper non-NULL dimensions
- 04:21 PM Revision 10322: schemas/util.sql: added empty_array(), for constructing proper empty 1-dimensional arrays whose dimensions are not NULL ( {}::text[] does not do this)
- 03:36 PM Revision 10321: bugfix: schemas/util.sql: array_length(anyarray): need to call util.array_length() instead of just array_length() (which uses pg_catalog.array_length()) so that empty arrays will be returned as 0 instead of NULL. note that for some reason, adding `SET search_path=util` to the function does not have the same effect.
- 02:48 PM Revision 10320: inputs/ACAD/Specimen/postprocess.sql, inputs/ARIZ/omoccurrences/postprocess.sql: removed unnecessary "" around keys/values. "" are required in hstore input syntax in approximately the same places as they are in XPaths (around values containing spaces or special characters).
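  (illustrative hstore literal: unquoted keys/values are fine for simple tokens; "" is needed around those with spaces or special characters:)
    SELECT 'code=>US, "full name"=>"United States"'::hstore;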
- 01:46 PM Revision 10319: inputs/ACAD/Specimen/map.csv, inputs/ARIZ/omoccurrences/map.csv: removed derived columns, which cause an error when trying to rename a table that does not yet have the derived columns added. this error will not be noticed locally if the derived columns were added before switching to new-style import, but will be noticed on vegbiendev.
- 01:17 PM Revision 10318: inputs/ARIZ/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
- 01:03 PM Revision 10317: added inputs/ARIZ/omoccurrences/postprocess.sql
- 12:32 PM Revision 10316: inputs/ARIZ/omoccurrences/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
- 12:25 PM Revision 10315: bugfix: schemas/util.sql: _map(map hstore, value anyelement): need to cast result to unknown to support types that don't have a cast directly from text
- 12:06 PM Revision 10314: schemas/util.sql: added _map(map hstore, value anyelement) to seamlessly map types other than text (by casting back and forth between text and the value type)
- 11:44 AM Revision 10313: inputs/ACAD/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
- 11:38 AM Revision 10312: inputs/input.Makefile: added %/postprocess.sql to replace input column names with the corresponding output column names when switching to new-style import (this target must be manually run, but does simplify the process of renaming the postprocess.sql input columns)
- 11:02 AM Revision 10311: planning/timeline/timeline.2013.xls: moved Individual datasource refresh under Importing to normalized VegCore instead of Switching to new-style import because it is actually related to the refactor-in-place method used to import to VegCore
- 10:38 AM Revision 10310: inputs/ACAD/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
- 09:47 AM Revision 10309: bugfix: schemas/util.sql: rename_cols(): run additional `SELECT NULL::void` query after the main for-loop query so that PostgreSQL does not try to fold away the execution of util.try_create() just because multiple rows are not returned by the function. the result set of the first query will still be discarded, but will be fully evaluated. (this has nothing to do with VOLATILE vs. IMMUTABLE; util.try_create() is already declared VOLATILE and would normally not be folded.) rename_cols() is used to rename derived columns, which are not part of the map.csv and cannot be positionally renamed.
- 08:38 AM Revision 10308: schemas/util.sql: added text[] => text operator, analogous to text => text for multiple keys (uses hstore(keys text[], value text))
- 08:29 AM Revision 10307: schemas/util.sql: added hstore(keys text[], value text), which can be used to avoid repeating the same value for each key. there are many /_map filters which use the XPath syntax for doing this, which now need to use an equivalent SQL syntax to avoid duplicating the value many times.
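  (usage sketch for the function and operator added in the entries above; the verbatim spellings are made up:)
    -- map several verbatim spellings to the same value without repeating it
    SELECT hstore(array['U.S.', 'USA', 'United States'], 'US');
    SELECT array['U.S.', 'USA', 'United States'] => 'US'; -- equivalent, via the new operator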
- 08:23 AM Revision 10306: web/links/index.htm: updated to Firefox bookmarks. added link to Brian Enquist's fractals video on PBS NOVA.
- 07:35 AM Revision 10305: schemas/util.sql: added array_fill(anyelement, integer), which doesn't require lengths for multiple dimensions
- 07:31 AM Revision 10304: schemas/util.sql: added array_length(anyarray, dimension integer) wrapper, which returns 0 instead of NULL for empty arrays
- 07:23 AM Revision 10303: schemas/util.sql: added array_length(anyarray), which does not require a second dimension argument
- 12:04 AM Revision 10302: lib/sql_io.py: put_table(): documented that PostgreSQL 9.1+ now provides a way to implement insert/on duplicate select just once for each table (instead of dynamically for each insert) using the new INSTEAD OF triggers (http://www.postgresql.org/docs/9.1/static/plpgsql-trigger.html). INSTEAD OF triggers were not used when put_table() was developed, because it was necessary to support PostgreSQL 9.0, which was installed on the Mac and not easily upgradeable. it was eventually upgraded to add PostGIS, which required a complete reinstall of the DB from the staging tables, with the associated staging table reload bugs, as well as complete removal of the old Postgres version.
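  (a minimal, hypothetical sketch of the INSTEAD OF approach mentioned above; the table/view/function names are made up and this is not the put_table() implementation:)
    CREATE TABLE taxon (id serial PRIMARY KEY, name text UNIQUE);
    CREATE VIEW taxon_v AS SELECT * FROM taxon;
    CREATE FUNCTION taxon_v_insert() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      INSERT INTO taxon (name) VALUES (NEW.name);
      RETURN NEW;
    EXCEPTION WHEN unique_violation THEN
      RETURN NEW; -- duplicate: leave the existing row in place for the caller to select
    END $$;
    CREATE TRIGGER taxon_v_insert INSTEAD OF INSERT ON taxon_v
      FOR EACH ROW EXECUTE PROCEDURE taxon_v_insert();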
07/17/2013
- 11:40 AM Revision 10301: inputs/Madidi/: switched to new-style import
- 10:17 AM Revision 10300: schemas/VegCore/VegCore.ERD.mwb: regenerated exports
- 10:16 AM Revision 10299: schemas/VegCore/VegCore.ERD.mwb: categories: fixed position of place to match where it now is in the diagram. lined up boxes so that there is a visible line between the place- and occurrence-related categories.
- 09:27 AM Revision 10298: inputs/Madidi/IndividualObservation/map.csv: translated 1:many mappings ( FieldFamilyFullName->{family,originalFamily} ) to derived columns (in postprocess.sql) to work with new-style import, which must have a 1:1 relationship between input and output columns
- 09:05 AM Revision 10297: schemas/util.sql: added reset_col_names(), the counterpart to set_col_names(). note that this alters the map table, so it will need to be repopulated after running this function.
- 09:01 AM Revision 10296: schemas/util.sql: mk_derived_col(): support using this function to overwrite an existing column (i.e. as a general-purpose function to perform in-place update with ALTER COLUMN TYPE USING)
- 07:40 AM Revision 10295: lib/sh/db.sh: psql(): display stack traces and DETAIL sections of error messages at verbosity 2+, to help debugging (previously they were always turned off). in particular, the DETAIL section of a "duplicate key value violates unique constraint" error is useful because it contains the duplicated key.
- 04:54 AM Revision 10294: inputs/.TNRS/: switched to new-style import. because this does not have data subdirs (data comes from the TNRS client), this is just a matter of adding ./run.
- 04:53 AM Revision 10293: inputs/.TNRS/Source/: switched to new-style import. this had been missed when all the Source/ subdirs were batch-switched to new-style import.
- 04:24 AM Revision 10292: inputs/*/*/map.csv: replaced /_first filter with mapping to DUPLICATE special term (VegCore.vegpath.org?DUPLICATE). this removes collisions that don't need a postprocessing formula to combine the columns.
- 04:02 AM Revision 10291: inputs/Madidi/map.csv: removed filters on columns from before the refresh (which are not in active use), so that they don't show up in a search for map.csvs with filters (indicating collisions)
- 03:15 AM Revision 10290: inputs/Madidi/IndividualObservation/map.csv: SeniorCollector: don't prepend it to the CollectorString because the CollectorString already contains it. this may be a change between the BIEN2 and refreshed Madidi data (which uses a significantly different schema).
- 02:37 AM Revision 10289: mappings/VegCore.htm: regenerated from wiki. Special terms: added instructions for adding a distinguishing suffix to each special term in the format special_term#suffix. this is needed for new-style import to make the resulting column name unique within the staging table.
- 02:35 AM Revision 10288: mappings/VegCore-VegBIEN.csv: mapped DUPLICATE to nothing so that it would not be treated as an unmapped term
- 02:33 AM Revision 10287: mappings/VegCore.htm: regenerated from wiki. Special terms: added DUPLICATE.
- 01:56 AM Revision 10286: /README.TXT: Maintenance: regenerate mappings/VegCore.csv: commit command: use single quotes ' instead of double quotes " to avoid needing to \-escape every special char (single quotes ' still need to be escaped)
- 01:51 AM Revision 10285: mappings/VegCore.htm: regenerated from wiki. moved UNUSED, PRIVATE underneath OMIT as subterms.
07/14/2013
- 06:02 AM Revision 10284: mappings/VegCore.htm: Regenerated from wiki
- 05:52 AM Revision 10283: bugfix: bin/*: spell out [:alnum:] as [a-zA-Z0-9] because Python unfortunately doesn't support character classes
- 05:18 AM Revision 10282: web/links/index.htm: updated to Firefox bookmarks. moved Linux, Mac into Unix folder. added instructions to remove old Linux kernels, which fill up the /boot partition. added instructions to force sed to use raw binary mode instead of UTF-8 when UTF-8 is set in the environment. added methods of implementing DB disk space quotas in Postgres. added comparison of my Mac's CPU (2.66 GHz Intel Core i5) with vegbiendev's (2.44 GHz AMD Phenom X4). my Mac's CPU seems to be much faster, so it might make sense to check that the Thor CPUs are faster than the Vis Lab computers' CPUs the next time it gets upgraded. (these diffs can be seen in WinMerge with Moved block detection on. see /README.TXT > WinMerge setup for details.)
- 04:43 AM Revision 10281: inputs/bien_web/observation/VegBIEN.csv: regenerated now that *_index dummy columns have been removed
- 03:26 AM Revision 10280: inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtimes. it now takes 25 min instead of 16 min to regenerate the derived cols.
- 03:07 AM Revision 10279: inputs/IRMNG/_README.TXT: added note that when refreshing this datasource, remember to regenerate the TNRS derived cols using the instructions in inputs/.TNRS/schema.sql > tnrs_populate_fields()
- 02:44 AM Revision 10278: bin/*: replaced confusing regexp constructs involving \W inside [] with the much clearer explicit character class [:alnum:] . this avoids adding or subtracting from an inverted class in order to reach a subset of the corresponding positive class, because the subset can just be named explicitly instead.
- 02:38 AM Revision 10277: bugfix: bin/repl: doesn't make sense to use other chars in a [^\W_] regexp, because they will have no effect since \w doesn't include the other chars to begin with. this is a result of confusion with the ^ and \W double negative.
- 02:14 AM Revision 10276: lib/runscripts/table.run: postprocess(): propagate the $remake flag to remake_VegBIEN_mappings using self_make, so that a remake=1 on postprocess will cause map.csv to be regenerated as it would for a remake=1 directly on remake_VegBIEN_mappings
- 02:10 AM Revision 10275: bugfix: postprocess(): moved $can_test flag from import() to this function because it is used here
- 02:08 AM Revision 10274: lib/runscripts/table.run: import(): moved postprocessing commands to separate postprocess() function that can be invoked on an already-imported staging table to avoid running the load_data() target. this is especially useful when running the postprocessing on a working copy without the unversioned data files, for datasources whose load_data() target would otherwise try to download the files because they don't already exist.
- 02:01 AM Revision 10273: lib/runscripts/table.run: postprocess(): renamed to custom_postprocess() since this runs only the datasource's custom postprocessing commands, not all the postprocessing commands including map_table, mk_derived
- 01:52 AM Revision 10272: lib/runscripts/util.run: added , function, which treats each of the command-line args as commands the way make does (instead of as args to the same command, the way runscripts do)
- 01:39 AM Revision 10271: lib/sh/util.sh: moved runscript-related commands to lib/runscripts/util.run because these only apply to runscripts
- 01:26 AM Revision 10270: bugfix: inputs/*/*/map.csv (e.g. inputs/GBIF/raw_occurrence_record_plants/map.csv): remapped author to scientificNameAuthorship rather than authors, which it had gotten incorrectly automapped to. note that the VegCore term authors has now been renamed to data_authors to avoid ambiguity, but incorrect automappings resulting from it had not yet been fixed.
- 12:54 AM Revision 10269: bugfix: inputs/GBIF/raw_occurrence_record_plants/run: updated herbaria.ih column names for staging table column renaming
- 12:33 AM Revision 10268: bugfix: inputs/GBIF/table.run: need to include lib/runscripts/mysql.table.run instead of table.run (table.run was accidentally substituted when inputs/.NCBI/table.run was copied to all new-style datasources)
07/11/2013
- 04:08 PM Revision 10265: planning/workflow/bien3_architecture.odp: added wiki page notes (wiki.vegpath.org/2013-06-20_conference_call, wiki.vegpath.org/2013-06-27_conference_call) in the slide notes
- 03:41 PM Revision 10264: planning/workflow/bien3_architecture.odp: added responses to the red-highlighted questions (from e-mails to the list) in the slide notes
- 02:54 PM Revision 10263: planning/timeline/timeline.2013.xls: fixed formatting: removed internal cell borders in spacer lines
- 02:48 PM Revision 10262: added planning/workflow/bien3_architecture.odp with changes from Skype call with Martha, which include wiki page notes (wiki.vegpath.org/2013-06-20_conference_call) about the refactor-in-place method in the Notes area
- 02:44 PM Revision 10261: planning/timeline/timeline.2013.xls: updated with changes from Skype call with Martha
- 12:53 PM Revision 10260: inputs/*/ which do not contain any explicit collisions (wiki.vegpath.org/2013-06-27_conference_call#To-do-for-Aaron > #3.2 > the following datasources ...): switched to new-style import, which adds the staging table column renaming
- 12:41 PM Revision 10259: inputs/newWorld/: switched to new-style import, which adds the staging table column renaming. these tables are used by the public schema (schemas/vegbien.sql), so the renamings are applied there as well.
- 12:26 PM Revision 10258: inputs/bien_web/bien_web.schema.sql: regenerated using bin/my2pg, to remove the *_index dummy columns so they don't create lots of OMIT#... staging table columns
- 12:09 PM Revision 10257: inputs/*/*/map.csv: added distinguishing #... suffix (e.g. UNUSED#institutionID) to the special terms OMIT, PRIVATE, UNUSED (VegCore.vegpath.org#Special-terms) to avoid creating a collision in the staging table renaming
- 11:56 AM Revision 10256: bugfix: inputs/input.Makefile: Staging tables installation: $(allInstalls): don't filter out Source table, because it is now an installed table rather than just a mapping
- 11:33 AM Revision 10255: bin/filter_out_ci, lib/maps.py: simplify(): also remove distinguishing #... suffix from terms (e.g. UNUSED#institutionID), to support mapping multiple columns to the special terms OMIT, PRIVATE, UNUSED (VegCore.vegpath.org#Special-terms), *without* creating a collision in the staging table renaming. note that this change must *not* be made to bin/canon, because this would cause suffixed terms to be autorenamed to their *un*suffixed VegCore versions.
- 05:54 AM Revision 10254: backups/Makefile: $(restore): added --verbose to display pg_restore's incremental progress
- 05:34 AM Revision 10253: bugfix: inputs/newWorld/newWorldCountries/postprocess.sql: use UPDATE statement (followed by VACUUM ANALYZE to remove dead tuples) instead of in-place update (ALTER COLUMN TYPE USING), so that the statement can be run even after the public schema has been installed and its views use the columns. (a view using the columns would normally block an ALTER COLUMN TYPE statement on a referenced column.)
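  (the two forms side by side; the column and expression are illustrative:)
    -- in-place rewrite, blocked while a view references the column:
    --   ALTER TABLE "newWorldCountries" ALTER COLUMN "isoCode" TYPE text USING upper("isoCode");
    -- view-friendly alternative: an ordinary UPDATE, then VACUUM ANALYZE to clear the dead tuples
    UPDATE "newWorldCountries" SET "isoCode" = upper("isoCode");
    VACUUM ANALYZE "newWorldCountries";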
- 03:56 AM Revision 10252: bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): when remaking, do not remake header.csv, because it should keep the original CSV columns rather than being reset to whatever the current staging table columns happen to be. to force-regenerate this, instead delete it first and then run remake_VegBIEN_mappings(). remake mode will now just regenerate map.csv from header.csv, in case map.csv's columns are incomplete or out of order.
- 03:55 AM Revision 10251: bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): when remaking, do not remake header.csv, because it should keep the original CSV columns rather than being reset to whatever the current staging table columns happen to be. to force-regenerate this, instead delete it first and then run remake_VegBIEN_mappings(). remake mode will now just regenerate map.csv from header.csv, in case map.csv's columns are incomplete or out of order.
- 03:50 AM Revision 10250: bugfix: lib/runscripts/table.run: map_table(): do not rename view columns, since their column names come from their (column-renamed) joined tables rather than from a map.csv. header.csv, map.csv for views will generally become out-of-date whenever the joined tables change, so it is better not to generate them at all.
- 03:48 AM Revision 10249: lib/runscripts/table.run: added $is_view
- 03:27 AM Revision 10248: lib/runscripts/table.run: added $postprocess_sql to store postprocess.sql path, and use it in postprocess()
- 02:20 AM Revision 10247: bugfix: lib/sh/local.sh: prevent automated tests when the public schema contains the live DB, so the user doesn't have to explicitly specify can_test= when running the import on vegbiendev
- 02:19 AM Revision 10246: bugfix: lib/runscripts/table.run: import(): allow automated tests (remake_VegBIEN_mappings) to be disabled by setting can_test= if the public schema shouldn't be modified (e.g. if it's the live DB)
- 12:55 AM Revision 10245: bugfix: inputs/*/*/postprocess.sql: made all operations idempotent, so that postprocess.sql can be run repeatedly (e.g. by new-style import)
- 12:03 AM Revision 10244: schemas/util.sql: create_if_not_exists(): also suppress "multiple primary keys are not allowed" error
07/10/2013
- 10:10 PM Revision 10243: added inputs/newWorld/iso_code_gadm/.map.csv.last_cleanup
- 10:07 PM Revision 10242: inputs/*/Source/VegBIEN.csv: regenerated for new-style import, which uses a symlink to mappings/VegCore-VegBIEN.csv instead of a custom mapping using the original column names
- 09:53 PM Revision 10241: inputs/input.Makefile: Staging tables installation: %/install: run %/map_table at end to rename the staging table columns for new-style datasources
- 09:52 PM Revision 10240: inputs/input.Makefile: Staging tables installation: added %/map_table to run the new-style import staging table renaming
- 08:37 PM Revision 10239: inputs/bien2_traits/TraitObservation/map.csv: removed no longer needed mappings of dummy columns to OMIT, which were creating an unnecessary collision of staging table column names
- 08:30 PM Revision 10238: inputs/bien2_traits/bien2_staging.schema.sql: regenerated from MySQL version so that dummy columns (which used to be generated by bin/my2pg) will be replaced with dummy CHECK constraints instead. this avoids needing to map several dummy columns all to OMIT, which was creating an unnecessary collision of staging table column names.
- 08:20 PM Revision 10237: bin/my2pg*: keep MySQL indefinite dates as text strings instead of translating them (to the first of the month or year) to fit into a PostgreSQL timestamp. this allows the application to decide how to handle these values, which otherwise have no corresponding value in PostgreSQL. this requires changing the date/time related types to text instead of leaving them as-is, so that they can store the custom MySQL strings.
- 07:36 PM Revision 10236: planning/timeline/timeline.2013.xls: Geoscrubbing: made it a subtask of Adding derived columns. moved it to July so that it can be run for Naia's new project.
- 07:00 PM Revision 10235: planning/timeline/timeline.2013.xls: reordered tasks approximately in priority order (which corresponds to the month(s) in which they are scheduled). indented subtasks under their parent tasks.
- 06:51 PM Revision 10234: planning/timeline/timeline.2013.xls: crossed out completed rows and moved them to the bottom
- 06:46 PM Revision 10233: planning/timeline/timeline.2013.xls: use a different-style checkmark because LibreOffice no longer displays the font of the previous one correctly (it may already have been displayed incorrectly on other people's computers)
- 06:14 PM Revision 10232: planning/timeline/timeline.2013.xls: Reload existing data in need of refresh: added Oct because Rick Condit is supposed to provide us with a CTFS refresh that we would be allowed to use (he wouldn't let us use the 2011-4-1 full-DB export)
- 06:11 PM Revision 10231: planning/timeline/timeline.2013.xls: continuous tasks: populated past months
- 06:09 PM Revision 10230: planning/timeline/timeline.2013.xls: added Sep, Oct months and moved tasks into them. moved continuous tasks to separate section at bottom to avoid confusion with discrete tasks.
- 05:33 PM Revision 10229: planning/timeline/timeline.2013.xls: use bullet points (•) instead of background shading to indicate future tasks. this allows cells to easily be cleared by pressing Backspace, rather than having to copy a white-background cell on top of the cell.
- 05:26 PM Revision 10228: planning/timeline/timeline.2013.xls: use 3-letter months to make room for more months
- 05:23 PM Revision 10227: planning/timeline/timeline.2013.xls: added missing tasks: switching to new-style import, importing to normalized VegCore, adding derived columns
- 05:16 PM Revision 10226: planning/timeline/timeline.2013.xls: removed alternate-row color highlighting because it makes it difficult to reorder rows or insert new rows in the middle
- 04:51 PM Revision 10225: bin/my2pg: use util.sh $top_dir instead of setting $selfDir
- 04:50 PM Revision 10224: bin/my2pg*: use the util.sh sed wrapper, which fixes the LANG=*.UTF-8 "illegal byte sequence" errors on invalid UTF-8
- 04:33 PM Revision 10223: /Makefile: mysql-Linux: also install mysql-workbench, for use in modifying the VegCore ERD. (note that it has to be modified on Linux, because the Linux and Mac versions of MySQL Workbench position the lines differently.)
- 04:10 PM Revision 10222: /README.TXT: Maintenance: to backup files not in Time Machine: removed VirtualBox VMs because they are now in Time Machine, and do not need to be backed up separately
- 04:08 PM Revision 10221: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added steps to upload just the VirtualBox VMs
- 04:02 PM Revision 10220: bugfix: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added overwrite=1 so that old snapshots, etc. are also deleted
- 04:01 PM Revision 10219: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: use better bin/sync_upload instead of put
- 03:59 PM Revision 10218: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: removed no longer needed inplace=1, because the VirtualBox VMs now all use a snapshot covering the full disk, so that the full disk is not altered (removing the need to optimize backing up a large file) and just the diff files need to be backed up each time
- 03:41 PM Revision 10217: bugfix: lib/sh/util.sh: sed: must use an alias instead of a function because a function causes a segfault in the redir() subshell when used with the make.sh make() filter (this may be a bug in bash). this involves translating `unset LANG` to `env LANG=` (`env -u` to unset a var isn't supported on Mac, but fortunately sed treats LANG="" the same as unset LANG).
- 03:06 PM Revision 10216: archived planning/goals/BIEN3_derived_data_products.docx and replaced with symlink to new BIEN_3_derived_data_products_NormalizedDB_only.docx
- 02:59 PM Revision 10215: added planning/goals/BIEN_3_derived_data_products_NormalizedDB_only.docx from Brad's e-mail
- 02:42 PM Revision 10214: bugfix: lib/sh/util.sh: sed: unset LANG to avoid "illegal byte sequence" errors on invalid UTF-8 for LANG=*.UTF-8. these occur e.g. with MySQL data that is in Latin-1.
- 02:36 PM Revision 10213: lib/sh/util.sh: sed: use function instead of alias so that env can be set up before calling sed
- 02:15 PM Revision 10212: planning/workflow/bien3_architecture.pptx: updated to Martha's revised version from 2013-7-3
- 04:13 AM Revision 10211: lib/runscripts/table.run: map_table(): run map_table repeatedly until no more renames are made: added command to do this
- 03:53 AM Revision 10210: lib/runscripts/table.run: map_table(): documented that collisions may prevent all renames from being made at once. if this is the case, map_table must be run repeatedly until no more renames are made. collisions may result if the staging table gets messed up (e.g. due to missing input columns in map.csv).
- 02:32 AM Revision 10209: inputs/*/*/map.csv for CSV tables with a row_num column: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table
- 02:27 AM Revision 10208: bugfix: inputs/*/Source/map.csv: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table. the staging table column renaming is now used by all Source tables.
- 02:18 AM Revision 10207: bugfix: populated empty inputs/IUCN/European_Red_List_Plants/header.csv
- 02:17 AM Revision 10206: inputs/CTFS/*/map.csv: added *.src.row_num from joined tables so that the map.csv input columns would match the staging table. this is needed for the staging table column renaming, which is positional rather than name-based to work with any existing column name.
- 01:50 AM Revision 10205: bugfix: inputs/input.Makefile: map.csv and derived files: use $(tables) instead of $(importTables) when making them so that the mappings of those tables are still kept up-to-date even though they are marked _no_import (and not imported into the main DB)
- 01:46 AM Revision 10204: inputs/CTFS/*/test.xml.ref: regenerated. these got out of date because even though these tables are included in import_order.txt, they are marked as _no_import, which prevents map.csvs and derived files from being kept up-to-date.
- 01:24 AM Revision 10203: bugfix: inputs/CTFS/*/VegBIEN.csv: regenerated from map.csv. they may have gotten out of date because they are marked as _no_import, even though they *are* in import_order.txt.
07/09/2013
- 05:33 PM Revision 10202: bugfix: added missing inputs/MO/Specimen/header.csv
- 05:32 PM Revision 10201: bugfix: added missing inputs/QFA/Specimen/header.csv
- 05:26 PM Revision 10200: bugfix: inputs/TEX/Specimen/header.csv: generated from staging table (was empty previously)
- 04:44 PM Revision 10199: bugfix: inputs/*/Source/map.csv: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table. the staging table column renaming is now used by all Source tables.
- 04:42 PM Revision 10198: added inputs/newWorld/iso_code_gadm/header.csv
- 04:31 PM Revision 10197: added inputs/analytical_db/table.run
- 02:59 PM Revision 10196: bugfix: inputs/VASCAN/Taxon/map.csv: added missing row_num column added by bin/csv2db
- 02:50 PM Revision 10195: lib/sql_io.py: cleanup_table(): added assertion that the table exists, so that if it doesn't, the error will occur as part of an assertion rather than as part of the util.table_nulls_mapped__get() call, which might confusingly lead users to believe that this is a bug in util.table_nulls_mapped__get() when in fact the problem is that the table is not installed
- 02:30 AM Revision 10194: fix: inputs/import.stats.xls: removed spurious diff comment on total time, which only applied to the previous import
- 02:28 AM Revision 10193: inputs/import.stats.xls: reformatted times longer than one day as a # of days instead of hours, for clarity. the days format is chosen automatically when the # of hours exceeds one day.
- 01:04 AM Revision 10192: bugfix: inputs/*/Source/: added missing ./run, which creates the new-style staging tables with the metadata fields as part of the table. this is needed now that these subdirs use installed staging tables instead of metadata-only map.csvs.
- 12:56 AM Revision 10191: bin/map: removed no-longer-used support for map.csv input column prefixes (expand out the prefixes instead). SpeciesLink used to use this to provide just one mapping for a single term with multiple DwC namespaces, but it was replaced with an explicit, ordered (rather than implicit, unordered) /_alt-ing together of the terms.
07/08/2013
07/06/2013
- 07:29 PM Revision 10189: inputs/.herbaria/: switched to new-style import, which renamed the columns to the VegCore names. this is done using the commands at wiki.vegpath.org/2013-06-27_conference_call#To-do-for-Aaron > "run the following for each datasource".
- 07:21 PM Revision 10188: lib/sql_io.py: cleanup_table(): don't run the slow ALTER TABLE statement again if the table has already been cleaned up. documented that it is idempotent (and actually was before this change as well).
- 07:19 PM Revision 10187: lib/sql_io.py: added table_nulls_mapped__set() and table_nulls_mapped__get() wrappers around the corresponding util-schema functions
- 07:18 PM Revision 10186: lib/sql_gen.py: added table2regclass_text()
- 07:07 PM Revision 10185: schemas/util.sql: added table_nulls_mapped__get(), which gets whether a table's NULL-equivalent strings have been replaced with NULL
- 07:06 PM Revision 10184: schemas/util.sql: added table_flag__get(), which gets whether a status flag is set by the presence of a table constraint
- 06:56 PM Revision 10183: schemas/util.sql: added table_nulls_mapped__set(), which sets that a table's NULL-equivalent strings have been replaced with NULL
- 06:54 PM Revision 10182: schemas/util.sql: added table_flag__set(), which stores a status flag by the presence of a table constraint (a sketch of this mechanism follows this day's entries)
- 06:52 PM Revision 10181: schemas/util.sql: create_if_not_exists(): also ignore duplicate_object exceptions, thrown when trying to add a duplicate constraint
- 06:00 PM Revision 10180: inputs/input.Makefile: %/postprocess: removed no longer used invocation of $*/import (precursor to the runscripts used in FIA)
- 05:39 PM Revision 10179: inputs/*/: added table.run for use by the table subdirs in new-style import. datasources without table subdirs do not need this.
- 05:35 PM Revision 10178: inputs/*/: added top-level Makefile which includes inputs/input.Makefile, so that make can be run directly on the datasrc dir without needing to specify `--makefile=../input.Makefile` (see input.Makefile $(selfMake))
- 05:17 PM Revision 10177: inputs/*/: added top-level Makefile which includes inputs/input.Makefile, so that make can be run directly on the datasrc dir without needing to specify `--makefile=../input.Makefile` (see input.Makefile $(selfMake))
- 05:05 PM Revision 10176: added inputs/test_taxonomic_names/Taxon/header.csv
- 04:02 PM Revision 10175: web/links/index.htm: updated to Firefox bookmarks. removed dead favicons. PostgreSQL: added bookmarks about triggers.
- 03:55 PM Revision 10174: bugfix: inputs/input.Makefile: %/VegBIEN.csv: for new-style datasources, use a symlink to mappings/VegCore-VegBIEN.csv directly instead of prefiltering VegCore-VegBIEN.csv to include only the columns in map.csv. prefiltering used to be performed as part of mapping the map.csv VegCore output terms to VegBIEN using bin/join, but is no longer needed because the staging table columns are now VegCore terms. instead, the full VegCore-VegBIEN.csv *is* needed so that derived columns added in stage I or II validations are detected by bin/map (rather than just the original source columns in map.csv).
- 03:37 PM Revision 10173: mappings/VegCore-VegBIEN.csv: cultivated, oldGrowth: use just cultivated if it's provided, rather than /_alt-ing it back with oldGrowth (which it was generated from)
- 03:30 PM Revision 10172: bugfix: mappings/VegCore-VegBIEN.csv: fixed priority of cultivated and oldGrowth so cultivated is used first if it's available
- 02:41 PM Revision 10171: bugfix: lib/runscripts/table.run: need to run remake_VegBIEN_mappings after mk_derived rather than before so the derived cols will be included in the automated test result
- 02:26 PM Revision 10170: bugfix: inputs/*/Source/: use installed staging table (with blank-line data.csv) in order to also work with new-style import. this also fixes a benign diff between the by-row and by-col test outputs, where row-based import would not import the Source/ entries because there was not at least one row in the input. note that in order to ensure that all datasources are properly run, you need to check `svn st|sort` against the datasource schema names to see if any are missing.
- 02:22 PM Revision 10169: inputs/*/logs: updated svn:ignore
- 02:22 PM Revision 10168: inputs/*/*/logs: updated svn:ignore
- 01:45 PM Revision 10167: bugfix: inputs/input.Makefile: SVN: add: don't add subdirs for datasources marked _no_import (e.g. datasources which only have an inputs/ dir to be listed in VegPath)
- 11:29 AM Revision 10166: bugfix: inputs/*/Source/data.csv for new-style datasources: need to include a blank row (plus a blank header) so that the metadata values are imported at least once instead of zero times, now that there is an installed staging table that will be iterated over. the blank row was not previously necessary, because db_xml.put_table() has a special case for metadata-only tables with no installed table, which avoids iterating over the table's rows.
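A rough sketch of the flag mechanism from Revisions 10182-10185 above: the flag is stored as the presence of an always-true CHECK constraint named after the flag, and reading it back is just a lookup in pg_constraint. The bodies below are illustrative assumptions rather than the actual schemas/util.sql definitions (parameter names and types may differ); they also lean on the duplicate_object tolerance added to create_if_not_exists() in Revision 10181.

    -- hypothetical sketch: set a flag by adding an always-true constraint named after it
    CREATE FUNCTION table_flag__set(table_ regclass, flag text) RETURNS void AS $$
    BEGIN
        EXECUTE format('ALTER TABLE %s ADD CONSTRAINT %I CHECK (true)', table_, flag);
    EXCEPTION WHEN duplicate_object THEN NULL; -- flag already set; ignore, as in Revision 10181
    END
    $$ LANGUAGE plpgsql;

    -- hypothetical sketch: the flag is set iff a constraint with that name exists on the table
    CREATE FUNCTION table_flag__get(table_ regclass, flag text) RETURNS boolean AS $$
        SELECT EXISTS (SELECT 1 FROM pg_constraint WHERE conrelid = $1 AND conname = $2);
    $$ LANGUAGE sql STABLE;

table_nulls_mapped__set()/__get() (Revisions 10183/10185) would then be thin wrappers that pass a fixed flag name (the actual constraint name used is not shown here).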
07/03/2013
- 10:48 PM Revision 10165: lib/sql_io.py: put_table() (column-based import): complexity note: clarified that INSERT RETURNING throws an error *on duplicate* instead of returning the existing row. added blank line after ¶ for readability.
- 10:44 PM Revision 10164: lib/sql_io.py: put_table() (column-based import): warning about triggers populating unique constraint-covered columns: corrected limitation to include only *the* unique constraint used to do the DISTINCT ON, since other unique constraints are not affected by column-based import. note that the primary key will normally not be the DISTINCT ON constraint, so trigger-populated natural keys are supported *unless* the input table contains duplicate rows for some generated keys.
- 10:20 PM Revision 10163: inputs/*/Source/ for new-style datasources: use an actual staging table instead of a metadata-only table, so that metadata values can be stored in the staging table instead of the map.csv (as will be required by new-style import)
- 08:21 PM Revision 10162: inputs/input.Makefile: SVN: $(svnFilesGlob): added data.csv, used to store versioned data (such as the empty data.csv used by Source/ tables which have their metadata in the map table instead)
- 07:45 PM Revision 10161: schemas/util.sql: type_qual(), type_qual_name(): added comments to distinguish these similarly-named functions, one of which gets a type qualifier and the other of which gets a qualified name (not the name of a type qualifier, which one might otherwise assume)
- 07:39 PM Revision 10160: schemas/util.sql: typeof(): support expressions that are not relative to a table (which do not have a table_ param). note that this requires removing the STRICT qualifier, so that NULL expressions will now produce an error instead of passing through as NULL.
- 07:10 PM Revision 10159: schemas/VegCore/VegCore.ERD.mwb: relationships legend: removed inheritance of base_class from record, so that the IS-A label would not confusingly appear to apply to the record connector stub instead of to the solid line between base_class and derived_class
- 06:51 PM Revision 10158: bugfix: schemas/util.sql: col_names(): need to exclude dropped columns (which remain included in the pg_attribute table until the next tuple rewrite), by filtering on `NOT attisdropped` (an illustrative catalog query follows this day's entries). lib/sql.py table_col_names() is not affected by this because it is able to access the column names from the DB driver directly, after performing `SELECT * FROM table LIMIT 0`.
- 06:38 PM Revision 10157: schemas/util.sql: set_col_names_with_metadata(): don't delete the metadata entries from the map table, because they are now added *before* the renames take place, so that the renames can simply be performed on the constant columns themselves. this does, however, require that the metadata entries are always listed *last* in the map.csv (which is currently the case).
- 05:56 PM Revision 10156: lib/runscripts/table.run: map_table(): store the map table in the datasource schema, so that it can easily be referred to when using the staging tables. this also allows it to be found more easily when debugging its contents.
- 05:26 PM Revision 10155: lib/sh/db.sh: psql(): hide the verbose CONTEXT information that is output with each NOTICE by setting the VERBOSITY psql var to terse (postgresql.1045698.n5.nabble.com/Quiet-quot-CONTEXT-quot-td1906036.html#a1906037)
- 05:15 PM Revision 10154: *{.sh,run}: use new log-() instead of log+() with a negative #
- 05:14 PM Revision 10153: lib/sh/util.sh: added log-() because it's non-obvious that you would otherwise have to invoke log+() with a negative #
- 05:00 PM Revision 10152: schemas/util.sql: reset_map_table(): drop the table and recreate it instead of just creating it if it doesn't exist, so that any change to the util.map table is propagated to persistent map tables whenever they are reloaded from the map.csv
- 05:00 PM Revision 10151: lib/runscripts/table.run: map_table(): create the map table as a persistent table in the temp schema, so that its contents can be viewed for debugging
- 04:50 PM Revision 10150: schemas/util.sql: added drop_table()
- 04:39 PM Revision 10149: schemas/util.sql: set_col_names(): don't perform rename if the name is not changing, to avoid cluttering the debug output with unnecessary queries
- 04:21 PM Revision 10148: lib/runscripts/table.run: use new util.set_col_names_with_metadata() instead of util.set_col_names() so that metadata values (beginning with : ) are automatically mapped to constant columns rather than needing to add a mk_const_col() call to postprocess.sql for each of them. there are a lot of metadata value entries, especially in the Source/ tables for each datasource, so this will save time in translating the datasources to new-style import. note that this requires disabling the map_filter_insert trigger on the map table to prevent it from filtering out the metadata entries before util.set_col_names_with_metadata() can use them.
- 03:55 PM Revision 10147: bugfix: schemas/util.sql: set_col_names_with_metadata(): need `util.` before mk_const_col(). "to", "from" need to be referenced from row_. substring() needs to start from 2 rather than 1 because PostgreSQL string indexes are 1-based.
- 03:05 PM Revision 10146: schemas/util.sql: try_create(), create_if_not_exists(): use eval() so the executed statement will be echoed for debugging
- 02:58 PM Revision 10145: schemas/util.sql: added set_col_names_with_metadata()
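The catalog detail behind Revision 10158 above: a dropped column stays in pg_attribute (with attisdropped = true) until the next tuple rewrite, so any column listing built from the catalog must filter it out. The stand-alone query below illustrates the fix; it is not the actual util.col_names() body, and 'some_schema.some_table' is a placeholder.

    -- list only a table's live columns, excluding system and dropped columns
    SELECT attname
    FROM pg_attribute
    WHERE attrelid = 'some_schema.some_table'::regclass
      AND attnum > 0          -- exclude system columns (ctid, xmin, ...)
      AND NOT attisdropped    -- exclude dropped columns awaiting a tuple rewrite
    ORDER BY attnum;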
07/02/2013
- 05:42 PM Revision 10144: bugfix: lib/sh/sync.sh: upload(): paths: don't dereference the path itself if it's a symlink; instead canonicalize just its parent dir. this allows syncing a specific file which is a symlink, rather than syncing the symlink's target.
- 05:40 PM Revision 10143: lib/sh/util.sh: added canon_dir_rel_path(), which canonicalizes just the parent dir if the path is a symlink, to leave the symlink itself untouched
- 05:08 PM Revision 10142: planning/workflow/validation/: archived BIEN2 validations documents which have been superseded by planning/goals/BIEN3_derived_data_products.docx, to avoid confusion
- 04:45 PM Revision 10141: planning/workflow/bien3_architecture.pptx: updated with clarifications made in today's conference call
- 02:31 PM Revision 10140: bugfix: bin/map: in_is_db: inline metadata value columns (used by new-style import) so that they can be compared by value in XML simplifying functions (lib/xml_func.py)
- 02:29 PM Revision 10139: lib/sql.py: added col_default_value(), col_is_constant(), which interface with corresponding util-schema functions
- 02:28 PM Revision 10138: lib/sql_gen.py: added col2col_ref() for interfacing with SQL functions that take a util.col_ref
- 12:57 PM Revision 10137: schemas/util.sql: added is_constant(col_ref), for checking if a column has been marked "constant"
- 12:54 PM Revision 10136: schemas/util.sql: added col_comment()
- 12:53 PM Revision 10135: schemas/util.sql: mk_const_col(): add column comment "constant" to mark column as inlinable (needed by some mappings to have a literal value to compare)
- 12:03 PM Revision 10134: schemas/util.sql: added col_default_value(), which evaluates the col_default_sql() expression
- 11:51 AM Revision 10133: schemas/util.sql: added eval_expr_passthru() (passes NULL SQL through)
- 11:45 AM Revision 10132: bugfix: schemas/util.sql: eval_expr(): need to pass ret_type_null to eval2val()
- 11:42 AM Revision 10131: schemas/util.sql: added eval_expr() (does not require `SELECT ` before expr)
- 11:33 AM Revision 10130: schemas/util.sql: added col_default_sql()
- 11:26 AM Revision 10129: schemas/util.sql: eval(text, anyelement): added default polymorphic type text (can't be unknown because this would cause a "could not determine polymorphic type because input has type "unknown"" error). renamed to eval2val() to avoid overloading conflicts with eval(text) when no polymorphic type param is specified. (a sketch follows this day's entries)
- 11:15 AM Revision 10128: schemas/util.sql: added value-returning eval()
- 11:02 AM Revision 10127: bugfix: lib/common.Makefile: $(asAdmin): need to use _postgres instead on Mac (OS X 10.8 Mountain Lion)
- 11:01 AM Revision 10126: bugfix: *Makefile: $(asAdmin) invocations of Postgres commands: need to set DB user to postgres so that it won't default to the system user _postgres
- 10:57 AM Revision 10125: *Makefile: removed $(psqlOpts), $(psqlAsAdmin), which are now set by lib/common.Makefile
- 10:57 AM Revision 10124: lib/common.Makefile: added $(psqlOpts), $(psqlAsAdmin)
- 10:54 AM Revision 10123: bugfix: schemas/pg_hba.Mac.conf: use new postgres ident map instead of changing user to _postgres, because the DB user is still named postgres
- 10:53 AM Revision 10122: schemas/pg_ident.Mac.conf: added postgres map mapping the _postgres system user to the postgres DB user for ident authentication
- 10:45 AM Revision 10121: /Makefile: $(postgresReload-Darwin): also install pg_ident.Mac.conf
- 10:44 AM Revision 10120: placed pg_ident.conf under version control as schemas/pg_ident.Mac.conf
- 10:29 AM Revision 10119: *Makefile: removed $(asAdmin), which is now set by lib/common.Makefile
- 10:28 AM Revision 10118: lib/common.Makefile: added $(asAdmin)
- 10:26 AM Revision 10117: bugfix: schemas/pg_hba.Mac.conf: changed postgres to _postgres for OS X 10.8 Mountain Lion
- 09:48 AM Revision 10116: schemas/util.sql: added raise_undefined_column() for use in translating other exceptions to undefined_column
- 03:50 AM Revision 10115: bin/map: map_table(): Resolve prefixes: combined db_xml.ColRef() constructor call with creation of args (as tuple) for clarity
- 03:35 AM Revision 10114: bin/map: update_in_label(): use in_schema instead of the map spreadsheet column name when available, to allow using one spreadsheet for all datasources (which would not have a datasource-specific spreadsheet column name)
- 02:59 AM Revision 10113: schemas/util.sql: added mk_source_col(), which uses the schema name instead of the map spreadsheet header to get the datasource name
- 02:44 AM Revision 10112: schemas/util.sql: added table_schema()
- 01:15 AM Revision 10111: added planning/goals/iPlant_BIEN_Proposal_Final.pdf with Mark's e-mail notes in iPlant_BIEN_Proposal_Final.pdf.notes.txt
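A minimal sketch of the value-returning eval from Revisions 10128/10129 above; the body and parameter names here are assumptions, but the signature shows the point of that change: the polymorphic parameter's default must carry a concrete type (NULL::text), because a bare NULL would have type unknown and PostgreSQL could not resolve anyelement from it.

    -- hypothetical sketch: the 2nd argument only supplies the desired return type
    CREATE FUNCTION eval2val(sql text, ret_type_null anyelement DEFAULT NULL::text)
    RETURNS anyelement AS $$
    BEGIN
        EXECUTE sql INTO ret_type_null; -- IN parameters are assignable variables in PL/pgSQL
        RETURN ret_type_null;
    END
    $$ LANGUAGE plpgsql;

    -- usage: pass a typed NULL to select a non-default return type
    SELECT eval2val($$SELECT count(*) FROM pg_class$$, NULL::bigint);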
07/01/2013
06/28/2013
- 04:54 PM Revision 10109: empty inputs/*/import_order.txt: added subdirs in the order they are used by inputs/input.Makefile, by running make on the inputs to auto-populate import_order.txt. import_order.txt is needed by the runscripts to run the right set of subdirs in the right order.
- 04:48 PM Revision 10108: added inputs/.TNRS/grants.sql, with statements to provide SELECT access to bien_read. these statements must be in grants.sql to avoid them being filtered out by pg_dump_limit. (an illustrative example follows this day's entries)
- 04:47 PM Revision 10107: inputs/input.Makefile: added support for separate grants.sql file, which may contain GRANT statements that would normally be filtered out by pg_dump_limit
- 04:44 PM Revision 10106: inputs/input.Makefile: sql/install: added $debug option to run the *.sql import verbosely, to display which statements are being run. this should only be used for SQL files that use COPY FROM to import data, to avoid echoing pages of insert statements.
- 01:53 PM Revision 10105: inputs/input.Makefile: keep $(sortFile) up-to-date: use the sort_file_updated=1 flag to indicate that import_order.txt has already been checked, so that recursive invocations of make don't need to recheck it. also use this flag instead of an explicit $(MAKECMDGOALS) list, to prevent the $(sortFile) check from being reinvoked in infinite recursion when input.Makefile is read as part of the $(sortFile) check itself.
- 01:38 PM Revision 10104: inputs/input.Makefile: keep import_order.txt up-to-date by running `make $(sortFile)` each time make is run. this ensures that new datasources always have import_order.txt populated when make is first run. eventually, $(tables) can always be set to $(allTables) so that this auto-updating can also be used to ensure that new subdirs added by the user always make it into import_order.txt (so that they will be included in the subdirs that get remade, etc.). import_order.txt is primarily for specifying the order of the subdirs, but some datasources also use it to filter *out* subdirs, so it can't yet always be updated to include the full list of subdirs. however, the filter-out usage should no longer be necessary after the switch to new-style import.
- 12:58 PM Revision 10103: inputs/input.Makefile: added $(filter_make), used to filter the output of embedded $(shell make ...) invocations
- 11:39 AM Revision 10102: inputs/input.Makefile: $(sortFile): use $(filter-out)->then instead of $(filter)->else for clarity
- 11:21 AM Revision 10101: inputs/input.Makefile: added $(sortFile) (import_order.txt) target which adds any missing tables to import_order.txt
- 11:03 AM Revision 10100: inputs/input.Makefile: added list_tables to print $(tables) for use in populating import_order.txt
- 02:50 AM Revision 10099: web/links/index.htm: updated to Firefox bookmarks. grouped version control systems into new version control folder.
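An illustrative example of what such a separate grants.sql holds (Revisions 10107/10108 above). The exact statements and the "TNRS" schema name are assumptions based on the inputs/.TNRS/ directory; the point is that these GRANTs live in their own file so pg_dump_limit does not filter them out of the schema export.

    -- hypothetical contents; the real inputs/.TNRS/grants.sql may differ
    GRANT USAGE ON SCHEMA "TNRS" TO bien_read;
    GRANT SELECT ON ALL TABLES IN SCHEMA "TNRS" TO bien_read;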
06/27/2013
- 09:54 PM Revision 10098: inputs/.NCBI/: added new-style import runscripts, which renamed the staging table columns to VegCore
- 04:48 PM Revision 10097: bugfix: lib/runscripts/datasrc_dir.run, subdir.run: need to remove leading . from dir name to get installed schema name, using new dir2schema()
- 04:47 PM Revision 10096: lib/runscripts/datasrc_dir.run, subdir.run: use new lib/sh/datasrc.sh, which contains code in common to both datasrc-related dir runscripts
- 04:46 PM Revision 10095: added lib/sh/datasrc.sh
- 03:47 PM Revision 10094: inputs/.TNRS/schema.sql: AcceptedTaxon: removed Annotations entry because the accepted name only contains name elements, not additional text (vegpath.org/cf_aff)
- 01:02 PM Revision 10093: bugfix: /README.TXT: Maintenance: syncing ~/bien to ~/Dropbox/svn: added overwrite=1 so that perms transfer from the authoritative ~/bien regardless of relative mtimes
- 12:45 PM Revision 10092: removed no longer used lib/import.sh. use lib/runscripts/table.run instead.
- 12:28 PM Revision 10091: added inputs/*/*/header.csv for CSV inputs, which are now generated by inputs/input.Makefile %/install
- 12:23 PM Revision 10090: added inputs/FIA/*/{VegBIEN.csv,test.xml.ref}, which are now generated by the mapping process for the joined-together tables (even though they are not used by the import, because only occurrence_all is imported)
- 12:20 PM Revision 10089: added inputs/GBIF/_archive/
- 12:18 PM Revision 10088: removed inputs/GBIF/Specimen/, which has been replaced by the refresh in raw_occurrence_record_plants/
- 12:17 PM Revision 10087: added inputs/GBIF/map.csv, used to regenerate inputs/GBIF/raw_occurrence_record_plants/map.csv when raw_occurrence_record_plants is resubset
- 12:12 PM Revision 10086: inputs/FIA/*/postprocess.sql: removed svn:executable attribute using `svn pdel svn:executable ...` now that these are not shell scripts
- 12:11 PM Revision 10085: removed no longer needed inputs/FIA/import. use inputs/FIA/run instead.
- 12:10 PM Revision 10084: inputs/FIA/*/import: changed to postprocess.sql for use by the runscripts
- 04:27 AM Revision 10083: added inputs/FIA/run
- 04:26 AM Revision 10082: added inputs/FIA/*/run. these do not yet use the postprocessing operations in */import.
- 04:24 AM Revision 10081: added inputs/FIA/table.run (for use by table subdirs) and helper Makefile
- 04:17 AM Revision 10080: added lib/runscripts/view.run, for use with table subdirs for views, such as inputs/FIA/occurrence_all/
- 02:14 AM Revision 10079: planning/timeline/timeline.2013.xls: added Reload analytical database checkmark for every Rebuild core database checkmark, because these are always done together as part of the import process
- 01:41 AM Revision 10078: bugfix: inputs/FIA/occurrence_all/import: don't re-prepend * to terms because this is a view, and the underlying columns have already been mapped
- 01:40 AM Revision 10077: bin/src_map: support a custom (or no) new_term_prefix. omitting new_term_prefix is useful for views whose columns have already been renamed in the underlying tables and should not have * re-prepended.
- 01:03 AM Revision 10076: planning/timeline/timeline.2013.xls: moved longer-term goals to new August column, leaving near-term goals in July
- 01:00 AM Revision 10075: planning/timeline/timeline.2013.xls: erased cells where a task was planned but not worked on, so that all shaded cells in the past have check marks to indicate completion of a portion of the task, and empty shaded cells in the future indicate work left to do
- 12:50 AM Revision 10074: planning/timeline/timeline.2013.xls: updated for current progress. renamed "Rerun species range models" to "Prepare to rerun species range models" because the range modeling itself is not part of the BIEN DB development. added a column for July with the tasks that are not yet complete.