Activity
From 06/24/2013 to 07/23/2013
07/20/2013
- 05:25 AM Revision 10379: /README.TXT: Maintenance: added instructions for what to do if http://vegbiendev.nceas.ucsb.edu/phppgadmin/ goes down (sometimes displaying a Not found error)
- 05:21 AM Revision 10378: schemas/util.sql: schema comment: added note that IMMUTABLE SQL-language functions should never be declared STRICT, because this prevents them from being inlined. inlining can create a significant speed improvement (7x+), by avoiding function calls and enabling additional constant folding.
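  Illustrative sketch (hypothetical function, not the actual util.sql code) of the non-STRICT declaration style described above; the CASE expression replaces the NULL handling that STRICT would otherwise provide:
    CREATE FUNCTION util.example_upper(value text)
      RETURNS text AS
    $$ SELECT CASE WHEN $1 IS NULL THEN NULL ELSE upper($1) END $$
      LANGUAGE sql IMMUTABLE; -- intentionally not STRICT, so the planner can inline calls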
- 05:09 AM Revision 10377: inputs/REMIB/Specimen/postprocess.sql: map_nulls() derived cols: documented total runtime (7.5 min on vegbiendev)
- 05:07 AM Revision 10376: inputs/REMIB/Specimen/postprocess.sql: map_nulls() derived cols: updated runtimes for map_nulls() inlining, which created a speed improvement of *7x* for the numeric columns and *2.5x* for the text columns (292563.362->41929.772 ms and 83640.424->35690.797 ms, respectively). note that the map_nulls__coord__*() calls could be optimized further by combining the successive map_nulls() calls into one, with the hstores merged.
- 04:37 AM Revision 10375: schemas/util.sql: map_nulls(): documented that inputs/REMIB/Specimen/postprocess.sql > country also shows that inlining is now happening properly. note that the speed improvement due to inlining is not as much, %-wise, when the values util._map() is run on are long strings instead of the short strings used in the initial profiling. this is because a greater % of the time is spent in system functions such as hstore->text, which are not affected by the inlining because they are run either way.
- 04:18 AM Revision 10374: schemas/util.sql: map_nulls(): use new nulls_map(). proper inlining (i.e. same runtime before and after change) has been verified with the following profiling query:
    SELECT util.map_nulls(array[1, 2, 3]::text[], v) FROM unnest(array_fill(1, array[100000])) f (v)
- 04:05 AM Revision 10373: schemas/util.sql: added nulls_map(), for use with _map()
- 03:39 AM Revision 10372: lib/runscripts/table.run: postprocess(): added remake action that calls trim_table()
- 03:37 AM Revision 10371: lib/runscripts/table.run: added trim_table(), which calls util.trim(regclass, regclass)
- 03:23 AM Revision 10370: lib/runscripts/table.run: map_table(): added remake action that calls reset_col_names()
- 03:21 AM Revision 10369: lib/runscripts/table.run: added reset_col_names(), which calls util.reset_col_names()
- 03:19 AM Revision 10368: bugfix: lib/runscripts/table.run: map_table(): moved $map_table to global var so it can be used by other functions
- 03:09 AM Revision 10367: bugfix: lib/runscripts/table.run: postprocess(): don't propagate $remake to remake_VegBIEN_mappings(), since this will cause map.csv to be remade, which is not related to the postprocessing.
- 03:08 AM Revision 10366: lib/runscripts/table.run: map_table(): util.set_col_names_with_metadata(): removed unnecessary cast to regclass, which is performed implicitly. this used to be needed when the polymorphic util.rename_cols() was used instead.
- 02:57 AM Revision 10365: schemas/util.sql: added trim(), which trims a table to include only original columns, as defined by a map table
- 02:53 AM Revision 10364: schemas/util.sql: added derived_cols(), which gets table_'s derived columns (all the columns not in the names table)
- 02:29 AM Revision 10363: schemas/util.sql: added eval2set()
- 02:14 AM Revision 10362: schemas/util.sql: added drop_column()
- 01:27 AM Revision 10361: inputs/REMIB/Specimen/postprocess.sql: map_nulls__*(): turned off STRICT to allow dynamic inlining, which speeds up the mk_derived_col() statements by *5x* (342799.823 ms -> 71533.252 ms (6 min -> 1 min) for latitude_sec)
07/19/2013
- 07:23 PM Revision 10360: inputs/REMIB/Specimen/postprocess.sql: runtimes: updated for vegbiendev, *before* dynamic inlining. the times are about twice as fast as on starscream, so vegbiendev is faster at whatever is the limiting speed factor (probably not CPU, based on other benchmarks).
- 07:05 PM Revision 10359: schemas/util.sql: map_nulls(): documented that due to dynamic inlining, this is just as fast as util._map() which it wraps. dynamic inlining now brings altogether a *40x* speed improvement to map_nulls() (4000 ms -> 100 ms), and would likely bring a comparable improvement for other functions that are run repeatedly and call other user-defined functions.
- 06:35 PM Revision 10358: bugfix: schemas/util.sql: map_nulls(): updated to use hstore(text[], anyelement), which has replaced hstore(anyarray, anyelement)
- 06:30 PM Revision 10357: schemas/util.sql: removed hstore(anyarray, anyelement), which did not support dynamic inlining, to avoid confusion over which hstore() function to use. use new hstore(text[], anyelement) instead (with explicit cast on the keys array if needed).
- 06:23 PM Revision 10356: schemas/util.sql: added hstore(text[], anyelement), which dynamically inlines properly, unlike hstore(anyarray, anyelement). this can be selected by explicitly casting the keys array to text[], which now provides a 6x speed improvement (380 ms -> 60 ms) for map_nulls().
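  Illustrative usage sketch of the explicit cast described above (made-up keys/value):
    SELECT util.hstore(ARRAY['lat_deg', 'long_deg']::text[], NULL::text);
    -- the ::text[] cast selects the hstore(text[], anyelement) overload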
- 05:31 PM Revision 10355: schemas/util.sql: fix_array(): turned off STRICT to allow dynamic inlining, which speeds up util.map_nulls() by 3x (1500 ms -> 500 ms)
- 05:15 PM Revision 10354: schemas/util.sql: array_length(anyarray), array_length(anyarray, dimension integer): turned off STRICT to allow dynamic inlining, which speeds up util.map_nulls(). this requires adding a `CASE WHEN $1 IS NULL THEN NULL` statement to array_length(anyarray, dimension integer) to replace the functionality provided by STRICT.
- 04:41 PM Revision 10353: schemas/util.sql: map_nulls(): turned off STRICT to allow dynamic inlining, which causes a 2x speed improvement[1]. (see r10352 for an explanation of dynamic inlining.) note that turning off STRICT disables NULL-skipping (avoiding running a function when all its params are NULL), so it should only be used when the NULL-skipping optimization is needed less than dynamic inlining.
  [1] the profiling query
    SELECT util.map_nulls(array[v, 2, 3], v) FROM unnest(array_fill(1, array[10000])) f (v)
  has a...
- 04:23 PM Revision 10352: schemas/util.sql: inlinable IMMUTABLE functions: avoid using config params (e.g. `SET search_path TO util`) because these prevent dynamic inlining (i.e. inlining of a function call with *variable* instead of constant arguments, by substituting the arguments into the function's body). dynamic inlining can speed up function evaluation significantly, because a (slow) call to a user-defined SQL function is avoided.
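  Illustrative contrast of the two declaration styles (hypothetical functions, not repository code):
    -- not inlinable: the per-call SET clause forces a real function call for every row
    CREATE FUNCTION util.add_one_slow(x integer) RETURNS integer AS
    $$ SELECT $1 + 1 $$ LANGUAGE sql IMMUTABLE SET search_path TO util;
    -- inlinable: no config params; schema-qualify referenced objects instead
    CREATE FUNCTION util.add_one_fast(x integer) RETURNS integer AS
    $$ SELECT $1 + 1 $$ LANGUAGE sql IMMUTABLE;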
- 04:15 PM Revision 10351: schemas/vegbien.my.sql: updated for new bin/repl text mode matching, which also affects non-regexps. this causes the replacement of a few more occurrences of PostgreSQL-only one-word typenames with their MySQL equivalents.
- 02:26 PM Revision 10350: inputs/REMIB/Specimen/postprocess.sql: runtimes: documented the machine the times are from
- 01:52 PM Revision 10349: inputs/REMIB/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
- 11:40 AM Revision 10348: bugfix: bin/repl: text mode: repurpose this to match SQL identifiers, for use by inputs/input.Makefile %/postprocess.sql. %/postprocess.sql is the only place currently using this mode, so this will not affect other scripts.
- 10:51 AM Revision 10347: bugfix: inputs/input.Makefile: %/postprocess.sql: need to run bin/repl in text mode (text=1) so that values to match are treated as literal strings rather than regular expressions. this difference is important for column names with spaces or special characters.
- 10:24 AM Revision 10346: bugfix: inputs/Madidi/LocationObservation/map.csv: resolved Notes, Notes 2 -> locationRemarks collision by _alt()ing them together. note that _alt() is fine because only one of these is ever populated.
- 09:54 AM Revision 10345: bugfix: schemas/util.sql: set_col_names(): need to generate error if destination column already exists (rather than suppressing it with try_create()), because this indicates a collision
- 09:30 AM Revision 10344: bugfix: inputs/Madidi/IndividualObservation/map.csv: removed derived column FieldFamilyFullName#originalFamily, which should not be in the map table because it can contain only columns that are initially in the table before running postprocess.sql
- 09:23 AM Revision 10343: schemas/util.sql: map table: added unique constraint on the to column as well, because the destination names also need to be distinct in order to be a valid set of column names
- 09:14 AM Revision 10342: schemas/util.sql: map table: changed pkey to a unique constraint so pgAdmin would sort the entries in table order (matching the order they are in the staging table) instead of alphabetized by the pkey
- 08:56 AM Revision 10341: bugfix: inputs/REMIB/Specimen/map.csv: state: changed output column name to stateProvince_verbatim to match the renaming in postprocess.sql
- 08:40 AM Revision 10340: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: removed out-of-date rerun time, which applied to doing all the deletes in the same statement (however, the current rerun time is approximately the same). note that index scans are not actually used (as the previous comment incorrectly stated) because the conditions for this filter are prefix-less regexps.
- 08:32 AM Revision 10339: inputs/REMIB/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns". null-mapping filters now use wrappers around new util.map_nulls(). note that the verbatim columns input to the filters need to be renamed to avoid name collisions with their filtered columns, which must be VegCore terms for new-style import.
- 07:53 AM Revision 10338: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: also filter out non-numbers for long_sec, lat_min, lat_sec
- 07:18 AM Revision 10337: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: remove rows where long_min is not a number
- 07:15 AM Revision 10336: inputs/REMIB/Specimen/postprocess.sql: change E'' to regular '' to avoid the need to double \ (instead ' would be doubled). E'' used to be necessary in previous versions of PostgreSQL to avoid a warning about escape string syntax.
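  Illustrative before/after of the quoting change (made-up regexp, not one of the actual filters):
    DELETE FROM :table WHERE coll_year ~ E'[^\\d]';  -- E'': backslashes must be doubled
    DELETE FROM :table WHERE coll_year ~ '[^\d]';    -- regular '': \ is literal; only ' would need doubling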
- 07:09 AM Revision 10335: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: removed unnecessary () around `DELETE FROM :table WHERE long_deg ...`
- 07:03 AM Revision 10334: inputs/REMIB/Specimen/postprocess.sql: removed coll_year, country, long_deg indexes because the frameshift filter conditions on these columns do not use index scans (because their regexp patterns do not contain a fixed prefix). eventually, some regexp patterns may be able to be modified to use prefixes.
- 07:01 AM Revision 10333: bugfix: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: can't OR together conditions to determine rows to delete, because if any condition is NULL instead of true/false, this will NULL out the entire WHERE condition and prevent any other true conditions from causing a deletion. the best way to fix this is to use a separate DELETE statement for each condition, so that NULLs only impact that particular condition's DELETE. unlike using a modified, NULL-insensitive OR, which would prevent the use of index scans, this allows indexes to be used for conditions that support them.
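  Sketch of the one-DELETE-per-condition pattern described above (illustrative conditions, not the actual filters):
    DELETE FROM :table WHERE coll_year !~ '^[0-9]*$';
    DELETE FROM :table WHERE long_min  !~ '^[0-9]*$';
    DELETE FROM :table WHERE long_sec  !~ '^[0-9]*$';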
- 06:05 AM Revision 10332: inputs/REMIB/Specimen/postprocess.sql: removed duplicate CREATE INDEX for the acronym column
- 05:59 AM Revision 10331: bugfix: inputs/REMIB/Specimen/postprocess.sql: switched back to the input column names, since the renaming to *_verbatim is part of a later step
- 05:26 AM Revision 10330: inputs/REMIB/Specimen/create.sql: moved filtering out of frameshifted rows to postprocess.sql, where it can happen in an idempotent DELETE. this allows filters to remove additional rows to easily be added on top of the existing filters, without needing to remake Specimen (which takes a long time, because of the many stage I derived columns that get added). the logical inversion inherent in the DELETE condition has been factored through rather than wrapped in NOT (...), because removal of frameshifted rows is more accurately specified as the detection of specific patterns that indicate frameshifting rather than the validation of all fields.
- 03:13 AM Revision 10329: bugfix: schemas/util.sql: not_empty(anyarray): array_length() now refers to different functions, with different semantics, depending on whether util is in the search_path. this necessitates explicitly selecting util.array_length() and switching to its semantics (ARRAY[] -> 0 instead of NULL)
- 03:02 AM Revision 10328: schemas/util.sql: map_nulls(): support all datatypes, not just text
- 02:55 AM Revision 10327: schemas/util.sql: added hstore(keys anyarray, value anyelement) and => (anyarray, anyelement) operator to support other element types for hstore
07/18/2013
- 06:43 PM Revision 10326: inputs/REMIB/Specimen/create.sql: also remove frameshifted rows with invalid long_deg values
- 04:31 PM Revision 10325: schemas/util.sql: added map_nulls(), a common use case of _map()
- 04:29 PM Revision 10324: bugfix: schemas/util.sql: hstore(keys text[], value text): use new fix_array() so that an empty keys array is made 1-dimensional to match up with the array generated by array_fill()
- 04:26 PM Revision 10323: schemas/util.sql: added fix_array(), which ensures that the array will always have proper non-NULL dimensions
- 04:21 PM Revision 10322: schemas/util.sql: added empty_array(), for constructing proper empty 1-dimensional arrays whose dimensions are not NULL ( {}::text[] does not do this)
- 03:36 PM Revision 10321: bugfix: schemas/util.sql: array_length(anyarray): need to call util.array_length() instead of just array_length() (which uses pg_catalog.array_length()) so that empty arrays will be returned as 0 instead of NULL. note that for some reason, adding `SET search_path=util` to the function does not have the same effect.
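  Illustrative comparison of the two array_length() semantics mentioned above:
    SELECT array_length('{}'::text[], 1);    -- pg_catalog: NULL, because {} has no dimensions
    SELECT util.array_length('{}'::text[]);  -- util wrapper (per the entries above): 0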
- 02:48 PM Revision 10320: inputs/ACAD/Specimen/postprocess.sql, inputs/ARIZ/omoccurrences/postprocess.sql: removed unnecessary "" around keys/values. "" are required in hstore input syntax in approximately the same places as they are in XPaths (around values containing spaces or special characters).
- 01:46 PM Revision 10319: inputs/ACAD/Specimen/map.csv, inputs/ARIZ/omoccurrences/map.csv: removed derived columns, which cause an error when trying to rename a table that does not yet have the derived columns added. this error will not be noticed locally if the derived columns were added before switching to new-style import, but will be noticed on vegbiendev.
- 01:17 PM Revision 10318: inputs/ARIZ/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
- 01:03 PM Revision 10317: added inputs/ARIZ/omoccurrences/postprocess.sql
- 12:32 PM Revision 10316: inputs/ARIZ/omoccurrences/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
- 12:25 PM Revision 10315: bugfix: schemas/util.sql: _map(map hstore, value anyelement): need to cast result to unknown to support types that don't have a cast directly from text
- 12:06 PM Revision 10314: schemas/util.sql: added _map(map hstore, value anyelement) to seamlessly map types other than text (by casting back and forth between text and the value type)
- 11:44 AM Revision 10313: inputs/ACAD/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
- 11:38 AM Revision 10312: inputs/input.Makefile: added %/postprocess.sql to replace input column names with the corresponding output column names when switching to new-style import (this target must be manually run, but does simplify the process of renaming the postprocess.sql input columns)
- 11:02 AM Revision 10311: planning/timeline/timeline.2013.xls: moved Individual datasource refresh under Importing to normalized VegCore instead of Switching to new-style import because it is actually related to the refactor-in-place method used to import to VegCore
- 10:38 AM Revision 10310: inputs/ACAD/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
- 09:47 AM Revision 10309: bugfix: schemas/util.sql: rename_cols(): run additional `SELECT NULL::void` query after the main for-loop query so that PostgreSQL does not try to fold away the execution of util.try_create() just because multiple rows are not returned by the function. the result set of the first query will still be discarded, but will be fully evaluated. (this has nothing to do with VOLATILE vs. IMMUTABLE; util.try_create() is already declared VOLATILE and would normally not be folded.) rename_cols() is used to rename derived columns, which are not part of the map.csv and cannot be positionally renamed.
- 08:38 AM Revision 10308: schemas/util.sql: added text[] => text operator, analogous to text => text for multiple keys (uses hstore(keys text[], value text))
- 08:29 AM Revision 10307: schemas/util.sql: added hstore(keys text[], value text), which can be used to avoid repeating the same value for each key. there are many /_map filters which use the XPath syntax for doing this, which now need to use an equivalent SQL syntax to avoid duplicating the value many times.
- 08:23 AM Revision 10306: web/links/index.htm: updated to Firefox bookmarks. added link to Brian Enquist's fractals video on PBS NOVA.
- 07:35 AM Revision 10305: schemas/util.sql: added array_fill(anyelement, integer), which doesn't require lengths for multiple dimensions
- 07:31 AM Revision 10304: schemas/util.sql: added array_length(anyarray, dimension integer) wrapper, which returns 0 instead of NULL for empty arrays
- 07:23 AM Revision 10303: schemas/util.sql: added array_length(anyarray), which does not require a second dimension argument
- 12:04 AM Revision 10302: lib/sql_io.py: put_table(): documented that PostgreSQL 9.1+ now provides a way to implement insert/on duplicate select just once for each table (instead of dynamically for each insert) using the new INSTEAD OF triggers (http://www.postgresql.org/docs/9.1/static/plpgsql-trigger.html). INSTEAD OF triggers were not used when put_table() was developed, because it was necessary to support PostgreSQL 9.0, which was installed on the Mac and not easily upgradeable. it was eventually upgraded to add PostGIS, which required a complete reinstall of the DB from the staging tables, with the associated staging table reload bugs, as well as complete removal of the old Postgres version.
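  Minimal, hypothetical sketch of the INSTEAD OF trigger approach mentioned above (made-up table/view names; the actual insert/on-duplicate-select logic would be more involved):
    CREATE VIEW taxon_insertable AS SELECT * FROM taxon;
    CREATE FUNCTION taxon_insertable_insert() RETURNS trigger AS $$
    BEGIN
      INSERT INTO taxon VALUES (NEW.*);
      RETURN NEW;
    EXCEPTION WHEN unique_violation THEN
      RETURN NEW; -- duplicate: the existing row would be selected and returned here instead
    END $$ LANGUAGE plpgsql;
    CREATE TRIGGER taxon_insertable_insert INSTEAD OF INSERT ON taxon_insertable
      FOR EACH ROW EXECUTE PROCEDURE taxon_insertable_insert();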
07/17/2013
- 11:40 AM Revision 10301: inputs/Madidi/: switched to new-style import
- 10:17 AM Revision 10300: schemas/VegCore/VegCore.ERD.mwb: regenerated exports
- 10:16 AM Revision 10299: schemas/VegCore/VegCore.ERD.mwb: categories: fixed position of place to match where it now is in the diagram. lined up boxes so that there is a visible line between the place- and occurrence-related categories.
- 09:27 AM Revision 10298: inputs/Madidi/IndividualObservation/map.csv: translated 1:many mappings ( FieldFamilyFullName->{family,originalFamily} ) to derived columns (in postprocess.sql) to work with new-style import, which must have a 1:1 relationship between input and output columns
- 09:05 AM Revision 10297: schemas/util.sql: added reset_col_names(), the counterpart to set_col_names(). note that this alters the map table, so it will need to be repopulated after running this function.
- 09:01 AM Revision 10296: schemas/util.sql: mk_derived_col(): support using this function to overwrite an existing column (i.e. as a general-purpose function to perform in-place update with ALTER COLUMN TYPE USING)
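  Illustrative example of the kind of in-place update referred to above (hypothetical table/column):
    ALTER TABLE specimen_staging
      ALTER COLUMN elevation_m TYPE double precision
      USING nullif(elevation_m, '')::double precision;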
- 07:40 AM Revision 10295: lib/sh/db.sh: psql(): display stack traces and DETAIL sections of error messages at verbosity 2+, to help debugging (previously they were always turned off). in particular, the DETAIL section of a "duplicate key value violates unique constraint" error is useful because it contains the duplicated key.
- 04:54 AM Revision 10294: inputs/.TNRS/: switched to new-style import. because this does not have data subdirs (data comes from the TNRS client), this is just a matter of adding ./run.
- 04:53 AM Revision 10293: inputs/.TNRS/Source/: switched to new-style import. this had been missed when all the Source/ subdirs were batch-switched to new-style import.
- 04:24 AM Revision 10292: inputs/*/*/map.csv: replaced /_first filter with mapping to DUPLICATE special term (VegCore.vegpath.org?DUPLICATE). this removes collisions that don't need a postprocessing formula to combine the columns.
- 04:02 AM Revision 10291: inputs/Madidi/map.csv: removed filters on columns from before the refresh (which are not in active use), so that they don't show up in a search for map.csvs with filters (indicating collisions)
- 03:15 AM Revision 10290: inputs/Madidi/IndividualObservation/map.csv: SeniorCollector: don't prepend it to the CollectorString because the CollectorString already contains it. this may be a change between the BIEN2 and refreshed Madidi data (which uses a significantly different schema).
- 02:37 AM Revision 10289: mappings/VegCore.htm: regenerated from wiki. Special terms: added instructions for adding a distinguishing suffix to each special term in the format special_term#suffix. this is needed for new-style import to make the resulting column name unique within the staging table.
- 02:35 AM Revision 10288: mappings/VegCore-VegBIEN.csv: mapped DUPLICATE to nothing so that it would not be treated as an unmapped term
- 02:33 AM Revision 10287: mappings/VegCore.htm: regenerated from wiki. Special terms: added DUPLICATE.
- 01:56 AM Revision 10286: /README.TXT: Maintenance: regenerate mappings/VegCore.csv: commit command: use single quotes ' instead of double quotes " to avoid needing to \-escape every special char (single quotes ' still need to be escaped)
- 01:51 AM Revision 10285: mappings/VegCore.htm: regenerated from wiki. moved UNUSED, PRIVATE underneath OMIT as subterms.
07/14/2013
- 06:02 AM Revision 10284: mappings/VegCore.htm: Regenerated from wiki
- 05:52 AM Revision 10283: bugfix: bin/*: spell out [:alnum:] as [a-zA-Z0-9] because Python unfortunately doesn't support character classes
- 05:18 AM Revision 10282: web/links/index.htm: updated to Firefox bookmarks. moved Linux, Mac into Unix folder. added instructions to remove old Linux kernels, which fill up the /boot partition. added instructions to force sed to use raw binary mode instead of UTF-8 when UTF-8 is set in the environment. added methods of implementing DB disk space quotas in Postgres. added a comparison of my Mac's CPU (2.66 GHz Intel Core i5) with vegbiendev's (2.44 GHz AMD Phenom X4). my Mac's seems to be much faster, so it might make sense to check that the Thor CPUs are faster than the Vis Lab computers' CPUs the next time it gets upgraded. (these diffs can be seen in WinMerge with Moved block detection on. see /README.TXT > WinMerge setup for details.)
- 04:43 AM Revision 10281: inputs/bien_web/observation/VegBIEN.csv: regenerated now that *_index dummy columns have been removed
- 03:26 AM Revision 10280: inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtimes. it now takes 25 min instead of 16 min to regenerate the derived cols.
- 03:07 AM Revision 10279: inputs/IRMNG/_README.TXT: added note that when refreshing this datasource, remember to regenerate the TNRS derived cols using the instructions in inputs/.TNRS/schema.sql > tnrs_populate_fields()
- 02:44 AM Revision 10278: bin/*: replaced confusing regexp constructs involving \W inside [] with the much clearer explicit character class [:alnum:] . this avoids adding or subtracting from an inverted class in order to reach a subset of the corresponding positive class, because the subset can just be named explicitly instead.
- 02:38 AM Revision 10277: bugfix: bin/repl: doesn't make sense to use other chars in a [^\W_] regexp, because they will have no effect since \w doesn't include the other chars to begin with. this is a result of confusion with the ^ and \W double negative.
- 02:14 AM Revision 10276: lib/runscripts/table.run: postprocess(): propagate the $remake flag to remake_VegBIEN_mappings using self_make, so that a remake=1 on postprocess will cause map.csv to be regenerated as it would for a remake=1 directly on remake_VegBIEN_mappings
- 02:10 AM Revision 10275: bugfix: postprocess(): moved $can_test flag from import() to this function because it is used here
- 02:08 AM Revision 10274: lib/runscripts/table.run: import(): moved postprocessing commands to separate postprocess() function that can be invoked on an already-imported staging table to avoid running the load_data() target. this is especially useful when running the postprocessing on a working copy without the unversioned data files, for datasources whose load_data() target would otherwise try to download the files because they don't already exist.
- 02:01 AM Revision 10273: lib/runscripts/table.run: postprocess(): renamed to custom_postprocess() since this runs only the datasource's custom postprocessing commands, not all the postprocessing commands including map_table, mk_derived
- 01:52 AM Revision 10272: lib/runscripts/util.run: added , function, which treats each of the command-line args as commands the way make does (instead of as args to the same command, the way runscripts do)
- 01:39 AM Revision 10271: lib/sh/util.sh: moved runscript-related commands to lib/runscripts/util.run because these only apply to runscripts
- 01:26 AM Revision 10270: bugfix: inputs/*/*/map.csv (e.g. inputs/GBIF/raw_occurrence_record_plants/map.csv): remapped author to scientificNameAuthorship rather than authors, which it had gotten incorrectly automapped to. note that the VegCore term authors has now been renamed to data_authors to avoid ambiguity, but incorrect automappings resulting from it had not yet been fixed.
- 12:54 AM Revision 10269: bugfix: inputs/GBIF/raw_occurrence_record_plants/run: updated herbaria.ih column names for staging table column renaming
- 12:33 AM Revision 10268: bugfix: inputs/GBIF/table.run: need to include lib/runscripts/mysql.table.run instead of table.run (table.run was accidentally substituted when inputs/.NCBI/table.run was copied to all new-style datasources)
07/13/2013
07/12/2013
07/11/2013
- 04:08 PM Revision 10265: planning/workflow/bien3_architecture.odp: added wiki page notes (wiki.vegpath.org/2013-06-20_conference_call, wiki.vegpath.org/2013-06-27_conference_call) in the slide notes
- 03:41 PM Revision 10264: planning/workflow/bien3_architecture.odp: added responses to the red-highlighted questions (from e-mails to the list) in the slide notes
- 02:54 PM Revision 10263: planning/timeline/timeline.2013.xls: fixed formatting: removed internal cell borders in spacer lines
- 02:48 PM Revision 10262: added planning/workflow/bien3_architecture.odp with changes from Skype call with Martha, which include wiki page notes (wiki.vegpath.org/2013-06-20_conference_call) about the refactor-in-place method in the Notes area
- 02:44 PM Revision 10261: planning/timeline/timeline.2013.xls: updated with changes from Skype call with Martha
- 12:53 PM Revision 10260: inputs/*/ which do not contain any explicit collisions (wiki.vegpath.org/2013-06-27_conference_call#To-do-for-Aaron > #3.2 > the following datasources ...): switched to new-style import, which adds the staging table column renaming
- 12:41 PM Revision 10259: inputs/newWorld/: switched to new-style import, which adds the staging table column renaming. these tables are used by the public schema (schemas/vegbien.sql), so the renamings are applied there as well.
- 12:26 PM Revision 10258: inputs/bien_web/bien_web.schema.sql: regenerated using bin/my2pg, to remove the *_index dummy columns so they don't create lots of OMIT#... staging table columns
- 12:09 PM Revision 10257: inputs/*/*/map.csv: added distinguishing #... suffix (e.g. UNUSED#institutionID) to the special terms OMIT, PRIVATE, UNUSED (VegCore.vegpath.org#Special-terms) to avoid creating a collision in the staging table renaming
- 11:56 AM Revision 10256: bugfix: inputs/input.Makefile: Staging tables installation: $(allInstalls): don't filter out Source table, because it is now an installed table rather than just a mapping
- 11:33 AM Revision 10255: bin/filter_out_ci, lib/maps.py: simplify(): also remove distinguishing #... suffix from terms (e.g. UNUSED#institutionID), to support mapping multiple columns to the special terms OMIT, PRIVATE, UNUSED (VegCore.vegpath.org#Special-terms), *without* creating a collision in the staging table renaming. note that this change must *not* be made to bin/canon, because this would cause suffixed terms to be autorenamed to their *un*suffixed VegCore versions.
- 05:54 AM Revision 10254: backups/Makefile: $(restore): added --verbose to display pg_restore's incremental progress
- 05:34 AM Revision 10253: bugfix: inputs/newWorld/newWorldCountries/postprocess.sql: use UPDATE statement (followed by VACUUM ANALYZE to remove dead tuples) instead of in-place update (ALTER COLUMN TYPE USING), so that the statement can be run even after the public schema has been installed and its views use the columns. (a view using the columns would normally block an ALTER COLUMN TYPE statement on a referenced column.)
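  Illustrative contrast of the two approaches (hypothetical table/column):
    -- blocked when a view references the column:
    -- ALTER TABLE countries ALTER COLUMN iso_code TYPE text USING upper(iso_code);
    -- view-safe alternative when the column type is unchanged, plus cleanup of the dead tuples:
    UPDATE countries SET iso_code = upper(iso_code);
    VACUUM ANALYZE countries;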
- 03:56 AM Revision 10252: bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): when remaking, do not remake header.csv, because it should keep the original CSV columns rather than being reset to whatever the current staging table columns happen to be. to force-regenerate this, instead delete it first and then run remake_VegBIEN_mappings(). remake mode will now just regenerate map.csv from header.csv, in case map.csv's columns are incomplete or out of order.
- 03:55 AM Revision 10251: bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): when remaking, do not remake header.csv, because it should keep the original CSV columns rather than being reset to whatever the current staging table columns happen to be. to force-regenerate this, instead delete it first and then run remake_VegBIEN_mappings(). remake mode will now just regenerate map.csv from header.csv, in case map.csv's columns are incomplete or out of order.
- 03:50 AM Revision 10250: bugfix: lib/runscripts/table.run: map_table(): do not rename view columns, since their column names come from their (column-renamed) joined tables rather than from a map.csv. header.csv, map.csv for views will generally become out-of-date whenever the joined tables change, so it is better not to generate them at all.
- 03:48 AM Revision 10249: lib/runscripts/table.run: added $is_view
- 03:27 AM Revision 10248: lib/runscripts/table.run: added $postprocess_sql to store postprocess.sql path, and use it in postprocess()
- 02:20 AM Revision 10247: bugfix: lib/sh/local.sh: prevent automated tests when the public schema contains the live DB, so the user doesn't have to explicitly specify can_test= when running the import on vegbiendev
- 02:19 AM Revision 10246: bugfix: lib/runscripts/table.run: import(): allow automated tests (remake_VegBIEN_mappings) to be disabled by setting can_test= if the public schema shouldn't be modified (e.g. if it's the live DB)
- 12:55 AM Revision 10245: bugfix: inputs/*/*/postprocess.sql: made all operations idempotent, so that postprocess.sql can be run repeatedly (e.g. by new-style import)
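  Illustrative sketch of idempotent postprocessing statements (made-up names, not the actual postprocess.sql contents):
    DELETE FROM :table WHERE lat_deg !~ '^-?[0-9]*$'; -- safe to re-run: already-deleted rows simply don't match
    DROP INDEX IF EXISTS specimen_acronym_idx;
    CREATE INDEX specimen_acronym_idx ON :table (acronym);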
- 12:03 AM Revision 10244: schemas/util.sql: create_if_not_exists(): also suppress "multiple primary keys are not allowed" error
07/10/2013
- 10:10 PM Revision 10243: added inputs/newWorld/iso_code_gadm/.map.csv.last_cleanup
- 10:07 PM Revision 10242: inputs/*/Source/VegBIEN.csv: regenerated for new-style import, which uses a symlink to mappings/VegCore-VegBIEN.csv instead of a custom mapping using the original column names
- 09:53 PM Revision 10241: inputs/input.Makefile: Staging tables installation: %/install: run %/map_table at end to rename the staging table columns for new-style datasources
- 09:52 PM Revision 10240: inputs/input.Makefile: Staging tables installation: added %/map_table to run the new-style import staging table renaming
- 08:37 PM Revision 10239: inputs/bien2_traits/TraitObservation/map.csv: removed no longer needed mappings of dummy columns to OMIT, which were creating an unnecessary collision of staging table column names
- 08:30 PM Revision 10238: inputs/bien2_traits/bien2_staging.schema.sql: regenerated from MySQL version so that dummy columns (which used to be generated by bin/my2pg) will be replaced with dummy CHECK constraints instead. this avoids needing to map several dummy columns all to OMIT, which was creating an unnecessary collision of staging table column names.
- 08:20 PM Revision 10237: bin/my2pg*: keep MySQL indefinite dates as text strings instead of translating them (to the first of the month or year) to fit into a PostgreSQL timestamp. this allows the application to decide how to handle these values, which otherwise have no corresponding value in PostgreSQL. this requires changing the date/time related types to text instead of leaving them as-is, so that they can store the custom MySQL strings.
- 07:36 PM Revision 10236: planning/timeline/timeline.2013.xls: Geoscrubbing: made it a subtask of Adding derived columns. moved it to July so that it can be run for Naia's new project.
- 07:00 PM Revision 10235: planning/timeline/timeline.2013.xls: reordered tasks approximately in priority order (which corresponds to the month(s) in which they are scheduled). indented subtasks under their parent tasks.
- 06:51 PM Revision 10234: planning/timeline/timeline.2013.xls: crossed out completed rows and moved them to the bottom
- 06:46 PM Revision 10233: planning/timeline/timeline.2013.xls: use different-style checkmark because LibreOffice doesn't display the font of the previous one correctly anymore (it may already have been displayed incorrectly on other people's computers)
- 06:14 PM Revision 10232: planning/timeline/timeline.2013.xls: Reload existing data in need of refresh: added Oct because Rick Condit is supposed to provide us with a CTFS refresh that we would be allowed to use (he wouldn't let us use the 2011-4-1 full-DB export)
- 06:11 PM Revision 10231: planning/timeline/timeline.2013.xls: continuous tasks: populated past months
- 06:09 PM Revision 10230: planning/timeline/timeline.2013.xls: added Sep, Oct months and moved tasks into them. moved continuous tasks to separate section at bottom to avoid confusion with discrete tasks.
- 05:33 PM Revision 10229: planning/timeline/timeline.2013.xls: use bullet points (•) instead of background shading to indicate future tasks. this allows cells to easily be cleared by pressing Backspace, rather than having to copy a white-background cell on top of the cell.
- 05:26 PM Revision 10228: planning/timeline/timeline.2013.xls: use 3-letter months to make room for more months
- 05:23 PM Revision 10227: planning/timeline/timeline.2013.xls: added missing tasks: switching to new-style import, importing to normalized VegCore, adding derived columns
- 05:16 PM Revision 10226: planning/timeline/timeline.2013.xls: removed alternate-row color highlighting because it makes it difficult to reorder rows or insert new rows in the middle
- 04:51 PM Revision 10225: bin/my2pg: use util.sh $top_dir instead of setting $selfDir
- 04:50 PM Revision 10224: bin/my2pg*: use the util.sh sed wrapper, which fixes the LANG=*.UTF-8 "illegal byte sequence" errors on invalid UTF-8
- 04:33 PM Revision 10223: /Makefile: mysql-Linux: also install mysql-workbench, for use in modifying the VegCore ERD. (note that it has to be modified on Linux, because the Linux and Mac versions of MySQL Workbench position the lines differently.)
- 04:10 PM Revision 10222: /README.TXT: Maintenance: to backup files not in Time Machine: removed VirtualBox VMs because they are now in Time Machine, and do not need to be backed up separately
- 04:08 PM Revision 10221: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added steps to upload just the VirtualBox VMs
- 04:02 PM Revision 10220: bugfix: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added overwrite=1 so that old snapshots, etc. are also deleted
- 04:01 PM Revision 10219: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: use better bin/sync_upload instead of put
- 03:59 PM Revision 10218: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: removed no longer needed inplace=1, because the VirtualBox VMs now all use a snapshot covering the full disk, so that the full disk is not altered (removing the need to optimize backing up a large file) and just the diff files need to be backed up each time
- 03:41 PM Revision 10217: bugfix: lib/sh/util.sh: sed: must use an alias instead of a function because a function causes a segfault in the redir() subshell when used with the make.sh make() filter (may be a bug in bash?). this involves translating `unset LANG` to `env LANG=` (`env -u` to unset a var isn't supported on Mac, but fortunately sed treats LANG="" the same as unset LANG).
- 03:06 PM Revision 10216: archived planning/goals/BIEN3_derived_data_products.docx and replaced with symlink to new BIEN_3_derived_data_products_NormalizedDB_only.docx
- 02:59 PM Revision 10215: added planning/goals/BIEN_3_derived_data_products_NormalizedDB_only.docx from Brad's e-mail
- 02:42 PM Revision 10214: bugfix: lib/sh/util.sh: sed: unset LANG to avoid "illegal byte sequence" errors on invalid UTF-8 for LANG=*.UTF-8. these occur e.g. with MySQL data that is in Latin-1.
- 02:36 PM Revision 10213: lib/sh/util.sh: sed: use function instead of alias so that env can be set up before calling sed
- 02:15 PM Revision 10212: planning/workflow/bien3_architecture.pptx: updated to Martha's revised version from 2013-7-3
- 04:13 AM Revision 10211: lib/runscripts/table.run: map_table(): run map_table repeatedly until no more renames are made: added command to do this
- 03:53 AM Revision 10210: lib/runscripts/table.run: map_table(): documented that collisions may prevent all renames from being made at once. if this is the case, map_table must be run repeatedly until no more renames are made. collisions may result if the staging table gets messed up (e.g. due to missing input columns in map.csv).
- 02:32 AM Revision 10209: inputs/*/*/map.csv for CSV tables with a row_num column: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table
- 02:27 AM Revision 10208: bugfix: inputs/*/Source/map.csv: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table. the staging table column renaming is now used by all Source tables.
- 02:18 AM Revision 10207: bugfix: populated empty inputs/IUCN/European_Red_List_Plants/header.csv
- 02:17 AM Revision 10206: inputs/CTFS/*/map.csv: added *.src.row_num from joined tables so that the map.csv input columns would match the staging table. this is needed for the staging table column renaming, which is positional rather than name-based to work with any existing column name.
- 01:50 AM Revision 10205: bugfix: inputs/input.Makefile: map.csv and derived files: use $(tables) instead of $(importTables) when making them so that the mappings of those tables are still kept up-to-date even though they are marked _no_import (and not imported into the main DB)
- 01:46 AM Revision 10204: inputs/CTFS/*/test.xml.ref: regenerated. these got out of date because even though these tables are included in import_order.txt, they are marked as _no_import, which prevents map.csvs and derived files from being kept up-to-date.
- 01:24 AM Revision 10203: bugfix: inputs/CTFS/*/VegBIEN.csv: regenerated from map.csv. they may have gotten out of date because they are marked as _no_import, even though they *are* in import_order.txt.
07/09/2013
- 05:33 PM Revision 10202: bugfix: added missing inputs/MO/Specimen/header.csv
- 05:32 PM Revision 10201: bugfix: added missing inputs/QFA/Specimen/header.csv
- 05:26 PM Revision 10200: bugfix: inputs/TEX/Specimen/header.csv: generated from staging table (was empty previously)
- 04:44 PM Revision 10199: bugfix: inputs/*/Source/map.csv: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table. the staging table column renaming is now used by all Source tables.
- 04:42 PM Revision 10198: added inputs/newWorld/iso_code_gadm/header.csv
- 04:31 PM Revision 10197: added inputs/analytical_db/table.run
- 02:59 PM Revision 10196: bugfix: inputs/VASCAN/Taxon/map.csv: added missing row_num column added by bin/csv2db
- 02:50 PM Revision 10195: lib/sql_io.py: cleanup_table(): added assertion that the table exists, so that if it doesn't, the error will occur as part of an assertion rather than as part of the util.table_nulls_mapped__get() call, which might confusingly lead users to believe that this is a bug in util.table_nulls_mapped__get() when in fact the problem is that the table is not installed
- 02:30 AM Revision 10194: fix: inputs/import.stats.xls: removed spurious diff comment on total time, which only applied to the previous import
- 02:28 AM Revision 10193: inputs/import.stats.xls: reformatted times longer than one day as a # of days instead of hours, for clarity. the days format is chosen automatically when the # hours exceeds one day.
- 01:04 AM Revision 10192: bugfix: inputs/*/Source/: added missing ./run, which creates the new-style staging tables with the metadata fields as part of the table. this is needed now that these subdirs use installed staging tables instead of metadata-only map.csvs.
- 12:56 AM Revision 10191: bin/map: removed no longer used support for map.csv input column prefixes (expand out the prefixes instead). this used to be used by SpeciesLink to use just one mapping for a single term with multiple DwC namespaces, but was replaced with an explicit, ordered rather than implicit, unordered /_alt-ing together of the terms.
07/08/2013
07/06/2013
- 07:29 PM Revision 10189: inputs/.herbaria/: switched to new-style import, which renamed the columns to the VegCore names. this is done using the commands at wiki.vegpath.org/2013-06-27_conference_call#To-do-for-Aaron > "run the following for each datasource".
- 07:21 PM Revision 10188: lib/sql_io.py: cleanup_table(): don't run the slow ALTER TABLE statement again if the table has already been cleaned up. documented that it is idempotent (and actually was before this change as well).
- 07:19 PM Revision 10187: lib/sql_io.py: added table_nulls_mapped__set()/__get() wrappers around the corresponding util schema functions
- 07:18 PM Revision 10186: lib/sql_gen.py: added table2regclass_text()
- 07:07 PM Revision 10185: schemas/util.sql: added table_nulls_mapped__get(), which gets whether a table's NULL-equivalent strings have been replaced with NULL
- 07:06 PM Revision 10184: schemas/util.sql: added table_flag__get(), which gets whether a status flag is set by the presence of a table constraint
- 06:56 PM Revision 10183: schemas/util.sql: added table_nulls_mapped__set(), which sets that a table's NULL-equivalent strings have been replaced with NULL
- 06:54 PM Revision 10182: schemas/util.sql: added table_flag__set(), which stores a status flag by the presence of a table constraint
- 06:52 PM Revision 10181: schemas/util.sql: create_if_not_exists(): also ignore duplicate_object exceptions, thrown when trying to add a duplicate constraint
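  Illustrative sketch of storing a status flag as a table constraint, per the table_flag__set()/__get() entries above (hypothetical names; the actual util.sql implementation may differ):
    ALTER TABLE specimen_staging ADD CONSTRAINT "flag: nulls_mapped" CHECK (true); -- set the flag
    SELECT EXISTS (SELECT 1 FROM pg_constraint
      WHERE conrelid = 'specimen_staging'::regclass
        AND conname  = 'flag: nulls_mapped');                                      -- get the flag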
- 06:00 PM Revision 10180: inputs/input.Makefile: %/postprocess: removed no longer used invocation of $*/import (precursor to the runscripts used in FIA)
- 05:39 PM Revision 10179: inputs/*/: added table.run for use by the table subdirs in new-style import. datasources without table subdirs do not need this.
- 05:35 PM Revision 10178: inputs/*/: added top-level Makefile which includes inputs/input.Makefile, so that make can be run directly on the datasrc dir without needing to specify `--makefile=../input.Makefile` (see input.Makefile $(selfMake))
- 05:17 PM Revision 10177: inputs/*/: added top-level Makefile which includes inputs/input.Makefile, so that make can be run directly on the datasrc dir without needing to specify `--makefile=../input.Makefile` (see input.Makefile $(selfMake))
- 05:05 PM Revision 10176: added inputs/test_taxonomic_names/Taxon/header.csv
- 04:02 PM Revision 10175: web/links/index.htm: updated to Firefox bookmarks. removed dead favicons. PostgreSQL: added bookmarks about triggers.
- 03:55 PM Revision 10174: bugfix: inputs/input.Makefile: %/VegBIEN.csv: for new-style datasources, use a symlink to mappings/VegCore-VegBIEN.csv directly instead of prefiltering VegCore-VegBIEN.csv to include only the columns in map.csv. prefiltering used to be performed as part of mapping the map.csv VegCore output terms to VegBIEN using bin/join, but is no longer needed because the staging table columns are now VegCore terms. instead, the full VegCore-VegBIEN.csv *is* needed so that derived columns added in stage I or II validations are detected by bin/map (rather than just the original source columns in map.csv).
- 03:37 PM Revision 10173: mappings/VegCore-VegBIEN.csv: cultivated, oldGrowth: use just cultivated if it's provided, rather than /_alt-ing it back with oldGrowth (which it was generated from)
- 03:30 PM Revision 10172: bugfix: mappings/VegCore-VegBIEN.csv: fixed priority of cultivated and oldGrowth so cultivated is used first if it's available
- 02:41 PM Revision 10171: bugfix: lib/runscripts/table.run: need to run remake_VegBIEN_mappings after mk_derived rather than before so the derived cols will be included in the automated test result
- 02:26 PM Revision 10170: bugfix: inputs/*/Source/: use installed staging table (with blank-line data.csv) in order to also work with new-style import. this also fixes a benign diff between the by-row and by-col test outputs, where row-based import would not import the Source/ entries because there was not at least one row in the input. note that in order to ensure that all datasources are properly run, you need to check `svn st|sort` against the datasource schema names to see if any are missing.
- 02:22 PM Revision 10169: inputs/*/logs: updated svn:ignore
- 02:22 PM Revision 10168: inputs/*/*/logs: updated svn:ignore
- 01:45 PM Revision 10167: bugfix: inputs/input.Makefile: SVN: add: don't add subdirs for datasources marked _no_import (e.g. datasources which only have an inputs/ dir to be listed in VegPath)
- 11:29 AM Revision 10166: bugfix: inputs/*/Source/data.csv for new-style datasources: need to include a blank row (plus a blank header) so that the metadata values are imported at least once instead of zero times, now that there is an installed staging table that will be iterated over. the blank row did not used to be necessary, because db_xml.put_table() has a special case for metadata-only tables with no installed table, which avoids iterating over the table's rows.
07/03/2013
- 10:48 PM Revision 10165: lib/sql_io.py: put_table() (column-based import): complexity note: clarified that INSERT RETURNING throws an error *on duplicate* instead of returning the existing row. added blank line after ¶ for readability.
- 10:44 PM Revision 10164: lib/sql_io.py: put_table() (column-based import): warning about triggers populating unique constraint-covered columns: corrected limitation to include only *the* unique constraint used to do the DISTINCT ON, since other unique constraints are not affected by column-based import. note that the primary key will normally not be the DISTINCT ON constraint, so trigger-populated natural keys are supported *unless* the input table contains duplicate rows for some generated keys.
- 10:20 PM Revision 10163: inputs/*/Source/ for new-style datasources: use an actual staging table instead of a metadata-only table, so that metadata values can be stored in the staging table instead of the map.csv (as will be required by new-style import)
- 08:21 PM Revision 10162: inputs/input.Makefile: SVN: $(svnFilesGlob): added data.csv, used to store versioned data (such as the empty data.csv used by Source/ tables which have their metadata in the map table instead)
- 07:45 PM Revision 10161: schemas/util.sql: type_qual(), type_qual_name(): added comments to distinguish these similarly-named functions, one of which gets a type qualifier and the other of which gets a qualified name (not the name of a type qualifier, which one might otherwise assume)
- 07:39 PM Revision 10160: schemas/util.sql: typeof(): support expressions that are not relative to a table (which do not have a table_ param). note that this requires removing the STRICT qualifier, so that NULL expressions will now produce an error instead of passing through as NULL.
- 07:10 PM Revision 10159: schemas/VegCore/VegCore.ERD.mwb: relationships legend: removed inheritance of base_class from record, so that the IS-A label would not confusingly appear to apply to the record connector stub instead of to the solid line between base_class and derived_class
- 06:51 PM Revision 10158: bugfix: schemas/util.sql: col_names(): need to exclude dropped columns (which remain included in the pg_attribute table until the next tuple rewrite), by filtering on `NOT attisdropped`. lib/sql.py table_col_names() is not affected by this because it is able to access the column names from the DB driver directly, after performing `SELECT * FROM table LIMIT 0`.
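  Sketch of the kind of catalog query involved (the actual util.col_names() may differ):
    SELECT attname FROM pg_attribute
    WHERE attrelid = 'specimen_staging'::regclass
      AND attnum > 0 AND NOT attisdropped
    ORDER BY attnum;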
- 06:38 PM Revision 10157: schemas/util.sql: set_col_names_with_metadata(): don't delete the metadata entries from the map table, because they are now added *before* the renames take place, so that the renames can simply be performed on the constant columns themselves. this does, however, require that the metadata entries are always listed *last* in the map.csv (which is currently the case).
- 05:56 PM Revision 10156: lib/runscripts/table.run: map_table(): store the map table in the datasource schema, so that it can easily be referred to when using the staging tables. this also allows it to be found more easily when debugging its contents.
- 05:26 PM Revision 10155: lib/sh/db.sh: psql(): hide the verbose CONTEXT information that is output with each NOTICE by setting the VERBOSITY psql var to terse (postgresql.1045698.n5.nabble.com/Quiet-quot-CONTEXT-quot-td1906036.html#a1906037)
- 05:15 PM Revision 10154: *{.sh,run}: use new log-() instead of log+() with a negative #
- 05:14 PM Revision 10153: lib/sh/util.sh: added log-() because it's non-obvious that you would otherwise have to invoke log+() with a negative #
- 05:00 PM Revision 10152: schemas/util.sql: reset_map_table(): drop the table and recreate it instead of just creating it if it doesn't exist, so that any change to the util.map table is propagated to persistent map tables whenever they are reloaded from the map.csv
- 05:00 PM Revision 10151: lib/runscripts/table.run: map_table(): create the map table as a persistent table in the temp schema, so that its contents can be viewed for debugging
- 04:50 PM Revision 10150: schemas/util.sql: added drop_table()
- 04:39 PM Revision 10149: schemas/util.sql: set_col_names(): don't perform rename if the name is not changing, to avoid cluttering the debug output with unnecessary queries
- 04:21 PM Revision 10148: lib/runscripts/table.run: use new util.set_col_names_with_metadata() instead of util.set_col_names() so that metadata values (beginning with : ) are automatically mapped to constant columns rather than needing to add a mk_const_col() call to postprocess.sql for each of them. there are a lot of metadata value entries, especially in the Source/ tables for each datasource, so this will save time in translating the datasources to new-style import. note that this requires disabling the map_filter_insert trigger on the map table to prevent it from filtering out the metadata entries before util.set_col_names_with_metadata() can use them.
- 03:55 PM Revision 10147: bugfix: schemas/util.sql: set_col_names_with_metadata(): need `util.` before mk_const_col(). "to", "from" need to be referenced from row_. substring() needs to start from 2 rather than 1 because PostgreSQL string indexes are 1-based.
- 03:05 PM Revision 10146: schemas/util.sql: try_create(), create_if_not_exists(): use eval() so the executed statement will be echoed for debugging
- 02:58 PM Revision 10145: schemas/util.sql: added set_col_names_with_metadata()
07/02/2013
- 05:42 PM Revision 10144: bugfix: lib/sh/sync.sh: upload(): paths: don't dereference the path itself if it's a symlink; instead canonicalize just its parent dir. this allows syncing a specific file which is a symlink, rather than syncing the symlink's target.
- 05:40 PM Revision 10143: lib/sh/util.sh: added canon_dir_rel_path(), which canonicalizes just the parent dir if the path is a symlink, to leave the symlink itself untouched
- 05:08 PM Revision 10142: planning/workflow/validation/: archived BIEN2 validations documents which have been superseded by planning/goals/BIEN3_derived_data_products.docx, to avoid confusion
- 04:45 PM Revision 10141: planning/workflow/bien3_architecture.pptx: updated with clarifications made in today's conference call
- 02:31 PM Revision 10140: bugfix: bin/map: in_is_db: inline metadata value columns (used by new-style import) so that they can be compared by value in XML simplifying functions (lib/xml_func.py)
- 02:29 PM Revision 10139: lib/sql.py: added col_default_value(), col_is_constant(), which interface with corresponding util-schema functions
- 02:28 PM Revision 10138: lib/sql_gen.py: added col2col_ref() for interfacing with SQL functions that take a util.col_ref
- 12:57 PM Revision 10137: schemas/util.sql: added is_constant(col_ref), for checking if a column has been marked "constant"
- 12:54 PM Revision 10136: schemas/util.sql: added col_comment()
- 12:53 PM Revision 10135: schemas/util.sql: mk_const_col(): add column comment "constant" to mark column as inlinable (needed by some mappings to have a literal value to compare)
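  Illustrative sketch of the "constant" marker described above (hypothetical column; the actual mk_const_col()/is_constant() definitions are not shown here):
    COMMENT ON COLUMN specimen_staging."institutionCode" IS 'constant';
    SELECT col_description('specimen_staging'::regclass, 2) = 'constant'; -- check the marker (2 = this column's hypothetical attnum)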
- 12:03 PM Revision 10134: schemas/util.sql: added col_default_value(), which evaluates the col_default_sql() expression
- 11:51 AM Revision 10133: schemas/util.sql: added eval_expr_passthru() (passes NULL SQL through)
- 11:45 AM Revision 10132: bugfix: schemas/util.sql: eval_expr(): need to pass ret_type_null to eval2val()
- 11:42 AM Revision 10131: schemas/util.sql: added eval_expr() (does not require `SELECT ` before expr)
- 11:33 AM Revision 10130: schemas/util.sql: added col_default_sql()
- 11:26 AM Revision 10129: schemas/util.sql: eval(text, anyelement): added default polymorphic type text (can't be unknown because this would cause a "could not determine polymorphic type because input has type "unknown"" error). renamed to eval2val() to avoid overloading conflicts with eval(text) when no polymorphic type param is specified.
- 11:15 AM Revision 10128: schemas/util.sql: added value-returning eval()
- 11:02 AM Revision 10127: bugfix: lib/common.Makefile: $(asAdmin): need to use _postgres instead on Mac for OS X 10.8 Mountain Lion
- 11:01 AM Revision 10126: bugfix: *Makefile: $(asAdmin) invocations of Postgres commands: need to set DB user to postgres so that it won't default to the system user _postgres
- 10:57 AM Revision 10125: *Makefile: removed $(psqlOpts), $(psqlAsAdmin), which are now set by lib/common.Makefile
- 10:57 AM Revision 10124: lib/common.Makefile: added $(psqlOpts), $(psqlAsAdmin)
- 10:54 AM Revision 10123: bugfix: schemas/pg_hba.Mac.conf: use new postgres ident map instead of changing user to _postgres, because the DB user is still named postgres
- 10:53 AM Revision 10122: schemas/pg_ident.Mac.conf: added postgres map mapping the _postgres system user to the postgres DB user for ident authentication
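  Illustrative sketch of the ident map entry described above (pg_ident.conf columns: map name, system username, DB username):
    # MAPNAME   SYSTEM-USERNAME   PG-USERNAME
    postgres    _postgres         postgres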
- 10:45 AM Revision 10121: /Makefile: $(postgresReload-Darwin): also install pg_ident.Mac.conf
- 10:44 AM Revision 10120: placed pg_ident.conf under version control as schemas/pg_ident.Mac.conf
- 10:29 AM Revision 10119: *Makefile: removed $(asAdmin), which is now set by lib/common.Makefile
- 10:28 AM Revision 10118: lib/common.Makefile: added $(asAdmin)
- 10:26 AM Revision 10117: bugfix: schemas/pg_hba.Mac.conf: changed postgres to _postgres for OS X 10.8 Mountain Lion
- 09:48 AM Revision 10116: schemas/util.sql: added raise_undefined_column() for use in translating other exceptions to undefined_column
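- for illustration, a minimal sketch of such an exception-translating helper (hypothetical code; the actual util.sql function may instead take a util.col_ref):
    CREATE FUNCTION raise_undefined_column(col text) RETURNS void AS $$
    BEGIN
        RAISE undefined_column USING MESSAGE = 'column "'||col||'" does not exist';
    END
    $$ LANGUAGE plpgsql;

    -- typical use: catch a broader exception in a caller and re-raise it as undefined_column
    -- EXCEPTION WHEN others THEN PERFORM raise_undefined_column(col_name);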
- 03:50 AM Revision 10115: bin/map: map_table(): Resolve prefixes: combined db_xml.ColRef() constructor call with creation of args (as tuple) for clarity
- 03:35 AM Revision 10114: bin/map: update_in_label(): use in_schema instead of the map spreadsheet column name when available, to allow using one spreadsheet for all datasources (which would not have a datasource-specific spreadsheet column name)
- 02:59 AM Revision 10113: schemas/util.sql: added mk_source_col(), which uses the schema name instead of the map spreadsheet header to get the datasource name
- 02:44 AM Revision 10112: schemas/util.sql: added table_schema()
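- for illustration, a minimal sketch of table_schema() and how mk_source_col() could use it (hypothetical code; the actual util.sql definitions may differ):
    CREATE FUNCTION table_schema(table_ regclass) RETURNS text AS $$
        SELECT nspname::text
        FROM pg_class
        JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
        WHERE pg_class.oid = table_
    $$ LANGUAGE sql STABLE;

    -- mk_source_col() can then use table_schema(staging_table) as the constant value for the
    -- datasource column, rather than parsing the datasource name out of a map spreadsheet header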
- 01:15 AM Revision 10111: added planning/goals/iPlant_BIEN_Proposal_Final.pdf with Mark's e-mail notes in iPlant_BIEN_Proposal_Final.pdf.notes.txt
07/01/2013
06/28/2013
- 04:54 PM Revision 10109: empty inputs/*/import_order.txt: added subdirs in the order they are used by inputs/input.Makefile, by running make on the inputs to auto-populate import_order.txt. import_order.txt is needed by the runscripts to run the right set of subdirs in the right order.
- 04:48 PM Revision 10108: added inputs/.TNRS/grants.sql, with statements to provide SELECT access to bien_read. these statements must be in grants.sql to avoid them being filtered out by pg_dump_limit.
- 04:47 PM Revision 10107: inputs/input.Makefile: added support for separate grants.sql file, which may contain GRANT statements that would normally be filtered out by pg_dump_limit
- 04:44 PM Revision 10106: inputs/input.Makefile: sql/install: added $debug option to run the *.sql import verbosely, to display which statements are being run. this should only be used for SQL files that use COPY FROM to import data, to avoid echoing pages of insert statements.
- 01:53 PM Revision 10105: inputs/input.Makefile: keep $(sortFile) up-to-date: use sort_file_updated=1 flag to indicate that import_order.txt has already been checked, so that recursive invocations of make don't need to recheck it. also use this flag instead of an explicit $(MAKECMDGOALS) list to prevent the $(sortFile) check from being infinite-recursively reinvoked when input.Makefile is read as part of the $(sortFile) check itself.
- 01:38 PM Revision 10104: inputs/input.Makefile: keep import_order.txt up-to-date by running `make $(sortFile)` each time make is run. this ensures that new datasources always have import_order.txt populated when make is first run. eventually, $(tables) can be always set to $(allTables) so that this auto-updating can also be used to ensure that new subdirs added by the user always make it into import_order.txt (so that they will be included in the subdirs that get remade, etc.). import_order.txt is primarily for specifying the order of the subdirs, but some datasources also use it to filter *out* subdirs, so it can't yet be always updated to include the full list of subdirs. however, the filter-out usage should no longer be necessary after the switch to new-style import.
- 12:58 PM Revision 10103: inputs/input.Makefile: added $(filter_make), used to filter the output of embedded $(shell make ...) invocations
- 11:39 AM Revision 10102: inputs/input.Makefile: $(sortFile): use $(filter-out)->then instead of $(filter)->else for clarity
- 11:21 AM Revision 10101: inputs/input.Makefile: added $(sortFile) (import_order.txt) target which adds any missing tables to import_order.txt
- 11:03 AM Revision 10100: inputs/input.Makefile: added list_tables to print $(tables) for use in populating import_order.txt
- 02:50 AM Revision 10099: web/links/index.htm: updated to Firefox bookmarks. grouped version control systems into new version control folder.
06/27/2013
- 09:54 PM Revision 10098: inputs/.NCBI/: added new-style import runscripts, which renamed the staging table columns to VegCore
- 04:48 PM Revision 10097: bugfix: lib/runscripts/datasrc_dir.run, subdir.run: need to remove leading . from dir name to get installed schema name, using new dir2schema()
- 04:47 PM Revision 10096: lib/runscripts/datasrc_dir.run, subdir.run: use new lib/sh/datasrc.sh, which contains code in common to both datasrc-related dir runscripts
- 04:46 PM Revision 10095: added lib/sh/datasrc.sh
- 03:47 PM Revision 10094: inputs/.TNRS/schema.sql: AcceptedTaxon: removed Annotations entry because the accepted name only contains name elements, not additional text (vegpath.org/cf_aff)
- 01:02 PM Revision 10093: bugfix: /README.TXT: Maintenance: syncing ~/bien to ~/Dropbox/svn: added overwrite=1 so that perms transfer from the authoritative ~/bien regardless of relative mtimes
- 12:45 PM Revision 10092: removed no longer used lib/import.sh. use lib/runscripts/table.run instead.
- 12:28 PM Revision 10091: added inputs/*/*/header.csv for CSV inputs, which are now generated by inputs/input.Makefile %/install
- 12:23 PM Revision 10090: added inputs/FIA/*/{VegBIEN.csv,test.xml.ref}, which are now generated by the mapping process for the joined-together tables (even though they are not used by the import, because only occurrence_all is imported)
- 12:20 PM Revision 10089: added inputs/GBIF/_archive/
- 12:18 PM Revision 10088: removed inputs/GBIF/Specimen/, which has been replaced by the refresh in raw_occurrence_record_plants/
- 12:17 PM Revision 10087: added inputs/GBIF/map.csv, used to regenerate inputs/GBIF/raw_occurrence_record_plants/map.csv when raw_occurrence_record_plants is resubset
- 12:12 PM Revision 10086: inputs/FIA/*/postprocess.sql: removed svn:executable attribute using `svn pdel svn:executable ...` now that these are not shell scripts
- 12:11 PM Revision 10085: removed no longer needed inputs/FIA/import. use inputs/FIA/run instead.
- 12:10 PM Revision 10084: inputs/FIA/*/import: changed to postprocess.sql for use by the runscripts
- 04:27 AM Revision 10083: added inputs/FIA/run
- 04:26 AM Revision 10082: added inputs/FIA/*/run. these do not yet use the postprocessing operations in */import.
- 04:24 AM Revision 10081: added inputs/FIA/table.run (for use by table subdirs) and helper Makefile
- 04:17 AM Revision 10080: added lib/runscripts/view.run, for use with table subdirs for views, such as inputs/FIA/occurrence_all/
- 02:14 AM Revision 10079: planning/timeline/timeline.2013.xls: added Reload analytical database checkmark for every Rebuild core database checkmark, because these are always done together as part of the import process
- 01:41 AM Revision 10078: bugfix: inputs/FIA/occurrence_all/import: don't re-prepend * to terms because this is a view, and the underlying columns have already been mapped
- 01:40 AM Revision 10077: bin/src_map: support custom (or no) new_term_prefix. no new_term_prefix is useful for views whose columns have already been renamed in the underlying tables and should not have * re-prepended.
- 01:03 AM Revision 10076: planning/timeline/timeline.2013.xls: moved longer-term goals to new August column, leaving near-term goals in July
- 01:00 AM Revision 10075: planning/timeline/timeline.2013.xls: erased cells where a task was planned but not worked on, so that all shaded cells in the past have check marks to indicate completion of a portion of the task, and empty shaded cells in the future indicate work left to do
- 12:50 AM Revision 10074: planning/timeline/timeline.2013.xls: updated for current progress. renamed "Rerun species range models" to "Prepare to rerun species range models" because the range modeling itself is not part of the BIEN DB development. added a column for July with the tasks that are not yet complete.
06/26/2013
- 06:57 PM Revision 10073: bugfix: inputs/FIA/REF_SPECIES/import: PLANT_SYMBOL_TYPE: prepended * since it's a datasource column, and needs to match up with *PLANT_SYMBOL_TYPE in other tables for joins
- 06:57 PM Revision 10072: bugfix: inputs/FIA/REF_SPECIES/import: PLANT_SYMBOL_TYPE: prepended * since it's a datasource column, and needs to match up with *PLANT_SYMBOL_TYPE in other tables for joins
- 06:23 PM Revision 10071: schemas/util.sql: try_create(): also ignore wrong_object_type exceptions thrown when trying to alter a view's columns
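- for illustration, a minimal sketch of the exception handling described above (hypothetical code; the actual util.try_create() may catch a different set of exceptions):
    CREATE FUNCTION try_create(sql text) RETURNS void AS $$
    BEGIN
        EXECUTE sql;
    EXCEPTION
        WHEN duplicate_table OR duplicate_column OR duplicate_object THEN NULL;
        WHEN wrong_object_type THEN NULL; -- e.g. ALTERing a view's columns
    END
    $$ LANGUAGE plpgsql;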
- 03:36 PM Revision 10070: added inputs/FIA/_src/run, which runs ./download
- 03:00 PM Revision 10069: lib/sh/make.sh: make(): run sys_cmd_path at a higher log_level since the make() steps should not be displayed by default
- 02:58 PM Revision 10068: /README.TXT: to synchronize vegbiendev, jupiter, and your local machine: added step to update mtimes/perms on ~/Dropbox/svn/ so that copying files back to ~/bien does not overwrite the permissions from what is on vegbiendev
- 02:44 PM Revision 10067: inputs/: don't upload test*.xml to jupiter on vegbiendev, because these files are also generated by the full database import but should only be backed up from one source machine, starscream (the Mac)
- 02:26 PM Revision 10066: bin/make: moved $make_filter_active test to lib/sh/make.sh make() so that it's also used when make() is run directly (e.g. in a runscript) rather than via the bin/make wrapper in the PATH
- 02:22 PM Revision 10065: bugfix: lib/sh/make.sh: make(): need to match absolute `make` paths such as /usr/bin/make
- 02:19 PM Revision 10064: lib/sh/util.sh: added self_name alias and use it in self/self_sys
- 02:18 PM Revision 10063: lib/sh/util.sh: added sys_cmd_path() and use it in cmd2sys
- 01:05 PM Revision 10062: bugfix: bin/make: use separate $make_filter_active flag instead of $is_outermost for avoiding duplicate output filtering, so that an outer runscript, which sets $is_outermost but does not activate the make filter, will not prevent the make filter from being activated when make is invoked
- 01:00 PM Revision 10061: bugfix: bin/make: need to use sys_cmd instead of command so that the system make command is invoked instead of the wrapper (which would cause infinite mutual recursion for the ~/bien working copy, although not for the ~/Dropbox/svn working copy because nonrecursive=1 was able to remove the single recursion)
- 12:19 PM Revision 10060: bin/make: use .rel to do relative includes
- 12:19 PM Revision 10059: bugfix: lib/sh/util.sh: .rel(): first use realpath() on BASH_SOURCE[1] in case it's a symlink (as it is for bin/make)
- 12:00 PM Revision 10058: inputs/FIA/_src/Makefile: Extraction: $(zips): use $(allZips) containing a zip for each state so that states that have not yet been downloaded and extracted (or had an empty dir created for them) will be downloaded. previously, the extract target only expanded existing zips but did not download new zips unless no zips had yet been downloaded. (this had been necessary because some states do not have a download, and the download of them would be continuously retried every time the Makefile was run.)
- 11:51 AM Revision 10057: bugfix: inputs/FIA/_src/Makefile: `%: %.zip`: if unzip fails because the download does not exist, create an empty dir for the state instead of aborting make
- 11:33 AM Revision 10056: inputs/FIA/_src/Makefile: use curl instead of wget because that is also available on Mac
- 11:32 AM Revision 10055: bugfix: lib/sh/web.sh: curl(): use --fail so that curl returns a nonzero exit status on error (e.g. file not found) instead of appearing to exit successfully but outputting an error HTML document instead of the file
- 11:05 AM Revision 10054: inputs/FIA/SUBPLOT/map.csv, import: prepended * to all FIA terms to clearly distinguish them from the VegCore terms. this is the standard convention for all datasources, to indicate which terms have not yet been mapped, but was not yet implemented at the beginning of new-style import (the FIA refresh was the first new-style datasource).
- the following replacements were performed to make this change:
- in all map.csv: replace regexp (?<=,)(?=[A-Z_]{2,}) wi...
- 08:59 AM Revision 10053: inputs/FIA/import_order.txt: added remaining src tables, whose runscripts will be invoked in the order listed by lib/runscripts/datasrc_dir.run
- 08:58 AM Revision 10052: added inputs/FIA/*/_no_import to src tables that are joined together in occurrence_all and should not also be imported separately once they are in import_order.txt
- 07:55 AM Revision 10051: inputs/GBIF/run: inherit from lib/runscripts/datasrc_dir.run, which uses import_order.txt to forward calls to the subdirs
- 07:54 AM Revision 10050: added blank runscripts inputs/GBIF/Source/run, Specimen/run because they are in import_order.txt (used by lib/runscripts/datasrc_dir.run)
- 12:34 AM Revision 10049: bugfix: bin/make: do not alter the PATH passed to the invoked make command, since this is a general-purpose wrapper and is not linked to a specific working copy (it could be used to wrap any make invocation, not just for commands in the svn dir). this uses lib/sh/local.sh's new PATH_add= flag.
- 12:30 AM Revision 10048: lib/sh/local.sh: added PATH_add= flag to allow turning off the addition of $bin_dir_abs to the PATH. this is useful for wrapper scripts that should not alter the PATH passed to their invoked command.
- 12:28 AM Revision 10047: bugfix: lib/sh/make.sh: make(): invoke only the system make command instead of any wrapper for it in the PATH (by using self_sys instead of self), to prevent infinite recursion. single recursion is resolved by nonrecursive=1, but there are cases where mutual recursion occurs due to the presence of two, different bin/makes in the PATH (e.g. if you have two working copies with bin/make, and one is symlinked in your ~/bin/ folder), and these cases can only be resolved by clearing out the PATH completely (since the bin/makes do not know of each other's existence, in order to remove their parent dirs from the PATH).
- 12:23 AM Revision 10046: lib/sh/util.sh: self_sys alias: use new sys_cmd() instead of `command -p` so that only the command path resolution is performed with a limited PATH, and the invoked command itself inherits the full PATH
- 12:22 AM Revision 10045: lib/sh/util.sh: added sys_cmd(), which runs a system command and allows running a system command of the same name as the script
- 12:20 AM Revision 10044: lib/sh/util.sh: added echo_builtin()
06/25/2013
- 06:37 PM Revision 10043: inputs/.rsync_ignore: test*.xml: turn on syncing again, but always treat the local side of the sync (starscream or vegbiendev) as the authoritative copy since they are the machines the tests can be run on
- 05:18 PM Revision 10042: /.rsync_ignore: temp files: hide them on upload so that they are never synced to jupiter. hiding is different than unidirectionally exclude'ing them, because it also causes them to be deleted on the destination if they were uploaded in previous syncs.
- 04:57 PM Revision 10041: inputs/VegBIEN/TWiki/.rsync_ignore: /**: turn syncing back on, but only allow it unidirectionally from vegbiendev->jupiter->starscream to avoid clobbering the live site or the jupiter backup. this is probably the only dir whose authoritative copy is *always* on vegbiendev. for all other dirs, edits can be made wherever convenient, so no copy is authoritative and no sync directions need to be restricted.
- 04:27 PM Revision 10040: /README.TXT: Maintenance: synchronization: fixed whitespace
- 04:07 PM Revision 10039: inputs/.rsync_ignore: install.log.sql: only exclude this on starscream (the local machine), using new machine-specific .rsync_filters, so that vegbiendev's copies of this will be backed up
- 03:46 PM Revision 10038: lib/sh/sync.sh: upload(): .rsync_filter: also support machine-specific filters, for cases when different machines produce the same file (e.g. a log file) but only one machine's copy should be backed up
- 03:43 PM Revision 10037: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: removed filters that are now handled by .rsync_ignores
- 03:31 PM Revision 10036: added inputs/GBIF/_src/.rsync_filter.upload,download to prevent old versions of GBIFPortalDB-*.dump.gz from being downloaded to the local machine, while keeping them on jupiter. this avoids the need to store these files in ~/Documents/BIEN/large_files/ with symlinks from inputs/GBIF/_src/ to exclude them from the sync.
- 03:17 PM Revision 10035: bugfix: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: sync ~/Dropbox/svn/ (the no-unversioned-files working copy) separately from the rest of the files, because .svn/ is now excluded by /.rsync_ignore, so that `svn up` needs to be used to keep the .svn/ dirs in sync. note that .svn/ should generally not be synced between machines, because they may use incompatible versions of the svn working copy format.
- 03:02 PM Revision 10034: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: use new bin/sync_upload (with $sync_remote_subdir) so that per-dir .rsync_ignores are processed, and to use the default $sync_remote_url
- 02:57 PM Revision 10033: lib/sh/local.sh: $sync_remote_url: allow user to override just the sync subdir (not the whole URL) in $sync_remote_subdir. this is useful e.g. for backing up the Mac's files to jupiter.
- 02:28 PM Revision 10032: /README.TXT: Maintenance: to synchronize vegbiendev, jupiter, and your local machine: use new bin/sync_upload instead of specifying all the filter patterns manually. this replaces several `put` commands with various filters with just a bin/sync_upload each on vegbiendev and your machine (in overwrite=1 mode to force a complete sync).
- 02:21 PM Revision 10031: bugfix: backups/.rsync_filter.download: need to prevent existing backups from being deleted on the local side, too, by changing hide patterns to exclude
- 02:11 PM Revision 10030: lib/sh/sync.sh: upload(): make put's $subpath option relative to the currdir instead, like the --include paths. note that $subpath unfortunately can't be used in subdirs at this point because it will cause rsync to ignore the .rsync_ignores and .rsync_filters in parent dirs, including the essential .rsync_ignore in the sync root dir.
- 01:42 PM Revision 10029: /README.TXT: removed unnecessary `env` before kw params, which are treated as such whenever they appear before a command name
- 01:22 PM Revision 10028: bugfix: /README.TXT: updated `make backups/download` to `make backups/<file>/download`
- 01:21 PM Revision 10027: backups/Makefile: upload: use bin/sync_upload
- 01:12 PM Revision 10026: inputs/Makefile: download-logs: use bin/sync_upload like upload/download
- 01:07 PM Revision 10025: bugfix: /README.TXT: `make inputs/upload`, `make inputs/download`: added live=1 so that the sync operation runs rather than previewing what will be synced. removed test=1 because this flag is not used by put.
- 01:00 PM Revision 10024: bugfix: inputs/Makefile: upload, download: need to exclude files in .rsync_ignore, so that large local-only files, such as inputs/GBIF/raw_occurrence_record_plants/table*.tsv, do not have to be synced before `make inputs/upload` can complete (the corresponding .gz gets extracted instead); and deleted temp files in inputs/VegBIEN/TWiki/, such as active sessions, are not added back to the live copy on vegbiendev. previously, fixing this required extracting the rsync command run by `make inputs/upload`, etc. and manually editing it to exclude the applicable .rsync_ignore files, each time `make inputs/upload`, etc. was run (including before every column-based import).
- 12:23 PM Revision 10023: bugfix: bin/make: need to leave bin/, ~/bin/ in the PATH when running make nonrecursively, so that commands invoked by it which are located in these dirs (e.g. put, which will be used by `make inputs/upload`) can still be found. this requires using command()'s new nonrecursive=1 flag instead of running no_PATH_recursion, so that no_PATH_recursion() only affects the resolution of the command path, but does not propagate the filtered PATH to the invoked command itself.
- 12:18 PM Revision 10022: lib/sh/util.sh: command(): added nonrecursive=1 flag, which uses cmd2abs_path to run an external command nonrecursively
- 12:16 PM Revision 10021: lib/sh/util.sh: added cmd2abs_path, which makes the command in $1 nonrecursive
- 11:37 AM Revision 10020: bugfix: lib/sh/util.sh: PATH_rm(): also need to remove adjacent occurrences of the same path (or occurrences which become adjacent when other paths are removed), which :...: matching wasn't doing because the trailing : is consumed, preventing it from being matched at the beginning of the next path. since unlike filesystem paths with /, it is not necessary for a match to span multiple :-separated sections, we can just use new split() to split apart the PATH into an array of paths, filter each path, and join() them back together.
- 11:33 AM Revision 10019: lib/sh/util.sh: added split()
- 10:32 AM Revision 10018: lib/sh/util.sh: auto-echo common external commands: added `which`
- 10:32 AM Revision 10017: lib/sh/util.sh: auto-echo common external commands: use simpler echo_run instead of command since logging handling is not needed
- 08:50 AM Revision 10016: added backups/vegbien.r9897.backup.md5