schemas/util.sql: added nulls_map(), for use with _map()
lib/runscripts/table.run: postprocess(): added remake action that calls trim_table()
lib/runscripts/table.run: added trim_table(), which calls util.trim(regclass, regclass)
lib/runscripts/table.run: map_table(): added remake action that calls reset_col_names()
lib/runscripts/table.run: added reset_col_names(), which calls util.reset_col_names()
bugfix: lib/runscripts/table.run: map_table(): moved $map_table to global var so it can be used by other functions
bugfix: lib/runscripts/table.run: postprocess(): don't propagate $remake to remake_VegBIEN_mappings(), since this will cause map.csv to be remade, which is not related to the postprocessing.
lib/runscripts/table.run: map_table(): util.set_col_names_with_metadata(): removed unnecessary cast to regclass, which is performed implicitly. this used to be needed when the polymorphic util.rename_cols() was used instead.
schemas/util.sql: added trim(), which trims a table to include only original columns, as defined by a map table
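    a usage sketch (table names are hypothetical; the argument order is assumed from derived_cols(table_, names) below):
        SELECT util.trim('"REMIB"."Specimen"'::regclass, '"REMIB"."Specimen.map"'::regclass);
            -- would drop the derived columns, leaving only the columns listed in the map table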
schemas/util.sql: added derived_cols(), which gets table_'s derived columns (all the columns not in the names table)
schemas/util.sql: added eval2set()
schemas/util.sql: added drop_column()
inputs/REMIB/Specimen/postprocess.sql: map_nulls__*(): turned off STRICT to allow dynamic inlining, which speeds up the mk_derived_col() statements by 5x (342799.823 ms -> 71533.252 ms (6 min -> 1 min) for latitude_sec)
inputs/REMIB/Specimen/postprocess.sql: runtimes: updated for vegbiendev, before dynamic inlining. the times are about twice as fast as on starscream, so vegbiendev is faster at whatever is the limiting speed factor (probably not CPU, based on other benchmarks).
schemas/util.sql: map_nulls(): documented that due to dynamic inlining, this is just as fast as util._map() which it wraps. dynamic inlining now brings altogether a 40x speed improvement to map_nulls() (4000 ms -> 100 ms), and would likely bring a comparable improvement for other functions that are run repeatedly and call other user-defined functions.
bugfix: schemas/util.sql: map_nulls(): updated to use hstore(text[], anyelement), which has replaced hstore(anyarray, anyelement)
schemas/util.sql: removed hstore(anyarray, anyelement), which did not support dynamic inlining, to avoid confusion over which hstore() function to use. use new hstore(text[], anyelement) instead (with explicit cast on the keys array if needed).
schemas/util.sql: added hstore(text[], anyelement), which dynamically inlines properly, unlike hstore(anyarray, anyelement). this can be selected by explicitly casting the keys array to text[], which now provides a 6x speed improvement (380 ms -> 60 ms) for map_nulls().
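    a usage sketch (keys are illustrative); the explicit cast selects the inlinable variant:
        SELECT util.hstore(ARRAY['lat_min','lat_sec']::text[], NULL::text);
            -- -> '"lat_min"=>NULL, "lat_sec"=>NULL'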
schemas/util.sql: fix_array(): turned off STRICT to allow dynamic inlining, which speeds up util.map_nulls() by 3x (1500 ms -> 500 ms)
schemas/util.sql: array_length(anyarray), array_length(anyarray, dimension integer): turned off STRICT to allow dynamic inlining, which speeds up util.map_nulls(). this requires adding a `CASE WHEN $1 IS NULL THEN NULL` expression to array_length(anyarray, dimension integer) to replace the functionality provided by STRICT.
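    a sketch of the resulting definition (the exact body is assumed):
        CREATE OR REPLACE FUNCTION util.array_length("array" anyarray, dimension integer)
          RETURNS integer
          LANGUAGE sql IMMUTABLE -- no STRICT, so calls can be dynamically inlined
        AS $$
        SELECT CASE WHEN $1 IS NULL THEN NULL -- replaces the NULL-skipping that STRICT provided
            ELSE COALESCE(pg_catalog.array_length($1, $2), 0) -- empty arrays -> 0, per the wrapper's semantics
            END
        $$;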
schemas/util.sql: map_nulls(): turned off STRICT to allow dynamic inlining, which causes a 2x speed improvement. (see r10352 for an explanation of dynamic inlining.) note that turning off STRICT disables NULL-skipping (avoiding running a function when all its params are NULL), so it should only be done when the NULL-skipping optimization is needed less than dynamic inlining.
schemas/util.sql: inlinable IMMUTABLE functions: avoid using config params (e.g. `SET search_path TO util`) because these prevent dynamic inlining (i.e. inlining of a function call with variable instead of constant arguments, by substituting the arguments into the function's body). dynamic inlining can speed up function evaluation significantly, because a (slow) call to a user-defined SQL function is avoided.
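    a minimal sketch of the inlinable style (the function itself is hypothetical): schema-qualify references in the body instead of attaching a config param:
        -- inlinable: LANGUAGE sql, IMMUTABLE, not STRICT, and no `SET search_path TO util`
        CREATE OR REPLACE FUNCTION util.null_dashes(value text)
          RETURNS text
          LANGUAGE sql IMMUTABLE
        AS $$ SELECT util._map('"-"=>NULL'::hstore, $1) $$;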
schemas/vegbien.my.sql: updated for new bin/repl text mode matching, which also affects non-regexps. this causes the replacement of a few more occurrences of PostgreSQL-only one-word typenames with their MySQL equivalents.
inputs/REMIB/Specimen/postprocess.sql: runtimes: documented the machine the times are from
inputs/REMIB/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
bugfix: bin/repl: text mode: repurpose this to match SQL identifiers, for use by inputs/input.Makefile %/postprocess.sql. %/postprocess.sql is the only place currently using this mode, so this will not affect other scripts.
bugfix: inputs/input.Makefile: %/postprocess.sql: need to run bin/repl in text mode (text=1) so that values to match are treated as literal strings rather than regular expressions. this difference is important for column names with spaces or special characters.
bugfix: inputs/Madidi/LocationObservation/map.csv: resolved Notes, Notes 2 -> locationRemarks collision by _alt()ing them together. note that _alt() is fine because only one of these is ever populated.
bugfix: schemas/util.sql: set_col_names(): need to generate error if destination column already exists (rather than suppressing it with try_create()), because this indicates a collision
bugfix: inputs/Madidi/IndividualObservation/map.csv: removed derived column FieldFamilyFullName#originalFamily, which should not be in the map table because it can contain only columns that are initially in the table before running postprocess.sql
schemas/util.sql: map table: added a unique constraint on the "to" column as well, because the destination names also need to be distinct in order to form a valid set of column names
schemas/util.sql: map table: changed pkey to a unique constraint so pgAdmin would sort the entries in table order (matching the order they are in the staging table) instead of alphabetized by the pkey
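    a sketch of the resulting constraints (table name hypothetical; "from"/"to" are the map table's columns):
        ALTER TABLE "REMIB"."Specimen.map" ADD UNIQUE ("from"); -- replaces the pkey, so pgAdmin keeps table order
        ALTER TABLE "REMIB"."Specimen.map" ADD UNIQUE ("to");   -- destination names must also be distinct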
bugfix: inputs/REMIB/Specimen/map.csv: state: changed output column name to stateProvince_verbatim to match the renaming in postprocess.sql
inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: removed out-of-date rerun time, which applied to doing all the deletes in the same statement (however, the current rerun time is approximately the same). note that index scans are not actually used (as the previous comment incorrectly stated) because the conditions for this filter are prefix-less regexps.
inputs/REMIB/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns". null-mapping filters now use wrappers around new util.map_nulls(). note that the verbatim columns input to the filters need to be renamed to avoid name collisions with their filtered columns, which must be VegCore terms for new-style import.
inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: also filter out non-numbers for long_sec, lat_min, lat_sec
inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: remove rows where long_min is not a number
inputs/REMIB/Specimen/postprocess.sql: changed E'' strings to regular '' strings to avoid the need to double \ (with regular strings, it is ' that gets doubled instead). E'' used to be necessary in previous versions of PostgreSQL to avoid a warning about escape string syntax.
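    for reference (standard_conforming_strings is on by default as of PostgreSQL 9.1):
        SELECT 'a\d+';   -- standard strings: \ is literal, no doubling needed
        SELECT E'a\\d+'; -- escape strings: \ must be doubled to stay literal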
inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: removed unnecessary () around `DELETE FROM :table WHERE long_deg ...`
inputs/REMIB/Specimen/postprocess.sql: removed coll_year, country, long_deg indexes because the frameshift filter conditions on these columns do not use index scans (because their regexp patterns do not contain a fixed prefix). eventually, some regexp patterns may be able to be modified to use prefixes.
bugfix: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: can't OR together conditions to determine rows to delete, because if any condition is NULL instead of true/false, this will NULL out the entire WHERE condition and prevent any other true conditions from causing a deletion. the best way to fix this is to use a separate DELETE statement for each condition, so that NULLs only impact that particular condition's DELETE. unlike using a modified, NULL-insensitive OR, which would prevent the use of index scans, this allows indexes to be used for conditions that support them.
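    a sketch of the per-condition pattern (the regexps are illustrative; :table is the psql variable postprocess.sql already uses):
        -- a NULL condition now only skips rows in its own DELETE:
        DELETE FROM :table WHERE long_deg !~ '^-?[0-9.]+$';
        DELETE FROM :table WHERE long_min !~ '^[0-9.]+$';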
inputs/REMIB/Specimen/postprocess.sql: removed duplicate CREATE INDEX for the acronym column
bugfix: inputs/REMIB/Specimen/postprocess.sql: switched back to the input column names, since the renaming to *_verbatim is part of a later step
inputs/REMIB/Specimen/create.sql: moved filtering out of frameshifted rows to postprocess.sql, where it can happen in an idempotent DELETE. this allows filters to remove additional rows to easily be added on top of the existing filters, without needing to remake Specimen (which takes a long time, because of the many stage I derived columns that get added). the logical inversion inherent in the DELETE condition has been factored through rather than wrapped in NOT (...), because removal of frameshifted rows is more accurately specified as the detection of specific patterns that indicate frameshifting rather than the validation of all fields.
bugfix: schemas/util.sql: not_empty(anyarray): array_length() now refers to different functions, with different semantics, depending on whether util is in the search_path. this necessitates explicitly selecting util.array_length() and switching to its semantics (ARRAY[] -> 0 instead of NULL)
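    a sketch of the adjusted function (the body is assumed):
        CREATE OR REPLACE FUNCTION util.not_empty("array" anyarray)
          RETURNS boolean
          LANGUAGE sql IMMUTABLE
        AS $$ SELECT util.array_length($1) > 0 $$; -- util.array_length(): ARRAY[] -> 0, so this is false for empty arrays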
schemas/util.sql: map_nulls(): support all datatypes, not just text
schemas/util.sql: added hstore(keys anyarray, value anyelement) and => (anyarray, anyelement) operator to support other element types for hstore
inputs/REMIB/Specimen/create.sql: also remove frameshifted rows with invalid long_deg values
schemas/util.sql: added map_nulls(), a common use case of _map()
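    a usage sketch (the signature is assumed to be map_nulls(nulls, value)):
        SELECT util.map_nulls(ARRAY['', '-', 'unknown'], 'unknown'); -- -> NULL
        SELECT util.map_nulls(ARRAY['', '-', 'unknown'], 'valid');   -- -> 'valid', unchanged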
bugfix: schemas/util.sql: hstore(keys text[], value text): use new fix_array() so that an empty keys array is made 1-dimensional to match up with the array generated by array_fill()
schemas/util.sql: added fix_array(), which ensures that the array will always have proper non-NULL dimensions
schemas/util.sql: added empty_array(), for constructing proper empty 1-dimensional arrays whose dimensions are not NULL ('{}'::text[] does not do this)
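    the problem this addresses, for reference:
        SELECT array_dims('{}'::text[]);      -- -> NULL: a literal empty array has no dimensions
        SELECT array_length('{}'::text[], 1); -- -> NULL rather than 0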
bugfix: schemas/util.sql: array_length(anyarray): need to call util.array_length() instead of just array_length() (which uses pg_catalog.array_length()) so that empty arrays will be returned as 0 instead of NULL. note that for some reason, adding `SET search_path=util` to the function does not have the same effect.
inputs/ACAD/Specimen/postprocess.sql, inputs/ARIZ/omoccurrences/postprocess.sql: removed unnecessary "" around keys/values. "" are required in hstore input syntax in approximately the same places as they are in XPaths (around values containing spaces or special characters).
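    for reference, in hstore input syntax:
        SELECT 'k=>v, "key with spaces"=>"a,b"'::hstore; -- "" only around tokens with spaces/special chars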
inputs/ACAD/Specimen/map.csv, inputs/ARIZ/omoccurrences/map.csv: removed derived columns, which cause an error when trying to rename a table that does not yet have the derived columns added. this error will not be noticed locally if the derived columns were added before switching to new-style import, but will be noticed on vegbiendev.
inputs/ARIZ/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
added inputs/ARIZ/omoccurrences/postprocess.sql
inputs/ARIZ/omoccurrences/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
bugfix: schemas/util.sql: _map(map hstore, value anyelement): need to cast result to unknown to support types that don't have a cast directly from text
schemas/util.sql: added _map(map hstore, value anyelement) to seamlessly map types other than text (by casting back and forth between text and the value type)
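    a usage sketch (the mapping is illustrative):
        -- the value is cast to text, mapped, and the result cast back to the value's type:
        SELECT util._map('"-1"=>NULL'::hstore, (-1)::integer); -- -> NULL::integer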
inputs/ACAD/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
inputs/input.Makefile: added %/postprocess.sql to replace input column names with the corresponding output column names when switching to new-style import (this target must be manually run, but does simplify the process of renaming the postprocess.sql input columns)
planning/timeline/timeline.2013.xls: moved Individual datasource refresh under Importing to normalized VegCore instead of Switching to new-style import because it is actually related to the refactor-in-place method used to import to VegCore
inputs/ACAD/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
bugfix: schemas/util.sql: rename_cols(): run additional `SELECT NULL::void` query after the main for-loop query so that PostgreSQL does not try to fold away the execution of util.try_create() just because multiple rows are not returned by the function. the result set of the first query will still be discarded, but will be fully evaluated. (this has nothing to do with VOLATILE vs. IMMUTABLE; util.try_create() is already declared VOLATILE and would normally not be folded.) rename_cols() is used to rename derived columns, which are not part of the map.csv and cannot be positionally renamed.
schemas/util.sql: added text[] => text operator, analogous to text => text for multiple keys (uses hstore(keys text[], value text))
schemas/util.sql: added hstore(keys text[], value text), which can be used to avoid repeating the same value for each key. there are many /_map filters which use the XPath syntax for doing this, which now need to use an equivalent SQL syntax to avoid duplicating the value many times.
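    a usage sketch (the keys are illustrative); the operator form from the entry above is equivalent:
        SELECT util.hstore(ARRAY['lat_deg','lat_min','lat_sec'], 'N/A');
        -- or: SELECT ARRAY['lat_deg','lat_min','lat_sec'] => 'N/A';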
web/links/index.htm: updated to match Firefox bookmarks. added link to Brian Enquist's fractals video on PBS NOVA.
schemas/util.sql: added array_fill(anyelement, integer), which doesn't require lengths for multiple dimensions
schemas/util.sql: added array_length(anyarray, dimension integer) wrapper, which returns 0 instead of NULL for empty arrays
schemas/util.sql: added array_length(anyarray), which does not require a second dimension argument
lib/sql_io.py: put_table(): documented that PostgreSQL 9.1+ now provides a way to implement insert/on duplicate select just once for each table (instead of dynamically for each insert) using the new INSTEAD OF triggers (http://www.postgresql.org/docs/9.1/static/plpgsql-trigger.html). INSTEAD OF triggers were not used when put_table() was developed, because it was necessary to support PostgreSQL 9.0, which was installed on the Mac and not easily upgradeable. it was eventually upgraded to add PostGIS, which required a complete reinstall of the DB from the staging tables, with the associated staging table reload bugs, as well as complete removal of the old Postgres version.
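    a minimal sketch of that approach (names hypothetical): an INSTEAD OF INSERT trigger on a view implements insert/on duplicate select once per table:
        CREATE TABLE t (id serial PRIMARY KEY, name text UNIQUE);
        CREATE VIEW t_view AS SELECT * FROM t;
        CREATE FUNCTION t_view_insert() RETURNS trigger LANGUAGE plpgsql AS $$
        BEGIN
            BEGIN
                INSERT INTO t (name) VALUES (NEW.name) RETURNING * INTO NEW;
            EXCEPTION WHEN unique_violation THEN
                SELECT * INTO NEW FROM t WHERE name = NEW.name; -- on duplicate, select the existing row
            END;
            RETURN NEW;
        END $$;
        CREATE TRIGGER t_view_insert INSTEAD OF INSERT ON t_view
            FOR EACH ROW EXECUTE PROCEDURE t_view_insert();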
inputs/Madidi/: switched to new-style import
schemas/VegCore/VegCore.ERD.mwb: regenerated exports
schemas/VegCore/VegCore.ERD.mwb: categories: fixed position of place to match where it now is in the diagram. lined up boxes so that there is a visible line between the place- and occurrence-related categories.
inputs/Madidi/IndividualObservation/map.csv: translated 1:many mappings ( FieldFamilyFullName->{family,originalFamily} ) to derived columns (in postprocess.sql) to work with new-style import, which must have a 1:1 relationship between input and output columns
schemas/util.sql: added reset_col_names(), the counterpart to set_col_names(). note that this alters the map table, so it will need to be repopulated after running this function.
schemas/util.sql: mk_derived_col(): support using this function to overwrite an existing column (i.e. as a general-purpose function to perform in-place update with ALTER COLUMN TYPE USING)
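    the underlying pattern, for reference (names illustrative):
        -- in-place update of a column via a transform expression:
        ALTER TABLE t ALTER COLUMN name TYPE text USING nullif(name, '-');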
lib/sh/db.sh: psql(): display stack traces and DETAIL sections of error messages at verbosity 2+, to help debugging (previously they were always turned off). in particular, the DETAIL section of a "duplicate key value violates unique constraint" error is useful because it contains the duplicated key.
inputs/.TNRS/: switched to new-style import. because this does not have data subdirs (data comes from the TNRS client), this is just a matter of adding ./run.
inputs/.TNRS/Source/: switched to new-style import. this had been missed when all the Source/ subdirs were batch-switched to new-style import.
inputs/*/*/map.csv: replaced /_first filter with mapping to DUPLICATE special term (VegCore.vegpath.org?DUPLICATE). this removes collisions that don't need a postprocessing formula to combine the columns.
inputs/Madidi/map.csv: removed filters on columns from before the refresh (which are not in active use), so that they don't show up in a search for map.csvs with filters (indicating collisions)
inputs/Madidi/IndividualObservation/map.csv: SeniorCollector: don't prepend it to the CollectorString because the CollectorString already contains it. this may be a change between the BIEN2 and refreshed Madidi data (which uses a significantly different schema).
mappings/VegCore.htm: regenerated from wiki. Special terms: added instructions for adding a distinguishing suffix to each special term in the format special_term#suffix. this is needed for new-style import to make the resulting column name unique within the staging table.
mappings/VegCore-VegBIEN.csv: mapped DUPLICATE to nothing so that it would not be treated as an unmapped term
mappings/VegCore.htm: regenerated from wiki. Special terms: added DUPLICATE.
/README.TXT: Maintenance: regenerate mappings/VegCore.csv: commit command: use single quotes ' instead of double quotes " to avoid needing to \-escape every special char (single quotes ' still need to be escaped)
mappings/VegCore.htm: regenerated from wiki. moved UNUSED, PRIVATE underneath OMIT as subterms.
mappings/VegCore.htm: regenerated from wiki
bugfix: bin/*: spell out [:alnum:] as [a-zA-Z0-9] because Python unfortunately doesn't support POSIX character classes
web/links/index.htm: updated to match Firefox bookmarks. moved Linux, Mac into Unix folder. added instructions to remove old Linux kernels, which fill up the /boot partition. added instructions to force sed to use raw binary mode instead of UTF-8 when UTF-8 is set in the environment. added methods of implementing DB disk space quotas in Postgres. added a comparison of my Mac's CPU (2.66 GHz Intel Core i5) with vegbiendev's (2.44 GHz AMD Phenom X4); my Mac's seems to be much faster, so it might make sense to check that the Thor CPUs are faster than the Vis Lab computers' CPUs the next time it gets upgraded. (these diffs can be seen in WinMerge with Moved block detection on. see /README.TXT > WinMerge setup for details.)
inputs/bien_web/observation/VegBIEN.csv: regenerated now that *_index dummy columns have been removed
inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtimes. it now takes 25 min instead of 16 min to regenerate the derived cols.
inputs/IRMNG/_README.TXT: added note that when refreshing this datasource, remember to regenerate the TNRS derived cols using the instructions in inputs/.TNRS/schema.sql > tnrs_populate_fields()
bin/*: replaced confusing regexp constructs involving \W inside [] with the much clearer explicit character class [:alnum:] . this avoids adding or subtracting from an inverted class in order to reach a subset of the corresponding positive class, because the subset can just be named explicitly instead.
bugfix: bin/repl: it doesn't make sense to add other (non-word) chars to a [^\W_] regexp, because they have no effect: \w doesn't include them to begin with, so they are already excluded. this mistake stems from the double negative created by combining ^ with \W.
lib/runscripts/table.run: postprocess(): propagate the $remake flag to remake_VegBIEN_mappings using self_make, so that a remake=1 on postprocess will cause map.csv to be regenerated as it would for a remake=1 directly on remake_VegBIEN_mappings
bugfix: lib/runscripts/table.run: postprocess(): moved $can_test flag from import() to this function because it is used here
lib/runscripts/table.run: import(): moved postprocessing commands to separate postprocess() function that can be invoked on an already-imported staging table to avoid running the load_data() target. this is especially useful when running the postprocessing on a working copy without the unversioned data files, for datasources whose load_data() target would otherwise try to download the files because they don't already exist.