bugfix: inputs/REMIB/Specimen/postprocess.sql: remove frameshifted rows: the delete conditions can't simply be OR'd together, because a condition that evaluates to NULL instead of true/false propagates through the combined WHERE clause and can keep rows from being deleted. the best fix is to use a separate DELETE statement for each condition, so that a NULL only affects that particular condition's DELETE. unlike a modified, NULL-insensitive OR (which would prevent the use of index scans), this also allows indexes to be used for the conditions that support them.
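a minimal sketch of the per-condition approach, using hypothetical table/column names and frameshift-detection patterns (the actual conditions in postprocess.sql differ):

    -- one idempotent DELETE per frameshift-detection condition;
    -- a NULL result in one condition only affects that condition's DELETE
    DELETE FROM "Specimen" WHERE "lat_deg"  ~ '[^0-9.+-]';  -- hypothetical pattern
    DELETE FROM "Specimen" WHERE "long_deg" ~ '[^0-9.+-]';  -- hypothetical pattern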
inputs/REMIB/Specimen/postprocess.sql: removed duplicate CREATE INDEX for the acronym column
bugfix: inputs/REMIB/Specimen/postprocess.sql: switched back to the input column names, since the renaming to *_verbatim is part of a later step
inputs/REMIB/Specimen/create.sql: moved the filtering out of frameshifted rows to postprocess.sql, where it can happen in an idempotent DELETE. this allows additional row-removal filters to be easily added on top of the existing ones, without needing to remake Specimen (which takes a long time because of the many stage I derived columns that get added). the logical inversion inherent in the DELETE condition has been factored through rather than wrapped in NOT (...), because removing frameshifted rows is more accurately specified as detecting specific patterns that indicate frameshifting than as validating all fields.
bugfix: schemas/util.sql: not_empty(anyarray): array_length() now resolves to different functions, with different semantics, depending on whether util is in the search_path. this requires explicitly calling util.array_length() and adopting its semantics (ARRAY[] -> 0 instead of NULL)
schemas/util.sql: map_nulls(): support all datatypes, not just text
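a hypothetical usage sketch, assuming a signature along the lines of map_nulls(nulls text[], value anyelement) (actual argument types may differ):

    -- map sentinel values to NULL while preserving the column's type
    SELECT util.map_nulls(ARRAY['0', '-9999'], "elevation_m")  -- hypothetical numeric column
    FROM "Specimen";  -- hypothetical table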
schemas/util.sql: added hstore(keys anyarray, value anyelement) and => (anyarray, anyelement) operator to support other element types for hstore
inputs/REMIB/Specimen/create.sql: also remove frameshifted rows with invalid long_deg values
schemas/util.sql: added map_nulls(), a common use case of _map()
bugfix: schemas/util.sql: hstore(keys text[], value text): use new fix_array() so that an empty keys array is made 1-dimensional to match up with the array generated by array_fill()
schemas/util.sql: added fix_array(), which ensures that the array will always have proper non-NULL dimensions
schemas/util.sql: added empty_array(), for constructing proper empty 1-dimensional arrays whose dimensions are not NULL ( {}::text[] does not do this)
bugfix: schemas/util.sql: array_length(anyarray): need to call util.array_length() instead of just array_length() (which uses pg_catalog.array_length()) so that empty arrays will be returned as 0 instead of NULL. note that for some reason, adding `SET search_path=util` to the function does not have the same effect.
inputs/ACAD/Specimen/postprocess.sql, inputs/ARIZ/omoccurrences/postprocess.sql: removed unnecessary "" around keys/values. "" are required in hstore input syntax in approximately the same places as they are in XPaths (around values containing spaces or special characters).
inputs/ACAD/Specimen/map.csv, inputs/ARIZ/omoccurrences/map.csv: removed derived columns, which cause an error when renaming the columns of a table that does not yet have the derived columns added. this error will not be noticed locally if the derived columns were added before switching to new-style import, but it will be noticed on vegbiendev.
inputs/ARIZ/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
added inputs/ARIZ/omoccurrences/postprocess.sql
inputs/ARIZ/omoccurrences/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
bugfix: schemas/util.sql: _map(map hstore, value anyelement): need to cast result to unknown to support types that don't have a cast directly from text
schemas/util.sql: added _map(map hstore, value anyelement) to seamlessly map types other than text (by casting back and forth between text and the value type)
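for illustration only, the round trip through text could be sketched with dynamic SQL (this is not the project's implementation, which uses the cast-to-unknown approach noted in the bugfix above; all names here are hypothetical):

    CREATE FUNCTION _map_sketch(map hstore, value anyelement)
      RETURNS anyelement AS $$
    DECLARE
      mapped text := map -> value::text;  -- look up the value by its text form
      result value%TYPE;                  -- resolves to the caller's actual type
    BEGIN
      EXECUTE 'SELECT $1::' || pg_typeof(value) INTO result USING mapped;
      RETURN result;  -- same type as the input value
    END;
    $$ LANGUAGE plpgsql STABLE;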
inputs/ACAD/: switched to new-style import, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "run the following for each datasource"
inputs/input.Makefile: added a %/postprocess.sql target that replaces input column names with the corresponding output column names when switching to new-style import (this target must be run manually, but it does simplify the process of renaming the input columns in postprocess.sql)
planning/timeline/timeline.2013.xls: moved Individual datasource refresh under Importing to normalized VegCore instead of Switching to new-style import because it is actually related to the refactor-in-place method used to import to VegCore
inputs/ACAD/Specimen/: translated single-column filters to postprocessing derived columns, using the steps at wiki.vegpath.org/Switching_to_new-style_import#stage-I-source-specific > "translate single-column filters to postprocessing derived columns"
bugfix: schemas/util.sql: rename_cols(): run additional `SELECT NULL::void` query after the main for-loop query so that PostgreSQL does not try to fold away the execution of util.try_create() just because multiple rows are not returned by the function. the result set of the first query will still be discarded, but will be fully evaluated. (this has nothing to do with VOLATILE vs. IMMUTABLE; util.try_create() is already declared VOLATILE and would normally not be folded.) rename_cols() is used to rename derived columns, which are not part of the map.csv and cannot be positionally renamed.
schemas/util.sql: added text[] => text operator, analogous to text => text for multiple keys (uses hstore(keys text[], value text))
schemas/util.sql: added hstore(keys text[], value text), which can be used to avoid repeating the same value for each key. there are many /_map filters which use the XPath syntax for doing this, which now need to use an equivalent SQL syntax to avoid duplicating the value many times.
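a rough sketch of the idea using hstore's (text[], text[]) constructor (hypothetical name; the actual util.sql definition may differ):

    CREATE FUNCTION hstore_sketch(keys text[], value text)
      RETURNS hstore AS
    $$ SELECT hstore($1, array_fill($2, ARRAY[array_length($1, 1)])) $$
    LANGUAGE sql IMMUTABLE;
    -- note: an empty keys array needs special handling (see the fix_array()/empty_array() entries above)

e.g. hstore_sketch(ARRAY['a','b'], 'x') yields '"a"=>"x", "b"=>"x"'.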
web/links/index.htm: updated to Firefox bookmarks. added link to Brian Enquist's fractals video on PBS NOVA.
schemas/util.sql: added array_fill(anyelement, integer), which doesn't require lengths for multiple dimensions
schemas/util.sql: added array_length(anyarray, dimension integer) wrapper, which returns 0 instead of NULL for empty arrays
schemas/util.sql: added array_length(anyarray), which does not require a second dimension argument
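minimal sketches of what these wrappers might look like, assuming they live in the util schema and simply delegate to the pg_catalog versions (not necessarily the exact util.sql bodies):

    CREATE FUNCTION util.array_fill(value anyelement, length integer)
      RETURNS anyarray AS
    $$ SELECT pg_catalog.array_fill($1, ARRAY[$2]) $$
    LANGUAGE sql IMMUTABLE;

    CREATE FUNCTION util.array_length(arr anyarray, dimension integer)
      RETURNS integer AS
    $$ SELECT COALESCE(pg_catalog.array_length($1, $2), 0) $$  -- 0, not NULL, for empty arrays
    LANGUAGE sql IMMUTABLE;

    CREATE FUNCTION util.array_length(arr anyarray)
      RETURNS integer AS
    $$ SELECT util.array_length($1, 1) $$  -- must be schema-qualified (see the array_length() bugfix above)
    LANGUAGE sql IMMUTABLE;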
lib/sql_io.py: put_table(): documented that PostgreSQL 9.1+ now provides a way to implement insert/on duplicate select just once for each table (instead of dynamically for each insert) using the new INSTEAD OF triggers (http://www.postgresql.org/docs/9.1/static/plpgsql-trigger.html). INSTEAD OF triggers were not used when put_table() was developed, because it was necessary to support PostgreSQL 9.0, which was installed on the Mac and not easily upgradeable. it was eventually upgraded to add PostGIS, which required a complete reinstall of the DB from the staging tables, with the associated staging table reload bugs, as well as complete removal of the old Postgres version.
inputs/Madidi/: switched to new-style import
schemas/VegCore/VegCore.ERD.mwb: regenerated exports
schemas/VegCore/VegCore.ERD.mwb: categories: fixed position of place to match where it now is in the diagram. lined up boxes so that there is a visible line between the place- and occurrence-related categories.
inputs/Madidi/IndividualObservation/map.csv: translated 1:many mappings ( FieldFamilyFullName->{family,originalFamily} ) to derived columns (in postprocess.sql) to work with new-style import, which must have a 1:1 relationship between input and output columns
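for illustration, such a derived column boils down to something like the following (hypothetical SQL; the project adds derived columns through its postprocessing machinery, e.g. util.mk_derived_col(), rather than hand-written statements):

    -- keep map.csv 1:1 and add the second output column as a derived copy
    ALTER TABLE "IndividualObservation" ADD COLUMN "originalFamily" text;
    UPDATE "IndividualObservation" SET "originalFamily" = "family";  -- assumes the 1:1-mapped column is already named family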
schemas/util.sql: added reset_col_names(), the counterpart to set_col_names(). note that this alters the map table, so it will need to be repopulated after running this function.
schemas/util.sql: mk_derived_col(): support using this function to overwrite an existing column (i.e. as a general-purpose function to perform in-place update with ALTER COLUMN TYPE USING)
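the underlying idiom, with hypothetical table/column/expression:

    -- rewrite a column's values (and optionally its type) in a single in-place table rewrite
    ALTER TABLE "Specimen" ALTER COLUMN "long_deg" TYPE double precision
      USING nullif("long_deg", '')::double precision;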
lib/sh/db.sh: psql(): display stack traces and DETAIL sections of error messages at verbosity 2+, to help debugging (previously they were always turned off). in particular, the DETAIL section of a "duplicate key value violates unique constraint" error is useful because it contains the duplicated key.
inputs/.TNRS/: switched to new-style import. because this does not have data subdirs (data comes from the TNRS client), this is just a matter of adding ./run.
inputs/.TNRS/Source/: switched to new-style import. this had been missed when all the Source/ subdirs were batch-switched to new-style import.
inputs/*/*/map.csv: replaced /_first filter with mapping to DUPLICATE special term (VegCore.vegpath.org?DUPLICATE). this removes collisions that don't need a postprocessing formula to combine the columns.
inputs/Madidi/map.csv: removed filters on columns from before the refresh (which are not in active use), so that they don't show up in a search for map.csvs with filters (indicating collisions)
inputs/Madidi/IndividualObservation/map.csv: SeniorCollector: don't prepend it to the CollectorString because the CollectorString already contains it. this may be a change between the BIEN2 and refreshed Madidi data (which uses a significantly different schema).
mappings/VegCore.htm: regenerated from wiki. Special terms: added instructions for adding a distinguishing suffix to each special term in the format special_term#suffix. this is needed for new-style import to make the resulting column name unique within the staging table.
mappings/VegCore-VegBIEN.csv: mapped DUPLICATE to nothing so that it would not be treated as an unmapped term
mappings/VegCore.htm: regenerated from wiki. Special terms: added DUPLICATE.
/README.TXT: Maintenance: regenerate mappings/VegCore.csv: commit command: use single quotes ' instead of double quotes " to avoid needing to \-escape every special char (single quotes ' still need to be escaped)
mappings/VegCore.htm: regenerated from wiki. moved UNUSED, PRIVATE underneath OMIT as subterms.
mappings/VegCore.htm: Regenerated from wiki
bugfix: bin/*: spell out [:alnum:] as [a-zA-Z0-9] because Python's re module unfortunately doesn't support POSIX character classes
web/links/index.htm: updated to Firefox bookmarks. moved Linux, Mac into the Unix folder. added instructions to remove old Linux kernels, which fill up the /boot partition. added instructions to force sed to use raw binary mode instead of UTF-8 when UTF-8 is set in the environment. added methods of implementing DB disk space quotas in Postgres. added a comparison of my Mac's CPU (2.66 GHz Intel Core i5) with vegbiendev's (2.44 GHz AMD Phenom X4): my Mac's seems to be much faster, so it might make sense to check that the Thor CPUs are faster than the Vis Lab computers' CPUs the next time it gets upgraded. (these diffs can be seen in WinMerge with Moved block detection on; see /README.TXT > WinMerge setup for details.)
inputs/bien_web/observation/VegBIEN.csv: regenerated now that *_index dummy columns have been removed
inputs/.TNRS/schema.sql: tnrs_populate_fields(): updated runtimes. it now takes 25 min instead of 16 min to regenerate the derived cols.
inputs/IRMNG/_README.TXT: added note that when refreshing this datasource, remember to regenerate the TNRS derived cols using the instructions in inputs/.TNRS/schema.sql > tnrs_populate_fields()
bin/*: replaced confusing regexp constructs involving \W inside [] with the much clearer explicit POSIX character class [:alnum:]. this avoids adding to or subtracting from an inverted class in order to reach a subset of the corresponding positive class, because the subset can just be named explicitly instead.
bugfix: bin/repl: doesn't make sense to use other chars in a [^\W_] regexp, because they will have no effect since \w doesn't include the other chars to begin with. this is a result of confusion with the ^ and \W double negative.
lib/runscripts/table.run: postprocess(): propagate the $remake flag to remake_VegBIEN_mappings using self_make, so that a remake=1 on postprocess will cause map.csv to be regenerated as it would for a remake=1 directly on remake_VegBIEN_mappings
bugfix: lib/runscripts/table.run: postprocess(): moved the $can_test flag from import() to this function because it is used here
lib/runscripts/table.run: import(): moved postprocessing commands to separate postprocess() function that can be invoked on an already-imported staging table to avoid running the load_data() target. this is especially useful when running the postprocessing on a working copy without the unversioned data files, for datasources whose load_data() target would otherwise try to download the files because they don't already exist.
lib/runscripts/table.run: postprocess(): renamed to custom_postprocess() since this runs only the datasource's custom postprocessing commands, not all the postprocessing commands including map_table, mk_derived
lib/runscripts/util.run: added the `,` function, which treats each of the command-line args as a separate command, the way make does (instead of as args to the same command, the way runscripts do)
lib/sh/util.sh: moved runscript-related commands to lib/runscripts/util.run because these only apply to runscripts
bugfix: inputs/*/*/map.csv (e.g. inputs/GBIF/raw_occurrence_record_plants/map.csv): remapped author to scientificNameAuthorship rather than authors, which it had gotten incorrectly automapped to. note that the VegCore term authors has now been renamed to data_authors to avoid ambiguity, but incorrect automappings resulting from it had not yet been fixed.
bugfix: inputs/GBIF/raw_occurrence_record_plants/run: updated herbaria.ih column names for staging table column renaming
bugfix: inputs/GBIF/table.run: need to include lib/runscripts/mysql.table.run instead of table.run (table.run was accidentally substituted when inputs/.NCBI/table.run was copied to all new-style datasources)
backups/Makefile: %.backup/restore: documented runtime (11 h; ~5 h to insert data). note that this is still much shorter than re-running column-based import.
planning/workflow/bien3_architecture.pptx: exported from bien3_architecture.odp, as requested by Martha. the .pptx format is buggy and doesn't show the slide thumbnails on the slide notes view, but will have to do as .ppt (the older MS PowerPoint format) doesn't support the graphics.
planning/workflow/bien3_architecture.odp: added wiki page notes (wiki.vegpath.org/2013-06-20_conference_call, wiki.vegpath.org/2013-06-27_conference_call) in the slide notes
planning/workflow/bien3_architecture.odp: added responses to the red-highlighted questions (from e-mails to the list) in the slide notes
planning/timeline/timeline.2013.xls: fixed formatting: removed internal cell borders in spacer lines
added planning/workflow/bien3_architecture.odp with changes from Skype call with Martha, which include wiki page notes (wiki.vegpath.org/2013-06-20_conference_call) about the refactor-in-place method in the Notes area
planning/timeline/timeline.2013.xls: updated with changes from Skype call with Martha
inputs/*/ which do not contain any explicit collisions (wiki.vegpath.org/2013-06-27_conference_call#To-do-for-Aaron > #3.2 > the following datasources ...): switched to new-style import, which adds the staging table column renaming
inputs/newWorld/: switched to new-style import, which adds the staging table column renaming. these tables are used by the public schema (schemas/vegbien.sql), so the renamings are applied there as well.
inputs/bien_web/bien_web.schema.sql: regenerated using bin/my2pg, to remove the *_index dummy columns so they don't create lots of OMIT#... staging table columns
inputs/*/*/map.csv: added distinguishing #... suffix (e.g. UNUSED#institutionID) to the special terms OMIT, PRIVATE, UNUSED (VegCore.vegpath.org#Special-terms) to avoid creating a collision in the staging table renaming
bugfix: inputs/input.Makefile: Staging tables installation: $(allInstalls): don't filter out Source table, because it is now an installed table rather than just a mapping
bin/filter_out_ci, lib/maps.py: simplify(): also remove distinguishing #... suffix from terms (e.g. UNUSED#institutionID), to support mapping multiple columns to the special terms OMIT, PRIVATE, UNUSED (VegCore.vegpath.org#Special-terms), without creating a collision in the staging table renaming. note that this change must not be made to bin/canon, because this would cause suffixed terms to be autorenamed to their *un*suffixed VegCore versions.
backups/Makefile: $(restore): added --verbose to display pg_restore's incremental progress
bugfix: inputs/newWorld/newWorldCountries/postprocess.sql: use UPDATE statement (followed by VACUUM ANALYZE to remove dead tuples) instead of in-place update (ALTER COLUMN TYPE USING), so that the statement can be run even after the public schema has been installed and its views use the columns. (a view using the columns would normally block an ALTER COLUMN TYPE statement on a referenced column.)
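a hypothetical sketch of the replacement (the actual column and expression differ):

    -- UPDATE still works when views depend on the column, whereas ALTER COLUMN TYPE would be blocked
    UPDATE "newWorldCountries" SET "countryCode" = upper("countryCode");  -- hypothetical expression
    VACUUM ANALYZE "newWorldCountries";  -- reclaim the dead tuples left by the UPDATE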
bugfix: lib/runscripts/table.run: remake_VegBIEN_mappings(): when remaking, do not remake header.csv, because it should keep the original CSV columns rather than being reset to whatever the current staging table columns happen to be. to force-regenerate this, instead delete it first and then run remake_VegBIEN_mappings(). remake mode will now just regenerate map.csv from header.csv, in case map.csv's columns are incomplete or out of order.
bugfix: lib/runscripts/table.run: map_table(): do not rename view columns, since their column names come from their (column-renamed) joined tables rather than from a map.csv. header.csv, map.csv for views will generally become out-of-date whenever the joined tables change, so it is better not to generate them at all.
lib/runscripts/table.run: added $is_view
lib/runscripts/table.run: added $postprocess_sql to store postprocess.sql path, and use it in postprocess()
bugfix: lib/sh/local.sh: prevent automated tests when the public schema contains the live DB, so the user doesn't have to explicitly specify can_test= when running the import on vegbiendev
bugfix: lib/runscripts/table.run: import(): allow automated tests (remake_VegBIEN_mappings) to be disabled by setting can_test= if the public schema shouldn't be modified (e.g. if it's the live DB)
bugfix: inputs/*/*/postprocess.sql: made all operations idempotent, so that postprocess.sql can be run repeatedly (e.g. by new-style import)
schemas/util.sql: create_if_not_exists(): also suppress "multiple primary keys are not allowed" error
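the general pattern, as a simplified sketch (not the actual util.create_if_not_exists() body; the new case presumably traps SQLSTATE 42P16/invalid_table_definition):

    CREATE FUNCTION create_if_not_exists_sketch(cmd text)
      RETURNS void AS $$
    BEGIN
      EXECUTE cmd;
    EXCEPTION
      WHEN duplicate_table OR duplicate_object OR duplicate_column THEN NULL;  -- already exists
      WHEN invalid_table_definition THEN  -- e.g. "multiple primary keys ... are not allowed"
        IF SQLERRM LIKE 'multiple primary keys%' THEN NULL;
        ELSE RAISE;
        END IF;
    END;
    $$ LANGUAGE plpgsql;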
added inputs/newWorld/iso_code_gadm/.map.csv.last_cleanup
inputs/*/Source/VegBIEN.csv: regenerated for new-style import, which uses a symlink to mappings/VegCore-VegBIEN.csv instead of a custom mapping using the original column names
inputs/input.Makefile: Staging tables installation: %/install: run %/map_table at end to rename the staging table columns for new-style datasources
inputs/input.Makefile: Staging tables installation: added %/map_table to run the new-style import staging table renaming
inputs/bien2_traits/TraitObservation/map.csv: removed no longer needed mappings of dummy columns to OMIT, which were creating an unnecessary collision of staging table column names
inputs/bien2_traits/bien2_staging.schema.sql: regenerated from MySQL version so that dummy columns (which used to be generated by bin/my2pg) will be replaced with dummy CHECK constraints instead. this avoids needing to map several dummy columns all to OMIT, which was creating an unnecessary collision of staging table column names.
bin/my2pg*: keep MySQL indefinite dates as text strings instead of translating them (to the first of the month or year) to fit into a PostgreSQL timestamp. this allows the application to decide how to handle these values, which otherwise have no corresponding value in PostgreSQL. this requires changing the date/time related types to text instead of leaving them as-is, so that they can store the custom MySQL strings.
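illustrative only:

    -- a MySQL indefinite date such as '2010-00-00' cannot be stored in a PostgreSQL timestamp...
    SELECT '2010-00-00'::timestamp;  -- fails: not a valid date/time value
    -- ...but survives unchanged in a text column, leaving interpretation to the application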
planning/timeline/timeline.2013.xls: Geoscrubbing: made it a subtask of Adding derived columns. moved it to July so that it can be run for Naia's new project.
planning/timeline/timeline.2013.xls: reordered tasks approximately in priority order (which corresponds to the month(s) in which they are scheduled). indented subtasks under their parent tasks.
planning/timeline/timeline.2013.xls: crossed out completed rows and moved them to the bottom