inputs/input.Makefile: Staging tables installation: %/install: run %/map_table at end to rename the staging table columns for new-style datasources
inputs/input.Makefile: Staging tables installation: added %/map_table to run the new-style import staging table renaming
inputs/bien2_traits/TraitObservation/map.csv: removed no longer needed mappings of dummy columns to OMIT, which were creating an unnecessary collision of staging table column names
inputs/bien2_traits/bien2_staging.schema.sql: regenerated from MySQL version so that dummy columns (which used to be generated by bin/my2pg) will be replaced with dummy CHECK constraints instead. this avoids needing to map several dummy columns all to OMIT, which was creating an unnecessary collision of staging table column names.
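  a minimal SQL sketch (hypothetical names; the real DDL is generated by bin/my2pg) of the difference:

      -- old-style output: a real dummy column, which then had to be mapped to OMIT
      CREATE TABLE "TraitObservation_old" (id integer, dummy integer);
      -- new-style output: a no-op CHECK constraint stands in for the dummy column,
      -- so it consumes no staging table column name and needs no OMIT mapping
      CREATE TABLE "TraitObservation_new" (id integer, CONSTRAINT dummy CHECK (true));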
bin/my2pg*: keep MySQL indefinite dates as text strings instead of translating them (to the first of the month or year) to fit into a PostgreSQL timestamp. this allows the application to decide how to handle these values, which otherwise have no corresponding value in PostgreSQL. this requires changing the date/time related types to text instead of leaving them as-is, so that they can store the custom MySQL strings.
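  a hedged SQL sketch (hypothetical table and values) of the resulting behavior:

      -- MySQL allows indefinite dates such as '2012-00-00', which have no
      -- PostgreSQL timestamp equivalent, so the translated column stays text
      CREATE TABLE specimen_staging (date_collected text); -- was datetime in MySQL
      INSERT INTO specimen_staging VALUES ('2012-00-00'), ('2012-07-15');
      -- the application then decides how to handle them, e.g. by skipping them:
      SELECT date_collected::timestamp FROM specimen_staging
      WHERE date_collected !~ '-00';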
planning/timeline/timeline.2013.xls: Geoscrubbing: made it a subtask of Adding derived columns. moved it to July so that it can be run for Naia's new project.
planning/timeline/timeline.2013.xls: reordered tasks approximately in priority order (which corresponds to the month(s) in which they are scheduled). indented subtasks under their parent tasks.
planning/timeline/timeline.2013.xls: crossed out completed rows and moved them to the bottom
planning/timeline/timeline.2013.xls: use a different-style checkmark because LibreOffice no longer displays the font of the previous one correctly (it may already have been displayed incorrectly on other people's computers)
planning/timeline/timeline.2013.xls: Reload existing data in need of refresh: added Oct because Rick Condit is supposed to provide us with a CTFS refresh that we would be allowed to use (he wouldn't let us use the 2011-4-1 full-DB export)
planning/timeline/timeline.2013.xls: continuous tasks: populated past months
planning/timeline/timeline.2013.xls: added Sep, Oct months and moved tasks into them. moved continuous tasks to separate section at bottom to avoid confusion with discrete tasks.
planning/timeline/timeline.2013.xls: use bullet points (•) instead of background shading to indicate future tasks. this allows cells to easily be cleared by pressing Backspace, rather than having to copy a white-background cell on top of the cell.
planning/timeline/timeline.2013.xls: use 3-letter months to make room for more months
planning/timeline/timeline.2013.xls: added missing tasks: switching to new-style import, importing to normalized VegCore, adding derived columns
planning/timeline/timeline.2013.xls: removed alternate-row color highlighting because it makes it difficult to reorder rows or insert new rows in the middle
bin/my2pg: use util.sh $top_dir instead of setting $selfDir
bin/my2pg*: use the util.sh sed wrapper, which fixes the LANG=*.UTF-8 "illegal byte sequence" errors on invalid UTF-8
/Makefile: mysql-Linux: also install mysql-workbench, for use in modifying the VegCore ERD. (note that it has to be modified on Linux, because the Linux and Mac versions of MySQL Workbench position the lines differently.)
/README.TXT: Maintenance: to backup files not in Time Machine: removed VirtualBox VMs because they are now in Time Machine, and do not need to be backed up separately
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added steps to upload just the VirtualBox VMs
bugfix: /README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: added overwrite=1 so that old snapshots, etc. are also deleted
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: use better bin/sync_upload instead of put
/README.TXT: Maintenance: to synchronize a Mac's settings with my testing machine's: removed no longer needed inplace=1, because the VirtualBox VMs now all use a snapshot covering the full disk, so that the full disk is not altered (removing the need to optimize backing up a large file) and just the diff files need to be backed up each time
bugfix: lib/sh/util.sh: sed: must use an alias instead of a function, because a function causes a segfault in the redir() subshell when used with the make.sh make() filter (may be a bug in bash?). this involves translating `unset LANG` to `env LANG=` (`env -u` to unset a var isn't supported on Mac, but fortunately sed treats LANG="" the same as unset LANG).
archived planning/goals/BIEN3_derived_data_products.docx and replaced with symlink to new BIEN_3_derived_data_products_NormalizedDB_only.docx
added planning/goals/BIEN_3_derived_data_products_NormalizedDB_only.docx from Brad's e-mail
bugfix: lib/sh/util.sh: sed: unset LANG to avoid "illegal byte sequence" errors on invalid UTF-8 for LANG=*.UTF-8. these occur e.g. with MySQL data that is in Latin-1.
lib/sh/util.sh: sed: use function instead of alias so that env can be set up before calling sed
planning/workflow/bien3_architecture.pptx: updated to Martha's revised version from 2013-7-3
lib/runscripts/table.run: map_table(): run map_table repeatedly until no more renames are made: added a command to do this
lib/runscripts/table.run: map_table(): documented that collisions may prevent all renames from being made at once. if this is the case, map_table must be run repeatedly until no more renames are made. collisions may result if the staging table gets messed up (e.g. due to missing input columns in map.csv).
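  an SQL illustration (hypothetical column names) of such a collision, and of why a second pass resolves it:

      CREATE TABLE staging (col_a text, col_b text);
      -- suppose map.csv renames col_a -> col_b and col_b -> col_c:
      ALTER TABLE staging RENAME col_a TO col_b; -- ERROR: column "col_b" already exists
      ALTER TABLE staging RENAME col_b TO col_c; -- pass 1: only this rename succeeds
      ALTER TABLE staging RENAME col_a TO col_b; -- pass 2: the collision is now gone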
inputs/*/*/map.csv for CSV tables with a row_num column: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table
bugfix: inputs/*/Source/map.csv: added missing row_num entry, which is needed by the staging table column renaming to make the order of the map.csv columns match the order in the staging table. the staging table column renaming is now used by all Source tables.
bugfix: populated empty inputs/IUCN/European_Red_List_Plants/header.csv
inputs/CTFS/*/map.csv: added *.src.row_num from joined tables so that the map.csv input columns would match the staging table. this is needed for the staging table column renaming, which is positional rather than name-based so that it works with any existing column name.
bugfix: inputs/input.Makefile: map.csv and derived files: use $(tables) instead of $(importTables) when making them so that the mappings of those tables are still kept up-to-date even though they are marked _no_import (and not imported into the main DB)
inputs/CTFS/*/test.xml.ref: regenerated. these got out of date because, even though these tables are included in import_order.txt, they are marked as _no_import, which previously prevented map.csvs and derived files from being kept up-to-date.
bugfix: inputs/CTFS/*/VegBIEN.csv: regenerated from map.csv. they may have gotten out of date because they are marked as _no_import, even though they are in import_order.txt.
bugfix: added missing inputs/MO/Specimen/header.csv
bugfix: added missing inputs/QFA/Specimen/header.csv
bugfix: inputs/TEX/Specimen/header.csv: generated from staging table (was empty previously)
added inputs/newWorld/iso_code_gadm/header.csv
added inputs/analytical_db/table.run
bugfix: inputs/VASCAN/Taxon/map.csv: added missing entry for the row_num column added by bin/csv2db
lib/sql_io.py: cleanup_table(): added assertion that the table exists, so that if it doesn't, the error will occur as part of an assertion rather than as part of the util.table_nulls_mapped__get() call, which might confusingly lead users to believe that this is a bug in util.table_nulls_mapped__get() when in fact the problem is that the table is not installed
fix: inputs/import.stats.xls: removed spurious diff comment on total time, which only applied to the previous import
inputs/import.stats.xls: reformatted times longer than one day as a # of days instead of hours, for clarity. the days format is chosen automatically when the # of hours exceeds one day.
bugfix: inputs/*/Source/: added missing ./run, which creates the new-style staging tables with the metadata fields as part of the table. this is needed now that these subdirs use installed staging tables instead of metadata-only map.csvs.
bin/map: removed no longer used support for map.csv input column prefixes (expand out the prefixes instead). this used to be used by SpeciesLink to map a single term with multiple DwC namespaces using just one mapping, but was replaced with an explicit, ordered /_alt-ing together of the terms (rather than the implicit, unordered combination the prefixes provided).
bin/map: removed no longer accurate comment that this is case- and punctuation-insensitive, since the case- and punctuation-insensitivity is now instead handled by map.csv preprocessing scripts before the mappings are even provided to bin/map
inputs/.herbaria/: switched to new-style import, which renamed the columns to the VegCore names. this is done using the commands at wiki.vegpath.org/2013-06-27_conference_call#To-do-for-Aaron > "run the following for each datasource".
lib/sql_io.py: cleanup_table(): don't run the slow ALTER TABLE statement again if the table has already been cleaned up. documented that it is idempotent (and actually was before this change as well).
lib/sql_io.py: added table_nulls_mapped__set() and table_nulls_mapped__get() wrappers around the corresponding util schema functions
lib/sql_gen.py: added table2regclass_text()
schemas/util.sql: added table_nulls_mapped__get(), which gets whether a table's NULL-equivalent strings have been replaced with NULL
schemas/util.sql: added table_flag__get(), which gets whether a status flag is set by the presence of a table constraint
schemas/util.sql: added table_nulls_mapped__set(), which sets that a table's NULL-equivalent strings have been replaced with NULL
schemas/util.sql: added table_flag__set(), which stores a status flag by the presence of a table constraint
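  a minimal sketch of how these four functions fit together (assumed implementations; the actual util.sql definitions may differ):

      CREATE FUNCTION util.table_flag__set(table_ regclass, flag text)
          RETURNS void LANGUAGE plpgsql AS $$
      BEGIN
          EXECUTE format('ALTER TABLE %s ADD CONSTRAINT %I CHECK (true)', table_, flag);
      EXCEPTION WHEN duplicate_object THEN NULL; -- flag already set
      END $$;

      CREATE FUNCTION util.table_flag__get(table_ regclass, flag text)
          RETURNS boolean LANGUAGE sql STABLE AS $$
          SELECT EXISTS(SELECT 1 FROM pg_constraint
              WHERE conrelid = $1 AND conname = $2)
      $$;

      -- table_nulls_mapped__set()/__get() would then just pass a fixed flag name,
      -- e.g. util.table_flag__get(table_, 'nulls_mapped')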
schemas/util.sql: create_if_not_exists(): also ignore duplicate_object exceptions, thrown when trying to add a duplicate constraint
inputs/input.Makefile: %/postprocess: removed no longer used invocation of $*/import (precursor to the runscripts used in FIA)
inputs/*/: added table.run for use by the table subdirs in new-style import. datasources without table subdirs do not need this.
inputs/*/: added top-level Makefile which includes inputs/input.Makefile, so that make can be run directly on the datasrc dir without needing to specify `--makefile=../input.Makefile` (see input.Makefile $(selfMake))
added inputs/test_taxonomic_names/Taxon/header.csv
web/links/index.htm: updated to Firefox bookmarks. removed dead favicons. PostgreSQL: added bookmarks about triggers.
bugfix: inputs/input.Makefile: %/VegBIEN.csv: for new-style datasources, use a symlink to mappings/VegCore-VegBIEN.csv directly instead of prefiltering VegCore-VegBIEN.csv to include only the columns in map.csv. prefiltering used to be performed as part of mapping the map.csv VegCore output terms to VegBIEN using bin/join, but is no longer needed because the staging table columns are now VegCore terms. instead, the full VegCore-VegBIEN.csv is needed so that derived columns added in stage I or II validations are detected by bin/map (rather than just the original source columns in map.csv).
mappings/VegCore-VegBIEN.csv: cultivated, oldGrowth: use just cultivated if it's provided, rather than /_alt-ing it back with oldGrowth (which it was generated from)
bugfix: mappings/VegCore-VegBIEN.csv: fixed priority of cultivated and oldGrowth so cultivated is used first if it's available
bugfix: lib/runscripts/table.run: need to run remake_VegBIEN_mappings after mk_derived rather than before so the derived cols will be included in the automated test result
bugfix: inputs/*/Source/: use installed staging table (with blank-line data.csv) in order to also work with new-style import. this also fixes a benign diff between the by-row and by-col test outputs, where row-based import would not import the Source/ entries because there was not at least one row in the input. note that in order to ensure that all datasources are properly run, you need to check `svn st|sort` against the datasource schema names to see if any are missing.
inputs/*/logs: updated svn:ignore
inputs/*/*/logs: updated svn:ignore
bugfix: inputs/input.Makefile: SVN: add: don't add subdirs for datasources marked _no_import (e.g. datasources which only have an inputs/ dir to be listed in VegPath)
bugfix: inputs/*/Source/data.csv for new-style datasources: need to include a blank row (plus a blank header) so that the metadata values are imported at least once instead of zero times, now that there is an installed staging table that will be iterated over. the blank row was not previously necessary, because db_xml.put_table() has a special case for metadata-only tables with no installed table, which avoids iterating over the table's rows.
lib/sql_io.py: put_table() (column-based import): complexity note: clarified that INSERT RETURNING throws an error on duplicate instead of returning the existing row. added blank line after ¶ for readability.
lib/sql_io.py: put_table() (column-based import): warning about triggers populating unique constraint-covered columns: corrected limitation to include only the unique constraint used to do the DISTINCT ON, since other unique constraints are not affected by column-based import. note that the primary key will normally not be the DISTINCT ON constraint, so trigger-populated natural keys are supported unless the input table contains duplicate rows for some generated keys.
inputs/*/Source/ for new-style datasources: use an actual staging table instead of a metadata-only table, so that metadata values can be stored in the staging table instead of the map.csv (as will be required by new-style import)
inputs/input.Makefile: SVN: $(svnFilesGlob): added data.csv, used to store versioned data (such as the empty data.csv used by Source/ tables which have their metadata in the map table instead)
schemas/util.sql: type_qual(), type_qual_name(): added comments to distinguish these similarly-named functions, one of which gets a type qualifier and the other of which gets a qualified name (not the name of a type qualifier, which one might otherwise assume)
schemas/util.sql: typeof(): support expressions that are not relative to a table (which do not have a table_ param). note that this requires removing the STRICT qualifier, so that NULL expressions will now produce an error instead of passing through as NULL.
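  a sketch of the mechanism (hypothetical function name; the real signature is in util.sql):

      CREATE FUNCTION typeof_demo(expr text) RETURNS regtype
          LANGUAGE plpgsql STABLE AS $$
      DECLARE type_ regtype;
      BEGIN
          -- without STRICT, a NULL expr makes the query string NULL, so EXECUTE
          -- raises an error rather than the call silently returning NULL
          EXECUTE 'SELECT pg_typeof('||expr||')' INTO type_;
          RETURN type_;
      END $$;
      -- SELECT typeof_demo('1 + 1.5'); --> numeric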
schemas/VegCore/VegCore.ERD.mwb: relationships legend: removed inheritance of base_class from record, so that the IS-A label would not confusingly appear to apply to the record connector stub instead of to the solid line between base_class and derived_class
bugfix: schemas/util.sql: col_names(): need to exclude dropped columns (which remain included in the pg_attribute table until the next tuple rewrite), by filtering on `NOT attisdropped`. lib/sql.py table_col_names() is not affected by this because it is able to access the column names from the DB driver directly, after performing `SELECT * FROM table LIMIT 0`.
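  the fix boils down to a catalog query like this (standalone illustration):

      SELECT attname FROM pg_attribute
      WHERE attrelid = 'my_table'::regclass
          AND attnum > 0        -- skip system columns
          AND NOT attisdropped  -- dropped columns linger until the next tuple rewrite
      ORDER BY attnum;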
schemas/util.sql: set_col_names_with_metadata(): don't delete the metadata entries from the map table, because they are now added before the renames take place, so that the renames can simply be performed on the constant columns themselves. this does, however, require that the metadata entries are always listed last in the map.csv (which is currently the case).
lib/runscripts/table.run: map_table(): store the map table in the datasource schema, so that it can easily be referred to when using the staging tables. this also allows it to be found more easily when debugging its contents.
lib/sh/db.sh: psql(): hide the verbose CONTEXT information that is output with each NOTICE by setting the VERBOSITY psql var to terse (postgresql.1045698.n5.nabble.com/Quiet-quot-CONTEXT-quot-td1906036.html#a1906037)
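  for reference, the equivalent setting in an interactive psql session:

      \set VERBOSITY terse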
*{.sh,run}: use new log-() instead of log+() with a negative #
lib/sh/util.sh: added log-() because it's non-obvious that you would otherwise have to invoke log+() with a negative #
schemas/util.sql: reset_map_table(): drop the table and recreate it instead of just creating it if it doesn't exist, so that any change to the util.map table is propagated to persistent map tables whenever they are reloaded from the map.csv
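  a sketch of the new behavior (assumed implementation; in particular, recreating the map table as a copy of util.map is an assumption):

      CREATE FUNCTION reset_map_table_demo(table_ text) RETURNS void
          LANGUAGE plpgsql AS $$
      BEGIN
          EXECUTE format('DROP TABLE IF EXISTS %I', table_);
          -- recreating from util.map, rather than using CREATE TABLE IF NOT EXISTS,
          -- propagates any schema changes in util.map to the persistent map tables
          EXECUTE format('CREATE TABLE %I (LIKE util.map INCLUDING ALL)', table_);
      END $$;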
lib/runscripts/table.run: map_table(): create the map table as a persistent table in the temp schema, so that its contents can be viewed for debugging
schemas/util.sql: added drop_table()
schemas/util.sql: set_col_names(): don't perform rename if the name is not changing, to avoid cluttering the debug output with unnecessary queries
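  the guard, sketched as a standalone helper (hypothetical name):

      CREATE FUNCTION rename_col_demo(table_ regclass, from_ text, to_ text)
          RETURNS void LANGUAGE plpgsql AS $$
      BEGIN
          IF to_ IS DISTINCT FROM from_ THEN -- skip no-op renames: less debug clutter
              EXECUTE format('ALTER TABLE %s RENAME %I TO %I', table_, from_, to_);
          END IF;
      END $$;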
lib/runscripts/table.run: use new util.set_col_names_with_metadata() instead of util.set_col_names() so that metadata values (beginning with : ) are automatically mapped to constant columns rather than needing to add a mk_const_col() call to postprocess.sql for each of them. there are a lot of metadata value entries, especially in the Source/ tables for each datasource, so this will save time in translating the datasources to new-style import. note that this requires disabling the map_filter_insert trigger on the map table to prevent it from filtering out the metadata entries before util.set_col_names_with_metadata() can use them.
bugfix: schemas/util.sql: set_col_names_with_metadata(): need `util.` before mk_const_col(). "to", "from" need to be referenced from row_. substring() needs to start from 2 rather than 1 because PostgreSQL string indexes are 1-based.
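  the metadata branch, sketched with an assumed signature for mk_const_col():

      CREATE FUNCTION map_metadata_demo(table_ regclass, map_ regclass)
          RETURNS void LANGUAGE plpgsql AS $$
      DECLARE row_ record;
      BEGIN
          FOR row_ IN EXECUTE format('SELECT * FROM %s', map_) LOOP
              IF row_."from" LIKE ':%' THEN -- metadata entry: constant value, not a rename
                  PERFORM util.mk_const_col(table_, row_."to",
                      substring(row_."from" from 2)); -- 1-based strings: start at 2 to drop the ':'
              END IF;
          END LOOP;
      END $$;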
schemas/util.sql: try_create(), create_if_not_exists(): use eval() so the executed statement will be echoed for debugging
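  a sketch of the eval() pattern (assumed implementation):

      CREATE FUNCTION eval_demo(sql text) RETURNS void LANGUAGE plpgsql AS $$
      BEGIN
          RAISE NOTICE '%', sql; -- echo the statement for debugging
          EXECUTE sql;
      END $$;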
schemas/util.sql: added set_col_names_with_metadata()
bugfix: lib/sh/sync.sh: upload(): paths: don't dereference the path itself if it's a symlink; instead canonicalize just its parent dir. this allows syncing a specific file which is a symlink, rather than syncing the symlink's target.
lib/sh/util.sh: added canon_dir_rel_path(), which canonicalizes just the parent dir if the path is a symlink, to leave the symlink itself untouched
planning/workflow/validation/: archived BIEN2 validations documents which have been superseded by planning/goals/BIEN3_derived_data_products.docx, to avoid confusion