lib/util.run: Added echo_cmd and use it in echo_run
lib/util.run: echo_cmd(): Renamed to echo_run for clarity, because it also runs the command
lib/util.run: Added inline_make()
lib/util.run: Added echo_stdin()
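
A minimal sketch of how these helpers could fit together (the bodies here are guesses, not the actual definitions in lib/util.run):

    # hypothetical sketches of the helpers named above
    echo_cmd() { echo "$@" >&2; }           # just print a command, without running it
    echo_run() { echo_cmd "$@"; "$@"; }     # print the command, then run it
    echo_stdin() { tee /dev/stderr; }       # echo piped-in input while passing it through
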
bin/my2pg_export: Put --password first because it's an authentication-related option
Added lib/table.run, which includes the commands in import.sh but uses run scripts to allow running commands other than just import. (For example, map_table or postprocess can be run separately. Uninstall-related commands which would not belong in an import script can also be added, because import is only one of many commands a run script can offer.)
Added lib/util.run with general functions and template for run scripts (a bash-based replacement for make). Unlike make, run scripts support full bash functionality including multiline commands. The run script template also includes syntax for various kinds of relative includes in bash.
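
For illustration only (not the actual template), a run script is essentially a set of bash functions dispatched from the command line, so any target can be a full multiline bash command:

    #!/bin/bash -e
    # hypothetical run script sketch (not the real lib/util.run template)
    . "$(dirname "${BASH_SOURCE[0]}")"/util.run   # one form of relative include

    import()      { echo_run echo "install and map the staging table"; }
    postprocess() { echo_run echo "run postprocess.sql"; }
    uninstall()   { echo_run echo "drop the staging table"; }

    "$@"   # run the target named on the command line, e.g. `./table.run import`
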
lib/common.Makefile: Added $(require_var)
bin/publish_analytical_db: Fixed bug where need to remove `ESCAPED BY '"'` because this would cause `"` followed by an escape-sequence char to be interpreted specially (e.g. `"n` -> `\n`). MySQL automatically takes care of quote doubling when you specify `FIELDS OPTIONALLY ENCLOSED BY`.
lib/common.Makefile: Compression: Added `%:: %.gz`, `%.gz: %`
planning/workflow/import_process_comparison.odg: Moved "staging tables" under the method labels to reduce empty space
planning/workflow/import_process_comparison.odg: Removed margins so the labels would align with the page margin on the Import process wiki page <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/Import_process>
Added planning/workflow/import_process_comparison.odg and .png export
lib/db_xml.py: put_table(): Fixed bug where command to advance start to fetch next set was unintentionally deleted when removing the is_view check
inputs/UNCC/Specimen/new_terms.csv: Updated for updated VegCore vocab
inputs/GBIF/_MySQL/GBIFPortalDB-2013-02-20.data.sql.md5: Regenerated after appending agent table to GBIFPortalDB-2013-02-20.data.sql
Added inputs/GBIF/_MySQL/GBIFPortalDB-2013-02-20.data.sql.gz.md5
Added inputs/GBIF/raw_occurrence_record/ from refresh
inputs/GBIF/MySQL.schema.sql: Regenerated with inline enum type translated to CHECK constraint
bin/my2pg: Translate inline enum type to CHECK constraint
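
The translation amounts to roughly the following (a rough sed sketch, not bin/my2pg's actual program):

    # rough sketch only: rewrite e.g. "`kingdom` enum('plant','animal')" as
    #                                 "`kingdom` text CHECK (kingdom in ('plant','animal'))"
    sed -r 's/^( *`?)([a-zA-Z_]+)(`? +)enum\(([^)]*)\)/\1\2\3text CHECK (\2 in (\4))/'  # MySQL dump on stdin
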
Added inputs/GBIF/**/MySQL.schema.sql
Added inputs/GBIF/_MySQL/MySQL.*.sql.make
inputs/FIA/: Archived no longer used subdirs from BIEN2 export
inputs/input.Makefile: SVN: add: Removed Source/map.csv prerequisite because it is not related to adding unversioned files in the dir. It was originally a prerequisite in order to auto-create it when the datasource dir is first created, but the map.csv recipe does not currently create metadata-only map.csvs. In the future, metadata-only map.csvs will be replaced with constant columns added to the applicable tables.
Added inputs/FIA/_archive
inputs/input.Makefile: %/map.csv: Fixed bug where header.csv can only be made if map.csv does not exist, because some subdirs are metadata-only and don't have a corresponding DB table
README.TXT: Datasource setup: Install the staging tables: For a MySQL .sql export: Documented which password to use at each of the two password prompts my2pg_export will give you. You could also embed the value of the 2nd prompt in the _MySQL/*.make file using `--password="$(cat path/to/config/bien_password)"`.
README.TXT: Datasource setup: Install the staging tables: Removed requirement that `make inputs/<datasrc>/reinstall quiet=1 &` be run on vegbiendev for MySQL .sql exports, because the hostname is now set to vegbiendev instead of localhost
inputs/input.Makefile: sql/install: Use psql_script_vegbien instead of $(psqlNoSearchPath) (which uses psql_verbose_vegbien) because the insert statement for each data row should not be echoed
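
For reference, the difference between the two wrappers is roughly the following (sketched here as hypothetical shell functions; the real wrappers are separate scripts and may use different options):

    # hypothetical definitions, for illustration of the verbose/quiet distinction
    psql_verbose_vegbien() { psql --echo-all --dbname=vegbien "$@"; }  # echoes each statement
    psql_script_vegbien()  { psql --quiet    --dbname=vegbien "$@"; }  # runs silently
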
inputs/FIA/occurrence_all/import: Run remake_VegBIEN_mappings at end to keep mappings to next stage of import process up to date
inputs/FIA/occurrence_all/: Accepted new test output
lib/import.sh: remake_VegBIEN_mappings(): Also remake VegBIEN.csv and test.xml.ref using `make test`
lib/import.sh: Added remake_VegBIEN_mappings()
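
Combining the two entries above, the function might look roughly like this (hypothetical sketch; the real targets and options in lib/import.sh may differ):

    remake_VegBIEN_mappings() # hypothetical sketch
    {
        rm --force map.csv VegBIEN.csv test.xml.ref  # force the targets to be regenerated
        make map.csv VegBIEN.csv                     # remake the mappings to the next stage
        make test                                    # also remakes test.xml.ref
    }
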
inputs/input.Makefile: %/map.csv: make $*/header.csv first in case it doesn't exist (e.g. if it has been deleted so that it will be remade)
inputs/FIA/occurrence_all/map.csv: Regenerated using new input table mappings
lib/import.sh: Added make() and use it instead of the full make command
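
The wrapper might be as small as the following (hypothetical; `$top_dir` and the options shown are illustrative):

    make() { command make --directory="$top_dir" --no-print-directory "$@"; }  # $top_dir: illustrative
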
inputs/input.Makefile: postprocess: Use %/postprocess instead of %/postprocess.sql/run so $*/import is also run
inputs/FIA/: Ran inputs/FIA/import. This maps to VegCore's commonName.
inputs/input.Makefile: %/postprocess: Also run the $*/import script, if it exists. Note that this is not the same as the %/import make target.
inputs/input.Makefile: %/postprocess.sql/run: Factored out into separate %/postprocess command, which can eventually also perform other actions
inputs/FIA/PLOT/map.csv: ELEV: Remapped to elevation_ft, assuming units based on the actual elevation of the region for a sample plot record
inputs/VegBank/taxonobservation_/map.csv: Mapped int_currplantcommon to vernacularName
mappings/VegCore.htm: Renamed salvias_plots table plotMetadata to PlotMetadata because of SALVIAS refresh on nimoy
mappings/VegCore.htm: Regenerated from wiki. Added flower, fruit, commonName.
mappings/Makefile: $(vocab); bin/redmine_synonyms: Support crossed out (deprecated) terms
README.TXT: Maintenance: VegCore data dictionary: Added steps to update the data dictionary's Tables section if necessary
inputs/GBIF/_MySQL/Makefile: %.data.sql: Added agent table
Added inputs/GBIF/_MySQL/GBIFPortalDB-2013-02-20.data.sql.md5
Added inputs/GBIF/_MySQL/GBIFPortalDB-2013-02-20.schema.sql
Added web/main/svn*/, now using .htaccess to forward to Redmine/*
Removed web/main/svn, svn-web symlinks because they need to be .htaccess files in order for the relative mod_rewrite rules to work correctly
Added web/main/svn, svn-web symlinks to Redmine/* for shorter URLs
Added web/main/Redmine/svn-web/
inputs/GBIF/: Added scripts for subsetting refresh
lib/sql.py: table_order_by(): Documented that it returns None if table is a view, because table_cluster_on() would return None. This is necessary for inputs/FIA/occurrence_all/ sorting to work correctly, because specifying a manual sort order would prevent the query planner from just using fast nested loop joins and instead cause it to perform a slow sort. (This appears to be a bug in the query planner, because when the column list specified matches the joined-on indexes, there should be no need for post-nested loop re-sorting.)
inputs/FIA/occurrence_all/test.xml.ref: Updated inserted row count for new row sort order
lib/db_xml.py: put_table(): Fixed bug where also need to advance start to fetch next set when table is a view, because the views that are now being used with the import (inputs/FIA/occurrence_all/) are static rather than dynamic and do not return different rows after the previous set of rows has been imported
inputs/FIA/occurrence_all/import: Removed no longer applicable comment that directional joins are needed for PostgreSQL query planner to avoid slow sorts
inputs/FIA/TREE/import: Reclustered table by TREE.parent path index, to facilitate path-order joins
inputs/FIA/occurrence_all/import: Changed all RIGHT JOINs to inner joins so that tables would be joined in path order (i.e. general->specific). This optimizes the incremental joins so that the small tables are joined to each other before being joined to the large tables, rather than each row of the large tables being looked up in the small tables. This effect may not be noticeable for small LIMIT values, but would become apparent for large LIMIT values, such as the 1-million-row partitions used by db_xml.put_table() for column-based import. Note that inner joins used to cause the query planner to produce incorrect results containing slow sorts, but now this appears to no longer be an issue, perhaps because the result is not sorted by the TREE.ID index (which is not in the same order as the path indexes *.unique, *.parent).
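
Illustrative only (FIA-style table/column names, not the actual occurrence_all definition): joining in path order (general -> specific) keeps each incremental join small before the large table is brought in:

    # illustrative query, not the actual view definition
    echo '
    SELECT "TREE".*
    FROM       "PLOT"                                   -- most general table first
    INNER JOIN "COND" ON "COND"."PLT_CN" = "PLOT"."CN"  -- small tables joined to each other
    INNER JOIN "TREE" ON "TREE"."PLT_CN" = "PLOT"."CN"  -- large, specific table last
    LIMIT 1000000;                                      -- one column-based import partition
    ' | psql_script_vegbien
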
inputs/FIA/occurrence_all/import: Removed trailing whitespace
Removed unused inputs/FIA/COND_unique/. Use COND instead.
inputs/FIA/import: Use `set -o errexit` instead of putting ` || exit` after each command
lib/import.sh: map_table(): Removed unneeded () around psql. This also fixes a bug where an error exit status from psql would not have aborted the script because `set -o errexit` does not apply to commands enclosed in (). For () you need to use ` || exit` instead (or ` || return` inside a function).
lib/import.sh: Use `set -o errexit` so any command that exits with an error aborts the script. Note that a command's exit status can still be ignored using ` || true`. Removed no longer needed ` || return` in functions.
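
For example:

    set -o errexit   # abort the script if any command fails

    false || true    # an individual command's failure can still be ignored
    echo "this line still runs"

    false            # an unhandled failure aborts the script here
    echo "this line is never reached"
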
schemas/util.sql: Renamed rename_if_exists() to try_create() because it can be used to create a column in any way, not just by renaming another column
lib/import.sh: functions: Abort if a command encounters an error
schemas/VegCore/mk_derived: Added cultivated from oldGrowth
schemas/util.sql: Added try_mk_derived_col()
inputs/FIA/*/import: Run mk_derived after postprocessing commands
inputs/FIA/import_order.txt: Added occurrence_all/
mappings/VegCore-VegBIEN.csv: subplotID,subplot -> location.sourceaccessioncode: Fixed bug where need /_first to handle the case where both subplotID and subplot are provided
Added inputs/FIA/map.csv, which maps shared columns to VegCore
inputs/FIA/FIA_COND_unique/test.xml.ref: Updated now that PLOT, CONDID have been mapped
inputs/FIA/*/map.csv for pre-refresh tables: Added back * before unmapped column names
lib/csvs.py: stream_info(): Fixed bug where headers with multiline columns were not supported because only the first line (not the first multiline row) is sniffed for the dialect
inputs/input.Makefile: %/header.csv: Fixed bug where newlines inside column names were incorrectly formatted by psql's table header formatting, by using COPY TO STDOUT instead
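
A minimal sketch of the idea (illustrative schema/table names): COPY's CSV output quotes embedded newlines correctly, unlike psql's aligned table headers:

    # illustrative schema/table; LIMIT 0 emits just the CSV header row
    echo 'COPY (SELECT * FROM "FIA"."COND" LIMIT 0) TO STDOUT WITH (FORMAT csv, HEADER);' \
        | psql_script_vegbien >header.csv
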
schemas/util.sql: Added do_optionally_ignore()
lib/import.sh: Added mk_derived(). Added mk_derived to usage template.
Added schemas/VegCore/mk_derived, which will be run in the import scripts
lib/import.sh: psql(): Set psql vars :schema, :table, :table_str for use by the psql commands
lib/import.sh: Export $schema, $table so they are available to programs invoked within an import script, which should not reset these vars if they include import.sh
lib/import.sh: Only set $table, $schema if they don't already exist
lib/import.sh: Added $root_dir and use it in $bin_dir
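
Taken together, the pieces above might look roughly like this in lib/import.sh (hypothetical sketch; the variable derivations and wrapper options are guesses):

    # hypothetical sketch; derivations and options are illustrative
    root_dir="$(dirname "${BASH_SOURCE[0]}")"/..        # lib/ -> repository root
    bin_dir="$root_dir"/bin

    : "${schema:=$(basename "$(dirname "$PWD")")}"      # only set if not already set; e.g. inputs/FIA/COND/ -> FIA
    : "${table:=$(basename "$PWD")}"                    #                                                    -> COND
    export schema table   # so programs invoked by the import script see them too

    psql() # pass the current schema/table through as psql variables
    {
        psql_script_vegbien --set=schema="$schema" --set=table="$table" \
            --set=table_str="'$table'" "$@"
    }
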
inputs/FIA/*/import: Use new mk_*_col()
schemas/*functions.sql: Renamed to *util.sql because now that these schemas are used by the new-style import scripts, there can be more than just functions in them
schemas/util.sql: Added mk_const_col()
schemas/util.sql: Added type_qual()
schemas/util.sql: mk_derived_col(): Added "idempotent" comment
schemas/util.sql: Added mk_derived_col()
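
In plain SQL, a derived-column helper of this kind boils down to roughly the following (illustrative names and expression, using the cultivated/oldGrowth example from the mk_derived entry above; the actual util.mk_derived_col() signature and body differ):

    # illustrative only; not the actual util.mk_derived_col() implementation
    echo '
    ALTER TABLE "FIA"."COND" ADD COLUMN IF NOT EXISTS cultivated boolean;  -- idempotent
    UPDATE "FIA"."COND" SET cultivated = NOT "oldGrowth";                  -- the derived expression
    ' | psql_script_vegbien
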
inputs/FIA/COND/import: oldGrowth: Updated expr column names
schemas/util.sql: Added typeof(text, regtype)
inputs/FIA/*/import: Removed util. before function names because util is in the search_path
schemas/functions.sql: Added existing_cols()
schemas/functions.sql: col_type(): Fixed bug where a NULL col name crashed the undefined_column throw, because MESSAGE can't be NULL and the NULL name was nulling out the entire message
schemas/functions.sql: Added col_exists()
inputs/FIA/COND/map.csv: Mapped SLOPE, ASPECT