schemas/util.sql: mk_subset_by_row_num_func(): regular subset function: Fixed bug where 1 must be added to the 0-based offset_ to get the 1-based row_num (which is usually a serial column)
schemas/util.sql: mk_subset_by_row_num_func(): regular subset function: Fixed bug where 1 must be subtracted from the end row_num, because BETWEEN is inclusive of both bounds
schemas/util.sql: mk_subset_by_row_num_func(): regular subset function: Fixed bug where offset_ must also be COALESCEd to 0 when it is added to limit_. (Together, these fixes make the selected window row_num BETWEEN COALESCE(offset_, 0) + 1 AND COALESCE(offset_, 0) + limit_.)
schemas/util.sql: mk_subset_by_row_num_func(): subset function which turns off enable_sort: Fixed bug where ($2, $3) must be passed instead of ($1, $2) to the regular subset function
inputs/FIA/occurrence_all/import: Added occurrence_all-row_num column for use with mk_subset_by_row_num_func()
schemas/util.sql: mk_subset_by_row_num_func(): Also create subset function which turns off enable_sort. This is used for limit values greater than ~100,000 to avoid unwanted slow sorts. The regular subset function is still needed to work with EXPLAIN, so that it produces expanded output instead of just a function scan.
schemas/util.sql: Added mk_subset_by_row_num_func()
schemas/util.sql: Added type_qual_name()
schemas/util.sql: force_update_view(): Fixed bug where the view must also be dropped for "cannot change name of view column" errors
inputs/FIA/occurrence_all/import: Use new force_update_view(), which only drops the view if its columns have changed and otherwise just uses CREATE OR REPLACE VIEW, rather than always first running DROP VIEW IF EXISTS
schemas/util.sql: Added force_update_view()
bin/make_analytical_db: Commented out export_analytical_db because we are not yet using the analytical DB in MySQL, and it doesn't make sense to generate a large, unused CSV export each time
bin/export_analytical_db: Replaced analytical_aggregate with analytical_stem
inputs/FIA/occurrence_all/: Updated header.csv for new column order
inputs/FIA/occurrence_all/import: Use directional joins (LEFT/RIGHT JOIN) instead of inner joins to ensure that the PostgreSQL query planner always joins starting with the TREE table. Note that the directional joins are now needed for a different reason than when they were initially added, which had been to avoid slow sorts. The sorts (at least for LIMIT-only queries) went away when small tables such as COUNTY and REF_UNIT were added to the joins.
inputs/FIA/*/map.csv: Changed the newlines between table and field name to `-`, because the newlines mess up the flow of queries and also break pgAdmin's display of EXPLAIN output. `-` was chosen because it's a non-whitespace character that linewraps in browsers, phpPgAdmin, and Google spreadsheets (although unfortunately not in pgAdmin). It is better than a space because you can set a text editor to treat it as a word character, allowing the entire column name (<table>-<field>) to be selected by double-clicking it.
Added planning/workflow/normalized_vs_denormalized/denormalized.generic_standardizations.png (a slide from Brad's bien3_architecture_denormalized.pptx PowerPoint), which shows the staging table preprocessing particularly well
README.TXT: Full database import: record the import times in inputs/import.stats.xls: Added instructions for what to do if the rightmost imports start getting truncated due to the 255-column limit in spreadsheets. (This will occur in 8 imports.)
inputs/import.stats.xls: Removed the previous imports from the current tab because they are also in the 2012-6~9 tab, and should not be in two places
inputs/import.stats.xls: Updated import times. MO and FIA have been refreshed.
Removed no longer needed inputs/GBIF/import. Use ./run instead.
Removed no longer needed inputs/GBIF/_MySQL/import. Use ./run instead.
inputs/GBIF/_MySQL/run: import: Run make directly instead of via ./import
inputs/GBIF/_MySQL/run: Use new import.run, which defines all()
Added planning/workflow/normalized_vs_denormalized/bien3_architecture_(de)normalized.pptx
Added planning/workflow/normalized_vs_denormalized/BIEN-modArch-Dec2010 NS-SBD 1.4.ppt.url
planning/workflow/: Moved normalized vs. denormalized files to separate normalized_vs_denormalized/ subfolder
Regenerated inputs/ACAD/Specimen/logs/steps.by_col.log.sql
inputs/GBIF/raw_occurrence_record/run: Override MySQL_export() so $filter can be customized
inputs/GBIF/table.run: import(): Updated for lib/table.run template changes
lib/table.run: template: import(): Also pass "$@" to superclass method
lib/table.run: template: Use "$FUNCNAME" instead of hardcoding import
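These two template changes suggest an overriding pattern roughly like the following (a hypothetical sketch: the log doesn't show how the template actually exposes the superclass method, so the super_* naming here is an assumption):

    import()
    {
        echo_func "$@"
        # "$FUNCNAME" (here: import) avoids hardcoding the method name,
        # and "$@" forwards the caller's args to the superclass method
        super_"$FUNCNAME" "$@"  # super_* naming is assumed
    }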
Added inputs/GBIF/MySQL_export, used by ./table.run
lib/util.run: echo_func: Fixed bug where BASH_LINENO[0] must be used for the line #s to match up with the files. The required array indexes for BASH_SOURCE (1) and BASH_LINENO (0) differ by one.
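The offset is documented behavior: per the bash manual, ${BASH_LINENO[i]} is the line number at which ${FUNCNAME[i]} was called, and that call site is located in the file ${BASH_SOURCE[i+1]}, so for the same stack frame the two arrays' indexes differ by one. A minimal illustration (hypothetical helper name; not the actual echo_func):

    # print the file and line of the call to this helper:
    # ${BASH_SOURCE[1]} is the file containing the caller, while
    # ${BASH_LINENO[0]} is the line *in that same file* of the call
    echo_call_site() { echo "${BASH_SOURCE[1]}:${BASH_LINENO[0]}" >&2; }

    import()
    {
        echo_call_site  # logs <file defining import()>:<this line's #>
    }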
inputs/GBIF/run: Use new import.run, which defines all()
lib/table.run: Use new import.run, which defines all()
Added lib/import.run
lib/util.run: echo_func: Include the line # of the function to make it easier to find where the code being run is
lib/table.run: Added all (default target)
lib/util.run: run_cmd: If bash exited with an error, don't run the "$@" command. This test is necessary because `trap run_cmd EXIT` will run run_cmd as the result of any exit from the shell, including an error.
*run: Use the -e option to bash on the #! line instead of a separate `set -o errexit` line, so that the errexit setting cannot get separated from the #! line (errexit is required for the scripts to work properly)
lib/util.run: run_cmd: When no command specified, default to running the `all` command, just like make
lib/util.run: Run run_cmd at shell exit (using trap) instead of requiring every runscript to end with a `run_cmd` call
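Taken together, the four entries above amount to a pattern along these lines (a minimal sketch assuming this wiring; the actual lib/util.run may differ):

    #!/bin/bash -e
    # -e (errexit) is on the #! line itself, so it can never get
    # separated from the interpreter line

    all() { :; }  # default command, like make's default target

    run_cmd()
    {
        # `trap run_cmd EXIT` fires on *every* shell exit, including an
        # errexit-triggered failure, so skip the command in that case
        [ "$?" = 0 ] || return 0
        "${@:-all}"  # no command specified -> run `all`, just like make
    }
    # run the requested command when the shell exits, so runscripts
    # don't each need a trailing run_cmd invocation
    trap 'run_cmd "$@"' EXIT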
Added inputs/GBIF/run
Added inputs/GBIF/raw_occurrence_record/run
Added inputs/GBIF/table.run
Added inputs/GBIF/_MySQL/run
lib/util.run: fwd: Check that $subdirs is defined. Added $subdirs to usage.
lib/util.run: fwd: Added usage
lib/table.run: Switched from echo_run to echo_func
lib/util.run: run_cmd: Echo the command being run, including the top-level run script. This is in addition to the echoing of the command in the function itself (using echo_func); together these provide both the runscript that was run and the file where the invoked command is actually located (which may differ due to includes).
lib/util.run: Echo the command at the beginning of each function using new echo_func, instead of having to type echo_run before every call to a function. Note that because echo_func uses BASH_SOURCE, the path to the file containing the function will be included in the debug message, which greatly facilitates locating which file a command is in.
lib/util.run: Added echo_func
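A sketch of the echo_func idea (the message format here is hypothetical):

    # print the calling function's file, name, and args to stderr;
    # ${BASH_SOURCE[1]} names the file *defining the caller*, which
    # pinpoints where an invoked command actually lives across includes
    echo_func() { echo "${BASH_SOURCE[1]}: ${FUNCNAME[1]}($*)" >&2; }

    map_table()
    {
        echo_func "$@"  # one line at the top of each function replaces
                        # typing echo_run before every call to it
        # ... actual commands ...
    }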
lib/util.run: Added echo_cmd and use it in echo_run
lib/util.run: echo_cmd(): Renamed to echo_run for clarity, because it also runs the command
lib/util.run: Added inline_make()
lib/util.run: Added echo_stdin()
bin/my2pg_export: Put --password first because it's an authentication-related option
Added lib/table.run, which includes the commands in import.sh but uses run scripts to allow running commands other than just import. (For example, map_table or postprocess can be run separately. Uninstall-related commands which would not belong in an import script can also be added, because import is only one of many commands a run script can offer.)
Added lib/util.run with general functions and template for run scripts (a bash-based replacement for make). Unlike make, run scripts support full bash functionality including multiline commands. The run script template also includes syntax for various kinds of relative includes in bash.
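For illustration, a runscript built on this template might look as follows (the include path and the import body are hypothetical):

    #!/bin/bash -e
    # include lib/util.run relative to this script's own location:
    . "$(dirname "${BASH_SOURCE[0]}")"/../../lib/util.run

    import()
    {
        echo_func "$@"
        # full bash is available here (multiline commands, pipelines,
        # conditionals), unlike in a make recipe
    }

Under the run_cmd/trap scheme above, `./run import` would invoke import(), and a bare `./run` would fall back to all().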
lib/common.Makefile: Added $(require_var)
bin/publish_analytical_db: Fixed bug where `ESCAPED BY '"'` must be removed, because it caused `"` followed by an escape-sequence char to be interpreted specially (e.g. `"n` -> \n). MySQL automatically takes care of quote doubling when you specify `FIELDS OPTIONALLY ENCLOSED BY`.
lib/common.Makefile: Compression: Added `%:: %.gz`, `%.gz: %`
planning/workflow/import_process_comparison.odg: Moved "staging tables" under the method labels to reduce empty space
planning/workflow/import_process_comparison.odg: Removed margins so the labels would align with the page margin on the Import process wiki page <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/Import_process>
Added planning/workflow/import_process_comparison.odg and .png export
lib/db_xml.py: put_table(): Fixed bug where the command that advances start to fetch the next set had been unintentionally deleted when the is_view check was removed
inputs/UNCC/Specimen/new_terms.csv: Updated for updated VegCore vocab
inputs/GBIF/_MySQL/GBIFPortalDB-2013-02-20.data.sql.md5: Regenerated after appending agent table to GBIFPortalDB-2013-02-20.data.sql
Added inputs/GBIF/_MySQL/GBIFPortalDB-2013-02-20.data.sql.gz.md5
Added inputs/GBIF/raw_occurrence_record/ from refresh
inputs/GBIF/MySQL.schema.sql: Regenerated with inline enum type translated to CHECK constraint
bin/my2pg: Translate inline enum type to CHECK constraint
Added inputs/GBIF/**/MySQL.schema.sql
Added inputs/GBIF/_MySQL/MySQL.*.sql.make
inputs/FIA/: Archived no longer used subdirs from BIEN2 export
inputs/input.Makefile: SVN: add: Removed Source/map.csv prerequisite because it is not related to adding unversioned files in the dir. It was originally a prerequisite in order to auto-create it when the datasource dir is first created, but the map.csv recipe does not currently create metadata-only map.csvs. In the future, metadata-only map.csvs will be replaced with constant columns added to the applicable tables.
Added inputs/FIA/_archive
inputs/input.Makefile: %/map.csv: Fixed bug where header.csv must only be made if map.csv does not exist, because some subdirs are metadata-only and don't have a corresponding DB table
README.TXT: Datasource setup: Install the staging tables: For a MySQL .sql export: Documented which password to use at each of the two password prompts my2pg_export will give you. You could also embed the value of the 2nd prompt in the _MySQL/*.make file using `--password="$(cat path/to/config/bien_password)"`.
README.TXT: Datasource setup: Install the staging tables: Removed requirement that `make inputs/<datasrc>/reinstall quiet=1 &` be run on vegbiendev for MySQL .sql exports, because the hostname is now set to vegbiendev instead of localhost
inputs/input.Makefile: sql/install: Use psql_script_vegbien instead of $(psqlNoSearchPath) (which uses psql_verbose_vegbien) because the insert statement for each data row should not be echoed
inputs/FIA/occurrence_all/import: Run remake_VegBIEN_mappings at end to keep mappings to next stage of import process up to date
inputs/FIA/occurrence_all/: Accepted new test output
lib/import.sh: remake_VegBIEN_mappings(): Also remake VegBIEN.csv and test.xml.ref, using `make test`
lib/import.sh: Added remake_VegBIEN_mappings()
inputs/input.Makefile: %/map.csv: Make $*/header.csv first, in case it doesn't exist (e.g. if it has been deleted so that it will be remade)
inputs/FIA/occurrence_all/map.csv: Regenerated using new input table mappings
lib/import.sh: Added make() and use it instead of the full make command
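A plausible shape for such a wrapper (the flags shown are assumptions; the point is that shadowing make with a function requires `command` to reach the real binary):

    make()
    {
        # `command make` bypasses this wrapper, preventing recursion
        command make --makefile=../input.Makefile "$@"  # flags assumed
    }

Call sites can then just write e.g. `make test` instead of repeating the full invocation.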
inputs/input.Makefile: postprocess: Use %/postprocess instead of %/postprocess.sql/run so $*/import is also run
inputs/FIA/: Ran inputs/FIA/import. This maps to VegCore's commonName.
inputs/input.Makefile: %/postprocess: Also run the $*/import script, if it exists. Note that this is not the same as the %/import make target.
inputs/input.Makefile: %/postprocess.sql/run: Factored out into separate %/postprocess command, which can eventually also perform other actions
inputs/FIA/PLOT/map.csv: ELEV: Remapped to elevation_ft, assuming units based on the actual elevation of the region for a sample plot record
inputs/VegBank/taxonobservation_/map.csv: Mapped int_currplantcommon to vernacularName
mappings/VegCore.htm: Renamed salvias_plots table plotMetadata to PlotMetadata because of SALVIAS refresh on nimoy
mappings/VegCore.htm: Regenerated from wiki. Added flower, fruit, commonName.
mappings/Makefile: $(vocab); bin/redmine_synonyms: Support crossed out (deprecated) terms
README.TXT: Maintenance: VegCore data dictionary: Added steps to update the data dictionary's Tables section if necessary