Added inputs/FIA/occurrence_all/, which combines all the core tables in a denormalized view. Note that it is not necessary to materialize this view into a (large) denormalized table, because the unique indexes and left/right joins allow the rows to be denormalized on the fly.
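    A minimal sketch of the idea (table, column, and join names are
    illustrative, not the actual view definition; SELECT * is
    collision-free only because to_global_col_names() already made the
    non-join column names globally unique, and USING works because the
    join columns were renamed to match):

        -- denormalize on the fly: with unique indexes on the join
        -- columns, each output row is assembled cheaply at query time
        CREATE VIEW "occurrence_all" AS
        SELECT *
        FROM "PLOT"
        LEFT JOIN "COND" USING ("PLT_CN")
        LEFT JOIN "TREE" USING ("PLT_CN", "CONDID");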
inputs/FIA/*/import: Use map_table to set column names based on the contents of map.csv, instead of using functions.to_global_col_names() and functions.rename_if_exists(). Added map.csv for all tables.
inputs/FIA/: Changed postprocess.sql scripts to import scripts that can be run directly. Added top-level inputs/FIA/import to run all of them together.
inputs/FIA/COND/postprocess.sql: Removed trailing whitespace
Added lib/import.sh, for use by the new, simpler import scripts that FIA now uses. Note that for now, input.Makefile is still used to create map.csv.
inputs/input.Makefile: Moved postprocess.sql from $(exportHeader) to %/install because that is not part of the $(exportHeader) functionality. Added %/header.csv and use it in $(exportHeader).
inputs/input.Makefile: $(catSrcs): Fixed bug where need to use $(nonHeaderSrcs) instead of $(srcs) to exclude header.csv
schemas/functions.sql: map: Added additional columns that are present in the standard map spreadsheet format (filter, notes). These columns are necessary to make COPY FROM work, because it requires the # of columns to be the same in the input data and the output table.
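    A sketch of the resulting map schema (the standard spreadsheet
    format's column set; constraints are assumptions):

        CREATE TABLE map (
            "from" text NOT NULL,
            "to" text,
            filter text, -- carried along only so COPY FROM accepts the file
            notes text
        );
        -- COPY FROM requires the input data and the target table to have
        -- the same number of columns, so filter and notes cannot be omitted:
        COPY map FROM '/path/to/map.csv' CSV HEADER; -- path is a placeholder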
inputs/input.Makefile: Moved $(cleanup) from $(exportHeader) to %/install because this is not part of exportHeader's functionality
inputs/input.Makefile: $(mkSrcMap): Use header.csv instead of the header of the CSVs, so that the column list in the map spreadsheet matches the actual DB table
inputs/input.Makefile: %.sql/run: Change to the directory the file is located in, so that includes (\i) are relative to the file, rather than relative to whatever happens to be the current directory
inputs/input.Makefile: %/install: Always generate a header.csv, even for CSV inputs with their own header. This will include the actual column names in the staging table, which may differ from their names in the CSVs (e.g. the addition of row_num). Note that header.csv is not included in the CSVs list itself, and will not override the header or dialect in them.
schemas/functions.sql: Added set_col_names()
schemas/functions.sql: rename_if_exists(): Also ignore duplicate_column exceptions, which are generated when a column is renamed to itself (as well as when two columns are renamed to the same name)
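    A sketch of the exception handling (signature assumed):

        CREATE OR REPLACE FUNCTION rename_if_exists(table_ regclass,
            old text, new text)
        RETURNS void LANGUAGE plpgsql AS $$
        BEGIN
            EXECUTE 'ALTER TABLE '||table_||' RENAME COLUMN '
                ||quote_ident(old)||' TO '||quote_ident(new);
        EXCEPTION
            WHEN undefined_column THEN NULL; -- no such column: nothing to do
            WHEN duplicate_column THEN NULL; -- renamed to itself, or two
                -- columns renamed to the same name: ignore
        END;
        $$;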
schemas/functions.sql: Added col_names(regclass), which unlike col_names(regtype) returns names in the order they are in the table
schemas/functions.sql: Added map_values()
schemas/functions.sql: map_get(): Fixed bug where can't use STRICT in EXECUTE INTO because there will sometimes be no match, causing a "query returned no rows" error
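    A sketch of the fix (signature and table layout assumed): drop STRICT
    so that a missing key yields NULL instead of an error:

        CREATE OR REPLACE FUNCTION map_get(map regclass, key text)
        RETURNS text LANGUAGE plpgsql STABLE AS $$
        DECLARE
            value text;
        BEGIN
            -- no STRICT on INTO: there will sometimes be no match, and
            -- STRICT would raise "query returned no rows"
            EXECUTE 'SELECT "to" FROM '||map||' WHERE "from" = $1'
                INTO value USING key;
            RETURN value;
        END;
        $$;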
schemas/functions.sql: rename_cols(): Support any type for the renames argument that provides an -> operator (e.g. hstore, or regclass via the new ->(regclass, text) operator)
schemas/functions.sql: Added operator ->(regclass, text)
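    Presumably a thin wrapper around map_get() (next entry), so lookups
    can be written infix:

        CREATE OPERATOR -> (
            PROCEDURE = map_get,
            LEFTARG = regclass,
            RIGHTARG = text
        );
        -- hypothetical usage, with a "TREE.map" table:
        -- SELECT '"TREE.map"'::regclass -> 'CN';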
schemas/functions.sql: Added map_get()
schemas/functions.sql: table2hstore(): Made it STABLE instead of IMMUTABLE because the input table is not constant
schemas/functions.sql: Added table2hstore()
schemas/functions.sql: Added reset_map_table()
schemas/functions.sql: Added truncate()
schemas/functions.sql: mk_map_table(): Use the sql language instead of plpgsql because EXECUTE is not used directly, so plpgsql is not actually needed
schemas/functions.sql: mk_map_table(): Store map table schema in separate `map` table and extend it using LIKE, for easier maintainability of the map schema
schemas/functions.sql: Added mk_map_table()
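    A sketch of how the LIKE-based version might look (the <table>.map
    naming and the create_if_not_exists() signature are assumptions):

        CREATE OR REPLACE FUNCTION mk_map_table(table_ text)
        RETURNS regclass LANGUAGE sql AS $func$
            -- copy the schema of the master `map` table, so the map
            -- format is maintained in just one place
            SELECT create_if_not_exists('CREATE TABLE '
                ||quote_ident($1||'.map')||' (LIKE map INCLUDING ALL)');
            SELECT quote_ident($1||'.map')::regclass;
        $func$;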
schemas/functions.sql: ensure_prefix(): Made it IMMUTABLE instead of STABLE
schemas/functions.sql: Added rename_cols()
inputs/FIA/*/postprocess.sql: Avoid using :table, :table_str so that the commands in the script can also be run by pasting them into pgAdmin
README.TXT: Full database import: Manual steps to run TNRS/remake analytical DB: Added `export version=<version>` to ensure that the import is run into the correct schema. Since these instructions are for running commands separately from the rest of the import, it's important to first ensure that the import environment is set up properly.
schemas/vegbien.ERD.mwb: Added taxon_trait to ERD
schemas/vegbien.ERD.mwb: Regenerated exports
schemas/vegbien.sql: Removed unused analytical_aggregate table, because analytical_stem provides much more detailed, higher-quality data, both in terms of the number of rows and the number of columns. analytical_aggregate has also long been out of sync with the analytical DB schema, and it doesn't make sense to spend processing time in make_analytical_db performing the DISTINCT ON if the table isn't being used. We may revisit analytical_aggregate later once we have ID fields for each entity in the DISTINCT ON and can avoid DISTINCTing on all analytical_aggregate columns.
inputs/FIA/*/postprocess.sql: Added index on *.CN (autogen IDs)
README.TXT: Full database import: Added steps to use `screen` to allow recovering from a closed terminal window
inputs/FIA/TREE/postprocess.sql: TREE.unique index: Renamed to TREE.ID because this is on an autogenerated pkey rather than on domain values (for which a set of unique columns has not yet been found and may not exist)
inputs/FIA/REF_SPECIES/postprocess.sql: Matched SPECIES_SYMBOL to .SYMBOL. Added .SYMBOL_TYPE for use in joining to REF_PLANT_DICTIONARY.
Added inputs/FIA/REF_UNIT/postprocess.sql
Added inputs/FIA/REF_RESEARCH_STATION/postprocess.sql
Added inputs/FIA/COUNTY/postprocess.sql
Added inputs/FIA/REF_PLANT_DICTIONARY/postprocess.sql
inputs/FIA/COND/postprocess.sql: Matched COND.HABTYPCD1, COND.HABTYPCD1_PUB_CD to REF_HABTYP_DESCRIPTION
inputs/input.Makefile: Staging tables installation: $(exportHeader): Fixed bug where need to run postprocess.sql before exporting the header, because it can change the column names
inputs/input.Makefile: Staging tables installation: $(exportHeader): export the header before running $(cleanup), because the header is not affected by the data cleanup operations and thus can be generated right away, to allow mapping while the cleanup operations run
inputs/FIA/REF_HABTYP_DESCRIPTION/postprocess.sql: Prepare columns for joining with COND
inputs/input.Makefile: Staging tables installation: $(exportHeader): Fixed bug where need to use psql_script_vegbien instead of the psql_verbose_vegbien used by $(psqlAsBien), to avoid echoing commands as part of the exported header
Added planning/workflow/(de)normalized_import.mappings.png
Added planning/workflow/denormalized_import.png, normalized_import.png
web/main/IH/: Added lowercase alias
Added web/main/IH/
inputs/input.Makefile: Staging tables installation: Added postprocess target, which runs all the postprocess.sql files
inputs/FIA/REF_SPECIES/postprocess.sql: Cast ID column to integer
inputs/FIA/*/postprocess.sql: Cluster tables by their *.unique index for faster joins
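    Roughly the following per table (index/column names illustrative);
    cluster_once() presumably skips the rewrite when the table is already
    clustered on that index:

        CREATE UNIQUE INDEX "COND.unique" ON "COND" ("PLT_CN", "CONDID");
        -- physically order the heap by the index, so merge joins scan
        -- both sides sequentially:
        CLUSTER "COND" USING "COND.unique";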
inputs/FIA/*/postprocess.sql: Cast ID columns to integer using new functions.set_col_types()
bin/psql_verbose_vegbien: Run with client_min_messages = NOTICE to display notices for debugging. This is supposed to be the default, but apparently isn't.
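    The setting in question (how the wrapper passes it through is not
    shown here):

        SET client_min_messages = NOTICE; -- documented as the default,
                                          -- but was not in effect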
inputs/input.Makefile: BIEN commands: $(psqlAsBien): Use psql_verbose_vegbien instead of psql_script_vegbien so that timings and notices are displayed, which is useful for profiling and debugging
schemas/functions.sql: Added col_cast and set_col_types()
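    set_col_types() presumably takes (column, type) pairs and issues
    casts like the following (all names assumed):

        -- col_cast would be the pair type, e.g.:
        -- CREATE TYPE col_cast AS (col_name text, target_type regtype);
        ALTER TABLE "TREE"
            ALTER COLUMN "CN" TYPE integer USING "CN"::integer;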
schemas/functions.sql: Added col_ref, col_type()
schemas/functions.sql: Added cluster_once()
schemas/functions.sql: Added cluster_index()
schemas/functions.sql: create_if_not_exists(): Also handle duplicate_column exceptions
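    A sketch of the pattern (signature assumed):

        CREATE OR REPLACE FUNCTION create_if_not_exists(sql text)
        RETURNS void LANGUAGE plpgsql AS $$
        BEGIN
            EXECUTE sql;
        EXCEPTION
            WHEN duplicate_table THEN NULL;  -- CREATE TABLE/INDEX re-run
            WHEN duplicate_object THEN NULL; -- CREATE OPERATOR etc. re-run
            WHEN duplicate_column THEN NULL; -- ADD COLUMN re-run
        END;
        $$;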
schemas/functions.sql: Added rename_if_exists()
inputs/FIA/COND/postprocess.sql: Renamed oldgrowth to COND.oldgrowth so it wouldn't be renamed by to_global_col_names()
inputs/FIA/COND/postprocess.sql: Added oldgrowth column as part of the postprocessing instead of as part of the view that left joins the core tables together. This avoids needing to regenerate the oldgrowth field whenever the view is queried or materialized.
inputs/FIA/TREE/postprocess.sql: Added index on columns that join to parent tables
inputs/FIA/*/postprocess.sql: Removed table prefix from globally-unique columns that should be joined on
schemas/functions.sql: Marked STRICT functions as such
schemas/functions.sql: col_global_names(): Treat any column name that contains . as already being globally unique, and don't prepend the table name. This allows renaming the table columns after running col_global_names(), without causing the table name to be re-prepended the next time col_global_names() is run.
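    In other words, roughly the following (hypothetical scalar helper;
    the real function works on whole tables). Note that the rule is "the
    name contains any '.'", not "the name has this table's prefix", so
    columns renamed after the fact are also left alone:

        CREATE FUNCTION col_global_name(table_ text, col text)
        RETURNS text LANGUAGE sql IMMUTABLE AS $$
            -- any '.' means the name is already globally unique
            SELECT CASE WHEN $2 LIKE '%.%' THEN $2 ELSE $1||'.'||$2 END
        $$;
        -- col_global_name('TREE', 'CN')      = 'TREE.CN'
        -- col_global_name('TREE', 'TREE.CN') = 'TREE.CN' (unchanged)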
schemas/functions.sql: Added contains()
schemas/functions.sql: Added create_if_not_exists()
inputs/FIA/*/postprocess.sql: Use functions.to_global_col_names() to ensure that all column names are globally unique. This makes it easy to join the tables together without worrying about column name collisions.
inputs/FIA/*/postprocess.sql: Use new functions.create_if_not_exists() to allow re-running postprocess.sql idempotently
inputs/input.Makefile: Staging tables installation: %/install: Use new %.sql/run to run postprocess.sql
inputs/input.Makefile: Staging tables installation: Added %.sql/run to run postprocess.sql, etc. separately from the install targets they are a part of
schemas/functions.sql: Added to_global_col_names()
schemas/functions.sql: col_global_names(): Use new functions.ensure_prefix() to only add the table name prefix if it doesn't already exist. This makes the function idempotent.
schemas/functions.sql: Added ensure_prefix()
schemas/functions.sql: Added has_prefix()
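    A sketch of the pair (bodies assumed):

        CREATE OR REPLACE FUNCTION has_prefix(prefix text, str text)
        RETURNS boolean LANGUAGE sql IMMUTABLE AS $$
            SELECT substring($2 for length($1)) = $1
        $$;

        CREATE OR REPLACE FUNCTION ensure_prefix(prefix text, str text)
        RETURNS text LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN has_prefix($1, $2) THEN $2 ELSE $1||$2 END
        $$;
        -- ensure_prefix('TREE.', 'CN')      = 'TREE.CN'
        -- ensure_prefix('TREE.', 'TREE.CN') = 'TREE.CN' (idempotent)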
schemas/functions.sql: Added col_global_names()
schemas/functions.sql: Added name(regtype)
schemas/functions.sql: Added col_names()
root Makefile: Installation: Fixed bug where need to run schemas/public/install separately because schemas/install installs only the util schemas
root Makefile: Installation: install util schemas (temp functions py_functions) before inputs, so that inputs can use util functions in their postprocess.sql or create.sql scripts. (However, they must not use util functions in views or index functions, because those objects would be dropped via CASCADE whenever the util schemas are reinstalled before an import.)
README.TXT: Single datasource import: Added by_col=1 to all commands
mappings/VegCore-VegBIEN.csv: locationRemarks: Remapped to locationnarrative because location.notespublic is a boolean field
lib/sql_io.py: mk_errors_table(): Create a unique index on the MD5 of the value and error instead of on the values directly, because some strings are too long to index (e.g. row 2537268 of MO.Specimen causes an error "index row size 3032 exceeds maximum 2712 for index [...] Values larger than 1/3 of a buffer page cannot be indexed")
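    A sketch of the index change (table/column names assumed):

        -- index the fixed-size digests rather than the raw values, which
        -- can exceed the btree maximum entry size (~2712 bytes)
        CREATE UNIQUE INDEX "errors.unique"
            ON errors (md5("value"), md5(error));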
inputs/import.stats.xls: Updated import times
bin/map: No mappings warning: Added explanation that this could also be due to there being no column name matches, and a hint to check whether you are importing the correct input table
inputs/MO/: Renamed Specimen.2/ -> Specimen/, now that the Specimen/ name is available
inputs/MO/: Removed old import in Specimen/
Refreshed MO
csvs.py: TsvReader.next(): Fixed bug where empty line needs to be separately returned as [], because csv.reader would interpret it as EOF since the line ending has already been removed
csvs.py: sniff(): TSVs: Turn off quoting because TSVs use \-escapes instead of quotes to escape delimiters, newlines, etc.
csvs.py: InputRewriter.readline(): Surround function in a try block that prints all exceptions, so that debugging information is available if an error occurs when this stream is used as input for psycopg's copy_expert() (COPY FROM)
Populated inputs/MO/import_order.txt
Refreshed SALVIAS
Added web/main/CTFS/