bugfix: inputs/input.Makefile: $(datasrc_schema_exists): need to use $(datasrc), not $(schema), as $schema is only the name this var goes by in the runscripts
fix: inputs/input.Makefile: $(sortFile): don't print the "add any missing tables to $(sortFile)" message every time the Makefile is run
bugfix: inputs/input.Makefile: install: only run this for datasource dirs
inputs/input.Makefile: install: use ./run's install target for clarity
bugfix: inputs/input.Makefile: install: made it idempotent (using new $(datasrc_schema_exists)) so that it could be run by `make install` on an existing system
bugfix: inputs/input.Makefile: $(datasrc_schema_exists): need to use $(shell ...)
inputs/input.Makefile: added $(datasrc_schema_exists)
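  e.g. (a sketch only; the catalog query, database name, and recipe body are assumptions, not the actual Makefile code):
    # expands non-empty iff a schema named $(datasrc) already exists
    datasrc_schema_exists = $(shell psql -qtAc \
     "SELECT 1 FROM pg_namespace WHERE nspname = '$(datasrc)'" vegbien)
    .PHONY: install
    install:
    	$(if $(datasrc_schema_exists),echo "schema $(datasrc) already installed",./run install)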
inputs/input.Makefile: add: verify/: also svn:ignore *.log
bugfix: inputs/input.Makefile: %/postprocess: invoke runscript if it exists
bugfix: inputs/input.Makefile: validations.sql must be in a subdir so it won't get run by sql/install
inputs/input.Makefile: install: also run validate/install
inputs/input.Makefile: added validate/install
lib/common.Makefile: added $(nice) and use it everywhere its definition was previously written inline
inputs/input.Makefile: validate: redirect the output to the log, as for other import-related operations
inputs/input.Makefile: import: validate at the end of the import
inputs/input.Makefile: added new-style aggregating validations (`validate` target)
bugfix: lib/common.Makefile: $(add*): need to wrap w/ $(wildcard) to prevent "targets don't exist" error, because svn 1.7 does not suppress this error even with --force
bugfix: inputs/input.Makefile: add!: add* of $(svnFiles): need to ignore errors because svn 1.7 does not suppress the "targets don't exist" error even with --force
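  e.g. the pattern behind both wildcard fixes (a hypothetical helper; the name add_files and the globs are examples only):
    # only pass paths that actually exist; svn 1.7 errors on missing targets even with --force
    add_files = $(if $(wildcard $(1)),svn add --force $(wildcard $(1)),true)
    add:
    	-$(call add_files,*.csv *.sql)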
fix: inputs/input.Makefile: don't treat *.xml as data files since these are not currently supported
fix: inputs/input.Makefile: removed no longer used special handling of XML inputs, support for which was never added to the Makefile. (bin/map, however, does support importing an XML file into a database.) this fixes a bug in XAL, which used to abort with an error but now just imports an empty table.
fix: inputs/input.Makefile: %/install: don't ignore errors if table does not exist, to ensure a proper errexit. this is now possible because every dir that this target is being run on should be a data dir. (Source/ used to be a metadata-only dir.)
bugfix: inputs/input.Makefile: $(cleanup): need `set -o pipefail`
bugfix: inputs/input.Makefile: %/postprocess.sql: don't perform replacements using map.csv, because map.csv is not idempotent. this functionality was only there to facilitate switching to new-style import, which is now largely done. (the remaining datasources NVS, SALVIAS, TEAM contain only 1 postprocess.sql: inputs/SALVIAS/projects/postprocess.sql (`st inputs/{NVS,SALVIAS,TEAM}/*/postprocess.sql`).)
inputs/input.Makefile: %/postprocess.sql: always run this, not just if the associated map spreadsheets change, to avoid needing to `touch` them to cause %/postprocess.sql to run
bugfix: inputs/input.Makefile: %/postprocess.sql: also need to apply renames from mappings/VegCore.thesaurus.csv, as these have been applied to map.csv
inputs/input.Makefile: $(svnFilesGlob): added validations.sql
inputs/input.Makefile: verify/%.out: use a *.sql file in the verify/ directory itself to generate *.out, so that each datasource can have its own set of output queries. for datasources that should share the same set of queries, they can instead be symlinked to the same file.
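  e.g. to share one set of verification queries between two datasources (a sketch; the datasource and file names are made up):
    cd inputs/SourceB/verify
    ln -s ../../SourceA/verify/counts.sql counts.sql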
inputs/input.Makefile: add!: verify/: also svn:ignore *.tsv, *.txt
moved everything into /trunk/ to create the standard svn layout, for use with tools that require this (e.g. git-svn). IMPORTANT: do NOT do an `svn up`. instead, re-use your working copy's existing files with `svn switch` (http://svnbook.red-bean.com/en/1.6/svn.ref.svn.c.switch.html).
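  e.g., from the top of a working copy that currently points at the repository root (requires svn >= 1.6 for the ^/ syntax):
    svn switch ^/trunk .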
bugfix: inputs/input.Makefile: install: for new-style datasources, use the associated runscript instead (the old-style install target will not do everything that's needed for a new-style datasource)
bugfix: inputs/input.Makefile: %/header.csv: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
bugfix: inputs/input.Makefile: `%/install: %/create.sql`: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
inputs/input.Makefile: scrub: clarified that using & (background process) also ignores TNRS errors (the primary purpose of &, of course, is to run asynchronously)
bugfix: inputs/input.Makefile: $(import): except in a full-database import, errexit so that the import will stop on an error and not let it scroll by
fix: inputs/input.Makefile: $(svnFilesGlob): removed schema and PDF files, since these are owned by the data provider and should not be in the repository that gets open-sourced
bugfix: inputs/input.Makefile: sql/install: exit on error by using `set -o pipefail`
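  e.g. the shell pattern these errexit fixes rely on (a sketch; the filenames and the wrapper's exact invocation are assumptions):
    sql/install:
    	set -o errexit -o pipefail; \
    	psql_script_vegbien <install.sql 2>&1 | tee logs/install.log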
inputs/input.Makefile: $(_svnFilesGlob): also svn-add _no_import in the top-level datasrc dir. (this requires using add! , because the presence of a _no_import file there will normally turn off adding by svnFilesGlob.)
bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
inputs/input.Makefile: added %/import_temp alias for %/import, to mirror the presence of import_temp for import
bugfix: inputs/input.Makefile: import: remove the temp suffix once the import is done, so that the full database import doesn't keep the suffix attached to the datasources that import_all didn't import with reimport. removed unused import_publish target (instead use import_temp to invoke just the import without the temp suffix removal).
bugfix: *Makefile: recursive invocation of $(MAKE): enclose targets in "" in case they contain *
bugfix: inputs/input.Makefile: %/uninstall: allow user to set is_view=1 flag to use DROP VIEW instead of DROP TABLE
bugfix: inputs/input.Makefile: %/VegBIEN.csv: `ln -s` to create VegBIEN.csv: enclose the filenames in "" since they may contain * (e.g. taxon_observation.**)
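  e.g. the quoting pattern behind both fixes (a sketch; the rule bodies and paths are illustrative, not the actual rules):
    %/VegBIEN.csv:
    	ln -s "../../mappings/VegCore-VegBIEN.csv" "$@"  # quoted because $@ may contain *, e.g. taxon_observation.**
    import:
    	+$(MAKE) "taxon_observation.**/VegBIEN.csv"  # quoted for the same reason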
bugfix: inputs/input.Makefile: `%/install: %/create.sql`: don't include %/header.csv as a target, so that it won't get deleted if the install fails (especially on a step that happens after the header is exported)
inputs/input.Makefile: reimport: don't remove the existing import first, because it will instead be removed by the publish step. this ensures there is always one complete copy of the datasource in the DB.
inputs/input.Makefile: reimport: use import_publish instead of import so that the reimport replaces the previous import
inputs/input.Makefile: added import_publish, which removes the temp suffix when the import is done
inputs/input.Makefile: $(map2db): import to datasrc.new instead of plain datasrc, so that the current import of the datasrc is not overwritten
inputs/input.Makefile: added publish (`make inputs/src/publish`)
inputs/input.Makefile: added %/publish (`make inputs/src/src.version/publish`)
bugfix: inputs/input.Makefile: %/test: in by_col mode, also need to run %/test.by_col.xml
inputs/input.Makefile: rm: use new datasource_rm(), which encapsulates the schema-specific aspects of removing a datasource
inputs/input.Makefile: scrub: documented that using & (background process) ignores TNRS errors, so that TNRS bugs do not prevent the remaining tables from being imported even if TNRS can't be run
inputs/input.Makefile: $(import): support restarting the import where it left off by setting continue=1. this is done by grepping the restart row out of the log file's last partition.
inputs/input.Makefile: added %/import_scrub, similar to import_scrub but just imports one table
bugfix: inputs/input.Makefile: %/postprocess.sql: need to run bin/repl in text mode (text=1) so that values to match are treated as literal strings rather than regular expressions. this difference is important for column names with spaces or special characters.
inputs/input.Makefile: added %/postprocess.sql to replace input column names with the corresponding output column names when switching to new-style import (this target must be manually run, but does simplify the process of renaming the postprocess.sql input columns)
bugfix: inputs/input.Makefile: Staging tables installation: $(allInstalls): don't filter out Source table, because it is now an installed table rather than just a mapping
inputs/input.Makefile: Staging tables installation: %/install: run %/map_table at end to rename the staging table columns for new-style datasources
inputs/input.Makefile: Staging tables installation: added %/map_table to run the new-style import staging table renaming
bugfix: inputs/input.Makefile: map.csv and derived files: use $(tables) instead of $(importTables) when making them so that the mappings of those tables are still kept up-to-date even though they are marked _no_import (and not imported into the main DB)
inputs/input.Makefile: %/postprocess: removed no longer used invocation of $*/import (precursor to the runscripts used in FIA)
bugfix: inputs/input.Makefile: %/VegBIEN.csv: for new-style datasources, use a symlink to mappings/VegCore-VegBIEN.csv directly instead of prefiltering VegCore-VegBIEN.csv to include only the columns in map.csv. prefiltering used to be performed as part of mapping the map.csv VegCore output terms to VegBIEN using bin/join, but is no longer needed because the staging table columns are now VegCore terms. instead, the full VegCore-VegBIEN.csv is needed so that derived columns added in stage I or II validations are detected by bin/map (rather than just the original source columns in map.csv).
bugfix: inputs/input.Makefile: SVN: add: don't add subdirs for datasources marked _no_import (e.g. datasources which only have an inputs/ dir to be listed in VegPath)
inputs/input.Makefile: SVN: $(svnFilesGlob): added data.csv, used to store versioned data (such as the empty data.csv used by Source/ tables which have their metadata in the map table instead)
inputs/input.Makefile: added support for separate grants.sql file, which may contain GRANT statements that would normally be filtered out by pg_dump_limit
inputs/input.Makefile: sql/install: added $debug option to run the *.sql import verbosely, to display which statements are being run. this should only be used for SQL files that use COPY FROM to import data, to avoid echoing pages of insert statements.
inputs/input.Makefile: keep $(sortFile) up-to-date: use sort_file_updated=1 flag to indicate that import_order.txt has already been checked, so that recursive invocations of make don't need to recheck it. also use this flag instead of an explicit $(MAKECMDGOALS) list to prevent the $(sortFile) check from being infinite-recursively reinvoked when input.Makefile is read as part of the $(sortFile) check itself.
inputs/input.Makefile: keep import_order.txt up-to-date by running `make $(sortFile)` each time make is run. this ensures that new datasources always have import_order.txt populated when make is first run. eventually, $(tables) can be always set to $(allTables) so that this auto-updating can also be used to ensure that new subdirs added by the user always make it into import_order.txt (so that they will be included in the subdirs that get remade, etc.). import_order.txt is primarily for specifying the order of the subdirs, but some datasources also use it to filter out subdirs, so it can't yet be always updated to include the full list of subdirs. however, the filter-out usage should no longer be necessary after the switch to new-style import.
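  e.g. one way such a guard flag might be wired up (a hypothetical sketch; the actual mechanism in input.Makefile may differ):
    sortFile := import_order.txt
    ifeq ($(sort_file_updated),)
    export sort_file_updated := 1
    # run the check once, and pass the flag so recursive/embedded makes skip it
    dummy := $(shell $(MAKE) --no-print-directory $(sortFile) sort_file_updated=1 >&2)
    endif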
inputs/input.Makefile: added $(filter_make), used to filter the output of embedded $(shell make ...) invocations
inputs/input.Makefile: $(sortFile): use $(filter-out)->then instead of $(filter)->else for clarity
inputs/input.Makefile: added $(sortFile) (import_order.txt) target which adds any missing tables to import_order.txt
inputs/input.Makefile: added list_tables to print $(tables) for use in populating import_order.txt
bugfix: inputs/input.Makefile: `%/install %/header.csv: %/create.sql`: in noclobber mode, mark %/header.csv as .PRECIOUS so the existing file won't be deleted if the table already exists (causing an error exit)
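  e.g. (a sketch; variable and target names as used in the messages above):
    ifneq ($(noclobber),)
    .PRECIOUS: %/header.csv  # don't delete an existing header if the install errors out
    endif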
inputs/input.Makefile: $(_svnFilesGlob): added *Makefile
inputs/input.Makefile: $(_svnFilesGlob): added *run (runscripts)
inputs/input.Makefile: $(dontImport): also support putting a _no_import file at the top level in the datasource to exclude the entire datasource
bugfix: inputs/input.Makefile: %/VegBIEN.csv: use header from map.csv instead of the new columns, so that source.shortname is set to GBIF instead of VegCore
inputs/input.Makefile: %/VegBIEN.csv: when a runscript is available, instead map the output columns of map.csv to VegBIEN, because the columns have been renamed in the staging table
bugfix: inputs/input.Makefile: %/install: don't run $(cleanup) if it has already been run by $(import_install_), so that it doesn't run twice
inputs/input.Makefile: %/postprocess: don't run postprocess.sql if it is supposed to be run by a runscript, because postprocess.sql may then depend on additional steps the runscript runs before it
inputs/input.Makefile: Staging tables installation: $(logInstall): don't output to the install log if $noclobber flag is set, to prevent overwriting the log when re-running the install target idempotently
inputs/input.Makefile: $(svnFiles): also exclude *.data.sql, which should never be in svn
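  e.g. (a sketch; the surrounding definition of $(svnFiles) is assumed):
    svnFiles := $(filter-out %.data.sql,$(svnFiles))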
bugfix: inputs/input.Makefile: sql/install: manually specify the $no_search_path option to psql_script_vegbien; it is added automatically by $(psqlNoSearchPath), but that wrapper uses psql_verbose_vegbien
inputs/input.Makefile: SVN: add, %/add: */logs: also svn:ignore *.gz, used for compressed log files
inputs/input.Makefile: Removed .PRECIOUS from %/header.csv, %/map.csv so that these files are deleted on error. This is useful for the runscripts and for non-data dirs whose header.csv cannot be made.
inputs/input.Makefile: SVN: Only run %/add on subdirs with visible (non-hidden) files. Subdirs with only hidden files (e.g. .htaccess) are assumed to be non-data dirs.
inputs/input.Makefile: Staging tables installation: %/install: $(logInstall): Only use log file if log dir exists, to support non-data dirs
inputs/input.Makefile: Staging tables installation: %/install: ignore errors in $(exportHeader) and $(cleanup) if the table does not exist (i.e. a non-data dir)
inputs/input.Makefile: Staging tables installation: %/install: Don't run $(import_install_) for empty dirs because there is no data to import
inputs/input.Makefile: SVN: add: Removed Source/map.csv prerequisite because it is not related to adding unversioned files in the dir. It was originally a prerequisite in order to auto-create it when the datasource dir is first created, but the map.csv recipe does not currently create metadata-only map.csvs. In the future, metadata-only map.csvs will be replaced with constant columns added to the applicable tables.
inputs/input.Makefile: %/map.csv: Fixed bug: only make header.csv if map.csv does not exist, because some subdirs are metadata-only and don't have a corresponding DB table (so their header.csv can't be generated)
inputs/input.Makefile: sql/install: Use psql_script_vegbien instead of $(psqlNoSearchPath) (which uses psql_verbose_vegbien) because the insert statement for each data row should not be echoed
inputs/input.Makefile: %/map.csv: make $*/header.csv first in case it doesn't exist (e.g. if it has been deleted so that it will be remade)
inputs/input.Makefile: postprocess: Use %/postprocess instead of %/postprocess.sql/run so $*/import is also run
inputs/input.Makefile: %/postprocess: Also run the $*/import script, if it exists. Note that this is not the same as the %/import make target.
inputs/input.Makefile: %/postprocess.sql/run: Factored out into separate %/postprocess command, which can eventually also perform other actions
inputs/input.Makefile: %/header.csv: Fixed bug where newlines inside column names were incorrectly formatted by psql's table header formatting, by using COPY TO STDOUT instead
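  e.g. a sketch of the COPY-based header export (table/schema names and the psql invocation are placeholders for the Makefile's own wrappers; CSV quoting keeps embedded newlines in column names intact):
    %/header.csv:
    	psql vegbien -c 'COPY (SELECT * FROM "$(datasrc)"."$*" LIMIT 0) TO STDOUT WITH CSV HEADER' >"$@"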