inputs/import.stats.xls: updated import times
bugfix: inputs/input.Makefile: %/map.csv: need to save it even if errors occur in generating unmapped_terms.csv, new_terms.csv
fix: inputs/input.Makefile: $(svnFilesGlob): only svn:ignore *.log in the top-level dir
fix: inputs/input.Makefile: add!: verify/: also svn:ignore .zip
bugfix: inputs/input.Makefile: postprocess must be run after cleanup rather than before because it depends on the cleanup having been performed.
this bug was not previously detected because it is only a problem when refreshing a datasource to new data in the same format: that would attempt to run an existing postprocess.sql, out of order, instead of starting with no postprocess.sql as we usually do.
bugfix: inputs/input.Makefile: $(dbExports): also need to put data.sql before clean_up.sql, etc. previously, this ordering had to be done by naming clean_up.sql, etc. so they would sort after data.sql alphabetically, but it can be confusing to have to remember to do this. this fixes a bug in the CVS refresh where cvs.~.clean_up.sql was being run before data.sql, causing some private columns to be deleted before the data was imported into the tables, which created a column mismatch error.
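a minimal sketch of the explicit ordering (hypothetical variable names, not the Makefile's actual code):

    # $(sort) is alphabetical, so clean_up.sql would otherwise run before data.sql;
    # put data.sql first explicitly instead of relying on file naming
    sqlFiles  := $(sort $(wildcard *.sql))
    dbExports := $(filter %data.sql,$(sqlFiles)) $(filter-out %data.sql,$(sqlFiles))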
inputs/input.Makefile: pass make var $(null_strs) to invoked commands so it can be used by lib/sql_io.py
fix: *Makefile: changed line endings to \n so that `patch` can work with pasted input. use `svn di --extensions --ignore-eol-style` to verify no diff.
fix: inputs/input.Makefile: $(nonHeaderSrcs): updated to exclude new header.txt
inputs/input.Makefile: added %/list_srcs
fix: inputs/input.Makefile: need to escape $ in commands, including inside comments
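illustration (hypothetical recipe; recipe lines tab-indented in the real file): make expands $ before the shell ever sees the line, including in shell comments, so every literal $ must be written as $$:

    example:
        # shell comments in recipes are also make-expanded, so escape $$ in them too
        echo "$$HOME"       # shell variable reference
        echo 'costs $$5'    # literal dollar sign in the output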
bugfix: inputs/input.Makefile: `$(call add*,$(svnFiles))` must be invoked externally to clear the $(wildcard) cache before expanding $(svnFiles)
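rough sketch of the issue (hypothetical glob and target names, not the actual code): GNU make caches directory listings, so files created earlier in the same run may not appear in $(wildcard); re-invoking make clears the cache before $(svnFiles) is expanded:

    svnFilesGlob := *.csv */map.csv
    svnFiles      = $(wildcard $(svnFilesGlob))
    add!:
        +$(MAKE) add-existing    # sub-make re-expands $(wildcard) with a fresh cache
    add-existing:
        svn add --force $(svnFiles)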
fix: inputs/input.Makefile: $(svnFilesGlob): *.log should be in both the subdirs and the main dir
inputs/input.Makefile: $(svnFilesGlob): *.log
bugfix: inputs/input.Makefile: %/install: $(exportHeader) must come before postprocess because postprocess renames columns
bugfix: inputs/input.Makefile: $(import_install_): need `set -o pipefail` to enable errexit
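a minimal sketch of the pipefail pattern used by several of these fixes (hypothetical rule and paths; recipe lines tab-indented in the real file):

    SHELL := /bin/bash
    %/install.log: %/data.sql
        set -o errexit -o pipefail; \
        psql --set=ON_ERROR_STOP=1 -f $< 2>&1 | tee $@
    # with errexit alone, tee's exit status 0 would mask a psql failure; pipefail
    # makes the whole pipeline (and hence the recipe) fail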
bugfix: inputs/input.Makefile: sql/install: the ";" after commands inside $(if) blocks needs to be inside the $(if) block, too, because otherwise an empty expansion leaves a dangling ";" without a statement (bash does not support empty statements consisting of just ";")
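sketch of the fix (hypothetical file names; recipe lines tab-indented in the real file): keeping the ";" inside the $(if) means nothing at all is emitted when the condition is empty:

    sql/install:
        $(if $(wildcard schema.sql),psql --set=ON_ERROR_STOP=1 -f schema.sql;) \
        $(if $(wildcard data.sql),psql --set=ON_ERROR_STOP=1 -f data.sql;) \
        true
    # wrong: `$(if $(wildcard schema.sql),psql -f schema.sql); ...` leaves a bare ";"
    # when schema.sql is absent, which bash rejects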
bugfix: inputs/input.Makefile: sql/install: schema.sql should not be passed through pg_dump_limit because it contains GRANT statements that need to be run
bugfix: inputs/input.Makefile: $(datasrc_schema_exists): need to use $(datasrc), not $(schema), as $schema is the name used for this var only in the runscripts
fix: inputs/input.Makefile: $(sortFile): don't print the "add any missing tables to $(sortFile)" message every time the Makefile is run
bugfix: inputs/input.Makefile: install: only run this for datasource dirs
inputs/input.Makefile: install: use ./run's install target for clarity
bugfix: inputs/input.Makefile: install: made it idempotent (using new $(datasrc_schema_exists)) so that it could be run by `make install` on an existing system
bugfix: inputs/input.Makefile: $(datasrc_schema_exists): need to use $(shell ...)
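sketch (hypothetical query, not the actual definition): without $(shell), the variable would hold the text of the command rather than its output:

    # psql -tAc: tuples-only, unaligned, run the given command
    datasrc_schema_exists = $(shell psql -tAc \
        "SELECT 1 FROM pg_namespace WHERE nspname = '$(datasrc)'")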
inputs/input.Makefile: added $(datasrc_schema_exists)
inputs/input.Makefile: add: verify/: also svn:ignore *.log
bugfix: inputs/input.Makefile: %/postprocess: invoke runscript if it exists
bugfix: inputs/input.Makefile: validations.sql must be in a subdir so it won't get run by sql/install
inputs/input.Makefile: install: also run validate/install
inputs/input.Makefile: added validate/install
lib/common.Makefile: added $(nice) and use it everywhere its definition was previously written out inline
inputs/input.Makefile: validate: redirect the output to the log, as for other import-related operations
inputs/input.Makefile: import: validate at the end of the import
inputs/input.Makefile: added new-style aggregating validations (`validate` target)
bugfix: lib/common.Makefile: $(add*): need to wrap w/ $(wildcard) to prevent "targets don't exist" error, because svn 1.7 does not suppress this error even with --force
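minimal sketch of the wrapping (not the actual $(add*) definition): filtering the argument through $(wildcard) means only paths that exist are passed to `svn add`:

    add* = $(if $(wildcard $(1)),svn add --force --parents $(wildcard $(1)))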
bugfix: inputs/input.Makefile: add!: add* of $(svnFiles): need to ignore errors because svn 1.7 does not suppress the "targets don't exist" error even with --force
fix: inputs/input.Makefile: don't treat *.xml as data files since these are not currently supported
fix: inputs/input.Makefile: removed no longer used special handling of XML inputs, support for which was never added to the Makefile. (bin/map, however, does support importing an XML file into a database.) this fixes a bug in XAL, which used to abort with an error but now just imports an empty table.
fix: inputs/input.Makefile: %/install: don't ignore errors if table does not exist, to ensure a proper errexit. this is now possible because every dir that this target is being run on should be a data dir. (Source/ used to be a metadata-only dir.)
bugfix: inputs/input.Makefile: $(cleanup): need `set -o pipefail`
bugfix: inputs/input.Makefile: %/postprocess.sql: don't perform replacements using map.csv, because map.csv is not idempotent. this functionality was only there to facilitate switching to new-style import, which is now largely done. (the remaining datasources NVS, SALVIAS, TEAM contain only 1 postprocess.sql: inputs/SALVIAS/projects/postprocess.sql (`st inputs/{NVS,SALVIAS,TEAM}/*/postprocess.sql`).)
inputs/input.Makefile: %/postprocess.sql: always run this, not just if the associated map spreadsheets change, to avoid needing to `touch` them to cause %/postprocess.sql to run
bugfix: inputs/input.Makefile: %/postprocess.sql: also need to apply renames from mappings/VegCore.thesaurus.csv, as these have been applied to map.csv
inputs/input.Makefile: $(svnFilesGlob): added validations.sql
inputs/input.Makefile: verify/%.out: use a *.sql file in the verify/ directory itself to generate *.out, so that each datasource can have its own set of output queries. for datasources that should share the same set of queries, they can instead be symlinked to the same file.
inputs/input.Makefile: add!: verify/: also svn:ignore *.tsv, *.txt
moved everything into /trunk/ to create the standard svn layout, for use with tools that require this (e.g. git-svn). IMPORTANT: do NOT do an `svn up`. instead, re-use your working copy's existing files with `svn switch` (http://svnbook.red-bean.com/en/1.6/svn.ref.svn.c.switch.html).
bugfix: inputs/input.Makefile: install: for new-style datasources, use the associated runscript instead (the old-style install target will not do everything that's needed for a new-style datasource)
bugfix: inputs/input.Makefile: %/header.csv: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
bugfix: inputs/input.Makefile: `%/install: %/create.sql`: errexit the command so that errors won't scroll by, which in this case requires `set -o pipefail`
inputs/input.Makefile: scrub: clarified that using & (background process) also ignores TNRS errors (the primary purpose of &, of course, is to run asynchronously)
bugfix: inputs/input.Makefile: $(import): except in a full-database import, errexit so that the import will stop on an error and not let it scroll by
fix: inputs/input.Makefile: $(svnFilesGlob): removed schema and PDF files, since these are owned by the data provider and should not be in the repository that gets open-sourced
bugfix: inputs/input.Makefile: sql/install: exit on error by using `set -o pipefail`
inputs/input.Makefile: $(_svnFilesGlob): also svn-add _no_import in the top-level datasrc dir. (this requires using add!, because the presence of a _no_import file there will normally turn off adding by svnFilesGlob.)
bugfix: inputs/input.Makefile: %/install: don't run map_table, because this is instead done by the runscript. although it does not hurt to do it twice, invoking load_data by itself should not run map_table at all, so that the original column names can be inspected in the table and map.csv reordered to match.
inputs/input.Makefile: added %/import_temp alias for %/import, to mirror the presence of import_temp for import
bugfix: inputs/input.Makefile: import: remove the temp suffix once the import is done, so that the full database import doesn't keep the suffix attached to the datasources that import_all didn't import with reimport. removed unused import_publish target (instead use import_temp to invoke just the import without the temp suffix removal).
bugfix: *Makefile: recursive invocation of $(MAKE): enclose targets in "" in case they contain *
bugfix: inputs/input.Makefile: %/uninstall: allow user to set is_view=1 flag to use DROP VIEW instead of DROP TABLE
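sketch of the flag (hypothetical rule; recipe lines tab-indented in the real file):

    %/uninstall:
        psql -c 'DROP $(if $(is_view),VIEW,TABLE) IF EXISTS "$(datasrc)"."$*"'
    # usage: make <table>/uninstall [is_view=1]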
bugfix: inputs/input.Makefile: %/VegBIEN.csv: `ln -s` to create VegBIEN.csv: enclose the filenames in "" since they may contain * (e.g. taxon_observation.**)
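sketch (hypothetical link target path; recipe lines tab-indented in the real file): quoting both arguments keeps the shell from glob-expanding a * in the table name:

    %/VegBIEN.csv:
        ln -sf "../mappings/VegCore-VegBIEN.csv" "$@"
    # unquoted, a $@ such as taxon_observation.**/VegBIEN.csv would be glob-expanded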
bugfix: inputs/input.Makefile: `%/install: %/create.sql`: don't include %/header.csv as a target, so that it won't get deleted if the install fails (especially on a step that happens after the header is exported)
inputs/input.Makefile: reimport: don't remove the existing import first, because it will instead be removed by the publish step. this ensures there is always one complete copy of the datasource in the DB.
inputs/input.Makefile: reimport: use import_publish instead of import so that the reimport replaces the previous import
inputs/input.Makefile: added import_publish, which removes the temp suffix when the import is done
inputs/input.Makefile: $(map2db): import to datasrc.new instead of plain datasrc, so that the current import of the datasrc is not overwritten
inputs/input.Makefile: added publish (`make inputs/src/publish`)
inputs/input.Makefile: added %/publish (`make inputs/src/src.version/publish`)
bugfix: inputs/input.Makefile: %/test: in by_col mode, also need to run %/test.by_col.xml
inputs/input.Makefile: rm: use new datasource_rm(), which encapsulates the schema-specific aspects of removing a datasource
inputs/input.Makefile: scrub: documented that using & (background process) ignores TNRS errors, so that TNRS bugs do not prevent the remaining tables from being imported even if TNRS can't be run
inputs/input.Makefile: $(import): support restarting the import where it left off by setting continue=1. this is done by grepping the restart row out of the log file's last partition.
inputs/input.Makefile: added %/import_scrub, similar to import_scrub but just imports one table
bugfix: inputs/input.Makefile: %/postprocess.sql: need to run bin/repl in text mode (text=1) so that values to match are treated as literal strings rather than regular expressions. this difference is important for column names with spaces or special characters.
inputs/input.Makefile: added %/postprocess.sql to replace input column names with the corresponding output column names when switching to new-style import (this target must be manually run, but does simplify the process of renaming the postprocess.sql input columns)
bugfix: inputs/input.Makefile: Staging tables installation: $(allInstalls): don't filter out Source table, because it is now an installed table rather than just a mapping
inputs/input.Makefile: Staging tables installation: %/install: run %/map_table at end to rename the staging table columns for new-style datasources
inputs/input.Makefile: Staging tables installation: added %/map_table to run the new-style import staging table renaming
bugfix: inputs/input.Makefile: map.csv and derived files: use $(tables) instead of $(importTables) when making them so that the mappings of those tables are still kept up-to-date even though they are marked _no_import (and not imported into the main DB)
inputs/input.Makefile: %/postprocess: removed no longer used invocation of $*/import (precursor to the runscripts used in FIA)
bugfix: inputs/input.Makefile: %/VegBIEN.csv: for new-style datasources, use a symlink to mappings/VegCore-VegBIEN.csv directly instead of prefiltering VegCore-VegBIEN.csv to include only the columns in map.csv. prefiltering used to be performed as part of mapping the map.csv VegCore output terms to VegBIEN using bin/join, but is no longer needed because the staging table columns are now VegCore terms. instead, the full VegCore-VegBIEN.csv is needed so that derived columns added in stage I or II validations are detected by bin/map (rather than just the original source columns in map.csv).
bugfix: inputs/input.Makefile: SVN: add: don't add subdirs for datasources marked _no_import (e.g. datasources which only have an inputs/ dir to be listed in VegPath)
inputs/input.Makefile: SVN: $(svnFilesGlob): added data.csv, used to store versioned data (such as the empty data.csv used by Source/ tables which have their metadata in the map table instead)
inputs/input.Makefile: added support for separate grants.sql file, which may contain GRANT statements that would normally be filtered out by pg_dump_limit
inputs/input.Makefile: sql/install: added $debug option to run the *.sql import verbosely, to display which statements are being run. this should only be used for SQL files that use COPY FROM to import data, to avoid echoing pages of insert statements.
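sketch of the option (hypothetical variable name): psql's -a/--echo-all prints each statement as it is run:

    psqlOpts := --set=ON_ERROR_STOP=1 $(if $(debug),--echo-all)
    # usage: make sql/install debug=1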
inputs/input.Makefile: keep $(sortFile) up-to-date: use sort_file_updated=1 flag to indicate that import_order.txt has already been checked, so that recursive invocations of make don't need to recheck it. also use this flag instead of an explicit $(MAKECMDGOALS) list to prevent the $(sortFile) check from being infinite-recursively reinvoked when input.Makefile is read as part of the $(sortFile) check itself.
inputs/input.Makefile: keep import_order.txt up-to-date by running `make $(sortFile)` each time make is run. this ensures that new datasources always have import_order.txt populated when make is first run. eventually, $(tables) can be always set to $(allTables) so that this auto-updating can also be used to ensure that new subdirs added by the user always make it into import_order.txt (so that they will be included in the subdirs that get remade, etc.). import_order.txt is primarily for specifying the order of the subdirs, but some datasources also use it to filter out subdirs, so it can't yet be always updated to include the full list of subdirs. however, the filter-out usage should no longer be necessary after the switch to new-style import.
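rough sketch of the mechanism in the two entries above (hypothetical, not the actual code):

    ifneq ($(sort_file_updated),1)
    # check/update import_order.txt once per top-level run; passing the flag keeps
    # sub-makes (and embedded `$(shell make ...)` calls) from re-triggering the check
    $(shell $(MAKE) --no-print-directory import_order.txt sort_file_updated=1 >&2)
    endif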
inputs/input.Makefile: added $(filter_make), used to filter the output of embedded $(shell make ...) invocations
inputs/input.Makefile: $(sortFile): use $(filter-out)->then instead of $(filter)->else for clarity
inputs/input.Makefile: added $(sortFile) (import_order.txt) target which adds any missing tables to import_order.txt
inputs/input.Makefile: added list_tables to print $(tables) for use in populating import_order.txt
bugfix: inputs/input.Makefile: `%/install %/header.csv: %/create.sql`: in noclobber mode, mark %/header.csv as .PRECIOUS so the existing file won't be deleted if the table already exists (causing an error exit)
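sketch (assuming noclobber mode is selected with a `noclobber` variable):

    ifdef noclobber
    .PRECIOUS: %/header.csv  # keep an already-exported header if the recipe later errexits
    endif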
inputs/input.Makefile: $(_svnFilesGlob): added *Makefile
inputs/input.Makefile: $(_svnFilesGlob): added *run (runscripts)
inputs/input.Makefile: $(dontImport): also support putting a _no_import file at the top level in the datasource to exclude the entire datasource
bugfix: inputs/input.Makefile: %/VegBIEN.csv: use header from map.csv instead of the new columns, so that source.shortname is set to GBIF instead of VegCore
inputs/input.Makefile: %/VegBIEN.csv: when a runscript is available, instead map the output columns of map.csv to VegBIEN, because the columns have been renamed in the staging table