Moved wait on tnrs.make lock from import_all to make_analytical_db, so that running make_analytical_db for a one-time import also waits on the lock
schemas/vegbien.sql: taxondetermination: taxondetermination_unique: Added determinationtype so that when the matched and accepted determinations are the same, both still get created, rather than the second one being removed for violating the unique constraint
schemas/vegbien.sql: analytical_specimen: Removed speciesBinomialWithMorphospecies because it doesn't apply to specimens
schemas/vegbien.sql: Added analytical_plot view
schemas/vegbien.sql: Added analytical_specimen view
schemas/vegbien.sql: analytical_stem_view: Moved recordedBy, recordNumber before dateCollected as requested by Brad <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/Spot-checking#ACAD>
schemas/vegbien.ERD.mwb: Synced with schema
schemas/vegbien.sql: Added reproductiveCondition
mappings/VegCore-VegBIEN.csv: Mapped reproductiveCondition
schemas/vegbien.sql: plantobservation: Added reproductivecondition
mappings/VegCore.htm: Regenerated from wiki. matched*Fit_fraction has been renamed to matched*Confidence_fraction.
inputs/.TNRS/public.unscrubbed_taxondetermination_view/map.csv: Updated for new mappings/VegCore.htm
inputs/bien_web/observation/map.csv: Re-automapped taxonMorphospecies
mappings/VegCore.htm: Regenerated from wiki. Data owner terms and taxon synonyms have been added, and morphospecies has been disambiguated.
schemas/vegbien.sql: analytical_stem_view: Moved identifiedBy, dateIdentified, identificationRemarks right after the *_verbatim terms that they relate to, as requested by Brad <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/Spot-checking#ACAD>
schemas/vegbien.sql: analytical_stem_view: Use new concat_delim() instead of array_to_string() surrounded by NULLIF
schemas/vegbien.sql: Added concat_delim()
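The semantics concat_delim() replaces — array_to_string() wrapped in NULLIF so an all-NULL result becomes NULL rather than '' — can be sketched in Python (the Python function here is illustrative; it is not the actual SQL definition):

```python
def concat_delim(delim, *parts):
    """Sketch of the assumed semantics: join the non-NULL (non-None)
    parts with delim, and return NULL (None) instead of an empty
    string when nothing remains, matching
    NULLIF(array_to_string(ARRAY[...], delim), '')."""
    joined = delim.join(p for p in parts if p is not None)
    return joined or None

print(concat_delim(" ", "Quercus", None, "alba"))  # -> Quercus alba
print(concat_delim(" ", None, None))               # -> None
```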
schemas/vegbien.sql: analytical_stem_view: Removed deprecated taxonNameWithMorphospecies now that we have speciesBinomialWithMorphospecies
schemas/vegbien.sql: analytical_stem_view: speciesBinomial: Added morphospecies suffix to create speciesBinomialWithMorphospecies
inputs/import.stats.xls: Updated import times
README.TXT: Full database import: Check that unscrubbed_taxondetermination_view returns no rows: Documented that this takes 90 s with LIMIT 1
schemas/vegbien.sql: _taxon_family_require_std(): Also allow non-aceae families accepted by TNRS
Added inputs/SALVIAS/_archive/salvias_plots.*.sql.zip.md5
Added inputs/VegBank/_archive/vegbank_for_bien.tar.gz.url
Added inputs/U/UtrechtHerbarium.csv.tar.gz.url
Added inputs/TEAM/_archive/ci-team_extract.tar.gz.url
Added inputs/SpeciesLink/_archive/specieslink*.txt.gz.url
Added inputs/REMIB/_archive/remib_raw.csv.tar.gz.url
Added inputs/NY/NYSpecimenDataAmericas.csv.tar.gz.url
Added inputs/NCU/_archive/NCU-NCSC_2010-02-12.csv.tar.gz.url
Added inputs/MO/mo_digirexport.tar.gz.url
Added inputs/Madidi/_archive/2010-1-2/madidi_plots_original_12jan2010.zip.url
Added inputs/GBIF/gbif_extract.tar.gz.url
Added inputs/FIA/fia_extract.tar.gz.url
Added inputs/CVS/_archive/CVS-allTaxonOccurrences_2010-01-12.txt.tar.gz.url
Added inputs/ARIZ/ARIZ_DiGIR_21012010.csv.tar.gz.url
Added inputs/UNCC/Specimen/UNCC.csv.url, UNCC.csv.md5
Added inputs/XAL/_src/digir.xml.gz.md5
Added inputs/UNCC/_src/ with UNCC.csv.zip.md5
Added inputs/SpeciesLink/_src
README.TXT: Datasource setup: MySQL inputs: .sql exports: Use new mysql_bien to connect to the MySQL DB created for the datasource
Added mysql_bien, which runs a MySQL command on the local MySQL server
Added inputs/GBIF/_src/GBIFPortalDB-2012-12-11.dump.md5 (md5sum of the expanded file)
root Makefile: MySQL: mysql-Linux: Also install phpMyAdmin
root Makefile: MySQL: mysql-Linux: Split apt-get dependencies into separate commands, like for other apt-get commands, to avoid having one failed dependency prevent the following dependencies from being installed
root Makefile: MySQL: *mysql_users: Also add bien_read user
root Makefile: MySQL: Renamed *mysql_user to *mysql_users because there can be multiple users
inputs/: Added .md5 files for all .zip, .gz
Added inputs/HVAA/Specimen/Herbario_occur_1360871068.csv.url
lib/common.Makefile: rsync: $(rsync*): Use --no-group because the file group is different depending on the machine
input.Makefile: SVN: $(_svnFilesGlob): Also add .md5 files. This allows svn to track where unversioned files should be in the directory tree.
input.Makefile: SVN: $(_svnFilesGlob): .url, .pdf, and README.TXT in the top-level dir: Fixed bug where there was an extra / after the brace expr
input.Makefile: SVN: $(_svnFilesGlob): Also add .url, .pdf, and README.TXT in the top-level dir
input.Makefile: SVN: $(_svnFilesGlob): Add .url, .pdf, and README.TXT files in all subdirs, not just _src
lib/common.Makefile: remote server: Use jupiter instead of vegbiendev, to ensure that all files get uploaded there rather than only to vegbiendev. This involves adding an extra database import step to download the uploaded files from jupiter onto vegbiendev.
inputs/FIA/_src/Makefile: all: Extract zip files before running tables target, because it requires the created dirs
schemas/vegbien.ERD.mwb: Fixed table sizes
Removed the no-longer-used fix_permissions. Use root fix_perms instead.
Added root fix_perms
Moved Checksums from backups/Makefile to lib/common.Makefile so all dirs (including inputs/) can use md5sum testing
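The md5sum testing shared through lib/common.Makefile presumably compares each file against its recorded .md5 digest; a minimal Python sketch of that idea (the helper name is mine, and the .md5 format is assumed to be standard `md5sum` output, "digest  filename"):

```python
import hashlib

def md5_matches(data_path, md5_path):
    """Illustrative check: compute the file's MD5 and compare it with
    the hex digest recorded in its .md5 sidecar file."""
    h = hashlib.md5()
    with open(data_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    with open(md5_path) as f:
        expected = f.read().split()[0]  # digest is the first field
    return h.hexdigest() == expected
```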
lib/common.Makefile: $(remote): Made remote basepath configurable in $(remote_basepath)
lib/common.Makefile: Renamed $(src_server) to $(remote_host) and $(src_user) to $(remote_user) for clarity
inputs/GBIF/: Added refresh metadata
Added inputs/HVAA/
Added inputs/ARIZ/_archive
inputs/ARIZ/: Removed previous data now that it has been refreshed
inputs/ARIZ/: Mapped refresh
Added inputs/ARIZ/import_order.txt
Added inputs/NY/_archive/
inputs/NY/: Removed tables from previous extract
inputs/NY/: Mapped refresh
inputs/*/*/VegBIEN.csv: Regenerated from mappings/VegCore-VegBIEN.csv
Added inputs/NY/import_order.txt
inputs/ARIZ/: Added SQL export for refresh
my2pg.data: Translate indefinite (zero) months which have a definite day. This is unusual, but does appear in some data such as the ARIZ DB.
my2pg.data: Translate indefinite dates (dates with 0 as the month or day)
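A Python sketch of what these date replacements might look like (the real my2pg.data rules are sed-based and not reproduced here; mapping a zero month or day to 01 is an assumption for illustration):

```python
import re

def translate_indefinite_dates(sql):
    """Illustrative translation of MySQL 'zero' date components into
    values PostgreSQL will accept."""
    # the all-zeros datetime first, so the regexes below don't mangle it
    sql = sql.replace("'0000-00-00 00:00:00'", "'-infinity'")
    # zero month with a definite day, e.g. 1998-00-15 -> 1998-01-15
    sql = re.sub(r"(\d{4})-00-(\d{2})", r"\1-01-\2", sql)
    # zero day, e.g. 1998-06-00 -> 1998-06-01
    sql = re.sub(r"(\d{4}-\d{2})-00", r"\1-01", sql)
    return sql
```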
my2pg: Use my2pg.data to perform data-only replacements, instead of duplicating them in both my2pg and my2pg.data
my2pg: named UNIQUE KEYs: Comment out the name because PostgreSQL requires it to be globally unique, but MySQL only requires it to be unique within the table
my2pg: Translate UNIQUE KEYs instead of removing them
my2pg*: Removed KEYs: Comment out the definition rather than removing it
my2pg*: Remove FOREIGN KEYs because MySQL does not dump tables in dependency order, which prevents PostgreSQL from creating tables whose fkeys refer to a later table
my2pg*: Removing invalid table elements: Replace them with a dummy CHECK constraint instead of a boolean field, to avoid adding fields to the table. The elements can't always simply be deleted, because sed can't remove the trailing comma of the previous element, and removing the following comma doesn't work for the last element in the table.
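The trick above — swapping the element in place rather than deleting it, so the commas separating the remaining elements stay balanced — can be illustrated in Python (the helper and pattern are hypothetical; the actual tool uses sed):

```python
import re

def neutralize_element(create_sql, element_pattern):
    # Replace the offending table element with a no-op constraint
    # instead of deleting it, leaving the element count (and thus the
    # surrounding commas) unchanged.
    return re.sub(element_pattern, "CHECK (true)", create_sql)

ddl = "CREATE TABLE t (\n  id int,\n  FULLTEXT KEY ft (name)\n);"
fixed = neutralize_element(ddl, r"FULLTEXT KEY [^\n]*")
```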
my2pg*: Replace '0000-00-00 00:00:00' with '-infinity'
my2pg: Replace datetime with timestamp
my2pg: Remove COLLATE field attribute
lib/MySQL.*.sql.make: Documented that $server user/host are for ssh, not the DB
lib/MySQL.*.sql.make: Documented that $server can also contain a username (which will be used by ssh)
my2pg_export: Use the --quick option to facilitate exporting large tables (it avoids retrieving all rows before outputting any of them)
README.TXT: Datasource setup: Added instructions for MS Access databases
README.TXT: Datasource setup: MySQL inputs: Added instruction to skip the Add input data for each table section
inputs/NY/: Added SQL export for refresh
mappings/VegCore.htm: Regenerated from wiki. Brad's new DwC ID terms spreadsheet has now been added, and a number of the ID terms have been clarified, disambiguated, and recategorized. In particular, institutionCode has been split into custodialInstitutions and collectingInstitution, to differentiate between the institution that holds the specimen and the one that stamped it. This distinction is important because the catalogNumber, stamped on the specimen, is only unique within the collectingInstitution. Most datasources don't unambiguously specify which institution their institutionCode refers to, so it has been assumed to be custodialInstitutions unless a data dictionary says otherwise (as is the case for UNCC). In addition, a MatchedTaxonDetermination table has been added, with the *_matched fields from TNRS.
inputs/CVS/observation_/map.csv: baseSaturation: Resolved ambiguous term
mappings/Makefile: VegCore.vocab.csv: Ignore leading ? when sorting so that ambiguous terms sort alphabetically with other terms. This prevents terms from moving from their previous location when they become ambiguous.
Added sort_ci to sort a spreadsheet, ignoring leading punctuation
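A Python sketch of the sort key sort_ci presumably applies (the exact punctuation set and case handling are assumptions based on the name and the entries above):

```python
import re

def sort_key(term):
    """Assumed key for sort_ci-style ordering: strip leading
    punctuation (e.g. the '?' marking ambiguous terms) and compare
    case-insensitively, so '?institutionCode' sorts next to
    'institutionCode'."""
    return re.sub(r"^[^\w]+", "", term).lower()

terms = ["?institutionCode", "catalogNumber", "institutionCode"]
print(sorted(terms, key=sort_key))
```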
mappings/VegCore.vocab.csv: Changed line endings to \r\n in preparation for having a Python script run on it (which changes the line endings)
mappings/Makefile: VegCore.vocab.csv: Added back ambiguous terms, so that the vocabulary contains all terms defined by VegCore, regardless of whether they are ambiguous or unambiguous terms
mappings/Makefile: VegCore.vocab.csv: Added back synonyms, so that the vocabulary contains all terms defined by VegCore, regardless of whether they are synonyms or primary terms. This also prevents VegCore.vocab.csv from losing entries when terms are renamed, which made it difficult to verify that no terms were lost when refactoring.
inputs/MO/Specimen/postprocess.sql: Remove frameshifted rows by detecting InstitutionCodes without any letters
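The detection heuristic described above, rephrased as a Python check for illustration (the actual filter lives in postprocess.sql):

```python
import re

def is_frameshifted(institution_code):
    """Heuristic from the entry above: a real institution code should
    contain at least one letter, so a value with no letters (e.g. a
    number that slid over from a neighboring column) flags a
    frameshifted row."""
    return re.search(r"[A-Za-z]", institution_code or "") is None

rows = [{"InstitutionCode": "MO"}, {"InstitutionCode": "1234"}]
kept = [r for r in rows if not is_frameshifted(r["InstitutionCode"])]
```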
inputs/ARIZ/Specimen/map.csv: CollectorNumber/FieldNumber: Use /_first to map these identical fields to the same location