Installation:
	Check out svn: svn co https://code.nceas.ucsb.edu/code/projects/bien
	cd bien/
	Install: make install
		**WARNING**: This will delete the public schema of your VegBIEN DB!
	Uninstall: make uninstall
		**WARNING**: This will delete your entire VegBIEN DB!
		This includes all archived imports and staging tables.

Connecting to vegbiendev:
	ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
	cd /home/bien/svn # should happen automatically at login

Notes on system stability:
	**WARNING**: system upgrades can break key parts of the full-database
		import, causing errors such as disk space overruns. for this reason, it
		is recommended to maintain a snapshot copy of the VM as it was at the
		last successful import, for fallback use if a system upgrade breaks
		anything. system upgrades on the snapshot VM should be disabled
		completely, and because this will also disable security fixes, the
		snapshot VM should be disconnected from the internet, with all
		networking interfaces disabled. (this is an unfortunate consequence of
		modern OSes being written in non-memory-safe languages such as C and
		C++.)

Notes on running programs:
	**WARNING**: always start with a clean shell, to avoid spurious bugs. the
		shell should not have changes to the env vars. (there have been bugs
		that went away after closing and reopening the terminal window.) note
		that running `exec bash` is not sufficient to *reset* the env vars.

Notes on editing files:
	**WARNING**: shell scripts should always be read-only, so that editing them
		while an import is in progress will not crash the import (see
		http://vegpath.org/links/#**%20modifying%20a%20running%20shell%20script)

Single datasource import:
	ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
	(Re)import and scrub: make inputs/<datasrc>/reimport_scrub by_col=1 &
	(Re)import only: make inputs/<datasrc>/reimport by_col=1 &
	Note that these commands also work if the datasource is not yet imported
	Remake analytical DB: see Full database import > To remake analytical DB
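	For example, to reimport and scrub a single datasource such as ACAD (a
		sketch; substitute any installed <datasrc>):
		make inputs/ACAD/reimport_scrub by_col=1 &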

Full database import:
	**WARNING**: You must perform *every single* step listed below, to avoid
		breaking column-based import
	**WARNING**: always start with a clean shell, as described above under
		"Notes on running programs"
	**IMPORTANT**: the beginning of the import should be scheduled at a time
		when the DB will not be needed for other uses. this is necessary because
		vegbiendev will be slow for the first few hours of the import, due to
		the import using all the available cores.
	do steps under Maintenance > "to synchronize vegbiendev, jupiter, and
		your local machine"
	On local machine:
		make inputs/upload
		make inputs/upload live=1
		make test by_col=1 # runtime: 20 min ("4m46.108s" + ("21:50:43" - "21:37:09")) @starscream
			if you encounter errors, they are most likely related to the
				PostgreSQL error parsing in /lib/sql.py parse_exception()
			See note under Testing below
	ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
	Ensure there are no local modifications: svn st
	up
	make inputs/download
	make inputs/download live=1
	For each newly-uploaded datasource above: make inputs/<datasrc>/reinstall
	Update the auxiliary schemas: make schemas/reinstall
		**WARNING**: requires sudo access!
		The public schema will be installed separately by the import process
	Delete imports before the last so they won't bloat the full DB backup:
		make backups/vegbien.<version>.backup/remove
		To keep a previous import other than the public schema:
			export dump_opts='--exclude-schema=public --exclude-schema=<version>'
			# env var will be inherited by `screen` shell
	restart Postgres to free up any disk space used by temp tables from the last
		import (this is apparently not automatically reclaimed):
		make postgres_restart
	Make sure there is at least 1 TB of disk space on /: df -h
		**WARNING**: sometimes, this amount of available space is insufficient
			and the entire disk space gets used up, crashing the import. if this
			occurs, the problem will often be fixed just by rerunning the import
			again. (the high-water mark varies by import.)
		although the import schema itself is only 315 GB, Postgres uses
			significant temporary space at the beginning of the import.
			the total disk usage oscillates between 1.2 TB and the entire disk
			for the first day (for import started @12:55:09, high-water marks of
			1.7 TB @14:00:25, 1.8 TB @15:38:32, entire disk for 4 min
			@05:35:44 w/ only a few datasources running).
		To free up space, remove backups that have been archived on jupiter:
			List backups/ to view older backups
			Check their MD5 sums using the steps under On jupiter below
			Remove these backups
	unset version # clear any version from last import, etc.
	if no commits have been made since the last import (e.g. if retrying an
		import), set a custom version that differs from the auto-assigned one
		(which would otherwise cause a collision with the last import):
		svn info
		extract the svn revision after "Revision:"
		export version=r[revision]_2 # +suffix to distinguish from last import
			# env var will be inherited by `screen` shell
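		For example, if `svn info` printed "Revision: 12345" (a hypothetical
			revision number), you would run:
			export version=r12345_2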
	screen
	Press ENTER
	unset TMOUT # TMOUT causes screen to exit even with background processes
	set -o ignoreeof #prevent Ctrl+D from exiting `screen` to keep attached jobs
	Start column-based import: . bin/import_all
		To use row-based import: . bin/import_all by_col=
		To stop all running imports: . bin/stop_imports
		**WARNING**: Do NOT run import_all in the background, or the jobs it
			creates won't be owned by your shell.
		Note that import_all will take up to an hour to import the NCBI backbone
			and other metadata before returning control to the shell.
		To view progress:
			tail inputs/{.,}*/*/logs/$version.log.sql
	note: at the beginning of the import, the system may send out CPU load
		warning e-mails. these can safely be ignored. (they happen because the
		parallel imports use all the available cores.)
	Wait (4 days) for the import to finish
	To recover from a closed terminal window: screen -r
	To restart an aborted import for a specific table:
		export version=<version>
		(set -o errexit; make inputs/<datasrc>/<table>/import_scrub by_col=1 continue=1; make inputs/<datasrc>/publish) &
		bin/after_import $! & # $! can also be obtained from `jobs -l`
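		For example, to restart the ACAD Specimen table's import (hypothetical
			datasource/table; substitute your own):
			(set -o errexit; make inputs/ACAD/Specimen/import_scrub by_col=1 continue=1; make inputs/ACAD/publish) &
			bin/after_import $! &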
	Get $version: echo $version
	Set $version in all vegbiendev terminals: export version=<version>
	When there are no more running jobs, exit `screen`: exit # not Ctrl+D
	upload logs: make inputs/upload live=1
	On local machine: make inputs/download-logs live=1
	check for disk space errors:
		grep --files-with-matches -F 'No space left on device' inputs/{.,}*/*/logs/$version.log.sql
		if there are any matches:
			manually reimport these datasources using the steps under
				Single datasource import
			bin/after_import &
			wait for the import to finish
	tail inputs/{.,}*/*/logs/$version.log.sql
	In the output, search for "Command exited with non-zero status"
	For inputs that have this, fix the associated bug(s)
	If many inputs have errors, discard the current (partial) import:
		make schemas/$version/uninstall
	Otherwise, continue
	In PostgreSQL:
		Go to wiki.vegpath.org/VegBIEN_contents
		Get the # observations
		Get the # datasources
		Get the # datasources with observations
		in the r# schema:
		Check that analytical_stem contains [# observations] rows
		Check that source contains [# datasources] rows up through XAL. If this
			is not the case, manually check the entries in source against the
			datasources list on the wiki page (some datasources may be near the
			end depending on import order).
		Check that provider_count contains [# datasources with observations]
			rows with dataset="(total)" (at the top when the table is unsorted)
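		a quick way to spot-check these counts in psql (r12345 is a
			hypothetical version; substitute the current import's schema):
			SELECT count(*) FROM "r12345".analytical_stem;
			SELECT count(*) FROM "r12345".source;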
	Check that TNRS ran successfully:
		tail -100 inputs/.TNRS/tnrs/logs/tnrs.make.log.sql
		If the log ends in an AssertionError
			"assert sql.table_col_names(db, table) == header":
			Figure out which TNRS CSV columns have changed
			On local machine:
				Make the changes in the DB's TNRS and public schemas
				rm=1 inputs/.TNRS/schema.sql.run export_
				make schemas/remake
				inputs/test_taxonomic_names/test_scrub # re-run TNRS
				rm=1 inputs/.TNRS/data.sql.run export_
				Commit
			ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
				If dropping a column, save the dependent views
				Make the same changes in the live TNRS.tnrs table on vegbiendev
				If dropping a column, recreate the dependent views
				Restart the TNRS client: make scrub by_col=1 &
	Publish the new import:
		**WARNING**: Before proceeding, be sure you have done *every single*
			verification step listed above. Otherwise, a previous valid import
			could incorrectly be overwritten with a broken one.
		make schemas/$version/publish # runtime: 1 min ("real 1m10.451s")
	unset version
	make backups/upload live=1
	on local machine:
		make backups/vegbien.$version.backup/download live=1
			# download backup to local machine
	ssh aaronmk@jupiter.nceas.ucsb.edu
		cd /data/dev/aaronmk/bien/backups
		For each newly-archived backup:
			make -s <backup>.md5/test
			Check that "OK" is printed next to the filename
	If desired, record the import times in inputs/import.stats.xls:
		On local machine:
		Open inputs/import.stats.xls
		If the rightmost import is within 5 columns of column IV:
			Copy the current tab to <leftmost-date>~<rightmost-date>
			Remove the previous imports from the current tab because they are
				now in the copied tab instead
		Insert a copy of the leftmost "By column" column group before it
		export version=<version>
		bin/import_date inputs/{.,}*/*/logs/$version.log.sql
		Update the import date in the upper-right corner
		bin/import_times inputs/{.,}*/*/logs/$version.log.sql
		Paste the output over the # Rows/Time columns, making sure that the
			row counts match up with the previous import's row counts
		If the row counts do not match up, insert or reorder rows as needed
			until they do. Get the datasource names from the log file footers:
			tail inputs/{.,}*/*/logs/$version.log.sql
		Commit: svn ci -m 'inputs/import.stats.xls: updated import times'
	Running individual steps separately:
	To run TNRS:
		To use an import other than public: export version=<version>
		make scrub &
		To view progress:
			tail -100 inputs/.TNRS/tnrs/logs/tnrs.make.log.sql
	To remake analytical DB:
		To use an import other than public: export version=<version>
		bin/make_analytical_db & # runtime: 13 h ("12:43:57elapsed")
		To view progress:
			tail -150 inputs/analytical_db/logs/make_analytical_db.log.sql
	To back up DB (staging tables and last import):
		To use an import *other than public*: export version=<version>
		make backups/TNRS.backup-remake &
		dump_opts=--exclude-schema=public make backups/vegbien.$version.backup/test &
			If the import has already been renamed to public, instead set
				dump_opts='' and replace $version with the appropriate revision
		make backups/upload live=1

Datasource setup:
	On local machine:
	Example steps for a datasource: wiki.vegpath.org/Import_process_for_Madidi
	umask ug=rwx,o= # prevent files from becoming web-accessible
	Add a new datasource: make inputs/<datasrc>/add
		<datasrc> may not contain spaces, and should be abbreviated.
		If the datasource is a herbarium, <datasrc> should be the herbarium code
			as defined by the Index Herbariorum <http://sweetgum.nybg.org/ih/>
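		For example, for a herbarium whose Index Herbariorum code is NY (used
			here only as an illustration):
			make inputs/NY/add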
	For a new-style datasource (one containing a ./run runscript):
		"cp" -f inputs/.NCBI/{Makefile,run,table.run} inputs/<datasrc>/
	For MySQL inputs (exports and live DB connections):
		For .sql exports:
			Place the original .sql file in _src/ (*not* in _MySQL/)
			Follow the steps starting with Install the staging tables below.
				This is for an initial sync to get the file onto vegbiendev.
			ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
				Create a database for the MySQL export in phpMyAdmin
				Give the bien user all database-specific privileges *except*
					UPDATE, DELETE, ALTER, DROP. This prevents bugs in the
					import scripts from accidentally deleting data.
				bin/mysql_bien database <inputs/<datasrc>/_src/export.sql &
		mkdir inputs/<datasrc>/_MySQL/
		cp -p lib/MySQL.{data,schema}.sql.make inputs/<datasrc>/_MySQL/
		Edit _MySQL/*.make for the DB connection
			For a .sql export, use server=vegbiendev and --user=bien
		Skip the Add input data for each table section
	For MS Access databases:
		Place the .mdb or .accdb file in _src/
		Download and install Access To PostgreSQL from
			http://www.bullzip.com/download.php
		Use Access To PostgreSQL to export the database:
			Export just the tables/indexes to inputs/<datasrc>/<file>.schema.sql
			Export just the data to inputs/<datasrc>/<file>.data.sql
		In <file>.schema.sql, make the following changes:
			Replace text "BOOLEAN" with "/*BOOLEAN*/INTEGER"
			Replace text "DOUBLE PRECISION NULL" with "DOUBLE PRECISION"
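			These replacements could also be scripted (a sketch, assuming GNU
				sed):
				sed -i -e 's|BOOLEAN|/*BOOLEAN*/INTEGER|g' -e 's|DOUBLE PRECISION NULL|DOUBLE PRECISION|g' inputs/<datasrc>/<file>.schema.sql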
		Skip the Add input data for each table section
	Add input data for each table present in the datasource:
		For .sql exports, you must use the name of the table in the DB export
		For CSV files, you can use any name. It's recommended to use a table
			name from <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/VegCSV#Suggested-table-names>
		Note that if this table will be joined together with another table, its
			name must end in ".src"
		make inputs/<datasrc>/<table>/add
			Important: DO NOT just create an empty directory named <table>!
				This command also creates necessary subdirs, such as logs/.
		If the table is in a .sql export: make inputs/<datasrc>/<table>/install
			Otherwise, place the CSV(s) for the table in
			inputs/<datasrc>/<table>/ OR place a query joining other tables
			together in inputs/<datasrc>/<table>/create.sql
		Important: When exporting relational databases to CSVs, you MUST ensure
			that embedded quotes are escaped by doubling them, *not* by
			preceding them with a "\" as is the default in phpMyAdmin
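			For example, a value containing the text 6" pot must appear in the
				CSV as "6"" pot", not "6\" pot".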
		If there are multiple part files for a table, and the header is repeated
			in each part, make sure each header is EXACTLY the same.
			(If the headers are not the same, the CSV concatenation script
			assumes the part files don't have individual headers and treats the
			subsequent headers as data rows.)
		Add <table> to inputs/<datasrc>/import_order.txt before other tables
			that depend on it
		For a new-style datasource:
			"cp" -f inputs/.NCBI/nodes/run inputs/<datasrc>/<table>/
			inputs/<datasrc>/<table>/run
	Install the staging tables:
		make inputs/<datasrc>/reinstall quiet=1 &
		For a MySQL .sql export:
			At prompt "[you]@vegbiendev's password:", enter your password
			At prompt "Enter password:", enter the value in config/bien_password
		To view progress: tail -f inputs/<datasrc>/<table>/logs/install.log.sql
		View the logs: tail -n +1 inputs/<datasrc>/*/logs/install.log.sql
			tail provides a header line with the filename
			+1 starts at the first line, to show the whole file
		For every file with an error 'column "..." specified more than once':
			Add a header override file "+header.<ext>" in <table>/:
				Note: The leading "+" should sort it before the flat files.
					"_" unfortunately sorts *after* capital letters in ASCII.
				Create a text file containing the header line of the flat files
				Add an ! at the beginning of the line
					This signals cat_csv that this is a header override.
				For empty names, use their 0-based column # (by convention)
				For duplicate names, add a distinguishing suffix
				For long names that collided, rename them to <= 63 chars long
				Do NOT make readability changes in this step; that is what the
					map spreadsheets (below) are for.
				Save
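				For example, if the flat files' header line were
					id,name,name, (hypothetical names, with a duplicate and an
					empty 4th column), +header.csv would contain:
					!id,name,name_2,3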
		If you made any changes, re-run the install command above
	Auto-create the map spreadsheets: make inputs/<datasrc>/
	Map each table's columns:
		In each <table>/ subdir, for each "via map" map.csv:
			Open the map in a spreadsheet editor
			Open the "core map" /mappings/Veg+-VegBIEN.csv
			In each row of the via map, set the right column to a value from the
				left column of the core map
			Save
		Regenerate the derived maps: make inputs/<datasrc>/
	Accept the test cases:
		For a new-style datasource:
			inputs/<datasrc>/run
			svn di inputs/<datasrc>/*/test.xml.ref
			If you get errors, follow the steps for old-style datasources below
		For an old-style datasource:
			make inputs/<datasrc>/test
			When prompted to "Accept new test output", enter y and press ENTER
			If you instead get errors, do one of the following for each one:
			-	If the error was due to a bug, fix it
			-	Add a SQL function that filters or transforms the invalid data
			-	Make an empty mapping for the columns that produced the error.
				Put something in the Comments column of the map spreadsheet to
				prevent the automatic mapper from auto-removing the mapping.
			When accepting tests, it's helpful to use WinMerge
				(see WinMerge setup below for configuration)
		make inputs/<datasrc>/test by_col=1
			If you get errors this time, this always indicates a bug, usually in
				the VegBIEN unique constraints or column-based import itself
	Add newly-created files: make inputs/<datasrc>/add
	Commit: svn ci -m "Added inputs/<datasrc>/" inputs/<datasrc>/
	Update vegbiendev:
		ssh aaronmk@jupiter.nceas.ucsb.edu
			up
		On local machine:
			./fix_perms
			make inputs/upload
			make inputs/upload live=1
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
			up
			make inputs/download
			make inputs/download live=1
			Follow the steps under Install the staging tables above

Maintenance:
	on a live machine, you should put the following in your .profile:
--
# make svn files web-accessible. this does not affect unversioned files, because
# these get the right permissions on the local machine instead.
umask ug=rwx,o=rx

unset TMOUT # TMOUT causes screen to exit even with background processes
--
	if http://vegbiendev.nceas.ucsb.edu/phppgadmin/ goes down:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
			make phppgadmin-Linux
	regularly, re-run full-database import so that bugs in it don't pile up.
		it needs to be kept in working order so that it works when it's needed.
	to synchronize vegbiendev, jupiter, and your local machine:
		**WARNING**: pay careful attention to all files that will be deleted or
			overwritten!
		install put if needed:
			download https://uutils.googlecode.com/svn/trunk/bin/put to ~/bin/ and `chmod +x` it
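			e.g. (a sketch, assuming wget is installed and ~/bin is in $PATH):
				wget -O ~/bin/put https://uutils.googlecode.com/svn/trunk/bin/put
				chmod +x ~/bin/put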
		when changes are made on vegbiendev:
			avoid extraneous diffs when rsyncing:
				on all machines:
				up
				./fix_perms
			ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
				upload:
				overwrite=1 bin/sync_upload --size-only
					then rerun with l=1 ...
			on your machine:
				download:
				overwrite=1 swap=1 src=. dest='aaronmk@jupiter.nceas.ucsb.edu:~/bien' put --exclude=.svn inputs/VegBIEN/TWiki
					then rerun with l=1 ...
				swap=1 bin/sync_upload backups/TNRS.backup
					then rerun with l=1 ...
				overwrite=1 swap=1 bin/sync_upload --size-only
					then rerun with l=1 ...
				overwrite=1 sync_remote_url=~/Dropbox/svn/ bin/sync_upload --existing --size-only # just update mtimes/perms
					then rerun with l=1 ...
	to back up e-mails:
		on local machine:
		/Applications/gmvault-v1.8.1-beta/bin/gmvault sync --multiple-db-owner --type quick aaronmk.nceas@gmail.com
		/Applications/gmvault-v1.8.1-beta/bin/gmvault sync --multiple-db-owner --type quick aaronmk@nceas.ucsb.edu
		open Thunderbird
		click the All Mail folder for each account and wait for it to download the e-mails in it
	to back up the version history:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
		svnsync sync file:///home/aaronmk/Dropbox/docs/BIEN/svn_repo/ # initial runtime: 1.5 h ("08:21:38" - "06:45:26") @vegbiendev
		(cd ~/Dropbox/docs/BIEN/git/; git svn fetch)
		overwrite=1        src=~ dest='aaronmk@jupiter.nceas.ucsb.edu:~' put Dropbox/docs/BIEN/svn_repo/ # runtime: 1 min ("1:05.08")
			then rerun with l=1 ...
		overwrite=1        src=~ dest='aaronmk@jupiter.nceas.ucsb.edu:~' put Dropbox/docs/BIEN/git/
			then rerun with l=1 ...
		on local machine:
		overwrite=1 swap=1 src=~ dest='aaronmk@jupiter.nceas.ucsb.edu:~' put Dropbox/docs/BIEN/svn_repo/ # runtime: 30 s ("36.19")
			then rerun with l=1 ...
		overwrite=1 swap=1 src=~ dest='aaronmk@jupiter.nceas.ucsb.edu:~' put Dropbox/docs/BIEN/git/
			then rerun with l=1 ...
	to synchronize a Mac's settings with my testing machine's:
		download:
			**WARNING**: this will overwrite all your user's settings!
			on your machine:
			overwrite=1 swap=1 sync_local_dir=~/Library/ sync_remote_subdir=Library/ bin/sync_upload --exclude="/Saved Application State"
				then rerun with l=1 ...
		upload:
			do the download step under "when changes are made on vegbiendev" > "on your machine" above
			ssh aaronmk@jupiter.nceas.ucsb.edu
				(cd ~/Dropbox/svn/; up)
			on your machine:
				rm ~/'Library/Thunderbird/Profiles/9oo8rcyn.default/ImapMail/imap.googlemail.com/[Gmail].sbd/Spam'
					# remove the downloaded Spam folder, because spam e-mails often contain viruses that would trigger clamscan
				overwrite=1 del= sync_local_dir=~/Dropbox/svn/ sync_remote_subdir=Dropbox/svn/ bin/sync_upload --size-only # just update mtimes
					then rerun with l=1 ...
				overwrite=1      sync_local_dir=~              sync_remote_subdir=             bin/sync_upload --exclude="/Library/Saved Application State" --exclude="/.Trash" --exclude="/bin" --exclude="/bin/pg_ctl" --exclude="/bin/unzip" --exclude="/Dropbox/home" --exclude="/.profile" --exclude="/.shrc" --exclude="/.bashrc"
					then rerun with l=1 ...
				overwrite=1      sync_local_dir=~              sync_remote_url=~/Dropbox/home  bin/sync_upload --exclude="/Library/Saved Application State" --exclude="/.Trash" --exclude="/.dropbox" --exclude="/Documents/BIEN" --exclude="/Dropbox" --exclude="/software" --exclude="/VirtualBox VMs/**.sav" --exclude="/VirtualBox VMs/**.vdi" --exclude="/VirtualBox VMs/**.vmdk"
					then rerun with l=1 ...
		upload just the VirtualBox VMs:
			on your machine:
				overwrite=1      sync_local_dir=~              sync_remote_subdir=             bin/sync_upload ~/"VirtualBox VMs/**"
					then rerun with l=1 ...
	to back up files not in Time Machine:
		On local machine:
		overwrite=1 src=/ dest=/Volumes/Time\ Machine\ Backups/ sudo -E put Library/PostgreSQL/9.3/data/
			then rerun with l=1 ...
		pg_ctl. stop # stop the PostgreSQL server
		overwrite=1 src=/ dest=/Volumes/Time\ Machine\ Backups/ sudo -E put Library/PostgreSQL/9.3/data/
			then rerun with l=1 ...
		pg_ctl. start # start the PostgreSQL server
	VegCore data dictionary:
		Regularly, or whenever the VegCore data dictionary page
			(https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/VegCore)
			is changed, regenerate mappings/VegCore.csv:
			On local machine:
			make mappings/VegCore.htm-remake; make mappings/
			apply new data dict mappings to datasource mappings/staging tables:
				inputs/run postprocess # runtime: see inputs/run
				time yes|make inputs/{NVS,SALVIAS,TEAM}/test # old-style import; runtime: 1 min ("0m59.692s") @starscream
			svn di mappings/VegCore.tables.redmine
			If there are changes, update the data dictionary's Tables section
			When moving terms, check that no terms were lost: svn di
			svn ci -m 'mappings/VegCore.htm: regenerated from wiki'
			ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
				perform the steps under "apply new data dict mappings to
					datasource mappings/staging tables" above
	Important: Whenever you install a system update that affects PostgreSQL or
		any of its dependencies, such as libc, you should restart the PostgreSQL
		server. Otherwise, you may get strange errors like "the database system
		is in recovery mode" which go away upon reimport, or you may not be able
		to access the database as the postgres superuser. This applies to both
		Linux and Mac OS X.

Backups:
	Archived imports:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
		Back up: make backups/<version>.backup &
			Note: To back up the last import, you must archive it first:
				make schemas/rotate
		Test: make -s backups/<version>.backup/test &
		Restore: make backups/<version>.backup/restore &
		Remove: make backups/<version>.backup/remove
		Download: make backups/<version>.backup/download
	TNRS cache:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
		Back up: make backups/TNRS.backup-remake &
			runtime: 3 min ("real 2m48.859s")
		Restore:
			yes|make inputs/.TNRS/uninstall
			make backups/TNRS.backup/restore &
				runtime: 5.5 min ("real 5m35.829s")
			yes|make schemas/public/reinstall
				Must come after TNRS restore to recreate tnrs_input_name view
	Full DB:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
		Back up: make backups/vegbien.<version>.backup &
		Test: make -s backups/vegbien.<version>.backup/test &
		Restore: make backups/vegbien.<version>.backup/restore &
		Download: make backups/vegbien.<version>.backup/download
	Import logs:
		On local machine:
		Download: make inputs/download-logs live=1

Datasource refreshing:
	VegBank:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
		make inputs/VegBank/vegbank.sql-remake
		make inputs/VegBank/reinstall quiet=1 &

Schema changes:
	On local machine:
	When changing the analytical views, run sync_analytical_..._to_view()
		to update the corresponding table
	Remember to update the following files with any renamings:
		schemas/filter_ERD.csv
		mappings/VegCore-VegBIEN.csv
		mappings/verify.*.sql
	Regenerate schema from installed DB: make schemas/remake
	Reinstall DB from schema: make schemas/public/reinstall schemas/reinstall
		**WARNING**: This will delete the public schema of your VegBIEN DB!
	If needed, reinstall staging tables:
		On local machine:
			sudo -E -u postgres psql <<<'ALTER DATABASE vegbien RENAME TO vegbien_prev'
			make db
			. bin/reinstall_all
			Fix any bugs and retry until no errors
			make schemas/public/install
				This must be run *after* the datasources are installed, because
				views in public depend on some of the datasources
			sudo -E -u postgres psql <<<'DROP DATABASE vegbien_prev'
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
			repeat the above steps
			**WARNING**: Do not run this until reinstall_all runs successfully
				on the local machine, or the live DB may be unrestorable!
	update mappings and staging table column names:
		on local machine:
			inputs/run postprocess # runtime: see inputs/run
			time yes|make inputs/{NVS,SALVIAS,TEAM}/test # old-style import; runtime: 1 min ("0m59.692s") @starscream
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
			manually apply schema changes to the live public schema
			do steps under "on local machine" above
	Sync ERD with vegbien.sql schema:
		Run make schemas/vegbien.my.sql
		Open schemas/vegbien.ERD.mwb in MySQLWorkbench
		Go to File > Export > Synchronize With SQL CREATE Script...
		For Input File, select schemas/vegbien.my.sql
		Click Continue
		In the changes list, select each table with an arrow next to it
		Click Update Model
		Click Continue
		Note: The generated SQL script will be empty because we are syncing in
			the opposite direction
		Click Execute
		Reposition any lines that have been reset
		Add any new tables by dragging them from the Catalog in the left sidebar
			to the diagram
		Remove any deleted tables by right-clicking the table's diagram element,
			selecting Delete '<table name>', and clicking Delete
		Save
		If desired, update the graphical ERD exports (see below)
	Update graphical ERD exports:
		Go to File > Export > Export as PNG...
		Select schemas/vegbien.ERD.png and click Save
		Go to File > Export > Export as SVG...
		Select schemas/vegbien.ERD.svg and click Save
		Go to File > Export > Export as Single Page PDF...
		Select schemas/vegbien.ERD.1_pg.pdf and click Save
		Go to File > Print...
		In the lower left corner, click PDF > Save as PDF...
		Set the Title and Author to ""
		Select schemas/vegbien.ERD.pdf and click Save
		Commit: svn ci -m "schemas/vegbien.ERD.mwb: Regenerated exports"
	Refactoring tips:
		To rename a table:
			In vegbien.sql, do the following:
				Replace regexp (?<=_|\b)<old>(?=_|\b) with <new>
					This is necessary because the table name is *everywhere*
				Search for <new>
				Manually change back any replacements inside comments
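				For example, renaming a hypothetical table plot to location:
					the regexp (?<=_|\b)plot(?=_|\b) matches plot and plot_id,
					but not subplot.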
		To rename a column:
			Rename the column: ALTER TABLE <table> RENAME <old> TO <new>;
			Recreate any foreign key for the column, removing CONSTRAINT <name>
				This resets the foreign key name using the new column name
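			For example (hypothetical table, column, and constraint names):
				ALTER TABLE specimen RENAME collector_id TO collector;
				ALTER TABLE specimen DROP CONSTRAINT specimen_collector_id_fkey;
				ALTER TABLE specimen ADD FOREIGN KEY (collector) REFERENCES party (party_id);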
	Creating a poster of the ERD:
		Determine the poster size:
			Measure the line height (from the bottom of one line to the bottom
				of another): 16.3cm/24 lines = 0.679cm
			Measure the height of the ERD: 35.4cm*2 = 70.8cm
			Zoom in as far as possible
			Measure the height of a capital letter: 3.5mm
			Measure the line height: 8.5mm
			Calculate the text's fraction of the line height: 3.5mm/8.5mm = 0.41
			Calculate the text height: 0.679cm*0.41 = 0.28cm
			Calculate the text height's fraction of the ERD height:
				0.28cm/70.8cm = 0.0040
			Measure the text height on the *VegBank* ERD poster: 5.5mm = 0.55cm
			Calculate the VegBIEN poster height to make the text the same size:
				0.55cm/0.0040 = 137.5cm H; *1in/2.54cm = 54.1in H
			The ERD aspect ratio is 11 in W x (2*8.5in H) = 11x17 portrait
			Calculate the VegBIEN poster width: 54.1in H*11W/17H = 35.0in W
			The minimum VegBIEN poster size is 35x54in portrait
		Determine the cost:
			The FedEx Kinkos near NCEAS (1030 State St, Santa Barbara, CA 93101)
				charges the following for posters:
				base: $7.25/sq ft
				lamination: $3/sq ft
				mounting on a board: $8/sq ft
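			A sample estimate for the minimum 35x54in poster size above
				(35in*54in = 1890 sq in = 13.1 sq ft):
				base: 13.1 sq ft * $7.25/sq ft = $95
				+ lamination: 13.1 sq ft * $3/sq ft = $39 ($134 total)
				+ mounting: 13.1 sq ft * $8/sq ft = $105 ($239 total)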

Testing:
	On a development machine, you should put the following in your .profile:
		umask ug=rwx,o= # prevent files from becoming web-accessible
		export log= n=2
	For development machine specs, see /planning/resources/dev_machine.specs/
	On local machine:
	Mapping process: make test
		Including column-based import: make test by_col=1
			If the row-based and column-based imports produce different inserted
			row counts, this usually means that a table is underconstrained
			(the unique indexes don't cover all possible rows).
			This can occur if you didn't use COALESCE(field, null_value) around
			a nullable field in a unique index. See sql_gen.null_sentinels for
			the appropriate null value to use.
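			For example, a unique index covering a nullable column might be
				created like this (hypothetical names; get the actual sentinel
				for the column's type from sql_gen.null_sentinels):
				CREATE UNIQUE INDEX location_unique ON location
					((COALESCE(subplot, '(null)')));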
	Map spreadsheet generation: make remake
	Missing mappings: make missing_mappings
	Everything (for most complete coverage): make test-all

Debugging:
	"Binary chop" debugging:
		(This is primarily useful for regressions that occurred in a previous
		revision, which was committed without running all the tests)
		up -r <rev>; make inputs/.TNRS/reinstall; make schemas/public/reinstall; make <failed-test>.xml
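		For example, if r11000 is known-good and r13000 known-bad (hypothetical
			revisions), test the midpoint first:
			up -r 12000; make inputs/.TNRS/reinstall; make schemas/public/reinstall; make <failed-test>.xml
			then repeat on whichever half still contains the regression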
	.htaccess:
		mod_rewrite:
			**IMPORTANT**: whenever you change the DirectorySlash setting for a
				directory, you *must* clear your browser's cache to ensure that
				a cached redirect is not used. this is because RewriteRule
				redirects are (by default) temporary, but DirectorySlash
				redirects are permanent.
				for Firefox:
					press Cmd+Shift+Delete
					check only Cache
					press Enter or click Clear Now

WinMerge setup:
	In a Windows VM:
	Install WinMerge from <http://winmerge.org/>
	Open WinMerge
	Go to Edit > Options and click Compare in the left sidebar
	Enable "Moved block detection", as described at
		<http://manual.winmerge.org/Configuration.html#d0e5892>.
	Set Whitespace to Ignore change, as described at
		<http://manual.winmerge.org/Configuration.html#d0e5758>.

Documentation:
	To generate a Redmine-formatted list of steps for column-based import:
		On local machine:
		make schemas/public/reinstall
		make inputs/ACAD/Specimen/logs/steps.by_col.log.sql
	To import and scrub just the test taxonomic names:
		ssh -t vegbiendev.nceas.ucsb.edu exec sudo su - aaronmk
		inputs/test_taxonomic_names/test_scrub

General:
	To see a program's description, read its top-of-file comment
	To see a program's usage, run it without arguments
	To remake a directory: make <dir>/remake
	To remake a file: make <file>-remake