Installation:
    Check out svn: svn co https://code.nceas.ucsb.edu/code/projects/bien
    cd bien/
    Install: make install
        WARNING: This will delete the current public schema of your VegBIEN DB!
    Uninstall: make uninstall
        WARNING: This will delete your entire VegBIEN DB!
        This includes all archived imports and staging tables.

Maintenance:
    VegCore data dictionary:
        Regularly, or whenever the VegCore data dictionary page
            (https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/VegCore)
            is changed, regenerate mappings/VegCore.csv:
            make mappings/VegCore.htm-remake; make mappings/
            svn ci -m "mappings/VegCore.csv: Regenerated from wiki"
    Important: Whenever you install a system update that affects PostgreSQL or
        any of its dependencies, such as libc, you should restart the PostgreSQL
        server. Otherwise, you may get strange errors like "the database system
        is in recovery mode" which go away upon reimport, or you may not be able
        to access the database as the postgres superuser. This applies to both
        Linux and Mac OS X.
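        For example (generic PostgreSQL administration, not project-specific;
            the exact command depends on how PostgreSQL was installed):
            On Ubuntu-style Linux: sudo service postgresql restart
            Using pg_ctl directly: pg_ctl restart -D <your data directory>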

Single datasource import:
    (Re)import and scrub: make inputs/<datasrc>/reimport_scrub
    (Re)import only: make inputs/<datasrc>/reimport
    (Re)scrub: make inputs/<datasrc>/rescrub
    Note that these commands also work if the datasource is not yet imported
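    For example, to (re)import and scrub the datasource named ACAD:
        make inputs/ACAD/reimport_scrub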

Full database import:
    On local machine:
        make inputs/upload
        make test by_col=1
            See note under Testing below
    On vegbiendev:
    Ensure there are no local modifications: svn st
    svn up
    For each newly-uploaded datasource above: make inputs/<datasrc>/reinstall
    Update the auxiliary schemas: make schemas/reinstall
        The public schema will be installed separately by the import process
    Delete imports before the last one so they won't bloat the full DB backup:
        make backups/vegbien.<version>.backup/remove
        To keep a previous import other than the public schema:
            export dump_opts='--exclude-schema=public --exclude-schema=<version>'
    Make sure there is at least 150GB of disk space on /: df -h
        The import schema is 100GB, and may use additional space for temp tables
        To free up space, remove backups that have been archived on jupiter:
            List backups/ to view older backups
            Check their MD5 sums using the steps under On jupiter below
            Remove these backups
    unset version
    Start column-based import: . bin/import_all by_col=1
        To use row-based import: . bin/import_all
        To stop all running imports: . bin/stop_imports
        WARNING: Do NOT run import_all in the background, or the jobs it creates
            won't be owned by your shell.
        Note that import_all will take several hours to import the NCBI backbone
            and TNRS names before returning control to the shell.
    Wait (overnight) for the import to finish
    On local machine: make inputs/download-logs
    In PostgreSQL:
        Check that the provider_count and source tables contain entries for all
            inputs
        Check that unscrubbed_taxondetermination_view returns no rows
        Check that there are taxondeterminations whose source_id is
            source_by_shortname('TNRS')
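        For example, these checks can be run as queries (a sketch;
            taxondetermination is assumed to be the table behind the last check):
            SELECT * FROM provider_count;
            SELECT * FROM source;
            SELECT * FROM unscrubbed_taxondetermination_view; -- should return no rows
            SELECT count(*) FROM taxondetermination
                WHERE source_id = source_by_shortname('TNRS'); -- should be > 0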
67
    tail inputs/{.,}*/*/logs/$version.log.sql
68
    In the output, search for "Command exited with non-zero status"
69
    For inputs that have this, fix the associated bug(s)
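        Alternatively, to list just the logs that contain errors, standard grep
            works (a sketch, not a project script):
            grep -l "Command exited with non-zero status" inputs/{.,}*/*/logs/$version.log.sql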
    If many inputs have errors, discard the current (partial) import:
        make schemas/$version/uninstall
    Otherwise, continue
    make schemas/$version/publish
    unset version
    sudo backups/fix_perms
    make backups/upload
    On jupiter:
        cd /data/dev/aaronmk/bien/backups
        For each newly-archived backup:
            make <backup>.md5/test
            Check that "OK" is printed next to the filename
    On nimoy:
        cd /home/bien/svn/
        svn up
        export version=<version>
        make backups/analytical_aggregate.$version.csv/download
        make backups/analytical_aggregate.$version.csv.md5/test
        Check that "OK" is printed next to the filename
        In the bien_web DB:
            Create the analytical_aggregate_<version> table using its schema
                in schemas/vegbien.my.sql
        env table=analytical_aggregate_$version bin/publish_analytical_db \
            backups/analytical_aggregate.$version.csv
    If desired, record the import times in inputs/import.stats.xls:
        Open inputs/import.stats.xls
        Insert a copy of the leftmost "By column" column group before it
        bin/import_date inputs/{.,}*/*/logs/$version.log.sql
        Update the import date in the upper-right corner
        bin/import_times inputs/{.,}*/*/logs/$version.log.sql
        Paste the output over the # Rows/Time columns, making sure that the
            row counts match up with the previous import's row counts
        If the row counts do not match up, insert or reorder rows as needed
            until they do
        Commit: svn ci -m "inputs/import.stats.xls: Updated import times"
    To remake the analytical DB: bin/make_analytical_db &
        To view progress:
            tail -f inputs/analytical_db/logs/make_analytical_db.log.sql

Backups:
    Archived imports:
        Back up: make backups/<version>.backup &
            Note: To back up the last import, you must archive it first:
                make schemas/rotate
        Test: make backups/<version>.backup/test &
        Restore: make backups/<version>.backup/restore &
        Remove: make backups/<version>.backup/remove
        Download: make backups/download
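        For example, to archive the last import and then back it up and verify
            it (a sketch; <version> is the name of the archived schema):
            make schemas/rotate
            make backups/<version>.backup &
            make backups/<version>.backup/test &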
    TNRS cache:
        Back up: make backups/TNRS.backup-remake &
        Restore:
            yes|make inputs/.TNRS/uninstall
            make backups/TNRS.backup/restore &
            yes|make schemas/public/reinstall
                Must come after TNRS restore to recreate tnrs_input_name view
    Full DB:
        Back up: make backups/vegbien.<version>.backup &
        Test: make backups/vegbien.<version>.backup/test &
        Restore: make backups/vegbien.<version>.backup/restore &
        Download: make backups/download
    Import logs:
        Download: make inputs/download-logs

Datasource setup:
    Add a new datasource: make inputs/<datasrc>/add
        <datasrc> may not contain spaces, and should be abbreviated.
        If the datasource is a herbarium, <datasrc> should be the herbarium code
            as defined by the Index Herbariorum <http://sweetgum.nybg.org/ih/>
    For MySQL inputs (exports and live DB connections):
        For .sql exports:
            Place the original .sql file in _src/ (*not* in _MySQL/)
            Create a database for the MySQL export in phpMyAdmin
            mysql -p database <inputs/<datasrc>/_src/export.sql
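            If you prefer the command line to phpMyAdmin, the database can also
                be created with standard MySQL tools (a sketch; newdb is a
                hypothetical database name):
                mysqladmin -u root -p create newdb
                mysql -u root -p newdb <inputs/<datasrc>/_src/export.sql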
        mkdir inputs/<datasrc>/_MySQL/
        cp -p lib/MySQL.{data,schema}.sql.make inputs/<datasrc>/_MySQL/
        Edit _MySQL/*.make for the DB connection
            For a .sql export, use your local MySQL DB
        Install the export according to Install the staging tables below
    Add input data for each table present in the datasource:
        For .sql exports, you must use the name of the table in the DB export
        For CSV files, you can use any name. It's recommended to use a table
            name from <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/VegCSV#Suggested-table-names>
        Note that if this table will be joined together with another table, its
            name must end in ".src"
        make inputs/<datasrc>/<table>/add
            Important: DO NOT just create an empty directory named <table>!
                This command also creates necessary subdirs, such as logs/.
        If the table is in a .sql export: make inputs/<datasrc>/<table>/install
            Otherwise, place the CSV(s) for the table in
            inputs/<datasrc>/<table>/ OR place a query joining other tables
            together in inputs/<datasrc>/<table>/create.sql
        Important: When exporting relational databases to CSVs, you MUST ensure
            that embedded quotes are escaped by doubling them, *not* by
            preceding them with a "\" as is the default in phpMyAdmin
        If there are multiple part files for a table, and the header is repeated
            in each part, make sure each header is EXACTLY the same.
            (If the headers are not the same, the CSV concatenation script
            assumes the part files don't have individual headers and treats the
            subsequent headers as data rows.)
        Add <table> to inputs/<datasrc>/import_order.txt before other tables
            that depend on it
    Install the staging tables:
        make inputs/<datasrc>/reinstall quiet=1 &
        To view progress: tail -f inputs/<datasrc>/<table>/logs/install.log.sql
        View the logs: tail -n +1 inputs/<datasrc>/*/logs/install.log.sql
            tail provides a header line with the filename
            +1 starts at the first line, to show the whole file
        For every file with an error 'column "..." specified more than once':
            Add a header override file "+header.<ext>" in <table>/:
                Note: The leading "+" should sort it before the flat files.
                    "_" unfortunately sorts *after* capital letters in ASCII.
                Create a text file containing the header line of the flat files
                Add a "!" at the beginning of the line
                    This signals cat_csv that this is a header override.
                For empty names, use their 0-based column # (by convention)
                For duplicate names, add a distinguishing suffix
                For long names that collided, rename them to <= 63 chars long
                Do NOT make readability changes in this step; that is what the
                    map spreadsheets (below) are for.
                Save
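            For example, a +header.csv might contain the single line (column
                names are hypothetical):
                !id,collector,date,3,locality,locality_2
                Here the leading "!" marks it as a header override, the empty
                    fourth column is named by its 0-based column # (3), and the
                    duplicate locality column gets a distinguishing suffix.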
        If you made any changes, re-run the install command above
    Auto-create the map spreadsheets: make inputs/<datasrc>/
    Map each table's columns:
        In each <table>/ subdir, for each "via map" map.csv:
            Open the map in a spreadsheet editor
            Open the "core map" /mappings/Veg+-VegBIEN.csv
            In each row of the via map, set the right column to a value from the
                left column of the core map
            Save
        Regenerate the derived maps: make inputs/<datasrc>/
    Accept the test cases:
        make inputs/<datasrc>/test
            When prompted to "Accept new test output", enter y and press ENTER
            If you instead get errors, do one of the following for each one:
            -   If the error was due to a bug, fix it
            -   Add a SQL function that filters or transforms the invalid data
            -   Make an empty mapping for the columns that produced the error.
                Put something in the Comments column of the map spreadsheet to
                prevent the automatic mapper from auto-removing the mapping.
            When accepting tests, it's helpful to use WinMerge
                (see WinMerge setup below for configuration)
        make inputs/<datasrc>/test by_col=1
            If you get errors this time, they always indicate a bug, usually in
                the VegBIEN unique constraints or the column-based import itself
    Add newly-created files: make inputs/<datasrc>/add
    Commit: svn ci -m "Added inputs/<datasrc>/" inputs/<datasrc>/
    Update vegbiendev:
        On vegbiendev: svn up
        On local machine: make inputs/upload
        On vegbiendev:
            Follow the steps under Install the staging tables above
            make inputs/<datasrc>/test

Datasource refreshing:
    VegBank:
        make inputs/VegBank/vegbank.sql-remake
        make inputs/VegBank/reinstall quiet=1 &

Schema changes:
    Remember to update the following files with any renamings:
        schemas/filter_ERD.csv
        mappings/VegCore-VegBIEN.csv
        mappings/verify.*.sql
    Regenerate schema from installed DB: make schemas/remake
    Reinstall DB from schema: make schemas/public/reinstall schemas/reinstall
        WARNING: This will delete the current public schema of your VegBIEN DB!
    Reinstall staging tables: . bin/reinstall_all
    Sync ERD with vegbien.sql schema:
        Run make schemas/vegbien.my.sql
        Open schemas/vegbien.ERD.mwb in MySQL Workbench
        Go to File > Export > Synchronize With SQL CREATE Script...
        For Input File, select schemas/vegbien.my.sql
        Click Continue
        In the changes list, select each table with an arrow next to it
        Click Update Model
        Click Continue
        Note: The generated SQL script will be empty because we are syncing in
            the opposite direction
        Click Execute
        Reposition any lines that have been reset
        Add any new tables by dragging them from the Catalog in the left sidebar
            to the diagram
        Remove any deleted tables by right-clicking the table's diagram element,
            selecting Delete '<table name>', and clicking Delete
        Save
        If desired, update the graphical ERD exports (see below)
    Update graphical ERD exports:
        Go to File > Export > Export as PNG...
        Select schemas/vegbien.ERD.png and click Save
        Go to File > Export > Export as SVG...
        Select schemas/vegbien.ERD.svg and click Save
        Go to File > Export > Export as Single Page PDF...
        Select schemas/vegbien.ERD.1_pg.pdf and click Save
        Go to File > Print...
        In the lower left corner, click PDF > Save as PDF...
        Set the Title and Author to ""
        Select schemas/vegbien.ERD.pdf and click Save
        Commit: svn ci -m "schemas/vegbien.ERD.mwb: Regenerated exports"
    Refactoring tips:
        To rename a table:
            In vegbien.sql, do the following:
                Replace regexp (?<=_|\b)<old>(?=_|\b) with <new>
                    This is necessary because the table name is *everywhere*
                Search for <new>
                Manually change back any replacements inside comments
        To rename a column:
            Rename the column: ALTER TABLE <table> RENAME <old> TO <new>;
            Recreate any foreign key for the column, removing CONSTRAINT <name>
                This resets the foreign key name using the new column name
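            For example (a sketch using the placeholders above; the DROP names
                the existing foreign key constraint):
                ALTER TABLE <table> RENAME <old> TO <new>;
                ALTER TABLE <table> DROP CONSTRAINT <old foreign key name>;
                ALTER TABLE <table> ADD FOREIGN KEY (<new>) REFERENCES <parent table>;
                Because the ADD FOREIGN KEY omits a CONSTRAINT <name> clause,
                    PostgreSQL names the new constraint after the new column.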
    Creating a poster of the ERD:
        Determine the poster size:
            Measure the line height (from the bottom of one line to the bottom
                of another): 16.3cm/24 lines = 0.679cm
            Measure the height of the ERD: 35.4cm*2 = 70.8cm
            Zoom in as far as possible
            Measure the height of a capital letter: 3.5mm
            Measure the line height: 8.5mm
            Calculate the text's fraction of the line height: 3.5mm/8.5mm = 0.41
            Calculate the text height: 0.679cm*0.41 = 0.28cm
            Calculate the text height's fraction of the ERD height:
                0.28cm/70.8cm = 0.0040
            Measure the text height on the *VegBank* ERD poster: 5.5mm = 0.55cm
            Calculate the VegBIEN poster height to make the text the same size:
                0.55cm/0.0040 = 137.5cm H; *1in/2.54cm = 54.1in H
            The ERD aspect ratio is 11 in W x (2*8.5in H) = 11x17 portrait
            Calculate the VegBIEN poster width: 54.1in H*11W/17H = 35.0in W
            The minimum VegBIEN poster size is 35x54in portrait
        Determine the cost:
            The FedEx Kinkos near NCEAS (1030 State St, Santa Barbara, CA 93101)
                charges the following for posters:
                base: $7.25/sq ft
                lamination: $3/sq ft
                mounting on a board: $8/sq ft

Testing:
    On a development machine, you should put the following in your .profile:
        export log= n=2
    Mapping process: make test
        Including column-based import: make test by_col=1
            If the row-based and column-based imports produce different inserted
            row counts, this usually means that a table is underconstrained
            (the unique indexes don't cover all possible rows).
            This can occur if you didn't use COALESCE(field, null_value) around
            a nullable field in a unique index. See sql_gen.null_sentinels for
            the appropriate null value to use.
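            For example, such an index might look like the following (a sketch;
                the table and column names are hypothetical, and the sentinel
                should be the value given in sql_gen.null_sentinels):
                CREATE UNIQUE INDEX plantname_unique ON plantname (parent_id,
                    (COALESCE(rank, '<null_value>')),
                    (COALESCE(plantname, '<null_value>')));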
    Map spreadsheet generation: make remake
    Missing mappings: make missing_mappings
    Everything (for most complete coverage): make test-all

Debugging:
    "Binary chop" debugging:
        (This is primarily useful for regressions that occurred in a previous
        revision, which was committed without running all the tests)
        svn up -r <rev>; make inputs/.TNRS/reinstall; make schemas/public/reinstall; make <failed-test>.xml
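        For example (revision numbers are hypothetical): if the test passed at
            r5000 but fails at r5200, first try the midpoint:
            svn up -r 5100; make inputs/.TNRS/reinstall; make schemas/public/reinstall; make <failed-test>.xml
            If it fails, the regression is in r5001-r5100; if it passes, it is
                in r5101-r5200. Repeat, halving the range each time, until the
                offending revision is found.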

WinMerge setup:
    Install WinMerge from <http://winmerge.org/>
    Open WinMerge
    Go to Edit > Options and click Compare in the left sidebar
    Enable "Moved block detection", as described at
        <http://manual.winmerge.org/Configuration.html#d0e5892>.
    Set Whitespace to Ignore change, as described at
        <http://manual.winmerge.org/Configuration.html#d0e5758>.

Documentation:
    To generate a Redmine-formatted list of steps for column-based import:
        make schemas/public/reinstall
        make inputs/ACAD/Specimen/logs/steps.by_col.log.sql
    To import and scrub just the test taxonomic names:
        inputs/test_taxonomic_names/test_scrub

General:
    To see a program's description, read its top-of-file comment
    To see a program's usage, run it without arguments
    To remake a directory: make <dir>/remake
    To remake a file: make <file>-remake