Installation:
    Install: make install
        WARNING: This will delete the current public schema of your VegBIEN DB!
    Uninstall: make uninstall
        WARNING: This will delete your entire VegBIEN DB!
        This includes all archived imports and staging tables.

    
Maintenance:
    Important: Whenever you install a system update that affects PostgreSQL or
        any of its dependencies, such as libc, you should restart the PostgreSQL
        server. Otherwise, you may get strange errors like "the database system
        is in recovery mode" which go away upon reimport.
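    For example, on a Debian/Ubuntu-style system (a sketch; adjust the service
        name to match your init system):
        sudo service postgresql restart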

    
Data import:
    On local machine:
        make test by_col=1
            See note under Testing below
    On vegbiendev:
    svn up
    make inputs/upload
    For each newly-uploaded datasource: make inputs/<datasrc>/reinstall
    Update the schemas: make schemas/reinstall
        WARNING: This will delete the current public schema of your VegBIEN DB!
        To save it: make schemas/rotate
    Make sure there is at least 100GB of disk space on /: df -h
        The import schema is 75GB, and may use additional space for temp tables
    Start column-based import: . bin/import_all by_col=1
        To use row-based import: . bin/import_all
        To stop all running imports: . bin/stop_imports
        WARNING: Do NOT run import_all in the background, or the jobs it creates
            won't be owned by your shell.
        Note that import_all will take several hours to import the NCBI backbone
            and TNRS names before returning control to the shell.
    Wait (overnight) for the import to finish
    On local machine: make inputs/download-logs
    tail inputs/{.,}*/*/logs/*.r<revision>[.-]*log.sql
    In the output, search for "Command exited with non-zero status"
    For inputs that have this, fix the associated bug(s)
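    One way to list just the failing logs (a sketch, using the same glob as the
        tail command above):
        grep -l "Command exited with non-zero status" \
            inputs/{.,}*/*/logs/*.r<revision>[.-]*log.sql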
    If many inputs have errors, discard the current (partial) import:
        make schemas/public/reinstall
    Otherwise, continue
    Determine the import name:
        bin/import_name inputs/{.,}*/*/logs/*.r<revision>[.-]*log.sql
    Archive the last import: make schemas/rename/public.<import_name>
    Delete previous imports so they won't bloat the full DB backup:
        make backups/vegbien.<version>.backup/remove
    make backups/TNRS.backup-remake &
    make backups/vegbien.<version>.backup/test &
    env public=public.<version> bin/export_analytical_db &
    make backups/upload
    On jupiter:
        cd /data/dev/aaronmk/VegBIEN.backups
        For each newly-archived backup:
            make <backup>.md5/test
            Check that "OK" is printed next to the filename
    On nimoy:
        cd /home/bien/svn/
        svn up
        make backups/analytical_aggregate.public.<version>.csv/download
        make backups/analytical_aggregate.public.<version>.csv.md5/test
        Check that "OK" is printed next to the filename
        In the bien_web DB:
            Create the analytical_aggregate_r<revision> table using its schema
                in schemas/vegbien.my.sql
        env table=analytical_aggregate_r<revision> bin/publish_analytical_db \
            backups/analytical_aggregate.public.<version>.csv
    If desired, record the import times in inputs/import.stats.xls:
        Open inputs/import.stats.xls
        Insert a copy of the leftmost Column-based column group before it
        Update the import date in the upper-right corner
        ./bin/import_times inputs/{.,}*/*/logs/*.r<revision>[.-]*log.sql
        Paste the output over the # Rows/Time columns, making sure that the
            row counts match up with the previous import's row counts
        If the row counts do not match up, insert or reorder rows as needed
            until they do
        Commit: svn ci -m "inputs/import.stats.xls: Updated import times"
    To remake analytical DB: env public=... bin/make_analytical_db &
        public should be set to the current import's schema name
        To view progress:
            tail -f inputs/analytical_db/logs/make_analytical_db.log.sql

    
Backups:
    Archived imports:
        Back up: make backups/public.<date>.backup &
            Note: To back up the last import, you must archive it first:
                make schemas/rotate
        Test: make backups/public.<date>.backup/test &
        Restore: make backups/public.<date>.backup/restore &
        Remove: make backups/public.<date>.backup/remove
        Download: make backups/download
    TNRS cache:
        Back up: make backups/TNRS.backup-remake &
        Restore:
            yes|make inputs/.TNRS/uninstall
            make backups/TNRS.backup/restore &
            yes|make schemas/public/reinstall
                Must come after TNRS restore to recreate tnrs_input_name view
    Full DB:
        Back up: make backups/vegbien.<date>.backup &
        Test: make backups/vegbien.<date>.backup/test &
        Restore: make backups/vegbien.<date>.backup/restore &
        Download: make backups/download
    Import logs:
        Download: make inputs/download-logs

    
Datasource setup:
    Add a new datasource: make inputs/<datasrc>/add
        <datasrc> may not contain spaces, and should be abbreviated.
        If the datasource is a herbarium, <datasrc> should be the herbarium code
            as defined by the Index Herbariorum <http://sweetgum.nybg.org/ih/>
    For MySQL inputs (exports and live DB connections):
        For .sql exports:
            Place the original .sql file in _src/ (*not* in _MySQL/)
            Create a database for the MySQL export in phpMyAdmin
            mysql -p <database> <inputs/<datasrc>/_src/export.sql
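            To create the database from the command line instead of phpMyAdmin
                (a sketch; assumes a local MySQL server):
                mysqladmin -p create <database>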
        mkdir inputs/<datasrc>/_MySQL/
        cp -p lib/MySQL.{data,schema}.sql.make inputs/<datasrc>/_MySQL/
        Edit _MySQL/*.make for the DB connection
            For a .sql export, use your local MySQL DB
        Install the export according to "Install the staging tables" below
    Add input data for each table present in the datasource:
        For .sql exports, you must use the name of the table in the DB export
        For CSV files, you can use any name. It's recommended to use a table
            name from <https://projects.nceas.ucsb.edu/nceas/projects/bien/wiki/VegCSV#Suggested-table-names>
        Note that if this table will be joined together with another table, its
            name must end in ".src"
        make inputs/<datasrc>/<table>/add
            Important: DO NOT just create an empty directory named <table>!
                This command also creates necessary subdirs, such as logs/.
        If the table is in a .sql export: make inputs/<datasrc>/<table>/install
            Otherwise, place the CSV(s) for the table in
            inputs/<datasrc>/<table>/ OR place a query joining other tables
            together in inputs/<datasrc>/<table>/create.sql
        Important: When exporting relational databases to CSVs, you MUST ensure
            that embedded quotes are escaped by doubling them, *not* by
            preceding them with a "\" as is the default in phpMyAdmin
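            For example, a value containing an embedded quote, such as: 5" tall
                correct:   "5"" tall"
                incorrect: "5\" tall"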
        If there are multiple part files for a table, and the header is repeated
            in each part, make sure each header is EXACTLY the same.
            (If the headers are not the same, the CSV concatenation script
            assumes the part files don't have individual headers and treats the
            subsequent headers as data rows.)
        Add <table> to inputs/<datasrc>/import_order.txt before other tables
            that depend on it
    Install the staging tables:
        make inputs/<datasrc>/reinstall quiet=1 &
        To view progress: tail -f inputs/<datasrc>/<table>/logs/install.log.sql
        View the logs: tail -n +1 inputs/<datasrc>/*/logs/install.log.sql
            tail provides a header line with the filename
            +1 starts at the first line, to show the whole file
        For every file with an error 'column "..." specified more than once':
            Add a header override file "+header.<ext>" in <table>/:
                Note: The leading "+" should sort it before the flat files.
                    "_" unfortunately sorts *after* capital letters in ASCII.
                Create a text file containing the header line of the flat files
                Add an ! at the beginning of the line
                    This signals cat_csv that this is a header override.
                For empty names, use their 0-based column # (by convention)
                For duplicate names, add a distinguishing suffix
                For long names that collided, rename them to <= 63 chars long
                Do NOT make readability changes in this step; that is what the
                    map spreadsheets (below) are for.
                Save
        If you made any changes, re-run the install command above
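        For example, a hypothetical +header.csv for flat files whose header has
            an unnamed first column and a duplicate "name" column might contain:
            !0,id,name,name_2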
    Auto-create the map spreadsheets: make inputs/<datasrc>/
    Map each table's columns:
        In each <table>/ subdir, for each "via map" map.csv:
            Open the map in a spreadsheet editor
            Open the "core map" /mappings/Veg+-VegBIEN.csv
            In each row of the via map, set the right column to a value from the
                left column of the core map
            Save
        Regenerate the derived maps: make inputs/<datasrc>/
    Accept the test cases:
        make inputs/<datasrc>/test
            When prompted to "Accept new test output", enter y and press ENTER
            If you instead get errors, do one of the following for each one:
            -   If the error was due to a bug, fix it
            -   Add a SQL function that filters or transforms the invalid data
            -   Make an empty mapping for the columns that produced the error.
                Put something in the Comments column of the map spreadsheet to
                prevent the automatic mapper from auto-removing the mapping.
            When accepting tests, it's helpful to use WinMerge
                (see WinMerge setup below for configuration)
        make inputs/<datasrc>/test by_col=1
            If you get errors this time, they always indicate a bug, usually in
                the VegBIEN unique constraints or in column-based import itself
    Add newly-created files: make inputs/<datasrc>/add
    Commit: svn ci -m "Added inputs/<datasrc>/" inputs/<datasrc>/
    Update vegbiendev:
        On vegbiendev: svn up
        On local machine: make inputs/upload
        On vegbiendev:
            Follow the steps under "Install the staging tables" above
            make inputs/<datasrc>/test

    
Datasource refreshing:
    VegBank:
        make inputs/VegBank/vegbank.sql-remake
        make inputs/VegBank/reinstall quiet=1 &

    
Schema changes:
    Remember to update the following files with any renamings:
        schemas/filter_ERD.csv
        mappings/VegCore-VegBIEN.csv
        mappings/verify.*.sql
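        To check which of these still mention the old name (a sketch; <old> is
            the name being renamed):
            grep -l "<old>" schemas/filter_ERD.csv \
                mappings/VegCore-VegBIEN.csv mappings/verify.*.sql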
    Regenerate schema from installed DB: make schemas/remake
    Reinstall DB from schema: make schemas/reinstall
        WARNING: This will delete the current public schema of your VegBIEN DB!
    Reinstall staging tables: . bin/reinstall_all
    Sync ERD with vegbien.sql schema:
        Run make schemas/vegbien.my.sql
        Open schemas/vegbien.ERD.mwb in MySQLWorkbench
        Go to File > Export > Synchronize With SQL CREATE Script...
        For Input File, select schemas/vegbien.my.sql
        Click Continue
        In the changes list, select each table with an arrow next to it
        Click Update Model
        Click Continue
        Note: The generated SQL script will be empty because we are syncing in
            the opposite direction
        Click Execute
        Reposition any lines that have been reset
        Add any new tables by dragging them from the Catalog in the left sidebar
            to the diagram
        Remove any deleted tables by right-clicking the table's diagram element,
            selecting Delete '<table name>', and clicking Delete
        Save
        If desired, update the graphical ERD exports (see below)
    Update graphical ERD exports:
        Go to File > Export > Export as PNG...
        Select schemas/vegbien.ERD.png and click Save
        Go to File > Export > Export as SVG...
        Select schemas/vegbien.ERD.svg and click Save
        Go to File > Export > Export as Single Page PDF...
        Select schemas/vegbien.ERD.1_pg.pdf and click Save
        Go to File > Print...
        In the lower left corner, click PDF > Save as PDF...
        Set the Title and Author to ""
        Select schemas/vegbien.ERD.pdf and click Save
    Refactoring tips:
        To rename a table:
            In vegbien.sql, do the following:
                Replace regexp (?<=_|\b)<old>(?=_|\b) with <new>
                    This is necessary because the table name is *everywhere*
                Search for <new>
                Manually change back any replacements inside comments
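                One way to script the replacement (a sketch; assumes Perl, with
                    the lookbehind rewritten as a capture for portability):
                    perl -pi -e 's/(\b|_)<old>(?=_|\b)/${1}<new>/g' \
                        schemas/vegbien.sql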
        To rename a column:
            Rename the column: ALTER TABLE <table> RENAME <old> TO <new>;
            Recreate any foreign key for the column, removing CONSTRAINT <name>
                This resets the foreign key name using the new column name
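            A sketch of the recreate step (hypothetical column names):
                ALTER TABLE plot DROP CONSTRAINT plot_old_col_fkey;
                ALTER TABLE plot ADD FOREIGN KEY (new_col)
                    REFERENCES party (party_id);
                -- with CONSTRAINT omitted, PostgreSQL regenerates the name
                -- from the new column, e.g. plot_new_col_fkey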
    Creating a poster of the ERD:
        Determine the poster size:
            Measure the line height (from the bottom of one line to the bottom
                of another): 16.3cm/24 lines = 0.679cm
            Measure the height of the ERD: 35.4cm*2 = 70.8cm
            Zoom in as far as possible
            Measure the height of a capital letter: 3.5mm
            Measure the line height: 8.5mm
            Calculate the text's fraction of the line height: 3.5mm/8.5mm = 0.41
            Calculate the text height: 0.679cm*0.41 = 0.28cm
            Calculate the text height's fraction of the ERD height:
                0.28cm/70.8cm = 0.0040
            Measure the text height on the *VegBank* ERD poster: 5.5mm = 0.55cm
            Calculate the VegBIEN poster height to make the text the same size:
                0.55cm/0.0040 = 137.5cm H; *1in/2.54cm = 54.1in H
            The ERD aspect ratio is 11 in W x (2*8.5in H) = 11x17 portrait
            Calculate the VegBIEN poster width: 54.1in H*11W/17H = 35.0in W
            The minimum VegBIEN poster size is 35x54in portrait
        Determine the cost:
            The FedEx Kinkos near NCEAS (1030 State St, Santa Barbara, CA 93101)
                charges the following for posters:
                base: $7.25/sq ft
                lamination: $3/sq ft
                mounting on a board: $8/sq ft

    
Testing:
    Mapping process: make test
        Including column-based import: make test by_col=1
            If the row-based and column-based imports produce different inserted
            row counts, this usually means that a table is underconstrained
            (the unique indexes don't cover all possible rows).
            This can occur if you didn't use COALESCE(field, null_value) around
            a nullable field in a unique index. See sql_gen.null_sentinels for
            the appropriate null value to use.
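            A sketch of such an index (hypothetical table and columns; pick
                <null_value> from sql_gen.null_sentinels):
                CREATE UNIQUE INDEX plot_unique ON plot
                    (parent_id, COALESCE(plotcode, <null_value>));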
    Map spreadsheet generation: make remake
    Missing mappings: make missing_mappings
    Everything (for most complete coverage): make test-all

    
WinMerge setup:
    Install WinMerge from <http://winmerge.org/>
    Open WinMerge
    Go to Edit > Options and click Compare in the left sidebar
    Enable "Moved block detection", as described at
        <http://manual.winmerge.org/Configuration.html#d0e5892>
    Set Whitespace to Ignore change, as described at
        <http://manual.winmerge.org/Configuration.html#d0e5758>

    
Documentation:
    To generate a Redmine-formatted list of steps for column-based import:
        make inputs/ACAD/Specimen/logs/steps.by_col.log.sql
    To import and scrub just the test taxonomic names:
        inputs/test_taxonomic_names/test_scrub

    
General:
    To see a program's description, read its top-of-file comment
    To see a program's usage, run it without arguments
    To remake a directory: make <dir>/remake
    To remake a file: make <file>-remake