VegX-VegBIEN.organisms.csv: Renamed abioticObservation user-defined field clayPercent to clay to be consistent with VegBIEN
VegX-VegBIEN.organisms.csv: Renamed abioticObservation user-defined field cationCap to cationExchangeCapacity to be consistent with VegBIEN
VegX-VegBIEN.organisms.csv: Renamed plotObservation user-defined field precipMm to precipitation to be consistent with VegBIEN
VegX-VegBIEN.organisms.csv: Changed plotObservation user-defined field plotMethodology to /simpleUserdefined[name=method]/*ID/method/name
schemas/postgresql.nimoy.conf: Increased default_statistics_target to the 8.4 default value to improve query execution plans
Added schemas/postgresql.Mac.conf (for tuning developers' local testing DBs)
schemas/postgresql*.conf: Increased checkpoint_segments and checkpoint_completion_target so that checkpoints (which are performance-intensive) are written less often and load-balanced better
xml_dom.py: Don't print whitespace from parsed XML document when pretty-printing XML. minidom modifications section: Added subsection labels for the class each modification applies to.
Parser.py: Renamed SyntaxException to SyntaxError because it's an unexpected condition that should exit the program, a.k.a. an error
bin/map: process_rows(): When iterating over each row, only retrieve the next row if the end (the row-count limit) has not been reached. This prevents the next row from being fetched once the limit has already been reached, which could otherwise cause an entire additional consecutive XML document to be parsed. This is primarily useful for XML inputs with a ".0.top" segment prepended before the other documents: that segment contains just the first two nodes, so only this smaller XML document needs to be parsed when only the first two nodes are needed for testing. Without this fix, the ".0.top" segment would have needed to contain the first three nodes instead.
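The lazy-fetch guard described above can be sketched as follows (an illustrative simplification; the real process_rows() in bin/map has a different signature). The point is that checking the limit *before* calling next() matters whenever advancing the iterator is expensive, e.g. when it triggers parsing of the next consecutive XML document:

```python
def process_rows(process_row, rows, limit=None):
    """Process rows from an iterator, never fetching a row past the
    limit (hypothetical sketch of the bin/map fix).

    process_row: callback invoked on each row
    rows: an iterator whose next() call may be expensive
    limit: maximum number of rows to process, or None for no limit
    Returns the number of rows processed.
    """
    row_ct = 0
    # Check the limit *before* fetching, so the iterator is never
    # advanced past the last row we intend to process
    while limit is None or row_ct < limit:
        try:
            row = next(rows)
        except StopIteration:
            break
        process_row(row)
        row_ct += 1
    return row_ct
```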
inputs/XAL: Accepted initial test outputs
inputs/XAL: Added maps
bin/map: Extended consecutive XML document support to direct-XML inputs (without a map spreadsheet). Factored out consecutive XML document row-iteration code into helper method get_rows() which does the iters.flatten() and itertools.imap() calls.
bin/map: Fixed bug in iteration over consecutive XML documents where only the first element of the first document was processed. Use of iters.flatten() and itertools.imap() fixes this problem so that the consecutive XML documents are regarded as a continuous stream of rows.
bin/map: Use new xml_parse.docs_iter() to iterate over each consecutive XML document in stdin
xml_parse.py: Added support for parsing consecutive XML documents in a stream
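A minimal sketch of what iterating over consecutive XML documents in one stream might look like (the actual xml_parse.docs_iter() presumably yields parsed documents rather than strings, and its boundary-detection logic differs; the end_tag parameter here is a simplifying assumption):

```python
def docs_iter(lines, end_tag):
    """Yield each consecutive XML document in a stream of lines as a
    string (hypothetical sketch; the real docs_iter() differs).

    Assumes each document ends with a line whose content ends in
    end_tag, e.g. the closing root element of a DiGIR response.
    """
    doc = []
    for line in lines:
        doc.append(line)
        if line.strip().endswith(end_tag):
            # Document boundary reached: emit it and start the next one
            yield ''.join(doc)
            doc = []
    # Emit any trailing partial document that has non-blank content
    if any(s.strip() for s in doc):
        yield ''.join(doc)
```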
Added iters.py
streams.py: Added FilterStream. Changed TracedStream to use FilterStream.
Moved parse_str() from xml_dom.py to xml_parse.py
Added xml_parse.py
streams.py: CaptureStream: Ignore start_str when recording and end_str when not recording
streams.py: CaptureStream: Get each match as a separate array elem instead of concatenated together
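The capture-as-separate-elements behavior can be sketched like this (a simplified, line-oriented illustration; the real streams.CaptureStream wraps raw read() data, and this sketch's method names and internals are assumptions):

```python
class CaptureStream:
    """Wrap a readable stream, recording text between start_str and
    end_str; each completed match is stored as a separate list element
    in self.matches rather than concatenated together.

    Hypothetical sketch: scans whole lines and assumes at most one
    match starts per line.
    """
    def __init__(self, stream, start_str, end_str):
        self.stream = stream
        self.start_str = start_str
        self.end_str = end_str
        self.matches = []
        self._cur = None  # None means not currently recording

    def readline(self):
        line = self.stream.readline()
        text = line
        if self._cur is None:
            # Not recording: look for the start marker
            idx = text.find(self.start_str)
            if idx != -1:
                self._cur = ''
                text = text[idx:]
        if self._cur is not None:
            # Recording: accumulate until the end marker appears
            end = text.find(self.end_str)
            if end != -1:
                self._cur += text[:end + len(self.end_str)]
                self.matches.append(self._cur)  # one elem per match
                self._cur = None
            else:
                self._cur += text
        return line  # pass data through unchanged
```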
ch_root, repl, map: Use new maps.col_info() instead of parsing col name manually. This allows maps with prefixes containing ":" to be supported, without the ":" being misinterpreted as the label-root separator.
maps.py: Added col_info() to get label, root, prefixes from col_name. Added col_formats() for use by combinable(). Use new col_formats() in combinable(). Removed no longer needed col_label().
input.Makefile: Use with_cat instead of with_cat_csv for XML sources
Renamed inputs/XAL/src/digir.xml.make to digir.specimens.xml.make so it would generate an output file with the proper table name
bin/map: Support concatenated XML documents for XML inputs
bin/map: Merged XML inputs with and without a map into the in_is_xml section
digir_client: Output profiling information
Added inputs/XAL/src/digir.xml.make
digir_client: Import http to take advantage of httplib modifications to deal with IncompleteRead errors
Added http.py with httplib modifications to deal with IncompleteRead errors
digir_client: Fixed bug where chunk size was being adjusted even if count == None (indicating no determinable last chunk), causing a type mismatch between None and the integer total
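The last-chunk logic that this fix (and the related last-chunk fix below) implies can be sketched as follows (names and signature are hypothetical; digir_client's actual code differs):

```python
def next_chunk_size(chunk_size, start, total):
    """Number of records to request in the next chunk (illustrative
    sketch of the digir_client behavior).

    total is the known match count, or None when the count is not
    determinable. Only shrink the final chunk when total is known;
    when total is None, subtracting from it would mix None and int,
    which is exactly the type mismatch the fix avoids.
    """
    if total is None:
        return chunk_size
    remaining = total - start
    return max(0, min(chunk_size, remaining))
```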
input.Makefile: Removed no longer needed "ifneq ($(wildcard test/),)" guard around Testing section because all inputs now have a test subdir
Added inputs/XAL
digir_client: Made chunk_size a configurable env var. Removed schema env var because schema is always the same for DiGIR (can be different for TAPIR). Make sure output ends in a newline so that consecutive XML documents are on different lines.
digir_client: Fixed bug where chunk_size records would always be retrieved even in the last chunk, which ignored any manual count the user might have set via the "n" option
digir_client: Repeatedly retrieve data in chunks. Provide match count. Added section comments.
xpath.py: Added get_value() to run get_1() and return the value of any result node
xml_dom.py: Added parse_str()
digir_client: Use new streams.copy() to copy returned data to stdout
streams.py: Added copy(). Added section comment for traced streams.
digir_client: Label debugging output
streams.py: Renamed LineCountOutputStream to LineCountStream since TracedStream now works on both input and output streams
digir_client: Capture diagnostics for later use in determining next start/count values
streams.py: Added CaptureStream to wrap a stream, capturing matching text. Renamed TracedOutputStream to TracedStream and made it work on both input and output streams. Made TracedStream inherit from WrapStream so that close() would be forwarded properly.
bin/map: Changed XML input prefix handling to prepend prefix directly to XPath instead of separating it from the XPath with a "/". Changed get_with_prefix() to use new strings.with_prefixes().
strings.py: Added with_prefixes()
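Given that the prefix is now prepended directly to the XPath (no "/" separator), with_prefixes() plausibly looks something like the following (a guess at the interface; the real strings.with_prefixes() may behave differently):

```python
def with_prefixes(prefixes, str_):
    """Return str_ prepended with each prefix, one result per prefix
    (hypothetical sketch of strings.with_prefixes()).

    With no prefixes, the string is returned unchanged as the sole
    element, so callers can always iterate over the result.
    """
    if not prefixes:
        return [str_]
    # Prefix is concatenated directly, with no separator inserted
    return [prefix + str_ for prefix in prefixes]
```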
digir_client: Made schema customizable
digir_client: Set header sendTime, source dynamically. In debug mode, print the request XML.
Added local_ip to get local IP address
bin/map: Added prefixes support for XML inputs
digir_client: Filter by darwin:Kingdom=PLANTAE because presumably all records will have this. Don't debug-print URL.
Added initial bin/digir_client
Renamed timeout.py to timeouts.py. Renamed timeout_ vars to timeout.
opts.py: get_env_var(): default defaults to None
inputs/SpeciesLink: Accepted test outputs for new TAPIR download
bin/tapir/tapir2flat.php: Output to specieslink.specimens.csv instead of specieslink.txt so that the output file can be used right away without renaming
inputs/REMIB/src/nodes.make: Stop after a configurable # of empty responses (indicating no more nodes), instead of at a preset node ID, because there seem to be many more nodes than are listed on the web form
input.Makefile: import/rotate: Add "." before the date
input.Makefile: Added targets for editing import: import/rotate, import/rm
bin/tapir/tapir2flat.php: Fixed XML parsing to strip control chars so DOMDocument::loadXML() wouldn't complain about "PCDATA invalid Char value 8 in Entity", etc.
main Makefile: php-Darwin: Added instruction to set PHPRC if needed
Added inputs/SpeciesLink/src/tapir.make
input.Makefile: `src/%: src/%.make`: Don't tee recipe's stderr to make's stderr, because long-running make_scripts usually will be tracked using `tail -f`
input.Makefile: `src/%: src/%.make`: Name the log file using the make_script name instead of the output file name
cat_csv: If dialect == None, ignore that file because it's empty
csvs.py: stream_info(): If header_line == '', set dialect to None rather than trying (and failing) to auto-detect it
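The empty-header guard can be sketched with the standard library's csv.Sniffer (an illustration of the idea; the real csvs.stream_info() returns its own info object and handles more cases):

```python
import csv

def stream_info(stream):
    """Read a stream's header line and detect its CSV dialect
    (hypothetical sketch of csvs.stream_info()).

    If the header line is '' (an empty file), set dialect to None
    instead of passing the empty string to the Sniffer, which would
    fail to auto-detect a dialect.
    """
    header_line = stream.readline()
    dialect = None
    if header_line != '':
        dialect = csv.Sniffer().sniff(header_line)
    return header_line, dialect
```

Downstream code (such as cat_csv) can then treat `dialect is None` as "empty file, skip it".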
input.Makefile: Use new sort_filenames to put multiple numbered sources in the correct order, dealing correctly with embedded numbers that aren't padded with leading zeros
Added sort_filenames to sort a list of filenames, comparing embedded numbers numerically instead of lexicographically
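The numeric-aware comparison can be sketched as follows (a minimal illustration of the technique; the actual sort_filenames script is a command-line tool whose interface differs):

```python
import re

def sort_filenames(names):
    """Sort filenames so embedded numbers compare numerically instead
    of lexicographically (hypothetical sketch of sort_filenames).

    The key splits each name into alternating non-digit and digit
    runs, converting digit runs to ints, so 'f2' sorts before 'f10'
    even without leading-zero padding.
    """
    def key(name):
        return [int(part) if part.isdigit() else part
                for part in re.split(r'(\d+)', name)]
    return sorted(names, key=key)
```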
schemas/postgresql.conf: Decreased shared_buffers again because 4000MB wasn't sufficiently below the 4GB SHMMAX
schemas/postgresql.conf: Expressed shared_buffers in MB, since decimal GB doesn't seem to work anymore on 9.1
schemas/postgresql.conf: Decreased shared_buffers to 3.9GB, slightly less than SHMMAX
schemas/postgresql.conf: Optimized again using same changes as were applied to 8.4 version
schemas/postgresql.conf: Replaced with original 9.1 version
schemas/postgresql.conf: Optimized using settings analogous to those in postgresql.nimoy.conf
inputs/REMIB/src/nodes.make: Don't abort the entire import on an empty response, because an empty response is also returned for nodes that are temporarily down, not just nodes that don't exist (assumed to be those after the highest-numbered node). Instead, stop the import after 150 nodes if the user did not specify an explicit # of nodes.
inputs/REMIB/src/nodes.make: Abort prefix on empty response using break, rather than just done = True, to avoid running any more code except the finally block. Moved metadata row validation outside metadata row retrieval try-except block.
inputs/REMIB/src/nodes.make: If a read times out, abort the entire node rather than just the prefix to avoid waiting 20 sec for each of 26*26 prefixes
profiling.py ItersProfiler, exc.py ExPercentTracker: Only output fraction of rows with errors if self.iter_ct > 0, to avoid divide-by-zero error
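The guard amounts to the following (names are hypothetical; ItersProfiler and ExPercentTracker wrap this check in their status-message output):

```python
def error_fraction(error_ct, iter_ct):
    """Fraction of processed rows that had errors, or None when no
    rows were processed (sketch of the divide-by-zero guard).

    Reporting only when iter_ct > 0 avoids ZeroDivisionError on an
    empty input.
    """
    if iter_ct > 0:
        return error_ct / iter_ct
    return None
```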
inputs/REMIB/src/nodes.make: Fixed bug where row count was output in the middle of the row processing code, instead of after the first row is processed and the row count incremented. This removes "Processed 0 row(s)" messages at the beginning of every prefix.
inputs/REMIB/src/nodes.make: Support custom starting node ID and # nodes processed via env vars
Renamed inputs/REMIB/src/nodes.all.0.header.specimens.csv to node.0.header.specimens.csv so it would sort correctly with the new output file names
Renamed inputs/REMIB/src/nodes.all.specimens.csv.make to inputs/REMIB/src/nodes.make since it will no longer be used to generate nodes.all.specimens.csv. It can still be used with the `src/%.make` make target, but will generate a dummy empty output file "nodes".
inputs/REMIB/src/nodes.all.specimens.csv.make: Write each node to a separate output file
inputs/REMIB/src/nodes.all.specimens.csv.make: Raise InputException instead of AssertionError if invalid metadata row, so that it will be caught and printed instead of aborting the program
inputs/REMIB/src/nodes.all.specimens.csv.make: Moved header reading code inside TimeoutException try-except block since read sometimes times out before the header is even read
schemas/postgresql.nimoy.conf: Increased shared_buffers to 1.5GB since kernel.shmmax has been increased to 2GB
Renamed inputs/REMIB/src/remib_raw.0.header.specimens.txt to nodes.all.0.header.specimens.csv
inputs/REMIB/src/nodes.all.specimens.csv.make: Increased read timeout
inputs/REMIB/src/nodes.all.specimens.csv.make: Timeout stuck reads because sometimes nodes are offline, etc.
exc.py: str_(): Strip trailing whitespace. print_ex(): Since str_() now strips trailing whitespace, strings.ensure_newl() is no longer necessary.
streams.py: Added TimeoutInputStream and WrapStream. Changed StreamIter to use new WrapStream.
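A thread-based sketch of the TimeoutInputStream idea (an assumption about its mechanism; the real implementation relied on the timeout module added alongside it, and its internals differ): a background thread performs the blocking read while the caller waits on a queue with a deadline, so a stuck node can't hang the import.

```python
import queue
import threading

class TimeoutInputStream:
    """Wrap a stream so reads fail with TimeoutError if no data
    arrives within `timeout` seconds (hypothetical sketch of
    streams.TimeoutInputStream).
    """
    def __init__(self, stream, timeout):
        self.stream = stream
        self.timeout = timeout

    def readline(self):
        result = queue.Queue()
        # Daemon thread does the blocking read so the caller can
        # give up without being stuck in the read itself
        thread = threading.Thread(
            target=lambda: result.put(self.stream.readline()),
            daemon=True)
        thread.start()
        try:
            return result.get(timeout=self.timeout)
        except queue.Empty:
            raise TimeoutError('read timed out')
```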
Added timeout.py
inputs/REMIB/src/nodes.all.specimens.csv.make: Download from all prefixes of all nodes. Stop when a node produces an empty response (not even an error), which indicates no more nodes. Changed status messages.
input.Makefile: `src/%: src/%.make`: Append stderr to log file
Added inputs/REMIB/src/nodes.all.specimens.csv.make to download REMIB data for all nodes
Added streams.py for I/O, which contains StreamIter, TracedOutputStream, and LineCountOutputStream
term.py: Added clear_line. Corrected file comment.
Makefiles: Let subdir's Makefile decide whether to delete on error