Reading COBOL Files

For sample data, we’re using data found here: http://wonder.cdc.gov/wonder/sci_data/codes/fips/type_txt/cntyxref.asp

The data files are in two ZIP archives: http://wonder.cdc.gov/wonder/sci_data/datasets/zipctyA.zip and http://wonder.cdc.gov/wonder/sci_data/datasets/zipctyB.zip

Each of these archives contains five large files. Files 1 through 9 have 2,310,000 rows of data each, plus a header; the 10th file has 2,037,944 rows of data plus a header.

The member names are zipcty1 through zipcty5 in the first archive and zipcty6 through zipcty10 in the second.

We’ll work with two small subsets in the sample directory.

Here are the two record layouts.

COUNTY CROSS-REFERENCE FILE - COBOL EXAMPLE

        BLOCK CONTAINS 0 RECORDS
        LABEL RECORDS ARE STANDARD
        RECORD CONTAINS 53 CHARACTERS
        RECORDING MODE IS F
        DATA RECORDS ARE
               COUNTY-CROSS-REFERENCE-RECORD.

    01  COUNTY-CROSS-REFERENCE-RECORD.
        05   ZIP-CODE                                 PIC X(05).
        05   UPDATE-KEY-NO                            PIC X(10).
        05   ZIP-ADD-ON-RANGE.
             10  ZIP-ADD-ON-LOW-NO.
                  15  ZIP-SECTOR-NO                   PIC X(02).
                  15  ZIP-SEGMENT-NO                  PIC X(02).
             10  ZIP-ADD-ON-HIGH-NO.
                  15  ZIP-SECTOR-NO                   PIC X(02).
                  15  ZIP-SEGMENT-NO                  PIC X(02).
        05   STATE-ABBREV                             PIC X(02).
        05   COUNTY-NO                                PIC X(03).
        05   COUNTY-NAME                              PIC X(25).

COPYRIGHT HEADER RECORD - COBOL EXAMPLE

           BLOCK CONTAINS 0 RECORDS
           LABEL RECORDS ARE STANDARD
           RECORD CONTAINS 53 CHARACTERS
           RECORDING MODE IS F
           DATA RECORDS ARE
               COPYRIGHT-HEADER RECORD.

      01  COPYRIGHT-HEADER-RECORD.
          05  FILLER                                     PIC  X(05).
          05  FILE-VERSION-YEAR                          PIC  X(02).
          05  FILE-VERSION-MONTH                         PIC  X(02).
          05  COPYRIGHT-SYMBOL                           PIC  X(11).
          05  TAPE-SEQUENCE-NO                           PIC  X(03).
          05  FILLER                                     PIC  X(30).

First Steps

The actual COBOL code for the schema is in sample/zipcty.cob. This file contains both record layouts, defined as two 01-level items in a single copybook.

When working with unknown files, we sometimes need to preview a raw dump of the records.

def raw_dump(sheet: Sheet) -> None:
    for row in sheet.rows():
        row.dump()

This is a handy expedient for debugging.
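
For example, we could peek at the first sample file like this. The snippet assumes schema_1 has already been built from the copybook, as shown in the main() function later in this section.

with COBOL_Text_File("sample/zipcty1") as wb:
    raw_dump(wb.sheet("").set_schema(schema_1))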

Builder Functions

As suggested in Using Stingray Reader, here are two builder functions. The header_builder() function creates a header object from the first row of each zipcty* file.

def header_builder(row: Row) -> dict[str, Any]:
    return {
        "file_version_year": row["FILE-VERSION-YEAR"].value(),
        "file_version_month": row["FILE-VERSION-MONTH"].value(),
        "copyright_symbol": row["COPYRIGHT-SYMBOL"].value(),
        "tape_sequence_no": row["TAPE-SEQUENCE-NO"].value(),
    }

The detail_builder() function creates a detail object from the subsequent rows of each zipcty* file.

Because the names within the COBOL layout are not unique at the bottom-most element level, we must use path names. The default path names include all levels of the DDE. More clever path name components might be useful here.

COBOL uses an OF clause (for example, ZIP-SECTOR-NO OF ZIP-ADD-ON-LOW-NO) to work up the hierarchy until the name is unique.

Maybe we could build a fluent interface along the same lines: schema['ZIP-SECTOR-NO'].of('ZIP-ADD-ON-LOW-NO').
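
Here is a rough sketch of that lookup idea, written over plain nested dictionaries rather than the Stingray Reader row or schema API; the of() helper and its behavior are purely illustrative.

from typing import Any

def of(record: dict[str, Any], name: str, parent: str) -> Any:
    """Find ``name`` inside the group named ``parent``, mimicking COBOL's OF qualification."""
    for key, value in record.items():
        if isinstance(value, dict):
            if key == parent and name in value:
                return value[name]
            found = of(value, name, parent)
            if found is not None:
                return found
    return None

# of(nested_row, "ZIP-SECTOR-NO", "ZIP-ADD-ON-LOW-NO") would mirror the COBOL
# reference ZIP-SECTOR-NO OF ZIP-ADD-ON-LOW-NO.

For now, the detail_builder() function spells out the full paths explicitly.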

def detail_builder(row: Row) -> dict[str, Any]:
    return {
        "zip_code": row["ZIP-CODE"].value(),
        "update_key_no": row["UPDATE-KEY-NO"].value(),
        "low_sector": row["ZIP-ADD-ON-RANGE"]["ZIP-ADD-ON-LOW-NO"][
            "ZIP-SECTOR-NO"
        ].value(),
        "low_segment": row["ZIP-ADD-ON-RANGE"]["ZIP-ADD-ON-LOW-NO"][
            "ZIP-SEGMENT-NO"
        ].value(),
        "high_sector": row["ZIP-ADD-ON-RANGE"]["ZIP-ADD-ON-HIGH-NO"][
            "ZIP-SECTOR-NO"
        ].value(),
        "high_segment": row["ZIP-ADD-ON-RANGE"]["ZIP-ADD-ON-HIGH-NO"][
            "ZIP-SEGMENT-NO"
        ].value(),
        "state_abbrev": row["STATE-ABBREV"].value(),
        "county_no": row["COUNTY-NO"].value(),
        "county_name": row["COUNTY-NAME"].value(),
    }

Sheet Processing

Here’s the process_sheet() function, which applies the builders to the rows of each sheet. For now, all it does is print each object that was built.

Note that we’ve transformed the schema from a simple, flat list into a dictionary keyed by field name. For COBOL processing, this is essential, since the numeric order of the fields often isn’t meaningful.

Also note that we’ve put two versions of each name into the schema dictionary.

  • The lowest level name.

  • The entire path down to the lowest level name.

[For spreadsheets, where columns are numbered, the positional information may be useful.]

def process_sheet(sheet: Sheet, schema_1: Schema, schema_2: Schema) -> Counter:
    counts = Counter()
    row_iter = sheet.rows()
    sheet.set_schema(schema_2)  # the first row is the copyright header
    row = next(row_iter)
    try:
        header = header_builder(row)
    except KeyError as e:
        print(repr(e))
        row.dump()
        raise
    print(header)

    sheet.set_schema(schema_1)  # the remaining rows are county cross-reference details
    for row in row_iter:
        data = detail_builder(row)
        print(data)
        counts["read"] += 1
    return counts

Top-Level Script

The top-level script must do two things:

  1. Parse the "zipcty.cob" data definition to create the two schemas.

  2. Open each data file as a COBOL_Text_File. This presumes the file is all character data (no COMP-3 fields) and has already been translated into ASCII.

    The process_sheet() function is then applied to each file.

Here’s a function to parse arguments.

def parse_args(argv: list[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("file", type=Path, nargs="+")
    parser.add_argument("-s", "--schema", type=Path, required=True)
    parser.add_argument("-d", "--dry-run", default=False, action="store_true")
    parser.add_argument(
        "-v",
        "--verbose",
        dest="verbosity",
        default=logging.INFO,
        action="store_const",
        const=logging.DEBUG,
    )
    return parser.parse_args(argv)
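
As a quick sanity check (not part of the demo script), the parser produces a namespace like this:

args = parse_args(["--schema", "sample/zipcty.cob", "sample/zipcty1", "sample/zipcty2"])
# args.schema    -> Path('sample/zipcty.cob')
# args.file      -> [Path('sample/zipcty1'), Path('sample/zipcty2')]
# args.dry_run   -> False
# args.verbosity -> logging.INFO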

Given this function to parse the command-line arguments, the main() function looks like this:

def main(argv: list[str] = sys.argv[1:]) -> None:
    logger = logging.getLogger(__name__)
    args = parse_args(argv)
    logger.setLevel(args.verbosity)

    schema = args.schema
    with schema.open() as cobol:
        parser = schema_iter(cobol)
        json_schema_1 = next(parser)  # COUNTY-CROSS-REFERENCE-RECORD
        json_schema_2 = next(parser)  # COPYRIGHT-HEADER-RECORD
        logger.debug(pformat(json_schema_1, sort_dicts=False))
        logger.debug(pformat(json_schema_2, sort_dicts=False))
        schema_1 = SchemaMaker().from_json(json_schema_1)
        schema_2 = SchemaMaker().from_json(json_schema_2)

    for filename in args.file:
        with COBOL_Text_File(filename) as wb:
            sheet = wb.sheet("")
            # raw_dump(sheet.set_schema(schema_1))
            counts = process_sheet(sheet, schema_1, schema_2)
            logger.info(pformat(counts))
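
The demo script presumably finishes with a standard guard along these lines, which is what produces the INFO:__main__ lines in the output below; the exact wording in demo/cobol_reader.py may differ.

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stderr, level=logging.INFO)
    main()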

Running the Demo

We can run this program like this:

python demo/cobol_reader.py --schema sample/zipcty.cob sample/zipcty[1-2]

The output looks like this.

{'file_version_year': '88', 'file_version_month': '09', 'copyright_symbol': ' (C)USPS   ', 'tape_sequence_no': '001'}
{'zip_code': '00401', 'update_key_no': '0000000001', 'low_sector': '00', 'low_segment': '01', 'high_sector': '00', 'high_segment': '01', 'state_abbrev': 'NY', 'county_no': '119', 'county_name': 'WESTCHESTER              '}
{'zip_code': '02186', 'update_key_no': '0000462001', 'low_sector': '52', 'low_segment': '66', 'high_sector': '52', 'high_segment': '66', 'state_abbrev': 'MA', 'county_no': '021', 'county_name': 'NORFOLK                  '}
{'zip_code': '06111', 'update_key_no': '0000924001', 'low_sector': '49', 'low_segment': '01', 'high_sector': '49', 'high_segment': '01', 'state_abbrev': 'CT', 'county_no': '003', 'county_name': 'HARTFORD                 '}
{'zip_code': '07901', 'update_key_no': '0001386001', 'low_sector': '22', 'low_segment': '08', 'high_sector': '22', 'high_segment': '08', 'state_abbrev': 'NJ', 'county_no': '039', 'county_name': 'UNION                    '}
{'zip_code': '10463', 'update_key_no': '0001848001', 'low_sector': '17', 'low_segment': '05', 'high_sector': '17', 'high_segment': '05', 'state_abbrev': 'NY', 'county_no': '005', 'county_name': 'BRONX                    '}
INFO:__main__:Counter({'read': 5})
{'file_version_year': '88', 'file_version_month': '09', 'copyright_symbol': ' (C)USPS   ', 'tape_sequence_no': '002'}
{'zip_code': '11789', 'update_key_no': '0002310001', 'low_sector': '25', 'low_segment': '43', 'high_sector': '25', 'high_segment': '43', 'state_abbrev': 'NY', 'county_no': '103', 'county_name': 'SUFFOLK                  '}
{'zip_code': '14767', 'update_key_no': '0002772001', 'low_sector': '97', 'low_segment': '71', 'high_sector': '97', 'high_segment': '71', 'state_abbrev': 'NY', 'county_no': '013', 'county_name': 'CHAUTAUQUA               '}
{'zip_code': '17201', 'update_key_no': '0003234001', 'low_sector': '90', 'low_segment': '33', 'high_sector': '90', 'high_segment': '33', 'state_abbrev': 'PA', 'county_no': '055', 'county_name': 'FRANKLIN                 '}
{'zip_code': '19438', 'update_key_no': '0003696001', 'low_sector': '28', 'low_segment': '22', 'high_sector': '28', 'high_segment': '22', 'state_abbrev': 'PA', 'county_no': '091', 'county_name': 'MONTGOMERY               '}
{'zip_code': '21740', 'update_key_no': '0004158001', 'low_sector': '53', 'low_segment': '05', 'high_sector': '53', 'high_segment': '05', 'state_abbrev': 'MD', 'county_no': '043', 'county_name': 'WASHINGTON               '}

Working with Archives

We don’t need to unpack the archives to work with files inside them. We can open a ZipFile member and process that. This can be a helpful optimization when small extracts are pulled from ZIP archives.

The trick is this:

When we open the file with COBOL_Text_File(filename), we can pass the file object created by ZipFile.open() as the file_object argument.

It looks like this:

with COBOL_Text_File(filename, file_object=archive.open(filename)) as wb:
    ...

This uses the already-open file object rather than opening the given file name.
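
Here is a sketch of what the loop over one archive might look like, assuming the member names follow the zipcty1 through zipcty5 pattern described above and the schemas were built as in main():

from zipfile import ZipFile

with ZipFile("zipctyA.zip") as archive:
    for name in archive.namelist():
        with COBOL_Text_File(name, file_object=archive.open(name)) as wb:
            counts = process_sheet(wb.sheet(""), schema_1, schema_2)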