[TNO-logo]

File Input and Output in SCOPE 3.4 and NOS/BE

In SCOPE 3.4 and NOS/BE systems, nearly all input and output from user programs was done through local files. A local file could reside on a disk drive, a tape drive, or an interactive terminal.

Local filenames

Local filenames could be 1-7 alphanumeric characters long; the first character had to be alphabetic. Famed CDC systems programmer G. R. Mansfield wrote a brilliant segment of CPU assembly language, about 12 instructions long, that tested a 60-bit word to see whether it contained a valid local filename. Operating system convention required local filenames to be left-justified and zero-filled. Mansfield's code depended heavily upon the Display Code character set.
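
Mansfield's routine itself is not reproduced here, but the check it performed can be sketched in C. The sketch below assumes only what is described above (up to seven 6-bit characters, left-justified in the upper 42 bits of the word and zero-filled below) plus the Display Code values for letters (01-32 octal) and digits (33-44 octal); the function name and types are illustrative, and the real routine was of course COMPASS, presumably exploiting the contiguous Display Code ranges for letters and digits.

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch only, not Mansfield's code: test whether the low 60 bits of
       'word' hold a valid local filename, left-justified and zero-filled. */
    static bool is_valid_lfn(uint64_t word)
    {
        bool fill = false;                        /* seen a zero-fill character yet? */

        for (int pos = 0; pos < 7; pos++) {
            /* character 0 sits in bits 59..54, character 6 in bits 23..18 */
            unsigned c = (unsigned)((word >> (54 - 6 * pos)) & 077);

            if (c == 0) {                         /* zero fill */
                if (pos == 0)
                    return false;                 /* empty name is not valid */
                fill = true;
                continue;
            }
            if (fill)
                return false;                     /* a character after zero fill */

            bool alpha = (c >= 001 && c <= 032);  /* A..Z in Display Code */
            bool digit = (c >= 033 && c <= 044);  /* 0..9 in Display Code */
            if (pos == 0 ? !alpha : !(alpha || digit))
                return false;
        }
        return true;
    }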

Local filenames were unique only within a job; there could be many jobs with identically-named files that were completely unrelated. In fact, it was difficult for jobs to share files; see Permanent Files.

Permanent filenames

Permanent files could only be accessed through local files. Either a local file name (lfn) of up to 7 characters was explicitly associated with the permanent file name (pfn), or, if no lfn was given, the first (up to) 7 characters of the permanent file name became the local file name:
ATTACH,[lfn,]pfn,ID=username.

File tables

User programs maintained data structures named File Environment Tables (FETs) through which they issued I/O requests to the SCOPE or NOS/BE operating system (OS).

Non-disk files

Most I/O was, of course, to disk files. This included I/O nominally from a card reader or to a printer. In SCOPE and its successor, NOS/BE, only the OS could actually do I/O to these devices. When a card deck was read in, the operating system created a disk file containing the contents of the cards. The program responsible for controlling I/O to printers/plotters and from card readers was JANUS, together with the PP programs 1IR and 1IQ. When the resulting user job ran, the file was available as a local file named INPUT. Special cards signalled End-of-Record (with a level number) and End-of-File status.

Printer output was by default written to a file named OUTPUT, which was printed at job termination. Similarly, the file PUNCH (not used at TNO), if it existed, was sent to a card punch. This was because these 'pre-defined' files had a special "disposition" and were always placed on a queue device.
Other files could be submitted as a new batch job, printed, plotted or previewed by giving them the appropriate "disposition", usually via the DISPOSE or ROUTE control statements. However, such local files had to be assigned to a queue device beforehand:
REQUEST,local_filename,*Q.
This was unnecessary if all disk packs for user files had the Q-property; in that case the user did not need to bother with pre-assigning the disk selection.

A local file could be created and associated with a reel of tape on a tape drive via the REQUEST statement. (REQUEST,lfn,NT6250,VSN=tapeno,RING.)

Files could be associated with the user's terminal via the CONNECT statement. This only worked from interactive jobs, of course, and it only applied to that job's terminal. There were different ways of "connecting" a file, depending upon the character set: Display Code, i.e. the 63/64-character set (connect mode 0), the ASCII 95-character set (connect mode 1), or byte output (connect mode 2).

Normally, only brand-new files were connected, but a trivial anomaly of the implementation was that an existing disk file could be connected. In that case, the contents of the disk file would be unavailable until the file was disconnected.

Circular I/O

All file I/O was accomplished through CIO (Circular I/O) requests, which were handled by a Peripheral Processor (PP) program. CIO used circular buffers in which data transfers could wrap from the last word of a buffer to the first word. The user job and the OS together kept track of buffer information through four 18-bit fields in the File Environment Table in the user's field length:

FIRST pointed to the first address of the file's buffer (in the user's address space, aka field length).
LIMIT was the last word address + 1 of the buffer. The length of the buffer was not stated explicitly, as it was slightly more efficient to check for the end of the buffer by comparing a pointer to the contents of LIMIT.
IN was the address in the buffer of the next location into which data would be placed. In the case of an input request, this would be the OS placing data into the buffer from a file. In the case of output, this would be the next place that the user program would put data to be written to a file.
OUT was the opposite of IN: the address in the buffer of the next valid location in the buffer containing data to be processed. In the case of an input request, this would be the user job retrieving data recently placed there by the OS. In the case of an output request, this would be the OS removing data from the buffer in order to write it to a file.

If IN == OUT, then the buffer was empty. Since a completely full buffer would also have IN == OUT, IN was never allowed to catch up with OUT, so the effective size of the buffer was one word less than the number of words in the buffer. Believe it or not, this bothered me: memory was tight in those days!
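
As a conceptual illustration, the four fields and the emptiness test can be sketched in C. The real FET packed these values as 18-bit fields within 60-bit words, so the structure, types and names below are only a model:

    #include <stddef.h>

    /* Illustrative model only: the four buffer pointers kept in the FET. */
    struct fet_buf {
        size_t first;   /* first word address of the buffer */
        size_t limit;   /* last word address + 1            */
        size_t in;      /* next slot to be filled           */
        size_t out;     /* next slot with unprocessed data  */
    };

    /* The buffer is empty exactly when IN == OUT. */
    static int buf_empty(const struct fet_buf *f)
    {
        return f->in == f->out;
    }

    /* Words currently waiting to be consumed.  Because IN == OUT means
       "empty", at most (limit - first) - 1 words can ever be in use. */
    static size_t buf_count(const struct fet_buf *f)
    {
        size_t size = f->limit - f->first;
        return (f->in + size - f->out) % size;
    }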

The use of circular I/O allowed a job to issue an I/O request before it had completely finished processing the previous request. It also allowed a single I/O request to transfer more data than the size of the buffer. This was possible because the user job, for instance, could be processing data and updating the OUT pointer while the OS was placing data into the buffer from a file. As long as neither side caught up to the other, a single I/O request could go on and on for several buffers' worth of data. This so-called 'pointer chasing' made full use of the CDC system architecture. Since I/O was performed by PP programs and user jobs executed in the CPU, it was in fact quite feasible for more than one buffer's worth of data to be transferred in a single request.
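
The wraparound itself can be sketched the same way. The two routines below show one producer step and one consumer step; in reality the producer side was a PP program updating IN through the FET while the consumer side ran in the CPU (or the other way around for output), so the names, types and the mem array standing in for the user's field length are all illustrative:

    #include <stdint.h>
    #include <stddef.h>

    /* Same illustrative structure as above, repeated so this sketch stands alone. */
    struct fet_buf {
        size_t first, limit, in, out;   /* word offsets within the field length */
    };

    /* Producer step: e.g. a PP delivering one word of a read request at IN. */
    static int put_word(struct fet_buf *f, uint64_t *mem, uint64_t word)
    {
        size_t next = (f->in + 1 == f->limit) ? f->first : f->in + 1;
        if (next == f->out)
            return 0;                   /* full: the PP waits for the user job */
        mem[f->in] = word;
        f->in = next;                   /* advance IN, wrapping at LIMIT */
        return 1;
    }

    /* Consumer step: e.g. the user job taking one word out at OUT. */
    static int get_word(struct fet_buf *f, const uint64_t *mem, uint64_t *word)
    {
        if (f->in == f->out)
            return 0;                   /* empty: the job waits for the PP */
        *word = mem[f->out];
        f->out = (f->out + 1 == f->limit) ? f->first : f->out + 1;
        return 1;
    }

As long as neither step blocks for good, repeated calls on both sides move far more data through the buffer than it can hold at any one moment, which is exactly the pointer-chasing behaviour described above.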

 

(with special thanks to Mark Riordan, who provided the basis for this page)


[email protected]
25/02/1998

Museum Homepage