[Info-vax] File I/O BandWidth Versus Disk I/O Bandwidth
Lawrence D'Oliveiro
ldo at nz.invalid
Sun Jan 14 19:10:52 EST 2024
This book I’m looking at on filesystem design mentions the paper by
McKusick, Joy, Leffler and Fabry in the August 1984 “Communications of the
ACM” on the Berkeley Fast File System (FFS, which later became more widely
known as UFS).
This was a breakthrough, at least in the Unix world at the time, because
the previous filesystem could use only 3-5% of the available disk
bandwidth, while FFS raised this to more like 47%.
Back then, other OSes (like VMS) did not try to hide from applications the
fact that file space was allocated in units of sectors (or some multiple
thereof). Whereas Unix pioneered the idea that, if an application wrote
975 bytes to a file, it would read back exactly 975 bytes, not 1024 bytes
(or some even larger amount).
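You can still see this byte-granular abstraction on any modern Unix: the
logical file size is exactly what was written, while the space actually
allocated underneath is rounded up to whole blocks. A minimal sketch in
Python (assuming a POSIX system where os.stat reports st_blocks in
512-byte units):

```python
import os
import tempfile

# Write exactly 975 bytes to a scratch file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 975)
    path = f.name

st = os.stat(path)
print(st.st_size)          # 975 -- exactly what was written, no rounding
print(st.st_blocks * 512)  # space actually allocated, a multiple of the block size

# The allocation is at least as large as the logical size.
assert st.st_blocks * 512 >= st.st_size

os.remove(path)
```

The gap between st_size and st_blocks * 512 is precisely the
sector/cluster rounding that VMS and similar systems exposed to
applications and Unix chose to hide.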
Were these other non-Unix OSes making more efficient use of disk I/O
bandwidth than Unix, at the time? Was the abstraction away from whole
sectors/clusters really that costly, at least to begin with?