Computer Basics for Librarians and Information Scientists

Catherine Pellegrino at Saint Mary’s College Library (in Notre Dame, Indiana) weeded this book while working through QA76. She noted that on FriendFeed; I said “Might be interesting to read that book as early library automation history,” and she sent it to me.

I finally got around to reading it. Well, reading part of it and skimming the rest. It’s from 1981, by Howard Fosdick. It really doesn’t say much about library automation; it’s mostly a consideration of very basic aspects of computers, things that I really wouldn’t have thought most librarians needed to understand even in 1981. (For example, whether a language compiler is part of systems software, or exactly how long it takes to read a record from a 1600 bpi tape.)
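(To give a flavor of the kind of calculation the book dwells on, here’s a back-of-the-envelope sketch. The drive speed, gap size, and record size below are my assumptions for illustration, not figures from the book.)

    # Rough time to read one record from a 1600 bpi nine-track tape.
    # Assumed (not from the book): a 125 inches-per-second drive
    # and a 2,000-byte record.
    BPI = 1600         # recording density: frames (bytes) per inch
    IPS = 125          # tape speed, inches per second
    GAP_INCHES = 0.6   # nominal inter-block gap at 1600 bpi

    record_bytes = 2000
    data_time = record_bytes / (BPI * IPS)   # seconds to pass the data
    gap_time = GAP_INCHES / IPS              # seconds to pass the gap
    print(round((data_time + gap_time) * 1000, 1), "ms")  # 14.8 ms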

And, after skimming it, I wondered: Was it really as primitive in 1981 as it seems, based on this book?

I was there

Not only was I involved in library automation in 1981, I’d already been involved in it for more than a decade. At that point, I’d been at RLG for two years; my possibly-flawed recollection is that by 1981 I’d just about finished (or fully finished) the design and programming of the product batch system supporting RLIN II, RLG’s full-fledged cataloging network system (based on SPIRES).

It strikes me that, by 1981, I didn’t really have to worry about whether I could use PL/I because it took a full 164K of RAM while some less powerful languages only needed 120K. I know for sure I still spent a lot of time optimizing program operation at that point, but not, I think, at the levels suggested in this book.

OK, that’s probably not fair. RLG, and UC Berkeley before it, had much stronger computing environments than most libraries would have access to. Still… I developed the first working version of the Serials Key Word System in 1973, eight years before 1981, in PL/I (and wrote about it in my first published article, in the March 1976 Journal of Library Automation). And, you know, that Serials Key Word System used full MARC II as an input format.

Were computers still using core memory in 1981? I suppose it’s possible for mainframes; I’m certain the Datapoint multiterminal data entry system (based on a Z80 CPU with 128K RAM, developed in the mid-1970s; I wrote the time-sharing environment, building on a highly sophisticated OS with direct database support built in) didn’t use core memory!

Not missing the good old days

Admittedly, I remember 1981 as being a little more advanced than this book seems to portray (although the author does view PL/I as the best language for library automation, which I’m pretty certain was true for the time). But that doesn’t mean I remember it with a lot of fondness.

Yes, it’s “wasteful” in some ways that today’s PCs spend 1GB+ of RAM just on the operating system, and probably most CPU cycles as well. But isn’t it wonderful that RAM and CPU power are both so cheap that we can afford to be “wasteful”? I’m guessing the two-year-old, low-priced notebook I’m using to write this is sitting mostly idle (I just opened Task Manager; yep, CPU usage is running 2% to 5% as I write this, occasionally spiking higher). And that’s fine with me. It means I can edit in high-res proportional type instead of 5×7-matrix fixed characters on an 80×25 green-on-black (or, if you’re lucky, amber-on-black) screen, and use about one-third the power for my whole two-screen system that the old CRT terminal used all by itself. All that wasted CPU power is saving me time: Whoopee.

That Intel Core 2 Duo CPU in my notebook is a little underpowered by 2010 standards: only two threads and a mere 1.66GHz. By 1981 standards? Were there any mainframes with that much computing power?

And, if you really want silly-season numbers, the 1981 book devotes an appendix to the IBM 3330 Reference Card. That’s a disk drive, hot stuff for its day. The 3336 Model 11 disk pack had a total capacity of 200 million characters (200 megabytes). I know the drive itself was huge; I don’t know how much a pack cost, but I’m guessing it wasn’t cheap.

I also remember much later, when RLG needed to add a terabyte of disk storage (probably in the late 1990s). That procurement process was a big and expensive deal–but who could imagine adding a terabyte of disk storage to a library automation facility in 1981?

Now? I could go pick up a 2TB disk drive for about $180 if I had a use for one. It would fit neatly next to my notebook. (I could probably get it cheaper than that by mail order.) Two terabytes. That’s how many 3336 Model 11 disk packs? Ten thousand of them, by my calculations.
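Here’s that arithmetic as a quick sketch, treating both capacities as decimal units (the book’s “200 million characters” is a decimal figure, and drive makers count terabytes the same way):

    # How many 200 MB disk packs equal one 2 TB drive?
    # Decimal units throughout: 1 TB = 10**12 bytes, 1 MB = 10**6 bytes.
    PACK_BYTES = 200 * 10**6     # one 3336 Model 11 pack: 200 million characters
    DRIVE_BYTES = 2 * 10**12     # one 2010-era 2 TB drive

    print(DRIVE_BYTES // PACK_BYTES)  # 10000 packs

Ten thousand packs, replaced by something that sits next to a notebook.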
