There’s a cute little item on engadget today, showing an IBM 305 RAMAC hard disk subsystem from 1956 being unloaded from a cargo plane.
This 1956 HDD was composed of 50 24-inch discs, stacked together and taking up 16 sq ft of real estate. The once-cutting-edge monstrosity was capable of commanding an annual fee of $35,000 and stored up to 5MB of data. Sure, by modern standards it’s a pretty modest capacity, but the RAMAC still weighed in at just shy of a ton.
The RAMAC was way before my time (as a programmer/analyst, although I was certainly alive in 1956). When some of us library nerd types tried to compare contemporary disk storage with “historic” disk storage, we looked at the first really widespread removable-cartridge drive, IBM’s Winchester (3340), a 1973 design for use with the System/370 that stored 35 or 70 megabytes. (It wasn’t the first removable drive–but it was the first whose data module was sealed, heads and platters together, so the module could be swapped in ordinary use without worrying about contamination.) The name Winchester–IBM’s code name during development–was supposedly coined because the original design had two removable 30-megabyte modules, thus 30-30, like the famous rifle. (For lots more on removable and other IBM mag disk drives, see Wikipedia–this is one of the kinds of things that source does exceptionally well.)
Ah, the comments
As is frequently the case, the comments on the engadget piece can be more interesting than the article itself. (I haven’t looked at the discussion page for the Wikipedia article, but I frequently find those more interesting than the articles as well).
Ones I found particularly interesting:
You would think that storing the data on punch cards would take much less physical space.
That’s where the “grandpa” comment came in–the first response to that odd comment is:
According to my grandfather who worked with punch cards. This was much, much better.
I used Hollerith cards (the IBM 80-column cards that most people mean by “punch cards,” although there are several other varieties as well) for my first library automation system–the Doe Library’s circulation system, installed in 1968 and used until some time after I left UC (apparently because IBM could no longer maintain the collator). I also prepared programs on Hollerith cards in Basic Assembler Language and PL/I for years in the 1970s, submitting the boxes of cards at the computer center to be compiled at UCSF, which had the closest IBM System/360 available at the time.
Yes, hard disk was much, much better–but it’s also true that 5MB worth of data on punch cards would take up less space. It would require roughly 65,000 cards, or something over 30 boxes of 2,000 cards each (assuming 80 bytes per card–that is, character-based storage). The problem, of course, is that retrieving the data you want from 30-odd boxes of cards is excruciatingly slow. The circ system only worked because I designed a keypunch algorithm that allowed the couple of hundred thousand cards representing circulating items to be sorted into absolute call number order (combining Berkeley’s five or more call number systems in use at the time), so that pages could look up individual books by hand. That was feasible; finding an individual book “by computer” wasn’t, at the time.
Another comment says “5MBs would have required 65,536 punched cards”–in other words, just under 33 boxes–which would be a stack of cards 12-18 meters high. That’s silly, of course: You’d never store cards in one vertical stack. You’d have a stack of boxes, and that stack would be manageable, but absurd for individual retrieval. (We had a U-shaped space in the circulation area with tub files–open card drawers–on two of the three walls, the equipment on the third. The set of tub files was the equivalent of, I think, 100 boxes.)
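If you want to check the arithmetic, here’s a quick back-of-the-envelope sketch in Python. The 2,000-cards-per-box figure is the one used above; the 0.007-inch card thickness is an assumed value for standard card stock, so treat the stack height as approximate:

```python
# Back-of-the-envelope: 5MB stored as characters on 80-column cards
CARD_BYTES = 80            # one character per column
CARDS_PER_BOX = 2000       # a standard box of punch cards
CARD_THICKNESS_MM = 0.178  # roughly 0.007 inches per card (assumed)

capacity_bytes = 5 * 1024 * 1024   # 5MB, counted in powers of 1,024

cards = capacity_bytes / CARD_BYTES
boxes = cards / CARDS_PER_BOX
stack_m = cards * CARD_THICKNESS_MM / 1000

print(f"cards: {cards:,.0f}  boxes: {boxes:.1f}  vertical stack: {stack_m:.1f} m")
# -> cards: 65,536  boxes: 32.8  vertical stack: 11.7 m
```

That stack height comes out right around the low end of the 12-18 meter range mentioned above.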
But then there’s this:
An 80 column punch card held 960 bytes, so this replaced over 5,000 cards.
Um…no. An 80-column punch card used for binary data would hold 960 bits (80×12), not bytes–that’s 120 bytes per card–so it would take around 44,000 cards to hold that much data in pure binary form (and the data would be even harder to retrieve, since an individual card would be meaningless on its own).
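The same sort of sketch, this time assuming binary punching across all 12 rows of all 80 columns:

```python
# Binary punching: 12 rows x 80 columns = 960 bits = 120 bytes per card
BINARY_CARD_BYTES = 960 // 8        # 120 bytes
capacity_bytes = 5 * 1024 * 1024    # 5MB in powers of 1,024

print(f"binary cards: {capacity_bytes / BINARY_CARD_BYTES:,.0f}")
# -> binary cards: 43,691
```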
The 30-year difference
The multiperson effort to compare the size and cost of today’s hard disks with those of the 1970s was not entirely successful, as I remember. Here’s a 30-year comparison, however, based on the published prices for later IBM disk drives.
The IBM 3380 began shipping in October 1981 and was apparently IBM’s first multi-gigabyte disk drive. A 2.52-gigabyte unit cost $81,000 and up.
If I drop by Fry’s Electronics I can pick up a name-brand external 3TB hard drive for $200; it would cost less, but prices are temporarily high because flooding has affected the factories of most hard drive manufacturers.
Let’s see. Assume that “2.52GB” uses the traditional powers-of-1,024 counting, so that capacity works out to roughly 2.7GB in decimal terms. You’d need about 1,100 3380s to offer the storage of one Seagate GoFlex external hard drive. Those 1,100 drives would cost a bit more than $89 million. I won’t even begin to speculate on the space required for those drives, the amount you’d pay for controllers to make them all work together, or the cost of power to run them.
Today’s external hard drive gives you 445,500 times the storage per dollar–and consumes almost no space or power. (The GoFlex has to be plugged in, I think, but there are certainly external drives that don’t cost much more and rely on the USB connection for power.)
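Here’s that comparison as a quick sketch, using the prices quoted above. The only assumption beyond the post’s own numbers is that the 3380’s 2.52GB is stated in powers of 1,024 while the 3TB drive uses decimal terabytes:

```python
# 1981 vs. 2011: IBM 3380 units vs. one 3TB external drive
IBM_3380_BYTES = 2.52 * 1024**3   # 2.52GB in powers of 1,024 (~2.7 decimal GB)
IBM_3380_PRICE = 81_000           # low-end 1981 price, in dollars
MODERN_BYTES = 3 * 1000**4        # 3TB, decimal
MODERN_PRICE = 200                # dollars

drives_needed = MODERN_BYTES / IBM_3380_BYTES
total_1981_cost = drives_needed * IBM_3380_PRICE
per_dollar_ratio = total_1981_cost / MODERN_PRICE

print(f"3380s needed: {drives_needed:,.0f}")                  # -> about 1,109
print(f"1981 cost: ${total_1981_cost:,.0f}")                  # -> about $89.8 million
print(f"storage-per-dollar ratio: {per_dollar_ratio:,.0f}x")  # -> roughly 449,000x
```

The 445,500 figure comes from rounding the drive count down to 1,100 before multiplying; the unrounded ratio is a bit higher, but either way the order of magnitude is the point.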
Do I miss Hollerith cards? Not so much. (Am I old enough to be a grandfather? Yes indeed. Am I procrastinating slightly on rewriting the second chapter of my next book? Well, yes, but I’ll go work on it now…)
Hat tip to Michael Sauers for tweeting about the engadget item.
“and consumes almost no space or power.”
That’s quite significant too. The power change factor between then and now seems to be even more impressive.
That’s a good point. Who would have guessed that you could run a one terabyte drive off “spare power” over a USB port? That can’t be more than 4.5 watts (the maximum allowed by the USB 3.0 spec, that is, 5 volts times 900mA) and is probably no more than 2.5 watts (USB 2.0’s limit of 5 volts times 500mA)–in either case, less than the “off” power our TV used to draw.
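Those wattage figures are just bus voltage times the per-port current limits–a trivial check, with the USB 2.0 limit included on the assumption that it’s the source of the 2.5-watt figure:

```python
# USB power budget: bus voltage times the per-port current limit
USB_VOLTS = 5.0
USB3_LIMIT_AMPS = 0.9   # 900mA per USB 3.0 port
USB2_LIMIT_AMPS = 0.5   # 500mA per USB 2.0 port

print(f"USB 3.0 max: {USB_VOLTS * USB3_LIMIT_AMPS} W")  # -> USB 3.0 max: 4.5 W
print(f"USB 2.0 max: {USB_VOLTS * USB2_LIMIT_AMPS} W")  # -> USB 2.0 max: 2.5 W
```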