Archive for the ‘Media’ Category

How much will you pay me to pay for a subscription?

Monday, March 19th, 2012

There was a curious item in today’s (USPS) mail–a note from Wired Magazine trying to coax me to renew my subscription (which expires in June, and which I believe I got for free using airline miles, then was extended because the Conde Nast magazine I actually wanted–their classy business mag–went under the same month I subscribed).

Oh, it’s not curious that they want me to renew. What’s curious is just how far they’re going. The offer: $8 for one year. $15 for two years. That’s $15 for 24 issues of a monthly magazine, which may or may not be enough to cover postage.

To me, that smacks of desperation. For that matter, the typical $10 renewal price for many slick large-subscription monthlies strikes me as a little desperate, but when you fall significantly below that threshold…

What’s happening here, I’m fairly certain, is that Conde Nast is anxious (desperate?) to retain its rate base, the guaranteed level of circulation used to set its ad rates, since it is ads that actually pay for Wired. And I can see the difficulty. The magazine’s media kit for the print magazine says “Paid/Verified Subscriptions: 715,749”–but the December 2011 required USPS magazine-rate information shows an average of 628,364 paid subscriptions for the past 12 issues, down to 621,059 for the most recent issue.

I suspect I’m missing a bunch of requested but not paid subscriptions, since I really don’t believe Wired’s print version has been bleeding subscribers that rapidly–but I can understand the desire not to lose any more.

As it happens, it won’t work. It doesn’t help that the issue I’ll read next (not the current issue–I’m a bit behind) has this as its primary cover text: “YOUR NEXT CAR WILL DRIVE ITSELF.” Not “some day, some of us may have self-driving cars” but rather typical Wired: metahyperbole, like hype-squared, a statement that’s so ludicrous as to be offensive.

I wonder just how low the magazine will go? $5 a year? $1? Send me a free iPad if I send them a $15 three-year print subscription? (I might go for that. I might not.)

Samir Husni, “Mr. Magazine,” has written about print magazines underselling their own value. I think his figure was $1/issue–that is, offering subscriptions for that little money, or less, indicates that a magazine’s publisher doesn’t place much value in the magazine.

It’s not a bad figure. Let’s say that going significantly below $1/issue doesn’t inspire confidence. What does that say about $15/24, that is, $0.625 per issue?

Understand: I’m a print magazine kind of guy. I take a bunch of them. I read them pretty much cover to cover (which may be why my to-be-read stack, between two and three months’ worth, is now completely filling the designated basket). I like some variety.

I’m keeping Fast Company (which offered me extensions at $10/year, but it’s a 10x/year “monthly” so that’s $1/issue). Lately, I find that Fast Company is almost as tendentious and assured of the single future as Wired always has been (“GenFlux” as the universal future, my nether regions), but in general it’s a little less zooey. Taking both strikes me as overload.


Closing the Digital Frontier

Monday, February 20th, 2012

One reason I’ve said there’s likely to be more activity in this blog in the future (and there has been!) is that I plan to post some pieces that are elements of likely future Cites & Insights composite sections–posted here in their raw form, subject to revision or deletion along the way. Not all of them, by any means, but some. Here’s an example. It’s also an example of something you’re likely to see a lot of in C&I this year and maybe next: Catching up, as I go through two years of tagging–and find that although the cited articles are “outdated,” they still speak to things that are happening now and might be worth commenting on.

So here’s a part of one section, assuming it stays. The title on the section is the same as on this post:

Closing the Digital Frontier

According to Michael Hirschorn’s article of that name, in the July/August 2010 Atlantic Magazine, the “era of the Web browser’s dominance is coming to a close.” Why? Because “things are changing all over again.”

The shift of the digital frontier from the Web, where the browser ruled supreme, to the smart phone, where the app and the pricing plan now hold sway, signals a radical shift from openness to a degree of closed-ness that would have been remarkable even before 1995. In the U.S., there are only three major cell-phone networks, a handful of smart-phone makers, and just one Apple, a company that has spent the entire Internet era fighting the idea of open (as anyone who has tried to move legally purchased digital downloads among devices can attest). As far back as the ’80s, when Apple launched the desktop-publishing revolution, the company has always made the case that the bourgeois comforts of an artfully constructed end-to-end solution, despite its limits, were superior to the freedom and danger of the digital badlands.

So we have one of those “shifty” articles—where we all move from one paradigm to another paradigm, with no room for both: no room for people who use smartphones, apps and iPads but also notebooks and browsers.

But as I read it, this doesn’t seem to be so much about the web in general as it is about traditional media and its relationship to the web. Even there, I think the thesis is overstated—and with an odd countergenerational overtone: “[F]or under-30s whelped on free content, the prospect of paying hundreds or thousands of dollars yearly for print, audio, and video (on expensive new devices that require paying AT&T $30 a month) is not going to be an easy sell.” But, Hirschorn says, that won’t stop “the rush to apps” because, especially with Apple as semi-benevolent overlord, “there’s too much potential upside” (and besides, people don’t criticize Apple for behavior that they would assault other companies for—a point with which I’m sympathetic).

I find the article bemusing. We learn that Twitter barely cares about, well, Twitter—that the smartphone version is more fully featured. It’s clearly an “or” situation: Apps can only rise at the expense of the browser. The grand finale? Harking back to the American frontier, Hirschorn concludes:

Now, instead of farmers versus ranchers, we have Apple versus Google. In retrospect, for all the talk of an unencumbered sphere, of a unified planetary soul, the colonization and exploitation of the Web was a foregone conclusion. The only question now is who will own it.

As Sue Kamm has said in another context, “In the words of the immortal Nero Wolfe, ‘Pfui.’” It doesn’t help to read the byline: Hirschorn runs a TV production company. I suspect, and particularly based on rereading the article, that he views the world in media terms: There are producers and consumers, and that’s just the way it is.

Relatively few comments over the past year, the first of which rushes to Apple’s defense—followed by one that posits that, you know, people can and probably will use both “walled gardens” and the open web. A few items down, we get a reasonably sound comment that begins with this subtle paragraph: “This is absolute rubbish.”

I’ll quote Dale Dietrich’s comment in full (typos and all—and since Dietrich was probably typing on a virtual keyboard, an occasional typo’s forgivable), as I think it speaks to the truth if you’re dealing with something more than corporate media:

The app does NOT diminish the importance of the browser. The app merely extends the web to more devices that it was hitherto inaccessible to. The App, as first popularized on the iPhone, wrested contol of what can be done on mobile devices from big telco to the individual. Like the browser-based web did before it, the app gave control to the end user. The author would do well to consider that all modern smart phones include browsers that are heavily used both independenty by users and by mobile apps that frequently embed the browser within the app. Case in point, I am viewing and responding to this silly article within the Safari browser that is embedded within my iPad’s Twitterific app. Hell, Twitter-based apps INCREASE my viewing of browser-based content by curating the web for me by the trusted folks I follow.

And, a bit later, this from David McGavock:

All of this assumes that the people who are participating in the read-write-create web will walk away and let apps dominate all their interactions. This dichotomy of apps vs. browser seems false to me in light of the fact that both have their strengths and weaknesses. This entire article assumes that the billions of people that are creating their own digital footprints will give it up for paid service. There is an explosion of legal sharing going on here. Are we all going to pack it up and go home because of the apps we use. I think not.

Then there’s a strange comment from “John_LeB” who apparently is aware of something I’m not:

It is true that some information remains free on the Web, but much research-based scholarship definitely does not. With on-line fee-based jobbers such as Taylor & Francis, Elsevier, Blackwell, Springer, etc., research that used to be freely distributed on the Web now carries a subscription fee. All well and good, perhaps; academic researchers are entitled to compensation for their scholarly production—but wait! Access fees rarely trickle down to their producing authors. Their reward lies in the “points” they can embed in their CVs for tenure or promotion. The jobbers are running free with the pecuniary revenue. One unfortunate spin-off is that access to research is foreclosed where it’s needed the most, in the developing world where the contemporary price of a journal article can represent a week’s worth of food. (Food for the stomach, that is.)

Ah, the good old days when research articles were always freely distributed on the web, back before those young upstarts like Elsevier grabbed it all… And that’s the complete comment. The writer’s probably as ignorant of open access as he is of the history of web access to research articles.

Mike Masnick does a pretty fair fisking of Hirschorn’s article in “Another Journalist Seduced By App Madness Predicts The End of the Web,” posted July 1, 2010 at techdirt. I won’t bother to excerpt his commentary: It’s free, and you can go read it yourself, unless you’re reading this on a smartphone that lacks any form of browser (a combination that seems somewhere between unlikely and impossible). Of course, if your only access to e-stuff is through such a smartphone or some truly locked down tablet, then you’re not reading this anyway, are you?

Oddly, in comments on Masnick’s piece, Hirschorn objects that his piece is “largely an attack on Apple’s efforts to curtail that freedom…”—which, if true, means that Hirschorn is an inarticulate writer, since I certainly didn’t read it that way. Even in this response, Hirschorn’s an Only One Future man: “Also clearly and obviously, the rise of mobile computing will result in less non-mobile-computing and the center of power will move from the browser to the smartphone/ipad experience.” Right. And neither smartphones nor tablets have browsers. Now, if Apple had a browser—oh, let’s give it some fanciful name like Safari—that would really change the inevitable future. But that’s as silly as it would be for Amazon to add a browser, say one with an even sillier name like Silk, to its entirely-walled-garden Kindle Fire.

If you do read Masnick’s piece, scroll through at least some of the comments. Hirschorn starts doing a complex “that’s not what I was intending/that’s not what I really wrote” dance that leads me more and more to believe that he really is inarticulate or incoherent. As you probably already know, I’m definitely not one of those who regards traditional journalism and media as irrelevant (as some commenters do)—but neither do I regard them as the whole of the landscape.

Why mention this now, almost two years later? Because we haven’t gone All Apps, All The Time. Because traditional real-world media continues to do better than a lot of digital junkies realize (for example, did’ja know that there are more than 300 million print magazine subscriptions in the US, and that 100 million people in the US still read print newspapers? hmm?). Because the world continues to evolve mostly in “and not or” ways, with more choices complementing one another rather than One Triumphant Paradigm shifting to Another Triumphant Paradigm, with no room for alternatives…and because this sort of “journalism” continues to be prevalent.

Prospectus: An Ongoing Public Library Social Network Scan

Wednesday, January 4th, 2012

I wrote about this six weeks ago in “Dreaming out loud: An ideal ongoing sponsored project.” This is a slightly more fleshed-out version, done after completing the 5,958-library survey and revisiting the first 2,406 libraries. (I’m well into revising the manuscript now, preparing secondary metrics as I go).

Briefly, I believe it would be worthwhile to do an annual nationwide survey of public library presence on social networks, looking at all U.S. public libraries–9,184 of them (based on IMLS figures as reported in Hennen’s American Public Library Ratings (HAPLR)).

The earlier post offers a longer version of what I’ve already done and some other notes; this post goes into a little more detail on possibilities. I would love to see this happen, but I don’t really have the contacts to make that happen. In late January and early February, I’ll be trying to see what’s possible (with your help, I hope). At some point, it may make sense to try the long shot (see “How to Pay” below).

The Baseline

As background for my 2012 ALA Editions book Successful Social Networking in Public Libraries, I looked at all of the public libraries in 25 states (distributed by population) in late summer 2011–and later added the libraries in 13 more states, for a total of 5,958 libraries in 38 states. For the first 25 states (and 2,406 libraries), I revisited four months after the first visit to look at changes in social networking.

The result is two spreadsheets, one of which (LSNALL) would be the baseline for the new project. (The other, LSN25, looks at the four-month changes. It wouldn’t be particularly relevant for the new project.)

LSNALL includes, for each library, the following–based on my own searching, as detailed in the earlier post:

  • Library name and Legal Service Area population as provided by the state library in its spreadsheet, noting that “Library name” is frequently something other than the name the library actually uses. (Only libraries that have an LSA are included, leaving out 7,000-odd branches but also cooperative agencies that aren’t double-counted.)
  • State abbreviation
  • Date on which I checked the library
  • “FB?”–a code indicating whether I found a working Facebook link to a library Facebook page on the website (w), in the first 100 Google results (g), or by searching Facebook itself (f), in that order–or, if none was found, whether I found a community or information Facebook page instead (i) or nothing at all (n).
  • If there is a Facebook page (or group, or non-page account), the number of Likes (or friends).
  • For the most recent and fifth most recent Facebook post from the library itself, a code indicating its currency bucket: d (the day I checked), e (week–within the past seven days including today), f (fortnight), m (month), q (quarter), s (six months), y (year) or z (more than a year). (“e” was chosen to make the buckets directly sortable.)
  • A one-letter code indicating whether I found some signs of interactivity within the “visible” posts (usually 20 to 30): “y” for a non-library comment or a non-spam post from someone other than the library; “l” if I found likes (by someone other than the library!) on posts but no comments; “s” if I found only spam comments (or only spam and likes); and blank if I found none of those.
  • A Twitter code, similar to Facebook except that there are no “i” cases and I use “t” instead of “f” if the Twitter account could only be found within Twitter itself.
  • Followers, following, and tweets, one number each.
  • The same most recent and fifth most recent bucket codes for tweets
  • An interactivity code, usually based on either non-library tweets, retweets, or tweets beginning “@”–I didn’t look as far for these, and don’t regard the results as very meaningful.
  • Comments if needed–sparsely. (E.g., “FB0” for a few cases where a library Facebook page is apparently the library’s actual page but has no updates, up through FB4 if there aren’t yet five updates, or “FB teen” or the like where there’s no general-purpose FB page but appear to be specialized pages.)
  • Added after the initial scan: “SN?”–a number from 0 to 2 indicating how many of the two possibilities the library had–and “H”–a number from 0 to 9 providing the HAPLR size category (0 being under 1,000, 9 being 500,000 and up), to ease sorting and, as it turns out, reporting.
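
The currency-bucket coding above can be sketched as a small function. This is a hypothetical reconstruction: the exact day counts for the month, quarter and six-month boundaries are my assumptions, not rules stated in the text.

```python
from datetime import date

# Hypothetical sketch of the currency-bucket codes described above:
# d = the day checked, e = within the past seven days (including today),
# f = fortnight, m = month, q = quarter, s = six months, y = year,
# z = more than a year. The day-count boundaries are assumptions.
def currency_bucket(post_date: date, checked: date) -> str:
    days = (checked - post_date).days
    if days <= 0:
        return "d"
    for code, limit in (("e", 7), ("f", 14), ("m", 31),
                        ("q", 92), ("s", 183), ("y", 366)):
        if days < limit:
            return code
    return "z"
```

As the text notes, using “e” rather than “w” for week means the codes sort alphabetically from most to least recent: d, e, f, m, q, s, y, z.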

A derivative spreadsheet, LSN38, leaves out all the rows with SN?=0 (libraries with no findable social network presence) and adds derivative columns for use in the book, such as “F%” (Likes divided by LSA), “T%” (same for Twitter followers), “T/F” (followers divided by likes), “Fr” (followings divided by followers) and “Fmx” and “Tmx”–two-character codes indicating frequency and reach buckets. There are also metrics spreadsheets and pages within these spreadsheets, of course, but the primary LSNALL spreadsheet is the true baseline.
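
The derivative columns reduce to one ratio apiece; a minimal sketch, assuming a simple per-library record (the function and argument names are mine, not from the spreadsheets):

```python
# Minimal sketch of the LSN38 derivative columns described above.
# The names are hypothetical; the ratios follow the text's definitions.
def derivative_metrics(likes, followers, following, lsa):
    return {
        "F%": likes / lsa,          # Likes divided by LSA population
        "T%": followers / lsa,      # Twitter followers divided by LSA
        "T/F": followers / likes if likes else None,        # followers / likes
        "Fr": following / followers if followers else None,  # following / followers
    }

# e.g., a library with 100 likes, 50 followers, following 25 accounts,
# serving a legal service area of 1,000 people:
metrics = derivative_metrics(100, 50, 25, 1000)
```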

Proposed One-Year Revision

With proper funding in place and possibly better ways to distribute the results, I’d see doing this between June and November 2012:

  • Start a new spreadsheet (linked to the old one for comparative metrics) to include the other 12 states and DC, based on the most recent IMLS dataset  (which would require either acquiring Access or working with a partner, since the other 12 states don’t seem to have downloadable spreadsheets). [Note: I now find that I can open the IMLS dataset within Excel, although I would swear that this wouldn’t work when I tried it in August 2011.]
  • Update 1/25/12: The IMLS database includes actual library names, not always the same as those used in the state library spreadsheet columns I was using; I’d use those names.
  • Check each row in the spreadsheet to fill in columns as follows:
  1. Actual library name, as used for a Google search (with the state name added to the library name).
  2. Position of the library’s official website (if one is found) in the result.
  3. Facebook columns as at present, with these changes:
    a. The second “current post” bucket would be based on the 10th most recent post, but normalized to the same meanings (i.e., 2 days, 2 weeks, 2 fortnights, etc.)
    b. The interactivity column would be replaced with a number representing the number of non-library, non-spam comments and posts found within the first 10 library posts, from 0 to whatever. Post-level likes would be ignored.
  4. Twitter columns as at present, with the same “bucket” change as for Facebook and with the “Following” actual number replaced with a code indicating general approach of following (open to modification, but storing the actual number feels like overkill). Unsure whether to modify the interactivity column or simply drop it.
  5. Google+ columns along the same lines as Facebook columns, but with the number for “Added to circles” replacing Likes. (Subject to change.)
  6. Optional, if someone believes it’s worth doing and would pay extra for it: Blogging column, with a number for the number of blogs identified on the library’s homepage, and with a separate spreadsheet identifying those blogs. (This could lead to a five-year update of my Public Library Blogs study. It may be a lot more work than it’s worth. The Public Library Blogs book sold 31 copies, but that was with only my own publicity.)
  7. Optional, and I’m not sure any of these are worthwhile: Columns for MySpace presence, YouTube presence, Flickr presence indicated on homepage.

Later Years

Similar spreadsheet, linked to earlier sheets or pages for analysis, and adding significant new social networks that welcome institutional pages/accounts if such networks appear.


Deliverables

The deliverables would depend heavily on who’s paying for this and what they want. Possibilities:

  • Writeup of results including comparisons to 2011 and metrics similar to those planned for the forthcoming book, distributed as a free PDF. The writeup (and specific writeups) would include not only benchmarks by size of library and state, but also lists of libraries doing particularly well in various metrics relative to their size, to serve as examples for other libraries wanting to improve their social networking.
  • More specific writeups for individual states or for specific library sizes.
  • Possibly the spreadsheet itself for further analysis.

How to Pay

I believe the results would be valuable, since I believe most public libraries can benefit from a social networking presence and it’s clear that most of them are not reaching nearly as many people as they’d probably like to. A variety of benchmarks and examples should help. (My book should help too, combining benchmarks, examples, discussion, advice…)

But it’s way too much work to do for free or on spec. My experiments in self-publishing have taught me that, and have taught me that I can’t really do it based on the hope of selling the results on my own.

I figure $15,000 a year for the whole process, including deliverables (but not including #6 and #7 above). Adding #6 would push that to $20,000; adding #7 is unclear.

That is in some ways small potatoes for most funding agencies, but it would meet my needs.

Possible avenues:

  1. An agency could sponsor this–either a foundation or an agency that already does library research, or, for that matter, an agency that finds it worthwhile. I’d be delighted to work with almost any such agency. The one real exception is one where I can’t imagine they’d want to work with me. (As noted in the earlier post, I’d be delighted to work with OCLC or WebJunction on this, or the Gates Foundation, a library school, a vendor, whoever. I suspect my lack of institutional affiliation is a problem for most funding sources, but I’d love to be proved wrong. Unfortunately, independent research is not highly regarded in this field, as with most other fields.)
  2. A group of state libraries could sponsor it, in which case I’d narrow the research to cover only those states and charge a different fee, something along the lines of $500 + $n per state, where $n is the number of reporting libraries in the state times a constant, probably $1 to $2.
  3. I could find some way to be assured that sales of the report–which I’d prefer to be free–would come close to generating $15,000 in revenue. I don’t see too many pigs flying overhead, and I don’t regard that one as probable.
  4. I could prepare a Kickstarter project, video and all. Would it be accepted by the curators? No idea. Would it stand a chance? Stranger things have happened…
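
The per-state pricing in option 2 reduces to a simple formula; a sketch, where the $1.50 constant and the 400-library example are purely illustrative (the text gives a $1 to $2 range):

```python
# Option 2's pricing: $500 base plus (reporting libraries x constant)
# per state. The constant and example count are illustrative.
def state_fee(reporting_libraries, per_library=1.50):
    return 500 + reporting_libraries * per_library

# A state with 400 reporting libraries at $1.50 each:
fee = state_fee(400)   # 500 + 600 = 1100
```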

I need help on this.

I’m scheduling this post to appear on January 4, just after most people get back to work. I won’t do anything active beyond this post until the draft of the social networking book is done (around the end of January if all goes well); then, I’d make whatever contacts I can–but I don’t have the contacts I’d like to have. I’d need to have funding lined up by June in order to plot out the survey process, and by May if I was going to attend this year’s ALA Annual Conference. If I can’t work something out by June, I’ll probably turn my attention to other book or writing possibilities and abandon this.

If you think you could help find a home for this, let me know, either in a comment or by email to waltcrawford at gmail dot com.

If you believe the project is worthless, there’s no real need to comment. [If that reaction is based on an assertion that all the public libraries already are on Facebook, you’re simply wrong, by several thousand libraries, but never mind…]

Updated 1/7/12: This blog is now receiving so many spam comments–over 130 on some days, even after tightening the comment period–that a response via comment is likely to disappear. So if you can help or are interested, please send me email, waltcrawford at gmail dot com.

This post describes a possible offshoot of the public library social network scan, and a likely thank-you premium if I try to go the Kickstarter route and succeed.

Thinking about magazines and journals

Tuesday, April 12th, 2011

Years ago, writing one of my “OA independent” articles on Library Access to Scholarship in Cites & Insights, I commented that it was highly unlikely we’d ever get to 100% e-publishing for STM articles. Specifically, I said it was unlikely that Nature or Science would go away, since both have substantial non-library subscription bases.

I hadn’t actually read an issue of either one, at least not for decades…

As I noted (probably just on FF), a few weeks ago I received an invitation to join the American Association for the Advancement of Science for $99/year (a first-year discount from the usual $149/year), which includes 51 issues of Science. The invitation included a copy of the February 18, 2011 Science.

The last thing I need at this point is more magazines, particularly given my idiot tendency to actually read everything I subscribe to from cover to cover, or at least start each article. I discarded the invitation…but kept the issue.

And read it. Not the whole thing, to be sure, but nearly all of what I’d call the “magazine portion”–in this case, pages 811-875, as opposed to pages 876-931, the “journal portion” (followed by ads and advertorials).

My initial conclusion: If AAAS was really committed to the advancement of science, they would and could go to gold OA, turning the print version of Science into a weekly or fortnightly magazine (about half as thick as it currently is) and publishing all the full peer-reviewed research articles and reports online with full and immediate access. Oh, and charging a much more reasonable fee for an institutional subscription to the magazine than the current $990–like, say, $149, or perhaps twice that including immediate online access to all the features that make up the magazine portion and the ScienceNow daily news, ScienceInsider, and so on…

Why? Well, it’s a really good science magazine. It has lots of ads. It includes lots of well-edited, well-written material. I suspect it would continue to thrive as a magazine. As a journal, however, it makes more sense online, both because it covers too much territory to make sense as a browsing resource for any given scientist and because much of it’s online-only anyway.

Maybe it would need article processing fees, although it’s hard to believe they’d need to be four-digit fees. In any case, going full gold OA for the peer-reviewed material would certainly be a huge step forward in the advancement of science. And it’s always been part of the case made by serious OA advocates (e.g., Peter Suber) that it’s legitimate to charge for added value, such as popularized versions, discussions, news, etc.–all the stuff that makes up Science’s magazine section.

This has probably all been said before, but I really was struck by how much the issue came off as a very good science magazine with a bunch of very specialized peer-reviewed items in back of the magazine. And how likely it is that the magazine would survive and probably prosper without charging high fees (or any fees) for online access to the peer-reviewed items.

Comparing potatoes and truffles

Monday, December 13th, 2010

Remember Wired Magazine‘s absurd “The Web is dead” cover article (September 2010)?

I can’t think of anything that was right about the article. One of the things that was most wrong was the big graph that showed how the web was dying–by plotting all internet traffic, in bytes, on a market-share graph (that is, one where the Y axis is always filled, since it goes up to 100% and the segments show percentage of each area over time).

One thing that was wrong with it is that this kind of graph is almost always misleading or meaningless when an overall space is either growing or shrinking, since it represents percentages, not absolutes. If Amazon goes from selling 90% of ebooks when ebook sales are $1 million per year to selling 30% of ebooks when ebook sales are $1 billion per year, I can assure you nobody at Amazon is saying “Damn. We’ve died in the ebook space.” But that’s what a market-share graph would show: A dramatic, awful, terrible decline in Amazon ebook sales.
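
To make the Amazon illustration concrete (the figures are the hypothetical ones from the paragraph above):

```python
# Market share can collapse while absolute sales explode, which is why
# a share-only graph misleads when the whole market is growing.
year1_market, year1_share = 1_000_000, 0.90      # $1M ebook market, 90% share
year2_market, year2_share = 1_000_000_000, 0.30  # $1B ebook market, 30% share

year1_sales = year1_market * year1_share   # $900,000
year2_sales = year2_market * year2_share   # $300,000,000

# The share graph shows a fall from 90% to 30%; actual sales grew
# more than 300-fold.
growth = year2_sales / year1_sales
```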

The other is even more absurd, and is where I get “potatoes and truffles.” Well, you know, they’re both edibles that come from the ground, so clearly truffles are dead, since the weight of potatoes sold each year must surpass the weight of truffles by several orders of magnitude. Actually, they’re both tubers, so what’s the difference? (“Several orders of magnitude”: I can’t readily find the current total production/sale of truffles, but it apparently peaked at “several hundred tonnes” early in the last century, so I’d guess it’s no more than, say, 314 tonnes now. Which is a deliberate choice because 2008 worldwide production of potatoes was 314 million tonnes. So figure at least a million times as many potatoes, by weight. And there’s even the time element, since truffle production has dropped enormously while potato production continues to rise.)

The other fallacy? Choosing one measurement and assuming that it’s meaningful in other contexts. In this case, choosing data volume (bits or bytes) and assuming it relates somehow to “where people spend their time.”

I choose that quotation because here’s how Wired responded to the criticisms of their chartjunk in this case:

While not perfect, traffic volume is a decent proxy for where people spend their time.


Last Saturday, we had a friend over and spent a wonderful two hours and 31 minutes watching the glorious Blu-ray version of The Music Man. I felt as though I’d never really seen the picture before. It was great. It was also 2.5 hours.

I’m guessing The Music Man probably took up around 40GB (a dual-layer Blu-ray Disc has 50GB capacity).

Today, I’ll start reading a mystery novel that I’m certain is going to be enormously entertaining as well. At 250 pages, the text in it would probably occupy about–well, let’s call it 80,000 bytes, although that’s probably on the low side.

By Wired’s “reasoning,” it’s a fair approximation to say that I should spend around 0.018 seconds reading that book, since it has only one five-hundred-thousandth as much data as The Music Man–and “traffic volume is a decent proxy for where people spend their time.”

In the real world, I’ll probably spend three or four hours reading the novel, maybe a little longer.
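
The arithmetic behind that comparison, using the post’s own figures (the 40 GB and 80,000-byte estimates come from the text above):

```python
# If "traffic volume is a decent proxy for where people spend their
# time," the movie's seconds-per-byte rate should apply to the novel.
movie_bytes = 40 * 10**9       # ~40 GB Blu-ray
movie_seconds = 2.5 * 3600     # 2.5 hours of viewing
novel_bytes = 80_000           # the text's estimate for the novel

seconds_per_byte = movie_seconds / movie_bytes
implied_reading_time = novel_bytes * seconds_per_byte   # ~0.018 seconds
```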

An extreme case?

OK, so a Blu-ray Disc is an extreme case. Internet traffic almost never includes 30 Mb/s streams, which is roughly BD level. But it does include loads of video, probably at traffic rates between 250 kb/s and 6 Mb/s, and audio, at traffic rates of at least 64 kb/s for anything with halfway decent sound (“halfway decent” is the operative term here).

So if I watch a one-minute YouTube clip, it’s likely that the traffic amounts to at least 1.9 megabytes (at the lowest data rate supported by YouTube) and more likely at least twice that much.

How much time would it take me to read 1.9 megabytes worth of text, even with HTML/XML overhead? Without overhead, that’s about 300,000 words, or the equivalent of three long books. With PDF overhead (which, for embedded typefaces, is considerably more than HTML overhead), that’s four typical issues of Cites & Insights–but for the text itself (with Word .docx overhead), it’s at least a year of C&I. I pretty much guarantee that anybody who reads C&I at all spends more than a minute doing so, even though the data traffic only amounts to a few seconds worth of YouTube.
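
A rough check of the one-minute-clip figure. The 256 kb/s stream rate and the 6.3 bytes-per-word average are my assumptions, chosen to match the totals given above:

```python
# One minute of video at ~256 kb/s versus the same bytes as plain text.
stream_bits_per_second = 256_000
clip_bytes = stream_bits_per_second / 8 * 60   # ~1.9 MB for one minute
words_of_text = clip_bytes / 6.3               # ~300,000 words

# Three hundred thousand words is roughly three long books: traffic
# parity, wildly unequal reading versus viewing time.
```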

Equating “traffic” for text, or even still photos, with “traffic” for sound or video–as if it said anything meaningful about time spent–is just nonsense. Wired says “We stand by the chart.” That says a lot about Wired–and almost nothing about the present or future of the web.

What year did downloaded music start outselling CDs/vinyl?

Thursday, July 1st, 2010

Very short post, with the heart of it in the title above, so as to encourage FriendFeed participation.

This is a test of your digital awareness. Without looking it up, try to answer that one question:

In what year did downloaded music (iTunes, etc.) start outselling music-on-containers (CDs, vinyl, etc.), worldwide?

Comments will remain open until Saturday, July 3, after which I’ll comment on the responses.

The apparent answer: According to the sources I’ve seen, the answer is “2011, probably.”

Which is also to say: It hasn’t happened yet.

So Davin’s right, Stephen’s close enough, and John’s…not so much.

Random Sunday music musing

Sunday, June 13th, 2010

When I got my dandy little 8GB Sansa Fuze “MP3 player” (because it was on sale for $69 at Radio Shack, which it is again this week), I loaded it with 863 songs from my collection that I think are better than average–ones I give 3, 4 or 5 stars. (The whole collection, excluding classical, is around 3,000 songs–mostly fairly old CDs, but I’ve added a handful of used CDs purchased recently…)

And I’m going through it for a first pass, really listening to songs, usually about 10 a day.

Today I hit a song that was great–a recent addition, so I’d probably only heard it once before–but that also reminded me I’ve lost most of the specific vocabulary for music I might once have had.

The song: “Hard to Love” by Vance Gilbert (from one thru fourteen, released 2002 on Louisiana Red Hot Records).

It’s a blues of a particular style–with verses minimally accompanied (Hammond B3, electric lead & rhythm guitars, acoustic bass, drums–the bass descending one note per bar, minimal riffs from the rest), and then a solid horn section cutting in on the chorus. I mean a tasty horn section. (I’d actually been thinking, you know, I need a few more songs with really tasty horn sections.)

Yes, I can recognize a Hammond B3 almost instantly…or one heck of a good synthesizer simulation. Can’t you? Some day, all the Hammond B3s will be gone and irreparable; that will be a sad day for blues/jazz/whatever. Also yes, I’m one of those who thinks Al Kooper’s contributions to American music have been undervalued…

And I realized that I didn’t know whether this was Tower of Power-style horns, Memphis Horns style, or something else entirely. It only matters in that it’s hard for me to describe this number adequately.

[Checking the liner notes/booklet, one of those things that come with CDs, I find something really interesting, given that the horns seemed to have a pretty natural acoustic and stereo spread: The horns are the “Joe Mennona horns,” which appears to consist of Joe Mennona overdubbing all the horn parts–tenor sax, alto sax, baritone sax, trombone and trumpet.]

No real significance here. You can enjoy music without being able to describe it properly.


Monday, June 7th, 2010

The June San Francisco Chronicle Magazine (the Chron only does its own glossy-magazine section once a month, a very sensible decision–the weekly book section and review/entertainment section are separate anyway) leads off with an editor’s column with the same title as this post.

It’s not all that long (465 words–shorter than this 558-word post); you can read the whole thing yourself, and look at the amusing picture. The theme: Meredith May (the writer) has been

getting into polite arguments with friends who have been posting pictures of me on Facebook and Flickr that I would never want you to see.

They’re not nude shots or anything like that–but they were “taken in private moments with friends before the world was wide and covered in a Web.” May doesn’t think it’s up to other people–even her friends–to decide which parts of her own history should be made public.

She notes a specific incident–she’s going to the Pulitzer Center on Crisis Reporting to talk about her story on girl slavery in Nepal and, checking Facebook in the airport, finds that an old friend has posted pictures of her drinking and posing at high school house parties…

May doesn’t quite understand people’s impulse to overshare their own stuff–“but over-sharing someone other than yourself without his or her permission is baffling.” And, indeed, once we learn that any candid shot is likely to turn up on the web, spontaneity could suffer.

I have had parties at my house with a dozen of my lovely artist friends, and nine will bring a camera and start shooting. The whole reason for having your homies over for a party is that you can let down your hair and dance on the counter if you want to. But I’m more cautious now. The joie de vivre, the carpe diem, the being alive part of living – is tempered.

In our haste to document and share everything, are we losing what it means to live in the moment?

I can’t speak for anyone else, but this editorial certainly resonates with me. I’ll take it a step further: “Agreeing” that a picture can be posted isn’t always being entirely happy about it. Coercion is a strong word for the process that takes place, but it’s a form of social pressure–the desire not to be thought a complete killjoy.

There are pictures of me on the web (oddly enough, they show up in Google but not on Bing) that I could do without. One of them has a caption about what a good sport I was. “Good sport” in this case really means “didn’t feel he could avoid this without looking like a killjoy.”

I know that my own behavior at, say, conference receptions is now much more circumspect than it might have been in the past, that I’m much less willing to don silly hats or assume silly poses or hold up silly signs. A few years ago, I would have assumed that a few folks would have gotten little laughs out of the silliness as captured in photos. Now, I assume that the silly pictures will live forever on the web and in search-engine results–and while they can’t really do me any harm, I’d just as soon not, thank you.

So does this make me a killjoy? Maybe so. Such is life. Apparently I’m not the only one…

A quick twofer

Wednesday, June 2nd, 2010

Two miniposts for the price of one!

Gold star

I would be remiss if I did not mention that this here blog received a gold star from Salem Press in its library blog thingie, particularly since they were very quick to move this blog from Public Library Blogs (!) to General Blogs (I was hoping for Quirky, but you can’t always get what you want) after I let them know…

(There seems to be no shortage of links to the Salem Press list, so the lack of one here shouldn’t be an issue.)

Quick expert advice from librarians about web tools

Here’s an easy two-part test for modern librarians–or, better yet, just those who are considered web specialists. They’re honest questions, and presumably y’all should be able to answer them on the spot, in the comments:

  1. I have a fully-formatted book ms. done using Word 2007, but also in PDF. How do I convert it to epub (without DRM), retaining as much of the formatting as possible? I even have Calibre, if that helps.
  2. OK, so I have the new Facebook privacy tools now, but I just looked at my Privacy settings and I don’t understand what’s going on here:

[Screen capture: Facebook Privileges settings]
Note: This is a straight screen capture, cropped but with no other changes. You may have to scroll right to see what I’m really interested in.

To wit: What does “Other” mean? How can I find out?

I await responses with some interest. Based on other discussions, I assume that any employable web services librarian should have answers…

dr? dc!

Monday, May 24th, 2010

Right up front: I’ve been guilty of this before and probably will be again.

As I was working on a Zeitgeist piece, I looked at a nicely-done 1,300-word essay. On a national newspaper website. About one aspect of social networking. With some interesting and slightly controversial things to say, some of them certainly open to argument.

The very first comment detailed the length of the essay–how many words, how many characters, how many sentences, average number of letters per word, length of longest sentence–and ended with a note suggesting that there was no content, or at least that the commenter hadn’t read it.

Understand: The commenter didn’t disagree with what was being said–the commenter was trashing the essay based on its length (apparently). Several other commenters offered variants of the old “tl; dr” brushoff–that is, “too long; didn’t read.” (I rarely see that on liblogs–maybe library folks actually have more than ten-second attention spans, or at least believe that “tl; dr” leaves one open to accusations of subliteracy.)

I’m not going to argue that people damn well should read longer essays. After all, 1,300 words is just a bit less than two pages of C&I, or three or four pages of a typical trade paperback, or one-third of a typical In the library post, or nine Friendfeed posts. If that’s so much text it makes your brain explode or your eyes hurt, who am I to argue.

dr? dc

But, well…

If you didn’t read the article or post, why are you commenting on it?

Equally, if you read the article or post and have nothing to say about the topic or the substance of the post or article… why comment on it?

Because you know the writer hangs on your every word so much that she will at least appreciate knowing you dropped by? Because you’re so damned important that you must respond? Because you’re a frustrated graffitist? Because you have no life?

I think ~~all of us~~ many of us do this sort of thing–or leave equally vapid responses–once in a while. (Yes, that’s a preventive strikeout: I was about to commit a universalism, and I damn well should know better.)

It works both ways. I waste time on FriendFeed. (I also use FriendFeed, and maintain friendships on FriendFeed, and gain valuable insights on FriendFeed. And sometimes I waste time on FriendFeed–the activities aren’t mutually exclusive.) As many categories as I’ve hidden, as rarely as I Follow anybody new, I still see dozens of posts (mostly from Twitter, but not all) of the “what’ll I have for breakfast / I just had X for dinner / I just posted from Y” flavor, stuff that for me is almost exclusively in the “who cares?” category–just as some of my posts here fall into the “who cares?” category for some, maybe most, occasionally all readers.

I don’t believe I’ve ever found any reason to comment on a “what I had for breakfast” FF item by asking who cares or saying “don’t clutter up the feed with that crap” or anything of the sort. If I don’t care, why would I take the time to comment? (And, for that matter, if I don’t care, how does that imply that nobody else could possibly care?) I’m dead certain I’ve left equivalent responses on some posts and FF messages, however, and I’m sure I will in the future.

And I’ll be (trivially) wrong to do so.

As of that last period, this post contains 570 words. That’s probably too long for some of you–but I suspect that people who can’t handle 600, 800, or 6,000 words aren’t among my audience anyway.

By the way: I’m tagging this “Net Media”–but I no longer believe that term has much of any meaning, and I’m also doubtful about “Social Media.” That’s an essay I’ll be writing one of these days, probably in C&I. 636 words. My work here is done (645).