Archive for the ‘Media’ Category

Sunday silliness: Two meaningless miniposts

Sunday, November 4th, 2012

If you’re looking for significance, librarianship, or any of that stuff, you’ve come to the wrong place (and look! there’s an Oxford Comma, which I normally don’t use).


Part 1: Really? A 72-page Section?

The San Francisco Chronicle Sporting Green section (yes, it’s on green paper–or, rather, has green as a background except when printing color photos/ads) has been doing a bang-up job of covering that local baseball team, including a remarkable series of full-page (broadside, 11″ x 21″, good-quality color printing albeit on newsprint) action photos of most of the team, with each photo as a single sheet (backed with a full-page color ad, of course) during the post-season. It doesn’t hurt that the paper recently added first-rate writer Ann Killion to its already superb sportswriting staff. There’s been a ton of orange ink along with all that green…

So Friday the paper noted that it would have a 72-page special section commemorating the World Series in today’s Sunday paper. First reaction: Really? 72 pages? That’s as much content as a fairly substantial book. Of course, lots of it will be reprints of front pages or Sporting Green front pages from the post-season, but still…

Then I thought: How is the Chron going to print a single 72-page section? None of its sections are ever more than about 20-24 pages: Can the presses even handle that big a wad of paper?

Got the answer today. You redefine “section”–it’s actually four sections, two 24-page and two 12-page. No, I haven’t skimmed through it yet. (I’m not much of a sports fan, but I read a fair amount of the Sporting Green for the same reason I once subscribed to Sports Illustrated: I love quality writing.)

Part 2: “You would have won…”

Some of you know that I enjoy playing video poker–and until the past decade or so (my wife’s asthma has gotten a little worse and we’ve both become more sensitive to smoke, especially as sane states outlaw smoking in hotels and restaurants), we went up to Reno two or three times a year, spending half of each day visiting places in Northern Nevada, half getting cheap entertainment at the poker slots.

(It’s not gambling in our case, it’s gaming: Neither of us had any expectation of winning, since we’re both very numerate, and we set our limits such that it really was cheap entertainment, never more than $100 a day for both of us combined. Playing quarter or nickel video poker one or two coins at a time, with a 96% or better payback–not 98% or better, because you don’t get the 4,000-coin royal flush payback unless you play five coins–$50 per day per person goes a long way.)

It’s been years since we’ve gone to Reno for a vacation (that may change, and would certainly change if the casinos would listen to 80% of their customers…), and the last time I played for money was during ALA in New Orleans, spending two or three very profitable hours in Harrah’s. Meanwhile, thanks to a non-gambling site run by the maker of most multihand video poker slot machines, I’ve enjoyed video poker whenever I need a break from whatever I’m doing on the computer–for no money, with of course no money to be won either.

Technically, I pay $29/year to avoid lots of flashing third-party ads. And technically, there is money to be won–there’s a daily contest with a $50 first prize, which I won once and, based on normal odds, might win again in about 10-12 years. The last few days of each month, there’s a monthly contest with several cash winners up to $500. None of this is gambling because neither the $29/year silver membership nor the $8/month gold membership–which lets you chat and do other things–improves your chances of winning in any way whatsoever, much to the chagrin of some gold members.

Anyway: The daily and monthly contests–and the site as a whole–are designed to expose people to different variations on video poker and, presumably, encourage us to be more adventurous next time we’re in a real casino. (It also helps thoughtful people figure out what they should or shouldn’t hold, to actually get the 98%-101% payback that’s possible on some video poker variations in casinos with Nevada odds.)

It’s backfired for me, I think: I find the video poker almost as much fun as the real thing, with the advantage of no smoke, my choice of background music (if any), my choice of whether or not the slot machine makes noise, my choice of “free” drinks…and the ability to enjoy a five-minute or fifteen-minute session as often or as occasionally as I want, with no effort. My desire to go to real casinos is considerably less than it was before I started playing at the site…although, if I go to ALA 2014, I’ll certainly drop in to a few of the casinos there. Briefly.

And, after all this digression, here’s the point. The daily and monthly contests are hundred-hand rounds (of which you can play up to five or eight, depending on whether you played five the previous day), always at maximum bet. Instead of the way the site usually works–where you start with 10,000 points and the total goes up or down depending on your play–in this case, you start with zero and gain whatever you win. At the end of the round, your score is reported and you land on another screen.

If your score is higher than the nut–the amount you’d actually bet if you were playing with real money–you get a big Congratulations! and the amount you would have won if you’d been playing at a quarter machine in an actual casino. (As some of the gold members have commented during monthly contests, some of the high scorers really need imaginary wheelbarrows to cart off all that imaginary money.)

This is all amusing; keeping track of win/loss for a particular variation is one thing I do (and I’m sure others do), and it would probably guide what I actually play if/when I do go to a real casino. But…

Last month’s monthly contest was a new variation: Hundred-hand poker (that’s not new: one hand is dealt; you choose which cards to hold; those cards show up on 99 other hands, and each of 100 hands is dealt out)…with Super Times Pay, which means that about 6% of the time your hands are worth anywhere from 2 to 10 times as much. (“About” is key: I’ve seen as few as zero and as many as 14 out of 100 hands get the STP multiplier, although it’s usually from 4 to 8.)

With Super Times Pay, max bet for each hand is six rather than five.

Doing the arithmetic…six times 100, carry the…you can see that you’re wagering 600 coins. On each deal.

So when, on my best session last month, I was informed that I would have won $7,370 or so…I found it hard not to laugh. Sure, if I was willing to wager $125.00 on each play. Let’s see: My total voluntary exposure is $50 per day. So I could play one play every 2.5 days… And, by the way, on the round just before that (which was far and away my best round–and about 5% of what I’d need to win the monthly contest), I would have lost $2,848.50.
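As a quick sanity check on the arithmetic (a sketch in Python; the coin counts and dollar limits come from the post, the variable names are mine): the $125-per-play figure matches the five-coin base bet, while the Super Times Pay sixth coin would push a quarter-machine deal to $150.

```python
# Illustrative check of the hundred-hand wager arithmetic.
COINS_PER_HAND_BASE = 5   # standard max bet per hand
COINS_PER_HAND_STP = 6    # Super Times Pay adds a sixth coin per hand
HANDS = 100               # hundred-hand poker
COIN_VALUE = 0.25         # quarter machine

base_wager = COINS_PER_HAND_BASE * HANDS * COIN_VALUE  # dollars per deal
stp_wager = COINS_PER_HAND_STP * HANDS * COIN_VALUE

daily_budget = 50.0                       # per-person daily limit
days_per_deal = base_wager / daily_budget # budget-days one deal consumes

print(f"Base wager per deal: ${base_wager:.2f}")   # $125.00
print(f"With Super Times Pay: ${stp_wager:.2f}")   # $150.00
print(f"Days of budget per deal: {days_per_deal}") # 2.5
```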

Not. Gonna. Happen. Not ever. Oh, I’ll play 100-hand poker: It’s a great way to test out the odds of various holding strategies in real life. But I won’t play it in a casino: Even at a penny machine, that’s $5 per hand (without the Super Times Pay nonsense).

Let me amplify Not. Gonna. Happen. There are, I’d say, three categories of video poker:

  • Versions I would play in a casino once in a great while: Mostly versions where the total exposure on each hand is, say, $1 or less. That could include three-hand poker (the most common multihand option) with maximum wager on a nickel machine ($0.75/deal).
  • Versions I might play if I’d won Super Lotto or the Publisher’s Clearing House megaprize and was really bored, but probably not for very long: Those are games like my favorite online, Multi-Strike Poker (my favorite mostly because it’s visually and sonically superior to most others), where you’re betting 20 coins per deal.
  • Versions I wouldn’t play even if I won both Super Lotto and PCH, unless somebody else was paying for all the wagers and giving me some portion of the winnings. That’s basically anything involving a wager of more than $2 or so per deal. Which puts $600 per deal way out there.

All of which means I’ll never be a casino’s favorite customer. I don’t gamble: I game. And I only game as cheap entertainment, where I assume that I’ll lose all of my allotted funds and stop. Even if I was wealthy, it would offend my sensibilities to redefine “cheap” in a manner that made spending $5 on a single deal plausible.

As for actual real-world winning and losing: The odds say that, even if you play perfectly, you will spend money in the long run…and, of course, most slot players don’t even play close to optimally. A 1.5% house edge adds up over the long run.

But the long run is the long run. In fact, I’m up overall for at least the past decade, because other than a little gaming on cruises and the New Orleans sessions, I really haven’t spent much time playing with actual money in the past decade–and I was extraordinarily lucky in NOLA, including the first royal flush I’ve ever had. Was I disappointed that I only got $62.50 for the royal flush instead of $1,000 because I’d bet one quarter, not five? Not at all. I was gaming, not gambling.

And that’s it: some Sunday silliness. Now to get back to a project. Or maybe try one round of today’s free contest, where I can neither win nor lose any actual money.

Why I’m Giving Up Wired: Exhibit 1

Friday, April 13th, 2012

Some time back, I was offered Wired Magazine on one of those airline miles-for-magazines deals, from one of several airlines I don’t expect to use much in the future.

I’d subscribed to Wired back in the days when reading it was an ordeal thanks to “innovative” design and typography. While I found much of the writing good, I found the overall attitude so absurdly deterministic (digital is always better, the new always replaces the old, Negroponte and Jobs are both saints and never ever wrong) that I gave up on it.

When it started coming again, I saw several things:

  • The layout is much more conservative. It’s easier to read the text.
  • It’s a Conde Nast magazine–which means it’s not only generally well written but generally well edited and even proofread, with strong production values.
  • But it’s still Wired–except that now it seems to be an even stranger mix.

It’s possibly worth noting that I’d also been visiting the Wired website every day or two. I stopped doing that because I didn’t need more material for the snarky sections of Cites & Insights and eventually found the cheerleading and oversimplification (and, yes, Apple-worship) tiresome.

That turned out to be true in the print mag as well–even though along with the product reviews that reminded me how silly most product reviews are (outside of Consumer Reports and specialist magazines) and the sheer digital triumphalism, there were and are some first-rate pieces of journalism.

I’d decided to give it up. That turned out to take a lot longer than I expected–because another mag-for-miles deal, Conde Nast’s newish business magazine, folded before I received my first issue and the publisher chose to extend my Wired subscription.

Now it’s finally coming to an end: The June 2012 issue is the final one.

The publisher’s been trying pretty hard to entice me to stay (it’s all about the guaranteed base circulation for advertising rates). The last two offers have been on the absurd side: $15 for two years of a monthly magazine.

Was I tempted? Well, not very much, but…

Your Next Car Will Drive Itself

There’s the killer. The 72pt (inch-high), all-caps, bright red text on the February 2012 cover. (I’m a couple months behind on magazine reading.)

In somewhat smaller type: “NO TRAFFIC JAMS   NO CRASHES   UNLIMITED TEXTING.” Now there’s an interesting trio…and I’m guessing that “UNLIMITED TEXTING” is at least as important to the target demographic as the other two.

My immediate response to the big bold text?


I knew damn good and well that for a fair number of the well-to-do folks who read Wired (and yes, there are lots of those), their next car would have been purchased between the time this issue was published and the time I read it. Not one of those cars would drive itself.

I finally read the article. “Let the Robot Drive.” It’s a pretty good article, actually–and, of course, it certainly does not say what the cover implies. The most optimistic projection (and I’m suspicious of that) was that by the end of the decade, self-driving cars might be fairly standard.

In other words, for the cover to be right, no Wired subscriber can buy a car for the next eight years.

[Realistically? Yeah, I’d love to see self-driving cars, if the car mfrs. assume the liability when things go wrong (oh, that’s right, things never go wrong where computers are involved). I’m not at all convinced that they can really work effectively unless every car is self-driving and it’s not possible for a self-centered idiot to override the autonomous features. Think that’s gonna happen next year? Next decade?]

Maybe. I’d love to see it. But the cover turns out to be such a pure example of the hype that makes Wired run that I’m tempted to save it, just in case the publisher comes back with even more absurdly low renewal prices.

I’m just as happy not to have the website and the mag as unending sources of stuff to make fun of in my own writing. There’s enough of that around in any case. If you love Wired, I’m sure you’ll continue to do so. Don’t let me discourage you.

How much will you pay me to pay for a subscription?

Monday, March 19th, 2012

There was a curious item in today’s (USPS) mail–a note from Wired Magazine trying to coax me to renew my subscription (which expires in June, and which I believe I got for free using airline miles, then was extended because the Conde Nast magazine I actually wanted–their classy business mag–went under the same month I subscribed).

Oh, it’s not curious that they want me to renew. What’s curious is just how far they’re going. The offer: $8 for one year. $15 for two years. That’s $15 for 24 issues of a monthly magazine, which may or may not be enough to cover postage.

To me, that smacks of desperation. For that matter, the typical $10 renewal price for many slick large-subscription monthlies strikes me as a little desperate, but when you fall significantly below that threshold…

What’s happening here, I’m fairly certain, is that Conde Nast is anxious (desperate?) to retain its rate base, the guaranteed level of circulation used to set its ad rates, since it is ads that actually pay for Wired. And I can see the difficulty. The magazine’s media kit for the print magazine says “Paid/Verified Subscriptions: 715,749”–but the December 2011 required USPS magazine-rate information shows an average of 628,364 paid subscriptions for the past 12 issues, down to 621,059 for the most recent issue.

I suspect I’m missing a bunch of requested but not paid subscriptions, since I really don’t believe Wired’s print version has been bleeding subscribers that rapidly–but I can understand the desire not to lose any more.

As it happens, it won’t work. It doesn’t help that the issue I’ll read next (not the current issue–I’m a bit behind) has this as its primary cover text: “YOUR NEXT CAR WILL DRIVE ITSELF.” Not “some day, some of us may have self-driving cars” but rather typical Wired: metahyperbole, like hype-squared, a statement that’s so ludicrous as to be offensive.

I wonder just how low the magazine will go? $5 a year? $1? Send me a free iPad if I send them a $15 three-year print subscription? (I might go for that. I might not.)

Samir Husni, “Mr. Magazine,” has written about print magazines underselling their own value. I think his figure was $1/issue–that is, offering subscriptions for that little money, or less, indicates that a magazine’s publisher doesn’t place much value in the magazine.

It’s not a bad figure. Let’s say that going significantly below $1/issue doesn’t inspire confidence. What does that say about $15/24, that is, $0.625 per issue?

Understand: I’m a print magazine kind of guy. I take a bunch of them. I read them pretty much cover to cover (which may be why my to-be-read stack, between two and three months’ worth, is now completely filling the designated basket). I like some variety.

I’m keeping Fast Company (which offered me extensions at $10/year, but it’s a 10x/year “monthly” so that’s $1/issue). Lately, I find that Fast Company is almost as tendentious and assured of the single future as Wired always has been (“GenFlux” as the universal future, my nether regions), but in general it’s a little less zooey. Taking both strikes me as overload.


Closing the Digital Frontier

Monday, February 20th, 2012

One reason I’ve said there’s likely to be more activity in this blog in the future (and there has been!) is that I plan to post some pieces that are elements of likely future Cites & Insights composite sections–posted here in their raw form, subject to revision or deletion along the way. Not all of them, by any means, but some. Here’s an example. It’s also an example of something you’re likely to see a lot of in C&I this year and maybe next: Catching up, as I go through two years of tagging–and find that although the cited articles are “outdated,” they still speak to things that are happening now and might be worth commenting on.

So here’s a part of one section, assuming it stays. The title on the section is the same as on this post:

Closing the Digital Frontier

According to Michael Hirschorn’s article of that name, in the July/August 2010 Atlantic Magazine, the “era of the Web browser’s dominance is coming to a close.” Why? Because “things are changing all over again.”

The shift of the digital frontier from the Web, where the browser ruled supreme, to the smart phone, where the app and the pricing plan now hold sway, signals a radical shift from openness to a degree of closed-ness that would have been remarkable even before 1995. In the U.S., there are only three major cell-phone networks, a handful of smart-phone makers, and just one Apple, a company that has spent the entire Internet era fighting the idea of open (as anyone who has tried to move legally purchased digital downloads among devices can attest). As far back as the ’80s, when Apple launched the desktop-publishing revolution, the company has always made the case that the bourgeois comforts of an artfully constructed end-to-end solution, despite its limits, were superior to the freedom and danger of the digital badlands.

So we have one of those “shifty” articles—where we all move from one paradigm to another paradigm, with no room for both, for people who use smartphones, apps and iPads but also notebooks and browsers.

But as I read it, this doesn’t seem to be so much about the web in general as it is about traditional media and its relationship to the web. Even there, I think the thesis is overstated—and with an odd countergenerational overtone: “For under-30s whelped on free content, the prospect of paying hundreds or thousands of dollars yearly for print, audio, and video (on expensive new devices that require paying AT&T $30 a month) is not going to be an easy sell.” But, Hirschorn says, that won’t stop “the rush to apps” because, especially with Apple as semi-benevolent overlord, “there’s too much potential upside” (and besides, people don’t criticize Apple for behavior that they would assault other companies for—a point with which I’m sympathetic).

I find the article bemusing. We learn that Twitter barely cares about, well, Twitter—that the smartphone version is more fully featured. It’s clearly an “or” situation: Apps can only rise at the expense of the browser. The grand finale? Harking back to the American frontier, Hirschorn concludes:

Now, instead of farmers versus ranchers, we have Apple versus Google. In retrospect, for all the talk of an unencumbered sphere, of a unified planetary soul, the colonization and exploitation of the Web was a foregone conclusion. The only question now is who will own it.

As Sue Kamm has said in another context, “In the words of the immortal Nero Wolfe, ‘Pfui.’” It doesn’t help to read the byline: Hirschorn runs a TV production company. I suspect, and particularly based on rereading the article, that he views the world in media terms: There are producers and consumers, and that’s just the way it is.

Relatively few comments over the past year, the first of which rushes to Apple’s defense—followed by one that posits that, you know, people can and probably will use both “walled gardens” and the open web. A few items down, we get a reasonably sound comment that begins with this subtle paragraph: “This is absolute rubbish.”

I’ll quote Dale Dietrich’s comment in full (typos and all—and since Dietrich was probably typing on a virtual keyboard, an occasional typo’s forgivable), as I think it speaks to the truth if you’re dealing with something more than corporate media:

The app does NOT diminish the importance of the browser. The app merely extends the web to more devices that it was hitherto inaccessible to. The App, as first popularized on the iPhone, wrested contol of what can be done on mobile devices from big telco to the individual. Like the browser-based web did before it, the app gave control to the end user. The author would do well to consider that all modern smart phones include browsers that are heavily used both independenty by users and by mobile apps that frequently embed the browser within the app. Case in point, I am viewing and responding to this silly article within the Safari browser that is embedded within my iPad’s Twitterific app. Hell, Twitter-based apps INCREASE my viewing of browser-based content by curating the web for me by the trusted folks I follow.

And, a bit later, this from David McGavock:

All of this assumes that the people who are participating in the read-write-create web will walk away and let apps dominate all their interactions. This dichotomy of apps vs. browser seems false to me in light of the fact that both have their strengths and weaknesses. This entire article assumes that the billions of people that are creating their own digital footprints will give it up for paid service. There is an explosion of legal sharing going on here. Are we all going to pack it up and go home because of the apps we use. I think not.

Then there’s a strange comment from “John_LeB” who apparently is aware of something I’m not:

It is true that some information remains free on the Web, but much research-based scholarship definitely does not. With on-line fee-based jobbers such as Taylor & Francis, Elsevier, Blackwell, Springer, etc., research that used to be freely distributed on the Web now carries a subscription fee. All well and good, perhaps; academic researchers are entitled to compensation for their scholarly production—but wait! Access fees rarely trickle down to their producing authors. Their reward lies in the “points” they can embed in their CVs for tenure or promotion. The jobbers are running free with the pecuniary revenue. One unfortunate spin-off is that access to research is foreclosed where it’s needed the most, in the developing world where the contemporary price of a journal article can represent a week’s worth of food. (Food for the stomach, that is.)

Ah, the good old days when research articles were always freely distributed on the web, back before those young upstarts like Elsevier grabbed it all… And that’s the complete comment. The writer’s probably as ignorant of open access as he is of the history of web access to research articles.

Mike Masnick does a pretty fair fisking of Hirschorn’s article in “Another Journalist Seduced By App Madness Predicts The End of the Web,” posted July 1, 2010 at techdirt. I won’t bother to excerpt his commentary: It’s free, and you can go read it yourself, unless you’re reading this on a smartphone that lacks any form of browser (a combination that seems somewhere between unlikely and impossible). Of course, if your only access to e-stuff is through such a smartphone or some truly locked down tablet, then you’re not reading this anyway, are you?

Oddly, in comments on Masnick’s piece, Hirschorn objects that his piece is “largely an attack on Apple’s efforts to curtail that freedom…”—which, if true, means that Hirschorn is an inarticulate writer, since I certainly didn’t read it that way. Even in this response, Hirschorn’s an Only One Future man: “Also clearly and obviously, the rise of mobile computing will result in less non-mobile-computing and the center of power will move from the browser to the smartphone/ipad experience.” Right. And neither smartphones nor tablets have browsers. Now, if Apple had a browser—oh, let’s give it some fanciful name like Safari—that would really change the inevitable future. But that’s as silly as it would be for Amazon to add a browser, say one with an even sillier name like Silk, to its entirely-walled-garden Kindle Fire.

If you do read Masnick’s piece, scroll through at least some of the comments. Hirschorn starts doing a complex “that’s not what I was intending/that’s not what I really wrote” dance that leads me more and more to believe that he really is inarticulate or incoherent. As you probably already know, I’m definitely not one of those who regards traditional journalism and media as irrelevant (as some commenters do)—but neither do I regard them as the whole of the landscape.

Why mention this now, almost two years later? Because we haven’t gone All Apps, All The Time. Because traditional real-world media continues to do better than a lot of digital junkies realize (for example, did’ja know that there are more than 300 million print magazine subscriptions in the US, and that 100 million people in the US still read print newspapers? hmm?). Because the world continues to evolve mostly in “and not or” ways, with more choices complementing one another rather than One Triumphant Paradigm shifting to Another Triumphant Paradigm, with no room for alternatives…and because this sort of “journalism” continues to be prevalent.

Prospectus: An Ongoing Public Library Social Network Scan

Wednesday, January 4th, 2012

I wrote about this six weeks ago in “Dreaming out loud: An ideal ongoing sponsored project.” This is a slightly more fleshed-out version, done after completing the 5,958-library survey and revisiting the first 2,406 libraries. (I’m well into revising the manuscript now, preparing secondary metrics as I go).

Briefly, I believe it would be worthwhile to do an annual nationwide survey of public library presence on social networks, looking at all U.S. public libraries–9,184 of them (based on IMLS figures as reported in Hennen’s American Public Library Ratings (HAPLR)).

The earlier post offers a longer version of what I’ve already done and some other notes; this post goes into a little more detail on possibilities. I would love to see this happen, but I don’t really have the contacts to bring it about. In late January and early February, I’ll be trying to see what’s possible (with your help, I hope). At some point, it may make sense to try the long shot (see “How to Pay” below).

The Baseline

As background for my 2012 ALA Editions book Successful Social Networking in Public Libraries, I looked at all of the public libraries in 25 states (distributed by population) in late summer 2011–and later added the libraries in 13 more states, for a total of 5,958 libraries in 38 states. For the first 25 states (and 2,406 libraries), I revisited four months after the first visit to look at changes in social networking.

The result is two spreadsheets, one of which (LSNALL) would be the baseline for the new project. (The other, LSN25, looks at the four-month changes. It wouldn’t be particularly relevant for the new project.)

LSNALL includes, for each library, the following–based on my own searching, as detailed in the earlier post:

  • Library name and Legal Service Area population as provided by the state library in its spreadsheet, noting that “Library name” is frequently something other than the name the library actually uses. (Only libraries that have an LSA are included, leaving out 7,000-odd branches but also cooperative agencies that aren’t double-counted.)
  • State abbreviation
  • Date on which I checked the library
  • “FB?”–a code indicating whether I found a working Facebook link to a library Facebook page on the website (w), in the first 100 Google results (g), or by searching Facebook itself (f), in that order–or, if none was found, whether I found a community or information Facebook page instead (i) or nothing at all (n).
  • If there is a Facebook page (or group, or non-page account), the number of Likes (or friends).
  • For the most recent and fifth most recent Facebook post from the library itself, a code indicating its currency bucket: d (the day I checked), e (week–within the past seven days including today), f (fortnight), m (month), q (quarter), s (six months), y (year) or z (more than a year). (“e” was chosen to make the buckets directly sortable.)
  • A one-letter code indicating whether I found some signs of interactivity within the “visible” posts (usually 20 to 30): “y” for a non-library comment or a non-spam post from someone other than the library; “l” if I found likes (by someone other than the library!) on posts but no comments; “s” if I found only spam comments (or only spam and likes); and blank if I found none of those.
  • A Twitter code, similar to Facebook except that there are no “i” cases and I use “t” instead of “f” if the Twitter account could only be found within Twitter itself.
  • Followers, following, and tweets, one number each.
  • The same most recent and fifth most recent bucket codes for tweets
  • An interactivity code, usually based on either non-library tweets, retweets, or tweets beginning “@”–I didn’t look as far for these, and don’t regard the results as very meaningful.
  • Comments if needed–sparsely. (E.g., “FB0” for a few cases where a library Facebook page is apparently the library’s actual page but has no updates, up through FB4 if there aren’t yet five updates, or “FB teen” or the like where there’s no general-purpose FB page but appear to be specialized pages.)
  • Added after the initial scan: “SN?”–a number from 0 to 2 indicating how many of the two possibilities the library had–and “H”–a number from 0 to 9 providing the HAPLR size category (0 being under 1,000, 9 being 500,000 and up), to ease sorting and, as it turns out, reporting.
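The currency buckets described above can be sketched as a simple lookup from a post’s age in days to its one-letter code. This is an illustrative reconstruction: the post names the buckets but not their exact day cutoffs, so the boundary values below are my assumptions.

```python
def currency_bucket(days_old: int) -> str:
    """Map a post's age in days to a one-letter currency bucket.
    Boundary values are illustrative assumptions."""
    if days_old == 0:
        return "d"   # the day checked
    if days_old <= 7:
        return "e"   # within the past week ("e" keeps buckets sortable)
    if days_old <= 14:
        return "f"   # fortnight
    if days_old <= 31:
        return "m"   # month
    if days_old <= 92:
        return "q"   # quarter
    if days_old <= 183:
        return "s"   # six months
    if days_old <= 366:
        return "y"   # year
    return "z"       # more than a year

print(currency_bucket(3))    # prints "e"
print(currency_bucket(400))  # prints "z"
```

Because the codes sort in bucket order, a plain spreadsheet sort on the column groups libraries from most to least recently active.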

A derivative spreadsheet, LSN38, leaves out all the rows with SN?=0 (libraries with no findable social network presence) and adds derivative columns for use in the book, such as “F%” (Likes divided by LSA), “T%” (same for Twitter followers), “T/F” (followers divided by likes), “Fr” (followings divided by followers) and “Fmx” and “Tmx”–two-character codes indicating frequency and reach buckets. There are also metrics spreadsheets and pages within these spreadsheets, of course, but the primary LSNALL spreadsheet is the true baseline.
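The LSN38 derivative columns reduce to simple ratios. A sketch, using invented numbers for a single library row (the column names come from the post; the sample figures are mine, and the Fmx/Tmx bucket codes are omitted since their exact definitions aren’t given here):

```python
# One hypothetical library row with made-up values.
row = {
    "LSA": 25_000,      # Legal Service Area population
    "likes": 1_250,     # Facebook Likes
    "followers": 500,   # Twitter followers
    "following": 100,   # accounts the library follows
}

f_pct = row["likes"] / row["LSA"]           # "F%": Likes / LSA
t_pct = row["followers"] / row["LSA"]       # "T%": followers / LSA
t_over_f = row["followers"] / row["likes"]  # "T/F": followers / Likes
fr = row["following"] / row["followers"]    # "Fr": followings / followers

print(f"F% = {f_pct:.3f}, T% = {t_pct:.3f}, "
      f"T/F = {t_over_f:.2f}, Fr = {fr:.2f}")
```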

Proposed One-Year Revision

With proper funding in place and possibly better ways to distribute the results, I’d see doing this between June and November 2012:

  • Start a new spreadsheet (linked to the old one for comparative metrics) to include the other 12 states and DC, based on the most recent IMLS dataset (which would require either acquiring Access or working with a partner, since the other 12 states don’t seem to have downloadable spreadsheets). [Note: I now find that I can open the IMLS dataset within Excel, although I would swear that this wouldn’t work when I tried it in August 2011.]
  • Update 1/25/12: The IMLS database includes actual library names, not always the same as those used in the state library spreadsheet columns I was using; I’d use those names.
  • Check each row in the spreadsheet to fill in columns as follows:
  1. Actual library name, as used for a Google search (with the state name added to the library name).
  2. Position of the library’s official website (if one is found) in the result.
  3. Facebook columns as at present, with these changes:
    a. The second “current post” bucket would be based on the 10th most recent post, but normalized to the same meanings (i.e., 2 days, 2 weeks, 2 fortnights, etc.)
    b. The interactivity column would be replaced with a number representing the number of non-library, non-spam comments and posts found within the first 10 library posts, from 0 to whatever. Post-level likes would be ignored.
  4. Twitter columns as at present, with the same “bucket” change as for Facebook and with the “Following” actual number replaced with a code indicating general approach of following (open to modification, but storing the actual number feels like overkill). Unsure whether to modify the interactivity column or simply drop it.
  5. Google+ columns along the same lines as Facebook columns, but with the number for “Added to circles” replacing Likes. (Subject to change.)
  6. Optional, if someone believes it’s worth doing and would pay extra for it: Blogging column, with a number for the number of blogs identified on the library’s homepage, and with a separate spreadsheet identifying those blogs. (This could lead to a five-year update of my Public Library Blogs study. It may be a lot more work than it’s worth. The Public Library Blogs book sold 31 copies, but that was with only my own publicity.)
  7. Optional, and I’m not sure any of these are worthwhile: Columns for MySpace presence, YouTube presence, Flickr presence indicated on homepage.

Later Years

Similar spreadsheet, linked to earlier sheets or pages for analysis, and adding significant new social networks that welcome institutional pages/accounts if such networks appear.


The deliverables would depend heavily on who’s paying for this and what they want. Possibilities:

  • Writeup of results including comparisons to 2011 and metrics similar to those planned for the forthcoming book, distributed as a free PDF. The writeup (and specific writeups) would include not only benchmarks by size of library and state, but also lists of libraries doing particularly well in various metrics relative to their size, to serve as examples for other libraries wanting to improve their social networking.
  • More specific writeups for individual states or for specific library sizes.
  • Possibly the spreadsheet itself for further analysis.

How to Pay

I believe the results would be valuable, since I believe most public libraries can benefit from a social networking presence and it’s clear that most of them are not reaching nearly as many people as they’d probably like to. A variety of benchmarks and examples should help. (My book should help too, combining benchmarks, examples, discussion, advice…)

But it’s way too much work to do for free or on spec. My experiments in self-publishing have taught me that, and have taught me that I can’t really do it based on the hope of selling the results on my own.

I figure $15,000 a year for the whole process, including deliverables (but not including #6 and #7 above). Adding #6 would push that to $20,000; adding #7 is unclear.

That is in some ways small potatoes for most funding agencies, but it would meet my needs.

Possible avenues:

  1. An agency could sponsor this–either a foundation or an agency that already does library research, or, for that matter, an agency that simply finds it worthwhile. I’d be delighted to work with almost any such agency. The one real exception is one where I can’t imagine they’d want to work with me. (As noted in the earlier post, I’d be delighted to work with OCLC or WebJunction on this, or the Gates Foundation, a library school, a vendor, whoever. I suspect my lack of institutional affiliation is a problem for most funding sources, but I’d love to be proved wrong. Unfortunately, independent research is not highly regarded in this field, as in most other fields.)
  2. A group of state libraries could sponsor it, in which case I’d narrow the research to cover only those states and charge a different fee, something along the lines of $500 + $n per state, where $n is the number of reporting libraries in the state times a constant, probably $1 to $2.
  3. I could find some way to be assured that sales of the report–which I’d prefer to be free–would come close to generating $15,000 in revenue. I don’t see too many pigs flying overhead, and I don’t regard that one as probable.
  4. I could prepare a Kickstarter project, video and all. Would it be accepted by the curators? No idea. Would it stand a chance? Stranger things have happened…
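The per-state pricing in option 2 can be sketched as a simple formula. In this sketch the $1.50 per-library rate is an invented midpoint of the stated $1-to-$2 range, not a quoted price:

```python
# Sketch of the per-state fee formula from option 2: a flat $500 per state
# plus $1-$2 per reporting library. The $1.50 rate below is an invented
# midpoint of that range, used only for illustration.
def state_fee(reporting_libraries, per_library=1.50):
    return 500 + reporting_libraries * per_library

print(state_fee(200))         # 800.0 at the $1.50 midpoint
print(state_fee(200, 2.00))   # 900.0 at the $2 ceiling
```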

I need help on this.

I’m scheduling this post to appear on January 4, just after most people get back to work. I won’t do anything active beyond this post until the draft of the social networking book is done (around the end of January if all goes well); then, I’d make whatever contacts I can–but I don’t have the contacts I’d like to have. I’d need to have funding lined up by June in order to plot out the survey process, and by May if I were going to attend this year’s ALA Annual Conference. If I can’t work something out by June, I’ll probably turn my attention to other book or writing possibilities and abandon this.

If you think you could help find a home for this, let me know, either in a comment or by email to waltcrawford at gmail dot com.

If you believe the project is worthless, there’s no real need to comment. [If that reaction is based on an assertion that all the public libraries already are on Facebook, you’re simply wrong, by several thousand libraries, but never mind…]

Updated 1/7/12: This blog is now receiving so many spam comments–over 130 on some days, even after tightening the comment period–that a response via comment is likely to disappear. So if you can help or are interested, please send me email, waltcrawford at gmail dot com.

This post describes a possible offshoot of the public library social network scan, and a likely thank-you premium if I try to go the Kickstarter route and succeed.

Thinking about magazines and journals

Tuesday, April 12th, 2011

Years ago, writing one of my “OA independent” articles on Library Access to Scholarship in Cites & Insights, I commented that it was highly unlikely we’d ever get to 100% e-publishing for STM articles. Specifically, I said it was unlikely that Nature or Science would go away, since both have substantial non-library subscription bases.

I hadn’t actually read an issue of either one, at least not for decades…

As I noted (probably just on FF), a few weeks ago I received an invitation to join the American Association for the Advancement of Science for $99/year (a first-year discount from the usual $149/year), which includes 51 issues of Science. The invitation included a copy of the February 18, 2011 Science.

The last thing I need at this point is more magazines, particularly given my idiot tendency to actually read everything I subscribe to from cover to cover, or at least start each article. I discarded the invitation…but kept the issue.

And read it. Not the whole thing, to be sure, but nearly all of what I’d call the “magazine portion”–in this case, pages 811-875, as opposed to pages 876-931, the “journal portion” (followed by ads and advertorials).

My initial conclusion: If AAAS were really committed to the advancement of science, they would and could go to gold OA, turning the print version of Science into a weekly or fortnightly magazine (about half as thick as it currently is) and publishing all the full peer-reviewed research articles and reports online with full and immediate access. Oh, and charging a much more reasonable fee for an institutional subscription to the magazine than the current $990–like, say, $149, or perhaps twice that including immediate online access to all the features that make up the magazine portion and the ScienceNow daily news, ScienceInsider, and so on…

Why? Well, it’s a really good science magazine. It has lots of ads. It includes lots of well-edited, well-written material. I suspect it would continue to thrive as a magazine. As a journal, however, it makes more sense online, both because it covers too much territory to make sense as a browsing resource for any given scientist and because much of it’s online-only anyway.

Maybe it would need article processing fees, although it’s hard to believe they’d need to be four-digit fees. In any case, going full gold OA for the peer-reviewed material would certainly be a huge step forward in the advancement of science. And it’s always been part of the position of serious OA advocates (e.g., Peter Suber) that it’s legitimate to charge for added value, such as popularized versions, discussions, news, etc.–all the stuff that makes up Science‘s magazine section.

This has probably all been said before, but I really was struck by how much the issue came off as a very good science magazine with a bunch of very specialized peer-reviewed items in back of the magazine. And how likely it is that the magazine would survive and probably prosper without charging high fees (or any fees) for online access to the peer-reviewed items.

Comparing potatoes and truffles

Monday, December 13th, 2010

Remember Wired Magazine‘s absurd “The Web is dead” cover article (September 2010)?

I can’t think of anything that was right about the article. One of the things that was most wrong was the big graph that showed how the web was dying–by plotting all internet traffic, in bytes, on a market-share graph (that is, one where the Y axis is always filled, since it goes up to 100% and the segments show percentage of each area over time).

One problem is that this kind of graph is almost always misleading or meaningless when an overall space is either growing or shrinking, since it represents percentages, not absolutes. If Amazon goes from selling 90% of ebooks when ebook sales are $1 million per year to selling 30% of ebooks when ebook sales are $1 billion per year, I can assure you nobody at Amazon is saying “Damn. We’ve died in the ebook space.” But that’s what a market-share graph would show: A dramatic, awful, terrible decline in Amazon ebook sales.
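The Amazon hypothetical works out as follows; a quick sketch of the arithmetic (all figures are the invented ones from the example above):

```python
# Market share can plummet while absolute sales soar: the Amazon ebook
# hypothetical from the text, worked out. All figures are hypothetical.
year1_market, year1_share = 1_000_000, 0.90       # $1M ebook market, 90% share
year2_market, year2_share = 1_000_000_000, 0.30   # $1B ebook market, 30% share

year1_sales = year1_market * year1_share   # $900,000
year2_sales = year2_market * year2_share   # $300,000,000

# A market-share chart shows a fall from 90% to 30%; actual sales grew ~333x.
print(year2_sales / year1_sales)
```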

The other is even more absurd, and is where I get “potatoes and truffles.” Well, you know, they’re both edibles that come from the ground, so clearly truffles are dead, since the weight of potatoes sold each year must surpass the weight of truffles by several orders of magnitude. Actually, they’re both tubers, so what’s the difference? (“Several orders of magnitude”: I can’t readily find the current total production/sale of truffles, but it apparently peaked at “several hundred tonnes” early in the last century, so I’d guess it’s no more than, say, 314 tonnes now. Which is a deliberate choice because 2008 worldwide production of potatoes was 314 million tonnes. So figure at least a million times as many potatoes, by weight. And there’s even the time element, since truffle production has dropped enormously while potato production continues to rise.)

The other fallacy? Choosing one measurement and assuming that it’s meaningful in other contexts. In this case, choosing data volume (bits or bytes) and assuming it relates somehow to “where people spend their time.”

I choose that quotation because here’s how Wired responded to the criticisms of their chartjunk in this case:

While not perfect, traffic volume is a decent proxy for where people spend their time.


Last Saturday, we had a friend over and spent a wonderful two hours and 31 minutes watching the glorious Blu-ray version of The Music Man. I felt as though I’d never really seen the picture before. It was great. It was also 2.5 hours.

I’m guessing The Music Man probably took up around 40GB (a dual-layer Blu-ray Disc has 50GB capacity).

Today, I’ll start reading a mystery novel that I’m certain is going to be enormously entertaining as well. At 250 pages, the text in it would probably occupy about–well, let’s call it 80,000 bytes, although that’s probably on the high side.

By Wired‘s “reasoning,” it’s a fair approximation to say that I should spend around 0.018 seconds reading that book, since it has only one five-hundred-thousandth as much data as The Music Man–and “traffic volume is a decent proxy for where people spend their time.”

In the real world, I’ll probably spend three or four hours reading the novel, maybe a little longer.
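The 0.018-second figure follows directly from the ratio of data volumes; a quick sketch of the arithmetic (all figures are the rough estimates from the text):

```python
# Applying Wired's "traffic volume is a decent proxy for time spent" to the
# numbers above. The 40 GB and 80,000-byte figures are rough estimates.
movie_bytes = 40e9                    # The Music Man on Blu-ray, roughly
movie_seconds = 2 * 3600 + 31 * 60    # running time: 2 hours 31 minutes

book_bytes = 80_000                   # the novel's text, per the estimate above

# If reading time scaled with data volume, the book "should" take:
proxy_seconds = movie_seconds * book_bytes / movie_bytes
print(round(proxy_seconds, 3))        # ~0.018 seconds

# Actual reading time: call it three and a half hours.
actual_seconds = 3.5 * 3600
print(round(actual_seconds / proxy_seconds))  # off by a factor of ~700,000
```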

An extreme case?

OK, so a Blu-ray Disc is an extreme case. Internet traffic almost never includes 30 Mb/s streams, which is roughly BD level. But it does include loads of video, probably at traffic rates between 250 kb/s and 6 Mb/s, and audio, at traffic rates of at least 64 kb/s for anything with halfway decent sound (“halfway decent” is the operative term here).

So if I watch a one-minute YouTube clip, it’s likely that the traffic amounts to at least 1.9 megabytes (at the lowest data rate supported by YouTube) and more likely at least twice that much.

How much time would it take me to read 1.9 megabytes worth of text, even with HTML/XML overhead? Without overhead, that’s about 300,000 words, or the equivalent of three long books. With PDF overhead (which, for embedded typefaces, is considerably more than HTML overhead), that’s four typical issues of Cites & Insights–but for the text itself (with Word .docx overhead), it’s at least a year of C&I. I pretty much guarantee that anybody who reads C&I at all spends more than a minute doing so, even though the data traffic only amounts to a few seconds’ worth of YouTube.

Equating “traffic” for text, or even still photos, with “traffic” for sound or video, as being in any way meaningful in terms of time spent is just nonsense. Wired says “We stand by the chart.” That says a lot about Wired–and almost nothing about the present or future of the web.

What year did downloaded music start outselling CDs/vinyl?

Thursday, July 1st, 2010

Very short post, with the heart of it in the title above, so as to encourage FriendFeed participation.

This is a test of your digital awareness. Without looking it up, try to answer that one question:

In what year did downloaded music (iTunes, etc.) start outselling music-on-containers (CDs, vinyl, etc.), worldwide?

Comments will remain open until Saturday, July 3, after which I’ll comment on the responses.

The apparent answer: According to the sources I’ve seen, the answer is “2011, probably.”

Which is also to say: It hasn’t happened yet.

So Davin’s right, Stephen’s close enough, and John’s…not so much.

Random Sunday music musing

Sunday, June 13th, 2010

When I got my dandy little 8GB Sansa Fuze “MP3 player” (because it was on sale for $69 at Radio Shack, which it is again this week), I loaded it with 863 songs from my collection that I think are better than average–ones I give 3, 4 or 5 stars. (The whole collection, excluding classical, is around 3,000 songs–mostly fairly old CDs, but I’ve added a handful of used CDs purchased recently…)

And I’m going through it for a first pass, really listening to songs, usually about 10 a day.

Today I hit a song that was great–a recent addition, so I’d probably only heard it once before–but that also reminded me I’ve lost most of the specific vocabulary for music I might once have had.

The song: “Hard to Love” by Vance Gilbert (from one thru fourteen, released 2002 on Louisiana Red Hot Records).

It’s a blues of a particular style–with verses minimally accompanied (Hammond B3, electric lead & rhythm guitars, acoustic bass, drums–the bass descending one note per bar, minimal riffs from the rest), and then a solid horn section cutting in on the chorus. I mean a tasty horn section. (I’d actually been thinking, you know, I need a few more songs with really tasty horn sections.)

Yes, I can recognize a Hammond B3 almost instantly…or one heck of a good synthesizer simulation. Can’t you? Some day, all the Hammond B3s will be gone and irreparable; that will be a sad day for blues/jazz/whatever. Also yes, I’m one of those who thinks Al Kooper’s contributions to American music have been undervalued…

And I realized that I didn’t know whether this was Tower of Power-style horns, Memphis Horns style, or something else entirely. It only matters in that it’s hard for me to describe this number adequately.

[Checking the liner notes/booklet, one of those things that come with CDs, I find something really interesting, given that the horns seemed to have a pretty natural acoustic and stereo spread: The horns are the “Joe Mennona horns,” which appears to consist of Joe Mennona overdubbing all the horn parts–tenor sax, alto sax, baritone sax, trombone and trumpet.]

No real significance here. You can enjoy music without being able to describe it properly.


Monday, June 7th, 2010

The June San Francisco Chronicle Magazine (the Chron only does its own glossy-magazine section once a month, a very sensible decision–the weekly book section and review/entertainment section are separate anyway) leads off with an editor’s column with the same title as this post.

It’s not all that long (465 words–shorter than this 558-word post); you can read the whole thing yourself, and look at the amusing picture. The theme: Meredith May (the writer) has been

getting into polite arguments with friends who have been posting pictures of me on Facebook and Flickr that I would never want you to see.

They’re not nude shots or anything like that–but they were “taken in private moments with friends before the world was wide and covered in a Web.” May doesn’t think it’s up to other people–even her friends–to decide which parts of her own history should be made public.

She notes a specific incident–she’s going to the Pulitzer Center on Crisis Reporting to talk about her story on girl slavery in Nepal and, checking Facebook in the airport, finds that an old friend has posted pictures of her drinking and posing at high school house parties…

May doesn’t quite understand people’s impulse to overshare their own stuff–“but over-sharing someone other than yourself without his or her permission is baffling.” And, indeed, since we learn that any candid shot is likely to turn up on the web, spontaneity could be suffering.

I have had parties at my house with a dozen of my lovely artist friends, and nine will bring a camera and start shooting. The whole reason for having your homies over for a party is that you can let down your hair and dance on the counter if you want to. But I’m more cautious now. The joie de vivre, the carpe diem, the being alive part of living – is tempered.

In our haste to document and share everything, are we losing what it means to live in the moment?

I can’t speak for anyone else, but this editorial certainly resonates with me. I’ll take it a step further: “Agreeing” that a picture can be posted isn’t always being entirely happy about it. Coercion is a strong word for the process that takes place, but it’s a form of social pressure–the desire not to be thought a complete killjoy.

There are pictures of me on the web (oddly enough, they show up in Google but not on Bing) that I could do without. One of them has a caption about what a good sport I was. “Good sport” in this case really means “didn’t feel he could avoid this without looking like a killjoy.”

I know that my own behavior at, say, conference receptions is now much more circumspect than it might have been in the past, that I’m much less willing to don silly hats or assume silly poses or hold up silly signs. A few years ago, I would have assumed that a few folks would have gotten little laughs out of the silliness as captured in photos. Now, I assume that the silly pictures will live forever on the web and in search-engine results–and while they can’t really do me any harm, I’d just as soon not, thank you.

So does this make me a killjoy? Maybe so. Such is life. Apparently I’m not the only one…