Archive for the ‘Media’ Category

Prospectus: An Ongoing Public Library Social Network Scan

Wednesday, January 4th, 2012

I wrote about this six weeks ago in “Dreaming out loud: An ideal ongoing sponsored project.” This is a slightly more fleshed-out version, done after completing the 5,958-library survey and revisiting the first 2,406 libraries. (I’m well into revising the manuscript now, preparing secondary metrics as I go).

Briefly, I believe it would be worthwhile to do an annual nationwide survey of public library presence on social networks, looking at all U.S. public libraries–9,184 of them (based on IMLS figures as reported in Hennen’s American Public Library Ratings (HAPLR)).

The earlier post offers a longer version of what I’ve already done and some other notes; this post goes into a little more detail on possibilities. I would love to see this happen, but I don’t really have the contacts to bring it about. In late January and early February, I’ll be trying to see what’s possible (with your help, I hope). At some point, it may make sense to try the long shot (see “How to Pay” below).

The Baseline

As background for my 2012 ALA Editions book Successful Social Networking in Public Libraries, I looked at all of the public libraries in 25 states (distributed by population) in late summer 2011–and later added the libraries in 13 more states, for a total of 5,958 libraries in 38 states. For the first 25 states (and 2,406 libraries), I revisited each library four months after the first visit to look at changes in social networking.

The result is two spreadsheets, one of which (LSNALL) would be the baseline for the new project. (The other, LSN25, looks at the four-month changes. It wouldn’t be particularly relevant for the new project.)

LSNALL includes, for each library, the following–based on my own searching, as detailed in the earlier post:

  • Library name and Legal Service Area population as provided by the state library in its spreadsheet, noting that “Library name” is frequently something other than the name the library actually uses. (Only libraries that have an LSA are included, leaving out 7,000-odd branches but also cooperative agencies that aren’t double-counted.)
  • State abbreviation
  • Date on which I checked the library
  • “FB?”–a code indicating whether I found a working Facebook link to a library Facebook page on the website (w), in the first 100 Google results (g), or by searching Facebook itself (f), in that order–or, if none was found, whether I found a community or information Facebook page instead (i) or nothing at all (n).
  • If there is a Facebook page (or group, or non-page account), the number of Likes (or friends).
  • For the most recent and fifth most recent Facebook post from the library itself, a code indicating its currency bucket: d (the day I checked), e (week–within the past seven days including today), f (fortnight), m (month), q (quarter), s (six months), y (year) or z (more than a year). (“e” was chosen to make the buckets directly sortable.)
  • A one-letter code indicating whether I found some signs of interactivity within the “visible” posts (usually 20 to 30): “y” for a non-library comment or a non-spam post from someone other than the library; “l” if I found likes (by someone other than the library!) on posts but no comments; “s” if I found only spam comments (or only spam and likes); and blank if I found none of those.
  • A Twitter code, similar to Facebook except that there are no “i” cases and I use “t” instead of “f” if the Twitter account could only be found within Twitter itself.
  • Followers, following, and tweets, one number each.
  • The same most recent and fifth most recent bucket codes for tweets
  • An interactivity code, usually based on either non-library tweets, retweets, or tweets beginning “@”–I didn’t look as far for these, and don’t regard the results as very meaningful.
  • Comments if needed–sparsely. (E.g., “FB0” for a few cases where a library Facebook page is apparently the library’s actual page but has no updates, up through FB4 if there aren’t yet five updates, or “FB teen” or the like where there’s no general-purpose FB page but appear to be specialized pages.)
  • Added after the initial scan: “SN?”–a number from 0 to 2 indicating how many of the two possibilities the library had–and “H”–a number from 0 to 9 providing the HAPLR size category (0 being under 1,000, 9 being 500,000 and up), to ease sorting and, as it turns out, reporting.
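The one-letter currency buckets above can be expressed as a simple classification function. This is a sketch of my reading of the scheme; the exact day cutoffs are assumptions:

```python
from datetime import date


def currency_bucket(post_date: date, checked: date) -> str:
    """Classify a post's age into the one-letter currency buckets:
    d=day checked, e=week, f=fortnight, m=month, q=quarter,
    s=six months, y=year, z=more than a year.
    ('e' rather than 'w' keeps the codes directly sortable.)"""
    age = (checked - post_date).days
    if age <= 0:
        return "d"   # posted the day I checked
    if age < 7:
        return "e"   # within the past week
    if age < 14:
        return "f"   # within a fortnight
    if age < 31:
        return "m"   # within a month
    if age < 92:
        return "q"   # within a quarter
    if age < 183:
        return "s"   # within six months
    if age < 366:
        return "y"   # within a year
    return "z"       # more than a year
```

The same function would be applied to both the most recent and the fifth most recent post.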

A derivative spreadsheet, LSN38, leaves out all the rows with SN?=0 (libraries with no findable social network presence) and adds derivative columns for use in the book, such as “F%” (Likes divided by LSA), “T%” (same for Twitter followers), “T/F” (followers divided by likes), “Fr” (followings divided by followers) and “Fmx” and “Tmx”–two-character codes indicating frequency and reach buckets. There are also metrics spreadsheets and pages within these spreadsheets, of course, but the primary LSNALL spreadsheet is the true baseline.
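The derivative columns amount to simple ratios. A sketch (the function and dictionary keys are mine, with guards for the zero-denominator rows a real spreadsheet hits):

```python
def derived_metrics(likes, followers, following, lsa):
    """Compute the LSN38-style derivative columns:
    F% (Likes / LSA), T% (followers / LSA),
    T/F (followers / Likes), Fr (following / followers).
    Returns None where the denominator is zero."""
    div = lambda a, b: a / b if b else None
    return {
        "F%": div(likes, lsa),
        "T%": div(followers, lsa),
        "T/F": div(followers, likes),
        "Fr": div(following, followers),
    }


# e.g. a library with 500 Likes, 200 followers, following 150, LSA 10,000
metrics = derived_metrics(500, 200, 150, 10000)
```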

Proposed One-Year Revision

With proper funding in place and possibly better ways to distribute the results, I’d see doing this between June and November 2012:

  • Start a new spreadsheet (linked to the old one for comparative metrics) to include the other 12 states and DC, based on the most recent IMLS dataset (which would require either acquiring Access or working with a partner, since the other 12 states don’t seem to have downloadable spreadsheets). [Note: I now find that I can open the IMLS dataset within Excel, although I would swear that this wouldn’t work when I tried it in August 2011.]
  • Update 1/25/12: The IMLS database includes actual library names, not always the same as those used in the state library spreadsheet columns I was using; I’d use those names.
  • Check each row in the spreadsheet to fill in columns as follows:
  1. Actual library name, as used for a Google search (with the state name added to the library name).
  2. Position of the library’s official website (if one is found) in the results.
  3. Facebook columns as at present, with these changes:
    a. The second “current post” bucket would be based on the 10th most recent post, but normalized to the same meanings (i.e., 2 days, 2 weeks, 2 fortnights, etc.)
    b. The interactivity column would be replaced with a number representing the number of non-library, non-spam comments and posts found within the first 10 library posts, from 0 to whatever. Post-level likes would be ignored.
  4. Twitter columns as at present, with the same “bucket” change as for Facebook and with the “Following” actual number replaced with a code indicating general approach of following (open to modification, but storing the actual number feels like overkill). Unsure whether to modify the interactivity column or simply drop it.
  5. Google+ columns along the same lines as Facebook columns, but with the number for “Added to circles” replacing Likes. (Subject to change.)
  6. Optional, if someone believes it’s worth doing and would pay extra for it: Blogging column, with a number for the number of blogs identified on the library’s homepage, and with a separate spreadsheet identifying those blogs. (This could lead to a five-year update of my Public Library Blogs study. It may be a lot more work than it’s worth. The Public Library Blogs book sold 31 copies, but that was with only my own publicity.)
  7. Optional, and I’m not sure any of these are worthwhile: Columns for MySpace presence, YouTube presence, Flickr presence indicated on homepage.
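The revised interactivity measure in 3b is just a count within the first 10 library posts. A sketch (the post/comment record structure here is hypothetical; post-level likes are ignored, as proposed):

```python
def interactivity_count(posts, library_name):
    """Count non-library, non-spam comments found within the 10 most
    recent library posts (newest first), per proposed change 3b.
    Each post is a dict with a "comments" list; each comment has an
    "author" and an optional "spam" flag -- a made-up structure for
    illustration only."""
    count = 0
    for post in posts[:10]:
        for comment in post.get("comments", []):
            if comment["author"] != library_name and not comment.get("spam"):
                count += 1
    return count
```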

Later Years

Similar spreadsheet, linked to earlier sheets or pages for analysis, and adding significant new social networks that welcome institutional pages/accounts if such networks appear.


Deliverables

The deliverables would depend heavily on who’s paying for this and what they want. Possibilities:

  • Writeup of results including comparisons to 2011 and metrics similar to those planned for the forthcoming book, distributed as a free PDF. The writeup (and specific writeups) would include not only benchmarks by size of library and state, but also lists of libraries doing particularly well in various metrics relative to their size, to serve as examples for other libraries wanting to improve their social networking.
  • More specific writeups for individual states or for specific library sizes.
  • Possibly the spreadsheet itself for further analysis.

How to Pay

I believe the results would be valuable, since I believe most public libraries can benefit from a social networking presence and it’s clear that most of them are not reaching nearly as many people as they’d probably like to. A variety of benchmarks and examples should help. (My book should help too, combining benchmarks, examples, discussion, advice…)

But it’s way too much work to do for free or on spec. My experiments in self-publishing have taught me that, and have taught me that I can’t really do it based on the hope of selling the results on my own.

I figure $15,000 a year for the whole process, including deliverables (but not including #6 and #7 above). Adding #6 would push that to $20,000; adding #7 is unclear.

That is in some ways small potatoes for most funding agencies, but it would meet my needs.

Possible avenues:

  1. An agency could sponsor this–either a foundation or an agency that already does library research, or, for that matter, an agency that finds it worthwhile. I’d be delighted to work with almost any such agency. The one real exception is one where I can’t imagine they’d want to work with me. (As noted in the earlier post, I’d be delighted to work with OCLC or WebJunction on this, or the Gates Foundation, a library school, a vendor, whoever. I suspect my lack of institutional affiliation is a problem for most funding sources, but I’d love to be proved wrong. Unfortunately, independent research is not highly regarded in this field, as with most other fields.)
  2. A group of state libraries could sponsor it, in which case I’d narrow the research to cover only those states and charge a different fee, something along the lines of $500 + $n per state, where $n is the number of reporting libraries in the state times a constant, probably $1 to $2.
  3. I could find some way to be assured that sales of the report–which I’d prefer to be free–would come close to generating $15,000 in revenue. I don’t see too many pigs flying overhead, and I don’t regard that one as probable.
  4. I could prepare a Kickstarter project, video and all. Would it be accepted by the curators? No idea. Would it stand a chance? Stranger things have happened…
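The state-sponsorship fee formula in option 2 works out as follows (the $1.50 constant below is just the midpoint of the $1–$2 range):

```python
def state_fee(reporting_libraries: int, per_library: float = 1.50) -> float:
    """Fee for one state under option 2: a $500 base plus a constant
    (assumed $1-$2) per reporting library in the state."""
    return 500 + reporting_libraries * per_library


# e.g. a state with 400 reporting libraries at $1.50 each: $1,100
fee = state_fee(400)
```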

I need help on this.

I’m scheduling this post to appear on January 4, just after most people get back to work. I won’t do anything active beyond this post until the draft of the social networking book is done (around the end of January if all goes well); then, I’d make whatever contacts I can–but I don’t have the contacts I’d like to have. I’d need to have funding lined up by June in order to plot out the survey process, and by May if I was going to attend this year’s ALA Annual Conference. If I can’t work something out by June, I’ll probably turn my attention to other book or writing possibilities and abandon this.

If you think you could help find a home for this, let me know, either in a comment or by email to waltcrawford at gmail dot com.

If you believe the project is worthless, there’s no real need to comment. [If that reaction is based on an assertion that all the public libraries already are on Facebook, you’re simply wrong, by several thousand libraries, but never mind…]

Updated 1/7/12: This blog is now receiving so many spam comments–over 130 on some days, even after tightening the comment period–that a response via comment is likely to disappear. So if you can help or are interested, please send me email, waltcrawford at gmail dot com.

This post describes a possible offshoot of the public library social network scan, and a likely thank-you premium if I try to go the Kickstarter route and succeed.

Thinking about magazines and journals

Tuesday, April 12th, 2011

Years ago, writing one of my “OA independent” articles on Library Access to Scholarship in Cites & Insights, I commented that it was highly unlikely we’d ever get to 100% e-publishing for STM articles. Specifically, I said it was unlikely that Nature or Science would go away, since both have substantial non-library subscription bases.

I hadn’t actually read an issue of either one, at least not for decades…

As I noted (probably just on FF), a few weeks ago I received an invitation to join the American Association for the Advancement of Science for $99/year (a first-year discount from the usual $149/year), which includes 51 issues of Science. The invitation included a copy of the February 18, 2011 Science.

The last thing I need at this point is more magazines, particularly given my idiot tendency to actually read everything I subscribe to from cover to cover, or at least start each article. I discarded the invitation…but kept the issue.

And read it. Not the whole thing, to be sure, but nearly all of what I’d call the “magazine portion”–in this case, pages 811-875, as opposed to pages 876-931, the “journal portion” (followed by ads and advertorials).

My initial conclusion: If AAAS was really committed to the advancement of science, they would and could go to gold OA, turning the print version of Science into a weekly or fortnightly magazine (about half as thick as it currently is) and publishing all the full peer-reviewed research articles and reports online with full and immediate access. Oh, and charging a much more reasonable fee for an institutional subscription to the magazine than the current $990–like, say, $149, or perhaps twice that including immediate online access to all the features that make up the magazine portion and the ScienceNow daily news, ScienceInsider, and so on…

Why? Well, it’s a really good science magazine. It has lots of ads. It includes lots of well-edited, well-written material. I suspect it would continue to thrive as a magazine. As a journal, however, it makes more sense online, both because it covers too much territory to make sense as a browsing resource for any given scientist and because much of it’s online-only anyway.

Maybe it would need article processing fees, although it’s hard to believe they’d need to be four-digit fees. In any case, going full gold OA for the peer-reviewed material would certainly be a huge step forward in the advancement of science. And it’s always been part of the position of serious OA advocates (e.g., Peter Suber) that it’s legitimate to charge for added value, such as popularized versions, discussions, news, etc.–all the stuff that makes up Science’s magazine section.

This has probably all been said before, but I really was struck by how much the issue came off as a very good science magazine with a bunch of very specialized peer-reviewed items in back of the magazine. And how likely it is that the magazine would survive and probably prosper without charging high fees (or any fees) for online access to the peer-reviewed items.

Comparing potatoes and truffles

Monday, December 13th, 2010

Remember Wired Magazine‘s absurd “The Web is dead” cover article (September 2010)?

I can’t think of anything that was right about the article. One of the things that was most wrong was the big graph that showed how the web was dying–by plotting all internet traffic, in bytes, on a market-share graph (that is, one where the Y axis is always filled, since it goes up to 100% and the segments show percentage of each area over time).

One thing that was wrong with it is that this kind of graph is almost always misleading or meaningless when an overall space is either growing or shrinking, since it represents percentages, not absolutes. If Amazon goes from selling 90% of ebooks when ebook sales are $1 million per year to selling 30% of ebooks when ebook sales are $1 billion per year, I can assure you nobody at Amazon is saying “Damn. We’ve died in the ebook space.” But that’s what a market-share graph would show: A dramatic, awful, terrible decline in Amazon ebook sales.
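The Amazon hypothetical in numbers (a trivial sketch, but it makes the share-versus-absolute point concrete):

```python
def absolute_revenue(share_percent: int, market_total: int) -> int:
    """Absolute revenue implied by a market share -- the figure a
    market-share graph hides entirely."""
    return market_total * share_percent // 100


early = absolute_revenue(90, 1_000_000)      # 90% of a $1M market
later = absolute_revenue(30, 1_000_000_000)  # 30% of a $1B market
# Share fell from 90% to 30%; absolute revenue grew from $0.9M to $300M.
```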

The other is even more absurd, and is where I get “potatoes and truffles.” Well, you know, they’re both edibles that come from the ground, so clearly truffles are dead, since the weight of potatoes sold each year must surpass the weight of truffles by several orders of magnitude. Actually, they’re both tubers, so what’s the difference? (“Several orders of magnitude”: I can’t readily find the current total production/sale of truffles, but it apparently peaked at “several hundred tonnes” early in the last century, so I’d guess it’s no more than, say, 314 tonnes now. Which is a deliberate choice, because 2008 worldwide production of potatoes was 314 million tonnes. So figure at least a million times as many potatoes, by weight. And there’s even the time element, since truffle production has dropped enormously while potato production continues to rise.)

The other fallacy? Choosing one measurement and assuming that it’s meaningful in other contexts. In this case, choosing data volume (bits or bytes) and assuming it relates somehow to “where people spend their time.”

I choose that quotation because here’s how Wired responded to the criticisms of their chartjunk in this case:

While not perfect, traffic volume is a decent proxy for where people spend their time.


Last Saturday, we had a friend over and spent a wonderful two hours and 31 minutes watching the glorious Blu-ray version of The Music Man. I felt as though I’d never really seen the picture before. It was great. It was also 2.5 hours.

I’m guessing The Music Man probably took up around 40GB (a dual-layer Blu-ray Disc has 50GB capacity).

Today, I’ll start reading a mystery novel that I’m certain is going to be enormously entertaining as well. At 250 pages, the text in it would probably occupy about–well, let’s call it 80,000 bytes, although that’s probably on the high side.

By Wired‘s “reasoning,” it’s a fair approximation to say that I should spend around 0.018 seconds reading that book, since it has only one five-hundred-thousandth as much data as The Music Man–and “traffic volume is a decent proxy for where people spend their time.”

In the real world, I’ll probably spend three or four hours reading the novel, maybe a little longer.
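Wired’s byte-for-byte “reasoning” can be run as plain arithmetic, using the sizes estimated above:

```python
# Sizes and running time as estimated in the post.
movie_bytes = 40e9          # ~40 GB for the Blu-ray of The Music Man
movie_seconds = 2.5 * 3600  # 2.5-hour running time
book_bytes = 80_000         # ~80 KB of novel text

# If traffic volume really were a proxy for time spent...
seconds_per_byte = movie_seconds / movie_bytes
implied_reading_time = book_bytes * seconds_per_byte
# ...the novel should take about 0.018 seconds to read.
```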

An extreme case?

OK, so a Blu-ray Disc is an extreme case. Internet traffic almost never includes 30 Mb/s streams, which is roughly BD level. But it does include loads of video, probably at traffic rates between 250 kb/s and 6 Mb/s, and audio, at traffic rates of at least 64 kb/s for anything with halfway decent sound (“halfway decent” is the operative term here).

So if I watch a one-minute YouTube clip, it’s likely that the traffic amounts to at least 1.9 megabytes (at the lowest data rate supported by YouTube) and more likely at least twice that much.

How much time would it take me to read 1.9 megabytes worth of text, even with HTML/XML overhead? Without overhead, that’s about 300,000 words, or the equivalent of three long books. With PDF overhead (which, for embedded typefaces, is considerably more than HTML overhead), that’s four typical issues of Cites & Insights–but for the text itself (with Word .docx overhead), it’s at least a year of C&I. I pretty much guarantee that anybody who reads C&I at all spends more than a minute doing so, even though the data traffic only amounts to a few seconds’ worth of YouTube.
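The one-minute-clip arithmetic, with roughly 6 bytes per English word (including the trailing space) as my assumption:

```python
# One minute of video at ~250 kb/s, YouTube's low end at the time.
clip_bytes = 250_000 / 8 * 60   # bits/s -> bytes/s, over 60 seconds
BYTES_PER_WORD = 6              # rough average for English plus a space

words_in_same_traffic = clip_bytes / BYTES_PER_WORD
# ~1.9 MB of traffic carries roughly 300,000 words of plain text --
# about three long books' worth, for one minute of video.
```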

Equating “traffic” for text, or even still photos, with “traffic” for sound or video, as being in any way meaningful in terms of time spent is just nonsense. Wired says “We stand by the chart.” That says a lot about Wired–and almost nothing about the present or future of the web.

What year did downloaded music start outselling CDs/vinyl?

Thursday, July 1st, 2010

Very short post, with the heart of it in the title above, so as to encourage FriendFeed participation.

This is a test of your digital awareness. Without looking it up, try to answer that one question:

In what year did downloaded music (iTunes, etc.) start outselling music-on-containers (CDs, vinyl, etc.), worldwide?

Comments will remain open until Saturday, July 3, after which I’ll comment on the responses.

The apparent answer: According to the sources I’ve seen, the answer is “2011, probably.”

Which is also to say: It hasn’t happened yet.

So Davin’s right, Stephen’s close enough, and John’s…not so much.

Random Sunday music musing

Sunday, June 13th, 2010

When I got my dandy little 8GB Sansa Fuze “MP3 player” (because it was on sale for $69 at Radio Shack, which it is again this week), I loaded it with 863 songs from my collection that I think are better than average–ones I give 3, 4 or 5 stars. (The whole collection, excluding classical, is around 3,000 songs–mostly fairly old CDs, but I’ve added a handful of used CDs purchased recently…)

And I’m going through it for a first pass, really listening to songs, usually about 10 a day.

Today I hit a song that was great–a recent addition, so I’d probably only heard it once before–but that also reminded me I’ve lost most of the specific vocabulary for music I might once have had.

The song: “Hard to Love” by Vance Gilbert (from one thru fourteen, released 2002 on Louisiana Red Hot Records).

It’s a blues of a particular style–with verses minimally accompanied (Hammond B3, electric lead & rhythm guitars, acoustic bass, drums–the bass descending one note per bar, minimal riffs from the rest), and then a solid horn section cutting in on the chorus. I mean a tasty horn section. (I’d actually been thinking, you know, I need a few more songs with really tasty horn sections.)

Yes, I can recognize a Hammond B3 almost instantly…or one heck of a good synthesizer simulation. Can’t you? Some day, all the Hammond B3s will be gone and irreparable; that will be a sad day for blues/jazz/whatever. Also yes, I’m one of those who thinks Al Kooper’s contributions to American music have been undervalued…

And I realized that I didn’t know whether this was Tower of Power-style horns, Memphis Horns style, or something else entirely. It only matters in that it’s hard for me to describe this number adequately.

[Checking the liner notes/booklet, one of those things that come with CDs, I find something really interesting, given that the horns seemed to have a pretty natural acoustic and stereo spread: The horns are the “Joe Mennona horns,” which appears to consist of Joe Mennona overdubbing all the horn parts–tenor sax, alto sax, baritone sax, trombone and trumpet.]

No real significance here. You can enjoy music without being able to describe it properly.


Monday, June 7th, 2010

The June San Francisco Chronicle Magazine (the Chron only does its own glossy-magazine section once a month, a very sensible decision–the weekly book section and review/entertainment section are separate anyway) leads off with an editor’s column with the same title as this post.

It’s not all that long (465 words–shorter than this 558-word post); you can read the whole thing yourself, and look at the amusing picture. The theme: Meredith May (the writer) has been

getting into polite arguments with friends who have been posting pictures of me on Facebook and Flickr that I would never want you to see.

They’re not nude shots or anything like that–but they were “taken in private moments with friends before the world was wide and covered in a Web.” May doesn’t think it’s up to other people–even her friends–to decide which parts of her own history should be made public.

She notes a specific incident–she’s going to the Pulitzer Center on Crisis Reporting to talk about her story on girl slavery in Nepal and, checking Facebook in the airport, finds that an old friend has posted pictures of her drinking and posing at high school house parties…

May doesn’t quite understand people’s impulse to overshare their own stuff–“but over-sharing someone other than yourself without his or her permission is baffling.” And, indeed, once we learn that any candid shot is likely to turn up on the web, spontaneity may suffer.

I have had parties at my house with a dozen of my lovely artist friends, and nine will bring a camera and start shooting. The whole reason for having your homies over for a party is that you can let down your hair and dance on the counter if you want to. But I’m more cautious now. The joie de vivre, the carpe diem, the being alive part of living – is tempered.

In our haste to document and share everything, are we losing what it means to live in the moment?

I can’t speak for anyone else, but this editorial certainly resonates with me. I’ll take it a step further: “Agreeing” that a picture can be posted isn’t always being entirely happy about it. Coercion is a strong word for the process that takes place, but it’s a form of social pressure–the desire not to be thought a complete killjoy.

There are pictures of me on the web (oddly enough, they show up in Google but not on Bing) that I could do without. One of them has a caption about what a good sport I was. “Good sport” in this case really means “didn’t feel he could avoid this without looking like a killjoy.”

I know that my own behavior at, say, conference receptions is now much more circumspect than it might have been in the past, that I’m much less willing to don silly hats or assume silly poses or hold up silly signs. A few years ago, I would have assumed that a few folks would have gotten little laughs out of the silliness as captured in photos. Now, I assume that the silly pictures will live forever on the web and in search-engine results–and while they can’t really do me any harm, I’d just as soon not, thank you.

So does this make me a killjoy? Maybe so. Such is life. Apparently I’m not the only one…

A quick twofer

Wednesday, June 2nd, 2010

Two miniposts for the price of one!

Gold star

I would be remiss if I did not mention that this here blog received a gold star from Salem Press in its library blog thingie, particularly since they were very quick to move this blog from Public Library Blogs (!) to General Blogs (I was hoping for Quirky, but you can’t always get what you want) after I let them know…

(There seems to be no shortage of links to the Salem Press list, so the lack of one here shouldn’t be an issue.)

Quick expert advice from librarians about web tools

Here’s an easy two-part test for modern librarians–or, better yet, just those who are considered web specialists. They’re honest questions, and presumably y’all should be able to answer them on the spot, in the comments:

  1. I have a fully-formatted book ms. done using Word 2007, but also in PDF. How do I convert it to epub (without DRM), retaining as much of the formatting as possible? I even have Calibre, if that helps.
  2. OK, so I have the new Facebook privacy tools now, but I just looked at my Privacy settings and I don’t understand what’s going on here:

Facebook Privileges
Note: This is a straight screen capture, cropped but with no other changes. You may have to scroll right to see what I’m really interested in.

To wit: What does “Other” mean? How can I find out?

I await responses with some interest. Based on other discussions, I assume that any employable web services librarian should have answers…
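For what it’s worth, the route I’d try first for question 1 is Calibre’s command-line converter (a sketch; I haven’t verified how much of the Word formatting survives the trip):

```shell
# Calibre ships a CLI alongside the GUI. Converting from the .docx
# generally preserves more structure than converting from the PDF,
# which has already flattened the formatting.
ebook-convert manuscript.docx manuscript.epub
```

No DRM is involved either way; Calibre doesn’t add any.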

dr? dc!

Monday, May 24th, 2010

Right up front: I’ve been guilty of this before and probably will be again.

As I was working on a Zeitgeist piece, I looked at a nicely-done 1,300-word essay. On a national newspaper website. About one aspect of social networking. With some interesting and slightly controversial things to say, some of them certainly open to argument.

The very first comment detailed the length of the essay–how many words, how many characters, how many sentences, average number of letters per word, length of longest sentence–and ended with a note suggesting that there was no content, or at least that the commenter hadn’t read it.

Understand: The commenter didn’t disagree with what was being said–the commenter was trashing the essay based on its length (apparently). Several other commenters offered variants of the old “tl; dr” brushoff–that is, “too long; didn’t read.” (I rarely see that on liblogs–maybe library folks actually have more than ten-second attention spans, or at least believe that “tl; dr” leaves one open to accusations of subliteracy.)

I’m not going to argue that people damn well should read longer essays. After all, 1,300 words is just a bit less than two pages of C&I, or three or four pages of a typical trade paperback, or one-third of a typical In the library post, or nine Friendfeed posts. If that’s so much text it makes your brain explode or your eyes hurt, who am I to argue.

dr? dc

But, well…

If you didn’t read the article or post, why are you commenting on it?

Equally, if you read the article or post and have nothing to say about the topic or the substance of the post or article… why comment on it?

Because you know the writer hangs on your every word so much that she will at least appreciate knowing you dropped by? Because you’re so damned important that you must respond? Because you’re a frustrated graffitist? Because you have no life?

I think ~~all of us~~ many of us do this sort of thing–or leave equally vapid responses–once in a while. (Yes, that’s a preventive strikeout: I was about to commit a universalism, and I damn well should know better.)

It works both ways. I waste time on FriendFeed. (I also use FriendFeed, and maintain friendships on FriendFeed, and gain valuable insights on FriendFeed. And sometimes I waste time on FriendFeed–the activities aren’t mutually exclusive.) As many categories as I’ve hidden, as rarely as I Follow anybody new, I still see dozens of posts (mostly from Twitter, but not all) of the “what’ll I have for breakfast / I just had X for dinner / I just posted from Y” flavor, stuff that for me is almost exclusively in the “who cares?” category–just as some of my posts here fall into the “who cares?” category for some, maybe most, occasionally all readers.

I don’t believe I’ve ever found any reason to comment on a “what I had for breakfast” FF item by asking who cares or saying “don’t clutter up the feed with that crap” or anything of the sort. If I don’t care, why would I take the time to comment? (And, for that matter, if I don’t care, how does that imply that nobody else could possibly care?) I’m dead certain I’ve left equivalent responses on some posts and FF messages, however, and I’m sure I will in the future.

And I’ll be (trivially) wrong to do so.

As of that last period, this post contains 570 words. That’s probably too long for some of you–but I suspect that people who can’t handle 600, 800, or 6,000 words aren’t among my audience anyway.

By the way: I’m tagging this “Net Media”–but I no longer believe that term has much of any meaning, and I’m also doubtful about “Social Media.” That’s an essay I’ll be writing one of these days, probably in C&I. 636 words. My work here is done (645).

Industry Standard, RIP–again

Friday, May 21st, 2010

The Spring 2010 C&I essay “The Zeitgeist: hypePad and buzzkill” includes several notes taken from The Industry Standard–a site that I still had bookmarked, even if it was a pale shadow of the wonderful trade magazine The Industry Standard, which was great reading, fat, interesting…and overextended itself during the dotcom boom, going under as that boom went bust.

That pale shadow is now itself dead, as of a couple months ago (I don’t remember exactly when). It was absorbed into InfoWorld…sort of.

Sort of?

Yep. I had a number of items from The Industry Standard tagged in delicious, for use in future C&I essays. I probably still do. Today I wanted to use a couple of them for part of an Interesting & Peculiar Products essay.

The delicious link leads to InfoWorld. Not to the article.

Searching for the articles, by any keywords I could think of (e.g., those in the title), comes up empty.

I can’t swear the articles aren’t there…but they’re not findable. Which means they might as well not be there.

This is a shame. There was still some good coverage there. And, as far as I can tell, it’s just gone.

MP3 Doesn’t Have DRM–Or Does It?

Monday, April 19th, 2010

One of the great steps forward for fair use and first-sale rights came last year, when iTunes finally stopped selling DRM-encased tracks and started selling DRM-free MP3 (or its direct, DRM-free, AAC equivalent).

“DRM-free MP3” is redundant, right? The MP3 format doesn’t allow for DRM, right?

Right…at least not now, at least not directly.

A Digression

DRM gets a bad rap. Actual Digital Rights Management can–or could–be valuable, in situations (which pretty much every library is familiar with) where access to digital resources is based on the user’s rights. Most of the time, in practice, those rights are understood indirectly: If you have access to a campus network for an appropriate definition of “access,” for example, you’re assumed to have rights to the databases the library licenses–and similarly for public libraries, if you’re either standing at a library computer or you can demonstrate (over the internet) that you’re a library patron. But the rights management could be more complex; you could have a digital signature that identified all the ways you might have rights to use various digital resources.

But most of the time, when we talk about DRM–especially as it relates to copyright–we’re talking about what I call Digital Restrictions Management: Basically, reducing or eliminating your fair use and first sale rights in digital resources that you think you’ve purchased.

The funny thing about that kind of DRM is that it has never done much to stop The Bad Guys, those who are out to pirate copyrighted material. They either have other methods to get access to non-DRM resources or they break the DRM. DRM mostly damages the innocent, people who want to device-shift or maybe use legitimate excerpts from something. So it’s hard not to cheer the move away from DRM in music…noting that audio CDs never had DRM. (Yes, there were silver discs with DRM; no, they weren’t legitimate Audio CDs. The Red Book, the key license for all audio CDs, does not allow for DRM.)

End of digression.

“At least not directly?”

Yep. Read this story in TechCrunch.

Seems that the tracks you buy from iTunes–or from LaLa or Walmart–have personal information embedded in the MP3. The post shows an example.
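If you’re curious whether your own purchases carry such data, a crude check is to pull printable strings out of the raw bytes (a sketch only; real ID3/metadata parsing would be more precise):

```python
import re


def readable_strings(path, min_len=8):
    """Return runs of printable ASCII of at least min_len bytes found
    in a file's raw contents -- a blunt way to spot embedded account
    info (names, email addresses) in a purchased track."""
    with open(path, "rb") as f:
        data = f.read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)
```

Run it against a purchased MP3 and scan the output for anything that looks like your name or email address.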

Who cares? Well, read the quoted section.

If you’re really paranoid, consider the possibilities: Could iTunes scan your library and delete any files that don’t have the right username?

Seems unlikely, but…

Maybe no more unlikely than, say, Amazon deleting an ebook from your Kindle…

Updated 4/23/10, to remove idiot error in post title. Odd that nobody called me on that!