Prospectus: An Ongoing Public Library Social Network Scan

I wrote about this six weeks ago in “Dreaming out loud: An ideal ongoing sponsored project.” This is a slightly more fleshed-out version, done after completing the 5,958-library survey and revisiting the first 2,406 libraries. (I’m well into revising the manuscript now, preparing secondary metrics as I go).

Briefly, I believe it would be worthwhile to do an annual nationwide survey of public library presence on social networks, looking at all U.S. public libraries–9,184 of them (based on IMLS figures as reported in Hennen’s American Public Library Ratings (HAPLR)).

The earlier post offers a longer version of what I’ve already done and some other notes; this post goes into a little more detail on possibilities. I would love to see this happen, but I don’t really have the contacts to make it happen on my own. In late January and early February, I’ll be trying to see what’s possible (with your help, I hope). At some point, it may make sense to try the long shot (see “How to Pay” below).

The Baseline

As background for my 2012 ALA Editions book Successful Social Networking in Public Libraries, I looked at all of the public libraries in 25 states (distributed by population) in late summer 2011–and later added the libraries in 13 more states, for a total of 5,958 libraries in 38 states. For the first 25 states (and 2,406 libraries), I revisited them four months after the first visit to look at changes in social networking.

The result is two spreadsheets, one of which (LSNALL) would be the baseline for the new project. (The other, LSN25, looks at the four-month changes. It wouldn’t be particularly relevant for the new project.)

LSNALL includes, for each library, the following–based on my own searching, as detailed in the earlier post:

  • Library name and Legal Service Area population as provided by the state library in its spreadsheet, noting that “Library name” is frequently something other than the name the library actually uses. (Only libraries that have an LSA are included, leaving out 7,000-odd branches but also cooperative agencies that aren’t double-counted.)
  • State abbreviation
  • Date on which I checked the library
  • “FB?”–a code indicating whether I found a working Facebook link to a library Facebook page on the website (w), in the first 100 Google results (g), or by searching Facebook itself (f), in that order–or, if none was found, whether I found a community or information Facebook page instead (i) or nothing at all (n).
  • If there is a Facebook page (or group, or non-page account), the number of Likes (or friends).
  • For the most recent and fifth most recent Facebook post from the library itself, a code indicating its currency bucket: d (the day I checked), e (week–within the past seven days including today), f (fortnight), m (month), q (quarter), s (six months), y (year) or z (more than a year). (“e” was chosen to make the buckets directly sortable.)
  • A one-letter code indicating whether I found some signs of interactivity within the “visible” posts (usually 20 to 30): “y” for a non-library comment or a non-spam post from someone other than the library; “l” if I found likes (by someone other than the library!) on posts but no comments; “s” if I found only spam comments (or only spam and likes); and blank if I found none of those.
  • A Twitter code, similar to Facebook except that there are no “i” cases and I use “t” instead of “f” if the Twitter account could only be found within Twitter itself.
  • Followers, following, and tweets, one number each.
  • The same most recent and fifth most recent bucket codes for tweets.
  • An interactivity code, usually based on either non-library tweets, retweets, or tweets beginning “@”–I didn’t look as far for these, and don’t regard the results as very meaningful.
  • Comments if needed–sparsely. (E.g., “FB0” for a few cases where a library Facebook page is apparently the library’s actual page but has no updates, up through FB4 if there aren’t yet five updates, or “FB teen” or the like where there’s no general-purpose FB page but appear to be specialized pages.)
  • Added after the initial scan: “SN?”–a number from 0 to 2 indicating how many of the two networks (Facebook and Twitter) the library had–and “H”–a number from 0 to 9 providing the HAPLR size category (0 being under 1,000, 9 being 500,000 and up), to ease sorting and, as it turns out, reporting.
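The currency-bucket coding above can be sketched as a small function. The bucket letters are from the text; the exact day cutoffs are my assumption, since the post names the buckets but not their precise boundaries.

```python
def currency_bucket(days_since_post: int) -> str:
    """Map days since a post to a one-letter currency bucket.

    d = the day checked, e = within the past week, f = fortnight,
    m = month, q = quarter, s = six months, y = year,
    z = more than a year.  ("e" keeps the codes sortable.)
    Cutoff values are assumed, not from the original text.
    """
    cutoffs = [(0, "d"), (7, "e"), (14, "f"), (31, "m"),
               (92, "q"), (183, "s"), (366, "y")]
    for limit, code in cutoffs:
        if days_since_post <= limit:
            return code
    return "z"
```

For example, a post from ten days ago would fall in the fortnight bucket (“f”), and one from four months ago in the six-month bucket (“s”).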

A derivative spreadsheet, LSN38, leaves out all the rows with SN?=0 (libraries with no findable social network presence) and adds derivative columns for use in the book, such as “F%” (Likes divided by LSA), “T%” (same for Twitter followers), “T/F” (followers divided by likes), “Fr” (followings divided by followers) and “Fmx” and “Tmx”–two-character codes indicating frequency and reach buckets. There are also metrics spreadsheets and pages within these spreadsheets, of course, but the primary LSNALL spreadsheet is the true baseline.
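The derivative columns in LSN38 are straightforward ratios; a sketch, with the column names taken from the text and the function layout my own assumption:

```python
def derived_metrics(likes: int, followers: int, following: int, lsa: int) -> dict:
    """Compute the derivative ratio columns described for LSN38.

    likes: Facebook Likes; followers/following: Twitter counts;
    lsa: Legal Service Area population.
    """
    return {
        "F%": likes / lsa,            # Likes divided by LSA
        "T%": followers / lsa,        # Twitter followers divided by LSA
        "T/F": followers / likes,     # followers divided by likes
        "Fr": following / followers,  # followings divided by followers
    }
```

So a library serving 10,000 people with 500 Likes, 250 followers, and 100 followings would show F% = 0.05, T% = 0.025, T/F = 0.5, and Fr = 0.4.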

Proposed One-Year Revision

With proper funding in place and possibly better ways to distribute the results, I’d envision doing this between June and November 2012:

  • Start a new spreadsheet (linked to the old one for comparative metrics) to include the other 12 states and DC, based on the most recent IMLS dataset (which would require either acquiring Access or working with a partner, since the other 12 states don’t seem to have downloadable spreadsheets). [Note: I now find that I can open the IMLS dataset within Excel, although I would swear that this wouldn’t work when I tried it in August 2011.]
  • Update 1/25/12: The IMLS database includes actual library names, not always the same as those used in the state library spreadsheet columns I was using; I’d use those names.
  • Check each row in the spreadsheet to fill in columns as follows:
  1. Actual library name, as used for a Google search (with the state name added to the library name).
  2. Position of the library’s official website (if one is found) in the results.
  3. Facebook columns as at present, with these changes:
    a. The second “current post” bucket would be based on the 10th most recent post, but normalized to the same meanings (i.e., 2 days, 2 weeks, 2 fortnights, etc.)
    b. The interactivity column would be replaced with a number representing the number of non-library, non-spam comments and posts found within the first 10 library posts, from 0 to whatever. Post-level likes would be ignored.
  4. Twitter columns as at present, with the same “bucket” change as for Facebook and with the “Following” actual number replaced with a code indicating general approach of following (open to modification, but storing the actual number feels like overkill). Unsure whether to modify the interactivity column or simply drop it.
  5. Google+ columns along the same lines as Facebook columns, but with the number for “Added to circles” replacing Likes. (Subject to change.)
  6. Optional, if someone believes it’s worth doing and would pay extra for it: Blogging column, with a number for the number of blogs identified on the library’s homepage, and with a separate spreadsheet identifying those blogs. (This could lead to a five-year update of my Public Library Blogs study. It may be a lot more work than it’s worth. The Public Library Blogs book sold 31 copies, but that was with only my own publicity.)
  7. Optional, and I’m not sure any of these are worthwhile: Columns for MySpace presence, YouTube presence, Flickr presence indicated on homepage.
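The revised interactivity measure in 3b–a count of non-library, non-spam comments and posts among the first ten library posts, with post-level likes ignored–could be tallied roughly as follows. The post/comment data layout here is entirely hypothetical; it just illustrates the counting rule.

```python
def interactivity_count(posts: list, library_name: str) -> int:
    """Count non-library, non-spam posts and comments in the
    first ten posts; likes are deliberately ignored.

    Each post is assumed to be a dict with "author", an optional
    "spam" flag, and a "comments" list of similar dicts.
    """
    count = 0
    for post in posts[:10]:
        if post["author"] != library_name and not post.get("spam", False):
            count += 1  # a non-library, non-spam wall post
        for comment in post.get("comments", []):
            if comment["author"] != library_name and not comment.get("spam", False):
                count += 1  # a non-library, non-spam comment
    return count
```

Replacing the one-letter interactivity code with a plain count makes the column sortable and comparable across libraries.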

Later Years

Similar spreadsheet, linked to earlier sheets or pages for analysis, and adding significant new social networks that welcome institutional pages/accounts if such networks appear.

Deliverables

The deliverables would depend heavily on who’s paying for this and what they want. Possibilities:

  • Writeup of results including comparisons to 2011 and metrics similar to those planned for the forthcoming book, distributed as a free PDF. The writeup (and specific writeups) would include not only benchmarks by size of library and state, but also lists of libraries doing particularly well in various metrics relative to their size, to serve as examples for other libraries wanting to improve their social networking.
  • More specific writeups for individual states or for specific library sizes.
  • Possibly the spreadsheet itself for further analysis.

How to Pay

I believe the results would be valuable, since I believe most public libraries can benefit from a social networking presence and it’s clear that most of them are not reaching nearly as many people as they’d probably like to. A variety of benchmarks and examples should help. (My book should help too, combining benchmarks, examples, discussion, advice…)

But it’s way too much work to do for free or on spec. My experiments in self-publishing have taught me that, and have taught me that I can’t really do it based on the hope of selling the results on my own.

I figure $15,000 a year for the whole process, including deliverables (but not including #6 and #7 above). Adding #6 would push that to $20,000; adding #7 is unclear.

That is in some ways small potatoes for most funding agencies, but it would meet my needs.

Possible avenues:

  1. An agency could sponsor this–either a foundation or an agency that already does library research, or, for that matter, any agency that finds it worthwhile. I’d be delighted to work with almost any such agency; the one real exception is an agency I can’t imagine wanting to work with me. (As noted in the earlier post, I’d be delighted to work with OCLC or WebJunction on this, or the Gates Foundation, a library school, a vendor, whoever. I suspect my lack of institutional affiliation is a problem for most funding sources, but I’d love to be proved wrong. Unfortunately, independent research is not highly regarded in this field, as in most other fields.)
  2. A group of state libraries could sponsor it, in which case I’d narrow the research to cover only those states and charge a different fee, something along the lines of $500 plus $n per state, where n is the number of reporting libraries in the state multiplied by a constant, probably $1 to $2.
  3. I could find some way to be assured that sales of the report–which I’d prefer to be free–would come close to generating $15,000 in revenue. I don’t see too many pigs flying overhead, and I don’t regard that one as probable.
  4. I could prepare a Kickstarter project, video and all. Would it be accepted by the curators? No idea. Would it stand a chance? Stranger things have happened…
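The per-state fee formula in option 2 works out like this (the $1.50 default for the per-library constant is just an assumed midpoint of the $1 to $2 range):

```python
def state_fee(reporting_libraries: int, per_library: float = 1.50) -> float:
    """Fee for one state: $500 base plus a per-library charge.

    per_library is assumed to fall between $1 and $2.
    """
    return 500 + reporting_libraries * per_library
```

So a state with 200 reporting libraries would pay $800 at $1.50 per library, or $700 to $900 across the stated range.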

I need help on this.

I’m scheduling this post to appear on January 4, just after most people get back to work. I won’t do anything active beyond this post until the draft of the social networking book is done (around the end of January if all goes well); then, I’d make whatever contacts I can–but I don’t have the contacts I’d like to have. I’d need to have funding lined up by June in order to plot out the survey process, and by May if I were going to attend this year’s ALA Annual Conference. If I can’t work something out by June, I’ll probably turn my attention to other book or writing possibilities and abandon this.

If you think you could help find a home for this, let me know, either in a comment or by email to waltcrawford at gmail dot com.

If you believe the project is worthless, there’s no real need to comment. [If that reaction is based on an assertion that all the public libraries already are on Facebook, you’re simply wrong, by several thousand libraries, but never mind…]


Updated 1/7/12: This blog is now receiving so many spam comments–over 130 on some days, even after tightening the comment period–that a response via comment is likely to disappear. So if you can help or are interested, please send me email, waltcrawford at gmail dot com.
