I need instant feedback on this one–by Wednesday, May 9, if at all possible. A quick comment here or email to waltcrawford at gmail.com will do.
When I was finishing the Public Library Closures study, I thought back to many years ago, when I was doing state library conference keynotes with some regularity. For several of them, I did snapshots of the public libraries in the state (at the time, using state databases), looking at per-capita expenditures and circulation. I found that, with very few exceptions, “libraries that do a lot do a lot”–well-funded libraries had higher circulation per capita than poorly funded libraries, to the point that they were at least as good a bargain.
I wondered whether a slightly more sophisticated calculation would be useful to libraries and library groups in telling the positive story I believe public libraries should be telling: That is, it’s not about forestalling closure, it’s about providing the resources so libraries can enrich and enhance communities. And, with very few exceptions, public libraries are demonstrably good stewards of additional resources.
I’ve taken the 2009 IMLS figures (the most recent available) and done two versions of a master spreadsheet–both using eight reported factors to determine direct value (countable events that are clearly valuable to community members) and indirect value (countables that provide a less clearly direct value to the community), calculating total operating expenses per capita, and preparing a Value Ratio: value per capita divided by expense per capita (with two sub-ratios for direct and indirect value).
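For concreteness, the Value Ratio calculation described above can be sketched as follows. The function name, argument names, and the sample numbers are my own illustrations, not IMLS field names or the actual eight factors:

```python
# Sketch of the Value Ratio: value per capita divided by expense per
# capita, with sub-ratios for direct and indirect value. All names and
# numbers here are hypothetical illustrations of the calculation.

def value_ratio(direct_value, indirect_value, operating_expenses, population):
    """Return the overall Value Ratio plus the direct and indirect sub-ratios."""
    if population == 0 or operating_expenses == 0:
        raise ValueError("population and expenses must be nonzero")
    expense_pc = operating_expenses / population
    direct_pc = direct_value / population
    indirect_pc = indirect_value / population
    return {
        "value_ratio": (direct_pc + indirect_pc) / expense_pc,
        "direct_ratio": direct_pc / expense_pc,
        "indirect_ratio": indirect_pc / expense_pc,
    }

# Hypothetical library serving 10,000 people:
r = value_ratio(direct_value=300_000, indirect_value=160_000,
                operating_expenses=100_000, population=10_000)
print(r["value_ratio"])  # 4.6
```

That is, $46 per capita in countable value against $10 per capita in expenses yields a ratio of 4.6, split here 3.0 direct and 1.6 indirect.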
One spreadsheet includes 8,936 libraries–excluding those that didn’t directly report expenses, hours, or circulation. It does include imputed figures for items other than expenses, hours, and circulation. (There are some 300 libraries–mostly very small, quite a few not in the 50 states plus DC–that just don’t report enough information for inclusion.)
The other begins with the 8,936, but moves 524 of them to a separate Outlier page based on one or more of these conditions:
- Less than one-quarter FTE librarian or total staff, or imputed staff levels rather than reported staff levels (350 libraries)
- Less than $5 per capita expenses (147 libraries)
- $300 or more per capita expenses (27 libraries)
That leaves 8,412 libraries. (Most of the 524 are small: While that’s 6% of the libraries, it includes only 1.8% of the population served–the remaining 8,412 include 293.7 million people in service areas, as compared to 5.3 million served by outliers.)
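The outlier screen above amounts to a simple filter over library records. The field names below are hypothetical; only the thresholds come from the conditions listed:

```python
# Sketch of the outlier screen: a library is set aside if it meets
# any one of the three conditions. Field names are my assumptions,
# not actual IMLS column names.

def is_outlier(lib):
    """Return True if a library record meets any exclusion condition."""
    expense_pc = lib["expenses"] / lib["population"]
    return (
        lib["librarian_fte"] < 0.25 or lib["total_staff_fte"] < 0.25
        or lib["staff_imputed"]      # imputed rather than reported staff
        or expense_pc < 5            # under $5 per capita expenses
        or expense_pc >= 300         # $300 or more per capita expenses
    )

# Two made-up records: one ordinary, one with almost no staff.
libs = [
    {"librarian_fte": 2.0, "total_staff_fte": 5.0, "staff_imputed": False,
     "expenses": 400_000, "population": 10_000},  # $40/capita: kept
    {"librarian_fte": 0.1, "total_staff_fte": 0.1, "staff_imputed": False,
     "expenses": 50_000, "population": 2_000},    # tiny staff: outlier
]
kept = [lib for lib in libs if not is_outlier(lib)]
print(len(kept))  # 1
```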
Also, these tables were cleaned up to zero out all numbers not directly reported by libraries–all imputed numbers became zeros.
There’s a strong correlation between expenses per capita and value per capita (>.6) for the 8,412 libraries. Libraries that are funded better generally provide more services, and–up to a point–it doesn’t seem to involve diminishing returns.
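The correlation in question is an ordinary Pearson correlation across libraries. A minimal sketch, with invented per-capita figures standing in for the 8,412 real records:

```python
# Pearson correlation between expense per capita and value per capita.
# The two sample lists are made up for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

expense_pc = [10, 20, 30, 40, 55]    # hypothetical $/capita expenses
value_pc = [40, 95, 130, 190, 250]   # hypothetical $/capita value
print(round(pearson(expense_pc, value_pc), 3))
```

A value above .6 over nearly 8,500 libraries is a strong relationship for real-world data of this kind.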
Here’s where I need immediate feedback: Is it a waste of time to break this down into a detailed set of charts and percentiles, using four different axes to look at subsets, yielding a reasonably compact book that I’d probably sell for $60/copy (and probably issue in a new version, with refinements, a few weeks after 2010 IMLS databases emerge)?
That is: Would a fair number of state libraries, public libraries, and possibly library schools find this analysis worthwhile, or would it be ignored or, worse, resented (as some folks seem to resent my finding that public libraries aren’t actually shutting down all over the place)?
The four axes (or, if you will, chapters), following an overall look:
- Clumps of libraries by LSA size (using the 10 HAPLR divisions)
- Clumps of libraries by expense/budget ranges
- Clumps of libraries by per capita expense ranges
- State-by-state analyses (one clump for states with few libraries, probably three by broad size categories for states with many libraries)
For each clump, as for the overall figures, I’d provide correlations as appropriate, plus mean, median, and percentile levels in two ways–the 90th, 80th, 70th, etc. percentiles, but also the percentage of libraries exceeding certain value ratio set points.
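The two percentile presentations can be sketched as below. The ratios and set points are invented for illustration, and the nearest-rank percentile used here is just one simple convention among several:

```python
# Two ways of presenting value ratios for a clump of libraries:
# (1) percentile levels, (2) share of libraries above fixed set points.
# The ratios and set points are hypothetical.

def percentile(sorted_vals, p):
    """Nearest-rank percentile over an already-sorted list."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

ratios = sorted([1.2, 2.5, 3.1, 4.0, 4.6, 5.2, 6.8, 9.0])

for p in (90, 80, 70):
    print(f"{p}th percentile: {percentile(ratios, p)}")

for set_point in (2, 4, 6):
    share = sum(r > set_point for r in ratios) / len(ratios)
    print(f"share above {set_point}: {share:.0%}")
```

The set-point view answers a different question than the percentile view: not “what ratio does the 80th-percentile library reach?” but “what fraction of libraries return, say, more than $4 per $1 spent?”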
(The overall value ratio for all 8,412 libraries is 4.59–that is, $4.59 in value for each $1 in expenses, after zeroing out all imputed numbers. Including the imputed numbers and the outlying libraries changes this to 4.62, a fairly trivial change. As it happens, the 4.59 is almost evenly split between direct and indirect value.)
If you’re thinking either HAPLR or the LJ Star Libraries, there are similarities and differences.
Similarities: As with both of those, it’s based on nothing more than the IMLS database and a set of calculations.
Differences: It’s looking only at patterns, not “stars” or “top X.” In fact, I’ve removed library names from the primary spreadsheet, so I’m not even tempted to consider the library names themselves. (Yes, I can get back to the library name, but it’s a two-step process.) And it’s looking at more factors than LJ, although fewer than HAPLR.
Mostly, though, it’s not about naming names. It’s about showing value in general and providing appropriate benchmarks/comparisons.
Prepare or abandon?
I have the spreadsheet. Do I just say “that’s interesting” and drop it, or does it make sense to prepare the results in a manner that libraries/library groups might find useful?
Your feedback, please–and soon! (If I do this, the 2009 version will probably be ready by ALA Annual, which I won’t be attending.)