I need instant feedback on this one–by Wednesday, May 9, if at all possible. A quick comment here or email to waltcrawford at gmail.com will do.
The idea
When I was finishing the Public Library Closures study, I thought back many years, to when I was doing state library conference keynotes with some regularity. For several of them, I did snapshots of the public libraries in the state (at the time, using state databases), looking at per-capita expenditures and circulation. I found that, with very few exceptions, “libraries that get a lot do a lot”: well-funded libraries had higher circulation per capita than poorly-funded libraries, to the point that they were at least as good a bargain.
I wondered whether a slightly more sophisticated calculation would be useful to libraries and library groups in telling the positive story I believe public libraries should be telling: That is, it’s not about forestalling closure, it’s about providing the resources so libraries can enrich and enhance communities. And, with very few exceptions, public libraries are demonstrably good stewards of additional resources.
The status
I’ve taken the 2009 IMLS figures (the most recent available) and built two versions of a master spreadsheet. Both use eight reported factors to determine direct value (countable events that are clearly valuable to community members) and indirect value (countables whose value to the community is less direct), calculate total operating expenses per capita, and prepare a Value Ratio: value per capita divided by expense per capita, with two sub-ratios for direct and indirect value.
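For concreteness, here’s a rough sketch of that arithmetic in Python/pandas terms. Everything specific in it is a placeholder: the file name, the column names, the choice of eight countables, and the $1-per-unit weights are stand-ins, not the actual model (which would be stated in full in the book).

```python
import pandas as pd

# Hypothetical extract of the 2009 IMLS public-library file; the file
# name and every column name below are placeholders, not real IMLS
# field names.
df = pd.read_csv("imls_2009_libraries.csv")

# Assumed split of eight reported countables into direct and indirect
# value, each weighted at $1 per unit; the real factors and weights
# are not specified here.
DIRECT = {"circulation": 1.0, "reference_transactions": 1.0,
          "program_attendance": 1.0, "public_pc_uses": 1.0}
INDIRECT = {"visits": 1.0, "ill_loans_received": 1.0,
            "registered_borrowers": 1.0, "website_visits": 1.0}

def value_per_capita(frame, weights):
    """Dollar-weighted sum of countables, divided by population served."""
    total = sum(frame[col] * w for col, w in weights.items())
    return total / frame["population_lsa"]

df["direct_pc"] = value_per_capita(df, DIRECT)
df["indirect_pc"] = value_per_capita(df, INDIRECT)
df["expense_pc"] = df["total_operating_expenses"] / df["population_lsa"]

# Value Ratio: value per capita divided by expense per capita,
# plus the two sub-ratios for direct and indirect value.
df["value_ratio"] = (df["direct_pc"] + df["indirect_pc"]) / df["expense_pc"]
df["direct_ratio"] = df["direct_pc"] / df["expense_pc"]
df["indirect_ratio"] = df["indirect_pc"] / df["expense_pc"]
```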
One spreadsheet includes 8,936 libraries, excluding those that didn’t directly report expenses, hours, or circulation; it does include imputed figures for items other than those three. (Some 300 libraries, mostly very small and quite a few outside the 50 states plus DC, just don’t report enough information for inclusion.)
The other begins with the 8,936, but moves 524 of them to a separate Outlier page based on one or more of these conditions:
- Less than one-quarter FTE librarian or total staff, or imputed staff levels rather than reported staff levels (350 libraries)
- Less than $5 per capita expenses (147 libraries)
- $300 or more per capita expenses (27 libraries)
That leaves 8,412 libraries. (Most of the 524 are small: while they’re 6% of the libraries, they serve only 1.8% of the population. The remaining 8,412 serve 293.7 million people in their service areas, compared with 5.3 million served by outliers.) A rough sketch of this screening logic follows.
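In code terms, the screen is a handful of boolean tests, something like this (continuing the sketch above; the staff column names are again placeholders):

```python
# Outlier screen: any one condition moves a library to the Outlier page.
outlier = (
    (df["librarian_fte"] < 0.25)      # under one-quarter FTE librarian...
    | (df["total_staff_fte"] < 0.25)  # ...or total staff
    | df["staff_imputed"]             # imputed rather than reported staff
    | (df["expense_pc"] < 5)          # under $5 per capita expenses
    | (df["expense_pc"] >= 300)       # $300 or more per capita expenses
)

outliers = df[outlier].copy()  # the separate Outlier page (524 libraries)
kept = df[~outlier].copy()     # the remaining 8,412 libraries
```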
Also, in this second spreadsheet, the tables were cleaned up to zero out all numbers not directly reported by libraries: all imputed numbers became zeros.
There’s a strong correlation between expenses per capita and value per capita (greater than 0.6) for the 8,412 libraries. Libraries that are funded better generally provide more services, and, up to a point, there don’t seem to be diminishing returns.
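In the sketch’s terms that correlation is a one-liner; I’ll assume an ordinary Pearson r (the pandas default), with Spearman’s rho as a quick rank-based robustness check:

```python
# Correlation between expense per capita and total value per capita
# for the screened set of libraries.
kept["value_pc"] = kept["direct_pc"] + kept["indirect_pc"]
r = kept["expense_pc"].corr(kept["value_pc"])                       # Pearson
rho = kept["expense_pc"].corr(kept["value_pc"], method="spearman")  # rank
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```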
The plan
Here’s where I need immediate feedback: Is it a waste of time to break this down into a detailed set of charts and percentiles, using four different axes to look at subsets? The result would be a reasonably compact book that I’d probably sell for $60 per copy (and probably reissue in a refined version a few weeks after the 2010 IMLS databases emerge).
That is: Would a fair number of state libraries (and possibly individual libraries and library schools) find this analysis worthwhile, or would it be ignored or, worse, resented, as some folks seem to resent my finding that public libraries aren’t actually shutting down all over the place?
The four axes or, if you will, chapters, following an overall look:
- Clumps of libraries by LSA (legal service area) size, using the ten HAPLR divisions
- Clumps of libraries by expense/budget ranges
- Clumps of libraries by per capita expense ranges
- State-by-state analyses (one clump for states with few libraries, probably three by broad size categories for states with many libraries)
For each clump, as for the overall figures, I’d provide correlations as appropriate, plus mean, median, and percentile levels in two ways: the 90th, 80th, 70th, etc. percentiles, but also the percentage of libraries exceeding certain value-ratio set points. (A sketch of those calculations follows.)
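Continuing the sketch, here’s roughly how one clump axis and its tables would come together. The LSA-size cut points and the value-ratio set points below are placeholders; they are not necessarily the HAPLR divisions or the set points I’d actually use.

```python
import numpy as np

# Placeholder population bins standing in for the ten HAPLR divisions.
bins = [0, 1_000, 2_500, 5_000, 10_000, 25_000, 50_000,
        100_000, 250_000, 500_000, np.inf]
kept["size_clump"] = pd.cut(kept["population_lsa"], bins=bins)

grouped = kept.groupby("size_clump")["value_ratio"]

# Mean and median per clump, then the 90th-through-10th percentiles.
summary = grouped.agg(["mean", "median"])
percentiles = grouped.quantile([0.9, 0.8, 0.7, 0.6, 0.5,
                                0.4, 0.3, 0.2, 0.1]).unstack()

# Percentage of libraries in each clump exceeding chosen set points.
for setpoint in (2, 3, 4, 5):
    share = grouped.apply(lambda s: (s > setpoint).mean() * 100)
    print(f"Value ratio above {setpoint}:")
    print(share.round(1))
```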
(The overall value ratio for all 8,412 libraries is 4.59–that is, $4.59 in value for each $1 in expenses, including zeroing out all imputed numbers. Including the imputed numbers and the outlying libraries changes this to 4.62, a fairly trivial change. As it happens, the 4.59 is almost equally split between direct and indirect value.)
If you’re thinking either HAPLR or the LJ Star Libraries, there are similarities and differences.
Similarities: As with both of those, it’s based on nothing more than the IMLS database and a set of calculations.
Differences: It’s looking only at patterns, not “stars” or “top X.” In fact, I’ve removed library names from the primary spreadsheet, so I’m not even tempted to consider the library names themselves. (Yes, I can get back to the library name, but it’s a two-step process.) And it’s looking at more factors than LJ, although fewer than HAPLR.
Mostly, though, it’s not about naming names. It’s about showing value in general and providing appropriate benchmarks/comparisons.
Prepare or abandon?
I have the spreadsheet. Do I just say “that’s interesting” and drop it, or does it make sense to prepare the results in a manner that libraries/library groups might find useful?
Your feedback, please–and soon! (If I do this, the 2009 version will probably be ready by ALA Annual, which I won’t be attending.)
I am not sure how I’d use this research at my level, but it seems to me that it is always good to be able to “prove” our value. But I am new to all of this.
For what it’s worth, it seems it would only be useful IF the Value Ratio to which you refer is an empirically, statistically useful number. Simply calling some number “value” doesn’t make it valid. Are you using contingent valuation (CV), either willingness-to-pay (WTP) or willingness-to-accept (WTA)? Or are you using some arbitrarily assigned “value”?
It also seems like a no-brainer premise to say that “libraries that are better funded provide better service.” One of the pitfalls of using data collected for one purpose to serve a different purpose is that it requires numerous questionable assumptions to make the numbers meaningful. How would this data be more valid or more valuable than the results from the several states that conducted their own ROI studies within the past few years?
The 21st Century Library is More
Our library has a link to show our value. It tells you what you save for every dollar spent. It is very effective for our patron base, who think we do not contribute to our area.
Here is the link to play with:
http://www.countylibrary.org/librarycalc.htm
I would welcome this information. Helping libraries show their value is particularly valuable in our current political environment.
I think it would be helpful, especially with names being omitted.
Thanks all (so far); I’d love to have additional comments.
d johnson: I’m well aware of the library ROI calculators (they’ve been around for years, all originating from one source) and think they’re great. But they’re for individual use. I’m looking at broader trends.
Steve: If you’ve read any of my research work, you know that I’m always 100% explicit about my assumptions and methods. That’s been true for years. (If you haven’t read any of my earlier work, I’m a little surprised that you’re commenting here.) (See below.)
Others: Thanks–and Crystal, I think I hear one reason this might be valuable and different: I really, truly, completely am NOT interested in setting up another list of “here are the GREAT libraries.”
Addition after following the link in Steve Matthews’ comment, the links in the post found there, and all of the ROI documents from those links:
1. I’m not claiming that what I’m doing is “more valid”–but individual libraries and states might find it additionally valuable. I do not in any way intend to replace state ROI studies.
2. My model is deliberately simple, conservative, and designed to return a minimum value, based only on countables as reported to IMLS. The model will be stated in full as part of the book.
3. To me, it’s not at all obvious that well-funded libraries continue to return equivalent value for additional dollars spent, and I don’t think that’s obvious to a typical skeptical user either. Why is it obvious that a library spending $50 per capita will offer twice as much value to the average user as one spending $25 per capita? I certainly don’t assume the Pentagon’s $600 hammers are 30 times as good as ordinary $20 hammers…
4. The key difference between the book I’m considering and all of the ROI studies I’ve looked at: I’m not just offering One Big Number, but ranges and correlations. So, say, a library serving 25,000 people in Colorado with $30 per capita expenditures could compare itself to other libraries in the same size range, funding range, or state (and broader size range), and have some evidence that $40 per capita funding would yield worthwhile additional value. (“Compare itself” not because it would be named in the book–no library will be called out by name–but because any library should know its reported numbers and can repeat the calculation I’ve done. If a library doesn’t, an email to me will return the appropriate line of the spreadsheet for the library’s internal use, since I *can* get back to names through a two-step process.)
Sounds like you’ve already made up your mind, so I’m not sure why the request for feedback about which you appear defensive.
No, I haven’t made up my mind. If I seemed defensive about your comments, I apologize–but your comment seemed to be on the attack (“no-brainer,” “arbitrarily assigned,” etc.)–and to assume that I was trying to replace state ROI studies rather than providing a new form of information. If your response is typical of how state libraries would view what I’m thinking of, then I should abandon the idea.