Archive for the 'Cites & Insights' Category

C&I and The Project: A quick update

Posted in Cites & Insights, Stuff on September 13th, 2014

Just a quick update, also marking the last blog post I’ll do before I turn another year older…

The October 2014 Cites & Insights…

…will not exist. At least not as a separate issue. Most probably, the next C&I will be an October/November 2014 issue and will appear, with luck, some time in October or early November.

The project…

…is going swimmingly, I think. As of Wednesday, I’d have said “I’m sure”–but the last 300-odd journals in the Beall spreadsheet (the “independent” journals, because I checked them in publisher order) are slow going, as I should have expected.

For a bunch of journals with the same publisher, I can expect similar layout, the same place for APCs (if they’re hidden–some publishers are up front with them), the same possible shortcuts for counting articles. And for some “publishers,” I can anticipate spending very few keystrokes confirming that the “journals” are still nothing more than names on a web page.

The most extreme case of this came very early in the week, when I hit a “publisher” with 426 “journals,” only 20 of them having any articles at all. I usually consider it a good day if I can process 150 journals in all (usually doing 10 in the new DOAJ list followed by 30 in the much longer Beall list: the OASPA list has been done for a while now), an OK day if I process 100, and a great day if I can do 200. With that “publisher”, I managed 460 journals in one day, including 60 from the DOAJ list.

Given that Wednesday’s basically a half day and the weekend counts as a half day in total, here’s where I think I am:

  • I should finish Pass One on the Beall list by the end of this coming week. (Pass Two, a little additional refinement, should only take a week or so for all three lists combined.)
  • I might finish Pass One on the DOAJ list by the end of the following week–let’s say “within September” as a hoped-for deadline.
  • I can actually start working on Part One of the article(s) before the DOAJ list is complete, since that list should only enter into Part Two.

Then come lots of data massaging, thinking about the results, and writing it all up. I have no idea how long that will all take or, for that matter, how long the results will be. I’m aiming for somewhere between two 20-page and two 30-page essays, each constituting a C&I issue. My aim is notoriously weak.

I believe the project will be interesting and revealing. I know I’ve found some journals I might want to go back to and do some reading from…

Swan song?

At the moment, this project feels a little bit like a swan song. I don’t really have any major projects or book projects in mind at the moment. Oh, there are a couple of thousand–check that, 1,500–Diigo-tagged items waiting to be turned into various essays, but that’s just seeing C&I wind down. Or not.

It’s quite possible that new ideas will arise. Or I’ll start reading more, maybe finally join the local Friends and volunteer at the store or whatever. Or…

Anyway: Back to the project. 239 journals on the Beall list and 908 on the DOAJ list left to go; I’m sure a few of the DOAJ ones will disappear in the process (and I just deleted one duplicate title on the Beall list yesterday–a journal entered with two slightly different names but the same URL).

Open Data, Crowdsourcing, Independent Research and Misgivings

Posted in Cites & Insights, open access on September 1st, 2014

or Why Some Spreadsheets Probably Won’t Become Public

If you think that title is a mouthful, here’s the real title:

Why I’m exceedingly unlikely to make the spreadsheet(s) for my OA journals investigations public, and why I believe it’s reasonable not to do so.

For those of you on Friendfeed, there was a discussion specifically on this issue beginning August 26, 2014. The discussion was inconclusive (not surprisingly, partly because I was being a stubborn old goat), and I continued to think about the issues…even as I continued to build the new spreadsheet(s) for the project I hope to publish in the November and December 2014 Cites & Insights, if all goes well, cross several fingers and toes.

Consider this a public rethinking. Comments are most definitely open for this post (if I didn’t check the box, let me know and I’ll fix it), or you’re welcome to send me email, start a new thread on one of the social media I frequent (for this topic, Friendfeed or the OA community within Google+ seem most plausible), whatever…

Starting point: open data is generally a good idea

There may be some legitimate arguments against open datasets in general, but I’m not planning to make them here. And as you know (I suspect), I’m generally a supporter of open access; otherwise, I wouldn’t be spending hundreds of unpaid hours doing these investigations and writing them up.

All else being equal, I think I’d probably make the spreadsheet(s) available. I’ve done that in the past (the liblog projects, at least some of them).

But all else is rarely equal.

For example:

  • If a medical researcher released the dataset for a clinical trial in a manner that made it possible to determine the identities of the patients, even indirectly, that would be at best a bad thing and more likely actionable malpractice. Such datasets must be thoroughly scrubbed of identifying data before being released.

But of course, the spreadsheets behind Journals, “Journals” and Wannabes: Investigating The List have nothing to do with clinical trials; the explicitly named rows are journals, not people.

That will also be true of the larger spreadsheets in The Current Project.

How much larger? The primary worksheets in the previous project have, respectively, 9,219 [Beall’s Lists] and 1,531 [OASPA] data rows. The new spreadsheets will have somewhere around 6,779 [the subset of Beall’s Lists that was worth rechecking, but not including MDPI journals], exactly 1,378 [the subset of OASPA journals I rechecked, including MDPI journals], and probably slightly fewer than 3,386 [the new “control group,” consisting of non-medicine/non-biology/non-biomed journals in DOAJ that have enough English in the interface for me to analyze them and that aren’t in one of the other sets] rows—a total of somewhere around 11,543. But I’m checking them more deeply; it feels like a much bigger project.

So what’s the problem?

The spreadsheets I’ve built or am building are designed to allow me to look at patterns and counts.

They are not designed for “naming and shaming,” calling out specific journals in any way.

Yes, I did point out a few specific publishers in the July article, but only by quoting portions of their home pages. It was mostly cheap humor. I don’t plan to do it in the new project—especially since most of the journals in the new control group are from institutions with only one or a handful of journals; I think there are some 2,200 publisher names for 3,386 journals.

This is an important point: The July study did not name individual journals and say “stay away from this one, but this one’s OK.” Neither will the November/December study. That’s not something I’m interested in doing on a journal-by-journal or publisher-by-publisher basis. I lack the omniscience and universal subject expertise to even begin to consider such a task. (I question that anybody has such omniscience and expertise; I know that I don’t.) I offered possible approaches to drawing your own judgment, but that’s about it.

Nor do I much want to be the subject of “reanalysis” with regard to the grades I assigned. (I don’t want angry publishers emailing me saying “You gave us a C! We’re going to sue you!” either—such suits may be idiotic, but I don’t need the tsuris.)

Releasing the full spreadsheets would be doing something I explicitly do not want to do: spreading a new set of journal grades. There is no Crawford’s List, and there won’t be one.

For that matter, I’m not sure I much want to see my numbers revalidated: for both projects, I use approximation in some cases, on the basis that approximation will yield good results for the kind of analysis I’m doing. (I’ll explain most of the approximation and shortcuts when I write the articles; I try to be as transparent as possible about methodology.)

For those reasons and others, I would not be willing to release the raw spreadsheets.

Could you randomize or redact the spreadsheets to eliminate these problems?

Well, yes, I could—but (a) that’s more unpaid labor and, more important, (b) I’m not sure the results would be worth much.

Here, for example, are the data label rows and one (modified) data row from part of the current project:

Pub   Journal       2014   2013   2012   2011   Start   Peak   Sum   Gr   GrF   APC    Note
pos   POS Physics     15     34     14      1    2011     34    64    B         $600

The columns, respectively, show: the publisher code (in this case, Pacific Open Science, a nonexistent—I think—publisher I may use to offer hypothetical examples in the discussion. Their slogan: If an article is in our journals, it’s a POS!); the journal name; the number of articles in January-June 2014, all of 2013, all of 2012, all of 2011; the starting year; the peak annual articles; the sum of the four years; the letter grade; a new “GrF”—the letter grade that journals with fewer than 20 articles per year would get if they had more; the article processing charge for a 10-page article; and any note I feel is needed. (If this was the new DOAJ control group, there would be another column, because hyperlinks were stored separately in DOAJ’s spreadsheet; for the one I chose, “POS Physics” is itself a hyperlink—but, of course, there’s no such journal. Don’t try to guess—the actual journal’s not remotely related to physics.)

I’ll probably add a column or two during analysis–e.g., the maximum annual APCs a given journal could have collected (in this case 34×$600, or $20,400) and, for the new DOAJ group, the subject entry, to allow some further breakdowns.

I could certainly randomize/redact this spreadsheet in such a way that it could be fully re-analyzed—that is, sort the rows on some combination that yields a semi-random output, delete the Pub column, and change the Journal column to a serial number equal to the row. Recipients would have all the data—but not the journal or publisher names. That wouldn’t even take very long (I’d guess ten minutes on a bad day).
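
For the curious, here's roughly what that ten-minute job would look like in code. This is a minimal pandas sketch, not my actual workflow: the file name is made up, and it assumes the column labels from the example row above (Pub, Journal, Peak, APC and so on), with APC stored as a plain number rather than "$600".

```python
import pandas as pd

# Hypothetical file name; the real sheets are ordinary Excel workbooks.
df = pd.read_excel("journals.xlsx")

# The derived column mentioned above: peak annual articles times the APC.
df["MaxAnnualAPC"] = df["Peak"] * df["APC"]

# Sort the rows into a semi-random order...
redacted = df.sample(frac=1).reset_index(drop=True)

# ...then drop the publisher code and replace the journal name with a
# serial number equal to the row.
redacted = redacted.drop(columns=["Pub"])
redacted["Journal"] = range(1, len(redacted) + 1)

redacted.to_csv("redacted.csv", index=False)
```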

Would anybody actually want a spreadsheet like that? Really?

Alternatively, I could delete the Gr and GrF columns and leave the others—but the fact is, people will arrive at slightly different article counts in some significant percentage of cases, depending on how they define “article” and whether they take shortcuts. I don’t believe most journals would be off by more than a few percentage points (and it’s mostly an issue for journals with lots of articles), but that would still be troublesome.

Or, of course, I could delete all the columns except the first two—but in the case of DOAJ, anyone wanting to do that research can download the full spreadsheet directly. If I was adding any value at all, it would be in expanding Beall’s publisher entries.

What am I missing, and do you have great counter-arguments?

As you’ll see in the Friendfeed discussion, I got a little panicky about some potential Moral Imperative to release these spreadsheets—panicky enough that I pondered shutting down the new project, even though I was already about two-thirds of the way through. If I had had these requests when I began the project or was, say, less than 2,000 rows into it, I might have just shut it down to avoid the issues.

At this point, I believe I’m justified in not wanting to release the spreadsheets. I will not do so without some level of randomizing or redaction, and I don’t believe that redacted spreadsheets would be useful to anybody else.

But there are the questions above. Responses explicitly invited.

[Caveat: I wrote this in the Blog Post portion of Word, but it’s barely been edited at all. It’s probably very rough. A slightly revised version may—or may not—appear in the October 2014 Cites & Insights. If there is an October 2014 Cites & Insights.]

Now, back to the spreadsheets and looking at journals, ten at a time…


Added September 3, 2014:

Two people have asked–in different ways–whether I’d be willing to release a spreadsheet including only the journal names (and publishers) and, possibly, URLs.

Easy answer: Yes, if anybody thought it was worthwhile.

There are three possible sheets:

  • The Beall list, with publishers and the publisher codes I assigned on one page, the journals (with “xxind” as a publisher code for Beall’s separate journal list) and publisher codes on another page. All (I believe) publisher names and most but not all journal names have hyperlinks. (Some publishers didn’t have hyperlinked lists I could figure out how to download.) That one might be mildly useful as an expansion of Beall’s publisher list. (This would be the original Beall list, including MDPI, not the new one I’m using for the new study.)
  • The OASPA list, similarly structured and same comments, lacking MDPI (which is in the new one I’m using for the new study).
  • The new “partial DOAJ” list–DOAJ entries that aren’t in medicine, biology or biomed, that have English as a language code and that aren’t–if I got it right–in the other lists. I don’t honestly see how this could save anybody any time, since all it is is a portion of what’s downloadable directly from DOAJ, albeit in May 2014 rather than now.

If someone wants one of these, let me know–waltcrawford@gmail.com. I may not respond immediately, but I’ll either return the sheet you want as an email attachment or, if there’s more than one request, possibly load it at waltcrawford.name or in Dropbox and send you a link.

 

 

Correction in Cites & Insights 14:9

Posted in Cites & Insights on August 26th, 2014

Thanks to the eagle eye of an early reader, I was alerted to an error on page 15, column 2, of Cites & Insights 14:9. While there are almost certainly grammatical and spelling errors in every issue, this one was a math error that changed the significance of the paragraph–and since it was caught so early, I did something I normally never do: I fixed the paragraph, added a “[Corrected 8/26/14]” flag, and reissued the publication.

If you’ve already read it or downloaded it and don’t wish to do so again, here’s the change:

In the paragraph beginning “Most of the university libraries…” (in the subsection “Elsevier journals–some facts”), I managed to reverse the British pounds to dollars calculation. Doing it properly means changing the last two sentences in the paragraph.

What was there originally:

Notably, assuming that a pound is worth $1.70, JISC struck a much harder bargain than American public universities in general: the range is from $7.36 to $49.27, with a mean of $18.45, less than half the mean for U.S. institutions. of course, the package may very well be different.

I was somehow dividing pounds by $1.70 rather than multiplying them. Fixing that yields this text:

Converting to dollars, the range is $21.27 to $142.39 with a mean of $53.37—higher than the U.S. figures except at the low end. [Corrected 8/26/14.]
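
If you want to see the slip in miniature, here's a tiny sketch using approximate pound figures back-computed from the corrected numbers (the actual figures are in the issue itself), showing what dividing instead of multiplying does at $1.70 to the pound.

```python
rate = 1.70                     # assumed dollars per pound
pounds = [12.51, 31.39, 83.76]  # low, mean, high -- approximate, back-computed

wrong = [round(p / rate, 2) for p in pounds]  # what I did: divide
right = [round(p * rate, 2) for p in pounds]  # what I should have done: multiply

print(wrong)  # roughly [7.36, 18.46, 49.27]
print(right)  # roughly [21.27, 53.36, 142.39]
```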

My apologies for the error.

Cites & Insights 14:9 (September 2014) available

Posted in Cites & Insights on August 25th, 2014

Cites & Insights 14:9 (September 2014) is now available for downloading at http://citesandinsights.info/civ14i9.pdf

This two-column print-oriented version is 18 pages.

For those reading C&I online or on an ereader, the single-column 35-page 6×9″ edition is available at http://citesandinsights.info/civ14i9on.pdf

This issue includes:

The Front: Toward 15 and 200: The Report    pp. 1-2

I promised a list of supporters and sponsors and an overall report on the outcome of the spring 2014 fundraising campaign for C&I. Here it is. Oh, there’s also “A Word to the Easily Confused” about the definition of “journal,” the change in the masthead to “periodical” because some folks are easily confused, and the need for consistency when choosing to regard gray literature as worthless.

Intersections: Some Notes on Elsevier  pp. 2-16

A half-dozen subtopics (actually five subtopics and some miscellanea) involving Elsevier that haven’t been covered recently elsewhere in C&I.

The Back  pp. 16-18

Four mini-essays.

 

NOTE: One paragraph on page 15 of this issue includes erroneous (reversed) pounds-to-dollars calculations. Those have been fixed and the issue has been replaced. The net change: JISC did *not* apparently strike a much harder bargain with Elsevier; the UK prices are higher except at the low end, where they’re about the same.

Clarifications

Posted in Cites & Insights on August 14th, 2014

Body of post deleted on the grounds of pointless semi-blind item and why bother?

Natureally, I’m delighted

Posted in Cites & Insights, open access on August 6th, 2014

My name appeared in a Nature news article today (August 6, 2014). Specifically:

The DOAJ, which receives around 600,000 page views a month, according to Bjørnshauge, is already supposed to be filtered for quality. But a study by Walt Crawford, a retired library systems analyst in Livermore, California, last month (see go.nature.com/z524co) found that the DOAJ currently includes some 900 titles that are mentioned in a blacklist of 9,200 potential predatory journals compiled by librarian Jeffrey Beall at the University of Colorado Denver (see Nature 495, 433–435; 2013).

and, later in the piece:

Bjørnshauge says that a small cohort of some 30 voluntary associate editors — mainly librarians and PhD students — will check the information submitted in reapplications with the publishers, and there will be a second layer of checks from managing editors. He also finds it “extremely questionable to run blacklists of open-access publishers”, as Beall has done. (Crawford’s study found that Beall’s apparently voluminous list includes many journals that are empty, dormant or publish fewer than 20 articles each year, suggesting that the problem is not as bad as Beall says.)

Naturally (or Natureally), I’m delighted to have my name show up, and a C&I issue linked to, in Nature. (It didn’t come as a complete surprise: the journalist sent me email asking about my affiliation–none–and, later, where I live.)

I’m not quite as delighted with the slant of that first paragraph (quite apart from the fact that Beall’s lists do not list some 9,200 “potential predatory journals,” they include publishers that publish or “publish” that number of journal names). Namely, I think the story is not that 900 “potentially predatory” journals appear in DOAJ with the loose listing criteria that site formerly used. I think the story is that more than 90% of the journals in DOAJ are not reflected in Beall’s list, given his seeming zeal to target OA journals.

But, of course, it’s the journalist’s story, not mine, and I do not feel I was quoted incorrectly or unfairly. (Incidentally, I don’t have nits to pick with the second paragraph.)

I agree with Bjørnshauge that a blacklist is itself questionable.

Do I believe the much improved DOAJ will constitute a real whitelist? I’m not sure; I think it will be a great starting point. If a journal’s in the new DOAJ, and especially if it has the DOAJplus listing, it’s fair to assume that it’s probably a reasonably good place to be. (But then, I’m no more an expert in what journals are Good or Bad than Beall is.)

Anyway: thanks, Richard Van Noorden, for mentioning me. I hope the mention leads more people to read more about questionable journals than just Beall’s list. I strongly believe that the vast majority of Gold OA journals are as reputable as the vast majority of subscription journals, and I believe I’ve demonstrated that there aren’t anything like 9,200 “predatory” journals out there that are actual journals researchers with actual brains and a modicum of common sense would ever submit articles to.

A few readers may know that I’ve embarked on a related but even more ambitious (or idiotic) project, having to do with volume of articles and adding a new and very different control group. Dunno when (if?) I’ll finish the huge amount of desk work involved and produce some results. I do believe that, among other things, the results may shed some light on the apparent controversy over how prevalent APCs are among Gold OA journals… (And, incidentally, more financial support for C&I wouldn’t hurt this process.)

 

See chapter 1 free; save 15% on Cites & Insights Books

Posted in Cites & Insights on July 17th, 2014

Just a quick note on two semi-related issues:

Big-Deal Serial Purchasing: Tracking the Damage

If your academic library doesn’t already get Library Technology Reports–or even if it does–you might want to look at Chapter 1 of this report, the overview, which will provide a sense of the overall picture. To a great extent, the damage is in the details (covered in the other chapters), since a few very large academic libraries with strong budgetary support make the overall figures look less bad than they otherwise would.

You can view Chapter 1 for free here. You can also purchase other chapters in e-form at that location.

(A reminder: I strongly believe that every academic library and academic librarian in the U.S. should be aware of this report–and I’m not saying that for personal gain: as with all Library Technology Reports, the writer–me, in this case–receives a single payment, so additional royalties don’t enter into it. I’m not complaining, and I’m delighted that LTR and editor Patrick Hogan saw fit to publish this.)

Beyond the Damage: Circulation, Coverage and Staffing–15% off the print version through 7/21/14

I’d like to think that dozens or hundreds of academic libraries/librarians will also find this book useful; it examines changes in the state of academic library circulation (no, it isn’t “down everywhere” or close to it), book coverage, and both professional and overall staffing relative to student population during the 2002-2012 period, using the same divisions of libraries as in the LTR report.

From now through July 21, 2014, you can save 15% on the print version by using the coupon code

DOGDAYS14

at checkout. The coupon code applies to all print books (technically, that’s not true, but it includes all the Cites & Insights print books–there’s a new economy category akin to mass-market paperbacks that isn’t included) in a single order, so it’s also a great time to buy Your Library Is... or one or more of the C&I Annuals.

Cites & Insights 14:8 (August 2014) available

Posted in Cites & Insights on July 15th, 2014

Cites & Insights 14:8 (August 2014) is now available for downloading at http://citesandinsights.info/civ14i8.pdf

The two-column print-oriented issue is 32 pages long. A single-column 6×9″ version designed for online/tablet reading is also available, at http://citesandinsights.info/civ14i8on.pdf   (The single-column version is 61 pages long.)

This issue includes the following:

The Front: Once More with [Big] Dealing   pp. 1-2

If you read the June 2014 issue, you may be aware that “Big-Deal Serial Purchasing: Tracking the Damage” wasn’t available when I thought it would be.

It’s available now; this brief essay offers the link to the ALA Store page for the Library Technology Reports issue and notes the complementary book for those academic librarians with deeper interests.

I believe every academic library should pay attention to this issue of LTR. If your library subscribes, it should be available now (electronically) or in a few days (in print form). If it doesn’t, you should buy the issue as a separate. Some of you really would find Beyond the Damage: Circulation, Coverage and Staffing useful as well.

Words: Doing It Yourself  pp. 2-18

Notes on self-publishing and whether or not it makes sense for you (or for your library to assist with).

Intersections: Access and Ethics 3  pp. 18-32

A range of commentaries having to do with open access and ethics over the past 18 months or so–and a couple of brief followups on previous essays. (You may notice that one Very Large Journal Publisher doesn’t show up much in this essay. Its time will come.)

What’s not here: the list of C&I supporters and sponsors. I’ll add the three names (yes, three) in a later issue.

Thinking About Effectiveness

Posted in Cites & Insights, open access on June 29th, 2014

It’s been roughly three weeks since “Journals, ‘Journals’ and Wannabes: Investigating the List” (Cites & Insights 14:7, July 2014) appeared.

Thanks largely to those who tweeted and retweeted items about it or even blogged about it (you know who you are, and thanks), it’s had reasonably good readership so far: just under 1,400 copies downloaded as of the last time I looked.

That’s not great–less than half the first-month downloads for “Ethics and Access 1: The Sad Case of Jeffrey Beall” (April 2014), although I suppose people could have been hot to read “Forecasts and Futurism” in that issue, but more than the first-month downloads for “Ethics and Access 2: The So-Called Sting” (May 2014, accompanied by “Future Libraries: A Roundup”).

In case it’s not obvious, the July issue was a lot of work, so much so that it can only be justified by whim. Still, I believe the results made it at least partly worthwhile–specifically, the finding (as I interpret it) that most of the vast number of “journals” on Beall’s lists aren’t really predatory, either because they don’t actually exist or because authors who are paying attention wouldn’t submit papers to them anyway. Oh, and the perhaps-more-important finding that the casual assumption, which I’ve seen stated by people who should know better, that most OA journals are sketchy isn’t supported by any facts in evidence, and certainly not by Beall’s list.

So what?

There’s the question. The issue’s been downloaded. I’ll assume it’s been read (never quite a safe assumption, but…)

Will it have any medium-term or long-term impact?

Will people view Gold OA journals a little less cynically?

Will people regard Beall’s efforts as the hobby (or hobbyhorse) they are rather than as indictments of OA in general?

I don’t have answers. It is, of course, awfully early to say. I’m not sure how I would find answers.

But it feels like an important question.

Thoughts?

Library philosophy: the essay(s) that won’t be written

Posted in Cites & Insights on June 14th, 2014

A couple of times, when I’ve expressed frustration over failing to provide something I thought was of real value to libraries (e.g., the series of events that have led me to give up on public library projects in general), my wife–who has been an academic library director, public library cataloger/head of cataloging and more–has suggested:

“Maybe the library world has moved on. Maybe you should do the same. I’m sure the Livermore Friends of the Library could use your help.”

What she’s suggesting is anywhere from cutting back to dropping this stuff entirely.

I’m certainly not ready to do the latter, at least not yet.

On the other hand…

adding value

An ongoing issue for any of my writing and research is that it should add value to the field.

There are two pieces to that: actually doing something that hasn’t been done before, and (enough) people in the field regarding what I do as valuable.

there’s no value if there’s no perception of value

That’s another way to put it. Apparently only a few dozen public libraries/librarians thought my first “Give Us a Buck” effort was valuable…and essentially none thought the second effort was worth even a sawbuck.

Was I doing something that hadn’t been done before? Yes. Was it actually worthwhile–did it actually add value? Apparently not.

which brings us back to library philosophy

After I finished up Cites & Insights 14:7 (entirely original content, and I hope that it’s regarded as added value, but we shall see…) and took a day off entirely, I looked at the kind of thing I mostly do in C&I–that is, take a set of other people’s essays that I’ve given the same tag in Diigo, look them over again, and construct a useful narrative out of the citations and my comments.

Looking over my Diigo library–as of June 10, right around 1,750 tags for (I’d guess) around 1,550-1,600 items–I concluded two things:

  1. I should proceed with Ethics and Access 3, the catchall set of stories that adds to the first two essays.
  2. I should look at some of the tags and see whether I still believe I’m likely to add value.

the first of those is in progress

and going reasonably well, I think. The draft is probably halfway done. It should be a half-issue essay, maybe 8,000 to 10,000 words, and more of a mosaic than either of the first two.

the second…well, here comes library philosophy again

The most frequently used tag in my Diigo library as of June 10, 2014 was “lib-phil,” one of 19 or 20 “lib-” tags. It had 133 items, accumulated over the last four years.

Using my typical methods, that’s not one essay: it would yield about 66,000 words, give or take 15,000, which is at least two and probably three issues. (C&I 14.7 is 17,322 words long; C&I 14.4 and 14.5 together are 55,600 words.)

That’s not important: After all, the Ethics triptych became a three-parter because there were too many items for one essay and I found I could split them easily enough into two neat essays and one mosaic.

What is important, however: I was no longer especially confident that I would be adding any significant value other than “here’s a bunch of neat things you may have forgotten” (and “here’s a bunch of things I disagree with and why I disagree with them”).

I’m not a librarian, either academic or public (or school or special). Pace Chris Bourg, I’m not even a feral librarian: I haven’t worked in a library since 1979, and never worked in a librarianlike role.

I’m not a library philosopher–or, rather, that really is a case where the library world has moved on and I no longer believe I should be trying to influence its overall direction. (I’m not sure I ever really did: Neither Balanced Libraries nor Future Libraries was, in my opinion, a real attempt to change the course of library philosophy so much as to avoid what I regarded as unfortunate course changes.)

going through the articles

So, after writing about half of the Ethics and Access essay, I started going through the lib-phil items, a few at a time. I read part or all of (most of) the items (not all: about one-third had evaporated in the way of the web).

And I either assigned a new tag for a topic where I still thought, perhaps, I could add significant value, or I deleted the tag.

I just finished that process. Two items were retagged (one already had a secondary tag). The rest–131 of them–are gone.

realistic, not sad; one choice, not an overall decision

I found it interesting to reread some of these posts, columns and essays, especially those more than a year old. I read most of the comments as well. (I will admit that I did not make it all the way through two or three posts in the blog/journal hybrid I alternate between admiring and wanting to avoid.)

I also found that–in nearly all the cases–I honestly didn’t think that Walt Crawford had anything especially valuable to add to the stories; that this particular train had left the station.

That’s realistic. It’s not sad.

This was also one choice–the most heavily-populated tag.

It may be a partial decision (I’m less and less likely to believe that pontificating about What Libraries Should Be is a valuable use of my time and energy, either for me or for anyone else–which, of course, won’t stop me from commenting in various social spaces). I’ll look at the other 18-19 “lib-” tags carefully and skeptically.

I noticed the extent to which a few writers kept popping up, and at some point said to myself, “If Chris Bourg or Barbara Fister or Wayne Bivens-Tatum want to do essay collections, that’s up to them.” I dunno whether any or all of them will (there are one or two other names and one pseudonym, but these are the three most obvious cases), but in any case I found myself with little to add other than “Still good stuff. Go read it.” (WBT made things easier by deleting perhaps half of the essays I would have considered. That’s his choice.)

what’s next?

Dunno. I haven’t decided to stop writing, not yet, not entirely.

One minor anecdote: Until two hours before I prepared the final PDFs, the date for C&I 14.7 was July/August 2014, an express statement that I was going to take it easy.

I finally decided that this was pointless. The publication’s already irregular. I don’t intend for Volume 14 to have anything close to as many pages as Volumes 12 and 13, but I expect it will have more pages than Volume 11. (Volumes 11, 12 and 13 total 274, 394 and 398 pages, respectively. Volume 14 to date totals 202 pages–so even four more 20-page issues would take it past Volume 11.)

Partly things will depend on whether there’s any additional support or sponsorship for C&I (three supporters to date: count them, three). Partly things will depend on how recent essays are received and whether I believe they’re having any useful impact. Partly things will depend on whether the (slightly delayed at ALA) Library Technology Reports issue is well-received (and maybe generates at least a few sales of the related book!).

Partly things will depend on going through more of these tag lists, seeing what still makes me feel there’s something worth saying, and seeing where that winds up.

The local Friends group? Yeah, I might get involved (which probably means spending a couple of hours a week helping out at the bookstore). That doesn’t preclude other writing, of course; just cutting back and refocusing.

for that matter, the blog might come back to life

There haven’t been many non-announcement posts. That might change. Maybe.

