Archive for the 'open access' Category

Open Data, Crowdsourcing, Independent Research and Misgivings

Posted in Cites & Insights, open access on September 1st, 2014

or Why Some Spreadsheets Probably Won’t Become Public

If you think that title is a mouthful, here’s the real title:

Why I’m exceedingly unlikely to make the spreadsheet(s) for my OA journals investigations public, and why I believe it’s reasonable not to do so.

For those of you on Friendfeed, a discussion of precisely this issue began August 26, 2014. The discussion was inconclusive (not surprisingly, partly because I was being a stubborn old goat), and I continued to think about the issues…even as I continued to build the new spreadsheet(s) for the project I hope to publish in the November and December 2014 Cites & Insights, if all goes well, cross several fingers and toes.

Consider this a public rethinking. Comments are most definitely open for this post (if I didn’t check the box, let me know and I’ll fix it), or you’re welcome to send me email, start a new thread on one of the social media I frequent (for this topic, Friendfeed or the OA community within Google+ seem most plausible), whatever…

Starting point: open data is generally a good idea

There may be some legitimate arguments against open datasets in general, but I’m not planning to make them here. And as you know (I suspect), I’m generally a supporter of open access; otherwise, I wouldn’t be spending hundreds of unpaid hours doing these investigations and writing them up.

All else being equal, I think I’d probably make the spreadsheet(s) available. I’ve done that in the past (the liblog projects, at least some of them).

But all else is rarely equal.

For example:

  • If a medical researcher released the dataset for a clinical trial in a manner that made it possible to determine the identities of the patients, even indirectly, that would be at best a bad thing and more likely actionable malpractice. Such datasets must be thoroughly scrubbed of identifying data before being released.

But of course, the spreadsheets behind Journals, “Journals” and Wannabes: Investigating The List have nothing to do with clinical trials; the explicitly named rows are journals, not people.

That will also be true of the larger spreadsheets in The Current Project.

How much larger? The primary worksheets in the previous project have, respectively, 9,219 [Beall’s Lists] and 1,531 [OASPA] data rows. The new spreadsheets will have somewhere around 6,779 [the subset of Beall’s Lists that was worth rechecking, but not including MDPI journals], exactly 1,378 [the subset of OASPA journals I rechecked, including MDPI journals], and probably slightly fewer than 3,386 [the new “control group,” consisting of non-medicine/non-biology/non-biomed journals in DOAJ that have enough English in the interface for me to analyze them and that aren’t in one of the other sets] rows—a total of somewhere around 11,543. But I’m checking them more deeply; it feels like a much bigger project.

So what’s the problem?

The spreadsheets I’ve built or am building are designed to allow me to look at patterns and counts.

They are not designed for “naming and shaming,” calling out specific journals in any way.

Yes, I did point out a few specific publishers in the July article, but only by quoting portions of their home pages. It was mostly cheap humor. I don’t plan to do it in the new project—especially since most of the journals in the new control group are from institutions with only one or a handful of journals; I think there are some 2,200 publisher names for 3,386 journals.

This is an important point: The July study did not name individual journals and say “stay away from this one, but this one’s OK.” Neither will the November/December study. That’s not something I’m interested in doing on a journal-by-journal or publisher-by-publisher basis. I lack the omniscience and universal subject expertise to even begin to consider such a task. (I question whether anybody has such omniscience and expertise; I know that I don’t.) I offered possible approaches to drawing your own judgment, but that’s about it.

Nor do I much want to be the subject of “reanalysis” with regard to the grades I assigned. (I don’t want angry publishers emailing me saying “You gave us a C! We’re going to sue you!” either—such suits may be idiotic, but I don’t need the tsuris.)

Releasing the full spreadsheets would be doing something I explicitly do not want to do: spreading a new set of journal grades. There is no Crawford’s List, and there won’t be one.

For that matter, I’m not sure I much want to see my numbers revalidated: for both projects, I use approximation in some cases, on the basis that approximation will yield good results for the kind of analysis I’m doing. (I’ll explain most of the approximation and shortcuts when I write the articles; I try to be as transparent as possible about methodology.)

For those reasons and others, I would not be willing to release the raw spreadsheets.

Could you randomize or redact the spreadsheets to eliminate these problems?

Well, yes, I could—but (a) that’s more unpaid labor and, more important, (b) I’m not sure the results would be worth much.

Here, for example, are the data label rows and one (modified) data row from part of the current project:

Pub | Journal     | 2014 | 2013 | 2012 | 2011 | Start | Peak | Sum | Gr | GrF | APC  | Note
pos | POS Physics |   15 |   34 |   14 |    1 |  2011 |   34 |  64 | B  |     | $600 |

The columns show, respectively:

  • Pub: the publisher code. In this case that’s Pacific Open Science, a nonexistent (I think) publisher I may use to offer hypothetical examples in the discussion. Their slogan: If an article is in our journals, it’s a POS!
  • Journal: the journal name.
  • 2014, 2013, 2012, 2011: the number of articles in January-June 2014, all of 2013, all of 2012 and all of 2011.
  • Start: the starting year.
  • Peak: the peak annual article count.
  • Sum: the sum of the four years.
  • Gr: the letter grade.
  • GrF: new for this project, the letter grade that journals with fewer than 20 articles per year would get if they had more.
  • APC: the article processing charge for a 10-page article.
  • Note: any note I feel is needed.

(If this were the new DOAJ control group, there would be another column, because hyperlinks were stored separately in DOAJ’s spreadsheet; for the one I chose, “POS Physics” is itself a hyperlink. But, of course, there’s no such journal. Don’t try to guess: the actual journal’s not remotely related to physics.)

I’ll probably add a column or two during analysis: e.g., the maximum annual APCs a given journal could have collected (in this case 34×600, or $20,400) and, for the new DOAJ group, the subject entry, to allow some further breakdowns.

I could certainly randomize/redact this spreadsheet in such a way that it could be fully re-analyzed—that is, sort the rows on some combination that yields a semi-random output, delete the Pub column, and change the Journal column to a serial number equal to the row. Recipients would have all the data—but not the journal or publisher names. That wouldn’t even take very long (I’d guess ten minutes on a bad day).
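For the curious, here’s roughly what that redaction amounts to, written out as a minimal Python sketch rather than the handful of spreadsheet operations I’d actually use. The file names, and the assumption that the worksheet has been exported to CSV, are hypothetical; the column labels match the sample row above.

```python
# Hypothetical sketch only: redact an exported worksheet so it could be
# re-analyzed without identifying journals or publishers.
# File names and column labels are assumptions (taken from the sample row above).
import csv
import random

with open("journals_raw.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

random.shuffle(rows)  # so row order no longer hints at the original sort

# Keep every column except the identifying ones (Pub and Journal).
kept = [c for c in rows[0].keys() if c not in ("Pub", "Journal")]

with open("journals_redacted.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["ID"] + kept)
    for serial, row in enumerate(rows, start=1):
        # Journal name and publisher code become a bare serial number;
        # article counts, grades and APCs survive intact.
        writer.writerow([serial] + [row[c] for c in kept])
```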

Would anybody actually want a spreadsheet like that? Really?

Alternatively, I could delete the Gr and GrF columns and leave the others—but the fact is, people will arrive at slightly different article counts in some significant percentage of cases, depending on how they define “article” and whether they take shortcuts. I don’t believe most journals would be off by more than a few percentage points (and it’s mostly an issue for journals with lots of articles), but that would still be troublesome.

Or, of course, I could delete all the columns except the first two—but in the case of DOAJ, anyone wanting to do that research can download the full spreadsheet directly. If I were adding any value at all, it would be in expanding Beall’s publisher entries.

What am I missing, and do you have great counter-arguments?

As you’ll see in the Friendfeed discussion, I got a little panicky about some potential Moral Imperative to release these spreadsheets—panicky enough that I pondered shutting down the new project, even though I was already about two-thirds of the way through. If I had had these requests when I began the project or was, say, less than 2,000 rows into it, I might have just shut it down to avoid the issues.

At this point, I believe I’m justified in not wanting to release the spreadsheets. I will not do so without some level of randomizing or redaction, and I don’t believe that redacted spreadsheets would be useful to anybody else.

But there are the questions above. Responses explicitly invited.

[Caveat: I wrote this in the Blog Post portion of Word, but it’s barely been edited at all. It’s probably very rough. A slightly revised version may—or may not—appear in the October 2014 Cites & Insights. If there is an October 2014 Cites & Insights.]

Now, back to the spreadsheets and looking at journals, ten at a time…


Added September 3, 2014:

Two people have asked–in different ways–whether I’d be willing to release a spreadsheet including only the journal names (and publishers) and, possibly, URLs.

Easy answer: Yes, if anybody thought it was worthwhile.

There are three possible sheets:

  • The Beall list, with publishers and the publisher codes I assigned on one page, the journals (with “xxind” as a publisher code for Beall’s separate journal list) and publisher codes on another page. All (I believe) publisher names and most but not all journal names have hyperlinks. (Some publishers didn’t have hyperlinked lists I could figure out how to download.) That one might be mildly useful as an expansion of Beall’s publisher list. (This would be the original Beall list, including MDPI, not the new one I’m using for the new study.)
  • The OASPA list, similarly structured and same comments, lacking MDPI (which is in the new one I’m using for the new study).
  • The new “partial DOAJ” list–DOAJ entries that aren’t in medicine, biology or biomed, that have English as a language code and that aren’t (if I got it right) in the other lists. I don’t honestly see how this could save anybody any time, since it’s just a portion of what’s downloadable directly from DOAJ, albeit as of May 2014 rather than now.

If someone wants one of these, let me know–waltcrawford@gmail.com. I may not respond immediately, but I’ll either return the sheet you want as an email attachment or, if there’s more than one request, possibly load it at waltcrawford.name or in Dropbox and send you a link.


Natureally, I’m delighted

Posted in Cites & Insights, open access on August 6th, 2014

My name appeared in a Nature news article today (August 6, 2014). Specifically:

The DOAJ, which receives around 600,000 page views a month, according to Bjørnshauge, is already supposed to be filtered for quality. But a study by Walt Crawford, a retired library systems analyst in Livermore, California, last month (see go.nature.com/z524co) found that the DOAJ currently includes some 900 titles that are mentioned in a blacklist of 9,200 potential predatory journals compiled by librarian Jeffrey Beall at the University of Colorado Denver (see Nature 495, 433–435; 2013).

and, later in the piece:

Bjørnshauge says that a small cohort of some 30 voluntary associate editors — mainly librarians and PhD students — will check the information submitted in reapplications with the publishers, and there will be a second layer of checks from managing editors. He also finds it “extremely questionable to run blacklists of open-access publishers”, as Beall has done. (Crawford’s study found that Beall’s apparently voluminous list includes many journals that are empty, dormant or publish fewer than 20 articles each year, suggesting that the problem is not as bad as Beall says.)

Naturally (or Natureally), I’m delighted to have my name show up, and a C&I issue linked to, in Nature. (It didn’t come as a complete surprise: the journalist sent me email asking about my affiliation–none–and, later, where I live.)

I’m not quite as delighted with the slant of that first paragraph (quite apart from the fact that Beall’s lists do not list some 9,200 “potential predatory journals”; rather, they include publishers that publish or “publish” that number of journal names). To my mind, the story is not that 900 “potentially predatory” journals appear in DOAJ under the loose listing criteria that site formerly used. The story is that more than 90% of the journals in DOAJ are not reflected in Beall’s list, given his seeming zeal to target OA journals.

But, of course, it’s the journalist’s story, not mine, and I do not feel I was quoted incorrectly or unfairly. (Incidentally, I don’t have nits to pick with the second paragraph.)

I agree with Bjørnshauge that a blacklist is itself questionable.

Do I believe the much-improved DOAJ will constitute a real whitelist? I’m not sure; I think it will be a great starting point. If a journal’s in the new DOAJ, and especially if it has the DOAJplus listing, it’s fair to assume that it’s probably a reasonably good place to be. (But then, I’m no more an expert in which journals are Good or Bad than Beall is.)

Anyway: thanks, Richard Van Noorden, for mentioning me. I hope the mention leads more people to read more about questionable journals than just Beall’s list. I strongly believe that the vast majority of Gold OA journals are as reputable as the vast majority of subscription journals, and I believe I’ve demonstrated that there aren’t 9,200 “predatory” journals out there that are actual journals to which researchers with actual brains and a modicum of common sense would ever submit articles.

A few readers may know that I’ve embarked on a related but even more ambitious (or idiotic) project, having to do with volume of articles and adding a new and very different control group. Dunno when (if?) I’ll finish the huge amount of desk work involved and produce some results. I do believe that, among other things, the results may shed some light on the apparent controversy over how prevalent APCs are among Gold OA journals… (And, incidentally, more financial support for C&I wouldn’t hurt this process.)


Thinking About Effectiveness

Posted in Cites & Insights, open access on June 29th, 2014

It’s been roughly three weeks since “Journals, ‘Journals’ and Wannabes: Investigating the List” (Cites & Insights 14:7, July 2014) appeared.

Thanks largely to those who tweeted and retweeted items about it or even blogged about it (you know who you are, and thanks), it’s had reasonably good readership so far: just under 1,400 copies downloaded as of the last time I looked.

That’s not great: less than half the first-month downloads for “Ethics and Access 1: The Sad Case of Jeffrey Beall” (April 2014)–although I suppose people could have been hot to read “Forecasts and Futurism” in that issue–but more than the first-month downloads for “Ethics and Access 2: The So-Called Sting” (May 2014, accompanied by “Future Libraries: A Roundup”).

In case it’s not obvious, the July issue was a lot of work, so much so that it can only be justified by whim. Still, I believe the results made it at least partly worthwhile–specifically, the finding (as I interpret it) that most of the vast number of “journals” on Beall’s lists aren’t really predatory, either because they don’t actually exist or because authors who are paying attention wouldn’t submit papers to them anyway. Oh, and the perhaps-more-important finding that the casual assumption, which I’ve seen stated by people who should know better, that most OA journals are sketchy isn’t supported by any facts in evidence, and certainly not by Beall’s list.

So what?

There’s the question. The issue’s been downloaded. I’ll assume it’s been read (never quite a safe assumption, but…)

Will it have any medium-term or long-term impact?

Will people view Gold OA journals a little less cynically?

Will people regard Beall’s efforts as the hobby (or hobbyhorse) they are rather than as indictments of OA in general?

I don’t have answers. It is, of course, awfully early to say. I’m not sure how I would find answers.

But it feels like an important question.

Thoughts?

Which new would-be journals are worth helping out?

Posted in open access on June 17th, 2014

This question was raised–not at all in those words–by a thoughtful reader of Journals, “Journals” and Wannabes: Investigating the List. Noting that six out of ten journals from The Lists were totally empty (but possibly brand new), essentially empty, or had only a few articles, this person wondered when it would make sense to submit an article (or join an editorial board), given my conclusion that–for most authors–ignoring these “journals” and wannabes was most reasonable.

I thought about that, and I’ve prepared a tentative draft commentary, one that appears at the end of “Ethics and Access 3,” scheduled to appear in the August or September 2014 Cites & Insights.

But of course I’m no expert: I’m not a traditional scholar, tenure has never been an issue, etc., etc.

So I’m asking:

What are your suggestions?

Given a new or not-yet-established journal, what would you look for as positive or negative indicators for possible submission or participation (beyond the usual red flags)?

I think this may devolve into three subcategories:

  • Subscription and hybrid journals (I’m not ready to distinguish between those)
  • APC-charging Gold OA journals
  • No-fee Gold OA journals

I believe the bar is significantly lower for the third category than for the first two. Given the sheer number of journals out there already, I believe the bar for the first two should be fairly high–a big part of that bar being “Why do we need another journal on X?”

Comments? Either below or via email to waltcrawford at gmail.com

Comments are most useful by July 7, as I revise that essay (or scrap it). Unless you indicate that your suggestions are offered only as background, comments or email will be treated as quotable with attribution.

Thanks!

Cites & Insights July 2014 (14:7) available

Posted in Cites & Insights, open access on June 9th, 2014

Cites & Insights 14:7 (July 2014) is now available for downloading at http://citesandinsights.info/civ14i7.pdf

That URL is for the traditional two-column print-oriented ejournal. If you plan to read the journal on a computer, a tablet or other e-device (and if you plan to follow links), you’re much better off–especially in this case–downloading the single-column online-oriented version at http://citesandinsights.info/civ14i7on.pdf

[Links may not work from the two-column version. Conversely, some boldface may not show up in the one-column version. This issue has two dozen tables, some of which have smaller type in the two-column version, making the one-column version easier to read.]

The two-column version is 24 pages long. The single-column 6×9 version is 45 pages long.

The issue consists of a single essay, all original material (except for a few excerpts from publisher pages):

Intersections
Journals, “Journals” and Wannabes: Investigating the List (pp. 1-24)

Jeffrey Beall’s 4P (potential, probable, possible predatory) publisher and journal lists total 9,219 journals in early April 2014.

The Directory of Open Access Journals (DOAJ) totals 9,822 journals as of early June 2014.

9,219 is 93.9% of 9,822.

But: 90.8% of the journals in DOAJ are not represented in Beall’s lists.

A paradox? Not really.

This special issue does something I don’t believe has ever been done before (and is unlikely ever to be done again): looks at every journal from every publisher on Beall’s lists to see whether they’re plausible predators–whether they could reasonably attract any sensible author.

Yes, I even used a control group: members of the OASPA. And two subject groups from DOAJ as secondary control groups.

What’s here? A discussion of my methodology (of course); the results; the control-group results; the subject-group results; some notes on “the name game” (anyone want to help start up International Journal of International Journals?); a few notes from some “publisher” sites; some comments on fee vs. free; discussing real and possible predators–and a list of potentially predatory characteristics of subscription journal publishers; a couple of other issues; and some conclusions, including a new and faster “Is this a reasonable journal?” methodology.

If you read C&I 14:4 or 14:5 (and thousands of you did), I believe you must read this issue, the product of months of research and analysis.


Update, later on June 9, 2014: Someone reading the essay carefully might ask why I didn’t just do a mechanical comparison of all journal names I derived from the Beall lists against the DOAJ list, instead of looking up publishers and journals.

I tried that. Differences in the way names are presented by publisher sites and by DOAJ meant that an Excel VLOOKUP yielded only 272 matches, mostly MDPI journals (which typically have short, distinctive names). The method I used, if less automated, was more productive.
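If you’re curious what that mechanical comparison looks like in practice, here’s a rough sketch of an exact-match approach (in Python rather than Excel), with a normalization step that a plain VLOOKUP doesn’t give you. The file and column names are hypothetical; this is an illustration of the idea, not the method I actually used, and even with normalization, variant forms of a title keep the match rate low.

```python
# Hypothetical sketch only: exact title matching after light normalization,
# to illustrate why a mechanical comparison catches so few titles.
# File and column names are assumptions, not the actual worksheets.
import csv
import re

def normalize(title):
    """Lower-case, strip punctuation and collapse whitespace so minor
    formatting differences don't block an otherwise-identical match."""
    title = re.sub(r"[^a-z0-9 ]+", " ", title.lower())
    return " ".join(title.split())

def load_titles(path, column):
    with open(path, newline="", encoding="utf-8") as f:
        return {normalize(row[column]) for row in csv.DictReader(f)}

beall_titles = load_titles("beall_journals.csv", "Journal")
doaj_titles = load_titles("doaj_journals.csv", "Journal Title")

print(len(beall_titles & doaj_titles), "titles match after normalization")
```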

The steakhouse blog

Posted in Cites & Insights, open access on March 13th, 2014

When I finished editing “Ethics and Access 1: The Sad Case of Jeffrey Beall,” the lead essay in the April 2014 Cites & Insights*, I didn’t worry about the fact that I failed to reach clear conclusions about Beall or his list or blog. As with most essays of this sort, I was trying to paint a picture, not come up with a Declaration of Belief.

But I did think about why I found the situation so troubling–especially since it was and is clear that many librarians continue to assume that Beall is a reliable and worthy source. Last night, it came to me.

The steakhouse blog

Let’s say someone with some credentials as a judge of good meat starts a blog called Steakhouses. (If there is such a blog, this has nothing to do with it: I didn’t check.**) It gets a fair amount of readership and acclaim, even though every post on it is about bad steakhouses. After a while, there’s even a Bad Steakhouse List as a page from the blog.

Some people raise questions about the criteria used for judging a steakhouse to be bad, but lots of people say “Hey, here’s a great list so we can avoid bad steakhouses.”

The big reveal

After a couple of years, the author of the blog–who continues to be judge and jury for bad steakhouses–writes an article in which he denounces all meat-eaters as people with dire motives who, I dunno, wish to force other people to eat steak.

I will assert that, to the extent that this article became well known and the blog author didn’t deny writing it, the Steakhouse blog would be shunned as pointless–after all, if the author’s against all meat-eaters, why would he be a reliable guide to bad steakhouses?

Bad analogy?

So how exactly are the Scholarly Open Access blog and Beall’s List different from the Steakhouse blog and Bad Steakhouse List? And if they’re not, why would anybody take Beall seriously at this point?

Note that dismissing the Steakhouse blog and the Bad Steakhouse List as pointless does not mean saying “there are no bad steakhouses.” It doesn’t even mean abandoning the search for ways to identify and publicize bad steakhouses. It just means recognizing that, to the Steakhouse blog author, all steakhouses are automatically bad, which makes that author useless as a judge.


Full disclosure: I haven’t been to a steakhouse in years, and I rarely–almost never, actually–order steak at restaurants. I am an omnivore; different issue.

*Just under 2,900 downloads as of right now. Amazing.

**I’ve now done some crude checking. There are a number of blogs that include “Steakhouse” in their titles, but I don’t find a Steakhouse blog as such, I don’t find a “Bad Steakhouse List,” and the blogs about steakhouses that I did find don’t appear to be uniformly anti-steakhouse.

Getting it wrong

Posted in open access on January 2nd, 2014

An open letter to a whole bunch of people talking about OA as though they know something about it:

If you use the phrase

The gold (author pays) open-access model

you should just stop right there and maybe actually learn something about OA.

A higher percentage of subscription-based journals have article processing charges than do gold OA journals, at least the last time anybody who cared about facts checked.

But if your intention is to scare people away from gold OA and OA in general, I guess facts don’t much matter.


Open access, advocacy, extremism and attention: A casual note

Posted in open access on December 9th, 2013

For a long time I viewed myself as an open access (henceforth OA, because that’s what I’m talking about) independent/observer: Not really involved in the “movement” but noting developments, commenting from time to time and–once in a while–indulging in a little “curse on both your houses” when it seemed necessary.

More recently, I found that I was gradually moving from independent to advocate–but I’m beginning to think that’s wrong, for a couple of reasons:

  • While I did write what’s still a key book on OA (Open Access: What You Need to Know Now), and while that book could be considered OA advocacy–it’s certainly not entirely neutral–I’m not in the trenches day-in and day-out responding to critics and espousing all forms of OA. I’m no Peter Suber or Michael Eisen. I’m also no (insert list of effective OA advocates here).
  • For whatever reasons–possibly lack of institutional affiliation, possibly lack of single-minded 100% support of any and all OA models and approaches, possibly not being either a scientist or an academic, possibly, I dunno, being a crappy writer–I find that I’m not really a significant part of the conversation. With relatively few exceptions (Peter Suber being a primary one), my contributions to the discussion are largely ignored, especially outside the library field. And even within the library field, I’d bet that J. Beall gets 10 times the attention and credibility that I do–and 10 times may be too conservative.

Maybe I’m overstating the second one. Maybe it doesn’t matter.

For various reasons, I don’t have readership numbers for Cites & Insights for most of 2013. My sense is that the three OA issues that year were not widely read, and they certainly haven’t been widely referenced, at least from what I can see using Google searches.

It probably doesn’t help that one of the Great Men of OA labeled me an enemy of OA years ago, and as far as I know has never retracted that absurd charge. Yes, I’ve criticized that particular Great Man for what I consider an extremist view of OA and his frequent attempts to undermine forms of OA that differ from his own. I don’t apologize for that. (I’ve also criticized one of the would-be Great Women of OA for a form of extremism, for that matter, mostly having to do with appropriate CC licenses. I don’t apologize for that either.)

I had another jotted note toward a future post about escalating definitions of openness, another form of OA extremism that I find troubling (in that it makes it easier to oppose OA or ignore it entirely). I might yet write that…or maybe not.

So maybe I’m not really becoming an OA advocate. Of course I believe it’s important (and it’s fair to note that I was writing about it–and engaged in it–long before the term existed). Of course I’ll note it where it matters (e.g., in talking about possible solutions for the damage done by the big deal). And no, I’m not saying “screw it: I’m walking away from OA” again. That’s silly; while I may not have much of an audience or much credibility, I still have a little–and there continue to be some interesting aspects of OA to write about.

Still… My Soros funding still hasn’t come through (nor have I ever requested it); I don’t have a “cushy job” or any job at all to fall back on; I’m not sure I’m willing to plow through all the BS from extremists, both those opposed to OA and those favoring The One True Way; and, well, this is one area where:

  • There are some eloquent voices who do have some credibility.
  • I sometimes feel as though writing in this area is mostly a waste of time.
  • It’s clear that I have no basis for direct OA advocacy.

So, back to being an observer–not really an independent, but not really an advocate. To attempt otherwise appears to be beating my head against a wall of gelatin: Not bloody-making but basically pointless.


Maybe I should write something about OA

Posted in Cites & Insights, open access on November 4th, 2013

Purely an incidental comment…

I used to write a fair amount about open access–in particular about how it related to libraries.

Enough so that in 2010 I self-published Open Access and Libraries: Essays from Cites & Insights 2001-2009. (That link is to the $17.50 paperback; the PDF ebook is absolutely free.)

The paperback is 513 pages long and includes 33 essays and an introduction. (It’s incomplete: it only includes whole essays on OA, not discussions of OA within other essays.)

I put it together partly because I’d sort of given up writing about OA at that point, partly because I didn’t think I was being heard at all, partly because more knowledgeable people and those with much larger voices were covering it so extensively.

Indeed, there were no essays specifically about OA in Cites & Insights during 2010, 2011 or 2012–although one could certainly argue that one or both of the essays in the December 2012 issue were pretty closely related to OA.

On the other hand, I did produce a compact book for ALA Editions in 2011, Open Access: What You Need to Know Now. It’s still available; I believe it’s still useful.

Then came 2013

In January 2013, I devoted most of the issue (90%) to “Catching Up with Open Access.”

In February 2013, I devoted most of the issue (>90%) to the second half of that essay.

And stuff kept happening that I thought was worth tagging for discussion…enough stuff so that I devoted nearly all (98+%) of the June 2013 issue to “Hot Times for Open Access.”

Adding it up

Just for fun (and because I could do it in three or four minutes), I thought I’d see what those essays–the ones in December 2012, January 2013, February 2013 and June 2013–would amount to if I were doing a second volume of Open Access and Libraries.

Three hundred and forty pages. Well, that’s without copyfitting. With copyfitting, it would probably come out to as little as 330-334 pages. Plus an introduction, table of contents and (maybe?) an index (but an index would be at least 10-12 pages).

In other words, by at least one measure, I’ve devoted almost precisely two-thirds as much space to open access since December 2012 as I did from 2001 through 2009. It comes out to about 126,000 words.

I don’t (currently) plan on doing such a second volume, partly because I don’t (currently) plan on abandoning OA coverage as a small voice grumbling in the wilderness, but even now it would be a fairly thick paperback.

211

That’s the number of items currently tagged “OA” in Diigo, all of them items I have not yet written about. Dunno when I will. One significant chunk of that gets me a pleasant enough earworm of a particular Scott Joplin rag…

No deeper significance.


Go read this.

Posted in open access, Stuff, Writing and blogging on August 15th, 2013

Dorothea Salo has a new article out in the Journal of Librarianship and Scholarly Communication.

You should read it, especially if:

  • You care about open access
  • You care about scholarly communication in academic institutions
  • You would like to see a healthy future for scholarly communication and for scholars, including independent scholars
  • [This bullet removed as, well, a spoiler for those who don't read thoughtfully.]

The title: “How to Scuttle a Scholarly Communication Initiative.”

The remarkable thing about this article is that it appears to have been used as a blueprint by any number of institutions before it was published.

One consequence of Salo’s article: My planned article-in-installments, “How not to be the expert,” a series of autobiographical musings, may be postponed indefinitely. Once you’ve seen a master at work, it’s easy to recognize one’s own limitations. But that’s me. For you: Go read it. Now.


