Changes for the Better?

March 11th, 2016

Do you have suggestions that will help make Gold Open Access Journals 2011-2015 even better than The Gold OA Landscape 2011-2014?

If so, now’s the time to suggest them—any time between now and May 1, 2016 (the earliest date I’m likely to start working on data analysis and the book manuscript). Suggestions should go to me at

You say you haven’t purchased the book yet, either in paperback or PDF ebook form? You still can, and it will still be worthwhile when the new book comes out.

Alternatively, you can get a good idea of the general approach and tables from the excerpt published as the October 2015 Cites & Insights, although that version lacks any graphs.

I’ve appended pages 39 through 73 of The Gold OA Landscape 2011-2014 to the end of the next Cites & Insights, probably out in late March 2016. That segment includes almost all varieties of tables and graphs used in the book. The online version is an exact replica of the print book; the print (two-column) version is just slightly smaller, so that four pages of the 6×9″ book fit on each 8.5×11″ sheet rather than having loads of waste space.

The Basics

Basically, the data used for analysis includes, for each journal:

  • the year reported to DOAJ (which is not always the start of publication);
  • the country of publication (again, as reported to DOAJ);
  • one of 28 subjects and three broad areas that I’ve derived from the subjects, keywords and journal/article titles;
  • the data I went looking for: whether there’s an author-side fee (usually called an APC or Article Processing Charge, although they’re not all that straightforward) and how much it is, and the number of published articles (and similar items) for each year 2011 through 2015;
  • a two-letter code (or “grade and subgrade”) for special cases, although most journals don’t have special codes.

I also derive some measures: the peak article number during the five years and, if there are APCs, the maximum revenue for 2014 (2015 this time around).
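The two derived measures are simple to compute. Here’s a minimal sketch in Python; the field names and the sample journal are hypothetical, not the actual spreadsheet columns:

```python
# Sketch of the two derived measures: peak annual article count and
# maximum possible APC revenue. Field names here are hypothetical.

def peak_articles(counts_by_year):
    """Highest annual article count across 2011-2015."""
    return max(counts_by_year.get(year, 0) for year in range(2011, 2016))

def max_revenue(apc, article_count):
    """Upper-bound revenue: the listed APC times the year's articles,
    assuming every article paid the full fee (waivers ignored)."""
    return 0 if apc is None else apc * article_count

# Toy journal record; apc=None would mean a no-fee journal.
journal = {"counts": {2011: 12, 2012: 20, 2013: 18, 2014: 25, 2015: 22},
           "apc": 150}
print(peak_articles(journal["counts"]))                      # 25
print(max_revenue(journal["apc"], journal["counts"][2015]))  # 3300
```

Note that “maximum revenue” is deliberately an upper bound: it assumes every article paid the full listed fee.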

Last year, after an overall discussion of maximum revenues, overall article counts, and special cases, I looked at journals by annual article volume for each of the three major areas (which have very different characteristics), fee and revenue levels, starting dates for free and APC-charging journals, and a number of measures by country of publication. I also provided one set of pie charts breaking down free and pay journals by major area.

For each of the three major areas (biomed, STEM, and humanities and social sciences) I looked at cost per article by year, journal and article volume by year (and free percentage of each), revenue brackets for journals, article volume brackets, and APC level brackets. A bar graph showed free and pay articles for each year.

For each subject within an area—using the revenue and article volume brackets appropriate for that area—I showed journals and articles for each year (and free percentage), the free/pay article bar graph, journals by article volume (and percent free), journals and articles by APC range, a line graph showing free and pay journals by starting date, and a table showing the countries with the most published 2014 articles for that subject.

At the end of the book, I provided a few subject summaries—percentage of free journals, percentage of articles in no-fee journals, change in article volume, change in free article volume, journals changing article volume by 10% or more from 2013 to 2014, average APC per paid article and for all articles, median APC per paid article and all articles, and the median, first quartile, and third quartile articles per journal for 2014.
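The quartile figures in those subject summaries are straightforward to reproduce with the standard library. A sketch, using toy data standing in for one subject’s per-journal 2014 article counts:

```python
import statistics

def articles_per_journal_summary(counts):
    """First quartile, median, and third quartile of articles per
    journal, as in the end-of-book subject summaries."""
    q1, median, q3 = statistics.quantiles(counts, n=4)
    return q1, median, q3

# Toy per-journal article counts for one hypothetical subject.
print(articles_per_journal_summary([3, 8, 12, 20, 41]))  # (5.5, 12.0, 30.5)
```

(`statistics.quantiles` requires Python 3.8+; spreadsheet quartile functions may interpolate slightly differently.)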

Data Changes for 2015

There’s another year of data—more journals and more data for existing journals. I’m taking some pains to include more journals (and defining “articles” somewhat more inclusively and, I believe, consistently).

Beyond that, there may be one new category of derived data: a publisher category—breaking journals down into what seem to be five reasonable groups based on what’s in the DOAJ publisher field:

  • Academic, published by universities and colleges, including university presses.
  • Society, published by societies and associations.
  • Traditional*, published by publishers that also publish subscription journals.
  • OA publisher*, published by groups that don’t appear to publish subscription journals (and that publish at least a handful of journals—see notes on the “*” below).
  • Miscellany, everybody else.

About the asterisk on Traditional and OA publisher: there are 5,983 different “publisher names” (that is, distinct character strings in the DOAJ publisher field). That’s more than one “publisher” for every two journals. The vast majority of those, all but 919, publish a single DOAJ-listed journal.

I think it’s reasonable to limit the two “publisher” categories (Traditional and OA) to firms that publish at least a handful of journals, and lump the others in as Miscellany. (If nothing else, it makes this added data feasible.)

What’s a handful? If the cutoff is “five or more,” it involves only 221 publishers in all, accounting for 4,128 journals. If the cutoff is “four or more,” it involves 316 publishers—and, naturally, adds 380 journals for a total of 4,508. Dropping it to “three or more journals” brings us up to 486 publishers and 5,018 journals. I suspect the final cutoff will be either four or five.
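Given the list of DOAJ publisher-field strings, these cutoff tallies are easy to recompute. A sketch with toy data (the real input would be the 10,948 publisher strings):

```python
from collections import Counter

def cutoff_tally(publisher_field, cutoff):
    """Publishers with at least `cutoff` DOAJ-listed journals, and the
    number of journals those publishers account for."""
    counts = Counter(publisher_field)
    qualifying = [n for n in counts.values() if n >= cutoff]
    return len(qualifying), sum(qualifying)

# Toy data: publisher A lists 5 journals, B lists 4, C and D one each.
field = ["A"] * 5 + ["B"] * 4 + ["C", "D"]
print(cutoff_tally(field, 5))  # (1, 5)
print(cutoff_tally(field, 4))  # (2, 9)
```

Journals from publishers below the cutoff would simply fall into the Miscellany category.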

Incidentally, if I add that column, it will be in the anonymized spreadsheet made publicly available at the end of this project. Other than the list of journal titles apparently containing malware, it will be possible for anybody else to replicate any or all of the graphs and numbers in the book.

Probable Changes

I believe it will make sense to devote a chapter to publisher categories—whether there are major differences in article volume, APC charges (existence and amount) and, possibly, domination in some countries.

I’m fairly certain the pie charts will go away: I don’t believe they add enough information to justify the space. I could be convinced otherwise. (Note that the print paperback will, of necessity, be black and white to keep production costs down, so really attractive pie charts aren’t feasible.)

Possible Changes

What else should I consider? Which existing tables and graphs don’t seem especially valuable—and what would work better? (Assume that this year’s book can be larger than last, but not enormously larger.)

I’m open to suggestions, which I’ll discuss with my contacts at SPARC (and I anticipate suggestions from SPARC as well).

I would offer a free PDF version of this year’s book as a reward for good suggestions—but since this year’s PDF version will be free in any case, that’s not much of an incentive.

Gold OA Journals 2011-2015: Grade Changes and an Update

February 10th, 2016

After reviewing the numbers in The Gold OA Landscape 2011-2014 and considering what I can and, more significantly, cannot reasonably ascertain and judge in non-English journals and in short visits to websites, and in consultation with SPARC contacts, I made a number of changes in grades and, as a result, in exclusions.

I did not change the list of subjects and areas, although a few journals may have been assigned new subjects—and, as in the previous study, PLOS One is omitted from subject and area figures but included in overall discussions.

The fundamental meaning of Grade B has changed from “deserves attention” to “might be excluded from DOAJ or in some versions of Open Access.”

Changes in Grade A Subgrades

All subgrades for Grade A have been eliminated. Subgrade C (ceased) is now a subgrade for Grade B. Subgrades D, E, H, O and S—all cases where some year other than the first had fewer than five articles—have been collapsed into Grade B, Subgrade F (few or no 2015 articles) if the article count for 2015 is less than 5 and simply Grade A otherwise.

Changes in Grade B Subgrades

Grade B consists of journals that may or may not belong, either in DOAJ or in a study of open access, depending on your definitions. The old subgrades all have to do with mild visual or editorial issues that now seem as though they’re imposing my own values inappropriately.

There are four new subgrades—two from Grade A and two from Grade X, albeit with different letters.

  • C: Ceased—journals that published at least one article later than 2010 but explicitly ceased during or before 2015, have merged with other journals, or show no articles more recent than 2012.
  • F: Few or no 2015 articles—journals that published at least one article later than 2012 and published fewer than five articles in 2015. (By current DOAJ rules, these are subject to delisting.)
  • R: Conference and other reports—journals consisting entirely or primarily of conference papers and other reports. These were previously excluded, in subgrade XN, as not OA.
  • S: Sign-in or registration required—journals that require some form of registration before reading articles. These were previously excluded, also in subgrade XN, as not OA.

Changes in Grade C Subgrades

Grade C, “avoid this journal,” has been narrowed somewhat, specifically to eliminate subgrades that involve personal judgment or have so few journals that they’re hardly worth noting. Specifically, subgrades E (very bad English), S (incoherent site) and T (absurd article titles—there were almost none of these) have been eliminated, leaving subgrades A (APC missing), F (clear falsehoods), O (mix of problems) and P (implausible peer review turnaround). Briefly, clear falsehoods are statements such as “the leading journal in this field” for a brand-new journal; implausible peer-review turnaround involves promises to complete all peer reviews in a couple of days.

Changes in Grade X Subgrades

Grade X, excluded journals, retains the same subgrades—but the two largest categories within subgrade N (not OA) have been moved to subgrades BR and BS.

A Partial Checkpoint

What are the consequences of these changes? In general, and combined with more exhaustive checking of some difficult situations, they should mean that more journals will be included in the full analysis. As for specific results, those won’t be clear until the project is complete.

I thought it would be worth offering some glimpses into what might be happening at a natural breakpoint: essentially halfway through the first pass of data gathering (actually 5,500 of 10,948).

First pass? Yes indeed. There will be a second pass, beginning no earlier than April 1, 2016, for quite a few of the journals, for various reasons:

  • Many smaller journals, especially in the humanities and social sciences, post online articles and issues with significant delays. In practice, even waiting a year won’t get them all. I’m rechecking all journals that appear to be missing final issues for 2015; this gives them at least three months to get the articles posted.
  • I’m rechecking all journals that couldn’t be reached or that showed signs of malware, as well as those that showed as parking or ad pages or were unworkable.
  • I’ll take a second look at journals excluded for various reasons, trying harder to make sense of opaque cases and translation difficulties, looking more closely for apparently-missing APCs, rechecking whether certain journals are OA or not.

So far, it looks as though I’ll need to recheck about one-fifth of the journals: 1,047 of the first 5,500. I’d be delighted if that percentage goes down in the second half—but I’d also be surprised.

All the rest of these numbers are truly tentative, since review of the journals may change their categorization.

Free and Pay

Some journals have started imposing APCs that they didn’t have previously (one large publisher dropped all of its free introductory periods); some (fewer) have dropped APCs; and some have clarified the nature of their charges.

Overall, among the first half of the journals (those where the fee status is clear), the percentage of no-APC journals dropped from 64.9% to 59.8%: there are more no-fee journals than in the previous study, but there are a lot more APC-charging journals. (There are also, to be sure, more journals overall: about 412 more so far.) There are fewer journals (so far) where there is an APC but it’s hidden.

The Newbies

Most journals that weren’t in the 2014 study are simply A (that is, “nothing special here one way or the other”), but 30 have fewer than five articles in 2015, a few couldn’t be contacted or were unworkable, a handful fall into various other categories—and, unfortunately, nine showed signs of malware.

Neutral Changes

Some changes in grade and subgrade are neutral: they’re just redefinitions. That’s true for the journals that changed from various A grades to BC (ceased explicitly or with no articles later than 2012): there are some 218 BC so far. It’s also true for the various A subgrades that are now simply A (around 230 of them) and for a number of other changes including quite a few moving from B subgrades to A.

Some 300 journals had five or more articles in 2014 but not in 2015, moving them all to BF: some of those will add articles in a recheck.

Changes for the Good

Some 27 journals previously graded CA (APC missing or hidden) now have more clarity (and four changed to various X subgrades).

Quite a few journals with explicit falsehoods on their homepages have been cleaned up—at least 80 of them.

Half a dozen journals flagged for malware no longer seem to have that problem (but see later!).

Most “not OA” entries in the first half have moved elsewhere on re-examination or redefinition, including 35 journals oriented to conference programs (another seven that had been “A” appear to be predominantly conferences and have been moved here) and ten that require registration to read articles. Some two dozen moved elsewhere, including 17 that now appear to be proper OA journals.

Most journals that I previously found too difficult to count (XO) are now handled, and I hope to reduce the number (70 for this half in the previous study is currently down to 28) even further.

Roughly half of the XT (couldn’t understand the site well enough to measure it) cases have been cleared up: so far, there are only three such journals in the first half, and I’ll try all of them again.

Changes for the Bad

A few journals have changed home pages such that I can no longer find an APC (but am sure they have one), but it’s a tiny number.

Some 70 journals that were reachable the last time around are either unreachable or unworkable when I checked this time; they’ll all be rechecked, but it’s unfortunate that there are so many.

Finally there’s the most unfortunate group, in my opinion: journals that now show signs of malware—frequently, I suspect, because they include ad networks that don’t have proper standards. A journal gets flagged for malware if Malwarebytes or McAfee Site Advisor or Windows Defender flags it or some of its components as malware; cases include phishing attempts and deliberate malware downloads. There are now twice as many of these as there were (for this subset of journals) in the previous study, and that’s about 72 too many.

Summing Up

Hundreds of new journals; a much shorter and simpler set of grades; adding literally thousands of peer-reviewed articles that were given as conference papers.

Far fewer journals falling by the wayside because I only read English (thanks, Google!) or because I can’t or am unwilling to count them (with true broadband, I’m willing to open up a dozen PDFs a year to see how many articles there are).

There will still be some approximate counts, but fewer (and better approximations) than last time around.

And, of course, the results will be freely available to everybody. In a few months.

Not quite gone: a short catchall post

February 9th, 2016

Just thought I’d drop a line to say why I’m posting even less than usual, and why that’s likely to continue for a few weeks or months…

You can guess the major reason: Gold Open Access Journals 2011-2015.

I’m trying to do the scan as carefully as possible, and include as many DOAJ-listed journals as possible.

Oh, that’s not all I do: I rarely do any of it after supper, there’s still (some) TV, I’m still reading roughly a book a week and lots of magazines, there’s still the Wednesday hike (or long walk) and the daily 1.3-mile walk around the block. But it takes up a fair amount of time.

Optimistic schedule

If all goes well, I hope to complete the first pass sometime in mid-March. I won’t start the second pass (revisiting a couple of thousand journals where revisits are required or advisable) until early April.

In between, I hope to put together some sort of Cites & Insights issue.

But there’s also a medical situation in late March that could have me out of commission (at least where typing’s concerned) for anywhere from a day or two to several weeks; the day or two is more likely, but you never know. (Benign Schwannoma on the forearm, if you must know: “benign” being the key word.)

Come April, there’s the rescan–a lot fewer journals, but each one will take significantly more time. At least I hope many of them do: part of the revisit is all journals that were unreachable or unworkable or raised malware flags, and I hope a fair number of those don’t have the same exclusionary conditions.

(So far, the only discouraging part of this new project is that too damn many OA journals–not very many in the overall scheme of things, but still too damn many–cause Malwarebytes or McAfee Site Advisor or Windows Defender or, in one case, MS Office to say “do you really want to go there?” I believe that uncontrolled ad sites make up a lot of the problem, but in any case it is simply not acceptable for any journal site to have code that triggers malware warnings. Nor will I ignore the warnings. If I had a dedicated Chromebook, I suppose I could–but that wouldn’t be helpful for others. And yes, I did get a serious bit of malware last time around, and it became clear that at least one other journal was trying to install the same code; that’s why I use Malwarebytes these days.)

I’m guessing I’ll need to take more breaks during the rescan, so there may be more blog posts and activity at Cites & Insights. Then, of course, comes the analysis and writeup… after which I may have a good deal more time. Or not.

Not complaining; I love this. It’s a little triumph each time I can fully analyze a journal I’d left out before, even if it means opening up a dozen PDFs for each of the past five years. At least now I have real broadband, so that’s feasible if annoying. (“Real broadband” as in Comcast, guaranteed 25 Mbps, actual 30 Mbps–as opposed to “Uverse” 1.5 Mbps but dropping entirely once or twice or more a day.)

Still around, still mildly active in various parts of the LSW diaspora, but mostly doing research. And enjoying it.

“Trust Me”: The Other Problem with 87% of Beall’s Lists

January 29th, 2016

Here’s the real tl;dr: I could only find any discussion at all in Beall’s blog for 230 of the 1,834 journals and publishers in his 2016 lists—and those cases don’t include even 2% of the journals in DOAJ.

Now for the shorter version…

As long-time readers will know, I don’t much like blacklists. I admit to that prejudice (or call it a belief): I don’t think blacklists are good ways to solve problems.

And yet, when I first took a hard look at Jeffrey Beall’s lists in 2014, I was mostly assessing whether the lists represented as massive a problem as Beall seemed to assert. As you may know, I concluded that they did not.

But there’s a deeper problem—one that I believe applies whether you dislike blacklists or mourn the passing of the Index Librorum Prohibitorum. To wit, Beall’s lists don’t meet what I would regard as minimal standards for a blacklist even if you agree with all of his judgments.

Why not? Because, in seven cases out of eight (on the 2016 lists), Beall provides no case whatsoever in his blog: the journal or publisher is in the lists Just Because. (Or, in some but not most cases, Beall provided a case on his earlier blog but failed to copy those posts.)

Seven cases out of eight: 87.5%. 1,604 journals and publishers of the 1,834 (excluding duplicates) on the 2016 versions have no more than an unstated “Trust me” as the reason for avoiding them.

I believe that’s inexcusable, and makes the strongest possible case that nobody should treat Beall’s lists as being significant. (It also, of course, means that research based on the assumption that the lists are meaningful is fatally flawed.)

The Short Version

Since key numbers will appear first as a blog post on Walt at Random and much later in Cites & Insights, I’ll lead with the short version.

I converted the two lists into an Excel spreadsheet (trivially easy to do), adding columns for “Type” (Pub or Jrn), Case (no, weak, maybe or strong), Beall (URL for Beall’s commentary on this journal or publisher—the most recent or strongest when there’s more than one), and—after completing the hard work—six additional columns. We’ll get to those.

Then I went through Beall’s blog, month by month, post by post. Whenever a post mentioned one or more publishers or independent journals, I pasted the post’s URL into the “Beall” column for the appropriate row, read the post carefully, and filled in the “Case” column based on the most generous reading I could make of Beall’s discussion. (More on this later in the full article, maybe.)

I did that for all four years, 2012 through 2015, and even January 2016.

The results? In 1,604 cases, I was unable to find any discussion whatsoever. (No, I didn’t read all of the comments on the posts. Surely if you’re going to condemn a publisher or journal, you would at least mention your reasons in the body of a post, right?)

If you discard those on the basis that it’s grotesquely unfair to blacklist a journal or publisher without giving any reason why, you’re left with a list of 53 journals and 177 publishers. Giving Beall the benefit of the doubt, I judged that he made no case at all in five cases (the fact that you think a publisher has a “funny name” is no case at all, for example). I think he made a very weak case (e.g., one questionable article in one journal from a multijournal publisher) in 69 cases. I came down on the side of “maybe” 43 times and “strong” 113 times, although it’s important to note that “strong” means that at some point for some journal there were significant issues raised, not that a publisher is forever doomed to be garbage.

Call it 156 reasonable cases—now we’re down to less than 10% of the lists.

Then I looked at the spreadsheets I’m working on for the 2015 project (note here that SPARC has nothing at all to do with this little essay!)—”spreadsheets” because I did this when I was about 35% of the way through the first-pass data gathering. I could certainly identify which publishers had journals in DOAJ, but could only provide article counts for those in the first 35% or so. (In the end, I just looked up the 53 journals directly in DOAJ.)

Here’s what I found.

  • Ignoring the strength of case, Beall’s lists include 209 DOAJ journals—or 1.9% of the total. But of those 209, 85 are from Bentham Open (which, in my opinion, has cleaned up its act considerably) and 49 are from Frontiers Media (which Beall never actually made a case to include in his list, but somehow it’s there). If you eliminate those, you’re down to 75 journals, or 0.7%: Less than one out of every hundred DOAJ journals.
  • For that matter, if you limit the results to strong and maybe cases, the number drops to 37 journals: 0.33%, roughly one in every three hundred DOAJ journals.
  • For journals I’ve already analyzed (and since I’m working by publisher name, that includes most of these—at this writing, January 29, I just finished Hindawi), total articles were just over 16,000 (with more to come on a second pass) in 2015, just under 14,000 in 2014, just over 10,000 in 2013, around 8,500 in 2012, and around 4,500 in 2011.
  • But most of those articles are from Frontiers Media. Eliminating them and Bentham brings article counts down to the 1,700-2,500 range. That’s considerably less than one half of one percent of total serious OA articles.
  • The most realistic counts—those where Beall’s made more than a weak case—show around 150 articles for 2015, around 200-250 for 2013 and 2014, around 1,000 for 2012 and around 780 for 2011. (Those numbers will go up, but probably not by much. There was one active journal that’s mostly fallen by the wayside since 2012.)

The conclusion to this too-long short version: Beall’s lists are mostly the worst possible kind of blacklist: one where there’s no stated reason for things to be included. If you’re comfortable using “trust me” as the basis for a tool, that’s your business. My comment might echo those of Joseph Welch, but that would be mean.

Oh, by the way: you can download the trimmed version of Beall’s lists (with partial article counts for journals in DOAJ, admittedly lacking some of them). It’s available in .csv form for minimum size and maximum flexibility. Don’t use it as a blacklist, though: it’s still far too inclusive, as far as I’m concerned.

Modified 1/30: Apparently the original filename yields a 404 error; I’ve renamed the file, and it should now be available. (Thanks, Marika!)

Gold Open Access Journals 2011-2015: A SPARC Project

January 22nd, 2016

I’m delighted to announce that SPARC (the Scholarly Publishing and Academic Resources Coalition) is supporting the update of Gold Open Access Journals 2011-2015 to provide an empirical basis for evaluating Open Access sustainability models. I am carrying out this project with SPARC’s sponsorship, building from and expanding on The Gold OA Landscape 2011-2014.

The immediate effect of this project is that the dataset for the earlier project is publicly available for use on and on my personal website. The data is public domain, but attribution and feedback are both appreciated.

Here’s what the rest of the project means:

  • I am basing the study on the Directory of Open Access Journals as of December 31, 2015. With eleven duplicates (same URL, different journal names, typically in two languages) removed and reported back to DOAJ, that means a starting point of 10,948 journals. All journals will be accounted for, and as many as feasible will be fully analyzed.
  • The grades and subgrades have been simplified and clarified, and two categories of journal excluded from the 2014 study will now be included (but tagged so that they can be counted separately if desired): journals consisting primarily of conference reports peer-reviewed at the conference level, and journals that require free registration to read articles.
  • I’m visiting all journal sites (and using DOAJ as an additional source) to determine current article processing charges (if any), add 2015 article counts to data carried over from the 2014 project, clean up article counts as feasible, and add 2011-2014 article counts for journals not in the earlier report.
  • Since some journals (typically smaller ones) take some time to post articles, and since some journals will not be analyzed for various reasons (malware, inability to access, difficulty in translating site or counting articles), I’ll be doing a second pass for all those requiring such a pass, starting in April 2016 or after the first pass is complete. My intent is to include as many journals as possible (although existence of malware is an automatic stopping point), although that doesn’t extend to (for example) going through each issue of a weekly journal only available in PDF form.
  • The results will be written up in a form somewhat similar to The Gold OA Landscape 2011-2014, refined based on feedback and discussion.
  • Once the analysis and preparation are complete, the dataset (in anonymized form) will be made freely available at appropriate sites and publicized as available.
  • The PDF version of the final report will be freely available and carry an appropriate Creative Commons license.
  • A paperback version of the final report will be available; details will be announced closer to publication.
  • A shorter version of the final report will appear in Cites & Insights, and it’s likely that notes along the way will also appear there.

My thanks to SPARC for making this possible.

Dataset for The Gold OA Landscape 2011-2014 now available

January 21st, 2016

I’m pleased to announce that the anonymized dataset used to prepare The Gold OA Landscape 2011-2014 is now available for downloading and use.

The dataset–an Excel .xlsx spreadsheet with two worksheets–includes 9,824 rows of data, one for each journal graded A through C (and, thus, fully analyzed) in the project. Each row has a dozen columns. The columns are described on the second worksheet, “data_key.”

I would love to be able to say that this dataset was now on figshare–but after spending far too much time attempting to complete the required fields and publish the dataset, it appears that the figshare mechanisms are at least partly broken. When (if) I receive assurances that the scripts (which fail in current versions of Chrome, Firefox and Internet Explorer) have been fixed, I’ll add the dataset there–although I’d be happy to hear about other no-fee dataset sharing sites that actually work. (It’s possible that figshare just doesn’t much care for free personal accounts any more: I also note that the counts of dataset usage that were previously available have disappeared.)

Update January 22, 2016: This dataset is now available on (Hat-tip to Thomas Munro.)

As always, the best way to understand the data in this spreadsheet is via either the paperback version or the PDF ebook site-licensed version of The Gold OA Landscape 2011-2014.

Note: This isn’t quite the “Watch This Space” announcement foreshadowed in Cites & Insights 16:2, and it doesn’t mean that sales of the book have suddenly mushroomed. That announcement–which is related to this one–should come in a few days.

By the way, while the dataset consists of facts and is therefore in the public domain, I’d appreciate being told about uses of the spreadsheet and certainly appreciate proper attribution. Send me a note at

I’d also love your suggestions as to ways the presentation in the book could be improved if or when there’s a newer version…leave a comment or, again, send email to

“Trust me”: The Apparent Case for 90% of Beall’s List Additions

January 7th, 2016

I’ve tried to stay away from Beall and his Lists, but sometimes it’s not easy.

The final section of the Intersections essay in the January 2016 Cites & Insights recounts a quick “investigation” into the rationales Beall provided for placing 223 publishers on his 2014 list. Go to page 8: it’s the section titled “Lagniappe: The Rationales, Once Over Easy.” I could find a rationale for condemning the publishers in only 35% of cases.

Perhaps too charitably, I assumed that it was because Beall’s blog changed platforms and he didn’t take the time to restore older posts to the new blog.

Then I noted his 2016 lists–which add 230 (or more) publishers and 375 (or more) independent journals to the 2015 lists. I say “or more” because at least one major publisher has been removed via the Star Chamber Appeal Process, even though Beall continues to attack the publisher as unworthy.

In any case: 605 new listings. My recollection is that there haven’t even been close to 605 posts on Beall’s blog in the past year… but I thought I’d check it out.

The results: As far as I can tell, posts during 2015 include around 60 new publishers and journals. (I may have missed a couple of “copycat” journals, so let’s call it 65).

Sixty or 65. Out of 605.

In other words: for roughly 90% of publishers (most of them really “publishers,” I suspect) and journals added to the list, there is no published rationale whatsoever for Beall’s condemnation.


So if you’re wondering why I regard Beall as irrelevant to the reality of open access publishing (which isn’t all sweetness & light, any more than the reality of subscription publishing), there’s one answer.

Cites & Insights 16:2 (February-March 2016) available

January 2nd, 2016

Cites & Insights 16:2 (February-March 2016) is now available for downloading at

The double issue is 46 pages long.

If you’re reading online or on a tablet or other e-device, you may prefer the single-column 6″x9″ version, which is 89 pages long and available at

The issue includes:

The Front    p. 1

A placeholder of sorts.

Intersections: Economics and Access   pp. 1-46

Embargoes, costs, spending, Lingua/Glossa, flipping and more.


The semi-obligatory “I Still Read Books” post

December 30th, 2015

I started keeping a spreadsheet of books I’d read three or four years ago (OK: January 6, 2011–make that “five years ago”) because I was starting to use the excellent local public library a lot more and, being old, didn’t want to accidentally pick up the same book twice.

As a side-effect, the spreadsheet lets me know how many books I’ve actually read each year.

My target is 39. To wit: The library’s check-out period is four weeks; I always take out three books (one “general” fiction, one nonfiction, one alternating between mystery and fantasy/science fiction). So: 13 four-week periods times three books.

This year, as last year, I managed to pass the target by a comfortable margin: 62(!) books read, assuming I don’t finish the current book before January 1. Or, rather, looking at the spreadsheet more carefully, I started 62 books and finished 59. Three (The Book of Lost Books, The Bite in the Apple, and William Safire’s Take My Word for It) I abandoned partway through.

So: Here are the books I thoroughly enjoyed, giving them full honors:

Thief of Time Terry Pratchett
Pale Kings and Princes Robert B. Parker
Night Watch Terry Pratchett
Monstrous Regiment Terry Pratchett
The Lake, The River & The Other Lake Steve Amick
This Case Is Gonna Kill Me Phillipa Bornikova
Hugger Mugger Robert B. Parker
The Pleasure of My Company Steve Martin
An Object of Beauty Steve Martin
Potshot Robert B. Parker
The Professional Robert B. Parker
Rough Weather Robert B. Parker
1634: The Ram Rebellion Eric Flint
Night Passage Robert B. Parker
Paper Doll Robert B. Parker
A Blink of the Screen Terry Pratchett
The Bromeliad Trilogy Terry Pratchett

and a few others that I enjoyed but didn’t rate quite as high (A- rather than the A for those above):

Waiter Rant The Waiter
The Truth Terry Pratchett
Turtle Recall: the Discworld Companion Terry Pratchett & S. Briggs
Crimson Joy Robert B. Parker
Box Office Poison Phillipa Bornikova
1632 Eric Flint
1633 Eric Flint & David Weber
1634: The Bavarian Crisis Eric Flint & Virginia DeMarce
Now & Then Robert B. Parker
1634: The Baltic War Eric Flint & David Weber
Ring of Fire Eric Flint
Big Trouble Dave Barry
1635: The Eastern Front Eric Flint
True History of the Kelly Gang Peter Carey
Widow’s Walk Robert B. Parker

For those of you saying “Crawford’s got no Serious Literary Taste, he’s in there reading them Robert B. Parker and Terry Pratchett and Eric Flint genre pieces of crap,” I can only say phbttb. I’ve been a sucker for Pratchett since I first encountered Discworld (on a cruise ship, as it happens), and I’m pretty sure I’ve read all the adult Discworld novels and a couple of the nonfiction works (I’ll seek out the rest of the juveniles, and while I’m too damn old to start rereading stuff, it’s hard to let go of the Discworld folks). I’ve always been a fan of Robert B. Parker’s books, except for the fact that they’re so fluid and fast-moving that I finish one in at most three brief evening reading sessions. I’ve been captured by the 1632 alternate history told from the ground up, and that’s the way it is. I’m sure there are a few “serious” books in there. Somewhere.


Something positive for the holidays: A shout-out to OfficeDepot/OfficeMax

December 24th, 2015

As a few of you on LSW Slack may know, we (well, my wife, but I’m spending my time helping her cope with it) have been having more than our share of ComputerWoes this holiday:

  • Her 4-year-old Toshiba Satellite, which she was a couple of months away from replacing, suddenly died when she tried to wake it up from sleep mode.
  • She wants to stick with a 17″ screen (this is her *only* computer) and there aren’t a lot of good choices from brands we semi-trust. A clearance Toshiba model was sold out at our local OfficeMax. We went to Fry’s; they had a more expensive Toshiba that seemed pretty nice. We bought it (and Office 2016–and now realize we probably should have gone for the multiuser Office 365 subscription instead, but that’s a different story).
  • First good OD/OM news: Clark, the computer tech at our local OfficeMax, was able to recover all the data and bookmarks from the broken Toshiba’s hard disk for a very fair price ($50). Since OD/OM also had a great sale on 32GB USB 3.0 flash drives from the brand we both prefer, SanDisk ($9.99, which really is a great price), I purchased a couple of them and gave the tech one to use for the data.
  • But…two days later–day before yesterday–the new Toshiba wouldn’t boot up–power light, wifi light, nothing else. This is after she’d pretty much restored and loaded everything, and was starting to get stuff done again.
  • Took it back to Fry’s. They were actually willing to do a “brain transplant”–swap the hard disk into another Toshiba of the same model–but, ta-da, they’d run out of that Toshiba model. We could drive a long way to another store or… The only other suitable 17″ notebook was a Dell Inspiron: smaller hard disk, less RAM, but an Intel i3 CPU rather than an AMD; same price. So…we made the exchange.
  • ANYWAY: The other OfficeDepot/OfficeMax thing that feels like a seasonal miracle, even though I didn’t need it: Seeing just how much faster USB 3.0 is (and both of our machines–I have an 8-month-old Toshiba Satellite, also a 17″ screen, replacing a 7-year-old Gateway notebook that’s still operational but overheating–now have USB 3.0 ports), we both think we’d like to use USB 3.0 flash drives for backup and have extras. The store was out of the sale units (the sale ends Sunday), but what the heck, if I purchased four of them (for $39.96 plus tax), they’d throw in free delivery.
  • I ordered the flash drives yesterday afternoon, around 3 p.m. Figured they’d arrive midweek next week, the usual 3-5 business days. That’s fine: we don’t need them yet.
  • Half an hour ago–20 hours after I ordered the flash drives with free delivery and no rush anything–there was a knock on the door and a delivery person handed me the box. Which apparently shipped last night at 7:30 p.m. from a Fremont OD warehouse.

A long and odd story, but the shout-out here is: Really? 20-hour FREE delivery when I didn’t even request it? I don’t expect it to happen again, but hey, good for OfficeDepot/OfficeMax.

(I’d always been an OD shopper–Mountain View has a big and very good OD store that was in walking distance of our old house–but it’s OfficeMax in Livermore, and since they’re both really OfficeDepot now, we’ll manage. And their tech support person, Clark, really is great. He looked at my 7-year-old Gateway to see whether it was plausible to replace the noisy fan and keep it as a backup computer. He concluded that the fan noise was a symptom of overheating, showed me the whole situation, said it probably wasn’t worth trying to fix such an old machine…and didn’t charge anything. Now, if only they had the Toshiba 17″ notebooks in stock…)

So: the closest to a Christmas present we’re likely to get (our family doesn’t do presents for adults, a wise decision made decades ago), and always good to deal with user-friendly companies.

And to all…happy holidays, whichever you do or don’t celebrate.