Is the quality of humanities book production falling?

[I review a lot of books, and have noticed a falling-off in the quality of the published text in the kind of humanities books I review. In this post I ask: is this a general trend, or have I been unlucky? If it is a trend, what are the causes?]

Part of the debate about Open Access centres on the value that traditional academic publishers contribute to the process, for which they should rightly be rewarded. The argument has played out something like this. By and large, scholars produce academic writing without significant financial reward; they edit journals, similarly without payment, as well as performing peer review. This much we know, and it hasn’t changed all that much.

Ah yes (is the response): but the publisher’s unique input is in the stages after this – in transforming an accepted manuscript into a pristine typeset version of record, and in the marketing and distribution of that article to the readers who want to read it. This is the value added to the process.

I don’t need to elaborate here on the impact of online delivery and Open Access on the last of these. The tools that social media provide to journal editors and authors to market their own work have levelled that part of the field greatly as well. No; here I’m interested in the value that publishers add to the process in copy-editing, and then in the production and correction of typeset proofs, and then versions of record.

I do a good deal of book reviewing. I like it; it forces me to read the book properly. But in recent months and years, I have needed to draw attention to several books in which the standard of production has dropped to an alarmingly low level.

Most academics could write better, and if someone were to argue that the standard of written English in academic work has been dropping, I wouldn’t argue with them. And so there is probably more that authors and their peer reviewers could do to present better copy. Errors of fact must remain an academic responsibility. However, I am simply seeing far too many basic errors making it into print.

Which errors? Some are typos or errors of spelling; others are in spacing and formatting, such as missing italicisation, or extra or missing whitespace. I’ve also seen phantom footnote markers that lead nowhere; other pages show signs of an amendment half-made, with some of the debris left behind, resulting in nonsense.

I won’t name names of publishers here, although readers would be able to find plenty of evidence elsewhere in this blog. I would simply like to start a debate on two issues.

Firstly: is my experience matched by that of anyone else? It may not be, and I should be delighted to be told that all is well, and that I have simply been unlucky.

Secondly: if my experience is matched by that of others, why might this be? There are several possibilities:

(i) are the manuscripts that authors submit getting messier, so that more errors slip through the net (a simple matter of probability)?
(ii) are some copy editors being less careful? During copy-editing I recently made some major changes to a paragraph of mine, tracking changes in Word. I can be pretty sure that the editor did nothing more than accept all my changes with a single click, since the errors in the proofs were the kind you miss in the mass of red when changes are tracked this way. Had they been accepted one by one, they could not have got through.
(iii) are copy editors doing less work (a slightly different point)? That is, is the apparent crisis in the business model for monographs and edited volumes in the humanities such that publishers simply can’t afford to do as much work as they might and still turn a profit?
(iv) or, finally, are time-poor academics failing to check proofs properly?

I have no answers to this; but I’m sure we need to be asking the question.

The bits-and-pieces time management method

Over the last year I’ve become interested in finding ways of being more productive. This has in part been forced on me by a change of job (becoming what I’ve called an interstitial scholar) with a greatly lengthened commute; but it has its own intrinsic interest, if kept under control. And I seem to have settled into a way of managing time and energy which hacks together two-and-a-half existing approaches. Since it’s a New Year, full of good intentions, I thought it worth sharing, just in case any element of it helps anyone else as much as it does me.

By Sun_Ladder at Wikimedia Commons

The element I found first was the Pomodoro Technique. At the beginner level, this is an incredibly simple way of managing one’s attention by dividing it up into short bursts of 25 minutes (the Pomodori), each followed by a five-minute break. These are grouped again into cycles of four Pomodori-plus-breaks, each cycle separated from the next by a longer break. I don’t propose to go into the detail here; but the principle is that the 25-minute period roughly matches most people’s capacity for sustained concentration on a single task.
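For anyone who likes to see a structure written out, the cycle just described can be sketched in a few lines of Python. This is purely illustrative: the 20-minute long break is my assumption (the technique allows a range), and nothing about the method requires code.

```python
# Toy sketch of one Pomodoro cycle: four 25-minute work bursts,
# each followed by a break, with the fourth break a longer one.
# The 20-minute long break is an assumed figure, not canonical.
WORK, SHORT_BREAK, LONG_BREAK = 25, 5, 20
POMODORI_PER_CYCLE = 4

def pomodoro_cycle():
    """Return the (phase, minutes) schedule for one full cycle."""
    schedule = []
    for n in range(1, POMODORI_PER_CYCLE + 1):
        schedule.append(("work", WORK))
        is_last = n == POMODORI_PER_CYCLE
        schedule.append(("break", LONG_BREAK if is_last else SHORT_BREAK))
    return schedule
```

One full cycle therefore contains 100 minutes of concentrated work and 35 minutes of breaks.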

I, like many people, used to sit down and simply plough through a task for two or three hours at a stretch, taking the time spent in a sitting position as a proxy measure of concentrated effort. However, the evidence of what I actually got done, before and after, showed me how ineffective that approach had been. The technique only works, however, if once you press the button on the timer (yes, you need a timer, available as a free app in Chrome), you do not stop, or allow yourself to be distracted by anything at all other than the task you planned. The five-minute breaks feel like a long time, and the Pomodori are merciless, but it has worked excellently for me.

Of course, the technique presupposes that you know which tasks you should best be doing at the moment you start the timer. Like most people, I had been used to keeping a to-do list, and also to making plans for large projects, but had never found a successful way of connecting the two processes. This tended to mean that my to-do list was never empty, and was often full of tasks that hung around, and which it was difficult to prioritise until they became urgent. And the big plans? Because they were too general, I would have no real sense of how they were progressing until a deadline was on top of me, by which time it was rather too late.

Killing time at Belfast City Airport, I happened upon 18 Minutes by Peter Bregman, which I had finished before I arrived back at Gatwick. Two particular elements of Bregman’s approach to “getting the right things done” have been successfully bolted onto the Pomodoro Technique. First was the discipline of an annual review, not of individual projects but of priorities, which I applied to everything over which I had that kind of control. And so it includes the plans for academic research in the next twelve months, along with plans for the house, the garden, learning some Spanish and so on. Once you have decided what is important for a period of time, it becomes much easier to resist the next tempting opportunity to write for someone else for free.

The second element of Bregman’s approach is where the 18 minutes comes in. Twice a day – once on the train in the mornings, and again on the way home – I look at the day as a whole. In the morning I look at the to-do list and schedule the things for that day; and at the end of the day I look back, see what went well, and decide on the priorities for the next day or two. Describing it this way sounds neater than it often is in practice, and Bregman gives some useful advice on how to manage the to-do list (including, very importantly, when to delete tasks).

The most recent plank is one of my own devising, building on elements of the Pomodoro. After a while, I began to get a reasonable idea of how much effort certain tasks tended to take. This is mostly because the Pomodoro is not a measure of time spent, but of attention spent, which I find to be a more accurate yardstick. I know that I can write about 200 words per Pomodoro, and read a certain number of pages of typical academic writing (say, for a review). And I know that in a typical week I can fit in a certain number of Pomodori (mostly on the train). And so it was a short step from there to a simple planning sheet in which the priorities set each year are broken down into projects, which are then broken into tasks, with a certain number of Pomodori allocated to each task in each month. So far, the predictions more or less match the actual attention spent; and when they don’t, there is a means of rescheduling without throwing another project into difficulty.
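The arithmetic behind the planning sheet can be made concrete with a toy example. The task names, word counts and the weekly capacity of 20 Pomodori below are all illustrative figures, not taken from my own sheet; only the 200-words-per-Pomodoro rate comes from the text above.

```python
# Toy version of the planning sheet: size each task in Pomodori,
# then check the total load against weekly capacity.
WORDS_PER_POMODORO = 200   # writing rate mentioned above
POMODORI_PER_WEEK = 20     # hypothetical weekly capacity

tasks_in_words = {"book review": 2000, "article draft": 3000}

# Convert each task from words to Pomodori of attention.
tasks_in_pomodori = {name: words // WORDS_PER_POMODORO
                     for name, words in tasks_in_words.items()}

# How many weeks of capacity do these tasks consume?
weeks_needed = sum(tasks_in_pomodori.values()) / POMODORI_PER_WEEK
```

With these figures, the two tasks come to 25 Pomodori, or a week and a quarter of capacity; the same sums make it obvious when a month is overbooked and something must be rescheduled.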

Most recently I came across the notion of decision fatigue: the idea that the cognitive resources we have in any one day for concentrated thought and good decision-making are finite. I’ve made sure that not all my time is booked, since in the course of a working week I need some time in which I can plough through the more routine work that doesn’t demand much heavy cognitive lifting. I’m finding it very useful to think in terms of packets of concentrated mental effort, rather than in terms of time.

Are there any wider applications of this? One may be that no single complete off-the-shelf time management system is likely to fit any one person perfectly; and so rather than struggling to stay with the program 100%, it may be worth picking and choosing elements of different systems to suit your own needs. This process of hacking these systems together has taken me more than a year, and so there is something to be said for allowing your practice to evolve over time, rather than trying to find the magic formula that will transform your working practice overnight. Lastly, it has been very clear that the administrative overhead involved in managing these systems is far outweighed by the efficiency gains the rest of the time.

If you’ve enjoyed this post, why not support the blog on Patreon?

Book review: The Future of Scholarly Communication (Shorley and Jubb)

[This review appeared in the 24 July issue of Research Fortnight, and is reposted here by kind permission. For subscribers, it is also available here.]

Perhaps the one thing on which all the contributors to this volume could agree is that scholarly communication is changing, and quickly. As such, it is a brave publisher that commits to a collection such as this — in print alone, moreover. Such reflections risk being outdated before the ink dries.

The risk has been particularly acute in the last year, as policy announcements from government, funders, publishers and learned societies have come thick and fast as the implications of the Finch report, published in the summer of 2012, have been worked out. It’s a sign of this book’s lead time that it mentions Finch only twice, and briefly. That said, Michael Jubb, director of the Research Information Network, and Deborah Shorley, Scholarly Communications Adviser at Imperial College London, are to be congratulated for having assembled a collection that, even if it may not hold many surprises, is an excellent introduction to the issues. By and large, the contributions are clear and concise, and Jubb’s introduction is a model of lucidity and balance that would have merited publication in its own right as a summation of the current state of play.

As might be expected, there is much here about Open Access. Following Finch, the momentum towards making all publications stemming from publicly funded research free at the point of use is probably unstoppable. This necessitates a radical reconstruction of business models for publishers, and similarly fundamental change in working practices for scholars, journal editors and research libraries. Here Richard Bennett of Mendeley, the academic social network and reference manager recently acquired by Elsevier, gives the commercial publisher’s point of view, while Mike McGrath gives a journal editor’s perspective that is as pugnacious as Bennett’s is anodyne. Robert Kiley writes on research funders, with particular reference to the Wellcome Trust, where he is head of digital services. Together with Jubb’s introduction and Mark Brown’s contribution on research libraries, these pieces give a clear introduction to hotly contested issues.

There is welcome acknowledgement here that there are different forces at work in different disciplines, with STM being a good deal further on in implementing Open Access than the humanities. That said, all authors concentrate almost exclusively on the journal article, with little attention given to other formats, including the edited collection of essays, the textbook and — particularly crucial for the humanities — the monograph.

Thankfully, there’s more to scholarly communication than Open Access. The older linear process, in which research resulted in a single fixed publication disseminated via libraries (the trusted repositories that acted as the sole conduits of that work to scholars), is breaking down. Research is increasingly communicated while it is in progress, with users contributing to the data on which research is based at every stage.

Fiona Courage and Jane Harvell provide a case study of the interaction between humanists and social scientists and their data from the long-established Mass Observation Archive. The availability of data in itself is prompting creative thinking about the nature of the published output: here, John Wood writes on how the data on which an article is founded can increasingly be integrated with the text. And the need to manage access to research data is one of several factors prompting a widening of the traditional scope of the research library.

Besides the changing roles of libraries and publishers, social media are allowing scholars themselves to become more active in how their work is communicated. Ellen Collins, also of RIN, explores the use of social media as a means of sharing and finding information about research in progress or when formally published, and indeed as a supplementary or even alternative method of publication, particularly when reaching out to non-traditional audiences.

Collins also argues that so far social media have mimicked existing patterns of communication rather than disrupting them. She’s one of several authors injecting a note of cold realism that balances the technophile utopianism that can creep into collections of this kind. Katie Anders and Liz Elvidge, for example, note that researchers’ incentives to communicate creatively remain weak and indirect in comparison to the brute need to publish or perish. Similarly, David Prosser observes that research communication continues to look rather traditional because the mechanisms by which scholarship is rewarded have not changed, and those imperatives still outweigh the need for communication.

This collection expertly outlines the key areas of flux and uncertainty in scholarly communication. Since many of the issues will only be settled by major interventions by governments and research funders, this volume makes only as many firm predictions as one could expect. However, readers in need of a map to the terrain could do much worse than to start here.

[The Future of Scholarly Communication, edited by Deborah Shorley and Michael Jubb, is published by Facet, at £49.95.]

Wikipedia, authority and the free rider problem

[This post argues that historians have much to gain from getting involved in making Wikipedia authoritative, in spite of the many disincentives within the current ecology of academic research. However, to make it work, historians would need to embrace a more speculative and more risky model of collaborative work.]

I am a selfish Wikipedian. By which I mean that, while I am very happy to use Wikipedia, I have not been very serious about contributing to it. There are a small handful of pages for which I keep the further reading (reasonably) up to date, and which I correct if a particularly egregious error appears. But it is sporadic, and one of the first things to be squeezed out if life gets busy.

And I wonder whether there aren’t real gains for historians in helping Wikipedia become truly authoritative, gains that are obscured by natural disincentives in the way our scholarly ecosystem works.

Firstly, the disincentives. One is a residual wariness of something that can be edited by ‘just anyone’. I myself have dissuaded students from citing Wikipedia as an authority in itself, as part of what I am teaching is the ability to go to the scholarly article that is cited in Wikipedia, and indeed beyond it to the primary source. But my experience is that, in matters of fact, Wikipedia is very reliable unless it concerns a highly charged topic (the significance of Margaret Thatcher, say). And even the making of that judgement is an important part of learning to think critically about what it is we read.

Perhaps more significant is the fact that Wikipedia appears to be edited by no-one in particular. One of the contradictions of modern academic life is that most scholars would, I think, assert the existence of a common good, the pursuit of knowledge, towards which we work in some abstract sense. At the same time, the ways in which we are habituated to achieve that end are fundamentally about competition between scholars for scarce resources: attention, leading to esteem, leading to career advancement.

We write books and articles, which help us get and then keep a job. A smaller but growing number write blogs like this one, and tweet about those blogs. Part of this is about ‘impact’ (that is to say, increasing our share of those scarce quanta of public attention). And all of it depends on being identified as the creator of an item of intellectual property: tweet, blog post, article, book, media interview. Few, even at the wildest edges of the Open Access movement, propose licensing of scholarly outputs without attribution, even if a work may be licensed for the most radical of remixing. All depends on being known.

But Wikipedia doesn’t credit its authors, or at least not in a prominent and easily reportable way. And so the question arises: even though contributing to Wikipedia is to the common good, what is in it for me?

The answer may depend on a more speculative and more risky model of collaborative work, but one which holds out the prospect of a genuinely authoritative resource, made by authorities. And that in turn should reward the best published work, in the good old-fashioned and citable way, by channelling readers to it. (It would be even better for works available Open Access.)

But it depends on everyone jumping together. As long as some contribute, but others only consume, there remains a classic economist’s ‘free rider’ problem. When people use a resource without ‘paying’ (in the form of their own time, and their own particular expertise), the cost of production is unevenly spread, and the quality of the product suffers. But if editing Wikipedia became a genuinely widespread enterprise amongst scholars, then even if my contribution is not recognised with each and every edit, my ‘main’ work (if it is any good) will be cited and integrated into the fabric of Wikipedia by others. And we might get a more informed public debate about each and every matter, which looks like impact to me. Perhaps I should get more serious about this now.

Open Access and open licensing

Much of the recent concern about Open Access in the UK, at least for the humanities, has not been about the general principle, but rather about the means.

In my hearing, however, at least as much consternation has been in reaction to the prospect of subsequently licensing those outputs for re-use under one or other of the Creative Commons suite of licences. CC allows various degrees of redistribution and re-use, without further recourse to the author, but with credit given. Commercial use can be restricted (or not); the making of derivative works can be provided for (or not). You can Meet the Licenses here.

As an advocate of greater Open Access in the humanities, I suspect that Research Councils UK made a tactical error in suggesting that it intended to enforce the most liberal of these licenses. CC-BY ‘lets others distribute, remix, tweak, and build upon your work, even commercially, as long as they credit you for the original creation.’ Here’s why I think the focus on CC-BY has been a mistake, at this point.

Personally, I have never quite been convinced that ‘full’ or ‘real’ OA is dependent on maximally open licensing. I see free availability of the content for reading and citation as quite distinct from the subsequent reuse of that content in other ways. Both are desirable, but they can be decoupled without damage. A move to any form of OA represents a major cultural change, albeit one that is necessary. Given this, I would rather see an OA article with all rights reserved (as a staging post) than not see that article at all. And to couple the two too closely risks the first goal by too strong an insistence on the second. Over time, cultures can and do change; but we ought to practise the art of the possible.

More generally, it isn’t yet clear to me what re-use of a traditional history article looks like. Quotation (with a reference) is a mode historians understand; so is citation as an authority in paraphrase. Both are possible from an article with all rights reserved. Compilation of readers and anthologies would be made easier by CC, but doesn’t require CC-BY. It also isn’t clear what ‘remixing’ of traditional historical writing looks like if it doesn’t involve quotation. Historians are also well used to acknowledging a seminal work in a footnote (or even once only in a foreword or acknowledgments) without quoting it directly, but is this all that giving ‘credit’ for ‘remixing’ an idea really means? If so, there is little to fear; but I’m not sure we know, yet.

Over time, there will be possibilities for data-mining in corpora of scholarly articles, but we ought to think about whether this can be accommodated without full CC-BY. Much turns on the question of what counts as a derivative work in the context of an aggregated database, and what the output to the user is; and whether an insistence on non-commercial re-use shuts down important future possibilities that we can’t yet foresee.

It may be that CC-BY is the right default option; my feeling is that it probably will be. But I think we should take more time to document some of these use cases, in order to plan a movement towards licensing for historical writing that is neither more restrictive nor more liberal than it need be, and that allows scholars to dip their toes in without plunging in up to the neck. For now, there are horses we should avoid scaring, lest they bolt.