Reflections on Web Archiving Week 2017

Once in a while, the unplanned turns out to be as good if not better than the planned. It had not been the intention that the annual Web Archiving Conference of the IIPC should be combined with the second conference of ReSAW (the Research Infrastructure for the Study of Archived Web Materials). However, they came together in London last week, with intriguing results.

One of the great pleasures of the event is the diversity of both speakers and delegates: the institutions represented by the IIPC were there in strength, but also present was the largest assemblage of researchers I have yet seen. These included not only people from computer science and related fields – a group that has been engaged in this space for a while – but also an enlarged contingent of scholars of media and communications and of several of the humanities disciplines. At the Archives Unleashed datathon on Monday and Tuesday there was a particularly creative meeting of scholars, technologists and archivists – the crucial nexus of relationships for making successful tools and services. The whole week was marked (for me) by a refreshing openness to the perspectives of others, a frankness about difference, and a collegiality without hierarchy which (if it can be sustained) bodes very well for the future.

If I compare the discussions last week with those in this community perhaps three or four years ago, a number of differences stand out. As I’ve tried to show in my short history of web archiving, direct engagement between archiving institutions and researchers came relatively late in that twenty-year history, and even four years ago there was a sense that researcher engagement was still only exploratory. We now seem to have reached the stage where substantial attention is being paid to understanding the needs of users as a preliminary step to developing new tools and services (of which there were also many exciting examples). Here I’m bound to mention the research study that I (as Webster Research and Consulting) carried out for the Parliamentary Archives, which Chris Fryer and I presented, but I also have in mind papers on citation practice (Nyvang et al.), on the research data management issues involved (Zierau and Jurik), and on what users need to know about the materials they use (i.e. what to do about descriptive metadata), a theme taken up variously by Bingham, Dooley et al. and Maemura et al. The variety of use cases both discussed in the abstract and demonstrated in the concrete reminded me of how varied the user base for web archives is (or could be), and how much we need as fine-grained an understanding of those different users as possible. As Ben Steinberg of Harvard noted, ‘How we [i.e. the providers of services] think archives should or could be used may not be as pertinent as we imagine…’

Another theme for researchers, one that surfaced several times at the first ReSAW conference in Aarhus two years ago, was the need to understand the offline as context for the online. In Aarhus the particular point was about the need for oral history and for analysis of print and manuscript sources to understand how web materials make it online to begin with, and the theme was taken up last week by Federico Nanni and (in passing) by Gareth Millward and Richard Deswarte. There were also reminders here that a full history of the Web will need to take account of the history of computing more generally (Baker and Geiringer), of the interaction between the Web proper and other content delivered online, notably social media (Castex, Schafer et al., Day Thomson), and of the wider social and intellectual context in which the Web is embedded (Schroeder, and my own paper on the religious language of the Web).

What of the future? Delegates who followed the same tracks as me may have come away with a sense of the diversity of analytic approaches to the study of the Web, and impressed by the depth at which scholars are now seeking to understand the methodological challenges they face. The aim, however, must be to build on this reflection to the point at which the Web archive becomes simply one type of scholarly source amongst many in the production of substantive scholarly insight in history, sociology or literature, as Gareth Millward noted. I look forward to the day when I can go to mainstream historical conferences and hear contemporary history written using the archived Web.

There is also, I think, a challenge to the community at large in navigating a path through the diversity of new technical development and analytical need on display here: deciding which elements best serve users in particular situations, and so should be brought forward and made part of ‘business as usual’ operations. Some will be incorporated by web archives themselves, others maintained by communities of interested scholars, others probably commercialised. The IIPC has a part to play here, while remembering that a significant part of this new thinking is taking place outside the membership. At least one person on Twitter thought a combined conference like this was worth repeating, and it would certainly be a way of developing the listening process between archives, users and developers that is required.

Finally: I celebrated the diversity of the conference when viewed in terms of professional background, but in another sense there is still much to do in terms of geography. I counted some 17 or 18 nationalities represented here, a joyous thing in a fragmenting world, but the delegates were nonetheless overwhelmingly from Europe and North America. The archiving and study of the Web, a global medium, remains dominated by certain countries.

My thanks are due to all those involved in organising such an excellent event: Jane Winters as host at the School of Advanced Study (University of London), Olga Holownia of the IIPC, and my former colleagues at the British Library, which also contributed most significantly. It was my pleasure to be a part of both the IIPC and the ReSAW programme committees, and to hear such a fine set of papers.


Religion, law and national identity in the archived Web: new article

I’m delighted to say that an article of mine has appeared this week in a new collection of essays, edited by Niels Brügger and Ralph Schroeder: The Web as History (London: UCL Press, 2017, ISBN: 9781911307563).

My article is ‘Religious discourse in the archived web: Rowan Williams, Archbishop of Canterbury, and the sharia law controversy of 2008’ (pp. 190-203). It examines the controversy over a public lecture given by the archbishop on the interaction of civil and religious law, but from a new angle: the imprint the controversy left in the archive of the UK web. It makes particular use of British Library data documenting the link structure of the .uk country code top level domain for the period 1996-2010.

The whole thing is available as an Open Access PDF, but here’s my conclusion.

It is a brave historian who attempts to interpret the very recent past, as opposed to merely documenting it. As with most aspects of very recent history, the full significance of Rowan Williams’ lecture about sharia law will only become clear as the passage of time grants the historian a sufficiently long perspective from which to view it. An exhaustive qualitative examination both of the published record and of memoirs and private papers that are as yet inaccessible (not least the papers of the archbishop himself, not due to be released until 2038) will be needed to place the episode in its fullest context. Without these, we cannot yet know how changes in patterns of communication that are observable in the archived web were motivated, or how opinions expressed online related to broader patterns of social and intellectual change. However, even if it is difficult to explain changing patterns of religious discourse on the web, we may nonetheless document those changes.

First, the sharia law episode prompted a step-change in the levels of attention paid to the domain of the archbishop of Canterbury, as evidenced by the incidence of inbound links, and also a broadening of the types of hosts that contained those links. Second, a comparison of the inbound links to the Canterbury domain with those to the domain of the archbishop of York suggests that the historic privilege given to the views of Canterbury over those of York was extended onto the web. Regardless of their actual status in relation to each other within the Church of England, the media and the public at large seemed only to pay attention to Canterbury. Finally, a qualitative examination of the site of the British National Party shows that at least one organization, with a very particular concern with the place of Islam in British life, certainly took new account of the person of the archbishop as a result of the 2008 controversy.

This chapter has also sought to use the episode as a means of demonstrating both the potential for historians to utilize the archived web to address older questions in a new way, and some of the particular issues of method that web archives present. At one level, the methodological complications presented here – understanding the meaning of a link from one resource to another, say – are peculiar to the archived web and must be understood anew. As with all other born-digital sources, there is work to be done amongst historians in understanding these issues of method, and in acquiring the skills needed to handle data at scale. At the same time, it is part of the historian’s stock-in-trade to assess the provenance of a body of sources, its completeness and the contexts in which those sources were transmitted and received. The task at hand is in fact the application of older critical methods to a new kind of source: a challenge which historians have confronted and overcome before.

This chapter has also tried to show some of the potential available to historians, should they accept the challenge. In the study of public controversy, the archived web allows the detection of changing communication patterns at scale that would be impossible using a traditional qualitative method. It also enables the detection of attention being paid online in places where a scholar would not think to look. More generally, the chapter has attempted to outline an approach that combines quantitative readings of the links in web archives with qualitative examination of particular subsets of resources. When dealing with a new superabundance of historical sources, a combination of distant and close reading will be required to understand the archived web.
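
As a footnote to that conclusion, the distant-reading side of the method boils down to counting inbound links over time. Here is a minimal sketch in Python, assuming a hypothetical tab-separated host-level link graph with fields for year, source host, target host and link count (the actual British Library dataset differs in its details, and the file name and target host below are only illustrative):

```python
from collections import defaultdict

def inbound_link_profile(path, target_host):
    """Per-year inbound link totals and distinct linking hosts for one
    target host, read from a hypothetical link-graph file whose
    tab-separated fields are: year, source_host, target_host, count."""
    totals = defaultdict(int)   # year -> total inbound links
    linkers = defaultdict(set)  # year -> distinct linking hosts
    with open(path) as f:
        for line in f:
            year, source, target, count = line.rstrip("\n").split("\t")
            if target == target_host and source != target_host:
                totals[year] += int(count)
                linkers[year].add(source)
    return {year: (totals[year], len(linkers[year])) for year in sorted(totals)}

# Hypothetical usage:
# inbound_link_profile("uk-host-links.tsv", "www.archbishopofcanterbury.org")
```

A jump in the first figure with little movement in the second would suggest intensified attention from the same quarters; movement in both is the kind of broadening of host types described above.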

What do we need to know about the archived web?

A theme that emerged for me at the IIPC web archiving conference in Reykjavik last week was metadata, and specifically: which metadata do users of web archives need in order to understand the material they are using?

At one level, a precise answer to this will only come from sustained and detailed engagement with users themselves; research which I would very much hope that the IIPC would see as part of its role to stimulate, organise and indeed fund. But that takes time, and at present most users understand the nature of the web archiving process only rather vaguely. As a result, I suspect that without the right kind of engagement, scholars are likely (as Matthew Weber noted) to default to ‘we need everything’, or, if asked directly ‘what metadata do you need?’, may well answer ‘well, what do you have, and what would it tell me?’

During my own paper I referred to the issue, and was asked by a member of the audience if I could say what such enhanced metadata provision might look like. What I offer here is the first draft of an answer: a five-part scheme of kinds of metadata and documentation that may be needed (or at least, that I myself would need). I could hardly imagine this would meet every user requirement; but it’s a start.

1. Institutional
At the very broadest level, users need to know something of the history of the collecting organisation, and how web archiving has become part of its mission and purpose. I hope to provide an overview of aspects of this on a world scale in this forthcoming article on the recent history of web archiving.

2. Domain or broad crawl
Periodic archiving of a whole national domain under legal deposit provisions now offers the prospect of the kind of aggregate analysis that takes us way beyond single-resource views in Wayback. But it becomes absolutely vital to know certain things at crawl level. How was territoriality determined – by ccTLD, domain registration, Geo-IP lookup, curatorial decision? The way the national web sphere is defined fundamentally shapes the way in which we can analyse it. How big was the crawl in relation to previous years? How many domains are new, and how many have disappeared? What was the default policy on robots.txt? How deep was the default crawl scope? Was there a data cap per host? Some of this will already be articulated in internal documents, and some will need additional data analysis; but it all goes to the heart of how we might read the national web sphere as a whole.
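
To make that concrete, documentation of this kind could be published as a structured record alongside the crawl itself. A minimal sketch, with field names that are my own invention rather than any existing schema:

```python
from dataclasses import dataclass

@dataclass
class DomainCrawlRecord:
    """Hypothetical crawl-level documentation record; the field names
    are illustrative, not drawn from any existing standard."""
    crawl_id: str               # e.g. "uk-domain-2017"
    territoriality: list        # e.g. ["ccTLD", "geo-ip", "curatorial"]
    start_date: str             # ISO 8601 date
    end_date: str
    hosts_total: int            # hosts captured in this crawl
    hosts_new: int              # hosts absent from the previous crawl
    hosts_gone: int             # hosts in the previous crawl, absent now
    robots_policy: str          # default policy: "obey", "ignore", ...
    default_scope_depth: int    # default crawl depth from seeds
    default_host_cap_mb: int    # per-host data cap; 0 for uncapped
```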

3. Curated collection level
Many web archives have extensive curated collections on particular themes or events. These are a great means of showcasing the value of web archives to the public and to those who hold the purse strings. But if not transparently documented they present some difficulties to the user trying to interpret them, since the selection process introduces a level of human judgement on top of the more technical decisions outlined above. In order to evaluate a collection as a whole, scholars really do need to know the selection criteria, and at a more detailed level than is often provided right now. In particular, in cases where permissions were requested for sites but not received, being able to access the whole list of sites selected, rather than just those that were successfully archived, would help a great deal in understanding the way in which a collection was made.
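
One immediate benefit of publishing the full selection list is that the gap between what curators chose and what was actually archived becomes trivially computable. A sketch, with entirely hypothetical seed-list records:

```python
# Hypothetical seed-list records, as a curator might publish them
seed_list = [
    {"url": "http://example.org/", "rationale": "core campaign site", "permission": "granted"},
    {"url": "http://example.net/", "rationale": "opposing viewpoint", "permission": "refused"},
]

# URLs that actually made it into the collection
archived = {"http://example.org/"}

# Selected but not archived, with the curatorial reasoning preserved
for seed in seed_list:
    if seed["url"] not in archived:
        print(seed["url"], "|", seed["permission"], "|", seed["rationale"])
```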

4. Host/domain level
This is the level at which a great deal of effort is expended to create metadata that looks very much like a traditional catalogue record: subject keywords, free-text descriptions and the like. For me, it would be important to know when the first attempt to crawl a host was made, and the most recent, and whether 404 responses were received for crawl attempts at any time in between. Was this host capped (or uncapped) at the discretion of a curator, differently from the policy for the crawl as a whole? Similarly, was the crawl scoping different, or the policy on robots.txt? If the crawl incorporates a GeoIP check, what was the result? Which other domains has it redirected to, which redirect to it, and at which times?
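
Some of this can already be derived where an archive exposes its CDX index. Purely as an illustration (using the Internet Archive's public CDX API, since it is openly queryable; other archives' interfaces differ), a sketch that derives a host's first capture, most recent capture, and intervening 404s:

```python
import requests

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def host_career(host):
    """First capture, last capture and count of 404 responses for a host,
    queried from the Internet Archive's public CDX API."""
    params = {
        "url": host,
        "matchType": "host",             # every capture under this host
        "output": "json",                # first row of the result is the header
        "fl": "timestamp,statuscode",
        "limit": "100000",
    }
    rows = requests.get(CDX_ENDPOINT, params=params, timeout=60).json()
    captures = rows[1:]                  # drop the header row
    if not captures:
        return None
    timestamps = [ts for ts, _ in captures]
    not_found = sum(1 for _, status in captures if status == "404")
    return min(timestamps), max(timestamps), not_found

# Hypothetical usage: host_career("example.org")
```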

5. Individual resource level
Finally, there are some useful things to know about individual resources. As at the host level, information about the dates of the first and last attempts to crawl, and about intervening 404s, would tell the user useful things about what we might call the career of a resource. If the resource changes, what is the profile of that change: for instance, how has the file size changed over time? Were there other captures which were rejected, perhaps on a QA basis, and if so, when?
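
Again as illustration only, the same public CDX API exposes a record length and a content digest per capture, which together give a rough profile of change over time:

```python
import requests

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def resource_profile(url):
    """Timestamp, status, record length and content digest for every
    capture of one URL, from the Internet Archive's public CDX API.
    (The length field is the size of the stored record, so it is only
    a rough proxy for the size of the resource itself.)"""
    params = {
        "url": url,
        "output": "json",
        "fl": "timestamp,statuscode,length,digest",
    }
    rows = requests.get(CDX_ENDPOINT, params=params, timeout=60).json()
    profile = []
    for timestamp, status, length, digest in rows[1:]:
        size = int(length) if length.isdigit() else None
        profile.append((timestamp, status, size, digest))
    return profile  # a change of digest between captures means the content changed

# Hypothetical usage: resource_profile("http://example.org/index.html")
```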

Much if not quite all of this could be based on data which is widely collected already (in policy documents, curator tools, crawl logs or CDX indexes), or which could be collected with some adjustment. Delivering these data to users presents some very significant GUI design challenges; some might be better delivered as datasets for download or via an API. What I hope to have provided, though, is a first sketch of an agenda for what the next generation of access services might disclose: one that is not a default to ‘everything’, and is feasible given the tools in use.

Towards a cultural history of web archiving

[UPDATE: this article is now published; see the free pre-print version here.]

This week I’m writing the first draft of a chapter on the cultural history of web archiving, for a forthcoming volume of essays (details here). It is subject to peer review and so isn’t yet certain to be published, but here’s the abstract.

I should welcome comments very much, and there may also be a short opportunity for open online peer review.

Users, technologies, organisations: towards a cultural history of world web archiving

‘As systematic archiving of the World Wide Web approaches its twentieth anniversary, the time is ripe for an initial historical assessment of the patterns into which web archiving has fallen. The scene is characterised by a highly asymmetric pattern, involving a single global organisation, the Internet Archive, alongside a growing number of national memory institutions, many of which are affiliated to the International Internet Preservation Consortium. Many other organisations also engage in archiving the web, including universities and other institutions in the galleries, libraries, archives and museums sector. Alongside these is a proliferation of private sector providers of web archiving services, and a small but highly diverse group of individuals acting on their own behalf. The evolution of this ecosystem, and the consequences of that evolution, are ripe for investigation.

‘Employing evidence derived from interviews and from published sources, the paper sets out to document at length for the first time the development of the sector in its institutional and cultural aspects. In particular it considers how the relationship between archiving organisations and their stakeholders has played out in different circumstances. How have the needs of the archives themselves and their internal stakeholders and external funders interacted with the needs of the scholarly end users of the archived web? Has web archiving been driven by the evolution of the technologies used to carry it out, the internal imperatives of the organisations involved, or by the needs of the end user?’

Why hoping private companies will just do the Right Thing doesn’t work

In the last few weeks I’ve been to several conferences on the issue of the preservation of online content for research, and in particular social media. This is an issue that is attracting a lot of attention at the moment: for examples, see Helen Hockx-Yu’s paper for last year’s IFLA conference, or the forthcoming TechWatch report from the Digital Preservation Coalition. As I myself blogged a little while ago, and (obliquely) suggested in this presentation on religion and social media, there’s growing interest from social scientists in using social media data – most typically Twitter or Facebook – to understand contemporary social phenomena. But whereas users of the archived web (such as myself) can rely on continued access to the data we use, and can expect to be able to point to that data such that others may follow and replicate our results, this isn’t the case with social media.

Commercial providers of social media platforms impose several different kinds of barriers. These can include: limits on the amount of data that may be requested in any one period of time; provision of samples of data created by proprietary algorithms which may not themselves be scrutinised; and limits on how much of, and/or which fields in, a dataset may be shared with other researchers. These issues are well-known, and aren’t my main concern here. My concern is with how these restrictions are being discussed by scholars, librarians and archivists.

I’ve noticed an inability to imagine why it is that these restrictions are made, and as a result, a struggle to begin to think what the solutions might be. There has been a similar trend amongst the Open Access community: to paint commercial academic publishers as profit-hungry dinosaurs, making money without regard to the public-good element of scholarly publishing. In the case of social media, it is viewed as simply a failure of good manners when a social media firm shuts down a service without providing for scholarly access to its archive, or does not allow scholars free access to and reuse of its data. Why (the question is implicitly posed) don’t these organisations do the Right Thing? Surely everyone thinks that preserving this stuff is worthwhile, and that it is a duty of all providers?

But private corporations aren’t individuals, endowed with an idea of duty and a moral sense. Private corporations are legal abstractions: machines designed for the maximisation of return on capital. If they don’t do the Right Thing, it isn’t because the people who run them are bad people. No; it’s because the thing we want them to do (or not do) impacts adversely on revenue, or adds extra cost without corresponding additional revenue.

Fundamentally, a commercial organisation is likely to shut down an unprofitable service without regard to the archive unless (i) providing access to the archive is likely to yield research findings which will help future service development, or (ii) it causes positive harm to the brand to shut it down (or helps the brand to be seen *not* to do so). Similarly, they are unlikely to incur costs to run additional services for researchers, or to share valuable data, unless (again) they stand to gain something from the research, however obliquely, or by doing so they either help or protect the brand.

At this point, readers may despair of getting anywhere in this regard, which I could understand. One way through might be an enlargement of the scope of legal deposit legislation, such that some categories of data (politicians’ tweets, say, given the recent episode over Politwoops) are deemed sufficiently significant to be treated as public records. There will be lobbying against, surely, but once such law is passed, companies will adapt their business models to the changed circumstances, as they always have done. An even harder task is to shift the terms of public discourse so that a publicly accessible record of this data is seen by the public as necessary. Another way is to build communities of researchers around particular services, such that generalisable research about a service can be absorbed by the providers, thus showing that openness with data leads to gains in research and development.

All of these are in their ways Herculean tasks, and I have no blueprint for them. But recognising the commercial realities of the situation would get us further than vague pieties about persuading private firms to do the Right Thing. It isn’t how they work.