How do we use crowd-based methods? What benefits do they generate? Where do they fall short? How can we sustain them? And how do they support the work of organisations in a heritage ecology? These were the questions addressed at a knowledge exchange workshop hosted by the UCL Institute of Archaeology on 23 September. The workshop built upon the MicroPasts project, which has put nearly £400,000 of Arts and Humanities Research Council money to work creating crowd-sourcing, crowd-funding and forum platforms to support the collaborative study of the human past. It was an opportunity to share the learning from MicroPasts and to reflect on how this insight might be deployed more widely across the heritage world. Those present included experts from across the fundraising, policy, evaluation and public engagement professions.
The workshop provoked a few thoughts about the nature of crowd-based methods and their use in research, fundraising and public engagement. My thoughts are partly informed by my ever-weakening relationship with the archaeology and heritage sector. My PhD looked at archaeology public policy, yet over the last ten years I have worked across a broader set of cultural disciplines and grown ever more convinced that the archaeology world is too introspective for its own good. It was therefore refreshing at the workshop to get perspectives from a wide range of voices: from the Arts Council, Nesta, the Smithsonian Institution, and Zooniverse (the popular online citizen science platform).
What seemed clear from the day was that crowd-funding and crowd-sourcing are two distinct types of activity. Perhaps the only thing that unites them is the c-word. Crowd-funding is in many ways simply a form of fundraising that takes advantage of digital technology. There are a variety of interesting uses of crowd-funding in the broader culture sector (the publisher Unbound springs to mind) but overall it’s used in the heritage sector as a fairly straightforward development of classic fundraising methods. It therefore exhibits all the strengths and weaknesses of passing the donation plate. Crowd-sourcing is an entirely different beast, and demands a re-imagining of what constitutes public engagement, volunteering, labour, value and ethics.
MicroPasts and similar platforms are best thought of as Citizen Science, not crowd-sourcing. In Citizen Science the crowd contributes its labour; it does not donate materials. Quite frequently, those participating as Citizen Scientists far outnumber those involved in creating a project. The tasks in MicroPasts are set by researchers and a few others drawn from a pool of interested “Citizens”, not by the mass of people who constitute the “crowd”. The work of MicroPasts is complete when the researcher decides it is, not the crowd. In this sense, MicroPasts resembles a Zooniverse-style platform with an exclusively heritage flavour.
For archaeology and heritage to take full advantage of Citizen Science opportunities, I have the following simple suggestions inspired by conversations at the workshop: that similar projects combine resources to increase their visibility; and that all projects provide clear and compelling reasons for people to volunteer their time to help out.
Four examples of transcribing or annotating archive material were mentioned at the workshop, each with its own platform: Transcribe Bentham (bespoke UCL interface); AnnoTate (Zooniverse); the Oxford HEIR project (bespoke Oxford interface); and the Amarna Archive (MicroPasts). In the busy marketplace of distracting things to do online, it would make sense for there to be one global online shopfront for all projects that require the transcription, annotation and tagging skills of the general public. Anyone with an interest in history, archaeology and archives would know where to go. That way, these projects wouldn’t operate in isolation, like heritage needles in a lolcats haystack.
The other striking theme from the workshop was the range of factors that drive people to participate in these online projects. To my mind, many of the projects resemble the tasks advertised through Amazon Mechanical Turk (a sort of online labour exchange run by the well-known online store). The motivation for Turkers (as the workforce on that platform are known) is superficially clear: piecemeal monetary return for tasks successfully completed. However, research into the experience of Turkers shows that they have a complex range of motivations, from the meditative state the tasks can sometimes induce, to the simple pleasure of contributing to a job well done. For some people, monetary incentives are going to be key in helping Citizen Science projects to achieve their goals. At the workshop we heard a little about the motivations of Zooniverse and MicroPasts participants. However, there is clearly a need for some modest segmentation of user motivations, so that projects can present participants with a basket of incentives and rewards for taking part. Without this, heritage Citizen Science projects risk merely attracting the same enthusiasts who always engage in this activity, whether online or offline. Expanding that small and unrepresentative group would not only widen access to heritage but also get research projects completed more efficiently.