Open, comprehensive and innovative: the future of the Hub

Map of Archives Hub contributors

At a very productive and enjoyable workshop last week we brought Archives Hub contributors and others interested in the future development of the Hub together with members of our Steering Committee. Our aim was to generate discussion around some key topics that we think are particularly important at the moment, and relevant to how the Hub grows and develops.

1. Open Data

Our first discussion was around the open data agenda, something that is becoming increasingly relevant as we move towards new ways of outputting and inter-connecting data. We are particularly interested in this because of our current work on the Locah project (Linked Open Copac and Archives Hub). We had a group discussion around factors that may act as resistors and those that may act as drivers for open data (points are reproduced here as accurately as possible from the group discussion):

Resistors
–    Problem of provenance/attribution
–    Is it just a fad?
–    Lose control of metadata
–    Potential de-skill
–    Create demand or expectations you cannot meet
–    May create resourcing problems
–    Manipulation of the data by others could reflect back badly on the institution
–    Brand/impact – ‘stamp’ is potentially lost (need to show impact often relates to funding)
–    Depositor relations could be an issue – what are their expectations?
–    Impact/ benefit is difficult to track and measure
–    Difficult to explain the benefits to get buy-in
–    Cultural shift away from concept of whole collection? [acknowledgment this could require a radical change in mindset]

Drivers
–    Linked data connections can be made
–    Political drivers – Government agenda
–    Opportunities may be missed
–    Without innovation we risk reinforcing the ‘dusty’ reputation of archives/archivists
–    Transparency & FOI
–    Users want to make their own connections within datasets
–    May help free us from constraints
–    We are sharing anyway
–    Increases discoverability
–    Opportunities for more collaboration
–    Can take advantage of innovations by others
–    As a profession we can show that we produce quality metadata
–    No pure form of metadata anyway

The feeling was that open data is probably the way to go, but there were some reservations. Probably the main ones that were emphasised were (1) the lack of control over the metadata that might lead to it being used in ways that reflect badly upon the institution, and (2) the problem of measuring impact once you relinquish control – it is hard enough when the data is essentially within your control. There were some interesting points made around things like user expectations and the dangers of raising expectations and not being able to meet them. I think that this referred to the need to maintain an open agenda once you have embarked upon that road. One point that I hadn’t fully considered was the attitude of depositors – some depositors may not be keen to see descriptions of their collections being released in this way. This brought us back to the need to make a compelling case and show how it could benefit the profile and use of archives.

The group discussed the difficulties around getting buy-in, and it was felt that more exemplars and applications were needed: ways to really show the benefits of open data in a way that funders and managers could understand. We also raised the whole issue of the change in mindset that may be needed here, particularly with Linked Data, where you are moving away from the archival collection as the main emphasis and towards data from different sources that is inter-connected and presented to the user in a myriad of ways (appropriate for their needs).

2. Remit

Here we had a general discussion and made points around the remit of the Hub in terms of the archives that are represented. The discussion also ranged more widely, around how the archive community should be developing and the role of the Hub more generally.

–    Hub funded for HE but acknowledgment we want to expand user base
–    It is not an either/or scenario – we can represent all archives
–    Search across various aggregators has never been achieved
–    As a community we don’t present totality well – we still seem fragmented
–    We should think beyond the UK – researchers often want this
–    Many archive repositories will see higher Google ranking as incentive for being on the Hub
–    Stereotype of historians may not be positive: we should think more broadly
–    Recognition of ‘Old school’ research methods versus new methods – Hub could have role in exploring this – research habits & cultural shift – how does this affect archives?
–    Hub can help accelerate inter-disciplinary research using archives
–    UKAD could have a role looking at the profile of current archival networks
–    The Hub is an ac.uk address – does this have an impact on perceptions of it as being only for HE?
–    Hub could help with the current trend towards digitisation on demand – this could be functionality within the Hub (user requests digital copy)
–    Worth thinking about getting data from sources like Microsoft Excel and maybe helping with guidelines for cataloguing in Excel
–    Could the Hub cover more than EAD descriptions, e.g. PDFs?
–    Could the Hub include ISDIAH based descriptions as a means to give archive repositories at least a ‘placeholder’ presence?

We generally agreed that a broader remit is a good thing, and there was support for us approaching community archives to see whether we can represent them. But we did agree that, despite progress in terms of the existence of a number of aggregators in the UK and beyond, we don’t present the totality of archives very effectively – maybe we still need to make more effort to work together on this.

3. Priority Areas

In the afternoon, at our Steering Committee meeting, we asked members to rank priority areas, which we gave as:

  • Increase usage
  • Increase coverage/ depth of content
  • Innovation
  • Technical development (core service infrastructure)

One member wanted an additional priority area:

  • Understand our audience better

The interesting thing about this discussion was that we had a spread across all of these areas, but the ways that they are reliant on each other really became obvious. Some members felt that if you concentrate on increased coverage, increased use would follow. Others saw it the other way around. Some felt that innovation would help to place you at the forefront of the community and attract more profile, more contributors and more users. It was felt that ‘technical development’ went hand-in-hand with innovation.

Arrive in Wonder, Leave in Wisdom!

Roll Up, Roll Up for Open Culture!


I arrived at the Open Culture conference just in time to grab a cup of tea and dash along to hear Malcolm Howitt’s talk on Axiell. He focussed on Axiell Arena, new content management software. It provides for a more interactive experience, complete with tag cloud and the ability to add comments. It looked pretty good, very much in line with where things are going in terms of these kinds of websites. However, from our point of view as an aggregator, what we are keen to see is an API to the data to enable others to engage with it more flexibly, something that has yet to happen with CALM. Maybe this raises the whole issue of the challenge of open data to commercial suppliers – it does rather appear to threaten their business model, and I can see that this would be of concern to them.

The second presentation I saw was from Deep Visuals on ViziQuest, ‘a new way to explore digital collections’. They used natural language processing to extract concepts from the text, so the system uses existing metadata to enable semantic browsing. The idea is to provide a different kind of search experience, where the user can meander through a collection of images. You can flip over an image to find metadata about it, which is quite neat.

Deep Visuals have worked with the Scott Polar Research Institute, one of the Hub contributors, and there are some wonderful images of expeditions. For some images the archivist has recorded an audio commentary, and there are also some film clips – I saw a great clip shot on board a ship bound for the Arctic. Currently the software is only available for users within the institute, but it may be made available through the website. You can see a small demo here: http://www.deepvisuals.com/Demo/. In addition, some expedition diaries have been recorded as audio with actors.

The morning was rounded off with a talk about Culture Grid. The importance of Culture Grid being part of national and international initiatives was emphasised, and there was reference to RDTF (now UKDiscovery) and the whole HE agenda, which was good to hear.

Currently Culture Grid contains about 1.65 million item records, mostly referring to images. There are also about 10,000 collection records and 8,000 institution records. We were told that the ‘Culture Grid site and search is not a destination in itself’. This slightly surprised me, as I did think that this was one of its purposes, albeit only one and maybe not the primary one.

I was impressed by the way Culture Grid is positioning itself as a means to facilitate the use of data by others. Culture Grid has APIs and we were told that a growing range of users do take advantage of this. They are also getting very involved in developer days as a means to encourage innovation. I think this is something archives should engage with, otherwise we will get left behind in the innovative exploration of how to make the most of our data.

Whilst I am very much in agreement with the aims of opening up data, I am not entirely convinced by the Culture Grid website. It does appear to prioritise digital materials – it works much better where there are images. The links back to resources often don’t work. I did a search for ‘victorian theatre’ and, first of all, the default search was ‘images only’, excluding ‘collections’ and non-image materials. Then two of the first four links to resources I clicked on got an internal server error. I found at least six links that didn’t work on the first two pages of results. Obviously this is not Culture Grid’s fault, but it is certainly a problem.

I also wonder about how intuitive it is, with resource links going to so many different types of websites, and at so many different levels of granularity. Quite often you don’t go straight to the resource: one of the links I clicked on from an item went to the Coventry Council homepage, another went to the ‘how do I?’ page of the University of Hull. I asked about the broken links and didn’t feel that the reply was entirely convincing – I think it should be addressed more comprehensively.

If the Hub were to contribute descriptions to Culture Grid, one of my main concerns would be around updating descriptions. I’m also not sure about the need to create additional metadata. I can’t quite get the reasoning behind the Culture Grid metadata, and the way that the link on the title goes to the ‘resource’ (the website of the contributor), but the ‘view details’ link goes to the Culture Grid metadata, which generally provides a cut-down version of the description.

The afternoon was dedicated to Spectrum, something I know only a little about other than that it is widely used as a framework by museums in their collections care. Spectrum is, we were told, used in about 7,000 institutions across Europe. Nick Poole, the CEO of the Collections Trust, emphasised that Spectrum should be a collaborative venture, so everyone needs to engage in it.  Yet maybe it has become so embedded that people don’t think about it enough.  The new Spectrum 4 is seen as providing an opportunity to re-engage the community.

There was an interesting take on Spectrum by the first speaker as a means to actually put people off starting museums…but he was making the important point that a standard can show people what is involved – and that it is a non-trivial task to look after museum collections. I got the impression that Spectrum has been a way to get curators on board with the idea of standards and pulling together to work more professionally and consistently.

Alex Dawson spoke about the latest edition of Spectrum in her capacity as one of the co-editors. Spectrum is a consensus about collections management procedures – about consistency, accountability and a common vocabulary. It is not supposed to be prescriptive; it is the ‘what’ more than the ‘how’. It has 21 procedures describing collections management activities, of which 8 are considered primary. We were told that the link to accreditation was very important in the history of Spectrum, and other milestones have included the introduction of rights management procedures, establishing a clear link between procedures and policy, and greater recognition of the importance of the knowledge held within museums (through Spectrum Knowledge).

There has been an acknowledgement that Spectrum started to become more cumbersome and that information could get buried within this very large entity; it was also starting to get out of date in certain areas. I can see how Spectrum 4.0 is an improvement, because it contains clear flow diagrams that bring out the processes much more obviously and show related procedures. It also separates out the procedural and information requirements. The advisory content has been stripped out (and put into online Spectrum Advice) in order to concentrate on procedural steps through flow diagrams.

The consultation on Spectrum 4 was opened up via a wiki: http://standards.collectionslink.org.uk/index.php/Collections_Link_Standards_wiki

The main day of the conference included some really great talks. Bill Thompson from the BBC was one highlight. He talked about ‘A Killer App for Culture’, starting with musings on the meaning of ‘culture’. He talked about digital minds in this generation, which may change the answers that we come up with and may change the meaning of words. Shifting word sense can present us with challenges when we are in the business of data and information. He made the point convincingly that the world is NOT digital, as we often state; it is reassuringly still organic. But digital DATA is everywhere. It is an age in which we experience a digital culture, and maybe the ways that we do this are actually having an effect on the way that we think. Bill cited the book ‘Proust and the Squid’ by Maryanne Wolf, which I would also thoroughly recommend. Wolf looks at the way that learning to read impacts on the ways that we think.

Matthew Cock from the British Museum and Andrew Caspari from the BBC presented on A History of the World in 100 Objects. We were told how this initiative gradually increased in scale to be enjoyed by millions of people across the world. It was a very collaborative venture between the BBC and the British Museum. There were over 2.5 million visits to the site, often around 40,000 in a week when the programme was not on air. It was interesting to hear that the mobile presence was seen as secondary at the time, but probably should have been prioritised more. ‘Permanent availability, portable and for free’ was absolutely key, said Andrew Caspari.

It was an initiative that really brought museums together – maybe not surprising with such a high-profile initiative. The project was about sharing and a different kind of partnership defined by mutual benefit and, most importantly, it was about closing the gap between public engagement and collection research. It obviously really touched people’s imaginations and they felt a sense of being part of something. It does seem like a very successful combination of good fun, entertainment and learning. However, we were told that there were issues. Maybe the digital capacity of museums was overestimated, and longer lead-in times were required than the BBC provided. Also, the upload to the site needed to be simpler.

Cock and Caspari referred to the way the idea spread, with things like ‘A history of the world in 100 sheds’. Should you be worried that this might trivialize the process, or should you be pleased that it caught on, stirred imaginations and controversy and debate?

David Fleming of National Museums Liverpool followed with an equally absorbing talk about museums and human rights. He said museums should be more aware that they are constructs of the society they are in. They should mirror society. They should give up on the idea of being neutral and engage in issues.  He is involved in the International Slavery Museum in Liverpool, and this is a campaigning museum. Should others follow suit? It makes museums an active part of society – both historical and contemporary. Fleming felt that a visit to the museum should stir people and make them want to get involved.

He gave a number of examples of museums where human rights are at the heart of the matter, including:

District Six in South Africa: http://www.districtsix.co.za – very much a campaigning museum that does not talk about collections so much as stories and lives, using emotion to engage people.

The Tuol Sleng Museum of Genocide Victims in Cambodia, a building that was once Pol Pot’s secret prison. The photographs on this site are hugely affecting and harrowing: just seemingly ordinary portrait shots of prisoners, but with an extraordinary power to them.

The Lithuanian Museum of Genocide Victims. This is a museum where visitors can get a very realistic experience of what it was like to live under the Soviet regime. Apparently this experience, using actors as Soviet guards, has led to some visitors passing out, but the older generation are passionate to ensure that their children understand what it was like at this time.

We moved on to a panel session on Hacking in Arts & Culture, which was of particular interest to me. Linda Ellis from Black Country Museums gave a very positive assessment of how the experience of a hack day had been for them. She referred to the value of nurturing new relationships with developers, and took us through some of the ideas that were created. You can read a bit more about this, and about putting on a hack day, on Dan Slee’s blog: https://danslee.wordpress.com/tag/black-country-museums/

What we need now is a Culture Hack day that focuses on archival data – this may be more challenging because the focus is text not images, but it could give us some great new perspectives on our data. According to Rachel Coldicutt, a digital consultant, we need beanbags, beer, pizza, good spirit and maybe a few prizes to hand out… doesn’t seem too hard… oh, and some developers of course :-)

Some final thoughts around a project at the New Walsall Art Gallery: Neil Lebeter told us that the idea was to make the voice of the artist key. In this case, Bob and Roberta Smith. The project centered around the Jacob Epstein archive and found ways to bring the archive alive through art – you can see some interesting video clips about this process on YouTube: http://www.youtube.com/user/newartgallerywalsall.

Open Culture was billed as a conference meeting the needs of museums, libraries and archives, but I do think it was essentially a museums conference with a nod to archives and maybe a slight nod to libraries. This is not to criticise the conference, which was very well presented, and there really were some great speakers, but maybe it points to the challenges of bringing together the three domains? In the end, they are different domains with different needs and interests as well as areas of mutual interest. Clearly there is overlap, and there absolutely should be collaboration, but maybe there should also be an acknowledgement that we are different communities, with some differing requirements and perspectives.

HubbuB

Diary of the Archives Hub, June 2011

Design Council Archive: Festival of Britain poster

This is the first of our monthly diary entries, where we share news, ideas and thoughts about the Archives Hub and the wider world. This diary is aimed primarily at archives that contribute to the Hub, or are thinking about contributing, but we hope that it provides useful information for others about the sorts of developments going on at the Hub and how we are working to promote archives to researchers.

Hub Contributors’ Forum

At the Hub we are always looking to maintain an active and constructive relationship with our contributors. Our Contributors’ Forum provides one way to do this. It is informal, friendly, and just meets once or twice a year to give us a chance to talk directly to archivists. We think that archivists also value the opportunity to meet other contributors and think about issues around data discovery.

We have a Contributors’ Forum on 7th July at the University of Manchester and if any contributors out there would like to come we’d love to see you. It is a chance to think about where the Hub is going and to have input into what you think we should be doing, where our priorities should lie and how to make the service effective for users. Just in case you all jump in at once, we do have a limit on numbers….but please do get in touch if you are interested.

The session will be from 10.30 to 1.00 at the University of Manchester with lunch provided. It will be with some members of the Hub Steering Committee, so a chance for all to mix and mingle and get to know each other. And for you to talk to Steering Committee members directly.

Please email Lisa if you would like to attend: lisa.jeskins@manchester.ac.uk.

Contributor Audio Tutorials

Our audio tutorial is aimed at contributors who need some help with creating descriptions for the Hub. It takes you through the use of our EAD Editor, step-by-step. It is also useful in a general sense for creating archival descriptions, as it follows the principles of ISAD(G). The tutorial can be found at http://archiveshub.ac.uk/tutorials/. It is just a simple audio tutorial, split into convenient short modules, covering basic collection-level descriptions through to multi-level and indexing. Any feedback greatly appreciated – if you want any changes or more units added, just let us know.

Archives Hub Feature: 100 Objects

We are very pleased with our monthly features, founded by Paddy, now ably run by Lisa. They are a chance to show the wealth of archive collections and provide all contributors the opportunity to showcase their holdings.  They do quite well on Google searches as well!

Our monthly feature for June comes from Bradford Special Collections, one of our stalwart contributors, highlighting their current online exhibition: 100 Objects.  Some lovely images, including my favourite, ‘Is this man an anarchist?’ (No!! he’s just trying to look after his family): http://archiveshub.ac.uk/features/100objects/Nationalunionofrailwaymenposter.html

Relevance Ranking

Relevance ranking is a tricky beast, as our developer, John, will attest. How do you rank the results of a search in a way that users see as meaningful? Especially with archive descriptions, which range from a short description of a 100-box archive to a 10-page description of a 2-box archive!

John has recently worked on the algorithm used for relevance ranking, so that results now look more as most users would expect. For example, a search for ‘Sir John Franklin’ previously would not bring the ‘Sir John Franklin archive’ up near the top of the results; it now appears first, rather than way down the list. Result.
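To give a flavour of the kind of adjustment involved, here is a minimal, purely illustrative scoring sketch. The field names, weights and records are invented – this is not the Hub’s actual algorithm – but it shows the general idea: boosting records whose title contains the query as an exact phrase pushes the matching archive to the top.

```python
# Illustrative relevance scoring: exact-phrase matches in the title get a
# strong boost, phrase matches in the description a smaller one, and
# individual term frequency acts as a tie-breaker.

def score(query, record):
    q = query.lower()
    title = record["title"].lower()
    text = record["description"].lower()
    s = 0.0
    if q in title:            # exact phrase in the title: strong boost
        s += 10.0
    if q in text:             # exact phrase anywhere in the description
        s += 3.0
    for term in q.split():    # fall back to counting individual terms
        s += title.count(term) * 2.0 + text.count(term) * 0.5
    return s

def rank(query, records):
    return sorted(records, key=lambda r: score(query, r), reverse=True)
```

With this kind of weighting, a record titled ‘Sir John Franklin Papers’ outranks a long description that merely mentions the name several times.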

Images

Since last year we have provided the ability to add images to Hub descriptions. The images have to be stored elsewhere, but we will embed them into descriptions at any level (e.g. you can have an image to represent a whole collection, or an image at each item level description).

We’ve recently got some great images from the Design Council Archive: http://archiveshub.ac.uk/data/gb1837des-dca – take a look at the Festival of Britain entries, which have ‘digital objects’ linked at item level, enabling researchers to get a great idea of what this splendid archive holds.

Any contributors wishing to add images, or simple links to digital content, can easily do so using the EAD Editor: http://archiveshub.ac.uk/images/. You can also add links to documents and audio files. Let us know if you would like more information on this.

Linking to descriptions

Linking to Hub descriptions from elsewhere has become simpler, thanks to our use of ‘cool URIs’. See http://archiveshub.ac.uk/linkingtodescriptions/. You simply need to use the basic URI for the Hub, with the /data/ directory, e.g. http://archiveshub.ac.uk/data/gb029ms207.
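As a sketch of how simple these URIs are to construct: the pattern appears to be the base Hub address plus /data/ plus the identifier, lower-cased with spaces removed. The normalisation rules below are an assumption inferred from the example identifier, not the service’s actual code.

```python
# Hypothetical helper for building a Hub 'cool URI' from a repository
# code and a local reference. The lower-casing and space-stripping are
# assumptions based on the example gb029ms207, not documented rules.

def hub_uri(repository_code, local_ref):
    ref = (repository_code + local_ref).lower().replace(" ", "")
    return "http://archiveshub.ac.uk/data/" + ref
```

For example, `hub_uri("GB 029", "MS 207")` gives `http://archiveshub.ac.uk/data/gb029ms207`, matching the example above.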

Out and About

It would take up too much space to tell you about all of our wanderings, but recently Jane spent a very productive week in Prague at the European Libraries Automation Group (ELAG), a very friendly bunch of people, a good mix of librarians and developers, and a very useful conference centering on Linked Data.

Bethan is at the CILIP new professionals information day today, busy twittering about networking and sharing knowledge.

Lisa is organising our contributors’ workshops for this year (feels like our summer season of workshops) and has already run one in Manchester. More to follow in Glasgow, London and Cardiff. This is our first workshop in Wales, so please take advantage of this opportunity if you are in Wales or south west England. More information at http://archiveshub.ac.uk/contributortraining/

Joy is very busy with the exciting initiative, UKDiscovery. This is about promoting an open data agenda for archives, museums and libraries – something that we know you are all interested in. Take a look at the new website: http://discovery.ac.uk/.

With best wishes,
The Hub Team

Whose Data Is It?: a Linked Data perspective

A comment on the blog post announcing the release of the Hub Linked Data maybe sums up what many archivists will think: “the main thing that struck me is that the data is very much for someone else (like a developer) rather than for an archivist. It is both ‘our data’ and not our data at the same time.”

Interfaces to the data

Archives Hub search interface

In many ways, Linked Data provides the same advantages as other machine-based ways into the data. It gives you the ability to access data in a more unfiltered way. If you think about a standard Web interface search, what it does is to provide controlled ways into the data, and we present the data in a certain way. A user comes to a site, sees a keyword search box and enters a term, such as ‘antarctic exploration’. They have certain expectations of what they will get – some kind of list of results that are relevant to Antarctica and famous explorers and expeditions – and yet they may not think much about the process: will all records that have any/either/both of these terms be returned? Will the results be comprehensive? Might there be more effective ways to search for what they want?

As those who run systems, we have to decide what a search is going to give the user. Will we look for these terms as adjacent terms and single terms? Will we return results from any field? How will we rank the results? We recently revised the relevance ranking on the Hub because although it was ‘pragmatically’ correct, it did not reflect what users expect to see. If a user enters ‘sir john franklin’ (with or without quotation marks) they would expect the Sir John Franklin Papers to come up first. This was not happening with the previous relevance ranking. The point here is that we (the service providers) decide – we have control over what the search returns and how it is displayed, and we do our best to provide something that will work for users.

Similarly, we decide how to display the results. We provide as a basis collection descriptions, maybe with lower-level entries, but the user cannot display information in different ways. The collection remains the indivisible unit.

With a Web interface we are providing (we hope) a user-friendly way to search for descriptions of archives – one that does not require prior knowledge. We know that users like a straightforward keyword search, as well as options for more advanced searching. We hide all of the mechanics of running the search and don’t really inform the user exactly what their search is doing in any kind of technical sense. When a user searches for a subject in the advanced subject search, they will expect to get all descriptions relating to that subject, but that is not necessarily what they will get. The reason is that the subject search looks for terms within the subject field, so the creator of the description must have put the subject in as an index term. The creator may also have entered a different term for the subject – say ‘drugs’ instead of ‘medicines’. The Archives Hub has a ‘subject finder’ that returns results for similar terms, so it would find both of these entries. However, maybe the case of the subject finder makes a good point about searching: it provides a really useful way to find results, but it is quite hard to convey what it does quickly and obviously. It has never been widely used, even though evidence shows that users often want to search by subject – and by entering the subject as a keyword instead, they are likely to get less relevant results.
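The difference between the two kinds of search can be sketched very simply. This is a much-simplified model (the record structure is invented for illustration): a subject search only matches terms the cataloguer has indexed, while a keyword search matches anywhere in the text.

```python
# Simplified contrast between a fielded subject search and a free-text
# keyword search. Record structure is invented for this illustration.

def subject_search(records, term):
    # Only matches terms the cataloguer entered as index terms.
    return [r for r in records
            if term.lower() in (s.lower() for s in r["subjects"])]

def keyword_search(records, term):
    # Matches the term anywhere in the description text.
    return [r for r in records
            if term.lower() in r["description"].lower()]
```

So a record indexed under ‘drugs’ is found by a subject search for ‘drugs’ but missed by a subject search for ‘medicines’, even if the description mentions medicines throughout – which is exactly the gap a subject-finder layer that maps similar terms is there to bridge.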

These are all examples of how we, as service providers, look to find ways to make the data searchable in ways that we think users want and try to convey the search options effectively. But it does give a sense that they are coming into our world, searching ‘our data’, because we control how they can search and what they see.

Linked Data is a different way of formatting data that is based upon a model of the entities in the data and relationships between them. To read more about the basics of Linked Data take a look at some of the earlier posts on the Locah blog (http://blogs.ukoln.ac.uk/locah/2010/08/).
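As a hand-written, simplified illustration of that entity-and-relationship model – the URIs below are invented for the example, not the Hub’s actual identifiers – a collection and its creator might be expressed in Turtle like this:

```turtle
# Invented example: a collection and its creator as two entities,
# each with a URI, connected by a typed relationship.
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .

<http://example.org/archive/franklin-papers>
    dcterms:title   "Sir John Franklin Papers" ;
    dcterms:creator <http://example.org/person/sir-john-franklin> .

<http://example.org/person/sir-john-franklin>
    a foaf:Person ;
    foaf:name "Sir John Franklin" .
```

The point is that the person is an entity in their own right, not just a string inside a collection description, so other data about them can attach to the same URI.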

Providing machine interfaces gives a number of benefits. However, I want to refer to two types of ‘user’ here: the ‘intermediate user’ and the ‘end user’. The intermediate user is the one that gets the data and creates the new ways of searching and accessing the data. Typically, this may be a developer working with the archivist, but as tools are developed to facilitate this kind of work, it should become easier to work with the data in this way. The end user is the person who actually wants to use the data.

1) Data is made available to be selected and used in different ways

We want to provide the ability for the data to be queried in different ways and for users to get results that are not necessarily based upon the collection description. For example, the intermediate user could select only data that relates to a particular theme, because they are representing end users who are interested in combining that data with other sources on the same theme. The combined data can be displayed to end users in ways that work for a particular community or particular scenario.

The display within a service like the Hub is for the most part unchanging, providing consistency, and it generally does the job. We, of course, make changes and enhancements to improve the service based on user needs from time to time, but we’re still essentially catering for one generic user as best we can. However, we want to provide the potential to allow users to display data in their own way for their own purposes. Linked Data encourages this. There are other ways to make this possible, of course, and we have an SRU interface that is being used by the Genesis portal for Women’s Studies. The important point is that we provide the potential for these kinds of innovations.

2) External links begin the process of interconnecting data

Machine interfaces provide flexible ways into the data, but I think that one of the main selling points of Linked Data is, well, linking data. To do this with the Hub data, we have put some links in to external datasets. I will be blogging about the process of linking to VIAF (Virtual International Authority File) names, but suffice it to say that if we can make the statement within our data that 'Sir Ernest Shackleton' on the Hub is the same as 'Sir Ernest Shackleton' on VIAF, then we can benefit from anything that VIAF links to – DBpedia (Wikipedia output as Linked Data), for example. A user (or intermediate user) can potentially bring together information on Sir Ernest Shackleton from a wide range of sources. This provides a means to make data interconnected and to bring people through to archives via a myriad of starting points.
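The mechanics of this kind of linking can be sketched in a few lines. All of the identifiers and facts below are hypothetical placeholders, but they show how a single 'same as' statement lets information about Shackleton be merged across datasets:

```python
# owl:sameAs-style links let information about one entity be merged across
# datasets. All URIs and facts here are hypothetical placeholders.
hub_data = {"hub:person/shackleton": {"name": "Shackleton, Sir Ernest"}}
viaf_data = {"viaf:0000000": {"dbpedia": "dbpedia:Ernest_Shackleton"}}
same_as = [("hub:person/shackleton", "viaf:0000000")]

def merged_view(uri):
    """Combine everything known about an entity and its sameAs aliases."""
    aliases = ({uri}
               | {b for a, b in same_as if a == uri}
               | {a for a, b in same_as if b == uri})
    view = {}
    for alias in aliases:
        view.update(hub_data.get(alias, {}))
        view.update(viaf_data.get(alias, {}))
    return view

# The Hub's name entry and the (hypothetical) VIAF-to-DBpedia link
# come back as one combined record.
print(merged_view("hub:person/shackleton"))
```

In real Linked Data this merging happens across the Web rather than within one script, but the principle is the same: one link, and two datasets become navigable as one.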

3) Shared vocabularies provide common semantics

If we identify the title of a collection by using Dublin Core, then it shows that we mean the same thing by ‘title’ as others who use the Dublin Core title element. If we identify ‘English’ by using a commonly recognised URI (identifier) for English, from a common vocabulary (lexvo), then it shows that we mean the same thing as all the other datasets that use this vocabulary. The use of common vocabularies provides impetus towards more interoperability – again, connecting data more effectively. This brings the data out of the archival domain (where we share standards and terminology amongst our own community) and into a more global space.  It provides the potential for intermediate users to understand more about what our data is saying in order to provide services for end users. For example, they can create a cross-search of other data that includes titles, dates, extent, creator, etc. and have reasonable confidence that the cross-search will work because they are identifying the same type of content.
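As an illustration of what a shared vocabulary buys you, here is a rough sketch of re-keying a local record with common URIs. The Dublin Core and lexvo URIs are the commonly cited ones, but treat the record and the mapping itself as illustrative:

```python
# Shared vocabularies mean "title" and "English" are identified the same way
# everywhere. The Dublin Core and lexvo URIs below are the commonly cited
# ones; the record and mapping are illustrative.
PREDICATES = {"title": "http://purl.org/dc/terms/title",
              "language": "http://purl.org/dc/terms/language"}
LANGUAGES = {"English": "http://lexvo.org/id/iso639-3/eng"}

def to_shared(record):
    """Re-key a local record with shared-vocabulary URIs where we have them."""
    return {PREDICATES.get(k, k): LANGUAGES.get(v, v)
            for k, v in record.items()}

record = {"title": "Papers of Sir Ernest Shackleton", "language": "English"}
print(to_shared(record))
```

Two datasets that both do this can be cross-searched on title or language without any negotiation between their owners – the URIs carry the agreement.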

For the Hub there are certain entities where we have had to create our own vocabulary, because those in existence do not define what we need, but then there is the potential for other datasets to use the same terms that we use.

4) URIs are provided for all entities

For Linked Data one of the key rules is that entities are identified with HTTP URIs. This means that names, places, subjects, repositories, etc. within the Hub data are now brought to the fore through having their own identifier – all the individuals, for example, within the index terms, have their own URI. This allows the potential to link from the person identified on the Hub to the same person identified in other datasets.
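A hypothetical sketch of minting such URIs (the pattern here is illustrative, not the Hub's actual URI scheme):

```python
import re

# Every entity gets its own HTTP URI. This mints identifiers under an
# illustrative pattern; the Hub's real URI scheme may differ.
BASE = "http://data.archiveshub.ac.uk"

def mint_uri(entity_type, label):
    """Turn a label into a stable, readable HTTP URI for that entity."""
    slug = re.sub(r"[^a-z0-9]+", "-", label.lower()).strip("-")
    return f"{BASE}/id/{entity_type}/{slug}"

print(mint_uri("person", "Shackleton, Sir Ernest (1874-1922)"))
# http://data.archiveshub.ac.uk/id/person/shackleton-sir-ernest-1874-1922
```

Once a person, place or subject has a URI like this, anyone else's data can point at it, which is what makes the cross-dataset linking described above possible.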

Who is the user?

So far so good. But I think that whilst in theory Linked Data does bring significant benefits, maybe there is a need to explain the limitations of where we currently are.

Hub SPARQL endpoint

Our Linked Data cannot currently be accessed via a human-friendly Web-based search interface; it can, however, be accessed via a SPARQL endpoint. SPARQL is the query language for RDF, the format used for Linked Data. It shares many similarities with SQL, the language typically used for querying the conventional relational databases that are the basis of many online services. (Our SPARQL endpoint is at http://data.archiveshub.ac.uk/sparql.) What this means is that if you can write SPARQL queries then you're up and running. Most end users can't, so they will not be able to pull out the data in this way. And even once you've got the data, then what? Most people wouldn't know what to do with RDF output. In the main, therefore, fully utilising the data requires technical ability – it requires intermediate users to work with the data and create tools and services for end users.
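For the technically inclined, a query against the endpoint is just an HTTP request with the SPARQL text as a parameter. A sketch (the predicate is a plausible Dublin Core term, not necessarily the vocabulary our data actually uses):

```python
from urllib.parse import urlencode

# A SPARQL query an intermediate user might send to the endpoint.
# The predicate here is illustrative, not necessarily the Hub's vocabulary.
query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?collection ?title WHERE {
  ?collection dcterms:title ?title .
} LIMIT 10
"""

endpoint = "http://data.archiveshub.ac.uk/sparql"
url = endpoint + "?" + urlencode({"query": query})

# The query travels as an ordinary HTTP GET parameter.
print(url[:60])
```

This is exactly the barrier described above: the request itself is trivial to make, but composing the query and doing something useful with the RDF that comes back takes an intermediate user.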

For the Hub we have provided Linked Data views, but it is important not to misunderstand the role of these views – they are not any kind of definitive presentation; they are simply a means to show what the data consists of, and the user can then access that data as RDF/XML, JSON or Turtle (i.e. in a number of formats). It's a human-friendly view of the Linked Data if you access a Hub entity's web address via a web browser. If, however, you are a machine wanting machine-readable RDF and you visit the very same URI, you get the RDF straight off. This is not to say that it wouldn't be possible to provide all sorts of search interfaces onto the data – but that is not really the point for us at the moment. The point is to give other people the potential to do what they want to do.
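This 'same URI, different representation' behaviour is content negotiation, driven by the HTTP Accept header. A much-simplified sketch of the idea (not our actual implementation):

```python
# Content negotiation in miniature: the same URI returns human-friendly HTML
# to a browser and machine-readable RDF to a harvester, based on the HTTP
# Accept header. A simplified sketch, not the Hub's actual implementation.
SUPPORTED = ["text/html", "application/rdf+xml", "application/json", "text/turtle"]

def negotiate(accept_header):
    """Pick the first representation the client says it accepts."""
    for wanted in accept_header.split(","):
        mime = wanted.split(";")[0].strip()
        if mime in SUPPORTED:
            return mime
    return "application/rdf+xml"  # sensible default for a data service

print(negotiate("text/html,application/xhtml+xml"))  # a browser gets HTML
print(negotiate("text/turtle"))                      # a harvester gets RDF
```

(Real negotiation also weighs the `q=` quality values in the header; this sketch just takes the first supported type.)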

The realisation of the user benefit has always been the biggest question mark for me over Linked Data – not so much the potential benefits, as the way people perceive the benefits and the confidence that they can be realised. We cannot all go off and create cool visualisations (e.g. http://www.simile-widgets.org/timeline/). However, it is important to put this into perspective. The Hub data at Mimas sits in directories as EAD XML. Most users wouldn’t find that very useful. We provide an interface that enables users with no technical knowledge to access the data, but we control this and it only provides access to our dataset and to a collection-based view. In order to step beyond this and allow users to access the data in different ways, we necessarily need to output it in a way that provides this potential, but there is likely to be a lag before tools and services come along that take advantage of this. In other words, what we are essentially doing is unlocking more potential, but we are not necessarily working with that potential ourselves – we are simply putting it out there for others.

Having said that, I do think that it is really important for us to now look to demonstrate the benefits of Linked Data for our service more clearly by providing some ways into the Linked Data that take advantage of the flexible nature of the data and the external links – something that ‘ordinary’ users can benefit from. We are looking to work on some visualisations that do demonstrate some of the potential. There does seem to be an increasing consensus within cultural heritage that primary resources are too severed from the process of research – we have a universe of unrelated bits that hint at what is possible but do not allow it to be realised. Linked Data is attempting to resolve this, so it’s worth putting some time and effort into exploring what it can do.

We want our data to be available so that anyone can use it as they want. It may well be that the best thing done with the data will be thought of by someone else (see Paul Walk's blog post for a view on this).

However, this is problematic when trying to measure impact, and if we want to understand the benefits of Linked Data we could do with a way to measure them. Certainly, we can continue to work to realise benefits by actively working with the Linked Data community and encouraging a more constructive and effective relationship between developers and managers. It seems to me that things like Linked Data require us to encourage developers to innovate and experiment with the data, enabling users to realise its benefits by taking full advantage of the global interconnectivity that is the vision of the Linked Data Web. This is the aim of UKOLN’s Dev CSI project – something I think we should be encouraging within our domain.

So, coming back to the starting point of this blog: the data may start off as 'our data', but really we do indeed want it to be everyone's data – a pick 'n' mix environment to suit every information need.

Flickr: davidlocke's photostream

The Standard Bearers

We generally like standards. Archivists, like many others within the information professions, see standards as a good thing. But if that is the case, and we follow descriptive standards, why aren't our collection descriptions more interoperable? Why can't users move seamlessly from one system to another and find them consistent?

I’ve been looking at a White Paper by Nick Poole of the Collections Trust: Where Next for Museum Standards? In this, he makes a good point about the reasons for using standards:

“Standards exist to condense and share the professional experience of our predecessors, to enable us to continue to build on their legacy of improvement.”

I think this point is sometimes overlooked – standards reflect the development of our understanding and expertise over time. As a novice jazz musician, I think this has a parallel with jazz theory – the point of theory is partly that it condenses what has been learnt about harmony, rhythm and melody over the past 100 years of jazz. The theory is only the means to the end, but without it acting effectively as a short cut, you would have to work your way through decades of musical development to get a good understanding of the genre.

Descriptive standards should be the means to the end – they should result in better metadata. Before the development of ISAD(G) for archives, we did not have an internationally recognised standard to help us describe archives in a largely consistent way (although ISAD(G) is not really a content standard). EAD has proved a vital addition to our range of standards, helping us to share descriptions far more effectively than we could do before.

But archives are diverse, and maybe we have to accept that standards are not going to mould our descriptions so that they all come off the conveyor belt of cataloguing looking the same? It may seem like something that would be of benefit to our users – descriptions that look pretty much identical apart from the actual content. But would it really suffice to reflect the reality of what archives are? Would it really suffice to reflect the reality of the huge range of users that there are?

Going back to Nick Poole’s paper, he says:

“The purpose of standards is not to homogenise, but to ensure that diversity is built on a solid foundation of shared knowledge and understanding and a collective commitment to quality and sustainability.”

I think this is absolutely right. However, I do sometimes wonder how solid this foundation is for archives, and how much our standards facilitate collaborative understanding. Standards need to be clearly presented and properly understood by those who are implementing them. From the perspective of the Hub, where we get contributions of data from 200 different institutions, standards are not always well understood. I'm not sure that people always think carefully about why they are using standards – this is just as important as applying the standards. It is only by understanding the purpose that I think you do come to a good sense of how to apply a standard properly. For example, we get some index terms that are ostensibly using NCA Rules (National Council on Archives Rules for Personal, Family and Place Names), but the entries are not always in line with the rules. We also get subject entries that do not conform to any thesauri, or maybe they conform to an in-house thesaurus, but for an aggregated service, this does not really help in one of the main aims of subject indexing – to pull descriptions together by subject.

Just as for museums, standards, as Nick Poole says, must be “communicated through publications, websites, events, seminars and training. They must be supported, through infrastructure and investment, and they must be enforced through custom, practice or even assessment and sanction.”

For the Hub, we have made one important change that has made descriptions much more standards-compliant – we have invested in an 'EAD Editor', a template-based tool for the creation and editing of EAD archival descriptions. This sophisticated tool helps to ensure valid and standards-based descriptions. This idea of supporting standards through this kind of approach seems to me to be vital. It is hard for many archivists to invest the time that it takes to really become expert in applying standards. For the Hub we are only dealing with descriptive standards, but archivists have many other competing standards to deal with, such as environmental and conservation standards. Software should have standards-compliance built in, but it should also be designed to meet the needs of archivists and users. This balance between standards and flexibility is tricky, but standards are not going to be effective if they don't actually meet real-life needs. I do sometimes think that standards suffer from being developed somewhat in isolation from practical reality – this can be a result of the funding environment, where people are paid to work on standards, and they don't tend to be the people who implement them. Standards may also suffer from the perennial problem of a shifting landscape – standards that were clearly relevant when they were created may be rather less so 10 years on, but revising standards is a time-consuming process. The archives community has the NCA Rules, which have served their purpose very well, but they really need revising now, to bring them in line with the online, global environment.

In the UK Archives Discovery network (UKAD) we are working to help archivists understand and use standards effectively. We are going to provide an indexing tutorial and we are discussing ways to provide more guidance on cataloguing generally. The survey that we carried out in 2009 showed that archivists do want more guidance here. Whilst maybe there are some who are not willing to embrace standards, the vast majority can see the sense in interoperability, and just need a low-barrier way to improve their understanding of the standards that we have and how best to use them. But in the end, I can’t see that we will ever have homogeneous descriptions, so we need to harness technology in order to help us work more effectively with the diverse range of descriptions out there that reflect the huge diversity of archives and users.

Images: Flickr goosmurf’s photostream (dough cutter); robartesm’s photostream (standard bearer)

The long tail of archives

For many of us, the importance of measuring use and impact are coming more to the fore. Funders are often keen for indications of the ‘value’ of archives and typically look for charts and graphs that can provide some kind of summary of users’ interaction with archives. For the Hub, in the most direct sense this is about use of the descriptions of archives, although, of course, we are just as interested in whether researchers go on to consult archives directly.

The pattern of use of archives, and the implications of this, are complex. 'The long tail' has become a phrase that is bandied around quite a bit, and to my mind it is one of those concepts that is genuinely useful. It was popularised by Chris Anderson, more in relation to the commercial world, relating to selling a smaller number of items in large quantities and a large number of items in relatively small quantities, and you can read more about it in Wikipedia: Long Tail.

If we think about books, we might assume that a smaller number of popular titles are widely used and use gradually declines until you reach a long tail of low use.  We might think that the pattern, very broadly speaking, is a bit like this:

I attended a talk at the UKSG Conference recently, where Terry Bucknell from the University of Liverpool was talking about the purchase of e-books for the University. He had some very whizzy and really quite absorbing statistics analysing the use of packages of e-books. It seems that use is hard to predict: whilst a new package of e-books is the most widely used for that particular year, the older packages are still significantly used, and indeed, some books that are barely used one year may get significant use in subsequent years. The patterns of use suggested that patron-driven acquisition, or selection of titles after one year of use, was not as good value as e-book packages, although you cannot accurately measure the return on investment after only one year.

Archives are kind of like this, only a whole lot trickier to deal with.

For archives, my feeling is that the graph is more like this:

No prizes for guessing which are the vastly more used collections*. We have highly used collections for popular research activities, archives of high-profile people and archives around significant events, and it is often these that are digitised in order to protect the originals.  But it is true to say that a large proportion of archives are in the ‘long tail’ of use.
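The arithmetic behind the long tail is worth making concrete. The numbers below are invented, but they show how hundreds of lightly used collections can together account for as much use as, or more than, a handful of popular ones:

```python
# A toy illustration of the long tail of archive use. The numbers are
# invented, but the point holds: the tail's total use can exceed the head's.
head = [1000, 800, 600]  # a handful of heavily used collections
tail = [6] * 500         # hundreds of collections, each used a little

print(sum(head))  # 2400
print(sum(tail))  # 3000 -- the larger share of all use, spread thinly
```

Which is exactly why judging collections one by one on their usage figures misleads: each tail collection looks negligible, but collectively they carry much of the service's use.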

I think this can be a problem for us. Use statistics can dominate perceptions of value and influence funding, often very profoundly. Yet I think that this is completely the wrong way to look at it. Direct use does not correlate to value, not within archives.

I think there are a number of factors at work here:

  • The use of archives is intimately bound up with how they are catalogued. If you have a collection of letters, and just describe it thus, maybe with the main author (or archival 'creator') and covering dates, then researchers will not know that there are letters by a number of very interesting people, about a whole range of subjects of great interest for all sorts of topics. Often, archivists don't have the time to create rich metadata (I remember the frustrations of this lack of time). Having worked in the British Architectural Library, I remember that we had great stuff for social history, history of empire, in particular the Raj in India, urban planning, environment, even the history of kitchen design or local food and diet habits. We also had a wonderful collection of photographs, and I recall the Photographs Curator showing me some really early and beautiful photographs of Central Park in New York. It's these kinds of surprises that are the stuff of archives, but we don't often have time to bring them out in the cataloguing process.
  • The use of a particular archive collection may be low, and yet the value gained from the insights may be very substantial. Knowledge gained as a result of research in the archives may feed into one author’s book or article, and from there it may disseminate widely. So, one use of one archive may have high value over time. If you fed this kind of benefit in as indirect use, the pattern would look very different.
  • The ‘value’ of archives may change over time. Going back to my experience at the British Architectural Library, I remember being told how the drawings of Sir Edwin Lutyens were not considered particularly valuable back in the 1950s – he wasn’t very fashionable after his death. Yet now he is recognised as a truly great architect, and his archives and drawings are highly prized.
  • The use of archives may change over time. Just because an archive has not been used for some time – maybe only a couple of researchers have accessed it in a number of years – it doesn’t mean that it won’t become much more heavily used. I think that research, just like many things, is subject to fashions to some extent, and how we choose to look back at our past changes over time. This is one of the challenges for archivists in terms of acquisitions. What is required is a long-term perspective but organisations all too often operate within short-term perspectives.
  • Some archives may never be highly used, maybe due to various difficulties interpreting them. I suppose Latin manuscripts come to mind, but also other manuscripts that are very hard to read and those pesky letters that are cross-written. Also, some things are specialised and require professional or some kind of expert knowledge in order to understand them. This does not make them less valuable. It’s easy to think of examples of great and vital works of our history that are not easy for most people to read or interpret, but that are hugely important.
  • Some archives are very fragile, and therefore use has to be limited. Digitising may be one option, but this is costly, and there are a lot of fragile archives out there.

I’m sure I could think of some more – any thoughts on this are very welcome!

So, I think that it's important for archivists to demonstrate that whilst there may be a long tail to archives, the value of many of those archives that are not highly used can be very substantial. I realise that this is not an easy task, but we do have one invention in our favour: the Web. Not to mention the standards that we have built up over time to help us describe our content. The long tail graph demonstrates that the 'long tail of use' can amount to just as much as, or more than, the 'high column of use'. The Web is vital in making this a reality, because researchers all over the world can discover archives that were previously extremely hard to surface. That does still leave the problem of not being able to catalogue in depth in order to help surface content…the experiments with crowd-sourcing and user-generated content may prove to be one answer. I'd like to see a study of this – have the experiments with asking researchers to help us catalogue our content proved successful if we take a broad overview? I've seen some feedback on individual projects, such as OldWeather:

“Old Weather (http://www.oldweather.org) is now more than 50% complete, with more than 400,000 pages transcribed and 80 ships’ logs finished. This is all thanks to the incredible effort that you have all put in. The science and history teams are constantly amazed at the work you’re all doing.” (a recent email sent out to the contributors, or ‘ship captains’).

If anyone has any thoughts or stories about demonstrating value, we’d love to hear your views.

* family history sources

New Horizons

The Horizon Report is an excellent way to get a sense of emerging and developing technologies, and it is worth thinking about what they might mean for archives. In this post I concentrate on the key trends that are featured for the next 1-4 years.

Electronic Books

“[E]lectronic books are beginning to demonstrate capabilities that challenge the very definition of reading.”

Electronic books promise not just convenience, but also new ways of thinking about reading. They encourage interactive, social and collaborative approaches. Does this have any implications for archives? Most archives are paper-based and do not lend themselves so well to this kind of approach. We think of consulting archives as a lone pursuit, in a reading room under carefully controlled conditions. The report refers to “a dynamic journey that changes every time it is opened.” An appealing thought, and indeed we might feel that archives also offer this kind of journey. Increasingly we have digital and born-digital archives, but could these form part of a more collaborative and interactive way of learning? Issues of authenticity, integrity and intellectual property may militate against this.

Whilst we may find it hard to see how archives could become a part of this world – we are talking about archives, after all, and not published works – there may still be implications around the ways that people start to think about reading. Will students become hooked on rich, visual interfaces and collaborative opportunities that simply do not exist with archives?

Mobiles

“According to a recent report from mobile manufacturer Ericsson, studies show that by 2015, 80% of people accessing the Internet will be doing so from mobile devices.”

Mobiles are a major part of the portable society. Archive repositories can benefit from this, ensuring that people can always browse their holdings, wherever they are. We need to be involved in mobile innovation. As the report states: “Cultural heritage organizations and museums are also turning to mobiles to educate and connect with audiences.” We should surely see mobiles as an opportunity, not a problem for us, as we increasingly seek to broaden our user-base and connect with other domains. Take a look at the ‘100 most educational iPhone Apps‘. They include a search of US historical documents with highlighting and the ability to add notes.

Augmented Reality

We have tended to think of augmented reality as something suitable for marketing, social engagement and amusement. But it is starting to provide new opportunities for learning and to change expectations around access to information. This could provide opportunities for archives to engage with users in new ways, providing a more visual experience. Could it provide a means to help people understand what archives are all about? Stanford University in the US has created an island in Second Life. The unique content that the archives provide was seen as something that could draw visitors back and showcase the extensive resources available. Furthermore, they created a 'virtual archives', giving researchers an opportunity to explore the strong rooms, discover and use collections and collaborate in real time.

The main issue around using these kinds of tools is going to be the lack of skills and resources. But we may still have a conflict of opinions over whether virtual reality really has a place in ‘serious research’. Does it trivialize archives and research? Or does it provide one means to engage younger potential users of archives in a way that is dynamic and entertaining? I think that it is a very positive thing if used appropriately. The Horizon Report refers to several examples of its use in cultural heritage: the Getty Museum are providing ‘access’ to a 17th century collector’s cabinet of wonders; the Natural History Museum in London are using it in an interactive video about dinosaurs; the Museum of London are using it to allow people to view 3D historical images overlaid on contemporary buildings. Another example is the Powerhouse Museum in Sydney, using AR to show the environment around the Museum 100 years ago. In fact, AR does seem to lend itself particularly well to teaching people about the history around them.

Game-Based Learning

Another example of blending entertainment with learning, games are becoming increasingly popular in higher education, and the Serious Games movement is an indication of how far we have come from the notion that games are simply superficial entertainment. “[R]esearch shows that players readily connect with learning material when doing so will help them achieve personally meaningful goals.” For archives, which are often poorly understood by people, I think that gaming may be one possible means to explain what archives are, how to navigate through them and find what may be of interest, and how to use them. How about something a bit like this Smithsonian initiative, Ghosts of a Chance, but for archives?

These technologies offer new ways of learning, but they also suggest that our whole approach to learning is changing. As archivists, we need to think about how this might impact upon us and how we can use it to our advantage. Archives are all about society, identity and story. Surely, therefore, these technologies should give us opportunities to show just how much they are a part of our life experiences.

A bit about Resource Discovery

The UK Archives Discovery Network (UKAD) recently advertised our upcoming Forum on the archives-nra listserv. This prompted one response asking whether 'resource discovery' is what we now call cataloguing and getting the catalogues online. The respondent went on to ask why we feel it necessary to change the terminology of what we do, and labelled the term resource discovery as 'gobbledegook'. My first reaction to this was one of surprise, as I see it as a pretty plain-talking way of describing the location and retrieval of information, but then I thought that it's always worth considering how people react and what leads them to take a different perspective.

It made me think that even within a fairly small community, which archivists are, we can exist in very different worlds and have very different experiences and understanding. To me, 'resource discovery' is a given; it is not in any way an obscure term or a novel concept. But I now work in a very different environment from when I was an archivist looking after physical collections, and maybe that gives me a particular perspective. Being manager of the Archives Hub, I have found that a significant amount of time has to be dedicated to learning new things and absorbing new terminology. There seem to be learning curves all over the place, some little and some big. Learning curves around understanding how our Hub software (Cheshire) processes descriptions, Encoded Archival Description, deciding whether to move to the EAD schema, understanding namespaces, search engine optimisation, sitemaps, application programming interfaces, character encoding, stylesheets, log reports, ways to measure impact, machine-to-machine interfaces, scripts for automated data processing, linked data and the semantic web, etc. A great deal of this is about the use of technology, and figuring out how much you need to know about technology in order to use it to maximum effect. It is often a challenge, and our current Linked Data project, Locah, is very much a case in point (see the Locah blog). Of course, it is true that terminology can sometimes get in the way of understanding, and indeed, defining and having a common understanding of terms is often itself a challenge.

My expectation is that there will always be new standards, concepts and innovations to wrestle with, try to understand, integrate or exclude, accept or reject, on pretty much a daily basis. When I was the archivist at the RIBA (Royal Institute of British Architects), back in the 1990s, my world centred much more around solid realities: around storerooms, temperature and humidity, acquisitions, appraisal, cataloguing, searchrooms and the never-ending need for more space and more resources. I certainly had to learn new things, but I also had to spend far more time than I do now on routine or familiar tasks; very important, worthwhile tasks, but still largely familiar and centred around the institution that I worked for and the concepts and terminology commonly used by archivists. If someone had asked me what resource discovery meant back then, I'm not sure how I would have responded. I think I would have said that it was to do with cataloguing, and I would have recognised the importance of consistency in cataloguing. I might have mentioned our Website, but only in as far as it provided access through to our database. The issues around cross-searching were still very new and ideas around usability and accessibility were yet to develop.

Now, I think about resource discovery a great deal, because I see it as part of my job to think of how best to represent the contributors who put time and effort into creating descriptions for the Hub. To use another increasingly pervasive term, I want to make the data that we have 'work harder'. For me, catalogues that are available within repositories are just the beginning of the process. That's fine if you have researchers who know that they are interested in your particular collections. But we need to think much more broadly about our potential global market: all the people out there who don't know they are interested in archives – some, even, who don't really know what archives are. To reach them, we have to think beyond individual repositories and we have to see things from the perspective of the researcher. How can we integrate our descriptions into the 'global information environment' in a much more effective way? A most basic step here, for example, is to think about search engine optimisation. Exposing archival descriptions through Google, and other search engines, has to be one very effective way to bring in new researchers. But it is not a straightforward exercise – books are written about SEO, and experts charge for their services in helping optimise data for the Web. For the Archives Hub, we were lucky enough to be part of an exercise looking at SEO and how to improve it for our site. We are still (pretty much as I write) working on exposing our actual descriptions more effectively.
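One concrete, low-effort piece of that SEO work is a sitemap telling crawlers which description pages exist. A minimal sketch using the standard sitemaps.org format (the description URLs here are hypothetical):

```python
import xml.etree.ElementTree as ET

# Build a minimal sitemap for a few (hypothetical) archival description
# URLs, using the standard sitemaps.org format that search engines read.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urls = ["http://archiveshub.ac.uk/data/gb123-abc",
        "http://archiveshub.ac.uk/data/gb456-def"]

urlset = ET.Element("urlset", xmlns=NS)
for u in urls:
    ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = u

sitemap_xml = ET.tostring(urlset, encoding="unicode")
print(sitemap_xml)
```

Submitting something like this to search engines is one of the simplest ways to make individual descriptions, rather than just the service's front page, discoverable.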

Linked Data provides another whole world of unfamiliar terminology to get your head round. Entities, triples, URI patterns, data models, concepts and real world things, SPARQL queries, vocabularies – the learning curve has indeed been steep. Working on outputting our data as RDF (a modelling framework for Linked Data) has made me think again about our approach to cataloguing and cataloguing standards. At the Hub, we’re always on about standards and interoperability, and it’s when you come to something like Linked Data, where there are exciting possibilities for all sorts of data connections, well beyond just the archive community, that you start to wish that archivists catalogued far more consistently. If only we had consistent ‘extent’ data, for example, we could look at developing a lovely map-based visualisation showing where there are archives based on specific subjects all around the country, and have a sense of where there are more collections and where there are fewer. If only we had consistent entries for people’s names, we could do the same sort of thing there, but even with thesauri, we often have more than one name entry for the same person. I sometimes think that cataloguing is more of an art than a science, partly because it is nigh on impossible to know what the future will bring, and therefore knowing how to catalogue to make the most of as yet unknown technologies is tricky to say the least. But also, even within the environment we now have, archivists do not always fully appreciate the global and digital environment, which requires new ways of thinking about description. Which brings me back to the idea of whether resource discovery is another term for cataloguing and getting catalogues online. No, it is not. It is about the user perspective, about how researchers locate resources and how we can improve that experience.
The term ‘resource’ itself has increasingly become identified with the Web, used to define its fundamental elements: objects that are available and can be accessed through the Internet – in fact, any concept that has an identity expressed as a URI. Yes, cataloguing is key to archives discovery, cataloguing to recognised standards is vital, and getting catalogues online in your own particular system is great…but there is so much more to the whole subject of enabling researchers to find, understand and use archives, and integrating archives into the global world of resources available via the Web.
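The triple model behind all of this is simpler than the terminology suggests. Here is a minimal sketch in plain Python, with entirely hypothetical URIs and property names (real Hub identifiers would follow the URI patterns designed for the Locah project), showing how statements about an archive become (subject, predicate, object) triples and how a query is just pattern matching over them – which is, in essence, what SPARQL does over a triple store:

```python
# Hypothetical triples describing an archival collection. The URIs and
# property names are illustrative only, not the Hub's actual vocabulary.
triples = [
    ("http://example.org/archive/gb123", "rdf:type", "ArchivalCollection"),
    ("http://example.org/archive/gb123", "dc:subject", "architecture"),
    ("http://example.org/archive/gb123", "heldBy", "http://example.org/repository/riba"),
    ("http://example.org/repository/riba", "rdfs:label", "Royal Institute of British Architects"),
]

def objects_of(subject, predicate):
    """Return every object asserted for a subject/predicate pair --
    the basic graph-pattern matching at the heart of a SPARQL query."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("http://example.org/archive/gb123", "dc:subject"))
# -> ['architecture']
```

The point of the consistency plea above is visible even in this toy: the `dc:subject` lookup only works because the subject term is recorded in exactly one form. Two spellings of the same subject, or two name entries for the same person, would simply look like two different things to the machine.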

Digital Curation: think use, not preservation

For the keynote presentation at the DCC/RIN Research Data Management Forum on ‘The Economics of Applying and Sustaining Digital Curation’, Chris Rusbridge gave us some reflections from the Blue Ribbon Task Force (BRTF) on Sustainable Digital Preservation and Access (http://brtf.sdsc.edu/about.html). This was a two-year project, finishing earlier this year, and the final report is available from: http://brtf.sdsc.edu/biblio/BRTF_Final_Report.pdf

Chris kicked off by asking us to think about how we currently support access to digital information. Avenues include Government grants, advertisements (e.g. through Google), subscriptions (to journals), pay per service (e.g. Amazon Web Services), and donations.

One of the key themes that he raised and returned to was the alignment, or lack of alignment, between those who pay for, those who provide and those who benefit from digital data: they are not necessarily the same, and the more different they are, the harder it may be to create a sustainable model. Who owns, who benefits, who selects, who preserves, who pays? This has interesting parallels with archive repositories, where an institution may pay for the acquisition, appraisal, storage, cataloguing and access to these resources, but the beneficiaries are far broader than just members of the institution. Some institutions may require payment for access, but others will provide access free of charge; they may see this as a means to enhance their reputation and status as a learned society.

Around 15 years ago we started to think about digital preservation as a technical problem and then the OAIS reference model was produced. The technical capabilities that we now have are well up to the task, although Chris warned that the most elegant technical solution is no good if it is not sustainable; digital preservation has to be a sustainable economic activity. Today the focus is on the economic and organisational problems. It is not just about money; it requires building upon a value proposition, providing incentives to act and defining roles and responsibilities.

Digital preservation represents a derived demand. No one ‘wants’ preservation per se; what they want is access to a resource. It is not easy to sell a derived demand – often it needs to be sold on some other basis. This idea of selling the importance of providing use (over time), rather than trying to sell the idea of preservation, was emphasised throughout the Forum.

Digital preservation is also ‘path dependent’, meaning that the actions and decisions you take change over time; they are different at different points of the life-cycle. Today’s actions can remove other options for all time.

Cultural issues and mindset may be a problem here, and I was interested in the potential problem Chris proposed of the ‘free-rider’ culture when it comes to making research datasets available. It may be that some (many?) researchers don’t want to pay for things, undervalue services and maybe underestimate costs. Researchers may also resent conformity and what they see as bureaucracy. All in all, it may be difficult to make a case that researchers should in some way pay. This may be compounded by a sense that money invested in preservation is money taken out of research. Chris suggested that the incentives for preservation are less apparent to the individual researcher, but are more clearly defined when the data is aggregated.

Typically, long-term preservation activities have been funded by short-term resource allocation, although maybe this is gradually changing; a more thorny issue is that of recognising and valuing the benefits of digital preservation, to provide incentives that attract funding. More work needs to be done on articulating the benefits in order to cultivate a sense of the value. However, other speakers at the Forum wondered whether we should actually take the value as a given – maybe we shouldn’t keep asking the question about benefits, but simply acknowledge that it is the right thing to make research and other digital outputs available long-term? We may be creating problems for ourselves if we emphasise the need to demonstrate value too much, and then struggle to quantify the value. However, this was just one argument, and overall I think that there was a belief that we do need to understand and articulate the benefits of providing long-term access.

There is often a lack of clear responsibility around digital preservation – maybe this is one of those areas where it’s always thought to be someone else’s responsibility? So, appropriate organisation and governance is essential for efficient ongoing preservation, especially when considering the tendency for data to be transferred – these ‘handoffs’ need to be secure.

The three imperatives that the BRTF report comes up with are: to articulate a compelling value proposition; to provide clear incentives to preserve in the public interest; and to define roles and responsibilities.

Commenting briefly on the post-BRTF developments, Chris mentioned the EU digital agenda and the LIBER pan-European survey on sustainability preparedness.

There are some mandates emerging: from the NERC and ESRC, for example. Some publishers do require authors to make available data that substantiates an article, but at present this is not rigorous enough. We need to focus more on the data behind the research and how important it is.

Chris contrasted domain data repositories and institutional data repositories. Domain data repositories: leverage scale and expertise; are valuable for ‘high curation’ data; can carry out a ‘community proxy’ role such as tool development; aggregate demand; and are potentially vulnerable to policy change (e.g. AHDS). A mixed funding model is desirable for domain data repositories (e.g. ICPSR). Institutional data repositories: have a reputational business case (risk management, records management aspects, showcasing); should be aligned with institutional goals; can link to institutional research services (e.g. universal backup); can work well for ‘low curation’ cases (relatively small, static datasets); and aggregate demand across a set of disciplines.

One issue that came up in the discussion was that we must remember that digital preservation is in fact relatively cheap, especially when compared to the preservation of hard-copy archives, held in acid-free boxes on rows and rows of shelving in secure, controlled conditions. So, if the cost is actually not prohibitive, and the technical know-how is there, then it seems imperative to address the organisational issues and to really hammer home the true value of preserving our digital data.

Opening the door to demonstrating value

The Archives Hub team value the links that we have with our contributors, who, after all, make the Hub what it is. We have a Contributors’ Forum in order to establish and develop links with contributors and get their feedback on Hub developments.

This week we ran a Contributors’ Forum that concentrated on measuring impact, something that is becoming increasingly important in order to demonstrate value. Unfortunately, we ended up with quite a small group, despite sending out some enticing emails – maybe a sign of the difficult times. But we still had a stimulating discussion, and for us it is always very valuable to get a perspective from the actual archives repositories.

We spent the first part of the morning with updates on the Hub and reports from the contributors: John Rylands at Manchester, Salford, Liverpool and Glasgow. Joy then gave a presentation on measuring impact, reflecting on some work that the Archives Hub, Copac and Zetoc services have carried out through online surveys and one-to-one interviews with researchers in order to create case studies.

In the afternoon we concentrated on measuring impact by asking the contributors to think about (i) what sort of information they currently collect about their researchers and (ii) what sort of information they would like to have. Overall, it seems that most archives have some form of registration, where researchers give some details about themselves. But the information recorded varies, not surprisingly. Sometimes information such as the items consulted is given, sometimes researchers are asked to specify their subject area, and at Glasgow they are asked how they found out about the University Archive. At Liverpool all of the requisition slips are studiously kept, so that there is a record of who has looked at what, and at Glasgow there is a log of everything leaving the strong room – I’m sure that for most archives this is the case. At Salford, phone and email enquiries are all logged, and website statistics are also kept.

However, it seems that in general there is very little information on what happens next. How does the visit to the archive benefit the researcher? Do they use what they have found in publications? reports? articles? The archive repository may find this sort of information out if the researcher asks about copyright issues, but otherwise it is very hard to know. We agreed that informal networks can be valuable here. Archivists often get to know regular researchers, and in fact, this may be more likely to happen at smaller repositories where there is a lone archivist. But this can only account for a small part of the use of the collections. In fact, two of our contributors said that a reception desk had recently been installed so that researchers often don’t really interact directly with the archivist unless they have a particular query, so whilst this may be more efficient, it may distance us more from our users.

Also, it seems that the information that is gathered is not really utilised. It ‘may’ go into reports, and it ‘may’ be used for funding applications, but the suggestion is that this is done in a rather ad hoc manner. At Glasgow, it is important to show that the researchers and students from the University are being prioritised, so the information gathered can help to support this kind of situation.

From the discussion that we had around this topic, our first likely action arose: if there is an easy way for a researcher to grab an archival reference, it will encourage people to include the correct citation, which will help with tracking the use of archives. This is something that we should be able to introduce for the Hub.
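To give a sense of how small a feature this could be, here is a hypothetical sketch of a ‘grab this reference’ helper: assembling a ready-made citation string from fields already present in a description. The field names and example values are purely illustrative, not the Hub’s actual schema, and real citation practice varies between repositories:

```python
# Hypothetical "grab this reference" helper. The parameters and the
# Title, Repository, Reference ordering are illustrative assumptions,
# not the Hub's actual schema or a prescribed citation style.
def format_citation(repository, collection_title, reference_code):
    """Assemble a copy-and-paste citation from description fields."""
    return f"{collection_title}, {repository}, {reference_code}"

citation = format_citation(
    repository="Example University Archives",
    collection_title="Example Collection",
    reference_code="GB 000 EX/1",
)
print(citation)
# -> Example Collection, Example University Archives, GB 000 EX/1
```

The design point is less the string formatting than the incentive: if the correct reference is one click away, researchers are more likely to reproduce it verbatim in their publications, which is what makes downstream tracking of archive use feasible at all.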

We talked about how easy it would be to simply ask researchers if they will speak about their research. Maybe they could be encouraged to put something about this on the registration form. We felt that if we are honest about what we need (which is often to demonstrate our value in order to secure continued funding), then researchers may be more willing than we might suppose. There is, undoubtedly, a huge feeling of goodwill towards archives, and, as one contributor said, we may be pushing at an open door here.

We talked about the sort of information we would like to gather, and came up with some possibilities:

We would like to know how researchers are coming to the repository – e.g. from the Archives Hub, from a Hub Spoke, from the NRA?
We would like to know if users find what they need from the archival descriptions themselves. Maybe more detailed descriptions sometimes provide the information that they need – they might even show that the archive is not relevant to the research, thus saving the researcher a wasted visit (a positive negative outcome!).
We would like to know more about how people behave when looking at an archive catalogue: Where do they navigate to? Do they explore the catalogue? Do they search laterally?

From the discussion with these contributors, it seems that the Archives Hub is having to place more emphasis on issues around ‘market penetration’ than individual repositories are at present, although it was felt that this is starting to change, and that archives may well be faced with more pressure to understand their markets and how to reach them effectively.

Finally, we came up with another action, which was to try to compile 3 case studies over the next year. John Rylands agreed to work with us on the first one, so that we can test out how best to approach this. It may be that telling stories is the most fruitful way to get a sense of the impact that archives have. But we cannot ignore the fact that statistics are required, and we do have to continue to look for different ways to demonstrate our value.