Arrive in Wonder, Leave in Wisdom!

Roll Up, Roll Up for Open Culture!

Open Culture banner

I arrived at the Open Culture conference just in time to grab a cup of tea and dash along to hear Malcolm Howitt’s talk on Axiell. He focussed on Axiell Arena, a new content management option. It provides for a more interactive experience, complete with tag cloud and the ability to add comments. It looked pretty good, very much in line with where things are going in terms of these kinds of websites. However, from our point of view as an aggregator, what we are keen to see is an API to the data to enable others to engage with it more flexibly, something that has yet to happen on CALM. Maybe this raises the whole issue of the challenge of open data to commercial suppliers – it does rather appear to threaten their business model, and I can see that this would be of concern to them.

The second presentation I saw was from Deep Visuals on ViziQuest, ‘a new way to explore digital collections’. They use natural language processing to extract concepts from the text, so the system uses existing metadata in order to enable semantic browsing. The idea is to provide a different kind of search experience, where the user can meander through a collection of images. You can flip over an image to find metadata about it, which is quite neat.

Deep Visuals have worked with the Scott Polar Research Institute, one of the Hub contributors, and there are some wonderful images of expeditions. For some images, the archivist has recorded audio, and there are also some film clips – I saw a great clip taken on board a ship bound for the Arctic. Currently the software is only available to users within the institute, but it may be made available through the website. You can see a small demo here: http://www.deepvisuals.com/Demo/. In addition, ViziQuest have taken some expedition diaries and recorded some audio with actors.

The morning was rounded off with a talk about Culture Grid. The importance of Culture Grid being part of national and international initiatives was emphasised, and there was reference to RDTF (now UKDiscovery) and the whole HE agenda, which was good to hear.

Currently Culture Grid contains about 1.65 million item records, mostly referring to images. There are also about 10,000 collection records and 8,000 institution records. We were told that ‘Culture Grid site and search is not a destination in itself.’ This slightly surprised me, as I did think that this was one of its purposes, albeit only one and maybe not the primary one.

I was impressed by the way Culture Grid is positioning itself as a means to facilitate the use of data by others. Culture Grid has APIs and we were told that a growing range of users do take advantage of this. They are also getting very involved in developer days as a means to encourage innovation. I think this is something archives should engage with, otherwise we will get left behind in the innovative exploration of how to make the most of our data.

Whilst I am very much in agreement with the aims of opening up data, I am not entirely convinced by the Culture Grid website. It does appear to prioritise digital materials – it works much better where there are images. The links back to resources often don’t work. I did a search for ‘victorian theatre’ and, first of all, the default search was ‘images only’, excluding ‘collections’ and non-image-based materials. Then, two of the first four links to resources I clicked on gave an internal server error. I found at least six links that didn’t work on the first two pages of results. Obviously this is not Culture Grid’s fault, but it is certainly a problem. I also wonder about how intuitive it is, with resource links going to so many different types of websites, and at so many different levels of granularity. Quite often you don’t go straight to the resource: one of the links I clicked on from an item went to the Coventry Council homepage, another went to the ‘how do I?’ page of the University of Hull. I asked about the broken links and didn’t feel that the reply was entirely convincing – I think it should be addressed more comprehensively.

I think that if the Hub were to contribute descriptions to Culture Grid, one of my main concerns would be around updating descriptions. I’m also not sure about the need to create additional metadata. I can’t quite get the reasoning behind the Culture Grid metadata, and the way that the link on the title goes to the ‘resource’ (the website of the contributor), but the ‘view details’ link goes to the Culture Grid metadata, which generally provides a cut-down version of the description.

The afternoon was dedicated to Spectrum, something I know only a little about other than that it is widely used as a framework by museums in their collections care. Spectrum is, we were told, used in about 7,000 institutions across Europe. Nick Poole, the CEO of the Collections Trust, emphasised that Spectrum should be a collaborative venture, so everyone needs to engage in it.  Yet maybe it has become so embedded that people don’t think about it enough.  The new Spectrum 4 is seen as providing an opportunity to re-engage the community.

There was an interesting take on Spectrum by the first speaker as a means to actually put people off starting museums…but he was making the important point that a standard can show people what is involved – and that it is a non-trivial task to look after museum collections. I got the impression that Spectrum has been a way to get curators on board with the idea of standards and pulling together to work more professionally and consistently.

Alex Dawson spoke about the latest edition of Spectrum in her capacity as one of the co-editors. Spectrum is a consensus about collections management procedures, about consistency, accountability and a common vocabulary. It is not supposed to be prescriptive; it is the ‘what’ more than the ‘how’. It has 21 procedures describing collections management activities, of which 8 are considered primary. We were told that the link to accreditation was very important in the history of Spectrum, and other milestones have included the introduction of rights management procedures, establishing a clear link between procedures and policy, and greater recognition of the importance of the knowledge held within museums (through Spectrum Knowledge).

There has been an acknowledgement that Spectrum started to become more cumbersome and that information could get buried within this very large entity; it was also starting to get out of date in certain areas. I can see how Spectrum 4.0 is an improvement on this because it contains clear flow diagrams that bring out the processes much more obviously and show related procedures. It also separates out the procedural and information requirements. The advisory content has been stripped out (and put into online Spectrum Advice) in order to concentrate on procedural steps through flow diagrams.

The consultation on Spectrum 4 was opened up via a wiki: http://standards.collectionslink.org.uk/index.php/Collections_Link_Standards_wiki

The main day of the conference included some really great talks. Bill Thompson from the BBC was one highlight. He talked about ‘A Killer App for Culture’, starting with musings on the meaning of ‘culture’. He talked about digital minds in this generation, which may change the answers that we come up with and may change the meaning of words. Shifting word sense can present us with challenges when we are in the business of data and information. He made the point convincingly that the world is NOT digital, as we often state; it is reassuringly still organic. But digital DATA is everywhere. It is an age in which we experience a digital culture, and maybe the ways that we do this are actually having an effect on the way that we think. Bill cited the book ‘Proust and the Squid’ by Maryanne Wolf, which I would also thoroughly recommend. Wolf looks at the way that learning to read impacts on the ways that we think.

Matthew Cock from the British Museum and Andrew Caspari from the BBC presented on A History of the World in 100 Objects. We were told how this initiative gradually increased in scale to become something enjoyed by millions of people across the world. It was a very collaborative venture between the BBC and the British Museum. There were over 2.5 million visits to the site, often around 40,000 in a week when the programme was not on air. It was interesting to hear that the mobile presence was seen as secondary at the time, but probably should have been prioritised more. ‘Permanent availability, portable and for free’ was absolutely key, said Andrew Caspari.

It was an initiative that really brought museums together – maybe not surprising with such a high-profile initiative. The project was about sharing and a different kind of partnership defined by mutual benefit, and most importantly, it was about closing the gap between public engagement and collection research. It obviously really touched people’s imaginations and they felt a sense of being part of something. It does seem like a very successful combination of good fun, entertainment and learning. However, we were told that there were issues. Maybe the digital capacity of museums was overestimated, and longer lead-in times were required than the BBC provided. Also, the upload to the site needed to be simpler.

Cock and Caspari referred to the way the idea spread, with things like ‘A history of the world in 100 sheds’. Should you be worried that this might trivialise the process, or should you be pleased that it caught on, stirred imaginations, controversy and debate?

David Fleming of National Museums Liverpool followed with an equally absorbing talk about museums and human rights. He said museums should be more aware that they are constructs of the society they are in. They should mirror society. They should give up on the idea of being neutral and engage in issues.  He is involved in the International Slavery Museum in Liverpool, and this is a campaigning museum. Should others follow suit? It makes museums an active part of society – both historical and contemporary. Fleming felt that a visit to the museum should stir people and make them want to get involved.

He gave a number of examples of museums where human rights are at the heart of the matter, including:

District Six in South Africa: http://www.districtsix.co.za – very much a campaigning museum that does not talk about collections so much as stories and lives, using emotion to engage people.

The Tuol Sleng Museum of Genocide Victims in Cambodia, a building that was once Pol Pot’s secret prison. The photographs on this site are hugely affecting and harrowing: seemingly ordinary portrait shots of prisoners, but with an extraordinary power to them.

The Lithuanian Museum of Genocide Victims. This is a museum where visitors can get a very realistic experience of what it was like to live under the Soviet regime. Apparently this experience, using actors as Soviet guards, has led to some visitors passing out, but the older generation are passionate to ensure that their children understand what it was like at this time.

We moved on to a panel session on Hacking in Arts & Culture, which was of particular interest to me. Linda Ellis from Black Country Museums gave a very positive assessment of how the experience of a hack day had been for them. She referred to the value of nurturing new relationships with developers, and took us through some of the ideas that were created. You can read a bit more about this and about putting on a hack day on Dan Slee’s blog: https://danslee.wordpress.com/tag/black-country-museums/

What we need now is a Culture Hack day that focuses on archival data – this may be more challenging because the focus is text, not images, but it could give us some great new perspectives on our data. According to Rachel Coldicutt, a digital consultant, we need beanbags, beer, pizza, good spirit and maybe a few prizes to hand out… doesn’t seem too hard. Oh, and some developers of course :-)

Some final thoughts around a project at the New Art Gallery Walsall: Neil Lebeter told us that the idea was to make the voice of the artist key – in this case, Bob and Roberta Smith. The project centred around the Jacob Epstein archive and found ways to bring the archive alive through art – you can see some interesting video clips about this process on YouTube: http://www.youtube.com/user/newartgallerywalsall.

Open Culture was billed as a conference meeting the needs of museums, libraries and archives, but I do think it was essentially a museums conference with a nod to archives and maybe a slight nod to libraries. This is not to criticise the conference, which was very well presented, and there really were some great speakers, but maybe it points to the challenges of bringing together the three domains? In the end, they are different domains with different needs and interests as well as areas of mutual interest. Clearly there is overlap, and there absolutely should be collaboration, but maybe there should also be an acknowledgement that we are different communities, with some differing requirements and perspectives.

HubbuB

Diary of the Archives Hub, June 2011

Design Council Archive: Festival of Britain poster

This is the first of our monthly diary entries, where we share news, ideas and thoughts about the Archives Hub and the wider world. This diary is aimed primarily at archives that contribute to the Hub, or are thinking about contributing, but we hope that it provides useful information for others about the sorts of developments going on at the Hub and how we are working to promote archives to researchers.

Hub Contributors’ Forum

At the Hub we are always looking to maintain an active and constructive relationship with our contributors. Our Contributors’ Forum provides one way to do this. It is informal, friendly, and just meets once or twice a year to give us a chance to talk directly to archivists. We think that archivists also value the opportunity to meet other contributors and think about issues around data discovery.

We have a Contributors’ Forum on 7th July at the University of Manchester and if any contributors out there would like to come we’d love to see you. It is a chance to think about where the Hub is going and to have input into what you think we should be doing, where our priorities should lie and how to make the service effective for users. Just in case you all jump in at once, we do have a limit on numbers….but please do get in touch if you are interested.

The session will be from 10.30 to 1.00 at the University of Manchester, with lunch provided. Some members of the Hub Steering Committee will be there too, so it is a chance for all to mix and mingle, get to know each other, and talk to Steering Committee members directly.

Please email Lisa if you would like to attend: lisa.jeskins@manchester.ac.uk.

Contributor Audio Tutorials

Our audio tutorial is aimed at contributors who need some help with creating descriptions for the Hub. It takes you through the use of our EAD Editor, step by step. It is also useful in a general sense for creating archival descriptions, as it follows the principles of ISAD(G). The tutorial can be found at http://archiveshub.ac.uk/tutorials/. It is a simple audio tutorial, split into convenient short modules, covering everything from basic collection-level descriptions through to multi-level descriptions and indexing. Any feedback is greatly appreciated – if you want any changes or more units added, just let us know.

Archives Hub Feature: 100 Objects

We are very pleased with our monthly features, founded by Paddy and now ably run by Lisa. They are a chance to show the wealth of archive collections and provide all contributors with the opportunity to showcase their holdings. They do quite well in Google searches as well!

Our monthly feature for June comes from Bradford Special Collections, one of our stalwart contributors, highlighting their current online exhibition: 100 Objects.  Some lovely images, including my favourite, ‘Is this man an anarchist?’ (No!! he’s just trying to look after his family): http://archiveshub.ac.uk/features/100objects/Nationalunionofrailwaymenposter.html

Relevance Ranking

Relevance ranking is a tricky beast, as our developer, John, will attest. How do you rank the results of a search in a way that users see as meaningful? Especially with archive descriptions, which range from a short description of a 100-box archive to a 10-page description of a 2-box archive!

John has recently worked on the algorithm used for relevance ranking so that results now look more as most users would expect. For example, if you searched for ‘Sir John Franklin’ before, the Sir John Franklin archive would not come up near the top of the results; it now appears first rather than way down the list. Result.
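For anyone curious what ‘reworking the ranking’ might look like under the hood, here is a deliberately toy sketch in Python. It is not John’s actual algorithm – just an illustration of the general principle that an exact phrase match in a title should outweigh scattered keyword matches in the body of a description.

```python
# Illustrative only: a toy scoring function showing the general principle of
# boosting records whose titles contain the search phrase. This is NOT the
# actual Hub ranking algorithm, just a sketch of the idea.

def score(title: str, text: str, query: str) -> float:
    """Score a record for a query, favouring title matches over body matches."""
    q = query.lower()
    title_l, text_l = title.lower(), text.lower()
    s = 0.0
    if q in title_l:                                            # exact phrase in the title: strong boost
        s += 10.0
    s += sum(term in title_l for term in q.split()) * 2.0       # individual query terms in the title
    s += sum(text_l.count(term) for term in q.split()) * 0.1    # term frequency in the rest of the text
    return s

records = [
    ("Sir John Franklin Papers", "Correspondence and journals of the Arctic explorer."),
    ("Admiralty records", "Includes material relating to Sir John Franklin and his expeditions."),
]
ranked = sorted(records, key=lambda r: score(r[0], r[1], "sir john franklin"), reverse=True)
print([title for title, _ in ranked])  # the title match now comes out on top
```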

Images

Since last year we have provided the ability to add images to Hub descriptions. The images have to be stored elsewhere, but we will embed them into descriptions at any level (e.g. you can have an image to represent a whole collection, or an image at each item level description).

We’ve recently got some great images from the Design Council Archive: http://archiveshub.ac.uk/data/gb1837des-dca – take a look at the Festival of Britain entries, which have ‘digital objects’ linked at item level, enabling researchers to get a great idea of what this splendid archive holds.

Any contributors wishing to add images, or simple links to digital content, can easily do so using the EAD Editor: http://archiveshub.ac.uk/images/. You can also add links to documents and audio files. Let us know if you would like more information on this.

Linking to descriptions

Linking to Hub descriptions from elsewhere has become simpler, thanks to our use of ‘cool URIs’. See http://archiveshub.ac.uk/linkingtodescriptions/. You simply need to use the basic URI for the Hub, with the /data/ directory, e.g. http://archiveshub.ac.uk/data/gb029ms207.
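As an aside, fetching a description programmatically is just a plain HTTP GET on the /data/ URI. A minimal sketch, assuming the Python requests library and using the example identifier above:

```python
# A minimal sketch (using the Python requests library) of fetching a Hub
# description via its cool URI. 'gb029ms207' is the example identifier from
# the paragraph above; nothing else here is Hub-specific.
import requests

hub_data_uri = "http://archiveshub.ac.uk/data/" + "gb029ms207"
response = requests.get(hub_data_uri)      # a plain HTTP GET is all that is needed
print(response.status_code, hub_data_uri)  # 200 means the description resolved
```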

Out and About

It would take up too much space to tell you about all of our wanderings, but recently Jane spent a very productive week in Prague at the European Libraries Automation Group (ELAG), a very friendly bunch of people, a good mix of librarians and developers, and a very useful conference centering on Linked Data.

Bethan is at the CILIP new professionals information day today, busy twittering about networking and sharing knowledge.

Lisa is organising our contributors’ workshops for this year (it feels like our summer season of workshops) and has already run one in Manchester. More will follow in Glasgow, London and Cardiff. The Cardiff event will be our first workshop in Wales, so please take advantage of this opportunity if you are in Wales or south west England. More information at http://archiveshub.ac.uk/contributortraining/

Joy is very busy with the exciting initiative, UKDiscovery. This is about promoting an open data agenda for archives, museums and libraries – something that we know you are all interested in. Take a look at the new website: http://discovery.ac.uk/.

With best wishes,
The Hub Team

Whose Data Is It?: a Linked Data perspective

A comment on the blog post announcing the release of the Hub Linked Data maybe sums up what many archivists will think: “the main thing that struck me is that the data is very much for someone else (like a developer) rather than for an archivist. It is both ‘our data’ and not our data at the same time.”

Interfaces to the data

Archives Hub search interface

In many ways, Linked Data provides the same advantages as other machine-based ways into the data. It gives you the ability to access data in a more unfiltered way. If you think about a standard Web interface search, what it does is to provide controlled ways into the data, and we present the data in a certain way. A user comes to a site, sees a keyword search box and enters a term, such as ‘antarctic exploration’. They have certain expectations of what they will get – some kind of list of results that are relevant to Antarctica and famous explorers and expeditions – and yet they may not think much about the process. Will all records that have any/either/both of these terms be returned, for example? Will the results be comprehensive? Might there be more effective ways to search for what they want?

As those who run systems, we have to decide what a search is going to give the user. Will we look for these terms as adjacent terms and single terms? Will we return results from any field? How will we rank the results? We recently revised the relevance ranking on the Hub because although it was ‘pragmatically’ correct, it did not reflect what users expect to see. If a user enters ‘sir john franklin’ (with or without quotation marks) they would expect the Sir John Franklin Papers to come up first. This was not happening with the previous relevance ranking. The point here is that we (the service providers) decide – we have control over what the search returns and how it is displayed, and we do our best to provide something that will work for users.

Similarly, we decide how to display the results. We provide collection descriptions as the basis, maybe with lower-level entries, but the user cannot display the information in different ways. The collection remains the indivisible unit.

With a Web interface we are providing (we hope) a user-friendly way to search for descriptions of archives – one that does not require prior knowledge. We know that users like a straightforward keyword search, as well as options for more advanced searching. We hide all of the mechanics of running the search and don’t really inform the user exactly what their search is doing in any kind of technical sense. When a user searches for a subject in the advanced subject search, they will expect to get all descriptions relating to that subject, but that is not necessarily what they will get. The reason is that the subject search looks for terms within the subject field. The creator of the description must put the subject in as an index term. In addition, the creator of the description may have entered a different term for the subject – say ‘drugs’ instead of ‘medicines’. The Archives Hub has a ‘subject finder’ that returns results for similar terms, so it would find both of these entries. However, maybe the case of the subject finder makes a good point about searching: it provides a really useful way to find results but it is quite hard to convey what it does quickly and obviously. It has never been widely used, even though evidence shows that users often want to search by subject, and by entering the subject as a keyword, they are more likely to get less relevant results.

These are all examples of how we, as service providers, look to find ways to make the data searchable in ways that we think users want and try to convey the search options effectively. But it does give a sense that they are coming into our world, searching ‘our data’, because we control how they can search and what they see.

Linked Data is a different way of formatting data that is based upon a model of the entities in the data and relationships between them. To read more about the basics of Linked Data take a look at some of the earlier posts on the Locah blog (http://blogs.ukoln.ac.uk/locah/2010/08/).
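To make ‘entities and relationships’ a little more concrete, here is a small sketch using the Python rdflib library. Everything in it – the URIs, the ‘ex’ vocabulary, the property names – is invented purely for illustration; it is not the Hub’s actual model, which is described over on the Locah blog.

```python
# A small sketch of 'entities and relationships' in RDF using the Python
# rdflib library. Every URI and the 'ex:' vocabulary below are invented for
# illustration; this is not the Hub's actual model (see the Locah blog for that).
from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/vocab/")  # placeholder vocabulary

collection = URIRef("http://example.org/archivalresource/shackleton-collection")
person     = URIRef("http://example.org/person/ernest-shackleton")
repository = URIRef("http://example.org/repository/scott-polar-research-institute")

g = Graph()
# Each thing of interest is an entity with its own URI...
g.add((collection, RDF.type, EX.ArchivalResource))
g.add((person, RDF.type, EX.Person))
g.add((repository, RDF.type, EX.Repository))
# ...and further statements express the relationships between those entities.
g.add((collection, EX.origination, person))   # the person created the collection
g.add((collection, EX.heldBy, repository))    # the repository holds the collection

print(g.serialize(format="turtle"))
```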

Providing machine interfaces gives a number of benefits. However, I want to refer to two types of ‘user’ here: the ‘intermediate user’ and the ‘end user’. The intermediate user is the one that gets the data and creates the new ways of searching and accessing it. Typically, this may be a developer working with the archivist, but as tools are developed to facilitate this kind of work, it should become easier to work with the data in this way. The end user is the person who actually wants to use the data.

1) Data is made available to be selected and used in different ways

We want to provide the ability for the data to be queried in different ways and for users to get results that are not necessarily based upon the collection description. For example, the intermediate user could select only data that relates to a particular theme, because they are representing end users who are interested in combining that data with other sources on the same theme. The combined data can be displayed to end users in ways that work for a particular community or particular scenario.

The display within a service like the Hub is for the most part unchanging, providing consistency, and it generally does the job. We, of course, make changes and enhancements to improve the service based on user needs from time to time, but we’re still essentially catering for one generic user as best we can. However, we want to provide the potential for users to display data in their own way for their own purposes. Linked Data encourages this. There are other ways to make this possible, of course, and we have an SRU interface that is being used by the Genesis portal for Women’s Studies. The important point is that we provide the potential for these kinds of innovations.

2) External links begin the process of interconnecting data

Machine interfaces provide flexible ways into the data, but I think that one of the main selling points of Linked Data is, well, linking data. To do this with the Hub data, we have put some links in to external datasets. I will be blogging about the process of linking to VIAF names (the Virtual International Authority File), but suffice to say that if we can make the statement within our data that ‘Sir Ernest Shackleton’ on the Hub is the same as ‘Sir Ernest Shackleton’ on VIAF, then we can benefit from anything that VIAF links to – DBpedia, for example (Wikipedia output as Linked Data). A user (or intermediate user) can potentially bring together information on Sir Ernest Shackleton from a wide range of sources. This provides a means to make data interconnected and bring people through to archives via a myriad of starting points.
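As a rough illustration of what such a link looks like in the data, a single owl:sameAs triple is all that is needed (both identifiers below are placeholders rather than real ones):

```python
# A rough illustration of the kind of owl:sameAs link described above, using
# rdflib. Both URIs are placeholders: the Hub person URI is invented and the
# VIAF identifier is a made-up example of the viaf.org pattern.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
hub_person  = URIRef("http://example.org/archiveshub/person/ernest-shackleton")  # placeholder
viaf_person = URIRef("http://viaf.org/viaf/12345678")                            # hypothetical VIAF id

# A single triple asserts that the two URIs identify the same person, which
# lets consumers follow the link out to VIAF and on to DBpedia and beyond.
g.add((hub_person, OWL.sameAs, viaf_person))
print(g.serialize(format="turtle"))
```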

3) Shared vocabularies provide common semantics

If we identify the title of a collection by using Dublin Core, then it shows that we mean the same thing by ‘title’ as others who use the Dublin Core title element. If we identify ‘English’ by using a commonly recognised URI (identifier) for English, from a common vocabulary (lexvo), then it shows that we mean the same thing as all the other datasets that use this vocabulary. The use of common vocabularies provides impetus towards more interoperability – again, connecting data more effectively. This brings the data out of the archival domain (where we share standards and terminology amongst our own community) and into a more global space.  It provides the potential for intermediate users to understand more about what our data is saying in order to provide services for end users. For example, they can create a cross-search of other data that includes titles, dates, extent, creator, etc. and have reasonable confidence that the cross-search will work because they are identifying the same type of content.

For the Hub there are certain entities where we have had to create our own vocabulary, because those in existence do not define what we need, but then there is the potential for other datasets to use the same terms that we use.
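By way of a small sketch, this is what re-using shared vocabularies looks like in RDF terms – Dublin Core for the title and the lexvo URI for English – with a placeholder collection URI standing in for a real Hub identifier:

```python
# A sketch of shared vocabularies in practice: the title uses Dublin Core
# terms and the language points at the lexvo.org URI for English. The
# collection URI is a placeholder, not a real Hub identifier.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS

g = Graph()
collection = URIRef("http://example.org/archivalresource/example-collection")  # placeholder

g.add((collection, DCTERMS.title, Literal("Papers of an example collection")))
g.add((collection, DCTERMS.language, URIRef("http://lexvo.org/id/iso639-3/eng")))

print(g.serialize(format="turtle"))
```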

4) URIs are provided for all entities

For Linked Data one of the key rules is that entities are identified with HTTP URIs. This means that names, places, subjects, repositories, etc. within the Hub data are now brought to the fore through having their own identifier – all the individuals, for example, within the index terms, have their own URI. This allows the potential to link from the person identified on the Hub to the same person identified in other datasets.

Who is the user?

So far so good. But I think that whilst in theory Linked Data does bring significant benefits, maybe there is a need to explain the limitations of where we currently are.

Hub SPARQL endpoint

Our Linked Data cannot currently be accessed via a human-friendly Web-based search interface; it can, however, be accessed via a SPARQL endpoint. SPARQL is the language for querying RDF, the format used for Linked Data. It shares many similarities with SQL, a language typically used for querying the conventional relational databases that are the basis of many online services. (Our SPARQL endpoint is at http://data.archiveshub.ac.uk/sparql.) What this means is that if you can write SPARQL queries then you’re up and running. Most end users can’t, so they will not be able to pull out the data in this way. And even once you’ve got the data, then what? Most people wouldn’t know what to do with RDF output. In the main, therefore, fully utilising the data requires technical ability – it requires intermediate users to work with the data and create tools and services for end users.
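For those wondering what ‘writing a SPARQL query’ involves, this is roughly what it looks like from a script. The query below is deliberately generic – it just pulls back ten triples – because the point here is the mechanics rather than the Hub’s particular vocabulary:

```python
# A sketch of querying a SPARQL endpoint over HTTP, using the Python requests
# library. The endpoint URL is the one given above; the query itself is
# deliberately generic, so it makes no assumptions about the Hub's vocabulary.
import requests

ENDPOINT = "http://data.archiveshub.ac.uk/sparql"
query = """
SELECT ?subject ?predicate ?object
WHERE { ?subject ?predicate ?object }
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},  # standard SPARQL results format
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["subject"]["value"], binding["predicate"]["value"], binding["object"]["value"])
```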

For the Hub we have provided Linked Data views, but it is important not to misunderstand the role of these views – they are not any kind of definitive presentation; they are simply a means to show what the data consists of, and the user can then access that data as RDF/XML, JSON or Turtle (i.e. in a number of formats). It’s a human-friendly view of the Linked Data if you access a Hub entity web address via a web browser. If, however, you are a machine wanting machine-readable RDF and you visit the very same URI, you get the RDF view straight off. This is not to say that it wouldn’t be possible to provide all sorts of search interfaces onto the data – but this is not really the point of it for us at the moment. The point is to allow other people to have the potential to do what they want to do.
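The ‘browser gets a human view, machine gets RDF’ behaviour is just HTTP content negotiation. A minimal sketch, assuming the Python requests library (the entity URI here is a made-up example of the general form, not a checked identifier):

```python
# A minimal sketch of content negotiation with requests. The entity URI below
# is a hypothetical example of the general form, not a checked, live identifier.
import requests

uri = "http://data.archiveshub.ac.uk/id/archivalresource/example"  # hypothetical entity URI

# Asking for Turtle explicitly; a browser sending Accept: text/html would get
# the human-friendly Linked Data view of the same entity instead.
response = requests.get(uri, headers={"Accept": "text/turtle"})
print(response.status_code, response.headers.get("Content-Type"))
print(response.text[:500])
```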

The realisation of the user benefit has always been the biggest question mark for me over Linked Data – not so much the potential benefits, as the way people perceive the benefits and the confidence that they can be realised. We cannot all go off and create cool visualisations (e.g. http://www.simile-widgets.org/timeline/). However, it is important to put this into perspective. The Hub data at Mimas sits in directories as EAD XML. Most users wouldn’t find that very useful. We provide an interface that enables users with no technical knowledge to access the data, but we control this and it only provides access to our dataset and to a collection-based view. In order to step beyond this and allow users to access the data in different ways, we necessarily need to output it in a way that provides this potential, but there is likely to be a lag before tools and services come along that take advantage of this. In other words, what we are essentially doing is unlocking more potential, but we are not necessarily working with that potential ourselves – we are simply putting it out there for others.

Having said that, I do think that it is really important for us to now look to demonstrate the benefits of Linked Data for our service more clearly by providing some ways into the Linked Data that take advantage of the flexible nature of the data and the external links – something that ‘ordinary’ users can benefit from. We are looking to work on some visualisations that do demonstrate some of the potential. There does seem to be an increasing consensus within cultural heritage that primary resources are too severed from the process of research – we have a universe of unrelated bits that hint at what is possible but do not allow it to be realised. Linked Data is attempting to resolve this, so it’s worth putting some time and effort into exploring what it can do.

We want our data to be available so that anyone can use it as they want. It may be true that the best thing done with the data will be thought of by someone else. (see Paul Walk’s blog post for a view on this).

However, this is problematic when trying to measure impact, and if we want to understand the benefits of Linked Data we could do with a way to measure them. Certainly, we can continue to work to realise benefits by actively working with the Linked Data community and encouraging a more constructive and effective relationship between developers and managers. It seems to me that things like Linked Data require us to encourage developers to innovate and experiment with the data, enabling users to realise its benefits by taking full advantage of the global interconnectivity that is the vision of the Linked Data Web. This is the aim of UKOLN’s DevCSI project – something I think we should be encouraging within our domain.

So, coming back to the starting point of this blog post: the data maybe starts off as ‘our data’, but really we do indeed want it to be everyone’s data – a pick ’n’ mix environment to suit every information need.

Flickr: davidlocke's photostream

The Standard Bearers

We generally like standards. Archivists, like many others within the information professions, see standards as a good thing. But if that is the case, and we follow descriptive standards, why aren’t our collection descriptions more interoperable? Why can’t users move seamlessly from one system to another and find them consistent?

I’ve been looking at a White Paper by Nick Poole of the Collections Trust: Where Next for Museum Standards? In this, he makes a good point about the reasons for using standards:

“Standards exist to condense and share the professional experience of our predecessors, to enable us to continue to build on their legacy of improvement.”

I think this point is sometimes overlooked – standards reflect the development of our understanding and expertise over time. As a novice jazz musician, I think this has a parallel with jazz theory – the point of theory is partly that it condenses what has been learnt about harmony, rhythm and melody over the past 100 years of jazz. The theory is only the means to the end, but without it acting effectively as a short cut, you would have to work your way through decades of musical development to get a good understanding of the genre.

Descriptive standards should be the means to the end – they should result in better metadata. Before the development of ISAD(G) for archives, we did not have an internationally recognised standard to help us describe archives in a largely consistent way (although ISAD(G) is not really a content standard). EAD has proved a vital addition to our range of standards, helping us to share descriptions far more effectively than we could do before.

But archives are diverse, and maybe we have to accept that standards are not going to mould our descriptions so that they all come off the conveyor belt of cataloguing looking the same. It may seem like something that would be of benefit to our users – descriptions that look pretty much identical apart from the actual content. But would it really suffice to reflect the reality of what archives are? Would it really suffice to reflect the reality of the huge range of users that there are?

Going back to Nick Poole’s paper, he says:

“The purpose of standards is not to homogenise, but to ensure that diversity is built on a solid foundation of shared knowledge and understanding and a collective commitment to quality and sustainability.”

I think this is absolutely right. However, I do sometimes wonder how solid this foundation is for archives, and how much our standards facilitate collaborative understanding. Standards need to be clearly presented and properly understood by those who are implementing them. From the perspective of the Hub, where we get contributions of data from 200 different institutions, standards are not always well understood. I’m not sure that people always think carefully about why they are using standards – this is just as important as applying the standards. It is only by understanding the purpose that you come to a good sense of how to apply a standard properly. For example, we get some index terms that ostensibly use the NCA Rules (National Council on Archives Rules for Personal, Family and Place Names), but the entries are not always in line with the rules. We also get subject entries that do not conform to any thesauri, or that conform to an in-house thesaurus, but for an aggregated service this does not really help with one of the main aims of subject indexing – to pull descriptions together by subject.

Just as for museums, standards, as Nick Poole says, must be “communicated through publications, websites, events, seminars and training. They must be supported, through infrastructure and investment, and they must be enforced through custom, practice or even assessment and sanction.”

For the Hub, we have made one important change that has made descriptions much more standards-compliant: we have invested in an ‘EAD Editor’, a template-based tool for the creation and editing of EAD archival descriptions. This sophisticated tool helps to ensure valid and standards-based descriptions. This idea of supporting standards through this kind of approach seems to me to be vital. It is hard for many archivists to invest the time that it takes to really become expert in applying standards. For the Hub we are only dealing with descriptive standards, but archivists have many other competing standards to deal with, such as environmental and conservation standards.

Software should have standards compliance built in, but it should also be designed to meet the needs of the archivists and the users. This balance between standards and flexibility is tricky. But standards are not going to be effective if they don’t actually meet real-life needs. I do sometimes think that standards suffer from being developed somewhat in isolation from practical reality – this can be a result of the funding environment, where people are paid to work on standards, and they don’t tend to be the people who implement them. Standards may also suffer from the perennial problem of a shifting landscape – standards that were clearly relevant when they were created may be rather less so ten years on, but revising standards is a time-consuming process. The archives community has the NCA Rules, which have served their purpose very well, but they really need revising now, to bring them in line with the online, global environment.

In the UK Archives Discovery network (UKAD) we are working to help archivists understand and use standards effectively. We are going to provide an indexing tutorial and we are discussing ways to provide more guidance on cataloguing generally. The survey that we carried out in 2009 showed that archivists do want more guidance here. Whilst maybe there are some who are not willing to embrace standards, the vast majority can see the sense in interoperability, and just need a low-barrier way to improve their understanding of the standards that we have and how best to use them. But in the end, I can’t see that we will ever have homogeneous descriptions, so we need to harness technology in order to help us work more effectively with the diverse range of descriptions out there that reflect the huge diversity of archives and users.

Images: Flickr goosmurf’s photostream (dough cutter); robartesm’s photostream (standard bearer)

The long tail of archives

For many of us, the importance of measuring use and impact is coming more to the fore. Funders are often keen for indications of the ‘value’ of archives and typically look for charts and graphs that can provide some kind of summary of users’ interaction with archives. For the Hub, in the most direct sense this is about use of the descriptions of archives, although, of course, we are just as interested in whether researchers go on to consult archives directly.

The pattern of use of archives and the implications of this are complex. The ‘long tail’ has become a phrase that is bandied around quite a bit, and to my mind it is one of those concepts that is quite useful. It was popularised by Chris Anderson, more in relation to the commercial world, where it relates to selling a smaller number of items in large quantities and a large number of items in relatively small quantities; you can read more about it in Wikipedia: Long Tail.

If we think about books, we might assume that a smaller number of popular titles are widely used and use gradually declines until you reach a long tail of low use.  We might think that the pattern, very broadly speaking, is a bit like this:

I attended a talk at the UKSG Conference recently, where Terry Bucknell from the University of Liverpool was talking about the purchase of e-books for the University. He had some very whizzy and really quite absorbing statistics that analysed the use of packages of e-books. It seems that it is hard to predict use, and that whilst a new package of e-books is the most widely used for that particular year, the older packages are still significantly used; indeed, some books that are barely used one year may get significant use in subsequent years. The patterns of use suggested that patron-driven acquisition, or selection of titles after one year of use, was not as good value as e-book packages, although you cannot accurately measure the return on investment after only one year.

Archives are kind of like this, only a whole lot trickier to deal with.

For archives, my feeling is that the graph is more like this:

No prizes for guessing which are the vastly more used collections*. We have highly used collections for popular research activities, archives of high-profile people and archives around significant events, and it is often these that are digitised in order to protect the originals.  But it is true to say that a large proportion of archives are in the ‘long tail’ of use.

I think this can be a problem for us. Use statistics can dominate perceptions of value and influence funding, often very profoundly. Yet I think that this is completely the wrong way to look at it. Direct use does not correlate to value, not within archives.

I think there are a number of factors at work here:

  • The use of archives is intimately bound up with how they are catalogued. If you have a collection of letters, and just describe it thus, maybe with the main author (or archival ‘creator’) and covering dates, then researchers will not know that there are letters by a number of very interesting people, about a whole range of subjects of great interest for all sorts of topics. Often, archivists don’t have the time to create rich metadata (I remember the frustrations of this lack of time). Having worked in the British Architectural Library, I remember that we had great stuff for social history, the history of empire (in particular the Raj in India), urban planning, the environment, even the history of kitchen design or local food and diet habits. We also had a wonderful collection of photographs, and I recall the Photographs Curator showing me some really early and beautiful photographs of Central Park in New York. It’s these kinds of surprises that are the stuff of archives, but we don’t often have time to bring them out in the cataloguing process.
  • The use of a particular archive collection may be low, and yet the value gained from the insights may be very substantial. Knowledge gained as a result of research in the archives may feed into one author’s book or article, and from there it may disseminate widely. So, one use of one archive may have high value over time. If you fed this kind of benefit in as indirect use, the pattern would look very different.
  • The ‘value’ of archives may change over time. Going back to my experience at the British Architectural Library, I remember being told how the drawings of Sir Edwin Lutyens were not considered particularly valuable back in the 1950s – he wasn’t very fashionable after his death. Yet now he is recognised as a truly great architect, and his archives and drawings are highly prized.
  • The use of archives may change over time. Just because an archive has not been used for some time – maybe only a couple of researchers have accessed it in a number of years – it doesn’t mean that it won’t become much more heavily used. I think that research, just like many things, is subject to fashions to some extent, and how we choose to look back at our past changes over time. This is one of the challenges for archivists in terms of acquisitions. What is required is a long-term perspective but organisations all too often operate within short-term perspectives.
  • Some archives may never be highly used, maybe due to various difficulties interpreting them. I suppose Latin manuscripts come to mind, but also other manuscripts that are very hard to read and those pesky letters that are cross-written. Also, some things are specialised and require professional or some kind of expert knowledge in order to understand them. This does not make them less valuable. It’s easy to think of examples of great and vital works of our history that are not easy for most people to read or interpret, but that are hugely important.
  • Some archives are very fragile, and therefore use has to be limited. Digitising may be one option, but this is costly, and there are a lot of fragile archives out there.

I’m sure I could think of some more – any thoughts on this are very welcome!

So, I think that it’s important for archivists to demonstrate that whilst there may be a long tail to archives, the value of many of those archives that are not highly used can be very substantial. I realise that this is not an easy task, but we do have one invention in our favour: the Web. Not to mention the standards that we have built up over time to help us to describe our content. The long tail graph does demonstrate that the ‘long tail of use’ can add up to just as much as, or more than, the ‘high column of use’. The Web is vital in making this into a reality, because researchers all over the world can discover archives that were previously extremely hard to surface. That does still leave the problem of not being able to catalogue in depth in order to help surface content… the experiments with crowd-sourcing and user-generated content may prove to be one answer. I’d like to see a study of this – have the experiments with asking researchers to help us catalogue our content proved successful if we take a broad overview? I’ve seen some feedback on individual projects, such as Old Weather:

“Old Weather (http://www.oldweather.org) is now more than 50% complete, with more than 400,000 pages transcribed and 80 ships’ logs finished. This is all thanks to the incredible effort that you have all put in. The science and history teams are constantly amazed at the work you’re all doing.” (a recent email sent out to the contributors, or ‘ship captains’).

If anyone has any thoughts or stories about demonstrating value, we’d love to hear your views.

* family history sources

Training and the Archives Hub

A couple of weeks ago I took part in a training session for postgraduate students from the English department at the University of Salford. This had been organised with Ian Johnston, University Archivist at Salford, and Professor Sharon Ruston from ESPaCH (the School of English, Sociology, Politics & Contemporary History).

Training Room

Sharon kicked off the session by explaining what archives mean to her career and how she had actually made her name and written a book on the strength of some new evidence that she uncovered about Shelley and his desire to be a doctor: Shelley and Vitality (Palgrave Macmillan, 2005), which explored the medical and scientific contexts which inform Shelley’s concept of vitality in his major poetry.

She went on to detail some of her new research on Humphry Davy (examining poetry & science) and explained that although it can often be a lot of effort to look for archives, it can pay dividends if you put the time and energy into searching.

Ian then took the floor and showed the students some of the hidden gems from the University’s archives. He also brought some items with him – a letter from Edith Sitwell, papers from the Duke of Bridgewater archive, etc. He also showed some photos of Salford University in the 1970s. We were all fairly amazed by the picture of the paternoster lift, which is a lift that doesn’t stop – you literally have to jump on as it’s going past. Talk about students living dangerously!

Ian explained why Salford University contributed to the Hub: the profile gained from being part of a national cross-searching service, leading to more researchers benefitting from the Salford University Archives Collections.

I then did a demonstration of some different websites where you can search for archives online and went on to show how the Archives Hub, Copac and Zetoc work and the different types of information that you can find in each.

Prior to the session, Ian and Sharon had asked the students for their research areas and I used these as my examples. I find if students cannot easily see how and why something is relevant to them, then they switch off. It’s important to tailor your examples to your audience, whatever level they are studying at.

We then got the students to have a go themselves while we walked around the room and gave more individual help. This worked really well, as each student got at least 5 or 10 minutes of one-to-one help on searching for their particular subject area.

We were all really pleased with how the session went. I could actually see the students sit up and take notice when Sharon was talking about making her name from finding new knowledge. It underlined how primary source material can lead to students bringing unique perspectives into their research. I feel that this was key to the success of the session. The students were able to see how important archives had been to someone whom they respected and knew was an expert in her field.

Ian showed them actual papers and letters from the archive and this allowed them to see concrete examples of what we were talking about, as opposed to thinking about archive materials in an abstract and ‘virtual’ way by just looking at online finding aids.

Sharon and Ian did a great job of explaining the benefits of using archives; I just told them how to find stuff… It was great to see how engaged the students were with what we were explaining to them. So much so that I’ve been asked back for a repeat performance. (With the academics!)

UKAD Forum

The National Archives (used under a CC licence from http://www.flickr.com/photos/that_james/2693236972/)

Weds 2nd March was the inaugural event of the UK Archives Discovery Network – better known as UKAD.  Held at the National Archives, the UKAD Forum was a chance for archive practitioners to get together, share ideas, and hear about interesting new projects.

The day was organised into three tracks: A, key themes for information discovery; B, standards and crowdsourcing; and C, demonstrating sites and systems. Plenary sessions came from John Sheridan of TNA, Richard Wallis of Talis, David Flanders of Jisc, and Teresa Doherty of the Women’s Library.

I would normally have been tweeting away, but unfortunately although I could connect to the wifi, I couldn’t get any further!  So here are my edited highlights of the day (also known as ‘tweets I wish I could have sent’).

John Sheridan kicked off the proceedings by talking about open data. The government’s Coalition Agreement contains a commitment to open data, which obviously affects The National Archives, as the repository for government data. They are using light-weight existing Linked Data vocabularies, and then specialising them for their needs. I was particularly interested to hear about the particular challenges posed by legislation.gov.uk, explained by John as ‘A changes B when C says so’: new legislation may alter existing legislation, and these changes might come into force at a time specified by a third piece of legislation…

Richard Wallis carried on the open data theme, by talking about Linked Data and Linked Open Data. His big prediction? That the impact of Linked Data will be greater than the impact of the World Wide Web it builds on. A potentially controversial statement, delivered with a very nice slide deck.

Off to the tracks, and I headed for track B to hear Victoria Peters from Strathclyde talk about ICA-AtoM. This is open-source, web-based archival description software, aimed at archivists and institutions with limited financial and technical resources. It looks rather nifty, and supports EAD and EAC import and export, as well as digital objects. If you want to try it out, you can download a demo from the ICA-AtoM website, or have a look at Strathclyde’s installation.

Bill Stockting from the BL gave us an update on EAD and EAC-CPF. I’m just starting to learn about EAC-CPF, so it was interesting to hear the plans for it. One of Bill’s main points was that they’re trying to move beyond purely archival concerns, and are hoping that EAC-CPF can be used in other domains, such as those using MARC. This is an interesting development, and I hope to hear more about it in the future! Bill also mentioned SNAC, the Social Networks and Archival Context project, which is looking at using EAC-CPF with a number of tools (including VIAF) to ‘“unlock” descriptions of people from finding aids and link them together in exciting new ways’.

David Flanders’ post-lunch plenary provided absolutely my favourite moment of the day: David said ‘Technology will fail if not supported by the users’… and then, with perfect timing, the projector turned off. One of David’s key points was that ‘you are not your users’. You can’t be both expert and user, and you will never know exactly what users want from your systems, or how they will use them, unless you actually ask them! Get users involved in your projects and bids, and you’re likely to be much more successful.

Alexandra Eveleigh spoke in track B about ‘crowds and communities: user participation in the archives’.  I especially liked her distinction between ‘crowds’ and ‘communities’ – crowds are likely to be larger, and quickly dip in and out, while communities are likely to be smaller overall, but dedicate more time and effort.  She also pointed out that getting users involved isn’t a new thing – there’s always been a place in archives for those pursuing ‘serious leisure’, and bringing their own specialist knowledge and experience.  A point Alexandra made that I found particularly interesting was that of being fair to your users – don’t ask them to participate and help you, if you’re not going to listen to their opinions!

I have to admit that I’d never really heard of Historypin before I saw them on the conference programme.  Don’t click on that link if you have anything you need to get done today!  Historypin takes old photographs, and ‘pins’ them to their exact geographic location using Google maps.  You can see them in streetview, overlaid on the modern background, and it is absolutely fascinating.  Photos can be contributed by anyone, and anyone can add stories or more information to photos on the site.  One of the developments on the way is the ability to ‘pin’ video and audio clips in the same way.

CEO Nick Stanhope was keen to point out that Historypin is a not-for-profit – they’re in partnership with Google, but not owned by them, and they don’t ask for any rights to any of the material posted on Historypin.  They’re keen to work with archives to add their photographic collections, and have a couple of things they hope to soon be able to offer archives in return (as well as increased exposure!):  they’ll be allowing any archive to have an instance of Historypin embedded on the archive’s site for free.  They’re also developing a smartphone app, and will be offering any archive their own branded version of the app – for free!  These developments sound really exciting, and I hope we hear more from them soon.

Teresa Doherty’s closing plenary was on the re-launch of the Genesis project. As Teresa said, ‘many of you will be sitting there thinking “this isn’t plenary material! what’s going on?”’, but Teresa definitely made it a plenary worth attending. Genesis is a project which allows users to cross-search women’s studies resources from museums, libraries and archives in the UK, and Teresa made the persuasive point that while the project itself might not be revolutionary, how they’ve done it is. Genesis has had no funding since 200 – everything they’ve done since then, including the relaunch, has been done with only the in-house resources they have available. They’ve used SRU to search the Archives Hub, and managed to put together a valuable service with minimal resources.

As a librarian and a new professional, I found Teresa’s insights into the history of archival cataloguing particularly fascinating.  I knew that ISAD(G) was released in 1996, but I hadn’t had any real understanding of what that meant: that before 1996, there were no standards or guidelines for archival cataloguing. Each institution would catalogue entirely in its own way – a revelation to me, and completely alien to my entirely standards-based professional background!  And I now have a new mantra, learned from one of Teresa’s old managers back in the early 90s:

‘We may not have a database now, but if we have structured data then one day we will have a database to put it in!’

I don’t think I’ve ever heard a better definition of the interoperability mindset.
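It is also a mantra that is easy to demonstrate: if catalogue entries are consistently structured, getting them into a database later is almost trivial. Here is a toy sketch in Python, with field names invented purely for the example.

```python
# A toy illustration of 'structured data now, database later': consistently structured
# records (field names invented for the example) loaded straight into SQLite.
import sqlite3

records = [
    {"reference": "GB 0001/1", "title": "Minute book", "date": "1896-1901"},
    {"reference": "GB 0001/2", "title": "Correspondence", "date": "1902-1910"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalogue (reference TEXT, title TEXT, date TEXT)")
conn.executemany("INSERT INTO catalogue VALUES (:reference, :title, :date)", records)

for row in conn.execute("SELECT reference, title FROM catalogue ORDER BY reference"):
    print(row)
```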

After the day officially ended, it was off to the pub for a swift pint and wind-down. An excellent, instructive, and fun day.

Slides from the day are available on SlideShare – tag ukad.

New Horizons

The Horizon Report is an excellent way to get a sense of emerging and developing technologies, and it is worth thinking about what they might mean for archives. In this post I concentrate on the key trends that are featured for the next 1-4 years.

Electronic Books

“[E]lectronic books are beginning to demonstrate capabilities that challenge the very definition of reading.”

Electronic books promise not just convenience, but also new ways of thinking about reading. They encourage interactive, social and collaborative approaches. Does this have any implications for archives? Most archives are paper-based and do not lend themselves so well to this kind of approach. We think of consulting archives as a lone pursuit, in a reading room under carefully controlled conditions. The report refers to “a dynamic journey that changes every time it is opened.” An appealing thought, and indeed we might feel that archives also offer this kind of journey. Increasingly we have digital and born-digital archives, but could these form part of a more collaborative and interactive way of learning? Issues of authenticity, integrity and intellectual property may militate against this.

Whilst we may find it hard to see how archives could become a part of this world – we are talking about archives, after all, and not published works – there may still be implications around the ways that people start to think about reading. Will students become hooked on rich and visual interfaces and collaborative opportunities that simply do not exist with archives?

Mobiles

“According to a recent report from mobile manufacturer Ericsson, studies show that by 2015, 80% of people accessing the Internet will be doing so from mobile devices.”

Mobiles are a major part of the portable society. Archive repositories can benefit from this, ensuring that people can always browse their holdings, wherever they are. We need to be involved in mobile innovation. As the report states: “Cultural heritage organizations and museums are also turning to mobiles to educate and connect with audiences.” We should surely see mobiles as an opportunity, not a problem for us, as we increasingly seek to broaden our user-base and connect with other domains. Take a look at the ‘100 most educational iPhone Apps‘. They include a search of US historical documents with highlighting and the ability to add notes.

Augmented Reality

We have tended to think of augmented reality as something suitable for marketing, social engagement and amusement. But it is starting to provide new opportunities for learning and changing expectations around access to information. This could provide opportunities for archives to engage with users in new ways, providing a more visual experience. Could it provide a means to help people understand what archives are all about? Stanford University in the US has created an island in Second Life. The unique content that the archives provide was seen as something that could draw visitors back and showcase the extensive resources available. Furthermore, they created a ‘virtual archives’, giving researchers an opportunity to explore the strong rooms, discover and use collections and collaborate in real time.

The main issue around using these kinds of tools is going to be the lack of skills and resources. But we may still have a conflict of opinions over whether virtual reality really has a place in ‘serious research’. Does it trivialize archives and research? Or does it provide one means to engage younger potential users of archives in a way that is dynamic and entertaining? I think that it is a very positive thing if used appropriately. The Horizon Report refers to several examples of its use in cultural heritage: the Getty Museum are providing ‘access’ to a 17th century collector’s cabinet of wonders; the Natural History Museum in London are using it in an interactive video about dinosaurs; the Museum of London are using it to allow people to view 3D historical images overlaid on contemporary buildings. Another example is the Powerhouse Museum in Sydney, using AR to show the environment around the Museum 100 years ago. In fact, AR does seem to lend itself particularly well to teaching people about the history around them.

Game-Based Learning

Another example of blending entertainment with learning, games are becoming increasingly popular in higher education, and the Serious Games movement is an indication of how far we have come from the notion that games are simply superficial entertainment. “[R]esearch shows that players readily connect with learning material when doing so will help them achieve personally meaningful goals.” For archives, which are often poorly understood by people, I think that gaming may be one possible means to explain what archives are, how to navigate through them and find what may be of interest, and how to use them. How about something a bit like this Smithsonian initiative, Ghosts of a Chance, but for archives?

These technologies offer new ways of learning, but they also suggest that our whole approach to learning is changing. As archivists, we need to think about how this might impact upon us and how we can use it to our advantage. Archives are all about society, identity and story. Surely, therefore, these technologies should give us opportunities to show just how much they are a part of our life experiences.

Voices for the Library

Voices for the Library is a place for anyone who loves and values libraries to share their experiences and stories about what libraries mean to them.  Also known as VftL, or simply ‘Voices’, the campaign was set up in September 2010 by a group of information professionals who were concerned about the negative and inaccurate coverage of libraries in the media.

The group felt that public libraries were being misrepresented in the media, for instance by their insistence on using footfall as the only measure of library use, ignoring all online services and interactions.  Voices started out as a way to combat this, to provide accurate information, and to share stories of what libraries mean to people.   Much of our content comes from library users, who want to share their stories about how libraries have affected their lives.

And, of course, there are stories from librarians as well.  Some are examples of the kind of work they do, to show the range and depth of what trained library staff do, and to illustrate that it’s not all stamping books and shushing!  And some are more theoretical debates, about the philosophy of public libraries.

Recently, we’ve started to look into the impact these closures might have on archives and special collections.  This was prompted by a blog post from Alison Cullingford, and campaigners are starting to look at what might happen to archive services in their region, as VftL member Lauren has done for Doncaster.

As more closures and cutbacks are threatened, the VftL team have been working overtime.  We’re all volunteers, and do Voices work on top of our day jobs, other professional involvement, continuing education – oh, and real lives!  We’re also scattered across the country, from Brighton to Harrogate, and all points between.  This means that the entire campaign so far has been co-ordinated virtually, using email and various other social media tools.  Most of the team had never even met each other.

Until Wednesday 26 Jan, that is!  Thanks to sponsorship from Credo Reference we were able to get most of the team down to London for a proper face-to-face board meeting, which I chaired.  I’ve never chaired a real meeting before, and I have to thank the Voices team for making it incredibly easy!  We only ran an hour over time, and managed to discuss and make decisions on several key points.   I think it definitely ranks as the best all-day meeting I’ve ever attended.

One of the things that hasn’t changed is that we’re always on the lookout for stories about the value of public library services, and why they are so important to people.  If you’d like to share your story, or tell us more about what’s going on in your area, you can contact us at stories@voicesforthelibrary.org.uk.

A bit about Resource Discovery

The UK Archives Discovery Network (UKAD) recently advertised our upcoming Forum on the archives-nra listserv. This prompted one respondent to ask whether ‘resource discovery’ is what we now call cataloguing and getting the catalogues online. The respondent went on to ask why we feel it necessary to change the terminology of what we do, and labelled the term resource discovery as ‘gobbledegook’. My first reaction to this was one of surprise, as I see it as a pretty plain-talking way of describing the location and retrieval of information, but then I thought that it’s always worth considering how people react and what leads them to take a different perspective.

It made me think that even within a fairly small community, which archivists are, we can exist in very different worlds and have very different experiences and understanding. To me, ‘resource discovery’ is a given; it is not in any way an obscure term or a novel concept. But I now work in a very different environment from when I was an archivist looking after physical collections, and maybe that gives me a particular perspective. Being manager of the Archives Hub, I have found that a significant amount of time has to be dedicated to learning new things and absorbing new terminology. There seem to be learning curves all over the place, some little and some big: understanding how our Hub software (Cheshire) processes descriptions and Encoded Archival Description, deciding whether to move to the EAD schema, understanding namespaces, search engine optimisation, sitemaps, application programming interfaces, character encoding, stylesheets, log reports, ways to measure impact, machine-to-machine interfaces, scripts for automated data processing, linked data and the semantic web, and so on. A great deal of this is about the use of technology, and figuring out how much you need to know about technology in order to use it to maximum effect. It is often a challenge, and our current Linked Data project, Locah, is very much a case in point (see the Locah blog). Of course, it is true that terminology can sometimes get in the way of understanding, and indeed, defining and having a common understanding of terms is often itself a challenge.

My expectation is that there will always be new standards, concepts and innovations to wrestle with, try to understand, integrate or exclude, accept or reject, on pretty much a daily basis. When I was the archivist at the RIBA (Royal Institute of British Architects), back in the 1990s, my world centred much more around solid realities: around storerooms, temperature and humidity, acquisitions, appraisal, cataloguing, searchrooms and the never-ending need for more space and more resources. I certainly had to learn new things, but I also had to spend far more time than I do now on routine or familiar tasks; very important, worthwhile tasks, but still largely familiar and centred around the institution that I worked for and the concepts and terminology commonly used by archivists. If someone had asked me what resource discovery meant back then, I’m not sure how I would have responded. I think I would have said that it was to do with cataloguing, and I would have recognised the importance of consistency in cataloguing. I might have mentioned our website, but only in as far as it provided access through to our database. The issues around cross-searching were still very new and ideas around usability and accessibility were yet to develop.

Now, I think about resource discovery a great deal, because I see it as part of my job to think about how best to represent the contributors who put time and effort into creating descriptions for the Hub. To use another increasingly pervasive term, I want to make the data that we have ‘work harder’. For me, catalogues that are available within repositories are just the beginning of the process. That’s fine if you have researchers who know that they are interested in your particular collections. But we need to think much more broadly about our potential global market: all the people out there who don’t know they are interested in archives – some, even, who don’t really know what archives are. To reach them, we have to think beyond individual repositories and we have to see things from the perspective of the researcher. How can we integrate our descriptions into the ‘global information environment’ in a much more effective way? A most basic step here, for example, is to think about search engine optimisation. Exposing archival descriptions through Google, and other search engines, has to be one very effective way to bring in new researchers. But it is not a straightforward exercise – books are written about SEO and experts charge for their services in helping optimise data for the Web. For the Archives Hub, we were lucky enough to be part of an exercise looking at SEO and how to improve it for our site. We are still (pretty much as I write) working on exposing our actual descriptions more effectively.
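As one small, concrete illustration of the kind of step involved, here is a minimal sketch of generating a sitemap listing archive description pages, so that search engines can find them all. The URLs are invented, and real SEO work of course goes far beyond this.

```python
# A minimal sketch: write a sitemap.xml listing archive description pages so that
# search engines can discover them. The URLs below are invented examples.
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

description_urls = [
    "https://example.org/archives/gb0001-expedition-papers",
    "https://example.org/archives/gb0002-correspondence",
]

entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in description_urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    f'<urlset xmlns="{SITEMAP_NS}">\n{entries}\n</urlset>\n'
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```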

Linked Data provides another whole world of unfamiliar terminology to get your head round. Entities, triples, URI patterns, data models, concepts and real-world things, SPARQL queries, vocabularies – the learning curve has indeed been steep. Working on outputting our data as RDF (a modelling framework for Linked Data) has made me think again about our approach to cataloguing and cataloguing standards.

At the Hub, we’re always on about standards and interoperability, and it’s when you come to something like Linked Data, where there are exciting possibilities for all sorts of data connections, well beyond just the archive community, that you start to wish that archivists catalogued far more consistently. If only we had consistent ‘extent’ data, for example, we could look at developing a lovely map-based visualisation showing where there are archives on specific subjects all around the country, giving a sense of where there are more collections and where there are fewer. If only we had consistent entries for people’s names, we could do the same sort of thing there, but even with thesauri we often have more than one name entry for the same person. I sometimes think that cataloguing is more of an art than a science, partly because it is nigh on impossible to know what the future will bring, and therefore knowing how to catalogue to make the most of as yet unknown technologies is tricky to say the least. But also, even within the environment we now have, archivists do not always fully appreciate the global and digital environment, which requires new ways of thinking about description.

Which brings me back to the idea of whether resource discovery is just another term for cataloguing and getting catalogues online. No, it is not. It is about the user perspective, about how researchers locate resources and how we can improve that experience. The term has increasingly become identified with the Web, referring to the fundamental elements of the Web: objects that are available and can be accessed through the Internet – in fact, any concept that has an identity expressed as a URI. Yes, cataloguing is key to archives discovery, cataloguing to recognised standards is vital, and getting catalogues online in your own particular system is great… but there is so much more to the whole subject of enabling researchers to find, understand and use archives and integrating archives into the global world of resources available via the Web.
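To make some of that terminology slightly less abstract, here is a minimal sketch of a few triples describing a fictional collection, written with the rdflib Python library. The URIs, property choices and values are all invented for illustration – this is not the Locah or Archives Hub data model – but it shows how ‘extent’, subject and creator information become statements that can link out to things well beyond the archival world, such as an authority record URI.

```python
# A minimal, illustrative set of triples for a fictional collection, using rdflib.
# All URIs and values are invented; this is a sketch, not a real archival data model.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/archives/")

g = Graph()
collection = EX["gb0001-expedition-papers"]

g.add((collection, DCTERMS.title, Literal("Expedition papers")))
g.add((collection, DCTERMS.extent, Literal("3 boxes")))                      # the 'extent' data discussed above
g.add((collection, DCTERMS.subject, Literal("Polar expeditions")))
g.add((collection, DCTERMS.creator, URIRef("http://viaf.org/viaf/000000")))  # placeholder authority URI

print(g.serialize(format="turtle"))
```

Once descriptions exist in this form, a single query across many collections could, for example, list every collection on a given subject together with its extent – exactly the kind of aggregate view that inconsistent cataloguing makes so hard to build.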