Digital Curation: think use, not preservation

For the keynote presentation at the DCC/RIN Research Data Management Forum on ‘The Economics of Applying and Sustaining Digital Curation’, Chris Rusbridge gave us some reflections from the Blue Ribbon Task Force (BRTF) on Sustainable Digital Preservation and Access: http://brtf.sdsc.edu/about.html. This was a two-year project, finishing earlier this year, and the final report is available from: http://brtf.sdsc.edu/biblio/BRTF_Final_Report.pdf

Chris kicked off by asking us to think about how we currently support access to digital information. Avenues include Government grants, advertisements (e.g. through Google), subscriptions (to journals), pay per service (e.g. Amazon Web Services), and donations.

One of the key themes that he raised and returned to was around the alignment, or lack of alignment, between those who pay, those who provide and those who benefit from digital data: they are not necessarily the same, and the more different they are, the harder it may be to create a sustainable model. Who owns, who benefits, who selects, who preserves, who pays?  This has interesting parallels with archive repositories, where an institution may pay for the acquisition, appraisal, storage, cataloguing and access for these resources, but the beneficiaries are far broader than just members of the institution. Some institutions may require payment for access, but others will provide access free of charge. They may see this as a means to enhance their reputation and status as a learned society.

Around 15 years ago we started to think about digital preservation as a technical problem and then the OAIS reference model was produced. The technical capabilities that we now have are well up to the task, although Chris warned that the most elegant technical solution is no good if it is not sustainable; digital preservation has to be a sustainable economic activity. Today the focus is on the economic and organisational problems. It is not just about money; it requires building upon a value proposition, providing incentives to act and defining roles and responsibilities.

Digital preservation represents a derived demand.  No one ‘wants’ preservation per se; what they want is access to a resource.  It is not easy to sell a derived demand – often it needs to be sold on some other  basis. This idea of selling the importance of providing use (over time) rather than trying to sell the idea of preservation was emphasised throughout the Forum.

Digital preservation is also ‘path dependent’, meaning that the actions and decisions you take change over time; they are different at different points of the life-cycle. Today’s actions can remove other options for all time.

Cultural issues and mindset may be an issue here, and I was interested in the potential problem Chris proposed of the ‘free-rider’ culture when it comes to making research datasets available. It may be that some (many?) researchers don’t want to pay for things, undervalue services and maybe underestimate costs. Researchers may also resent conformity and what they see as bureaucracy. All in all, it may be difficult to make a case that researchers should in some way pay. This may be compounded by a sense that money invested in preservation is money taken out of research.  Chris suggested that the incentives for preservation are less apparent to the individual researcher, but are more clearly defined when the data is aggregated.

Typically, long-term preservation activities have been funded by short-term resource allocation, although maybe this is gradually changing; a more thorny issue is that of recognising and valuing the benefits of digital preservation, to provide incentives that attract funding. More work needs to be done on articulating the benefits in order to cultivate a sense of their value. However, other speakers at the Forum wondered whether we should actually take the value as a given – maybe we shouldn’t keep asking the question about benefits, but simply acknowledge that it is the right thing to make research and other digital outputs available long-term?  We may be creating problems for ourselves if we emphasise the need to demonstrate value too much, and then struggle to quantify the value. However, this was just one argument, and overall I think that there was a belief that we do need to understand and articulate the benefits of providing long-term access.

There is often a lack of clear responsibility around digital preservation – maybe this is one of those areas where it’s always thought to be someone else’s responsibility? So, appropriate organisation and governance is essential for efficient ongoing preservation, especially when considering the tendency for data to be transferred – these ‘handoffs’ need to be secure.

The three imperatives that the BRTF report comes up with are: to articulate a compelling value proposition; to provide clear incentives to preserve in the public interest; to define roles and responsibilities.

Commenting briefly on the post-BRTF developments, Chris mentioned the EU digital agenda and the LIBER pan-European survey on sustainability preparedness.

There are some mandates emerging:  the NERC and ESRC, for example.  Some publishers do require authors to make available data that substantiates an article, but at present this is not rigorous enough. We need to focus more on the data behind the research and how important it is.

Chris contrasted domain data repositories and institutional data repositories. Domain data repositories: leverage scale and expertise; are valuable for ‘high curation’ data; can carry out a ‘community proxy’ role such as tool development; aggregate demand; are potentially vulnerable to policy change (e.g. AHDS). A mixed funding model is desirable for domain data repositories (e.g. ICPSR). Institutional data repositories: have a reputational business case (risk management, records management aspects, showcasing); should be aligned with institutional goals; can link to institutional research services (e.g. universal backup); can work well for ‘low curation’ cases (relatively small, static datasets); aggregate demand across a set of disciplines.

One issue that came up in the discussion was that we must remember that in fact digital preservation is relatively cheap, especially when compared to the preservation of hard-copy archives, held in acid-free boxes on rows and rows of shelving in secure, controlled search rooms.  So, if the cost is actually not prohibitive, and the technical know-how is there, then it seems imperative to address the organisational issues and to really hammer home the true value of preserving our digital data.

Opening the door to demonstrating value

The Archives Hub team value the links that we have with our contributors, who, after all, make the Hub what it is. We have a Contributors’ Forum in order to establish and develop links with contributors and get their feedback on Hub developments.

This week we ran a Contributors’ Forum that concentrated on measuring impact, something that is becoming increasingly important in order to demonstrate value.  Unfortunately, we ended up with quite a small group, despite sending out some enticing emails – maybe a sign of the difficult times. But we still had a stimulating discussion, and for us it is always very valuable to get a perspective from the actual archives repositories.

We spent the first part of the morning with updates on the Hub and reports from the contributors: John Rylands at Manchester, Salford, Liverpool and Glasgow. Joy then gave a presentation on measuring impact, reflecting on some work that the Archives Hub, Copac and Zetoc services have carried out through online surveys and one-to-one interviews with researchers in order to create case studies.

In the afternoon we concentrated on measuring impact by asking the contributors to think about (i) what sort of information they currently collect about their researchers and (ii) what sort of information they would like to have. Overall, it seems that most archives have some form of registration, where researchers give some details about themselves. But the information recorded varies, not surprisingly. Sometimes information such as the items consulted is given, sometimes researchers are asked to specify their subject area, and at Glasgow they are asked how they found out about the University Archive. At Liverpool all of the requisition slips are studiously kept, so that there is a record of who has looked at what, and at Glasgow there is a log of everything leaving the strong room, and I’m sure that for most archives this is the case.  At Salford, phone and email enquiries are all logged, and website statistics are kept.

However, it seems that in general there is very little information on what happens next. How does the visit to the archive benefit the researcher? Do they use what they have found in publications? reports? articles? The archive repository may find this sort of information out if the researcher asks about copyright issues, but otherwise it is very hard to know. We agreed that informal networks can be valuable here. Archivists often get to know regular researchers, and in fact, this may be more likely to happen at smaller repositories where there is a lone archivist. But this can only account for a small part of the use of the collections. In fact, two of our contributors said that a reception desk had recently been installed so that researchers often don’t really interact directly with the archivist unless they have a particular query, so whilst this may be more efficient, it may distance us more from our users.

Also, it seems that the information that is gathered is not really utilised. It ‘may’ go into reports, and it ‘may’ be used for funding applications, but the suggestion is that this is done in a rather ad hoc manner. At Glasgow, it is important to show that the researchers and students from the University are being prioritised, so the information gathered can help to support this kind of situation.

From the discussion that we had around this topic, our first likely action arose: If there is an easy way for a researcher to grab an archival reference, it will encourage people to include the correct citation, which will help with tracking the use of archives.  This is something that we should be able to introduce for the Hub.

We talked about how easy it would be to simply ask researchers if they will speak about their research. Maybe they could be encouraged to put something about this on the registration form. We felt that if we are honest about what we need (which is often to demonstrate our value in order to secure continued funding), then researchers may be more willing than we might suppose. There is, undoubtedly, a huge feeling of goodwill towards archives, and, as one contributor said, we may be pushing at an open door here.

We talked about the sort of information we would like to gather, and came up with some possibilities:

We would like to know how researchers are coming to the repository – e.g. from the Archives Hub, from a Hub Spoke, from the NRA?
We would like to know if users find what they need from the archival descriptions themselves. Maybe more detailed descriptions sometimes provide the information that they need – they might even show that the archive is not relevant to the research, thus saving the researcher a wasted visit (a positive negative outcome!).
We would like to know more about how people behave when looking at an archive catalogue: Where do they navigate to? Do they explore the catalogue? Do they search laterally?

From the discussion with these contributors, it seems that the Archives Hub has to place more emphasis on issues around ‘market penetration’ than contributors do at present, although it was felt that this is starting to change and that archives may well be faced with more pressure to understand their markets and how to reach them effectively.

Finally, we came up with another action, which was to try to compile 3 case studies over the next year. John Rylands agreed to work with us on the first one, so that we can test out how best to approach this. It may be that telling stories is the most fruitful way to get a sense of the impact that archives have. But we cannot ignore the fact that statistics are required, and we do have to continue to look for different ways to demonstrate our value.

Do we need index terms?

Archival descriptions need to include associated subjects, names and places as index terms. Is that self-evident? Well, certainly we need to do what we can to provide ways into an archive, and you might say the more ways to access it the better. But do archival descriptions need index terms? Do they add anything that keyword searches don’t have?

The Archives Hub encourages our contributors to add access points, which is EAD speak for index terms for subjects, names and places that reflect the content of the description, and therefore the archive. But if those terms are already included in the description, with the technology at our disposal, maybe we can dispense with them as access points and simply query the main body of the description? What are the arguments in favour of keeping index terms?

1. It’s about what is significant. One of the great challenges with archives is drawing out what is important within the archive; enabling researchers to know whether the archive is relevant to them. But this is always going to be a very imperfect exercise. I remember cataloguing an architect’s diaries (Robert Mylne, architect of Blackfriars Bridge) and ending up taking months because I couldn’t bear to leave out any people, or place names or buildings, or building techniques, etc. What if someone really wanted to know about stanchions? If I didn’t mention them, then a search would not bring back the Mylne diaries, and I would have failed to connect researcher to research material. The reality is that with the time and resources at our disposal, what we need to try to do is reflect what is ‘most significant’ and include ‘key concepts’, accepting that this is a somewhat subjective judgement and hoping that this is enough to lead the researcher in the right direction. For the Hub we usually recommend adding somewhere between 3 and 10 index terms to a description. It means that the archivist can (arguably) draw out the most pertinent subjects and list the most significant people.

2. It allows for drawing out entities. So, in a sentence like “The collection comprises of material relating to the British National Antarctic Expedition, 1901-1904 (leader Robert Falcon Scott), the British Antarctic Expedition, 1907-1909, led by Shackleton, correspondence with his family, miscellaneous papers and biographical information”, you can separate out the entities. Corporate bodies such as British National Antarctic Expedition, 1901-1904, and personal names such as Robert Falcon Scott.  This is very useful for machine processing of content, as machines do not know that Robert Falcon Scott is a personal name (although we are increasingly developing sophisticated text mining techniques to address this).

It can be particularly useful where the entities are not obvious from the text, such as “[A]s well as material relating to his broadcast and published works, the archive also includes many scripts…”. Notice a lack of definite subject terms such as ‘playwright’, or ‘writer’.  A human user may infer this, but a general search on ‘playwright’ will not bring back any results because a machine has to know it too, in order to serve the human user.

3. You can then apply consistency to the entities, in terms of using a pre-defined controlled vocabulary.  But in a world where folksonomies are becoming increasingly popular, with increasing use of user tagging, does it make sense to insist on controlled vocabularies?

Take the example above, which is about Arthur Hopcraft. The index terms do include ‘playwrights’ and ‘writers’ so that the user can do a keyword search on these terms, or a specific subject search, and find the description. However, there is an obvious flaw here: the archivist has chosen these terms. Whilst they do both come from the Unesco thesaurus, she could easily have chosen different terms. The index terms do not include ‘scriptwriter’ for example. They do not include ‘television’ or ‘journalism’, both of which could have reasonably been used for this description. We end up with some descriptions that use ‘playwrights’ as a controlled vocabulary term, but others that don’t, and some that maybe use ‘scriptwriters’ when they are essentially about the same subject, or ‘authors’ which is the Unesco preferred term for scriptwriters.

But you cannot cover everything, so you have to make a choice about which subject terms to use. The question is: is it better to have some subject terms rather than none, even if they do not necessarily cover ‘all’ subjects, and so the researcher may carry out a subject search and not find the archive? One important point is that with or without subject terms, you have the same problem; it is just that a specific subject search does actually narrow what the researcher is searching on – the search may not include other fields, such as the scope & content or biographical history. Therefore whilst a subject search helps the researcher to find the most significant collections, it may exclude some collections that might be very pertinent for their research (collections that they may find through a keyword search).

4. Index terms allow for clarification of which entity you are talking about. This can be particularly helpful with identifying people and corporations. The scope and content may refer to Lindsay Anderson, but the index entry will provide the dates and maybe an epithet to clarify that this is Lindsay Gordon Anderson, 1923-1994, film director. You could add this information to the scope and content, but it would tend to make it much more dense and arguably more difficult to read if you did this with all names. It would also imply that all names are of equal significance, and it would not be very helpful for machine processing unless you marked it up so that a machine could identify it as a personal name.

5. Index terms allow for connecting the same entity throughout the system. A very useful and powerful reason to have index terms. The main issue here is that contributors do not always enter the same thing, even with rules and sources to draw upon. Personal and corporate names are usually consistent, but inevitably the addition of the epithet, which is much more of an archival practice than a library practice, means that one person often has a number of different entries. If you took the epithet away, at least for the purposes of identifying the same entity, then things would work reasonably well. For subjects it’s more a case of the sheer number of terms that can be used to describe an archive. If you look for all the descriptions with the subject of ‘first world war’, then you won’t find all the descriptions that are significantly about this subject, because some of them are indexed with ‘world war one’, and others may use ‘war’ and ‘conflict’.

The way around this for the Hub is our ‘Subject Finder’. This is different from a straightforward subject search. It actually looks for similar terms and brings them together. So, a search for ‘first world war’ will bring back ‘world war one’. Similarly, a search for ‘railways’ will bring back the Library of Congress heading of ‘railroads’.
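As a rough illustration of the idea behind this kind of term-grouping search, here is a minimal sketch in Python. The synonym groups, collection titles and index terms are all invented for the example; the real Hub service is, of course, more sophisticated than a hand-built lookup table.

```python
# A minimal sketch of a "Subject Finder" style search that groups
# variant index terms together. All data here is invented for the example.

# Groups of terms that should be treated as the same concept.
SYNONYM_GROUPS = [
    {"first world war", "world war one", "world war, 1914-1918"},
    {"railways", "railroads"},
]

# Hypothetical descriptions, each with the index terms its archivist chose.
DESCRIPTIONS = {
    "Haig papers": {"world war one"},
    "Somme diaries": {"first world war"},
    "LNER records": {"railroads"},
}

def expand(term: str) -> set[str]:
    """Return the term plus any synonyms from the groups above."""
    term = term.lower()
    for group in SYNONYM_GROUPS:
        if term in group:
            return set(group)
    return {term}

def subject_find(term: str) -> list[str]:
    """Find descriptions indexed under the term or any of its synonyms."""
    wanted = expand(term)
    return sorted(
        title for title, terms in DESCRIPTIONS.items()
        if terms & wanted
    )

print(subject_find("first world war"))  # ['Haig papers', 'Somme diaries']
print(subject_find("railways"))         # ['LNER records']
```

The point of the sketch is that a plain subject search on ‘first world war’ would miss the ‘world war one’ descriptions entirely; the grouping step is what brings them together.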

The Subject Finder helps, but does not completely address this problem of the differing choice of terms. It cannot by-pass the fact that sometimes descriptions do not include any subject terms, so then they will not show up in a subject search. Recently I was looking for archives in the Hub on ‘exploration’, and was surprised to find that many of the Antarctic expeditions collections were not listed in the results. This was because some repositories did not use this subject term; a perfectly legitimate choice not to use it, but many other similar archives do use it.

I still feel that it is worth adding the significant entities as index terms, even with the problems of selecting what is ‘significant’ and with the inconsistencies that we have. Cataloguing as a whole is a subjective exercise, and it will never be perfect. For those who say that index terms are out-dated, I can only say that they are proving pretty useful for our current Linked Data project, and that is certainly pretty up to the minute in terms of Web technologies.

One final point in favour: the Archives Hub index terms exist within the descriptions as clickable links. This allows researchers to carry out ‘lateral’ searches, and it is a popular means to traverse descriptions, exploring from one subject to another, from one person to another.

Whether we should also consider enabling researchers to tag descriptions themselves is a whole other issue for another blog post…

This is not a complete case for and against by any means, but I think I’d better leave it there. I’d love to hear your views.

Is the reading room an echo chamber?

I attended the CILIP Yorkshire and Humberside branch & CDG members day at Leeds Met last week.  It was a great day overall, but one of the highlights – and one of the main reasons I’d wanted to attend – was Laura and Ned’s presentation on Escaping the Echo Chamber.

I’d really recommend watching the presentation – it’s a great example of a well-done Prezi, and although it obviously can’t capture everything from the presentation, it stands alone very well.

The basic premise is this:  librarians talk a lot about the state of libraries and information management and literacy and society and all sorts of other highly interesting and exciting stuff. But they only talk about it to other librarians.  They (we!) only talk about it in library blogs read by other librarians.  And I think it really is only other librarians – I can’t do my usual device here of saying ‘librarians/info profs’, because I’m not sure if librarians even talk to other information professionals about these issues.  Well, I’m here to make a tiny start – I’m going to break out of the librarian echo chamber and extend the conversation to archivists. And records managers.  And knowledge managers.  And anyone else who reads this blog!

The problem is: how do we get this information, these discussions, to people outside our immediate professional neighbourhood?  This seems especially urgent now, with funding under threat: we need to demonstrate the value of what we do to people outside our professions.  Ideally, to our users and stakeholders – or to create new users and stakeholders by fuelling their understanding of what we do and what we stand for.

I don’t think this problem is unique to the information professions.  All professions suffer from a skewed public perception of their work.  The trouble is, for most professions this perception is formed from the exciting side of their job:  police catch criminals; doctors cure sick people; firefighters rush heroically into burning buildings.  For information professionals, it’s formed from the most boring and routine part of their job: stamping books, putting documents into boxes, making lists.  Why? Police, doctors and firefighters all do paperwork too, they all have the boring and mundane side to their jobs.  Yet no-one (and I really hope that this is still true by the time this post is published, with how the Big Society is shaping up) is suggesting that volunteers can police our streets, remove our appendices, or extinguish our blazes.

Is this because the routine work for most other professions is done in back rooms, behind closed doors?  For information professionals it’s often the exact opposite – we do our most interesting and exciting work away from the public view.  What people often see us doing are those rote jobs that could be (and increasingly are) done by machines.

So how can we address this? How do we get people to understand the value of what we really do?  It’s far from an easy task. Too often we rely on the same sources that have perpetuated the ‘boring’ stereotypes to bring them down – I’m sure that  ‘Who do you think you are?‘ has helped to change the public perception of archives and archivists.  But we can’t rely on the media deciding to use our professions as a prop for their next hit.  So how can we get out there ourselves?

Please do comment!  There’s a lively debate going on about this over on Twitter – check out #echolib to see what’s been said so far.

First class citizens of the Web

Linked Data enthusiasts like to talk about making concepts within data into first-class citizens. This should appeal to archivists. The idea that the concepts within our data are equal sounds very democratic, and is very appealing for rich data such as archival descriptions. But, where does that leave the notion of the all important top-level archival collection description? Archivists do tend to treat the collection description as superior; the series, sub-series, file, item, etc., are important, but subservient to the collection. You may argue that actually they are not less important, but they must be seen in the context of the collection. But I would still propose that (certainly within the UK) the collection-level description generally tends to be the focus and is considered to be the ‘right’ way into the collection, or at least, because of the way we catalogue, it becomes the main way into the collection.

Linked Data uses as its basis the data graph. This is different from the relational model and the tree structure model. In a graph, entities are all linked together in such a way that none has special status. All concepts are linked, the links are specified – that is to say, the relationships are clarified. In a tree structure, everything filters down, so it is inevitable that the top of the tree does seem like the most important part of the data. A data graph can be thought of as a tree structure where links go both ways, and nothing is top or bottom. You could still talk about the collection description being the ‘parent’ of the series description, but the series description is represented equally in RDF. But, maybe more fundamentally than this, Linked Data really moves away from the idea of the record as being at the heart of things and  replaces this with the idea of concepts being paramount. The record simply becomes one other piece of data, one other concept.
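To make the contrast concrete, here is a toy sketch (plain Python, not real RDF tooling) of an archival hierarchy expressed as a graph of triples. Every identifier and predicate below is invented for the example; the point is simply that you can start a query from any node, and the collection has no privileged status as the ‘top’ of the data.

```python
# An archival hierarchy expressed as a graph of (subject, predicate, object)
# triples rather than a tree. All names here are made up for illustration.
triples = [
    ("collection/scott", "hasPart",  "series/diaries"),
    ("series/diaries",   "isPartOf", "collection/scott"),  # links run both ways
    ("series/diaries",   "subject",  "concept/antarctica"),
    ("collection/scott", "creator",  "person/r-f-scott"),
    ("person/r-f-scott", "name",     "Robert Falcon Scott"),
]

def about(node: str) -> list[tuple[str, str]]:
    """Everything the graph asserts with this node as its subject."""
    return [(p, o) for s, p, o in triples if s == node]

# You can start from the series, the person or the collection:
# no node has special status, and nothing is 'top' or 'bottom'.
print(about("series/diaries"))
print(about("person/r-f-scott"))
```

In a tree you would have to descend from the collection to reach the series; in the graph, ‘about the series’ and ‘about the collection’ are equally direct questions.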

This type of modelling accords with the idea that users want to access the data from all sorts of starting points, and that they are usually interested in finding out about something real (a subject, a person) rather than an archive per se. When you model your data into RDF what you are trying to think about is exactly that – how will people want to access this data. In Australia, the record series is the preferred descriptive entry, and a huge amount has been written about the merits of this approach. It seems to me, with RDF, we don’t need to start with the collection or start with the series. We don’t need to start with anything.

Linked Data graph

This diagram, courtesy of Talis, shows part of a data graph for modelling information about spacecraft. You can see how the subjects (which are always represented by URLs) have values that may be literal (in rectangular boxes) or may point to other resources (URLs). Some of this data may come from other datasets (use of the same URL for a spacecraft enables you to link to a different resource and use the values within that resource).

The emphasis here is on the data – the concepts – not on the carrier of the data – the ‘record’.

In our LOCAH project we will need to look at the issue of hierarchy of multi-level descriptions. In truth, I am not yet familiar enough with Linked Data to really understand how this is going to work, and we have not yet really started to tackle this work. I think I’m still struggling to move away from thinking of the record as the basis of things, because, to use a rather tiresome phrase, RDF modelling is a paradigm shift.  RDF is all about relationships between concepts and I will be interested to see where this leaves relationships between hierarchical parts of an archive description. But I am heartened by Rob Styles’ (of Talis) assertion that RDF allows anyone to say anything about anything.

Who is the creator?

I am currently working on an exciting new Linked Data project, looking at exposing the Archives Hub metadata in a different way, one that could provide great potential for new uses of the data. More on that in future posts. But it has got me thinking about the thorny issue of ‘Name of creator(s)’, as ISAD(G) says. The ‘creator’ of the archive. In RDF modelling (required for Linked Data output) we need to think about how data elements relate to each other and be explicit about the data elements and the relationships between concepts.

Dublin Core has a widely used ‘creator’ element – it would be nice and easy to use that to define the relationship between the person and the archive. The ‘Sir Ernest Shackleton Collection’ createdBy Sir Ernest Shackleton. There is our statement. For RDF we’ll want to identify the names of things with URIs, but leaving that for now, what I’m interested in here is the predicate – the collection was created by Sir Ernest Shackleton, an Antarctic explorer whose papers are represented on the Hub.

The only trouble with this is that the collection was not created by him. Well, it was and it wasn’t. The ‘collection’ as a group of things was created by him. That particular group of things would not exist otherwise. But people will usually take ‘created by’ to mean ‘authored by’. It is quite possible that none of the items in the collection were authored by Sir Ernest Shackleton. ISAD(G) refers to the ‘creation, accumulation and maintenance’ and uses ‘creator’ as shorthand for these three different activities. EAD uses ‘origination’ for the ‘individual or organisation responsible for the creation, accumulation or assembly of the described materials’. Maybe that definition is more accurate because it says ‘or assembly’. The idea of an originator appears to get nimbly around the fact that the person or organisation we attribute the archive to is not necessarily the author – they did not necessarily create any of the records. But the OED defines the originator as the person who originates something, the creator.

It all seems to hang upon whether the creator can reasonably mean the creator of this archive collection – they are responsible for this collection of materials coming together. The trouble is, even if we go with that, it might work within an archival context – we all agree that this is what we mean – but it doesn’t work so well in a general context. If our Linked Data statement is that the Sir Ernest Shackleton collection ‘was created by’ Sir Ernest Shackleton then this is going to be seen, semantically, as the bog-standard meaning of creator, especially if we use a vocabulary that usually defines creator as author. Dublin Core has dc:creator. Dublin Core does not really have the concept of an archival originator, and I suspect that there are no other vocabularies that have addressed this need.
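To see why the choice of predicate is the whole of the problem, here is a small sketch of the two candidate statements written as simple triples. dc:creator is a real Dublin Core term; ‘hub:originator’ and all the example URIs below are made up purely for illustration.

```python
# The two candidate statements, written as (subject, predicate, object) triples.
# dc:creator is real; the 'hub' vocabulary and example URIs are hypothetical.
DC = "http://purl.org/dc/terms/"
HUB = "http://example.org/vocab/"  # hypothetical archival vocabulary

collection = "http://example.org/archive/shackleton-collection"
person = "http://example.org/person/ernest-shackleton"

# Semantically loaded: most consumers will read dc:creator as authorship.
authored = (collection, DC + "creator", person)

# What we actually mean: the person the materials were accumulated around.
originated = (collection, HUB + "originator", person)

for s, p, o in (authored, originated):
    print(f"<{s}> <{p}> <{o}> .")
```

Printed out, the two statements are identical in shape; the entire difference lies in which predicate URI we choose, which is why the lack of an agreed ‘originator’ vocabulary matters so much.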

I would like to end this post with an insightful solution…but none is coming to me at present. I suppose the most accurate one-word description of the role of this person or organisation is ‘accumulator’ or ‘gatherer’. But something doesn’t sound quite right when you start talking about the accumulator. Sounds a bit like a Hollywood movie. Maybe gives it a certain air of mystery, but for representing data in RDF we need clarity and consistency in the use of terms.

Ditchling: A Craft Community

Photo: Ethel Mairet’s workgirls and apprentices at her ‘Gospels’ workshop, Ditchling, in the 1930s; copyright © the Crafts Study Centre, and courtesy of VADS.

In 1921, the letter-cutter, sculptor, artist and writer Eric Gill founded an arts and crafts colony in Ditchling, East Sussex. Known as The Guild of St Joseph and St Dominic, it was a unique experiment in communal life in the early twentieth century, and survived until 1989.

This month we highlight descriptions for the Ditchling collections held by The Crafts Study Centre, which are especially rich in the work of the calligrapher Edward Johnston (1872-1944) and the weaver and dyer Ethel Mairet (1872-1952).

All for one and one for all: the management of cultural collections

Collections Management: A Practical Guide, by the Collections Trust

PAS 197 has recently been published as a new standard, sponsored by the Collections Trust and developed by the BSI. At this week’s meeting of the Archives and Records Association’s Data Standards Group we heard more about the involvement of archivists in the development of the standard. This came about originally following a talk at the DSG by Nick Poole of the Collections Trust; there was a feeling that it was important to ensure the standard met the requirements of archives as well as museums and libraries. Susan Snell (Archivist at the Library of Freemasonry) and Teresa Doherty (Collections Manager at The Women’s Library) put themselves forward to work on a standard for collections management that would be truly cross-domain, and look to fulfil the MLA ideal of one standard for all three sectors.

Essentially, what we needed was a standard that said what we do with materials in our care. The Collections Trust approached the British Standards Institution (BSI) to develop a publicly available specification (PAS) that would provide a broad overview of how heritage collections should be managed. Susan and Teresa only became involved after a number of meetings, but from the presentation that they gave, we were in no doubt that they made their voices heard very successfully. Dr Norman James (TNA) and Christopher Marsend (V&A) were also there representing our profession.

PAS 197 sets out clearly how you manage an archive and how and why archives are different from other collections. It gives a broad overview of how the collections should be managed and lists the key standards. Susan and Teresa explained to us the way that the different contexts within which we work can create barriers. For example, the museum fraternity talk about ‘objects’, which is not appropriate for archives (‘item’ was used instead in PAS 197, as the best compromise). There was a danger that the standard would not lead to cross-sectoral compatibility if these issues were not addressed, but it needed to be a consensus document.

Teresa emphasised that PAS 197 is not aimed at the average archivist or curator or librarian; it is aimed at senior managers, or at helping us to deal more effectively with senior managers, funders and other similar stakeholders. It codifies our best practice for our boards and financial advisers. It sets out what needs to be resourced.

Teresa pointed out a diagram within the standard that summarises the four main areas of work:

i) Developing collections – acquisitions and disposals

ii) Information procedures – catalogues, indexes, survey lists, accessions registers, etc.

iii) Access

iv) Care and conservation

This may seem straightforward to us, but it needs to be spelt out. We should have policies, procedures and documentation for these areas, and resources directed to each area. Teresa and Susan pointed out that the scale should be suitable for the repository. A very small repository may just need one document with four paragraphs addressing each area of collections management. Also, it is a generic model, so it can apply to cross-domain collections. The four areas map to what is required for accreditation for museums and TNA self-assessment. The one area that PAS 197 does not address is governance, which is also part of accreditation; this is covered by other BSI standards.

Susan took us through a diagram that sets out processing collections and clarifies terminology – pre-accession, accession, appraisal, cataloguing, deaccessioning/disposal. So, for example, accession in archives is the same as acquisition in the museum and library world, and appraisal only happens in the archival world.

Susan and Teresa felt that working with the BSI was very productive. They were very professional and gave a neutral perspective, looking to ensure a balanced approach so that all voices were heard.  They also told us that we should be pleased with ourselves as a profession, as we lead the way in terms of the development of useful standards to help us do our work more effectively – see the appendix of the standard for proof of this!

We were informed that there is a move for archives to gain accreditation by 2012, taking over from the TNA self-assessment scheme. There may be issues around scalability here, but hopefully, if the accreditation procedure is guided by PAS 197, it will be achievable for very small collections. Cross-domain accreditation may encourage institutions that are primarily museums or libraries to ensure that their archives are well cared for, catalogued to the appropriate standards and accessible for use.

The Collections Trust have now produced Collections Management: a practical guide (by Susanna Hillhouse, priced at £29.99).

If you are interested in getting a copy of PAS 197, being a BSI standard, it is a little expensive, at about £56. But, it sounds like it may be well worth having. Thumbs up to Susan and Teresa for helping to ensure that this key standard is relevant for archives as well as museums and libraries.

Democracy 2.0 in the US

Democracy 2.0: A Case Study in Open Government from across the pond.

I have just listened to a presentation by David Ferriero – 10th Archivist of the US at the National Archives and Records Administration (www.archives.gov). He was talking about democracy, about being open and participatory. He contrasted the very early days of American independence, where there was a high level of secrecy in Government, with the current climate, where those who make decisions are not isolated from the citizens, and citizens’ voices can be heard. He referred to this as ‘Democracy 2.0’. Barack Obama set out his open government directive right from the off, promoting the principles of more transparency, participation and collaboration. Ferriero talked about seeking to inform, educate and maybe even entertain citizens.

The backbone of open government must be good record keeping. Records document individual rights and entitlements, record actions of government and who is responsible and accountable. They give us the history of the national experience. Only 2-3 percent of records created in conducting the public’s business are considered to be of permanent value and therefore kept in the US archives (still, obviously, a mind-bogglingly huge amount of stuff).

Ferriero emphasised the need to ensure that Federal records of historical value are in good order. But too many records are still at risk of damage or loss. A recent review of record keeping in Federal Agencies showed that 4 out of 5 agencies are at high or moderate risk of improper destruction of records. Cost-effective IT solutions are required to address this, and NARA is looking to lead in this area. An electronic records archive (ERA) is being built in partnership with the private sector to hold all the Federal Government’s electronic records, and Ferriero sees this as the priority and the most important challenge for the National Archives. He felt that new kinds of records create new challenges – that is, records created as a result of social media – and an ERA needs to be able to take care of these types of records.

Change in processes and change in culture are required to meet the new online landscape. The whole commerce of information has changed permanently and we need to be good stewards of the new dynamic. There needs to be better engagement with employees and with the public. NARA are looking to improve their online capabilities to improve the delivery of records. They are developing their catalogue into a social catalogue that allows users to contribute, and they are using Web 2.0 tools to allow greater communication between staff. They are also going beyond their own website to reach users where they are, using YouTube, Twitter, blogs, etc. They intend to develop a comprehensive social media strategy (which will be well worth reading if it does emerge).

The US Government are publishing high value datasets on data.gov and Ferriero said that they are eager to see the response to this, in terms of the innovative use of data. They are searching for ways to step up digitisation – looking at what to prioritise and how to accomplish the most at the least cost. They want to provide open government leadership to Federal Agencies, for example, mediating in disputes relating to FoI. There are around 2,000 different security classification guides in the government, which makes record processing very complex. There is a big backlog of documents waiting to be declassified, some pertaining to World War Two, the Korean War and the Vietnam War, so they will be of great interest to researchers.

Ferriero also talked about the challenge of making the distinction between business records and personal records. He felt that the personal has to be there, within the archive, to help future researchers recreate the full picture of events.

There is still a problem with Government Agencies all doing their own thing. The Chief Information Officers of all agencies have a Council (the CIO Council). The records managers have the Records Management Council. But at the moment it is a case of never the twain shall meet. Even within Agencies the two often have nothing to do with each other… there are now plans to address this!

This was a presentation that ticked many of the boxes of concern – the importance of addressing electronic records, new media, bringing people together to create efficiencies and engaging citizens. But then, of course, it’s easy to do that in words….