International archival standards: living in perfect harmony?

The International Council on Archives Committee on Best Practices and Standards met recently to look at the four ICA descriptive standards: ISAD(G), ISAAR(CPF), ISDF and ISDIAH. It was agreed at this meeting to delay a full review that might lead to more substantial changes and to concentrate on looking at harmonization.
On the Hub we use ISAD(G), which has become very widely recognised and used. ISAAR(CPF) is something that would be important if we started to think about implementing EAC-CPF, enabling our contributors to create authority records for creators of archives. We think that this is the sort of development that should have cross-sectoral agreement, and we are actively involved in the UK Archives Discovery Network (UKAD), which provides a means for us to discuss these sorts of issues across the archives community in the UK.
As far as the International Standard for Describing Functions (ISDF) is concerned, I feel that a great deal more work is needed to help archivists understand how this can be practically implemented. Our new EAD Editor does allow contributors to add functions to their descriptions, but this is just using the EAD tag for functions. To me, the whole issue of functions and activities is problematic because I am looking at it from the perspective of aggregation. It is all very well for one institution to define their own functions and activities, but how does this translate into the wider environment? How do we successfully enable researchers to access archives by searching functions and activities across diverse institutions?
I have not really given any thought at all to the International Standard for Describing Institutions with Archival Holdings (ISDIAH) other than to basically familiarise myself with the standard. For us, the unique code that identifies the institution and the institution’s name are all that we require within our descriptions. We link to the Archon details for the institution, and maybe it is in the Archon directory of UK archives that ISDIAH should be implemented? I am not sure that it would be appropriate to hold detailed information about individual institutions on the Hub.
I will be interested to see what the outcomes of the Committee’s work are. I wonder whether we need a greater understanding of the standards themselves before we try to understand how they work together? Maybe adopting more consistent terminology and providing a conceptual framework will help archivists to appreciate what the standards are trying to achieve and encourage more use, but I am doubtful. I think that a few training days on ‘Understanding the ICA Descriptive Standards’ wouldn’t go amiss for many archivists, who may have only recently adopted ISAD(G), let alone thought about the implications of the other standards.
In the appendices to the minutes, there are some interesting points of discussion. Even some of the assumptions seem to be based on a greater understanding of the standards than most archivists have. For example, ‘if you use ISAD(G) in conjunction with ISAAR, the Admin/Biog history element of ISAD(G) becomes useless because the description of the record creator is managed by ISAAR’. Well, yes, but I’m not sure that this is so clear cut in practice. It makes sense, of course, but how do we relate that to all the descriptions we now have? Also, ‘ISAAR can be used to structure the information contained in the Admin/Biog history element of ISAD(G)’ – that makes sense, but I know of no practical examples that show archivists are doing this.
I wonder if we really need to help archivists to understand the standards – what they are, what they do, how they work, how they can benefit resource discovery – before we throw a conceptual framework at them. At the same time, I increasingly feel that ISAD(G) is not relevant to the modern environment and therefore I think there is a pressing need to review ISAD(G) before looking at how it relates to other standards.

Hub contributors’ reflections on the current and future state of the Hub



The Archives Hub is what the contributors make it, and with over 170 institutions now contributing, we want to continue to ensure that we listen to them and develop in accordance with their needs. This week we brought together a number of Archives Hub contributors for a workshop session. The idea was to think about where the Hub is now and where it could go in the future.
We started off by giving a short overview of the new Hub strategy, and updating contributors on the latest service developments. We then spent the rest of the morning asking them to look at three questions: What are the benefits of being part of the Hub? What are the challenges and barriers to contributing? What sort of future developments would you like to see?
Probably the strongest benefit was exposure – as a national service with an international user-base the Hub helps to expose archival content, and we also engage in a great deal of promotional work across the country and abroad. Other benefits that were emphasised included the ability to search for archives without knowing which repository holds them, and the pan-disciplinary approach that a service like the Hub facilitates. Many contributors also felt that the Hub provides them with credibility, a useful source of expertise and support, and sometimes ‘a sympathetic ear’, which can be invaluable for lone archivists struggling to make their archives available to researchers. The network effect was also raised – the value of having a focus for collaboration and the exchange of ideas.
A major barrier to contributing is the backlog of data, which archivists are all familiar with, and the time required to deal with this, especially with the lack of funding opportunities for cataloguing and retro-conversion. The challenges of data exchange were cited, and the need to make this a great deal easier. For some, getting the effective backing of senior managers is an issue. For those institutions that host their own descriptions (Spokes), the problems surrounding the software, particularly in the earlier days of the distributed system, were highlighted, along with the requirement for technical support. One of the main barriers here may be the relationship with the institution’s own IT department. It was also felt that the use of Encoded Archival Description (EAD) may be off-putting to those who feel a little intimidated by the tags and attributes.
People would like to see easy export routines to contribute to the Hub from other systems, particularly from CALM, a more user-friendly interface for the search results, and maybe more flexibility with display, as well as the ability to display images and seamless integration of other types of files. ‘More like Google’ was one suggestion, and certainly exposure to Google was considered to be vital. It would be useful for researchers to be able to search a Spoke (institution) and then run the same search on the central Hub automatically, which would create closer links between Spokes and Hub. Routes through to other services would add to our profile, and more interoperability with digital repositories would be well-received. Similarly, the ability to search across archival networks, and maybe other systems, would benefit users and enable more people to find archival material of relevance. Influencing the right people and lobbying were also listed as things the Hub could do on behalf of contributors.
After a very good lunch at Christie’s Bistro we returned to look at three particular developments that we all want to see, and each group took one issue and thought about the drivers that move it forward and the restraining forces that stop it from happening. We thought about usability, which is strongly driven by the need to be inclusive and to de-mystify archival descriptions for those not familiar with archives and in particular archival hierarchies. It is also driven by the need to (at least in some sense) compete with Google, the need to be up-to-date, and the need to think about exposing the data to mobile devices. However, the unrealistic expectations that people have and, fundamentally, the need to be clear about who our users are and to understand their needs are hugely important. The quality and consistency of the data and markup also come into play here, as does the recognition that this sort of thing requires a great deal of expert software development.
The need for data export, the second issue that we looked at, is driven by the huge backlogs of data and the big impact that this should have on the Hub in terms of the quantity of descriptions. It should be a selling point for vendors of systems, with the pressure of expectation from stakeholders for good export routines. It should save time, prove to be good value for money and be easily accommodated into the workflow of an archive office. However, complications arise with the variety of systems out there, the number of standards, and variance in the application of standards. There may be issues about the quality of the data and people may be resistant to changing their work habits.
Our final issue, the increased access to digital content, is driven by increased expectations for accessing content, making the interface more visually attractive (with embedded images), the drive towards digitisation and possibly the funding opportunities that exist around this area. But there is the expense and time to consider, issues surrounding copyright, the issue of where the digital content is stored and issues around preservation and future-proofing.
The day ended with a useful discussion on measuring impact. We got some ideas from contributors that we will be looking at and sharing with you through our blog. But the challenges of understanding the whole research life-cycle and the way that primary sources fit into this are certainly a major barrier to measuring the impact that the Hub may have in the context of research outputs.

Web 2.0 for teaching: wishy-washy or nitty-gritty?

A useful report, summarising Web 2.0 and some of the perspectives in literature about Web 2.0 and teaching, was recently produced by Susan A. Brown of the School of Education at the University of Manchester: The Potential of Web 2.0 in Teaching: a study of academics’ perceptions and use. The findings were based on a questionnaire (74 respondents across 4 Faculties) and interviews (8 participants) with teaching staff from the University of Manchester. It is available on request, so let us know if you would like a copy.
Some of the points that came out of the report:
  • It is the tutors’ own beliefs about teaching that are the main influence on their perceptions of Web 2.0
  • There is little discussion about Web 2.0 amongst colleagues and the use of it is generally a personal decision
  • Top-down goals and initiatives do not play a major part in use of Web 2.0
  • It may be that a bottom-up experimental approach is the most appropriate, especially given the relative ease with which Web 2.0 tools can be deployed, although there were interviewees who argued for a more considered and maybe more strategic approach, which suggests something that is more top-down
  • There is little evidence that students’ awareness of Web 2.0 is a factor, or that students are actively arguing in favour of its use:
“This absence of a ‘student voice’ in tutors’ comments on Web 2.0 is interesting given the perceptions of ‘digital natives’ – the epithet often ascribed to 21st Century students – as drivers for the greater inclusion of digital technologies. It may shore up the view that epithets such as ‘digital natives’ and ‘Millennials’ to describe younger students over-simplify a complex picture where digital/Web technology users do not necessarily see the relevance of Web 2.0 in education.”
  • The use of and familiarity with Web 2.0 tools (personal use or use for research) was not a particularly influential factor in whether the respondents judged them to have potential for teaching.
  • In terms of the general use of Web 2.0 tools, mobile social networking (e.g. Twitter) and bookmarking were the tools used the least amongst respondents. Wikis, blogs and podcasting had higher use.
  • In terms of using these tools for teaching, the data was quite complex, and rather more qualitative than quantitative, so it is worth looking at the report for the full analysis. There were interviewees who felt that Web 2.0 is not appropriate for teaching, where the role of a teacher is to lay down the initial building blocks of knowledge, implying that discussion can only follow understanding, not be used to achieve understanding. There was also a notion that Web 2.0 facilitates more surface, social interactions, rather than real cognitive engagement.
“A number of…respondents expressed the view that Web 2.0 is largely socially orientated, facilitating surface ‘wishy-washy’ discussion that cannot play a role in tackling the ‘nitty-gritty’ of ‘hard’ subject matter”.
Three interviewees saw a clear case for the use of Web 2.0 and they referred to honing research skills, taking a more inquiry-based approach and taking a more informal approach and tapping into a broader range of expertise.
In conclusion, “The study indicates that there are no current top-down and bottom-up influences operating that are likely to spread Web 2.0 use beyond individuals/pockets of users at the UoM [University of Manchester]”. The study recommends working with a small group of academics to get a clearer understanding of the issues they face in teaching and how Web 2.0 might offer opportunities, as well as providing an opportunity for more detailed discussion about teaching practices and thinking about how to tailor Web 2.0 for this context.

Archival Context: entities and multiple identities


I recently took part in a Webinar (Web seminar) on the new EAC-CPF standard. This is a standard for the encoding of information about record creators: corporate bodies, persons and families. This information can add a great deal to the context of archives, supporting a more complete understanding of the records and their provenance.

We were given a brief overview of the standard by Kathy Wisser, one of the Working Group members, and then the session was open to questions and discussion.

The standard is very new, and archivists are still working out how it fits into the landscape and how it relates to various other standards. It was interesting to note how many questions essentially involved the implementation of EAC-CPF: who creates the records? Where are they kept? How are they searched? Who decides what?
These questions are clearly very important, but the standard is just a standard for the encoding of ISAAR(CPF) information. It will not help us to figure out how to work together to create and use EAC-CPF records effectively.
In general, archivists use EAD to include a biographical history of the record creator, and may not necessarily create or link to a whole authority record for them. The idea behind EAC-CPF is that providing separate descriptions for different entities is more logical and efficient. The principle of separation of entities is well put: “Because relations occur between the descriptive nodes [i.e. between archive collections, creators, functions, activities], they are most efficiently created and maintained outside of each node.” So if you have a collection description and a creator description, the relationship between the two is essentially maintained separately from the actual descriptions. If only EAD itself were a little more data-centric (database friendly, you might say), this would facilitate a relational approach.
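To make the principle a little more concrete, here is a minimal sketch in Python; the class and field names are entirely my own invention, and it is only meant to illustrate what it means for the relationship to be a record in its own right, kept outside both descriptions:

```python
from dataclasses import dataclass

# A hypothetical, simplified model: two descriptive 'nodes' and a relation
# that is created and maintained outside of either of them.

@dataclass
class CollectionDescription:      # an ISAD(G)/EAD-style collection description
    identifier: str
    title: str

@dataclass
class CreatorDescription:         # an ISAAR(CPF)/EAC-CPF-style creator description
    identifier: str
    name: str

@dataclass
class Relation:                   # the link lives outside both descriptions
    source: str                   # identifier of one node
    relation_type: str            # e.g. 'creatorOf'
    target: str                   # identifier of the other node

collection = CollectionDescription("gb100mss", "Example manuscript collection")
creator = CreatorDescription("person-001", "Example Person")

# Neither description embeds the other; either can be revised or replaced
# without touching the other, because the relation is a separate record.
relations = [Relation(creator.identifier, "creatorOf", collection.identifier)]
```

The point is simply that the relationship becomes a first-class thing of its own, which is exactly what a more data-centric treatment of EAD would make easier.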
I am interested in how we will effectively link descriptions of the same person, because I cannot see us managing to create one single authoritative record for each creator. This is enabled via the ‘identities’: a record creator can have two or more identities with each represented by a distinct EAC-CPF instance. I think the variety of identity relationships that the standard provides for is important, although it inevitably adds a level of complexity. It is something we have implemented in our use of the tag to link to related descriptions. Whilst this kind of semantic markup is a good thing, there is a danger that the complexity will put people off.
I’m quite hung-up on the whole issue of identifiers at the moment. This may be because I’ve been looking at Linked Data and the importance of persistent URLs to identify entities (e.g. I have a URL, you have a URL, places have a URL, things have a URL, and that way we can define all these things and then provide links between them). The Archives Hub is going to be providing persistent URLs for all our descriptions, using the unique identifier made up of the country code, repository code and local reference for the collection (e.g. http://www.archiveshub.ac.uk/search/record.html?id=gb100mss, where 100 is the repository code and MSS is the local reference).
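Just to illustrate the pattern (the helper function below is my own sketch, not part of any Hub code), the identifier is simply the three parts run together:

```python
def hub_record_url(country_code: str, repository_code: str, local_reference: str) -> str:
    """Assemble a persistent URL from country code, repository code and local reference.

    Illustrative sketch only: the id is the three parts concatenated and
    lower-cased, following the gb100mss example above.
    """
    record_id = f"{country_code}{repository_code}{local_reference}".lower()
    return f"http://www.archiveshub.ac.uk/search/record.html?id={record_id}"

# country code 'gb', repository code '100', local reference 'MSS'
print(hub_record_url("gb", "100", "MSS"))
# http://www.archiveshub.ac.uk/search/record.html?id=gb100mss
```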
I feel that it will be important for ISAAR(CPF) records to have persistent URLs, and these will come from the recordID and the agencyCode. Part of me thinks the agency responsible for the EAC-CPF instance should not be part of the identifier, because the record should exist apart from the institution that created it, but then realistically, we’re not going to get consensus on some kind of independent stand-alone ISAAR(CPF) record. One of the questions I’m currently asking myself is: if two different bodies have EAC-CPF records, does it matter what the identifiers/URLs are for those records, even if they are for the same person? Is the important thing to relate them as representing the same thing? I’m sure it’s very important to have a persistent URL for all EAC-CPF instances, because that is how they will be discoverable; that is their online identity. But the question of providing one unique identifier for one person, or one corporate body, is not something I have quite made my mind up about.
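One way of picturing that question (with agency codes and record IDs I have made up purely for illustration): two agencies each create their own EAC-CPF instance for the same person, with identifiers built from their own agencyCode and recordID, and the fact that they describe the same entity is recorded as a separate assertion rather than by forcing a single shared identifier:

```python
# Two hypothetical EAC-CPF instances for the same person, held by different
# agencies; the agencyCode and recordID values are invented for illustration.
record_a = {"agencyCode": "GB-100", "recordID": "p0001", "name": "Example Person"}
record_b = {"agencyCode": "GB-200", "recordID": "creator-42", "name": "Example Person"}

def record_identifier(record: dict) -> str:
    # One plausible convention: agencyCode plus recordID. Other schemes are possible.
    return f"{record['agencyCode']}/{record['recordID']}"

# Rather than insisting on one identifier per person, the two records are
# simply asserted to represent the same entity, and that assertion is
# maintained separately, much like the relations discussed above.
same_entity_links = [(record_identifier(record_a), record_identifier(record_b))]
```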
It will be interesting to see how the standard is assessed by archivists and more examples of implementation. The Archives Hub would be very interested to hear from anyone using it.

Designs on Delivery: GPO Posters from 1930 to 1960: Online extras

 Mail Coach A.D. 1784

University of the Arts London Archives and Special Collections Centre, in collaboration with The British Postal Museum & Archive, presents Designs on Delivery: GPO Posters from 1930 to 1960. The exhibition at the Well Gallery – and online here on the Archives Hub – focuses on a period when the Post Office was at the cutting edge of poster design and mass communication. It explores how the GPO translated often complex messages to the public, in order to educate them about the services offered, by using text, image, and colour.

The Archives Hub website now has online extras: exclusively online, an additional eight posters representing the range of themes adopted by the General Post Office in their advertising.

Illustration: John Armstrong (1893-1973) ‘Mail Coach A.D. 1784’ (1935) reference The Royal Mail Archive POST 110/3175; copyright © Royal Mail Group Ltd and courtesy of The British Postal Museum & Archive.

Sustainable content: visits to contributors

I recently visited two of the contributors to the Archives Hub sustainable content development project. The archivists at Queen Mary, University of London (QMUL) and the BT Archives were nice enough to let me drink their tea, and see how they used CALM.

Axiell, developers of the CALM software, have kindly let us have access to a trial version of CALM to help with this project, but it

Designs on Delivery: GPO Posters from 1930 to 1960

NIGHT MAIL

University of the Arts London Archives and Special Collections Centre, in collaboration with The British Postal Museum & Archive, presents Designs on Delivery: GPO Posters from 1930 to 1960. The exhibition at the Well Gallery – and online here on the Archives Hub – focuses on a period when the Post Office was at the cutting edge of poster design and mass communication. It explores how the GPO translated often complex messages to the public, in order to educate them about the services offered, by using text, image, and colour.

As part of the exhibition, the Well Gallery will be showing on loop Night Mail (1936) which the British Film Institute calls "one of the most popular and instantly recognised films in British film history … one of the most critically acclaimed films … [of the] documentary film movement".

Illustration: poster designed by Pat Keely (died 1970) for the film Night Mail, reference The Royal Mail Archive POST 109/377; copyright © Royal Mail Group Ltd and courtesy of The British Postal Museum & Archive.

A few thoughts on context and content

I have been reading with interest the post and comments on Mark Matienzo’s blog: http://thesecretmirror.com. He asks ‘Must contextual description be bound to records description?’

I tend to agree with his point of view that this is not a good thing. The Archives Hub uses EAD, and our contributors happily add excellent biographical and administrative history information into their descriptions, via the tag, information that I am sure is very valuable for researchers. But should our descriptions leave out this sort of information and be just descriptions of the collection and no more? Wouldn’t it be so much more sensible to link instead to contextual information that is stored separately?
Possibly, on the other side of the argument, if archivists created separate biographical/administrative history records, would they still want to contextualise them for specific collection descriptions anyway? It makes perfect sense to have the information separate to the collection description if it is going to be shared, but will archivists want to modify it to make it relevant to particular collections? Is it sensible to link to a comprehensive biographical record for someone when you are describing a very small collection that only refers to a year in their life?
Of course, we don’t have the issue with EAD at the moment, in so far as we can’t include an EAC-CPF record in an EAD record anyway, because it doesn’t allow stuff to be included from other XML schemas (no components from other namespaces can be used in EAD). But I can’t help thinking that an attractive model for something like the Archives Hub would be collection descriptions (including sub-fonds, series, items), that can link to whatever contextual information is appropriate, whether that information is stored by us or elsewhere. This brings me back to my current interest – Linked Data. If the Web is truly moving towards the Linked Data model, then maybe EAD should be revised in line with this? By breaking information down into logical components, it can be recombined in more imaginative ways – open and flexible data!
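As a very rough sketch of what that might look like (all of the URIs and property names below, apart from the Hub record URL pattern mentioned earlier, are invented for illustration), the descriptions become resources with URIs and the links between them become simple subject-predicate-object statements:

```python
# A hypothetical Linked Data-style view: each logical component is a resource
# with its own URI, and the links between them are plain triples.

collection_uri = "http://www.archiveshub.ac.uk/search/record.html?id=gb100mss"
creator_uri = "http://example.org/eac-cpf/person-001"        # invented for illustration
bioghist_uri = "http://example.org/context/person-001/bio"   # invented for illustration

triples = [
    (collection_uri, "hasCreator", creator_uri),
    (creator_uri, "hasBiographicalHistory", bioghist_uri),
]

# Because the links are just statements about URIs, the contextual information
# can live anywhere (held by us, or by another institution entirely) and be
# recombined in new ways without rewriting the collection description itself.
```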