Names (9): Structuring Data

In the last Names post I wrote about the 4-step process that covers ‘matching and meaning’. Step 2 was ‘Structuring data’, which means implementing a process to structure the elements that form part of a name string.

Many names are not structured. But if we can process the data to create better structure, we have a much better chance of matching it to other entries.

Here is a table showing some name entries around ‘J Watson’ (my examples are taken from real data, but sometimes tweaked a bit in order to cover different types of patterns – all the patterns will be found within the data).

Names based around ‘J Watson’ put into a structure table

The elements have been put into columns, and this is the idea with our structuring process. Some names are still strings – we cannot always know which part is a surname and which part a forename; and some names do not have that kind of structure anyway. We hope to identify floruit dates, and categorise them as distinct from life dates. We don’t want to match ‘1888-1938’ with ‘fl 1888-1938’ (although we might want to see this as a potential match). We will aim to do something similar with birth and death dates. We want to gather all the information that is not a name or a date as ‘supporting information’.

Once we have the structure, it is far more likely we can match the name, and also control our level of confidence about matching. Here is a shorter table based on some of the entries from above:

name          surname   forename   dates       fl dates    info
              Watson    James      1834-1847               stockbroker
Watson James                       1834-                   stockbroker
              Watson    J          b 1834                  Mr
James Watson                                   1840-1847
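To illustrate what this structuring involves, here is a minimal sketch of pulling a raw entry apart into those columns. The patterns are invented for the examples above (real entries are far more varied), and handling of ‘supporting information’ is omitted:

```python
import re

def structure_name(raw):
    """Split a raw name entry into the columns used above (a sketch)."""
    rec = {"name": None, "surname": None, "forename": None,
           "dates": None, "fl_dates": None, "info": None}
    s = raw.strip()
    # Floruit dates ('fl 1888-1938') must be kept distinct from life dates.
    m = re.search(r"\bfl\.?\s*(\d{4}(?:-\d{4})?)", s)
    if m:
        rec["fl_dates"] = m.group(1)
        s = (s[:m.start()] + s[m.end():]).strip(" ,")
    else:
        # Life dates: 'b 1834', '1834-' or '1834-1847'.
        m = re.search(r"\b(b\s*\d{4}|\d{4}-(?:\d{4})?)", s)
        if m:
            rec["dates"] = m.group(1)
            s = (s[:m.start()] + s[m.end():]).strip(" ,")
    # 'Surname, Forename' is the only pattern we trust for splitting;
    # anything else stays as an unstructured name string.
    if "," in s:
        surname, _, rest = s.partition(",")
        rec["surname"], rec["forename"] = surname.strip(), rest.strip()
    else:
        rec["name"] = s or None
    return rec
```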

You can see that two of the names are simply name strings. We may not be able to identify a surname and forename in ‘James Watson’ or ‘Watson James’. With the structure that we have imposed, it is possible to write a name matching process that provides a match between the first and second entries in the above table, because we can say with some confidence that a name that includes ‘James Watson’ and that has the birth date of ‘1834’ and the additional information ‘stockbroker’ refers to the same person. We might say this is a ‘definite’ match, or a ‘probable’ match. The third entry could be a ‘possible’ match, as it includes ‘Watson’ and ‘J’ with the same birth date of 1834. If the fourth entry had ‘stockbroker’, for example, then we might consider a possible match, but as things stand, it would not be a match.
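The grading logic described above might be sketched like this. The rules and thresholds are illustrative only, not our actual algorithm; note that floruit dates are deliberately ignored when comparing life dates:

```python
import re

def birth_year(rec):
    """First four-digit year in the life dates, if any."""
    m = re.search(r"\d{4}", rec.get("dates") or "")
    return m.group(0) if m else None

def match_confidence(a, b):
    """Grade a candidate match between two structured name records."""
    def tokens(rec):
        parts = [rec.get(k) or "" for k in ("name", "surname", "forename")]
        return set(" ".join(parts).lower().split())
    ta, tb = tokens(a), tokens(b)
    same_birth = birth_year(a) is not None and birth_year(a) == birth_year(b)
    same_info = a.get("info") is not None and a.get("info") == b.get("info")
    # One name wholly contained in the other ('Watson James' in
    # 'James Watson') is stronger evidence than a shared surname.
    full_overlap = ta <= tb or tb <= ta
    if full_overlap and same_birth and same_info:
        return "definite"
    if full_overlap and same_birth:
        return "probable"
    if (ta & tb) and same_birth:
        return "possible"
    return "no match"
```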

It is very important that the interface we develop indicates to end users that we are matching name strings. There is a distinction between matching name strings and simply stating that X and Y are the same person. This will help us with introducing the idea of likely, probable and possible matches.

This structuring work is absolutely at the heart of creating a name interface, and enabling researchers to look up ‘James Watson’ and then potentially go in many different directions through the connections, finding ways that archives may be related. But it is really challenging. We will not ‘get it right’. Even if we had really substantial resources and time, we could not make it perfect. Archivists, as information professionals, are keen on ‘getting it right’, which is usually a good thing; but pulling together information using names created over decades, by thousands of cataloguers, in different systems, without a clear standard to work to… it ain’t ever going to be perfect. The key question is whether this will substantially enhance the researcher experience and allow new connections to be made, and whether it will enable us to create connections outside of the archives domain. We have to have a change of mindset to accept that it is not perfect, but it is still hugely beneficial to research.

Just to emphasise the variation in data that we have, here are some EAD names, given as they are structured. They are all fine displayed within a description, suitable for a human reader, but they create challenges in terms of name matching. When you look at these, you have to think of the structure and semantics – essentially, how can we write an algorithm that allows us to truly identify the person (or that they are not a person!):

<persname>Barron, Lilias Mary Watson (b1912 : science graduate : University of Glasgow, Scotland)</persname>

<origination label="Creator: "><persname role="author">Name of Author: various </persname></origination>

<persname source="nra">
<emph altrender="surname">Blore</emph>
<emph altrender="forename">Edward</emph>
<emph altrender="dates">1787-1879</emph>
<emph altrender="epithet">Architect Artist Antiquary</emph>
</persname>

<origination>Gerald and Joy Finzi</origination>

<origination>Walls Family – Tom Kirby Walls and Tom Kenneth Walls</origination>

<origination>National Union of Railwaymen; Associated Society of Locomotive Engineers and Firemen; National Busworkers’ Association.</origination>

<origination>
<persname authfilenumber="https://viaf.org/viaf/61775126" role="creator" rules="ncarules" source="viaf">
<emph altrender="surname">actor</emph>
<emph altrender="forename">dramatist and criticJohn Whiting English</emph>
</persname>
</origination>

The last one was actually taken directly from VIAF and imported into the Archives Hub, which is, in principle, a really good way to create a structured name. Unfortunately, the process of pulling it into the Hub using the VIAF API did not go quite according to plan. VIAF has just the same challenges as we do – there will be structural mistakes. However, it has the VIAF ID, so funnily enough, it is easier to match than many other names.
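Where a name is well structured, like the Blore example above, standard XML tooling can pull the parts into fields; a sketch (a badly structured entry, like the VIAF import above, would parse just as happily, so the output still needs checking):

```python
import xml.etree.ElementTree as ET

EAD = """<persname source="nra">
<emph altrender="surname">Blore</emph>
<emph altrender="forename">Edward</emph>
<emph altrender="dates">1787-1879</emph>
<emph altrender="epithet">Architect Artist Antiquary</emph>
</persname>"""

def parse_persname(xml_text):
    """Map the altrender-labelled parts of a <persname> into a dict."""
    elem = ET.fromstring(xml_text)
    parts = {e.get("altrender"): (e.text or "").strip()
             for e in elem.findall("emph")}
    parts["source"] = elem.get("source")
    return parts
```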

Many of the above examples are names added as archival creator names (‘origination’). Unfortunately, there has been a tendency for cataloguers to add creator names in a very unstructured way. The old Archives Hub Editor used to encourage this, and most archival systems have a free text field for name of creator. (Now, our Editor structures the creator name and adds it as an index term – so they are both identical).

We are currently looking at the challenge of matching origination name with the index term within the same description. That may sound like an easy task, but very often they are really quite different. For example, for the name of creator you may get:

<origination>Name of Authors: various but include Reverend<persname role="author">Thomas Frognall Dibdin</persname>,<persname role="author">Richard Bentley</persname>,<persname role="author">Philip Bliss </persname>and<persname role="author">Frederick James Furnivall</persname></origination>

This is nicely structured, so that it is easy to see that they are separate names, although the lack of life dates makes unique identification more difficult. If these individual names are also added as index terms, then we want to create just one entry for e.g. ‘Thomas Frognall Dibdin’ – we don’t want two entries for the one name (taken from the ‘origination’ and the ‘controlaccess’ index area) that both represent the same archive collection.

A common pattern is something like:

<origination label="name of creator:">Frances Dennis</origination>

With an index term of:

<persname rules="ncarules"><emph altrender="surname">Dennis</emph><emph altrender="forename">Frances Mary</emph><emph altrender="dates">b 1874</emph><emph altrender="epithet">missionary</emph></persname>

‘Frances Dennis’ as a name string is very likely to be a match with ‘Frances Mary Dennis b 1874 missionary’ when it is within the same collection. If these two entries were in different descriptions, we would not match them.
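One very simple way to express that rule, sketched in Python (a deliberately loose, illustrative test, not our production logic): within a single description, a free-text origination matches a structured index term when all of its words appear among the index term's name words.

```python
def same_collection_match(origination, index_name_parts):
    """True when every word of the origination string appears in the
    index term's name parts. Only safe within one description, where
    'Frances Dennis' and 'Dennis, Frances Mary, b 1874, missionary'
    almost certainly refer to the same person."""
    orig_tokens = set(origination.lower().split())
    index_tokens = set(" ".join(index_name_parts).lower().split())
    return bool(orig_tokens) and orig_tokens <= index_tokens
```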

Our pre-match structuring will go a long way to increasing the number of matches, and hence the intellectual bringing together of knowledge through names. Matching creator name and index term name will reduce the amount of duplication. The framework will be tweakable, so that we can constantly review and improve.

Archives Hub Data and Workflow

Introduction

As those of you who contribute to or use the Hub will know, we went live with our new system in Dec 2016. At the heart of that new system is our new workflow. One of the key requirements we set out with when we migrated was a more robust and sustainable workflow; the system was chosen on the basis that it could accommodate what we needed.

This post is about the EAD (Encoded Archival Description) descriptions, and how they progress through our processing workflow. It is the data that is at the heart of the Archives Hub world. We also work with EAG (Encoded Archival Guide) for repository descriptions, and EAC-CPF (Encoded Archival Context, Corporate bodies, Persons and Families) for name entities. Our system actually works with JSON internally, but EAD remains our means of taking in data and providing data out via our API.

On the Archives Hub now we have two main means of data ingest, via our own EAD Editor, which can be thought of as ‘internal’, and via exports from archive systems, which can be thought of as ‘external’.

Data Ingest via the EAD Editor

1. The nature of the EAD

The Editor creates EAD according to the Archives Hub requirements. These have been carefully worked out over time, and we have a page detailing them at http://archiveshub.jisc.ac.uk/eadforthehub

screenshot of eadforthehub page
Part of a Hub webpage about EAD requirements

When we started work on the new system, we were aware that having a clear and well-documented set of requirements was key. I would recommend having this before starting to implement a new system! But, as is often the case with software development, we didn’t have the luxury of doing that – we had to work it out as we went along, which was sometimes problematic, because you really need to know exactly what your data requirements are in order to set your system up. For example, simply knowing which fields are mandatory and which are not: ostensibly simple, but in reality this took us a good deal of thought, analysis and discussion.

Screenshot of the EAD Editor
EAD Editor

2. The scope of the EAD

EAD has plenty of tags and attributes! And they can be used in many ways. We can’t accommodate all of this in our Editor. Not only would it take time and effort, but it would result in a complicated interface that would not be easy to use.

screenshot of EAD Tag Library
EAD Tag Library

So, when we created the new Editor, we included the tags and attributes for data that contributors have commonly provided to the Hub, with a few more additions that we discussed and felt were worthwhile for various reasons. We are currently looking again at what we could potentially add to the Editor, and prioritising developments. For example, the <materialspec> EAD tag is not accommodated at the moment. But if we find that our contributors use it, then there is a good argument for including it, as details specific to types of materials, such as map scales, can be useful to the end user.

We don’t believe that the Archives Hub necessarily needs to reflect the entire local catalogue of a contributor. It is perfectly reasonable to have a level of detail locally that is not brought across into an aggregator. Having said that, we do have contributors who use the Archives Hub as their sole online catalogue, so we do want to meet their needs for descriptive data. Field headings are an example of content we don’t utilise. These are contained within <head> tags in EAD. The Editor doesn’t provide for adding these. (A contributor who creates data elsewhere may include <head> tags, but they just won’t be used on the Hub; see Uploading to the Editor.)

We will continue to review the scope in terms of what the Editor displays and allows contributors to enter and revise; it will always be a work in progress.

3. Uploading to the Editor

In terms of data, the ability to upload to the Editor creates challenges for us. We wanted to preserve this functionality, as we had it on the old Editor, but as EAD is so permissive, the descriptions can vary enormously, and we simply can’t cope with every possible permutation. We undertake the main data analysis and processing within our main system, and trying to effectively replicate this in the Editor in order to upload descriptions would duplicate effort and create significant overheads. One of our approaches to this issue is that we will preserve the data that is uploaded, but it may not display in the Editor. If you think of the model as ‘data in’ > ‘data editing’ > ‘data out’, then the idea is that the ‘data in’ and ‘data out’ provide all the EAD, but the ‘data editing’ may not necessarily allow for editing of all the data. A good example of this situation occurs with the <head> tag, which is used for section headings. We don’t use these on the Hub, but we can ensure they remain in the EAD and are there in the output from the Editor, so they are retained, but not displayed in the Editor. They can then be accessed by other means, such as through an XML Editor, and displayed in other interfaces.
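The ‘data in’ > ‘data editing’ > ‘data out’ model can be illustrated with a tiny round-trip (the fragment and edit are invented for the example): the <head> tag is never shown or edited, but it survives into the output untouched.

```python
import xml.etree.ElementTree as ET

fragment = ('<scopecontent><head>Scope and Content</head>'
            '<p>Correspondence and diaries.</p></scopecontent>')

# 'data in': parse the full EAD, including tags the Editor never displays.
elem = ET.fromstring(fragment)

# 'data editing': change only what the Editor exposes (the <p> here).
elem.find("p").text = "Correspondence, diaries and photographs."

# 'data out': the <head> survives untouched in the output.
out = ET.tostring(elem, encoding="unicode")
```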

We have disabled upload of exports from the Calm system to the Editor at present, as we found that the data variations, which often caused the EAD to be invalid, were too much for our Editor to cope with. It has to analyse the data that comes in and decide which fields to populate with which data. Some are straightforward – ‘title’ goes into <unittitle> for example, but some are not…for example, Calm has references and alternative references, and we don’t have this in our system, so they cause problems for the Editor.

4. Output from the Editor

When a description is submitted to the Archives Hub from the Editor, it is uploaded to our system (CIIM, pronounced ‘sim’), which is provided by Knowledge Integration, and modified for our own data processing requirements.

Screenshot of the CIIM
CIIM Browse screen

The CIIM framework allows us to implement data checking and customised transformations, which can be specific to individual repositories. For the data from the Editor, we know that we only need a fairly basic default processing, because we are in control of the EAD that is created. However, we will have to consider working with EAD that is uploaded to the Editor, but has not been created in the Editor – this may lead to a requirement for additional data checking and transformations. But the vast majority of the time descriptions are created in the Editor, so we know they are good, valid, Hub EAD, and they should go through our processing with no problems.

Data Ingest from External Data Providers

1. The nature of the EAD

EAD from systems such as Calm, Archivist’s Toolkit and AtoM is going to vary far more than EAD produced from the Editor. Some of the archival management systems have EAD exports. To have an export is one thing; it is not the same as producing EAD that the Hub can ingest. There are a number of factors here. The way people catalogue varies enormously, so, aside from the system itself, the content can be unpredictable – we have to deal with how people enter references; how they enter dates; whether they provide normalised dates for searching; whether entries in fields such as language are properly divided up, or whether one entry box is used for ‘English, French, Latin’, or ‘English and a small amount of Latin’; whether references are always unique; whether levels are used to group information, rather than to represent a group of materials; what people choose to put into ‘origination’ and if they use both ‘origination’ and ‘creator’; whether fields are customised, etc. etc.
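As a small illustration of the kind of normalisation this unpredictability forces on us, here is a sketch that splits a free-text language entry into separate values. The list of recognised languages is invented for the example; genuinely free prose still needs human review:

```python
import re

def split_languages(raw):
    """Best-effort split of a free-text language field into a list.
    Copes with 'English, French, Latin' and with prose like
    'English and a small amount of Latin'."""
    known = {"English", "French", "Latin", "Welsh", "German"}
    words = re.findall(r"[A-Za-z]+", raw)
    return [w for w in words if w in known]
```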

The system itself will influence the EAD output. A system will have a template, or transformation process, that maps the internal content to EAD. We have only worked in any detail with the Calm template so far. Axiell, the provider of Calm, made some changes for us: for example, only six languages were exporting when we first started testing the export, so they expanded this list. We then made additional changes, such as allowing for multiple creators, subjects and dates to export, and ensuring languages in Welsh would export. This does mean that any potential Calm exporter needs to use this new template, but Axiell are going to add it to their next upgrade of Calm.

We are currently working to modify the AdLib template, before we start testing out the EAD export. Our experience with Calm has shown us that we have to test the export with a wide variety of descriptions and modify it accordingly, until we eventually reach a reasonably stable point where the majority of descriptions export OK.

We’ve also done some work with AtoM, and we are hoping to be able to harvest descriptions directly from the system.

2. The scope of the EAD

As stated above, finding aids can be wide ranging, and EAD was designed to reflect this, but as a result it is not always easy to work with. We have worked with some individual Calm users to extend the scope of what we take in from them, where they have used fields that were not being exported. For instance, information about condition and reproduction was not exporting in one case, due to the particular fields used in Calm, which were not mapping to EAD in the template. We’ve also had instances of index terms not exporting, and sometimes this had been due to the particular way an institution has set up their system. It is perfectly possible for an institution to modify the template themselves so that it suits their own particular catalogues, but this is something we are cautious about, as having large numbers of customised exports is going to be harder to manage, and may lead to more unpredictable EAD.

3. Uploading to the Editor

In the old Hub world, we expected exports to be uploaded to the Editor. A number of our contributors preferred to do this, particularly for adding index terms. However, this led to problems for us because we ended up with such varied EAD, which militated against our aim of interoperable content. If you catalogue in a system, export from that system, upload to another system, edit in that system, then submit to an aggregator (and you do this sometimes, but other times you don’t), you are likely to run into problems with version control. Over the past few years we have done a considerable amount of work to clarify ‘master’ copies of descriptions. We have had situations where contributors have ended up with different versions to ours, and not necessarily been aware of it. Sometimes the level of detail would be greater in the Hub version, sometimes in the local version. It led to a great deal of work sorting this out, and on some occasions data simply had to be lost in the interests of ending up with one master version, which is not a happy situation.

We are therefore cautious about uploading to the Editor, and we are recommending to contributors that they either provide their data directly (through exports) or they use the Editor. We are not ruling out a hybrid approach if there is a good reason for it, but we need to be clear about when we are doing this, what the workflow is, and where the master copy resides.

4. Output from Exported Descriptions

When we pass the exports through our processing, we carry out automated transformations based on analysis of the data. The EAD that we end up with – the processed version – is appropriate for the Hub. It is suitable for our interface, for aggregated searching, and for providing to others through our APIs. The original version is kept, so that we have a complete audit trail, and we can provide it back to the contributor. The processed EAD is provided to the Archives Portal Europe. If we did not carry out the processing, APE could not ingest many of the descriptions, or else they would ingest, but not display to the optimum standard.

Future Developments

Our automated workflow is working well. We have taken complete, or near complete, exports from Calm users such as the Universities of Nottingham, Hull and (shortly) Warwick, and a number of Welsh local authority archives. This is a very effective way to ensure that we have up-to-date and comprehensive data.

We have well over one hundred active users of the EAD Editor and we also have a number of potential contributors who have signed up to it, keen to be part of the Archives Hub.

We intend to keep working on exports, and also hope to return to some work we started a few years ago on taking in Excel data. This is likely to require contributors to use our own Excel template, as it is impractical to work with locally produced templates. The problem is that working with one repository’s spreadsheet, translating it into EAD, could take weeks of work, and it would not replicate to other repositories, who will have different spreadsheets. Whilst Excel is reasonably simple, and most offices have it, it is also worth bearing in mind that creating data in Excel has considerable shortcomings. It is not designed for hierarchical archival data, which has requirements in terms of both structure and narrative, and is constantly being revised. TNA’s Discovery are also working with Excel, so we may be able to collaborate with them in progressing this area of work.
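To give a flavour of what a template-based approach might look like, here is a sketch that turns tabular rows into fragments of collection-level EAD. The column headings, reference and use of CSV rather than Excel are all invented for the example; a real Hub template would be agreed with contributors and would have to cope with hierarchy:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical template columns, with one sample row.
ROWS = io.StringIO(
    "reference,title,dates,extent\n"
    "GB 123 ABC,Papers of James Watson,1834-1847,3 boxes\n"
)

def rows_to_ead(csvfile):
    """Yield an EAD <did> fragment per spreadsheet row."""
    for row in csv.DictReader(csvfile):
        did = ET.Element("did")
        ET.SubElement(did, "unitid").text = row["reference"]
        ET.SubElement(did, "unittitle").text = row["title"]
        ET.SubElement(did, "unitdate").text = row["dates"]
        ET.SubElement(did, "physdesc").text = row["extent"]
        yield ET.tostring(did, encoding="unicode")
```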

Our new architecture is working well, and it is gratifying to see that what we envisaged when we started working with Knowledge Integration and setting out our vision for our workflow is now a reality. Nothing stands still in archives, in standards, in technology or in user requirements, so we cannot stand still either, but we have a set-up that enables us to be flexible, and modify our processing to meet any new challenges.

Archives Wales Catalogues Online: Working with the Archives Hub

Stacy Capner reflects on her first six months as Project Officer for the Archives Wales Catalogues Online project, a collaboration between the Archives and Records Council Wales and the Archives Hub to increase the discoverability of Welsh archives.

For a few years now there has been a strategic goal to get Wales’ archive collections more prominently ‘out there’ using the Archives Wales website. Collection level descriptions have been made available previously through the ‘Archives Network Wales’ project, but the aim now is to create a single portal to search and access multi-level descriptions from across services. The Archives Hub has an established, standards based way of doing this, so instead of re-inventing the wheel, Archives and Records Council Wales (ARCW) saw an opportunity to work with them to achieve these aims.

The work to take data from Welsh Archives into the Archives Hub started some time ago, but it became clear that getting exports from different systems and working with different cataloguing practices required more dedicated 1-2-1 liaison. I am the project officer on a defined project which began in April to provide dedicated support to archive services across Wales and to establish requirements for uploading their catalogue data to the Archives Hub (and subsequently to Archives Wales).

This project is supported by the Welsh Government through its Museums Archives and Libraries Division, with a grant to Swansea University, a member of ARCW and a long-standing contributor to the Hub. I’m on secondment from the University to the project, which means I’ve found myself back in my northern neck of the woods working alongside the Archives Hub team. This project has come at a time when the Archives Hub have been putting a lot of thought into their processes for uploading data straight from systems, which means that the requirements for Welsh services have started to define an approach which could be applied to archive services across Scotland, England and Northern Ireland.

Here are my reflections on the project so far:

  1. Wales has fantastic collections, holding internationally significant material. They deserve to be promoted, accessible and searchable to as wide an audience as possible. Some examples:

National Library of Wales, The Survey of the Manors of Crickhowell & Tretower (inscribed in the UNESCO Memory of the World Register, 2016) https://www.llgc.org.uk/blog/?p=11715

Swansea University, South Wales Coalfield Collection http://www.swansea.ac.uk/library/archive-and-research-collections/richard-burton-archives/ourcollections/southwalescoalfieldcollection/

West Glamorgan, Neath Abbey Ironworks collection (inscribed in the UNESCO Memory of the World register, 2014) http://www.southwales-eveningpost.co.uk/treasured-neath-port-talbot-history-recognised/story-26073633-detail/story.html

Bangor University, Penrhyn Estate papers (including material relating to the sugar plantations in Jamaica) https://www.bangor.ac.uk/archives/sugar_slate.php.en#project

Photograph of Ammanford colliers and workmen standing in front of anthracite truck, c 1900.
Photograph of Ammanford colliers and workmen standing in front of anthracite truck, c 1900. From the South Wales Coalfield Collection. Source: Richard Burton Archives, Swansea University (Ref: SWCC/PHO/COL/11)

  2. Don’t be scared of EAD! I was. My knowledge of EAD (Encoded Archival Description) hadn’t been refreshed in 10 years, since Jane Stevenson got us to create brownie recipes using EAD tags on the archives course. So, whilst I started the task with confidence in cataloguing and cataloguing systems, my first month or so was spent learning about the Archives Hub EAD requirements. For contributors, one of the benefits of the Archives Hub is that they’ve created guidance, tools and processes so that archivists don’t have to become experts at creating or understanding EAD (though it is useful and interesting, if you get the chance!).
  3. The Archives Hub team are great! Their contributor numbers are growing (over 300 now) and their new website and editor are only going to make it easier for archive services to contribute and for researchers to search. What has struck me is that the team are all hot on data, standards and consistency, but it’s combined with a willingness to find solutions/processes which won’t put too much extra pressure on archive services wishing to contribute. It’s a balance that seems to work well and will be crucial for this project.
  4. The information gathering stage was interesting. And tiring. I visited every ARCW member archive service in Wales to introduce them to the project, find out what cataloguing systems they were using, and to review existing electronic catalogues. Most services in Wales are using Calm, though other systems currently being used include internally created databases, AtoM, Archivists’ Toolkit and Modes. It was really helpful to see how fields were being used, how services had adapted systems to suit them, and how all of this fitted in to Archives Hub requirements for interoperability.

    Photo of icecream
    Perks of working visits to beautiful parts of Wales.
  5. The support stage is set to be more interesting. And probably more tiring! The next 6 months will be spent providing practical support to services to help enable their catalogues to meet Archives Hub requirements. I’ll be able to address most of the smaller, service-specific tasks on site visits. The Hub team and I have identified a number of trickier ‘issues’ which we’ll hash out with further meetings and feedback from services. I can foresee further blog posts on these, so briefly they are:
  • Multilingualism – most services catalogue Welsh items/collections in Welsh, English items/collections in English and multi-language items/collections bilingually. However, the method of doing this across services (and within services) isn’t consistent. We’re going to look at what can be done to ensure that descriptions in multiple languages are both human and machine readable.
  • Ref no/Alt ref – due to legacy issues with non-hierarchical catalogues, or just services’ personal preferences, there are variations in the use of these fields. Some services use the ref no as the reference, others use the alt ref no as the reference. This isn’t a problem (as long as it’s consistent). Some services use ref no as the reference but not at series level, others use the alt ref no as the reference but not at series level. This will prove a little trickier for the Archives Hub to handle, but hopefully workarounds for individual services will be found.
  • Extent fields missing – this is a mandatory field at collection level for the Archives Hub. It’s important to give researchers an idea of the size of the collection/series (it’s also an ISAD(G) required field). However, many services have hundreds of collection level descriptions which are missing extent. It’s not something I’ll practically be able to address on my support visits, so the possibility of further work/funding will be looked into.
  • Indexing – this is understandably very important to the Archives Hub (they explain why here). For several archive services in Wales it seems to have been a step too far in the cataloguing process, mainly due to a lack of resource/time/training. Most have used imported terms from an old database or nothing at all. Although this will not prevent services from contributing catalogues to the Archives Hub, it does open up opportunities to think about partnership projects which might address this in the future (including looking at Welsh language index terms).

The project has made me think about how I’ve catalogued in the past. It’s made me much more aware that cataloguing shouldn’t just be an inward-facing, local, intellectual-control-based task; we should be constantly aware of making our descriptions more discoverable to researchers. And it’s shown me the importance of standards and consistency in achieving this (I feel like I’ve referenced consistency a lot in this one blog post; consistency is important!). I hope that the project is also prompting Welsh archive services to reflect on the accessibility of their own cataloguing – something which might not have been looked at in many years.

There’s a lot of work to be done, both in this foundation work and further funding/projects which might come off the back of it. But hopefully in the next few years you’ll be discovering much more of Wales’ archive collections online.

Stacy Capner
Project Officer
Archives Wales Catalogues Online

Related:

Archives Hub EAD Editor – http://archiveshub.ac.uk/eadeditor/

Archives Hub contributors – list and map

 

EAD and Next Generation Discovery

This post is in response to a recent article in Code4Lib, ‘Thresholds for Discovery: EAD Tag Analysis in ArchiveGrid, and Implications for Discovery Systems‘ by M. Bron, M. Proffitt and B. Washburn. All quotes are from that article, which looked at the instances of tags within ArchiveGrid, the US based archival aggregation run by OCLC. This post compares some of their findings to the UK based Archives Hub.

Date

In the ArchiveGrid analysis, the <unitdate> field use is around 72% within the high-level (usually collection level) description. The Archives Hub does significantly better here, with an almost universal inclusion of dates at this level of description. Therefore, a date search is not likely to exclude any potentially relevant descriptions. This is important, as researchers are likely to want to restrict their searches by date. Our new system also allows sorting retrieved results by date. The only issue we have is where the dates are non-standard and cause the ordering to break down in some way. But we do have both displayed dates and normalised dates, to enable better machine processing of the data.
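A tiny illustration of why the normalised date matters for sorting. The records and values are invented, following the EAD convention of a machine-readable normal attribute alongside the display date (e.g. &lt;unitdate normal="1834/1847"&gt;1834-1847&lt;/unitdate&gt;):

```python
# Display dates vary wildly; the normalised form is what sorts reliably.
records = [
    {"title": "Diary",    "display": "c. 1900",        "normal": "1900"},
    {"title": "Accounts", "display": "1834-1847",      "normal": "1834/1847"},
    {"title": "Letters",  "display": "mid 19th cent.", "normal": "1840/1860"},
]

# Sort on the start of the normalised range, not the display string.
by_date = sorted(records, key=lambda r: r["normal"].split("/")[0])
```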

Collection Title

“for sorting and browsing…utility depends on the content of the element.”

Titles are always provided, but they are very varied. Setting aside lower-level descriptions, which are particularly problematic, titles may be more or less informative. We may introduce sorting by title, but the utility of this will be limited. It is unlikely that titles will ever be controlled to the extent that they have a level of consistency, but it would be fascinating to analyse titles within the context of the ways people search on the Web, and see if we can gauge the value of different approaches to creating titles. In other words, what is the best type of title in terms of attracting researchers’ attention, search engine optimisation, display within search engine results, etc?

Lower-level descriptions tend to have titles such as ‘Accounts’, ‘Diary’ or something more difficult to understand out of context such as ‘Pigs and boars’ or ‘The Moon Dragon’. It is clearly vital to maintain the relationship of these lower-level descriptions to their parent level entries, otherwise they often become largely meaningless. But this should be perfectly possible when working on the Web.

It is important to ensure that a researcher finding a lower-level description through a general search engine gets a meaningful result.

Archives Hub search result from a Google search
A search result within Google

The above result is from a search for ‘garrick theatre archives joanna lumley’ – the sort of search a researcher might carry out. Whilst the link is directly to a lower-level entry for a play at the Garrick Theatre, the heading is for the archive collection. This entry is still not ideal, as the lower-level heading should be present as well. But it gives a reasonable sense of what the researcher will get if they click on this link. It includes the <unitid> from the parent entry and the URL for the lower-level entry, with the first part of the <scopecontent> for the entry. It also includes the Archives Hub tag line, which could be considered superfluous to a search for Garrick Theatre archives! However, it does help to embed the idea of a service in the mind of the researcher – something they can use for their research.

Extent

“It would be useful to be able to sort by size of collection, however, this would require some level of confidence that the <extent> tag is both widely used and that the content of the tag would lend itself to sorting.”

This was an idea we had when working on our Linked Data output. We wanted to think about visualizations that would help researchers get a sense of the collections that are out there, where they are, how relevant they are, and so on. In theory the ‘extent’ could help with a weighting system, where we could think about a map-based visualization showing concentrations of archives about a person or subject. We could also potentially order results by size – from the largest archive to the smallest archive that matches a researcher’s search term. However, archivists do not have any kind of controlled vocabulary for ‘extent’. So, within the Archives Hub this field can contain anything from numbers of boxes and folders to length in linear metres, dimensions in cubic metres and items in terms of numbers of photographs, pamphlets and other formats. ISAD(G) doesn’t really help with this; the examples it gives simply serve to show how varied the description of extent can be.
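To illustrate why sorting on <extent> is so difficult, here is a rough sketch. The sample values and the metres_estimate helper are invented for illustration; without a controlled vocabulary, only one of the many phrasings can be turned into a sortable number:

```python
import re

# Illustrative <extent> values in the styles described above
# (boxes, linear metres, item counts, mixed formats) -- invented examples.
extents = [
    "24 boxes",
    "3.5 linear metres",
    "c. 1200 photographs",
    "1 volume and 2 files",
]

def metres_estimate(extent):
    """Only recognises explicit linear-metre statements; everything
    else returns None, i.e. is unsortable without further rules."""
    m = re.search(r"([\d.]+)\s*linear met", extent)
    return float(m.group(1)) if m else None

parsed = {e: metres_estimate(e) for e in extents}
```

Three of the four values come back as None: each additional phrasing would need its own rule and its own (dubious) conversion to a common unit, which is exactly the weighting problem described above.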

Genre

“Other examples of desired functionality include providing a means in the interface to limit a search to include only items that are in a certain genre (for example, photographs)”.

This is something that could potentially be useful to researchers, but archivists don’t tend to provide the necessary data. We would need descriptions to include the genre, using controlled vocabulary. If we had this we could potentially enable researchers to select types of materials they are interested in, or simply include a flag to show, e.g. where a collection includes photographs.

The problem with introducing a genre search is that you run the risk of excluding key descriptions, because the search will only include results where the description includes that data in the appropriate location. If the word ‘photograph’ is in the general description only then a specific genre search won’t find it. This means a large collection of photographs may be excluded from a search for photographs.
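The exclusion risk can be shown in a few lines. In this sketch (the two records and both search functions are invented examples, not the Hub's search code), a strict genre search finds only the description with an explicit <genreform> term, while a full-text search finds both:

```python
import xml.etree.ElementTree as ET

# Two simplified descriptions: the first has an explicit genre term,
# the second mentions photographs only in free-text scope and content.
# (Invented examples.)
records = [
    """<ead><controlaccess><genreform>Photographs</genreform></controlaccess>
       <scopecontent>Portraits of mill workers.</scopecontent></ead>""",
    """<ead><scopecontent>A large collection of photographs
       of Salford street scenes.</scopecontent></ead>""",
]

def genre_search(term):
    """Match only against <genreform> elements."""
    hits = []
    for i, rec in enumerate(records):
        genres = [g.text.lower() for g in ET.fromstring(rec).iter("genreform")]
        if any(term in g for g in genres):
            hits.append(i)
    return hits

def fulltext_search(term):
    """Match against all text in the description."""
    return [i for i, rec in enumerate(records)
            if term in "".join(ET.fromstring(rec).itertext()).lower()]
```

Here `genre_search("photograph")` returns only the first record: the second, which may well be a large photographic collection, silently drops out of the genre-limited result set.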

Subject

In the Bron/Proffitt/Washburn article <controlaccess> is present around 72% of the time. I was surprised that they did not choose to analyse tags within <controlaccess>, as I think these ‘access points’ can play a very important role in archival description. They use the presence of <controlaccess> as an indication of the presence of subjects, and make the point that “given differences in library and archival practices, we would expect control of form and genre terms to be relatively high, and control of names and subjects to be relatively low.”

On the Archives Hub, use of subjects is relatively high (as well as personal and corporate names) and use of form and genre is very low. However, it is true to say that we have strongly encouraged adding subject terms, and archivists don’t generally see this as integral to cataloguing (although some certainly do!), so we like to think that we are partly responsible for such a high use of subject terms.

Subject terms are needed because they (1) help to pull out significant subjects, often from collections that are very diverse, (2) enable identification of words such as ‘church’ and ‘carpenter’ (i.e. as subjects, not surnames), (3) allow researchers to continue searching across the Archives Hub by subject (subjects are all linked to the browse list) and therefore pull collections together by theme, and (4) enable advanced searching (which is substantially used on the Hub).

Names (personal and corporate)

In Bron/Proffitt/Washburn the <origination> tag is present 87% of the time. The analysis did not include the use of <persname> and <corpname> within <origination> to identify the type of originator. In the Archives Hub the originator is a required field, and is present 99%+ of the time. However, we made what I think is a mistake in not providing for the addition of personal or corporate name identification within <origination> via our EAD Editor (for creating descriptions) or by simply recommending it as best practice. This means that most of our originators cannot be distinguished as people or corporate bodies. In addition, we have a number where several names are within one <origination> tag and where terms such as ‘and others’, ‘unknown’ or ‘various’ are used. This type of practice is disadvantageous to machine processing. We are looking to rectify it now, but addressing something like this in retrospect is never easy to do. The ideal is that all names within origination are separately entered and identified as people or organisations.
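A first step in rectifying this in retrospect is simply detecting which <origination> entries contain typed names, which are plain strings, and which contain the vague terms mentioned above. This is only an illustrative sketch, not our actual processing, and the fragments are invented:

```python
import xml.etree.ElementTree as ET

# Illustrative <origination> fragments showing the patterns described:
# a typed personal name, an untyped string with 'and others', and a
# typed corporate name. (Invented examples.)
originations = [
    '<origination><persname>Watson, James</persname></origination>',
    '<origination>James Watson and others</origination>',
    '<origination><corpname>Garrick Theatre</corpname></origination>',
]

VAGUE_TERMS = ("and others", "unknown", "various")

def classify(xml_text):
    """Label an <origination> as 'typed', 'untyped' or 'vague'."""
    elem = ET.fromstring(xml_text)
    text = (elem.text or "").lower()
    if any(term in text for term in VAGUE_TERMS):
        return "vague"
    typed_children = [c for c in elem
                      if c.tag in ("persname", "corpname", "famname")]
    return "typed" if typed_children else "untyped"

results = [classify(o) for o in originations]
```

Only the 'typed' entries can be distinguished as people or corporate bodies without further work; the 'untyped' and 'vague' entries are the ones needing retrospective clean-up.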

We also have names within <controlaccess>, and this brings the same advantages as for <subject> terms, ensuring the names are properly structured and can be used for searching and for bringing together archives relating to any one individual or organisation.

Repository

“Use of this element falls into the promising complete category (99.46%: see Table 7). However, a variety of practice is in play, with the name of the repository being embellished with <subarea> and <address> tags nested within <repository>.”

On the Archives Hub repository is mandatory, but as yet we do not have a checking system whereby a description is rejected if it does not contain this field. We are working towards something like this, using scripts to check for key information to help ensure validity and consistency, at least to a minimum standard. On one occasion we did take in a substantial number of descriptions from a repository that omitted the name of repository, which is not very useful for an aggregation service! However, one thing about <repository> is that it is easy to add because it should always be the same entry. Or at least it should be… we recently discovered that a number of repositories had entered their name in various ways over the years, and this is something we have needed to correct.
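A checking script of the kind described might, as a rough sketch, flag descriptions with a missing <repository> and collapse variant spellings by normalising whitespace and case. The sample documents here are invented, and the normalisation is deliberately simple:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Two descriptions with the same repository entered in variant forms,
# plus one with <repository> omitted entirely. (Invented examples.)
docs = [
    "<ead><repository>University of Salford Archives</repository></ead>",
    "<ead><repository>University of Salford  Archives </repository></ead>",
    "<ead></ead>",  # repository omitted: should be flagged/rejected
]

def repository_name(doc):
    """Return the <repository> text, or None if absent/empty."""
    repo = ET.fromstring(doc).find(".//repository")
    return repo.text if repo is not None and repo.text else None

# Descriptions to reject for missing a repository name.
missing = [i for i, d in enumerate(docs) if repository_name(d) is None]

# Collapse whitespace and case so variant spellings group together;
# more than one raw spelling per normalised form signals inconsistency.
variants = Counter(" ".join(n.split()).lower()
                   for d in docs if (n := repository_name(d)))
```

Running this over an aggregation would surface both problems at once: `missing` lists descriptions to send back to the contributor, and any normalised name appearing under several raw spellings points to entries needing correction.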

Scope and content, biographical history and abstract

It is notable that in the US <abstract> is widely used, whereas we don’t use it at all. It is intended as a very brief summary, whereas <scopecontent> can be of any length.

“For search, it’s worth noting that the semantics of these elements are different, and may result in unexpected and false “relevance””

One of the advantages of including <controlaccess> terms is to mitigate this kind of false relevance, as a search for ‘mason’ as a person and ‘mason’ as a subject is possible through restricted field searching.

The Bron/Proffitt/Washburn analysis shows <bioghist> used 70% of the time. This is lower than the Archives Hub, where it is rare for this field not to be included. Archivists seem to have a natural inclination to provide a reasonably detailed biographical history, especially for a large collection focussed on one individual or organisation.

Digital Archival Objects

It is a shame that the analysis did not include instances of <dao>, but it is likely to be fairly low (in line with previous analysis by Wisser and Dean, which puts it lower than 10%). The Archives Hub currently includes around 1,200 instances of images or links to digital content. But what would be interesting is to see how this is growing over time and whether the trajectory indicates that in 5 years or so we will be able to provide researchers with routes into much of the Archives Hub content. However, it is worth bearing in mind that many archives are not digitised and are not likely to be digitised, so it is important for us not to raise expectations that links to digital content will become a matter of course.

The Future of Discovery

“In order to make EAD-encoded finding aids more well suited for use in discovery systems, the population of key elements will need to be moved closer to high or (ideally) complete.”

This is undoubtedly true, but I wonder whether the priority, over and above completeness, is consistency and controlled vocabulary where appropriate. There is an argument in favour of a shorter description that may exclude certain information about a collection but is well structured and easier to machine process. (Of course, completeness and consistency together are the ideal!)

The article highlights geo-location as something that is emerging within discovery services. The Archives Hub is planning on promoting this as an option once we move to the revised EAD schema (which will allow for this to be included), but it is a question of whether archivists choose to include geographical co-ordinates in their catalogues. We may need to find ways to make this as easy as possible and to show the potential benefits of doing so.

In terms of the future, we need a different perspective on what EAD can and should be:

“In the early days of EAD the focus was largely on moving finding aids from typescript to SGML and XML. Even with much attention given over to the development of institutional and consortial best practice guidelines and requirements, much work was done by brute force and often with little attention given to (or funds allocated for) making the data fit to the purpose of discovery.”

However, I would argue that one of the problems is that archivists sometimes still think in terms of typescript finding aids: of a printed finding aid that is available within the search room and then made available online, as if they are essentially the same thing and we can use the same approach with both. I think more needs to be done to promote, explain and discuss ‘next generation finding aids’. By working with Linked Data, I have gained a very different perspective on what is possible, challenging the traditional approach to hierarchical finding aids.

Maybe we need some ‘next generation discovery’ workshops and discussions – but in order to really broaden our horizons we will need to take heed of what is going on outside of our own domain. We can no longer consider archival practice in isolation from discovery in the most general sense because the complexity and scale of online discovery requires us to learn from others with expertise and understanding of digital technologies.

Out and about or Hub contributor training

Every year we provide our contributors and potential contributors with free training on how to use our EAD editor software.

The days are great fun and we really enjoy the chance to meet archivists from around the UK and find out what they are working on.

The EAD editor has been developed so that archivists can create online descriptions of their collections without having to know EAD. It’s intuitive and user friendly and allows contributors to easily add collection-level and multi-level descriptions to the Hub. Users can also enhance their descriptions by adding digital archival objects – images, documents and sound files.

Contributor training day

Our training days are a mixture of presentation, demonstration and practical hands-on work. We (the training team consists of Jane, Beth and myself) tend to start by talking a little about Hub news and developments to set the scene for the day, and then we move on to why the Hub uses EAD and why using standards is important for interoperability, meaning that more ‘stuff’ can be done with the data. From here we go on to a hands-on session that demonstrates how to create a basic record. We also cover adding lower-level components and images, and we show contributors how to add index terms to their descriptions. (Something that we heartily endorse! We LOVE standards and indexing!)

We always like to tailor our training to the users, and encourage users to bring along their own descriptions for the hands-on sessions. Some users manage to submit their first descriptions to the Hub by the end of the training session!

This year we have done training in Manchester and London, for the Lifeshare project team in Sheffield and for the Oxford colleges. We are also hoping (if we get enough take-up) to run courses in Glasgow and Cardiff this year. (6th Sept at Glasgow Caledonian, Cardiff date TBC. Email archiveshub@mimas.ac.uk to book a place.)

So far this year three new contributors have joined the Hub as a result of training:  Middle East Centre Archive, St Antony’s College, Oxford; Salford City Archive and the Taylor Institute, Oxford. We’ve also enabled four of our existing contributors to start updating their collections on the Hub: National Fairground Archive, the Co-operative Archive, St John’s College, Oxford and the V&A.

We have been given some great feedback this year and 100% of our attendees agreed/strongly agreed that they were satisfied with the content and teaching style of the course.

Some of our feedback:

A very good introductory session to working with the EAD editor for the Archives Hub. I have not used the Archives Hub for a long time so an excellent refresher course.

This was a fantastic workshop – excellently designed resources, Lisa and Jane were really helpful (and patient!). The hands-on aspect was really useful: I now feel quite confident about creating EAD records for the Hub, and even more confident that the Hub team are on hand with online help

The hands on experience and being able to ask questions of the course leaders as things happened was really useful. Being able to work on something relevant to me was also a bonus.

Excellent presentation and delivery. I came along with a theoretical but not a practical knowledge of the Archives Hub and its workings, and the training session was pitched perfectly and was completely relevant to my job. Many thanks.

The Hub team trains archivists in how to use the EAD editor, archive students in EAD and social media, and research students in how to use the Hub to search for primary source materials. You can find the list of training that we provide on our training pages: http://archiveshub.ac.uk/trainingmodules/ . We’re always happy to hear from people who are interested in training – do let us know!

The Standard Bearers

We generally like standards. Archivists, like many others within the information professions, see standards as a good thing. But if that is the case, and we follow descriptive standards, why aren’t our collection descriptions more interoperable? Why can’t users move seamlessly from one system to another and find them consistent?

I’ve been looking at a White Paper by Nick Poole of the Collections Trust: Where Next for Museum Standards? In this, he makes a good point about the reasons for using standards:

“Standards exist to condense and share the professional experience of our predecessors, to enable us to continue to build on their legacy of improvement.”

I think this point is sometimes overlooked – standards reflect the development of our understanding and expertise over time. As a novice jazz musician, I think this has a parallel with jazz theory – the point of theory is partly that it condenses what has been learnt about harmony, rhythm and melody over the past 100 years of jazz. The theory is only the means to the end, but without it acting effectively as a short cut, you would have to work your way through decades of musical development to get a good understanding of the genre.

Descriptive standards should be the means to the end – they should result in better metadata. Before the development of ISAD(G) for archives, we did not have an internationally recognised standard to help us describe archives in a largely consistent way (although ISAD(G) is not really a content standard). EAD has proved a vital addition to our range of standards, helping us to share descriptions far more effectively than we could do before.

But archives are diverse, and maybe we have to accept that standards are not going to mould our descriptions so that they all come off the conveyor belt of cataloguing looking the same. It may seem like something that would benefit our users – descriptions that look pretty much identical apart from the actual content. But would it really suffice to reflect the reality of what archives are? Would it really suffice to reflect the reality of the huge range of users that there are?

Going back to Nick Poole’s paper, he says:

“The purpose of standards is not to homogenise, but to ensure that diversity is built on a solid foundation of shared knowledge and understanding and a collective commitment to quality and sustainability.”

I think this is absolutely right. However, I do sometimes wonder how solid this foundation is for archives, and how much our standards facilitate collaborative understanding. Standards need to be clearly presented and properly understood by those who are implementing them. From the perspective of the Hub, where we get contributions of data from 200 different institutions, standards are not always well understood. I’m not sure that people always think carefully about why they are using standards – this is just as important as applying the standards. It is only by understanding the purpose that I think you come to a good sense of how to apply a standard properly. For example, we get some index terms that are ostensibly using NCA Rules (National Council on Archives Rules for Personal, Family and Place Names), but the entries are not always in line with the rules. We also get subject entries that do not conform to any thesauri, or maybe they conform to an in-house thesaurus, but for an aggregated service this does not really help with one of the main aims of subject indexing – to pull descriptions together by subject.

Just as for museums, standards, as Nick Poole says, must be “communicated through publications, websites, events, seminars and training. They must be supported, through infrastructure and investment, and they must be enforced through custom, practice or even assessment and sanction.”

For the Hub, we have made one important change that has made descriptions much more standards compliant – we have invested in an ‘EAD Editor’: a template-based tool for the creation and editing of EAD-based archival descriptions. This sophisticated tool helps to ensure valid and standards-based descriptions. Supporting standards through this kind of approach seems to me to be vital. It is hard for many archivists to invest the time that it takes to really become expert in applying standards. For the Hub we are only dealing with descriptive standards, but archivists have many other competing standards to deal with, such as environmental and conservation standards. Software should have standards-compliance built in, but it should also be designed to meet the needs of the archivists and the users. This balance between standards and flexibility is tricky. But standards are not going to be effective if they don’t actually meet real-life needs. I do sometimes think that standards suffer from being developed somewhat in isolation from practical reality – this can be a result of the funding environment, where people are paid to work on standards, and they don’t tend to be the people who implement them. Standards may also suffer from the perennial problem of a shifting landscape – standards that were clearly relevant when they were created may be rather less so 10 years on, but revising standards is a time-consuming process. The archives community has the NCA Rules, which have served their purpose very well, but they really need revising now, to bring them in line with the online, global environment.

In the UK Archives Discovery network (UKAD) we are working to help archivists understand and use standards effectively. We are going to provide an indexing tutorial and we are discussing ways to provide more guidance on cataloguing generally. The survey that we carried out in 2009 showed that archivists do want more guidance here. Whilst maybe there are some who are not willing to embrace standards, the vast majority can see the sense in interoperability, and just need a low-barrier way to improve their understanding of the standards that we have and how best to use them. But in the end, I can’t see that we will ever have homogeneous descriptions, so we need to harness technology in order to help us work more effectively with the diverse range of descriptions out there that reflect the huge diversity of archives and users.

Images: Flickr goosmurf’s photostream (dough cutter); robartesm’s photostream (standard bearer)