Manchester Art Gallery

Code and collections

Data visualisations

It’s been almost a year since our collections were last available to view online. We lost access to them when our website moved from one server to another: the site was so old that newer versions of the server operating system no longer supported the technologies that ran it and our collection search.

With support from Alan Holding, Technical Analyst from the ICT Web and Microsoft Team at Manchester City Council, and the development team from Reading Room, we’ve been refining a new Collection Explorer API to improve the experience of searching and viewing our collection. The API was originally developed by Asa Calow as part of Code for Europe, a year-long project managed by the former Manchester Digital Development Agency. Asa worked closely with curators and our collections database manager John Peel to get a feel for the data we held about our collections.

Early analysis by Asa produced some useful and unique insights into the collection data, including visualisations of schools and movements within the fine art collection as networks (via Google Fusion Tables), of the growth of the collection over time, and of the frequency of collection-specific terms. It also surfaced the many inconsistencies and anomalies that naturally accumulate in a dataset built up by successive curators, assistants and other documentation staff over the decades. Some of these have been fairly straightforward to resolve (date formats in particular), while others will take longer to review and will need new procedures put in place to amend the data.
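To give a flavour of that clean-up work, here is a rough sketch of how some common date variants could be normalised into a single format. The sample values, patterns and fall-back behaviour are illustrative assumptions, not our actual EMu fields or rules.

```ruby
require 'date'

# Illustrative examples of the kinds of date variants found in the
# records; the real EMu fields and formats may well differ.
RAW_DATES = ['1887', 'c.1887', '12/03/1887', '1887-03-12', 'March 1887']

# Try a series of known formats in turn and fall back to returning the
# original string, so nothing is silently lost.
def normalise_date(raw)
  value = raw.sub(/\Ac\.\s*/, '')   # strip a leading "c." (circa)
  ['%d/%m/%Y', '%Y-%m-%d', '%B %Y', '%Y'].each do |format|
    begin
      return Date.strptime(value, format).iso8601
    rescue ArgumentError
      next
    end
  end
  raw # unrecognised values are left untouched for manual review
end

RAW_DATES.each { |raw| puts "#{raw} -> #{normalise_date(raw)}" }
```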

Our new collection search uses a copy of the data from our Axiell (KE EMu) Collection Management System. We took this route, rather than developing an API that queried the CMS directly, because the CMS server sits in a very secure hosting environment, which made developing any application to query the data held on it practically impossible. There’s always a danger, when a database is duplicated for a specific purpose, that the duplicate will quickly become out of date and less reliable as the original continues to be updated. To guard against this, we’ve put in place a scheduled process that exports changes from the original to the duplicate database every day.
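As a very rough sketch of what that scheduled update could look like, assuming the nightly export arrives as a JSON file and the duplicate is the Elasticsearch index behind the search (the file name, index name and id field below are all hypothetical):

```ruby
require 'json'
require 'elasticsearch'

# Hypothetical file name for the nightly EMu export, plus a hypothetical
# index name and id field; the real pipeline differs in detail.
EXPORT_FILE = 'emu_export_daily.json'
INDEX_NAME  = 'collection'

client  = Elasticsearch::Client.new(url: ENV.fetch('ES_URL', 'http://localhost:9200'))
records = JSON.parse(File.read(EXPORT_FILE))

# Index each changed record under its accession number: re-indexing with
# the same id overwrites the stale copy, so the duplicate tracks the original.
bulk_body = records.flat_map do |record|
  [{ index: { _index: INDEX_NAME, _id: record['accession_number'] } }, record]
end

client.bulk(body: bulk_body, refresh: true) unless bulk_body.empty?
puts "Synchronised #{records.size} records into '#{INDEX_NAME}'"
```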

The API has been developed in Ruby, and the majority of the search functionality is handled by Elasticsearch.
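For the curious, here is a minimal illustration of how Ruby and Elasticsearch might fit together in a search endpoint, using Sinatra for brevity; the route, index name and fields are assumptions rather than the Collection Explorer’s actual code.

```ruby
require 'sinatra'
require 'json'
require 'elasticsearch'

# Hypothetical index name and fields; the live API's schema will differ.
ES = Elasticsearch::Client.new(url: ENV.fetch('ES_URL', 'http://localhost:9200'))

get '/api/objects' do
  content_type :json

  results = ES.search(
    index: 'collection',
    body: {
      query: {
        multi_match: {
          query:  params['q'].to_s,
          fields: ['title^2', 'maker', 'description']
        }
      },
      size: 20
    }
  )

  # Return just the stored documents, not the full Elasticsearch envelope.
  results['hits']['hits'].map { |hit| hit['_source'] }.to_json
end
```

A request such as /api/objects?q=portrait would then return the twenty best-matching records as JSON, with matches in the title weighted more heavily than those in the maker or description fields.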

The version we have live here is not perfect by any means, and there are a number of minor bugs we still have to fix, including a problem with the option to display only records that have images. However, we think it represents a significant step forward in the quantity and quality of collection data available for public searching. We have plans for how we’d like to develop the collection search further but, for now, we’d like to know what you think. Please let us know about any issues you discover, delights from the collection that you find, or improvements you’d like to see.
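For the technically minded, the “records with images only” option mentioned above essentially comes down to adding an exists filter to the Elasticsearch query. A sketch, assuming a hypothetical image_url field that is only present when a photograph of the object exists:

```ruby
require 'elasticsearch'

ES = Elasticsearch::Client.new(url: ENV.fetch('ES_URL', 'http://localhost:9200'))

# Hypothetical field name: assumes each document carries an 'image_url'
# only when a photograph of the object is available.
def search_with_images(term)
  ES.search(
    index: 'collection',
    body: {
      query: {
        bool: {
          must:   { match: { title: term } },
          filter: { exists: { field: 'image_url' } }
        }
      }
    }
  )
end

# e.g. search_with_images('landscape') returns only objects with an image.
```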


Martin Grimes, Web Manager
John Peel, Collection Information Officer
