I shared these notes at work and thought it might be helpful to post them publicly too: notes from the launch of the Museum Data Service (MDS) at Bloomberg last week.
The MDS aggregates museum (and museum-like) metadata, encouraging use by data scientists, researchers, the public and others. It doesn't include images, but links to them where they're available on museum websites. (Once the APIs open, presumably people could build their own image-focused sites on top of the service.)
It was launched by Sir Chris Bryant (Minister of State at the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport), who said it could be renamed the ‘Autolycus project', after Shakespeare's snapper-up of unconsidered trifles. He presented it as a rare project that sits between his two portfolios.
Allan Sudlow of the AHRC (one of the funders) described it as secure, reliable digital infrastructure for GLAMs, especially providing security and sustainability for smaller museums, and meeting a range of needs, including reciprocal relationships between museums and researchers. He positioned it as part of a wider ecosystem of infrastructure for digital creativity and innovation. Kevin Gosling (Collections Trust) mentioned that it helps deliver the Mendoza Report's ‘dynamic collections'.
I'd seen a preview over the summer and was already impressed with the way it builds on decades of experience managing and aggregating real museum data between internal and centralised systems. They've thought hard about what museums really need to represent their collections, what they find hard about managing data/tech, and what the MDS can do to lighten that load.
The MDS can operate as a backup of last resort, including data that isn't shared even inside the organisation. Apart from mapping each museum's data to their fields, they're not trying to pre-shape the data in any way, to allow for as many uses as possible. It has persistent links (fundamental to FAIR data and citing records). They're linking to Wikidata (and creating records there where necessary). APIs will be available soon (which might finally mean an end to the ‘does every museum need an API' debate).
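As an aside for the more technically minded: the Wikidata linking is one of the quieter but more useful features, because Wikidata itself is openly queryable. This is purely my own illustrative sketch, not anything MDS-specific – it uses Wikidata's public SPARQL endpoint and standard Wikidata properties to list a few UK museums, the sort of linked-data lookup that persistent identifiers make possible.

```python
# Illustrative only: query Wikidata's public SPARQL endpoint for UK museums.
# Uses standard Wikidata properties; nothing here is part of the MDS itself.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?museum ?museumLabel WHERE {
  ?museum wdt:P31/wdt:P279* wd:Q33506 ;   # instance of (a subclass of) museum
          wdt:P17 wd:Q145 .               # country: United Kingdom
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "mds-launch-notes-example/0.1"},
)
response.raise_for_status()

# Print each museum's label and its persistent Wikidata identifier (URI)
for row in response.json()["results"]["bindings"]:
    print(row["museumLabel"]["value"], row["museum"]["value"])
```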
The site https://museumdata.uk/ has records for institutions, collections, object records, and ‘new and enhanced data' about object records (e.g. exhibition interpretation, AI-generated keywords). It feels a bit like a rope bridge – lightweight but strong and flexible infrastructure that meets a community need.
Their announcement is at https://artuk.org/discover/stories/museum-data-service-will-revolutionise-access-to-the-uks-cultural-heritage
I admire the way they've used just enough technology to deliver it both practically and elegantly. They've also worked hard on explaining why it matters to different stakeholders, and finding models for funding and sustainability.
On a personal note, the launch was a bit like a school reunion (if you went to a particularly nerdy school). It was great to see people like David Dawson, Richard Light and Gordon McKenna there (plus Andy Ellis, Ross Parry and Kevin Gosling) as they'd shared visions for a service like this many years ago, and finally got to see it go live.