There's a post on Museums and Social Networking Sites that is nicely timed given the 'should museums be on Facebook' discussions on the UK Museums Computer Group and Museum Computer Network mailing lists. I particularly liked the following:

[M]useums that venture haphazardly into the wilderness of social networking sites may end up looking stiff and frozen. Institutions need to enter these spaces with firm answers to these questions:

  • What audience(s) are we trying to reach, and why?
  • What information do we want to convey to these people?
  • What actions do we want them to take?
  • Demographically, where do these constituents congregate online?
  • Do these virtual spaces provide the tools that will allow us to circulate our message?
  • Do the sites then provide ways for users to circulate our message without too much further effort from us – that is, do the sites allow for percolation, or will our message merely appear for a moment and then pass quickly from users' radar?

I would add: is it an appropriate space for institutions, or is it a personal space?

The post also points out one of the major problems with Facebook groups that's been irritating me for a while – they don't notify you of new content, whether as an RSS feed, Facebook notification or in email. The Groups page doesn't even order groups by those with the most recent wall or discussion posts. No wonder groups languish on Facebook – most seem to collect members easily, but hardly anyone actually posts any content on them. There are always barriers to participation on social software or reasons why more people lurk than post, but if people don't know new content has been added, they'll never respond. It's a step backwards to the world of checking to see if sites have new content – who does that now we have RSS?

And just because I like it: when xkcd and wikipedia collide.

Collected links and random thoughts on user testing

First, some links on considerations for survey design and quick accessibility testing.

Given the constraints of typical museum project budgets, it's helpful to know you can get useful results with as few as five testers. Here's everybody's favourite, Jakob Nielsen, on why you can do usability testing with only five users, card sorting exercises for information architecture with 15 users and quantitative studies with 20 users. Of course, you have to allow for testing for each of your main audiences and ideally for iterative testing too, but let's face it – almost any testing is better than none. After all, you can't do user-centred design if you don't know what your users want.

There were a few good articles about evaluation and user-centred design in Digital Technology in Japanese Museums, a special edition of the Journal of Museum Education. I particularly liked the approach in "What Impressions Do People Have Regarding Mobile Guidance Services in Museums? Designing a Questionnaire that Uses Opinions from the General Public" by Hiromi Sekiguchi and Hirokazu Yoshimura.

To quote from their abstract: "There are usually serious gaps between what developers want to know and what users really think about the system. The present research aims to develop a questionnaire that takes into consideration the users' point of view, including opinions of people who do not want to use the system". [my emphasis]

They asked people to write down "as many ideas as they could – doubts, worries, feelings, and expectations" about the devices they were testing. They then grouped the responses and used them as the basis for later surveys. Hopefully this process removes developer- and content producer-centric biases from the questions asked in user testing.

One surprising side-effect of good user testing is that it helps get everyone involved in a project to 'buy into' accessibility and usability. We can all be blinded by our love of technology, our love of the bottom line, our closeness to the material to be published, etc, and forget that we are ultimately only doing these projects to give people access to our collections and information. User testing gives representative users a voice and helps everyone re-focus on the people who'll be using the content and what they'll actually want to do with it.

I know I'm probably preaching to the converted here, but during Brian Kelly's talk on Accessibility and Innovation at UKMW07 I realised that for years I've had an unconscious test for how well I'll work with someone based on whether they view accessibility as a hindrance or as a chance to respond creatively to a limitation. As you might have guessed, I think the 'constraints' of accessibility help create innovations. As 37signals say, "let limitations guide you to creative solutions".

One of the points raised in the discussion that followed Brian's talk was about how to ensure compliance from contractors if quantitative compliance tests and standards are deprecated in favour of qualitative measures. Thinking back over previous experiences, it became clear to me that anyone responding to a project tender should be able to demonstrate their intrinsic motivation to create accessible sites, not just an ability to deal with the big stick of compliance, because a contractor's commitment to accessibility makes such a difference to the development process and outcomes. I don't think user testing will convince a harried project manager to push a designer for a more accessible template, but I do think we have a better chance of implementing accessible and usable sites if user requirements are considered at the core of the project from the outset.

The UK as a knowledge-based economy

Rather off-topic, but I wonder what role cultural heritage organisations might have in a knowledge economy. I would imagine that libraries and archives are already leading in that regard, but also that skills currently regarded as belonging to the 'digital humanities' will become more common.

In less than three years' time, more than half of UK GDP will be generated by people who create something from nothing, according to the 2007 Developing the Future (DtF) report launched today at the British Library.

The report, commissioned by Microsoft and co-sponsored by Intellect, the BCS and The City University, London, sets out the key challenges facing the UK as it evolves into a fully-fledged knowledge-based economy. The report also sets out a clear agenda for action to ensure the UK maintains its global competitiveness in the face of serious challenges.

The report identifies a number of significant challenges that the technology industry needs to address if these opportunities are to be grasped. Primarily, these are emerging markets and skills shortages:

  • At current rates of growth China will overtake the UK in five years in the knowledge economy sector.
  • The IT industry faces a potential skills shortage: The UK’s IT industry is growing at five to eight times the national growth average, and around 150,000 entrants to the IT workforce are required each year. But between 2001 and 2006 there was a drop of 43 per cent in the number of students taking A-levels in computing.
  • The IT industry is only 20 per cent female and currently only 17 per cent of those undertaking IT-related degree courses are women. In Scotland, only 15 per cent of the IT workforce is female.

BCS: Developing the future.

The report also suggests that the 'IT industry should look to dramatically increase female recruitment' – I won't comment for now but it will be interesting to see how that issue develops.

At UK Museums and the Web 2007 I suggested looking at how other sites differentiate user-generated content from institutionally-created content. In that light, this post could be of interest: Newspapers 2.0: How Web 2.0 are British newspaper web sites?

Over the last two weeks I've reviewed eight British newspaper web sites in depth, trying to identify where and how they are using the technologies that make up the so-called "Web 2.0" bubble. I've examined their use of blogs, RSS feeds, social bookmarking widgets, and the integration of user-generated content into their sites.

Tim Berners-Lee on the Semantic Web

Via O'Reilly GMT, this video: Inside the semantic Web with Sir Tim Berners-Lee:

ZDNet's David Berlind got some time with Sir Tim Berners-Lee, the inventor of the World Wide Web. Topics covered include the semantic Web (see also: Microformats), mashups, and the benefits of open standards versus proprietary development environments such as Flash and Silverlight.

Watch a community excavation in progress

The LAARC are doing a community excavation at the Michael Faraday School down in Southwark, and they're putting up photos and video as they go. I love the way they're using Flickr notes to point out possible features (features are things like walls, ditches or pits). There's also an experimental wiki (http://laarchaeology.wetpaint.com) and some YouTube video (http://www.youtube.com/user/LAARCaeologist). Check out the photos at http://flickr.com/photos/laarc/collections/72157600500588102/

Disclosure: I have a vested interest because it's a work project, but I'm also enjoying this way too much not to share it. We've been working with the LAARC (London Archaeological Archive Resource Centre, part of the Museum of London Group) on pilots for increasing user interaction and engagement.

Who's talking about you?

This article explains how you can use RSS feeds to track mentions of your company (or museum) in various blog search sites: Ego Searches and RSS.

It's a good place to start if you're not sure what people are saying about your institution, exhibitions or venues or whether they might already be creating content about you. Don't forget to search Flickr and YouTube too.
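As a minimal sketch of how an ego search like this works: most blog search engines expose their results as RSS, so you build a search URL for your institution's name, subscribe to it, and pull out new items as they appear. The base URL and sample feed below are invented for illustration; substitute the actual feed endpoint of whichever search engine you use.

```python
import urllib.parse
import xml.etree.ElementTree as ET

def search_feed_url(base, query):
    """Build an RSS search-feed URL for an 'ego search' query.
    The base URL passed in is hypothetical - use your blog search
    engine's real feed endpoint."""
    return base + "?" + urllib.parse.urlencode({"q": query})

def parse_rss_items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 feed document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

# A tiny sample feed standing in for a real search result:
sample = """<rss version="2.0"><channel>
  <item><title>Our visit to the museum</title>
    <link>http://example.com/post/1</link></item>
</channel></rss>"""

url = search_feed_url("http://blogsearch.example.com/feeds", "Museum of London")
items = parse_rss_items(sample)
```

In practice you'd just drop the generated URL into your feed reader rather than parse it yourself, but the same parsing approach works if you want to aggregate several searches into one page.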

Are shared data standards and shared repositories the future?

I keep having or hearing similar conversations about shared repositories and shared data standards in places like the SWTT, Antiquist, the Museums Computer Group, the mashed museum group and the HEIRNET Data Sans Frontières. The mashed museum hack day also got me excited about the infinite possibilities for mashups and new content creation that accessible and reliable feeds, web services or APIs into cultural heritage content would enable.

So this post is me thinking aloud about the possible next steps – what might be required; what might be possible; and what might be desired but would be beyond the scope of any of those groups to resolve so must be worked around. I'll probably say something stupid but I'll be interested to see where these conversations go.

I might be missing lots of the subtleties, but it seems to me that there are a few basic things we need: shared technical and semantic data standards, or the ability to map between institutional standards consistently and reliably; and shared data, whether in a central repository or via services like federated search capable of bringing individual repositories together into a virtual shared repository. Either way, the implementation details should be hidden from the end user – it should Just Work.
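To make the 'mapping between institutional standards' idea concrete, here's a toy sketch: a lookup table translating one institution's catalogue field names into a shared target schema, keeping track of anything the mapping can't place (which is exactly the sort of thing that needs consistent, agreed handling). All the field names here are invented for illustration, not any real agreed standard.

```python
# Hypothetical mapping from one institution's field names to a
# shared schema; the Dublin Core-ish target names are illustrative.
FIELD_MAP = {
    "object_name": "title",
    "date_made": "date",
    "materials": "medium",
}

def to_shared_schema(local_record, field_map):
    """Map a local catalogue record onto the shared schema,
    returning the mapped record plus any fields left unmapped."""
    shared, unmapped = {}, {}
    for field, value in local_record.items():
        if field in field_map:
            shared[field_map[field]] = value
        else:
            unmapped[field] = value
    return shared, unmapped

record = {"object_name": "Clay pipe", "date_made": "c.1680",
          "findspot": "Southwark"}
shared, unmapped = to_shared_schema(record, FIELD_MAP)
```

The hard part, of course, isn't the code – it's agreeing the target schema and deciding what happens to the 'unmapped' fields, which is where the conversations above keep ending up.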

My preference is for shared repositories (virtual or real) because the larger the group, the better the chance that it will be able to provide truly permanent and stable URIs; and because we'd gain efficiencies when introducing new partners, as well as enabling smaller museums or archaeological units who don't have the technical skills or resources to participate. One reason I think stable and permanent URIs are so important is that they're a requirement for the semantic web. They also mean that people re-using our data, whether in their bookmarks, in mashup applications built on top of our data or on a Flickr page, have a reliable link back to our content in the institutional context.

As new partners join, existing tools could often be re-used wherever a new partner has a collections management system or database already used by a current partner. Tools like those created for project partners to upload records to the PNDS (People's Network Discovery Service, read more at A Standards Framework For Digital Library Programmes) for Exploring 20th Century London could be adapted so that organisations could upload data extracted from their collections management, digital asset or excavation databases to a central source.

But I also think that each (digital or digitised) object should have a unique 'home' URI. This is partly because I worry about replication issues with multiple copies of the same object used in various places and projects across the internet. We've re-used the same objects in several Museum of London projects and partnerships, but the record for that object might not be updated if the original record is changed (for example, if a date was refined or location changed). Generally this only applies to older projects, but it's still an issue across the sector.

Probably more importantly for the cultural heritage sector as a whole, a central, authoritative repository or shared URI means we can publish records that should come with a certain level of trust and authority by virtue of their inclusion in the repository. It does require playing a 'gatekeeper' role, but there are already mechanisms for determining what counts as a museum, and there might also be something for archaeological units and other cultural heritage bodies. Unfortunately this would mean that the Framley Museum wouldn't be able to contribute records – maybe we should call the whole thing off.

If a base record is stored in a central repository, it should be easy to link every instance of its use back to the 'home' URI, or to track discoverable instances and link to them from the home URI. If each digital or digitised object has a home URI, any related content (information records, tags, images, multimedia, narrative records, blog posts, comments, microformats, etc) created inside or outside the institution or sector could link back to the home URI, which would mean the latest information and resources about an object are always available, as well as any corrections or updates which weren't replicated across every instance of the object.
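The 'track discoverable instances and link to them from the home URI' idea could be sketched as nothing more than a registry keyed by home URI. All the URIs below are invented examples; a real implementation would live behind the repository's API rather than in a dictionary, but the shape of the data is the same.

```python
from collections import defaultdict

# Registry linking each object's 'home' URI to known instances of
# its re-use elsewhere (example URIs only).
instances = defaultdict(list)

def register_instance(home_uri, instance_uri, context):
    """Record that an object appears at instance_uri, so the home
    record can link out to every discoverable re-use."""
    instances[home_uri].append({"uri": instance_uri, "context": context})

home = "http://collections.example.org/object/12345"
register_instance(home, "http://flickr.com/photos/someone/678",
                  "Flickr photo")
register_instance(home, "http://example.com/exhibit/pipes",
                  "partner exhibition")
```

Walking the registry in the other direction – from an instance back to its home URI – is what gives re-users a reliable route to the latest, corrected record.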

Obviously the responses to Michelangelo's David are going to differ from those to a clay pipe, but I think it'd be really interesting to be able to find out how an object was described in different contexts, how it inspired user-generated content or how it was categorised in different environments.

I wonder if you could include the object URL in machine tags on sites like Flickr? [Yes, you could. Or in the description field]
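For anyone who hasn't met them: Flickr machine tags take the form namespace:predicate=value, so embedding an object URI is just a matter of string assembly. The 'museum' namespace and 'objecturi' predicate below are invented for illustration, not an established convention.

```python
def machine_tag(namespace, predicate, value):
    """Build a Flickr-style machine tag (namespace:predicate=value).
    Values containing spaces need quoting on Flickr."""
    if " " in value:
        value = f'"{value}"'
    return f"{namespace}:{predicate}={value}"

tag = machine_tag("museum", "objecturi",
                  "http://collections.example.org/object/12345")
```

Because machine tags are queryable through the Flickr API, tagging photos this way would let you find every photo referencing a given home URI.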

There are obviously lots of questions about how standards would be agreed, where repositories would be hosted, how the scope of each are decided, blah blah blah, and I'm sure all these conversations have happened before, but maybe it's finally time for something to happen.

[Update – Leif has two posts on a very similar topic at HEIR tonic and News from the Ouse.

Also I found this wiki on the business case for web standards – what a great idea!]

[Update – this was written in June 2007, but recent movements for Linked Open Data outside the sector mean it's becoming more technically feasible. Institutionally, on the other hand, nothing seems to have changed in the last year.]