Notes from 'Unheard Stories – Improving access for Deaf visitors' at MCG's Spring Conference

These are my notes from the presentation 'Unheard Stories – Improving access for Deaf visitors' by Linda Ellis at the MCG Spring Conference. There's some background to my notes about the conference in a previous post.

Linda's slides for Unheard Stories – Improving access for Deaf visitors are online.

This was a two-year project, fitted around their other jobs [and all the more impressive for that]. The project created British Sign Language video guides for Bantock House. The guides are available on mp3 players and were filmed on location.

Some background:
Not all 'deaf' people are the same – there's a distinction between 'deaf' and 'Deaf'. The notation 'd/Deaf' is often used. Deaf people use sign language as their first language and might not know English; deaf people probably become deaf later in life, and English is their first language. The syntax of British Sign Language (BSL) is different to English syntax. Deaf people will generally use BSL syntax, but deaf people might use signs with English grammar. Not all d/Deaf people can lip-read.

Deaf people are one of the most excluded groups in our society. d/Deaf people can be invisible in society as it's not obvious if someone is d/Deaf. British Sign Language was only recognised as an official language in March 2003.

Their Deaf visitors said they wanted:
Concise written information; information in BSL; to explore exhibits independently; stories about local people and museum objects; events just for Deaf people (and dressing up, apparently).

Suggestions:
Put videos on your website to tell people what to expect when they visit. But think about what you put on the website – they're Deaf, not stupid, and can read addresses and opening hours, etc. Put a mobile number on publicity so that Deaf people can text about events – it's cheap and easy to do but can make a huge difference. If you're doing audience outreach with social software, don't just blog – think about putting signed videos on YouTube. Use local Deaf people, not interpreters. Provide d/Deaf awareness training for all staff and volunteers. Provide written alternatives to audio guides; add subtitles and an English voiceover to signed video if you can afford it.

Notes from 'Rhagor – the collections based website from Amgueddfa Cymru' at MCG's Spring Conference

These are my notes from the presentation 'Rhagor – the collections based website from Amgueddfa Cymru' by Graham Davies at the MCG Spring Conference.

This paper talked about the CMS discussed in Building a Bilingual CMS.

'Rhagor' is Welsh for more – the project is about showing more of the collections online. It's not a 'virtual museum'.

With this project, they wanted to increase access to collections and knowledge associated with those collections; to explain more about collections than can be told in galleries with space limitations; and to put very fragile objects online.

[He gave a fascinating example of a 17th century miniature portrait with extremely fragile mica overlays – the layers have been digitised, and visitors to the website can play dress-up with the portrait layers in a way that would never be physically possible.]

The site isn't just object records, it also has articles about them. There's a basic article structure (with a nice popout action for images) that deals with the kinds of content that might be required. While developing this they realised they should test the usability of interface elements with general users, because the actions aren't always obvious to non-programmers.

They didn't want to dumb down their content, so they explain terms with a glossary where necessary. Articles can have links to related articles, other parts of the website, and related publications, databases, etc. Visitors can rate articles – a nice, quick, simple bit of user interactivity. Visitors can share articles on social networking sites, and the interface allows them to show or hide comments on the site. Where articles are geographically based, they can be plotted onto a map. Finally, it's all fully bilingual. [But I wondered whether they translate comments and the replies to them.]
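To make the feature list above concrete, here's a minimal sketch of what such an article record might look like. This is entirely hypothetical – the field names are mine, not the Rhagor schema – and it simply illustrates bilingual text, related links, ratings, a comments toggle and optional geographic data living on one record.

```typescript
// Hypothetical sketch of an article record supporting the features described
// above; names are invented for illustration, not Amgueddfa Cymru's schema.
interface LocalisedText {
  en: string; // English version
  cy: string; // Welsh (Cymraeg) version
}

interface ArticleRecord {
  id: string;
  title: LocalisedText;
  body: LocalisedText;
  glossaryTerms: Record<string, LocalisedText>;            // explain terms rather than dumb down
  relatedArticleIds: string[];                              // links to related articles
  relatedLinks: { label: LocalisedText; url: string }[];    // publications, databases, etc.
  ratings: number[];                                        // quick user interactivity: 1-5 scores
  commentsVisible: boolean;                                 // visitors can show or hide comments
  location?: { lat: number; lng: number };                  // only set for geographically based articles
}

// Simple average rating, e.g. for display alongside the article title.
function averageRating(article: ArticleRecord): number | null {
  if (article.ratings.length === 0) return null;
  return article.ratings.reduce((sum, r) => sum + r, 0) / article.ratings.length;
}
```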

In their next phase they want to add research activities and collections databases. They're also reaching out to new audiences through applications like Flickr and Google Earth, to go to where audiences are. If the content is available, audiences will start to make links to your content based on their interests.

The technology itself should be invisible; the user has an enriched experience through the content.

Questions:
Alex: to what extent is this linked with the collections management system (CollMS)? Graham: it's linked to their CMS (discussed in earlier papers), not their CollMS. They don't draw directly from the CollMS into the CMS. Their CollMS is a working tool for curators; it needs lots of data cleaning, doesn't necessarily have the right content for web audiences, and isn't bilingual.
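That answer is essentially an architectural point: curatorial working records aren't web-ready and need a deliberate selection and cleaning step before publication. The sketch below is my own illustration of that idea, with invented field names – it is not how Amgueddfa Cymru's systems actually connect.

```typescript
// Hypothetical illustration of why raw collection-management records aren't
// pushed straight into a web CMS: only cleaned, selected, fully bilingual
// records would be published. All names are invented for this example.
interface CollMSRecord {
  accessionNumber: string;
  curatorialNotes: string;   // internal working notes, not web-ready
  descriptionEn?: string;
  descriptionCy?: string;    // often missing: the CollMS isn't bilingual
}

interface WebObjectRecord {
  accessionNumber: string;
  description: { en: string; cy: string };
}

// Returns a web-ready record only when cleaned text exists in both languages;
// otherwise the record stays a curatorial working document.
function toWebRecord(rec: CollMSRecord): WebObjectRecord | null {
  const en = rec.descriptionEn?.trim();
  const cy = rec.descriptionCy?.trim();
  if (!en || !cy) return null;
  return { accessionNumber: rec.accessionNumber, description: { en, cy } };
}
```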

More on cultural heritage and resistance to the participatory web

I've realised that in my post on 'Resistance to the participatory web from within the cultural heritage sector?', I should have made it clear that I wasn't thinking specifically of people within my current organisation. I've been lucky enough to meet a range of people from different institutions at various events or conferences, and when I get a chance I keep up with various cultural heritage email discussion lists and blogs. One way or another I've been quietly observing discussions about the participatory web from a wide range of perspectives within the cultural heritage and IT sectors for some time.

Ok, that said, the responses have been interesting.

Thomas at Medical Museion said:

These are interesting observations, and I wonder: Can this resistance perhaps be understood in terms of an opposition among curators against a perceived profanation of the sacred character of the museum? In the same way as Wikipedia and other user-generated content websites have been viewed with skepticism from the side of many academics — not just because they may contain errors (which encyclopedia doesn't?), but also because it is a perceived profanation of Academia. (For earlier posts about profanation of the museum as a sacred institution, see here and here.) Any ideas?

I'm still thinking about this. I guess I don't regard museums as sacred institutions, but then as I don't produce interpretative or collection-based content that could be challenged from outside the institution, I haven't had a vested interest in retaining or reinforcing authority.

Tom Goskar at Past Thinking provided an interesting example of the visibility and usefulness of user-generated content compared to official content and concluded:

People like to talk about ancient sites, they like to share their photos and experiences. These websites are all great examples of the vibrancy of feeling about our ancient past.

For me that's one of the great joys of working in the cultural heritage sector – nearly everyone I meet (which may be a biased sample) has some sense of connection to museums and the history they represent.

The growth of internet forums on every topic conceivable shows that people enjoy and/or find value in sharing their observations, opinions or information on a range of subjects, including cultural heritage objects or sites. Does cultural heritage elicit a particular response that is motivated by a sense of ownership, not necessarily of the objects themselves, but rather of the experience of, or access to, the objects?

It seems clear that we should try and hook into established spaces and existing conversations about our objects or collections, and perhaps create appropriate spaces to host those conversations if they aren't already happening. We could also consider participating in those conversations, whether as interested individuals or as representatives of our institutions.

However institutional involvement with and exposure to user-generated content could have quite different implications. It not only changes the context in which the content is assessed but it also lends a greater air of authority to the dialogues. This seems to be where some of the anxiety or resistance to the participatory web resides. Institutions or disciplines that have adapted to the idea of using new technologies like blogs or podcasts to disseminate information may baulk at the idea that they should actually read, let alone engage with any user-generated content created in response to their content or collections.

Alun wrote at Vidi:

Interesting thoughts on how Web 2.0 is or isn’t used. I think one issue is a question of marking authorship, which is why Flickr may be more acceptable than a Wiki.

I think that's a good observation. Sites like Amazon also effectively differentiate between official content from publishers/authors and user reviews (in addition to 'recommendation'-type content based on the viewing habits of other users).

Another difference between Flickr and a wiki is that the external user cannot edit the original content of the institutional author. User-generated content sites like the National Archives wiki can capture the valuable knowledge generated when external people access collections and archives, but when this user-generated content is intermingled with, and might edit or correct, 'official' content it may prove a difficult challenge for institutions.

The issue of whether (and how) museums respond to user-generated content, and how user-generated content could be evaluated and integrated with museum-generated content is still unresolved across the cultural heritage sector and may ultimately vary by institution or discipline.

Integrating Accessibility Throughout Design

Integrating Accessibility Throughout Design is a great resource for thinking about how to incorporate accessibility testing in user-centered design processes. It's available as a website and as a book, and covers:

  • The basics of including accessibility in design projects
    • Shortcuts for involving people with disabilities in your project
    • Tips for comfortable interaction with people with disabilities
  • Details on accessibility in each phase of the user-centered design process (UCD)
    • Examples of including accessibility in user group profiles, personas, and scenarios
    • Guidance on evaluating for accessibility through heuristic evaluation, design walkthroughs, and screening techniques
    • Thorough coverage of planning, preparing for, conducting, analyzing, and reporting effective usability tests with participants with disabilities
    • Questions to include in your recruiting screener
    • Checklist for usability testing with participants with disabilities

Two unrelated posts I've liked recently, on Navigators, Explorers, and Engaged Participants as user models; and going back to basics for digital museum content:

We don't need new technologies to attract teen audiences, we need, if anything, to revisit how we (and others) interpret our collections and ideas and then decide what new technologies can best convey the information

Usability articles at Webcredible

The article on 10 ways to orientate users on your site is useful because more and more users arrive at our sites via search engines or deep links. Keeping these tips in mind when designing sites helps us give users a sense of the scope, structure and purpose of a website, no matter whether they start from the front page or three levels down.
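One standard orientation device – my own illustration here, not necessarily one of the article's ten tips – is a breadcrumb trail derived from a page's position in the site structure, so that visitors arriving via a deep link can immediately see where they've landed.

```typescript
// A small sketch (my example, not from the Webcredible article) of building a
// breadcrumb trail from a page's URL path, so visitors arriving three levels
// down can still see where they are in the site.
function breadcrumbsFromPath(path: string): { label: string; href: string }[] {
  const segments = path.split('/').filter(Boolean);
  const crumbs = [{ label: 'Home', href: '/' }];
  let href = '';
  for (const segment of segments) {
    href += '/' + segment;
    // Turn 'online-exhibitions' into 'Online exhibitions' for display.
    const label = segment.replace(/-/g, ' ').replace(/^\w/, c => c.toUpperCase());
    crumbs.push({ label, href });
  }
  return crumbs;
}

// e.g. breadcrumbsFromPath('/collections/online-exhibitions/portrait')
// -> Home / Collections / Online exhibitions / Portrait
```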

How to embed usability & UCD internally "offers practical advice of what a user champion can do to introduce and embed usability and user-centered design within a company" and includes 'guerrilla tactics' or small steps towards getting usability implemented. But probably the most important point is this:

The most effective method of getting user centered design in the process is through usability testing. Invite key stakeholders to watch the usability testing sessions. Usability testing is a real eye-opener and once observed most stakeholders find it difficult to ignore the user as part of the production process. (The most appropriate stakeholders are likely to be project managers, user interface designers, creative personnel, developers and business managers.)

I would have emphasised the point above even if they hadn't. The difference that usability testing makes to the attitudes of internal stakeholders is amazing and can really focus the whole project team on usability and user-centred design.

Collected links and random thoughts on user testing

First, some links on considerations for survey design and quick accessibility testing.

Given the constraints of typical museum project budgets, it's helpful to know you can get useful results with as few as five testers. Here's everybody's favourite, Jakob Nielsen, on why you can do usability testing with only five users, card sorting exercises for information architecture with 15 users and quantitative studies with 20 users. Of course, you have to allow for testing for each of your main audiences and ideally for iterative testing too, but let's face it – almost any testing is better than none. After all, you can't do user-centred design if you don't know what your users want.
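The reasoning behind the 'five users' claim is the Nielsen/Landauer problem-discovery model: the proportion of usability problems found with n testers is roughly 1 - (1 - L)^n, where L is the share a single tester uncovers (Nielsen's oft-quoted average is about 31%, though real projects vary). A quick calculation shows why five users already catch most problems:

```typescript
// Proportion of usability problems expected to be found with n test users,
// using the Nielsen/Landauer model 1 - (1 - L)^n, where L is the share of
// problems a single user uncovers (Nielsen's average figure is ~0.31).
function problemsFound(n: number, L = 0.31): number {
  return 1 - Math.pow(1 - L, n);
}

for (const n of [1, 3, 5, 10, 15]) {
  console.log(`${n} users: ~${Math.round(problemsFound(n) * 100)}% of problems found`);
}
// With L = 0.31: 1 user ~31%, 3 users ~67%, 5 users ~84%, 10 users ~98%, 15 users ~100%
```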

There were a few good articles about evaluation and user-centred design in Digital Technology in Japanese Museums, a special edition of the Journal of Museum Education. I particularly liked the approach in "What Impressions Do People Have Regarding Mobile Guidance Services in Museums? Designing a Questionnaire that Uses Opinions from the General Public" by Hiromi Sekiguchi and Hirokazu Yoshimura.

To quote from their abstract: "There are usually serious gaps between what developers want to know and what users really think about the system. The present research aims to develop a questionnaire that takes into consideration the users point of view, including opinions of people who do not want to use the system". [my emphasis]

They asked people to write down "as many ideas as they could – doubts, worries, feelings, and expectations" about the devices they were testing. They then grouped the responses and used them as the basis for later surveys. Hopefully this process removes developer- and content producer-centric biases from the questions asked in user testing.
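As a toy sketch of that last step – my illustration, not Sekiguchi and Yoshimura's actual method, and with invented example comments – once the free-text responses have been grouped into themes by hand, the most frequently mentioned themes can be turned into candidate questionnaire items:

```typescript
// Invented example: comments already grouped into themes, counted so the most
// common themes become statements to rate in the follow-up survey.
const groupedResponses: Record<string, string[]> = {
  'worried about handling the device': ['afraid to drop it', 'too heavy to carry around'],
  'expects more depth than labels': ['want curator stories', 'more about provenance'],
  'doubts it is worth the effort': ['would rather just look', 'screens distract me'],
};

const candidateItems = Object.entries(groupedResponses)
  .map(([theme, comments]) => ({ theme, mentions: comments.length }))
  .sort((a, b) => b.mentions - a.mentions);

console.log(candidateItems); // questionnaire items grounded in visitors' own words
```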

One surprising side-effect of good user testing is that it helps get everyone involved in a project to 'buy into' accessibility and usability. We can all be blinded by our love of technology, our love of the bottom line, our closeness to the material to be published, etc, and forget that we are ultimately only doing these projects to give people access to our collections and information. User testing gives representative users a voice and helps everyone re-focus on what the people who'll be using the content will actually want to do with it.

I know I'm probably preaching to the converted here, but during Brian Kelly's talk on Accessibility and Innovation at UKMW07 I realised that for years I've had an unconscious test for how well I'll work with someone based on whether they view accessibility as a hindrance or as a chance to respond creatively to a limitation. As you might have guessed, I think the 'constraints' of accessibility help create innovations. As 37signals say, "let limitations guide you to creative solutions".

One of the points raised in the discussion that followed Brian's talk was about how to ensure compliance from contractors if quantitative compliance tests and standards are deprecated in favour of qualitative measures. Thinking back over previous experiences, it became clear to me that anyone responding to a project tender should be able to demonstrate their intrinsic motivation to create accessible sites, not just an ability to deal with the big stick of compliance, because a contractor's commitment to accessibility makes such a difference to the development process and outcomes. I don't think user testing will convince a harried project manager to push a designer for a more accessible template, but I do think we have a better chance of implementing accessible and usable sites if user requirements are considered core to the project from the outset.

Is Web 2.0 user-centred design in action?

An interesting perspective from Mike Ellis and Brian Kelly at MW2007:

Trawling the Web finds the following phrases recurring around Web 2.0: “mashup”, “de-centralisation”, “non-Web like”, “user generated content”, “permission based activity”, “collaboration”, “Creative Commons” … What sits at the heart of all of these, and one of the reasons Web 2.0 has been difficult for bigger, established organisations (including museums) to embrace, is that almost all the things talked about put users and not the organisation at the centre of the equation. Organisational structures, departmental ways of naming things, the perceived ‘value’ of our assets, in fact, what the organisation has to say about itself – all are being challenged.

Web 2.0: How to Stop Thinking and Start Doing: Addressing Organisational Barriers

Notes on usability testing

Further to my post about the downloadable usability.gov guidelines, I've picked out the bits from the chapter on 'Usability Testing' that are relevant to my work but it's worth reading the whole of the chapter if you're interested. My comments or headings are in square brackets below.

"Generally, the best method is to conduct a test where representative participants interact with representative scenarios.

The second major consideration is to ensure that an iterative approach is used.

Use an iterative design approach

The iterative design process helps to substantially improve the usability of Web sites. One recent study found that the improvements made between the original Web site and the redesigned Web site resulted in thirty percent more task completions, twenty-five percent less time to complete the tasks, and sixty-seven percent greater user satisfaction. A second study reported that eight of ten tasks were performed faster on the Web site that had been iteratively designed. Finally, a third study found that forty-six percent of the original set of issues were resolved by making design changes to the interface.

[Soliciting comments]

Participants tend not to voice negative reports. In one study, when using the ’think aloud’ [as opposed to retrospective] approach, users tended to read text on the screen and verbalize more of what they were doing rather than what they were thinking.

[How many user testers?]

Performance usability testing with users:
– Early in the design process, usability testing with a small number of users (approximately six) is sufficient to identify problems with the information architecture (navigation) and overall design issues. If the Web site has very different types of users (e.g., novices and experts), it is important to test with six or more of each type of user. Another critical factor in this preliminary testing is having trained usability specialists as the usability test facilitator and primary observers.
– Once the navigation, basic content, and display features are in place, quantitative performance testing … can be conducted

[What kinds of prototypes?]

Designers can use either paper-based or computer-based prototypes. Paper-based prototyping appears to be as effective as computer-based prototyping when trying to identify most usability issues.

Use inspection evaluation [and cognitive walkthroughs] results with caution.
Inspection evaluations include heuristic evaluations, expert reviews, and cognitive walkthroughs. It is a common practice to conduct an inspection evaluation to try to detect and resolve obvious problems before conducting usability tests. Inspection evaluations should be used cautiously because several studies have shown that they appear to detect far more potential problems than actually exist, and they also tend to miss some real problems.

Heuristic evaluations and expert reviews may best be used to identify potential usability issues to evaluate during usability testing. To improve somewhat on the performance of heuristic evaluations, evaluators can use the ’usability problem inspector’ (UPI) method or the ’Discovery and Analysis Resource’ (DARe) method.

Cognitive walkthroughs may best be used to identify potential usability issues to evaluate during usability testing.

Testers can use either laboratory or remote usability testing because they both elicit similar results.

[And finally]

Use severity ratings with caution."