From the AHDS blog:

The AHDS has done some investigation of the usage statistics of the Stormont Papers resource. Two main points were uncovered:

1. User searches show the 'long tail' effect. The bulk of searches are not on the most popular terms (which account for 21% of searches), but on terms, phrases and words that are used very rarely (which account for 54% of searches). There's a rough sketch of this kind of log analysis below.

2. Of the ten most popular search terms, all are available as pre-arranged links on the home page. A quick click on a link, rather than typing something on a keyboard, is a more user-friendly way for users to find out what is within a resource.

Source: Users do not want what you expect, AHDS.
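
To make the 'long tail' point concrete, here's a rough Python sketch of the kind of log analysis that could produce numbers like those above. It isn't the AHDS's actual method, and the search log is invented; it just shows how you might measure what share of searches the most popular terms account for versus the rarely used ones.

    from collections import Counter

    # Hypothetical search log: one entry per search submitted. A real log
    # would have thousands of entries, which is where the long tail shows up.
    queries = ["ulster", "education act", "ulster", "linen industry", "stormont",
               "ulster", "partition", "housing", "education act", "shipyards"]

    counts = Counter(queries)
    total = sum(counts.values())

    # Share of all searches taken by the ten most popular terms.
    top_ten_share = sum(n for _, n in counts.most_common(10)) / total

    # Share taken by terms that were only ever searched for once (the tail).
    tail_share = sum(n for n in counts.values() if n == 1) / total

    print(f"Top ten terms: {top_ten_share:.0%} of searches")
    print(f"One-off terms: {tail_share:.0%} of searches")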

Oracle get into the Semantic Web

I just got an email from the Oracle Technology Network:

"Explore OTN Semantic Web (Beta) – and Provide Feedback!
OTN Semantic Web, now in Beta release, is a proof-of-concept application that demonstrates the use of RDF (Resource Description Framework)-based "Semantic Web" technology as the basis for a user experience that relies on dynamic relational navigation as well as Ajaxian user interfaces."
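
I've no idea how Oracle have actually built it, but the basic idea behind 'dynamic relational navigation' is easy to sketch: describe resources and their relationships as RDF triples, then generate the related links on a page by querying those triples. A minimal illustration in Python using rdflib (every URI and property below is invented for the example):

    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/otn/")  # made-up namespace for illustration

    g = Graph()
    g.add((EX.article1, EX.topic, EX.semanticweb))
    g.add((EX.forumthread7, EX.topic, EX.semanticweb))
    g.add((EX.article1, EX.author, Literal("A. Developer")))

    # "What else is about the topic of the page the user is looking at?"
    # A query like this could drive the related links shown on a page.
    results = g.query("""
        SELECT ?resource WHERE {
            ?resource <http://example.org/otn/topic> <http://example.org/otn/semanticweb> .
        }
    """)
    for row in results:
        print(row.resource)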

They've given links to the demo and FAQ.

As a developer, I thought this FAQ point was interesting:
"Why do different collection pages have different user interfaces?
This Beta is intended to expose users to a range of Semantic Web functionality and Ajaxian UIs. For that reason you will see several different varieties of each, and we welcome your feedback about each of them."

I wonder if they're collecting data on mouse patterns and running path analyses to see which interfaces are more effective.

Tagging goes mainstream?

"A December 2006 survey has found that 28% of internet users have tagged or categorized content online such as photos, news stories or blog posts. On a typical day online, 7% of internet users say they tag or categorize online content.

Tagging is gaining prominence as an activity some classify as a Web 2.0 hallmark in part because it advances and personalizes online searching. Traditionally, search on the web (or within websites) is done by using keywords. Tagging is a kind of next-stage search phenomenon – a way to mark, store, and then retrieve the web content that users already found valuable and of which they want to keep track. It is, of course, more tailored to individual needs and not designed to be the all-inclusive system"
Pew Internet and American Life Project: Tagging

The report also goes into the definition of tagging as well as who tags, and there's an interview with David Weinberger on 'Why Tagging Matters'.

More on Web 2.0 post-CAA

Some more quick thoughts as conversations I had at and after CAA UK settle into my brain. This doesn't really apply to anyone I talked to there, but as a general rule I think it's worth saying:

Don't chase the zeitgeist. It's not a popularity contest and it's not a race to see who can cram the most buzzwords into their site.

Also, here's a link to the blog of the AHRC-funded Semantic Web Think Tank I mentioned, and the original announcements about the SWTT.

Finally, a link that's hopefully quite useful for those considering official institutional blogs: Sample guidelines for institutional blog authors.

Via the Museums Computer Group list, the Emerging Technologies Initiative 2007 Horizon Report has just been released. It "highlights six technologies that the underlying research suggests will become very important to higher education over the next one to five years. A central focus of the discussion of each technology is its relevance for teaching, learning, and creative expression. Live weblinks to example applications are provided in each section, as well as to additional readings."

CAA UK 2007 Chapter Meeting

Last week I went to the Computer Applications and Quantitative Methods in Archaeology (CAA) UK 2007 Chapter Meeting in Southampton. There was a range of interesting papers and it was really exciting to talk to people with similar passions.

I managed to overrun and didn't get to the last few slides of my paper, which were some random suggestions for cultural heritage organisations looking to get started with Web 2.0. They're based on the assumption that resources are limited, so the basic model I've suggested is that you think about why you're doing it and who you're doing it for, then start with something small. I would also suggest matching the technology to your content, using applications that meet existing standards to avoid lock-in, ensuring you back up your data regularly (including user-generated content), and taking advantage of existing participation models, particularly from commercial sites that have User Interface and Information Architecture specialists.

  • Start small, monitor usage and build on the response
  • Design for extensibility
    • Sustainable
    • Interoperable
    • Re-usable
  • Use existing applications, services, APIs, architectures, design patterns wherever possible
  • Embrace your long tail
  • It's easy and free/cheap to create a blog or a Flickr account to test the waters
  • Investigate digitising and publishing existing copyright-free audio or video content as a podcast or on YouTube
  • Add your favourite specialist sites to a social bookmarking site
  • Check out MySpace or Second Life to see where your missing users hang out
  • Publish your events data in the hCalendar events microformat so it can be included in social event sites (there's a sketch after this list)
  • Geotag photos and publish them online (see the second sketch after this list)
  • Or just publish photos on Flickr and watch to see if people start creating a folksonomy for you
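
On the events microformat point: the relevant microformat is hCalendar, and the markup is simple enough to generate from whatever already holds your events data. A hedged sketch in Python (the event details are invented; the class names vevent, summary, dtstart and location are the standard hCalendar ones):

    # Build an hCalendar snippet from an event record. The abbr/title pattern
    # carries the machine-readable date while displaying a human-readable one.
    event = {
        "title": "Late-night museum opening",
        "start_iso": "2007-03-01T18:30",
        "start_text": "1 March 2007, 6.30pm",
        "venue": "Main gallery",
    }

    html = (
        '<div class="vevent">'
        '<span class="summary">{title}</span>, '
        '<abbr class="dtstart" title="{start_iso}">{start_text}</abbr> '
        'at <span class="location">{venue}</span>'
        '</div>'
    ).format(**event)

    print(html)

And on geotagging: one low-effort way to geotag Flickr photos has been to add "geo:lat" and "geo:lon" machine tags alongside an ordinary place tag. Again a small sketch, with invented coordinates:

    # Build the set of tags to attach to a photo when uploading it.
    def geo_tags(lat, lon, place=None):
        tags = [f"geo:lat={lat}", f"geo:lon={lon}", "geotagged"]
        if place:
            tags.append(place)
        return tags

    print(" ".join(geo_tags(50.935, -1.396, "southampton")))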

Notes on usability testing

Further to my post about the downloadable usability.gov guidelines, I've picked out the bits from the chapter on 'Usability Testing' that are relevant to my work, but it's worth reading the whole chapter if you're interested. My comments or headings are in square brackets below.

"Generally, the best method is to conduct a test where representative participants interact with representative scenarios.

The second major consideration is to ensure that an iterative approach is used.

Use an iterative design approach

The iterative design process helps to substantially improve the usability of Web sites. One recent study found that the improvements made between the original Web site and the redesigned Web site resulted in thirty percent more task completions, twenty-five percent less time to complete the tasks, and sixty-seven percent greater user satisfaction. A second study reported that eight of ten tasks were performed faster on the Web site that had been iteratively designed. Finally, a third study found that forty-six percent of the original set of issues were resolved by making design changes to the interface.

[Soliciting comments]

Participants tend not to voice negative reports. In one study, when using the 'think aloud' [as opposed to retrospective] approach, users tended to read text on the screen and verbalize more of what they were doing rather than what they were thinking.

[How many user testers?]

Performance usability testing with users:
– Early in the design process, usability testing with a small number of users (approximately six) is sufficient to identify problems with the information architecture (navigation) and overall design issues. If the Web site has very different types of users (e.g., novices and experts), it is important to test with six or more of each type of user. Another critical factor in this preliminary testing is having trained usability specialists as the usability test facilitator and primary observers.
– Once the navigation, basic content, and display features are in place, quantitative performance testing … can be conducted

[What kinds of prototypes?]

Designers can use either paper-based or computer-based prototypes. Paper-based prototyping appears to be as effective as computer-based prototyping when trying to identify most usability issues.

Use inspection evaluation [and cognitive walkthroughs] results with caution.
Inspection evaluations include heuristic evaluations, expert reviews, and cognitive walkthroughs. It is a common practice to conduct an inspection evaluation to try to detect and resolve obvious problems before conducting usability tests. Inspection evaluations should be used cautiously because several studies have shown that they appear to detect far more potential problems than actually exist, and they also tend to miss some real problems.

Heuristic evaluations and expert reviews may best be used to identify potential usability issues to evaluate during usability testing. To improve somewhat on the performance of heuristic evaluations, evaluators can use the 'usability problem inspector' (UPI) method or the 'Discovery and Analysis Resource' (DARe) method.

Cognitive walkthroughs may best be used to identify potential usability issues to evaluate during usability testing.

Testers can use either laboratory or remote usability testing because they both elicit similar results.

[And finally]

Use severity ratings with caution."