Museums and the audience comments paradox

I was at the Imperial War Museum for an advisory board meeting for the Social Interpretation project recently, and had a chance to reflect on my experiences with previous audience participation projects.  As Claire Ross summarised it, the Social Interpretation project is asking: does applying social media models to collections successfully increase engagement and reach?  And what forms of moderation work in that environment – can the audience be trusted to behave appropriately?

One topic for discussion yesterday was whether the museum should do some 'gardening' on the comments.  Participation rates are relatively high but some of the comments are nonsense ('asdf'), repetitive (thousands of variants of 'Cool' or 'sad') or off-topic ('I like the museum') – a pattern probably common to many museum 'have your say' kiosks.  Gardening could involve 'pruning' out comments that were not directly relevant to the question asked in the interactive, or finding ways to surface the interesting comments.  While there are models available in other sectors (e.g. newspapers), I'm excited that the Social Interpretation project might have a chance to address this issue for museums.

A big design challenge for high-traffic 'have your say' interactives is providing a quality experience for audiences reading comments – they shouldn't have to wade through screens of repeated, vacuous or rude comments to find the gems – while appropriately respecting the contribution and personal engagement of the person who left each comment.

In the spirit of 'have your say', what do you think the solution might be?  What have you tried (successfully or not) in your own projects, or seen working well elsewhere?

Update: the Social Interpretation team have posted I iz in ur xhibition trolling ur comments:

"One of the most discussed issues was about what we have termed ‘gardening comments’ but to put it bluntly it’s more a case of should we be ‘curating the visitor voice’ in order to improve the visitor experience? It’s a difficult question to deal with… 

We are at the stage where we really do want to respect the commenter, but also want to give other readers a high value experience. It’s a question of how we do that, and will it significantly change the project?"

If you found this post useful, you might also be interested in Notes from 'The Shape of Things: New and emerging technology-enabled models of participation through VGC'.

Update, March 2014: I've just been reading a journal article on 'Normative Influences on Thoughtful Online Participation'. The authors set out to test this hypothesis:

'Individuals exposed to highly thoughtful behavior from others will be more thoughtful in their own online comment contributions than individuals exposed to behavior exhibiting a low degree of thoughtfulness.' 

Thoughtful comments were defined by the number of words, how many seconds it took to write them, and how much of the content was relevant to the issue discussed in the original post. And the results? 'We found significant effects of social norm on all three measures related to participants’ commenting behavior. Relative to the low thoughtfulness condition, participants in the high thoughtfulness condition contributed longer comments, spent more time writing them, and presented more issue-relevant thoughts.' To me, this suggests that it's worth finding ways to highlight the more thoughtful comments (and to keep pulling out those 'asdf' weeds) in an interactive, as this may encourage other thoughtful comments in turn.
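
As a thought experiment, here's what 'weeding' and 'surfacing' might look like in code, using the study's three measures – word count, writing time and issue relevance. It's a minimal sketch in Python: the weights, topic words and mash-detection are all my own invented assumptions, not anything from the study or the Social Interpretation project.

```python
# A toy ranking of kiosk comments on the three 'thoughtfulness' measures
# from Sukumaran et al. 2011: length, writing time and issue relevance.
# All weights, topic words and sample comments are invented for illustration.

def looks_like_mash(text):
    """Crude weed detector for home-row runs like 'asdf'."""
    return 'asdf' in text.lower().replace(' ', '')

def thoughtfulness(text, seconds_spent, topic_words):
    words = text.lower().split()
    if not words or looks_like_mash(text):
        return 0.0  # prune the 'asdf' weeds outright
    relevant = sum(1 for w in words if w.strip('.,!?') in topic_words)
    # Arbitrary weighting of word count, effort and relevance.
    return 0.4 * len(words) + 0.1 * seconds_spent + 10 * relevant / len(words)

topic_words = {'war', 'memorial', 'soldiers', 'family', 'grandfather'}
comments = [
    ('asdf', 3),
    ('Cool', 4),
    ('My grandfather never spoke about the war after he came home.', 95),
]
# Surface the most thoughtful comments first.
for text, secs in sorted(comments, key=lambda c: -thoughtfulness(c[0], c[1], topic_words)):
    print(round(thoughtfulness(text, secs, topic_words), 1), '|', text)
```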

Reference: Sukumaran, Abhay, Stephanie Vezich, Melanie McHugh, and Clifford Nass. “Normative Influences on Thoughtful Online Participation.” In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, 3401–10. Vancouver, BC, Canada: ACM, 2011. http://dl.acm.org/citation.cfm?id=1979450.

Slow and still dirty Digital Humanities Australasia notes: day 3

These are my very rough notes from day 3 of the inaugural Australasian Association for Digital Humanities conference (see also Quick and dirty Digital Humanities Australasia notes: day 1 and Quick and dirty Digital Humanities Australasia notes: day 2), held at the Australian National University in Canberra at the end of March.

We were welcomed to Day 3 by the ANU's Professor Marnie Hughes-Warrington (who expressed her gratitude for the methodological and social impact of digital humanities work) and Dr Katherine Bode.  The keynote was Dr Julia Flanders on 'Rethinking Collections', AKA 'in praise of collections'… [See also Axel Bruns' live blog.]

She started by asking what we mean by a 'collection'. What's the utility of the term? What's the cultural significance of collections? The term speaks of agency, motive, and implies the existence of a collector who creates order through selectivity. Sites like eBay, Flickr and Pinterest are responding to a weirdly deep-seated desire to reassert the ways in which things belong together. The term 'collection' implies that a certain kind of completeness may be achieved. Each item is important in itself and also in relation to other items in the collection.

There's a suite of expected activities and interactions in the genre of digital collections, projects, etc. They're deliberate aggregations of materials that bear, and demand, individual scrutiny. Attention is given to the value of scale (and distant reading), which reinforces the aggregate approach…

She discussed the value of deliberate scope, deliberate shaping of collections, not craving 'everythingness'. There might also be algorithmically gathered collections…

She discussed collections she's involved with – TAPAS, DHQ, Women Writers Online – all using flavours of TEI, the same publishing logic, component stack, providing the same functionality in the service of the same kinds of activities, though they work with different materials for different purposes.

What constitutes a collection? How are curated collections different to user-generated content or just-in-time collections? Back 'then', collections were things you wanted in your house or wanted to see in the same visit. What does the 'now' of collections look like? Decentralisation in collections 'now'… technical requirements are part of the intellectual landscape, part of larger activities of editing and design. A crucial characteristic of collections is the variety of philosophical urgencies they respond to.

The electronic operates under the sign of limitless storage… potentially boundless inclusiveness. Design logic is a craving for elucidation, more context, the ability for the reader to follow any line of thought they might be having and follow it to the end. Unlimited informational desire, closing in of intellectual constraints. How do boundedness and internal cohesion help define the purpose of a collection? Deliberate attempt at genre not limited by technical limitations. Boundedness helps define and reflect philosophical purpose.

What do we model when we design and build digital collections? We're modelling the agency through which the collection comes into being and is sustained through usage. Design is a collection of representational practices, item selection, item boundaries and contents. There's a homogeneity in the structure, the markup applied to items. Item-to-item interconnections – there's the collection-level 'explicit phenomena' – the directly comparable metadata through which we establish cross-sectional views through the collection (e.g. by Dublin Core fields) which reveal things we already know about texts – authorship of an item, etc. There's also collection-level 'implicit phenomena' – informational commonalities, patterns that emerge or are revealed through inspection; change shape imperceptibly through how data is modelled or through software used [not sure I got that down right]; they're always motivated so always have a close connection with method.

Readerly knowledge – what can the collection assume about what the reader knows? A table of contents is only useful if you can recognise the thing you want to find in it – they're not always self-evident. How does the collection's modelling affect us as readers? Consider the effects of choices on the intellectual ecology of the collection, including its readers. Readerly knowledge has everything to do with what we think we're doing in digital humanities research.

She referenced The Hermeneutics of Screwing Around (pdf). Searching produces a dynamically located just-in-time collection… Search is an annoying guessing game with a passive-aggressive collection. But we prefer to ask a collection to show its hand in a useful way (i.e. browse)… Search -> browse -> explore.

What's the cultural significance of collections? She referenced Liu's Sidney's Technology… A network as flow of information via connection, perpetually ongoing contextualisation; a patchwork is understood as an assemblage, it implies a suturing together of things previously unrelated. A patchwork asserts connections by brute force. A network assumes that connections are there to be discovered, connected to. Patchwork, mosaic – connects pre-existing nodes that are acknowledged to be incommensurable.

We avow the desirability of the network, yet we're aware of the itch of edge cases, data that can't be brought under rule. What do we treat as noise and what as signal, what do we deny is the meaning of the collection? Is exceptionality or conformance to type the most significant case? On twitter, @aylewis summarised this as 'Patchworking metaphor lets us conceptualise non-conformance as signal not noise'

Pay attention to the friction in the system, rather than smoothing it over. Collections both express and support analysis. Expressing theories of genre etc in internal modelling… Patchwork – the collection articulates the scholarly interest that animated its creation but also interests of the reader… The collection is animated by agency, is modelled by it, even while it respects the agency we bring as readers. Scholarly enquiry is always a transaction involving agency on both ends.

My (not very good) notes from discussion afterwards… there was a question about digital femmage; discussion of the tension between the desire for transparency and the desire to permit many viewpoints on material while not disingenuously disavowing the roles in shaping the collection; the trend at one point for factoids rather than narratives (but people wanted the editors' view as a foundation for what they do with that material); the logic of the network – a collection as a set of parameters not as a set of items; Alan Liu's encouragement to continue with theme of human agency in understanding what collections are about (e.g. solo collectors like John Soane); crowdsourced work is important in itself regardless of whether it comes up with the 'best' outcome, by whatever metric. Flanders: 'the commitment to efficiency is worrisome to me, it puts product over people in our scale of moral assessment' [hoorah! IMO, engagement is as important as data in cultural heritage]; a question about the agency of objects, with the answer that digital surrogates are carriers of agency, the question is how to understand that in relation to object agency?

GIS and Mapping I

The first paper was 'Mapping the Past in the Present' by Andrew Wilson, a fast run-through of some lovely examples based on Sydney's geo-spatial history. He discussed the spatial turn in history, the mid-20th-century shift to broader scales and territories of shared experience, and the on-going concern with the description of space, its experience and management.

He referenced Harley's 'Deconstructing the map' (1989): 'cartography is seldom what the cartographers say it is'. All maps are lies. All maps have to be read, closely or distantly. He referenced Grace Karskens' On the rocks and discussed the reality of maps as evidence, an expression of European expansion; the creation of the maps is an exercise in power. Maps must be interpreted as evidence. He talked about deriving data from historic maps, using regressive analysis to go back in time through the sources. He also mentioned TGIS – time-enabled GIS – and the space-time composite model: when you have lots and lots of temporal changes, create polygons that describe every change in the sequence.
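
For what it's worth, here's the space-time composite idea as I understand it, sketched with the shapely library and invented coordinates (my illustration, not anything from Wilson's paper): overlapping snapshots are decomposed into units with uniform change histories.

```python
from shapely.geometry import Polygon

# Two snapshots of the same (invented) parcel boundary at different dates.
extent_1850 = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
extent_1900 = Polygon([(2, 0), (6, 0), (6, 3), (2, 3)])

# The composite splits the map into polygons with uniform histories,
# so every change in the sequence is captured once.
composite = [
    ('1850 only', extent_1850.difference(extent_1900)),
    ('1850 and 1900', extent_1850.intersection(extent_1900)),
    ('1900 only', extent_1900.difference(extent_1850)),
]
for history, geom in composite:
    print(history, '-> area', geom.area)
```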

The second paper was 'Reading the Text, Walking the Terrain, Following the Map: Do We See the Same Landscape?' by Øyvind Eide. He said that viewing a document and seeing a landscape are often represented as similar activities… but seeing a landscape means moving around in it, being an active participant. Wood (2010) on the explosion of maps around 1500 – part of the development of the modern state. We look at older maps through modern eyes – maps weren't made for navigation but to establish the modern state.

He's done a case study on text vs maps in Scandinavia in the 1740s. What is lost in the process of converting text to maps? Context, vagueness, under-specification, negation, disjunction… It's a combination of too little and too much. Text has information that can't fit on a map, and text that doesn't provide enough information to make a map. Under-specification is when a verbal text describes a spatial phenomenon in a way that can be understood in two different ways by a competent reader. How do you map a negative feature of a landscape, i.e. things that are stated not to be there? 'Or' cannot be expressed on a map… Different media, different experiences – each can mediate only certain aspects of total reality (Elleström 2010).

The third paper was 'Putting Harlem on the Map' by Stephen Robertson. His article in 'Writing History in the Digital Age', Putting Harlem on the Map, is probably a good reference point; the site itself is Digital Harlem. The project sources were police files, newspapers, organisational archives… They were cultural historians, focussed on individual-level data and events – what it was like to live in Harlem. It was one of the first sites to employ the geo-spatial web rather than GIS software. Information was extracted and summarised from primary sources; it wasn't a digitisation project. They presented their own maps and analysis apart from the site to keep it clear for other people to do their work.  After assigning a geo-location it is then possible to compare it with other phenomena from the same space. They used sources that historians typically treat as ephemera, such as society or sports pages, as well as the news in newspapers.

He showed a great list of event types they've gotten from the data… Legal categories disaggregate crime, so it appears more often in the list even though it was a minority of the data. Location types also offer a picture of the community.

Creating visualisations of life in the neighbourhood… When mapping at this detailed scale they were confronted with how vague most historical sources are and how they're related to other places. 'Historians are satisfied in most cases to say that a place is 'somewhere in Harlem'.' He talked about visualisations as 'asking, but not explaining, why there?'.

I tweeted that I'd gotten a lot more from his demonstration of the site than I had from looking at it unaided in the past, which led to a discussion with @claudinec and @wragge about whether the 'search vs browse' accessibility issue applies to geospatial interfaces as well as text or images (i.e. what do you need to provide on the first screen to help people get into your data project), and about the need for as many hooks into interfaces as possible, including narratives as interfaces.

Crowdsourcing was raised during the questions at the end of the session, but I've forgotten who I was quoting when I tweeted, 'by marginalising crowdsourcing you're marginalising voices', on the other hand, 'memories are complicated'.  I added my own point of view, 'I think of crowdsourcing as open source history, sometimes that's living memory, sometimes it's research or digitisation'.  If anything, the conference confirmed my view that crowdsourcing in cultural heritage generally involves participating in the same processes as GLAM staff and humanists, and that it shouldn't be exploitative or rely on user experience tricks to get participants (though having made crowdsourcing games for museums, I obviously don't have a problem with making the process easier to participate in).

The final paper I saw was Paul Vetch, 'Beyond the Lowest Common Denominator: Designing Effective Digital Resources'. He discussed the design tensions between: users, audiences (and 'production values'); ubiquity and trends; experimentation (and failure); sustainability (and 'the deliverable').

In the past digital humanities has compartmentalised groups of users in a way that's convenient but not necessarily valid. But funding pressure to serve wider audiences means anticipating lots of different needs. He said people make value judgements about the quality of a resource according to how it looks.

Ubiquity and trends: understanding what users already use; designing for intuition. Established heuristics for web design turn out to be completely at odds with how users behave.

Funding bodies expect deliverables, and this conditions the way projects are designed. It's difficult to combine: experimentation and high production values [something I've posted on before, but as Vetch said, people make value judgements about the quality of a resource according to how it looks, so some polish is needed]; experimentation and sustainability…

Who are you designing for? Not the academic you're collaborating with, and it's not to create something that you as a developer would use. They're moving away from user testing at the end of a project to doing it during the project. [Hoorah!]

Ubiquity and trends – challenges include a very highly mediated environment; highly volatile and experimental… Trying to use established user conventions becomes stifling. (He called useit.com 'old nonsense'!) The ludic and experiential are increasingly important elements in how we present our research back.

Mapping Medieval Chester took technology designed for delivering contextual ads and used it to deliver information in context without changing perspective (i.e. without reloading the page, from memory).  The Gough map was an experiment in delivering a large image but also in making people smile.  Experimentation and failure… The Online Chopin Variorum Edition was an experiment: how is the 'work' concept challenged by the Chopin sources? Technical/methodological objectives: superimposition; juxtaposition; collation/interpolation…

He discussed coping strategies for the Digital Humanities: accept and embrace the ephemerality of web-based interfaces; focus on process and experience – the underlying content is persistent even if the interfaces don't last.  I think this was a comment from the audience: 'if a digital resource doesn't last then it breaks the principle of citation – where does that leave scholarship?'

Summary

So those are my notes.  For further reference I've put a CSV archive of #DHA2012 tweets from searchhash.com here, but note it's not on Australian time so it needs transposing to match the session times.
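
(For what it's worth, the transposing is only a couple of lines of pandas, though the column name and source timezone below are guesses – check them against the actual CSV.)

```python
import pandas as pd

# Assumes a 'created_at' column in UTC; the column name and zone are
# guesses, so check the actual CSV from searchhash.com before running.
tweets = pd.read_csv('dha2012_tweets.csv', parse_dates=['created_at'])
tweets['created_at'] = (tweets['created_at']
                        .dt.tz_localize('UTC')
                        .dt.tz_convert('Australia/Canberra'))
tweets.to_csv('dha2012_tweets_local.csv', index=False)
```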

This was my first proper big Digital Humanities conference, and I had a great time.  It probably helped that I'm an Australian expat so I knew a sprinkling of people and had a sense of where various institutions fitted in, but the crowd was also generally approachable and friendly.

I was also struck by the repetition of phrases like 'the digital deluge', the 'tsunami of data' – I had the feeling there's a barely managed anxiety about coping with all this data. And if that's how people at a digital humanities conference felt, how must less-digital humanists feel?

I was pleasantly surprised by how much digital history content there was, and even more pleasantly surprised by how many GLAMy people were there, and consequently how much the experience and role of museums, libraries and archives was reflected in the conversations.  This might not have been as obvious if you weren't on twitter – there was a bigger disconnect between the back channel and conversations in the room than I'm used to at museum conferences.

As I mentioned in my day 1 and day 2 posts, I was struck by the statement that 'history is on a different evolutionary branch of digital humanities to literary studies', partly because even though I started my PhD just over a year ago, I've felt the title will be outdated within a few years of graduation.  I can see myself being more comfortable describing my work as 'digital history' in future.

I have to finish by thanking all the speakers, the programme committee, and in particular, Dr Paul Arthur and Dr Katherine Bode, the organisers and the aaDH committee – the whole event went so smoothly you'd never know it was the first one!

And just because I loved this quote, one final tweet from @mikejonesmelb: Sir Ken Robinson: 'Technology is not technology if it was invented before you were born'.

Museum Computer Network 2011 conference notes

Last November I went to the Museum Computer Network (MCN2011) conference for the first time – I was lucky enough to get a scholarship (for which many, many thanks).  The theme was 'hacking the museum: innovation, agility and collaboration' and the conference was packed with interesting sessions. My rough notes are below, though they're probably even sketchier than usual because I had a pretty full conference (running a workshop, taking part in a panel and a debate).  (I thought I'd posted this at the time, but I just found it in draft, so here goes…)

Pre-conference workshop, Wednesday
I ran a half-day workshop on 'Hacking and mash-ups for beginners', which had a great turn-out of people willing to get stuck in.  The basic idea was to give people a first go at scripting 'hello world' and a bit beyond (with JavaScript, because it can be run locally), to provide some insight into thinking computationally (understanding something of how programmers think and how ideas might be turned into something on a screen), and to play with real museum data and try different visualisation tools to create simple mashups.  My slides and speaker notes are at Hacking and mash-ups for beginners at MCN2011 and I'd be happy to share the exercises on request.  I used lots of cooking/food analogies so have a snack to hand in case the slides make you hungry! I had lots of good feedback from the workshop, but I think my favourite comment was this from Katie Burns (@K8burns): '…I loved the workshop. I nerded out and kept playing with your exercises on my flight home from ATL.'

Thursday
Kevin Slavin's (@slavin_fpo) thought-provoking keynote took us to Walter Benjamin by way of the Lascaux Caves and on to questions like: what does it do to us [as writers of wall captions and object labels] when objects provide information?  He observed, 'visitors turn to the caption as if the work of art is a question to be answered' – are we reducing the work to information?  We should be evoking, rather than educating; amplifying rather than answering the question; producing a memory instead of preserving one; making the moment in which you're actually present more precious… Ultimately, the authenticity of his experience [with the artwork in the caves] was in learning how to see it [in the context, the light in which it was created]. Kevin concluded that technology is not about giving additional things to look at, but additional ways to see.

I've posted about the panel I took part in after the keynote, 'What's the point of a museum website?', at Report from 'What's the point of a museum website'… and Brochureware, aggregators and the messy middle: what's the point of a museum website?  I also popped into the session 'Valuing Online-only Visitors: Let's Get Serious', which was grappling with many of the issues raised by Culture 24's action research project, How to evaluate success online?  This all seems to point to a growing momentum for finding new measurable models for value and engagement, possibly including online to on-site conversion, impact, even epiphanies. Interestingly, crowdsourcing is one place where it's relatively easy to place a monetary value on online action – @alastairdunning popped up to say: 'http://www.oucs.ox.ac.uk/ww1lit/ project – 'Normal' digitisation = £40 per item. Crowdsourced = £3.50 per item', adding 'But obviously cultural value of a Wilfred Owen mss is more than your neighbour's WW1 letters and diaries'.

Friday
One of the sessions I was most looking forward to was Online cataloguing tools and strategies, as it covered crowdsourcing, digital scholarly practices and online collections – some of my favourite things!

Digital Mellini turned a 17th-century Italian manuscript (an inventory of paintings written in rhyming verse) into an online publication and a collaboration tool for scholars. The project asked 'What will digital art history look like?'.  The old way of doing art history was about solo exploration, verbal idea-sharing, physical book publications, unlinked data, image rights issues; but the promise of digital scholarship is: linked data opens new routes to analysis, scholars collaborate online, conversations are captured, digital-only publications count for tenure, no copyright restrictions… I was impressed by their team-based, born-digital approach, even if it's not their norm: 'the process was very non-Getty, it was iterative and agile'.  They had a solid set of requirements, including annotations and conversations at the word or letter level of the text, with references to related artworks. They're now tackling 'rules of engagement' for scholars – where to comment, etc – and working out what an online publication looks like and how it affects scholarly practices.

The goal of the Yale Center for British Art (YCBA) Online Collections was search across all YCBA collections.  All the work they've done is open source – Solr, Lucene – cool!  They're also using LIDO (superseding CDWA and MuseumDat) and looking to linked data, including vocabulary harmonisation.  As with many cross-catalogue projects, they ended up using a lowest common denominator between collections and had to compromise on shared fields in search.  I'm not sure who used the lovely phrase 'dedication to public domain'… Both art history presentations mentioned linked data – we've come far!

The final paper was Crowdsourcing transcription: who, why, what and how, with Perian Sully from Balboa Park talking with Ben Brumfield about how they've used his 'From the Page' transcription software.  Transcription is not only useful because you can't do OCR on cursive writing; it's also a form of engagement and outreach (as I've found with other cultural heritage crowdsourcing).  They covered some similar initiatives like Family Search Indexing, whose goal is to get 175,000 new users volunteering to transcribe records (they've already transcribed close to a billion records), and the Historic Journals project, whose goal is to link transcriptions with records in genealogy databases (and lots more examples, but these were most relevant to my PhD research).

Reasons for crowd participation (from an ornithology project survey) included the importance of the programme, filling free time, love of nature, civic duty and school requirements.  People participate for a sense of purpose, love of the subject, and immersion in the text (deep reading). The question of fun leads into the perils of gamification – if you split text line by line to make a microtask-style game, you lose the interesting context.

They gave some tips on how to start a crowdsourced transcription project based on your material and the uses for your transcription.  The design will also affect interpretive decisions made when transcribing – do you try to replicate the line structure on the page? – and can provide incentives like competition to transcribe more materials, though as Perian pointed out, accuracy can be affected by motivation.

I had to leave Philosophical Leadership Needed for the Future: Digital Humanities Scholars in Museums early but it all made a lot more sense to me when I realised Neal wasn't using 'digital humanities' in the sense it's used academically (the application of computational techniques to humanities research questions) – as I see it, he's talking about something much closer to 'digital heritage'.

I still haven't sorted out my notes from History Museums are not Art Museums: Discuss! but it was one of my favourite sessions and a great chance to discuss one of my museumy interests with really smart people.

Saturday
I popped into a bit of THATCamp/CultureHack and had fun playing with an imaginary museum, but unfortunately I didn't get to spend any time in the THATCamp itself, because…

The MCN 'Great Debate'
I was invited to take part in the Great Debate held as the closing plenary session.  I was on the affirmative side with Bruce Wyman, debating 'there are too many museums' against Rob Stein and Roseanna Flouty. For now, I think I'll just say that I think it's the hardest bit of public speaking I've ever done – the trickiness of the question was the least of it!  I think there's a tension between the requirements of the formal debating structure and the desire to dissect the question so you can touch on issues relevant to the audience, so it'll be interesting to see how the format might change in future.

Finally, a silly tweet from me: '#mcn2011 I've decided the perfect visitor-friendly museum is the Mona Lisa on spaceship held by a dinosaur. That you can buy on a t-shirt.' led to the best thing ever from @timsven: '@mia_out- this pic is for you- museum of the future: trex w/ mona lisa riding millenium falcon #MCN2011 http://t.co/37GdAD1O'.

Museum of the Future

"…and they all turn on their computers and say 'yay!'" (aka, 'mapping for humanists')

I'm spending a few hours of my Sunday experimenting with 'mapping for humanists' with an art historian friend, Hannah Williams (@_hannahwill).  We're going to have a go at solving some issues she has encountered when geo-coding addresses in 17th and 18th Century Paris, and we'll post as we go to record the process and hopefully share some useful reflections on what we found as we tried different tools.

We started by working out what issues we wanted to address.  After some discussion we boiled it down to two basic goals: a) to geo-reference historical maps so they can be used to geo-locate addresses and b) to generate maps dynamically from lists of addresses. This also means dealing with copyright and licensing issues along the way and thinking about how geospatial tools might fit into the everyday working practices of a historian.  (i.e. while a tool like Google Refine can easily generate maps, is it usable for people who are more comfortable with Word than relying on cloud-based services like Google Docs?  And if copyright is a concern, is it as easy to put points on an OpenStreetMap as on a Google Map?)

Like many historians, Hannah uses maps in two main ways: as illustrations, and as analytic tools.  Maps used for illustrations (e.g. in publications) are ideally copyright-free, or can at least be used as illustrative screenshots.  Interactivity is a lower priority for now as the dataset would be private until the scholarly publication is complete (owing to concerns about the lack of an established etiquette and format for citation and credit for online projects).

Maps used for analysis would ideally support layers of geo-referenced historic maps on top of modern map services, allowing historic addresses to be visually located via contemporaneous maps and geo-located via the link to the modern map.  Hannah has been experimenting with finding location data via old maps of Paris in Hypercities, but manually locating 18th Century streets on historic maps then matching those locations to modern maps is time-consuming and she suspects there are more efficient ways to map old addresses onto modern Paris.

Based on my research interviews with historians and my own experience as a programmer, I'd also like to help humanists generate maps directly from structured data (and ideally to store their data in user-friendly tools so that it's as easy to re-use as it is to create and edit).  I'm not sure if it's possible to do this from existing tools or whether they'd always need an export step, so one of my questions is whether there are easy ways to get records stored in something like Word or Excel into an online tool and create maps from there.  Some other issues historians face in using mapping include: imprecise locations (e.g. street names without house numbers); potential changes in street layouts between historic and modern maps; incomplete datasets; using markers to visually differentiate types of information on maps; and retaining descriptive location data and other contextual information.
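
To give a flavour of what 'generating maps directly from structured data' might look like, here's a minimal Python sketch using pandas and folium. It assumes the hard part – geo-locating the historic addresses – is already done, and that the results sit in a spreadsheet saved as CSV; the file and column names are invented for illustration.

```python
import folium
import pandas as pd

# Assumes addresses.csv has 'lat', 'lon' and 'address' columns, i.e. the
# geo-locating of the 18th-century addresses has already been done.
addresses = pd.read_csv('addresses.csv')

# OpenStreetMap tiles side-step the copyright worries around Google's base maps.
paris = folium.Map(location=[48.8566, 2.3522], zoom_start=13, tiles='OpenStreetMap')
for row in addresses.itertuples():
    folium.Marker([row.lat, row.lon], popup=row.address).add_to(paris)
paris.save('paris_addresses.html')  # an interactive map, openable in a browser
```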

Because the challenge is to help the average humanist, I've assumed we should stay away from software that needs to be installed on a server, so to start with we're trying some of the web-based geo-referencing tools listed at http://help.oldmapsonline.org/georeference.

Geo-referencing tools for non-technical people

The first bump in the road was finding maps that are re-usable in technical and licensing terms so that we could link or upload them to the web tools listed at http://help.oldmapsonline.org/georeference.  We've fudged it for now by using a screenshot to try out the tools, but it's not exactly a sustainable solution.  
Hannah's been trying georeferencer.org, Hypercities and Heurist (thanks to Lise Summers @morethangrass on twitter) and has written up her findings at Hacking Historical Maps… or trying to.  Thanks also to Alex Butterworth @AlxButterworth and Joseph Reeves @iknowjoseph for suggestions during the day.

Yahoo! Mapmixer's page was a 404 – I couldn't find any reference to the service being closed, but I also couldn't find a current link for it.

Next I tried MetaCarta Labs' Map Rectifier.  Any maps uploaded to this service are publicly visible, though the site says this does 'not grant a copyright license to other users' and '[t]here is no expectation of privacy or protection of data', which may be a concern for academics negotiating the line between openness and protecting work-in-progress, or for anyone dealing with sensitive data.  Many of the historians I've interviewed for my PhD research feel that some sense of control over who can view and use their data is important, though the reasons why and how this is manifested vary.

Screenshot from http://labs.metacarta.com/rectifier/rectify/7192


The site has clear instructions – 'double click on the source map… Double click on the right side to associate that point with the reference map' – but the search within the right-hand side 'source map' didn't work, and manually navigating to Paris, then to the right section of Paris, was a huge pain.  Neither of the base maps seemed to have labels, so finding the right location at the right level of zoom was too hard and eventually I gave up.  Maybe the service isn't meant to deal with that level of zoom?  We were using a very small section of map for our trials.

Inspired by MetaCarta's Map Rectifier, Map Warper was written with OpenStreetMap in mind, which immediately helps us get closer to the goal of images usable in publications.  Map Warper is also used by the New York Public Library, which described it as a 'tool for digitally aligning ("rectifying") historical maps … to match today's precise maps'.  Map Warper also makes all uploaded maps public: 'By uploading images to the website, you agree that you have permission to do so, and accept that anyone else can potentially view and use them, including changing control points', but also offers 'Map visibility' options of 'Public (default)' and 'Don't list the map (only you can see it)'.

Screenshot showing 'warped' historical map overlaid on OpenStreetMap at http://mapwarper.net/

Once a map is uploaded, it zooms to a 'best guess' location, presumably based on the information you provided when uploading the image.  It's a powerful tool, though I suspect it works better with larger images with more room for error.  Some of the functionality is a little obscure to the casual user – for example, the 'Rectify' view tells me '[t]his map either is not currently masked. Do you want to add or edit a mask now?' without explaining what a mask is.  However, I can live with some roughness around the edges because once you've warped your map (i.e. aligned it with a modern map), there's a handy link on the Export tab, 'View KML in Google Maps' that takes you to your map overlaid on a modern map.  Success!

Sadly not all the export options seem to be complete (they weren't working on my map, anyway) so I couldn't work out if there was a non-geek friendly way to open the map in OpenStreetMap.

We have to stop here for now; at this point we've met one of the goals – to geo-reference historical maps so locations from the past can be found in the present – but the other will have to wait for another day.  (But I'd probably start with openheatmap.com when we tackle it again.  Any other suggestions would be gratefully received!)

(The title quote is something I heard one non-geek friend say to another to explain what geeks get up to at hackdays. We called our experiment a 'hackday' because we were curious to see whether the format of a hackday – working to meet a challenge within set parameters within a short period of time – would work for other types of projects. While this ended up being almost an 'anti-hack', because I didn't want to write code unless we came across a need for a generic tool, the format worked quite well for getting us to concentrate solidly on a small set of problems for an afternoon.)

Designing for participatory projects: emergent best practice, getting discussion started

I was invited over to New Zealand (from Australia) recently to talk at Te Papa in Wellington and the Auckland Museum.  After the talks I was asked if I could share some of my notes on design for participatory projects and for planning for the impact of participatory projects on museums.  Each museum has a copy of my slides, but I thought I'd share the final points here rather than by email, and take the opportunity to share some possible workshop activities to help museums plan audience participation around its core goals.

Both talks started by problematising the definition of a 'museum website' – it doesn't work to think of your 'museum website' as purely stuff that lives under your domain name when it now also includes the social media accounts under your brand, your games and mobile apps, and maybe also your objects and content on Google Art Project or even your content in a student’s Tumblr.  The talks were written to respond to the particular context of each museum so they varied from there, but each ended up with these points.  The sharp-eyed among you might notice that they're a continuation of ideas I first shared in my Europeana Tech keynote: Open for engagement: GLAM audiences and digital participation.  The second set are particularly aimed at helping museums think about how to market participatory projects and sustain them over the longer term by making them more visible in the museum as a whole.

Best practice in participatory project design

  • Have an answer to 'Why would someone spend precious time on your project?'
  • Be inspired by things people love
  • Design for the audience you want
  • Make it a joy to participate
  • Don't add unnecessary friction, barriers (e.g. don't add sign-up forms if you don’t really need them, or try using lazy registration if you really must make users create accounts)
  • Show how much you value contributions (don't just tell people you value their work)
  • Validate procrastination – offer the opportunity to make a difference by providing meaningful work
  • Provide an easy start and scaffolded tasks (see e.g. Nina Simon's Self-Expression is Overrated: Better Constraints Make Better Participatory Experiences)
  • Let audiences help manage problems – let them know which behaviours are acceptable and empower them to keep the place tidy
  • Test with users; iterate; polish

Best practice within your museum

  • Fish where the fish are – find the spaces where people are already engaging with similar content and see how you can slot in, don't expect people to find their way to you unless you have something they can’t find anywhere else
  • Allow for community management resources – you’ll need some outreach to existing online and offline communities to encourage participation, some moderation and just a general sense that the site hasn’t been abandoned. If you can’t provide this for the life of the project, you might need to question why you’re doing it.
  • Decide where it's ok to lose control. Try letting go… you may find audiences you didn't expect, or people may make use of your content in ways you never imagined. Watch and learn and tweak in response – this is a good reason to design in iterations, and to go into public or invited-beta earlier rather than later. 
  • Realistically assess fears, decide acceptable levels of risk. Usually fears can be turned into design requirements, they’re rarely show-stoppers.
  • Have a clear objective, ideally tied to your museum’s mission. Make sure the point of the project is also clear to your audience.
  • Put the audience needs first. You’re asking people to give up their time and life experience, so make sure the experience respects this. Think carefully before sacrificing engagement to gain efficiency.
  • Know how to measure success
  • Plan to make the online activity visible in the organisation and in the museum. Displaying online content in the museum is a great way to show how much you value it, as well as marketing the project to potential contributors.  Working out how you can share the results with the rest of the organization helps everyone understand how much potential there is, and helps make online visitors ‘real’.
  • Have an exit strategy – staff leave, services fold or change their T&Cs

I'd love to know what you think – what have I missed?  [Update: for some useful background on the organisational challenges many museums face when engaging with technology, check out Collections Access and the use of Digital Technology (pdf).]

More on designing museum projects for audience participation

I prepared this activity for one of the museums, but on the day the discussion after my talk went on so long that we didn't need to use a formal structure to get people talking. In the spirit of openness, I thought I'd share it. If you try it in your organisation, let me know how it goes!

The structure – exploratory idea generation followed by convergence and verification – was loosely based on the 'creativity workshops' developed by City University's Centre for Creativity (e.g. the RESCUE creativity workshops discussed in Use and Influence of Creative Ideas and Requirements for a Work-Integrated Learning System).  It's designed to be a hackday-like creative activity for non-programmers.

In small groups…

  • Pick two strategic priorities or organisational goals…
  • In 5 minutes: generate as many ideas as possible
  • In 2 minutes: pick one idea to develop further

Ideas can include in-gallery and in-person activity; they must include at least two departments and some digital component.

Developing your idea…

  • You have x minutes to develop your idea
  • You have 2 minutes each to report back. Include: which previous museum projects provide relevant lessons? How will you market it? How will it change the lives of its target audience? How will it change the museum?
  • How will you alleviate potential risks?  How will you maximise potential benefits?
  • You have x minutes for general discussion. How can you build on the ideas you've heard?

For bonus points…

These discussion points were written for another museum, but they might be useful for other organisations thinking about audience participation and online collections:

What are the museum’s goals in engaging audiences with collections online?

  • What does success look like?
  • How will it change the museum?
  • Which past projects provide useful lessons?

How can the whole organisation be involved in supporting online conversations?

  • What are the barriers?
  • What small, sustainable steps can be taken?
  • Where are online contributions visible in the museum?

What are the right questions about museum websites?

It should be fairly simple to answer the question, 'what's the point of a museum website?' because the answer should surely be some variant on 'to further the mission and goals of the museum'.

But what is it about being online, about being on or of the web that problematises that answer?

Is it that there are so many other sites providing similar content, activities and access to knowledge? Is it that the niche role many museums play in their local communities doesn't translate into online space? Is it that other sites got in earlier and now host better conversations about museum collections?

Or is the answer not really problematic – there have always been other conversations about collections and ways of accessing knowledge, and the question is really about where museums and their various activities fit in the digital landscape?

I don't know, but it's Friday night and I should be on my way out, so I'm going to turn the question over to smarter minds… What are the right questions and why is it difficult for a museum to translate its mission directly to its website?

Update, the next day… This quote from an article, Lost professors: we won’t need academics in 60 years, addresses one of my theories about why translating a museum's mission into the online context is problematic:

…there are probably several hundred academics in Australia who lecture on, say, regression analysis, and very few of us could claim to be in the top 1% – actually only 1% of us.

The web allows 100% of the students to access the best 1%. Where is the market for duplication of mediocre course material by research academics?

I'm not saying any museum content is mediocre, of course, but the point about the challenges of the sudden visibility of duplicated content remains. If the museum up the road or in the next town has produced learning activities or expert commentary about the same regional/national history events or objects, does it further your mission to post similar content? What content or activities can you host that are unique to your museum, either because of your particular niche collections or context or because no-one else has done it yet?

Also, for further context, Report from 'What's the point of a museum website' at MCN2011 and Brochureware, aggregators and the messy middle: what's the point of a museum website? (which is really about 'what forms do museum websites take'), and earlier posts on What would a digital museum be like if there was never a physical museum? and the related Thoughts towards the future of museums for #kulturwebb, What's the point of museum collections online? (Angelina's succinct response: digital content recognises audience experiences, providing opportunities for personal stories to form a significant part of the process of interpretation) and finally, thoughts about The rise of the non-museum – museums are possibly the least agile body in the cultural content market right now.

Quick and dirty Digital Humanities Australasia notes: day 2

What better way to fill in stopover time in Abu Dhabi than continuing to post my notes from DHA2012? [Though I finished off the post and re-posted once I was back home.] These are my very rough notes from day 2 of the inaugural Australasian Association for Digital Humanities conference (see also Quick and dirty Digital Humanities Australasia notes: day 1 and Slow and still dirty Digital Humanities Australasia notes: day 3). In the interests of speed I'll share my notes and worry about my own interpretations later.

Keynote panel, 'Big Digital Humanities?'

Day 2 was introduced by Craig Bellamy, and began with a keynote panel with Peter Robinson, Harold Short and John Unsworth, chaired by Hugh Craig. [See also Snurb's liveblogs for Robinson, Short and Unsworth.] Robinson asked 'what constitutes success for the digital humanities?' and further, what does the visible successes of digital humanities mask? He said it's harder for scholars to do high quality research with digital methods now than it was 20 years ago. But the answer isn't more digital humanists, it's having the ingredients to allow anyone to build bridges… He called for a new generation of tools and methods to support the scholarship that people want to do: 'It should be as easy to make a digital edition (of a document/book) as it is to make a Facebook page', it shouldn't require collaboration with a digital humanist. To allow data made by one person to be made available to others, all digital scholarship should be made available under a Creative Commons licence (publishers can't publish it now if it's under a non-commercial licence), and digital humanities data should be structured and enriched with metadata and made available for re-use with other tools. The model for sustainability depends on anyone and everyone being able to access data.

Harold Short talked about big (or at least unescapable) data and the 'Svensson challenge' – rather than trying to work out how to take advantage of infrastructure created by and for the sciences, use your imagination to figure out what's needed for the arts and humanities. He called for a focus on infrastructure and content rather than 'data'.

John Unsworth reminded us that digital humanities is a certain kind of work in the humanities that uses computational methods as its research methods. It's not just using digital materials, though it does require large collections of data – it also requires a sense of how the tools work.

What is the digital humanities?

Very different versions of 'digital humanities' emerged through the panel and subsequent discussion, leaving me wondering how they related to the different evolutionary paths of digital history and digital literature studies mentioned the day before. Meanwhile, on the back channel (from the tweets that are to hand), I wondered if a two-tier model of digital humanities was emerging – one that uses traditional methods with digital content (DH lite?); another that disrupts traditional methods and values. Though thinking about it now, the 'tsunami' of data mentioned is disruptive in its own right, regardless of the intentional choices one makes about research practices (which might have been what Alan Liu meant when he asked about 'seamless' and 'seamful' views of the world)… On twitter, other people (@mikejonesmelb, @bestqualitycrab, @1n9r1d) wondered if the panel's interpretation of 'big' data was gendered, generational, sectoral, or any other combination of factors (including the messiness and variability of historical data compared to literature) and whether it could have been about 'disciplinary breadth and inclusiveness' rather than scale.

Data morning session

The first speaker was Toby Burrows on 'Using Linked Data to Build Large‐Scale e‐Research Environments for the Humanities'. [Update: he's shared his slides and paper online and see also Snurb's liveblog.] Continuing some of the themes from the morning keynote panel, he said that the humanities has already been washed away in the digital deluge, the proliferation of digital stuff is beyond the capacity of individual researchers. It's difficult to answer complex humanities questions only using search with this 'industrialised' humanities data, but large-scale digital libraries and collections offer very little support for functions other than search. There's very little connection between data that researchers are amassing and what institutions are amassing.

He's also been looking at historians' and humanists' research practices [and selfishly I was glad to see many parallels with my own early findings]. The tools may be digital rather than paper and scissors, but historians are still annotating and excerpting as they always have. The 'sharing' part of their work has changed the most – it's easier to share, and they can share at an earlier stage if they choose to do that, but not a lot has changed at the personal level.

Burrows said applying a linked data approach to manuscript research would go a long way towards addressing the complexity of the field. For example, using global URIs for manuscripts and parts; separating names and concepts from descriptive information; and using linked data functions to relate scholarly activities (annotations, excerpts, representations etc) to manuscript descriptions, objects and publications. Linked data can provide a layer of entities that sits between research activities and descriptions/collections/publications, which avoids conflating the entities and the source material. Multiple naming schemes are necessary for describing entities and relationships – there's no single authoritative vocabulary. It's a permanent work in progress, with no definitive or final structure. Entities need to include individuals as well as categories, with a network graph showing relatedness and the evidence for that relatedness as the basic structure.
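
To make that architecture concrete, here's a toy sketch with the rdflib library: the manuscript and one of its parts get global URIs, and a scholarly annotation is modelled as an entity in its own right, related to the part rather than baked into its description. The namespace and most of the vocabulary are invented for illustration – this is not HuNI's or Burrows' actual model, though oa: is the real W3C Web Annotation namespace.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace('http://example.org/manuscripts/')  # invented namespace
OA = Namespace('http://www.w3.org/ns/oa#')         # W3C Web Annotation vocabulary

g = Graph()
ms = EX['ms-123']               # a global URI for the manuscript...
part = EX['ms-123/folio-2r']    # ...and for one of its parts
note = EX['annotation-1']       # a scholarly activity as its own entity

g.add((ms, RDF.type, EX.Manuscript))
g.add((part, EX.isPartOf, ms))

# The annotation relates a research activity to the manuscript part without
# conflating the two, so descriptions and scholarship stay separable.
g.add((note, RDF.type, OA.Annotation))
g.add((note, OA.hasTarget, part))
g.add((note, RDFS.comment, Literal('Scribe B takes over here?')))

print(g.serialize(format='turtle'))
```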

He suggested a focus on organising knowledge, not collections, whether objects or texts. Collaborative activities should be based around this knowledge, using tools that work with linked data entities. This raised the issue of contested ground and the application of labels and meaning to data: your 'discovery' is my 'invasion'. This makes citizen humanities problematic – who gets to describe, assign, link, and what does that mean for scholarly authority?

My notes aren't clear but I think Burrows said these ideas were based on analysis of medieval manuscript research, which Jane Hunter had also worked on, and they were looking towards the architecture for HuNI. It was encouraging to see an approach to linked data so grounded in the complexity of historians research practices and data, and is yet another reason I'm looking forward to following HuNI's progress – I think it will have valuable lessons for linked data projects in the rest of the world. [These slides from the Linked Open Data workshop in Melbourne a few weeks later show the academic workflow HuNI plans to support and some of the issues they'll have to tackle.]

The second speaker was the University of Sydney's Stephen Hayes on 'how linked is linked enough?'. [See also Snurb's liveblog.] He's looking at projects through a linked data lens, trying to assess how much further projects need to go to comfortably claim to be linked data. He talked about the issues projects encountered trying to get to be 5 star Linked Data.

He looked at projects like the Dictionary of Sydney, which expresses data as RDF as well as in a public-facing HTML interface and comes close to winning 5 stars. It is a demonstration of the fact that once data is expressed in one form, it can easily be expressed in another – stable entities can be recombined to form new structures. The project is powered by Heurist, a tool for managing a wide range of research data. The History of Balinese Painting could not find other institutions that exposed Balinese collection data in programmable form so they could link to them (presumably a common problem for early adopters, though exposing your own data at least helps solve the 'chicken or the egg' problem that dogs linked data in cultural heritage and the humanities). The site's URLs don't return useful metadata but they do try to refer to image URLs, so it's 'sorta persistent'. He gave it a rating of 3.5 stars. Other projects mentioned (also built on Heurist?) were the Charles Harpur Critical Archive, rated at 3.5 stars, and Virtual Zagora, rated at 3 stars.

The paper was an interesting discussion of the team work required to get the full 5 stars of linked data, and of the trade-offs involved: developing functions for structured data (e.g. implementing schema.org's painting markup versus focussing on the quality of the human-facing pages); reassuring curators about how much data would be released and what would be kept back; developing ontologies throughout a project or in advance; and the overhead of mapping other projects' concepts to their own version of Dublin Core.

The final paper in the session was 'As Curious An Entity: Building Digital Resources from Context, Records and Data' by Michael Jones and Antonina Lewis (abstract). [See also Snurb's liveblog.] They said that improving the visibility of relationships between entities enriches archives, as does improving relationships between people. The title quote in full is 'as curious an entity as bullshit writ on silk' – if the parameters, variables and sources of data are removed from material, then it's just bullshit written on silk. Visualisations remove sources, complexity and 'relative context', and would be richer if they could express changes in data over time and space. They asked how one would know that information presented in a visualisation is accurate if it doesn't cite sources – you must seek and reference original material to support context layers.

They presented an overview of the Saulwick Archive project (Saulwick ran polls for the Fairfax newspapers for years) and the Australian Women's Register, discussed common issues faced in digital humanities, and considered the role of linked data and human relationships in building digital resources. They discussed the value of maintaining relationships between archives and donors after the transfer of material, and the need to establish data management plans that make provision for raw data and authoritative versions of related contextual material, and that retain data to make sense of the archives in the future. The Australian Women's Register includes content written for the site and links out to the archival repositories and libraries where the records are held. In a lovely phrase, they described records as the 'evidential heart' of the context and data layers. They also noted that the keynote overlooked non-academic re-use of digital resources, which is another argument for making data available where possible.

Digital histories session

The first paper was 'Community Connections: The Renaissance of Local History' by Lisa Murray. Murray discussed the 'three Cs' needed for local history: connectivity, community, collaboration.

Is the process of geo-referencing forcing historians to be more specific about when or where things happened? Are people going from the thematic to the particular? Is it exciting for local historians to see how things fit into state or national narratives? Digital history has enormous potential for local and family history, and for representing complicated relationships within a community and how they've changed over time. Digital history doesn't have to be article-centric – it enables new forms of presentation. Historians have to acknowledge that Wikipedia is aligned with historians' processes. Local history is strongly represented on Wikipedia. The Dictionary of Sydney provides a universal framework for accessing Sydney's history.

The democratisation of historical production is exciting, but it raises challenges for public understandings of how history is undertaken and represented. Are some histories privileged? Making History (a project by Museum Victoria and Monash University) encourages the use of online resources, but does that privilege digitised sources, and will others be neglected? Are easily accessible sources privileged, and does that change what history is written? What about community collections or vast state archives that aren't digitised?

History research methodologies are changing – Google etc. are shaping how research is undertaken; the ubiquity of keyword searching reinforces the primacy of names. She noted the impact of family historians on how archives prioritise work. It's not just about finding sources – to produce good history you need to analyse them. Professional historians are no longer the privileged producers of knowledge. History can be parochial and inclusive, but it can also lack a sense of historical perspective and context. Digital history production amplifies tensions between popular history and academic history [and presumably between amateur and academic historians?].

Apparently primary school students study more local history than university students do. Local and community history is produced by a broad spectrum of the community, but relatively few academic historians are participating. There's a risk of favouring quirky facts over significance and context. Unless history is more widely taught, local history will be tarred with the same brush as antiquarianism. History is not only about narrative and context… Historians need to embrace the renaissance of local and community history.

In the questions there was some discussion of the implications of Sydney's city archives being moved to a more inconvenient physical location. The justification is that it's available through Ancestry but that removes it from all context [and I guess raises all the issues of serendipity etc in digital vs physical access to archives].

The next speaker was Tim Sherratt on 'Inside the bureaucracy of White Australia'. His slides are online and his abstract is on the Invisible Australians site. The Invisible Australians project is trying to answer the question of what the White Australia policy looked like to a non-white Australian. He talked about how digital technology can help explore the practice of exclusion as legislation and administrative processes were gradually elaborated. Chinese Australians who left Australia and wanted to return had to prove both their identity and their right to land to convince officials they could return: 'every non-white resident was potentially a prohibited immigrant just waiting to be exposed'. He used topic modelling on file titles from archival series and was able to see which documents related to the White Australia policy. This is a change from working through the hierarchical structures of archives to working directly with the content of archives. It provides a better picture of what hasn't survived and what's missing, and it would have many other exciting uses. [His post on Topic modelling in the archives explains it better than my summary would.]
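Sherratt's post explains his actual workflow; as a rough illustration of the general technique, here's a minimal topic-modelling sketch over file titles using gensim. The titles, tokenisation and parameters are invented, and this is not his pipeline:

```python
# A minimal topic-modelling sketch over archival file titles using gensim.
# The titles, naive tokenisation and parameters are invented for
# illustration; this is not Sherratt's actual pipeline.
from gensim import corpora
from gensim.models import LdaModel

titles = [
    "Certificate exempting from dictation test - Ah Chee",
    "Application for Certificate of Domicile - Wong Sing",
    "Departure and return of Chinese resident - Lee Young",
    "Correspondence re naturalisation application",
]

# Tokenise naively and build a bag-of-words corpus.
texts = [[w for w in title.lower().split() if len(w) > 2] for title in titles]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Fit a small LDA model and inspect which words cluster into which topics.
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```

At real scale (tens of thousands of titles), the topics that emerge can flag which series relate to a policy area without walking the archive's hierarchy first.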

The final paper was Paul Turnbull on 'Pancake history'. He noted that in e-research there's a difference between what you can use in teaching and what makes people nervous in the research domain. He finds it ironic that professional advancement for historians is tied to writing about doing history rather than doing history. He talked about the need to engage with disciplinary colleagues who don't engage with digital humanities, and issues around historians taking digital history seriously.

Sherratt's talk inspired discussion of funding small-scale as well as large-scale infrastructure, possibly through crowdfunding. Turnbull also suggested 'seeding ideas and sharing small apps is the way to go'.

[Note from when I originally posted this: I don't know when my flight is going to be called, so I'll hit publish now and keep working until I board – there's lots more to fit in for day 2! In the afternoon I went to the 'Digital History' session. I'll tidy up when I'm in the UK as I think Blogger is doing weird RTL things because it may be expecting Arabic.]

See also Slow and still dirty Digital Humanities Australasia notes: day 3.

Quick and dirty Digital Humanities Australasia notes: day 1

As always, I should have done this sooner and tidied them up more, but better rough notes than nothing, so here goes… The Australasian Association for Digital Humanities held their inaugural conference in Canberra in March, 2012.  You can get an overall sense of the conference from the #DHA2012 tweets (I've put a CSV archive of #DHA2012 tweets from searchhash.com here, but note it's not on Australian time) and from the keynotes.
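If you want the tweet timestamps on Australian time, a quick pandas sketch for shifting them is below. The filename and 'created_at' column name are assumptions about the searchhash.com export, so check the actual CSV header first:

```python
# A quick sketch for shifting the tweet archive onto Australian time with
# pandas. The filename and 'created_at' column name are assumptions about
# the searchhash.com export; check the actual CSV header.
import pandas as pd

tweets = pd.read_csv("dha2012_tweets.csv")
tweets["created_at"] = (
    pd.to_datetime(tweets["created_at"], utc=True)
      .dt.tz_convert("Australia/Canberra")
)
print(tweets["created_at"].head())
```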

In his opening keynote on the movements between close and distant reading, Alan Liu observed that the crux of the 'reading' issue depends on the field, and further, that 'history is on a different evolutionary branch of digital humanities to literary studies'.  This is something I've been wondering about since finding myself back in digital humanities, and it was possibly reflected in the variety of papers in the overall programme.  I was generally following sessions on digital history, geospatial themes and crowdsourcing, but there was so much in the programme that you could have followed a literary studies line and had a totally different conference experience.

In the next session I went to a panel on 'Connecting Australia's Cultural Datasets: A Vision for Collaboration', with various people from the new 'Humanities Networked Infrastructure' (HuNI) (more background) presenting.  It started with Deb Verhoeven on 'jailbreaking cultural data' and the tension identified by Brand: "information wants to be expensive because it's so valuable.  The right information in the right place just changes your life.  On the other hand, information wants to be free, because the cost of getting it out is lower and lower all the time. So you have these two things fighting against each other". 'Information wants to be social': she discussed the need to understand the value of research in terms of community engagement, not just as academically ranked output, and to return research to the communities it investigates in meaningful ways.
 
Other statements that resonated were the need for organisational, semantic and technical interoperability in datasets to create collaborative environments. Collaboration requires data integration and exchange as well as dealing with different ideas about what 'data' is in different disciplines in the humanities. Collaboration in the cultural datasets community can follow unmet needs: discover data that's currently hidden, make connections between disparate data sources, publish and share connections.

Ross Harley talked about how interoperability facilitates serendipity and trying to find new ways for data to collide. In the questions, Ingrid Mason asked about parallels with the GLAM (galleries, libraries, archives and museums) community, but it was also pointed out that GLAMs are behind in publishing their data – not everything HuNI wants to use is available yet.  I pointed out (on the twitter back channel) that requests for GLAM information from intensive users (e.g. researchers) helps memory institutions make the case for publishing more data – it's still all a bit chicken-or-the-egg.

After lunch I went to the crowdsourcing session (not least cos I was presenting early results from my PhD in it).  The first presentation was on 'crowdsourcing semantic tags on 3D museum artefacts', which could have amazing applications for teaching material culture and criticism, as well as for source communities, because it lets people annotate specific locations on a 3D model.  Interestingly, during the questions someone reported that visitors to a campus classics museum said they were enjoying seeing the objects in person but also wanted access to electronic versions – it's fascinating watching audience expectations change.
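To make 'annotating a specific location on a 3D model' concrete, here's one way such a tag might be stored – a minimal sketch with invented field names, not the presenters' actual data model:

```python
# A minimal sketch of a semantic tag anchored to a point on a 3D model.
# Field names and values are invented for illustration; this is not the
# presenters' actual data model.
from dataclasses import dataclass


@dataclass
class ModelAnnotation:
    model_id: str      # identifier of the 3D artefact
    position: tuple    # (x, y, z) point on the model's surface
    tag: str           # the semantic tag itself
    contributor: str   # who added it


note = ModelAnnotation(
    model_id="artefact-042",
    position=(0.12, 0.87, 0.33),
    tag="maker's mark",
    contributor="visitor-7",
)
print(note)
```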

The next presentation, 'Optimising crowdsourcing websites to increase volunteer participation', was a case study of NYPL's What's on the menu? by Donelle McKinley, who used MECLAB/Flint McGlaughlin's Conversion Sequence heuristic (clarity of value proposition, motivation, incentive, friction, anxiety) to assess how well the project's design was optimised to motivate audience participation.  Donelle's analysis is really useful for people thinking about designing for crowdsourcing, but I'm not sure my notes do it justice, and I'm afraid I didn't get many notes for Pauline Cockrill's 'Using Web 2.0 to make new connections in community history' as I was on just afterwards.  One point I tweeted was about a quick win for crowdsourcing: using real-world communities as pointers to successful online collaborations, but I'm not sure now who said it.

One comment I noted during the discussion was "a real pain about Old Weather was that you'd get into working on a ship and it would just sail off on you" – interfaces that work for the organisation don't always work for the audience.  This session was generally useful for clarifying my thoughts on the tension between optimising for efficiency or engagement in cultural heritage crowdsourcing projects.

In the interests of getting this posted I'll stop here and call this 'day 1'. I'm not sure if any of the slides are available yet, but I'll update and link to any presentations or other write-ups I find. There's a live blog of many sessions at http://snurb.info/taxonomy/term/137.

[Update: I've posted about Day 2 at Quick and dirty Digital Humanities Australasia notes: day 2 and Slow and still dirty Digital Humanities Australasia notes: day 3.]

Museum technologists redux: it's not about us

Recently there's been a burst of re-energised conversations on Twitter, blogs and inevitably at MW2012 (Museums and the Web 2012) about museum technologists, about breaking out of the bubble, and about digital strategies vs plain old strategies for museums.  This is a quick post (because I only ever post when I should be writing a different paper) to make sure my position is clear.

If you're reading this you probably know that these are important issues to discuss, and it's exciting thinking about the organisational changes museums will need to make to stay relevant, but it's also important to step back and remind ourselves that ultimately, it's not about us.  It's not about our role as museum technologists, or museums as organisations.

Museum technologists should be advocates for the digital audience, and guide museums in creating integrated, meaningful experiences, but we should also make sure that other museum staff know we still share their values and respect their expertise, and dispel myths about being zealots of openness at the expense of other requirements or wanting to devalue the physical experience.

It's about valuing the digital experiences our audiences have in our galleries, online and on the devices they carry in their pockets.  It's about understanding that online visitors are real visitors too.  It's about helping people make the most of their physical experiences by extending and enhancing their understandings of our collections and the world that shaped them.  It's about showing the difference digital makes by showing the impact it can have for a museum seeking to fulfil its mission for audiences it can't see as well as those right under its nose.

I'm a museum technologist, but maybe in my excitement about its potential I haven't been clear enough: I'm not in love with technology, I'm in love with what it enables – better museums, and better museum experiences.

How things change: the Google Art Project (again)

The updated Google Art Project has been launched with loads more museums contributing over 30,000 artworks.  The interface still seems a bit sketchy to me (sometimes you can open links in a new tab, sometimes you can't; mystery meat navigation; the lovely zoom option isn't immediately discoverable; the thumbnails that appear at the bottom don't have a strong visual connection with the action that triggers their appearance; and the only way I could glean any artist/title information about the thumbnails was by looking at the URL), but it's nice to see options for exploring by collection (collecting institution, I assume), date or artist emphasised in the interface. 

Anyway, it's all about the content – easy access to high-quality zoomable images of some of the world's best artworks in an interface with lots of relevant information and links back to the holding institution is a win for everyone.  And if the attention (and traffic) makes museums a little jealous, well, it'll be fascinating to see how that translates into action.  After all, keeping up with the Joneses seems to be one way museums change…

Reading some online stories about the launch, I was struck by how far conversations about traditional and online galleries have come.  From one:

As users explore the galleries they can also add comments to each painting and share the whole collection with friends and family. Try doing that in the Tate Modern. Actually, don’t.

Although, of course, you can – it's traditionally known as 'having a conversation in a museum'.

But in 2012, is visiting a website and sharing links online seen as a reasonable stand-in for the physical visit to a museum, leaving the in-person gallery visit for 'purists' and enthusiasts?  (This might make blockbuster exhibitions bearable.)  Or, as the consensus of the past decade has it, does it just whet the appetite and create demand for an experience with the original object, leading to more visits?