Sunday, 24 April 2011

Balsamiq Mockups

I have started using a new wireframing tool called Balsamiq and I have to say I quite like it.

It's a quick-sketch, drag-and-drop tool for creating wireframes. Coming from Axure and Visio, I found the commands quite intuitive and quick to pick up.

Best things:
  • fast
  • sketchy-style diagrams
  • lots of pre-defined graphics/icons

Worst things:
  • No background master templates yet - you need to create a diagram then save it as an image to insert in the background
  • Loses relative linking between files when moving a folder: arrgh

It doesn't have the interactivity that Axure provides, but that isn't what it's for; you get speed instead.

One thing I haven't explored yet but am keen to try is the user-created collection of additional graphics you can add to your library.

We are using it for wireframes, simple click-through flows and illustrating UX requirements.

Anyone else using this tool? What do you think of it?

Example wireframe from Balsamiq site: YouTube

Example wireframe from Balsamiq site: Wiki







Saturday, 19 March 2011

Neglecting content strategy


I recently attended the Webstock conference in Wellington in February. One of the talks I liked was by Kristina Halvorson, a content consultant from the States. Her company is Brain Traffic (http://blog.braintraffic.com).

The talk rang a lot of bells for me; guilty bells mostly. She described the ‘deer in the headlights’ feeling of being handed some marketing materials in the last two weeks of development and asked to write all the content for a new website. What is actually needed is a content strategy: the vision and lifecycle of content on a site. It asks what kind of content is needed, how it should be structured, and who will create it and make decisions about it. She argued that too much content is treated as ‘launch and leave’, creating over time a morass of outdated and unhelpful content. A strategy for how content will be created and governed would avoid this.

As a user experience person I have usually commented on or advocated for content at the level of ‘this is the kind of content people want’ or ‘this content isn’t working’. When clients have said they have the content in hand, usually something to do with marketing materials, I haven’t demurred terribly. Next time I’ll push harder for content to be considered up-front as part of ensuring a good experience. Though this sounds like common sense, it does seem to get neglected. Hopefully this will avoid testing piles of dense text unsuitable for the web, users getting lost in labyrinthine piles of content or, my favourite, search throwing up board meeting minutes because the relevant content has not been tagged correctly.

I also think user experience conferences should include a few more content people speaking. Very useful.

What are other people’s experiences?

Tuesday, 15 March 2011

Extrapolating positive findings from user testing

User testing is qualitative research. The main purpose of user testing is to find and remove problems. Given this, I have always assumed that the removal of negatives is good for the design, even if we cannot know the true frequency of problems in the wider population. That is, we know difficulty in the lab will mean difficulty in the real world, even if we don't know whether 20% or 60% of people will have that problem.
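
As a rough back-of-the-envelope sketch of that point (my own illustration, with assumed figures and an assumption that participants behave independently): if a problem affects a proportion p of the wider population, the chance of seeing it at least once in even a small session is already high, even though the session tells us nothing precise about p.

    # Illustrative only: assumes each participant independently has probability p
    # of hitting the problem; the chance of observing it at least once among n
    # participants is then 1 - (1 - p)^n.
    def chance_of_seeing(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for p in (0.2, 0.6):
        print(f"problem affecting {p:.0%} of users, 5 participants: "
              f"seen {chance_of_seeing(p, 5):.0%} of the time")
    # problem affecting 20% of users, 5 participants: seen 67% of the time
    # problem affecting 60% of users, 5 participants: seen 99% of the time

Either way the problem is very likely to show up in the lab, which is why I'm comfortable acting on it without knowing its exact real-world frequency.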

For positive results I am less sure about this assumption. I have reported positive findings to show balance and to highlight what is working well, in order to encourage teams and maintain successful features (as www.usability.gov argues too). But I get increasingly uncomfortable when things such as colour preferences are extrapolated to the wider user population.

Reading about what makes a qualitative study, its key strength is showing rich, specific content in a specific context. The general argument is not that we seek to generalise to a wider population, but that we develop, or generalise, to a theory (Bryman, 2008; Creswell, 2009; Grbich, 2007).

I would argue that the temptation to generalise to a wider population is inherent. In fact clients would ask: why on earth should I do any research if I cannot generalise to my wider customers? Why do a focus group when we cannot generalise outside the session?

Williams (2000, in Bryman 2008) argues that 'moderatum generalizations' are allowable: linkages can be made to similar groups. For example, the behaviour of football hooligans at one club can be related to other case studies of hooligans at different clubs.

At a broader level I would assume that one of the points of creating theory in qualitative studies is that it is generalisable. I find the idea of 'theoretical sampling & saturation' interesting: you sample, collect data and analyse until only repeated information emerges (from grounded theory: Glaser & Strauss 1967; Strauss & Corbin 1988). Given this, if we have a consistent analysis of a positive interaction then we can assume it will work in general. We are not making a conclusion about frequency but a deeper, more abstract judgment; for example, the button 'affords' clicking through its visual design and is therefore a good design feature. However, to what level do we really theorise in user testing?

So where does this leave us? Feedback on interaction is difficult to get via other methodologies. I still want to report interactions that are working well for a design, otherwise I fear the design will be paralysed by continual redesign from scratch. We do violate the principles of generalisation: we are assuming that if everyone in the session understands the check-out process, the wider user population will too.

However, with other types of feedback, such as preferences, perhaps we should not take a 'some information is better than none' approach. User testing can create hypotheses that should then be examined using other methods such as A/B testing, surveys or web analytics. For example, if 6 out of 8 people liked the content tone, that is an indication it could be working, but it is not a definitive 'yes'. This is where I'm a fan of triangulation: using multiple data points and methodologies to get a clearer idea of the state of the world.
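
To give a sense of how weak '6 out of 8' is on its own, here is a rough sketch: the 6-out-of-8 figure is from the example above, but the method (a 95% Wilson score interval on that proportion) is my choice, and the result stretches from well under half to nearly everyone.

    import math

    def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        # Wilson score interval for a binomial proportion (95% for z = 1.96).
        p_hat = successes / n
        denom = 1 + z ** 2 / n
        centre = (p_hat + z ** 2 / (2 * n)) / denom
        half = z * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
        return centre - half, centre + half

    low, high = wilson_interval(6, 8)
    print(f"6/8 positive -> plausible true proportion roughly {low:.0%} to {high:.0%}")
    # 6/8 positive -> plausible true proportion roughly 41% to 93%

An interval that wide is exactly why I read such a result as a hypothesis to triangulate with A/B tests, surveys or analytics rather than a finding in itself.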

I haven't decided what I think about this topic: how can we be pragmatic and helpful without misinforming? I definitely welcome discussion.

References
Bryman, Alan (2008). Social Research Methods. 3rd edn. Oxford University Press.
Creswell, John W. (2009). Research Design. Sage Publications.
Grbich, Carol (2007). Qualitative Data Analysis. Sage Publications.

Wednesday, 9 March 2011

Bubbling ideas

The trigger for this blog came from a great UPA conference last year in Munich.

At the conference there was a panel discussion on how qualitative data is used in user testing. This touched on an issue I think is core to a lot of UX testing: extrapolating qualitative data. I raised my concern about extrapolating positive findings but expressed myself poorly and was disappointed by the response.

I had been thinking for a while about a way to post questions and thoughts to stay connected to UX discussion, especially as I was an independent UX freelancer. I also thought it would be good reflective practice.

It is a bit of an experiment. I would love to discuss ideas with other UX and related folk, and hope to learn :)