Monday, June 25, 2012

Social annotation in web search: inattentional blindness

Aditi Muralidharan, Zoltan Gyongyi, and Ed Chi. Social Annotation in Web Search. CHI '12: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, pages 1085-1094
Full paper   Extended abstract 

Knowing that other researchers are looking for answers to questions similar to ours is a good signal of the topic's interest. In this case, Aditi Muralidharan, Zoltan Gyongyi, and Ed Chi have conducted two studies on social annotation in web search using eye-tracking technology. Here in Barcelona, we have been running related experiments over the last few months and have arrived at similar conclusions, not yet published.

Their study (CHI 2012)

1) 11 users performed several personalized search tasks. Half of the results had snippets with social annotations; the other half did not. The test was recorded with an eye tracker, and the users then watched the recording in a retrospective think-aloud (RTA) session. The conclusion is that most people did not notice the social annotations, and those who did see them did not pay much attention. Why don't people see them? (See the second test.)

2) 12 users (all of whom knew each other) performed several tasks on mock-ups of search engine result pages. Again, half of the results had snippets with social annotations and the other half did not. The pages were mock-ups because they had been modified to present the social annotations with variations: big/small profile picture, above/below the snippet, first/second position, long/short snippets. The tests were recorded with an eye tracker. The conclusion is that people notice the social annotation when the picture is big and when the annotation appears above the snippet. The reason is that users follow a reading pattern on result pages and do not look beyond the elements they recognize as useful in a first scan: title and URL. So new elements like social annotations trigger "inattentional blindness" (as Mack and Rock named it). The phenomenon is widely known: you may have seen the video in which a person in a gorilla suit walks through a group of people passing a basketball, and most viewers fail to notice it.
In a future paper, the authors could run more experiments to find out whether changing the style of these snippets would make them more visible.

Our study (We wish CHI 2013  :-)  )

Our study has not been published yet, so I will not share many results here for now. We studied 5 kinds of snippets: Google Places, Google+, Google Author, multimedia, and reviews. We duplicated each results page, removing the rich snippets in one of the two versions. We prepared 10 SERPs (each with its duplicated plain-snippet page). 60 users performed the 10 searches; each user saw 5 pages with rich snippets and 5 pages with the plain version of the snippet. The sessions were recorded with an eye tracker.
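The counterbalancing described above (each user sees 5 rich-snippet pages and 5 plain pages out of 10 SERPs) could be sketched like this. This is an illustrative reconstruction, not our actual assignment protocol; the function name and seed are assumptions:

```python
import random

N_USERS = 60   # participants, as in the study
N_SERPS = 10   # each SERP exists in a "rich" and a "plain" version

def assign_conditions(n_users=N_USERS, n_serps=N_SERPS, seed=42):
    """For each user, randomly pick which half of the SERPs is shown
    with rich snippets; the other half is shown plain."""
    rng = random.Random(seed)
    assignments = []
    for _ in range(n_users):
        order = list(range(n_serps))
        rng.shuffle(order)
        rich = set(order[: n_serps // 2])  # 5 rich-snippet pages per user
        assignments.append(
            {serp: ("rich" if serp in rich else "plain")
             for serp in range(n_serps)}
        )
    return assignments

assignments = assign_conditions()
# every user sees exactly 5 rich and 5 plain pages
assert all(list(a.values()).count("rich") == 5 for a in assignments)
```

A fuller design would also balance how often each individual SERP appears in its rich vs. plain version across users, but this per-user split captures the basic idea.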
The preliminary results show that, in general, there is no significant difference (t-test) between users' visual behavior when looking at pages with rich snippets and at the same pages without them.
The metrics we considered were mainly the fixation duration on the rich snippet (and on its plain-snippet equivalent) and the time to first fixation on the rich snippet (and on its plain-snippet equivalent), as well as the click count.
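The comparison itself is a standard independent-samples t-test on the per-user metric. A minimal sketch with SciPy, using made-up fixation durations (the numbers below are illustrative only, not our data):

```python
from scipy import stats

# Hypothetical fixation durations in seconds, one value per user:
# time spent fixating the rich snippet vs. its plain equivalent.
rich_fixations  = [0.42, 0.51, 0.38, 0.47, 0.55, 0.40, 0.49, 0.44]
plain_fixations = [0.45, 0.48, 0.41, 0.50, 0.52, 0.39, 0.46, 0.43]

t_stat, p_value = stats.ttest_ind(rich_fixations, plain_fixations)
if p_value > 0.05:
    print("No significant difference between rich and plain snippets")
```

The same test applies to time to first fixation; click counts, being discrete, might call for a non-parametric alternative instead.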
We hope to share the study soon with the scientific community and with SEO practitioners.
These heat maps show the fixation duration on the SERPs with plain snippets and with "Google+" snippets in 6th and 2nd position. No differences.

And here are some statistical results for the average fixation duration on the studied snippets in a top position (position 2) and in a bottom position (position 6). No significant differences:

Thursday, June 21, 2012

Thinking of buying an eye tracker?

Colleagues around the world sometimes ask me which eye tracker they should buy or rent.
My answer is: "it depends on the kind of study you plan to run".

There are several firms on the market selling eye-tracking devices based on infrared (for serious research, I don't yet trust other technologies based on recording the user's eyes with a webcam). Among the infrared-based vendors, I know Tobii. I have a Tobii T1750 (a second-hand unit from 2007, in fact) that is no longer on the market, so I cannot be sure which newer models to recommend, but here are some models and the uses that I (and Tobii) recommend for each one:

  • Tobii T60 (60 Hz) and T120 (120 Hz). Useful for research aiming to study users' behavior on websites, images, and anything that can be shown on the screen. These devices integrate the infrared sensors into the screen. Useful for usability studies.
  • Tobii X60 and X120. Useful for research on an external screen, or on other devices such as mobile phones or tablets if you add a special mounting stand for them. It is a great option because it is lighter and more flexible. If I could buy another one now, this would be my first choice.
  • Tobii T60 XL. Similar to the T60 but wider. Useful for studies that need a big screen. I don't see the point for usability studies.
  • Tobii Glasses. Necessary if the research involves physical objects (supermarket products, museums, etc.). I used them in a study on connected TV and video games. They work properly, but the resolution is lower than in screen-based eye trackers, and the analysis is more complex because the data is video. They are not convenient for web studies.
About to make your decision? Don't be in a hurry: it is an expensive device with expensive software. Take it easy, check the Tobii website, compare, and let me know if we can plan a joint research project!