Faceted navigation (a.k.a. faceted search or faceted browse) is widespread on the web. It’s become an expected standard. I’ve written several posts on the subject and also run a popular workshop on faceted navigation. (Next one: 22 Oct 2011 in NYC.) Yet we really don’t know much about the ROI of faceted navigation. Or do we?

I’ve only been able to find a few studies or case studies reporting a measurable ROI for faceted navigation. There are lots of variables in play, and definitively attributing measurable gains directly to faceted navigation can be tricky. But a simple before-and-after comparison should be possible.

One helpful source is Endeca’s case studies. Examples of ROI include:

  • Kiddicare.com: 100% increase in conversion rates; 100% increase in sales; Additional 100% increase in conversion rates with PowerReviews
  • AutoScout 24: 5% increase in lead generation to dealers; 70% decrease in no results found
  • Otto Group: 130% increase in conversion rates; Doubled conversion rates for visitors originating from pay-per-click marketing programs; Search failure rate decreased from over 33% to 0.5%

If you have such data or evidence in any form, please let me and others know about it by commenting here. Note I’m not talking about studies that show how efficient faceted navigation is in terms of interaction or time on task (such as the ones reported here): I’m looking for hard evidence of ROI in real-world situations.

It’s a positive sign that so many websites have faceted navigation these days: there must be something “right” about it. But why have so many site owners and stakeholders funded and implemented faceted navigation systems? What’s the actual return against the cost of implementation and maintenance?

Some logical arguments include combinations of the following:

  • Conversion: Customers can’t buy what they can’t find. Findability is critical for ecommerce sites. Well-designed navigation plays a key role in getting people to the information or products they’re looking for, which ultimately helps you sell products or ideas. Faceted navigation has been shown to improve findability in general.
  • Efficiency: Employees lose productivity when navigation is inefficient. These days company intranets can be enormous, and the time it takes to find information impacts employee productivity. Even the smallest increase in navigational efficiency can have huge returns for a large corporation once you multiply it by thousands of employees. (Say each of 10,000 employees saves just two minutes a day: that’s more than 300 person-hours recovered daily.) Faceted navigation is efficient.
  • Confidence: Faceted navigation increases information scent. Revealing facet values gives users better insight into the type of terms and language used on the site. They are then able to match their information need with the content of the site, giving them confidence as they navigate forward through a given collection. This keeps them on the site and away from the customer support hotline.
  • “Aboutness”: Facets show the overall semantic make-up of a collection. Faceted metadata–the values associated with a collection of documents or products–gives clues to the “aboutness” of that collection. Facets convey the breadth and type of a results list, for instance. This can help users get to their target information more efficiently.
  • Reduced Uncertainty: Users don’t have to specify precise queries. With faceted navigation, users don’t rely on formulating precise keyword searches alone to find information. Instead, they can enter broad searches and use the facets in a flexible way to refine the initial query. This gives users confidence that they’re being comprehensive, reduces uncertainty in information seeking in general, and removes the frustration of finding no results. (A minimal code sketch of this refinement pattern follows this list.)
  • Navigation: Browsing categories provides a different experience than keyword search. Jared Spool and his colleagues found that people tend to continue shopping, and buying, more often when they successfully navigate to the products they want than when they reach them through a direct keyword search. Sure, keyword searching may also get them there, but that experience is different. He writes in an article entitled “Users Continue After Category Links” (Dec 2001):
    • Apparently, the way you get to the target content affects whether you’ll continue looking or not. In a recent study of 30 users, we found that if the users used Search to locate their target content on the site, only 20% of them continued looking at other content after they found the target content. But if the users used the category links to find their target, 62% continued browsing the site. Users who started with the category links ended up looking at almost 10 times as many non-target content pages as those who started with Search.
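
To make that refinement pattern concrete, here’s a minimal sketch in Python. The catalog data and field names are invented for illustration; a real faceted search engine computes facet values and counts over an index, but the narrowing logic is the same idea:

```python
# Hypothetical product catalog with faceted metadata (illustrative only).
PRODUCTS = [
    {"name": "Trail Running Shoe", "category": "shoes",   "brand": "Acme", "price": 89},
    {"name": "Road Running Shoe",  "category": "shoes",   "brand": "Zoom", "price": 120},
    {"name": "Running Jacket",     "category": "jackets", "brand": "Acme", "price": 150},
]

def search(query, **facets):
    """Broad keyword match first, then refine by whatever facet values the user picks."""
    results = [p for p in PRODUCTS if query.lower() in p["name"].lower()]
    for field, value in facets.items():
        results = [p for p in results if p[field] == value]
    return results

# A broad query casts a wide net...
print(len(search("running")))  # 3
# ...and each facet click narrows it without retyping the query.
print(search("running", category="shoes", brand="Acme"))  # the Trail Running Shoe only
```

A real implementation would also display the count next to each facet value before it’s clicked, which is exactly what prevents the dead ends behind the “no results found” figures in the Endeca numbers above.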

A well-designed faceted navigation system won’t solve all your problems. But because navigation is so central to the basic web experience, it stands to reason that there are financial implications involved. What are they exactly?

Again, if you have any support for the above contentions or have another argument around the benefits of faceted navigation, please let me know.

We’re pleased to announce the line-up for the EuroHCIR 2011 workshop–the first HCIR event to be held outside the US. It will be held as part of the British HCI Conference in Newcastle on July 4. The program will include:

  • Short slots for oral presentations of the 9 accepted papers
  • A keynote address
  • A poster session
  • Interactive group activities

You don’t need to register for the HCI Conference to participate in the workshop. Please join us!

Accepted Papers

  • The potential of Recall and Precision as interface design parameters for information retrieval systems situated in everyday environments
    Ayman Moghnieh and Josep Blat
  • The Mosaic Test: Benchmarking Colour-based Image Retrieval Systems Using Image Mosaics
    William Plant, Joanna Lumsden and Ian Nabney
  • Exploratory Search in an Audio-Visual Archive: Evaluating a Professional Search Tool for Non-Professional Users
    Marc Bron, Jasmijn Van Gorp, Frank Nack and Maarten De Rijke
  • A Taxonomy of Enterprise Search
    Tony Russell-Rose, Joe Lamantia and Mark Burrell
  • Evaluating the Cognitive Impact of Search User Interface Design Decisions
    Max L. Wilson
  • Supplying Collaborative Source-code Retrieval Tools to Software Developers
    Juan M. Fernández-Luna, Juan F. Huete and Julio Rodriguez-Cano
  • Problem Solved: A Practical Approach to Search Design
    Vegard Sandvold
  • Back to MARS: The unexplored possibilities in query result visualization
    Alfredo Ferreira, Pedro B. Pascoal and Manuel J. Fonseca
  • Interactive Analysis and Exploration of Experimental Evaluation Results
    Emanuele Di Buccio, Marco Dussin, Nicola Ferro, Ivano Masiero, Giuseppe Santucci and Giuseppe Tino

Accepted Posters

  • Towards User-Centered Retrieval Algorithms
    Manuel J. Fonseca
  • Design Thinking Search User Interfaces
    Arne Berger
  • The Development and Application of an Evaluation Methodology for Person Search Engines
    Roland Brennecke, Thomas Mandl and Christa Womser-Hacker

I’m honored to be on the organizing team for the first European workshop on HCIR at the HCI 2011 conference in Newcastle on July 4. See the workshop website for more details.

We are looking for submissions from industry professionals as well as from academics. If you work in a related area such as IA, UX, or search systems design, we’d love to hear about your practical experience in the form of a short position paper. The call for papers is now open.

What is HCIR, you ask? Human-Computer Information Retrieval (HCIR) is a relatively new area of investigation that brings together the concerns of human-computer interaction (HCI) and information retrieval (IR). The term was coined by Professor Gary Marchionini around 2005. Wikipedia defines HCIR as:

…the study of information retrieval techniques that bring human intelligence into the search process. The fields of human–computer interaction (HCI) and information retrieval (IR) have both developed innovative techniques to address the challenge of navigating complex information spaces, but their insights have often failed to cross disciplinary borders. Human–computer information retrieval has emerged in academic research and industry practice to bring together research in the fields of IR and HCI, in order to create new kinds of search systems that depend on continuous human control of the search process.

HCIR includes a range of techniques and approaches that allow people to better interact with information and find what they are looking for, such as auto-complete, spell correction, and relevance feedback. A significant amount of attention is given to faceted navigation.

If you will be in Hamburg or Sydney in April, consider attending one of my workshops. I’ll be focusing on some of these aspects of HCIR as they relate to IA, web navigation, and faceted navigation:

1. In GERMAN: UX Workshops in Hamburg by NetFlow, 11-12 April

2. In ENGLISH: ANZ UX Workshops in Sydney, 28-29 April

Are you in OZ and want to learn about faceted search, strategic alignment diagrams, IA, navigation and more this April? I’m delighted to announce that I’ll be giving 2 workshops in Sydney on April 28-29, 2011!

See the workshop website for more information.

Here are some highlights:

WORKSHOP 1: Information Architecture for Strategic Web Design

Thursday 28 April 2011, 9:30-17:00 – This workshop focuses on the conceptual and strategic side of information architecture (IA). Topics include: alignment diagrams, mental models, concept maps, Cores and Paths, information structures and facets.

WORKSHOP 2: Web Navigation Design

Friday 29 April 2011, 9:30-17:00 – This workshop focuses on the nuts and bolts of good navigation design. Topics include principles of web navigation, navigation mechanisms, types of navigation, the scent of information, and faceted navigation.

COST

  • Earlybird (to April 2): AUD 660
  • Regular Price: AUD 759

AUDIENCE

Beginner to intermediate web designers, interaction designers and IAs; usability experts looking to improve web design skills; and project managers, product managers, and others seeking to better understand web navigation design.

See the registration details page for more information and to sign up.

I’ve been thinking quite a bit about the role of paper and offline information resources in our overall information experience. Some recent projects and research at work put the topic back on my plate. It was also part of my talk at the Euro IA conference (see Commercial Ethnography: Innovating Information Experiences).

A while back, I published a short essay on the potential importance of creating print-friendly web pages. See Printing the Web in Boxes and Arrows (2003). The motivation for that article came from the observation that people quite like to print things from the web, as well as things like email. It seemed to me at the time that offices might even use more paper since the web came along than before. So our experience with a website may extend offline as well, and designers should consider how best to create print-friendly content.

Then, while at the CHI conference this year, I came across two fascinating exhibits related to paper. The first was digital paper, also called interactive paper. The second was iCandy, a program that lets you print from your iTunes collection. Both of these stood out, particularly at a conference where digital interfaces are the focus of attention. With iCandy, for instance, the inventor was taking something that wasn’t originally available offline–iTunes–and making it available in paper format. Why bother, I thought? Is the experience with iTunes not sufficient? Is something missing, or is there something better about interacting with my music collection on paper? These two exhibits, as well as my own observations, suggest that yes–there is something about experiencing information on paper that gets completely lost in electronic formats.

Even more recently, I came across a post from innovation guru Scott Anthony about Plastic Logic’s new reader device. Interestingly, the hurdle he sees for Plastic Logic with their new reader is an experiential one:

But think about that target user. Hassled executives have defined patterns of behavior about how they interact with documents. They are used to flipping, scribbling, and shuffling through those documents. Sure, the weight of the paper can be cumbersome, but Plastic Logic faces an uphill climb if its device makes it harder rather than easier to review and comment on documents.

The experience we have with print materials is, in Anthony’s opinion, a potential showstopper for widespread acceptance of new reading devices. But it’s not just a matter of habit that we gravitate toward reading things on paper: there are real benefits of working with a multi-dimensional medium like paper that get lost in electronic formats.

Finally, Peter Merholz just posted about the paperless office again. He reaches back to a previous post of his in which he disagrees with Malcolm Gladwell’s 2002 New Yorker article on the topic. Peter makes some good points, but he’s also a little myopic on this one, particularly when drawing conclusions from what he sees at his own office. The habits of a cutting-edge digital design office (Adaptive Path) hardly represent how people in other industries and businesses use paper.

I personally don’t foresee the complete disappearance of paper in the office in the near future, but I believe the time will come when online information experiences are rich enough to make it truly more advantageous to read a document from a computer screen than from paper. But even then, paper resources will still have a role. As noted at the end of an article entitled “On its way, at last” in The Economist–the catalyst for Peter’s post–we’ll probably see a re-purposing of paper. And Peter himself mentions such a shift as well in the Adaptive Path office.

Paradigm shifts with other types of media have also seen this kind of re-purposing of old, incumbent media. As radio became widespread, for instance, the initial reporting of a news event stopped being communicated by lads standing on the corner shouting “Extra! Extra!” As a result, newspapers became more process-oriented. In other words, radio took over the announcing role, and people then got the details of the event from the newspaper. But newspapers didn’t go away.

So I don’t think we’ll see the completely paperless office, at least not on a widespread basis. Sure, some companies may actually achieve a paperless office, but they will be the exception rather than the norm. Instead, paper will come to serve a different role. It will be used for informal communication and extra-work events, or for brainstorming sessions and other creative exercises, or for official documents that require a signature and a company seal, for instance. There will be less of it, for sure–particularly for administrative things–but a completely paperLESS office is not only NOT in our future, but probably a bad idea.

As David Gelernter said back in 2000 in his Computer Manifesto:

“The ‘paperless office’ is a bad idea because paper is one of the most useful and valuable media ever invented.”

Deep Zoom

11 October 2008

This isn’t new, but I just came across Deep Zoom from Microsoft. It’s based on their Seadragon technology, and it requires Silverlight.

Check out the Hard Rock Cafe collection of memorabilia. Combined with the faceted navigation on the left, it lets you move around the items quite quickly. And because these are photos, you immediately see what you are getting. The zoom function is great–you can read the fine print on a document or see the scratches on the guitars.

I can also imagine browsing publications, books, and newspapers with this technology, so there’d be applications for it in information design and information architecture.

A few months ago there was an interesting story in Smashing Magazine that spotted some new trends in web navigation menus. By and large, the trends identified are seen from a visual design standpoint, including some style trends.

I’ve been noticing two other navigation mechanisms and styles that seem to be gaining popularity. The first is what I call a section sitemap menu. This is basically a dynamic menu activated on rollover or on click from a main navigation point. The layer that is revealed essentially shows a mini-sitemap for that section of the site. This creates a shallower structure for navigating and allows visitors to get an overview at a glance.

Here are three examples, from HP.com (which has a complete subnavigation in the menu), Philips.nl, and Otto.de (which allows browsing by different facets).

Both the HP.com and Philips.nl examples also integrate advertising and promotions into the navigation. I’m not sure how user-centered that approach is, but it’s surely a more seducible moment than a plain ad on the homepage. It probably overcomes banner blindness quite well, too.

The second trend I’ve noticed is a double-column left-hand navigation area. Blogs sometimes have this. Here’s an example from Information Design Patterns. Or on Josh Porter’s Bokardo.com blog. I know I’ve seen more of this arrangement, but I don’t have more examples at the moment. They’re out there, though.

Let me know if you see any other trends out there.

I’d like to post some thoughts about presentations I saw at the Euro IA 2007 Conference. I already mentioned Are’s presentation.

Here’s a summary of mine, which is essentially the last slide in my presentation (available on SlideShare) that sums everything up:

  • The cost of adding more information is noise. Don’t forget this when people talk about “unlimited shelf space” online.
  • There are different types of sources of metadata to consider: user-generated metadata (e.g., tagging), technically generated metadata (e.g., entity extraction), and owner-created metadata (e.g., controlled vocabularies).
  • There are also different types of structures of organization to give meaning and context to the metadata when you represent it: user-created structures (e.g., filtering tags for special interest groups), technically created structure (e.g., Google News page), and owner-created structures (e.g., a thesaurus).
  • In the Long Tail, any and all types of metadata and types of structure are needed. Forget the silly arguments that one will replace the other. Think of it as a matrix with the types of metadata on the side and the types of structures on the top (sketched in code after this list).
  • Further, as Clay Shirky points out, niche markets fit the description of a bounded domain, and traditional taxonomies and classification are often good strategies for organizing information in bounded domains. So as we move to a culture of niche markets, as Chris Anderson predicts, traditional IA and taxonomy will become more important.
  • Additionally, niche markets are defined by the categories you create. Online, a “pile of information”–as David Weinberger says in Everything is Miscellaneous–begins and ends with the IA and organization you develop.
  • IA in the Long Tail will be about second-order design. You may not be able to customize each page or local navigation scheme. Instead, you need to provide people with the tools they need to make sense of information.
  • This means a shift for IA to look at abstract, broader patterns of human information behavior and of information structures in a domain. Card sorting is great, but we need to go well beyond this. We need to look at users much more closely, as well as the inherent patterns of information in a domain.
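
To sketch that matrix idea in code (purely illustrative: the row and column labels come from the bullets above, and the filled-in cells simply restate the examples given there):

```python
# Types of metadata (rows) crossed with types of organizing structure (columns).
metadata_types  = ["user-generated", "technically generated", "owner-created"]
structure_types = ["user-created", "technically created", "owner-created"]

# Example cells; in the Long Tail, every combination is potentially useful.
examples = {
    ("user-generated", "user-created"): "tags filtered for a special interest group",
    ("technically generated", "technically created"): "entity extraction feeding a Google News-style page",
    ("owner-created", "owner-created"): "a controlled vocabulary organized as a thesaurus",
}

for m in metadata_types:
    for s in structure_types:
        cell = examples.get((m, s), "(combination worth exploring)")
        print(f"{m} metadata x {s} structure: {cell}")
```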

Not the most practical talk I’ve given, but many people thanked me for it and said it got them thinking. So it seems to have been well-received.

I started using Daylife just after it first launched. Since then I dropped it from my “Daily Stuff” tabs in Firefox that I usually open simultaneously when I go online. I liked a lot of things about the service, but it just wasn’t something that I needed at the time.

Having just revisited the site, I noticed some major changes to the user experience. The huge, page-filling image that previously occupied the start page has been removed in favor of more genre-conforming elements for online news sites. And for the most part, the site is far less image-rich than before. In areas like “Celebrity” this is unfortunate, but overall it’s probably a good move. With more focus on text and links, Daylife should be able to better expose and leverage its algorithms and entity extraction. And I like the basic information design of the site, so it works well.

A next step for them might be to expose more user-generated content and metadata. Comments, blogging, tagging, etc., would set it apart from other similar news services. (Now, there’s a good question: how can tagging be leveraged on a current-awareness service, where articles come on and go off the radar in a matter of days? With no time to incubate a collection of tags, what do you do there?) But their API is already a huge step in the Web 2.0 direction, so I’m not going to knock them.

Anyway, I’m going to give it another shot and add it back to my daily tabs.

SlideCasting: 99% Good

29 July 2007

The smart folks over at SlideShare came up with a powerful new service for the site: SlideCast. Haven’t used it yet, but it looks to be fairly simple and a good idea overall.

Personally, I never considered PowerPoint to be evil. It’s just another tool to communicate with. Sure, it can be used wrong, and it has its own style of communication, but so does any medium. SlideCasting looks like it will make posted slide decks much more powerful.

If you have experience with SlideCasting, let me know what you think.

Matt Hurst over at Data Mining: Text Mining, Visualization and Social Media points to this interactive map on the NY Times website for tracking political campaign funding. I guess I’m spoiled by the Trendalyzer mentioned in my previous post, but this level of interactivity is downright tame. It’s smooth and somewhat usable (although the banners and nav at the top of the screen cut off the date-range controls so that I didn’t see them until I was done looking at it), but I craved more interactivity and more ways to expose relationships.

The thing I really wanted was the ability to somehow overlay two or more candidates’ funding bubbles. Flicking between the two, Obama clearly gets more support from the Chicago area than Giuliani, for instance (which is no surprise). But what other interesting connections and relationships might also be revealed? How’s Barack stacking up to Rudy in NY? Or how about bubbles for Dems vs. Reps? I don’t want to knock the NYT for doing this, but there seem to be so many other easy additions that could have made this so much better.

BTW, check out Matt’s blog for other neat things going on in the text mining and analytics realm.

News Cues

9 July 2007

There’s an interesting study in the February issue of JASIST about which elements are most important for determining the credibility of news stories on automated news aggregator pages, like Google News. [1] Though the findings might be obvious (there’s nothing wrong with stating the obvious), the researchers point to three elements that matter most on such automatically created pages (a small rendering sketch follows the list):

  • The name of the primary source from which the headline and lead were borrowed
  • The time elapsed since the story broke
  • The number of related articles written on the topic of the story
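
As a toy illustration of surfacing those three cues next to a headline (the function and field names are my own, not from the study):

```python
from datetime import datetime

def cue_line(source: str, broke: datetime, now: datetime, related: int) -> str:
    """Render the three proximal cues: primary source, recency, breadth of coverage."""
    hours = (now - broke).total_seconds() / 3600
    return f"{source} | {hours:.0f} hours ago | {related} related articles"

print(cue_line("Example Wire Service",
               datetime(2007, 7, 9, 6, 0),
               datetime(2007, 7, 9, 14, 0),
               42))
# -> Example Wire Service | 8 hours ago | 42 related articles
```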

The researchers write: “…The findings from this study demonstrate that information scent is not simply restricted to the actual text of the news lead or headline in a news aggregating service. Automatically generated cues revealing the pedigree of the hyperlinked information carry their own information scent. Furthermore, these cues appear to be psychologically significant and therefore worthy of design attention. Systems that emphasize such cues in their interfaces are likely to aid information foraging, especially under situations where the user is unlikely to be highly task-motivated and therefore prone toward heuristically based judgments of information relevance. Navigational tools that highlight these cues are likely to be more effective in directing user traffic, as evidenced by early research on newspaper design (which highlighted the attention-getting potential of placement, layout, and color) and screen design (focusing primarily on typography and color)…Finally, visualization efforts should focus on attracting user attention towards–and making explicit the value of–proximal cues instead of simply concentrating on visualizing the underlying information.”

This means to me that–even though the pages are automatically generated–there is still information architecture and information design critical to understanding and experiencing the information. Maybe machines won’t replace designers, and there is a place for professions like IA in the future after all. Hmm…

[1] Sundar, S. Shyam, Silvia Knobloch-Westerwick, & Matthias R. Hastall (2007). News cues: Information scent and cognitive heuristics. JASIST 58(3), 366-378.

Is Relevance Relevant?

24 June 2007

For decades, information science has developed and examined the notion of relevance in information retrieval (IR). By and large, the approach to measuring relevance has been rather technical. Recall and precision have been the two main measures (a small worked sketch follows the list):

  • Recall looks at whether all of the documents relevant to a given query are returned.
  • Precision measures whether only the relevant documents are returned.
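
These are standard set-based definitions; here’s a small worked sketch in Python for concreteness (the document IDs are made up):

```python
def recall(relevant: set, retrieved: set) -> float:
    """Fraction of the relevant documents that were actually returned."""
    return len(relevant & retrieved) / len(relevant)

def precision(relevant: set, retrieved: set) -> float:
    """Fraction of the returned documents that are actually relevant."""
    return len(relevant & retrieved) / len(retrieved)

# Hypothetical judgment key and result set for one query.
relevant = {"d1", "d2", "d3", "d4"}
retrieved = {"d1", "d2", "d9"}

print(recall(relevant, retrieved))     # 0.5   -> half the relevant docs were found
print(precision(relevant, retrieved))  # ~0.67 -> two of the three results were relevant
```

The “key” discussed next is exactly that hypothetical relevant set, fixed in advance by judges.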

To measure relevance, you first need to create a key: a list of the documents in a given database that are relevant to a given query. But this key is itself artificial and doesn’t take into account any of the significant contextual factors people employ when determining relevance in real-life situations. It’s made up ahead of time by a group of people who themselves don’t have a real information need in a real IR situation.

Tefko Saracevic points to a broader model of relevance in his article Relevance Reconsidered [1]. This includes the notion of technical relevance but takes a more holistic look at relevance, accounting for information interaction in IR situations. In addition to technical relevance, he adds other types to the mix:

  • Topical or subject relevance: relation between the subject or topic expressed in a query, and topic or subject covered by retrieved texts, or more broadly, by texts in the systems file, or even in existence. It is assumed that both queries and texts can be identified as being about a topic or subject. Aboutness is the criterion by which topicality is inferred.
  • Cognitive relevance or pertinence: relation between the state of knowledge and cognitive information need of a user, and texts retrieved, or in the file of a system, or even in existence. Cognitive correspondence, informativeness, novelty, information quality, and the like are criteria by which cognitive relevance is inferred.
  • Situational relevance or utility: relation between the situation, task, or problem at hand, and texts retrieved by a systems or in the file of a system, or even in existence. Usefulness in decision making, appropriateness of information in resolution of a problem, reduction of uncertainty, and the like are criteria by which situational relevance is inferred.
  • Motivational or affective relevance: relation between the intents, goals, and motivations of a user, and texts retrieved by a system or in the file of a system, or even in existence. Satisfaction, success, accomplishment, and the like are criteria for inferring motivational relevance.

A recent study in JASIST (July 2007) also shows that relevance is very situational and contextual [2]. The researchers looked at how people picked documents from random-ordered results lists from different search engines (Google, MSN Search, and Yahoo!).

“The findings show that the similarities between the users’ choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no ‘average user,’ and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors.”

Cognitive, affective, and physical factors? Yikes. Recall and precision don’t look at any of these, yet these were found to be significant. So what does the traditional notion of relevance in IR really measure with recall and precision?

I believe there is a much broader context that needs to be considered–one that accounts for the entire information experience. I’m not sure what this is, but context and situation seem to trump recall and precision in real-world IR. Perhaps relevance isn’t even relevant any more in the online, digital world anyway. Perhaps we need an entirely new model for understanding how and when people select documents in IR situations.

[1] Tefko Saracevic (1996). Relevance reconsidered. Information science: Integration in perspectives. Proceedings of the Second Conference on Conceptions of Library and Information Science. Copenhagen (Denmark), 201-218.

[2] Judit Bar-Ilan, Kevin Keenoy, Eti Yaari, & Mark Levene (July 2007). User rankings of search engine results. JASIST (58, 9) 1254-1266.
