I just gave a talk for World IA Day at U Mich in lovely Ann Arbor entitled “Undiscovered Public Knowledge and IA.” Below are the slides, followed by links to the resources I mentioned in the talk. (Apologies for the comical-looking fonts: they seem to have gotten messed up when uploading to SlideShare.)


Faceted navigation is widespread on the web (a.k.a. faceted search and faceted browse). It’s become an expected standard. I’ve written several posts on the subject and also have a popular workshop on faceted navigation. (Next one: 22 Oct 2011 in NYC.) Yet we really don’t know much about the ROI of faceted navigation. Or do we?

I’ve only been able to find a few studies or case studies reporting a measurable ROI for faceted navigation. There are lots of variables in play, and definitively attributing measurable gains directly to faceted navigation can be tricky. But a simple before-and-after comparison should be possible.

One helpful source is Endeca’s case studies. Examples of ROI include:

  • Kiddicare.com: 100% increase in conversion rates; 100% increase in sales; Additional 100% increase in conversion rates with PowerReviews
  • AutoScout 24: 5% increase in lead generation to dealers; 70% decrease in no results found
  • Otto Group: 130% increase in conversion rates; Doubled conversion rates for visitors originating from pay-per-click marketing programs; Search failure rate decreased from over 33% to 0.5%

If you have such data or evidence in any form, please let me and others know by commenting here. Note I’m not talking about studies that show how efficient faceted navigation is in terms of interaction or time on task (such as the ones reported here): I’m looking for hard evidence on ROI in real-world situations.

It’s a positive sign that so many websites have faceted navigation these days: there must be something “right” about it. But why have so many site owners and stakeholders funded and implemented faceted navigation systems? What’s the actual return against the cost of implementation and maintenance?

Some logical arguments include combinations of the following:

  • Conversion: Customers can’t buy what they can’t find: Findability is critical for ecommerce sites. Well-designed navigation plays a key role in getting people to the information or products they want to see. This ultimately helps you sell products or ideas. Faceted navigation has been shown to improve findability in general.
  • Efficiency: Employees lose productivity when navigation is inefficient: These days company intranets can be enormous. The time to find information impacts employee productivity. Even the smallest increase in navigational efficiency can have huge returns for a large corporation if you multiply it by thousands of employees. Faceted navigation is efficient.
  • Confidence: Faceted navigation increases information scent: Revealing facet values gives users better insight into the type of terms and language used on the site. They are then able to match their information need with the content of the site, giving them confidence as they navigate forward through a given collection. This keeps them on the site and away from the customer support hotline.
  • “Aboutness”: Facets show the overall semantic make-up of a collection: Faceted metadata–the values associated with a collection of documents or products–give clues into the “aboutness” of that collection. Facets convey the breadth and type of a results list, for instance. This can help users reach their target information more effectively.
  • Reduced Uncertainty: Users don’t have to specify precise queries: With faceted navigation, users don’t rely on formulating precise keyword searches alone to find information. Instead, they can enter broad searches and use the facets in a flexible way to refine the initial query. This gives confidence in being comprehensive and reduces uncertainty in information seeking in general, as well as removes the frustration of finding no results.
  • Navigation: Browsing categories provides a different experience than keyword search: Jared Spool and his colleagues found that people tend to continue shopping more often when navigating than after doing a direct keyword search: people tend to continue browsing—and buying—when they can successfully navigate to the products they want to purchase. Sure, keyword searching may also get them there, but that experience is different. He writes in an article entitled “Users Continue After Category Links” (Dec 2001):
    • Apparently, the way you get to the target content affects whether you’ll continue looking or not. In a recent study of 30 users, we found that if the users used Search to locate their target content on the site, only 20% of them continued looking at other content after they found the target content. But if the users used the category links to find their target, 62% continued browsing the site. Users who started with the category links ended up looking at almost 10 times as many non-target content pages as those who started with Search.
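The refinement pattern described in the “Reduced Uncertainty” point above can be sketched concretely. Below is a minimal Python illustration, not any particular product’s engine; the catalog, facet names, and helper functions are all invented for the example:

```python
# Minimal sketch of faceted refinement over a tiny product catalog.
# Catalog entries and facet names are invented for illustration.
catalog = [
    {"name": "Trail Runner", "brand": "Acme", "color": "red", "price": 80},
    {"name": "Road Racer", "brand": "Acme", "color": "blue", "price": 120},
    {"name": "City Walker", "brand": "Bolt", "color": "red", "price": 60},
]

def refine(items, selections):
    """Keep only items matching every selected facet value."""
    return [i for i in items if all(i[f] == v for f, v in selections.items())]

def facet_counts(items, facet):
    """Count remaining items per facet value -- the numbers a faceted UI
    shows beside each refinement link."""
    counts = {}
    for item in items:
        counts[item[facet]] = counts.get(item[facet], 0) + 1
    return counts

# A broad start (everything red), then the facets offer next refinements.
results = refine(catalog, {"color": "red"})
print([i["name"] for i in results])    # ['Trail Runner', 'City Walker']
print(facet_counts(results, "brand"))  # {'Acme': 1, 'Bolt': 1}
```

Each refinement both narrows the result list and recalculates the counts beside the remaining facet values, which is where much of the information-scent benefit comes from: users see what the collection contains before committing to a click.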

A well-designed faceted navigation system won’t solve all your problems. But because navigation is so central to the basic web experience, it stands to reason that there are financial implications involved. What are they exactly?

Again, if you have any support for the above contentions or have another argument around the benefits of faceted navigation, please let me know.

Are you in OZ and want to learn about faceted search, strategic alignment diagrams, IA, navigation and more this April? I’m delighted to announce that I’ll be giving 2 workshops in Sydney on April 28-29, 2011!

See the workshop website for more information.

Here are some highlights:

WORKSHOP 1: Information Architecture for Strategic Web Design

Thursday 28 April 2011, 9:30-17:00 – This workshop focuses on the conceptual and strategic side of information architecture (IA). Topics include: alignment diagrams, mental models, concept maps, Cores and Paths, information structures and facets.

WORKSHOP 2: Web Navigation Design

Friday 29 April 2011, 9:30-17:00 – This workshop focuses on the nuts and bolts of good navigation design. Topics include principles of web navigation, navigation mechanisms, types of navigation, the scent of information, and faceted navigation.


  • Earlybird (to April 2): AUD 660
  • Regular Price: AUD 759


Beginner to intermediate web designers, interaction designers and IAs; usability experts looking to improve web design skills; and project managers, product managers, and others seeking to better understand web navigation design.

See the registration details page for more information and to sign up.

Well, maybe the title of this post is a little misleading: static footer bars aren’t really new. Facebook had one years ago. What’s new(ish), however, is how widespread they’ve become–something of a trend these days.

Below is one from CNET.com as an example:

Figure 1: Static footer bar on CNET.com (click to enlarge)

You can see the black bar at the bottom of the screen with several options: recently viewed products, my lists, something called TechTracker, and links to log in and join CNET. It’s a toolbox of options from across the site brought together in a single mechanism. This bar is static and stays in view as the user scrolls down a page.

In Designing Web Navigation, I use David Fiorito’s and Richard Dalton’s navigation types to distinguish between the different functions a navigation mechanism can have. (See their IA Summit 2004-05 presentations “Creating a Consistent Enterprise Web Navigation Solution” and “Thinking Navigation.”) Based on these categories, here’s how I describe the three fundamental types in my book:

  • Structural Navigation – This type of navigation connects one page to another based on the hierarchy of the site; on any page you’d expect to be able to move to the page above it and pages below it.
  • Associative Navigation – Connects pages with similar topics and content, regardless of their location in the site; links tend to cross structural boundaries.
  • Utility Navigation – Connects pages and features that help people use the site itself; these may lie outside the main hierarchy of the site, and their only relationship to one another is their function.

Static footer bars fall into the last type: utility navigation. As in the CNET example (Figure 1), they usually include functional and helpful features for the site.

Previously on the Crate and Barrel site, the static footer bar contained the online shopping cart, lists, and a link to check out. Figure 2 shows a screenshot from the site from about 6 months ago (i.e., middle of 2010).

Figure 2: Static footer on Crate and Barrel with shopping cart feature (circa June 2010)

Since then, they’ve removed the static footer and moved the utility options to the top of the page. Perhaps the static footer wasn’t working well for users? I can imagine it would be easy to overlook, and putting something as significant as the shopping cart in a footer might not give it the prominence it needs. Visibility is likely to be an issue with static footer bars.

The AllAboutJazz.com website recently introduced a fairly extensive static footer menu (Figure 3). It includes social media links, RSS options, radio stations, and even quick links to some main content on the site.

Figure 3: Static footer on AllAboutJazz.com

There are several things to note in this example:

  1. It does not extend across the whole page, as in the Crate and Barrel example (Figure 2). Instead, there is a small gap between the ends of the footer bar and the sides of the browser (when viewed at 1024 pixels wide or wider). This is important to maintaining the sense that the bar sits on top of the page as an overlay rather than taking space away from the content. Though I’ve never tested static navigation footers, I also suspect that not extending the bar all the way across the page reduces the chance of “blindness” to the bar itself. The CNET example (Figure 1) also takes this approach.
  2. The static footer bar on AllAboutJazz is semi-transparent. This provides the sense that it’s an overlay, and it feels less intrusive on the page and its content.
  3. If the above two points aren’t enough, users can collapse the static footer bar on AllAboutJazz.com. It’s open by default, but with a single click it can be reduced to the size shown in the lower right of the next screenshot. (See the “Tools” overlay in Figure 4.)

Figure 4: Collapsed static footer on AllAboutJazz.com

Static footer bars aren’t going to solve any major structural or navigation issues with your site, but I imagine they can be useful to help keep page navigation from getting in the way of content or tasks users are trying to accomplish. It also potentially keeps utility navigation options close by for visitors who need them.

There are other examples out there on the web. If you have used them, I’d like to hear what went well or not.

I’ve not implemented static footer bars in any of our designs, but I’m starting to explore their use. Nonetheless, from surveying examples found on the web, my recommendations for static footer bars are:

  • Avoid teasers and promotional links. This extra “noise” may give the illusion that all of the options there are just ads and none of them can be useful.
  • Don’t extend the bar all the way across the page. Leave a gap between the left and right edges of the footer and the sides of the browser. It could even be smaller than the width of the content area on a fixed width layout.
  • Use a transparency for the background of the bar so the page behind it comes through a little.
  • Allow users to collapse a static footer bar, and then to expand it again.

If you’d like to find out more about static footer bars and related topics, I have two sets of workshops already planned for 2011, in English and in German:

1. In ENGLISH: Part of UX Fest in London, February 9-10
a. Designing Web Navigation
b. Faceted Search & Beyond

2. In GERMAN: Workshops in Hamburg by NetFlow, April 11-12
a. Prinzipien der Informationsarchitektur
b. Elemente des Navigationsdesigns
[details and online registration to come]

See my workshops page on this blog for descriptions of the sessions.

Significant points and recommendations regarding web navigation from two recent Forrester reports caught my eye.

1. The first is a brief overview of their survey results of 60 web improvement projects across Europe [1]. One of the top areas of concern is good navigation. They write:

Navigation. Users don’t just want good content; they want content that’s easy to find and use. Companies that provided intuitive category names in menus and efficient online processes improved metrics like conversion rates and increased sales.

Note that here their definition of “navigation” is broad, including aspects of the site structure and of search.

(The other two main areas of concern, BTW, are value–or aligning with user needs–and design presentation).

According to the report, when prioritizing issues to address, fixing an existing navigation system or creating an effective one from scratch will have the most positive impact on usability.

2. The second report is a little older–from August 2010. In this document, Forrester gives advice in the form of seven indicators to tell you when a relaunch is needed [2]. One of the indicators is a troubled navigation scheme. From the report:

Navigation system breakdown. Web sites are subject to a cycle of accretion, as contributors add links and content, and erosion, as out-of-date content gets removed. These changes wear site navigation systems down over time. Warning signs include complaints from contributors who say that they can’t find a home for their content; “quick links” tacked onto pages to get users to the critical content they can no longer find through menus; and alternative overlapping menu structures layered onto the site. Until 2009, USPS attempted to meet a diverse set of user needs by adding new categorization schemes to the site over the course of several years. The result was a confusing blend of competing menu structures and shortcuts. In 2009, USPS redesigned its site to address these navigational shortcomings — creating a more streamlined site that features both menu category and task-driven navigation.

They recommend:

Monitor navigation system health. Every site will have some deterioration of its navigation system over time, courtesy of accretion and erosion. To track the course of this natural process, employ analytic tools that check users’ paths through the site. Then at least once per quarter, review critical paths, like those from the home page to customer service, for warning signs like an increase in the number of users who pogo-stick up and down between menus and submenus or a surge in users who start out with menu navigation and then turn to search. If you see a 25% or greater increase in the number of visits during which these red-flag behavior patterns occur, take it as an early warning sign of navigation system failure.
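Forrester’s 25% threshold is straightforward to operationalize. Here’s a minimal sketch, assuming your analytics tool can already count “red-flag” visits (pogo-sticking between menus, starting with menus and falling back to search); the function name and sample numbers are mine, not Forrester’s:

```python
# Sketch of the quarterly check described above: warn when visits
# showing red-flag navigation behavior grow 25% or more over the
# previous quarter. Visit counts here are invented for illustration.

def navigation_warning(prev_redflag_visits, curr_redflag_visits, threshold=0.25):
    """Return True when red-flag visits grew by the threshold or more
    quarter over quarter."""
    if prev_redflag_visits == 0:
        # No baseline: any red-flag visits are worth a look.
        return curr_redflag_visits > 0
    growth = (curr_redflag_visits - prev_redflag_visits) / prev_redflag_visits
    return growth >= threshold

print(navigation_warning(400, 520))  # True: a 30% increase
print(navigation_warning(400, 440))  # False: only a 10% increase
```

The hard part in practice is the counting itself, not the arithmetic: defining pogo-stick and menu-then-search patterns in your analytics tool’s path reports is where the real work lies.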

Social media, mash-ups, and Ajax are all great. But even in the world of Web 2.0, Web 3.0, and beyond, the fundamental problems of navigation and findability in web design still remain.

There’s no silver bullet in designing web navigation. Instead, it’s about your approach to solving the problem and way of thinking about navigation. This is what I put forth in Designing Web Navigation (i.e., a way of thinking, not guidelines), and this is what I teach in my workshops.

If you’d like to find out more, I have two sets of workshops already planned for 2011, in English and in German:

1. In ENGLISH: Part of UX Fest in London, February 9-10
a. Designing Web Navigation
b. Faceted Search & Beyond

2. In GERMAN: Workshops in Hamburg by NetFlow, April 11-12
a. Prinzipien der Informationsarchitektur
b. Elemente des Navigationsdesigns
[details and online registration to come]

See my workshops page on this blog for descriptions of the sessions.


Forrester Reports Referenced

[1] Adele Sage. “Europe 2010: Fixing Known Usability Problems Pays Off,” Forrester Report, November 12, 2010.

[2] Vidya L. Drego. “When To Redesign Your Site: Seven Indicators That It May Be Time For An Online Overhaul,” Forrester Report, August 17, 2010.

Jared Spool points to an interesting article by Bret Victor called “Magic Ink: Information Software and the Graphical Interface.” Here’s the abstract:

The ubiquity of frustrating, unhelpful software interfaces has motivated decades of research into “Human-Computer Interaction.” In this paper, I suggest that the long-standing focus on “interaction” may be misguided. For a majority subset of software, called “information software,” I argue that interactivity is actually a curse for users and a crutch for designers, and users’ goals can be better satisfied through other means.

Information software design can be seen as the design of context-sensitive information graphics. I demonstrate the crucial role of information graphic design, and present three approaches to context-sensitivity, of which interactivity is the last resort. After discussing the cultural changes necessary for these design ideas to take root, I address their implementation. I outline a tool which may allow designers to create data-dependent graphics with no engineering assistance, and also outline a platform which may allow an unprecedented level of implicit context-sharing between independent programs. I conclude by asserting that the principles of information software design will become critical as technology improves.

Although this paper presents a number of concrete design and engineering ideas, the larger intent is to introduce a “unified theory” of information software design, and provide inspiration and direction for progressive designers who suspect that the world of software isn’t as flat as they’ve been told.

I just gave a keynote at the Polish IA Summit in Warsaw on the topic of sense making. I highlighted four key challenges IAs and designers face in creating interfaces that let people make better sense of large amounts of information, all of which are reflected in Bret Victor’s article:

  1. Representation: how information is displayed affects how it’s consumed and understood, but showing large amounts of information can be difficult in many situations (e.g., on smaller displays).
  2. Interaction: giving people the ability to manipulate information is important for sense making. However, there is an effort-benefit tradeoff–people may not take the time to learn how to use all the controls you provide, or they may not have the skills.
  3. Semantics: Bret Victor talks about context sensitivity of information, which is essentially what I was talking about with semantics.
  4. Time: showing how information (and metadata) change over time can provide incredible insight in many situations. Just look at Hans Rosling’s Gapminder talks. The temporal dimension of information is important for sense making.

Sense making solutions, then, combine and balance all of the above aspects. Beyond that, there are two more considerations:

5. Understanding users, workflow, and needs, and creating systems that bring value to people.

6. Bringing value to businesses

While a lot of the academic work on sense making is interesting and inspiring, it still fails to adequately address these last two points, in my opinion. Bret Victor’s piece is definitely a step in the right direction. Check it out.

Kate Rutter gave a presentation at the IA Summit in Memphis this year on slime mold.  That’s right, slime mold. What’s that got to do with IA and UX? Nothing. And Everything.

Hear the whole presentation on Boxes and Arrows. In a nutshell, slime molds aren’t quite animals or plants; they have qualities of both. They change and adapt to their environment. To have effective projects, we have to deal with our organizations and environments. This isn’t trivial to UX work; in fact, I think it’s THE biggest challenge. And we can learn from slime mold.

You gotta read the article to really get the point.

Steffen Schilb is at it again. First came CardSort. Now he’s developed another interesting tool to test your information architecture called C-Inspector:

“C-Inspector is a web-based application that helps you to test the information architecture of your website. By analyzing both quantitative and qualitative data collected through the remote test, you can gain insight into the users’ mental models and identify possible issues with labelling or grouping.”

I’ve not tried the tool myself yet, but it looks promising. The task-based approach would appear to give rich feedback on your IA. This isn’t new, though–the guys over at Optimal Usability also just recently launched TreeJack, which I got to see at the IA Summit in Memphis. It also takes a task-based approach.

Congratulations, Steffen.

The first issue of the Journal of Information Architecture (JofIA) has finally arrived. I contributed a piece on uncertainty. Here’s the table of contents:

  • Dorte Madsen
    Editorial: Shall We Dance?
    pp. 1-5
  • Gianluca Brugnoli
    Connecting the Dots of User Experience
    pp. 6-15
  • Helena Francke
    Towards an Architectural Document Analysis
    pp. 16-36
  • Andrew Hinton
    The Machineries of Context
    pp. 37-47
  • James Kalbach
    On Uncertainty in Information Architecture
    pp. 48-55

It was a long time coming and a lot of people put a ton of work into the launch of the journal. Congratulations to everyone involved.

Lou Rosenfeld and I will be giving a series of workshops in Hamburg, Germany from May 18-20–right after the German IA Konferenz. Find out more and register at UX Workshops.

Here’s a brief overview of the workshops:

  • May 18, 2009 – Enterprise Information Architecture – Louis Rosenfeld
    Developing a unified web site or intranet for a large, decentralized organization is the Holy Grail for many of today’s Internet professionals. This day-long seminar is for managers and web professionals who desperately want to tie together content in a rational, user-centered way, regardless of content ownership issues, cultural hurdles, and turf battles. This advanced information architecture seminar combines lecture, demonstration and exercises, discussion, and handouts to address a topic that bewilders every large organization: designing unified information architectures for large enterprises.
  • May 19, 2009 – Commercial Ethnography – James Kalbach
    Ethnographic research methods have many potential advantages for businesses, including helping to increase insight into customer behaviour, make the real world visible to the entire organisation, and identify opportunities for innovation. In this course, you will learn the practical skills needed to conduct an ethnographic study from beginning to end. The course outline walks through each phase step by step.
  • May 20, 2009 – Personas and Mental Models – James Kalbach
    Communicating user research effectively is critical for user-centred design. This full-day course has two parts that show how to bring your research to life:
    Part 1: Personas - Personas have become a mainstream design tool. There’s even a growing body of literature on the subject, including two full-length books. But there are also misconceptions and misuses of personas in the field.
    Part 2: Mental Models - The term “mental models” means different things to different people. In this workshop, we use the term broadly to refer to any technique used to understand the behavioural, cognitive, and emotional states of users.

The early bird price runs until April 2. Places are limited for each workshop.

Register at www.uxworkshops.com.

Karen Lindemann from Netflow is the sponsor and producer of the events.

A-Z Index Examples

24 February 2009

Here’s a collection of A-Z index examples on UX Refresh.

Generally, I’m a fan of A-Z indexes. But at the same time I realize they are really difficult to create and maintain, particularly in dynamic online settings. So the real value of them remains elusive to me. I don’t think I’d try too hard to convince someone they need an A-Z index to organize information in a digital space.

That said, I did make a point in my presentation at the Euro IA Summit 2008 in Barcelona that things like indexes and taxonomies make sense within bounded domains–more so than in open domain contexts. (See also a summary in the ASIST Bulletin: “Navigating the Long Tail.”) Even Clay Shirky agrees with that. Here’s my point:

As we collectively move down the long tail, bounded domains–or niche markets, as Chris Anderson calls them–will increase and solidify, and so we will also see an increase in the need for indexes, taxonomies, and ontologies to help organize these domains.

So maybe there’s hope for A-Z indexes after all. In fact, I recently came across an excellent implementation of an A-Z index not included in the collection summarized on UX Refresh: EMBASE, a bibliographic database from Elsevier. From the Embase website:

EMBASE.com is a biomedical and pharmacological bibliographic database, which provides access to the most up-to-date citations and abstracts from biomedical and drug literature via EMBASE and Medline. It contains over 19 million indexed records from 7,000+ peer reviewed journals, covering 1947 to date, with more than 600,000 additions annually.

EMBASE is indexed using the Elsevier life science thesaurus, EMTREE, and Medline records are mapped to EMBASE before being added to EMBASE.com.

The interesting part about it is that the index is integrated into the auto-complete suggested terms feature from the main search field–with “use:” references and all:


Auto-complete suggestions are most often alphabetical anyway, so this makes a lot of sense. And since biomedical researchers become familiar with the standard terms in their bounded domain, their understanding of the index should be fairly high. I’d even predict that people would expect to have access to standard index terms in this context.
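The “use:” cross-reference behavior can be sketched in a few lines. This is a toy illustration of the general pattern, not EMBASE’s actual implementation; the thesaurus entries and function names are invented:

```python
# Sketch of index-aware autocomplete with "use:" cross-references,
# in the spirit of the EMBASE example. Thesaurus entries are invented.
preferred = ["heart infarction", "myocardial ischemia"]
use_refs = {"heart attack": "heart infarction"}  # non-preferred -> preferred

def suggest(prefix):
    """Alphabetical suggestions; non-preferred terms point the user to
    the preferred index term rather than matching on their own."""
    out = []
    for term in sorted(preferred + list(use_refs)):
        if term.startswith(prefix):
            if term in use_refs:
                out.append(f"{term} use: {use_refs[term]}")
            else:
                out.append(term)
    return out

print(suggest("heart"))
# ['heart attack use: heart infarction', 'heart infarction']
```

The point of the pattern is that users typing a familiar but non-preferred term aren’t dead-ended; the index vocabulary teaches them the controlled term right in the suggestion list.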

Below are excerpts from a longer essay I wrote several years ago that never got published, mixed with a passage from Designing Web Navigation. In thinking about breadth vs. depth recently, I returned to this line of thinking and wanted to share it.


Uncertainty in Information Seeking

Formal associations of information and uncertainty date back to Shannon and Weaver (1949), where the presentation of information itself was believed to reduce uncertainty. Later, Nicholas Belkin (1980) focused on the notion that seekers, sometimes even experts in a given information system, are not able to properly formulate queries to access the information they need. He calls these “anomalous states of knowledge,” or ASK for short. Here, uncertainty underlies the basic information-seeking process.

Kuhlthau’s (1993) work on uncertainty and information seeking is perhaps the most extensive. She proposes uncertainty as a principle for information seeking, defined as follows:

“Uncertainty is a cognitive state that commonly causes affective symptoms of anxiety and lack of confidence. Uncertainty and anxiety can be expected in the early stages of the ISP. The affective symptoms of uncertainty, confusion, and frustration are associated with vague, unclear thoughts about a topic or problem. As knowledge states shift to more clearly focused thoughts, a parallel shift occurs in feelings of increased confidence. Uncertainty due to a lack of understanding, a gap in meaning, or a limited construct initiates the process of information seeking” (Kuhlthau, 1993, p. 111).

Wilson et al. (2002) explored the relationship between uncertainty and information seeking. They found that the Uncertainty Principle as outlined by Kuhlthau indeed serves as a useful variable in understanding and predicting information-seeking behavior. Though not conclusive, this research points towards uncertainty as a universal aspect of information seeking.

Uncertainty common in earlier stages is caused by the introduction of new information that stands in conflict with the user’s prior understanding of the material. Significant to this principle is a typical secondary peak of uncertainty in the process, or the “dip” in confidence mentioned above. Optimism at the beginning phases of information seeking often gives way to doubt and uncertainty in the middle phases. In other words, confidence does not lie on a steadily increasing linear scale, as previously believed, but rather can rise or fall as new information is uncovered. Acquiring more information in initial stages increases rather than decreases uncertainty.

Note, however, that it is the perception of complexity, rather than the actual objective complexity of a task, that causes feelings of uncertainty (Kuhlthau, 1999). Perceived complexity is often the cause of the secondary peak of uncertainty, doubt, and confusion in information seeking.

Uncertainty in Breadth vs. Depth

An example of information uncertainty can be found in the issue of breadth vs. depth of information structures, an important issue in web design. Researchers typically have studied search time, disorientation, error rates, and even satisfaction. A good summary can be found in Larson & Czerwinski (1998). The general design recommendation from such research is to increase breadth to reduce search time and errors, as well as increase satisfaction. It is believed that the time spent scanning menu items in a broader structure is less than the time spent drilling down into a deeper structure. In the latter, menu terms are necessarily more general and therefore more ambiguous.

Unfortunately, most breadth vs. depth studies test relatively symmetrical structures, for example 4x4x4 structures (Snowberry et al. 1983), and thus do not account for naturally occurring irregularities in hypertext shapes. One exception is a study by Norman and Chin (1988), in which constant structures were compared to irregular shapes (increasing, decreasing, convex, concave). The researchers found that the concave structure (8x2x2x8) performed best.

Michael Bernard (2002) more recently tested information structures with both symmetrical and asymmetrical schemes as well. He confirmed that broader structures do indeed perform better, but also found that deeper, asymmetrical structures perform better than symmetrical structures of the same depth. For example, 4x4x4x4 structures performed not only worse than asymmetrical shapes of the same depth (e.g. the concave 6x2x2x12) but also worse than deeper concave structures (e.g. 3x2x2x2x12). He concludes that the performance of the structures is determined in part by the properties of the hypertext shape, namely the perceived complexity of the information space and information uncertainty.
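The shapes discussed in these studies are easy to compare numerically. In the sketch below (the helper names are mine), “6x2x2x12” means six top-level choices, then two, two, and twelve at the terminal level; multiplying the levels gives the structure’s capacity, and summing them gives the worst-case number of options scanned along a single path:

```python
# Compare the menu structures from the breadth-vs-depth studies.
# A shape like (6, 2, 2, 12) means 6 top-level choices, then 2,
# then 2, then 12 at the terminal level.
from math import prod

def leaves(shape):
    """Number of terminal items the structure can hold."""
    return prod(shape)

def choices_scanned(shape):
    """Worst-case menu options a user scans along one path."""
    return sum(shape)

shapes = [(4, 4, 4, 4), (8, 2, 2, 8), (6, 2, 2, 12), (3, 2, 2, 2, 12)]
for shape in shapes:
    print(shape, leaves(shape), choices_scanned(shape))
```

The symmetrical 4x4x4x4 and Norman and Chin’s concave 8x2x2x8 both hold exactly 256 terminal items, for instance, so the performance differences found in these studies come from how the choices are distributed across levels, not from capacity.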

A concave information architecture indeed seems to match a decrease in certainty users often experience when seeking information as described by Kuhlthau. At the top level of a concave structure, seekers need orientation without being overwhelmed. A balance of well-selected, mutually exclusive categories serves as an efficient, satisfying starting point. The middle levels are best restricted in breadth, thus reducing uncertainty and feelings of doubt or frustration while making choices. The broader, bottom level of a concave structure, however, provides maximum information scent and a sense of “arrival” as the seeker begins gaining confidence again. As Bernard (2002) writes, “at the terminal level, broad menus reduce the information uncertainty.” At this point in the structure the users are able to handle more complexity.

Conversely, convex structures present more choices at the middle levels than on the ends (e.g. 2x8x8x2) and thereby contradict a normal pattern of cognitive and emotional user needs in information seeking: there is more uncertainty after navigating has begun. This could mean an increased likelihood of a hesitation in the search process, and feelings of apprehension and frustration may set in.

This, then, is why the performance of varying hypertext shapes is explained by perceived complexity and uncertainty. It means that when evaluating or creating information architectures, affective considerations, namely feelings of uncertainty and confidence, can play a role in predicting their overall success.

Uncertainty in the Scent of Information

Jared Spool and his colleagues (2004) have popularized the notion of the scent of information. Scent refers to how well links and navigation match a visitor’s information need and how well they predict the content on the destination page. There are potentially many aspects of navigation design that contribute to scent, including position on screen, labels, icons, color, descriptive texts, and so forth.

But ultimately scent is more complex and subtle than how links are displayed. It really has to do with creating a sense of confidence in navigating. The researchers explain:

Usually, however, scent is invisible. It is a product of how well the designers understand the site’s users, those users’ needs, and how the users access the site.

In fact, the best way to detect scent is to measure the users’ confidence…When the scent is weak, users are not confident at all. They doubt their choices. They tell us they are making “wild guesses.” They click hesitantly, hoping the site will magically come through for them. More important, they rarely find what they are seeking.

When scent is strong, however, their confidence builds as they draw closer to their content. They traverse the site with little hesitation. Moreover, they find what they are seeking.

Trigger words emerge as the most critical element in creating information scent. These are navigation labels and texts that match a visitor’s need on the page. Discussed further in Chapter 5, labels are what people scan for when they first land on a page. Scanning for trigger words is a consistent pattern Spool and his team found across user types, tasks, and sites:

We’ve noticed that people looking for information all exhibit similar patterns. They first scan for their trigger words—words or phrases they associate with the content they’re seeking—in an attempt to pick up the scent.

Trigger words indicate to visitors that they are on the right track. They reduce uncertainty and build confidence to navigate further.
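As a toy illustration of trigger-word matching (my own sketch, not from Spool's work, which involves position, context, and wording as well), the basic idea can be reduced to checking link labels against the words a visitor is scanning for:

```python
def has_trigger_word(label, trigger_words):
    """Toy scent check: does a navigation label contain any of the
    visitor's trigger words? (Illustrative only; real information
    scent depends on far more than substring matches.)"""
    label_lc = label.lower()
    return any(word.lower() in label_lc for word in trigger_words)

# A visitor seeking refund information scans labels for their trigger words.
labels = ["About Us", "Returns & Refunds", "Careers"]
triggers = ["refund", "return"]
matches = [label for label in labels if has_trigger_word(label, triggers)]
print(matches)  # ['Returns & Refunds']
```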

While labels and trigger words certainly play a leading part in reducing uncertainty and providing a sense of confidence, I would argue that the shape of the information architecture itself also plays a significant role.


  1. Shannon, C.E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press.
  2. Belkin, N. J. (1980). Anomalous states of knowledge as the basis for information retrieval. Canadian Journal of Information Science, 5, 133-143.
  3. Bernard, M. L. (2002). Examining the effects of hypertext shape on user performance. Usability News, 4.2. Accessed online March 10, 2003 at http://wsupsy.psy.twsu.edu/surl/usabilitynews/42/hypertext.htm.
  4. Kuhlthau, C. C. (1993). Seeking meaning: A process approach to library and information services. Norwood, NJ: Ablex.
  5. Kuhlthau, C.C. (1999). The role of experience in the information search process of an early career information worker: Perceptions of uncertainty, complexity, construction, and sources. Journal of the American Society for Information Science, 50(5), 399-412.
  6. Larson, K. & Czerwinski, M. (1998). Web page design: Implications of memory, structure and scent for information retrieval. Proceedings of the Association for Computing Machinery’s CHI 1998, 18-23.
  7. Norman, K. L. and Chin, J. P. (1988). The effect of tree structure on search performance in a hierarchical menu selection system. Behaviour and Information Technology, 7, 51-65.
  8. Snowberry, K., Parkinson, S., & Sisson, N. (1983). Computer display menus. Ergonomics, 26, 699-712.
  9. Spool, Jared, Christine Perfetti, & David Brittan (2004). Design for the Scent of Information, User Interface Engineering.
  10. Wilson, T. D., Ford, N., Foster, A., & Spink, A. (2002). Information seeking and mediated searching. Part 2. Uncertainty and its correlates. Journal of the American Society for Information Science and Technology, 53(9), 704-715.

I’m giving two all-day workshops on IA and Navigation here in Hamburg in May. The workshops are being organized and sponsored by Karen Lindemann of Netflow. See the details on Netflow’s site (in German only).

Registration is now open. The workshops will be held in German. Dates: 6-7 May 2008.

