Is Relevance Relevant?

For decades, information science has developed and examined the notion of relevance in information retrieval (IR). By and large, the approach to measuring relevance has been rather technical. Recall and precision have been the two main measures:

  • Recall looks at whether all of the documents relevant to a given query are returned.
  • Precision measures whether only the relevant documents are returned.

To measure relevance, you first need to create a key: a list of the documents in a given database that match a given query. But this key is itself artificial and doesn’t take into account any of the significant contextual factors people employ when determining relevance in real-life situations. It’s made up ahead of time by a group of people who themselves don’t have a real information need in a real IR situation.
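As a minimal sketch of the mechanics, here’s how recall and precision are computed against such a key. The document IDs and the query are hypothetical; real evaluations (e.g., TREC-style) average these scores over many queries, but the per-query arithmetic looks like this:

```python
def recall_precision(retrieved, key):
    """Compute recall and precision for a single query.

    retrieved: list of document IDs returned by the system
    key: set of document IDs judged relevant ahead of time
    """
    relevant_retrieved = set(retrieved) & set(key)
    recall = len(relevant_retrieved) / len(key) if key else 0.0
    precision = len(relevant_retrieved) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical key and result list for one query
key = {"d1", "d2", "d3", "d4"}          # judged relevant ahead of time
retrieved = ["d1", "d3", "d7", "d9"]    # what the system returned
print(recall_precision(retrieved, key))  # → (0.5, 0.5)
```

Notice what the computation depends on entirely: the pre-made key. Nothing about the user’s situation, knowledge state, or goals enters the formula.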

Tefko Saracevic points to a broader model of relevance in his article “Relevance Reconsidered” [1]. This includes the notion of technical relevance, but takes a more holistic look at relevance, accounting for information interaction in IR situations. In addition to technical relevance, he adds other types to the mix:

  • Topical or subject relevance: relation between the subject or topic expressed in a query, and topic or subject covered by retrieved texts, or more broadly, by texts in the system’s file, or even in existence. It is assumed that both queries and texts can be identified as being about a topic or subject. Aboutness is the criterion by which topicality is inferred.
  • Cognitive relevance or pertinence: relation between the state of knowledge and cognitive information need of a user, and texts retrieved, or in the file of a system, or even in existence. Cognitive correspondence, informativeness, novelty, information quality, and the like are criteria by which cognitive relevance is inferred.
  • Situational relevance or utility: relation between the situation, task, or problem at hand, and texts retrieved by a system, or in the file of a system, or even in existence. Usefulness in decision making, appropriateness of information in resolution of a problem, reduction of uncertainty, and the like are criteria by which situational relevance is inferred.
  • Motivational or affective relevance: relation between the intents, goals, and motivations of a user, and texts retrieved by a system or in the file of a system, or even in existence. Satisfaction, success, accomplishment, and the like are criteria for inferring motivational relevance.

A recent study in JASIST (July 2007) also shows that relevance is very situational and contextual [2]. The researchers looked at how people picked documents from randomly ordered results lists from different search engines (Google, MSN Search, and Yahoo!).

“The findings show that the similarities between the users’ choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no ‘average user,’ and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors.”

Cognitive, affective, and physical factors? Yikes. Recall and precision don’t look at any of these, yet these were found to be significant. So what does the traditional notion of relevance in IR really measure with recall and precision?

I believe there is a much broader context that needs to be considered, one that accounts for the entire information experience. I’m not sure what this is, but context and situation seem to trump recall and precision in real-world IR. Perhaps relevance isn’t even relevant any more in the online, digital world anyway. Perhaps we need an entirely new model for understanding how and when people select documents in IR situations.

[1] Tefko Saracevic (1996). Relevance reconsidered. In Information Science: Integration in Perspectives. Proceedings of the Second Conference on Conceptions of Library and Information Science, Copenhagen, Denmark, 201–218.

[2] Judit Bar-Ilan, Kevin Keenoy, Eti Yaari, & Mark Levene (July 2007). User rankings of search engine results. JASIST, 58(9), 1254–1266.

About Jim Kalbach

Head of Customer Success at MURAL

4 comments

  1. Pingback: Content Relevance « Vagrant Muse

  2. mahesh

    James

    Thanks for articulating this so well!! I have been thinking[1] about this contextual relevance of search queries. My aim was to be able to define custom taxonomies, via tags, for local content and have contextual relevance defined by my own preferences.

    Thanks again, was unaware of the vocabularies that exist in this subject before I read your post.

    [1] http://maheshcr.wordpress.com/2007/06/24/content-discoverability/

  3. Pingback: John Ferrara on Measuring Relevance « Experiencing Information

  4. Pingback: Faceted Navigation: Grouping – An UnTapped Potential? « Experiencing Information
