FeedHub

8 October 2007

Jan tipped me off to FeedHub. This is a beta attempt at filtering large numbers of RSS feeds. I have to admit I’m not 100% sure what it tells me, but it appears to be doing some kind of text analytics on my feeds. It then personalizes a structure around those feeds. The goal is to reduce RSS clutter and noise so I can focus on the topics and subjects I want to (so they claim).

Here is what mSpoke, the creators of FeedHub, have to say about its inner workings in a blog post:

“Very simply, we learn about you based on the implicit usage of your personalized feed and any explicit gestures you choose to share with us. We use this information to distill a set of “memes” that describe your preferences. Each meme represents some characteristic of a post, like its topic, popularity in del.icio.us, or number of Diggs. Each meme also has a strength that indicates how predictive FeedHub expects it to be in choosing content you’ll like. As we learn about you, FeedHub automatically discovers new memes for you and strengthens or weakens memes appropriately.”

So you basically give FeedHub your feeds as an OPML file, it analyzes them for you, and then it builds a profile of your interests that you can manage and customize. The basic building block of all of this is what they are calling a meme, or an extracted category.
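mSpoke hasn’t published the algorithm, but the description above reads roughly like a weighted-feature preference model: each meme carries a strength, posts are scored by the memes they exhibit, and feedback strengthens or weakens those memes. Here is a minimal sketch of that reading in Python; the class, the meme labels, and the update rule are my own assumptions, not FeedHub’s actual implementation.

```python
# Hypothetical sketch of a meme-style preference profile, based only on
# mSpoke's description quoted above -- not FeedHub's actual code.

class MemeProfile:
    def __init__(self):
        # meme name -> strength (how predictive we believe the meme is)
        self.strengths = {}

    def score(self, post_memes):
        """Score a post by summing the strengths of the memes it exhibits."""
        return sum(self.strengths.get(m, 0.0) for m in post_memes)

    def update(self, post_memes, liked, rate=0.1):
        """Strengthen memes on posts the reader liked, weaken them otherwise.
        'liked' could come from explicit gestures or implicit usage."""
        delta = rate if liked else -rate
        for m in post_memes:
            # "Discovering" a new meme just means giving it its first strength.
            self.strengths[m] = self.strengths.get(m, 0.0) + delta


profile = MemeProfile()
profile.update({"topic:information-architecture", "popular-on-delicious"}, liked=True)
profile.update({"topic:celebrity-gossip"}, liked=False)
print(profile.score({"topic:information-architecture", "many-diggs"}))  # > 0
```

Read this way, “reducing RSS clutter” just means ranking or filtering incoming posts by that score; the interesting design questions are where the memes come from and how the strengths are tuned.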

I’m quite confused about the overall experience and how this really helps me make sense of the feeds I currently subscribe to. If anyone has more experience with it, I’d like to hear about it.

Memeorandum launched a new blog news aggregation service today. It’s an automatic summary of key news issues, with links to further discussions from around the blogosphere on a given topic. Unfortunately, it lacks the key visual cues that help people judge the importance and credibility of a story: the number of conversations on that topic, and the time elapsed since the story broke. I’m also not thrilled with the information design; it’s not the prettiest thing to look at.

Interestingly, there are two categories or types of news they are focusing on: political news and tech news. This reminded me of something Chris Anderson writes about in The Long Tail: information needs context to be useful. For instance, a top-ten list of all bands on an online music store is meaningless, but a top-ten list for Latin jazz suddenly makes a lot of sense. Similarly, news drawn from all blogs isn’t nearly as valuable as news categorized under politics or tech or whatever.

In the Long Tail, categories of niches are needed before the information in that market can even make sense, so it would seem. And if services like Memeorandum expand, we’ll end up with a taxonomy of niche markets. Uh oh–did I just say “taxonomy”? Guess I did. Looks like structured information–even way out there in the long tail of the blogosphere–ain’t so bad after all.
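Anderson’s point is essentially about grouping before ranking: the same popularity data only becomes meaningful once it is partitioned into niches. A toy sketch of that idea in Python (the stories, categories, and link counts are invented for illustration):

```python
# Toy illustration of the "context" point: a global top-ten is noise,
# but a per-category top-ten is meaningful. Data and names are made up.
from collections import defaultdict

stories = [
    {"title": "Senate passes bill", "category": "politics", "links": 120},
    {"title": "New RSS aggregator launches", "category": "tech", "links": 45},
    {"title": "Latin jazz reissue tops charts", "category": "latin jazz", "links": 8},
    # ... more items spread across many niches
]

def top_n_by_category(items, n=10):
    """Group items by category, then rank within each group by link count."""
    by_cat = defaultdict(list)
    for item in items:
        by_cat[item["category"]].append(item)
    return {
        cat: sorted(group, key=lambda i: i["links"], reverse=True)[:n]
        for cat, group in by_cat.items()
    }

for cat, top in top_n_by_category(stories).items():
    print(cat, [s["title"] for s in top])
```

The ranking function is the same either way; it’s the grouping step that supplies the context.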

News Cues

9 July 2007

There’s an interesting study in the February issue of JASIST about which elements are most important for determining the credibility of news stories on automated news aggregator pages, like Google News. [1] Though the findings might be obvious (there’s nothing wrong with stating the obvious), the researchers point to three elements that matter most on such automatically created pages:

  • The name of the primary source from which the headline and lead were borrowed
  • The time elapsed since the story broke
  • The number of related articles written on the topic of the story

The researchers write: “…The findings from this study demonstrate that information scent is not simply restricted to the actual text of the news lead or headline in a news aggregating service. Automatically generated cues revealing the pedigree of the hyperlinked information carry their own information scent. Furthermore, these cues appear to be psychologically significant and therefore worthy of design attention. Systems that emphasize such cues in their interfaces are likely to aid information foraging, especially under situations where the user is unlikely to be highly task-motivated and therefore prone toward heuristically based judgments of information relevance. Navigational tools that highlight these cues are likely to be more effective in directing user traffic, as evidenced by early research on newspaper design (which highlighted the attention-getting potential of placement, layout, and color) and screen design (focusing primarily on typography and color)…Finally, visualization efforts should focus on attracting user attention towards–and making explicit the value of–proximal cues instead of simply concentrating on visualizing the underlying information.”
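To make those three cues concrete, here is a small sketch of how an aggregator might attach them to a headline and render them alongside it. The field names, display format, and example values are my own assumptions; the study identifies the cues but doesn’t prescribe any particular implementation.

```python
# Sketch of attaching the three credibility cues from the study to an
# aggregated headline. Field names and the display format are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AggregatedStory:
    headline: str
    lead: str
    primary_source: str    # cue 1: where the headline/lead were borrowed from
    first_seen: datetime   # cue 2: used to compute time since the story broke
    related_articles: int  # cue 3: how many other articles cover the topic

    def cue_line(self, now=None):
        """Render the three proximal cues as a single byline-style string."""
        now = now or datetime.now(timezone.utc)
        hours = (now - self.first_seen).total_seconds() / 3600
        return (f"{self.primary_source} · {hours:.0f} hours ago · "
                f"{self.related_articles} related articles")

story = AggregatedStory(
    headline="Example headline",
    lead="Example lead paragraph...",
    primary_source="Associated Press",
    first_seen=datetime(2007, 7, 9, 6, 0, tzinfo=timezone.utc),
    related_articles=214,
)
print(story.headline)
print(story.cue_line(now=datetime(2007, 7, 9, 18, 0, tzinfo=timezone.utc)))
# Example headline
# Associated Press · 12 hours ago · 214 related articles
```

That one extra line under each headline carries exactly the “automatically generated cues” the study says readers lean on when they aren’t motivated to scrutinize the text itself.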

This means to me that–even though the pages are automatically generated–there is still information architecture and information design that is critical to understanding and experiencing the information. Maybe machines won’t replace designers, and there is a place for professions like IA in the future after all. Hmm…

[1] Sundar, S. Shyam, Silvia Knobloch-Westerwick, and Matthias R. Hastall. “News Cues: Information Scent and Cognitive Heuristics.” JASIST 58(3): 366–378, 2007.
