People: fast forward

XIV.1 January + February 2007
Page: 50

Taxonomies to tax the couch-potato’s cortex

Aaron Marcus


When I was growing up in the 1950s, there were only three TV stations, one for each network, so checking the television listings in the newspaper was a relatively simple task. Knowing that I had to get back home in time for The Mickey Mouse Club on weekday afternoons and The Milton Berle Show on Tuesday nights at 8 p.m. was a straightforward matter of scheduling.

Today, the situation is different. The many choices in media, especially video programming, together with details of content, cost, and timing, create challenges for users. Providers face the challenges of exposing those choices to the decision-makers (viewers and/or purchasers) and of designing how one navigates among the choices before selecting the desired item.

One facet of this challenge involves selecting music. In the past, the choices were a handful of AM radio stations with known, limited brands, themes, content, and scheduling. As a teenager, I always knew when the Top 40 countdown would be played. Then came FM, then downloadable stations and music/audio files, and now satellite radio stations. Many different solutions have been proposed in the audio world, making the taxonomies, attributes, and choices evident in many ways. Between the relative elegance of iTunes and its half-dozen major competitors for legal music, music is now categorized and searched in multiple ways: categories of music, lists of lists of lists, and novel display techniques, like the affinity diagrams of Live Plasma, the mood-music organizer of Mood Logic, or the music research project of Lamere. Users can find music using search boxes similar to those of general Web/Internet search engines, with more or less sophistication, including Boolean-logic-based queries.
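To make the Boolean-query idea concrete, here is a minimal sketch of AND/OR filtering over music metadata. The tiny in-memory catalog, its field names, and the query form are invented for illustration; they do not reflect the schema of iTunes or any real service.

```python
# Toy music catalog; fields and values are assumptions for this sketch.
catalog = [
    {"title": "Track A", "genre": "jazz", "mood": "calm", "era": "1950s"},
    {"title": "Track B", "genre": "jazz", "mood": "upbeat", "era": "1960s"},
    {"title": "Track C", "genre": "rock", "mood": "upbeat", "era": "1960s"},
]

def search(catalog, must=None, should=None):
    """AND across all `must` field/value pairs; if `should` is given,
    at least one of its field/value pairs must also match (OR)."""
    must = must or {}
    results = []
    for item in catalog:
        if all(item.get(k) == v for k, v in must.items()):
            if should is None or any(item.get(k) == v for k, v in should.items()):
                results.append(item["title"])
    return results

# Query: genre:jazz AND (mood:upbeat OR era:1950s)
print(search(catalog, must={"genre": "jazz"},
             should={"mood": "upbeat", "era": "1950s"}))
```

Even this toy version shows why a plain search box hides real complexity: the user's mental query mixes conjunctions and disjunctions that the interface must somehow elicit.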

The situation becomes even more complex with audio-visual collections. They can encompass the contents of spoken word and music, but also contain all of the complexities of visual artifacts. Art-history databases have grappled with the visual challenges for decades; the searchable collections of the Getty Museum are a well-known example. The University of Maryland long ago experimented with FilmFinder.

Many of the issues of verbal/auditory databases will eventually surface in collections of video material. For example, one might pose the following query: "Find the scenes in a movie where Clark Gable and Marilyn Monroe are sitting together in a truck in the middle of the desert." In the future, consumers probably will want, and be able, to see snippets of favorite video material, much as they can now view sports highlights of the best moments of a chosen game or competition.

But leaving aside these search-and-retrieval questions, there remains the humble issue of how to present just a broadcast schedule of video material or the available contents of a large aggregator of video content. The former is an issue for such cable and satellite service providers as DirecTV or Comcast. The latter issue matters for YouTube and its competitors.

As it happens, I use three DirecTV decoders, each of which shows 800 or more channels of continuous content according to three different categorization, visual-display, and remote-control-interaction techniques. This challenge reduces the most dedicated couch potato's brain to a potato chip. The movie-content metadata beyond title, year of release, and essential story line (e.g., who are the actors/actresses, directors, production companies, locales, etc.) for these services is nowhere near the sophistication of Gracenote's music databases or the film databases of the American Film Institute.

The video archives of YouTube (reported to be adding tens of thousands of video clips per day and already containing approximately 100 million entries), Google, Amazon, and Apple attempt to provide some initial, brief categorizations of content. Armed with a search widget, the user is left to his/her resources in navigating vast landscapes of content. But how can one categorize YouTube's strange assemblages of video? Only tag words can lend some aggregate order to a relatively unstructured collection of content. Beyond this challenge are the language, jargon, and translation challenges of making the world's video production available to all.

It is true that social tagging systems may simplify the search process, but what if the taggers don't quite realize what you want or what you see in the content, or differ greatly from you in general cultural experience? How will you be able to "round up the usual suspects" when no one has thought to phrase the question in quite the way you need? Similarly, what happens when mindsets change significantly over generations, and the tags of yesteryear are meaningless to today's inquiring minds? Beloit College in Wisconsin has tracked this change in its study of what's on the minds of current college students in the class of 2010: students who have lived in a world with no Soviet Union, one Germany, and bar codes since birth [2]. Perhaps Scott Jones' guide concept (actual live human beings embodied in his service) will come to the rescue ... but then who will help them? And how many worldwide guides could possibly be recruited to help everyone struggling to find the right media selection?
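The vocabulary-mismatch problem described above can be illustrated with a toy tag-retrieval sketch: a searcher's word misses clips tagged with a synonym until the query is expanded. The clips, tags, and synonym table here are invented for the example, not drawn from any real tagging service.

```python
# Toy tagged-video collection; names and tags are assumptions for this sketch.
clips = {
    "clip1": {"automobile", "chase"},
    "clip2": {"car", "chase"},
    "clip3": {"cooking"},
}

# A hand-built synonym table standing in for what a thesaurus,
# folksonomy analysis, or human guide might supply.
synonyms = {"car": {"car", "automobile", "auto"}}

def find(clips, term, expand=False):
    """Return clip names whose tag sets intersect the query terms.
    With expand=True, the term is widened via the synonym table."""
    terms = synonyms.get(term, {term}) if expand else {term}
    return sorted(name for name, tags in clips.items() if tags & terms)

print(find(clips, "car"))               # literal match misses "automobile"
print(find(clips, "car", expand=True))  # expansion rounds up both clips
```

The gap between the two result lists is exactly the gap between the tagger's vocabulary and the searcher's; generational or cultural drift simply makes the synonym table harder to build.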

When selections are returned, the challenge of visualization or sonification arises. Perhaps lists or tables are always a safe bet for effective visualization, provided one has the right language, labels, and layout. But even the most effective large-scale displays of possible content selections become a challenge when the screen-display area shrinks to the small window of a mobile device. Here, at last, we face the ultimate challenge. How, indeed, will we be able to scroll, pan, or otherwise manipulate a large collection of entities and relationships in a way that enables us to decide what to view, what to experience? What will allow us to make that decision in, at most, something like 1.5 seconds, just in case we are driving, making a decision for others, and can't take our eyes off the road for too long?

Designing usable, useful, and appealing taxonomies, extremely fast navigation/browsing strategies, and efficient methods to select choices based on a review of key attributes will challenge us throughout this growth period in available media content. Some people enjoy playing amidst plenitude; to others, it is anathema. Some people are searching, others are browsing, and still others are primarily role-playing. For some people, knowing obscure references is a sign of status, if not wealth. Research issues include what search strategies, tag techniques, and display techniques to offer different generations, cultures, cognitive styles, emotional/psychological needs, and computing/communication platforms.

The collections are growing quickly. Services from AT&T, PCShowBuzz, and others are now providing downloadable or streaming video content from PCs to home media display systems, offering, among other things, 1000-plus channels. We are only a short time away from a massive flood of video content. Already, Lee Gomes [6], a feature writer for The Wall Street Journal, estimates that people worldwide have spent 9,305 years watching the 45 terabytes of video on YouTube since it started. Are we ready for 10X or 100X versions? Like it or not, the onslaught is coming. Our brains are about to be deluged with more video content than most people could have imagined. Let's hope that many teams of CHI and other professionals will help solve this information design and visualization problem quickly and efficiently, so we can move on to the next level.

References

1. American Film Institute database: (last checked 10 September 2006).

2. Beloit College, 2006 Mindset List study of the Class of 2010: (last checked on 12 September 2006).

3. Blaine, Tina, and Fels, Sidney (2003). "Contexts of Collaborative Musical Experiences." Proc. 2003 Conference on New Interfaces for Musical Expression (NIME-03), Montreal, Canada.

4. FilmFinder: See Plaisant, Catherine (2004). "The Challenge of Information Visualization Evaluation." Human-Computer Interaction Laboratory, University of Maryland, College Park, MD (last checked on 12 September 2006).

5. Getty Museum visual archives: (last checked 10 September 2006).

6. Gomes, Lee (2006). "Will All of Us Get Our 15 Minutes On a YouTube Video?" Wall Street Journal, 30 August 2006, p. B1.

7. Gracenote music database: (last checked 10 September 2006).

8. Zhu, Jiajun, and Lu, Lie (2005). "Perceptual Visualization of A Music Collection." Proc. IEEE International Conference on Multimedia and Expo (ICME05), pp. 1058-1061.

9. Kim, Ryan (2006). "Rethinking Google's System: Human-Powered Search Premieres." San Francisco Chronicle, 4 September 2006, p. C1 ff.

10. Lamere, Paul (2006). "Search Inside the Music." Project report posted on the Web: (last checked 10 September 2006).

11. Live Plasma music organizer: (last checked 10 September 2006).

12. Mood Logic music organizer: (last checked 10 September 2006).

13. PCShowBuzz, offering 1000+ channels: (last checked 10 September 2006).

Author

Aaron Marcus
Aaron Marcus and Associates, Inc. (AM+A)

About the author

Aaron Marcus is the founder and president of Aaron Marcus and Associates, Inc. (AM+A). He has degrees from both Princeton University and Yale University, in physics and graphic design, respectively. Mr. Marcus has published, lectured, tutored, and consulted internationally for more than 30 years.

Figures

Figure UF1. Three versions of recent commercial video guides as shown by DirecTV. Each provides a different appearance, different ways of showing detailed information, and different remote controls with different arrays of buttons to access the information.


©2007 ACM  1072-5220/07/0100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2007 ACM, Inc.
