Using the Amazon metric to construct an image database based on what people do, not what they say


Author(s): Wyeld, T. G.; Colomb, R. M.
Contributor(s)

E. Banissi

R. A. Burkhard

A. Ursyn

J. J. Zhang

Date(s)

01/01/2006

Abstract

Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user in a single interface. But the question of what initial metadata to enter into a database, and how, remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, we can find some similarity across groups of users regardless of their reasoning. For example, a search on Amazon.com also returns related products, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images, which we visualise using both traditional methods and the Amazon.com method. We report the findings of this comparative investigation in a case study involving a group of randomly selected participants. We conclude with the recommendation that, in combination, the traditional and averaging methods would enhance current database visualisation, searching, and browsing facilities.
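
The averaging of user navigation described in the abstract can be sketched in a few lines of Python. This is a minimal, illustrative example only; the session data, image identifiers, and function names are assumptions, not taken from the paper. It counts how often pairs of images are viewed in the same browsing session and returns the images most often co-viewed with a given image.

    # Sketch of a co-navigation ("Amazon metric") style association count.
    # Sessions and image names below are hypothetical, for illustration only.
    from collections import defaultdict
    from itertools import combinations

    def co_navigation_counts(sessions):
        """Count how often each pair of images appears in the same session."""
        counts = defaultdict(int)
        for session in sessions:
            for a, b in combinations(sorted(set(session)), 2):
                counts[(a, b)] += 1
        return counts

    def related_images(counts, image, top_n=3):
        """Rank other images by how often they co-occur with the given image."""
        scores = defaultdict(int)
        for (a, b), n in counts.items():
            if a == image:
                scores[b] += n
            elif b == image:
                scores[a] += n
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    # Example: three hypothetical browsing sessions over an image collection.
    sessions = [["img_cat", "img_dog", "img_bird"],
                ["img_cat", "img_dog"],
                ["img_dog", "img_fish"]]
    counts = co_navigation_counts(sessions)
    print(related_images(counts, "img_dog"))  # e.g. ['img_cat', 'img_bird', 'img_fish']

Averaged over many sessions, such counts associate images by how users actually browse them, independent of any text vocabulary the users might supply.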

Identifier

http://espace.library.uq.edu.au/view/UQ:104889

Language(s)

eng

Publisher

IEEE Computer Society

Keywords #Amazon metric #Database browsing #Database visualisation #Database navigation #Text-based vocabulary #User interface #E1 #280104 Computer-Human Interaction #740000 - Education and Training

Type

Conference Paper