CAMEL: Concept Annotated iMagE Libraries

Issue Date
2001-01
Author
Natsev, Apostol
Chadha, Atul
Soetarman, Basuki
Vitter, Jeffrey Scott
Publisher
SPIE--The International Society for Optical Engineering
Type
Article
Article Version
Scholarly/refereed, publisher version
Abstract
The problem of content-based image searching has received considerable attention in the last few years. Thousands of images
are now available on the internet, and many important applications require searching of images in domains such as E-commerce,
medical imaging, weather prediction, satellite imagery, and so on. Yet content-based image querying is still largely unestablished
as a mainstream field, and it is not widely used by search engines. We believe that two of the major hurdles to wider
acceptance are poor retrieval quality and poor usability.
In this paper, we introduce the CAMEL system—an acronym for Concept Annotated iMagE Libraries—as an effort to
address both of the above problems. The CAMEL system provides an easy-to-use, yet powerful, text-only query interface,
which allows users to search for images based on visual concepts, identified by specifying relevant keywords. Conceptually,
CAMEL annotates images with the visual concepts that are relevant to them. In practice, CAMEL defines visual concepts by
looking at sample images off-line and extracting their relevant visual features. Once defined, such visual concepts can be used to
search for relevant images on the fly, using content-based search methods. The visual concepts are stored in a Concept Library
and are represented by an associated set of wavelet features, which in our implementation were extracted by the WALRUS
image querying system. Even though the CAMEL framework applies independently of the underlying query engine, for our
prototype we have chosen WALRUS as a back-end, due to its ability to extract and query with image region features.
CAMEL improves retrieval quality because it allows experts to build very accurate representations of visual concepts that
can be used even by novice users. At the same time, CAMEL improves usability by supporting the familiar text-only interface
currently used by most search engines on the web. Both improvements represent a departure from traditional approaches to
improving image query systems—instead of focusing on query execution, we emphasize query specification by allowing
simpler and yet more precise query specification.
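The two-phase scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class and function names are hypothetical, and plain feature vectors with cosine similarity stand in for the wavelet signatures that WALRUS would extract from image regions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (stand-in for
    the wavelet-signature matching performed by WALRUS)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ConceptLibrary:
    """Maps keywords to visual concepts, each represented by the
    features of its sample images."""

    def __init__(self):
        self._concepts = {}  # keyword -> list of sample feature vectors

    def define_concept(self, keyword, sample_features):
        # Offline step: an expert picks sample images of a concept;
        # their extracted features define the concept.
        self._concepts[keyword] = list(sample_features)

    def search(self, keyword, image_index, top_k=3):
        # On-the-fly step: a text-only query names a concept, and
        # images are ranked by similarity to its sample features.
        samples = self._concepts.get(keyword, [])
        scored = []
        for name, feats in image_index.items():
            score = max((cosine(feats, s) for s in samples), default=0.0)
            scored.append((score, name))
        scored.sort(reverse=True)
        return [name for _, name in scored[:top_k]]
```

For example, after an expert defines a "sunset" concept from sample features, a novice user typing the keyword "sunset" retrieves the closest-matching images from the index without ever supplying an example image themselves.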
Description
Copyright 2001 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic electronic or print reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
http://dx.doi.org/10.1117/12.410975
Citation
Apostol Natsev, Atul Chadha, Basuki Soetarman and Jeffrey S. Vitter, "CAMEL: concept annotated image libraries", Proc. SPIE 4315, 62 (2001). http://dx.doi.org/10.1117/12.410975
Items in KU ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.