
dc.contributor.author: Natsev, Apostol
dc.contributor.author: Chadha, Atul
dc.contributor.author: Soetarman, Basuki
dc.contributor.author: Vitter, Jeffrey Scott
dc.date.accessioned: 2011-03-18T20:31:58Z
dc.date.available: 2011-03-18T20:31:58Z
dc.date.issued: 2001-01
dc.identifier.citation: Apostol Natsev, Atul Chadha, Basuki Soetarman and Jeffrey S. Vitter, "CAMEL: concept annotated image libraries", Proc. SPIE 4315, 62 (2001). http://dx.doi.org/10.1117/12.410975
dc.identifier.uri: http://hdl.handle.net/1808/7196
dc.description: Copyright 2001 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic electronic or print reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. http://dx.doi.org/10.1117/12.410975
dc.description.abstract: The problem of content-based image searching has received considerable attention in the last few years. Thousands of images are now available on the internet, and many important applications require searching of images in domains such as e-commerce, medical imaging, weather prediction, satellite imagery, and so on. Yet content-based image querying is still largely unestablished as a mainstream field, nor is it widely used by search engines. We believe that two of the major hurdles to wider acceptance are poor retrieval quality and poor usability. In this paper, we introduce the CAMEL system (an acronym for Concept Annotated iMagE Libraries) as an effort to address both of these problems. The CAMEL system provides an easy-to-use, yet powerful, text-only query interface, which allows users to search for images based on visual concepts identified by specifying relevant keywords. Conceptually, CAMEL annotates images with the visual concepts that are relevant to them. In practice, CAMEL defines visual concepts by looking at sample images off-line and extracting their relevant visual features. Once defined, such visual concepts can be used to search for relevant images on the fly, using content-based search methods. The visual concepts are stored in a Concept Library and are represented by an associated set of wavelet features, which in our implementation were extracted by the WALRUS image querying system. Even though the CAMEL framework applies independently of the underlying query engine, for our prototype we have chosen WALRUS as a back-end, due to its ability to extract and query with image region features. CAMEL improves retrieval quality because it allows experts to build very accurate representations of visual concepts that can then be used even by novice users. At the same time, CAMEL improves usability by supporting the familiar text-only interface currently used by most search engines on the web. Both improvements represent a departure from traditional approaches to improving image query systems: instead of focusing on query execution, we emphasize query specification by allowing simpler and yet more precise query specification.
dc.language.iso: en_US
dc.publisher: SPIE--The International Society for Optical Engineering
dc.subject: Camel
dc.subject: Walrus
dc.subject: Concepts
dc.subject: Content-based query
dc.subject: Images
dc.subject: Multimedia
dc.title: CAMEL: Concept Annotated iMagE Libraries
dc.type: Article
kusw.kuauthor: Vitter, Jeffrey Scott
kusw.oastatus: fullparticipation
dc.identifier.doi: 10.1117/12.410975
kusw.oaversion: Scholarly/refereed, publisher version
kusw.oapolicy: This item meets KU Open Access policy criteria.
dc.rights.accessrights: openAccess

