Strainer - PowerPoint PPT Presentation

Presentation Transcript

  1. SIEVE—Search Images Effectively through Visual Elimination Ying Liu, Dengsheng Zhang and Guojun Lu Gippsland School of Info Tech, Monash University, Churchill, Victoria, 3842 {dengsheng.zhang,}

  2. Outline • Motivations • SIEVE • Experiment • Results • Conclusions

  3. Motivations—Semantic Gap • Conventional content-based image retrieval (CBIR) systems rely on visual features rather than textual information. • However, there is a gap between low-level visual features and semantic information (textual descriptions), and this gap cannot be closed easily.

  4. Motivations—Text Search Popular • CBIR systems are not as widely used as text-based image search engines. • However, the textual descriptions in existing search engines may not capture image content and are subjective in nature. • We propose to integrate existing text-based image search engines with visual features. • A post-filtering algorithm is proposed, called SIEVE—Search Images Effectively through Visual Elimination. • Practical fusion methods are also proposed to integrate SIEVE with contemporary text-based search engines.

  5. SIEVE—The Idea • The idea behind SIEVE is very similar to object classification done by a human being. • First, objects of interest are roughly separated from clearly different objects, either manually or with simple tools. • Then, the collected objects are subject to visual inspection to distinguish each object of interest from unwanted objects.

  6. SIEVE—The Approach • In our approach, text-based image search results for a given query are obtained first. • SIEVE is then used to filter out those images which are semantically irrelevant to the query.
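The two-stage approach above can be sketched as a short pipeline. This is a hypothetical illustration, not the authors' code: `text_search` and `is_relevant` are placeholder stand-ins for the text-based engine and SIEVE's visual check.

```python
# Hypothetical sketch of SIEVE's post-filtering step.
# `text_search` and `is_relevant` are placeholders, not the paper's API.

def text_search(query):
    # Stand-in for a text-based image search engine returning ranked image IDs.
    return ["img1", "img2", "img3", "img4"]

def is_relevant(image_id, query):
    # Stand-in for SIEVE's visual check (segmentation + concept mapping).
    relevant = {"img1", "img3"}
    return image_id in relevant

def sieve(query):
    """Filter the text-search result list, keeping only visually relevant images."""
    candidates = text_search(query)
    return [img for img in candidates if is_relevant(img, query)]

print(sieve("tiger"))  # ['img1', 'img3']
```

The key design point is that SIEVE never searches the whole database itself; it only re-ranks (filters) the short candidate list the text engine has already produced.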

  7. SIEVE—The System

  8. SIEVE—Feature Extraction • For each image in the returned list, SIEVE first segments it into different regions. • Next, color and texture features of each region are extracted. • The region color feature is the dominant color in HSV space, and the region texture feature is the Gabor feature extracted from the region.
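As a rough illustration of the color feature, a region's dominant HSV color can be estimated by coarse histogram binning of its pixels. This is a sketch under assumed conventions (HSV values normalized to [0, 1], invented bin counts), not the paper's exact dominant-color method.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): estimate a region's
# dominant HSV color as the centre of its most populated coarse HSV bin.

def dominant_hsv(region_hsv, bins=(8, 4, 4)):
    """region_hsv: (N, 3) array of HSV pixels in [0, 1].
    Returns the bin centre of the most populated coarse HSV bin."""
    idx = np.floor(np.clip(region_hsv, 0, 1 - 1e-9) * np.array(bins)).astype(int)
    flat = np.ravel_multi_index(idx.T, bins)          # one flat bin id per pixel
    counts = np.bincount(flat, minlength=int(np.prod(bins)))
    h, s, v = np.unravel_index(counts.argmax(), bins)  # most populated bin
    return ((h + 0.5) / bins[0], (s + 0.5) / bins[1], (v + 0.5) / bins[2])

# Example: a region that is mostly one shade of green.
rng = np.random.default_rng(0)
pixels = np.clip(rng.normal([0.33, 0.8, 0.6], 0.02, size=(500, 3)), 0, 1)
print(dominant_hsv(pixels))
```

A Gabor texture feature would typically be computed separately, e.g. as the mean and standard deviation of Gabor filter responses over the region.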

  9. SIEVE—Decision Tree Analysis • A semantic-template-based decision tree reasoning algorithm is used to derive a set of decision rules that learn a set of concepts in natural scenery images. • Using these decision rules, the low-level features of a region are mapped to semantic concepts.
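The mapping from low-level features to concepts can be pictured as a small set of learned rules. The rules and thresholds below are invented purely for illustration; the paper's rules are learned from training templates, not hand-written.

```python
# Toy illustration (not the paper's learned tree): hand-written decision
# rules mapping a region's low-level features to a semantic concept.
# All thresholds below are invented for illustration only.

def region_concept(hue, saturation, texture_energy):
    """Map coarse HSV color and a texture statistic to a scene concept."""
    if saturation < 0.15 and texture_energy < 0.2:
        return "snow"      # desaturated, smooth region
    if 0.25 <= hue <= 0.45 and texture_energy > 0.5:
        return "forest"    # green, highly textured region
    if 0.55 <= hue <= 0.7:
        return "sky"       # blue region
    return "unknown"

print(region_concept(0.35, 0.8, 0.7))  # forest
```

An image is then judged relevant to a query if some region maps to the queried concept.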

  10. SIEVE—Decision Tree Analysis

  11. Experiment—Image Collections • To test the retrieval performance of SIEVE, 10 queries are selected: mountain, beach, building, firework, flower, forest, snow, sunset, tiger and sea. • Google image search can return up to thousands of images for a query; however, users are usually interested only in the first few pages. • Therefore, for each query, the top 100 images are downloaded from the first 5 pages.

  12. Experiment—Learning Semantics • For a given query, each image in the returned list is segmented into different regions using JSEG. • Regions covering over 5% of the entire image are selected. • Then, low-level features of these regions are extracted. • Next, the semantic-based decision tree method is used to learn the concept of each region in an image and decide whether the image is relevant to the query or not.
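The 5%-area region filter in the steps above can be sketched directly. The sketch assumes the segmenter (e.g. JSEG) outputs an integer label map, one label per pixel; the function name is ours, not the authors'.

```python
import numpy as np

# Sketch of the 5%-area region filter described above; the integer label
# map is assumed to come from a segmenter such as JSEG.

def large_regions(label_map, min_fraction=0.05):
    """Return labels of segmented regions covering more than `min_fraction`
    of the image area."""
    labels, counts = np.unique(label_map, return_counts=True)
    total = label_map.size
    return [int(l) for l, c in zip(labels, counts) if c / total > min_fraction]

# Toy 10x10 label map: region 0 covers 97 pixels, region 1 covers only 3.
label_map = np.zeros((10, 10), dtype=int)
label_map[0, :3] = 1
print(large_regions(label_map))  # [0]
```

Discarding tiny regions keeps the decision-tree stage focused on regions large enough to carry a reliable concept.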

  13. Experiment—Measurement • In a Web image search scenario, it is not known how many relevant images exist in the database for a given query. • The bull's eye measurement is therefore used. • The bull's eye measures the retrieval precision among the top K retrieved images.
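The measure described above amounts to precision at K. A minimal sketch, with made-up item IDs:

```python
# Minimal sketch of the bull's-eye style measure used above:
# precision among the top K retrieved images.

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for item in top_k if item in relevant) / k

retrieved = ["a", "b", "c", "d", "e"]   # ranked result list
relevant = {"a", "c", "e"}              # ground-truth relevant set
print(precision_at_k(retrieved, relevant, 4))  # 0.5
```

Because total recall is unknowable on the open Web, this top-K precision is the natural surrogate measure.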

  14. Results—Retrieval Accuracy Average retrieval precision for 10 image concepts

  15. Results—Retrieval Examples Above: Search result by Google using query ‘Tiger’. Left: Result by SIEVE using the same query ‘Tiger’.

  16. Results—Retrieval Examples Above: Search result by Google using query ‘Snow’. Left: Result by SIEVE using the same query ‘Snow’.

  17. Results—Retrieval Examples Above: Search result by Google using query ‘Firework’. Right: Result by SIEVE using the same query ‘Firework’.

  18. Integration with Search Engines • Scenario 1—SIEVE is installed on the server. The user sends an image search query via a Web browser, and the search engine returns the SIEVED images to the user. • Scenario 2—SIEVE is integrated with the Web browser as a plug-in. A user query is directed by SIEVE to the search engine, and the returned list is then filtered by SIEVE. • Scenario 3—SIEVE is used as standalone application software. SIEVE directs the user query to various Web image search engines, and the returned lists are further SIEVED.
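Scenario 3 can be sketched as a client that merges results from several engines before filtering. Engine functions, image IDs, and the relevance check below are all hypothetical placeholders.

```python
# Hypothetical sketch of Scenario 3: a standalone SIEVE client that queries
# several engines, merges the lists, and filters the merged result.
# All names and IDs here are placeholders.

def search_engine_a(query):
    return ["a1", "a2", "a3"]

def search_engine_b(query):
    return ["b1", "a2"]   # overlaps with engine A on "a2"

def visually_relevant(image_id, query):
    # Stand-in for SIEVE's region-based concept check.
    return image_id in {"a1", "b1"}

def sieve_multi(query, engines):
    """Merge engine results (preserving rank order, deduplicated), then filter."""
    seen, merged = set(), []
    for engine in engines:
        for img in engine(query):
            if img not in seen:
                seen.add(img)
                merged.append(img)
    return [img for img in merged if visually_relevant(img, query)]

print(sieve_multi("sunset", [search_engine_a, search_engine_b]))  # ['a1', 'b1']
```

Scenarios 1 and 2 use the same filtering core; only where it runs (server, browser plug-in, or standalone client) differs.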

  19. Issues • Significant time is spent on image segmentation and computing image semantics. This can be addressed by indexing image semantics upfront in image search engines. • Although a limited concept set is used to test its performance, the decision tree can accommodate more semantic concepts, provided their corresponding distinct feature templates are available for inclusion in the training dataset. • SIEVE can be applied more effectively if images in the database are first classified into categories.

  20. Conclusions • An effective method called SIEVE has been proposed to improve text-based Web image search. • Compared with text-based image search engines, it shows significant improvement on the tested semantic concepts. • Compared with conventional CBIR systems, it is much more efficient in dealing with huge image databases such as Web images, because SIEVE makes use of efficient text-based image search engines. • Future research will extend SIEVE to include a larger number of image concepts.