Building multi-model collaboration in detecting multimedia semantic concepts (Conference)

Ha, HY; Fleites, FC; Chen, SC. (2013). Building multi-model collaboration in detecting multimedia semantic concepts. 205-212. doi:10.4108/icst.collaboratecom.2013.254110

cited authors

  • Ha, HY; Fleites, FC; Chen, SC

abstract

  • The rapid growth of multimedia technology has driven a surge in multimedia data. As multimedia data have become more essential, accounting for a major portion of the content processed by many applications, it is important to leverage data mining methods to associate the low-level features extracted from multimedia data with high-level semantic concepts. In order to bridge this semantic gap, researchers have investigated the correlation among the multiple modalities involved in multimedia data to detect semantic concepts effectively. It has been shown that multimodal fusion plays an important role in elevating the performance of both multimedia content-based retrieval and semantic concept detection. In this paper, we propose a novel cluster-based ARC fusion method to thoroughly explore the correlation among multiple modalities and classification models. After combining features from multiple modalities, each classification model is built on one feature cluster generated by our previous work, FCC-MMF. The correlation between the medoid of a feature cluster and a semantic concept is introduced to characterize the detection capability of the corresponding classification model, and it is further combined with logistic regression to refine the ARC fusion method proposed in our previous work for semantic concept detection. Several experiments compare the proposed method with related works, and the proposed method outperforms them with a higher Mean Average Precision (MAP). © 2013 ICST.
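
The abstract outlines a pipeline: fuse features from multiple modalities, group them into feature clusters (FCC-MMF), train one classification model per cluster, weight each model by the correlation between its cluster medoid and the target concept, and refine the fusion with logistic regression. The sketch below is a minimal, illustrative reconstruction of that idea using scikit-learn; the k-means grouping of feature dimensions, the SVM base classifiers, and the correlation-weighted score fusion are assumptions standing in for FCC-MMF and the ARC fusion described in the paper, not the authors' implementation.

```python
# Minimal sketch of cluster-based fusion for one semantic concept.
# All data, model choices, and parameters here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data: rows are shots, columns are fused low-level features
# (e.g., visual + audio + text descriptors concatenated); y marks one concept.
X = rng.normal(size=(300, 40))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=300) > 0).astype(int)

# 1) Group feature dimensions into clusters (proxy for the FCC-MMF step).
n_clusters = 4
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X.T)

scores, weights = [], []
for c in range(n_clusters):
    cols = np.where(km.labels_ == c)[0]

    # 2) One classification model per feature cluster.
    clf = SVC(probability=True, random_state=0).fit(X[:, cols], y)
    scores.append(clf.predict_proba(X[:, cols])[:, 1])

    # 3) Correlation between the cluster medoid feature and the concept labels,
    #    used as a rough indicator of that model's detection capability.
    center = km.cluster_centers_[c]
    medoid_col = cols[np.argmin(np.linalg.norm(X.T[cols] - center, axis=1))]
    weights.append(abs(np.corrcoef(X[:, medoid_col], y)[0, 1]))

# 4) Refine the fusion: logistic regression over correlation-weighted scores
#    (a simple stand-in for the refined ARC fusion step).
S = np.column_stack(scores) * np.array(weights)
fusion = LogisticRegression().fit(S, y)
print("fused training accuracy:", fusion.score(S, y))
```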

publication date

  • December 1, 2013

start page

  • 205

end page

  • 212