Florida International University and University of Miami TRECVID 2011 Conference

Chen, C., Liu, D., Zhu, Q., et al. (2011). Florida International University and University of Miami TRECVID 2011.

cited authors

  • Chen, C; Liu, D; Zhu, Q; Meng, T; Shyu, ML; Yang, Y; Ha, H; Fleites, F; Chen, SC; Chen, W; Chen, T

abstract

  • This paper presents a summary of the work of the "Florida International University - University of Miami (FIU-UM)" team in the TRECVID 2011 tasks [1]. This year, the FIU-UM team participated in the Semantic Indexing (SIN) and Instance Search (INS) tasks. Four runs of SIN results were submitted:
      • F_A_FIU-UM-1_1: KF+Meta&Relation+Audio+SPCPE&SIFT - Fuse the results from Subspace Modeling and Ranking (SMR) using the key frame-based low-level features (KF), LibSVM classification using metadata from the meta-xml files associated with the IACC videos as well as the relationships between semantic concepts (Meta&Relation), Gaussian Mixture Models (GMM) using Mel-frequency cepstral coefficient (MFCC) audio features, and the simultaneous partition and class parameter estimation (SPCPE) algorithm with scale-invariant feature transform (SIFT) interest point matching (SPCPE&SIFT).
      • F_A_FIU-UM-2_2: KF+Meta&Relation - Fuse the results from SMR using KF and LibSVM using the meta information and the relationships between semantic concepts.
      • F_A_FIU-UM-3_3: Fuse the results from SMR using KF, GMM using MFCC audio features, and SPCPE&SIFT matching.
      • F_A_FIU-UM-4_4: KF - The baseline model, which applies SMR to the key frame-based low-level features.
    In addition, four runs of the INS task were also submitted:
      • FIU-UM-1: Use the 95 original example images as well as 261 self-collected images to train the Multiple Correspondence Analysis (MCA) models that rank the testing video clips for each image query; then use SIFT, K-Nearest Neighbor (KNN), and related SIN models to re-rank the returned video clips.
      • FIU-UM-2: Use the 95 original example images as well as 261 self-collected images to train the MCA models that rank the testing video clips for each image query; then use the MCA model trained on the 95 original example images to re-rank the video clips.
      • FIU-UM-3: Use the 95 original example images as well as 261 self-collected images to train the MCA models that rank the testing video clips for each image query; no re-ranking is performed in this run.
      • FIU-UM-4: Use the 95 original example images to train the MCA models that rank the testing video clips for each image query; then re-rank the testing video clips by the KNN results obtained with the 95 original example images.
    After analyzing this year's results, a few future directions are proposed to improve the current framework.
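The SIN runs above combine ranked outputs from several modalities (SMR on key-frame features, LibSVM on metadata and concept relations, GMM on MFCC audio features, SPCPE&SIFT matching). The abstract does not state the exact fusion rule, so the following is only a minimal late-fusion sketch under the assumption of per-modality min-max normalization and a weighted average; the shot names, scores, and equal weights are illustrative, not from the paper.

```python
# Minimal late-fusion sketch (editorial illustration, not the authors' exact scheme).
# Assumption: each modality yields one relevance score per video shot; scores are
# min-max normalized per modality and combined with (assumed) equal weights.
from typing import Dict, List, Optional, Tuple


def min_max_normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Rescale one modality's scores to [0, 1] so modalities are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return {shot: (s - lo) / span for shot, s in scores.items()}


def fuse_runs(modality_scores: List[Dict[str, float]],
              weights: Optional[List[float]] = None) -> List[Tuple[str, float]]:
    """Weighted-average fusion of per-shot scores from several modalities."""
    if weights is None:
        weights = [1.0 / len(modality_scores)] * len(modality_scores)
    fused: Dict[str, float] = {}
    for w, scores in zip(weights, modality_scores):
        for shot, s in min_max_normalize(scores).items():
            fused[shot] = fused.get(shot, 0.0) + w * s
    # Rank shots by fused score, highest first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical example: three shots scored by two of the modalities.
smr_kf = {"shot_1": 0.9, "shot_2": 0.4, "shot_3": 0.1}    # SMR on key-frame features
gmm_mfcc = {"shot_1": 0.2, "shot_2": 0.8, "shot_3": 0.3}  # GMM on MFCC audio features
print(fuse_runs([smr_kf, gmm_mfcc]))
```

The same pattern applies to the INS runs, where an MCA-based ranking would simply be one more score dictionary fed into the fusion, or replaced by a re-ranking pass over the top of the returned list.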

publication date

  • January 1, 2011