This paper describes the FIU-UM group's TRECVID 2008 high-level feature extraction task submission. We used a correlation-based video semantic concept detection system for this task. The system first extracts shot-based low-level audiovisual features from the raw data source (audio and video files). The resulting numerical feature set is then discretized. Multiple correspondence analysis (MCA) is then used to explore the correlation between items, i.e., the feature-value pairs generated by the discretization process, and the different concepts. This process generates both positive and negative rules. During classification, each instance (shot) is tested against each rule, and the resulting score for each instance determines the final classification. We conducted two runs using two different predetermined values as the score threshold for classification:

• A_FIU-UM-FE1_1: train on partial TRECVID 2008 development data (all TRECVID 2007 development data + partial TRECVID 2007 test data), using -2 as the instance score threshold for final classification.

• A_FIU-UM-FE2_2: train on partial TRECVID 2008 development data (all TRECVID 2007 development data + partial TRECVID 2007 test data), using 0 as the instance score threshold for final classification (simple majority).

We observed a slight improvement in the A_FIU-UM-FE2_2 run over the A_FIU-UM-FE1_1 run. Initially it appeared from the training data that a score threshold of -2 could potentially provide better performance; however, in order to test a true majority-voting approach we conducted the second run (A_FIU-UM-FE2_2) using 0 as our threshold. Based on the submitted results and the results produced in some of our previous work, we believe that the MCA process has the capability to learn the correlation between low-level features such as color, volume, and texture and high-level features (concepts), and thereby help narrow the semantic gap.
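The rule-based classification step described above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the rule representation (a rule as a set of feature-value pairs that fires when all its pairs occur in a shot), the +1/-1 voting, and the convention that a shot is labeled positive when its score exceeds the threshold are all assumptions made for clarity.

```python
# Sketch of MCA-derived rule scoring for one shot (hypothetical data structures).
# A rule is a frozenset of (feature, value) pairs produced by discretization;
# it "fires" when every pair in the rule appears among the shot's items.

def score_shot(items, pos_rules, neg_rules):
    """Net vote for a shot: +1 per firing positive rule, -1 per negative rule."""
    score = 0
    for rule in pos_rules:
        if rule <= items:      # positive rule fires -> vote for the concept
            score += 1
    for rule in neg_rules:
        if rule <= items:      # negative rule fires -> vote against the concept
            score -= 1
    return score

def classify(items, pos_rules, neg_rules, threshold=0):
    """threshold=0 mimics simple majority voting (run A_FIU-UM-FE2_2);
    threshold=-2 mimics run A_FIU-UM-FE1_1 (assumed > comparison)."""
    return score_shot(items, pos_rules, neg_rules) > threshold

# Toy example with made-up feature-value pairs:
pos_rules = [frozenset({("color", "dark")}), frozenset({("volume", "high")})]
neg_rules = [frozenset({("texture", "smooth")})]
shot = {("color", "dark"), ("texture", "smooth")}
```

With these toy rules, the shot scores 1 - 1 = 0, so it is rejected under the majority threshold (0) but accepted under the more permissive threshold (-2), which illustrates how the two runs can disagree on borderline shots.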
One of the biggest challenges of this year's high-level feature extraction task was that the target high-level feature list changed. This year we used the same low-level features that we used in 2007, and we believe this low-level feature set may not have been the best candidate to represent the new high-level feature list. Extracting additional audio-visual features more relevant to the new concept list would therefore likely have improved our observed performance. Finally, we observed that the problem of imbalanced data remains a major challenge that our system has difficulty addressing. In this paper we provide more details about our system, discuss our observations, and offer some thoughts on the future directions of this system.