Video content representation based on texture and lighting (Conference)

Radev, IS, Paschos, G, Pissinou, N, Makki, K (2000). Video content representation based on texture and lighting. Lecture Notes in Computer Science, 1929, 457-466. 10.1007/3-540-40053-2_40

cited authors

  • Radev, IS; Paschos, G; Pissinou, N; Makki, K

authors

  • Radev, IS; Paschos, G; Pissinou, N; Makki, K

abstract

  • When dealing with as-yet-unprocessed video, structuring it and extracting features according to models that reflect the idiosyncrasies of a video data category (film, news, etc.) are essential for reliable content annotation, and thus for the use of the video. In this paper, we present methods for the automatic extraction of texture and lighting features from representative frames of video shots. These features are the most important elements characterizing the development of plastic (physical) space in film video, and they are also important in other video categories. Texture and lighting are two basic properties, or features, of video frames represented in the general film model introduced in [12]. This model is informed by the internal components and interrelationships known and used in the film application domain. The method for extracting texture granularity is based on the approach of measuring granularity as the spatial rate of change of image intensity [3], which we extend to color textures. The method for extracting the lighting feature is based on the closed-form solution schemes of [4], which we improve by making them more general and more effective.
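
The texture method summarized in the abstract admits a compact illustration. The sketch below (Python, with a hypothetical helper name color_texture_granularity) approximates granularity as the mean spatial rate of change of intensity, computed per color channel with finite differences and averaged across channels; it is an assumed reading of the abstract's description, not the exact formulation of [3] or of the color extension proposed in the paper.

    import numpy as np

    def color_texture_granularity(frame):
        """Rough granularity score for an RGB frame (H x W x 3 array).

        Hypothetical sketch: granularity is taken as the mean spatial rate
        of change of intensity, computed per color channel and then
        averaged, mirroring the color-texture extension described in the
        abstract.
        """
        frame = np.asarray(frame, dtype=np.float64)
        per_channel = []
        for c in range(frame.shape[2]):
            gy, gx = np.gradient(frame[:, :, c])           # finite-difference gradients
            per_channel.append(np.mean(np.hypot(gx, gy)))  # mean gradient magnitude
        return float(np.mean(per_channel))                 # average over color channels

    # A fine-grained (noisy) frame scores higher than a uniform one.
    rng = np.random.default_rng(0)
    noisy = rng.random((120, 160, 3))
    flat = np.full((120, 160, 3), 0.5)
    assert color_texture_granularity(noisy) > color_texture_granularity(flat)

The result is a single frame-level number that could serve as a texture-granularity descriptor attached to a representative frame of a shot.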

publication date

  • January 1, 2000

published in

  • Lecture Notes in Computer Science, vol. 1929

Digital Object Identifier (DOI)

  • 10.1007/3-540-40053-2_40

International Standard Book Number (ISBN) 10

start page

  • 457

end page

  • 466

volume

  • 1929