When dealing with as-yet unprocessed video, structuring the data and extracting features according to models that reflect the idiosyncrasies of a video category (film, news, etc.) are essential for reliable content annotation, and thus for the use of the video. In this paper, we present methods for the automatic extraction of texture and lighting features from representative frames of video shots. These features are among the most important elements characterizing the development of plastic (physical) space in film video, and they matter in other video categories as well. Texture and lighting are two basic properties, or features, of video frames represented in the general film model presented in . This model is informed by the internal components and interrelationships known and used in the film application domain. Our method for extracting texture granularity builds on an approach that measures granularity as the spatial rate of change of image intensity , which we extend to color textures. Our method for extracting the lighting feature builds on the approach of closed solution schemes , which we improve by making it more general and more effective.
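The paper does not reproduce its formulas here, but the core idea of measuring granularity as the spatial rate of change of intensity, extended to color, can be sketched roughly as follows. This is a minimal illustration, not the authors' exact method: it assumes a per-channel gradient magnitude averaged over an RGB frame, with the function name `color_granularity` and the normalization chosen for the example.

```python
import numpy as np

def color_granularity(frame):
    """Rough granularity score for an RGB frame: the mean spatial
    rate of change of intensity, averaged over the color channels.

    frame: H x W x 3 float array with values in [0, 1].
    A higher score indicates a finer, more rapidly varying texture;
    a perfectly uniform frame scores 0.
    """
    grads = []
    for c in range(frame.shape[2]):
        # Per-channel intensity gradients along rows and columns.
        gy, gx = np.gradient(frame[:, :, c])
        grads.append(np.sqrt(gx ** 2 + gy ** 2))
    # Average gradient magnitude across pixels and channels.
    return float(np.mean(grads))

# A flat frame has no intensity change; a checkerboard changes rapidly.
flat = np.full((64, 64, 3), 0.5)
checker = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
checker = np.repeat(checker[:, :, None], 3, axis=2)
print(color_granularity(flat))     # 0.0 for a uniform frame
print(color_granularity(checker))  # strictly greater than the flat score
```

In practice such a score would be computed on the representative frame of each shot; how the per-channel responses are combined (e.g. averaging vs. a color-distance gradient) is one of the choices a color extension must make.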