PROJECT SUMMARY

Cognitive models of depression suggest that the development and maintenance of this disorder stem from individuals’ characteristic ways of attending to, interpreting, and remembering environmental stimuli, such as selective attention toward negative aspects of experience. In face perception, these biases are expressed as a tendency to interpret ambiguous faces as expressing negative emotion and a general impairment in the processing of emotional faces. The ability to process other important face dimensions (e.g., identity) independently of emotion might also be impaired in depression, but research in this area has been very limited. Obtaining a better understanding of all these impairments is critical, as the ability to correctly extract information from faces is important for adequate social interaction. Social impairments observed in depression could be produced or intensified by face perception impairments.

A treatment for these biases that has gained attention in recent years is attentional bias modification (ABM). In ABM, people are trained through feedback to allocate less attention to negative emotional information (e.g., a sad expression) and more attention to positive or neutral emotional information (e.g., a happy expression). ABM can help reduce the symptoms of depression, but the effect appears to be small and non-robust, and little is understood about its mechanisms of action or about how to increase generalization beyond the trained task and biases. Designing better treatments for attentional biases in depression will require a better understanding of the biases themselves.

This project proposes to use state-of-the-art computational and psychophysical approaches to more precisely characterize three relatively unexplored aspects of attentional biases that are likely to influence the outcome of ABM and similar treatments. Specifically, we will use recent advances in general recognition theory (to which we have contributed) and in classification image techniques to study (1) whether people with depression show an impairment in filtering information about other aspects of faces (e.g., identity) when they process face emotion, (2) whether the biases observed in depression are due to perceptual versus decisional processes, and (3) exactly what face information is processed differently during emotion identification in depression. Finally, basic research suggests that increasing the discriminability of, independence of, and attention to relevant features of emotional expression should increase generalization of ABM-induced learning to new faces outside the training environment. An ideal protocol would also target both perceptual and decisional processing. Our previous research shows that categorization training is the ideal candidate for such an intervention, as it produces all of these effects. An exploratory goal of this project is to test whether categorization training can improve the discriminability and independence of face emotion processing in people with depression.