Making sense of occluded scenes using light field pre-processing and deep-learning (Conference Paper)

Liyanage, N., Abeywardena, K., Jayaweera, S. S., et al. (2020). Making sense of occluded scenes using light field pre-processing and deep-learning. 2020-November, 538-543. doi:10.1109/TENCON50793.2020.9293774

cited authors

  • Liyanage, N; Abeywardena, K; Jayaweera, SS; Wijenayake, C; Edussooriya, CUS; Seneviratne, S

abstract

  • A combined approach of low-complexity light field depth filtering and deep learning is proposed for object classification in the presence of partial occlusions. The proposed approach exploits depth information embedded in multi-perspective four-dimensional (4-D) light fields via low-complexity 4-D sparse depth filtering and deep learning. The proposed 4-D depth filter, designed using numerical optimization techniques by formulating the filter design as an ℓ1-ℓ∞ minimization problem, is shown to outperform typical light field refocusing based on 4-D shift-sum averaging filters. Experiments conducted on a light field dataset acquired with a Lytro camera show 45% and 27% higher object classification accuracy compared to the cases where no depth filtering is employed and where standard shift-sum refocusing is employed, respectively.
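
The 4-D shift-sum refocusing baseline referred to in the abstract can be illustrated with a short sketch. This is illustrative only and not the authors' code: the array layout (angular axes first, then spatial, then colour), the function name shift_sum_refocus, the slope parameter alpha, and the file name in the usage comment are all assumptions; the paper's proposed ℓ1-ℓ∞-designed sparse depth filter is not reproduced here.

```python
# Minimal sketch (assumed layout, not the authors' implementation):
# conventional 4-D shift-sum refocusing, the baseline the paper compares
# its l1-l-infinity-designed depth filter against. The light field is
# assumed to be a numpy array of shape (U, V, S, T, C): angular
# coordinates (u, v), spatial coordinates (s, t), colour channels C.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def shift_sum_refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Refocus by shifting each sub-aperture view in proportion to its
    angular offset from the aperture centre and averaging; alpha selects
    the in-focus depth plane."""
    U, V, S, T, C = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0   # angular centre of the aperture
    accum = np.zeros((S, T, C), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Spatial shift proportional to this view's angular offset.
            ds, dt = alpha * (u - uc), alpha * (v - vc)
            for c in range(C):
                accum[..., c] += subpixel_shift(
                    light_field[u, v, ..., c], (ds, dt),
                    order=1, mode='nearest')
    return accum / (U * V)

# Hypothetical usage: focus a 9x9-view Lytro-style light field on a plane
# behind a foreground occluder before passing the result to a classifier.
# lf = np.load('lytro_lightfield.npy')          # assumed file name
# refocused = shift_sum_refocus(lf, alpha=0.8)  # alpha chosen per scene
```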

publication date

  • November 16, 2020

Digital Object Identifier (DOI)

  • 10.1109/TENCON50793.2020.9293774

International Standard Book Number (ISBN) 13

start page

  • 538

end page

  • 543

volume

  • 2020-November