Texture-Based Processing in Early Vision and a Proposed Role for Coarse-Scale Segmentation
Humans and other animals are remarkably adept at extracting spatial information from visual input. To better understand this ability, it would be useful to know how the visual system makes an initial estimate of where things are in a scene and how they are oriented. Texture is one source of information that the visual system can use for this purpose. It can be used both for segmenting the visual input and for estimating spatial orientations within segmented regions; moreover, both processes can be built on the same underlying mechanisms, namely spatiotemporally-tuned cells in the visual cortex. Little attention, however, has been given to the problem of integrating the two processes into a single system. In this paper, we discuss texture-based visual processing and review recent work in computer vision that offers insights into how a visual system could solve this problem. We then argue that these approaches would benefit from an initial coarse-scale segmentation step, and we offer supporting evidence from psychophysics that the human visual system does in fact perform such a rough segmentation early in vision.