
CHAPTER 2
STEREOPSIS
2.0 Introduction
Human perception is largely dominated by the visual modality. During the course of evolution, the snout shortened and the eyes became more frontally located, allowing overlap of the monocular views and thus stereoscopic depth perception. This was of obvious adaptive value to tree-dwelling organisms leaping from branch to branch, and gave the hunter-gatherer, and predators in general, the means of spotting stationary camouflaged prey.
This chapter first reviews basic research concerning the development of binocular vision, and then reviews basic neurophysiological research in animals which has provided both theoretical and practical foundations for deprivation studies with humans.
Early explanations of human visual perception focused upon the optical and muscular properties of the eyes. This was particularly so for theories of depth perception put forth prior to the nineteenth century. Binocular cues for depth perception were first convincingly demonstrated by Wheatstone (1838). He used his stereoscope to show that two drawings representing the different perspectives of the right and left eyes could be combined to give the actual sensation of objects in depth. This first presentation of stereograms was derived from the situation in real viewing, in which each eye receives a slightly different projection of the visual panorama because of the roughly 65 mm horizontal separation of the eyes. It provided the first perceptual proof that the vivid impression of stereoscopic depth could be created from two flat pictures (Wade, 1988).
2.1 Depth Perception Definition ‑ Stereopsis
Stereopsis is the ability to discriminate depth on the basis of binocular visual information arising from the physical separation of the eyes. The two-dimensional projections of three-dimensional objects occupy slightly different positions on the right and left retinas. The slight difference between the left and right retinal images, and consequently between the two excited areas of receptors, produced when viewing an object is termed "retinal disparity"; it provides the information for stereoscopic depth perception.
Stereopsis contributes to our ability to determine precisely the location of objects in the real world. Stimulation of corresponding retinal locations gives rise to single vision and perception of a single direction. If stimulated locations are very different, double vision results. However, between these two extremes, there is a range where stimulation of non‑corresponding points causes single vision and the appearance of depth.
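As a purely numerical illustration of the geometry described above (the formula and the parameter values below are illustrative, not taken from the original text), the angular disparity produced by a small depth interval can be approximated from the interocular separation and the viewing distances of the fixated and non-fixated points. A minimal Python sketch, assuming the standard small-angle approximation:

```python
import math

def angular_disparity(interocular_m, fixation_m, object_m):
    """Approximate binocular disparity (in radians) between a fixated
    point and a second object, using the small-angle approximation:
    disparity ~= I * (d_object - d_fixation) / (d_object * d_fixation)."""
    return interocular_m * (object_m - fixation_m) / (object_m * fixation_m)

# Hypothetical example: eyes ~65 mm apart, fixation at 1.0 m, and a
# second object 10 cm behind the fixation plane.
eta = angular_disparity(0.065, 1.0, 1.1)
print(round(math.degrees(eta) * 3600), "arc seconds of disparity")
```

On these assumed values the disparity is roughly a third of a degree; whether a disparity of that size is seen as single or double depends on where it falls in the range described above.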
2.2 The importance of studying stereopsis
Stereopsis, therefore, is a term which has two meanings. First, it refers to that extra sense of solidity and depth which is experienced when using two eyes rather than one, an experience which is confirmed by the superiority of binocular depth judgements. Second, stereopsis is triggered when two slightly different views of a scene are viewed in a stereoscope. In this case the two flat stimuli induce an illusion of depth which can be used to gain an understanding of normal stereopsis.
From the above definition of stereopsis it is clear that binocular disparity provides extra depth information, beyond that obtainable from pictorial depth cues, which could help in the recognition of visually presented objects.
The next three sections consider three issues concerning stereopsis: (i) the nature of the brain processes responsible for registering the visual information; (ii) the neural basis of stereopsis; and (iii) theoretical accounts of stereopsis.
2.3 The Anatomy of The Human Visual System
The way in which visual information is relayed from the eyes to the brain is not as straightforward as one might initially think.
In the first half of the nineteenth century Johannes Müller suggested the possibility that the neural connections from the two eyes might meet in common cells in the brain. Knowledge of the anatomical and physiological basis for stereopsis goes back to Descartes, but Newton was the first to propose that at the optic chiasm there is a partial decussation, which results in the optic tract on one side carrying information from the same half-field of both eyes. The tracts continue to the lateral geniculate nuclei, but at this stage little, if any, interaction takes place between the two visual pathways (Bishop, Burke and Davis, 1959). At the level of the striate cortex, however, most of the cells are found to be binocularly driven (Hubel and Wiesel, 1962).
Primary visual representations derived from the retinal patterns do not use prior knowledge of the visual world. Understanding these representations is a first step toward understanding the representations used and stored by the high-level visual networks. These structures are illustrated in Figure 2.1 below.
FIGURE 2.1. The visual pathways to the brain (adapted from J. P. Frisby, 1979, Seeing, Oxford University Press).
The lenses of the eyes focus images of the world, ocular images, onto the retinas. The retinas convert and encode these images into low-level visual representations called visual patterns. The primary visual patterns comprise several different registered sub-patterns which describe various temporal and spatial characteristics of the ocular images. The primary visual patterns are transmitted by the optic nerves to the lateral geniculate nuclei, where they are further processed and analysed. The result of this analysis is transmitted over the optic radiations to the primary visual cortex, which transforms the primary visual patterns into internal representations suitable for storage and analysis.
The next section reviews the physiological data obtained from single-cell recordings of neurones in non-human primates, and relates these to a model of visual object processing. This allows localisation of the different processing modules within functional structures in the brains of higher primates, including man.
2.4 Neurophysiological studies of stereopsis
Neurophysiological investigations have established a hierarchy of spatial organization along the visual pathway and have shown that one of the functions of early areas of visual cortex is the extraction of features in an image, such as position, and length of contour (Hubel & Wiesel, 1962, 1977).
2.4.1. Neural basis of visual perception
The visual system has three separate, partly independent pathways in the cerebral cortex, one of which is responsible for movement and certain aspects of distance (Livingstone, 1988; Livingstone & Hubel, 1988; Zeki & Shipp, 1988). For example, when we identify an object, one set of neurones identifies its shape, another set "concentrates on" its colours, and another on its orientation. This is true only to some extent, as the three pathways do communicate with one another; some sort of communication among them would seem to be necessary if an object is to be seen as a single, unified entity (Kaas, 1989).
The three pathways of the cerebral cortex begin with two major types of ganglion cells in the retina. The smaller cells, sometimes known as X cells, are located mostly in or near the fovea; the larger cells are sometimes known as Y cells. A further class, the W cells, are only weakly responsive to visual stimuli (Raczkowski, Hamos, & Sherman, 1988); they are poorly understood, and their role is not known precisely.
2.4.2. Coding information about depth of objects
Pettigrew and Dreher (1987) argue that there exist three separate systems coding information about objects lying at different distances. They identify the X retinal ganglion cells as taking input from the fixation point; this is combined with information from the Y and W cells of the two eyes, which project to different layers of the lateral geniculate nucleus (LGN) of the thalamus.
The lateral geniculate has six distinct laminae (layers). On each side of the brain, three of those laminae receive their input from the eye on the same side of the head; the other three receive their input from the opposite side (no cell in the lateral geniculate has binocular vision).
Of the six laminae, four are composed of parvocellular (small-cell) neurones and two are composed of magnocellular (large-cell) neurones. The X cells make synapses onto parvocellular neurones and the Y cells synapse onto magnocellular neurones. When axons from the lateral geniculate reach the cerebral cortex, the two pathways (parvocellular and magnocellular) become three. The pathways make contact with cells in different parts of lamina 4C in the primary visual cortex (area V1, or striate cortex). The parvocellular pathway splits: it sends some of its information to clusters of neurones called blobs in laminae 2 and 3, and the rest to the areas between blobs. The magnocellular neurones also send information to some cells in the blobs, but not the same cells as the parvocellular neurones.

The blob cells that receive their input from the parvocellular neurones are highly sensitive to colour (Ts'o & Gilbert, 1988); that is, each one is excited by light of one colour (such as red) in its receptive field and inhibited by another colour (such as green). The magnocellular neurones are colour blind; they respond equally to light of any wavelength. These three pathways remain distinct in the secondary visual cortex (V2). The parvocellular neurones, which have smaller receptive fields, are better adapted to detecting visual details, whereas the magnocellular neurones detect the broader outlines of shapes. The magnocellular neurones respond rapidly but only briefly to a stimulus, while the parvocellular neurones give a sustained response to an unchanging stimulus. Consequently, the magnocellular neurones are well suited to the detection of movement and to stereoscopic depth perception, while the parvocellular neurones are better suited to analysing a stationary object.
2.4.3. The neural basis of stereopsis
How does the brain ‘know’ which are the correct left/right parts of features to fuse together? Obviously the entire process is not yet understood, but something is known about the neurobiological hardware involved in the initial stages of stereoscopic vision.
It is known that most visual cortical neurones receive input from both eyes, left and right. Each of these binocular neurones gives its largest response when the two eyes view the same object.
Binocular neurones related to stereopsis come from the magnocellular projections of the lateral geniculate body, and belong to a system which includes thick stripes. They are distributed in monkey visual association cortex in regions designated by V2, V3, V4 and MT (Livingstone and Hubel, 1987; De Yoe and Van Essen, 1988). The presence of binocular cells in these regions, which also function in networks for extraction of other basic properties of vision, e.g., form, texture and motion, provides a basis for interaction of these properties with stereopsis.
Cells are said to be binocular if they respond to patterns projected in both eyes and monocular if they respond to patterns projected in one eye only.
Barlow, Blakemore and Pettigrew (1967) established that there is a preferred disparity value for each binocular cell, and Hubel and Wiesel (1970) discovered that in Macaque monkeys half the binocular units of the visual cortex will respond only to appropriate stimulation of both eyes together, and not to stimulation of either eye separately. These cells give a maximum response at a well-defined disparity perpendicular to the orientation of their receptive fields.
Individual neurones in the visual cortex are responsive to different amounts of retinal disparity (Ohzawa et al., 1990). It has been suggested that this preference for particular degrees of retinal disparity is one basis upon which we perceive depth: the idea is that each cell's preferred disparity corresponds to a particular distance or depth.
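The idea of a population of cells, each preferring a particular disparity, can be made concrete with a small sketch. The Gaussian tuning profile, its width and the range of preferred disparities below are assumptions chosen purely for illustration; this is not the model reported by Ohzawa et al. (1990).

```python
import numpy as np

def tuning_response(stimulus_disparity, preferred_disparity, sigma=0.1):
    """Idealised response of a disparity-tuned binocular neurone: firing
    peaks at the cell's preferred disparity and falls off on either side.
    (The Gaussian profile and sigma are illustrative assumptions.)"""
    return np.exp(-((stimulus_disparity - preferred_disparity) ** 2)
                  / (2 * sigma ** 2))

# A hypothetical population of cells preferring disparities between
# -0.5 and +0.5 degrees; the most active cell signals the depth plane.
preferences = np.linspace(-0.5, 0.5, 11)
responses = tuning_response(0.2, preferences)
print(preferences[np.argmax(responses)])  # ~0.2, the nearest preference
```

Reading off the preference of the most responsive cell is one simple way of linking a preference for a degree of disparity to a perceived distance, as suggested above.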
In general, binocular neurones respond best when the two eyes view matched features (Hubel and Wiesel, 1962, 1970). This property, then, satisfies one of the requirements for the analysis of disparity information, namely the matching of monocular features.
Binocular neurones have another property: some binocular cells respond only when these preferred features appear in the same depth plane as the point of fixation. This is because the receptive fields of these binocular neurones are located on corresponding areas of the two eyes. Activity in such cells would serve to signal the presence of stimuli located on the horopter. Other binocular cells respond best to stimuli that are imaged on non-corresponding, or disparate, areas of the two eyes. Cells of this type would be activated by stimuli located at depth planes other than the plane of fixation (Poggio and Fischer, 1978). Moreover, the amount of depth (disparity) giving the best response varies from cell to cell.
In general, it appears that these binocular neurones perform two of the operations necessary for stereopsis, feature matching and disparity computation.
Evidence favouring the role of binocular neurones in stereoscopic depth perception comes from studies of cats. Kittens were allowed to see with only one eye at a time: the eyes were stimulated alternately by placing an opaque contact lens over one eye on one day and over the other eye on the next. This rearing procedure effectively turns binocular neurones into monocular ones (Blakemore, 1976). When tested as adults, these animals were unable to perform binocular depth discriminations that were simple for normally reared cats (Blake and Hirsch, 1975; Packwood and Gordon, 1975). Because they lacked a normal array of binocular neurones, these cats were "blind" for stereopsis.
Binocular stereopsis does appear to be a strong cue to depth, yet up to 10% of the population may be unable to use it to judge distance, and they are not severely impaired. The loss of stereopsis ought to have the same effect on a patient as losing one eye; that is, depth perception should be impaired, but information about the 3-D form of objects should still be conveyed by monocular depth cues. By analogy with the stereoblind cats, we can assume that this condition in humans stems from a lack of binocular neurones. There is evidence favouring such an assumption. For instance, when stereoblind people were tested on the tilt aftereffect they showed very little interocular transfer; adapting one eye had very little effect on judgements of line orientation when the non-adapted eye was tested (Mitchell and Ware, 1974). In contrast, people with good stereopsis show a substantial degree of interocular transfer which, presumably, results from adaptation of binocular neurones. The deficient interocular transfer in stereoblind individuals, then, probably reflects a paucity of binocular neurones in their brains.
It has been estimated that the incidence of stereoblindness may be as high as 5 to 10 % in the general population. As a rule, stereoblindness is associated with the presence of certain visual disorders during early childhood.
2.5 Impairment of stereopsis (astereopsis)
Impairment of stereopsis produces disorientation within a visual environment because of the failure to appreciate relative distances or lengths between objects. The level at which stereopsis is impaired can be trivial from an information-processing viewpoint: for example, mechanical limitation of the fusional mechanisms in one eye, due to paralysis of the third or fourth cranial nerve, causes an inability to achieve retinal correspondence in the face of intact sensory mechanisms. Conversely, even when the eyes have always been perfectly aligned, the deficit may occur in adults from disruption at a high level of processing, due to acquired cortical lesions which disrupt binocular neurones and their operations.
A review of the neurological literature reveals many instances of remarkable and specific disorders of various aspects of vision.
Impairment of stereopsis, known as astereopsis, can occur early in development (Kay et al., 1981) or may be acquired in adulthood through neurological disease. The neuropsychological data provided in the literature show some examples of the effect of loss of stereoscopic vision. There are cases where patients appear to have intact "early" visual processes, but have impaired perception of depth and of the distances between objects.
Holmes and Horrax (1919) report that a patient described a box as a piece of flat cardboard no matter at what angle he saw it, while stairs appeared as a number of straight lines on the floor; in addition, drawings and photographs which, when viewed in a stereoscope, appeared as three-dimensional figures to a normal observer also seemed flat to him in the stereoscope. Thus the patient who loses stereoscopic vision seems to lose the ability to use all the different depth cues. The lesion, which probably affected the superior visual and visual association cortices, was accompanied by a complex visual syndrome which impaired visuo-spatial ability.
The effect of a loss of stereoscopic vision is disorientation due to the failure to appreciate relative distances or lengths between objects. Riddoch (1917) describes the experience of loss of 3-D vision in his patient thus:
“This loss of appreciation of the relative position of things that he sees quite well has been the means of giving him many a fright. Two vehicles approaching each other in the street always seem about to collide. A person who is crossing the street is sure, he thinks, to be run over by a taxi which is really yards away. He used to stand and stare aghast till he found he was registering wrong impressions and that the accidents that he expected to occur every minute did not come off”.
Balint (1907, 1909) noted that his patient was afraid of walking outside on the streets because he was unable to judge accurately the distance of trams coming towards him. Such deficits leave patients unable to reach out for or to grasp objects, and may also hinder their ability to plan and co-ordinate effective actions during route finding.
Testing stereopsis can be used with advantage in the evaluation of patients with visual disorders caused by focal lesions of the central visual system. Carmon and Bechtoldt (1969) and Benton and Hecaen (1970) postulated that the right hemisphere (RH) is dominant for stereopsis. On the other hand, Julesz (1976) found markedly different results in normal subjects, in whom the left and right visual fields showed equivalent stereoptic ability. Moreover, Gazzaniga, Bogen and Sperry (1965) found that stereopsis was preserved after complete callosal lesions, except when the chiasm was also split. This finding indicates that stereoptic ability exists in each hemisphere, i.e., that both hemispheres are capable of processing stereopsis. Julesz (1976) did, however, find a significant difference between the upper and lower visual fields.
Similar results were obtained in an experiment with normal subjects. Rabowska (1983) presented random-dot stereograms tachistoscopically to normal subjects and found a left hemifield advantage over the right hemifield. On the other hand, Julesz et al. (1976) found that the left and right visual fields of normal subjects showed equivalent stereoptic ability. These findings indicate that the right and left hemispheres are equally capable of stereoptic processing, assuming equal visuo-spatial processing between the left and right hemiretinas. Whether there is a specific association with right parietal damage is not clear, but there is some evidence to suggest that this may be the case for stereoacuity (Danta et al., 1978; Fowler et al., 1989). According to Marr (1982), stereoacuity arises late in stereoptic processing and is related to the ability to construct accurately the information in convolved images.
Earlier mechanisms which contribute to stereopsis, and which rely on the detection of local brightness contrast and of local primitive elements, may remain intact. This is consistent with the clinical observation that stereoacuity may be impaired in the face of relatively normal visual acuity and spatial contrast sensitivity, abilities which require only the monocular identification and localisation of primitive features. Depth perception as a whole, however, may not be impaired so drastically.
2.6 Monocular visual depth information
Although the mechanism for the production of a binocular stereoscopic image appears to be highly developed, it is in no way necessary for the perception of depth as such.
There are many monocular cues available, such as relative size of objects, interposition, texture and colour gradients, linear perspective, motion parallax and others, and a certain amount of information regarding depth is gained from the degree of convergence of the eyes when fixating and from the accommodation of the lens. Thus, there are various sources of information specifying depth relations among objects and the distance from the perceiver to those objects. It is quite clear that our visual world is rich with depth information. How, then, are these multiple cues combined? How are other aspects of vision influenced by depth information? These questions will be treated further in section 2.11, the final section of the present chapter.
2.7 Theoretical accounts of single vision (stereopsis)
Binocular vision has been a subject of discussion for three millennia. Research on stereopsis has traditionally been devoted to quantifying the relationship between disparity and perceived depth. The problem of how disparity is derived (i.e., how the corresponding left and right retinal projections of an object are found) has largely been ignored. Perhaps the inherent limitations of the stimuli used caused researchers to shy away from studying binocular depth perception as a pattern-matching process.
2.7.1. Fusion Theory
The basis of binocular stereoscopic vision is generally agreed to be the fusion of the two slightly disparate images produced at each eye. Fusion is the most popular solution to the singleness problem. As a simple statement, it merely emphasizes that somehow the images at the two eyes are perceived as one, that they are fused.
However, fusion theory provides an incomplete account of stereopsis because it does not incorporate the important observation that we normally fail to register conflicting portions of the monocular projections of stereoscopic targets under direct scrutiny. This aspect of single vision falls under the rubric of binocular rivalry rather than fusion, and involves different processes (Fox and Check, 1966a).
Stereopsis is more general than fusion, since with increased disparity (above the fusion threshold but within the limit of patent stereopsis) corresponding points can be seen as double but still perceived in depth.
2.7.2. Suppression (Summation) Theory
Alternatives to fusion may be summation or suppression. Suppression theories, which revolve around the idea that at any given moment one eye is dominant and suppresses the image from the other, have gained some support, but there are difficulties concerning visual direction.
Suppression theories predict that the monocular visual direction is maintained, whereas a fusion theory predicts a compromise between the two different monocular visual directions.
Suppression is disproved by the occurrence of binocular rivalry, in which one sees a broken horizontal line with a vertical line passing through the gap, or vice versa, and not a normal cross, when a vertical line is presented to one eye and a horizontal line to the other.
As the two eyes ordinarily share a common view, phenomenal observation cannot tell us which process, fusion or suppression, operates to promote single vision. Efforts to distinguish fusion from suppression in binocular vision have relied chiefly on the phenomenon of displacement (Werner, 1937; Dodwell, 1970). It is reasoned that fusion has occurred if the visual direction of disparate half-images viewed stereoscopically assumes a value intermediate between those of the two half-images. Suppression, on the other hand, should yield a visual direction equivalent to that of one of the two half-images. There have been several demonstrations apparently supporting the fusion outcome, in that displacement was reported, but for several reasons critics have argued that those demonstrations are inconclusive. According to one argument, small vergence errors during binocular fixation can introduce shifts in visual direction that mimic the outcome expected with fusion (Ogle, 1950; Kaufman & Pitblado, 1969). This possibility can be minimized by using briefly flashed stimuli (Ono et al., 1977). A second, more general criticism of displacement phenomena concerns the potential ambiguities in the criteria for reporting shifts in visual direction and in the sensitivity of the psychophysical procedures typically used (Kaufman and Arditi, 1976). In general, these criticisms underscore the difficulty of distinguishing fusion from suppression on the basis of phenomenal report alone.

To avoid these difficulties, Blake (1977) used indirect techniques, measuring monocular detection thresholds under stimulus conditions yielding stereopsis, apparent fusion, monocular dominance, and monocular suppression. Blake and Camisa (1978) demonstrated that the contribution to single vision and stereopsis by one eye is not always achieved at the complete expense of the partner eye's input. These results are inconsistent with the hypothesis that suppression alone mediates binocular single vision. Others (Fox & Check, 1969; Fox & McIntyre, 1967; Makous & Sanders, 1978) have used techniques similar to those of Blake and Camisa (1978), but the results have not been consistent.
An object in space stimulates what are termed "corresponding retinal points" when its left-eye and right-eye images are located at the same distance and in the same direction from their respective foveas. Stimulation which does not conform to the criteria for correspondence constitutes retinal disparity. If the disparity is large, the viewer sees double images of the non-fixated point; if it is relatively small, his perception is of a single point, but at a different position in depth from his plane of fixation.
2.8 The importance of random-dot stereogram studies in the perception of depth
The use of random-dot stereograms (RDS) has enabled researchers to study stereopsis and to determine whether certain perceptual cues are processed at the retinal or the cortical level, and has confirmed the need to postulate a central processing mechanism for this function (Julesz, 1964).
Studies such as those of Wallach & Brindley (1953, 1970) have demonstrated that under suitable conditions 3-D figures can be correctly perceived from 2-D monocular images, while, on the other hand, Julesz (1960) has shown, using computer-generated random-dot stereograms, that under different conditions the sensation of a 3-D image can be produced solely by the disparity between matching elements in the images presented to each eye.
Kaufman (1964) designed several stereograms in which clear depth was perceived despite the absence of binocular disparity between the form information within the two half-images. For example, depth perception was produced by brightness disparity between similar, non-disparate forms.
Random-dot stereograms are important because they produce the impression of two surfaces at different depths, while at the same time neither image alone contains any cue as to the form those surfaces will take. This shows that, in some sense, we are able to see depth independently of the processes involved in seeing form, since even when monocular form information is reduced to the minimum (to random dots), the perception of depth can still occur.
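The construction of such a stereogram can be sketched in a few lines. The procedure below follows the usual description of Julesz's method (identical random dots in both half-images, a central square displaced horizontally in one of them, and the uncovered strip refilled with fresh dots); the image size, dot density and shift are arbitrary illustrative values.

```python
import numpy as np

def make_random_dot_stereogram(size=100, square=40, shift=4, density=0.5, seed=0):
    """Julesz-style random-dot stereogram: the two half-images contain
    identical random dots except that a central square is shifted
    horizontally in the right-eye image, so the square is invisible in
    either image alone but stands out in depth when viewed binocularly."""
    rng = np.random.default_rng(seed)
    left = (rng.random((size, size)) < density).astype(np.uint8)
    right = left.copy()
    top = (size - square) // 2
    # Displace the central square a few dots to the left in the right-eye image.
    right[top:top + square, top - shift:top - shift + square] = \
        left[top:top + square, top:top + square]
    # Refill the strip uncovered by the shift with fresh, uncorrelated dots.
    right[top:top + square, top + square - shift:top + square] = \
        (rng.random((square, shift)) < density).astype(np.uint8)
    return left, right

left_image, right_image = make_random_dot_stereogram()
```

Neither half-image contains any monocular contour of the square; only the disparity between matching dots carries the depth.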
In fact, accounting for the depth seen in Julesz stereograms has become an acid test for contemporary theories of stereopsis (Marr and Poggio, 1979; Frisby and Mayhew, 1981; Sperling, 1970).
In the next sections, emphasis will be given to the computational approach to vision. The aim of the computational approach to vision is to specify the computations necessary to extract useful scene information from images.
2.9 Computational models of binocular vision
Marr (1982) and his collaborators made a number of contributions to our understanding of the ways in which binocular disparity is used in depth perception through the implementation of a computational model. According to Marr and Poggio (1976) and Marr (1982), the task of using the information available from binocular disparity in order to construct a “depth map” (e.g., Marr 1982, page 276), involves rather complex processes.
The mapping of the retina onto the cortex is topographic; this means that neighbouring regions on the retina are represented in neighbouring regions of the cortex. But the retinal images from which depth information is extracted are two-dimensional; they are depthless.
Much of the interest in "depth maps" arises from Marr and Nishihara's description of the 2.5-D sketch as a possible intermediate processing stage, incorporating a surface-based description in viewer-centred co-ordinates (Marr & Nishihara, 1978). Such a representation would be useful for guiding actions such as reaching out towards or grasping objects. The 2.5-D sketch was held to act as a buffer store, holding the products of early visual processing from one instant to the next.
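To fix ideas, the fragment below sketches the minimal content such a viewer-centred buffer might hold: a depth value and a surface-orientation estimate for every image location. The class and its fields are illustrative inventions for this chapter, not Marr and Nishihara's actual representation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewerCentredSketch:
    """Toy stand-in for a 2.5-D-sketch-like buffer: for every image
    location it stores the depth of the visible surface and an estimate
    of local surface orientation, both in viewer-centred co-ordinates."""
    depth: np.ndarray        # shape (H, W): distance from the viewer
    orientation: np.ndarray  # shape (H, W, 3): unit surface normals

    def within_reach(self, max_distance):
        """Mask of locations nearer than max_distance, the kind of query
        that could help guide reaching or grasping movements."""
        return self.depth < max_distance
```

A buffer of this kind says nothing about object identity; it simply holds the products of early visual processing, as described above.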
2.9.1. The matching process in the two eyes (correspondence problem)
The “correspondence problem” refers to the requirement for matching different sources of information about the same object. This can be considered as a feature of many theoretical accounts that rely on analysis of a series of static images. The human brain is able to solve the correspondence problem by using stereopsis mechanisms.
Both stereopsis and fusion of the two disparate images are computationally complex. The correct features of one eye's image must be matched with those of the other. Only when this correspondence problem is solved can the disparity involved in the match be determined, and hence the part be located in three dimensions.
Numerous researchers have studied the process of determining distance from stereo images (Julesz, 1971; Marr, Palm & Poggio, 1978; Marr & Poggio, 1979; Mayhew & Frisby, 1981).
Frisby and Mayhew (1981) discussed the common proposal that the goal of stereopsis is to determine the disparity at each point in the field of view while solving the problem of "false targets", or non-corresponding points. On this view stereopsis is achieved in three stages: first, points are selected at locations in one monocular image; second, the same locations are identified in the other image; finally, the disparities between the corresponding loci are measured. In this way a depth map of the visual scene can be built up; an object's boundaries could then be computed and matched. The problem of 'false targets', whereby non-corresponding points must not be matched, therefore has to be overcome.
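The three stages can be illustrated with a deliberately simple matching scheme (this is an illustration of the idea, not the algorithm of Mayhew and Frisby or of Marr and Poggio): take a small patch around a chosen location in the left image, compare it with horizontally shifted patches in the right image, and report the best-matching shift as the disparity at that location. Window size and disparity range are arbitrary.

```python
import numpy as np

def disparity_at(left, right, row, col, window=5, max_disparity=10):
    """Three illustrative stages of stereo matching at one location:
    (1) select a small patch around (row, col) in the left image;
    (2) find the horizontally shifted patch in the right image that
        matches it best (smallest sum of squared differences);
    (3) return that shift, i.e. the disparity at this location."""
    half = window // 2
    patch = left[row - half:row + half + 1,
                 col - half:col + half + 1].astype(float)
    best_d, best_score = 0, np.inf
    for d in range(max_disparity + 1):
        start = col - d - half              # candidate shifted leftwards
        if start < 0:
            break
        candidate = right[row - half:row + half + 1,
                          start:start + window].astype(float)
        score = np.sum((patch - candidate) ** 2)
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Repeating this over many locations builds up a depth map, although
# ambiguous "false target" matches still require further constraints.
```

With random-dot images like those sketched in section 2.8, several shifts can produce spuriously good matches; this is exactly the false-target problem that the constraints discussed next are meant to resolve.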
Marr (1982) states that if a correspondence between stereoscopically derived primitives such as bars, blobs, terminators and so on satisfies, in sufficient detail, the matching constraints imposed by objects in the physical world, then that match is geometrically correct and unique.
Marr and Poggio (1976, 1979) proposed three rules, or "constraints", that might be useful in solving the correspondence problem.
(i) Compatibility constraint: two descriptive elements can match if, and only if, they could have arisen from the same surface markings, shadows and so forth. This means that the descriptive elements in the primal sketches formed from the input to each eye must be compatible with being adjacent views of the same physical features (e.g. elements to be matched must be physically similar, such as having the same colour, or edges having the same orientation; in a random-dot stereogram, black dots can match only black dots).
(ii) Continuity constraint: disparity varies smoothly almost everywhere. This condition is a consequence of the cohesiveness of matter; only a small fraction of the area of an image is composed of boundaries that are discontinuous in depth. Matching under this constraint therefore rests on physical considerations about the surfaces giving rise to local pattern elements.
(iii) Uniqueness constraint: each item from each image may be assigned at most one disparity value. This condition relies on the assumption that an item corresponds to something that has a unique physical position. Alternative matches for the same feature therefore inhibit one another, and the correspondence must be one-to-one almost everywhere, each item having a single disparity value.
Marr suggests that these three rules, obtained from consideration of the physical properties of objects, are sufficient to solve the stereo matching problem.
These three rules are central to the computational task of deriving a single three-dimensional image from the two horizontally disparate images of the left and right eyes.
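One way of turning the uniqueness and continuity constraints into a computation is sketched below, loosely in the spirit of the cooperative algorithm of Marr and Poggio (1976) but reduced to one dimension: candidate matches at the same disparity in neighbouring positions support one another (continuity), while alternative disparities at the same position inhibit one another (uniqueness). The weights, neighbourhood and threshold are illustrative assumptions, not the published parameters.

```python
import numpy as np

def cooperative_iteration(match, excite=1.0, inhibit=2.0, threshold=3.0):
    """One iteration over a binary array match[x, d], where a 1 means
    'the features at position x could correspond at disparity d'.
    Continuity: neighbouring positions at the same disparity add support.
    Uniqueness: other candidate disparities at the same position inhibit.
    (A 1-D illustration in the spirit of Marr & Poggio, 1976.)"""
    support = np.zeros(match.shape, dtype=float)
    for dx in (-2, -1, 1, 2):                  # small spatial neighbourhood
        support += np.roll(match, dx, axis=0)  # same-disparity neighbours
    rivals = match.sum(axis=1, keepdims=True) - match
    activity = match + excite * support - inhibit * rivals
    return (activity >= threshold).astype(match.dtype)
```

Iterating such an update tends to keep matches that are locally supported at a single disparity and to discard isolated or rival matches, which is how the constraints jointly suppress false targets.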