11.6.4. Colour effects in the two tasks (recognition vs. naming) of Experiment 7.
One of the major aims of Experiment 7 was to investigate the effect of colour versus black-and-white photographs of rotated common objects. In both the recognition and naming tasks of the present experiment, the effect of the colour versus black-and-white condition was non-significant. To determine whether colour is independent of other visual attributes of the stimulus, the present experiment systematically covaried the stimulus parameters. Presenting the stimuli in identical colour, in combination with other visual attributes, successfully isolated the response to the overall form of the object regardless of the colour versus black-and-white condition.
Under this condition, the subjects' strategy for performing the experiment remained focused on the shape of the object itself, regardless of whether the object was presented in colour or in black and white.
This result indicates that colour is processed independently of the other modules. The results have important theoretical and practical implications for the debate over whether colour, stereoscopic depth, and orientation are processed independently.
The time to perform a task when there are two colours present need not be longer than when there is only one (Willows & McKinon, 1973; Carter, 1982). Nevertheless, the segmentation of a display by colour is generally affected by the number of colours it contains. If the same colour is contained in different shapes, then segmentation by colour will conflict with that determined by shape.
These results are consistent with similar findings on the effect of "an inappropriate colour" on identifying objects. For example, Ostergaard & Davidoff (1985) found that the latency of identification was identical for appropriately and inappropriately coloured objects. Moreover, Cavanagh & Leclerc (1989) also found that shadows, which play an important role in forming an object representation, can be of any colour without disturbing the perception of a three-dimensional shape.
It may appear from Figure 11.3 that the binocular colour condition with objects rotated in depth produced longer RTs than the binocular black-and-white, monocular colour, or monocular black-and-white conditions. This result suggests that colour versus black-and-white, stereo, and orientation contribute independently to performance. Independent contributions from surface colour, orientation, and binocular/monocular viewing would fit with increasing physiological evidence for the separate processing of these characteristics by the visual system (e.g. Livingstone, 1988). Consequently, any benefit when colour, stereo, and orientation information are combined is small relative to when each characteristic occurs alone.
Evidence from human visual psychophysics suggests there is at least some degree of independence among modules. For example, one can easily understand black-and-white films that lack colour and stereo, and 3-D structure can be perceived in extremely impoverished stimuli, such as the projections of rotating wire-frame objects (Wallach & O'Connell, 1953) or random dot stereograms (Julesz, 1971). Physiological evidence for module independence is provided by the existence of separate subcortical and cortical pathways carrying different kinds of visual information (Stone & Dreher, 1982; Zeki & Shipp, 1989).
Two important issues are raised by the finding of independent contributions from surface colour, orientation, and binocular/monocular viewing condition in the present study. The first concerns object structure information. Such information allows us to deal with invariance and with object-completion procedures. The second type of information is object detail. Two objects may have identical structures yet appear different because of surface details such as colour and orientation. Object representations must contain both sources of information. Most current theories of object representation treat surface details separately from structure. The fact that, for example, colour and orientation are registered in separate maps has consequences for object representation. If, as seems likely, an object's structural representation is displayed on the visuospatial scratchpad, then features which contribute to spatial representation will form a separate source of information from features which contribute to surface details. Marr (1982) proposes that surface details are added to the representation of spatial features at the 2.5-D sketch. Grossberg & Mingolla (1985) suggest that invisible spatial contours are formed which allow surface details to be filled in between the spatial contours, and finally Biederman (1987) suggests that object recognition is achieved by matching spatial template parts against the input.
To some extent, the two different tasks (recognition vs. naming) may also require the extraction of different kinds of visual information. It is not clear, though, how independent the analysis performed by the different low-level modules can and should be, and to what extent their information should actually be integrated.
Broadly speaking, these results support findings of the limited role of colour in both naming and recognition behaviour (e.g. Davidoff & Donnelly, 1990; Biederman & Ju, 1988; Davidoff & Ostergaard, 1988). This conclusion is strengthened by evidence that patients with achromatopsia are not visually handicapped, but rather only suffer a loss of quality in their vision (e.g. Humphreys & Riddoch, 1987b). The reliance on spatial characteristics, rather than surface characteristics, for object representation is efficient. Surface details can change, or appear to change, depending on the state of the object's near neighbours. Spatial features, however, do not simply appear; they arise when information important to object representation is categorized, enabling their formation. Object structure representations give us stability and constancy.
Models of object recognition differ in the way they represent the storage of knowledge concerning colour. The question then arises as to whether colour, stereo, and orientation form part of an integrated representation specifying the stored visual characteristics of objects.
Price & Humphreys (1989) proposed that channels for processing surface texture, shading, and colour exist independently of contour-based processing, and that these channels contribute separately to object recognition; that is, they do not solely provide additional definition of surface depth and orientation for a 2.5-D sketch or inferred volumetric primitive.
Surface- and edge-based accounts of object recognition make different predictions concerning the effects of colour on recognition. According to Marr (1980), the surface-coded representation in the 2.5-D sketch must be constructed prior to recognition taking place. Therefore, object recognition will benefit if objects are depicted along with their surface details (such as variations in brightness, texture, and colour), either in an indirect way (e.g. the curved surface of a fruit may be derived from its texture) or in a direct way, if they are specified as part of the stored description of an object. Edge-based accounts of object recognition make different predictions concerning the effects of surface information on recognition (Biederman, 1987).
Biederman & Ju (1988) argue that recognition operates directly from two‑dimensional edge‑based representations of the major components of objects. According to these accounts, surface‑coded representations do not need to be constructed prior to recognition taking place. Thus the recognition of line drawings, containing no surface details, will be as efficient as that of coloured photographs of objects, containing extra surface details. For example, the eyes encode the intensity of reflected light so that doubling the intensity increases the perceived brightness by a constant amount. This is the famous Weber‑Fechner Law, which suggests that the firing rates of sensory cells are logarithmically proportional to the sensory stimulus.
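The logarithmic relation named above can be sketched numerically. This is a minimal illustration, with arbitrary constants rather than measured values:

```python
import math

def perceived_brightness(intensity, k=1.0, reference=1.0):
    """Weber-Fechner law: perceived magnitude is proportional to the
    logarithm of physical intensity (k and reference are arbitrary
    illustrative constants, not measured values)."""
    return k * math.log(intensity / reference)

# Doubling the physical intensity adds a constant amount of perceived
# brightness, whatever the starting level: both steps equal k*log(2).
low_step = perceived_brightness(2.0) - perceived_brightness(1.0)
high_step = perceived_brightness(200.0) - perceived_brightness(100.0)
print(abs(low_step - high_step) < 1e-9)  # True
```

The equal step sizes at low and high intensities are the signature of the logarithmic encoding described in the text.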
Experimental evidence supports the view that superiority effects operate from outline stimuli with very little surface detail (Homa et al., 1976; Davidoff & Donnelly, 1990). Recognition can proceed from edge-based entry-level representations without the need for other stored information. Therefore, there will be no difference in the speed with which objects can be recognized from black-and-white and coloured photographs (Ostergaard & Davidoff, 1985; Biederman & Ju, 1988; Davidoff & Ostergaard, 1988). For example, Biederman & Ju (1988) found that the times to name colour photographs and line drawings of objects showed an advantage for colour photographs when no mask followed the stimuli, but there was no difference between naming latencies for colour photographs and line drawings when the stimuli were followed by a pattern mask.
Biederman & Ju (1988) found that the effect of colour was not altered by its specificity for the object; that is, the result held for both photographs and line drawings of objects with and without a defining or typical colour.
The limited role of colour has been disputed by Price & Humphreys (1989). They argue that colour may be required at the entry level to disambiguate objects from categories that are structurally similar. Price & Humphreys stated that, despite the significant effects of surface detail in their experiments, it remains true that similar effects have been found in other studies (Biederman & Ju, 1988; Davidoff, 1985). They suggest that the discrepancy between their significant results and those of previous studies, for instance Davidoff et al. (1988), arises because colour affects processing only when an object has a specific associated colour, for example the orange colour of an orange. It has been found that the presentation of an object's colour (e.g., the red or green of an apple) plays a significant part in reducing the time to name the object (Ostergaard & Davidoff, 1985).
In sum, the results of the present study are consistent with an independent dimensional processing model, which assumes that some dimensions can be analyzed independently and in parallel. A stimulus is processed on the basis of the first relevant dimension that is resolved. Garner (1974) labeled such dimensions 'separable': it is easy to limit processing to one dimension while ignoring the other. In the framework of information integration (Anderson, 1974), this limitation of processing to one dimension is described by the selection rule, which postulates that observers respond only to information represented in one dimension and ignore the rest; the neglected dimension(s) do(es) not have any effect on the observer's performance.
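The selection rule can be given a minimal sketch; the stimulus encoding and dimension names here are hypothetical, chosen only for illustration:

```python
def respond(stimulus, relevant_dimension):
    """Selection rule (after Anderson, 1974): the observer responds
    only to the value on the relevant dimension; neglected dimensions
    have no effect on performance. Dimension names are hypothetical."""
    return stimulus[relevant_dimension]

# Two stimuli that differ on every irrelevant dimension...
a = {"shape": "cup", "colour": "red", "orientation": 0}
b = {"shape": "cup", "colour": "grey", "orientation": 90}

# ...draw the same response when only 'shape' is relevant, as
# expected for separable dimensions (Garner, 1974).
print(respond(a, "shape") == respond(b, "shape"))  # True
```

Because the response function never consults the neglected dimensions, varying them cannot change performance, which is the defining property of separable processing.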
11.6.5. Conclusion
It is quite clear that the null effects of colour in both tasks of the present experiment fulfilled the prediction that colour is independent of other visual attributes of the stimulus.
However, we should expect colour to affect the process of selecting between competing representations only when the object has a specific associated colour. The relative usefulness of the dimensions involved also affects how different sources of information are utilized in object recognition. The present finding of independent processing of colour, stereoscopic depth, and orientation is consistent with these factors being processed by separate channels in the visual system when identification tasks are performed. All three factors must be considered carefully in mapping out the attentional span in the depth axis of 3-D visual space.
Many investigators have stressed the importance of stimulus properties and emphasized the need to take into account the interactions occurring among stimuli, organisms, and experimental tasks in order to explain processing (Forad & Nelson, 1984; Ward, 1985; Foley & Cole, 1986; Ward & Vela).
The role of the visual properties that characterize objects and contribute to their identification, such as form and colour, was manipulated in the experiment. The effects produced between the colour versus black-and-white condition and both viewing and orientation in the present study reveal the importance of manipulating the physical characteristics of the stimulus objects for naming and recognition behaviour. Such manipulations could provide valuable clues for understanding the mechanisms that underlie the naming and recognition of oriented everyday objects.
The present finding of an object effect is consistent with a number of other demonstrations of the apparent ease with which the visual system computes a representation in three-dimensional space from a two-dimensional projection. For example, Benton, Smith & Lang (1972) reported a decline in naming efficiency in aphasic patients as a consequence of a reduction in stimulus cues which, however, still permitted identification. Similarly, Goodglass, Barton & Kaplan (1968) suggested that "reduction of informational input in a modality results in a lowered level of concept-arousal through that modality". The subject's selection act consists of two main components: identification and acquisition. Identification is the classification of the foveal image object (i.e., this object is or is not the target).
Acquisition is the selection of an object or point outside the fovea to look at next. The subject's ability to selectively fixate objects on the basis of specific characteristics may, in fact, be the ability to select objects in the extra-foveal field which are similar to the target as specified.
The subject's percept of 3-D or 2-D is determined by the target specifications. For example, he or she identifies a "cup" on the basis of visual features of the object, including symmetry, the ratio of height to length, and so forth. One necessary question for future research, then, is whether the present object effects change according to the nature of the stimulus. This raises the interesting possibility that object naming and object recognition can be influenced by the visual characteristics of the object.
The effect of combining the colour versus black-and-white condition, stereo, and orientation in the naming and recognition tasks fulfilled the prediction that if the colour of objects is functionally independent of orientation and stereoscopic information, then the colour effect will be stable across variation in those factors.
As the results stand, the colour versus black-and-white manipulation as a main effect did not produce a significant effect on either the naming or the recognition task. This null effect of colour seems explicable if we consider the way the objects were visually presented: in identical colour and with a combination of different attributes (i.e., different angles and two different viewing conditions). The identical colour of the three different stimuli might have served as a distractor which forced the subjects to pay attention to the general appearance of the shape of the object itself and not to its specific colour. In this situation the subjects' strategy for performing the experiment remained focused on the shape of the object itself, regardless of whether the object was in colour or in black and white.
The combination of different visual attributes of the object with the distractor factor of identical colour made the task more complicated.
As the objects in this experiment were defined by a conjunction of form and colour, the task demands more complex segmentation, which in this case enforced a serial search, because segmentation must be carried out for each individual object in the display. In contrast, searching for a single red target among green distractors appears parallel and can be interpreted as a very easily achieved segmentation.
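This contrast between easy feature-based segmentation and serial conjunction search can be sketched as a toy reaction-time model; the intercept and slope are arbitrary illustrative values, not fits to the present data:

```python
def search_rt_ms(display_size, conjunction_target):
    """Toy visual-search model: a single red target among green
    distractors segments in parallel ('pop-out'), so RT is flat
    across display size; a target defined by a conjunction of form
    and colour forces item-by-item segmentation, so RT grows with
    display size. Constants are arbitrary illustrative values (ms)."""
    base_ms, slope_ms = 400.0, 50.0
    if conjunction_target:
        return base_ms + slope_ms * display_size  # serial search
    return base_ms                                # parallel pop-out

for n in (4, 8, 16):
    print(n, search_rt_ms(n, False), search_rt_ms(n, True))
```

The flat function for feature search and the rising function for conjunction search capture the qualitative difference described above.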
The visual system is able to analyze various attributes, such as shape, colour, and texture. In each case a different network is used for the analysis, yet the sensory signals for these attributes all originate at the eye. As a result, it is often necessary to qualify the modality: visual pictorial (shape), colour, visual texture, and so on. The variation in RTs observed in the naming task is a function of the amount of rotation between the three different angles. In this case the entry-level representations are not colour-coded descriptions.
Biederman & Ju (1988) did not use monochrome photographs, which would eliminate colour while leaving surface texture and shading information. Because luminance differences carry both texture and boundary information, the comparison between a line drawing and a black-and-white photograph is fair only when all boundary information is retained. It is unlikely that the Biederman and Ju comparison would produce the same result if subtle discriminations were required.
As the three objects used in the present experiment (i.e. cup, plug and stamp) were of identical colour (either the same red, or black and white), this identical colour might have served as a distractor which forced the subjects to pay attention to the general appearance of the shape of the object itself and not to its specific colour.
11.6.2. Naming task
The naming task in the present experiment was one in which three stimuli were presented successively and the subject had to name them as quickly as possible.
11.6.2.1 Angle Effects:
With regard to the main findings of the naming task, one of the most significant concerns the angularity effect. Overall, the results of the naming task in Experiment 7 provide further evidence for the principal finding of the experiments reported in Chapter 10. The increase in reaction time with angular difference is again clearly evident for objects rotated in depth. Viewing an object at 0 degrees took reliably longer than viewing the same object at 90 degrees. In addition, the results show that the mean naming RT at 45 degrees is shorter than at 0 degrees, and the mean RT at 90 degrees is shorter than the mean RT at both 0 and 45 degrees. This result is consistent with the results of others who have found evidence for viewpoint-dependent representations of novel 3-D objects (Bülthoff & Edelman, 1990; Rock & DiVita, 1987; Tarr, 1989; Humphrey & Khan, 1992).
Palmer et al. (1981) conducted an extensive study of the perceptibility of various objects presented at a number of different orientations. Generally, a three-quarters front view (viewing an object at 45 degrees) was most effective for recognition, and their subjects showed a clear preference for the object in its canonical orientation. It is possible that the 90-degree view in the present study has a 'canonical' status (Palmer et al., 1981), like the upright orientation, and that under depth rotation an input shape may be rotated into correspondence with the canonical view even if other stored views are nearer. Thus, naming times would exhibit two components: one dependent on the orientation difference between the observed object and the canonical view, the other dependent on the orientation difference between the observed object and the nearest stored orientation.
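The two-component account can be written as a toy model in which naming time combines the angular distance to the canonical view with the distance to the nearest stored view; the weights, stored views, and canonical angle here are hypothetical illustrative choices:

```python
def naming_rt_ms(angle, stored_views=(0, 90), canonical=90):
    """Toy two-component naming-time model: one component depends on
    the orientation difference from the canonical view, the other on
    the difference from the nearest stored view. All constants are
    arbitrary illustrative values (ms and ms/degree)."""
    base_ms, w_canonical, w_stored = 500.0, 2.0, 1.0
    to_canonical = abs(angle - canonical)
    to_nearest = min(abs(angle - view) for view in stored_views)
    return base_ms + w_canonical * to_canonical + w_stored * to_nearest

# Reproduces the reported ordering RT(0) > RT(45) > RT(90):
print(naming_rt_ms(0), naming_rt_ms(45), naming_rt_ms(90))
```

With these arbitrary weights the model yields the same ordering of naming times across the three angles as the present data, though of course it is only a sketch of the proposal, not a fit.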
With regard to the results of the present experiment, in naming visually presented rotated objects the subjects compared the objects in different orientations feature by feature; the more features that differ, the longer the subjects are likely to take to retrieve the name of the stimulus in a new orientation. To verify that stimuli are in fact identical, the subjects must compare all features of the same object when it is shown rotated in depth. Accordingly, RTs are typically faster at 90 degrees than at 0 degrees. Objects can be more readily identified from some orientations than from others (Palmer, Rosch, & Chase, 1981).
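The feature-by-feature comparison described above can be sketched as follows; the feature tuples and timing constants are hypothetical, introduced only to illustrate that more mismatching features mean a longer retrieval time:

```python
def comparison_rt_ms(shown_view, stored_view):
    """Toy feature-comparison model: naming a rotated object takes
    longer the more features differ between the shown view and the
    stored view; identity is verified by comparing every feature.
    Constants are arbitrary illustrative values (ms)."""
    base_ms, per_mismatch_ms = 520.0, 40.0
    mismatches = sum(a != b for a, b in zip(shown_view, stored_view))
    return base_ms + per_mismatch_ms * mismatches

stored = ("handle-left", "rim-ellipse", "body-full")    # hypothetical
view_90 = ("handle-left", "rim-ellipse", "body-full")   # matches stored
view_0 = ("handle-hidden", "rim-circle", "body-full")   # two mismatches

print(comparison_rt_ms(view_90, stored) < comparison_rt_ms(view_0, stored))  # True
```

The fully matching 90-degree view incurs no mismatch cost, while the foreshortened 0-degree view does, mirroring the RT pattern reported above.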
Another condition under which viewpoint affects the identifiability of a specific object arises when the orientation is simply unfamiliar, as when a plug is presented at 0 degrees in depth (visible parts becoming hidden). For some objects a rotation in depth changed which parts were visible, and for these objects naming time did increase. Some rotations could also change the visible relations among parts and thus affect naming.
Jolicoeur (1985) reported that naming RTs were lengthened as a function of an object’s rotation away from its normally upright position. He concluded that mental rotation was required for the identification of such objects, as the effect of X‑Y rotation on RTs was similar for naming and mental rotation.
The orientation effect upon naming time for objects differing in many dimensions (depth, colour, shape) in the present task suggests that the stimulus structure limits the subject's processing options. There are some changes, of course, in perceived structure that occur during development. These results agree with those of other authors who have worked with similar types of stimulus dimensions (Bruner, 1974; Shepp, 1983; Ward, 1983; Ward et al., 1986). Stimuli that are perceived by the older child and adult as separable are perceived by the young child as integral. However, the perception of objects formed from integral dimensions does not appear to vary in the course of development (Smith, 1980; Ward et al., 1986).
A number of studies have shown that patients with right-hemisphere posterior injuries have difficulty in recognizing unconventional views of objects, but generally perform well with conventional views of the same objects (Humphreys & Riddoch, 1984; Ratcliff & Newcombe, 1982; Warrington & James, 1986, 1988).
Humphreys & Riddoch (1984) and Riddoch & Humphreys (1986) found that in four patients the ability to recognize photographs of foreshortened objects was impaired. One group of patients showed impaired performance only if the major axis of a target object was foreshortened. Another patient was impaired only if the primary distinctive features of the target object were obscured. This double dissociation within the group of patients suggests two routes to object constancy: one specifying the object's structure defined with reference to its major axis; the other characterized by something like a feature-list of the object's distinguishing properties or parts. Unusual views could occlude parts or features as well as obscure an object's main axis of elongation. A feature-checking model predicts the longest RTs for "non-identical" stimuli, since all features of the stimuli would have to be compared. Since "canonical view" RTs (viewing an object at 90 degrees) are faster than "foreshortened view" RTs (viewing an object at 0 degrees), a process in addition to feature checking must be proposed. It seems that the features of certain objects, but not others, are easier to analyze.
11.6.2.2. Colour effect
The effect of colour versus black-and-white stimuli was not significant. Therefore the null hypothesis is accepted, and the conclusion is that an object appearing in an inappropriate colour does not affect the speed of response latencies.
Previous research has found a significant advantage in naming latencies for colour photographs over monochrome photographs of objects (Ostergaard & Davidoff, 1983). However, this advantage was for the naming of naturally coloured objects (e.g., fruits and vegetables). The advantage of colour over black and white using man-made objects as stimuli in the present experiment did not reach significance. One explanation may be that the objects in earlier studies (e.g., fruits and vegetables) have a characteristic colour whereas man-made objects do not.
The three objects used were all structurally dissimilar and also the same object differed from one view to another according to the rotation variable (i.e., angle 0 vs. 45 vs. 90 degrees) which may explain the lack of a significant difference between colour and black and white photographs in this study. Price & Humphreys (1989) found a significant difference between line drawings and black and white photographs for structurally similar objects but not for dissimilar objects.
One other explanation is that the three objects used in the present experiment shared the same colour (red), which encouraged the subjects to pay attention not to colour but only to the shape of the presented object. Naming will then take additional time while the subject filters out the irrelevant information and attends to the relevant dimension (selective attention).
11.6.3. Viewing Condition (binocular vs. monocular) effect
The effect of viewing condition also did not exceed the critical F value in the different analyses. However, the interaction of viewing condition by object was significant at the 0.0001 level for the black-and-white RT data only.
The viewing by objects interaction showed that the object “cup” under binocular view condition had a shorter mean RT than under monocular view. In addition, the mean RT of cup was faster than the other two objects in both viewing conditions.
Interestingly, the results of the separate ANOVAs on mean naming reaction time for the colour and black-and-white data show that the interaction between viewing condition and subjects was significant at the 0.01 level. The nature of the interaction of viewing condition and angle with subjects, found in both sets of analyses (colour and black-and-white data separately), reveals several suggestive differences between the RTs for the colour and the black-and-white conditions. In the light of the present results, the RT differences between a particular object and viewing condition suggest that the dimensionality of the depicted visual objects is an important factor in naming objects rotated in depth.