
5.4 Experimental evidence of the functional parallels between imagery and vision
Several lines of research within cognitive psychology have provided evidence of common representations for imagery and perception.
Podgorny and Shepard (1978) demonstrated the functional equivalence of mental images and visual percepts in a dot localization task. Subjects were shown a square grid in which they either imagined or were presented with a block letter. On each trial a probe dot was presented somewhere in the grid, and the subjects’ task was to decide whether the dot fell on or off the (real or imagined) letter. Podgorny and Shepard found that the pattern of response times was highly dependent on the spatial position of the dot with respect to the letter.
More importantly, the pattern of response times was essentially the same whether the letter was real or imagined, as would be expected if images and percepts of the letters activated common representations.
Kosslyn’s (1980) studies of mental imagery have been aimed at explaining the format of mental images and other information-processing characteristics shared by mental imagery and perception. He found that images have a limited resolution, such that two image points imagined too close together can no longer be distinguished. Furthermore, the finding that images show the visual “oblique effect”, such that lines can be imagined more closely spaced at a horizontal or vertical orientation than obliquely, has been taken to imply that visual representations are being used.
Finke and Kosslyn (1980) had subjects inspect patterns consisting of two dots separated by a variable distance. On different trials the subjects then either observed or merely imagined such a pattern at the centre of a large screen, and they were asked to move their eyes along horizontal and vertical lines on the screen to find the most peripheral points at which the dots could still be distinguished visually. These measures defined “fields of resolution” for both imagery and perception conditions; the fields of resolution increased in size monotonically at the same rate with increasing separation between the dots, were of a horizontally elongated elliptical shape, and extended further below the point of fixation than above.
However, Finke and Kurtzman (1981a) repeated these experimental procedures using patterns that consisted of concentric circles varying in area and relative contrast. Their prediction drew on results on size scaling, which suggested that objects are imagined to be smaller in area than they actually are. For example, Moyer (1973) and Paivio (1975) reported that the time required to indicate which of two named animals is larger decreases monotonically with increasing difference between the actual sizes of the animals. Moyer and Bayer (1976) later compared reaction times in perception and memory conditions for judging which of two circles is larger, when the circles were drawn either from a small or a large range of sizes. Although judgements in the memory condition (in which the subjects were presented with “names” they had learned for the circles) were slower overall, the reaction times in each condition decreased with increasing differences in both the absolute and ordinal size of the circles.
On the basis of these size-scaling results, Finke and Kurtzman (1981a) predicted that imagery fields of resolution would increase in size less rapidly with increasing pattern area than perceptual fields. This prediction was not supported, however: the two fields of resolution increased in size with increasing pattern area at the same rate. In addition, reductions in pattern contrast decreased the size of the fields of resolution in the perceptual but not in the imagery conditions.
Finke and Kurtzman (1981b) studied the comparative sizes and shapes of the visual fields for perceived and imagined stimuli. They measured fields of resolution along eight spatial directions using square-wave bar gratings of three different spatial frequencies (i.e. 1, 3, and 9 cycles per degree); the subjects’ task was to find the points in their peripheral visual field where the two halves of each imagined or perceived pattern could no longer be distinguished. The subjects’ judgements revealed that fields of resolution for imagined and perceived patterns decrease in size with increasing spatial frequency in the same manner, and that the average shapes of these fields are very similar, with the perceptual fields being slightly more elongated. These results indicate that imagined patterns are judged to have certain constraints on visual resolution that cannot be accounted for entirely by the subjects’ guessing ability or by their consciously available knowledge about visual perception.
Intons-Peterson (1983) has argued that paradigms in which subjects are trained to make direct judgements about their images are particularly susceptible to the criticism that results reflect experimenter expectancy. She showed that the absolute size of fields of resolution in imagery depends on the expectations of the experimenter.
More recently, Cave and Kosslyn (1989), in a reaction-time study, compared the time taken to evaluate stimuli of varying sizes: when subjects expect an upcoming stimulus to be a certain size, response time increases with the disparity between expected and actual size.
The view advanced by Stephen Kosslyn and his associates is particularly interesting because it has much in common with traditional views of imagery, in which images in the mind are modelled on an interior picture or copy of whatever it is that is imagined. However, Heil (1982) criticised Kosslyn’s account of imagery. Heil argued on logical grounds that (a) to have a mental image is not to be in some special relation to an item or episode having the properties attributed to the image; and (b) the having of mental images, in consequence, is not like “scanning CRT’s”. Heil argued that the concept of a mental image as an internal picture (copy, replica) is mistaken. He claims that the connection between imagination and recognition (i.e., perception) may be logical, but that the character of our capacity to do these things is an empirical matter.
5.5 Neuropsychological Models of Visual Imagery
Farah’s model of visual imagery (1984)
Farah (1984) modified Kosslyn’s (1980) theory of visual imagery. According to Kosslyn’s (1980) model, the imagery system involves information-bearing structures and information-manipulation processes.
One information‑bearing structure is the long‑term visual memory structure. This is seen as storing information regarding the appearance of objects. A second kind of structure is labelled the visual buffer. This in itself does not bear information, but is the medium in which images occur.
The information-manipulating processes are thought to include those which (i) generate, (ii) inspect and (iii) transform the image in the visual buffer. The generation process utilizes information in long-term visual memory to create an image in the visual buffer. The inspection process converts the image into organized percepts, identifying parts and relations within the image. The transformation process transforms the image, for example by rotation. In fact, there is a good deal of evidence that identification of these kinds of objects under certain conditions may proceed by matching characteristics of the figure as a whole to models stored in long-term memory (Tarr and Pinker, 1989).
Disruption of the long-term visual memory structures, the visual buffer, the generation process or the inspection process would give rise to an individual reporting no imagery and an inability to carry out tasks involving the consultation of an image. Loss of the transformation process, however, would not give rise to total loss of imagery, but would create deficits in visual/spatial thought processes.
Farah (1984) added a number of non-imagery processing components to Kosslyn’s model. First, a “describe” component for question-answering tasks, during which the contents of the visual buffer (i.e. either an internally generated image or a visually encoded percept) must be described. Second, a “copy” component for constructional tasks, when the contents of the visual buffer (again either an internally generated image or a visually encoded percept) must be drawn or constructed. Third, a “detect” component for detecting (but not processing) the mere presence of activation in the buffer. Fourth, a visual encoding process for encoding stimuli into the visual buffer. Fifth, a recognition process in which a match is obtained between the representation derived from input (i.e. the inspected contents of the visual buffer) and representations stored in long-term visual memory. The model illustrating elements of Kosslyn’s (1980) theory with Farah’s (1984) additional components and processes superimposed is shown in Figure 5.3 below.
[Figure 5.3 diagram: the components LONG TERM MEMORY and VISUAL BUFFER, with the processes ENCODE, DETECT, INSPECT, DESCRIBE, COPY and MATCH.]
Figure 5.3. The general model of perception and imagery from which task analyses were derived. Patterns of activation are formed in the visual buffer either by an image generation process (from long-term memories) or a perceptual encoding process. The presence of activation in the visual buffer may simply be detected, or an inspection process may read out structured patterns of activation for further processing: description (in the case of question-answering tasks), copying (in the case of drawing or construction tasks) or matching with long-term memories (in the case of recognition tasks).
(From Farah, 1984).
According to this model, imagery shares many of the same representations and processes as visual perception. When an object is seen, its appearance is encoded from the retinal image into the visual buffer. It may then be matched to one of the appearances stored in long-term memory. When an object is imagined, its appearance is generated from long-term memory into the visual buffer. Whether seen or imagined, the contents of the visual buffer can be inspected or transformed in order to attempt a match.
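The flow just described can be made concrete with a small computational sketch. The fragment below is purely illustrative and does not come from Farah (1984) or Kosslyn (1980); all names and data structures are invented for exposition. It simply traces how a single visual buffer is filled either by the perceptual “encode” route or the imagery “generate” route, and is then served by the same detect, inspect and match processes.

```python
# Illustrative sketch of the Farah/Kosslyn component model.
# All identifiers are assumptions made for exposition, not from the sources.

LONG_TERM_MEMORY = {"cup": "curved handle, open cylinder"}  # stored appearances

class VisualBuffer:
    """Medium in which images occur; not itself information-bearing."""
    def __init__(self):
        self.contents = None

def encode(buffer, retinal_input):
    """Perceptual route: load a percept into the buffer."""
    buffer.contents = retinal_input

def generate(buffer, name):
    """Imagery route: load an appearance from long-term visual memory."""
    buffer.contents = LONG_TERM_MEMORY[name]

def detect(buffer):
    """Report the mere presence of activation, without structured processing."""
    return buffer.contents is not None

def inspect(buffer):
    """Read out structured contents for description, copying, or matching."""
    return buffer.contents

def match(buffer):
    """Recognition: compare inspected contents with long-term memories."""
    return [name for name, appearance in LONG_TERM_MEMORY.items()
            if appearance == inspect(buffer)]

buffer = VisualBuffer()
generate(buffer, "cup")          # imagined object fills the buffer
assert detect(buffer)            # activation is present
assert match(buffer) == ["cup"]  # the same match process serves seen objects
```

The point of the sketch is only that both routes converge on one buffer, so damage to the buffer or to inspection should impair imagery and perception alike, whereas damage confined to one input route should dissociate them.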
It is evident that Farah’s model is a neuropsychological information-processing model, in that she attempts to identify functional components and the way in which they communicate with one another. It is important to consider whether imagery requires a component in addition to the long-term visual memory component, as suggested above by Farah (1984), since if separate components exist, the deficits predicted following brain damage would be qualitatively different.
5.6 Disorders of visual imagery
Farah described a single functional locus for maintaining representations of perceived visual material and for generated images of objects. Farah considered that this locus “is not itself information bearing but is the medium in which images occur”. Such images are generated from the long-term memory store or encoded from the outside world. Accordingly, Manning and Campbell (1992) carried out an investigation with an optic aphasic patient, A.G., who was unable to read aloud or to comprehend written material of any sort. Manning and Campbell’s results suggest a separation between a representational locus for the input-driven visual structural analysis of pictured objects, and one that supports the ability to generate structural representations from long-term memory (via the given name or by touch). However, A.G. was unable to perform tasks such as tail-occluded pictures or the Heads Test, which suggests that when the task is driven from vision he cannot directly compare information from one of these sources with the other. This deficit in generating a usable image from disrupted or incomplete visual information may be an associated or causal factor in his optic aphasia. Thus imagery generation is directly implicated in naming and in the free description of pictured objects. Also, impaired access to semantic representations from vision might be expected to affect the processing of occluded visual material. Riddoch (1990) has shown a similar separation on the basis of a patient with a different sort of deficit: an inability to generate an image from long-term memory, but good performance with visually presented material.
The matching processes required for the correct decision in the masked or occluded tasks are more difficult for such pictures than for fully specified ones. The difficulty may arise from a failure of direct activation, by the (incomplete) visual input, of the semantic (imagistic) specification in a noisy context, or from a failure to make effective use of recursive interactive procedures that could draw together the semantic and visual representations in the generation of images.
More direct evidence of an impairment of the visual store is provided by poor performance on imagery tests. Recently, De Renzi and Lucchelli (1993) reported a patient with severe impairments in object recognition. Her main complaint was an inability to read and to recognize familiar faces and objects after a severe head injury sustained in a traffic accident. The neurological examination disclosed a mild spastic paresis of the left lower limb, without sensory disturbances. In the subsequent months the paralysis recovered completely and there was some improvement in reading and object recognition, while the prosopagnosia remained severe. A CT scan showed that the patient had suffered damage to the left hemisphere. The patient’s performance on imagery tests, such as drawing from memory, describing the perceptual differences between similar objects and describing the colours of objects, remained impaired over months of subsequent investigation.