
 

Deriving Meaning from Sound: An Experiment in Ecological Acoustics

Sean W. Coward and Catherine J. Stevens

Macarthur Auditory Research Centre, Sydney (MARCS)

University of Western Sydney

 

 

While speech and music perception have received considerable attention in psychology (Deutsch, 1999; Jusczyk, 1997), relatively little is known about the way in which humans perceive other environmental sounds. Recent literature within the field of ecological acoustics has focused on the way in which the soundwave contains meaningful information for the listener (Ballas & Howard, 1987; Ballas, 1993; Gaver, 1986, 1989, 1993a, 1993b; Heine & Guski, 1991; Jenison, 1997; Pressing, 1997; Rosenblum, Wuestefeld, & Anderson, 1996; Stoffregen & Pittenger, 1995; Warren & Verbrugge, 1984). For example, Gaver (1986) proposed that acoustic properties of the sound signal convey information that enables identification of an associated event. Sound-event mappings that express consistent information regarding the source are termed nomic, whereas symbolic mappings consist of the pairing of unrelated dimensions. Gaver (1986) predicted that the redundancy of information expressed within nomic mappings results in an intuitive association, and that this initial advantage aids learning relative to the learning of symbolic mappings.

Surprisingly, few of today's 'informative' sounds would appear to build on the inherent meaning in nomic mappings. The ring of a telephone, the buzz of an alarm clock, and the wail of an ambulance siren seem to be designed to gain attention, but may require an additional cognitive step to link sound and meaning. Although ultimately effective, symbolic mappings of this kind may be relatively inefficient and require an unnecessary period of learning. Accordingly, the aim of the present study was to conduct a systematic, experimental investigation of the relative ease of learning nomic and symbolic sound-event mappings. A review of the relevant literature begins with an outline of Gaver's theoretical framework for the field of ecological acoustics.

Everyday versus Musical Listening

Gaver (1993b) distinguished between two types of auditory perception that reflect the attentional focus of the individual. Musical listening is the experience of perceiving properties of the proximal stimulus as it reaches the ear. Sounds are perceived in terms of pitch, loudness, rhythm, and other acoustic components typically analysed by psychologists and psychophysicists. This auditory experience is common during the perception of music. As an example, the sound of a melody is usually appreciated with reference to the rhythm and pitch variations that signify the song.

The study of musical listening has received considerable attention in psychology (Deutsch, 1999; Tighe & Dowling, 1993). As a result, we understand a good deal about the way humans perceive acoustic features. However, listeners do not always focus on the proximal stimulus; they may instead attend to the distal stimulus. For example, while listening to a melody it is also possible to determine the musical instrument responsible. Contemporary psychological theories are less adept at explaining how the listener identifies the source of the sound as a guitar.

Gaver (1993b) addressed this issue with the notion of everyday listening. During the everyday listening experience the distal stimulus is the focus of perception. Rather than perceiving the rhythm and melodic contour of sound, the individual perceives the event responsible for producing the soundwave. According to Gaver, perception of the event is made possible by detecting consistent causal relationships as described by physical law. For instance, the action of plucking a metal string suspended over a resonant cavity produces a specific pattern of air disturbances that can only be produced by a constrained number of objects and events. An individual engaged in everyday listening is consequently able to perceive the strum of a guitar.

While everyday or musical listening can be applied during the perception of any sound, the majority of psychological research on audition has studied the proximal stimuli as described by psychoacoustics (e.g. Rasch & Plomp, 1999). This empirical emphasis on musical, at the expense of everyday, listening is most likely a product of the assumption that auditory stimulation requires processing before it becomes informative. However, proponents of ecological acoustics counter this assumption, arguing that the structure of the soundwave conveys meaning for the listener. The present study provides the first experimental examination of Gaver's conception of everyday and musical listening, with the prediction that meaningful information regarding the sound source is accessible only during the everyday listening experience. This hypothesis was tested by systematically manipulating instructions to encourage either everyday or musical listening, a technique that has been shown to influence the expectations and performance of listeners (Ballas & Howard, 1987).

Learning Sound-Event Mappings

Within Gaver's (1993a, 1993b) framework, the structure of a soundwave does not specify one certain source, but rather constrains the range of events that could have produced the sound. As a result, the specifics of the sounding event presumably have to be learned. The association of a signal (sound) to a referent (event) can be conceptualised as a mapping (Familant & Detweiler, 1993). A nomic mapping associates a sound to an event with which it is causally linked (Gaver, 1986). For example, the use of the sound of fire to indicate the event of burning is a nomic mapping as the two components are expressing the same event. In contrast, symbolic mappings are arbitrary associations that rely on social convention for meaning (Gaver, 1986). In this way a fire alarm may be used to signify the event of fire.

According to Gaver (1986) the experience of everyday listening allows certain sound-event mappings to be learned more easily than others. Nomic mappings involve complete congruence between acoustic information and the causally related occurrence, with the two components expressing the same event. This redundancy occurs as the feature sets of the signal and referent are identical (Familant & Detweiler, 1993), resulting in an implicit association. Research has shown that even young children find it relatively easy to map sounds to an appropriate event with only one trial (Jacko & Rosenthal, 1997). By contrast, the acoustic properties of the sound in a symbolic mapping bear no resemblance to the event it is meant to convey. Such an association must therefore be learned through contiguous exposure. Consequently, Gaver (1986) provided the, as yet untested, hypothesis that nomic mappings are more easily learned than symbolic mappings. As the implicit associations within nomic mappings are only evident during everyday listening, these learning advantages are not likely to be realised during the musical listening experience.

Gaver (1986) was quick to point out that nomic mappings merely provide an initial associative advantage that is not maintained after a mapping has been well learned. Speech is an obvious example of the efficient learning of symbolic relations. Nonetheless, there is great potential to explore the efficacy of non-verbal nomic mappings in conveying information quickly and efficiently. This issue has important design implications for auditory icons, which are natural sounds used to communicate information (Gaver, 1989). It is therefore hypothesised that the learning advantage of nomic over symbolic mappings manifests during the initial learning trials; with exposure, nomic and symbolic mappings become equivalent. The latter hypothesis will be tested by comparing performance in immediate and delayed test conditions.

Identification of Nomic and Symbolic Mappings

One challenge for the field of ecological acoustics is to develop a taxonomy of nomic sound-event mappings. The majority of studies in ecological acoustics have approached this task by examining the perception of complex sounds within their natural environment (e.g. Vicente & Burns, 1996). Research has shown that both adults (Lass, Eastham, Parrish, Scherbick, & Ralph, 1982) and children (Jacko, 1996) are capable of accurately identifying the source of environmental sounds. One assumption of these studies is that the soundwave produced during each event is unique and must be learned by the individual. Such studies imply that a taxonomy of nomic mappings requires as many sounds as there are objects and events. While such empirical endeavours provide significant contributions to our understanding of the organism-environment interaction, they fail to isolate and identify potential invariant mappings between specific acoustic features and the corresponding properties of the sound source (Aslin & Smith, 1988).

In an experimental study addressing this topic, Warren and Verbrugge (1984) found that the differentiation of breaking and bouncing glass objects was dependent on temporal properties of the sound. Their analysis suggested that sound-source perception may be reduced to invariant temporal and spectral components. Specific acoustic features are therefore considered indicators of particular properties of the sounding object.

Gaver (1993b) adopted this reductive approach by speculating that causal relations exist between certain acoustic properties and source parameters, and that such mappings are invariant across sound-producing objects and events. For example, Gaver (1993a) proposed that the pitch of a sound is an invariant indicator of the size of the sound-producing object. Accordingly, sound-producing events should reflect the size of the sounding object via the pitch of the resulting sound. Larger objects presumably vibrate with larger oscillations and consequently produce a sound of lower pitch than do smaller objects. Gaver (1989) utilised the proposed nomic mapping between pitch and size within the computer interface SonicFinder, and this mapping has been recommended by others (Hereford & Winn, 1994).

The implication of a more reductive, psychophysical approach is that a finite set of invariant relations may be used to produce an infinite number of nomic sound-event mappings. Gaver (1993a) advanced this notion by developing a number of algorithms capable of synthesising complex sounds. One such algorithm, when entered into an appropriate sound synthesis program, replicates the sound of a bar being struck with a mallet. By changing specified parameters within this equation it is possible to alter the perceived properties of the bar. For instance, changing the value of the partial frequencies alters the perceived length of the bar, whereas changing the damping constant results in a perceived change of material. As a result, a large number of objects and interactions can be synthesised by altering certain parameters within a single algorithm.
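To make the synthesis approach concrete, the following sketch models a struck bar as a sum of exponentially damped sinusoidal partials. It is a minimal illustration in the spirit of Gaver's algorithm rather than his published equation: the partial ratios approximate those of an ideal free bar, and the amplitude and damping values are chosen purely for demonstration. Lowering the fundamental frequency mimics a longer bar, while increasing the damping constant mimics a change from a ringing to a dull material.

    import numpy as np

    def struck_bar(f0, damping, partial_ratios=(1.0, 2.76, 5.40, 8.93),
                   amplitudes=(1.0, 0.6, 0.4, 0.25), duration=1.0, sample_rate=44100):
        """Synthesise a struck-bar sound as a sum of exponentially damped partials.

        f0      -- fundamental frequency in Hz (a lower f0 suggests a longer bar)
        damping -- decay constant in 1/s (heavier damping suggests a duller material)
        The partial ratios approximate an ideal free bar; the amplitudes are illustrative.
        """
        t = np.arange(int(duration * sample_rate)) / sample_rate
        wave = np.zeros_like(t)
        for ratio, amp in zip(partial_ratios, amplitudes):
            # Higher partials decay faster, as they do in real struck bars.
            wave += amp * np.exp(-damping * ratio * t) * np.cos(2 * np.pi * f0 * ratio * t)
        return wave / np.max(np.abs(wave))

    # Changing f0 alters the perceived length; changing damping alters the perceived material.
    long_wooden_bar = struck_bar(f0=220.0, damping=8.0)   # lower pitch, heavy damping
    short_metal_bar = struck_bar(f0=880.0, damping=1.5)   # higher pitch, light damping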

Overview of the Study

Despite being widely cited throughout the ecological literature, the rigorous psychophysical analysis performed by Warren and Verbrugge (1984) is extremely rare within ecological acoustics (Heine & Guski, 1991). The possibility of compromising the ecological validity so cherished within the ecological paradigm means that highly controlled laboratory experiments are in the minority (Shepard, 1984). However, the relative paucity of controlled experimentation has restricted causal explanations to the imprecise level of unspecified stimulus interactions. Therefore, this study attempted to employ the precision of psychophysical reductionism while maintaining some ecological validity.

The present experiment used the algorithm for struck bars developed by Gaver (1993a). According to Gaver, pitch forms a nomic mapping with bar length, whereas damping, which is proposed to indicate the material of a bar, could be used in a symbolic mapping with bar length. Two pretests were conducted to test these predicted relations. The pretest experiments were based on the speeded classification task of the Garner interference paradigm (Garner & Felfoldy, 1970). In the Garner methodology, two dimensions are paired, and attributes on the second dimension are either varied orthogonally or held constant (Melara & Marks, 1990). If participants classify the first dimension more slowly when the irrelevant dimension varies, it can be assumed that the two dimensions interact in perception (Melara, 1989).
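To illustrate how this logic can be quantified, the sketch below computes a Garner interference score for each participant (the mean reaction-time cost of the orthogonal block relative to the baseline block) and tests whether it differs from zero. The data, sample sizes, and trial counts are hypothetical placeholders; they do not describe the pretests reported here.

    import numpy as np
    from scipy import stats

    def garner_interference(rt_baseline, rt_orthogonal):
        """Reaction-time cost (ms) of varying the irrelevant dimension orthogonally,
        relative to holding it constant, for a single participant."""
        return np.mean(rt_orthogonal) - np.mean(rt_baseline)

    # Hypothetical classification RTs (ms): 12 participants x 40 trials per block type.
    rng = np.random.default_rng(1)
    baseline = rng.normal(520, 40, size=(12, 40))
    orthogonal = rng.normal(560, 45, size=(12, 40))

    # A reliable positive interference score is taken as evidence that the two
    # dimensions (e.g. pitch and bar length) are perceptually integral.
    scores = [garner_interference(b, o) for b, o in zip(baseline, orthogonal)]
    t, p = stats.ttest_1samp(scores, 0.0)
    print(f"mean interference = {np.mean(scores):.1f} ms, t({len(scores) - 1}) = {t:.2f}, p = {p:.3f}")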

The purpose of this procedure is to identify integral dimensions that combine to produce a unitary perception. Nomic mappings may be thought of in terms of the association of two perceptually integral dimensions, as both the signal and referent depict the same event. As predicted, pitch and bar length were found to be perceptually integral, suggesting that they are nomically related. Damping and bar length were confirmed as a symbolic mapping.

Acquisition of nomic and symbolic mappings was tested using a variation of the paired-associate learning task employed by Leiser, Avons, and Carr (1989). This design offers several relevant advantages. Most importantly, participants learn the correct sound-meaning associations 'online' during the experiment via feedback: at first exposure they can only guess the correct pairings. This feature is useful for examining Gaver's (1986) claim that, in nomic relationships, the association between a sound and the circumstances of its production should be intuitive and straightforward.

As with all learning tasks, it is imperative to control the features of the associated referent to minimise confounds. For this reason bar length was represented in numerical terms, with a three-digit measurement in millimetres used to distinguish among the different lengths. It was concluded that these numbers were unlikely to differ in terms of familiarity, semantics, phonology, imagery, complexity, or difficulty. While the selection of numerals to indicate length may be considered somewhat abstract, Pansky and Algom (1999) used the Garner interference paradigm to demonstrate that numerical magnitude and physical size are perceptually integral dimensions.

Aim, Design, and Hypotheses

The aim of the present study was to examine the relative ease of learning nomic and symbolic sound-event mappings. The experiment employed a 2 × (2 × 2) factorial design, with mapping (nomic, symbolic) and test phase (immediate, delayed) as within-subjects factors and listening instruction (everyday versus musical) as the between-subjects factor. The dependent variable was the percentage of correct responses, a measure widely considered to be a valid indicator of learning in humans (Brand & Jolles, 1985; Greene, 1988; Lachner, Satzger, & Engel, 1994; Savage & Gouvier, 1992). The main hypothesis under investigation was that nomic mappings are more easily learned than symbolic mappings. In an important qualification to this prediction, it is hypothesised that these learning advantages manifest only in the immediate phase of the everyday listening condition. The experimental hypothesis can therefore be subdivided into four specific predictions:

Hypothesis 1: Learning in the nomic condition is superior to learning in the symbolic condition during the immediate phase within the everyday listening group.

Hypothesis 2: Learning in the nomic condition is equivalent to learning in the symbolic condition during the delayed phase within the everyday listening group.

Hypothesis 3: Learning in the nomic condition is equivalent to learning in the symbolic condition during the immediate phase within the musical listening group.

Hypothesis 4: Learning in the nomic condition is equivalent to learning in the symbolic condition during the delayed phase within the musical listening group.

Method

Participants

Participants were 40 students from the University of Western Sydney, Macarthur. Participation was voluntary, and the only requirements were self-reported normal hearing and, for control purposes, no formal training in music.

Materials

Auditory stimuli. For the nomic variable of frequency, a scale consisting of ten sounds was constructed. Using a wooden xylophone as a guide, the bar lengths (in millimetres) necessary to produce these frequencies, with bar width and thickness held constant, were then estimated. A further ten sounds were produced for the symbolic category of damping. All sounds were found to be distinguishable during pilot testing.
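The estimation procedure is not detailed here. As an illustration only, the sketch below assumes the ideal transverse-bar relation, in which fundamental frequency is inversely proportional to the square of bar length when width and thickness are fixed, and scales lengths from a single reference bar; the reference values and the ten-step frequency scale are hypothetical, not measurements taken from the xylophone used in the study.

    import numpy as np

    def bar_length_mm(target_hz, ref_hz=440.0, ref_length_mm=300.0):
        """Estimate the length of a bar sounding at target_hz, assuming frequency
        is proportional to 1/L**2 for fixed width and thickness. The reference
        frequency and length are illustrative placeholders."""
        return ref_length_mm * np.sqrt(ref_hz / target_hz)

    # A hypothetical ten-step frequency scale and the corresponding length estimates.
    scale_hz = np.geomspace(220.0, 880.0, num=10)
    for f, length in zip(scale_hz, np.round(bar_length_mm(scale_hz)).astype(int)):
        print(f"{f:6.1f} Hz -> {length} mm")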

Visual stimuli. Prior to hearing the struck bar, participants were presented with a green circle positioned in the centre of the screen. This served as a focus for the mouse pointer and ensured the pointer was equidistant from all numbers at the start of each trial. At the same time as the sound was presented, five numbers were displayed in a circular configuration surrounding the area previously occupied by the circle. The positions of the numbers were altered in each block to control for spatial organisation.
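One way to implement this control, sketched below under assumed screen coordinates and radius, is to place the five numbers equidistantly on a circle around the central fixation point and apply a random rotation each block so that the spatial position of each number changes.

    import math
    import random

    def number_positions(n=5, radius=150, centre=(512, 384)):
        """Place n response numbers equidistantly on a circle around the screen
        centre; a random rotation each block varies their spatial positions."""
        cx, cy = centre
        offset = random.uniform(0, 2 * math.pi)
        return [(cx + radius * math.cos(offset + 2 * math.pi * k / n),
                 cy + radius * math.sin(offset + 2 * math.pi * k / n))
                for k in range(n)]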

Apparatus. The experiment was designed and conducted using Powerlaboratory (Chute & Westall, 1996) and was run on one of two Apple Macintosh computers: a Power Macintosh 7300/200 and a Power Macintosh G3.

Procedure

Prior to testing, participants were given a brief summary of the experimental procedure. The participants were prompted to position the mouse on their favoured side and were fitted with headphones.

The experimental session was then initiated, with a more detailed set of instructions provided on screen. The instructions varied according to the type of listening that the condition was attempting to induce. Participants assigned to the everyday listening group were told that they would be presented with the sound of a struck pipe, and that this sound would be paired with the length of the pipe in millimetres. Those in the musical listening condition were told that they would hear a sound and that it would be associated with a label.

The paired-associate learning task required participants to select (using the mouse) the appropriate number after hearing a sound. Participants were given a brief practice phase to familiarise themselves with the procedure. After each selection, feedback indicated whether the choice was correct or incorrect and then displayed the correct association.

Each of the five sounds was presented twice in random order within each block, and each mapping condition consisted of five blocks. The order of mapping conditions was counterbalanced across groups. Evidence of retention and the effects of prior learning were assessed by repeating the test one week later (Barnard, Breeding, & Cross, 1984). The duration of each session was 30 minutes.
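The following sketch captures the task logic only; the experiment itself was implemented in Powerlaboratory, and the callback functions here are hypothetical stand-ins for its sound playback, response collection, and feedback routines.

    import random

    def run_block(mappings, play_sound, get_choice, give_feedback):
        """Run one block of the paired-associate task: each sound is presented
        twice in random order, the participant chooses one of the five length
        labels, and corrective feedback follows each choice. Returns the
        percentage of correct responses for the block."""
        trials = list(mappings.items()) * 2
        random.shuffle(trials)
        labels = list(mappings.values())
        correct = 0
        for sound, target in trials:
            play_sound(sound)
            response = get_choice(labels)
            correct += int(response == target)
            give_feedback(response == target, target)
        return 100.0 * correct / len(trials)

    # Example with a guessing 'participant'; the three-digit lengths are illustrative.
    demo_mappings = {f"sound_{i}": label for i, label in enumerate([215, 240, 270, 300, 335])}
    score = run_block(demo_mappings,
                      play_sound=lambda s: None,
                      get_choice=lambda options: random.choice(options),
                      give_feedback=lambda ok, target: None)
    print(f"block score: {score:.0f}% correct (chance is 20% with five options)")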

Results

Four orthogonal planned comparisons were performed to test the experimental hypotheses (Howell, 1997). Alpha was set at .046 to adjust for familywise error (Shavelson, 1988).
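Because each planned comparison contrasts nomic and symbolic scores from the same 20 listeners (consistent with the 19 degrees of freedom reported below), each corresponds to a paired-samples t-test. The sketch below illustrates one such comparison; the simulated scores are placeholders loosely modelled on the everyday listening, immediate-phase means, not the study's data.

    import numpy as np
    from scipy import stats

    # Simulated percentage-correct scores for one comparison (n = 20 listeners).
    rng = np.random.default_rng(0)
    nomic = rng.normal(65.7, 9.8, size=20)
    symbolic = rng.normal(55.8, 13.5, size=20)

    # Paired-samples t-test: each listener contributes one nomic and one symbolic score.
    t, p = stats.ttest_rel(nomic, symbolic)
    print(f"t(19) = {t:.2f}, p = {p:.3f}")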

Hypothesis 1 stated that nomic mappings are learned better than symbolic mappings in the everyday listening, immediate test phase; as predicted, the mean percentage of correct responses in the nomic condition (M = 65.70%, SD = 9.82%) was significantly greater than in the symbolic condition (M = 55.80%, SD = 13.50%), t(19) = 3.21, p = .005. Hypothesis 2 predicted that this difference would not be maintained in the everyday listening, delayed test phase; the corresponding comparison showed no difference between the nomic (M = 69.60%, SD = 8.80%) and symbolic scores (M = 70.50%, SD = 12.92%), t(19) = .32, p = .75. Hypothesis 3 stated that there is no difference in learning performance between nomic and symbolic mappings in the musical listening, immediate test phase; the comparison revealed no significant difference between the nomic (M = 62.80%, SD = 13.78%) and symbolic scores (M = 58.60%, SD = 16.28%), t(19) = 1.13, p = .27. Finally, Hypothesis 4 predicted no difference between nomic and symbolic mappings in the musical listening, delayed test phase; the comparison failed to detect a significant difference between the nomic (M = 67.80%, SD = 16.58%) and symbolic scores (M = 70.10%, SD = 12.02%), t(19) = .74, p = .47. A graphic representation of the learning curves is provided in Figure 1.

Figure 1. Mean percentage of correct responses for each mapping condition in the everyday and musical listening groups. Standard error of the mean is shown.

 

Discussion

The results support the four experimental hypotheses. Specifically, nomic mappings were learned significantly better than symbolic mappings, but the advantage was restricted, as predicted, to the immediate phase of the everyday listening group. Figure 1 shows that the largest discrepancy in performance between nomic and symbolic mappings occurred on the first block of the everyday listening condition. This observation endorses the notion that nomic mappings were more intuitive than symbolic mappings.

An ecological perspective provides one explanation of these findings. The nomic mapping of pitch and size afforded useful information to participants about the length of the struck bar. A biologically based explanation is that humans, possibly through evolution, have become adapted to associate the pitch of a sounding object with its size relatively easily. Such a combination represents a nomic mapping because it conforms to unchanging physical laws or states of affairs. These conditions have accompanied humans throughout history, and phylogenetic imprinting of these laws may provide the basis for direct event perception. However, the present experiment used adult participants highly familiar with relations between sounds and object size and, as a result, cannot rule out the possibility that the facilitatory effects of nomic mappings are the result of experience.

Implications for Design of Auditory Icons

The present study demonstrates that adults can determine the relative length of a struck bar from the acoustic quality of pitch. Importantly, this finding need not be restricted to impact sounds. Gaver (1993a, 1993b) suggests that more complex sound-producing events may be reducible to a series of impacts. For example, Gaver proposed that scraping involves multiple impacts as the moving object falls into depressions and hits raised ridges. As a result, the current research findings should generalise to a large number of events.

The learning advantages of nomic mappings are not necessarily confined to pitch. Damping indicates the material of a struck object, and amplitude, with its perceptual correlate of loudness, affords information about the proximity of an event and the force of the interaction (Gaver, 1993a). Warren and Verbrugge (1984) have illustrated the role of temporal properties during event perception. The challenge for ecological psychology is to investigate the extraction of such relevant features from the soundwave and to map each component to the invariant information that it affords. The resultant taxonomy of sound-event mappings would provide a framework for understanding realised and potential meaning inherent in the acoustic array.

Results of the present study also imply that nomic mappings, as illustrated with pitch and object size, provide significant initial learning advantages over symbolic mappings. Although the advantages are realised only during the initial stages of learning, such intuitive mappings may minimise the need for training. If invariant relations are mapped together then the meaning of an auditory icon should be obvious, learned quickly, and resistant to extinction.

Limitations and Future Directions

Critics of the current experimental approach may question the extrapolation of its results to perception in the real world. For instance, the reductive psychophysical analysis employed in the present study reduced the auditory stimuli to mere caricatures of everyday sounds. Importantly, it was the need to identify the elements crucial for event perception that prompted Gaver (1993a) to devise algorithms for synthesising auditory stimuli. His reasoning was that if an artificial sound produces accurate identification of the desired sound-producing event, then the essential spectral components have been included. It is also likely that the proposed one-to-one relationship between pitch and object size is more complex in real-world settings. Frequency is known to reflect the material and shape of an object, as well as its size (Gaver, 1993a). However, when the attributes of shape and material are held constant, there is probably a direct relationship between pitch and size (Gaver, 1993a). Thus, despite the risk of corrupting the natural listening experience, a controlled laboratory experiment was considered appropriate for examining the predictions of the current study.

Further exploratory, empirical research is still required in the relatively new field of ecological acoustics. First, invariant acoustic properties need to be examined and mapped to source-related information. For instance, Gaver's (1993a) predictions regarding sound-event mappings remain to be investigated. Pilot testing conducted to operationalise the nomic and symbolic categories in the present study employed the Garner interference paradigm, which appears to hold potential for the future investigation of sound-event mappings. Second, while the present results are consistent with notions of direct perception and preparedness, they provide definitive evidence for neither. Developmental and even comparative studies are needed to identify the basis for the advantage of nomic mappings. Finally, it would be of significant theoretical interest to examine whether these concepts generalise to linguistic and musical domains. Research into infant-directed speech suggests that prosodic cues, such as melodic contour and rhythm, rather than semantic content, communicate intent to young infants via direct manipulation of instinctive physiological responses (Fernald, 1989). It would be most interesting to examine whether invariant acoustic relations maintain their affordances in auditory contexts ranging from language to music.

References

Aslin, R. N., & Smith, L. B. (1988). Perceptual development. Annual Review of Psychology, 39, 435-473.

Ballas, J. A. (1993). Common factors in the identification of an assortment of brief everyday sounds. Journal of Experimental Psychology: Human Perception and Performance, 19(2), 250-267.

Ballas, J. A., & Howard, J. R. (1987). Interpreting the language of environmental sounds. Environment and Behavior, 19(1), 91-114.

Barnard, W. A., Breeding, M., & Cross, H. A. (1984). Object recognition as a function of stimulus characteristics. Bulletin of the Psychonomic Society, 22(1), 15-18.

Brand, N., & Jolles, J. (1985). Learning and retrieval rate of words presented auditorily and visually. The Journal of General Psychology, 112(2), 201-210.

Chute, D. L., & Westall, R. F. (1996). Fifth generation research tools: Collaborative development with Powerlaboratory. Behavior Research Methods, Instruments, and Computers, 28, 311-314.

Deutsch, D. (Ed.). (1999). The Psychology of Music (2nd ed.). San Diego: Academic Press.

Familant, M. E., & Detweiler, M. C. (1993). Iconic reference: Evolving perspectives and an organizing framework. International Journal of Man-Machine Studies, 39, 705-728.

Fernald, A. (1989). Intonation and communicative intent in mothers' speech to infants: Is the melody the message? Child Development, 60, 1497-1510.

Garner, W. R., & Felfoldy, G. L. (1970). Integrality of stimulus dimensions in various types of information processing. Cognitive Psychology, 1, 225-241.

Gaver, W. W. (1986). Auditory icons: Using sound in computer interfaces. Human-Computer Interaction, 2, 167-177.

Gaver, W. W. (1989). The SonicFinder: An interface that uses auditory icons. Human-Computer Interaction, 4, 67-94.

Gaver, W. W. (1993a). How do we hear in the world?: Explorations in ecological acoustics. Ecological Psychology, 5(4), 285-313.

Gaver, W. W. (1993b). What in the world do we hear?: An ecological approach to auditory event perception. Ecological Psychology, 5(1), 1-29.

Greene, R. (1988). Stimulus suffix effects in recognition memory. Memory & Cognition, 16(3), 206-209.

Heine, W., & Guski, R. (1991). Listening: The perception of auditory events? Ecological Psychology, 3(3), 263-275.

Hereford, J., & Winn, W. (1994). Non-speech sound in human-computer interaction: A review and design guidelines. Journal of Educational Computing Research, 11(3), 211-233.

Howell, D. C. (1997). Statistical Methods for Psychology (4th ed.). Belmont: Duxbury Press.

Jacko, J. A. (1996). The identifiability of auditory icons for use in educational software for children. Interacting With Computers, 8(2), 121-133.

Jacko, J. A., & Rosenthal, D. J. (1997). Age-related differences in the mapping of auditory icons to visual icons in computer interfaces for children. Perceptual and Motor Skills, 84, 1223-1233.

Jenison, R. L. (1997). On acoustic information for motion. Ecological Psychology, 9(2), 131-151.

Jusczyk, P. W. (1997). The Discovery of Spoken Language. Cambridge: MIT Press.

Lachner, G., Satzger, W., & Engel, R. R. (1994). Verbal memory tests in the differential diagnosis of depression and dementia: Discriminative power of seven test variations. Archives of Clinical Neuropsychology, 9(1), 1-13.

Lass, N. J., Eastham, S. K., Parrish, W. C., Scherbick, K. A., & Ralph, D. M. (1982). Listeners' identification of environmental sounds. Perceptual and Motor Skills, 55, 75-78.

Leiser, R. G., Avons, S. E., & Carr, D. J. (1989). Paralanguage and human-computer interaction. Part 2: comprehension of synthesized vocal segregates. Behaviour & Information Technology, 8(1), 23-32.

Melara, R. D. (1989). Dimensional interaction between color and pitch. Journal of Experimental Psychology: Human Perception and Performance, 15(1), 69-79.

Melara, R. D., & Marks, L. E. (1990). Dimensional interactions in language processing: Investigating directions and levels of crosstalk. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(4), 539-554.

Pansky, A., & Algom, D. (1999). Stroop and Garner effects in comparative judgement of numerals: The role of attention. Journal of Experimental Psychology: Human Perception and Performance, 25(1), 39-58.

Pressing, J. (1997). Some perspectives on performed sound and music in virtual environments. Presence, 6(4), 482-503.

Rasch, R., & Plomp, R. (1999). The perception of musical tones. In D. Deutsch (Ed.), The Psychology of Music (2nd ed., pp. 89-112). San Diego: Academic Press.

Rosenblum, L. D., Wuestefeld, A. P., & Anderson, K. L. (1996). Auditory reachability: An affordance approach to the perception of sound source distance. Ecological Psychology, 8(1), 1-24.

Savage, R., & Gouvier, W. (1992). Rey auditory-verbal learning test: The effects of age and gender, and norms for delayed recall and story recognition trials. Archives of Clinical Neuropsychology, 7(5), 407-414.

Seligman, M. E. P. (1970). On the generality of the laws of learning. Psychological Review, 77(5), 406-418.

Seligman, M. E. P. (1971). Phobias and preparedness. Behavior Therapy, 2, 307-320.

Shavelson, R. J. (1988). Statistical Reasoning for the Behavioral Sciences (2nd ed.). Boston: Allyn & Bacon.

Shepard, R. N. (1984). Ecological constraints on internal representation: Resonant kinematics of perceiving, imagining, thinking, and dreaming. Psychological Review, 91(4), 417-447.

Stoffregen, T. A., & Pittenger, J. B. (1995). Human echolocation as a basic form of perception and action. Ecological Psychology, 7(3), 181-216.

Tighe, T. J., & Dowling, W. J. (1993). Psychology and Music: The Understanding of Melody and Rhythm. Hillsdale: Erlbaum.

Vicente, K. J., & Burns, C. M. (1996). Evidence for direct perception from cognition in the wild. Ecological Psychology, 8(3), 269-280.

Warren, W. H., & Verbrugge, R. R. (1984). Auditory perception of breaking and bouncing events: A case study in ecological acoustics. Journal of Experimental Psychology: Human Perception and Performance, 10(5), 704-712.
