Thinking-for-speaking in motion event descriptions as revealed by speech-accompanying gestures

Sotaro Kita

Dept. of Experimental Psychology, University of Bristol, Bristol BS8 1TH, UK


This talk concerns the relationship between speech and the gestures that spontaneously accompany it. The basic message is that spontaneous gestures can serve as a window into the spatial representations that speakers generate on-line at the moment of speaking. By observing such gestures, we can gain insight into how speakers organize spatial information in preparation for speaking, namely, the speaker's thinking-for-speaking. The talk focuses on how people express motion events with speech and gestures. I will discuss three studies that shed light on exactly how the speech and gesture production processes are related to each other such that gesture reflects the speaker's thinking-for-speaking, and on how early in a child's development the relationship between the two processes is established. In these studies, spontaneous gestures were elicited by having adult and child participants narrate animated cartoons. The first study showed that adult speakers of Turkish, Japanese, and English gesturally expressed the same motion events differently and, furthermore, that these gestural differences mirrored lexical and syntactic differences between the languages. The second study demonstrated that this linguistic effect on gesture can be observed even within a single language. Syntactically different descriptions of Manner and Path in motion events were elicited from English-speaking adults (e.g., in an event in which an entity rolls down a slope, the rolling is the Manner and the downward direction is the Path). Depending on how Manner and Path were syntactically related in a given utterance, the gestural expression of Manner and Path varied in such a way that the linguistic and gestural representations had a similar structure. From these two studies, it is concluded that speech-accompanying gestures are generated from an interface representation between spatial cognition and speaking: an imagistic representation of an event that is adjusted on-line to be more compatible with the demands of speech formulation processes. Finally, the third study explored how early in a child's development such interface representations emerge. Three-year-old English-speaking children exhibited the same gestural sensitivity to the syntactic packaging of Manner and Path as the adults in the second study. It was therefore concluded that children at the age of three can already adjust their imagistic representations of events for the purpose of speaking.