What do we see in each other?: How the perception of movement drives social interaction
Human perceptual systems are extremely sensitive to the actions and movements of other living things. The information derived from perceiving actions and movements underlies our social interactions, allowing us to infer the intentions of others and to anticipate future actions from even subtle cues. The links between automatic components of our movements (those occurring below awareness) and social judgments are important, but not well understood. We ask whether the stochastic signatures of our own movements, sensed kinesthetically, can be mapped onto those sensed visually when observing the motions of others. An interdisciplinary team of IGERT students recorded the spatiotemporal properties of the movements of participants performing complex motor tasks (tennis serves or martial arts routines). Stochastic signatures of movement variability were estimated for both deliberate and automatic movement segments using maximum likelihood estimation. Two distinct movement classes emerged, each well fit (at 95% confidence) by a Gamma distribution; the estimated shape and scale parameters yielded a motion "fingerprint" that uniquely localized each individual in the Gamma shape-scale plane. These empirically determined stochastic signatures were then used to endow an animated avatar performing the routines with the natural movement statistics of our participants. The motions of the avatar were also distorted using the stochastic signatures of autistic individuals. We hypothesized that identification of one's own movements would be more accurate than identification of the movements of others, regardless of the presence of distortions. Findings and implications for normal and compromised systems in social interactions are discussed.
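The Gamma-fitting step described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline: it assumes speed fluctuations as the measured quantity and uses a standard closed-form approximation to the Gamma maximum-likelihood estimates in place of a full numerical MLE.

```python
import numpy as np

# Hypothetical sketch: estimate a "stochastic signature" by fitting a
# Gamma distribution to movement fluctuations via maximum likelihood.
# The data here are synthetic stand-ins, drawn from Gamma(2.5, 0.8).
rng = np.random.default_rng(0)
speeds = rng.gamma(shape=2.5, scale=0.8, size=5000)

# Closed-form approximation to the Gamma MLE: solve
#   log(k) - digamma(k) = s,  where s = log(mean(x)) - mean(log(x)),
# using the well-known rational approximation for k.
s = np.log(speeds.mean()) - np.log(speeds).mean()
shape = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
scale = speeds.mean() / shape

# The fitted (shape, scale) pair places this participant at a point in
# the Gamma shape-scale plane -- the individual motion "fingerprint".
print(f"shape = {shape:.2f}, scale = {scale:.2f}")
```

With enough samples the estimates land close to the generating parameters, so each participant's repeated recordings cluster around one point in the shape-scale plane, which is what makes the "fingerprint" interpretation workable.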