Embodied language processing

Embodied cognition occurs when an organism's sensorimotor capacities (the body's ability to respond to its senses with movement), body, and environment play an important role in thinking. The way in which a person's body and surroundings interact also allows specific brain functions to develop and, ultimately, shapes the person's capacity to act.[1] This means that not only does the mind influence the body's movements, but the body also influences the abilities of the mind, a claim also termed the bi-directional hypothesis. Three generalizations are commonly assumed to hold for embodied cognition: a person's motor system (which controls movement of the body) is activated when they (1) observe manipulable objects, (2) process action verbs, and (3) observe another individual's movements.[2]

Embodied semantics is one of two theories concerning the location and processing of sensorimotor input within the human brain.[3][4] The theory holds that there are specialized hubs where the meaning of a word is tied to the sensorimotor processing associated with that meaning. For example, the concept of kicking would be represented in the sensorimotor areas that control kicking actions.[5] As a result, the theory assumes that individuals must possess a body in order to understand language.

Neural circuitry

The overlap between various semantic categories and sensorimotor areas suggests that a common neural mechanism underlies action, perception, and semantics. The correlation principle states that neurons that fire together wire together, and that neurons which fire out of sync lose their links. When an individual pronounces a word, the activation pattern in the speaker's articulatory motor systems leads to activation of auditory and somatosensory systems, because the speaker perceives their own sounds and movements.
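
The correlation principle is essentially Hebbian learning, which can be illustrated with a minimal sketch. The Python below uses invented toy values and a deliberately simplified update rule; it illustrates the principle only and is not a model of actual cortical learning.

```python
# Minimal Hebbian sketch of the correlation principle:
# "neurons that fire together, wire together; out of sync, delink."
# Toy values throughout; purely illustrative.

def hebbian_update(weight, pre_active, post_active,
                   growth=0.1, decay=0.05):
    """Strengthen the link when both units fire; weaken it otherwise."""
    if pre_active and post_active:
        return weight + growth        # fire together -> wire together
    return max(0.0, weight - decay)   # out of sync -> delink

# Co-activation of an articulatory unit ("word form") and an auditory
# unit ("self-perceived sound") while a word is repeatedly spoken:
w = 0.0
for articulatory, auditory in [(1, 1), (1, 1), (0, 1), (1, 1), (0, 0)]:
    w = hebbian_update(w, articulatory, auditory)
print(f"final link strength: {w:.2f}")
```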

If a word's meaning is grounded in the visual shapes of objects, the word-form circuit is active together with neural activity in the ventral temporal visual stream, which processes visual object information. Correlation learning links the word and object circuits, resulting in an embodied object-semantic relationship. To study the effect of embodiment on the semantic (meaning) processing of common adjectives and abstract nouns, people contrasted by endurance, tempo, plasticity, emotionality, sex, or age were tested using the Semantic Projective Method.[3][6][4] In these studies, males with stronger motor-physical endurance rated abstractions describing people-, work/reality-, and time-related concepts more positively than males with weaker endurance. Females with stronger social or physical endurance rated social attractors more positively than females with weaker endurance. Both male and female temperament groups with higher sociability showed a universal positive bias in their ratings of social concepts, in comparison to participants with lower sociability.

Semantic hubs

A semantic hub represents a focal point in the brain where all semantic information pertaining to a specific word is integrated. For example, the color, shape, size, smell, and sound associated with the word “cat” would be integrated at the same semantic hub. Some candidate regions for semantic hubs include:

  1. Inferior Frontal Cortex: the anterior part of Broca's area and adjacent tissue in the left inferior frontal cortex, including Brodmann areas 44, 45, and 47, are activated during semantic processing tasks.[7]
  2. Superior Temporal Cortex: contains Wernicke's area, the classic posterior language region in and adjacent to the superior temporal gyrus and sulcus. This area is thought to be a semantic processor on the basis of lesion, perfusion, and imaging data.[7]
  3. Inferior Parietal Cortex: the angular gyrus and the adjacent supramarginal gyrus in the inferior parietal cortex are thought to be most strongly activated during semantic processing of cross-modal spatial and temporal configurations.
  4. Inferior and middle temporal cortex: a general binding site between words and their meanings, located in the left or bilateral medial/inferior temporal cortex.[7]
  5. Anterior temporal cortex: thought to be involved in semantic dementia, a severe and specific semantic deficit associated with lesions to both temporal poles.[7]

Semantic integration appears to involve several of the hub sites listed above, which contradicts the idea that there is a single center where all integration occurs. Each individual hub is nevertheless consistent with an amodal model. Collectively, the hubs provide evidence that there are areas within the brain where emotional, sensory, and motor information converge.
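
The notion of integration at a hub can be pictured as a record that binds modality-specific features of a word into a single entry. The following sketch (in Python, with invented feature names and values) is purely conceptual and makes no claim about how neurons implement such binding.

```python
# Conceptual sketch of a semantic hub: one place where the
# modality-specific features of a word are bound together.
# Feature names and values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SemanticHub:
    word: str
    features: dict = field(default_factory=dict)

    def bind(self, modality, value):
        """Integrate a feature arriving from one modality-specific area."""
        self.features.setdefault(modality, []).append(value)

cat = SemanticHub("cat")
cat.bind("vision", "four-legged shape")
cat.bind("vision", "fur texture and colour")
cat.bind("audition", "meow")
cat.bind("olfaction", "animal smell")
print(cat.features)
```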

Semantic category specificity

Each potential semantic hub is activated to a different degree depending on the category to which the perceived word belongs. Lesions to each of the five potential hubs do not affect all words equally; instead, experimental data show that some semantic categories suffer more than others, depending on the hub involved.

  1. Left Inferior Frontal Cortex and bilateral frontocentral motor systems: these two areas are strongly activated in response to action-related words or phrases. Lesions to these areas produce impairments in the processing of action-related words and concepts.[8]
  2. Bilateral Superior Temporal Cortex: this area is strongly activated in response to words related to sounds. Lesions to this area produce impairments in sound-word processing.[8]
  3. Left Inferior Parietal Cortex: especially near the supramarginal gyrus, this area is activated by spatial language. Lesions to the inferior parietal cortex produce impairments involving spatial language, such as prepositions.[8]
  4. Medial/Inferior Temporal Cortex: this area is strongly activated by category-specific words for animals, tools, persons' names, colors, and forms. Lesions produce impairments in words for these same categories.[8]
  5. Anterior Temporal Cortex: this area is associated with processing the differences between semantic categories.[8]

Some of the category differences are thought to arise from hubs lying adjacent to modality-specific cortices. For example, category specificity is greatest close to the piriform and anterior insular olfactory cortex, where odor words such as “cinnamon” lead to greater activation than control words. In the gustatory cortex in the anterior insula and frontal operculum, taste words such as “sugar” likewise produce stronger activation.

Experiential trace hypothesis

The experiential trace hypothesis states that each time an individual interacts with the world, traces of that particular experience are left in the brain.[9] These traces can be accessed again when a person thinks of words or sentences that remind them of that experience. Additionally, these traces are linked to the actions to which they are related.[9]

Words and sentences thus become cues that retrieve these traces. Researchers have studied whether previous experience with a word, such as its typical location in space (up or down), affects how people understand and then respond to that word.[10] In one experiment, researchers hypothesized that if reading an object word also activates the location linked to that noun, then a subsequent action response should be faster when it is compatible with that association.[10] They found that, for words associated with "up" or "above", participants were faster to press the upper of two buttons than the lower one.

The results showed that participants responded faster when the location associated with the word matched the location of the action they had to perform, demonstrating that language processing and action are connected. The research also found that a word's location information is activated automatically upon seeing the word.[10] In a similar study, participants were equally fast at responding to words associated with an upward or a downward location when the response buttons were arranged horizontally, meaning that the experiential trace effect disappeared when the responding action did not map onto either of the activated locations.[11]
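
The comparison underlying such reaction-time studies can be summarised in a short analysis sketch. The Python below uses fabricated trials with invented response times and simply contrasts mean response times for location-compatible and location-incompatible trials; it illustrates the analysis logic, not the published procedure.

```python
# Sketch of the compatibility analysis used in such studies.
# Trials are fabricated for illustration:
# (word_location, button_location, reaction_time_ms)
from statistics import mean

trials = [
    ("up",   "up",   512), ("up",   "down", 575),
    ("down", "down", 498), ("down", "up",   569),
    ("up",   "up",   530), ("down", "up",   590),
]

compatible   = [rt for word, button, rt in trials if word == button]
incompatible = [rt for word, button, rt in trials if word != button]

print(f"compatible:   {mean(compatible):.0f} ms")
print(f"incompatible: {mean(incompatible):.0f} ms")
# A compatibility effect shows up as faster (smaller) compatible RTs.
```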

Experiential-simulation theory of language understanding

Some theorists have proposed an experiential-simulation approach to language understanding. They argue that experiential traces related to a word may be reactivated later, when the meaning of the same word is accessed. This has been illustrated with the example of encountering the word ‘airplane’ in a situation where someone points to an airplane in the sky, making one look upwards. The experiential trace ‘looking upwards’ is later reactivated when the meaning of the word ‘airplane’ is accessed. Similarly, when a person accesses the meaning of the word ‘snail’, they might also access experiential traces associated with that word, e.g. ‘looking downwards’ (likely towards the ground).[12]

Language comprehension and motor systems involved in action

Concrete verbs

As a result of previous experience with certain words, several studies have found that the action associated with a word is also activated in the motor cortices when that word is processed. For example, using event-related functional magnetic resonance imaging (fMRI), it was found that reading concrete action verbs referring to face, arm, or leg actions (e.g., to lick, pick, kick) activated the motor regions that are engaged when actually making actions with the mouth, hand, or foot.[13]

Abstract verbs

However, findings are less clear-cut for abstract verbs. Embodied theories of language comprehension assume that abstract concepts, as well as concrete ones, are grounded in the sensorimotor system.[4][14] Some studies have therefore compared motor cortex activation during comprehension of literal action verbs (concrete) and metaphorical uses of the same action verbs (abstract). One such study used fMRI to scan participants while they viewed actions performed by the mouth, hand, or foot, and read literal and metaphorical sentences related to the mouth, hand, or foot. It found activation in the premotor cortex for literal action (e.g. “grasping the scissors”) but not for metaphorical usage (e.g. “grasping the idea”).[8] These findings suggest that the assumption that abstract concepts, like concrete ones, are grounded in the sensorimotor system may not hold.

In contrast, other research has found motor cortex activation for the metaphorical use of action verbs. One study investigated cortical activation during comprehension of literal and idiomatic sentences using magnetoencephalography (MEG). During a silent reading task, participants were presented with literal and metaphorical arm-related action verbs, e.g. “Mary caught the fish” versus “Mary caught the sun”, and literal and metaphorical leg-related action verbs, e.g. “Pablo jumped on the armchair” versus “Pablo jumped on the bandwagon”. Processing of the abstract verbs (here, idioms) did activate motor regions of the brain, and elicited anterior fronto-temporal activity earlier than literal verbs did.[7]

Embodied semantics using fMRI for concrete and abstract words

Hauk and colleagues found that reading words associated with foot, hand, or mouth actions (e.g., kick, pick, lick) activated motor areas adjacent to, or overlapping with, the areas activated by making actions with the foot, hand, or mouth.[13] Additionally, Tettamanti and colleagues found that listening to action-related sentences activated the premotor cortex in a somatotopic fashion: leg-related sentences produced premotor activity dorsal to that for hand-related sentences, which in turn was dorsal to that for mouth-related sentences.[5]

Aziz-Zadeh and colleagues localized foot, hand, and mouth premotor regions of interest in each subject by having subjects watch actions associated with each effector and read phrases associated with the foot, hand, and mouth. In each subject, the regions most activated when watching a foot action were also the most active for language about foot actions, and the same was true for the hand and mouth. Rizzolatti and colleagues have suggested that the action plan (manipulating, reaching) is more important than the actual effector involved.[8]

Other studies have investigated activation of the motor system during comprehension of concrete and abstract sentences. Using transcranial magnetic stimulation (TMS) together with a behavioural paradigm, one study investigated whether listening to action-related sentences modulated activity within the motor cortices. Motor evoked potentials (MEPs) were recorded from hand muscles when stimulating the hand motor area, and from foot and leg muscles when stimulating the foot motor area, while participants listened to sentences describing hand or foot actions. As a control, participants listened to sentences with abstract content. The study found that listening to sentences expressing foot/leg and hand/arm actions did modulate the motor cortices, specifically in the areas of the motor system ‘where the effector involved in the processed sentence is motorically represented’ (p. 360). In particular, listening to hand-action-related sentences prompted a decrease of MEP amplitude recorded from hand muscles, and listening to foot-action-related sentences prompted a decrease of MEP amplitude recorded from foot muscles.[15]
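
The MEP comparison in this paradigm reduces to contrasting amplitudes across sentence conditions. The sketch below (Python, with invented amplitude values) illustrates that contrast for a hand muscle; the reported effect corresponds to a negative difference, i.e. lower amplitudes for effector-matching sentences. It is an illustration of the comparison, not the study's actual analysis.

```python
# Sketch of the MEP comparison: hand-muscle MEP amplitude (mV)
# while listening to hand-action vs abstract-content sentences.
# All values are invented for illustration only.
from statistics import mean

mep_hand_sentences     = [1.1, 0.9, 1.0, 0.8, 1.0]   # effector-matching
mep_abstract_sentences = [1.4, 1.3, 1.5, 1.2, 1.4]   # control

diff = mean(mep_hand_sentences) - mean(mep_abstract_sentences)
print(f"mean difference: {diff:+.2f} mV")
# The reported effect was a *decrease* in MEP amplitude for the
# effector-matching sentences, i.e. a negative difference here.
```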

Action-sentence compatibility effect (ACE)

Sentence processing can facilitate the activation of motor systems corresponding to the actions a sentence describes. In one study, researchers asked participants to judge whether a sentence was sensible, for example, "You handed Courtney the notebook" versus "Courtney handed you the notebook". In one condition, participants pushed a button farther from their body if the sentence was sensible and a button closer to their body if it was not. Participants were faster to push the "sentence is sensible" button when the action in the sentence matched the movement required to push that button.[16] That is, if the sentence read "You handed Courtney the notebook" (an action directed away from the body), participants were faster when the sensible-response button was the one farther away from them. The motion depicted in a sentence thus affected the time required to understand sentences describing motion in the same direction. This effect has been shown to apply to sentences describing concrete actions (putting a book on a shelf) as well as more abstract transfers (you told the story to the policeman).[17]
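
The ACE can be quantified as a difference score between mismatching and matching trials. The following sketch (Python, with invented trial data) classifies each trial by whether the direction implied by the sentence matches the required response direction and computes that score; it illustrates the measure, not the original study's analysis.

```python
# Sketch of an ACE computation. Trials are fabricated for
# illustration: (implied_direction, response_direction, rt_ms)
from statistics import mean

trials = [
    ("away",   "away",   610), ("away",   "toward", 668),
    ("toward", "toward", 595), ("toward", "away",   671),
    ("away",   "away",   620), ("toward", "away",   655),
]

match    = [rt for sent, resp, rt in trials if sent == resp]
mismatch = [rt for sent, resp, rt in trials if sent != resp]

ace = mean(mismatch) - mean(match)
print(f"ACE: {ace:.0f} ms (positive = matching responses are faster)")
```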

Other studies have tried to understand the ACE phenomenon by examining the modulation of motor resonance during language comprehension. In one study, participants read sentences presented in frames of one to three words. Participants rotated a knob, in one direction for half of the experiment and in the other direction for the other half; each 5° of rotation triggered the presentation of a new frame. Each sentence described an action involving manual rotation, whose direction either matched or did not match the direction of the knob rotation. Earlier studies, such as Glenberg and Kaschak (2002), examined motor resonance in responses given after the sentence was read. In contrast, this study found that motor resonance arose on the verb and had dissipated before the end of the sentence. The study used comprehension questions rather than sensibility judgments, which the researchers argue creates a more naturalistic reading situation, so its results arguably reflect more naturalistic language processing. Overall, the researchers concluded that motor resonance is immediate and short-lived, and that the duration of the effect is modulated by linguistic context.[18]

Neurophysiological evidence has also been presented for an ACE. One study combined a behavioural paradigm with event-related potentials (ERPs) to record brain activity, allowing the researchers to explore the neural markers of the ACE in semantic processing and motor responses. ERPs were particularly useful for investigating the bi-directional hypothesis of action-sentence comprehension, which proposes that language processing facilitates movement and that movement likewise facilitates language comprehension. Participants listened to sentences describing an action that involved an open hand, a closed hand, or no manual action, and then pressed a button to indicate their understanding of the sentence. Each participant was assigned a hand-shape, either closed or open, required to activate the button, yielding three compatibility conditions between the assigned hand-shape and the hand-shape implied by the sentence: compatible, incompatible, and neutral. Behavioural results showed that participants responded faster when the hand-shape required to press the response button was compatible with the hand-shape implied by the sentence. ERP results supported the bi-directional hypothesis: cortical markers of motor processes were affected by sentence meaning, providing evidence for a semantics-to-motor effect, and brain markers of comprehension were in turn modified by motor effects, demonstrating a motor-to-semantics effect.[19]

The action-sentence compatibility effect also implies that the brain resources used to plan and carry out actions are used in language comprehension as well; if an action implied in a sentence differs from the required response, the two compete for these resources and interference results.[17]

Word activation

Other studies have demonstrated that reading an object's name interferes with how a person plans to grasp that object.[20] It has also been found that related words can prime related actions: playing the piano and using a typewriter involve similar motor actions, and the corresponding words prime each other in a lexical decision task.[20] These studies conclude that motor activation occurs automatically upon exposure to action-related words.[20]

Metaphorical language

Aziz-Zadeh investigated whether metaphorical sentences show congruent somatotopic organization of semantic representations in either hemisphere. Subjects read stimuli such as “kick the bucket” or “bite the bullet” and then watched videos of hand, foot, and mouth actions. No evidence supporting this theory was found in either hemisphere.[8]

The metaphors used in the experiment are, however, common in the English language. The argument therefore stands that a metaphor heard often enough may no longer activate the same processing network that it did initially.

Actions emphasize meaning

Many studies have shown how body movements and speech combine to emphasize meaning (often called gesturing). A listener can use a speaker's actions to help comprehend what the speaker is saying.[21] For example, repeated pointing signals to the listener that the direction being indicated is important, whereas a casual point in a general direction suggests that the object's exact location matters less for understanding the speaker. Another example is the stomping of one's foot, which can help the listener grasp the anger or frustration being conveyed.[citation needed]

Implications

Many studies have demonstrated that people's understanding of words and sentences can influence their movements and actions, and conversely that people's actions can influence how quickly they comprehend a word or sentence.[22] This knowledge is important for many reasons. One study looked at the impact of embodied cognition in a classroom setting as a way to facilitate and enhance language learning. For a child, there is a difference between oral language learning and reading. In oral language learning, the mapping between a symbol (word) and the object is common, often brought about by gesturing to the object.[23] When a child is learning to read, however, they focus on letter-sound combinations and the correct pronunciation of words. The object a word refers to is usually not immediately present, so an association between word and object is not immediately made.[23] The researchers of this study proposed the Moved by Reading intervention, which consists of two stages: physical manipulation and imagined manipulation.[23] In physical manipulation, the child reads a sentence and is then instructed to act it out with available toys,[23] forcing the child to connect words with objects and their actions. In the imagined manipulation stage, the child reads the sentence and is asked to imagine interacting with the toys to act it out.[23] Further study showed that children can still benefit from the effects of embodied cognition when they manipulate objects on a computer screen.[23] Such embodied cognition software can help children's language comprehension.[citation needed] Other implications for language instruction that could enhance acquisition and retention include offering activities that invite learners to actively use their bodies in the process, or at least to observe the teacher doing so, thereby activating their mirror neurons.[22]

References

  1. Cowart, M. (2005, July 8). Embodied cognition. Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/embodcog/
  2. Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal. Journal of Physiology - Paris, 102, 59–70.
  3. Trofimova, I. N. (1999). How people of different age, sex and temperament estimate the world. Psychological Reports, 85(2), 533–552. doi:10.2466/pr0.1999.85.2.533. PMID 10611787.
  4. Trofimova, I. N. (2014). Observer bias: an interaction of temperament traits with biases in the semantic perception of lexical material. PLOS ONE, 9(1), e85677. doi:10.1371/journal.pone.0085677. PMID 24475048. Bibcode: 2014PLoSO...985677T.
  5. Rueschemeyer, S. et al. (2010). Effects of intentional motor actions on embodied language processing. Experimental Psychology, 57(4), 260–266. doi:10.1027/1618-3169/a000031
  6. Trofimova, I. N. (2012). Understanding misunderstanding: a study of sex differences in meaning attribution. Psychological Research, 77(6), 748–760. doi:10.1007/s00426-012-0462-8
  7. Boulenger, V., Shtyrov, Y., & Pulvermüller, F. (2012). When do you grasp the idea? MEG evidence for instantaneous idiom understanding. NeuroImage, 59(4), 3502–3513.
  8. Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Iacoboni, M. (2006). Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Current Biology, 16(18), 1818–1823.
  9. Pecher, D., & Zwaan, R. A. (Eds.) (2005). Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking. Cambridge, UK: Cambridge University Press.
  10. Lachmair, M. et al. (2011). Root versus roof: automatic activation of location information during word processing. Psychonomic Bulletin & Review, 18, 1180–1188. doi:10.3758/s13423-011-0158-x
  11. Zwaan, R. A. (2002). Language comprehenders mentally represent the shape of objects. Psychological Science, 13(2), 168–171.
  12. Zwaan, R. A., & Madden, C. J. (2005). Embodied sentence comprehension. In Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking (pp. 224–245).
  13. Hauk, O., Johnsrude, I., & Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41(2), 301–307.
  14. Jirak, D., Menz, M. M., Buccino, G., Borghi, A. M., & Binkofski, F. (2010). Grasping language – a short story on embodiment. Consciousness and Cognition, 19(3), 711–720.
  15. Buccino, G., Riggio, L., Melli, G., Binkofski, F., Gallese, V., & Rizzolatti, G. (2005). Listening to action-related sentences modulates the activity of the motor system: a combined TMS and behavioral study. Cognitive Brain Research, 24(3), 355–363.
  16. Havas, D. A. et al. (2007). Emotion simulation during language comprehension. Psychonomic Bulletin & Review, 14(3), 436–441.
  17. Glenberg, A. M., & Kaschak, M. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558–565.
  18. Zwaan, R. A., & Taylor, L. J. (2006). Seeing, acting, understanding: motor resonance in language comprehension. Journal of Experimental Psychology: General, 135(1), 1–11.
  19. Aravena, P., Hurtado, E., Riveros, R., Cardona, J. F., Manes, F., & Ibáñez, A. (2010). Applauding with closed hands: neural signature of action-sentence compatibility effects. PLoS ONE, 5(7), e11751.
  20. Fischer, M., & Zwaan, R. (2008). Embodied language: a review of the role of the motor system in language comprehension. The Quarterly Journal of Experimental Psychology, 61(6), 825–850. doi:10.1080/17470210701623605
  21. Gibbs, R. W. (2006). Language and communication. In Embodiment and Cognitive Science (pp. 158–207).
  22. (2014). Storytelling in language teaching – re-evaluating the weight of kinaesthetic modality for brain-compatible pedagogy. Storytelling, 1(2), 13–52.
  23. Glenberg, A. M., & Goldberg, A. (2011). Improving early reading comprehension using embodied CAI. Instructional Science, 39, 27–39. doi:10.1007/s11251-009-9096-7