Nonmanual feature
A nonmanual feature, also sometimes called a nonmanual signal or sign language expression, is a feature of signed languages that does not use the hands. Nonmanual features are grammaticised and are a necessary component of many signs, in the same way that manual features are. Nonmanual features serve a function similar to that of intonation in spoken languages.[1]
Purpose
Nonmanual features in signed languages do not function the same way that general body language and facial expressions do in spoken ones. In spoken languages, these can give extra information but are not necessary for the receiver to understand the meaning of an utterance (for example, an autistic person may not use any facial expressions but still get their meaning across clearly, and people with visual impairments may understand spoken utterances without the need for visual aids). Conversely, nonmanual features are needed to understand the full meaning of many signs, and they can drastically change the meaning of individual signs. For example, in ASL the signs HERE and NOT HERE have the same manual sign and are distinguished only by nonmanual features.[2]
Nonmanual features also do not function the same way as gestures (which exist in both spoken and signed languages), as nonmanual features are grammaticised.[3] For this reason, nonmanual features need to be included in signwriting systems.
Form
In sign languages, the hands do the majority of the work, forming phonemes and giving denotational meaning. Extra meaning, however, is created through the use of nonmanual features. Despite the literal meaning of "nonmanual", not every use of a body part other than the hands counts as a nonmanual feature of the language; the term generally refers to information expressed in the upper half of the body, such as the head, eyebrows, eyes, cheeks, and mouth, in various postures or movements.[4]
Nonmanual features have two main aspects: place and setting. These are the nonmanual equivalents of HOLM (handshape, orientation, location, and movement) in manual sign components. Place refers to the part of the body used, while setting refers to the state it is in.[5] For example, the Auslan sign for WHY has nonmanual features necessary to distinguish it from the sign BECAUSE. One of these nonmanual features can be described as having the place of [eyebrows] and the setting of [furrowed].[6]
Although it is produced with the face, mouthing is not always considered a nonmanual feature, as it is not a natural feature of signed languages but is borrowed from the local spoken language(s).[5] Because of this, there is debate as to whether mouthing is a sign language feature or a form of code-switching.[7]
Types
Lexical
Many lexical signs use nonmanual features in addition to the manual articulation. For instance, facial expressions may accompany verbs of emotion, as in the sign for angry in Czech Sign Language.
Nonmanual elements can be lexically contrastive. An example is the ASL sign for NOT YET, which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features the sign would be interpreted as LATE.[8] Mouthings can also be contrastive, as in the manually identical signs for DOCTOR and BATTERY in Sign Language of the Netherlands.[9]
In some languages, a small number of signs are formed entirely by nonmanual features. For example, in Polish Sign Language, a sign is used to express that the user wishes to self-correct or rephrase an utterance, perhaps best translated as I MEAN. The sign is made by closing the eyes and shaking the head.[5] Because it does not use the hands, this sign can be produced simultaneously as the user rephrases their statement.
Intensifiers can be expressed through nonmanual features, as they have the benefit of being expressed at the same time as manual signs. In Auslan, puffed cheeks can be used simultaneously with the manual sign LARGE to translate the sign better as GIGANTIC.
Nonmanual features are also a part of many sign names.[2]
Phrasal
Many grammatical functions are produced nonmanually,[10] including interrogation, negation, relative clauses and topicalisation, and conditional clauses.[11] ASL and BSL use similar nonmanual marking for yes–no questions: raised eyebrows and a forward head tilt,[12][1] which function similarly to the rise in pitch that marks these questions in English.[1]
Nonmanual features are frequently used to grammatically signify role shift, in which the signer switches between two or more individuals they are quoting.[13] For example, in German Sign Language this can be done by using signing space to tie quoted speech to pronouns.[14] It can also be expressed by gaze-shifting or head-shifting.[15]
Adjective phrases can be formed using nonmanual features. For instance, in ASL a slightly open mouth with the tongue relaxed and visible in the corner of the mouth means 'carelessly', but a similar nonmanual in BSL means 'boring' or 'unpleasant'.[16]
Discourse
Discourse functions such as turn taking are largely regulated through head movement and eye gaze. Since the addressee in a signed conversation must be watching the signer, a signer can avoid letting the other person have a turn by not looking at them, or can indicate that the other person may have a turn by making eye contact.[17]
Recognition in academia
In early studies of signed languages by hearing researchers, nonmanual features were largely ignored.[18] In the 1960s, William Stokoe established a system of sign language phonology for American Sign Language and became one of the first researchers to discuss nonmanual features, using diacritics in his writings to signify six different facial expressions based on their meanings in English.[19]
From Stokoe's writings until the 1990s, facial expressions were discussed in some studies of signed languages, and awareness of them as a grammaticised aspect of signed languages began to grow.[3] In the 21st century, discussion of nonmanual features in both research on individual languages and sign language education has become more common, partly due to increased awareness of minimal pairs arising from automatic sign language recognition technology.[20]
References
- ↑ 1.0 1.1 1.2 Rudge, Luke A. (2018-08-03) (in en). Analysing British sign language through the lens of systemic functional linguistics. https://uwe-repository.worktribe.com/output/863200/analysing-british-sign-language-through-the-lens-of-systemic-functional-linguistics.
- ↑ 2.0 2.1 Aran, Oya; Burger, Thomas; Caplier, Alice; Akarun, Lale (2008). "A belief-based sequential fusion approach for fusing manual and non-manual signs" (in en).
- ↑ 3.0 3.1 Reilly, Judy Snitzer; Mcintire, Marina; Bellugi, Ursula (1990). "The acquisition of conditionals in American Sign Language: Grammaticized facial expressions" (in en). Applied Psycholinguistics 11 (4): 369–392. doi:10.1017/S0142716400009632. ISSN 1469-1817. https://www.cambridge.org/core/journals/applied-psycholinguistics/article/abs/acquisition-of-conditionals-in-american-sign-language-grammaticized-facial-expressions/9F21CC624EA4FF3732606F0FCD6A8D9A.
- ↑ Herrmann, Annika (2013), "Nonmanuals in sign languages", Modal and Focus Particles in Sign Languages, A Cross-Linguistic Study (De Gruyter): pp. 33–52, https://www.jstor.org/stable/j.ctvbkk221.10, retrieved 2022-04-02
- ↑ 5.0 5.1 5.2 Tomaszewski, Piotr (2010-01-01), Not by the hands alone: Functions of non-manual features in Polish Sign Language, Matrix, pp. 289–320, ISBN 978-83-932212-0-2, https://www.researchgate.net/publication/266741808, retrieved 2022-04-04
- ↑ "Signbank". https://auslan.org.au/dictionary/words/why-1.html.
- ↑ Bogliotti, Caroline; Isel, Frederic (2021). "Manual and Spoken Cues in French Sign Language's Lexical Access: Evidence From Mouthing in a Sign-Picture Priming Paradigm". Frontiers in Psychology 12: 655168. doi:10.3389/fpsyg.2021.655168. ISSN 1664-1078. PMID 34113290.
- ↑ Liddell, Scott K. (2003). Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
- ↑ SignGram blueprint: A guide to sign language grammar writing. De Gruyter Mouton. 2017. ISBN 9781501511806. OCLC 1012688117.
- ↑ Bross, Fabian; Hole, Daniel. "Scope-taking strategies in German Sign Language". Glossa 2 (1): 1–30. doi:10.5334/gjgl.106.
- ↑ Boudreault, Patrick; Mayberry, Rachel I. (2006). "Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure". Language and Cognitive Processes 21 (5): 608–635. doi:10.1080/01690960500139363.
- ↑ Baker, Charlotte, and Dennis Cokely (1980). American Sign Language: A teacher's resource text on grammar and culture. Silver Spring, MD: T.J. Publishers.
- ↑ Quer, Josep (2018-10-01). "On categorizing types of role shift in Sign languages" (in en). Theoretical Linguistics 44 (3–4): 277–282. doi:10.1515/tl-2018-0020. ISSN 1613-4060. https://www.degruyter.com/document/doi/10.1515/tl-2018-0020/html?lang=en.
- ↑ Buchstaller, Isabelle; Alphen, Ingrid van (2012-05-01) (in en). Quotatives: Cross-linguistic and cross-disciplinary perspectives. John Benjamins Publishing. ISBN 978-90-272-7479-3. https://books.google.com/books?id=Ns7_TvV_NfwC&pg=PA203.
- ↑ "How to use role shifting in American Sign Language". https://www.handspeak.com/learn/index.php?id=16.
- ↑ Sutton-Spence, Rachel, and Bencie Woll (1998). The linguistics of British Sign Language. Cambridge: Cambridge University Press.
- ↑ Baker, Charlotte (1977). Regulators and turn-taking in American Sign Language discourse, in Lynn Friedman, On the other hand: New perspectives on American Sign Language. New York: Academic Press. ISBN:9780122678509
- ↑ Filhol, Michael; Choisier, Annick; Hadjadj, Mohamed (1982-05-31). Non-manual features: the right to indifference. https://www.researchgate.net/publication/263374191.
- ↑ Stokoe, William C. Jr. (2005-01-01). "Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf". The Journal of Deaf Studies and Deaf Education 10 (1): 3–37. doi:10.1093/deafed/eni001. ISSN 1081-4159. PMID 15585746. https://doi.org/10.1093/deafed/eni001.
- ↑ Mukushev, Medet; Sabyrov, Arman; Imashev, Alfarabi; Koishybay, Kenessary; Kimmelman, Vadim; Sandygulova, Anara (2020). "Evaluation of Manual and Non-manual Components for Sign Language Recognition" (in English). Proceedings of the 12th Language Resources and Evaluation Conference (Marseille, France: European Language Resources Association): 6073–6078. ISBN 979-10-95546-34-4. https://aclanthology.org/2020.lrec-1.745.
Original source: https://en.wikipedia.org/wiki/Nonmanual_feature