Classifier constructions in sign languages
In sign languages, the term classifier construction (also known as classifier predicate) refers to a morphological system that can express events and states.[1] Classifier constructions use handshape classifiers to represent movement, location, and shape. They differ from lexical signs in their morphology: a sign consists of a single morpheme, composed of three meaningless phonological features (handshape, location, and movement), whereas a classifier construction consists of many morphemes, in which the handshape, location, and movement are each meaningful on their own.[2] The handshape represents an entity, and the hand's movement iconically represents the movement of that entity. The relative location of multiple entities can be represented iconically in two-handed constructions.
Classifiers share some limited similarities with the gestures of hearing non-signers. Because classifier constructions are often iconic (non-arbitrary), people who do not know the sign language can often guess their meaning.[3] Many unrelated sign languages have also been found to use similar handshapes for specific entities. Children master these constructions around the age of eight or nine.[4] Two-handed classifier constructions show a figure-ground relationship: the first classifier represents the background, whereas the second represents the entity in focus. The right hemisphere of the brain is involved in using classifiers. They may also be used creatively for story-telling and poetic purposes.
Frishberg coined the word "classifier" in this context in her 1975 paper on American Sign Language. Various connections have been made to classifiers in spoken languages, and linguists have since debated how best to analyze these constructions. Analyses differ in how much they rely on morphology to explain them. Some have questioned their linguistic status, as well as the very use of the term "classifier".[5] Not much is known yet about their syntax or phonology.
Description
In classifier constructions, the handshape is the classifier representing an entity, such as a horse.[6] The signer can represent its movement and/or speed in an iconic fashion; that is, the meaning of the movement can be guessed from its form.[6][7] A horse jumping over a fence may be represented by having the stationary hand be the fence and the moving hand be the horse.[8] However, not all combinations of handshape and movement are possible.[6] Classifier constructions act as verbs.[9]
The handshape, movement, and relative location in these constructions are each meaningful on their own.[2] This contrasts with two-handed lexical signs, in which the two hands do not contribute to the meaning of the sign on their own.[10] The handshapes in a two-handed classifier construction are signed in a specific order if they represent an entity's location: the first usually represents the unmoving ground (for example, a surface), and the second represents the smaller figure in focus (for example, a person walking).[11][12][13] While the handshape is usually determined by the visual aspects of the entity in question,[14] other factors play a role: the way in which an agent interacts with the entity[15] or the entity's movement[16] can also affect the choice of handshape. Classifiers also often co-occur with verbs.[13] Not much is known yet about their syntax[17] or phonology.[18]
Classifier constructions are produced from the perspective of the signer, so the addressee must mentally flip the construction horizontally to understand it correctly. For example, if the addressee sees the signer place an object on the right from the addressee's point of view, the addressee must mentally flip the scene to understand that the object was actually placed on the left. Native signers seem to do this automatically.[19]
Two-handed lexical signs are limited in form by two constraints. The Dominance Condition states that the non-dominant hand cannot move and that its handshape comes from a restricted set. The Symmetry Condition states that both hands must have the same handshape, movement and orientation.[20] Classifier constructions, on the other hand, can break both of these restrictions. This further exemplifies the difference in phonology and morphology between lexical signs and classifiers.[21]
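The sketch below (Python; purely illustrative) restates the two conditions as simple checks and shows how a two-handed classifier construction, such as the horse-and-fence example above, need not pass them. The handshape labels and the "restricted set" are placeholder assumptions, not a phonological inventory drawn from the sources.

```python
# Illustrative sketch only: simplified statements of the two conditions described
# above. The handshape labels and the restricted set are placeholders, not an
# actual phonological inventory.

RESTRICTED_HANDSHAPES = {"B", "A", "S", "C", "O", "1", "5"}  # assumed unmarked set

def satisfies_symmetry(dominant, nondominant):
    """Symmetry Condition: both hands share handshape, movement, and orientation."""
    return all(dominant[f] == nondominant[f]
               for f in ("handshape", "movement", "orientation"))

def satisfies_dominance(nondominant):
    """Dominance Condition: the non-dominant hand does not move and uses a
    handshape from the restricted set."""
    return (nondominant["movement"] == "none"
            and nondominant["handshape"] in RESTRICTED_HANDSHAPES)

# A classifier construction such as "horse jumps over fence": the hands have
# different handshapes, and the dominant hand moves along its own path.
horse = {"handshape": "V-bent", "movement": "arc", "orientation": "palm-down"}
fence = {"handshape": "B", "movement": "none", "orientation": "palm-side"}

print(satisfies_symmetry(horse, fence))  # False: handshapes and movements differ
print(satisfies_dominance(fence))        # holds here, but classifier constructions need not obey it
```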
Unlike spoken languages, sign languages have two articulators that can move independently.[22] The more active hand is termed the dominant hand, whereas the less active hand is non-dominant.[23] The dominant hand in signing is usually the signer's dominant hand, although it is possible to switch the hands' roles.[24] The two hands allow signers to represent two entities at the same time, although with some limitations. For example, a woman walking past a zigzagging car cannot be signed simultaneously, because two simultaneous constructions cannot have differing movements; the two events would have to be signed sequentially.[22]
Argument structure
Classifier constructions may show agreement with various arguments in their domain. For example, the handshape can agree with the direct object, with a "thin object" handshape used for flowers and a "round object" handshape for apples. Agreement between subject and indirect object is marked with a path movement from the former to the latter. This manner of marking agreement is shared with some lexical signs.[25]
There are also correlations in American Sign Language (ASL) between specific types of classifier constructions and the kind of argument structure they have, as summarized in the sketch after the list below:[26]
- Predicates with a handling classifier are transitive (with an external and an internal argument)
- Predicates with a whole entity classifier are intransitive unaccusative (one single internal argument)
- Predicates with a body part classifier are intransitive unergative (one single external argument)
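These correlations can be stated as a simple lookup. The sketch below (Python, illustrative only) uses the type labels from the list above; the dictionary and function name are hypothetical conveniences rather than an established formalism.

```python
# Illustrative summary of the ASL correlations listed above. The data structure
# and function are hypothetical conveniences, not an established formalism.

ARGUMENT_STRUCTURE = {
    "handling":     {"transitivity": "transitive",               "arguments": ["external", "internal"]},
    "whole_entity": {"transitivity": "intransitive unaccusative", "arguments": ["internal"]},
    "body_part":    {"transitivity": "intransitive unergative",   "arguments": ["external"]},
}

def expected_argument_structure(classifier_type):
    """Return the argument structure associated with a classifier type in ASL."""
    return ARGUMENT_STRUCTURE[classifier_type]

print(expected_argument_structure("whole_entity"))
# {'transitivity': 'intransitive unaccusative', 'arguments': ['internal']}
```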
Classification
There have been many attempts at classifying the types of classifiers. The number of proposed types has ranged from two to seven.[27] Overlap in terminology across the classification systems can cause confusion.[28] In 1993, Engberg-Pedersen grouped the handshapes used in classifier constructions into four categories:[29][30]
- Whole entity classifiers: The handshape represents an object. It can also represent a non-physical concept, such as culture.[31] The same object may be represented by multiple handshapes to focus on different aspects of the concept. For example, a CD may be represented by a flat palm or by a rounded C-hand.[32]
- Extension and surface classifiers: The handshape represents the depth or width of an entity. For example, a thin wire, a narrow board or the wide surface of a car's roof. These are not always considered to be classifiers in more recent analyses.[33]
- Handling/instrument classifiers: The handshape represents the hands handling an entity or instrument, such as a knife. They resemble whole entity classifiers, but they semantically imply an agent handling the entity. Just as with whole entity classifiers, the entity in handling classifiers does not have to be a physical object.[34]
- Limb classifiers: The handshape represents limbs such as legs, feet or paws. Unlike other classifier types, these cannot be combined with motion or location morphemes.[28]
The movements used in classifier constructions are grouped similarly:[29][30]
- Location morphemes:[6] Movement represents the location of an entity through a short, downward movement. The entity's orientation can be represented by shifting the hand's orientation.
- Motion morphemes: Movement represents the entity's movement along a path.
- Manner morphemes: Movement represents the manner of motion, but not the path.
- Extension morphemes: Movement does not represent actual motion, but the outline of the entity's shape or perimeter. It can also represent the configuration of multiple similar entities, such as a line of books.
Whole entity classifiers and handling classifiers are the most established classifier types.[33] The former occur with intransitive verbs, the latter with transitive verbs.[35] Most linguists do not consider extension and surface classifiers to be true classifiers.[33] This is because they appear in a larger range of syntactic positions, cannot be referred back to anaphorically in the discourse, and cannot be combined with motion verbs.[33]
Certain types of classifiers and movements cannot be combined for grammatical reasons. For example, in ASL, manner of motion cannot be combined with limb classifiers: to indicate a person limping in a circle, one must first sign the manner of motion (limping) and then the limb classifier (the legs).[36]
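As a rough illustration of such combinatorial restrictions, the sketch below (Python) encodes only the restrictions mentioned in this section; the rule table is an assumption for illustration, not a full grammar of any sign language.

```python
# Illustrative only: encodes just the restrictions described above (limb
# classifiers cannot combine with motion or location morphemes, and in ASL
# manner of motion cannot combine with limb classifiers). Not a full grammar.

DISALLOWED_COMBINATIONS = {
    ("limb", "motion"),
    ("limb", "location"),
    ("limb", "manner"),  # ASL: "limping in a circle" must be signed sequentially
}

def can_combine(classifier_type, movement_morpheme):
    """Check whether a classifier type may combine with a movement morpheme."""
    return (classifier_type, movement_morpheme) not in DISALLOWED_COMBINATIONS

print(can_combine("whole_entity", "motion"))  # True
print(can_combine("limb", "manner"))          # False: manner is signed first, then the limb classifier
```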
There is little research on the differences in classifier constructions across sign languages.[37] Most sign languages seem to have them, and they can be described in similar terms.[37] Many unrelated languages encode the same entity with similar handshapes.[38] This is even the case for children not exposed to language who use a home sign system to communicate.[38] Handling classifiers, along with extension and surface classifiers, are especially likely to be the same across languages.[38]
Relation to gestures
Gestures are manual structures that are not as conventionalized as linguistic signs.[39] Hearing non-signers use forms similar to classifiers when asked to communicate through gesture. There is a 70% overlap in how signers and non-signers use movement and location, but only a 25% overlap for handshapes: non-signers use a greater number of handshapes, but signers' handshapes are phonologically more complex.[40] Non-signers also do not constrain their gestures to a morphological system the way sign language users do.[38]
Lexicalization
Certain classifier constructions may, over time, lose their general meaning and become fully-fledged signs; this process is referred to as lexicalization.[41][42] The resulting signs are known as frozen signs.[43] For example, the ASL sign FALL appears to have originated from a classifier construction in which a V-shaped hand, representing the legs, moves downward. As it became more sign-like, it could also be used with inanimate referents, like apples or boxes. As a sign, the former classifier construction now conforms to the usual constraints of a word, such as consisting of one syllable.[44] The resulting sign need not be a simple sum of its combined parts, but can have an entirely different meaning.[45] Frozen signs may serve as root morphemes that act as the base for aspectual and derivational affixes; classifiers cannot take these types of affixes.[46]
History
Sign languages did not begin to be studied seriously until the 1960s.[47] Initially, classifier constructions were not regarded as full linguistic systems[8][48] because of their high degree of apparent variability and iconicity.[48] Consequently, early analyses described them in terms of visual imagery.[37] Linguists subsequently focused on proving that sign languages were real languages, paying less attention to their iconic properties and more to the way they are organized.[47]
Frishberg was the first[49][50] to use the term "classifier" in her 1975 paper on arbitrariness and iconicity in ASL to refer to the handshape unit used in classifier constructions.[51]
The start of the study of sign language classifiers coincided with a renewed interest in spoken language classifiers.[52] In 1977, Allan performed a survey of classifier systems in spoken languages and compared classifier constructions to the "predicate classifiers" used in the Athabaskan languages,[53] a family of indigenous languages spoken in North America.[54] Reasons for comparing them included standardizing terminology and showing that sign languages are similar to spoken languages.[55] Allan described predicate classifiers as separate verbal morphemes that denote some salient aspect of the associated noun.[53] However, Schembri pointed out the "terminological confusion" surrounding classifiers,[56] and Allan's description and comparison came to draw criticism. Later analyses showed that these predicate classifiers did not constitute separate morphemes and were better described as classificatory verb stems rather than classifiers.[57][58][59]
In 1982, Supalla showed that classifier constructions were part of a complex morphological system in ASL.[60][61][48] He split the classifier handshapes into two main categories: semantic classifiers (also called "entity classifiers") and size and shape specifiers (SASSes).[62] SASSes use handshapes to describe the visual properties of an entity, whereas entity classifiers are less iconic: they refer to a general semantic class of objects such as "thin and straight" or "flat and round".[63] Handling classifiers, which imitate the hand holding or handling an instrument, were the third type of classifier to be described.[63] A fourth type, the body-part classifier, represents human or animal body parts, usually the limbs.[64] Linguists adopted and modified Supalla's morphological analysis for other sign languages.[28]
In the 1990s, there was renewed interest in the relation between sign languages and gesture.[47] Some linguists, such as Liddell (2000), called the linguistic status of classifier constructions into question, especially that of their location and movement components.[65] There were two reasons for this. First, the imitative gestures of non-signers are similar to classifiers.[47] Second, a very large number of movements and locations can be used in these constructions. Liddell suggested that it would be more accurate to consider them a mixture of linguistic and extra-linguistic elements, such as gesture.[66][67][68] Schembri and colleagues similarly suggested in 2005 that classifier constructions are "blends of linguistic and gestural elements".[69] Despite this high degree of variability, Schembri and colleagues argue that classifier constructions are still grammatically constrained in various ways; for example, they are more abstract and categorical than the gestural forms made by non-signers.[38] It is now generally accepted that classifiers have both linguistic and gestural properties.[70]
Like Allan, Grinevald compared sign language classifiers to spoken language classifiers in 2000.[71] Specifically, she focused on verbal classifiers, which act as verbal affixes.[72] She cites an example from Cayuga, an Iroquoian language.[73]
The Cayuga classifier for vehicles, -treht-, is similar to whole entity classifiers in sign languages. Similar examples have been found in Diegueño, which has morphemes that act like extension and surface classifiers in sign languages. In both languages the classifiers are attached to the verb and cannot stand alone.[74] It is now accepted that classifiers in spoken and signed languages are similar, contrary to what was previously believed:[75] both track referents grammatically, can form new words, and may emphasize a salient aspect of an entity.[75] The main difference is that sign languages have only verbal classifiers,[75] whereas the classifier systems of spoken languages are more diverse in function and distribution.[76]
Despite the many alternative names proposed for it[77] and its questionable relationship to spoken language classifiers,[78] the term "classifier" continues to be commonly used in sign language research.[78]
Linguistic analyses
There is no consensus on how to analyze classifier constructions.[3] Linguistic analyses can be divided into three major categories: representational, morphological, and lexical. Representational analyses were the first attempt at describing classifiers.[8] They view classifier constructions as manual representations of movements in the world. Because these constructions are highly iconic, representational analyses argue that this form-meaning connection should be the basis for linguistic analysis, since finite sets of morphemes or parameters cannot account for all potentially meaningful classifier constructions.[79][80] This view has been criticized because it predicts impossible constructions: in ASL, for example, a walking classifier handshape cannot be used to represent the movement of an animal in the animal noun class, even though doing so would be an iconic representation of the event.[81]
Lexical analyses view classifiers as partially lexicalized words.[82]
Morphological analyses view classifiers as a series of morphemes.[83][60] Currently, this is the predominant school of thought.[84][85] In these analyses, classifier verbs are combinations of verbal roots with numerous affixes.[86] If the handshape is taken to consist of several morphemes, it is not clear how they should be segmented or analyzed.[8][87] For example, the fingertips in Swedish Sign Language can be bent in order to represent the front of a car getting damaged in a crash; this led Supalla to posit that each finger might act as a separate morpheme.[87] The morphological analysis has been criticized for its complexity.[86] Liddell found that analyzing a classifier construction in ASL where one person walks to another would require anywhere between 14 and 28 morphemes.[88] Other linguists, however, consider the handshape to consist of a single morpheme.[89] In 2003, Schembri stated, on the basis of grammaticality judgments from native signers, that there is no convincing evidence that all handshapes are multi-morphemic.[89]
Morphological analyses differ in which aspect of the construction they consider the root. Supalla argued that the morpheme expressing motion or location is the verbal root to which the handshape morpheme is affixed.[60] Engberg-Pedersen disagreed, arguing that the choice of handshape can fundamentally change how the movement is interpreted, so the movement alone should not be treated as the root. For example, putting a book on a shelf and a cat jumping onto a shelf use the same movement in ASL despite being fundamentally different acts; only the handshape distinguishes them.[90][91][9] Classifiers are affixes, meaning that they cannot occur alone and must be bound.[92] On their own, classifiers are not specified for place of articulation or movement, which might explain why they are bound: the missing information is filled in by the root.[92]
Certain classifiers are similar to pronouns.[9][91][93] As with pronouns, the signer must first introduce the referent, usually by signing or fingerspelling the noun.[94] The classifier is then taken to refer to this referent.[9] Signers do not have to re-introduce the same referent in later constructions; the classifier is understood to still refer to that referent.[9] Some classifiers also denote a specific class of referents, in the same way that the pronoun "she" can refer to women or waitresses.[94] Similarly, ASL has a classifier which refers to vehicles, but not to people or animals.[94] In this view, verbal classifiers may be seen as agreement markers for their referents, with the movement as the root.[9]
Acquisition
The gestures of speaking children sometimes resemble classifier constructions.[95] However, signing children learn these constructions as part of a grammatical system, not as iconic representations of events. Owing to their complexity, it takes a long time to master them.[96][97] Children do not master the use of classifier constructions until the age of eight or nine.[98] There are many reasons for this relatively late mastery: children must learn to express different viewpoints correctly, select the correct handshape, and order the construction properly.[96] Schick found that handling classifiers were the most difficult to master, followed by extension and surface classifiers; whole entity classifiers had the fewest production errors.[99] Young children tend to substitute simpler, more general classifiers for complex ones.[98]
Children start using classifiers at the age of two.[96] These early forms are mostly handling and whole entity classifiers.[96] Simple movements are produced correctly as early as 2.6 years of age.[100] Complex movements, such as arcs, are more difficult for children to express. The acquisition of location in classifier constructions depends on the complexity of the relationship between the referents and their spatial locations.[100] Simple extension and surface classifiers are produced correctly at 4.5 years of age.[100] By the age of five to six, children usually select the correct handshape.[101][96] At age six to seven, children still make mistakes in representing spatial relationships; in signs with a figure-ground relationship, they will sometimes omit the ground entirely.[96] This could be because expressing figure and ground together requires proper coordination of both hands; another explanation is that children have more trouble learning optional structures in general.[100] Although the system is mostly mastered by then, children aged nine still have difficulty understanding the locative relations between classifiers.[97]
It is widely accepted that iconicity helps in learning spoken languages, although the picture is less clear for sign languages.[102][103] Some have argued that iconicity plays no role in acquiring classifier constructions, because the constructions are highly complex and are not mastered until late childhood.[102] Other linguists claim that children as young as three years old can produce adult-like constructions,[102] although only with one hand.[104] Slobin found that children under three years of age seem to "bootstrap" natural gesture to make learning the handshape easier.[105] Most young children do not seem to represent spatial situations iconically.[98] They also do not express complex path movements all at once, but rather sequentially.[98] In adults, it has been shown that iconicity can help in learning lexical signs.[39][40]
Brain structures
As with spoken languages, the left hemisphere of the brain is dominant for sign language production.[106] However, the right hemisphere is superior in some respects: it is better at processing concrete words, like bed or flower, than abstract ones,[107] and it is important for showing spatial relations between entities iconically.[106] It is especially important for using and understanding classifier constructions.[108] Signers with damage to the right hemisphere cannot properly describe the items in a room; they can remember the items themselves, but cannot use classifiers to express their locations.[107]
The parietal cortex is activated in both hemispheres when perceiving the spatial location of objects.[107] For spoken languages, describing spatial relationships engages only the left parietal cortex; for sign languages, both the left and right parietal cortex are needed when using classifier constructions.[107] This might explain why people with right-hemisphere damage have trouble expressing these constructions: they cannot encode external spatial relations and use them while signing.[109]
In order to use certain classifier constructions, the signer must be able to visualize the entity and its shape, orientation, and location.[110] It has been shown that deaf signers are better at generating spatial mental images than hearing non-signers.[110] The spatial memory span of deaf signers is also superior,[111] which is linked to their use of sign language rather than to their deafness.[111] This suggests that using sign language might change the way the brain organizes non-linguistic information.[110]
Stylistic and creative use
A signer may "hold" the non-dominant hand of a classifier construction in place; this hand usually represents the background. The hold can serve to keep relevant information present during the conversation.[112] During the hold, the dominant hand may also articulate other signs that are relevant to the first classifier.[113]
In performative story-telling and poetry, classifiers may also serve creative purposes.[114][115] Just as in spoken language, skilled language use can indicate eloquence; in ASL poetry, skilled signers have been observed to combine classifiers and lexical signs.[115] In British Sign Language, the signs for BAT and DARK are identical, and both are articulated at the face. This may be exploited for poetic effect, for example by likening bats to darkness with an entity classifier showing a bat flying at the face.[116] Classifiers may also be used to expressively characterize animals or non-human objects.[117]
Citations
- ↑ Sandler & Lillo-Martin 2006, p. 76.
- ↑ 2.0 2.1 Hill, Lillo-Martin & Wood 2019, p. 49.
- ↑ 3.0 3.1 Brentari 2010, p. 254.
- ↑ Emmorey 2008, p. 194-195.
- ↑ Brentari 2010, p. 253-254.
- ↑ 6.0 6.1 6.2 6.3 Emmorey 2008, p. 74.
- ↑ Kimmelman, Pfau & Aboh 2019.
- ↑ 8.0 8.1 8.2 8.3 Zwitserlood 2012, p. 159.
- ↑ 9.0 9.1 9.2 9.3 9.4 9.5 Zwitserlood 2012, p. 166.
- ↑ Sandler & Lillo-Martin 2006, p. 78-79.
- ↑ Hill, Lillo-Martin & Wood 2019, p. 51.
- ↑ Emmorey 2008, p. 86.
- ↑ 13.0 13.1 Zwitserlood 2012, p. 164.
- ↑ Schembri 2003, p. 22.
- ↑ Schembri 2003, p. 22-23.
- ↑ Schembri 2003, p. 24.
- ↑ Marschark & Spencer 2003, p. 316.
- ↑ Zwitserlood 2012, p. 169.
- ↑ Brozdowski, Secora & Emmorey 2019.
- ↑ Emmorey 2008, p. 36-38.
- ↑ Sandler & Lillo-Martin 2006, p. 90.
- ↑ 22.0 22.1 Emmorey 2008, p. 85-86.
- ↑ Hill, Lillo-Martin & Wood 2019, p. 34.
- ↑ Crasborn 2006, p. 69.
- ↑ Carlo 2014, p. 49-50.
- ↑ Carlo 2014, p. 52.
- ↑ Schembri 2003, p. 9-10.
- ↑ 28.0 28.1 28.2 Zwitserlood 2012, p. 161.
- ↑ 29.0 29.1 Engberg-Pedersen 1993.
- ↑ 30.0 30.1 Emmorey 2008, p. 76.
- ↑ Emmorey 2008, p. 78.
- ↑ Zwitserlood 2012, p. 163.
- ↑ 33.0 33.1 33.2 33.3 Zwitserlood 2012, p. 162.
- ↑ Emmorey 2008, p. 80.
- ↑ Zwitserlood 2012, p. 167.
- ↑ Emmorey 2008, p. 81.
- ↑ 37.0 37.1 37.2 Zwitserlood 2012, p. 158.
- ↑ 38.0 38.1 38.2 38.3 38.4 Schembri 2003, p. 26.
- ↑ 39.0 39.1 Ortega, Schiefner & Özyürek 2019.
- ↑ 40.0 40.1 Marshall & Morgan 2015.
- ↑ Brentari 2010, p. 260.
- ↑ Sandler & Lillo-Martin 2006, p. 87.
- ↑ Zwitserlood 2012, p. 169-170.
- ↑ Aronoff et al. 2003, p. 69-70.
- ↑ Zwitserlood 2012, p. 179.
- ↑ Zwitserlood 2012, p. 170.
- ↑ 47.0 47.1 47.2 47.3 Brentari, Fenlon & Cormier 2018.
- ↑ 48.0 48.1 48.2 Schembri 2003, p. 11.
- ↑ Brentari 2010, p. 252.
- ↑ Emmorey 2008, p. 9.
- ↑ Frishberg 1975.
- ↑ Zwitserlood 2012, p. 160.
- ↑ 53.0 53.1 Allan 1977.
- ↑ Fernald & Platero 2000, p. 3.
- ↑ Schembri 2003, p. 10-11.
- ↑ Schembri 2003, p. 15.
- ↑ Schembri 2003, p. 13-14.
- ↑ Emmorey 2008, p. 88.
- ↑ Zwitserlood 2012, p. 175.
- ↑ 60.0 60.1 60.2 Supalla 1982.
- ↑ Zwitserlood 2012, p. 161; 165.
- ↑ Sandler & Lillo-Martin 2006, p. 77.
- ↑ 63.0 63.1 Sandler & Lillo-Martin 2006, p. 77-78.
- ↑ Hill, Lillo-Martin & Wood 2019, p. 50.
- ↑ Crasborn 2006, p. 68.
- ↑ Liddell 2000.
- ↑ Schembri 2003, p. 9.
- ↑ Brentari 2010, p. 256.
- ↑ Schembri, Jones & Burnham 2005.
- ↑ Cormier, Schembri & Woll 2010, p. 2664-2665.
- ↑ Grinevald 2000.
- ↑ Aronoff et al. 2003, p. 63-64.
- ↑ Grinevald 2000, p. 67.
- ↑ Sandler & Lillo-Martin 2006, p. 84.
- ↑ 75.0 75.1 75.2 Zwitserlood 2012, p. 180.
- ↑ Zwitserlood 2012, p. 175-176.
- ↑ Schembri 2003, p. 4.
- ↑ 78.0 78.1 Emmorey 2008, p. 90.
- ↑ DeMatteo 1977.
- ↑ Brentari 2010, p. 256-257.
- ↑ Brentari 2010, p. 258-259.
- ↑ Liddell 2003a.
- ↑ Benedicto & Brentari 2004.
- ↑ Zwitserlood 2012, p. 159; 165.
- ↑ Schembri 2003, p. 18.
- ↑ 86.0 86.1 Zwitserlood 2012, p. 165.
- ↑ 87.0 87.1 Schembri 2003, p. 18-20.
- ↑ Liddell 2003b, p. 205-206.
- ↑ 89.0 89.1 Schembri 2003, p. 19.
- ↑ Schembri 2003, p. 21-22.
- ↑ 91.0 91.1 Emmorey 2008, p. 88-91.
- ↑ 92.0 92.1 Zwitserlood 2012, p. 168.
- ↑ Marschark & Spencer 2003, p. 321.
- ↑ 94.0 94.1 94.2 Baker-Shenk & Cokely 1981, p. 287.
- ↑ Emmorey 2008, p. 198.
- ↑ 96.0 96.1 96.2 96.3 96.4 96.5 Marschark & Spencer 2003, p. 223.
- ↑ 97.0 97.1 Zwitserlood 2012, p. 174.
- ↑ 98.0 98.1 98.2 98.3 Zwitserlood 2012, p. 173.
- ↑ Schick 1990.
- ↑ 100.0 100.1 100.2 100.3 Emmorey 2008, p. 196.
- ↑ Morgan & Woll 2003, p. 300.
- ↑ 102.0 102.1 102.2 Ortega 2017.
- ↑ Thompson 2011, p. 609.
- ↑ Slobin 2003, p. 275.
- ↑ Slobin 2003, p. 272.
- ↑ 106.0 106.1 Marschark & Spencer 2003, p. 365.
- ↑ 107.0 107.1 107.2 107.3 Marschark & Spencer 2003, p. 370.
- ↑ Marschark & Spencer 2003, p. 373.
- ↑ Marschark & Spencer 2003, p. 371.
- ↑ 110.0 110.1 110.2 Emmorey 2008, p. 266.
- ↑ 111.0 111.1 Emmorey 2008, p. 266-267.
- ↑ Sandler & Lillo-Martin 2006, p. 88.
- ↑ Marschark & Spencer 2003, p. 334.
- ↑ Sutton-Spence 2012, p. 1003.
- ↑ 115.0 115.1 Sandler & Lillo-Martin 2006, p. 88-89.
- ↑ Sutton-Spence 2012, p. 1011.
- ↑ Sutton-Spence 2012, p. 1012.
References
- Aronoff, Mark; Meir, Irit; Padden, Carol; Sandler, Wendy (2003). "Classifier constructions and morphology in two sign languages". Perspectives on classifier constructions in sign languages. Lawrence Erlbaum Associates. pp. 53–84.
- Baker-Shenk, Charlotte Lee; Cokely, Dennis (1981). American Sign Language: A Teacher's Resource Text on Grammar and Culture. Washington, D.C.: Clerc Books, Gallaudet University Press. ISBN 093032384X. OCLC 24120797.
- Baker; van den Bogaerde; Pfau; Schermer (2016). The Linguistics of Sign Languages. John Benjamins. ISBN 9789027212306.
- Benedicto, Elena; Brentari, Diane (2004). "Where did all the arguments go?: argument-changing properties of classifiers in ASL". Natural Language & Linguistic Theory 22 (4): 743–810. doi:10.1007/s11049-003-4698-2.
- Brentari, Diane (2010). Sign Languages. Cambridge University Press. ISBN 978-0-521-88370-2.
- Brentari, Diane; Fenlon, Jordan; Cormier, Kearsy (2018). "Sign language phonology". Oxford Research Encyclopedia of Linguistics. doi:10.1093/acrefore/9780199384655.013.117. ISBN 9780199384655.
- Brozdowski, Chris; Secora, Kristen; Emmorey, Karen (2019-03-11). "Assessing the Comprehension of Spatial Perspectives in ASL Classifier Constructions". The Journal of Deaf Studies and Deaf Education 24 (3): 214–222. doi:10.1093/deafed/enz005. ISSN 1081-4159. PMID 30856254.
- Carlo, Geraci (2014). Structuring the argument. Multidisciplinary research on verb argument structure. John Benjamins Publishing Company. pp. 45–60. ISBN 978-90-272-0827-9.
- Cormier, Kearsy; Schembri, Adam; Woll, Bencie (2010). "Diversity across sign languages and spoken languages: Implications for language universals". Lingua 120 (12): 2664–2667. doi:10.1016/j.lingua.2010.03.016.
- Crasborn, Onno A (2006). "A linguistic analysis of the use of the two hands in sign language poetry". Linguistics in the Netherlands 23 (1): 65–77. doi:10.1075/avt.23.09cra.
- DeMatteo, Asa (1977). On the other hand: New perspectives on American Sign Language. pp. 109–136.
- Engberg-Pedersen, Elisabeth (1993). "Space in Danish Sign Language. The Semantics and Morphosyntax of the Use of Space in a Visual Language". Nordic Journal of Linguistics 19: 406. doi:10.1017/S0332586500003115.
- Engberg-Pedersen, Elisabeth (2003). "How Composite Is a Fall? Adults’ and Children’s Descriptions of Different Types of Falls in Danish Sign Language". Perspectives on Classifier Constructions in Sign Languages. Lawrence Erlbaum. ISBN 0-8058-4269-1.
- Emmorey, Karen (2008). Language, Cognition, and the Brain. Lawrence Erlbaum Associates. ISBN 978-1-4106-0398-2.
- Emmorey, Karen; Herzig, Melissa (2008). "Categorical versus gradient properties of classifier constructions in ASL". Perspectives on classifier constructions in signed languages. Routledge. p. 222. ISBN 978-0415653817.
- Fernald, Theodore; Platero, Paul (2000). The Athabaskan Languages: Perspectives on a Native American Language Family. Oxford University Press. ISBN 978-0195119473. https://archive.org/details/athabaskanlangua0000unse.
- Grinevald, Colette (2000). "A morphosyntactic typology of classifiers". Systems of Nominal Classification. Cambridge University Press. pp. 50–92. ISBN 9780521065238.
- Frishberg, Nancy (1975). "Arbitrariness and iconicity: historical change in American Sign Language". Language 51 (3): 696–719. doi:10.2307/412894.
- Hill, Joseph; Lillo-Martin, Diane; Wood, Sandra (2019). Sign Languages: Structures and Contexts. Routledge. ISBN 978-1-138-08916-7.
- Allan, Keith (1977). "Classifiers". Language 53 (2): 285–311. doi:10.1353/lan.1977.0043.
- Kimmelman, Vadim; Pfau, Roland; Aboh, Enoch O. (April 2019). "Argument structure of classifier predicates in Russian Sign Language". Natural Language & Linguistic Theory 38 (2): 539–579. doi:10.1007/s11049-019-09448-9.
- Liddell, Scott K (2000). The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima. Lawrence Erlbaum Associates. pp. 303–320. ISBN 1-4106-0497-7.
- Liddell, Scott K (2003a). Grammar, gesture, and meaning in American Sign Language. Cambridge University Press. ISBN 9780511615054.
- Liddell, Scott K (2003b). "Sources of Meaning in ASL Classifier Predicates". Perspectives on Classifier Constructions in Sign Languages. Lawrence Erlbaum Associates. pp. 199–220. ISBN 0-8058-4269-1.
- Marschark, Marc; Spencer, Patricia Elizabeth (2003). Oxford handbook of deaf studies, language, and education. Oxford: Oxford University Press. ISBN 0195149971. OCLC 50143669.
- Marshall, Chloë R.; Morgan, Gary (2015). "From Gesture to Sign Language: Conventionalization of Classifier Constructions by Adult Hearing Learners of British Sign Language". Topics in Cognitive Science 7 (1): 61–80. doi:10.1111/tops.12118. ISSN 1756-8765. PMID 25329326. http://openaccess.city.ac.uk/6413/8/from%20gesture%20to%20sign%20language.pdf.
- Morgan, Gary; Woll, Bencie (2003). "The Development of Reference Switching Encoded Through Body Classifiers in British Sign Language". Perspectives on Classifier Constructions in Sign Languages. Lawrence Erlbaum. ISBN 0-8058-4269-1.
- Ortega, Gerardo; Schiefner, Annika; Özyürek, Aslı (2019). "Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to signs". Cognition 191: 103996. doi:10.1016/j.cognition.2019.06.008. PMID 31238248. http://pure-oai.bham.ac.uk/ws/files/68302182/Preprint_Ortega_Schiefner_Ozyurek_UoB.pdf.
- Ortega, Gerardo (2017). "Iconicity and Sign Lexical Acquisition: A Review". Frontiers in Psychology 8: 1280. doi:10.3389/fpsyg.2017.01280. ISSN 1664-1078. PMID 28824480.
- Sandler, Wendy; Lillo-Martin, Diane (2006). Sign Language and Linguistic Universals. Cambridge University Press. ISBN 978-0521483957.
- Schembri, Adam (2003). "Rethinking ‘classifiers’ in signed languages". Perspectives on Classifier Constructions in Sign Languages. Psychology Press. ISBN 978-0415653817.
- Schembri, Adam; Jones, Caroline; Burnham, Denis (2005). "Comparing Action Gestures and Classifier Verbs of Motion: Evidence From Australian Sign Language, Taiwan Sign Language, and Nonsigners' Gestures Without Speech". The Journal of Deaf Studies and Deaf Education 10 (3): 272–290. doi:10.1093/deafed/eni029. PMID 15858072.
- Schick, Brenda (1990). "The effects of morphosyntactic structure on the acquisition of classifier predicates in ASL". Theoretical Issues: 358–374.
- Slobin, Dan (2003). A Cognitive/Functional Perspective on the Acquisition of "Classifiers". Lawrence Erlbaum Associates. pp. 271–296.
- Supalla, Ted Roland (1982). Structure and Acquisition of Verbs of Motion and Location in American Sign Language.
- Sutton-Spence, Rachel (2012). "Poetry". Sign Language: An International Handbook. Berlin: De Gruyter Mouton. ISBN 978-3-11-020421-6.
- Thompson, Robin L. (2011). "Iconicity in Language Processing and Acquisition: What Signed Languages Reveal: Iconicity in Sign Language". Language and Linguistics Compass 5 (9): 603–616. doi:10.1111/j.1749-818X.2011.00301.x.
- Zwitserlood, Inge (2012). "Classifiers". Sign Language: An International Handbook. Berlin: De Gruyter Mouton. ISBN 9783110261325. OCLC 812574063.
Original source: https://en.wikipedia.org/wiki/Classifier_constructions_in_sign_languages