4 results for dishonest signer
at Boston University Digital Common
Abstract:
Handshape is a key articulatory parameter in sign language, and thus handshape recognition from signing video is essential for sign recognition and retrieval. Handshape transitions within monomorphemic lexical signs (the largest class of signs in signed languages) are governed by phonological rules. For example, such transitions normally involve either closing or opening of the hand (i.e., exclusively folding or unfolding of the palm and one or more fingers). Furthermore, akin to allophonic variations in spoken languages, both inter- and intra-signer variations in the production of specific handshapes are observed. We propose a Bayesian network formulation that exploits handshape co-occurrence constraints, also utilizing information about allophonic variations to aid handshape recognition. We propose a fast non-rigid image alignment method to improve robustness to handshape appearance variations during computation of observation likelihoods in the Bayesian network. We evaluate our handshape recognition approach on a large dataset of monomorphemic lexical signs and demonstrate that leveraging linguistic constraints on handshapes improves handshape recognition accuracy. As part of the overall project, we are collecting and preparing for dissemination a large corpus (three thousand signs from three native signers) of American Sign Language (ASL) video. The video has been annotated using SignStream® [Neidle et al.] with labels for linguistic information such as glosses, morphological properties and variations, and the start/end handshapes associated with each ASL sign.
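To make the co-occurrence idea concrete, below is a minimal sketch of MAP inference over (start, end) handshape pairs under a factorized Bayesian network, assuming per-handshape observation likelihoods from some appearance model. The handshape labels and the legality table encoding opening/closing transitions are hypothetical placeholders, not the paper's actual inventory or model.

    # Minimal sketch: combine observation likelihoods for the start and end
    # handshapes with a co-occurrence prior that zeroes out phonologically
    # illegal transitions, then take the MAP pair.
    import numpy as np

    HANDSHAPES = ["A", "B", "5", "S", "1"]   # toy label set (hypothetical)
    N = len(HANDSHAPES)

    # Hypothetical legality table for (start, end) transitions; a real system
    # would derive this from the opening/closing phonological rules.
    legal = np.array([
        [1, 1, 1, 0, 1],
        [1, 1, 0, 1, 0],
        [0, 1, 1, 1, 1],
        [1, 0, 1, 1, 0],
        [1, 0, 1, 0, 1],
    ], dtype=float)
    prior = legal / legal.sum()              # uniform over legal pairs

    def map_handshapes(lik_start, lik_end):
        """MAP (s, e) under P(s, e | obs) proportional to
        P(obs_s | s) * P(obs_e | e) * P(s, e)."""
        posterior = lik_start[:, None] * lik_end[None, :] * prior
        s, e = np.unravel_index(np.argmax(posterior), posterior.shape)
        return HANDSHAPES[s], HANDSHAPES[e]

    rng = np.random.default_rng(0)
    print(map_handshapes(rng.random(N), rng.random(N)))  # placeholder likelihoods

A pair that is well supported by the observations but phonologically illegal receives zero posterior, which is how the linguistic constraint corrects appearance-based errors.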
Abstract:
We propose a new notion of cryptographic tamper evidence. A tamper-evident signature scheme provides an additional procedure Div that detects tampering: given two signatures, Div can determine whether one of them was generated by the forger. Surprisingly, this is possible even after the adversary has inconspicuously learned (exposed) some, or even all, of the secrets in the system. In this case, it might be impossible to tell which signature was generated by the legitimate signer and which by the forger, but at least the fact of tampering will be made evident. We define several variants of tamper evidence, differing in their power to detect tampering. In all of them, we assume an equally powerful adversary: she adaptively controls all the inputs to the legitimate signer (i.e., all messages to be signed and their timing) and observes all his outputs; she can also adaptively expose all the secrets at arbitrary times. We provide tamper-evident schemes for all the variants and prove their optimality. Achieving the strongest tamper evidence turns out to be provably expensive. However, we define a somewhat weaker, but still practical, variant: α-synchronous tamper evidence (α-te), and provide α-te schemes with logarithmic cost. Our α-te schemes use a combinatorial construction of α-separating sets, which might be of independent interest. We stress that our mechanisms are purely cryptographic: the tamper-detection algorithm Div is stateless and takes no inputs except the two signatures (in particular, it keeps no logs); we use no infrastructure (or other ways to conceal additional secrets); and we use no hardware properties (except those implied by standard cryptographic assumptions, such as random number generators). Our constructions are based on arbitrary ordinary signature schemes and do not require random oracles.
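As a rough illustration of what a divergence test can look like (not the paper's construction, and ignoring its optimality results and α-te machinery), the toy sketch below chains each signature to the signing history, so two valid signatures with the same sequence index but different chain values expose a fork. HMAC stands in for an ordinary signature scheme purely to keep the example self-contained.

    # Toy fork-detection sketch, NOT the paper's tamper-evident or alpha-te
    # scheme: signatures carry a sequence index and a hash-chain value, and
    # Div flags two signatures whose histories diverge at the same index.
    import hashlib, hmac, os

    KEY = os.urandom(32)                     # signer secret (may be exposed)

    def _h(*parts):
        d = hashlib.sha256()
        for p in parts:
            d.update(p)
        return d.digest()

    class Signer:
        def __init__(self):
            self.i, self.chain = 0, b"\x00" * 32
        def sign(self, msg: bytes):
            self.i += 1
            self.chain = _h(self.chain, msg)             # evolving history
            tag = hmac.new(KEY, _h(self.i.to_bytes(8, "big"), self.chain),
                           hashlib.sha256).digest()      # stand-in signature
            return (self.i, self.chain, msg, tag)

    def div(sig_a, sig_b):
        """Stateless: same index but different chain values means the signing
        history forked, so one of the two signatures is a forgery."""
        (ia, ca, _, _), (ib, cb, _, _) = sig_a, sig_b
        return ia == ib and ca != cb

    legit = Signer(); s1 = legit.sign(b"pay Alice")
    forger = Signer()                        # all secrets exposed: same KEY
    f1 = forger.sign(b"pay Mallory")
    print(div(s1, f1))                       # True: tampering made evident

Note that div keeps no state and sees nothing but the two signatures, mirroring the stateless, log-free requirement stressed in the abstract; which of the two is the forgery remains undecidable once KEY is exposed.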
Abstract:
Locating hands in sign language video is challenging due to a number of factors. Hand appearance varies widely across signers due to anthropometric variations and varying levels of signer proficiency. Video can be captured under varying illumination, camera resolutions, and levels of scene clutter, e.g., high-resolution video captured in a studio vs. low-resolution video gathered by a webcam in a user's home. Moreover, the signers' clothing varies, e.g., skin-toned vs. contrasting clothing, short-sleeved vs. long-sleeved shirts, etc. In this work, the hand detection problem is addressed in an appearance matching framework. The Histogram of Oriented Gradients (HOG)-based matching score function is reformulated to allow non-rigid alignment between pairs of images, to account for hand shape variation. The resulting alignment score is used within a Support Vector Machine hand/not-hand classifier for hand detection. The new matching score function yields improved performance (in ROC area and hand detection rate) over the Vocabulary Guided Pyramid Match Kernel (VGPMK) and the traditional, rigid HOG distance on American Sign Language video produced by expert signers. The proposed match score function is computationally less expensive (for training and testing), has fewer parameters, and is less sensitive to parameter settings than VGPMK. The proposed detector works well on test sequences from an inexpert signer in a non-studio setting with a cluttered background.
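The sketch below gives one plausible reading of the non-rigid HOG matching idea, assuming same-size grayscale image patches: each query cell histogram is allowed to match the best cell within a small spatial neighborhood of the candidate, rather than only the rigid same-position cell. Cell size, search radius, and the L2 cell distance are illustrative choices, not the paper's.

    # Sketch of a non-rigid HOG matching score: each cell histogram in the
    # query may align to the best-matching cell within a small neighborhood
    # of the candidate, absorbing small hand shape deformations.
    import numpy as np

    def cell_histograms(img, cell=8, bins=9):
        """Per-cell histograms of unsigned gradient orientation,
        weighted by gradient magnitude, L2-normalized per cell."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned, [0, pi)
        b = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
        hy, wx = img.shape[0] // cell, img.shape[1] // cell
        hist = np.zeros((hy, wx, bins))
        for i in range(hy):
            for j in range(wx):
                ys = slice(i * cell, (i + 1) * cell)
                xs = slice(j * cell, (j + 1) * cell)
                hist[i, j] = np.bincount(b[ys, xs].ravel(),
                                         weights=mag[ys, xs].ravel(),
                                         minlength=bins)
        n = np.linalg.norm(hist, axis=2, keepdims=True)
        return hist / np.maximum(n, 1e-8)

    def nonrigid_hog_score(a, b, radius=1):
        """Sum over cells of the best L2 match within +/- radius cells;
        lower means more similar. Feed into an SVM as a feature."""
        ha, hb = cell_histograms(a), cell_histograms(b)
        hy, wx, _ = ha.shape
        total = 0.0
        for i in range(hy):
            for j in range(wx):
                best = np.inf
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < hy and 0 <= jj < wx:
                            d = np.linalg.norm(ha[i, j] - hb[ii, jj])
                            best = min(best, d)
                total += best
        return total

With radius=0 this reduces to the rigid HOG distance, so the search radius is the single knob controlling how much deformation the score tolerates, consistent with the abstract's claim of fewer, less sensitive parameters.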
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery: if the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows, as well as headshakes.
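As a toy illustration of interpreting geometric relationships among tracked features (the thresholds and landmark inputs are invented for the example, not the system's actual rules), the sketch below classifies brow state from the eyebrow-to-eye distance normalized by inter-ocular distance, an anthropometric reference, and flags a headshake from direction reversals of the horizontal head trajectory.

    # Toy geometric-rule sketch for two of the markers described above.
    import numpy as np

    def eyebrow_state(brow_y, eye_y, interocular, lo=0.28, hi=0.36):
        """Classify brow height relative to a neutral band [lo, hi];
        landmark coordinates are assumed to come from the tracker."""
        r = abs(eye_y - brow_y) / interocular
        if r > hi:
            return "raised"       # e.g., yes-no question marking
        if r < lo:
            return "furrowed"     # e.g., wh-question marking
        return "neutral"

    def is_headshake(head_x, min_reversals=3):
        """Periodic side-to-side motion: count direction reversals of
        the horizontal head trajectory, ignoring tiny jitter."""
        v = np.diff(np.asarray(head_x, float))
        signs = np.sign(v[np.abs(v) > 1e-3])
        reversals = int(np.sum(signs[1:] != signs[:-1]))
        return reversals >= min_reversals     # e.g., negation marking

    print(eyebrow_state(brow_y=40, eye_y=62, interocular=58))   # 'raised'
    print(is_headshake([0, 4, 8, 5, 1, 5, 9, 4, 0]))            # True

Normalizing by inter-ocular distance is what makes such rules usable across signers of different face sizes and camera distances, which is the role the anthropometric face model plays in the described system.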