967 results for Speaker Recognition, Text-constrained, Multilingual, Speaker Verification, HMMs


Relevance: 30.00%

Abstract:

Automaticity (defined in this essay as short response time) and fluency in language use are closely connected, and some research has been conducted on the aspects involved. The notion of automaticity is still debated, and many definitions of and opinions on what automaticity is have been suggested (Andersson, 1987, 1992, 1993; Logan, 1988; Segalowitz, 2010). One aspect that still needs more research is the correlation between vocabulary proficiency (a person's knowledge of words and ability to use them correctly) and response time in word recognition. The aim of this study has therefore been to investigate this correlation using two different tests: a vocabulary size test (Paul Nation) and a lexical decision task (SuperLab) that measures both response time and accuracy. 23 Swedish students taking the English 7 course in Swedish upper secondary school were tested. The data were analyzed quantitatively, comparing average values and correlations across the tests; the correlations were calculated using Pearson's correlation coefficient. The empirical study indicates that vocabulary proficiency is not strongly correlated with shorter response times in word recognition. Rather, the data indicate that L2 learners are instead sensitive to the frequency levels of the vocabulary: accuracy (the number of correctly recognized words) and response times correlate with the frequency level of the tested words. This indicates that factors other than vocabulary proficiency are important for the ability to recognize words quickly.
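The study's correlations were computed with Pearson's coefficient. As a minimal illustration (not the authors' tooling, and with invented numbers), the same statistic can be computed directly:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: vocabulary-size scores and mean lexical-decision
# response times (ms) for a handful of learners.
vocab = [4200, 5600, 6100, 4900, 7300]
rt_ms = [710, 650, 690, 700, 640]
r = pearson_r(vocab, rt_ms)  # a negative r would mean larger vocabulary, faster responses
```

A strong negative r here would support a vocabulary-size effect on speed; the study found no such strong correlation.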

Relevance: 30.00%

Abstract:

A common assumption is that language is used to convey factual information, but linguistic forms also serve as a way to communicate pragmatic features, such as a speaker's intentions and mental state. This study describes and analyses two strategies for stance-taking in GhaPE, more specifically the use of discourse particles and complement-taking predicates. Such grammatical resources have been shown in the literature to play important roles in signalling how the speaker evaluates and positions him/herself and the addressee with respect to objects of discourse. The analysis and discussion of forms is informed by Du Bois' (2007) 'stance triangle', which has proved a useful analytical device for investigating stance from a dialogical perspective. GhaPE is at times perceived as fairly simple, both by scholars and in the community where it is spoken. This thesis is thus an attempt to display aspects of the richness of the language.

Relevance: 30.00%

Abstract:

The status of English as the language of international communication is by now well established. However, over the past 16 years research has emphasized that the English spoken in international contact situations, between people with first languages other than English, serves different needs than the English spoken locally among native speakers, giving rise to English as a lingua franca (ELF) as a scholarly field. The impact of findings in ELF has so far led to only a moderate shift in English language teaching. Especially in expanding-circle countries, where ELF should have the biggest impact, change is only gradually becoming palpable. Accent and pronunciation, as one of the biggest influences on both identity and mutual intelligibility (Jenkins 2000; 2007), are at the root of the discussion. The aim of this study is therefore to examine accent choices and the extent to which native-speaker ideology informs the preferences of ten speakers of ELF and 27 German natives with experience in international communication. Ethnographic and sociolinguistic methods, as well as auditory analysis, were applied. The auditory analysis of six variables in the recorded speech production of the ten speakers suggests no significant preference for one norm-giving variety over the other. Rather, speakers tend to mix and match General American- and Standard Southern British English-like features in their pronunciation. When reporting their accent ideals, four participants mention the idea of a 'neutral' English accent; neutral accents seem to have been understood as 'unmarked' accents. Expressed beliefs about their own English pronunciation show a comparatively high level of reflection on, and confidence in, their own production. Results from a rating task and a survey given to the 27 German participants reveal more negative attitudes.

While the Germans reported openness towards NNS (non-native speaker) accents and showed awareness of the priority of intelligibility over accent choice in both their own and others' pronunciation, they still largely reported a preference for NS accents. Their ratings of the production of the ten ELF speakers confirmed this and showed that 'neutral' is equated with native-like. In the light of these findings, issues are discussed that ultimately relate to the influence of NS Englishes, identity and the development of English as an international language.

Relevance: 30.00%

Abstract:

A major problem in de novo design of enzyme inhibitors is the unpredictability of the induced fit, with the shapes of both ligand and enzyme changing cooperatively in response to subtle structural changes within a ligand. We have investigated the possibility of dampening the induced fit by using a constrained template as a replacement for adjoining segments of a ligand. The template preorganizes the ligand structure, thereby organizing the local enzyme environment. To test this approach, we used templates consisting of constrained cyclic tripeptides, formed through side chain-to-main chain linkages, as structural mimics of the protease-bound extended beta-strand conformation of three adjoining amino acid residues on the N- or C-terminal side of the scissile bond of substrates. The macrocyclic templates were derivatized into a range of 30 structurally diverse molecules via focused combinatorial variation of nonpeptidic appendages incorporating a hydroxyethylamine transition-state isostere. Most compounds in the library were potent inhibitors of the test protease (HIV-1 protease). Comparison of crystal structures for five protease-inhibitor complexes containing an N-terminal macrocycle and three containing a C-terminal macrocycle establishes that the macrocycles fix their surrounding enzyme environment, thereby permitting independent variation of acyclic inhibitor components with only local disturbances to the protease. In this way, the location in the protease of various acyclic fragments on either side of the macrocyclic template can be accurately predicted. This templating strategy minimizes the problem of induced fit, reducing unpredictable cooperative effects in one inhibitor region caused by changes to adjacent enzyme-inhibitor interactions.
This idea might be exploited in template-based approaches to inhibitors of other proteases, where a beta-strand mimetic is also required for recognition, and to other protein-binding ligands, where different templates may be more appropriate.

Relevance: 30.00%

Abstract:

Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not show the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized feeling of familiarity than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information, and (3) the formation of an episodic memory for a melody depends either on the overall familiarity of the stimulus or on the availability of a verbal label. (C) 2004 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

MHC class I molecules generally present peptides 8-10 aa in length, which form an extended coil in the HLA cleft. Although longer peptides can also bind to class I molecules, they tend to bulge from the cleft, and it is not known whether the TCR repertoire has sufficient plasticity to recognize these determinants during the antiviral CTL response. In this study, we show that unrelated individuals infected with EBV generate a significant CTL response directed toward an HLA-B*3501-restricted, 11-mer epitope from the BZLF1 Ag. The 11-mer determinant adopts a highly bulged conformation, with seven of the peptide side chains solvent-exposed and available for TCR interaction. Such a complex potentially creates a structural challenge for TCR corecognition of both HLA-B*3501 and the peptide Ag. Surprisingly, unrelated B*3501 donors recognizing the 11-mer use identical or closely related alpha beta TCR sequences that share particular CDR3 motifs. Within the small number of dominant CTL clonotypes observed, each has discrete fine specificity for the exposed side-chain residues of the peptide. The data show that bulged viral peptides are indeed immunogenic but suggest that the highly constrained TCR repertoire reflects a limit to TCR diversity when responding to some unusual MHC-peptide ligands.

Relevance: 30.00%

Abstract:

Automatic signature verification is a well-established and active area of research with numerous applications, such as bank check verification and ATM access. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling employing the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting style and mood. The membership functions constitute weights in the TS model, and optimizing the output of the TS model with respect to the structural parameters yields the solution for those parameters. We have also derived two TS model formulations: one with a rule for each input feature (multiple rules) and one with a single rule for all input features. We have found that the TS model with multiple rules is better than the TS model with a single rule at detecting three types of forgery (random, skilled and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz. one innovative and two intuitive, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
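The abstract does not give the paper's exact membership formula, so the following is only a hedged sketch of how angle features might be fuzzified by an exponential membership function and combined in a zero-order Takagi-Sugeno model with one rule per feature; all names and numbers are hypothetical:

```python
import math

def exp_membership(x, center, spread):
    """Exponential fuzzy membership: 1.0 at the reference angle,
    decaying as the observed feature deviates from it."""
    return math.exp(-abs(x - center) / spread)

def ts_score(features, centers, spreads):
    """Zero-order Takagi-Sugeno model with one rule per input feature.
    Each rule's consequent is taken as the constant 1.0, so the output
    reduces to the average membership across features."""
    weights = [exp_membership(x, c, s)
               for x, c, s in zip(features, centers, spreads)]
    return sum(weights) / len(weights)

# Hypothetical angle features (degrees) from a questioned signature,
# compared against per-box reference angles from genuine samples.
reference = [32.0, 45.5, 12.3, 60.1]
spreads = [5.0, 5.0, 5.0, 5.0]
genuine_like = ts_score([31.0, 46.0, 12.0, 59.0], reference, spreads)
forgery_like = ts_score([10.0, 80.0, 40.0, 20.0], reference, spreads)
# A higher score indicates closer agreement with the genuine model.
```

In the paper, the spread-like structural parameters are not fixed by hand as above but obtained by optimizing the TS model output.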

Relevance: 30.00%

Abstract:

This paper presents a corpus-based descriptive analysis of the most prevalent transfer effects and connected-speech processes observed in a comparison of 11 Vietnamese English speakers (6 females, 5 males) and 12 Australian English speakers (6 males, 6 females) over 24 grammatical paraphrase items. The phonetic processes are segmentally labelled in terms of IPA diacritic features using the EMU speech database system, with the aim of labelling departures from native-speaker pronunciation. Prosodic features were analysed using the ToBI framework. The results show many phonetic and prosodic processes that make non-native speakers' speech distinct from that of native speakers. The corpus-based methodology for analysing foreign accent may have implications for the evaluation of non-native accent, accented speech recognition and computer-assisted pronunciation learning.

Relevance: 30.00%

Abstract:

Most face recognition systems only work well in quite constrained environments. In particular, the illumination conditions, facial expression and head pose must be tightly controlled for good recognition performance. In 2004, we proposed a new face recognition algorithm, Adaptive Principal Component Analysis (APCA) [4], which performs well against both lighting variation and expression change. But like other eigenface-derived face recognition algorithms, APCA only performs well with frontal face images. The work presented in this paper extends our previous work to also accommodate variations in head pose. Following the approach of Cootes et al., we develop a face model and a rotation model which can be used to interpret facial features and synthesize realistic frontal face images from a single novel face image. We use a Viola-Jones based face detector to detect the face in real time, thus solving the initialization problem for our Active Appearance Model search. Experiments show that our approach achieves good recognition rates on face images across a wide range of head poses; indeed, recognition rates are improved by up to a factor of 5 compared to standard PCA.
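The Viola-Jones detector mentioned above gets its real-time speed from the integral image, which lets any Haar-like rectangle feature be evaluated in constant time. A minimal sketch of that core data structure (not the authors' code, and with a toy image):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy 2x2 "image" to exercise the table.
img = [[1, 2], [3, 4]]
ii = integral_image(img)
total = rect_sum(ii, 0, 0, 2, 2)  # sum of all four pixels
```

Because every rectangle sum is four table lookups, thousands of such features can be evaluated per candidate window, which is what makes cascade-based detection fast enough for real-time initialization.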

Relevance: 30.00%

Abstract:

As the first step in developing a protocol for the use of video-phones in community health, we carried out a feasibility study among clients with a range of health needs. Clients were equipped with a commercially available video-phone connected via the client's home telephone line. A hands-free speaker-phone and a miniature video-camera (for close-up views) were connected to the video-phone. Ten clients participated: five required wound care, two palliative care, two long-term therapy monitoring, and one was a rural client. All but two were aged 75 years or more. Each client had a video-phone for an average of two to three weeks. During the six months of the study, 43 client calls were made, of which 36 (84%) were converted to video-calls. The speaker-phone was used on 24 occasions (56%) and the close-up camera on 23 occasions (53%). Both clients and nurses rated the equipment as satisfactory or better in questionnaires. None of the nurses felt that the equipment was difficult to use, including unpacking and setting it up; only one client found it difficult. Taking into account the clients' responses, including their free-text comments, a judgement was made as to whether the video-phone had been useful to their nursing care: in seven cases it was felt to be unhelpful and in three cases it was judged helpful. Although the study sample was small, the results suggest, unsurprisingly given the distances involved, that home telenursing is likely to be useful for rural clients in Australia.

Relevance: 30.00%

Abstract:

This paper presents an innovative approach to signature verification and forgery detection based on fuzzy modeling. The signature image is binarized, resized to a fixed-size window and then thinned. The thinned image is partitioned into eight sub-images, called boxes, using the horizontal density approximation approach. Each sub-image is then further resized and partitioned into twelve sub-images using the uniform partitioning approach. The feature considered is the normalized vector angle (α) from each box. Each feature extracted from the sample signatures gives rise to a fuzzy set. Since the choice of a proper fuzzification function is crucial for verification, we have devised a new fuzzification function with structural parameters, which is able to adapt to variations in the fuzzy sets. This function is employed to develop a complete forgery detection and verification system.
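The abstract does not define the normalized vector angle precisely; the sketch below assumes, purely for illustration, that it is the mean angle of a box's foreground pixels measured from the box origin and normalized by 90 degrees:

```python
import math

def box_angle_feature(box_pixels, origin=(0, 0)):
    """Hypothetical angle feature for one box: mean angle (from the box
    origin) of its foreground pixels, normalized to [0, 1] by 90 degrees.
    The abstract does not give the exact formula; this is an assumed form."""
    ox, oy = origin
    angles = []
    for (x, y) in box_pixels:
        dx, dy = x - ox, y - oy
        if dx == 0 and dy == 0:
            continue  # skip the origin pixel itself (angle undefined)
        angles.append(math.degrees(math.atan2(dy, dx)))
    if not angles:
        return 0.0
    return (sum(angles) / len(angles)) / 90.0

# Foreground pixel coordinates of a (hypothetical) thinned stroke in one box.
stroke = [(1, 1), (2, 2), (3, 3)]    # a 45-degree diagonal
feature = box_angle_feature(stroke)  # ≈ 0.5 (45° / 90°)
```

One such scalar per box yields a fixed-length feature vector for the fuzzification stage, regardless of signature size.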

Relevance: 30.00%

Abstract:

This paper discusses forms of epistemic marking that instantiate multiple-perspective constructions (see Evans 2005). Such forms express the speaker's and the addressee's simultaneous epistemic perspectives from the point of view of the speaker, crucially relying on the speaker's assumptions about the addressee's knowledge. The analysis considers established semanto-pragmatic concepts, such as semantic scope, mitigation strategies and communicative intention (as marked by sentence type), in the exploration of forms. In addition, the notion of knowledge asymmetry is discussed alongside the concepts of epistemic status and stance as tools for a semantic analysis of the investigated forms.

Relevance: 30.00%

Abstract:

The paper describes epistemic marking in Ika (Arwako-Chibchan, Colombia) and proposes an analysis in terms of a typologically unusual pattern called conjunct/disjunct, which has been attested for a small number of Asian and South American languages. Canonically, conjunct occurs with first person subjects in statements and with second person in questions, as opposed to any other combination of subject and sentence-type, which is disjunct. The pattern found in Ika both conforms to expectations and, at the same time, contributes to a more nuanced analysis of the functional motivations of the conjunct/disjunct pattern. In Ika, conjunct marking encodes the speaker's direct access to an event that involves either (or both) of the speech participants. In addition, conjunct/disjunct marking interacts predictably with a second set of epistemic markers that encode asymmetries in the epistemic authority of the speaker and the addressee. The analysis builds on first-hand data but remains tentative, awaiting further investigation.

Relevance: 30.00%

Abstract:

The paper focuses on interpersonal aspects of the context in the analysis of evidential and related epistemic marking systems. While evidentiality is defined by its capacity to qualify the speaker's indexical point of view in terms of information source, it is argued that other aspects of the context are important for analyzing evidentiality both conceptually and grammatically. These distinct analytical components concern the illocutionary status of a given marker and its scope properties. The importance of the hearer's point of view in pragmatics and semantics is well attested and constitutes a convincing argument for an increased emphasis on the perspective of the hearer/addressee in analyses of epistemic marking such as evidentiality. The paper discusses available accounts of evidentials that attend to the perspective of the addressee and also introduces lesser-known epistemic marking systems that share a functional space with evidentiality.

Relevance: 30.00%

Abstract:

Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. Of the large number of methodologies available in the literature, only a few are able to handle both single- and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches on two different corpora and show that the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well on the Genia corpus (a standard life science corpus). This indicates that the choice and design of the corpus have a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and occupy a fairly large proportion of the terms in certain domains; as a result, algorithms that ignore single-word terms may cause problems for tasks built on top of ATR. Effective ATR systems also need to take into account both the unstructured text and its structured aspects, which means information extraction techniques need to be integrated into the term recognition process.
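The abstract does not specify the paper's voting mechanism; one common way to combine ranked term lists from several ATR algorithms is average rank position, sketched here with invented data:

```python
def vote(rankings):
    """Combine ranked term lists from several ATR algorithms by average
    rank position (lower is better); a term absent from a list gets a
    penalty rank one past that list's end. The paper's exact scheme is
    not given in the abstract -- this illustrates one common voting rule."""
    terms = set()
    for r in rankings:
        terms.update(r)
    scores = {}
    for t in terms:
        total = 0
        for r in rankings:
            total += r.index(t) if t in r else len(r)
        scores[t] = total / len(rankings)
    # Break score ties alphabetically so the output is deterministic.
    return sorted(terms, key=lambda t: (scores[t], t))

# Hypothetical ranked outputs of three term-recognition algorithms.
a = ["gene expression", "cell", "protein binding"]
b = ["cell", "gene expression", "enzyme"]
c = ["gene expression", "protein binding", "cell"]
combined = vote([a, b, c])  # "gene expression" ranks first overall
```

Such rank aggregation rewards terms that several algorithms agree on, which is consistent with the voting approach outperforming any single algorithm on one of the two corpora.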