816 results for "Mentally disabled person"
Abstract:
Investigations of the factor structure of the Alcohol Use Disorders Identification Test (AUDIT) have produced conflicting results. The current study assessed the factor structure of the AUDIT for a group of Mentally Disordered Offenders (MDOs) and examined the pattern of scoring in specific subgroups. The sample comprised 2005 MDOs who completed a battery of tests including the AUDIT. Confirmatory factor analyses revealed that a two-factor solution – alcohol consumption and alcohol-related consequences – provided the best data fit for AUDIT scores. A three-factor solution provided an equally good fit, but the second and third factors were highly correlated and a measure of parsimony also favoured the two-factor solution. This study provides useful information on the factor structure of the AUDIT amongst a large MDO population, while also highlighting the difficulties associated with the presence of people with mental health problems in the criminal justice system.
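The abstract notes that "a measure of parsimony" favoured the two-factor solution when both solutions fit equally well. The specific index is not named here; a common choice in confirmatory factor analysis is the Bayesian Information Criterion (BIC), which penalises extra parameters. The following sketch illustrates the principle with hypothetical log-likelihood and parameter counts (all numeric values are illustrative, not taken from the study):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # Bayesian Information Criterion: lower is better.
    # The n_params * ln(n_obs) term penalises less parsimonious models.
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical values: two models with near-identical likelihoods,
# but the three-factor model uses more free parameters (n = 2005 as in the study).
two_factor = bic(log_likelihood=-12000.0, n_params=21, n_obs=2005)
three_factor = bic(log_likelihood=-11995.0, n_params=23, n_obs=2005)

# With an almost equal fit, the extra parameters tip the balance
# toward the two-factor solution.
print(two_factor < three_factor)
```

When two models fit the data comparably, an information criterion of this kind formalises the preference for the simpler one, which matches the study's conclusion.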
Abstract:
SEMAINE has created a large audiovisual database as part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally coloured conversation. Data used to build the agents came from interactions between users and an operator simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed appropriate nonverbal behaviour) and Semi-automatic SAL (designed so that users' experience approximated interacting with a machine). We then recorded user interactions with the developed system, Automatic SAL, comparing the most communicatively competent version to versions with reduced nonverbal skills. High-quality recordings were captured by five high-resolution, high-framerate cameras and four microphones, recorded synchronously. The recordings cover 150 participants, for a total of 959 conversations with individual SAL characters, each lasting approximately 5 minutes. Solid SAL recordings are transcribed and extensively annotated: 6-8 raters per clip traced five affective dimensions and 27 associated categories. Other scenarios are labelled on the same pattern, but less fully. Additional information includes FACS annotation on selected extracts, identification of laughs, nods, and shakes, and measures of user engagement with the automatic system. The material is available through a web-accessible database.
Abstract:
In this study, a broadly representative sample of clients in the City of Westminster receiving Care in the Community for reasons of mental ill-health was interviewed regarding their experiences of, and levels of satisfaction with, the services provided. The results reveal the vulnerability of service users, the benefits of community care, the high regard the majority have for their helpers, the limitations imposed by scarce resources, and the negative effects of only loose co-ordination between health and social services. Respondents also provide a rich source of data on how services might be improved.
Abstract:
This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and the AR face recognition database, with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database and facial identification performance on the AR database are comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with bimodal systems based on multicondition model training or missing-feature decoding alone.
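The core idea of feature-level fusion with a cosine-similarity comparison can be sketched as follows. This is a minimal illustration, not the paper's method: the paper's actual multimodal representation and modified cosine similarity are not specified in the abstract, so the normalisation step and function names below are assumptions.

```python
import numpy as np

def fuse_features(speech_feat, face_feat):
    """Concatenate two modality feature vectors into one bimodal vector.

    Each modality is z-normalised first, so that vastly differing
    scales and feature sizes do not let one modality dominate.
    (Illustrative choice; the paper's representation may differ.)
    """
    s = (speech_feat - speech_feat.mean()) / (speech_feat.std() + 1e-8)
    f = (face_feat - face_feat.mean()) / (face_feat.std() + 1e-8)
    return np.concatenate([s, f])

def cosine_similarity(a, b):
    # Standard cosine similarity between two fused feature vectors;
    # identification picks the enrolled person maximising this score.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage: compare a test sample against one enrolled template.
enrolled = fuse_features(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0]))
test = fuse_features(np.array([1.1, 2.0, 2.9]), np.array([4.2, 4.9]))
score = cosine_similarity(enrolled, test)
```

In a full system, each enrolled person would have such a template, and the test vector would be scored against all of them, with the highest similarity deciding the identity.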