877 results for hybrid human-computer


Relevance: 80.00%

Abstract:

The power back-off performance of a new variant of the power-combining Class-E amplifier under different amplitude-modulation schemes, namely continuous wave (CW), envelope elimination and restoration (EER), envelope tracking (ET) and outphasing, is investigated for the first time in this study. Finite DC-feed inductances, rather than the massive RF chokes used in the classic single-ended Class-E power amplifier (PA), are derived from an approximate yet effective frequency-domain circuit analysis and provide the means to increase the modulation bandwidth by up to 80% over the classic single-ended Class-E PA. This increased modulation bandwidth is required for linearity improvement in EER/ET transmitters. The modified output load network of the power-combining Class-E amplifier, adopting a three-harmonic-termination technique, relaxes the design specifications of the additional filtering block typically required at the output stage of the transmitter chain. Qualitative agreement between simulation and measurement results was achieved for all four schemes, with the ET technique proving superior to the other schemes. When the PA is used within the ET scheme, an increase in average drain efficiency of up to 40% with respect to CW excitation was obtained for a multi-carrier input signal with a 12 dB peak-to-average power ratio. © 2011 The Institution of Engineering and Technology.
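As a rough illustration of the multi-carrier excitation mentioned above (the tone count, frequencies, and sample rate below are hypothetical, not taken from the study), the following sketch generates an equal-amplitude multi-tone signal and computes its peak-to-average power ratio, the quantity quoted as 12 dB for the test signal.

```python
import numpy as np

# Hypothetical multi-carrier test signal: equal-amplitude tones with random
# phases, illustrating the peak-to-average power ratio (PAPR) figure above.
rng = np.random.default_rng(0)
n_tones = 16                              # assumed number of carriers
fs = 1e6                                  # sample rate in Hz (illustrative)
t = np.arange(0, 1e-3, 1 / fs)            # 1 ms observation window
freqs = 10e3 + 5e3 * np.arange(n_tones)   # tone frequencies in Hz (assumed)
phases = rng.uniform(0, 2 * np.pi, n_tones)

signal = sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

# PAPR in dB: peak instantaneous power over mean power.
p_inst = signal ** 2
papr_db = 10 * np.log10(p_inst.max() / p_inst.mean())
print(f"PAPR = {papr_db:.1f} dB")
```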

Relevance: 80.00%

Abstract:

In a human-computer dialogue system, the dialogue strategy can range from very restrictive to highly flexible. Each dialogue style has its pros and cons, and a dialogue system needs to select the most appropriate style for a given user. During the course of interaction, the dialogue style can change based on a user's responses and the system's observations of the user. This allows a dialogue system to understand a user better and provide a more suitable way of communicating. Since measures of the quality of the user's interaction with the system can be incomplete and uncertain, frameworks for reasoning with uncertain and incomplete information can help the system make better decisions when it chooses a dialogue strategy. In this paper, we investigate how to select a dialogue strategy by aggregating the factors detected during the interaction with the user. For this purpose, we use probabilistic logic programming (PLP) to model probabilistic knowledge about how these factors affect the degree of freedom of a dialogue. When a dialogue system needs to know which strategy is more suitable, an appropriate query can be executed against the PLP and a probabilistic solution with a degree of satisfaction is returned. The degree of satisfaction reveals how much the system can trust the probability attached to the solution.
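The paper's PLP formalism is not reproduced here, but a minimal sketch of the underlying idea, aggregating uncertain factors into a strategy recommendation and reporting how complete the evidence was, might look as follows (the factor names, weights, and threshold are invented for illustration).

```python
# Toy sketch (not the authors' PLP model): combine uncertain observations about
# a user into a probability that a flexible dialogue style is suitable, plus a
# crude "degree of satisfaction" reflecting how much expected evidence was seen.

def choose_strategy(observations):
    """observations: dict mapping factor name -> (prob_supports_flexible, weight)."""
    expected_factors = {"task_success", "response_delay", "user_expertise"}
    known = {k: v for k, v in observations.items() if k in expected_factors}

    if not known:
        # No evidence at all: fall back to the restrictive style.
        return "restrictive", 0.5, 0.0

    # Weighted average of the per-factor probabilities.
    total_w = sum(w for _, w in known.values())
    p_flexible = sum(p * w for p, w in known.values()) / total_w

    # Degree of satisfaction: fraction of expected evidence actually observed.
    satisfaction = len(known) / len(expected_factors)

    strategy = "flexible" if p_flexible >= 0.5 else "restrictive"
    return strategy, p_flexible, satisfaction

strategy, prob, sat = choose_strategy({
    "task_success": (0.8, 2.0),    # hypothetical detected factors
    "user_expertise": (0.6, 1.0),
})
print(strategy, round(prob, 2), round(sat, 2))
```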

Relevance: 80.00%

Abstract:

This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and the AR face recognition database, with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database and facial identification performance on the AR database are comparable with results reported in the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with bimodal systems based on multicondition model training or missing-feature decoding alone.
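A minimal sketch of feature-level fusion with a cosine score is given below; it uses a plain cosine similarity over per-modality-normalised, concatenated features, whereas the paper introduces a modified cosine similarity, and all feature dimensions here are placeholders.

```python
import numpy as np

def fuse(audio_feat, face_feat):
    """Concatenate per-modality L2-normalised features so that neither modality
    dominates purely because of its dimensionality or scale (illustrative only)."""
    a = audio_feat / (np.linalg.norm(audio_feat) + 1e-12)
    f = face_feat / (np.linalg.norm(face_feat) + 1e-12)
    return np.concatenate([a, f])

def cosine_score(x, y):
    """Plain cosine similarity between two fused feature vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

# Hypothetical enrolment and test features; sizes chosen arbitrarily.
enrol = fuse(np.random.randn(60), np.random.randn(200))
test = fuse(np.random.randn(60), np.random.randn(200))
print("similarity:", cosine_score(enrol, test))
```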

Relevance: 80.00%

Abstract:

A finite element model of a single cell was created and used to investigate the effects of ageing on the biophysical stimuli generated within a cell. The major cellular components were incorporated in the model: the membrane, cytoplasm, nucleus, microtubules, actin filaments, intermediate filaments, nuclear lamina, and chromatin. The model used multiple sets of tensegrity structures. Viscoelastic properties were assigned to the continuum components. To corroborate the model, a simulation of Atomic Force Microscopy (AFM) indentation was performed, and the resulting force/indentation curve lay within the range of experimental results.

Ageing was simulated both by increasing membrane stiffness (thereby modelling membrane peroxidation with age) and by decreasing the density of cytoskeletal elements (thereby modelling reduced actin density with age). Comparing normal and aged cells under indentation predicts that aged cells have a lower membrane area subjected to high strain compared to young cells, but the difference, surprisingly, is very small and would not be measurable experimentally. Ageing is predicted to have a more significant effect on strain deep in the nucleus. These results show that computation of biophysical stimuli within cells is achievable with single-cell computational models whose force/displacement behaviour is within experimentally observed ranges. The models suggest only small, though possibly physiologically significant, differences in internal biophysical stimuli between normal and aged cells.
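For comparison with the kind of force/indentation data used to corroborate the model, a standard Hertz contact approximation for a spherical AFM tip can be evaluated in a few lines; the modulus, Poisson's ratio, and tip radius below are assumed values, and this analytical relation is not the finite element model described above.

```python
import numpy as np

# Hertz contact approximation for a spherical tip indenting an elastic
# half-space: F = (4/3) * E / (1 - nu^2) * sqrt(R) * d^(3/2).
E = 1e3          # assumed effective Young's modulus of the cell (Pa)
nu = 0.5         # assumed Poisson's ratio (nearly incompressible)
R = 2.5e-6       # assumed AFM tip radius (m)

depth = np.linspace(0, 1e-6, 50)                                    # indentation (m)
force = (4 / 3) * (E / (1 - nu ** 2)) * np.sqrt(R) * depth ** 1.5   # force (N)

print(f"force at 1 um indentation: {force[-1] * 1e9:.2f} nN")
```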

Relevance: 80.00%

Abstract:

This article outlines the ongoing development of a locative smartphone app for iPhone and Android phones entitled The Belfast Soundwalks Project. Drawing upon a method known as soundwalking, the app aims to engage the public in sonic art through the creation of up to ten soundwalks within the city of Belfast. This paper discusses the use of GPS-enabled mobile devices in the creation of soundwalks in other cities. The authors identify various strategies for articulating an experience of listening in place as mediated by mobile technologies. The project aims to provide a platform for multiple artists to develop site-specific sound works which highlight the relationship between sound, place and community. The development of the app and the app interface are discussed, as are the methods employed to test and evaluate the project.
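A rough sketch of the GPS-triggered playback that such locative apps rely on is shown below; the coordinates, zone radii, and function names are hypothetical and not taken from The Belfast Soundwalks Project.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sound zones: (name, latitude, longitude, trigger radius in metres).
zones = [
    ("zone_a", 54.5973, -5.9301, 40.0),
    ("zone_b", 54.5990, -5.9250, 60.0),
]

def active_zones(lat, lon):
    """Return the names of zones whose trigger radius contains the listener."""
    return [name for name, zlat, zlon, radius in zones
            if haversine_m(lat, lon, zlat, zlon) <= radius]

print(active_zones(54.5974, -5.9302))
```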

Relevance: 80.00%

Abstract:

Despite the importance of laughter in social interactions, it remains little studied in affective computing. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received almost no attention. The aim of this study is twofold: the first is an investigation into observers' perception of laughter states (hilarious, social, awkward, fake, and non-laughter) based on body movements alone, through their categorization of avatars animated with natural and acted motion capture data. Significant differences in torso and limb movements were found between animations perceived as containing laughter and those perceived as non-laughter. Hilarious laughter also differed from social laughter in the amount of bending of the spine, the amount of shoulder rotation, and the amount of hand movement. The body movement features indicative of laughter differed between sitting and standing avatar postures. Based on the positive findings of this perceptual study, the second aim is to investigate the possibility of automatically predicting the distributions of observers' ratings for the laughter states. The findings show that the automated laughter recognition rates approach human rating levels, with the Random Forest method yielding the best performance.
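A minimal sketch of the second aim, predicting rating distributions with a Random Forest, is given below using scikit-learn; the feature set, dataset size, and train/test split are placeholders rather than the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: each row holds body-movement features for one animation
# (e.g. spine bending, shoulder rotation, hand movement); the target is the
# distribution of observer ratings over the five laughter states.
rng = np.random.default_rng(0)
X = rng.random((200, 6))                       # placeholder feature matrix
y = rng.random((200, 5))
y = y / y.sum(axis=1, keepdims=True)           # rating distributions over 5 states

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])
pred = np.clip(pred, 0, None)
pred = pred / pred.sum(axis=1, keepdims=True)  # renormalise to valid distributions
print(pred[0])
```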

Relevance: 80.00%

Abstract:

Social signals, and the interpretation of the information they carry, are of high importance in Human Computer Interaction. Often used for affect recognition, the cues within these signals are displayed in various modalities. Fusion of multi-modal signals is a natural and interesting way to improve the automatic classification of emotions transported in social signals. In most existing studies of uni-modal affect recognition as well as multi-modal fusion, decisions are forced onto fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications. We compare the results of segmentation-based uni-modal classification and fusion schemes with the event-driven fusion approach. The evaluation is carried out via detection of enjoyment episodes within the audiovisual Belfast Story-Telling Corpus.
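The following toy sketch illustrates event-driven accumulation of asynchronous multi-modal events with temporal decay, so that a prediction can be read off at any instant rather than per fixed segment; the modality names, decay constant, and threshold are invented, and this is not the paper's fusion method.

```python
import math

DECAY_S = 2.0   # assumed time constant for event influence (seconds)

class EventFusion:
    """Accumulate asynchronous events from several modalities with decay."""

    def __init__(self, dims):
        self.state = {d: 0.0 for d in dims}
        self.last_t = 0.0

    def _decay(self, t):
        factor = math.exp(-(t - self.last_t) / DECAY_S)
        self.state = {d: v * factor for d, v in self.state.items()}
        self.last_t = t

    def add_event(self, t, dim, weight):
        """Register an event (e.g. a smile or a laugh burst) at time t."""
        self._decay(t)
        self.state[dim] += weight

    def predict(self, t, threshold=1.0):
        """Read off a binary prediction at any instant."""
        self._decay(t)
        return sum(self.state.values()) >= threshold

fusion = EventFusion(["face_smile", "audio_laugh"])
fusion.add_event(0.5, "face_smile", 0.6)
fusion.add_event(1.1, "audio_laugh", 0.8)
print("enjoyment episode:", fusion.predict(1.5))
```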

Relevance: 80.00%

Abstract:

The research presented in this paper proposes a set of design guidelines, in the context of a Parkinson's Disease (PD) rehabilitation design framework, for the development of serious games for the physical therapy of people with PD. The game design guidelines provided in the paper are informed by a review of the literature and by lessons learned from the pilot testing of serious games designed to suit the requirements of rehabilitation of patients with Parkinson's Disease. The proposed PD rehabilitation design framework employed for the games' pilot testing utilises a low-cost, customised, off-the-shelf motion capture system (employing commercial game controllers) developed to cater for the unique requirements of the physical therapy of people with PD. Although design guidelines have been proposed before for the design of serious games in health, this is the first research paper to present guidelines for the design of serious games specifically for PD motor rehabilitation.

Relevance: 80.00%

Abstract:

This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.

Relevance: 80.00%

Abstract:

In order to use virtual reality as a sport-analysis tool, we need to be sure that an immersed athlete reacts realistically in a virtual environment. This has been validated for a real handball goalkeeper facing a virtual thrower. However, it is not yet known which visual variables induce realistic motor behavior in the immersed handball goalkeeper. In this study, we used virtual reality to dissociate the visual information related to the movements of the player from the visual information related to the trajectory of the ball. Thus, the aim is to evaluate the relative influence of these different sources of visual information on the goalkeeper's motor behavior. We tested 10 handball goalkeepers who had to predict the final position of the virtual ball in the goal when facing the following: only the throwing action of the attacking player (TA condition), only the resulting ball trajectory (BA condition), or both the throwing action of the attacking player and the resulting ball trajectory (TB condition). Here we show that performance was better in the BA and TB conditions but, contrary to expectations, substantially worse in the TA condition. A significant effect of ball landing zone does, however, suggest that the relative importance of visual information from the player and from the ball depends on the targeted zone in the goal. In some cases, body-based cues embedded in the throwing action may have only a minor influence compared with the ball trajectory, and vice versa. Kinematic analysis was then combined with these results to determine why such differences occur depending on the ball landing zone and, consequently, how this can clarify the role of different sources of visual information on the motor behavior of an athlete immersed in a virtual environment.