623 results for Nonverbal Decoding
Abstract:
OBJECTIVE To replicate a training program in non-verbal communication based on the theoretical framework of interpersonal communication and non-verbal coding, valuing aspects of aging from the perspective of active aging, and to verify its current relevance through the content assimilation index measured 90 days after its application (mediate assessment). METHOD A descriptive, exploratory field study was conducted in three hospitals under direct administration of the state of São Paulo that cater exclusively to Unified Health System (SUS) patients. The training lasted 12 hours, divided into three meetings, and was applied to 102 health professionals. RESULTS The mediate content assimilation index was very satisfactory or satisfactory for 82.9% of participants. CONCLUSION The replication of the program proved to be relevant and up to date in the setting of hospital services, while remaining effective for healthcare professionals.
Abstract:
Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at the single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a small number of representative voltage topographies. The degree to which these topographies are present across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features - namely, voltage topographies - as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.
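The decoding pipeline described above (fit a GMM to single-trial voltage topographies, then classify conditions from how strongly each template map is expressed at given latencies) can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn's GaussianMixture and LogisticRegression; the array shapes, number of template maps, and window averaging are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of GMM-based single-trial topography decoding on synthetic data.
# Shapes, number of template maps, and window averaging are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_times, n_channels, n_maps = 200, 100, 32, 6

X = rng.normal(size=(n_trials, n_times, n_channels))   # single-trial EEG epochs
y = rng.integers(0, 2, size=n_trials)                   # two experimental conditions

# 1) Fit a GMM to all voltage topographies pooled across trials and time points,
#    reducing the data to a few representative template maps.
topographies = X.reshape(-1, n_channels)
gmm = GaussianMixture(n_components=n_maps, covariance_type="diag",
                      random_state=0).fit(topographies)

# 2) For each trial, quantify how strongly each template map is present at each
#    latency (posterior probability), then average within coarse latency windows.
presence = gmm.predict_proba(topographies).reshape(n_trials, n_times, n_maps)
n_windows = 10
features = presence.reshape(n_trials, n_windows, -1, n_maps).mean(axis=2)
features = features.reshape(n_trials, -1)               # (trials, windows * maps)

# 3) Classify conditions from the map-presence features with cross-validation.
scores = cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```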
Abstract:
BACKGROUND: Analyses of brain responses to external stimuli are typically based on means computed across conditions. However, in many cognitive and clinical applications, taking their variability across trials into account has turned out to be statistically more sensitive than comparing their means. NEW METHOD: In this study we present a novel implementation of single-trial topographic analysis (STTA) for discriminating auditory evoked potentials within predefined time windows. This analysis was previously introduced for extracting spatio-temporal features at the level of the whole neural response; adapting the STTA to specific time windows is an essential step for comparing its performance to other time-window-based algorithms. RESULTS: We analyzed responses to standard vs. deviant sounds and showed that the new implementation of the STTA gives above-chance decoding results in all subjects (compared to 7 out of 11 with the original method). In comatose patients, the improvement in decoding performance was even more pronounced than in healthy controls and doubled the number of significant results. COMPARISON WITH EXISTING METHOD(S): We compared the results obtained with the new STTA to those based on logistic regression in healthy controls and in patients. In healthy controls, logistic regression performed better; however, only the new STTA provided significant results in comatose patients at the group level. CONCLUSIONS: Our results provide quantitative evidence that systematically investigating the accuracy of established methods in normal and clinical populations is an essential step for optimizing decoding performance.
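For comparison, a time-window-based decoder of the kind the STTA is benchmarked against can be sketched as follows: average the topography within a predefined window, classify single trials with logistic regression, and test significance against chance with a label-permutation test. The synthetic data, window boundaries, and use of scikit-learn's permutation_test_score are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch of time-window based decoding with a permutation test against
# chance; data and window boundaries are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(1)
n_trials, n_times, n_channels = 150, 120, 32
X = rng.normal(size=(n_trials, n_times, n_channels))     # standard vs. deviant epochs
y = rng.integers(0, 2, size=n_trials)

# Average the topography inside a predefined window of interest.
t_start, t_stop = 40, 70                                  # window in samples (assumed)
window_features = X[:, t_start:t_stop, :].mean(axis=1)    # (trials, channels)

# Single-trial classification plus a label-permutation test against chance level.
score, perm_scores, p_value = permutation_test_score(
    LogisticRegression(max_iter=1000), window_features, y,
    cv=5, n_permutations=200, random_state=1)
print(f"window accuracy {score:.2f}, p = {p_value:.3f} vs. chance")
```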
Abstract:
BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique that is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach to study the time-course of single decisions during a task in which subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of subjects. The typical time period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies at which decisions were made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' responses (∼340 ms before the response). Importantly, this decision time correlated with the drift rates of a diffusion model evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making by accumulating patterns of topographic EEG activity at the single-trial level.
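A highly simplified sketch of the accumulation idea is given below, assuming two hypothetical template topographies (one per choice) and a fixed evidence threshold; it accumulates the similarity of each single-trial topography to the templates over time and reads out both the predicted decision and its latency. The templates, threshold, and synthetic data are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch of decoding single decisions by accumulating, over time, the
# similarity of each topography to two hypothetical templates (accept / reject).
# Templates, threshold, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_times, n_channels = 100, 300, 32
X = rng.normal(size=(n_trials, n_times, n_channels))      # single-trial epochs

template_accept = rng.normal(size=n_channels)              # assumed choice templates
template_reject = rng.normal(size=n_channels)

def decode_trial(trial, threshold=25.0):
    """Accumulate evidence and return (predicted_accept, decision_sample)."""
    evidence = 0.0
    for t, topo in enumerate(trial):
        # Spatial correlation of the current topography with each template.
        e_accept = np.corrcoef(topo, template_accept)[0, 1]
        e_reject = np.corrcoef(topo, template_reject)[0, 1]
        evidence += e_accept - e_reject                     # drift toward one bound
        if abs(evidence) >= threshold:
            return evidence > 0, t                          # decision and its latency
    return evidence > 0, n_times - 1                        # no bound crossed in time

choices, latencies = zip(*(decode_trial(trial) for trial in X))
print(f"mean predicted decision latency: {np.mean(latencies):.0f} samples")
```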
Abstract:
Four studies investigated the reliability and validity of thin slices of nonverbal behavior from social interactions including (1) how well individual slices of a given behavior predict other slices in the same interaction; (2) how well a slice of a given behavior represents the entirety of that behavior within an interaction; (3) how long a slice is necessary to sufficiently represent the entirety of a behavior within an interaction; (4) which slices best capture the entirety of behavior, across different behaviors; and (5) which behaviors (of six measured behaviors) are best captured by slices. Notable findings included strong reliability and validity for thin slices of gaze and nods, and that a 1.5 min slice from the start of an interaction may adequately represent some behaviors. Results provide useful information to researchers making decisions about slice measurement of behavior.
Abstract:
Nonverbal behavior coding is typically conducted by hand. To remedy this time- and resource-intensive undertaking, we illustrate how nonverbal social sensing, defined as the automated recording and extraction of nonverbal behavior via ubiquitous social sensing platforms, can be achieved. More precisely, we show how and what kind of nonverbal cues can be extracted, and to what extent automatically extracted nonverbal cues are valid, using an illustrative research example. In a job interview, the applicant's vocal and visual nonverbal immediacy behavior was automatically sensed and extracted. Results show that the applicant's nonverbal behavior can be validly extracted. Moreover, both visual and vocal applicant nonverbal behavior predict the recruiter's hiring decision, which is in line with previous findings on manually coded applicant nonverbal behavior. Finally, the applicant's average turn duration, tempo variation, and gazing best predict the recruiter's hiring decision. Results and implications of nonverbal social sensing for future research are discussed.
Abstract:
Understanding the basis on which recruiters form hirability impressions for a job applicant is a key issue in organizational psychology and can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of conducting such a task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant, but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
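A minimal sketch of the regression step is shown below, assuming synthetic applicant-audio and interviewer-visual feature matrices and scikit-learn's Ridge estimator; it reports cross-validated variance explained, analogous to the R² figure quoted above. The feature names and data are illustrative assumptions, not the paper's extracted cues.

```python
# Minimal sketch of predicting hirability scores from interview behavioral cues
# with ridge regression; feature groups and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_interviews = 60
applicant_audio = rng.normal(size=(n_interviews, 8))      # e.g. speaking time, pitch variation
interviewer_visual = rng.normal(size=(n_interviews, 6))   # e.g. nods, gaze toward applicant
X = np.hstack([applicant_audio, interviewer_visual])
hirability = rng.normal(size=n_interviews)                 # recruiter-rated scores

r2 = cross_val_score(Ridge(alpha=1.0), X, hirability, cv=5, scoring="r2")
print(f"cross-validated R^2 (variance explained): {r2.mean():.2f}")
```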
Abstract:
Directional cell growth requires that cells read and interpret shallow chemical gradients, but how the gradient directional information is identified remains elusive. We use single-cell analysis and mathematical modeling to define the cellular gradient decoding network in yeast. Our results demonstrate that the spatial information of the gradient signal is read locally within the polarity site complex using double-positive feedback between the GTPase Cdc42 and trafficking of the receptor Ste2. Spatial decoding critically depends on low Cdc42 activity, which is maintained by the MAPK Fus3 through sequestration of the Cdc42 activator Cdc24. Deregulated Cdc42 or Ste2 trafficking prevents gradient decoding and leads to mis-oriented growth. Our work discovers how a conserved set of components assembles a network integrating signal intensity and directionality to decode the spatial information contained in chemical gradients.
Abstract:
In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article On Maximum-Likelihood Detection and the Search for the Closest Lattice Point, which was published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
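The core of a sphere decoder can be sketched as a depth-first search over the QR decomposition of the channel matrix, pruning branches whose partial cost already exceeds the best candidate found so far. The sketch below assumes a small real-valued system with a BPSK-like alphabet; it illustrates the closest-lattice-point search in general, not the specific modifications studied in the thesis.

```python
# Minimal sketch of a sphere decoder for min_x ||y - H x||^2 over a finite
# symbol alphabet; dimensions, alphabet, and pruning are illustrative assumptions
# and do not reproduce the modifications studied in the thesis.
import numpy as np

def sphere_decode(H, y, alphabet):
    """Depth-first search over the QR decomposition of H with radius pruning."""
    m = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                                    # rotated observation
    best = {"cost": np.inf, "x": None}

    def search(level, partial, cost):
        if cost >= best["cost"]:                   # prune: worse than best point so far
            return
        if level < 0:                              # full candidate vector reached
            best["cost"], best["x"] = cost, partial.copy()
            return
        for symbol in alphabet:                    # enumerate symbols at this level
            partial[level] = symbol
            residual = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, cost + residual ** 2)

    search(m - 1, np.zeros(m), 0.0)
    return best["x"], best["cost"]

# Toy example: 4x4 real channel, BPSK-like alphabet, noisy observation.
rng = np.random.default_rng(4)
H = rng.normal(size=(4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.1 * rng.normal(size=4)
x_hat, cost = sphere_decode(H, y, alphabet=(-1.0, 1.0))
print("decoded:", x_hat, "transmitted:", x_true)
```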
Abstract:
Bioinformatics applies computers to problems in molecular biology. Previous research has not addressed edit metric decoders. Decoders for quaternary edit metric codes are finding use in bioinformatics problems with applications to DNA. By using side effect machines, we hope to provide efficient decoding algorithms for this open problem. Two ideas for decoding algorithms are presented and examined. Both decoders use Side Effect Machines (SEMs), which are generalizations of finite state automata. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structure of subclassifications. The goal is to examine these techniques and provide new decoders for existing codes. Ideas for best practices in the creation of these two types of new edit metric decoders are presented.
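As a baseline for what such decoders must accomplish, a brute-force nearest-codeword decoder under the edit (Levenshtein) metric for a quaternary DNA alphabet can be sketched as follows; the toy codebook is an illustrative assumption, and the side-effect-machine approach aims to replace exactly this exhaustive search with cheaper classification.

```python
# Minimal sketch of nearest-codeword decoding under the edit (Levenshtein) metric
# for a quaternary DNA alphabet; the toy codebook and the exhaustive search are
# illustrative stand-ins for the side-effect-machine decoders discussed above.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                   # deletion
                            curr[j - 1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))     # substitution (or match)
        prev = curr
    return prev[-1]

def decode(received: str, codebook: list[str]) -> str:
    """Return the codeword closest to the received word in edit distance."""
    return min(codebook, key=lambda codeword: edit_distance(received, codeword))

codebook = ["ACGTACGT", "TTGCAATG", "GGATCCAA", "CATGCATG"]
print(decode("ACGTTCGT", codebook))   # one substitution away from ACGTACGT
```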
Abstract:
The objective of this article is to assess whether synergology belongs to the domain of science or whether it is merely a pseudoscience of nonverbal decoding. The text comprises five parts. In the first part, we describe important elements of the scientific method. In the second and third parts, we briefly present synergology and examine whether it meets the criteria of science. The fourth part reports on a formal notice (mise en demeure) sent to Patrick Lagacé and La Presse over a series of articles that presented a very critical view of this approach. Finally, the use of arguments that are irrelevant from a scientific standpoint, an inappropriate attempt to lend credibility to synergology through a formal notice, and an unjustified appeal to ethics lead us to conclude that synergology is a pseudoscience of nonverbal decoding.
Abstract:
Modeling nonlinear systems using Volterra series is a century-old method, but practical realizations were long hampered by inadequate hardware for handling the increased computational complexity that its use entails. Interest has recently been renewed in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters designed this way can work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus instead on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal, but this linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. A quadratic predictor is therefore designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
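The idea of augmenting a linear predictor with a quadratic (second-order Volterra) term can be sketched as below, where both predictors are fitted by least squares on a synthetic signal with a mild quadratic nonlinearity and compared by prediction-error power; the signal, predictor memory, and fitting procedure are illustrative assumptions, not the paper's DPCM implementation.

```python
# Minimal sketch of a linear predictor versus a linear-plus-quadratic
# (second-order Volterra) predictor, both fitted by least squares on a synthetic
# signal; signal, memory length, and fitting are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = np.sin(0.07 * np.arange(n)) + 0.05 * rng.normal(size=n)
x = x + 0.2 * x ** 2                       # inject a mild quadratic nonlinearity

# Two-sample memory: predict x[k] from x[k-1] and x[k-2].
past1, past2, target = x[1:-1], x[:-2], x[2:]

Phi_linear = np.column_stack([past1, past2])
Phi_quadratic = np.column_stack([past1, past2,                       # linear terms
                                 past1**2, past1*past2, past2**2])   # Volterra products

for name, Phi in [("linear", Phi_linear), ("linear + quadratic", Phi_quadratic)]:
    coeffs, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    mse = np.mean((target - Phi @ coeffs) ** 2)                       # prediction-error power
    print(f"{name:>18} predictor MSE: {mse:.5f}")
```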
Abstract:
The article attempts to explain the main paradox faced by Canada in formulating its foreign policy on international security. Explained in economic and political terms, this paradox consists in the contradiction between Canada's ability to achieve strategic goals that serve its own national interest and its dependence on the United States. The first section outlines three representative examples with which to evaluate this paradox: Canada's position in the North American security regime, US-Canada economic security relations, and the range of possibilities for action available to Canada as a middle power. The second section suggests that a liberal agenda, especially concerning ethical issues, has been established by Canada to minimize this paradox. By pursuing this agenda, Canada is able to reaffirm its national identity and therefore its independence from the United States. The third section evaluates both this paradox and the reaffirmation of Canadian identity during the governments of Jean Chrétien (1993-2003), Paul Martin (2003-2006), and Stephen Harper (2006).