795 results for Slot-based task-splitting algorithms


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time when single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach for studying the time-course of single decisions, during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time-period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' responses (∼340 ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines, and show that we obtain significant results for more subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at the single-trial level.
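
The accumulation idea behind the decoding technique can be sketched as follows: per-frame probabilities that the observed topography matches a decision prototype are converted into log-odds and summed until a bound is crossed. This is a minimal illustrative sketch, not the paper's implementation; `frame_probs` and the threshold are hypothetical inputs.

```python
import math

def accumulate_decision(frame_probs, threshold=5.0):
    """Accumulate log-odds evidence for 'accept' vs 'reject' across EEG
    time frames. frame_probs holds per-frame probabilities that the
    observed voltage topography matches the 'accept' prototype
    (hypothetical input). Returns (decision, frame_index) as soon as
    the cumulative log-odds cross +/- threshold, or (None, None)."""
    evidence = 0.0
    for i, p in enumerate(frame_probs):
        p = min(max(p, 1e-9), 1.0 - 1e-9)     # guard against log(0)
        evidence += math.log(p / (1.0 - p))   # per-frame log-odds
        if abs(evidence) >= threshold:
            return ("accept" if evidence > 0 else "reject", i)
    return (None, None)

# Frames consistently favouring 'accept' trigger an early decision.
decision, frame = accumulate_decision([0.9] * 10)
```

Harder trials, where frame probabilities hover closer to 0.5, accumulate evidence more slowly and therefore cross the bound later, mirroring the easy-vs-hard latency difference reported in the abstract.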

Relevance:

30.00%

Publisher:

Abstract:

The subject of this master's thesis was the development of a context-based reminder service for mobile devices. Possible sources of context were identified and analyzed. One such source is geographical location obtained via a GPS receiver. Because these receivers consume a lot of power, techniques and algorithms for reducing power consumption were proposed and analyzed. The service was implemented as an application on a Series 60 mobile phone. The application requirements, user interface, and architecture are presented. The end-user experiences are discussed, and possible future development and research areas are presented.
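
One common power-saving technique for location-based reminders, and a plausible instance of the kind analyzed in the thesis (though not necessarily its exact algorithm), is to duty-cycle the GPS receiver: sleep longer when the user cannot possibly have reached the nearest reminder location yet. All names and thresholds below are illustrative.

```python
def next_gps_poll_interval(distance_m, speed_mps, min_interval_s=10, max_interval_s=600):
    """Return how long (in seconds) the GPS receiver may sleep before
    the next position fix. The idea: if the nearest reminder location
    is distance_m away and the user moves at most speed_mps, no
    reminder can trigger before distance_m / speed_mps seconds pass.
    The interval bounds are illustrative, not the thesis's values."""
    if speed_mps <= 0:
        return max_interval_s                # stationary: poll rarely
    eta_s = distance_m / speed_mps           # soonest possible arrival
    return max(min_interval_s, min(max_interval_s, int(eta_s)))

# 1 km away at walking speed (~2 m/s): safe to sleep ~500 s.
interval = next_gps_poll_interval(1000, 2)
```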

Relevance:

30.00%

Publisher:

Abstract:

Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over another. The states of the Influence Model encode audio/visual features automatically extracted from our videos using state-of-the-art algorithms. Our results are reported in terms of the accuracy of audio/visual data fusion for speaker segmentation and the centrality measures used to characterize the extracted social network.
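
A basic centrality measure on such a weighted, directed influence graph is in/out-degree centrality; the toy weights below are invented for illustration (in the paper, link weights are estimated by the Influence Model from audio/visual cues).

```python
def degree_centrality(edges, nodes):
    """Weighted in/out-degree centrality for a directed social graph.

    edges: dict mapping (speaker_a, speaker_b) -> influence weight of
    a over b. Returns (in_degree, out_degree) dicts per node."""
    out_deg = {n: 0.0 for n in nodes}
    in_deg = {n: 0.0 for n in nodes}
    for (a, b), w in edges.items():
        out_deg[a] += w   # how much a influences others
        in_deg[b] += w    # how much b is influenced by others
    return in_deg, out_deg

nodes = ["A", "B", "C"]
edges = {("A", "B"): 0.75, ("A", "C"): 0.5, ("B", "A"): 0.25}
in_deg, out_deg = degree_centrality(edges, nodes)
# "A" has the largest out-degree: the most influential node in this toy graph.
```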

Relevance:

30.00%

Publisher:

Abstract:

In this work, we propose a copula-based method to generate synthetic gene expression data that accounts for marginal and joint probability distribution features captured from real data. Our method allows us to implant significant genes in the synthetic dataset in a controlled manner, making it possible to test new detection algorithms in more realistic settings.
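
The core copula idea can be sketched for two genes with a Gaussian copula: correlated latent normals are mapped to uniforms and then through arbitrary marginal quantile functions, so the joint dependence and the marginals are controlled separately. This is a simplified illustration, and the paper's method may differ in detail; the exponential marginal below is a toy stand-in for marginals fitted to real expression data.

```python
import math
import random
from statistics import NormalDist

def gaussian_copula_pair(rho, n, inv_cdf_x, inv_cdf_y, seed=0):
    """Draw n pairs whose dependence comes from a Gaussian copula with
    correlation rho, and whose marginals come from the supplied
    quantile (inverse-CDF) functions."""
    rng = random.Random(seed)
    std = NormalDist()
    samples = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # correlate the second latent normal with the first
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        # map to (0, 1), clipping away the exact endpoints
        u1 = min(max(std.cdf(z1), 1e-12), 1.0 - 1e-12)
        u2 = min(max(std.cdf(z2), 1e-12), 1.0 - 1e-12)
        samples.append((inv_cdf_x(u1), inv_cdf_y(u2)))
    return samples

# Toy marginal: exponential expression levels via inverse-CDF sampling.
inv_exp = lambda u: -math.log(1.0 - u)
pairs = gaussian_copula_pair(0.8, 1000, inv_exp, inv_exp)
```

Implanting a "significant" gene then amounts to choosing a shifted marginal for that gene in one of the groups while keeping the copula, i.e. the joint dependence structure, fixed.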

Relevance:

30.00%

Publisher:

Abstract:

In this paper we show how a nonlinear preprocessing of noisy speech signals, based on morphological filters, improves the performance of robust algorithms for pitch tracking (RAPT). This result is obtained with a very simple morphological filter; more sophisticated ones could improve the results even further. Mathematical morphology is widely used in image processing and has a great number of applications. Almost all of its formulations derived in the two-dimensional framework are easily adapted to the one-dimensional context.
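
As a one-dimensional illustration of this kind of filter (a generic flat-structuring-element opening, not necessarily the paper's exact filter), grayscale erosion and dilation reduce to sliding-window minima and maxima:

```python
def erode(signal, width):
    """1-D grayscale erosion: sliding-window minimum with a flat
    structuring element of the given width."""
    half = width // 2
    n = len(signal)
    return [min(signal[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def dilate(signal, width):
    """1-D grayscale dilation: sliding-window maximum."""
    half = width // 2
    n = len(signal)
    return [max(signal[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def opening(signal, width=3):
    """Morphological opening (erosion then dilation): removes narrow
    positive spikes while preserving broad waveform structure."""
    return dilate(erode(signal, width), width)

# A single-sample spike is suppressed; the wider plateau survives.
clean = opening([0, 0, 5, 0, 0, 1, 1, 1, 1, 0], width=3)
```

Applied to a noisy speech waveform or its spectrum, this kind of nonlinear smoothing suppresses impulsive noise that would otherwise mislead a pitch tracker.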

Relevance:

30.00%

Publisher:

Abstract:

The Commission on Classification and Terminology and the Commission on Epidemiology of the International League Against Epilepsy (ILAE) have charged a Task Force to revise concepts, definition, and classification of status epilepticus (SE). The proposed new definition of SE is as follows: Status epilepticus is a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms that lead to abnormally prolonged seizures (after time point t1). It is a condition that can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures. This definition is conceptual, with two operational dimensions: the first is the length of the seizure and the time point (t1) beyond which the seizure should be regarded as "continuous seizure activity." The second time point (t2) is the time of ongoing seizure activity after which there is a risk of long-term consequences. In the case of convulsive (tonic-clonic) SE, both time points (t1 at 5 min and t2 at 30 min) are based on animal experiments and clinical research. This evidence is incomplete, and there is furthermore considerable variation, so these time points should be considered the best estimates currently available. Data are not yet available for other forms of SE, but as knowledge and understanding increase, time points can be defined for specific forms of SE based on scientific evidence and incorporated into the definition, without changing the underlying concepts. A new diagnostic classification system of SE is proposed, which will provide a framework for clinical diagnosis, investigation, and therapeutic approaches for each patient. There are four axes: (1) semiology; (2) etiology; (3) electroencephalography (EEG) correlates; and (4) age. Axis 1 (semiology) lists different forms of SE divided into those with prominent motor symptoms, those without prominent motor symptoms, and currently indeterminate conditions (such as acute confusional states with epileptiform EEG patterns). Axis 2 (etiology) is divided into subcategories of known and unknown causes. Axis 3 (EEG correlates) adopts the latest recommendations by consensus panels to use the following descriptors for the EEG: name of pattern, morphology, location, time-related features, modulation, and effect of intervention. Finally, axis 4 divides age groups into neonatal, infancy, childhood, adolescence and adulthood, and elderly.

Relevance:

30.00%

Publisher:

Abstract:

Anthropomorphic model observers are mathematical algorithms which are applied to images with the ultimate goal of predicting human signal detection and classification accuracy across varieties of backgrounds, image acquisitions, and display conditions. A limitation of current channelized model observers is their inability to handle irregularly-shaped signals, which are common in clinical images, without a high number of directional channels. Here, we derive a new linear model observer based on convolution channels which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening with an eye filter (NPWE) observer. In analogy to the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning sizes from those of lesions down to those of microcalcifications, in 4-AFC breast tomosynthesis detection tasks, with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO observer outperformed humans for every signal with only one exception. Additive internal noise in the models allowed us to degrade model performance and match human performance. We could not match all human performances with a single internal noise component across all signal shape, size, and contrast conditions. This suggests that either the internal noise might vary across signals or that the model cannot entirely capture the human detection strategy. However, the FCO model offers an efficient way to apprehend human observer performance for non-symmetric signals.
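
Two ingredients named above, a linear template and additive internal noise in a 4-AFC task, can be sketched generically. This is not the FCO's actual channel construction; the template, images, and noise level below are toy values.

```python
import random

def linear_observer_score(image, template, internal_noise_sd, rng):
    """Decision variable of a linear model observer: the dot product of
    the image with a template, plus Gaussian internal noise (the term
    used to degrade model performance toward human levels)."""
    t = sum(p * w for p, w in zip(image, template))
    return t + rng.gauss(0.0, internal_noise_sd)

def four_afc_trial(signal_image, noise_images, template, internal_noise_sd, rng):
    """One 4-AFC detection trial: the observer chooses the alternative
    with the highest score; the trial is correct when that alternative
    is the signal-present image (placed first here)."""
    alternatives = [signal_image] + noise_images
    scores = [linear_observer_score(img, template, internal_noise_sd, rng)
              for img in alternatives]
    return scores.index(max(scores)) == 0

rng = random.Random(1)
template = [1.0, 1.0, 1.0]          # toy template, not real FCO channels
signal_present = [2.0, 2.0, 2.0]    # toy signal-present image
signal_absent = [[0.0, 0.0, 0.0]] * 3
correct = four_afc_trial(signal_present, signal_absent, template, 0.5, rng)
```

Raising `internal_noise_sd` lowers the proportion of correct trials, which is exactly the knob used to bring the model's performance down to the human level.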

Relevance:

30.00%

Publisher:

Abstract:

Understanding the basis on which recruiters form hirability impressions for a job applicant is a key issue in organizational psychology and can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of conducting such a task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant, but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
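
Ridge regression and "variance explained" can be illustrated with a one-feature toy example; the study regresses many audio-visual cues onto hirability scores, and the data below are invented.

```python
def ridge_fit_1d(xs, ys, lam):
    """Closed-form ridge regression with a single feature and no
    intercept: w = sum(x*y) / (sum(x*x) + lambda). A one-dimensional
    special case for illustration."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def r_squared(xs, ys, w):
    """Fraction of variance in ys explained by predictions w * x."""
    mean_y = sum(ys) / len(ys)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    ss_res = sum((y - w * x) ** 2 for x, y in zip(xs, ys))
    return 1.0 - ss_res / ss_tot

xs = [1.0, 2.0, 3.0, 4.0]          # toy behavioral cue values
ys = [1.1, 1.9, 3.2, 3.8]          # toy hirability scores
w = ridge_fit_1d(xs, ys, lam=0.1)
```

The regularization term `lam` shrinks the weight toward zero, which stabilizes estimates when many correlated cues are used, as in the multi-cue setting of the paper.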

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question; however, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to ranking problems more generally, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares types of learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
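
The flavour of such a fast cross-validation shortcut can be shown for one-dimensional regularized least squares: the exact leave-one-out residuals follow from the hat-matrix diagonal, e_i = (y_i - yhat_i) / (1 - H_ii), without refitting the model n times. This is a simplified sketch of the general matrix identity, not the thesis's exact algorithm.

```python
def rls_loo_errors(xs, ys, lam):
    """Fast leave-one-out residuals for 1-D regularized least squares
    (no intercept) via the hat-matrix shortcut: one fit, n leverages."""
    sxx = sum(x * x for x in xs) + lam
    w = sum(x * y for x, y in zip(xs, ys)) / sxx
    errors = []
    for x, y in zip(xs, ys):
        h = x * x / sxx                       # leverage H_ii of sample i
        errors.append((y - w * x) / (1.0 - h))
    return errors

def rls_loo_errors_naive(xs, ys, lam):
    """Reference implementation: refit with each sample held out."""
    errors = []
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        w = sum(x * y for x, y in zip(xs_i, ys_i)) / (sum(x * x for x in xs_i) + lam)
        errors.append(ys[i] - w * xs[i])
    return errors

xs = [1.0, 2.0, 3.0]
ys = [1.2, 2.1, 2.9]
fast = rls_loo_errors(xs, ys, lam=1.0)
naive = rls_loo_errors_naive(xs, ys, lam=1.0)  # identical residuals
```

The shortcut replaces n refits with a single fit plus n leverage evaluations, which is what makes exhaustive cross-validation affordable for regularized least-squares learners.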

Relevance:

30.00%

Publisher:

Abstract:

The article discusses the development of WEBDATANET, established in 2011, which aims to create a multidisciplinary network of web-based data collection experts in Europe. Topics include the presence of 190 experts in 30 European countries and abroad, and the establishment of web-based teaching and discussion platforms, working groups, and task forces. Also discussed is the scope of the research carried out by WEBDATANET. In light of the growing importance of web-based data in the social and behavioral sciences, WEBDATANET was established in 2011 as a COST Action (IS 1004) to create a multidisciplinary network of web-based data collection experts: (web) survey methodologists, psychologists, sociologists, linguists, economists, Internet scientists, and media and public opinion researchers. The aim was to accumulate and synthesize knowledge regarding methodological issues of web-based data collection (surveys, experiments, tests, non-reactive data, and mobile Internet research), and to foster its scientific usage in a broader community.

Relevance:

30.00%

Publisher:

Abstract:

Despite moderate improvements in the outcome of glioblastoma after first-line treatment with chemoradiation, recent clinical trials have failed to improve the prognosis of recurrent glioblastoma. In the absence of a standard of care, we aimed to investigate institutional treatment strategies to identify similarities and differences in the pattern of care for recurrent glioblastoma. We investigated re-treatment criteria and therapeutic pathways for recurrent glioblastoma at eight neuro-oncology centres in Switzerland, each having an established multidisciplinary tumour-board conference. Decision algorithms, differences, and consensus were analysed using the objective consensus methodology. A total of 16 different treatment recommendations were identified, based on combinations of eight different decision criteria. The set of criteria implemented, as well as the set of treatments offered, differed between centres. For specific situations, up to six different treatment recommendations were provided by the eight centres. The only wide-ranging consensus identified was to offer best supportive care to unfit patients. A majority recommendation was identified for fit patients with non-operable, large, early recurrence and unmethylated MGMT promoter status: here, bevacizumab was offered. In fit patients with late recurrent, non-operable, MGMT-promoter-methylated glioblastoma, temozolomide was recommended by most. No other majority recommendations were present. In the absence of strong evidence, we identified few consensus recommendations for the treatment of recurrent glioblastoma. This contrasts with the limited availability of single drugs and treatment modalities. Clinical situations of greatest heterogeneity may be suitable for being addressed in clinical trials, and second-opinion referrals are likely to yield diverging recommendations.

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be used to derive 'figures of merit' (FOMs) that characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept, the previously used FOMs are presented, together with a proposal to update them in order to keep them relevant and up to date with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of matching radiologist sensitivity performance, and is therefore of most relevance in the clinical environment.
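
One common form such a dose-efficiency figure of merit takes is squared detectability per unit dose, FOM = d'^2 / CTDIvol; this is one of several definitions in use, and the protocol values below are invented for illustration.

```python
def dose_efficiency_fom(d_prime, ctdi_vol_mgy):
    """Figure of merit: squared detectability index per unit dose
    (d'^2 / CTDIvol). Higher means more image quality per mGy."""
    return d_prime ** 2 / ctdi_vol_mgy

# Two hypothetical protocols: B reaches slightly lower detectability
# at half the dose, so it is the more dose-efficient protocol here.
fom_a = dose_efficiency_fom(d_prime=3.0, ctdi_vol_mgy=10.0)
fom_b = dose_efficiency_fom(d_prime=2.5, ctdi_vol_mgy=5.0)
```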

Relevance:

30.00%

Publisher:

Abstract:

The European Forum on Epilepsy Research (ERF2013), which took place in Dublin, Ireland, on May 26-29, 2013, was designed to appraise epilepsy research priorities in Europe through consultation with clinical and basic scientists as well as representatives of lay organizations and health care providers. The ultimate goal was to provide a platform to improve the lives of persons with epilepsy by influencing the political agenda of the EU. The Forum highlighted the epidemiologic, medical, and social importance of epilepsy in Europe, and addressed three separate but closely related concepts. First, possibilities were explored as to how the stigma and social burden associated with epilepsy could be reduced through targeted initiatives at EU, national, and regional levels. Second, ways to ensure optimal standards of care throughout Europe were specifically discussed. Finally, the need for further funding of epilepsy research within the European Horizon 2020 funding programme was communicated to the politicians and policymakers participating in the Forum. Research topics discussed specifically included (1) epilepsy in the developing brain; (2) novel targets for innovative diagnostics and treatment of epilepsy; (3) what is required for the prevention and cure of epilepsy; and (4) epilepsy and comorbidities, with a special focus on aging and mental health. This report provides a summary of the recommendations that emerged at ERF2013 about how to (1) strengthen epilepsy research, (2) reduce the treatment gap, and (3) reduce the burden and stigma associated with epilepsy. Half of the 6 million European citizens with epilepsy feel stigmatized and experience social exclusion, stressing the need for funding trans-European awareness campaigns and monitoring their impact on stigma, in line with the global commitment of the European Commission and with the recommendations made in the 2011 Written Declaration on Epilepsy. Epilepsy care has high rates of misdiagnosis and considerable variability in organization and quality across European countries, translating into huge societal cost (0.2% of GDP) and stressing the need for cost-effective programs of harmonization and optimization of epilepsy care throughout Europe. There is currently no cure or prevention for epilepsy, and 30% of affected persons are not controlled by current treatments, stressing the need to pursue research efforts in the field within Horizon 2020. Priorities should include (1) development of innovative biomarkers and therapeutic targets and strategies, from gene- and cell-based therapies to technologically advanced surgical treatment; (2) addressing issues raised by pediatric and aging populations, as well as by specific etiologies and comorbidities such as traumatic brain injury (TBI) and cognitive dysfunction, toward more personalized medicine and prevention; and (3) translational studies and clinical trials built upon well-established European consortia.

Relevance:

30.00%

Publisher:

Abstract:

The differentiation of workers into morphological subcastes (e.g., soldiers) represents an important evolutionary transition and is thought to improve division of labor in social insects. Soldiers occur in many ant and termite species, where they make up a small proportion of the workforce. A common assumption of worker caste evolution is that soldiers are behavioral specialists. Here, we report the first test of the "rare specialist" hypothesis in a eusocial bee. Colonies of the stingless bee Tetragonisca angustula are defended by a small group of morphologically differentiated soldiers. Contrary to the rare specialist hypothesis, we found that soldiers worked more (+34%-41%) and performed a greater variety of tasks (+23%-34%) than other workers, particularly early in life. Our results suggest a "rare elite" function of soldiers in T. angustula, that is, that they perform a disproportionately large amount of the work. Division of labor was based on a combination of temporal and physical castes, but soldiers transitioned faster from one task to the next. We discuss why the rare specialist assumption might not hold in species with a moderate degree of worker differentiation.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Dispatch-assisted cardiopulmonary resuscitation (DA-CPR) plays a key role in out-of-hospital cardiac arrests. We sought to measure dispatchers' performance in a criteria-based system in recognizing cardiac arrest and delivering DA-CPR. Our secondary purpose was to identify the factors that hampered dispatchers' identification of cardiac arrests, the factors that prevented them from proposing DA-CPR, and the factors that prevented bystanders from performing CPR. METHODS AND RESULTS: We reviewed dispatch recordings for 1254 out-of-hospital cardiac arrests occurring between January 1, 2011 and December 31, 2013. Dispatchers correctly identified cardiac arrests in 71% of the reviewed cases, and in 84% of the cases in which they were able to assess patient consciousness and breathing. The median time to recognition of the arrest was 60 s. The median time to the start of chest compressions was 220 s. CONCLUSIONS: This study demonstrates that the performance of a criteria-based dispatch system can be similar to that of a medical-priority dispatch system regarding out-of-hospital cardiac arrest (OHCA) recognition time and DA-CPR delivery. Agonal breathing recognition remains the weakest link in this sensitive task in both systems. It is of prime importance that all dispatch centers aim not only to implement DA-CPR but also to acquire tools that help them reach this objective, as today it should be mandatory to offer this service to the community. In order to improve benchmarking opportunities, we complemented previously proposed performance standards with additional propositions.