17 results for Recall interval
in Aston University Research Archive
Abstract:
In developing neural network techniques for real-world applications it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian process and predictive error bar approaches evaluated on real data. The problem domain is the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and, given its safety-critical nature, is a good candidate for exploring the use of confidence predictions.
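A minimal sketch of the third approach in that list, assuming a simple bootstrap-ensemble reading of predictive error bars: the spread of an ensemble of small networks trained on resampled data gives a point-wise interval around each prediction. The toy sine data, ensemble size and network settings below are placeholders, not the paper's engine-calibration setup, and the Bayesian and Gaussian process variants are not reproduced.

```python
# Point-wise error bars from a bootstrap ensemble of small neural networks.
# Illustrative only: toy data stands in for the engine-calibration data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)   # noisy toy target

ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))              # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    net.fit(X[idx], y[idx])
    ensemble.append(net)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
preds = np.stack([net.predict(X_test) for net in ensemble])
mean = preds.mean(axis=0)                              # point prediction
err_bar = 1.96 * preds.std(axis=0)                     # ~95% point-wise band
```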
Abstract:
Influential models of short-term memory have attributed the fact that short words are recalled better than longer words in serial recall (the length effect) to articulatory rehearsal. Crucial for this link is the finding that the length effect disappears under articulatory suppression. We show, instead, that, under suppression, the length effect is abolished or reversed for real words but remains robust for nonwords. The latter finding is demonstrated in a variety of conditions: with lists of three and four nonwords, with nonwords drawn from closed and open sets, with spoken and written presentation, and with written and spoken output. Our interpretation is that the standard length effect derives from the number of phonological units to be retained. The length effect is abolished or reversed under suppression because this condition encourages reliance on lexical-semantic representations. Using these representations, longer words can more easily be reconstructed from degraded phonology than shorter words. © 2005 Elsevier Inc. All rights reserved.
Abstract:
Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery and meaning have all been shown to augment short-term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically, and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to determine the locus of the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and Experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, it was found that both long-term factors increased the amount of order information retained. Experiment 5 induced an order-encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory as part of language processing rather better than either the order-encoding or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed. Articulatory suppression affected the encoding of order information where speech rate had no direct influence, suggesting that ease of lexical access is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order-encoding account of long-term influence. Instead, the evidence sits best with models based upon language processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.
Abstract:
The overall aim of this study was to examine experimentally the effects of noise upon short-term memory tasks, in the hope of shedding further light upon the apparently inconsistent results of previous research in the area. Seven experiments are presented. The first chapter of the thesis comprised a comprehensive review of the literature on noise and human performance, while in the second chapter some theoretical questions concerning the effects of noise were considered in more detail, followed by a more detailed examination of the effects of noise upon memory. Chapter 3 described an experiment which examined the effects of noise on attention allocation in short-term memory as a function of list length. The results provided only weak evidence of increased selectivity in noise. In further chapters, noise effects were investigated in conjunction with various parameters of short-term memory tasks, e.g. the retention interval and presentation rate. The results suggested that noise effects were significantly affected by the length of the retention interval but not by the rate of presentation. Later chapters examined the possibility of differential noise effects on the mode of recall (recall v. recognition) and the type of presentation (sequential v. simultaneous), as well as investigating the effect of varying the point at which the noise was introduced and the importance of individual differences in noise research. The results of this study were consistent with the hypothesis that noise at presentation facilitates phonemic coding. However, noise during recall appeared to affect the retrieval strategy adopted by the subject.
Abstract:
Non-uniform B-spline dictionaries on a compact interval are discussed in the context of sparse signal representation. For each given partition, dictionaries of B-spline functions for the corresponding spline space are built up by dividing the partition into subpartitions and joining together the bases for the concomitant subspaces. The resulting slightly redundant dictionaries are composed of B-spline functions of broader support than those corresponding to the B-spline basis for the same space. Such dictionaries are meant to assist in the construction of adaptive sparse signal representation through a combination of stepwise optimal greedy techniques.
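The sketch below gives only the flavour of such a construction, under the assumption that joining the cubic B-spline basis on a partition with broader-support B-splines from a coarser subpartition yields a slightly redundant dictionary; the paper's exact non-uniform scheme is not reproduced.

```python
# Build a slightly redundant B-spline dictionary: basis on a fine partition
# joined with broader-support elements from a coarser subpartition.
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(knots, degree=3):
    """B-spline basis elements for a clamped knot sequence on `knots`."""
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    n = len(t) - degree - 1                      # number of basis functions
    return [BSpline.basis_element(t[i:i + degree + 2], extrapolate=False)
            for i in range(n)]

fine = np.linspace(0.0, 1.0, 9)                  # the given partition
coarse = fine[::2]                               # subpartition: broader supports
dictionary = bspline_basis(fine) + bspline_basis(coarse)

x = np.linspace(0.0, 1.0, 201)
atoms = np.array([np.nan_to_num(b(x)) for b in dictionary])  # dictionary matrix
```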
Abstract:
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks, which are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visually and auditorily paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
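The timing-variance decomposition referred to is, in studies of this kind, commonly the Wing-Kristofferson model; the abstract does not name its exact method, so the sketch below shows that standard decomposition as an assumption, splitting inter-tap-interval variance into central timekeeper and peripheral motor components.

```python
# Wing-Kristofferson (1973) decomposition of inter-tap intervals: motor
# variance is the negated lag-1 autocovariance, clock variance the remainder.
import numpy as np

def wing_kristofferson(intervals):
    """Return (clock_variance, motor_variance) from inter-tap intervals."""
    d = np.asarray(intervals, dtype=float)
    d = d - d.mean()
    gamma0 = np.mean(d * d)                  # lag-0 autocovariance (variance)
    gamma1 = np.mean(d[:-1] * d[1:])         # lag-1 autocovariance
    motor_var = max(-gamma1, 0.0)            # sigma_M^2 = -gamma(1)
    clock_var = gamma0 + 2.0 * gamma1        # sigma_C^2 = gamma(0) + 2 gamma(1)
    return clock_var, motor_var
```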
Abstract:
We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. An alcohol attentional bias (AAB) had previously been reported using these stimuli, whereby the attention of frequent drinkers is automatically drawn toward alcohol-related items (e.g., a beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that persists over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations, and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers remembered alcohol-related images better than lower scorers did, particularly when these were presented in their correct locations upon recall. This provides the first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period that included a visual interference task which erased iconic memories and diverted attention away from the encoded items; thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. © 2012 American Psychological Association.
Abstract:
Background - Several antipsychotic agents are known to prolong the QT interval in a dose-dependent manner. A corrected QT interval (QTc) exceeding a threshold value of 450 ms may be associated with an increased risk of life-threatening arrhythmias. Antipsychotic agents are often given in combination with other psychotropic drugs, such as antidepressants, that may also contribute to QT prolongation. This observational study compares the effects on the QT interval of antipsychotic monotherapy and of psychoactive polytherapy in which an antidepressant or lithium was added. Method - We examined two groups of hospitalized women with schizophrenia, bipolar disorder or schizoaffective disorder in a naturalistic setting. Group 1 comprised nineteen hospitalized women treated with antipsychotic monotherapy (haloperidol, olanzapine, risperidone or clozapine) and Group 2 comprised nineteen hospitalized women treated with an antipsychotic (haloperidol, olanzapine, risperidone or quetiapine) plus an antidepressant (citalopram, escitalopram, sertraline, paroxetine, fluvoxamine, mirtazapine, venlafaxine or clomipramine) or lithium. An electrocardiogram (ECG) was recorded for both groups before the beginning of treatment and again after four days of therapy at full dosage, when blood was also drawn for determination of serum levels of the antipsychotic. Statistical analysis included repeated-measures ANOVA, Fisher's Exact Test and the independent t-test. Results - Mean QTc intervals increased significantly in Group 2 (24 ± 21 ms) but not in Group 1 (-1 ± 30 ms) (repeated-measures ANOVA, p < 0.01). Furthermore, we found a significant difference between the two groups in the number of patients who exceeded the borderline QTc threshold (450 ms), with seven patients in Group 2 (38%) compared to one patient in Group 1 (7%) (Fisher's Exact Test, p < 0.05). Conclusions - No significant prolongation of the QT interval was found following monotherapy with an antipsychotic agent, while combination of these drugs with antidepressants caused significant QT prolongation. Careful monitoring of the QT interval is suggested in patients taking a combined treatment of antipsychotic and antidepressant agents.
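The abstract does not state which correction formula produced the QTc values; Bazett's correction is the most widely used population model and is sketched below, together with the 450 ms borderline threshold, purely for illustration.

```python
# Bazett heart-rate correction of the QT interval (an assumption: the study
# may have used a different population model).
def qtc_bazett(qt_ms, rr_s):
    """QTc in ms: QT / sqrt(RR), with RR in seconds."""
    return qt_ms / rr_s ** 0.5

def exceeds_borderline(qt_ms, rr_s, threshold_ms=450.0):
    return qtc_bazett(qt_ms, rr_s) > threshold_ms

# e.g. QT = 400 ms at 75 bpm (RR = 0.8 s) gives QTc of about 447 ms.
```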
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All data in conventional DEA with input and/or output ratios are assumed to be crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
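As background to the interval-ratio models, a minimal sketch of the crisp input-oriented CCR model that such extensions build on is given below; the paper's general and multiplicative ratio formulations are not reproduced. Running a model of this kind at the optimistic and pessimistic ends of each data interval is one way an efficiency interval per DMU can be obtained.

```python
# Crisp input-oriented CCR efficiency of DMU `o`, given inputs X (m x n)
# and outputs Y (s x n) for n DMUs. Illustrative baseline only.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_in = np.hstack([-X[:, [o]], X])            # sum_j l_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum_j l_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun                               # optimal theta = efficiency
```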
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
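The abstract does not spell out the six-group rule, so the following is only one plausible reading: crossing whether a DMU's profit-efficiency interval certifies efficiency (two states) with whether its MPI interval lies above 1, below 1 or straddles 1 (three states) yields six groups. The names and thresholds here are illustrative assumptions.

```python
# Hypothetical six-way grouping from an MPI interval [mpi_lo, mpi_hi] and
# the lower bound of the profit-efficiency interval. Illustrative only.
def classify(mpi_lo, mpi_hi, eff_lo):
    trend = ("progress" if mpi_lo > 1.0 else
             "regress" if mpi_hi < 1.0 else
             "ambiguous")                        # MPI interval straddles 1
    status = "efficient" if eff_lo >= 1.0 else "inefficient"
    return f"{status}/{trend}"                   # 2 x 3 = six groups
```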
Abstract:
Conventional DEA models assume deterministic, precise and non-negative data for input and output observations. However, real applications may be characterized by observations that are given in the form of intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending on, for example, external temperature and real-time outtake. Complementing earlier work that addressed the two problems of interval data and negative data separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the intervals may contain upper or lower bounds with different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes, namely, the strictly efficient, the weakly efficient and the inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches. © 2013.
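A sketch of the three-way classification from interval scores, assuming efficiency bounds in (0, 1] have already been computed from the interval limits after decomposition: strict efficiency requires even the pessimistic bound to reach 1, weak efficiency only the optimistic bound.

```python
# Classify a DMU from its interval efficiency bounds [lo, hi].
def classify_dmu(lo, hi, tol=1e-9):
    if lo >= 1.0 - tol:
        return "strictly efficient"   # efficient under every realisation
    if hi >= 1.0 - tol:
        return "weakly efficient"     # efficient under some realisation only
    return "inefficient"              # never efficient

# Ranking within each class can then use, for example, the interval midpoint.
```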
Abstract:
We propose an arithmetic of function intervals as a basis for convenient rigorous numerical computation. Function intervals can be used as mathematical objects in their own right or as enclosures of functions over the reals. We present two areas of application of function interval arithmetic and associated software that implements the arithmetic: (1) Validated ordinary differential equation solving using the AERN library and within the Acumen hybrid system modeling tool. (2) Numerical theorem proving using the PolyPaver prover. © 2014 Springer-Verlag.
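A minimal sketch of the scalar interval arithmetic that function interval arithmetic generalises: endpoints are combined so that the result encloses every possible value. AERN and PolyPaver implement much richer function enclosures (in Haskell), with outward rounding for rigour; none of their APIs is reproduced here.

```python
# Scalar interval arithmetic: each operation returns an enclosing interval.
# A rigorous implementation would also round endpoints outward.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))        # tight product enclosure

# e.g. Interval(-1, 2) * Interval(3, 4) == Interval(-4, 8)
```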
Abstract:
Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to Heart Rate Variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work analyses the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we also considered the influence of training-set size, classifier, classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects within the PhysioNet open access database. Public domain software was used for fiducial point detection. Results suggested that QT correction is indeed required to improve performance; however, there is no clear choice among the seven explored approaches for QT correction (identification rates between 0.97 and 0.99). Multilayer Perceptron and Support Vector Machine classifiers seemed to have better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No comparably strong influence of training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
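The abstract does not list the seven population-model corrections it compares; four widely used ones are sketched below as representative examples (QT in ms, RR in seconds). The point is that feeding QTc rather than raw QT into the fiducial feature vector removes heart-rate-dependent variation before classification.

```python
# Four common population-model QT corrections (representative examples only;
# the study's seven methods are not enumerated in the abstract).
def bazett(qt_ms, rr_s):      return qt_ms / rr_s ** (1 / 2)
def fridericia(qt_ms, rr_s):  return qt_ms / rr_s ** (1 / 3)
def framingham(qt_ms, rr_s):  return qt_ms + 154.0 * (1.0 - rr_s)
def hodges(qt_ms, rr_s):      return qt_ms + 1.75 * (60.0 / rr_s - 60.0)
```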