895 results for Hit rate
Abstract:
By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment in which participants were presented with a color change detection task. Task complexity was manipulated by varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators of the amount of information retained in VSTM and the speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiate between individuals of higher and lower intelligence. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Objectives: (1) To establish test performance measures for Transient Evoked Otoacoustic Emission testing of 6-year-old children in a school setting; (2) To investigate whether Transient Evoked Otoacoustic Emission testing provides a more accurate and effective alternative to a pure tone screening plus tympanometry protocol. Methods: Pure tone screening, tympanometry and transient evoked otoacoustic emission data were collected from 940 subjects (1880 ears), with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristic (ROC) curves along with specificity, sensitivity, accuracy and efficiency values were determined for a variety of transient evoked otoacoustic emission/pure tone screening/tympanometry comparisons. Results: The Transient Evoked Otoacoustic Emission failure rate for the group was 20.3%. The failure rate for pure tone screening was found to be 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure tone screening and tympanometry results. In essence, findings from the comparison of overall Transient Evoked Otoacoustic Emission pass/fail with overall pure tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative). However, the hit rate was only moderate. Selection of a signal-to-noise ratio (SNR) criterion set at greater than or equal to 1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. The exclusion of low frequency data from the Transient Evoked Otoacoustic Emission SNR versus pure tone screening analysis resulted in improved performance measures.
Conclusions: The present study has several implications for the clinical implementation of Transient Evoked Otoacoustic Emission screening for entry-level school children. Transient Evoked Otoacoustic Emission pass/fail criteria will require revision. The findings of the current investigation support the possible replacement of pure tone screening with Transient Evoked Otoacoustic Emission testing for 6-year-old children. However, they do not support replacing the pure tone screening plus tympanometry battery. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
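The test performance measures named above can be computed from a 2x2 screening table; the counts below are hypothetical and chosen only to reproduce the qualitative pattern reported (high probability that a "pass" child has normal hearing, but only a moderate hit rate), not the study's data:

```python
# Hypothetical 2x2 screening table: rows = TEOAE result,
# columns = reference (pure tone screening) result, in ears.
tp, fp = 120, 260    # TEOAE fail: true positives, false positives
fn, tn = 47, 1453    # TEOAE pass: false negatives, true negatives

sensitivity = tp / (tp + fn)              # hit rate
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fp + fn + tn)
npv = tn / (tn + fn)                      # probability a "pass" child truly passes

print(f"hit rate    = {sensitivity:.3f}")
print(f"specificity = {specificity:.3f}")
print(f"accuracy    = {accuracy:.3f}")
print(f"NPV         = {npv:.3f}")
```

An ROC curve is traced by sweeping the SNR pass/fail criterion and plotting the resulting (1 - specificity, sensitivity) pairs.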
Abstract:
An "in-house" RT-PCR method was developed that allows the simultaneous detection of the RNA of the Hepatitis C Virus (HCV) and an artificial RNA employed as an external control. Samples were analyzed in pools of 6-12 donations, with each donation included in two pools, one horizontal and one vertical, permitting the immediate identification of a reactive donation and obviating the need to dismantle pools. The whole process took 6-8 hours per day, and results were issued in parallel with serology. The method was shown to detect all six HCV genotypes, and a sensitivity of 500 IU/mL was achieved (95% hit rate). Until July 2005, 139,678 donations were tested and 315 (0.23%) were found reactive for HCV-RNA. Except for five false positives, all of the remaining 310 also presented the corresponding antibody, so the yield of NAT-only donations was zero, giving a specificity of 99.83%. Detection of a window-period donation in the population studied will probably demand testing of a larger number of donations. International experience shows a rate of 1:200,000 - 1:500,000 for isolated HCV-RNA reactive donations.
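The horizontal/vertical pooling scheme can be sketched as a grid lookup; the grid size and donation IDs below are hypothetical:

```python
# Hypothetical 6x8 pooling grid: donation (r, c) goes into horizontal
# pool r and vertical pool c, so each donation is tested in exactly two pools.
rows, cols = 6, 8
donations = {(r, c): f"D{r * cols + c:03d}"
             for r in range(rows) for c in range(cols)}

def identify_reactives(reactive_rows, reactive_cols):
    """A donation is flagged when both of its pools are reactive."""
    return sorted(donations[(r, c)]
                  for r in reactive_rows for c in reactive_cols)

# A single reactive donation makes exactly one row pool and one column pool
# react, so its position is pinpointed without re-testing individual samples.
print(identify_reactives({2}, {5}))
```

With more than one reactive donation the row/column intersections can become ambiguous, so confirmatory testing of the flagged candidates would still be needed in that case.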
Abstract:
In this thesis, a feed-forward, back-propagating Artificial Neural Network using the gradient descent algorithm is developed to forecast the directional movement of daily returns for WTI, gold and copper futures. Out-of-sample back-test results vary, with some predictive abilities for copper futures but none for either WTI or gold. The best statistically significant hit rate achieved was 57% for copper with an absolute return Sharpe Ratio of 1.25 and a benchmarked Information Ratio of 2.11.
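The reported hit rate and Sharpe Ratio can be illustrated for a directional strategy; the signals and returns below are hypothetical, and the Sharpe here is unannualized:

```python
import math

# Hypothetical daily signals (+1 = long, -1 = short) and realized returns.
signals = [1, -1, 1, 1, -1, 1, -1, 1, 1, -1]
returns = [0.004, -0.002, 0.003, -0.001, -0.005,
           0.002, 0.001, 0.006, -0.002, -0.003]

# Directional hit rate: fraction of days the predicted sign matches the move.
hits = sum(1 for s, r in zip(signals, returns) if s * r > 0)
hit_rate = hits / len(signals)

# Strategy returns and a (sample) Sharpe ratio sketch.
strat = [s * r for s, r in zip(signals, returns)]
mean = sum(strat) / len(strat)
std = math.sqrt(sum((x - mean) ** 2 for x in strat) / (len(strat) - 1))
sharpe = mean / std

print(f"hit rate = {hit_rate:.0%}")
```

Statistical significance of such a hit rate is typically checked against the 50% coin-flip baseline, e.g. with a binomial test over the out-of-sample days.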
Abstract:
Through a rational design approach, we generated a panel of HLA-A*0201/NY-ESO-1(157-165)-specific T cell receptors (TCR) with increasing affinities of up to 150-fold from the wild-type TCR. Using these TCR variants which extend just beyond the natural affinity range, along with an extreme supraphysiologic one having 1400-fold enhanced affinity, and a low-binding one, we sought to determine the effect of TCR binding properties along with cognate peptide concentration on CD8(+) T cell responsiveness. Major histocompatibility complexes (MHC) expressed on the surface of various antigen presenting cells were peptide-pulsed and used to stimulate human CD8(+) T cells expressing the different TCR via lentiviral transduction. At intermediate peptide concentration we measured maximum cytokine/chemokine secretion, cytotoxicity, and Ca(2+) flux for CD8(+) T cells expressing TCR within a dissociation constant (K(D)) range of ∼1-5 μM. Under these same conditions there was a gradual attenuation in activity for supraphysiologic affinity TCR with K(D) < ∼1 μM, irrespective of CD8 co-engagement and of half-life (t(1/2) = ln 2/k(off)) values. With increased peptide concentration, however, the activity levels of CD8(+) T cells expressing supraphysiologic affinity TCR were gradually restored. Together our data support the productive hit rate model of T cell activation arguing that it is not the absolute number of TCR/pMHC complexes formed at equilibrium, but rather their productive turnover, that controls levels of biological activity. Our findings have important implications for various immunotherapies under development such as adoptive cell transfer of TCR-engineered CD8(+) T cells, as well as for peptide vaccination strategies.
Abstract:
This paper reports a series of experiments on patient JB, a man with memory difficulties following damage to the left frontal lobe. The primary characteristic of JB's recognition memory impairment is a high level of false recognition together with a normal hit rate. The hypothesis that JB's false recognition reflects an over-reliance on familiarity is considered, but discounted on the basis that the false alarm rate is not affected by increasing the similarity between distracters and targets, and remains high when nonword stimuli are used. It is suggested, instead, that JB relies on a poorly focused memory description, which lacks item-specific detail but contains more general, low-level properties of the target items, these properties being held by many distracter items as well. This deficit is considered to arise because of damage to frontally mediated control processes involved in the selection of elements for memory encoding. An encoding deficit is supported by the fact that JB's false recognition is significantly reduced by orienting instructions, and is eliminated when his remote memory is subjected to recognition testing. In contrast, it is shown that manipulations at the level of retrieval (e.g. restricting the number of "old" responses) have little effect on his false recognition.
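The pattern of a normal hit rate with elevated false alarms is often summarized with the signal detection sensitivity measure d' = z(hit rate) - z(false alarm rate); a minimal sketch with hypothetical rates, not JB's actual data:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # z-transform (inverse standard normal CDF)

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: separation of targets from distracters."""
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: the same hit rate paired with a normal versus an
# elevated false alarm rate, mirroring the pattern reported for JB.
control = d_prime(0.85, 0.10)   # high sensitivity
jb_like = d_prime(0.85, 0.55)   # same hits, many false alarms -> lower d'

print(round(control, 2), round(jb_like, 2))
```

The drop in d' despite an unchanged hit rate is what motivates looking beyond familiarity toward the quality of the encoded memory description.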
Abstract:
Imaging studies have shown reduced frontal lobe resources following total sleep deprivation (TSD). The anterior cingulate cortex (ACC) in the frontal region plays a role in performance monitoring and cognitive control; both error detection and response inhibition are impaired following sleep loss. Event-related potentials (ERPs) are an electrophysiological tool used to index the brain's response to stimuli and information processing. In the Flanker task, the error-related negativity (ERN) and error positivity (Pe) ERPs are elicited after erroneous button presses. In a Go/NoGo task, NoGo-N2 and NoGo-P3 ERPs are elicited during high conflict stimulus processing. Research investigating the impact of sleep loss on ERPs during performance monitoring is equivocal, possibly due to task differences, sample size differences and varying degrees of sleep loss. Based on the effects of sleep loss on frontal function and prior research, it was expected that the sleep deprivation group would have lower accuracy, slower reaction time and impaired remediation on performance monitoring tasks, along with attenuated and delayed stimulus- and response-locked ERPs. In the current study, 49 young adults (24 male) were screened to be healthy good sleepers and then randomly assigned to a sleep deprived (n = 24) or rested control (n = 25) group. Participants slept in the laboratory on a baseline night, followed by a second night of sleep or wake. Flanker and Go/NoGo tasks were administered in a battery at 10:30 am (i.e., 27 hours awake for the sleep deprivation group) to measure performance monitoring. On the Flanker task, the sleep deprivation group was significantly slower than controls (p's < .05), but groups did not differ on accuracy. No group differences were observed in post-error slowing, but a trend was observed for less remedial accuracy in the sleep deprived group compared to controls (p = .09), suggesting impairment in the ability to take remedial action following TSD.
Delayed P300s were observed in the sleep deprived group on congruent and incongruent Flanker trials combined (p = .001). On the Go/NoGo task, the hit rate (i.e., Go accuracy) was significantly lower in the sleep deprived group compared to controls (p < .001), but no differences were found in false alarm rates (i.e., NoGo accuracy). For the sleep deprived group, the Go-P3 was significantly smaller (p = .045) and there was a trend toward a smaller NoGo-N2 compared to controls (p = .08). The ERN amplitude was reduced in the TSD group compared to controls in both the Flanker and Go/NoGo tasks. Error rate was significantly correlated with the amplitude of response-locked ERNs in the control (r = -.55, p = .005) and sleep deprived groups (r = -.46, p = .021); error rate was also correlated with Pe amplitude in controls (r = .46, p = .022), and a trend was found in the sleep deprived participants (r = .39, p = .052). An exploratory analysis showed significantly larger Pe mean amplitudes (p = .025) in the sleep deprived group compared to controls for participants who made more than 40 errors on the Flanker task. Altered stimulus processing, as indexed by delayed P3 latency during the Flanker task and smaller-amplitude Go-P3s during the Go/NoGo task, indicates impairment in stimulus evaluation and/or context updating during frontal lobe tasks. ERN and NoGo-N2 reductions in the sleep deprived group confirm impairments in the monitoring system. These data add to a body of evidence showing that the frontal brain region is particularly vulnerable to sleep loss. Understanding the neural basis of these deficits in performance monitoring abilities is particularly important for our increasingly sleep deprived society and for safety and productivity in situations like driving and sustained operations.
Abstract:
A new generation of reanalysis products is currently being produced that provides global gridded atmospheric data spanning more than a century. Such data may be useful for characterising the observed long-term variability of extreme precipitation events, particularly in regions where spatial coverage of surface observations is limited, and in the pre-satellite era. An analysis of extreme precipitation events is performed over England and Wales, investigating the ability of the Twentieth Century Reanalysis and ERA-Interim to represent extreme precipitation accumulations as recorded in the England and Wales Precipitation dataset on accumulation time-scales from 1 to 7 days. Significant correlations are found between daily precipitation accumulation observations and both reanalysis products. A hit-rate analysis, with an extreme event defined as one above the 98th percentile, indicates that the reanalyses have hit rates of approximately 40–65% for extreme events in both summer (JJA) and winter (DJF). This suggests that both ERA-Interim and the Twentieth Century Reanalysis are of limited use for representing individual extreme precipitation events over England and Wales.
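A minimal sketch of this style of hit-rate analysis on synthetic data; both series below are random stand-ins, not the England and Wales Precipitation or reanalysis datasets:

```python
import random

# An "extreme event" is a day above the 98th percentile of its own series;
# the hit rate is the fraction of observed extremes that the (synthetic)
# reanalysis also flags as extreme on the same day.
random.seed(42)
obs = [random.gammavariate(2.0, 3.0) for _ in range(5000)]  # stand-in obs
rean = [o + random.gauss(0, 4.0) for o in obs]              # noisy reanalysis

def p98(series):
    return sorted(series)[int(0.98 * len(series))]

obs_thr, rean_thr = p98(obs), p98(rean)
extremes = [i for i, o in enumerate(obs) if o > obs_thr]
hits = sum(1 for i in extremes if rean[i] > rean_thr)
print(f"hit rate = {hits / len(extremes):.0%}")
```

Because each series is thresholded against its own 98th percentile, a systematic wet or dry bias in the reanalysis does not by itself lower the hit rate; only day-to-day disagreement does.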
Abstract:
Outliers are observations that appear inconsistent with the others. Also called atypical, extreme or aberrant values, these inconsistencies may be caused by policy changes or economic crises, unexpected cold or heat waves, measurement or typing errors, among others. Outliers are not necessarily incorrect values, but when they stem from measurement or typing errors they can distort the results of an analysis and lead the researcher to mistaken conclusions. The aim of this work is to study and compare different methods for detecting abnormalities in price series of the Consumer Price Index (IPC), computed by the Brazilian Institute of Economics (IBRE) of Fundação Getulio Vargas (FGV). The IPC measures the price variation of a fixed set of goods and services that make up the usual expenses of households with income between 1 and 33 monthly minimum wages, and it is mainly used as a reference index for assessing consumer purchasing power. Besides the method currently used at IBRE by the price analysts, the methods considered in this study are: variations of the IBRE Method, the Boxplot Method, the SIQR Boxplot Method, the Adjusted Boxplot Method, the Resistant Fences Method, the Quartile Method, the Modified Quartile Method, the Median Absolute Deviation Method and Tukey's Algorithm. These methods were applied to data from the municipalities of Rio de Janeiro and São Paulo. In order to analyze the performance of each method, the true extreme values must be known in advance. Therefore, in this work, the analysis was carried out assuming that the prices discarded or altered by the analysts during the review process are the true outliers. The IBRE Method is strongly correlated with the prices altered or discarded by the analysts.
Thus, the assumption that the prices altered or discarded by the analysts are the true extreme values may influence the results, favoring the IBRE Method over the other methods. Nevertheless, in this way it is possible to compute two measures by which the methods are evaluated. The first is the method's hit rate, which gives the proportion of true outliers detected. The second is the number of false positives produced by the method, which indicates how many values had to be flagged for one true outlier to be detected. The higher the hit rate and the lower the number of false positives, the better the method's performance. It was thus possible to build a ranking of the methods' performance, identifying the best among those analyzed. For the municipality of Rio de Janeiro, some of the variations of the IBRE Method performed as well as or better than the original method. For the municipality of São Paulo, the IBRE Method showed the best performance. In future work, we intend to test the methods on simulated data or on datasets widely used in the literature, so that the assumption that the prices discarded or altered by the analysts during the review process are the true outliers does not interfere with the results.
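The Boxplot Method mentioned above can be sketched with Tukey's 1.5 x IQR fences; the price series below is hypothetical:

```python
from statistics import quantiles

# Values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are flagged as outliers.
prices = [10.2, 10.4, 10.3, 10.5, 10.1, 10.6, 10.4, 10.3, 18.9, 10.2]

q1, _, q3 = quantiles(prices, n=4, method="inclusive")
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [p for p in prices if p < lower or p > upper]
print(outliers)  # the price inconsistent with the others
```

The SIQR and Adjusted Boxplot variants change only how the fences are computed (using semi-interquartile ranges or a skewness correction), not the flagging logic itself.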
Abstract:
Among the wastes generated during oil exploration and production, water stands out owing to several factors, including the volume generated, the salt content, and the presence of oil and chemicals; the water associated with oil is called produced water. The chemical composition of produced water is complex and depends strongly on the generating field, because it has been in contact with the geological formation for thousands of years. This work aims to characterize the hydrochemistry of the water produced in different zones of a field located in the Potiguar Basin. We collected 27 samples from six zones (400, 600, 400/600, 400/450/500, 350/400, A) of the producing field, called S, and measured 50 required parameters divided among physical and chemical parameters, cations and anions. The hydrochemical characterization used ionic-ratio calculations, hydrochemical classification diagrams (the Piper diagram and the Stiff diagram) and statistical tools that helped identify signature patterns for each production zone, including the zone that supplies the water injected into this field for secondary oil recovery. The ionic balance error was calculated to assess the quality of the analytical results, which was considered good, since 89% of the samples had errors below 5%. The hydrochemical diagrams classified the waters as sodium chloride, except for the samples from zone A, from the injection well, which were classified as sodium bicarbonate. Descriptive analysis and discriminant analysis made it possible to obtain a function that chemically differentiates the production zones; this function achieved a good classification hit rate of 85%.
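The ionic balance check can be sketched with the common charge-balance formula CBE% = 100 * (sum of cations - sum of anions) / (sum of cations + sum of anions); the concentrations below, in meq/L, are hypothetical:

```python
# Hypothetical major-ion concentrations (meq/L) for one produced-water sample.
cations_meq = {"Na+": 420.0, "Ca2+": 35.0, "Mg2+": 18.0, "K+": 6.0}
anions_meq = {"Cl-": 455.0, "HCO3-": 12.0, "SO4_2-": 3.0}

sum_cat = sum(cations_meq.values())
sum_an = sum(anions_meq.values())
cbe = 100 * (sum_cat - sum_an) / (sum_cat + sum_an)

print(f"charge balance error = {cbe:.2f}%")
# A |CBE| below 5% is the usual acceptance criterion used above.
print("acceptable" if abs(cbe) < 5 else "re-analyze sample")
```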
Abstract:
Layer mortality due to heat stress is an important economic loss for the producer. The aim of this study was to determine the mortality pattern of layers reared in the region of Bastos, SP, Brazil, according to the external environment and bird age. Data mining techniques were used, based on monthly mortality records of hens in production from 135 poultry houses, from January 2004 to August 2008. The external environment was characterized according to maximum and minimum temperatures, obtained monthly at the CATI meteorological station in the city of Tupã, SP, Brazil. Mortality was classified as normal (<= 1.2%) or high (> 1.2%), considering the mortality limits mentioned in the literature. The data mining technique produced a decision tree with nine levels and 23 leaves, with 62.6% overall accuracy. The hit rate was 64.1% for the High class and 59.9% for the Normal class. The decision tree allowed finding a pattern in the mortality data, generating a model for estimating mortality based on the thermal environment and bird age.
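A hand-written rule can stand in for the mined decision tree to show how the per-class hit rates are computed; the thresholds and records below are hypothetical, not the tree learned in the study:

```python
# Toy two-level "tree": classify monthly mortality as High (> 1.2%)
# from maximum temperature and bird age (hypothetical thresholds).
def predict(max_temp_c, age_weeks):
    if max_temp_c > 32:
        return "High"
    return "High" if age_weeks > 70 else "Normal"

# (max temp C, age in weeks, observed class) for hypothetical months.
records = [
    (35, 40, "High"), (30, 80, "High"), (28, 30, "Normal"),
    (33, 25, "High"), (29, 50, "Normal"), (31, 75, "Normal"),
]

# Per-class hit rate: correct predictions among the months of that class.
for cls in ("High", "Normal"):
    of_cls = [r for r in records if r[2] == cls]
    correct = sum(1 for t, a, c in of_cls if predict(t, a) == c)
    print(f"{cls} hit rate: {correct}/{len(of_cls)}")
```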
Abstract:
The human voice is an important communication tool, and any disorder of the voice can have profound implications for the social and professional life of an individual. Techniques of digital signal processing have been used for the acoustic analysis of vocal disorders caused by pathologies in the larynx, due to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by pathologies in the larynx, specifically edema and nodules on the vocal folds. The purpose of this work is to develop a voice classification system to aid the pre-diagnosis of pathologies in the larynx, as well as the monitoring of pharmacological treatments and post-surgical recovery. Linear Prediction Coefficients (LPC), Mel-frequency cepstral coefficients (MFCC) and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics of the voice signal. The Support Vector Machine (SVM) is used for the classification task; it builds optimal hyperplanes that maximize the margin of separation between the classes involved. The generated hyperplane is determined by the support vectors, which are subsets of points of these classes. According to the database used in this work, the results showed good performance, with a hit rate of 98.46% for the classification of normal versus pathological voices in general, and 98.75% for the classification of the two pathologies together: edema and nodules.
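At prediction time, classification with a trained linear SVM reduces to the sign of a decision function f(x) = w . x + b; a minimal sketch with hypothetical weights standing in for a model trained on the acoustic features above:

```python
# Hypothetical trained linear decision function: one weight per acoustic
# feature (standing in for LPC/MFCC/WPT-derived features), plus a bias.
w = [0.8, -1.3, 0.5]
b = -0.2

def classify(features):
    """Sign of the decision function picks the side of the hyperplane."""
    score = sum(wi * xi for wi, xi in zip(w, features)) + b
    return "pathological" if score > 0 else "normal"

print(classify([1.2, 0.3, 0.9]))   # positive side of the hyperplane
print(classify([0.1, 1.5, 0.2]))   # negative side
```

In a real SVM these weights are a combination of the support vectors found during training, and non-linear kernels replace the dot product with a kernel evaluation against each support vector.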
Abstract:
This study investigates the chemical species in produced water from the reservoir zones of an oil-producing field in Monte Alegre (onshore production), with the aim of developing a model for identifying the water produced in different zones or groups of zones. Using the concentrations of anions and cations in the produced water as input parameters for Linear Discriminant Analysis, it was possible to estimate and compare model predictions, respecting the particularities of each method, in order to ascertain which would be most appropriate. The Resubstitution, Holdout and Lachenbruch methods were used for the fitting and overall evaluation of the models. Among the models estimated for wells producing water from a single production zone, the most suitable was the Holdout Method, with a hit rate of 90%. The discriminant functions (CV1, CV2 and CV3) estimated in this model were used to build new functions for samples of artificial mixtures of produced water (prepared in our laboratory) and samples of actual mixtures of produced water (collected in wells producing from more than one zone). The experiment with these mixtures was carried out according to a simplex-centroid mixture design, and the presence of water from steam injection in these reservoirs was also simulated for part of the samples. Using two- and three-dimensional plots, it was possible to estimate the proportion of water from each production zone.
Abstract:
The training and the application of a neural network system for the prediction of occurrences of secondary metabolites belonging to diverse chemical classes in the Asteraceae is described. From a database containing about 604 genera and 28,000 occurrences of secondary metabolites in the plant family, information was collected encompassing nine chemical classes and their respective occurrences for the training of a multi-layer net using the back-propagation algorithm. The net supplied as output the presence or absence of the chemical classes as well as the number of compounds isolated from each taxon. The results provided by the net for the presence or absence of a chemical class showed an 89% hit rate; by excluding triterpenes from the analysis, only 5% of the genera studied exhibited errors greater than 10%. Copyright (C) 2004 John Wiley & Sons, Ltd.
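A forward pass of such a presence/absence network can be sketched as follows; the weights and inputs are hypothetical, standing in for the back-propagation-trained multi-layer net described above:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical trained two-layer network: 2 inputs -> 2 hidden -> 1 output.
W1 = [[0.9, -0.4], [-0.7, 0.8]]   # input -> hidden weights
b1 = [0.1, -0.2]
W2 = [1.2, -1.0]                  # hidden -> output weights
b2 = 0.05

def predict_presence(features):
    """Forward pass; output >= 0.5 is read as 'chemical class present'."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    out = sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)
    return (1 if out >= 0.5 else 0), out

label, prob = predict_presence([1.0, 0.2])
print(label, round(prob, 3))
```

Back-propagation training adjusts W1, W2 and the biases to minimize the error between these outputs and the recorded occurrences; the hit rate reported above is the fraction of taxa for which the thresholded output matches the database.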