959 results for Diagnostic Test Accuracy
Abstract:
BACKGROUND: The isolation of free fetal cells or fetal DNA from maternal blood opens a window of non-invasive diagnostic possibilities for monogenic and chromosomal disorders, in addition to allowing identification of fetal sex and Rh status. Multiple studies currently evaluate the efficacy of these methods, showing them to be cost-effective and lower risk than the gold standard. This work describes the evidence on non-invasive prenatal diagnosis found through a systematic review of the literature. OBJECTIVES: The objective of this study was to gather the evidence meeting the search criteria on non-invasive fetal diagnosis by free fetal cells in maternal blood, in order to determine its diagnostic utility. METHODS: A systematic review of the literature was carried out to determine whether non-invasive prenatal diagnosis by free fetal cells in maternal blood is effective as a diagnostic method. RESULTS: A total of 5,893 articles met the search criteria; 67 met the inclusion criteria: 49.3% (33/67) were cross-sectional studies, 38.8% (26/67) cohort studies, and 11.9% (8/67) case-control studies. Results on sensitivity, specificity, and type of test were obtained. CONCLUSION: This systematic review shows that non-invasive prenatal diagnosis is a feasible, reproducible, and sensitive technique for fetal diagnosis that avoids the risks of invasive diagnosis.
Abstract:
Important assessment instruments exist that measure motor skills or competences in children; nevertheless, Colombia lacks studies demonstrating the validity and reliability of a measurement test that would support an evaluative judgment of children's motor competences, bearing in mind that intervention must be grounded in the rigour demanded by processes of assessment and evaluation of body movement. Objective. This study focused on determining the psychometric properties of the Bruininks-Oseretsky Test of Motor Proficiency, second edition (BOT-2). Materials and methods. A diagnostic test evaluation was carried out with 24 apparently healthy children of both sexes, aged 4 to 7 years, living in the cities of Chía and Bogotá. The assessment was performed by 3 expert raters; internal consistency was analysed using Cronbach's alpha coefficient, reproducibility was established through the intraclass correlation coefficient (ICC), and concurrent validity was analysed using Pearson's correlation coefficient, with alpha = 0.05. Results. High indices of reliability and validity were found for all the tests. Conclusions. The BOT-2 is a valid and reliable instrument that can be used to assess and identify the level of development of motor competences in children.
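As a rough illustration of the three statistics named above, the sketch below computes Cronbach's alpha, a one-way intraclass correlation coefficient, and Pearson's r on toy rater data (24 children, 3 raters). The formulas are the generic textbook ones and the data are simulated; this is not the authors' analysis code.

```python
# Minimal sketch of the reliability/validity statistics used in the BOT-2 study,
# computed on simulated data (24 children, 3 raters). Illustrative only.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(scores):
    """scores: 2-D array, rows = children, columns = items/raters."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def icc_one_way(ratings):
    """One-way random-effects ICC(1,1): rows = children, columns = raters."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(42)
true_score = rng.normal(50, 10, size=24)                       # 24 children
raters = true_score[:, None] + rng.normal(0, 3, size=(24, 3))  # 3 raters
print(cronbach_alpha(raters), icc_one_way(raters))
print(pearsonr(raters[:, 0], true_score))                      # concurrent validity
```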
Abstract:
This paper compares the use of two diagnostic tests, the Gaze Stabilization Test (GST) and the Dynamic Visual Acuity Test (DVAT), in detecting unilateral vestibular dysfunction.
Abstract:
Ecological risk assessments must increasingly consider the effects of chemical mixtures on the environment as anthropogenic pollution continues to grow in complexity. Yet testing every possible mixture combination is impractical and unfeasible; thus, there is an urgent need for models that can accurately predict mixture toxicity from single-compound data. Currently, two models are frequently used to predict mixture toxicity from single-compound data: concentration addition (CA) and independent action (IA). The accuracy of the predictions generated by these models is currently debated and needs to be resolved before their use in risk assessments can be fully justified. The present study addresses this issue by determining whether the IA model adequately described the toxicity of binary mixtures of five pesticides and environmental contaminants (cadmium, chlorpyrifos, diuron, nickel, and prochloraz), each with dissimilar modes of action, on the reproduction of the nematode Caenorhabditis elegans. In three out of 10 cases, the IA model failed to describe mixture toxicity adequately, with significant synergism or antagonism being observed. In a further three cases, there was an indication of synergy, antagonism, and effect-level-dependent deviations, respectively, but these were not statistically significant. The extent of the significant deviations that were found varied, but all were such that the predicted percentage effect on reproductive output would have been wrong by 18 to 35% (i.e., the effect concentration expected to cause a 50% effect led to an 85% effect). The presence of such a high number and variety of deviations has important implications for the use of existing mixture toxicity models in risk assessments, especially where all or part of the deviation is synergistic.
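For readers unfamiliar with the IA model, the sketch below shows the standard independent action prediction for a mixture, E_mix = 1 - prod_i(1 - E_i), using assumed log-logistic single-compound dose-response curves. The parameter values are illustrative and are not taken from the study.

```python
# Hedged sketch: the independent action (IA) prediction for a binary mixture,
# assuming each compound follows a two-parameter log-logistic dose-response
# curve (hypothetical EC50s and slopes, not the study's fitted values).
def log_logistic_effect(conc, ec50, slope):
    """Fractional effect (0-1) of a single compound at a given concentration."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope) if conc > 0 else 0.0

def ia_mixture_effect(concs, ec50s, slopes):
    """IA prediction: E_mix = 1 - prod_i (1 - E_i)."""
    survival = 1.0
    for c, ec50, s in zip(concs, ec50s, slopes):
        survival *= 1.0 - log_logistic_effect(c, ec50, s)
    return 1.0 - survival

# Example: two compounds, each dosed at its (hypothetical) EC50.
# IA predicts 1 - 0.5 * 0.5 = 0.75, i.e. a 75% effect on reproduction.
print(ia_mixture_effect([1.0, 2.0], ec50s=[1.0, 2.0], slopes=[2.0, 3.0]))
```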
Abstract:
While over-dispersion in capture–recapture studies is well known to lead to poor estimation of population size, current diagnostic tools to detect the presence of heterogeneity have not been specifically developed for capture–recapture studies. To address this, a simple and efficient method of testing for over-dispersion in zero-truncated count data is developed and evaluated. The proposed method generalizes an over-dispersion test previously suggested for un-truncated count data and may also be used for testing residual over-dispersion in zero-inflated data. Simulations suggest that the asymptotic distribution of the test statistic is standard normal and that this approximation is also reasonable for small sample sizes. The method is also shown to be more efficient than an existing test for over-dispersion adapted to the capture–recapture setting. Studies with zero-truncated and zero-inflated count data are used to illustrate the test procedures.
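The abstract does not give the test statistic itself; as background, the sketch below implements a standard score-type over-dispersion test for un-truncated Poisson counts, the kind of test the authors state they generalize to the zero-truncated setting. The statistic used here is an assumption for illustration, not the paper's.

```python
# Hedged sketch: a Dean-Lawless style score test of H0: Var = Mean (Poisson)
# against over-dispersion for un-truncated counts. Illustrative only; the
# paper's generalization to zero-truncated data is not reproduced here.
import numpy as np
from scipy.stats import norm

def poisson_overdispersion_test(counts):
    """Return the test statistic (approx. standard normal under H0) and
    the one-sided p-value for over-dispersion."""
    x = np.asarray(counts, dtype=float)
    n, mu = x.size, x.mean()
    t = np.sum((x - mu) ** 2 - x) / (mu * np.sqrt(2.0 * n))
    return t, norm.sf(t)

rng = np.random.default_rng(1)
print(poisson_overdispersion_test(rng.poisson(3.0, size=200)))          # Poisson: no evidence
print(poisson_overdispersion_test(rng.negative_binomial(2, 0.4, 500)))  # over-dispersed
```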
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions – the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors for the community wide prediction experiment – CASP8. Whilst being a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of nonbinding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores whilst also being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new, simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
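Since the abstract does not give the BDT formula, the sketch below only illustrates the general idea of a distance-based score in [0, 1]: each predicted residue is scored by its distance to the nearest observed binding residue, here with an assumed exponential decay. The decay form and the scale d0 are illustrative assumptions, not the published definition.

```python
# Hedged sketch of a distance-weighted binding-site score in the spirit of the
# BDT: predictions near the observed site contribute more than distant ones.
import numpy as np

def distance_score(pred_coords, obs_coords, d0=3.0):
    """Average per-residue score in [0, 1] based on the distance from each
    predicted residue to the nearest observed binding-site residue."""
    pred = np.asarray(pred_coords, dtype=float)
    obs = np.asarray(obs_coords, dtype=float)
    if pred.size == 0 or obs.size == 0:
        return 0.0
    # Minimum Euclidean distance from each predicted residue to the observed site.
    dists = np.linalg.norm(pred[:, None, :] - obs[None, :, :], axis=2).min(axis=1)
    # Map distance to a score: 1 at the site, decaying towards 0 further away.
    return float(np.mean(np.exp(-dists / d0)))

# Toy example: one predicted residue on the site, one 6 angstroms away.
print(distance_score([[0, 0, 0], [6, 0, 0]], [[0, 0, 0], [1, 1, 1]]))
```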
Abstract:
The construction sector is under growing pressure to increase productivity and improve quality, most notably in reports by Latham (1994, Constructing the Team, HMSO, London) and Egan (1998, Rethinking Construction, HMSO, London). A major problem for construction companies is the lack of project predictability. One method of increasing predictability and delivering increased customer value is through the systematic management of construction processes. However, the industry has no methodological mechanism to assess process capability and prioritise process improvements. Standardized Process Improvement for Construction Enterprises (SPICE) is a research project that is attempting to develop a stepwise process improvement framework for the construction industry, utilizing experience from the software industry, and in particular the Capability Maturity Model (CMM), which has resulted in significant productivity improvements in the software industry. This paper introduces SPICE concepts and presents the results from two case studies conducted on design and build projects. These studies have provided further insight into the relevance and accuracy of the framework, as well as its value for the construction sector.
Abstract:
Objectives: Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian neoplasms and age-matched healthy controls and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Materials and Methods: Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values and tested for their ability to classify blinded test samples. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasms, and 2236 postmenopausal healthy controls. Results: Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for the high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications based on CA125 alone. Conclusions: We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
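The threshold classification reported above reduces to comparing CA125 against a cutoff and tallying sensitivity and specificity; the sketch below shows that computation on made-up values (the 30 and 65 U/mL cutoffs come from the abstract, the sample data do not come from the study).

```python
# Hedged sketch of threshold classification with CA125: a sample is called
# malignant when CA125 exceeds the cutoff, and sensitivity/specificity are
# computed against the known diagnosis. Toy values for illustration only.
import numpy as np

def sensitivity_specificity(ca125, is_cancer, cutoff):
    ca125 = np.asarray(ca125, dtype=float)
    is_cancer = np.asarray(is_cancer, dtype=bool)
    predicted_positive = ca125 > cutoff
    sensitivity = (predicted_positive & is_cancer).sum() / is_cancer.sum()
    specificity = (~predicted_positive & ~is_cancer).sum() / (~is_cancer).sum()
    return sensitivity, specificity

ca125 = [12, 25, 31, 80, 400, 18, 65, 9, 150, 22]   # hypothetical U/mL values
cancer = [0, 0, 0, 1, 1, 0, 1, 0, 1, 0]             # hypothetical diagnoses
for cutoff in (30, 65):
    sens, spec = sensitivity_specificity(ca125, cancer, cutoff)
    print(f"cutoff {cutoff} U/mL: sensitivity {sens:.2f}, specificity {spec:.2f}")
```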
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without adding further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. The advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble for different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding distortion of the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
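As an illustration of the time stepping discussed in the third part, the sketch below applies a leapfrog scheme with a RAW-style filter to a toy oscillator. The displacement form and the coefficients (nu, alpha) follow the usual description of Williams' filter but are assumptions here, not the SPEEDY configuration used in the dissertation.

```python
# Hedged sketch: leapfrog time stepping with a Robert-Asselin-Williams (RAW)
# style filter on dx/dt = i*omega*x. Parameter values are illustrative only.
import numpy as np

def leapfrog_raw(x0, omega=1.0, dt=0.1, nsteps=200, nu=0.1, alpha=0.53):
    f = lambda x: 1j * omega * x          # simple linear oscillator
    x_prev = x0
    x_curr = x0 + dt * f(x0)              # forward-Euler start-up step
    out = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        # RAW filter: a fraction alpha of d is applied to the current level and
        # the remaining fraction (alpha - 1) to the new level; alpha = 1
        # recovers the classical Robert-Asselin filter.
        x_prev = x_curr + alpha * d
        x_curr = x_next + (alpha - 1.0) * d
        out.append(x_curr)
    return np.array(out)

print(abs(leapfrog_raw(1.0 + 0.0j)[-1]))  # amplitude after 200 filtered steps
```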
Abstract:
Developing models to predict the effects of social and economic change on agricultural landscapes is an important challenge. Model development often involves making decisions about which aspects of the system require detailed description and which are reasonably insensitive to the assumptions. However, important components of the system are often left out because parameter estimates are unavailable. In particular, the relative influence of different objectives, such as risk or environmental management, on farmer decision making has proven difficult to quantify. We describe a model that can make predictions of land use on the basis of profit alone or with the inclusion of explicit additional objectives. Importantly, our model is specifically designed to use parameter estimates for additional objectives obtained via farmer interviews. By statistically comparing the outputs of this model with a large farm-level land-use data set, we show that cropping patterns in the United Kingdom contain a significant contribution from farmers' preferences for objectives other than profit. In particular, we found that risk aversion had an effect on the accuracy of model predictions, whereas preference for a particular number of crops grown was less important. While objectives other than profit have frequently been identified as factors in farmers' decision making, our results take this analysis further by demonstrating the relationship between these preferences and actual cropping patterns.
Abstract:
In financial research, the sign of a trade (or the identity of the trade aggressor) is not always available in the transaction dataset, and it can be estimated using a simple set of rules called the tick test. In this paper we investigate the accuracy of the tick test from an analytical perspective by providing a closed formula for the performance of the prediction algorithm. By analyzing the derived equation, we provide formal arguments for the use of the tick test by proving that it is bound to perform better than chance (50/50) and that the set of rules from the tick test provides an unbiased estimator of the trade signs. On the empirical side, we compare the values from the analytical formula against the empirical performance of the tick test for fifteen heavily traded stocks in the Brazilian equity market. The results show that the formula is quite realistic in assessing the accuracy of the prediction algorithm in a real data situation.
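The tick-test rules themselves are simple and are sketched below: a trade is signed as buyer-initiated on an uptick, seller-initiated on a downtick, and inherits the previous sign on a zero tick. The implementation is a generic illustration, not the authors' code.

```python
# Hedged sketch of the tick-test rules analysed in the paper: +1 for
# buyer-initiated trades, -1 for seller-initiated trades, None when no
# classification is possible yet (e.g. the very first trade).
def tick_test(prices):
    """Return a list of inferred trade signs from a sequence of trade prices."""
    signs, last_sign, prev_price = [], None, None
    for p in prices:
        if prev_price is None or p == prev_price:
            sign = last_sign            # zero tick (or first trade): reuse last sign
        elif p > prev_price:
            sign = 1                    # uptick -> buyer-initiated
        else:
            sign = -1                   # downtick -> seller-initiated
        signs.append(sign)
        if sign is not None:
            last_sign = sign
        prev_price = p
    return signs

print(tick_test([10.00, 10.01, 10.01, 10.00, 10.00, 10.02]))
# [None, 1, 1, -1, -1, 1]
```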
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measure used, would produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis compared with the conventional by-participant analysis. We also present real data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis.
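A minimal sketch of the recommended approach is given below, assuming trial-level data with participant and item identifiers and fitting a linear mixed model with crossed random intercepts via statsmodels. The column names, the linear link, and the random-effects structure are illustrative assumptions; the paper's own models may differ (e.g. a logistic link for binary recall).

```python
# Hedged sketch: a mixed-effects alternative to by-participant gamma
# correlations, with crossed random intercepts for participants and items.
import pandas as pd
import statsmodels.formula.api as smf

def fit_metacognition_model(df: pd.DataFrame):
    """df needs columns: recall (0/1), judgment (e.g. a JOL rating),
    participant, item. Column names are assumptions for illustration."""
    model = smf.mixedlm(
        "recall ~ judgment",                 # fixed effect of metacognitive judgment
        data=df,
        groups="participant",                # random intercept per participant
        re_formula="1",
        vc_formula={"item": "0 + C(item)"},  # crossed random intercept per item
    )
    return model.fit()

# Usage: result = fit_metacognition_model(df); print(result.summary())
```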
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
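The evaluation setting described above can be illustrated with a generic multi-class classifier scored by accuracy and one-vs-rest AUC, as sketched below. The classifier choice, the toy features standing in for MRI-derived measures, and the data sizes mirroring the challenge (n = 30 training, 354 test) are assumptions for illustration only, not any participant's method.

```python
# Hedged sketch of the challenge evaluation: three-class classification
# (AD / MCI / control) from precomputed features, scored by accuracy and
# multi-class AUC. Feature extraction (e.g. VBM, cortical thickness) is
# assumed to have been done upstream; random data stand in for it here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(train_X, train_y, test_X, test_y):
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(train_X, train_y)
    acc = accuracy_score(test_y, clf.predict(test_X))
    auc = roc_auc_score(test_y, clf.predict_proba(test_X), multi_class="ovr")
    return acc, auc

rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(30, 20)), rng.integers(0, 3, 30)    # toy training set
Xte, yte = rng.normal(size=(354, 20)), rng.integers(0, 3, 354)  # toy blinded test set
print(evaluate(Xtr, ytr, Xte, yte))
```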
Abstract:
Short-term memory (STM) impairments are prevalent in adults with acquired brain injuries. While there are several published tests to assess these impairments, the majority require speech production, e.g. digit span (Wechsler, 1987). This feature may make them unsuitable for people with aphasia and motor speech disorders because of word-finding difficulties and speech demands respectively. If patients perceive the speech demands of the test to be high, they may not engage with testing. Furthermore, existing STM tests are mainly ‘pen-and-paper’ tests, which can jeopardise accuracy. To address these shortcomings, we designed and standardised a novel computerised test that does not require speech output and, because of its computerised delivery, would enable clinicians to identify STM impairments with greater precision than current tests. The matching listening span task, similar to the non-normed PALPA 13 (Kay, Lesser & Coltheart, 1992), is used to test short-term memory for the serial order of spoken items. Sequences of digits are presented in pairs: the person hears the first sequence, followed by the second sequence, and decides whether the two sequences are the same or different. In the computerised test, the sequences are presented as live-voice recordings on a portable computer through a software application (Molero Martin, Laird, Hwang & Salis, 2013). We collected normative data from healthy older adults (N = 22-24) using digits, real words (one and two syllables) and non-words (one and two syllables). Performance was scored using two systems. The Highest Span score was the longest sequence length (e.g. 2-8) at which a participant responded correctly to more than 7 out of 10 trials. Test-retest reliability was also assessed in a subgroup of participants. The test will be available free of charge for clinicians and researchers to use.
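The Highest Span rule described above can be written down directly; the sketch below assumes the trial results are grouped by sequence length and applies the more-than-7-of-10 criterion. The data layout is an assumption for illustration.

```python
# Hedged sketch of the Highest Span scoring rule: the score is the longest
# sequence length at which the participant answered more than 7 of the 10
# same/different trials correctly.
def highest_span(trials):
    """trials: dict mapping span length -> list of 10 booleans (trial correct?)."""
    passed = [length for length, results in trials.items() if sum(results) > 7]
    return max(passed) if passed else None

example = {
    2: [True] * 10,               # 10/10 correct
    3: [True] * 9 + [False],      # 9/10 correct
    4: [True] * 7 + [False] * 3,  # 7/10 -> does not exceed the criterion
}
print(highest_span(example))      # 3
```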
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed-model association (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected by the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.