868 results for "Statistic validation"
Abstract:
OBJECTIVE: To evaluate an automated seizure detection (ASD) algorithm in EEGs with periodic and other challenging patterns. METHODS: Selected EEGs recorded in patients over 1 year old were classified into four groups: A. Periodic lateralized epileptiform discharges (PLEDs) with intermixed electrical seizures. B. PLEDs without seizures. C. Electrical seizures and no PLEDs. D. No PLEDs or seizures. Recordings were analyzed by the Persyst P12 software and compared to the raw EEG interpreted by two experienced neurophysiologists; positive percent agreement (PPA) and false-positive rates per hour (FPR) were calculated. RESULTS: We assessed 98 recordings (Group A = 21 patients; B = 29; C = 17; D = 31). Total duration was 82.7 h (median: 1 h), containing 268 seizures. The software detected 204 (76.1%) seizures; all ictal events were captured in 29/38 (76.3%) patients, and in only 3 (7.7%) were no seizures detected. Median PPA was 100% (range 0-100; interquartile range 50-100), and median FPR was 0/h (range 0-75.8; interquartile range 0-4.5); however, lower performance was seen in the groups containing periodic discharges. CONCLUSION: This analysis provides data regarding the yield of the ASD in a particularly difficult subset of EEG recordings, showing that periodic discharges may bias the results. SIGNIFICANCE: Ongoing refinements in this technique might enhance its utility and lead to a more extensive application.
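The two agreement metrics reported above, PPA and FPR, can be illustrated with a minimal sketch. The event lists and the 30-second matching window below are hypothetical, not taken from the study:

```python
def positive_percent_agreement(reference_events, detected_events, tolerance_s=30.0):
    """Percentage of reference seizures matched by a detection within tolerance."""
    if not reference_events:
        return None
    matched = sum(
        any(abs(r - d) <= tolerance_s for d in detected_events)
        for r in reference_events
    )
    return 100.0 * matched / len(reference_events)

def false_positive_rate(reference_events, detected_events, duration_h, tolerance_s=30.0):
    """Detections not matching any reference seizure, per hour of recording."""
    false_pos = sum(
        not any(abs(d - r) <= tolerance_s for r in reference_events)
        for d in detected_events
    )
    return false_pos / duration_h

# Hypothetical example: three expert-marked seizure onsets (seconds into the record)
ref = [120.0, 900.0, 2400.0]
det = [125.0, 910.0, 1800.0]  # two true detections, one false positive
ppa = positive_percent_agreement(ref, det)
fpr = false_positive_rate(ref, det, duration_h=1.0)
```

With these hypothetical events, two of three seizures are matched (PPA ≈ 66.7%) and one detection counts as a false positive (FPR = 1/h).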
Abstract:
While the incidence of sleep disorders is continuously increasing in western societies, there is a clear demand for technologies to assess sleep-related parameters in ambulatory scenarios. The present study introduces a novel concept for an accurate sensor that measures RR intervals via the analysis of photo-plethysmographic signals recorded at the wrist. In a cohort of 26 subjects undergoing full-night polysomnography, the wrist device provided RR interval estimates in agreement with RR intervals measured from standard electrocardiographic time series. The study showed an overall agreement between the two approaches of 0.05 ± 18 ms. The novel wrist sensor opens the door to a new generation of comfortable and easy-to-use sleep monitors.
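The "0.05 ± 18 ms" figure is a bias ± SD of paired differences, as in a Bland-Altman analysis. A minimal sketch, with purely hypothetical RR values (the study's data are not reproduced here):

```python
import math

def agreement_stats(rr_wrist_ms, rr_ecg_ms):
    """Bland-Altman-style agreement: mean (bias) +/- SD of paired differences."""
    diffs = [w - e for w, e in zip(rr_wrist_ms, rr_ecg_ms)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, sd

# Hypothetical paired RR intervals (ms) from the wrist device and the ECG
wrist = [812, 798, 905, 876, 843]
ecg   = [810, 800, 902, 880, 845]
bias, sd = agreement_stats(wrist, ecg)
```

A bias near zero with a small SD, as reported in the abstract, indicates that the two methods agree on average and rarely diverge far for individual beats.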
Abstract:
The article highlights the need to take into account the register of values engaged in our beliefs when we seek to evaluate psychotherapeutic methods and validate them scientifically. After demonstrating the contribution of clinical anthropology to such an endeavour and specifying the link between science and belief, it proposes a twofold clarification, epistemological and scientific, that appears indispensable for validating a psychotherapeutic method.
Abstract:
The genotyping of human papillomaviruses (HPV) is essential for the surveillance of HPV vaccines. We describe and validate a low-cost PGMY-based PCR assay (PGMY-CHUV) for the genotyping of 31 HPV types by reverse blotting hybridization (RBH). Genotype-specific detection limits were 50 to 500 genome equivalents per reaction. RBH was 100% specific and 98.61% sensitive using DNA sequencing as the gold standard (n = 1,024 samples). PGMY-CHUV was compared to the validated and commercially available linear array (Roche) on 200 samples. Both assays identified the same positive (n = 182) and negative samples (n = 18). Seventy-six percent of the positives were fully concordant after restricting the comparison to the 28 genotypes shared by both assays. At the genotypic level, agreement was 83% (285/344 genotype-sample combinations; κ of 0.987 for single infections and 0.853 for multiple infections). Fifty-seven of the 59 discordant cases were associated with multiple infections and with the weakest genotypes within each sample (P < 0.0001). PGMY-CHUV was significantly more sensitive for HPV56 (P = 0.0026) and could unambiguously identify HPV52 in mixed infections. PGMY-CHUV was reproducible on repeat testing (n = 275 samples; 392 genotype-sample combinations; κ of 0.933) involving different reagent lots and different technicians. Discordant results (n = 47) were significantly associated with the weakest genotypes in samples with multiple infections (P < 0.0001). Successful participation in proficiency testing also supported the robustness of this assay. The PGMY-CHUV reagent costs were estimated at $2.40 per sample using the least expensive yet proficient genotyping algorithm that also included quality control. This assay may be used in low-resource laboratories that have sufficient manpower and PCR expertise.
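The κ values quoted above are Cohen's kappa, chance-corrected agreement between two assays over the same samples. A minimal sketch with hypothetical per-sample calls (not the study's data):

```python
def cohens_kappa(calls_a, calls_b):
    """Cohen's kappa for two raters/assays scoring the same samples."""
    assert len(calls_a) == len(calls_b)
    n = len(calls_a)
    labels = set(calls_a) | set(calls_b)
    # Observed agreement: fraction of samples with identical calls
    p_obs = sum(x == y for x, y in zip(calls_a, calls_b)) / n
    # Expected agreement by chance, from each assay's marginal frequencies
    p_exp = sum((calls_a.count(l) / n) * (calls_b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical genotype calls from two assays on six samples
assay1 = ["HPV16", "HPV16", "HPV52", "HPV56", "neg", "neg"]
assay2 = ["HPV16", "HPV16", "HPV52", "neg",   "neg", "neg"]
kappa = cohens_kappa(assay1, assay2)
```

Kappa near 1 (as in the 0.987 reported for single infections) indicates agreement far beyond what the marginal call frequencies alone would produce.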
Abstract:
BACKGROUND: Genotypes obtained with commercial SNP arrays have been extensively used in many large case-control or population-based cohorts for SNP-based genome-wide association studies for a multitude of traits. Yet, these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as Copy Number Variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from SNP-probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian Mixture Model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods as well as existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures in order to evaluate the performance of the different methods. This includes comparison with previously published CNVs as well as using a replication sample of 239 individuals, genotyped with Illumina 550K arrays. We also established a new evaluation procedure that employs the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. 
CONCLUSION: Our new methodologies for CNV detection and their evaluation will help in extracting additional information from the large amount of SNP-genotyping data on various cohorts and in using it to explore structural variants and their impact on complex traits.
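The abstract does not specify the authors' Gaussian Mixture Model, so as a generic illustration of the building block, here is a minimal two-component 1-D EM fit such as might separate "deleted" from "normal copy number" probe intensities. All values, initial parameters, and the two-cluster setup are hypothetical:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, mu=(-0.5, 0.0), sigma=(0.1, 0.1), weight=0.5, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (component 0 = putative deletion)."""
    mu0, mu1 = mu
    s0, s1 = sigma
    w = weight  # mixing weight of component 0
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each point
        resp = []
        for x in data:
            p0 = w * normal_pdf(x, mu0, s0)
            p1 = (1.0 - w) * normal_pdf(x, mu1, s1)
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate weight, means and standard deviations
        n0 = sum(resp)
        n1 = len(data) - n0
        w = n0 / len(data)
        mu0 = sum(r * x for r, x in zip(resp, data)) / n0
        mu1 = sum((1 - r) * x for r, x in zip(resp, data)) / n1
        s0 = max(1e-3, math.sqrt(sum(r * (x - mu0) ** 2 for r, x in zip(resp, data)) / n0))
        s1 = max(1e-3, math.sqrt(sum((1 - r) * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1))
    return (mu0, s0), (mu1, s1), w

# Hypothetical log-intensity ratios: a deleted segment near -0.5, normal probes near 0
log_r = [-0.55, -0.50, -0.45, -0.48, -0.52,
         0.02, -0.02, 0.00, 0.05, -0.05, 0.01, -0.01]
(m0, s0), (m1, s1), w = em_two_gaussians(log_r)
```

The fitted means recover the two clusters; a real CNV caller would additionally model more copy-number states and the spatial ordering of probes along the chromosome.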
Abstract:
In this work, a previously developed statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient truck loads. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted on the bridge, exposed to ambient traffic loads, and designed to simulate actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed for the purpose of improving the approach. Several of the underlying assumptions within the algorithm were found to be violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was thought to be a potential impediment to widespread use. Recommendations for improving the methodology were developed and preliminarily evaluated; these recommendations are believed to improve the efficacy of the damage-detection approach.
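The residual-based control chart described above can be sketched minimally: limits are set from residuals of the undamaged (training) state, and monitoring residuals outside them are flagged. The residual values and the 3-sigma rule below are hypothetical illustrations, not the study's data:

```python
import math

def control_limits(training_residuals, k=3.0):
    """Shewhart-style limits (mean +/- k*SD) from undamaged-state residuals."""
    n = len(training_residuals)
    mean = sum(training_residuals) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in training_residuals) / (n - 1))
    return mean - k * sd, mean + k * sd

def out_of_control(residuals, lo, hi):
    """Indices of monitoring residuals falling outside the control limits."""
    return [i for i, r in enumerate(residuals) if r < lo or r > hi]

# Hypothetical strain residuals (measured minus predicted), undamaged state
train = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, -0.05, 0.2, -0.15, 0.02]
lo, hi = control_limits(train)
# New residuals: the large third value would be flagged as possible damage
flags = out_of_control([0.1, -0.05, 1.4, 0.0], lo, hi)
```

The false-positive problem the abstract reports corresponds to in-control residuals crossing the limits, e.g. when the normality or independence assumptions behind the k-sigma rule are violated.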
Abstract:
False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features was built within a forensic intelligence model. In the current study, the comparison process and metrics at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types, seized in two countries, whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: a binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated into the developed forensic intelligence model.
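The binary-classification view of type I and type II error rates can be sketched from similarity scores: a same-source pair scoring below the decision threshold is a type I error (missed link), and a different-source pair scoring at or above it is a type II error (false link). The scores and threshold below are hypothetical:

```python
def error_rates(intra_scores, inter_scores, threshold):
    """Type I rate: same-source pairs below threshold (missed links).
    Type II rate: different-source pairs at/above threshold (false links)."""
    type1 = sum(s < threshold for s in intra_scores) / len(intra_scores)
    type2 = sum(s >= threshold for s in inter_scores) / len(inter_scores)
    return type1, type2

# Hypothetical similarity scores between document profiles (0..1)
intra = [0.92, 0.88, 0.95, 0.79, 0.91]  # known same-source pairs
inter = [0.12, 0.33, 0.72, 0.55, 0.07]  # known different-source pairs
t1, t2 = error_rates(intra, inter, threshold=0.7)
```

Sweeping the threshold trades the two error rates against each other; the likelihood-ratio approach mentioned in the abstract avoids a hard threshold by weighing how much more probable a score is under the same-source than the different-source hypothesis.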
Abstract:
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and if the desired accuracy or correspondence exists between predicted and monitored performance for Iowa conditions. A comprehensive literature review was conducted to identify the MEPDG input parameters and the MEPDG verification/calibration process. Sensitivities of MEPDG input parameters to predictions were studied using different versions of the MEPDG software. Based on literature review and sensitivity analysis, a detailed verification procedure was developed. A total of sixteen different types of pavement sections across Iowa, not used for national calibration in NCHRP 1-47A, were selected. A database of MEPDG inputs and the actual pavement performance measures for the selected pavement sites were prepared for verification. The accuracy of the MEPDG performance models for Iowa conditions was statistically evaluated. The verification testing showed promising results in terms of MEPDG’s performance prediction accuracy for Iowa conditions. Recalibrating the MEPDG performance models for Iowa conditions is recommended to improve the accuracy of predictions.
Abstract:
Photopolymerization is commonly used in a broad range of bioapplications, such as drug delivery, tissue engineering, and surgical implants, where liquid materials are injected and then hardened by means of illumination to create a solid polymer network. However, photopolymerization using a probe, e.g., a needle guiding both the liquid and the curing illumination, has not been thoroughly investigated. We present a Monte Carlo model that takes into account the dynamic absorption and scattering parameters, as well as the solid-liquid boundaries of the photopolymer, to yield the shape and volume of minimally invasively injected, photopolymerized hydrogels. In the first part of the article, our model is validated using a set of well-known poly(ethylene glycol) dimethacrylate hydrogels, showing excellent agreement between simulated and experimental volume growth rates. In the second part, in situ experimental results and simulations for photopolymerization in tissue cavities are presented. It was found that a cavity with a volume of 152 mm³ can be photopolymerized from the output of a 0.28-mm² fiber by adding scattering lipid particles, while only a volume of 38 mm³ (25%) was achieved without particles. The proposed model provides a simple and robust method for solving complex photopolymerization problems in which the dimension of the light source is much smaller than the volume of the photopolymerizable hydrogel.
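The core Monte Carlo mechanism, photons taking exponentially distributed steps and being either absorbed or isotropically scattered, can be illustrated with a toy sketch. This is not the authors' model (which handles dynamic coefficients and solid-liquid boundaries); the coefficients and isotropic phase function are hypothetical simplifications:

```python
import math
import random

def simulate_photons(n_photons, mu_a, mu_s, seed=0):
    """Toy 3-D photon transport in an infinite medium: exponential step lengths,
    absorb-or-scatter at each interaction. Returns absorption positions."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s  # total interaction coefficient
    positions = []
    for _ in range(n_photons):
        x = y = z = 0.0
        dx, dy, dz = 0.0, 0.0, 1.0  # launched along +z (e.g. fiber output)
        while True:
            # Exponential free path; 1 - random() is in (0, 1], so log is safe
            step = -math.log(1.0 - rng.random()) / mu_t
            x += dx * step; y += dy * step; z += dz * step
            if rng.random() < mu_a / mu_t:   # absorbed: energy deposited here
                positions.append((x, y, z))
                break
            # Scattered: draw a new isotropic direction
            phi = 2.0 * math.pi * rng.random()
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t

    return positions

# Hypothetical coefficients (per mm): adding scatterers keeps deposition near the source
deposits = simulate_photons(1000, mu_a=0.5, mu_s=5.0)
```

This reproduces qualitatively the abstract's observation: with strong scattering, light is deposited diffusely near the source rather than penetrating ballistically, which is what lets added lipid particles cure a larger, rounder volume around the fiber tip.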
Abstract:
Background: Research on postoperative oedema following knee replacement surgery is underdeveloped, notably because of the absence of a suitable measurement method. A collaboration between physiotherapists and engineers made it possible to develop and validate an innovative, easily applicable measurement method. The physiotherapists identified a clinical need, the engineers contributed their technological expertise, and the team jointly designed the measurement protocol and carried out the validation study. Introduction: Bioimpedance is frequently used to assess oedema by analysing an electrical signal passed through the body, extrapolating the theoretical resistance at zero frequency (R0). The measurement is reliable and fast but had never been applied and validated for assessing oedema in orthopaedic surgery. Objective: The aim of the study was to validate bioimpedance measurement of lower-limb oedema in patients who had undergone total knee arthroplasty (TKA). Research question: After verifying that the metallic TKA implant did not influence the measurement, we examined the validity and reliability of bioimpedance measurements in this context. Methods: Two raters each measured oedema twice in succession in 24 patients operated for TKA, at three time points (preoperative, day 2, day 8). Oedema was assessed by bioimpedance (R0) and by converting circumferential measurements of the lower limb (LL) into volume. We calculated the mean LL ratio for each method, evaluated the intra- and inter-rater reproducibility of bioimpedance (intraclass correlation coefficient, ICC), and computed the correlation between methods (Spearman). Results: The mean operated/healthy LL volume ratio was 1.04 (SD ± 0.06) preoperatively, 1.18 (SD ± 0.09) at day 2, and 1.17 (SD ± 0.10) at day 8.
The healthy/operated LL ratio for R0 was 1.04 (SD ± 0.07) preoperatively, 1.51 (SD ± 0.22) at day 2, and 1.65 (SD ± 0.21) at day 8. Preoperatively and at days 2 and 8, the ICCs were all above 0.95 for the intra- and inter-rater reproducibility of bioimpedance. The correlation between methods was 0.71 preoperatively, 0.61 at day 2, and 0.33 at day 8. Analysis and conclusion: The variation of the LL ratio between the preoperative, day-2, and day-8 time points was more marked for R0. Bioimpedance measurement shows excellent intra- and inter-rater reproducibility. The evolution of the between-method correlation over time may be explained by the potential influence of confounding factors on R0 (changes in fluid composition) and by the influence of postoperative muscle atrophy on the volume measurement. The physiotherapist-engineer collaboration enabled the development and evaluation of a new measurement method.
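The between-method correlation reported above is a Spearman rank correlation. A minimal stdlib sketch with tie handling via average ranks; the paired ratio values below are hypothetical, not the study's measurements:

```python
import math

def spearman_rho(a, b):
    """Spearman rank correlation, with average ranks assigned to ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average of tied rank positions
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r

    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / math.sqrt(var_a * var_b)

# Hypothetical paired limb ratios: bioimpedance (R0) vs circumference-derived volume
r0_ratio  = [1.02, 1.45, 1.60, 1.30, 1.55]
vol_ratio = [1.03, 1.15, 1.20, 1.10, 1.18]
rho = spearman_rho(r0_ratio, vol_ratio)
```

Because it uses ranks only, Spearman's rho captures any monotone relationship between the two oedema measures, which suits methods operating on different physical scales (ohms vs millilitres).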
Abstract:
Carbapenemases should be accurately and rapidly detected, given their possible epidemiological spread and their impact on treatment options. Here, we developed a simple, easy and rapid matrix-assisted laser desorption ionization-time of flight (MALDI-TOF)-based assay to detect carbapenemases and compared this innovative test with four other diagnostic approaches on 47 clinical isolates. Tandem mass spectrometry (MS-MS) was also used to determine accurately the amount of antibiotic present in the supernatant after 1 h of incubation and both MALDI-TOF and MS-MS approaches exhibited a 100% sensitivity and a 100% specificity. By comparison, molecular genetic techniques (Check-MDR Carba PCR and Check-MDR CT103 microarray) showed a 90.5% sensitivity and a 100% specificity, as two strains of Aeromonas were not detected because their chromosomal carbapenemase is not targeted by probes used in both kits. Altogether, this innovative MALDI-TOF-based approach that uses a stable 10-μg disk of ertapenem was highly efficient in detecting carbapenemase, with a sensitivity higher than that of PCR and microarray.
Abstract:
BACKGROUND: The Marburg Heart Score (MHS) aims to assist GPs in safely ruling out coronary heart disease (CHD) in patients presenting with chest pain, and to guide management decisions. AIM: To investigate the diagnostic accuracy of the MHS in an independent sample and to evaluate the generalisability to new patients. DESIGN AND SETTING: Cross-sectional diagnostic study with delayed-type reference standard in general practice in Hesse, Germany. METHOD: Fifty-six German GPs recruited 844 males and females aged ≥ 35 years, presenting between July 2009 and February 2010 with chest pain. Baseline data included the items of the MHS. Data on the subsequent course of chest pain, investigations, hospitalisations, and medication were collected over 6 months and were reviewed by an independent expert panel. CHD was the reference condition. Measures of diagnostic accuracy included the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, likelihood ratios, and predictive values. RESULTS: The AUC was 0.84 (95% confidence interval [CI] = 0.80 to 0.88). For a cut-off value of 3, the MHS showed a sensitivity of 89.1% (95% CI = 81.1% to 94.0%), a specificity of 63.5% (95% CI = 60.0% to 66.9%), a positive predictive value of 23.3% (95% CI = 19.2% to 28.0%), and a negative predictive value of 97.9% (95% CI = 96.2% to 98.9%). CONCLUSION: Considering the diagnostic accuracy of the MHS, its generalisability, and ease of application, its use in clinical practice is recommended.
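The accuracy measures reported above all derive from a 2x2 table of test result against reference diagnosis. A minimal sketch; the counts below are hypothetical illustrations, not the study's table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: score >= cut-off ('positive') vs reference CHD diagnosis
sens, spec, ppv, npv = diagnostic_metrics(tp=90, fp=280, fn=11, tn=463)
```

The pattern in the abstract, modest PPV alongside a very high NPV, is typical of a rule-out instrument in a low-prevalence setting: a negative score makes CHD unlikely, while a positive score mainly triggers further work-up.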