912 results for "Appearance-based methods"
Abstract:
Introduction: Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods: Amniotic fluid samples were submitted to DNA extraction and amplification by the following 4 Toxoplasma techniques, all performed with primers for the parasite B1 gene: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR), and efficiency (Ef). Results: Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers that were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false negatives), nested-PCR detected 58 cases (1 false negative and 4 false positives), multiplex-nested-PCR detected 57 cases (2 false negatives), and real-time PCR detected 58 cases (1 false negative). Conclusions: The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
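As a rough illustration of how the seven reported parameters follow from a 2×2 contingency table, here is a minimal Python sketch; the counts used are those implied for real-time PCR above (58 true positives, 1 false negative, 41 true negatives, 0 false positives), not code from the study itself.

```python
# Sketch: diagnostic performance metrics from a 2x2 table.
def diagnostic_metrics(tp, fp, fn, tn):
    """Return the seven parameters reported in the abstract above."""
    se = tp / (tp + fn)                                  # sensitivity
    sp = tn / (tn + fp)                                  # specificity
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")  # positive predictive value
    npv = tn / (tn + fn) if (tn + fn) else float("nan")  # negative predictive value
    plr = se / (1 - sp) if sp < 1 else float("inf")      # positive likelihood ratio
    nlr = (1 - se) / sp if sp > 0 else float("inf")      # negative likelihood ratio
    ef = (tp + tn) / (tp + fp + fn + tn)                 # efficiency (overall accuracy)
    return dict(Se=se, Sp=sp, PPV=ppv, NPV=npv, PLR=plr, NLR=nlr, Ef=ef)

# Counts implied for real-time PCR: 59 infected, 41 uninfected infants.
print(diagnostic_metrics(tp=58, fp=0, fn=1, tn=41))
# -> Se 0.983, Sp 1.0, PPV 1.0, NPV 0.976, PLR inf, NLR 0.017, Ef 0.99
```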
Abstract:
This thesis studies molecular dynamics simulations on two levels of resolution: the detailed level of atomistic simulations, where the motion of explicit atoms in a many-particle system is considered, and the coarse-grained level, where the motion of superatoms composed of up to 10 atoms is modeled. While atomistic models are capable of describing material-specific effects on small scales, the time and length scales they can cover are limited by their computational cost. Polymer systems are typically characterized by effects on a broad range of length and time scales, so it is often impossible to simulate atomistically the processes that determine macroscopic properties in polymer systems. Coarse-grained (CG) simulations extend the range of accessible time and length scales by three to four orders of magnitude; however, no standardized coarse-graining procedure has been established yet. Following the ideas of structure-based coarse-graining, a coarse-grained model for polystyrene is presented. Structure-based methods parameterize CG models to reproduce static properties of atomistic melts, such as radial distribution functions between superatoms or other probability distributions of coarse-grained degrees of freedom. Two enhancements of the coarse-graining methodology are suggested. First, correlations between local degrees of freedom are implicitly taken into account by additional potentials acting between neighboring superatoms in the polymer chain. This improves the reproduction of local chain conformations and allows the study of different tacticities of polystyrene. It also gives better control of the chain stiffness, which agrees perfectly with the atomistic model, and leads to a reproduction of experimental results for overall chain dimensions, such as the characteristic ratio, for all tacticities. The second new aspect is the computationally cheap development of nonbonded CG potentials based on the sampling of pairs of oligomers in vacuum. Static properties of polymer melts are thus obtained as predictions of the CG model, in contrast to other structure-based CG models, which are iteratively refined to reproduce reference melt structures. The dynamics of simulations at the two levels of resolution are also compared. The time scales of dynamical processes in atomistic and coarse-grained simulations can be connected by a time scaling factor, which depends on specific system properties such as molecular weight, density, temperature, and the presence of other components in mixtures. In this thesis, the influence of molecular weight in systems of oligomers and the behavior of two-component mixtures are studied. For a system of small additives in a melt of long polymer chains, the temperature dependence of the additive diffusion is predicted and compared to experiments.
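A common starting point for structure-based coarse-graining of bonded degrees of freedom is direct Boltzmann inversion of a target distribution; the minimal sketch below illustrates that generic step only (placeholder distribution and an assumed melt temperature), not the thesis' specific polystyrene parameterization.

```python
# Sketch: direct Boltzmann inversion V(q) = -kB*T*ln P(q) of a target
# distribution for a CG degree of freedom. Placeholder data, generic method.
import numpy as np

kB = 0.008314  # kJ/(mol*K)
T = 500.0      # K, assumed melt temperature (illustrative)

q = np.linspace(0.3, 0.7, 81)                  # hypothetical CG bond length grid, nm
P = np.exp(-0.5 * ((q - 0.5) / 0.05) ** 2)     # placeholder target distribution
P /= P.sum() * (q[1] - q[0])                   # normalize to unit area

V = -kB * T * np.log(np.clip(P, 1e-12, None))  # Boltzmann-inverted potential
V -= V.min()                                   # shift so the minimum is zero

print(f"potential minimum at q = {q[np.argmin(V)]:.3f} nm")
```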
Abstract:
The topic of this work is nonparametric permutation-based methods that aim to find a ranking (stochastic ordering) of a given set of groups (populations), gathering information from multiple variables under more than one experimental design. The problem of ranking populations arises in several fields of science from the need to compare G > 2 groups or treatments when the main goal is to find an order while taking several aspects into account. As can be imagined, this problem is not only of theoretical interest: it has a recognised relevance in several fields, such as industrial experiments or the behavioural sciences, and this is reflected in the vast literature on the topic, although the problem is sometimes associated with different keywords, such as "stochastic ordering", "ranking", or "construction of composite indices", or even "ranking probabilities" outside the strictly statistical literature. The properties of the proposed method are empirically evaluated by means of an extensive simulation study in which several aspects of interest are allowed to vary within a reasonable practical range: sample size, number of variables, number of groups, and distribution of the noise/error. The flexibility of the approach lies mainly in the several available choices of test statistic and in the different types of experimental design that can be analysed. This renders the method able to be tailored to the specific problem and to the nature of the data at hand. To perform the analyses, an R package called SOUP (Stochastic Ordering Using Permutations) has been written; it is available on CRAN.
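The elementary building block of such permutation-based ranking is a directional two-sample permutation test between pairs of groups; the hedged Python sketch below shows only that ingredient on simulated data (the SOUP package combines many such comparisons across variables and designs).

```python
# Sketch: one-sided two-sample permutation test, the basic ingredient behind
# permutation-based ranking of groups. Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(x, y, n_perm=5_000):
    """P-value for H1: group x tends to be larger than group y."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[: len(x)].mean() - perm[len(x):].mean()
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

# Three hypothetical groups; pairwise one-sided p-values suggest an ordering.
groups = {g: rng.normal(loc=m, size=30) for g, m in [("A", 0.0), ("B", 0.5), ("C", 1.0)]}
for a in groups:
    for b in groups:
        if a != b:
            print(f"H1: {a} > {b}, p = {permutation_pvalue(groups[a], groups[b]):.3f}")
```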
Abstract:
In the present work, one top-down (TD) and two bottom-up (BU) MALDI/ESI mass spectrometry/HPLC methods were developed with the aim of analyzing ocular surface components, i.e., tear film and conjunctival cells. A detailed insight into the development steps is provided, and the approaches are examined for suitability and methodological limitations. While the TD approach proved suitable mainly for the analysis of raw, largely unprocessed cell samples, the BU approach allowed processed conjunctival cells as well as tear film to be analyzed proteomically with high sensitivity and accuracy. More than 200 tear proteins were catalogued with the LC MALDI BU method and more than 1000 tear and conjunctival cell proteins with the LC ESI method. The ESI and MALDI methods differed markedly in the quantity and quality of their results, and different proteomic fields of application were therefore proposed for the two methods. Furthermore, building on its advantages over the TD approach, the developed LC MALDI/ESI BU platform was used to investigate therapeutic influences on the ocular surface, focusing on the topical application of taurine and Taflotan® sine. For taurine, anti-inflammatory effects were documented, evidenced by dynamic changes in the tear film; beneficial, concentration-dependent modes of action were also demonstrated in studies on conjunctival cells. For the application of preservative-free Taflotan® sine, LC ESI BU analysis showed, based on dynamic changes in the tear proteome, a regeneration of the ocular surface in patients with primary open-angle glaucoma (POAG) suffering from dry eye after a therapeutic switch from Xalatan®. These results were confirmed by microarray (MA) analyses. In both the taurine studies and the Taflotan® sine study, characteristic ocular surface proteins were documented that allow an objective assessment of the health status of the ocular surface. A combination of Taflotan® sine and taurine was proposed and discussed as a possible strategy for treating dry eye in POAG patients.
Abstract:
Conventional inorganic materials for x-ray radiation sensors suffer from several drawbacks, including their inability to cover large curved areas, mechanical stiffness, lack of tissue equivalence, and toxicity. Semiconducting organic polymers represent an alternative and have been employed as the direct photoconversion material in organic diodes. In contrast to inorganic detector materials, polymers allow low-cost and large-area fabrication by solvent-based methods. In addition, their processing is compatible with flexible low-temperature substrates. Flexible and large-area detectors are needed for dosimetry in medical radiotherapy and security applications. The objective of my thesis is to achieve optimized organic polymer diodes for flexible, direct x-ray detectors. To this end, polymer diodes based on two different semiconducting polymers, polyvinylcarbazole (PVK) and poly(9,9-dioctylfluorene) (PFO), have been fabricated. The diodes show state-of-the-art rectifying behaviour and hole transport mobilities comparable to reference materials. In order to improve the x-ray stopping power, high-Z nanoparticles (Bi2O3 or WO3) were added to realize a polymer-nanoparticle composite with optimized properties. X-ray detector characterization resulted in sensitivities of up to 14 µC/Gy/cm2 for PVK when the diodes were operated in reverse bias. The addition of nanoparticles further improved the performance, and a maximum sensitivity of 19 µC/Gy/cm2 was obtained for the PFO diodes. Compared to the pure PFO diode this corresponds to a five-fold increase and thus highlights the potential of nanoparticles for polymer detector design. Interestingly, the pure polymer diodes showed an order-of-magnitude increase in sensitivity when operated in the forward regime. This increase was attributed to a different detection mechanism based on the modulation of the diode's conductivity.
Abstract:
In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration. A very recent extension even allows mutual information (MI) to be used as a similarity measure for registering multimodal images. However, due to the intensity correspondence uncertainty existing in some anatomical parts, it is difficult for a purely intensity-based algorithm to solve the registration problem. Therefore, we propose to combine the resulting transformations from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images were conducted, for which we show that a better anatomical correspondence between the images can be obtained using the hybrid approach than using either intensity information or landmarks alone.
Abstract:
This study aimed to evaluate the influence of professional prophylactic methods on the performance of the DIAGNOdent 2095, DIAGNOdent 2190, and VistaProof devices in detecting occlusal caries. Assessments were performed on 110 permanent teeth at baseline and after prophylaxis with a sodium bicarbonate jet or a prophylactic paste, followed by rinsing. Performance in terms of sensitivity improved after rinsing of the occlusal surfaces when the prophylactic paste was used. However, the sodium bicarbonate jet did not significantly influence the performance of the fluorescence-based methods. It can be concluded that different professional prophylactic methods can significantly influence the performance of fluorescence-based methods for occlusal caries detection.
Abstract:
Misconceptions exist in all fields of learning and develop through a person's preconceptions of how the world works. Students with misconceptions in chemical engineering are not capable of correctly transferring knowledge to a new situation and will likely arrive at an incorrect solution. The purpose of this thesis was to repair misconceptions in thermodynamics by using inquiry-based activities. Inquiry-based learning is a method of teaching that involves hands-on learning and self-discovery. Previous work has shown that inquiry-based methods result in better conceptual understanding by students relative to traditional lectures. The thermodynamics activities were designed to guide students toward the correct conceptual understanding by letting them observe a preconception fail to hold up in an experiment or simulation. The developed activities focus on the following topics in thermodynamics: "internal energy versus enthalpy", "equilibrium versus steady state", and "entropy". For each topic, two activities were designed to clarify the concept and ensure it was properly grasped. Each activity was coupled with an instruction packet containing the experimental procedure as well as pre- and post-analysis questions, which were used to analyze the effect of the activities on the students' responses. Concept inventories were used to monitor students' conceptual understanding at the beginning and end of the semester. The results did not show a statistically significant increase in the overall concept inventory scores for students who performed the activities compared to traditional learning. There was, however, a statistically significant increase in concept-area scores for "internal energy versus enthalpy" and "equilibrium versus steady state". Although there was no significant increase in concept inventory scores for "entropy", written analyses showed that most students' misconceptions were repaired. Students transferred knowledge effectively and retained most of the information in the concept areas of "internal energy versus enthalpy" and "equilibrium versus steady state".
Abstract:
This study compared the performance of fluorescence-based methods, radiographic examination, and the International Caries Detection and Assessment System (ICDAS) II on occlusal surfaces. One hundred and nineteen permanent human molars were assessed twice by 2 experienced dentists using laser fluorescence (LF and LFpen) and fluorescence camera (FC) devices, ICDAS II, and bitewing radiographs (BW). After the measurements, the teeth were histologically prepared and assessed for caries extent. The sensitivities for dentine caries detection were 0.86 (FC), 0.78 (LFpen), 0.73 (ICDAS II), 0.51 (LF), and 0.34 (BW). The specificities were 0.97 (BW), 0.89 (LF), 0.65 (ICDAS II), 0.63 (FC), and 0.56 (LFpen). BW presented the highest values of likelihood ratio, LR+ (12.47) and LR- (0.68). Rank correlations with histology were 0.53 (LF), 0.52 (LFpen), 0.41 (FC), 0.59 (ICDAS II), and 0.57 (BW). The area under the ROC curve varied from 0.72 to 0.83. Inter- and intraexaminer intraclass correlation values were, respectively, 0.90 and 0.85 (LF), 0.93 and 0.87 (LFpen), and 0.85 and 0.76 (FC). The ICDAS II kappa values were 0.51 (interexaminer) and 0.61 (intraexaminer). The BW kappa values were 0.50 (interexaminer) and 0.62 (intraexaminer). The Bland and Altman limits of agreement were 46.0 and 38.2 (LF), 55.6 and 40.0 (LFpen), and 1.12 and 0.80 (FC) for intra- and interexaminer reproducibility. The post-test probability for dentine caries detection was high for BW and LF. In conclusion, LFpen, FC, and ICDAS II presented better sensitivity, and LF and BW better specificity. ICDAS II combined with BW showed the best performance and is the best combination for detecting caries on occlusal surfaces.
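The post-test probability mentioned above follows from the odds form of Bayes' theorem applied to a likelihood ratio; the short sketch below uses the reported BW figures together with an assumed pretest probability chosen for illustration only.

```python
# Sketch: post-test probability from a likelihood ratio (odds form of Bayes).
def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

pretest = 0.40  # assumed pretest probability of dentine caries (illustrative)
print(f"after a positive BW: {posttest_probability(pretest, 12.47):.2f}")  # LR+ from above
print(f"after a negative BW: {posttest_probability(pretest, 0.68):.2f}")   # LR- from above
```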
Abstract:
Loss to follow-up (LTFU) is a common problem in many epidemiological studies. In antiretroviral treatment (ART) programs for patients with human immunodeficiency virus (HIV), mortality estimates can be biased if the LTFU mechanism is non-ignorable, that is, mortality differs between lost and retained patients. In this setting, routine procedures for handling missing data may lead to biased estimates. To appropriately deal with non-ignorable LTFU, explicit modeling of the missing data mechanism is needed. This can be based on additional outcome ascertainment for a sample of patients LTFU, for example, through linkage to national registries or through survey-based methods. In this paper, we demonstrate how this additional information can be used to construct estimators based on inverse probability weights (IPW) or multiple imputation. We use simulations to contrast the performance of the proposed estimators with methods widely used in HIV cohort research for dealing with missing data. The practical implications of our approach are illustrated using South African ART data, which are partially linkable to South African national vital registration data. Our results demonstrate that while IPWs and proper imputation procedures can be easily constructed from additional outcome ascertainment to obtain valid overall estimates, neglecting non-ignorable LTFU can result in substantial bias. We believe the proposed estimators are readily applicable to a growing number of studies where LTFU is appreciable, but additional outcome data are available through linkage or surveys of patients LTFU. Copyright © 2013 John Wiley & Sons, Ltd.
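A hedged sketch of the inverse-probability-weighting idea described above: patients lost to follow-up whose outcome is ascertained in a traced subsample are up-weighted by the inverse of the tracing probability. All data below are simulated placeholders, not the South African ART data or the paper's full estimator.

```python
# Sketch: IPW mortality estimate when outcomes are ascertained only for a
# sample of patients lost to follow-up (LTFU). Simulated, illustrative data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ltfu = rng.random(n) < 0.25                         # 25% lost to follow-up
# Non-ignorable LTFU: mortality is higher among those lost than retained.
died = np.where(ltfu, rng.random(n) < 0.30, rng.random(n) < 0.05)

# Outcomes observed for all retained patients, but only for a traced
# subsample of those LTFU (e.g., registry linkage or tracing surveys).
trace_prob = 0.20
traced = ltfu & (rng.random(n) < trace_prob)
observed = ~ltfu | traced

# Retained patients get weight 1, traced LTFU patients get weight 1/trace_prob.
weights = np.where(ltfu, 1.0 / trace_prob, 1.0)[observed]
ipw_estimate = np.average(died[observed], weights=weights)
naive_estimate = died[~ltfu].mean()                 # ignores everyone LTFU

print(f"true mortality          {died.mean():.3f}")
print(f"naive (retained only)   {naive_estimate:.3f}")
print(f"IPW with traced sample  {ipw_estimate:.3f}")
```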
Abstract:
This study aimed to assess the performance of the International Caries Detection and Assessment System (ICDAS), radiographic examination, and fluorescence-based methods for detecting occlusal caries in primary teeth. One occlusal site on each of 79 primary molars was assessed twice by two examiners using ICDAS, bitewing radiography (BW), DIAGNOdent 2095 (LF), DIAGNOdent 2190 (LFpen), and the VistaProof fluorescence camera (FC). The teeth were histologically prepared and assessed for caries extent. Optimal cutoff limits were calculated for LF, LFpen, and FC. At the D1 threshold (enamel and dentin lesions), ICDAS and FC presented higher sensitivity values (0.75 and 0.73, respectively), while BW showed higher specificity (1.00). At the D2 threshold (inner enamel and dentin lesions), ICDAS presented higher sensitivity (0.83) and statistically significantly lower specificity (0.70). At the D3 threshold (dentin lesions), LFpen and FC showed higher sensitivity (1.00 and 0.91, respectively), while higher specificity was presented by FC (0.95), ICDAS (0.94), BW (0.94), and LF (0.92). The area under the receiver operating characteristic (ROC) curve (Az) varied from 0.780 (BW) to 0.941 (LF). Spearman correlation coefficients with histology were 0.72 (ICDAS), 0.64 (BW), 0.71 (LF), 0.65 (LFpen), and 0.74 (FC). Inter- and intraexaminer intraclass correlation values varied from 0.772 to 0.963, and unweighted kappa values ranged from 0.462 to 0.750. In conclusion, ICDAS and FC exhibited better accuracy in detecting enamel and dentin caries lesions, whereas ICDAS, LF, LFpen, and FC were more appropriate for detecting dentin lesions on occlusal surfaces in primary teeth, with no statistically significant difference among them. All methods presented good to excellent reproducibility.
Abstract:
The responses of many real-world problems can only be evaluated subject to noise. To make efficient optimization of such problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In total, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems for which the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different problem configurations (noise level, maximum number of observations, initial number of observations) and setups (covariance functions, budget, initial sample size) are considered. It is found that the choices of the initial sample size and the covariance function are not critical, whereas the choice of the method can result in significant differences in performance. In particular, the three most intuitive criteria turn out to be poor alternatives. Although no criterion is found to be consistently more efficient than the others, two specialized methods appear more robust on average.
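The common ingredient of such kriging-based strategies is a Gaussian process fitted to noisy evaluations, combined with a sequential sampling criterion. The sketch below uses a generic lower-confidence-bound rule on a toy one-dimensional problem with homoscedastic Gaussian noise; it is an illustration under those assumptions (and assumes scikit-learn is available), not one of the ten reviewed criteria in particular.

```python
# Sketch: kriging on noisy evaluations with a simple sequential criterion.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)
noise_sd = 0.3

def f_noisy(x):  # toy objective, observed only through homoscedastic noise
    return np.sin(3 * x) + x ** 2 + noise_sd * rng.standard_normal(x.shape)

X = rng.uniform(-2, 2, size=(10, 1))     # initial design
y = f_noisy(X[:, 0])
grid = np.linspace(-2, 2, 401).reshape(-1, 1)

for _ in range(20):                      # sequential sampling budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=noise_sd ** 2,   # homoscedastic nugget
                                  normalize_y=True)
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - 2 * sd)]                # lower confidence bound
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, f_noisy(x_next))

mu, _ = gp.predict(grid, return_std=True)
print(f"estimated minimizer x* = {grid[np.argmin(mu)][0]:.3f}")
```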
Abstract:
PURPOSE: The aim of this work is to derive a theoretical framework for quantitative noise and temporal fidelity analysis of time-resolved, k-space-based parallel imaging methods. THEORY: An analytical formalism for the noise distribution is derived, extending the existing g-factor formulation for non-time-resolved generalized autocalibrating partially parallel acquisition (GRAPPA) to time-resolved k-space-based methods. The noise analysis accounts for temporal noise correlations and is further accompanied by a temporal filtering analysis. METHODS: All methods are derived and presented for k-t-GRAPPA and PEAK-GRAPPA. A sliding-window reconstruction and non-time-resolved GRAPPA are taken as references. Statistical validation is based on series of pseudo-replica images. The analysis is demonstrated on a short-axis cardiac CINE dataset. RESULTS: The superior signal-to-noise performance of time-resolved over non-time-resolved parallel imaging methods, at the expense of temporal frequency filtering, is analytically confirmed. Furthermore, the different temporal frequency filter characteristics of k-t-GRAPPA, PEAK-GRAPPA, and the sliding window are revealed. CONCLUSION: The proposed analysis of noise behavior and temporal fidelity establishes a theoretical basis for the quantitative evaluation of time-resolved reconstruction methods. The presented theory therefore allows comparison between time-resolved parallel imaging methods as well as with non-time-resolved methods. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
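The pseudo-replica validation mentioned above can be illustrated with a minimal sketch: synthetic noise realizations are added to the acquired k-space, each replica is reconstructed, and the pixel-wise statistics across replicas yield noise and SNR maps. The "reconstruction" below is a plain inverse FFT stand-in and the phantom is synthetic; in practice it would be the GRAPPA, k-t-GRAPPA, or PEAK-GRAPPA reconstruction under study.

```python
# Sketch: pseudo-replica noise/SNR estimation with a placeholder reconstruction.
import numpy as np

rng = np.random.default_rng(3)

def reconstruct(kspace):
    """Placeholder: plain inverse FFT instead of a (k-t-)GRAPPA reconstruction."""
    return np.fft.ifft2(np.fft.ifftshift(kspace))

image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0                          # simple square phantom
kspace = np.fft.fftshift(np.fft.fft2(image))

noise_sd = 0.01 * np.abs(kspace).max()
replicas = []
for _ in range(256):                               # pseudo-replicas
    noise = noise_sd * (rng.standard_normal(kspace.shape)
                        + 1j * rng.standard_normal(kspace.shape))
    replicas.append(np.abs(reconstruct(kspace + noise)))

replicas = np.stack(replicas)
noise_map = replicas.std(axis=0)                   # pixel-wise noise map
snr_map = replicas.mean(axis=0) / noise_map
print(f"median SNR inside the phantom: {np.median(snr_map[20:44, 20:44]):.1f}")
```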
Abstract:
BACKGROUND: HIV surveillance requires monitoring of new HIV diagnoses and differentiation of incident and older infections. In 2008, Switzerland implemented a system for monitoring incident HIV infections based on the results of a line immunoassay (Inno-Lia) mandatorily conducted for HIV confirmation and type differentiation (HIV-1, HIV-2) of all newly diagnosed patients. Based on this system, we assessed the proportion of incident HIV infections among newly diagnosed cases in Switzerland during 2008-2013. METHODS AND RESULTS: Inno-Lia antibody reaction patterns recorded in anonymous HIV notifications to the federal health authority were classified by 10 published algorithms into incident (up to 12 months) or older infections. Utilizing these data, annual estimates of incident infections were obtained in two ways: (i) based on the diagnostic performance of the algorithms, utilizing the relationship 'incident = true incident + false incident', and (ii) based on the window periods of the algorithms, utilizing the relationship 'Prevalence = Incidence × Duration'. From 2008 to 2013, 3,851 HIV notifications were received. Adult HIV-1 infections amounted to 3,809 cases, and 3,636 of them (95.5%) contained Inno-Lia data. The incident infection totals calculated were similar for the performance- and window-based methods, amounting on average to 1,755 (95% confidence interval, 1,588-1,923) and 1,790 cases (95% CI, 1,679-1,900), respectively. More than half of these were among men who have sex with men. Both methods showed a continuous decline of annual incident infections during 2008-2013, totaling -59.5% and -50.2%, respectively. The decline of incident infections continued even in 2012, when a 15% increase in HIV notifications had been observed; this increase was entirely due to older infections. The overall declines during 2008-2013 were of similar extent among the major transmission groups. CONCLUSIONS: Inno-Lia-based incident HIV-1 infection surveillance proved useful and reliable. It represents a free, additional public health benefit of the use of this relatively costly test for HIV confirmation and type differentiation.
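A rough numeric sketch of the second (window-based) approach, rearranging the stated relationship 'Prevalence = Incidence × Duration': the count of notifications whose assay pattern falls within the recency window plays the role of the prevalence-like quantity, and the mean window period is the duration. The window length and count below are placeholders, not the published Swiss figures or the paper's full estimator.

```python
# Sketch: window-period rearrangement of 'Prevalence = Incidence x Duration',
# i.e. Incidence = (count of 'recent' patterns) / (window period in years).
# Placeholder numbers for illustration only.
def annual_incident_infections(n_recent, window_months):
    """Annual incident infections implied by n_recent notifications whose
    antibody pattern falls within a recency window of window_months."""
    return n_recent * 12.0 / window_months

# Hypothetical year: 320 notifications classified 'recent' by an algorithm
# with an assumed mean recency window of 10 months.
print(annual_incident_infections(n_recent=320, window_months=10))  # -> 384.0
```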
Abstract:
Based on an order-theoretic approach, we derive sufficient conditions for the existence, characterization, and computation of Markovian equilibrium decision processes and stationary Markov equilibrium on minimal state spaces for a large class of stochastic overlapping generations models. In contrast to all previous work, we consider reduced-form stochastic production technologies that allow for a broad set of equilibrium distortions, such as public policy distortions, social security, monetary equilibrium, and production nonconvexities. Our order-based methods are constructive, and we provide monotone iterative algorithms for computing extremal stationary Markov equilibrium decision processes and equilibrium invariant distributions, while avoiding many of the problems associated with the existence of indeterminacies that have been well documented in previous work. We provide important results for the existence of Markov equilibria in the case where capital income is not increasing in the aggregate stock. Finally, we conclude with examples common in macroeconomics, such as models with fiat money and social security, and we show how some of our results extend to settings with unbounded state spaces.
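A toy sketch of the monotone-iteration idea behind such order-based constructive methods: an increasing operator is iterated from a smallest and a largest candidate function, and the two sequences converge to extremal fixed points. The operator below is a deliberately simple scalar map on a capital grid, far simpler than the equilibrium operators treated in the paper.

```python
# Sketch: monotone iteration toward extremal fixed points of an increasing
# operator T acting on functions defined on a grid. Toy operator, illustrative.
import numpy as np

k = np.linspace(0.1, 10.0, 200)          # hypothetical capital grid

def T(policy):
    """Toy increasing operator: weakly larger inputs give weakly larger outputs."""
    return np.sqrt(k) + 0.5 * policy

def iterate_to_fixed_point(start, tol=1e-10, max_iter=10_000):
    current = start
    for _ in range(max_iter):
        nxt = T(current)
        if np.max(np.abs(nxt - current)) < tol:
            return nxt
        current = nxt
    return current

lower = iterate_to_fixed_point(np.zeros_like(k))        # from the smallest element
upper = iterate_to_fixed_point(np.full_like(k, 100.0))  # from a large upper bound
print("extremal fixed points coincide:", np.allclose(lower, upper))
```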