856 results for Population set-based methods
Abstract:
This study compared the performance of fluorescence-based methods, radiographic examination, and the International Caries Detection and Assessment System (ICDAS) II on occlusal surfaces. One hundred and nineteen permanent human molars were assessed twice by 2 experienced dentists using laser fluorescence (LF and LFpen) and fluorescence camera (FC) devices, ICDAS II and bitewing radiographs (BW). After the measurements, the teeth were histologically prepared and assessed for caries extension. The sensitivities for dentine caries detection were 0.86 (FC), 0.78 (LFpen), 0.73 (ICDAS II), 0.51 (LF) and 0.34 (BW). The specificities were 0.97 (BW), 0.89 (LF), 0.65 (ICDAS II), 0.63 (FC) and 0.56 (LFpen). BW presented the highest values of likelihood ratio (LR)+ (12.47) and LR- (0.68). Rank correlations with histology were 0.53 (LF), 0.52 (LFpen), 0.41 (FC), 0.59 (ICDAS II) and 0.57 (BW). The area under the ROC curve varied from 0.72 to 0.83. Inter- and intraexaminer intraclass correlation values were, respectively, 0.90 and 0.85 (LF), 0.93 and 0.87 (LFpen), and 0.85 and 0.76 (FC). The ICDAS II kappa values were 0.51 (interexaminer) and 0.61 (intraexaminer). The BW kappa values were 0.50 (interexaminer) and 0.62 (intraexaminer). The Bland and Altman limits of agreement were 46.0 and 38.2 (LF), 55.6 and 40.0 (LFpen), and 1.12 and 0.80 (FC) for intra- and interexaminer reproducibility. The posttest probability for dentine caries detection was high for BW and LF. In conclusion, LFpen, FC and ICDAS II presented better sensitivity, while LF and BW presented better specificity; ICDAS II combined with BW showed the best overall performance for detecting caries on occlusal surfaces.
Abstract:
BACKGROUND: Bullous pemphigoid (BP), pemphigus vulgaris (PV) and pemphigus foliaceus (PF) are autoimmune bullous diseases characterized by the presence of tissue-bound and circulating autoantibodies directed against disease-specific target antigens of the skin. Although rare, these diseases run a chronic course and are associated with significant morbidity and mortality. There are few prospective data on gender- and age-specific incidence of these disorders. OBJECTIVES: Our aims were: (i) to evaluate the incidence of BP and PV/PF in Swiss patients, as the primary endpoint; and (ii) to assess the profile of the patients, particularly for comorbidities and medications, as the secondary endpoint. METHODS: The protocol of the study was distributed to all dermatology clinics, immunopathology laboratories and practising dermatologists in Switzerland. All newly diagnosed cases of BP and pemphigus occurring between 1 January 2001 and 31 December 2002 were collected. In total, 168 patients (73 men and 95 women) with these autoimmune bullous diseases, with a diagnosis based on clinical, histological and immunopathological criteria, were finally included. RESULTS: BP showed a mean incidence of 12.1 new cases per million people per year. Its incidence increased significantly after the age of 70 years, with a maximal value after the age of 90 years. The female/male ratio was 1.3. The age-standardized incidence of BP using the European population as reference was, however, lower, with 6.8 new cases per million people per year, reflecting the ageing of the Swiss population. In contrast, both PV and PF were less frequent. Their combined mean incidence was 0.6 new cases per million people per year. CONCLUSIONS: This is the first comprehensive prospective study analysing the incidence of autoimmune bullous diseases in an entire country. Our patient cohort is large enough to establish BP as the most frequent autoimmune bullous disease.
Its incidence rate appears higher than in previous studies, most likely because of the demographic characteristics of the Swiss population. Nevertheless, given its potentially misleading presentations, it is possible that the real incidence rate of BP is still underestimated. Given its significant incidence in the elderly population, BP deserves more public health attention.
Abstract:
Loss to follow-up (LTFU) is a common problem in many epidemiological studies. In antiretroviral treatment (ART) programs for patients with human immunodeficiency virus (HIV), mortality estimates can be biased if the LTFU mechanism is non-ignorable, that is, mortality differs between lost and retained patients. In this setting, routine procedures for handling missing data may lead to biased estimates. To appropriately deal with non-ignorable LTFU, explicit modeling of the missing data mechanism is needed. This can be based on additional outcome ascertainment for a sample of patients LTFU, for example, through linkage to national registries or through survey-based methods. In this paper, we demonstrate how this additional information can be used to construct estimators based on inverse probability weights (IPW) or multiple imputation. We use simulations to contrast the performance of the proposed estimators with methods widely used in HIV cohort research for dealing with missing data. The practical implications of our approach are illustrated using South African ART data, which are partially linkable to South African national vital registration data. Our results demonstrate that while IPWs and proper imputation procedures can be easily constructed from additional outcome ascertainment to obtain valid overall estimates, neglecting non-ignorable LTFU can result in substantial bias. We believe the proposed estimators are readily applicable to a growing number of studies where LTFU is appreciable, but additional outcome data are available through linkage or surveys of patients LTFU. Copyright © 2013 John Wiley & Sons, Ltd.
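The outcome-ascertainment idea described above can be sketched in a few lines; this is a minimal simulation with invented rates and a single tracing sample, not the estimators of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
lost = rng.random(n) < 0.3                      # 30% lost to follow-up
# Non-ignorable mechanism: mortality is higher among the lost
death = np.where(lost, rng.random(n) < 0.20, rng.random(n) < 0.05)

# Naive complete-case estimate uses retained patients only (biased low)
naive = death[~lost].mean()

# Trace a 10% sample of lost patients and ascertain their outcomes,
# e.g. through registry linkage or a tracing survey
traced = lost & (rng.random(n) < 0.10)
w = np.ones(n)
w[traced] = 1 / 0.10                            # inverse sampling probability
keep = ~lost | traced                           # retained + traced
ipw = np.average(death[keep], weights=w[keep])  # IPW mortality estimate

true = death.mean()                             # target quantity
```

Because each traced patient is up-weighted to stand in for all lost patients with the same sampling probability, the weighted estimate recovers the overall mortality that the complete-case analysis understates.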
Abstract:
This study aimed to assess the performance of International Caries Detection and Assessment System (ICDAS), radiographic examination, and fluorescence-based methods for detecting occlusal caries in primary teeth. One occlusal site on each of 79 primary molars was assessed twice by two examiners using ICDAS, bitewing radiography (BW), DIAGNOdent 2095 (LF), DIAGNOdent 2190 (LFpen), and VistaProof fluorescence camera (FC). The teeth were histologically prepared and assessed for caries extent. Optimal cutoff limits were calculated for LF, LFpen, and FC. At the D(1) threshold (enamel and dentin lesions), ICDAS and FC presented higher sensitivity values (0.75 and 0.73, respectively), while BW showed higher specificity (1.00). At the D(2) threshold (inner enamel and dentin lesions), ICDAS presented higher sensitivity (0.83) and statistically significantly lower specificity (0.70). At the D(3) threshold (dentin lesions), LFpen and FC showed higher sensitivity (1.00 and 0.91, respectively), while higher specificity was presented by FC (0.95), ICDAS (0.94), BW (0.94), and LF (0.92). The area under the receiver operating characteristic (ROC) curve (Az) varied from 0.780 (BW) to 0.941 (LF). Spearman correlation coefficients with histology were 0.72 (ICDAS), 0.64 (BW), 0.71 (LF), 0.65 (LFpen), and 0.74 (FC). Inter- and intraexaminer intraclass correlation values varied from 0.772 to 0.963 and unweighted kappa values ranged from 0.462 to 0.750. In conclusion, ICDAS and FC exhibited better accuracy in detecting enamel and dentin caries lesions, whereas ICDAS, LF, LFpen, and FC were more appropriate for detecting dentin lesions on occlusal surfaces in primary teeth, with no statistically significant difference among them. All methods presented good to excellent reproducibility.
Abstract:
Responses of many real-world problems can only be evaluated perturbed by noise. To make efficient optimization of these problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In summary, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems on which the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different problem configurations (noise level, budget, i.e. maximum number of observations, and initial sample size) and setups (covariance functions) are considered. It is found that the choices of the initial sample size and the covariance function are not critical. The choice of the method, however, can result in significant differences in performance. In particular, the three most intuitive criteria are found to be poor alternatives. Although no criterion is found to be consistently more efficient than the others, two specialized methods appear more robust on average.
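One sequential-sampling step of this kind can be sketched under the homoscedastic Gaussian noise assumption, here using scikit-learn's Gaussian process with a WhiteKernel noise term and an expected-improvement criterion (one of many possible infill criteria; the toy objective and all settings are illustrative, not those of the reviewed methods):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + x**2              # true (unknown) objective
X = rng.uniform(-1, 1, size=(8, 1))             # initial design
y = f(X).ravel() + rng.normal(0, 0.1, 8)        # noisy evaluations

# WhiteKernel models homoscedastic Gaussian observation noise
gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(0.01), normalize_y=True)
gp.fit(X, y)

# Expected improvement over the current best *predicted* mean
# (for noisy data the plug-in best observation would be unreliable)
cand = np.linspace(-1, 1, 200).reshape(-1, 1)
mu, sd = gp.predict(cand, return_std=True)
best = mu.min()
z = (best - mu) / np.maximum(sd, 1e-12)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
x_next = cand[np.argmax(ei)]                    # next point to evaluate
```

In a full optimization loop, `x_next` would be evaluated (with noise), appended to the design, and the model refit until the budget is exhausted.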
Abstract:
Objective: To assess the prevalence of lateral incisor agenesis, impacted canines, and supernumerary teeth in a young adult male population. Materials and Methods: The panoramic radiographs of 1745 military students (mean age: 18.6 ± 0.52 years) who attended the Center of Aviation Medicine of the Armed Forces of Greece during the period 1997-2011 were initially analyzed for lateral incisor agenesis by two observers. After exclusion of the known orthodontic cases, a subgroup of 1636 examinees (mean age: 18.6 ± 0.44 years) was evaluated for canine impaction and supernumerary teeth. Results: Twenty-eight missing lateral incisors were observed in 22 military students, indicating a prevalence of 1.3% in the investigated population. No lateral incisor agenesis was detected in the mandibular arch. A prevalence rate of 0.8% was determined for canine impaction in the sample of young adults. The majority of impacted teeth (86.7%) were diagnosed in the maxillary arch. Thirty-five supernumerary teeth were observed in 24 examinees (prevalence rate: 1.5%). The ratio of supernumerary teeth located in the maxilla versus the mandible was 2.2:1. The most common type of supernumerary tooth was the upper distomolar. Conclusion: The prevalence of lateral incisor agenesis, canine impaction, and supernumerary teeth ranged from 0.8 to 1.5% in the sample of male Greek military students.
Abstract:
OBJECTIVE To determine the rates of the available urinary diversion options for patients treated with radical cystectomy for bladder cancer in different settings (pioneering institutions, leading urologic oncology centers, and population based). METHODS Population-based data from the literature included all patients (n = 7608) treated in Sweden during the period 1964-2008, from Germany (n = 14,200) for the years 2008 and 2011, US patients (identified from the National Inpatient Sample during 1998-2005, 35,370 patients and 2001-2008, 55,187 patients), and from Medicare (n = 22,600) for the years 1992, 1995, 1998, and 2001. After the International Consultation on Urologic Diseases-European Association of Urology International Consultation on Bladder Cancer 2012, the urinary diversion committee members disclosed data from their home institutions (n = 15,867), including the pioneering institutions and the leading urologic oncology centers. They are the coauthors of this report. RESULTS The receipt of continent urinary diversion in Sweden and the United States is <15%, whereas in the German high-volume setting, 30% of patients receive a neobladder. At leading urologic oncology centers, this rate is also 30%. At pioneering institutions up to 75% of patients receive an orthotopic reconstruction. Anal diversion is <1%. Continent cutaneous diversion is the second choice. CONCLUSION Enormous variations in urinary diversion have persisted for >2 decades. Increased attention in expanding the use of continent reconstruction may help to reduce these disparities for patients undergoing radical cystectomy for bladder cancer. Continent reconstruction should not be the exclusive domain of cystectomy centers. Efforts to increase rates of this complex reconstruction must concentrate on better definition of the quality-of-life impact, technique dissemination, and the centralization of radical cystectomy.
Abstract:
PURPOSE The aim of this work is to derive a theoretical framework for quantitative noise and temporal fidelity analysis of time-resolved k-space-based parallel imaging methods. THEORY An analytical formalism of noise distribution is derived extending the existing g-factor formulation for nontime-resolved generalized autocalibrating partially parallel acquisition (GRAPPA) to time-resolved k-space-based methods. The noise analysis considers temporal noise correlations and is further accompanied by a temporal filtering analysis. METHODS All methods are derived and presented for k-t-GRAPPA and PEAK-GRAPPA. A sliding window reconstruction and nontime-resolved GRAPPA are taken as a reference. Statistical validation is based on series of pseudoreplica images. The analysis is demonstrated on a short-axis cardiac CINE dataset. RESULTS The superior signal-to-noise performance of time-resolved over nontime-resolved parallel imaging methods at the expense of temporal frequency filtering is analytically confirmed. Further, different temporal frequency filter characteristics of k-t-GRAPPA, PEAK-GRAPPA, and sliding window are revealed. CONCLUSION The proposed analysis of noise behavior and temporal fidelity establishes a theoretical basis for a quantitative evaluation of time-resolved reconstruction methods. Therefore, the presented theory allows for comparison between time-resolved parallel imaging methods and also nontime-resolved methods. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
Abstract:
Antisense oligonucleotides deserve great attention as potential drug candidates for the treatment of genetic disorders. For example, muscular dystrophy can be treated successfully in mice by antisense-induced exon skipping in the pre-mRNA coding for the structural protein dystrophin in muscle cells. For this purpose a sugar- and backbone-modified DNA analogue was designed, in which a tricyclic ring system substitutes the deoxyribose. These chemical modifications stabilize the dimers formed with the targeted RNA relative to native nucleic acid duplexes and increase the biostability of the antisense oligonucleotide. While evading enzymatic degradation constitutes an essential property of antisense oligonucleotides for therapeutic application, it renders the oligonucleotide inaccessible to biochemical sequencing techniques and requires the development of alternative methods based on mass spectrometry. The set of sequences studied includes tcDNA oligonucleotides ranging from 10 to 15 nucleotides in length as well as their hybrid duplexes with DNA and RNA complements. All samples were analyzed on a LTQ Orbitrap XL instrument equipped with a nano-electrospray source. For tandem mass spectrometric experiments collision-induced dissociation was performed, using helium as collision gas. Mass spectrometric sequencing of tcDNA oligomers demonstrates the applicability of the technique to substrates beyond the scope of enzyme-based methods. Sequencing requires the formation of characteristic backbone fragments, which take the form of a-B- and w-ions in the product ion spectra of tcDNA. These types of product ions are typically associated with unmodified DNA, which suggests a DNA-like fragmentation mechanism in tcDNA. The loss of nucleobases constitutes the second prevalent dissociation pathway observed in tcDNA. Comparison of partially and fully modified oligonucleotides indicates a pronounced impact of the sugar moiety on the base loss.
As this event initiates cleavage of the backbone, the presented results provide new mechanistic insights into the fragmentation of DNA in the gas phase. The influence of the sugar moiety on the dissociation extends to tcDNA:DNA and tcDNA:RNA hybrid duplexes, where base loss was found to be much more prominent from sugar-modified oligonucleotides than from their natural complements. Further prominent dissociation channels are strand separation and backbone cleavage of the single strands, as well as the ejection of backbone fragments from the intact duplex. The latter pathway depends noticeably on the base sequence. Moreover, it gives evidence of the high stability of the hybrid dimers, and thus directly reflects the affinity of tcDNA for its target in the cell. As the cellular target of tcDNA is a pre-mRNA, the structure was designed to discriminate RNA from DNA complements, a selectivity that was demonstrated by mass spectrometric experiments.
Abstract:
AIMS The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). METHODS AND RESULTS Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. CONCLUSION We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients.
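The propensity-score weighting step can be illustrated with simulated data; the covariates, coefficients, and treatment model below are invented for illustration, not taken from the pooled databases:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
age = rng.normal(60, 10, n)
afib = rng.random(n) < 0.2
# Treatment assignment (OAC vs. APT) depends on covariates -> confounding
p_oac = 1 / (1 + np.exp(-(-3 + 0.03 * age + 1.5 * afib)))
oac = rng.random(n) < p_oac

# Propensity score: probability of receiving OAC given covariates
X = np.column_stack([age, afib])
ps = LogisticRegression(max_iter=1000).fit(X, oac).predict_proba(X)[:, 1]

# Stabilized inverse-probability-of-treatment weights: each patient is
# re-weighted so the two treatment arms have comparable covariates
p_treat = oac.mean()
w = np.where(oac, p_treat / ps, (1 - p_treat) / (1 - ps))
```

The resulting weights would then enter a weighted Cox model (the `weights` option of most survival packages) to estimate database-specific hazard ratios before meta-analytic pooling.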
Abstract:
BACKGROUND: HIV surveillance requires monitoring of new HIV diagnoses and differentiation of incident and older infections. In 2008, Switzerland implemented a system for monitoring incident HIV infections based on the results of a line immunoassay (Inno-Lia) mandatorily conducted for HIV confirmation and type differentiation (HIV-1, HIV-2) of all newly diagnosed patients. Based on this system, we assessed the proportion of incident HIV infection among newly diagnosed cases in Switzerland during 2008-2013. METHODS AND RESULTS: Inno-Lia antibody reaction patterns recorded in anonymous HIV notifications to the federal health authority were classified by 10 published algorithms into incident (up to 12 months) or older infections. Utilizing these data, annual incident infection estimates were obtained in two ways: (i) based on the diagnostic performance of the algorithms, utilizing the relationship 'incident = true incident + false incident'; and (ii) based on the window periods of the algorithms, utilizing the relationship 'Prevalence = Incidence x Duration'. From 2008-2013, 3'851 HIV notifications were received. Adult HIV-1 infections amounted to 3'809 cases, and 3'636 of them (95.5%) contained Inno-Lia data. Incident infection totals calculated were similar for the performance- and window-based methods, amounting on average to 1'755 (95% confidence interval, 1588-1923) and 1'790 cases (95% CI, 1679-1900), respectively. More than half of these were among men who had sex with men. Both methods showed a continuous decline of annual incident infections 2008-2013, totaling -59.5% and -50.2%, respectively. The decline of incident infections continued even in 2012, when a 15% increase in HIV notifications had been observed. This increase was entirely due to older infections. Overall declines 2008-2013 were of similar extent among the major transmission groups. CONCLUSIONS: Inno-Lia based incident HIV-1 infection surveillance proved useful and reliable.
It represents a free, additional public health benefit of the use of this relatively costly test for HIV confirmation and type differentiation.
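The window-based relationship 'Prevalence = Incidence x Duration' reduces to simple arithmetic; the counts below are hypothetical, not the study's:

```python
# An algorithm that flags infections as "recent" for up to its mean
# window period w (in years) observes, at steady state,
#   flagged prevalence = annual incidence * w,
# so the annual incidence estimate is flagged_count / w.

def annual_incident_estimate(flagged_count: int, window_years: float) -> float:
    """Estimate yearly incident infections from cases flagged as recent."""
    return flagged_count / window_years

# Hypothetical numbers: 450 notifications flagged as recent by an
# algorithm with a 9-month (0.75-year) mean window period
est = annual_incident_estimate(450, 0.75)  # -> 600.0 incident infections/year
```

In practice each of the 10 algorithms has its own window period, so the study could average the resulting per-algorithm estimates.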
Abstract:
Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML, or perhaps maximum parsimony (MP)) should be chosen for any particular data set. A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen.
However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
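The beam-search principle underlying the Chapter II algorithm can be sketched generically; this fixed-width toy (a bit string standing in for a growing tree topology) omits the dissertation's adaptive adjustment of the beam from local reliability information:

```python
from typing import Callable

def beam_search(start, expand: Callable, score: Callable,
                beam_width: int, steps: int):
    """Generic beam search: at every step keep only the `beam_width`
    best partial solutions, a compromise between greedy search
    (width 1) and exhaustive search (unbounded width)."""
    beam = [start]
    for _ in range(steps):
        candidates = [c for s in beam for c in expand(s)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]   # prune to the top candidates
    return max(beam, key=score)

# Toy example: build a 4-bit string maximizing the number of 1s,
# standing in for growing a topology node by node.
expand = lambda s: [s + "0", s + "1"]
score = lambda s: s.count("1")
best = beam_search("", expand, score, beam_width=2, steps=4)  # -> "1111"
```

In the phylogenetic setting, `expand` would enumerate placements of the next taxon and `score` would be a distance-based topology criterion; the adaptive variant would widen the beam where local topology reliability is low.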
Abstract:
Using properties of moment stationarity we develop exact expressions for the mean and covariance of allele frequencies at a single locus for a set of populations subject to drift, mutation, and migration. Some general results can be obtained even for arbitrary mutation and migration matrices, for example: (1) Under quite general conditions, the mean vector depends only on mutation rates, not on migration rates or the number of populations. (2) Allele frequencies covary among all pairs of populations connected by migration. As a result, the drift, mutation, migration process is not ergodic when any finite number of populations is exchanging genes. In addition, we provide closed form expressions for the mean and covariance of allele frequencies in Wright's finite-island model of migration under several simple models of mutation, and we show that the correlation in allele frequencies among populations can be very large for realistic rates of mutation unless an enormous number of populations are exchanging genes. As a result, the traditional diffusion approximation provides a poor approximation of the stationary distribution of allele frequencies among populations. Finally, we discuss some implications of our results for measures of population structure based on Wright's F-statistics.
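Result (1), that the stationary mean depends only on the mutation rates, can be checked numerically by iterating the mean recursion under an arbitrary row-stochastic migration matrix (the two-allele rates and the matrix below are arbitrary illustrative values, not taken from the paper):

```python
import numpy as np

u, v = 1e-3, 2e-3                # mutation rates A->a and a->A (illustrative)
M = np.array([[0.90, 0.10, 0.00],   # arbitrary row-stochastic
              [0.05, 0.90, 0.05],   # migration matrix among
              [0.00, 0.20, 0.80]])  # three populations

p = np.array([0.1, 0.5, 0.9])    # arbitrary starting mean frequencies
for _ in range(20_000):
    # mutation shifts each mean toward v/(u+v); migration only mixes
    # the means and leaves any constant vector unchanged
    p = M @ (p * (1 - u) + (1 - p) * v)

expected = v / (u + v)           # stationary mean, independent of M
```

Because a row-stochastic matrix maps a constant vector to itself, the fixed point v/(u+v) is unaffected by the choice of migration matrix or the number of populations, which is exactly the content of result (1).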
Recommendations for dementia caregiver stress interventions based on Intervention Mapping guidelines
Abstract:
Stress can affect a person's psychological and physical health and cause a variety of conditions including depression, immune system changes, and hypertension (Alzheimer's Association, 2010; Aschbacher et al., 2009; Fredman et al., 2010; Long et al., 2004; Mills et al., 2009; von Känel et al., 2008). The severity and consequences of these conditions can vary based on the duration, amount, and sources of stress experienced by the individual (Black & Hyer, 2010; Coen et al., 1997; Conde-Sala et al., 2010; Pinquart & Sörensen, 2007). Caregivers of people with dementia have an elevated risk for stress and its related health problems because they experience more negative interactions with, and provide more emotional support for, their care recipients than other caregivers. This paper uses the systematic program planning process of Intervention Mapping to organize evidence from the literature, qualitative research and theory to develop recommendations for a theory- and evidence-based intervention to improve outcomes for caregivers of people with dementia. A needs assessment was conducted to identify specific influences on dementia caregiver stress, and a logic model of dementia caregiver stress was developed using the PRECEDE Model. Necessary behavioral and environmental outcomes were identified for dementia caregiver stress reduction, and performance objectives for each were combined with selected determinants to produce change objectives. Planning matrices were then designed to inform effective theory-based methods and practical applications for recommended intervention delivery. Recommendations for program components, their scope and sequence, the completed program materials, and the program protocols are delineated, along with ways to ensure that the program is adopted and implemented after it is shown to be effective.
Abstract:
The genomic era brought by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all the statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of many gene network and protein-protein interaction databases accompanying the genomic era open avenues for exploring the possibility of enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal mixture model based methods for leveraging gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model with the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate/weak selection that bridge the genes under strong selection, and it advances our understanding of the biology underlying complex diseases and related natural selection phenotypes.
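Under a naïve Bayes (conditional independence) assumption, combining the two evidence sources amounts to adding their log-odds contributions; a minimal sketch with invented scores (the actual mixture-model posterior and Guilt-By-Association scoring of the thesis are more involved):

```python
import math

def combined_log_odds(stat_log_odds: float, gba_log_lr: float) -> float:
    """Naive Bayes combination: if the per-gene selection statistic and
    the network (Guilt-By-Association) score are conditionally
    independent given selection status, their log-odds /
    log-likelihood-ratio contributions simply add."""
    return stat_log_odds + gba_log_lr

def posterior_prob(log_odds: float) -> float:
    """Convert log odds back to a posterior probability."""
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical gene: weak individual evidence of selection (log odds -1.0)
# but a strongly selected network neighbourhood (log likelihood ratio +2.5)
lo = combined_log_odds(-1.0, 2.5)
p = posterior_prob(lo)   # posterior probability of being under selection
```

This is how a gene missed by the single-gene statistic alone can still be flagged once its network neighbourhood is taken into account.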