198 results for cut vertex false positive


Relevance:

100.00%

Publisher:

Abstract:

Background: Changing perspectives on the natural history of celiac disease (CD), new serology and genetic tests, and amended histological criteria for diagnosis cast doubt on past prevalence estimates for CD. We set out to establish a more accurate prevalence estimate for CD using a novel serogenetic approach. Methods: The human leukocyte antigen (HLA)-DQ genotype was determined in 356 patients with 'biopsy-confirmed' CD, and in two age-stratified, randomly selected community cohorts of 1,390 women and 1,158 men. Sera were screened for CD-specific serology. Results: Only five 'biopsy-confirmed' patients with CD did not possess the susceptibility alleles HLA-DQ2.5, DQ8, or DQ2.2, and four of these were misdiagnoses. HLA-DQ2.5, DQ8, or DQ2.2 was present in 56% of all women and men in the community cohorts. Transglutaminase (TG)-2 IgA and composite TG2/deamidated gliadin peptide (DGP) IgA/IgG were abnormal in 4.6% and 5.6%, respectively, of the community women and in 6.9% and 6.9%, respectively, of the community men, but in the screen-positive group, only 71% and 75%, respectively, of women and 65% and 63%, respectively, of men possessed HLA-DQ2.5, DQ8, or DQ2.2. Medical review was possible for 41% of seropositive women and 50% of seropositive men, and led to biopsy-confirmed CD in 10 women (0.7%) and 6 men (0.5%); but based on the relative risk for HLA-DQ2.5, DQ8, or DQ2.2 in all TG2 IgA or TG2/DGP IgA/IgG screen-positive subjects, CD affected 1.3% or 1.9%, respectively, of women and 1.3% or 1.2%, respectively, of men. Serogenetic data from these community cohorts indicated that testing screen-positives for HLA-DQ, or carrying out HLA-DQ typing and further serology, could have reduced unnecessary gastroscopies due to false-positive serology by at least 40% and by over 70%, respectively. Conclusions: Screening with TG2 IgA serology and requiring biopsy confirmation caused the community prevalence of CD to be substantially underestimated. Testing for HLA-DQ genes and confirmatory serology could reduce the number of unnecessary gastroscopies. © 2013 Anderson et al.; licensee BioMed Central Ltd.
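The "at least 40%" figure can be roughly reconstructed from the abstract's point estimates for the community women. The sketch below does this arithmetic; the rounding and the simplifying assumption that every HLA-negative screen-positive is a serology false positive are ours, not the paper's.

```python
# Back-of-envelope sketch of the HLA-DQ gating argument, using the
# abstract's figures for the community women: cohort size 1,390,
# TG2 IgA positivity 4.6%, 71% HLA carriage among screen-positives,
# and the 1.3% serogenetic CD prevalence estimate. Illustrative only.

cohort = 1390                          # community women
screen_pos = round(0.046 * cohort)     # TG2 IgA positive: ~64
true_cd = round(0.013 * cohort)        # serogenetic CD estimate: ~18
hla_pos = round(0.71 * screen_pos)     # screen-positives carrying HLA-DQ2.5/8/2.2

false_pos = screen_pos - true_cd       # seropositive without CD: ~46
hla_neg = screen_pos - hla_pos         # cannot have CD, so biopsy avoidable: ~19

# Every HLA-negative screen-positive is a false positive whose
# gastroscopy is avoided by HLA-DQ testing.
reduction = hla_neg / false_pos
print(f"unnecessary gastroscopies avoided: {reduction:.0%}")  # ~41%, i.e. 'at least 40%'
```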

Relevance:

100.00%

Publisher:

Abstract:

Vertebral fracture risk is a heritable complex trait. The aim of this study was to identify genetic susceptibility factors for osteoporotic vertebral fractures by applying a genome-wide association study (GWAS) approach. The GWAS discovery was based on the Rotterdam Study, a population-based study of elderly Dutch individuals aged >55 years, comprising 329 cases and 2666 controls with radiographic scoring (McCloskey-Kanis) and genetic data. Replication of one top-associated SNP was pursued by de-novo genotyping of 15 independent studies across Europe, the United States, and Australia, and one Asian study. Radiographic vertebral fracture assessment was performed using McCloskey-Kanis or Genant semi-quantitative definitions. SNPs were analyzed in relation to vertebral fracture using logistic regression models corrected for age and sex. Fixed-effects inverse-variance and Han-Eskin alternative random-effects meta-analyses were applied. Genome-wide significance was set at p < 5×10⁻⁸. In the discovery, a SNP (rs11645938) on chromosome 16q24 was associated with the risk for vertebral fractures at p = 4.6×10⁻⁸. However, the association was not significant across 5,720 cases and 21,791 controls from 14 studies. The fixed-effects meta-analysis summary estimate was 1.06 (95% CI: 0.98-1.14; p = 0.17), displaying a high degree of heterogeneity (I² = 57%; Q_het p = 0.0006). Under the Han-Eskin alternative random-effects model the summary effect was significant (p = 0.0005). The SNP maps to a region previously found associated with lumbar spine bone mineral density (LS-BMD) in two large meta-analyses from the GEFOS consortium. A false positive association in the GWAS discovery cannot be excluded; yet the low power of the discovery and replication settings (adequate to identify risk effect sizes >1.25) may still be consistent with an effect size <1.10, more typical of complex traits. A larger effort in studies with standardized phenotype definitions is needed to confirm or reject the involvement of this locus in the risk for vertebral fractures.
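The fixed-effects summary and heterogeneity statistics quoted above follow the standard inverse-variance formulas; a minimal sketch (the Han-Eskin random-effects variant is not shown, and the per-study inputs are invented):

```python
import math

def fixed_effects_meta(betas, ses):
    """Inverse-variance fixed-effects meta-analysis with Cochran's Q and I^2."""
    w = [1.0 / se**2 for se in ses]                  # inverse-variance weights
    beta_hat = sum(wi * bi for wi, bi in zip(w, betas)) / sum(w)
    se_hat = math.sqrt(1.0 / sum(w))
    q = sum(wi * (bi - beta_hat)**2 for wi, bi in zip(w, betas))  # heterogeneity
    df = len(betas) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return beta_hat, se_hat, q, i2

# Toy per-study log-odds ratios and standard errors (illustrative values only)
betas = [0.12, -0.02, 0.20, 0.05]
ses = [0.05, 0.06, 0.08, 0.04]
b, se, q, i2 = fixed_effects_meta(betas, ses)
print(f"summary OR = {math.exp(b):.2f}, Q = {q:.2f}, I^2 = {i2:.0%}")
```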

Relevance:

100.00%

Publisher:

Abstract:

The role of germline polymorphisms of the T-cell receptor A/D and B loci in susceptibility to ankylosing spondylitis was investigated by linkage studies using microsatellite markers in 215 affected sibling pairs. The presence of a significant susceptibility gene (λ ≥ 1.6) at the TCRA/D locus was excluded (LOD score < -2.0). At the TCRB locus, there was weak evidence of the presence of a susceptibility gene (P = 0.01, LOD score 1.1). Further family studies will be required to determine whether this is a true or a false-positive finding. It is unlikely that either the TCRA/D or the TCRB locus contains genes responsible for more than a moderate proportion of the non-MHC genetic susceptibility to ankylosing spondylitis.
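For reference, the exclusion logic rests on the LOD score, the base-10 logarithm of the likelihood ratio for linkage versus no linkage; a minimal sketch with invented likelihoods:

```python
import math

def lod(likelihood_linked, likelihood_unlinked):
    """LOD score: log10 of the likelihood ratio for linkage vs. no linkage."""
    return math.log10(likelihood_linked / likelihood_unlinked)

# Conventional linkage thresholds, as used in the abstract:
#   LOD >  3.0 -> significant evidence of linkage
#   LOD < -2.0 -> linkage excluded (for the effect size tested)
score = lod(0.004, 0.5)   # toy likelihoods, illustrative only
print(f"LOD = {score:.1f}", "-> excluded" if score < -2.0 else "")
```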

Relevance:

100.00%

Publisher:

Abstract:

Classification criteria should facilitate the selection of similar patients for clinical and epidemiologic studies, therapeutic trials, and research on etiopathogenesis, enabling comparison of results across studies from different centers. We critically appraise the validity and performance of the Assessment of SpondyloArthritis international Society (ASAS) classification criteria for axial spondyloarthritis (axSpA). It is still debatable whether all patients fulfilling these criteria should be considered as having true axSpA. Patients classified as having radiographic disease under the ASAS criteria are not necessarily the same population as patients with ankylosing spondylitis (AS) classified by the modified New York criteria. The complex multi-arm selection design of the ASAS criteria induces considerable heterogeneity among patients so classified, and applying the criteria in settings with a low prevalence of axial spondyloarthritis (SpA) greatly increases the proportion of subjects falsely classified as suffering from axial SpA. One of the unmet needs in the non-radiographic form of axial SpA is to have reliable markers that can identify individuals at risk of progression to AS and thereby facilitate early intervention trials designed to prevent such progression. We suggest needed improvements to the ASAS criteria for axSpA, as all criteria sets should be regarded as dynamic concepts open to modification or update as our knowledge advances.
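The claim that low-prevalence settings inflate false classification is ordinary Bayes' rule: as prevalence falls, the positive predictive value of any fixed criteria set collapses. A minimal sketch, with assumed (not ASAS-derived) sensitivity and specificity:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a classification rule via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# Hypothetical criteria performance; these figures are assumptions for
# illustration, not measured properties of the ASAS criteria.
sens, spec = 0.80, 0.90
for prev in (0.50, 0.10, 0.02):
    p = ppv(sens, spec, prev)
    print(f"prevalence {prev:>4.0%}: PPV = {p:.0%}, falsely classified = {1 - p:.0%}")
```

At 50% prevalence only about 11% of positives are false; at 2% prevalence roughly 86% are, which is the effect the abstract describes.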

Relevance:

100.00%

Publisher:

Abstract:

Copy number variants (CNVs) account for a major proportion of human genetic polymorphism and have been predicted to have an important role in genetic susceptibility to common disease. To address this we undertook a large, direct genome-wide study of association between CNVs and eight common human diseases. Using a purpose-designed array we typed 19,000 individuals into distinct copy-number classes at 3,432 polymorphic CNVs, including an estimated 50% of all common CNVs larger than 500 base pairs. We identified several biological artefacts that lead to false-positive associations, including systematic CNV differences between DNAs derived from blood and cell lines. Association testing and follow-up replication analyses confirmed three loci where CNVs were associated with disease: IRGM for Crohn's disease; HLA for Crohn's disease, rheumatoid arthritis, and type 1 diabetes; and TSPAN8 for type 2 diabetes, although in each case the locus had previously been identified in single nucleotide polymorphism (SNP)-based studies, reflecting our observation that most common CNVs that are well-typed on our array are well tagged by SNPs and so have been indirectly explored through SNP studies. We conclude that common CNVs that can be typed on existing platforms are unlikely to contribute greatly to the genetic basis of common human diseases. © 2010 Macmillan Publishers Limited. All rights reserved.
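The abstract does not spell out its association testing framework beyond typing individuals into copy-number classes; a generic sketch of testing those classes against case-control status with a chi-square test (counts invented) conveys the idea:

```python
from scipy.stats import chi2_contingency

# Toy contingency table: rows = copy-number classes (0, 1, 2+ copies),
# columns = cases vs. controls. Counts are invented for illustration.
table = [
    [ 40,  60],   # 0 copies
    [300, 340],   # 1 copy
    [660, 600],   # 2+ copies
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```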

Relevance:

100.00%

Publisher:

Abstract:

Yao, Begg, and Livingston (1996, Biometrics 52, 992-1001) considered the optimal group size for testing a series of potentially therapeutic agents to identify a promising one as soon as possible for given error rates. The number of patients to be tested with each agent was fixed as the group size. We consider a sequential design that allows early acceptance and rejection, and we provide an optimal strategy, derived using Markov decision processes, to minimize the number of patients required. The minimization is carried out under constraints on the two types of error probability (false positive and false negative), with Lagrangian multipliers serving as the cost parameters for the two types of error. Numerical studies indicate that there can be a substantial reduction in the number of patients required.
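A minimal sketch of the Lagrangian MDP idea, under assumptions of ours rather than the paper's: a two-point Bayesian model (an agent is ineffective with response rate P0 or effective with rate P1, 50/50 prior), with the two multipliers pricing the error types and each extra patient costing one unit, solved by memoised backward induction:

```python
from functools import lru_cache

# Illustrative parameters (not from the paper): response rates under the two
# hypotheses, Lagrange multipliers pricing the two error types, and a cap on
# the number of patients tested per agent.
P0, P1 = 0.2, 0.4
LAM_FP, LAM_FN = 100.0, 100.0
HORIZON = 50

def posterior_p1(n, s):
    """P(effective | n patients, s responses) under the 50/50 prior."""
    l0 = P0**s * (1 - P0)**(n - s)
    l1 = P1**s * (1 - P1)**(n - s)
    return l1 / (l0 + l1)

@lru_cache(maxsize=None)
def cost(n, s):
    """Minimal expected cost (patients used + priced errors) from state (n, s)."""
    post1 = posterior_p1(n, s)
    accept = LAM_FP * (1 - post1)      # risk of accepting an ineffective agent
    reject = LAM_FN * post1            # risk of rejecting an effective agent
    if n == HORIZON:
        return min(accept, reject)     # must stop at the horizon
    p_resp = post1 * P1 + (1 - post1) * P0   # predictive response probability
    carry_on = 1 + p_resp * cost(n + 1, s + 1) + (1 - p_resp) * cost(n + 1, s)
    return min(accept, reject, carry_on)     # early accept/reject allowed

print(f"expected cost from the start: {cost(0, 0):.2f}")
```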

Relevance:

100.00%

Publisher:

Abstract:

Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gain in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain, or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
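A toy illustration of the rate-of-gain argument, with all figures invented: over a fixed stream of patients, a leaner design with a smaller per-trial gain can run more trials and so accrue more total gain.

```python
# Gain per patient, not gain per trial, is what matters over a fixed
# patient horizon. All numbers below are invented for illustration.
patients_available = 10_000

designs = {
    "large per-trial gain":  {"gain_per_trial": 10.0, "patients_per_trial": 100},
    "smaller, leaner trial": {"gain_per_trial":  8.0, "patients_per_trial":  40},
}
for name, d in designs.items():
    trials = patients_available / d["patients_per_trial"]
    rate = d["gain_per_trial"] / d["patients_per_trial"]
    print(f"{name}: total gain = {trials * d['gain_per_trial']:.0f} ({rate:.2f} per patient)")
```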

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a Multi-Hypotheses Tracking (MHT) approach that resolves ambiguities arising in previous methods of associating targets and tracks within a highly volatile vehicular environment. The previous approach, based on the Dempster–Shafer Theory, assumes that associations between tracks and targets are unique; this was shown to allow the formation of ghost tracks when there was too much ambiguity or conflict for the system to take a meaningful decision. The MHT algorithm described in this paper removes this uniqueness condition, allowing the system to retain ambiguity and even to defer a decision when the available data are poor. We provide a general introduction to the Dempster–Shafer Theory and present the previously used approach. Then, we explain our MHT mechanism and provide evidence of its increased performance in reducing the number of ghost tracks and false positives processed by the tracking system.
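The paper's MHT mechanism is not reproduced here, but the Dempster–Shafer combination step it builds on is standard. A minimal sketch with hypothetical sensor masses, showing how conflicting evidence leaves mass on an ambiguous union rather than forcing a unique association:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses to belief mass; empty
    (conflicting) intersections are discarded and the rest renormalised.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two sensors assigning mass over candidate track-target associations
# (labels are hypothetical). Disagreement keeps some mass on the union
# {t1, t2} instead of committing to one association.
T1, T2 = frozenset({"t1"}), frozenset({"t2"})
BOTH = T1 | T2
m_radar = {T1: 0.6, T2: 0.3, BOTH: 0.1}
m_lidar = {T1: 0.2, T2: 0.6, BOTH: 0.2}
for h, w in dempster_combine(m_radar, m_lidar).items():
    print(set(h), round(w, 3))
```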

Relevance:

100.00%

Publisher:

Abstract:

Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, then pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fit separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the curve (AUC) of the receiver operating characteristic (ROC), but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species that has previously been published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0), eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet clearly visible in FOR, FNR, and in the comparison of the predicted probability of presence distributions for presences and absences.
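For concreteness, the four error rates at a given threshold are simple confusion-matrix ratios; a minimal sketch with invented counts in the typical absence-heavy setting:

```python
def sdm_error_rates(tp, fp, fn, tn):
    """Confusion-matrix error rates discussed for SDM evaluation.

    FNR/FPR condition on the observed truth; FOR/FDR condition on the
    prediction, which is what a user of the predicted map actually sees.
    """
    return {
        "FNR": fn / (fn + tp),   # observed presences predicted absent
        "FPR": fp / (fp + tn),   # observed absences predicted present
        "FOR": fn / (fn + tn),   # predicted absences that waste a presence
        "FDR": fp / (fp + tp),   # predicted presences that are absences
    }

# Toy counts at one probability-of-presence threshold (illustrative only):
# absences vastly outnumber presences, as with pseudo-absence designs.
print(sdm_error_rates(tp=80, fp=400, fn=20, tn=9500))
```

With these counts FDR is high (~0.83) while FOR is near zero (~0.002), the combination the abstract argues can be acceptable or even useful.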

Relevance:

100.00%

Publisher:

Abstract:

Objective: To describe patient participation and clinical performance in a colorectal cancer (CRC) screening program utilising faecal occult blood test (FOBT). Methods: A community-based intervention was conducted in a small, rural community in north Queensland, 2000/01. One of two FOBT kits – guaiac (Hemoccult-II) or immunochemical (Inform) – was assigned by general practice and mailed to participants (3,358 patients aged 50–74 years listed with the local practices). Results: Overall participation in FOBT screening was 36.3%. Participation was higher with the immunochemical kit than the guaiac kit (OR=1.9, 95% CI 1.6-2.2). Women were more likely to comply with testing than men (OR=1.4, 95% CI 1.2-1.7), and people in their 60s were less likely to participate than those 70–74 years (OR=0.8, 95% CI 0.6-0.9). The positivity rate was higher for the immunochemical (9.5%) than the guaiac (3.9%) test (χ²=9.2, p=0.002), with positive predictive values for cancer or adenoma of advanced pathology of 37.8% (95% CI 28.1–48.6) for Inform and 40.0% (95% CI 16.8–68.7) for Hemoccult-II. Colonoscopy follow-up was 94.8% with a medical complication rate of 2–3%. Conclusions: An immunochemical FOBT enhanced participation. Higher positivity rates for this kit did not translate into higher false-positive rates, and both test types resulted in a high yield of neoplasia. Implications: In addition to the type of FOBT, the ultimate success of a population-based screening program for CRC using FOBT will depend on appropriate education of health professionals and the public, as well as significant investment in medical infrastructure for colonoscopy follow-up.
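A rough yield sketch from the reported point estimates for the immunochemical kit; note the 36.3% participation figure is the overall rate (the immunochemical arm was higher), so this slightly mixes the arms and is illustrative only:

```python
# Rough screening-yield arithmetic from the abstract's point estimates.
invited = 3358
participation = 0.363          # overall; the immunochemical arm was higher
positivity = 0.095             # Inform kit
ppv_neoplasia = 0.378          # cancer or adenoma of advanced pathology
colonoscopy_followup = 0.948

screened = invited * participation
positives = screened * positivity
scoped = positives * colonoscopy_followup
print(f"expected advanced neoplasia found: {scoped * ppv_neoplasia:.0f} "
      f"from {positives:.0f} positive tests")
```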

Relevance:

100.00%

Publisher:

Abstract:

Web data can often be represented in free tree form; however, few mining methods exist for free trees. In this paper, a computationally fast algorithm, FreeS, is presented to discover all frequently occurring free subtrees in a database of labelled free trees. FreeS is designed using an optimal canonical form, BOCF, that can uniquely represent free trees even in the presence of isomorphism. To avoid enumerating false positive candidates, it utilises an enumeration approach based on a tree-structure-guided scheme. This paper presents lemmas that introduce conditions governing the generation of candidate free trees during enumeration. An empirical study using both real and synthetic datasets shows that FreeS is scalable and significantly outperforms (i.e., is a few orders of magnitude faster than) the state-of-the-art frequent free tree mining algorithms, HybridTreeMiner and FreeTreeMiner.
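BOCF itself is the paper's contribution and is not reproduced here; the classic centre-based canonical encoding below illustrates the underlying idea a canonical form must deliver, namely that isomorphic free trees map to one string:

```python
from collections import deque

def free_tree_canonical(adj):
    """Canonical string for a free tree given as {node: set(neighbours)}.

    Classic approach: root the tree at its centre(s) (a free tree has one
    or two), encode each rooted tree bottom-up with sorted child encodings,
    and take the smallest result, so isomorphic drawings coincide.
    """
    # Peel leaves layer by layer until only the centre(s) remain.
    degree = {v: len(ns) for v, ns in adj.items()}
    layer = deque(v for v, d in degree.items() if d <= 1)
    remaining = len(adj)
    while remaining > 2:
        for _ in range(len(layer)):
            v = layer.popleft()
            remaining -= 1
            for u in adj[v]:
                degree[u] -= 1
                if degree[u] == 1:
                    layer.append(u)
    centres = list(layer)

    def encode(v, parent):
        kids = sorted(encode(u, v) for u in adj[v] if u != parent)
        return str(v) + "(" + "".join(kids) + ")"

    return min(encode(c, None) for c in centres)

# Two drawings of the same unrooted path a-b-c-d get the same encoding.
t1 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
t2 = {"d": {"c"}, "c": {"d", "b"}, "b": {"c", "a"}, "a": {"b"}}
assert free_tree_canonical(t1) == free_tree_canonical(t2)
print(free_tree_canonical(t1))
```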

Relevance:

100.00%

Publisher:

Abstract:

Generating discriminative input features is a key requirement for achieving highly accurate classifiers. The process of generating features from raw data is known as feature engineering, and it can take significant manual effort. In this paper we propose automated feature engineering to derive a suite of additional features from a given set of basic features, with the aim of both improving classifier accuracy through discriminative features and assisting data scientists through automation. Our implementation is specific to HTTP computer network traffic. To measure the effectiveness of our proposal, we compare the performance of a supervised machine learning classifier built with automated feature engineering against one using human-guided features. The classifier addresses a problem in computer network security, namely the detection of HTTP tunnels. We use Bro to process network traffic into base features and then apply automated feature engineering to calculate a larger set of derived features. The derived features are calculated without favour to any base feature and include entropy, length, and N-grams for all string features, and counts and averages over time for all numeric features. Feature selection is then used to find the most relevant subset of these features. Testing showed that both classifiers achieved a detection rate above 99.93% at a false positive rate below 0.01%. For our datasets, we conclude that automated feature engineering can increase classifier development speed and reduce technical difficulty by removing manual feature engineering, while maintaining classification accuracy.
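A minimal sketch of the string-feature derivation named above (entropy, length, and n-grams); the field name used is hypothetical, standing in for a Bro/Zeek HTTP log field:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) if n else 0.0

def ngrams(s, n=2):
    """Character n-gram counts, e.g. over HTTP header or URI strings."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def derive_string_features(name, value):
    """Length, entropy, and top bigrams derived from one base string feature."""
    feats = {f"{name}_len": len(value), f"{name}_entropy": entropy(value)}
    for gram, count in ngrams(value).most_common(3):
        feats[f"{name}_2gram_{gram}"] = count
    return feats

# Example on a URI-like base feature ('uri' is a hypothetical field name).
print(derive_string_features("uri", "/index.php?id=aG1hYy10dW5uZWw="))
```

High entropy in fields that are normally low-entropy is one signal a tunnel detector can exploit, which is why entropy appears among the derived features.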

Relevance:

40.00%

Publisher:

Abstract:

We present a new penalty-based genetic algorithm for the multi-source, multi-sink minimum vertex cut problem, and illustrate the algorithm’s usefulness with two real-world applications. It is proved in this paper that the genetic algorithm always produces a feasible solution by exploiting some domain-specific knowledge. The genetic algorithm has been implemented for the example applications and evaluated to show how well it scales as the problem size increases.
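The paper's GA encoding and feasibility-preserving operators are not given in the abstract; the sketch below shows only the generic penalty-based fitness idea for this problem, where an infeasible candidate pays a charge per source-sink pair it leaves connected (graph and penalty weight are invented):

```python
from collections import deque

def connected(adj, removed, source, sink):
    """BFS reachability from source to sink, skipping removed vertices."""
    if source in removed or sink in removed:
        return False
    seen, queue = {source}, deque([source])
    while queue:
        v = queue.popleft()
        if v == sink:
            return True
        for u in adj[v]:
            if u not in seen and u not in removed:
                seen.add(u)
                queue.append(u)
    return False

def fitness(adj, removed, pairs, penalty=1000):
    """Penalty-based objective: cut size plus a charge per surviving pair.

    An infeasible candidate (some source still reaches some sink) is not
    discarded; it just pays 'penalty' per unseparated pair, steering the
    search back toward feasibility.
    """
    surviving = sum(connected(adj, removed, s, t) for s, t in pairs)
    return len(removed) + penalty * surviving

# Tiny example: separate sources {a, b} from sinks {f, g}.
adj = {"a": {"c"}, "b": {"c", "d"}, "c": {"a", "b", "e"},
       "d": {"b", "e"}, "e": {"c", "d", "f", "g"}, "f": {"e"}, "g": {"e"}}
pairs = [(s, t) for s in "ab" for t in "fg"]
print(fitness(adj, {"e"}, pairs))   # feasible cut: cost 1
print(fitness(adj, {"c"}, pairs))   # b still reaches f and g: 1 + 1000*2
```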

Relevance:

30.00%

Publisher:

Abstract:

A value-shift began to influence global political thinking in the late 20th century, characterised by recognition of the need for environmentally, socially and culturally sustainable resource development. This shift entailed a move away from thinking of ‘nature’ and ‘culture’ as separate entities – the former existing to serve the latter – toward the possibility of embracing the intrinsic worth of the nonhuman world. Cultural landscape theory recognises ‘nature’ as at once both ‘natural’, and a ‘cultural’ construct. As such, it may offer a framework through which to progress in the quest for ‘sustainable development’. This study makes a contribution to this quest by asking whether contemporary developments in cultural landscape theory can contribute to rehabilitation strategies for Australian open-cut coal mining landscapes. The answer is ‘yes’. To answer the research question, a flexible, ‘emergent’ methodological approach has been used, resulting in the following outcomes. A thematic historical overview of landscape values and resource development in Australia post-1788, and a review of cultural landscape theory literature, contribute to the formation of a new theoretical framework: Reconnecting the Interrupted Landscape. This framework establishes a positive answer to the research question. It also suggests a method of application within the Australian open-cut coal mining landscape, a highly visible exemplar of the resource development landscape. This method is speculatively tested against the rehabilitation strategy of an operating open-cut coal mine, concluding with positive recommendations to the industry, and to government.