990 results for Log odds rate


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we examine the relationships between the log odds rate and various reliability measures such as the hazard rate and reversed hazard rate in the context of repairable systems. We also prove characterization theorems for some families of distributions, viz. the Burr, Pearson and log exponential models. We discuss the properties and applications of the log odds rate in weighted models. Further, we extend the concept to the bivariate setup and study its properties.
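A minimal sketch of the quantities this abstract relates, assuming the usual definition of the log odds rate as the derivative of the log odds function (the paper's own notation may differ):

```latex
% Log odds function and its derivative (the log odds rate), for a lifetime T
% with cdf F, survival function S = 1 - F, and density f.
\[
  \mathrm{LO}(t) = \ln\!\frac{F(t)}{S(t)}, \qquad
  \mathrm{LOR}(t) = \frac{d}{dt}\,\mathrm{LO}(t)
                  = \frac{f(t)}{F(t)} + \frac{f(t)}{S(t)}
                  = \lambda(t) + h(t),
\]
where $h(t) = f(t)/S(t)$ is the hazard rate and $\lambda(t) = f(t)/F(t)$ is the
reversed hazard rate.
```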

Relevance:

100.00%

Publisher:

Abstract:

For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. With the assumption of a known covariance matrix, its distribution is derived and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution is reviewed, and the evaluation of its probabilities of misclassification is discussed. For known covariance matrices the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectation of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive and kernel methods are compared by evaluating their biases and mean square errors. Some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable. The source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of true log-odds. The effect of correlation in the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions, the product kernel method is a good estimator of the true log-odds.
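The "estimative" approach compared in this abstract plugs sample estimates into the population log density ratio; a hedged sketch under the equal-covariance multinormal assumption (function name and interface are illustrative, not taken from the thesis):

```python
import numpy as np

def estimative_log_odds(x, X1, X2, prior_odds=1.0):
    """Plug-in (estimative) log-odds that observation x (1-D array) belongs
    to population 1 rather than population 2, assuming both populations are
    multivariate normal with a common covariance matrix."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # Pooled covariance estimate from the two training samples
    S = ((n1 - 1) * np.cov(X1, rowvar=False) +
         (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    Sinv = np.linalg.inv(S)
    # Linear discriminant form of the log density ratio plus prior log odds
    return float((x - 0.5 * (m1 + m2)) @ Sinv @ (m1 - m2) + np.log(prior_odds))
```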

Relevance:

100.00%

Publisher:

Abstract:

The present study emphasized characterizing continuous probability distributions and their weighted versions in the univariate setup. A possible line of further work in this direction is to study the properties of weighted distributions for truncated random variables in the discrete setup. The problem of extending the measures, as well as their weighted versions, to higher dimensions is yet to be examined. As the present study focused on length-biased models, the properties of weighted models with various other weight functions and their functional relationships also remain to be examined.
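For reference, a standard definition of a weighted distribution and its length-biased special case, in conventional notation rather than the study's own:

```latex
% Weighted density and its length-biased special case (weight w(x) = x).
\[
  f_w(x) = \frac{w(x)\, f(x)}{E[\,w(X)\,]}, \qquad
  f_L(x) = \frac{x\, f(x)}{E[X]} \quad (\text{length-biased}),
\]
where $f$ is the original density and $w(\cdot)$ is a non-negative weight
function with finite mean $E[w(X)]$.
```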

Relevance:

90.00%

Publisher:

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
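Of the three asymptotic tests mentioned, the Wald version is the simplest to sketch; the following is an illustrative implementation using the familiar large-sample variance of the sample log odds ratio, not the paper's code (the likelihood ratio and score tests require iterative computation and are omitted):

```python
import numpy as np
from scipy.stats import norm

def wald_log_or_test(x1, n1, x2, n2, margin_log_or=0.0):
    """Wald test of H0: log-odds ratio <= margin_log_or against the one-sided
    alternative (superiority when the margin is 0, non-inferiority when it is
    negative), based on the sample log odds ratio from a 2x2 table."""
    # Sample log odds ratio and its large-sample standard error
    log_or = np.log(x1 * (n2 - x2) / (x2 * (n1 - x1)))
    se = np.sqrt(1 / x1 + 1 / (n1 - x1) + 1 / x2 + 1 / (n2 - x2))
    z = (log_or - margin_log_or) / se
    return log_or, z, 1 - norm.cdf(z)

# Hypothetical counts: 45/60 responders on the test arm vs 38/60 on the control
print(wald_log_or_test(45, 60, 38, 60))
```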

Relevance:

80.00%

Publisher:

Abstract:

Using a dynamic materials model, processing and instability maps have been developed for the near-alpha titanium alloy 685 in the temperature range 775-1025 degrees C and the strain-rate range 0.001-10 s⁻¹ to optimise its hot workability. The alloy's beta-transus temperature lies at about 1020 degrees C. The material undergoes superplasticity with a peak efficiency of 80% at 975 degrees C and 0.001 s⁻¹, which are the optimum parameters for alpha-beta working. The occurrence of superplasticity is attributed to the two-phase microduplex structure, higher strain-rate sensitivity, low flow stress and the sigmoidal variation between log flow stress and log strain rate. The material also exhibits flow localisation due to adiabatic shear-band formation up to its beta-transus temperature at strain rates greater than 0.02 s⁻¹, leading to cracking along these regions. (C) 1997 Published by Elsevier Science S.A.
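A hedged sketch of how a dynamic-materials-model efficiency map is typically computed from flow-stress data, using the standard relation η = 2m/(m+1) with m the strain-rate sensitivity; the numbers below are purely illustrative, not the alloy 685 measurements:

```python
import numpy as np

def dmm_efficiency(log_stress, log_strain_rate):
    """Power-dissipation efficiency eta = 2m/(m+1) from the dynamic materials
    model, where m is the strain-rate sensitivity estimated as the local slope
    of log(flow stress) versus log(strain rate)."""
    m = np.gradient(log_stress, log_strain_rate)
    return 2 * m / (m + 1)

# Illustrative flow-stress data at one temperature (hypothetical values)
strain_rates = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])     # s^-1
flow_stress  = np.array([20.0, 35.0, 60.0, 110.0, 200.0])  # MPa
eta = dmm_efficiency(np.log10(flow_stress), np.log10(strain_rates))
print(eta)  # fractional efficiency at each strain rate (0.80 would match the 80% peak above)
```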

Relevance:

80.00%

Publisher:

Abstract:

A method is described for measuring the mechanical properties of polymers in compression at strain rates in the range of approximately 300-500 s⁻¹. A gravity-driven pendulum is used to load a specimen on the end of an instrumented Hopkinson output bar and the results are processed by a microcomputer. Stress-strain curves up to high strains are presented for polycarbonate, polyethersulphone and high-density polyethylene over a range of temperatures. The value of yield stress, for all three polymers, was found to vary linearly with log (strain rate) at strain rates up to 500 s⁻¹. © 1985.
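The reported linear dependence of yield stress on log(strain rate) amounts to a simple straight-line fit; a sketch with invented numbers (not the measured polymer data):

```python
import numpy as np

# Hypothetical yield-stress data (MPa) for one polymer at one temperature,
# illustrating the linear dependence on log(strain rate) reported above.
strain_rate = np.array([1e-3, 1e-1, 10.0, 300.0, 500.0])  # s^-1
yield_stress = np.array([58.0, 64.0, 71.0, 75.5, 76.2])   # MPa

# Least-squares fit: sigma_y = a + b * log10(strain rate)
b, a = np.polyfit(np.log10(strain_rate), yield_stress, 1)
print(f"sigma_y ≈ {a:.1f} + {b:.1f} * log10(strain rate) MPa")
```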

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE The authors used a genome-wide association study (GWAS) of multiply affected families to investigate the association of schizophrenia with common single-nucleotide polymorphisms (SNPs) and rare copy number variants (CNVs). METHOD The family sample included 2,461 individuals from 631 pedigrees (581 in the primary European-ancestry analyses). Association was tested for single SNPs and genetic pathways. Polygenic scores based on family study results were used to predict case-control status in the Schizophrenia Psychiatric GWAS Consortium (PGC) data set, and consistency of direction of effect with the family study was determined for top SNPs in the PGC GWAS analysis. Within-family segregation was examined for schizophrenia-associated rare CNVs. RESULTS No genome-wide significant associations were observed for single SNPs or for pathways. PGC case and control subjects had significantly different genome-wide polygenic scores, computed by weighting their genotypes by log-odds ratios from the family study (best p = 10⁻¹⁷, explaining 0.4% of the variance). Family study and PGC analyses had consistent directions for 37 of the 58 independent best PGC SNPs (p=0.024). The overall frequency of CNVs in regions with reported associations with schizophrenia (chromosomes 1q21.1, 15q13.3, 16p11.2, and 22q11.2 and the neurexin-1 gene [NRXN1]) was similar to previous case-control studies. NRXN1 deletions and 16p11.2 duplications (both of which were transmitted from parents) and 22q11.2 deletions (de novo in four cases) did not segregate with schizophrenia in families. CONCLUSIONS Many common SNPs are likely to contribute to schizophrenia risk, with substantial overlap in genetic risk factors between multiply affected families and cases in large case-control studies. Our findings are consistent with a role for specific CNVs in disease pathogenesis, but the partial segregation of some CNVs with schizophrenia suggests that researchers should exercise caution in using them for predictive genetic testing until their effects in diverse populations have been fully studied.
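The polygenic scoring step described above (weighting genotypes by family-study log-odds ratios) reduces to a weighted sum per individual; a toy sketch with hypothetical values:

```python
import numpy as np

def polygenic_score(genotypes, log_odds_ratios):
    """Genome-wide polygenic score: allele counts (0/1/2) at each scored SNP,
    weighted by the log-odds ratios estimated in the training (here,
    family-based) study and summed per individual."""
    return genotypes @ log_odds_ratios

# Hypothetical example: 3 individuals scored at 4 SNPs
G = np.array([[0, 1, 2, 1],
              [1, 0, 1, 0],
              [2, 2, 0, 1]], dtype=float)
beta = np.array([0.05, -0.02, 0.08, 0.01])  # illustrative training log-odds ratios
print(polygenic_score(G, beta))
```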

Relevance:

80.00%

Publisher:

Abstract:

Background: Little is known about the risk of progression to hazardous alcohol use in people currently drinking within safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods: A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and below 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking, defined by an AUDIT score ≥ 8 in men and ≥ 5 in women. Results: 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and a Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions: The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
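A sketch of the two reported quantities, assuming a standard logistic risk model: the predicted probability implied by a log-odds linear predictor, and a Hedges'-g-style standardized difference (small-sample correction omitted); the predictAL coefficients themselves are not reproduced here:

```python
import numpy as np

def predicted_risk(linear_predictor):
    """Convert the log odds from a logistic risk model (intercept plus
    weighted risk factors) into a predicted probability of progressing
    to hazardous drinking."""
    return 1.0 / (1.0 + np.exp(-linear_predictor))

def hedges_g(x, y):
    """Standardized difference in mean log odds between those who did and
    did not progress (small-sample correction factor omitted for brevity)."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

# Hypothetical log odds of predicted risk for progressors vs. non-progressors
progressed = np.array([-0.2, 0.5, 0.9, 0.1])
did_not = np.array([-1.5, -0.8, -1.1, -0.4, -2.0])
print(predicted_risk(0.5), hedges_g(progressed, did_not))
```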

Relevance:

80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

80.00%

Publisher:

Abstract:

Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability under the model characterizing the sequence family of interest is compared to that under an alternative probability model; a null model can serve as this alternative. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final classification results. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the cost of biological validation. Results: Across all the tests, the target null model produced the lowest number of false positives when random sequences were used as the test set. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a specific benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model exhibits a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In such cases the target model is more dependable for biological validation due to its higher specificity.
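A toy sketch of log-odds scoring against a position-independent null model; real tools such as HMMER use a profile HMM for the family model, so the family side here is deliberately simplified, and the residue frequencies are invented:

```python
import math

def log_odds_score(sequence, family_model, null_model):
    """Log-odds score of a sequence: log P(seq | family) - log P(seq | null),
    both models taken as position-independent residue distributions. A
    positive score favours the family model over the chosen null model."""
    score = 0.0
    for residue in sequence:
        score += math.log(family_model[residue]) - math.log(null_model[residue])
    return score

# Hypothetical residue distributions (DNA): uniform null vs. a GC-rich model
uniform = {b: 0.25 for b in "ACGT"}
gc_rich = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}
print(log_odds_score("GCGCGCATGC", gc_rich, uniform))
```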

Relevance:

80.00%

Publisher:

Abstract:

Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors regardless of the resulting harm may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient; only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given that harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log-odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
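A raw, non-hierarchical sketch of the quantity whose correlation the paper studies (the paper itself uses Bayesian hierarchical models; the counts below are hypothetical):

```python
import numpy as np

def cause_log_odds(counts, total):
    """Empirical log odds of selecting each cause, computed separately within
    harm and no-harm (near-miss) reports; 0.5 is added to avoid zero counts.
    The paper estimates these with hierarchical models rather than raw rates."""
    p = (counts + 0.5) / (total + 1.0)
    return np.log(p / (1 - p))

# Hypothetical counts of 4 causes among harm vs. near-miss reports
harm_counts, harm_total = np.array([30, 12, 50, 8]), 200
near_counts, near_total = np.array([700, 260, 1100, 150]), 5000

lo_harm = cause_log_odds(harm_counts, harm_total)
lo_near = cause_log_odds(near_counts, near_total)
# A high correlation supports using near-miss data for prevention strategies
print(np.corrcoef(lo_harm, lo_near)[0, 1])
```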

Relevance:

80.00%

Publisher:

Abstract:

Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season. Changes in the sources of air pollution and meteorology can result in changes in characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database for the period 1987-2000, which includes data for 100 U.S. cities. At the national level, a 10 µg/m³ increase in PM10 at lag 1 is associated with a 0.15 (95% posterior interval: -0.08, 0.39), 0.14 (-0.14, 0.42), 0.36 (0.11, 0.61), and 0.14 (-0.06, 0.34) percent increase in mortality for winter, spring, summer, and fall, respectively. An analysis by geographical regions finds a strong seasonal pattern in the northeast (with a peak in summer) and little seasonal variation in the southern regions of the country. These results provide useful information for understanding particle toxicity and guiding future analyses of particle constituent data.
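The percent increases quoted above follow from the log relative rate in the usual way; a small sketch of the conversion (the coefficient shown is back-calculated for illustration, not taken from the study's output):

```python
import math

def percent_increase(beta_per_unit, delta=10.0):
    """Percent increase in mortality implied by a log relative rate
    coefficient beta (per 1 microgram/m^3 of PM10) for a `delta`
    microgram/m^3 increase: 100 * (exp(beta * delta) - 1)."""
    return 100.0 * (math.exp(beta_per_unit * delta) - 1.0)

# e.g. a coefficient of 0.00036 per microgram/m^3 corresponds to roughly a
# 0.36% increase per 10 microgram/m^3, the scale of the summer estimate above
print(round(percent_increase(0.00036), 2))
```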

Relevance:

80.00%

Publisher:

Abstract:

The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power for discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power to discover genes under natural selection. The aim of the thesis is to explore and develop normal mixture model based methods that leverage gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, which helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
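A minimal sketch of the naïve Bayes combination idea described above, assuming conditional independence so that log odds simply add; the mixture-model and Guilt-By-Association scores themselves are placeholders, not the thesis's actual computations:

```python
import numpy as np

def combined_log_odds(mixture_log_odds, network_log_odds):
    """Naive-Bayes-style combination: under (assumed) conditional independence,
    the posterior log odds that a gene is a selection target is the sum of the
    normal-mixture log odds from the selection statistic and the log odds
    implied by the Guilt-By-Association network score."""
    return np.asarray(mixture_log_odds) + np.asarray(network_log_odds)

# Hypothetical values for three genes
print(combined_log_odds([1.2, -0.4, 0.1], [0.8, 0.3, -0.5]))
```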

Relevance:

80.00%

Publisher:

Abstract:

Superoxide is an important transient reactive oxygen species (ROS) in the ocean, formed as an intermediate in the redox transformation of oxygen (O2) into hydrogen peroxide (H2O2) and vice versa. This highly reactive and very short-lived radical anion can be produced both via photochemical and biological processes in the ocean. In this paper we examine the decomposition rate of O2- throughout the water column, using new data collected in the Eastern Tropical North Atlantic (ETNA) Ocean. For this approach we applied a semi-factorial experimental design to identify and quantify the pathways of the major identified sinks in the ocean. In this work we occupied 6 stations: 2 on the West African continental shelf and 4 in the open ocean, including the CVOO time series site adjacent to Cape Verde. Our results indicate that in the surface ocean, impacted by Saharan aerosols and sediment resuspension, the main decay pathways for superoxide are reactions with Mn(II) and organic matter.

Relevance:

80.00%

Publisher:

Abstract:

Wurst is a protein threading program with an emphasis on high-quality sequence-to-structure alignments (http://www.zbh.uni-hamburg.de/wurst). Submitted sequences are aligned to each of about 3000 templates with a conventional dynamic programming algorithm, but using a score function with sophisticated structure and sequence terms. The structure terms are a log-odds probability of sequence-to-structure fragment compatibility, obtained from a Bayesian classification procedure. A simplex optimization was used to optimize the sequence-based terms for the goal of alignment and model quality and to balance the sequence and structural contributions against each other. Both sequence and structural terms operate with sequence profiles.
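A hedged sketch of the kind of per-position score described: a structural log-odds term mixed with a sequence-profile term, with weights of the sort a simplex optimization would tune; the names, values and linear form are illustrative, not Wurst's actual score function:

```python
import math

def fragment_log_odds(p_frag_given_structure, p_frag_background):
    """Log-odds of sequence-to-structure fragment compatibility: the fragment's
    probability under the structural class (from a Bayesian classification)
    relative to its background probability."""
    return math.log(p_frag_given_structure / p_frag_background)

def position_score(struct_log_odds, profile_score, w_struct=1.0, w_seq=1.0):
    """Per-position alignment score mixing the structural log-odds term with a
    sequence-profile term; the relative weights are the kind of parameters a
    simplex optimization would balance against each other."""
    return w_struct * struct_log_odds + w_seq * profile_score

# Hypothetical probabilities and profile score for one aligned position
print(position_score(fragment_log_odds(0.012, 0.004), 1.3))
```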