965 results for Bayesian p-values


Relevance: 90.00%

Abstract:

Background: Several studies in the literature describe measurement error in gene expression data, and several others address regulatory network models. However, only a small fraction combines measurement error with mathematical regulatory networks and shows how to identify these networks under different noise levels. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of regulatory network models, biasing them as predicted by theory. Moreover, when testing the parameters of these models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Measurement error estimation procedures for microarrays are also described. Simulation results show that both corrected methods perform better than the standard ones (i.e., those ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error seriously affects the identification of regulatory network models; it must therefore be reduced or taken into account to avoid erroneous conclusions. This could be one of the reasons for the high biological false-positive rates found in current regulatory network models.
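
The attenuation bias described above has a compact form in the simplest case: with additive noise of variance sigma_u^2 on the regressor, the OLS slope converges to the true slope scaled by the reliability ratio lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2). Below is a minimal sketch of a method-of-moments correction, assuming the noise variance is known (e.g., estimated from technical replicates); the paper's actual corrected estimators for regression and autoregressive models are not reproduced here.

```python
# Minimal sketch (not the paper's exact estimator) of how additive
# measurement error biases OLS slopes toward zero, and how a
# method-of-moments correction recovers the true parameter when the
# noise variance sigma_u^2 is known or estimated from replicates.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_u2 = 5000, 2.0, 0.5   # sample size, true slope, noise variance

x_true = rng.normal(0.0, 1.0, n)                          # latent expression level
x_obs = x_true + rng.normal(0.0, np.sqrt(sigma_u2), n)    # noisy measurement
y = beta * x_true + rng.normal(0.0, 1.0, n)

# Naive OLS on the noisy regressor: attenuated by the reliability ratio
# lambda = var(x_true) / (var(x_true) + sigma_u2).
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Corrected estimator: subtract the noise variance from the denominator.
b_corrected = np.cov(x_obs, y)[0, 1] / (np.var(x_obs, ddof=1) - sigma_u2)

print(f"naive OLS:  {b_naive:.3f}")      # ~ beta * lambda ~ 1.33
print(f"corrected:  {b_corrected:.3f}")  # ~ beta = 2.0
```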

Relevance: 90.00%

Abstract:

The Mehlich-1 (M-1) extractant and monocalcium phosphate in acetic acid (MCPa) extract available P and S through acidity and through ligand exchange: the sulfate of the extractant exchanges with soil phosphate, or the phosphate of the extractant exchanges with soil sulfate. In clayey soils with greater P adsorption capacity, i.e., a lower remaining P (Rem-P) value, which corresponds to greater Phosphate Buffer Capacity (PBC) and stronger buffering of acidity, the initially low pH of the extractants rises toward the soil pH over their contact time with the soil, and the sulfate of the M-1 or the phosphate of the MCPa is retained on adsorption sites, whether or not these are already occupied by those anions. As a result, the extractant loses its extraction capacity, a phenomenon known as loss of extraction capacity or consumption of the extractant, the object of this study. Twenty soil samples were chosen to cover the range of Rem-P (0 to 60 mg L-1), and Rem-P was used as a measure of the PBC. The available P and S contents of the soil samples by M-1 and MCPa were determined, along with the contents of other nutrients and of organic matter. To determine the loss of extraction capacity, the pH and the P and S contents were measured in both soil extracts after the rest period. Although significant, the loss of extraction capacity for acidity of the M-1 and MCPa extractants with decreasing Rem-P was not very pronounced. For M-1, a "linear plateau" model described a discontinuous loss of P extraction capacity with decreasing Rem-P (i.e., increasing PBC), suggesting that a discontinuous model should also be adopted when interpreting the available P of soils with different Rem-P values. In contrast, a continuous linear relationship was observed between the P variables in the soil extract and Rem-P for the MCPa extractant, showing that its extraction capacity decreases steadily as soil PBC increases and supporting the validity of the linear relationship between available soil S and the PBC, estimated by Rem-P, as currently adopted.
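
The "linear plateau" model mentioned above is a segmented regression with a breakpoint: linear response up to a join point, constant afterwards. A hedged sketch of such a fit using scipy; the data points and the breakpoint are invented for illustration, not taken from the study.

```python
# Sketch of fitting a linear-response-plateau model, as observed for M-1.
# Variable names and data are illustrative, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, a, b, x0):
    """Linear (a + b*x) up to the join point x0, constant plateau afterwards."""
    return np.where(x < x0, a + b * x, a + b * x0)

rem_p = np.array([5, 10, 15, 20, 25, 30, 40, 50, 60], dtype=float)  # mg L-1
p_extracted = np.array([4.1, 7.8, 11.5, 15.2, 18.0, 18.3, 18.1, 18.4, 18.2])

(a, b, x0), _ = curve_fit(linear_plateau, rem_p, p_extracted, p0=[0.0, 1.0, 25.0])
print(f"slope={b:.2f}, plateau starts at Rem-P ~ {x0:.1f} mg L-1")
```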

Relevance: 90.00%

Abstract:

The application of statistics to science is not a neutral act. Statistical tools have shaped, and were in turn shaped by, their objects. In the social sciences, statistical methods fundamentally changed research practice, making statistical inference its centerpiece. At the same time, textbook writers in the social sciences transformed rival statistical systems into an apparently monolithic method that could be applied mechanically. The idol of a universal method for scientific inference has been worshipped since the "inference revolution" of the 1950s. Because no such method has ever been found, surrogates have been created, most notably the quest for significant p-values. This form of surrogate science fosters delusions and borderline cheating and has done much harm, creating, for one, a flood of irreproducible results. Proponents of the "Bayesian revolution" should be wary of chasing yet another chimera: an apparently universal inference procedure. A better path would be to promote both an understanding of the various devices in the "statistical toolbox" and informed judgment in selecting among them.

Relevance: 90.00%

Abstract:

Indoor radon is regularly measured in Switzerland, but a nationwide model to predict residential radon levels had not yet been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. A randomly selected 80% of the measurements were used for model development and the remaining 20% for independent model validation. A multivariable log-linear regression model was fitted, and relevant predictors were selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three classes (<50th, 50th-90th, and >90th percentile) and compared with the measured categories using a weighted kappa statistic. The most relevant predictors of indoor radon levels were the tectonic unit and the year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken, and housing type (p < 0.001 for all). Mean predicted radon values (geometric means) were 66 Bq/m³ (interquartile range 40-111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69-215 Bq/m³) in the medium category, and 219 Bq/m³ (108-427 Bq/m³) in the highest category. The Spearman correlation between predictions and measurements was 0.45 (95% CI: 0.44-0.46) for the development dataset and 0.44 (95% CI: 0.42-0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% of the overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was shown to be robust through validation with an independent dataset. The model is appropriate for predicting radon exposure of the Swiss population in epidemiological research. Nevertheless, some exposure misclassification and regression to the mean are unavoidable and should be taken into account in future applications of the model.
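
The evaluation pipeline described here (80/20 split, log-linear fit, Spearman correlation, weighted kappa on percentile categories) can be sketched as follows. Predictor names and data are placeholders, not the Swiss radon database's actual variables.

```python
# Sketch of the validation workflow: fit a log-linear model on an 80/20
# split, then compare predictions and measurements with Spearman rank
# correlation and a weighted kappa on percentile categories.
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))   # placeholders, e.g. tectonic unit, year built, floor
log_radon = 4.0 + X @ np.array([0.4, -0.3, 0.2]) + rng.normal(0, 0.9, n)

X_dev, X_val, y_dev, y_val = train_test_split(X, log_radon,
                                              test_size=0.2, random_state=0)
model = sm.OLS(y_dev, sm.add_constant(X_dev)).fit()
pred = model.predict(sm.add_constant(X_val))

rho, _ = spearmanr(pred, y_val)

def categorise(v, ref):
    """Assign <50th, 50th-90th, >90th percentile categories."""
    return np.digitize(v, np.percentile(ref, [50, 90]))

kappa = cohen_kappa_score(categorise(pred, y_dev), categorise(y_val, y_dev),
                          weights="linear")
print(f"adj. R2={model.rsquared_adj:.2f}  Spearman={rho:.2f}  kappa={kappa:.2f}")
```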

Relevance: 90.00%

Abstract:

This thesis studies survival analysis techniques that deal with censoring in order to produce predictive tools for the risk of re-intervention after endovascular aortic aneurysm repair (EVAR). Censoring means that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for handling censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. This thesis therefore presents a new solution to high censoring by modifying an approach that had been incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated under censoring. Most survival FS methods depend on Cox's model, whereas machine learning classifiers (MLC) are preferred here; the few methods that adopt MLC for survival FS cannot be used with high censoring. This thesis proposes two FS methods that use MLC to evaluate features, both building on the new solution to censoring. They combine factor analysis with a greedy stepwise FS search that allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machines, neural networks, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers. It introduces a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature-ranking filter method to further reduce the feature set. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator (LASSO), in log-rank test p-values, sensitivity, and concordance. This indicates that the proposed techniques are more powerful in correctly predicting the risk of re-intervention, enabling doctors to set an appropriate future observation plan for each patient.
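
The greedy stepwise wrapper search is the part of this FS pipeline that translates most directly into code. A minimal forward-selection sketch with a machine-learning classifier as the evaluator; the censoring handling, factor analysis step, and MCS voting are not reproduced, and plain cross-validated accuracy stands in for the survival-aware score.

```python
# Sketch of wrapper-style greedy forward feature selection with an MLC
# evaluator, in the spirit of the thesis' FS methods (simplified).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           random_state=0)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
clf = KNeighborsClassifier(n_neighbors=5)

while remaining:
    # Score every candidate feature added to the current subset.
    scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:          # stop when no feature improves the score
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, f"(CV accuracy {best_score:.2f})")
```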

Relevance: 90.00%

Abstract:

Hydrophobicity, as measured by Log P, is an important molecular property related to toxicity and carcinogenicity. With increasing public health concern over the effects of Disinfection By-Products (DBPs), there is considerable benefit in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by Multiple Linear Regression (MLR) analysis from 3 molecular descriptors: the Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), the Number of Chlorine atoms (NCl), and the Number of Carbon atoms (NC). The QSAR models were validated according to the Organization for Economic Co-operation and Development (OECD) principles, and the model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness, and predictivity, with R² values for the predicted Log P of DBPs from 81% to 98%. The leverage approach with Williams plots was applied to detect and remove outliers, increasing R² by approximately 2% to 13% for the different DBP classes. The QSAR models were statistically validated for predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models and to determine the parameters most influential for Log P prediction. The QSAR models developed in this dissertation have a broad applicability domain because the dataset covered six of the eight common DBP classes: halogenated alkanes, halogenated alkenes, halogenated aromatics, halogenated aldehydes, halogenated ketones, and halogenated carboxylic acids, which have drawn the attention of regulatory agencies in recent years. The models are therefore suitable for predicting similar DBP compounds within the same applicability domain, and the selection and integration of the methodologies developed here may also benefit future research in similar fields.
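
A minimal sketch of the core QSAR workflow follows: a three-descriptor MLR model for Log P checked by leave-one-out cross-validation. The descriptor values are simulated stand-ins, not the dissertation's DBP dataset.

```python
# Sketch of a 3-descriptor MLR QSAR model for Log P with LOO validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
n = 60
X = np.column_stack([rng.normal(-1.0, 0.5, n),   # ELUMO (eV), illustrative
                     rng.integers(0, 4, n),      # NCl
                     rng.integers(1, 7, n)])     # NC
log_p = 0.8 - 0.6 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, log_p)
pred_loo = cross_val_predict(model, X, log_p, cv=LeaveOneOut())

# Q2 (cross-validated R2) summarises LOO predictive power.
ss_res = np.sum((log_p - pred_loo) ** 2)
ss_tot = np.sum((log_p - log_p.mean()) ** 2)
print(f"fit R2={model.score(X, log_p):.2f}  LOO Q2={1 - ss_res / ss_tot:.2f}")
```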

Relevance: 80.00%

Abstract:

Objective: to analyze the effect of treatment approach on the outcomes of newborns (birth weight [BW] < 1,000 g) with patent ductus arteriosus (PDA) from the Brazilian Neonatal Research Network (BNRN): death, bronchopulmonary dysplasia (BPD), severe intraventricular hemorrhage (IVH III/IV), retinopathy of prematurity requiring surgery (ROPsur), necrotizing enterocolitis requiring surgery (NECsur), and death/BPD. Methods: this was a multicenter cohort study with retrospective data collection, including newborns (BW < 1,000 g) with gestational age (GA) < 33 weeks and echocardiographic diagnosis of PDA, from 16 neonatal units of the BNRN from January 1, 2010 to December 31, 2011. Newborns who died or were transferred by the third day of life, and those with congenital malformation or infection, were excluded. Groups: G1, conservative approach (no treatment); G2, pharmacologic treatment (indomethacin or ibuprofen); G3, surgical ligation (regardless of previous treatment). Factors analyzed: antenatal corticosteroid, cesarean section, BW, GA, 5-min Apgar score < 4, male gender, Score for Neonatal Acute Physiology Perinatal Extension (SNAPPE II), respiratory distress syndrome (RDS), late sepsis (LS), mechanical ventilation (MV), surfactant (< 2 h of life), and duration of MV. Outcomes: death, O2 dependence at 36 weeks (BPD36wks), IVH III/IV, ROPsur, NECsur, and death/BPD36wks. Statistics: Student's t-test, chi-squared test, or Fisher's exact test; odds ratios (95% CI); binary logistic regression and backward stepwise multiple regression, using MedCalc (Medical Calculator) software, version 12.1.4.0; p-values < 0.05 were considered statistically significant. Results: of 1,097 newborns screened, 494 were included: G1, 187 (37.8%); G2, 205 (41.5%); G3, 102 (20.6%). Mortality was highest in G1 (51.3%) and lowest in G3 (14.7%). The highest frequencies of BPD36wks (70.6%) and ROPsur (23.5%) were observed in G3. The lowest occurrence of death/BPD36wks occurred in G2 (58.0%). Pharmacologic (OR 0.29; 95% CI: 0.14-0.62) and conservative (OR 0.34; 95% CI: 0.14-0.79) treatment were protective for the outcome death/BPD36wks. Conclusions: the conservative approach to PDA was associated with higher mortality, the surgical approach with the occurrence of BPD36wks and ROPsur, and pharmacologic treatment was protective for the outcome death/BPD36wks.
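
The protective odds ratios quoted above come out of the binary logistic regression step. A sketch of that computation on simulated data, assuming a single treatment indicator; the group coding and effect size are illustrative, not the BNRN cohort's.

```python
# Sketch: odds ratio with 95% CI from a binary logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 494
pharmacologic = rng.integers(0, 2, n)          # 1 = pharmacologic treatment
# Simulate death/BPD with lower odds in the treated group (true OR ~ 0.3).
logit = -0.2 + np.log(0.3) * pharmacologic
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

fit = sm.Logit(outcome, sm.add_constant(pharmacologic.astype(float))).fit(disp=0)
or_est = np.exp(fit.params[1])                 # exponentiated coefficient = OR
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```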

Relevance: 80.00%

Abstract:

In the unlubricated sliding wear of steels, the mild-to-severe and severe-to-mild wear transitions have long been investigated, as has the effect on those transitions of system inputs such as normal load, sliding speed, ambient humidity and temperature, and material properties. Although the transitions seem to be caused by microstructural changes, surface oxidation, and work-hardening, some questions remain about how each factor is involved. Since the early studies of sliding wear, it has usually been assumed that only the material properties of the softer body influence the wear behavior of contacting surfaces. For example, the Archard equation involves only the hardness of the softer body, without considering the hardness of the harder body. This work discusses the importance of the harder body's hardness in determining the operating wear regime. To this end, pin-on-disk wear tests were carried out in which the disk material was always harder than the pin material. Variations in the friction force and in the vertical displacement of the pin were recorded during the tests. Materials were characterized before and after the tests by stereoscopy and scanning electron microscopy (SEM), in addition to mass loss, surface roughness, and microhardness measurements. The wear results confirmed the occurrence of a mild-to-severe wear transition when the disk hardness was decreased. The ratio of disk hardness to pin hardness (H(d)/H(p)) was used as a criterion to establish the nature of the surface contact deformation and to determine the wear regime transition: a predominantly elastic or plastic contact, characterized by H(d)/H(p) values higher or lower than one, results in a mild or severe wear regime, respectively.
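
For reference, the Archard relation invoked above can be written in its standard textbook form, together with the study's hardness-ratio criterion (this is the generic equation, not a formula quoted from the paper):

```latex
% Archard wear equation (standard form): wear volume grows with load and
% sliding distance and is inversely proportional to the hardness of the
% softer body only -- the harder body's hardness does not appear.
\[
  V = K \,\frac{W\, s}{H_{\mathrm{soft}}}
\]
% The study's criterion: the disk-to-pin hardness ratio decides the regime.
\[
  \frac{H_d}{H_p} > 1 \;\Rightarrow\; \text{predominantly elastic contact (mild wear)},
  \qquad
  \frac{H_d}{H_p} < 1 \;\Rightarrow\; \text{predominantly plastic contact (severe wear)}
\]
% V: worn volume, K: dimensionless wear coefficient, W: normal load,
% s: sliding distance, H_soft: hardness of the softer body (the pin here).
```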

Relevance: 80.00%

Abstract:

Differential scanning calorimetry was used to evaluate the effect of storage at 10 °C, 20 °C, and 30 °C, and at 40% and 65% relative humidity (RH), on adzuki bean starch gelatinisation and protein denaturation temperatures. Storage for 6 months at the elevated storage temperature (30 °C) increased the starch gelatinisation onset temperature (To) and gelatinisation peak temperature (Tp) for both the Bloodwood and Erimo varieties. Storage at 40% RH resulted in higher To and Tp values than storage at 65% RH. The To of starch from Bloodwood and Erimo beans stored for up to 1.5 months at 10 °C and 65% RH was similar to that of fresh beans. The changes in the salt-soluble protein component were less clear-cut than those of the starch. Nonetheless, protein extracted from beans stored at 40% RH exhibited significantly lower To and Tp values than protein from beans stored at 65% RH, indicating some destabilisation of the protein at the higher RH. These results suggest that detrimental changes occur in the starch and, to a lesser extent, the protein of adzuki beans stored under unfavourable conditions. On the basis of these results, the best storage conditions for maintaining the characteristics of fresh beans are low temperature (e.g., 10 °C) and high RH (e.g., 65%).

Relevance: 80.00%

Abstract:

Background: The current relevance of T-wave alternans (TWA) rests on its association with electrical disorder and elevated cardiac risk. Quantitative reports would improve understanding of TWA augmentation mechanisms during mental stress or prior to tachyarrhythmias; however, little information is available about quantitative TWA values in clinical populations. This study aims to create and compare TWA profiles of healthy subjects and ICD patients evaluated with treadmill stress protocols. Methods: Apparently healthy subjects not taking any medication were recruited; all eligible ICD patients were capable of performing an attenuated stress test. TWA analysis was performed during a 15-lead treadmill test. The derived comparative profile consisted of TWA amplitude and its associated heart rate, at rest (baseline) and at peak TWA value. Chi-square or Mann-Whitney tests were used, with p ≤ 0.05 considered significant. Discriminatory performance was evaluated by a binary logistic regression model. Results: 31 healthy subjects (8 F, 23 M) and 32 ICD patients (10 F, 22 M) differed in baseline TWA (1 ± 2 μV vs. 8 ± 9 μV; p < 0.001) and peak TWA (26 ± 13 μV vs. 37 ± 20 μV; p = 0.009), as well as in baseline TWA heart rate (79 ± 10 bpm vs. 67 ± 15 bpm; p < 0.001) and peak TWA heart rate (118 ± 8 bpm vs. 90 ± 17 bpm; p < 0.001). The logistic model yielded sensitivity and specificity of 88.9% and 92.9%, respectively. Conclusions: Healthy subjects and ICD patients have distinct TWA profiles. The new TWA profile representation (in amplitude-heart rate pairs) may ease comparison among different research protocols. Ann Noninvasive Electrocardiol 2009;14(2):108-118.
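
The reported sensitivity and specificity follow from thresholding the logistic model's predictions. Below is a sketch on data simulated from the peak-value summary statistics above (an assumption made purely for illustration; the study's per-subject data are not available here).

```python
# Sketch: binary logistic model on (TWA amplitude, heart rate) pairs,
# summarised by sensitivity and specificity. Data are simulated from the
# abstract's group means/SDs, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
healthy = np.column_stack([rng.normal(26, 13, 31), rng.normal(118, 8, 31)])
icd = np.column_stack([rng.normal(37, 20, 32), rng.normal(90, 17, 32)])
X = np.vstack([healthy, icd])
y = np.r_[np.zeros(31), np.ones(32)]          # 1 = ICD patient

clf = LogisticRegression(max_iter=1000).fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
print(f"sensitivity={tp / (tp + fn):.1%}  specificity={tn / (tn + fp):.1%}")
```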

Relevance: 80.00%

Abstract:

Background: Meta-analysis is increasingly employed as a screening procedure in large-scale association studies to select promising variants for follow-up. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecification, making power suboptimal, or lead to the evaluation of multiple genetic models, which inflates the number of false-positive associations and ultimately wastes resources on fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification for optimizing screenings of genome-wide meta-analysis signals for further replication. Methods: Different methods, meta-analytical models, and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait across a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios, and between-study heterogeneity (τ²). Results: Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs, whether or not heterogeneity was present. Nonetheless, this strategy is sensitive to τ² whenever the susceptibility allele is common (MAF ≥ 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion: Invoking a simple Bonferroni adjustment and testing for both multiplicative and recessive models is fast and optimal in large meta-analysis-based screenings. However, care must be taken when the examined variants are common, where specification of a multiplicative model alone may be preferable.
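
Per variant, the strategy the abstract identifies reduces to two association tests and one correction. A sketch on a single 2×3 genotype table, using chi-square tests as stand-ins for the meta-analytical models actually simulated in the study; the counts are invented.

```python
# Sketch of the dual-model screening strategy: test a variant under a
# multiplicative (allele-count) and a recessive coding, take the smaller
# p-value, and Bonferroni-correct for having tested two models.
import numpy as np
from scipy.stats import chi2_contingency

# Genotype counts (aa, Aa, AA) for cases and controls (illustrative).
cases = np.array([240, 480, 280])
controls = np.array([300, 500, 200])

# Multiplicative model: compare allele counts (a vs. A).
allele_table = np.array([
    [2 * cases[0] + cases[1], 2 * cases[2] + cases[1]],
    [2 * controls[0] + controls[1], 2 * controls[2] + controls[1]],
])
p_mult = chi2_contingency(allele_table)[1]

# Recessive model: AA vs. (aa + Aa).
rec_table = np.array([
    [cases[2], cases[0] + cases[1]],
    [controls[2], controls[0] + controls[1]],
])
p_rec = chi2_contingency(rec_table)[1]

p_screen = min(1.0, 2 * min(p_mult, p_rec))   # Bonferroni for two models
print(f"p_mult={p_mult:.2e}  p_rec={p_rec:.2e}  corrected p={p_screen:.2e}")
```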

Relevance: 80.00%

Abstract:

Background: The Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial, in 2,368 patients with stable ischemic heart disease assigned before randomization to percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG) strata, reported similar 5-year all-cause mortality rates for insulin sensitization versus insulin provision therapy, and for a strategy of prompt initial coronary revascularization plus intensive medical therapy versus intensive medical therapy alone with revascularization reserved for clinical indication(s). In this report, we examine the predefined secondary end points of cardiac death and myocardial infarction (MI). Methods and Results: Outcome data were analyzed by intention to treat; the Kaplan-Meier method was used to assess 5-year event rates, and nominal P values are presented. During an average 5.3-year follow-up, there were 316 deaths (43% attributed to cardiac causes) and 279 first MI events. Five-year cardiac mortality did not differ between the revascularization plus intensive medical therapy group (5.9%) and the intensive medical therapy alone group (5.7%; P = 0.38), or between insulin sensitization (5.7%) and insulin provision therapy (6%; P = 0.76). In the CABG stratum (n = 763), MI events were significantly less frequent with revascularization plus intensive medical therapy than with intensive medical therapy alone (10.0% versus 17.6%; P = 0.003), and the composite end points of all-cause death or MI (21.1% versus 29.2%; P = 0.010) and cardiac death or MI (P = 0.03) were also less frequent. The reduction in MI (P = 0.001) and in cardiac death/MI (P = 0.002) was significant only in the insulin sensitization group. Conclusions: In many patients with type 2 diabetes mellitus and stable ischemic coronary disease in whom angina symptoms are controlled, similar to those enrolled in the PCI stratum, intensive medical therapy alone should be the first-line strategy. In patients with more extensive coronary disease, similar to those enrolled in the CABG stratum, prompt CABG, in the absence of contraindications, together with intensive medical therapy and an insulin sensitization strategy, appears to be the preferred therapeutic strategy for reducing the incidence of MI. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT00006305. (Circulation. 2009;120:2529-2540.)
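
The 5-year event rates above are Kaplan-Meier estimates, which handle patients censored before 5 years of follow-up. A self-contained sketch of the estimator S(t) = Π(1 − d_i/n_i) on simulated data, not BARI 2D records.

```python
# Minimal Kaplan-Meier sketch for censored follow-up times.
import numpy as np

def kaplan_meier(time, event):
    """Return event times and survival estimates S(t) = prod(1 - d_i/n_i)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    s, surv, times = 1.0, [], []
    for t in np.unique(time):
        mask = time == t
        deaths = event[mask].sum()
        if deaths:
            s *= 1.0 - deaths / at_risk
            times.append(t)
            surv.append(s)
        at_risk -= mask.sum()          # events and censorings leave the risk set
    return np.array(times), np.array(surv)

rng = np.random.default_rng(5)
t = rng.exponential(40.0, 500).clip(max=5.3)   # years of follow-up
e = (t < 5.3).astype(int)                      # 1 = event, 0 = censored at 5.3 y

times, surv = kaplan_meier(t, e)
print(f"estimated 5-year event rate: {1 - surv[times <= 5.0][-1]:.1%}")
```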

Relevance: 80.00%

Abstract:

Objective: Intrasubstance meniscal signal changes not reaching the articular surface on fast spin echo (FSE) sequences are considered to represent mucoid degeneration on MRI. The aim of this study was to evaluate the association of prevalent intrasubstance signal changes with incident tears of the medial meniscus detected on 3.0 T MRI over a 1-year period. Materials and methods: A total of 161 women aged ≥40 years participated in a longitudinal 1-year observational study of knee osteoarthritis. MRI (3.0 T) was performed at baseline and at 12-month follow-up. The anterior horn, body, and posterior horn of the medial meniscus were scored by two experienced musculoskeletal radiologists using the Boston-Leeds Osteoarthritis Knee Score (BLOKS) system. Four grades described the meniscal morphology: grade 0 (normal), grade 1 (intrasubstance signal changes not reaching the articular surface), grade 2 (single tears), and grade 3 (complex tears and maceration). Fisher's exact test and the Cochran-Armitage trend test were performed to evaluate whether baseline intrasubstance signal changes (grade 1) predict incident meniscal tears/maceration (grades 2 and/or 3) in the same subregion of the medial meniscus, compared with subregions without pathology (grade 0) as the reference group. Results: Medial meniscal intrasubstance signal changes at baseline did not predict tears at follow-up in the anterior and posterior horns (left-sided p-values 0.06 and 0.59, respectively). No incident tears were detected in the body. Conclusion: We could not demonstrate an association between prevalent medial meniscal intrasubstance signal changes and incident tears over a 1-year period.
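
The subregion analysis reduces to small contingency tables. A sketch of a one-sided Fisher's exact test on an illustrative 2×2 table (the counts are invented and the trend test is omitted).

```python
# Sketch: one-sided Fisher's exact test of baseline status vs. incident tear.
from scipy.stats import fisher_exact

#                 incident tear   no tear
table = [[4, 26],     # baseline grade 1 (intrasubstance signal changes)
         [3, 128]]    # baseline grade 0 (normal, reference group)

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"OR={odds_ratio:.2f}, one-sided p={p_value:.2f}")
```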

Relevance: 80.00%

Abstract:

We present a novel maximum-likelihood-based algorithm for estimating the distribution of alignment scores from the scores of unrelated sequences in a database search. Using a new method for measuring the accuracy of p-values, we show that our maximum-likelihood-based algorithm is more accurate than existing regression-based and lookup table methods. We explore a more sophisticated way of modeling and estimating the score distributions (using a two-component mixture model and expectation maximization), but conclude that this does not improve significantly over simply ignoring scores with small E-values during estimation. Finally, we measure the classification accuracy of p-values estimated in different ways and observe that inaccurate p-values can, somewhat paradoxically, lead to higher classification accuracy. We explain this paradox and argue that statistical accuracy, not classification accuracy, should be the primary criterion in comparisons of similarity search methods that return p-values that adjust for target sequence length.
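
Although the paper's exact estimation scheme is not reproduced here, the general idea can be sketched by fitting an extreme value (Gumbel) distribution, the classical model for local alignment scores, to unrelated-sequence scores by maximum likelihood and converting new scores to p-values.

```python
# Hedged sketch: ML fit of a Gumbel null distribution to unrelated-sequence
# scores, then p-values for new scores from the fitted survival function.
# The paper's two-component mixture/EM variant is not reproduced.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(6)
# Stand-in for scores of unrelated sequences from a database search.
null_scores = gumbel_r.rvs(loc=20.0, scale=4.0, size=10000, random_state=rng)

loc_hat, scale_hat = gumbel_r.fit(null_scores)        # ML estimates

def p_value(score):
    """P(S >= score) under the fitted null distribution."""
    return gumbel_r.sf(score, loc=loc_hat, scale=scale_hat)

print(f"p-value of a score of 45: {p_value(45.0):.2e}")
```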