917 results for Power Sensitivity Model


Relevance: 30.00%

Abstract:

Background: Meta-analysis is increasingly being employed as a screening procedure in large-scale association studies to select promising variants for follow-up studies. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecification, causing power to be suboptimal, or require the evaluation of multiple genetic models, which increases the number of false-positive associations, ultimately wasting resources on fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification to optimize screenings of genome-wide meta-analysis signals for further replication. Methods: Different methods, meta-analytical models and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait in a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios and between-study heterogeneity (τ²). Results: Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was found to be optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs, either in the presence or absence of heterogeneity. Nonetheless, this strategy is sensitive to τ² whenever the susceptibility allele is common (MAF ≥ 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion: Invoking a simple Bonferroni adjustment and testing for both multiplicative and recessive models is fast and an optimal strategy in large meta-analysis-based screenings. However, care must be taken when examined variants are common, where specification of a multiplicative model alone may be preferable.
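The screening rule the abstract recommends can be sketched in a few lines: a variant is flagged for replication if the better of the two model fits survives a Bonferroni correction for having tested two genetic models. The p-values and the genome-wide threshold below are illustrative placeholders, not values from the study.

```python
# Sketch of the Bonferroni-corrected two-model screening rule: a variant is
# carried forward when the smaller of the multiplicative-model and
# recessive-model p-values, multiplied by 2 (Bonferroni for two tests),
# clears the genome-wide threshold.

def flag_for_replication(p_multiplicative, p_recessive, alpha=5e-8):
    """Return True if either genetic model survives the two-test correction."""
    return min(p_multiplicative, p_recessive) * 2 < alpha

# A signal strong under the recessive model only is still flagged:
print(flag_for_replication(1e-4, 1e-9))   # True
print(flag_for_replication(1e-4, 1e-6))   # False: neither survives at 5e-8
```

The factor of 2 is what keeps the false-discovery count down relative to scanning many genetic models without adjustment.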

Relevance: 30.00%

Abstract:

A risk score model was developed based on a population of 1,224 individuals from the general population without known diabetes, aged 35 years or older, from an urban Brazilian population sample, in order to select individuals who should be screened with subsequent testing and improve the efficacy of public health screening. External validation was performed in a second, independent population from a different city, ascertained through a similar epidemiological protocol. The risk score was developed by multiple logistic regression, and model performance and cutoff values were derived from a receiver operating characteristic curve. The model's capacity to predict fasting blood glucose levels was tested by analyzing data from a 5-year follow-up protocol conducted in the general population. Items independently and significantly associated with diabetes were age, BMI and known hypertension. Sensitivity, specificity and the proportion of further testing necessary for the best cutoff value were 75.9%, 66.9% and 37.2%, respectively. External validation confirmed the model's adequacy (AUC equal to 0.72). Finally, the model score was also capable of predicting fasting blood glucose progression in non-diabetic individuals over a 5-year follow-up period. In conclusion, this simple diabetes risk score was able to identify individuals with an increased likelihood of having diabetes, and it can be used to stratify subpopulations in which subsequent testing is necessary and probably cost-effective.
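The structure of such a score is a logistic model over the three retained items (age, BMI, known hypertension) plus a referral cutoff chosen from the ROC curve. The coefficients and cutoff below are hypothetical placeholders for illustration, not the published model.

```python
import math

# Illustrative logistic risk score over the three items the abstract reports.
# COEFS and the 0.10 cutoff are invented for the sketch.
COEFS = {"intercept": -7.0, "age": 0.06, "bmi": 0.09, "hypertension": 0.9}

def risk_probability(age, bmi, hypertension):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b1*age + b2*bmi + b3*htn)))."""
    z = (COEFS["intercept"] + COEFS["age"] * age
         + COEFS["bmi"] * bmi + COEFS["hypertension"] * hypertension)
    return 1.0 / (1.0 + math.exp(-z))

def screen_positive(age, bmi, hypertension, cutoff=0.10):
    """Refer for blood testing when the modelled risk exceeds the cutoff."""
    return risk_probability(age, bmi, hypertension) >= cutoff

print(screen_positive(62, 31.0, 1))  # older, obese, hypertensive -> True
print(screen_positive(36, 22.0, 0))  # young, lean, normotensive -> False
```

Moving the cutoff along the ROC curve trades the 75.9% sensitivity against the 37.2% of the population sent on for further testing.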

Relevance: 30.00%

Abstract:

The standard critical power test protocol on the cycle ergometer prescribes a series of trials to exhaustion, each at a different but constant power setting. Recently the protocol has been modified and applied to a series of trials to exhaustion, each at a different ramp incremental rate. This study was undertaken to compare critical power and anaerobic work capacity estimates in the same group of subjects when derived from the two protocols. Ten male subjects of mixed athletic ability cycled to exhaustion on eight occasions in randomized order over a 3-wk period. Four trials were performed at differing constant power settings and four trials at differing ramp incremental rates. Both critical power and anaerobic work capacity were estimated for each subject by curve fitting of the ramp model and of three versions of the constant power model. After adjusting for inter-subject variability, no significant differences were detected between critical power estimates or between anaerobic work capacity estimates from any model formulation or from the two protocols. It is concluded that both the ramp and constant power protocols produce equivalent estimates of critical power and anaerobic work capacity.
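One common formulation of the constant power model (the abstract compares three versions but does not spell them out) is the hyperbolic time-to-exhaustion relation t = W'/(P − CP), equivalently the linear work-time form P·t = W' + CP·t, which yields a closed-form estimate from as few as two trials. The trial values below are illustrative.

```python
# Closed-form two-trial estimate under the work-time model W = W' + CP * t,
# where W = P * t is total work, CP is critical power (W) and W' is the
# anaerobic work capacity (J). Trial data are invented for the sketch.

def critical_power_two_trials(p1, t1, p2, t2):
    """Estimate (CP, W') from two constant-power trials (power W, time s)."""
    w1, w2 = p1 * t1, p2 * t2          # total work in each trial
    cp = (w1 - w2) / (t1 - t2)         # slope of the work-time line
    w_prime = w1 - cp * t1             # intercept: anaerobic work capacity
    return cp, w_prime

cp, w_prime = critical_power_two_trials(300, 180, 250, 480)
print(round(cp), round(w_prime))  # -> 220 14400
```

With four constant-power trials, as in the study, the same line would be fitted by least squares rather than solved exactly.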

Relevance: 30.00%

Abstract:

This study evaluated the use of Raman spectroscopy to identify the spectral differences between normal (N), benign prostatic hyperplasia (BPH) and adenocarcinoma (CaP) in fragments of prostate biopsies in vitro, with the aim of developing a spectral diagnostic model for tissue classification. A dispersive Raman spectrometer was used with 830 nm excitation wavelength and 80 mW power. Following Raman data collection and tissue histopathology (48 fragments diagnosed as N, 43 as BPH and 14 as CaP), two diagnostic models were developed to extract diagnostic information: the first using PCA and Mahalanobis distance techniques, and the second a simplified biochemical model based on the spectral features of cholesterol, collagen, smooth muscle cells and adipocytes. Spectral differences between N, BPH and CaP tissues were observed mainly in the Raman bands associated with proteins, lipids, nucleic acids and amino acids. The PCA diagnostic model showed a sensitivity and specificity of 100%, which indicates the ability of the PCA and Mahalanobis distance techniques to classify tissue changes in vitro. It was also found that the relative amount of collagen decreased, while the amounts of cholesterol and adipocyte increased, with the severity of the disease. The smooth muscle cell component increased in BPH tissue. These characteristics were used for diagnostic purposes.
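The PCA/Mahalanobis step amounts to assigning each spectrum's PCA score vector to the class whose centroid is nearest in Mahalanobis distance. The sketch below simplifies to a diagonal covariance per class, and the two "features" and all class statistics are purely illustrative stand-ins for PCA scores.

```python
import math

# Nearest-class Mahalanobis classification over 2-D "PCA scores".
# Class means/variances are invented; a diagonal covariance is assumed
# for brevity (the real method uses the full covariance matrix).
CLASSES = {
    "N":   {"mean": (0.0, 0.0), "var": (1.0, 1.0)},
    "BPH": {"mean": (3.0, 0.5), "var": (1.0, 0.5)},
    "CaP": {"mean": (6.0, 2.0), "var": (2.0, 1.0)},
}

def mahalanobis(x, mean, var):
    """Diagonal-covariance Mahalanobis distance from x to a class centroid."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def classify(x):
    """Assign the score vector to the class with the nearest centroid."""
    return min(CLASSES,
               key=lambda c: mahalanobis(x, CLASSES[c]["mean"], CLASSES[c]["var"]))

print(classify((0.2, -0.1)))  # near the normal-tissue centroid
print(classify((5.5, 1.8)))   # near the adenocarcinoma centroid
```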

Relevance: 30.00%

Abstract:

Background: Noninvasive positive-pressure ventilation (NPPV) modes are currently available on bilevel and ICU ventilators. However, few data comparing the performance of the NPPV modes on these ventilators are available. Methods: In an experimental bench study, the ability of nine ICU ventilators to function in the presence of leaks was compared with a bilevel ventilator using the IngMar ASL5000 lung simulator (IngMar Medical; Pittsburgh, PA) set at a compliance of 60 mL/cm H2O, an inspiratory resistance of 10 cm H2O/L/s, an expiratory resistance of 20 cm H2O/L/s, and a respiratory rate of 15 breaths/min. All of the ventilators were set at 12 cm H2O pressure support and 5 cm H2O positive end-expiratory pressure. Data were collected at baseline and at three customized leaks. Main results: At baseline, all of the ventilators were able to deliver adequate tidal volumes, to maintain airway pressure, and to synchronize with the simulator, without missed efforts or auto-triggering. As the leak was increased, all of the ventilators (except the Vision [Respironics; Murrysville, PA] and Servo I [Maquet; Solna, Sweden]) needed adjustment of sensitivity or cycling criteria to maintain adequate ventilation, and some transitioned to backup ventilation. Significant differences in triggering and cycling were observed between the Servo I and the Vision ventilators. Conclusions: The Vision and Servo I were the only ventilators that required no adjustments as they adapted to increasing leaks. There were differences in performance between these two ventilators, although the clinical significance of these differences is unclear. Clinicians should be aware that in the presence of leaks, most ICU ventilators require adjustments to maintain an adequate tidal volume. (CHEST 2009; 136:448-456)

Relevance: 30.00%

Abstract:

Association between insulin resistance (IR) and non-alcoholic fatty liver disease (NAFLD) has been reported. This prompted us to evaluate the power of the insulin sensitivity index (ISI), in association with IGFBP-1, to identify IR early in obese children/adolescents. An OGTT was performed in 34 obese/overweight children/adolescents. Glucose, insulin and IGFBP-1 were measured in serum samples and the ISI was calculated. Considering the presence of three or more risk factors for IR as a criterion for IR, ISI < 4.6 showed 87.5% sensitivity and 94.5% specificity in diagnosing IR. IGFBP-1 was lower in the group with ISI < 4.6 (p < 0.01). In this group, three patients had higher than expected IGFBP-1, suggesting hepatic IR, while three patients with ISI > 4.6 showed very low IGFBP-1 levels. Conclusion: ISI < 4.6 is a good indicator of early peripheral IR and, associated with IGFBP-1, can identify increased risk of hepatic IR. Low IGFBP-1 levels among non-IR children may indicate increased portal insulin levels.
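The abstract does not give the ISI formula; a common OGTT-derived choice is the composite (Matsuda) index, which is assumed here, together with the study's 4.6 cutoff. The sample glucose (mg/dL) and insulin (uU/mL) values are invented for illustration.

```python
import math

# Composite (Matsuda) ISI from OGTT data -- an assumed formula, since the
# abstract does not specify how its ISI was computed:
#   ISI = 10000 / sqrt(G0 * I0 * Gmean * Imean)

def matsuda_isi(fasting_glucose, fasting_insulin, mean_glucose, mean_insulin):
    return 10000.0 / math.sqrt(
        fasting_glucose * fasting_insulin * mean_glucose * mean_insulin)

def insulin_resistant(isi, cutoff=4.6):
    """Apply the study's ISI < 4.6 criterion for early peripheral IR."""
    return isi < cutoff

isi = matsuda_isi(95, 18, 140, 85)   # illustrative OGTT values
print(round(isi, 2), insulin_resistant(isi))
```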

Relevance: 30.00%

Abstract:

The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of the data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
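The sign-restricted prior can be illustrated with a toy one-coefficient probit: a flat prior truncated to beta > 0, sampled by a simple Metropolis scheme that rejects negative proposals. The data, tuning constants, and the Metropolis choice itself are all illustrative assumptions, not the paper's estimation method.

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(beta, data):
    """Probit log likelihood: P(y=1|x) = Phi(beta * x)."""
    ll = 0.0
    for x, y in data:
        p = min(max(phi(beta * x), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

def sample_posterior(data, n_draws=2000, step=0.5, seed=1):
    """Metropolis sampling under a flat prior truncated to beta > 0."""
    random.seed(seed)
    beta, draws = 1.0, []
    ll = log_likelihood(beta, data)
    for _ in range(n_draws):
        prop = beta + random.gauss(0.0, step)
        if prop > 0:  # truncated prior: negative betas have zero density
            ll_prop = log_likelihood(prop, data)
            if math.log(random.random()) < ll_prop - ll:
                beta, ll = prop, ll_prop
        draws.append(beta)
    return draws

# Synthetic data with some overlap so the likelihood is well behaved.
data = [(-2, 0), (-1, 0), (-0.5, 1), (0.5, 0), (1, 1), (2, 1)]
draws = sample_posterior(data)
print(round(sum(draws) / len(draws), 2))  # posterior mean of beta
```

Every draw respects the sign restriction, which is exactly what the truncated prior buys over the unrestricted uniform prior.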

Relevance: 30.00%

Abstract:

Most cellular solids are random materials, while practically all theoretical structure-property results are for periodic models. To be able to generate theoretical results for random models, the finite element method (FEM) was used to study the elastic properties of solids with a closed-cell cellular structure. We have computed the density (ρ) and microstructure dependence of the Young's modulus (E) and Poisson's ratio (PR) for several different isotropic random models based on Voronoi tessellations and level-cut Gaussian random fields. The effect of partially open cells is also considered. The results, which are best described by a power law E ∝ ρ^n (1 < n < 2), show the influence of randomness and isotropy on the properties of closed-cell cellular materials, and are found to be in good agreement with experimental data. (C) 2001 Acta Materialia Inc. Published by Elsevier Science Ltd. All rights reserved.
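The exponent n in a power law E ∝ ρ^n is recovered from (density, modulus) pairs by fitting a straight line in log-log space, since log E = log C + n log ρ. The data points below are synthetic, generated with n = 1.5 purely to show the fit recovering it.

```python
import math

def fit_power_law(points):
    """Fit E = C * rho**n by least squares on (log rho, log E); return (C, n)."""
    xs = [math.log(r) for r, _ in points]
    ys = [math.log(e) for _, e in points]
    m = len(points)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope

# Synthetic data on an exact power law with C = 2.0, n = 1.5.
data = [(rho, 2.0 * rho ** 1.5) for rho in (0.1, 0.2, 0.4, 0.8)]
C, n = fit_power_law(data)
print(round(C, 3), round(n, 3))  # -> 2.0 1.5
```

Applied to FEM results for the random models, the fitted slope would land between 1 and 2, as the abstract reports.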

Relevance: 30.00%

Abstract:

We use a stochastic patch occupancy model of invertebrates in the Mound Springs ecosystem of South Australia to assess the ability of incidence function models to detect environmental impacts on metapopulations. We assume that the probability of colonisation decreases with increasing isolation and the probability of extinction is constant across spring vents. We run the models to quasi-equilibrium, and then impose an impact by increasing the local extinction probability. We sample the output at various times pre- and post-impact, and examine the probability of detecting a significant change in population parameters. The incidence function model approach turns out to have little power to detect environmental impacts on metapopulations with small numbers of patches. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

A shortened version of the Interpersonal Sensitivity Measure (IPSM) developed to predict depression prone personalities was administered in a self-report questionnaire to a community-based sample of 3269 Australian twin pairs aged 18-28 years, along with Eysenck's EPQ and Cloninger's TPQ. The IPSM included four sub-scales: Separation Anxiety (SEP); Interpersonal Sensitivity (INT); Fragile Inner-Self (FIS); and Timidity (TIM). Univariate analysis revealed that individual differences in the IPSM sub-scale scores were best explained by additive genetic and specific environmental effects. Confirming previous research findings, familial aggregation for the EPQ and TPQ personality dimensions was entirely due to additive genetic effects. In the multivariate case, a model comprising additive genetic and specific environmental effects best explained the covariation between the latent factors for male and female twin pairs alike. The EPQ and TPQ dimensions accounted for moderate to large proportions of the genetic variance (40-76%) in the IPSM sub-scales, while most of the non-shared environment variance was unique to the IPSM sub-scales. (C) 2001 Elsevier Science Ltd. All rights reserved.
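The univariate logic above can be illustrated with Falconer-style variance decomposition from twin correlations: additive genetic variance a² = 2(rMZ − rDZ), shared environment c² = 2·rDZ − rMZ, and non-shared environment e² = 1 − rMZ. When c² comes out near zero, an AE model (additive genes plus specific environment) is preferred, matching the reported best-fitting model. The correlations below are illustrative, not estimates from the study.

```python
# Falconer decomposition of trait variance from MZ and DZ twin correlations.
# This is a back-of-envelope version of the model-fitting the abstract
# describes; rMZ and rDZ here are invented example values.

def falconer(r_mz, r_dz):
    """Return (a2, c2, e2): additive genetic, shared env, non-shared env."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

a2, c2, e2 = falconer(r_mz=0.40, r_dz=0.20)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # -> 0.4 0.0 0.6, an AE pattern
```

An rMZ of exactly twice rDZ gives c² = 0, the additive-genes-plus-specific-environment pattern reported for the IPSM sub-scales.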

Relevance: 30.00%

Abstract:

Ligaments undergo finite strain, displaying hyperelastic behaviour as the initially tangled fibrils straighten out, combined with viscoelastic behaviour (strain rate sensitivity). In the present study the anterior cruciate ligament of the human knee joint is modelled in three dimensions to gain an understanding of the stress distribution over the ligament due to motion imposed on the ends, determined from experimental studies. A three-dimensional, finite-strain material model of ligaments has recently been proposed by Pioletti in Ref. [2]. It is attractive as it separates out elastic stress from that due to the present strain rate and that due to the past history of deformation. However, it treats the ligament as isotropic and incompressible. While the second assumption is reasonable, the first is clearly untrue. In the present study an alternative model of the elastic behaviour due to Bonet and Burton (Ref. [4]) is generalized. Bonet and Burton consider finite strain with constant moduli for the fibres and for the matrix of a transversely isotropic composite. In the present work, the fibre modulus is first made to increase exponentially from zero with an invariant that provides a measure of the stretch in the fibre direction. At 12% strain in the fibre direction, a new reference state is then adopted, after which the material modulus is made constant, as in Bonet and Burton's model. The strain rate dependence can be added, either using Pioletti's isotropic approximation, or by making the effect depend on the strain rate in the fibre direction only. A solid model of a ligament is constructed, based on experimentally measured sections, and the deformation predicted using explicit integration in time. This approach simplifies the coding of the material model, but has a limitation due to the detrimental effect on integration stability of the substantial damping implied by the nonlinear dependence of stress on strain rate.
At present, an artificially high density is being used to provide stability, while the dynamics are removed from the solution using artificial viscosity. The result is a quasi-static solution incorporating the effect of strain rate. Alternative approaches to material modelling and integration that may result in a better model are discussed.
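The fibre-modulus rule described above is a piecewise law: the tangent modulus rises exponentially from zero with fibre-direction strain, then is held constant at the value reached once strain passes 12% (the new reference state). The constants A and K below are hypothetical placeholders, not material values from the study.

```python
import math

# Piecewise fibre tangent modulus: exponential toe region up to the 12%
# switch strain, constant (Bonet-and-Burton-style) beyond it.
# A and K are invented material constants in arbitrary units.
A, K, SWITCH_STRAIN = 5.0, 40.0, 0.12

def fibre_tangent_modulus(strain):
    """Tangent modulus in the fibre direction as a function of fibre strain."""
    s = min(strain, SWITCH_STRAIN)       # clamp: constant beyond 12% strain
    return A * (math.exp(K * s) - 1.0)   # rises exponentially from zero

print(fibre_tangent_modulus(0.0))        # zero at zero strain
print(round(fibre_tangent_modulus(0.06), 1))   # toe region value
print(fibre_tangent_modulus(0.12) == fibre_tangent_modulus(0.20))  # plateau
```

The steep growth of this curve is the "substantial damping" problem in miniature: stress becomes very sensitive to strain (and strain rate), which penalises explicit time integration.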

Relevance: 30.00%

Abstract:

Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data. Their accuracy is therefore questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.

Relevance: 30.00%

Abstract:

A new ball mill scale-up procedure is developed which uses laboratory data to predict the performance of full-scale ball mill circuits. This procedure contains two laboratory tests. These laboratory tests give the data for the determination of the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of the full-scale mill circuit. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw. A worked example shows how the new ball mill scale-up procedure is executed. This worked example uses laboratory data to predict the performance of a full-scale re-grind mill circuit. This circuit consists of a ball mill in closed circuit with hydrocyclones. The full-scale ball mill has a diameter (inside liners) of 1.85 m. The scale-up procedure shows that the full-scale circuit produces a product (hydrocyclone overflow) that has an 80% passing size of 80 μm. The circuit has a recirculating load of 173%. The calculated power draw of the full-scale mill is 92 kW. (C) 2001 Elsevier Science Ltd. All rights reserved.
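For a ball mill in closed circuit with hydrocyclones, the recirculating load in the worked example is the returned (cyclone underflow) mass flow expressed as a percentage of the circuit product (overflow) flow. The flow values below are illustrative, chosen only so the ratio reproduces the reported 173%.

```python
# Recirculating load for a closed grinding circuit:
#   load (%) = 100 * underflow mass flow / product (overflow) mass flow
# The tonnage values are invented; only the 173% ratio comes from the text.

def recirculating_load(underflow_tph, overflow_tph):
    """Return the recirculating load as a percentage of product flow."""
    return 100.0 * underflow_tph / overflow_tph

print(round(recirculating_load(86.5, 50.0)))  # -> 173, as in the example
```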

Relevance: 30.00%

Abstract:

A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within ±6% of the measured value, with a precision of ±11% (one standard deviation); the re-circulating load to within ±33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within ±5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction relating to the mill diameter. This suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Objective: To compare the accuracy and feasibility of harmonic power Doppler and digitally subtracted colour coded grey scale imaging for the assessment of perfusion defect severity by single photon emission computed tomography (SPECT) in an unselected group of patients. Design: Cohort study. Setting: Regional cardiothoracic unit. Patients: 49 patients (mean (SD) age 61 (11) years; 27 women, 22 men) with known or suspected coronary artery disease were studied with simultaneous myocardial contrast echo (MCE) and SPECT after standard dipyridamole stress. Main outcome measures: Regional myocardial perfusion by SPECT, performed with Tc-99m tetrofosmin, scored qualitatively and also quantitated as per cent maximum activity. Results: Normal perfusion was identified by SPECT in 225 of 270 segments (83%). Contrast echo images were interpretable in 92% of patients. The proportions of normal MCE by grey scale, subtracted, and power Doppler techniques were respectively 76%, 74%, and 88% (p < 0.05) at > 80% of maximum counts, compared with 65%, 69%, and 61% at < 60% of maximum counts. For each technique, specificity was lowest in the lateral wall, although power Doppler was the least affected. Grey scale and subtraction techniques were least accurate in the septal wall, but power Doppler showed particular problems in the apex. On a per patient analysis, the sensitivity was 67%, 75%, and 83% for detection of coronary artery disease using grey scale, colour coded, and power Doppler, respectively, with a significant difference between power Doppler and grey scale only (p < 0.05). Specificity was also highest for power Doppler, at 55%, but not significantly different from subtracted colour coded images. Conclusions: Myocardial contrast echo using harmonic power Doppler has greater accuracy than grey scale imaging and digital subtraction. However, power Doppler appears to be less sensitive for mild perfusion defects.