931 results for "Performance prediction"
Abstract:
Virulent infections are expected to impair learning ability, either as a direct consequence of a stressed physiological state or as an adaptive response that minimizes diversion of energy from immune defense. This prediction has been well supported for mammals and bees. Here, we report an opposite result in Drosophila melanogaster. Using an odor-mechanical shock conditioning paradigm, we found that intestinal infection with the bacterial pathogens Pseudomonas entomophila or Erwinia c. carotovora improved flies' learning performance after a 1 h retention interval. Infection with P. entomophila (but not E. c. carotovora) also improved learning performance after 5 min retention. No effect on learning performance was detected for intestinal infections with an avirulent GacA mutant of P. entomophila or for virulent systemic (hemocoel) infection with E. c. carotovora. Assays of unconditioned responses to odorants and shock do not support a major role for changes in general responsiveness to stimuli in explaining the changes in learning performance, although differences in their specific salience for learning cannot be excluded. Our results demonstrate that the effects of pathogens on learning performance in insects are less predictable than suggested by previous studies, and support the notion that immune stress can sometimes boost cognitive abilities.
Abstract:
Validation is the main bottleneck preventing the adoption of many medical image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed based on some objective metrics. In this work, a different approach based on Petri Nets (PN) is proposed. The basic idea consists in predicting the accuracy that will result from a given processing pipeline based on the characterization of the sources of inaccuracy of the system. Here we propose a proof of concept in the scenario of a diffusion imaging analysis pipeline. A PN is built after the detection of the possible sources of inaccuracy. By integrating the first qualitative insights based on the PN with quantitative measures, it is possible to optimize the PN itself and to predict the inaccuracy of the system in a different setting. Results show that the proposed model provides good prediction performance and suggests the optimal processing approach.
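As an illustration of the mechanics involved, the sketch below implements a toy Petri net in Python in which tokens stand for detected sources of inaccuracy propagating through a two-step pipeline. The place and transition names (noisy_dwi, eddy_corr, ...) are hypothetical and not taken from the paper.

```python
# Minimal Petri-net sketch (illustrative only; the paper's actual PN for the
# diffusion-imaging pipeline is not specified here).
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Tokens mark detected sources of inaccuracy; firing the processing steps
# propagates them through the pipeline.
net = PetriNet({"noisy_dwi": 1, "eddy_artifact": 1})
net.add_transition("denoise",   {"noisy_dwi": 1},     {"clean_dwi": 1})
net.add_transition("eddy_corr", {"eddy_artifact": 1}, {"residual_error": 1})
for t in ("denoise", "eddy_corr"):
    if net.enabled(t):
        net.fire(t)
print(net.marking)
```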
Abstract:
BACKGROUND: Prognostic models have been developed to predict survival of patients with newly diagnosed glioblastoma (GBM). To improve predictions, models should be updated with information available at recurrence. We performed a pooled analysis of European Organization for Research and Treatment of Cancer (EORTC) trials on recurrent glioblastoma to validate existing clinical prognostic factors, identify new markers, and derive new predictions for overall survival (OS) and progression-free survival (PFS). METHODS: Data from 300 patients with recurrent GBM recruited in eight phase I or II trials conducted by the EORTC Brain Tumour Group were used to evaluate patient age, sex, World Health Organisation (WHO) performance status (PS), presence of neurological deficits, disease history, use of steroids or anti-epileptics, and disease characteristics as predictors of PFS and OS. Prognostic calculators were developed in patients initially treated by chemoradiation with temozolomide. RESULTS: Poor PS and more than one target lesion had a significant negative prognostic impact for both PFS and OS. Patients with large tumours, measured by the maximum diameter of the largest lesion (≥42 mm), and treated with steroids at baseline had shorter OS. Tumours with predominantly frontal location had better survival. Age and sex did not show independent prognostic value for PFS or OS. CONCLUSIONS: This analysis confirms performance status, but not age, as a major prognostic factor for PFS and OS in recurrent GBM. Patients with multiple and large lesions have an increased risk of death. From these data, prognostic calculators with confidence intervals for both medians and fixed-time probabilities of survival were derived.
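A hedged sketch of a prognostic calculator of the kind derived in the study, using the lifelines library on invented toy data. The column names (poor_ps, multi_lesion) and all values are hypothetical stand-ins for the EORTC covariates; the published calculators' exact coding is not reproduced.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort; columns and values are hypothetical stand-ins
df = pd.DataFrame({
    "os_months":    [3, 9, 6, 14, 10, 7, 12, 5],
    "death":        [1, 1, 1, 0, 1, 1, 0, 1],
    "poor_ps":      [1, 1, 1, 1, 0, 0, 0, 0],   # WHO PS >= 2
    "multi_lesion": [1, 0, 1, 0, 1, 0, 0, 1],   # >1 target lesion
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()

# Relative hazard for a new patient versus the baseline
new_patient = pd.DataFrame({"poor_ps": [1], "multi_lesion": [1]})
print(cph.predict_partial_hazard(new_patient))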
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample size (n < 30) and this should encourage highly conservative use of predictions based on small sample size and restrict their use to exploratory modelling.
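The evaluation loop described above can be sketched as follows, using one scikit-learn model (gradient boosting) as a stand-in for the twelve algorithms; the data are simulated and the environmental predictors are placeholders, so only the subsample-and-score logic is illustrated.

```python
# Fit a model on presence records subsampled to n = 100, 30, 10 and score
# it with AUC on independent presence-absence data (simulated here).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train_full = rng.normal(size=(500, 5))      # environmental predictors
y_train_full = (X_train_full[:, 0] + rng.normal(size=500) > 0).astype(int)
X_eval = rng.normal(size=(300, 5))            # independent presence-absence
y_eval = (X_eval[:, 0] + rng.normal(size=300) > 0).astype(int)

for n in (100, 30, 10):
    idx = rng.choice(len(X_train_full), size=n, replace=False)
    while len(set(y_train_full[idx])) < 2:    # ensure both classes present
        idx = rng.choice(len(X_train_full), size=n, replace=False)
    model = GradientBoostingClassifier().fit(X_train_full[idx], y_train_full[idx])
    auc = roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])
    print(f"n = {n:3d}  AUC = {auc:.3f}")
```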
Abstract:
The major objective of this research project is to utilize thermal analysis techniques in conjunction with x-ray analysis methods to identify and explain chemical reactions that promote aggregate related deterioration in Portland cement concrete. The first year of this project has been spent obtaining and analyzing limestone and dolomite samples that exhibit a wide range of field service performance. Most of the samples chosen for the study also had laboratory durability test information (ASTM C 666, method B) that was readily available. Preliminary test results indicate that a strong relationship exists between the average crystallite size of the limestone (calcite) specimens and their apparent decomposition temperatures as measured by thermogravimetric analysis. Also, premature weight loss in the thermogravimetric analysis tests appeared to be related to the apparent decomposition temperature of the various calcite test specimens.
Abstract:
We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates - one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, between the three approaches of very different resource requirements. The two methods that included external information performed best, but only marginally so despite substantial differences in complexity.
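The second and third approaches can be sketched on synthetic data as follows: a weighted risk score built purely from external per-SNP log-odds-ratios, and a Bayesian logistic regression fitted by MAP with normal priors centred on those external estimates. All effect sizes and data here are simulated, not the REDUCE values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n, p = 1500, 33
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)    # SNP dosages 0/1/2
beta_true = rng.normal(0.0, 0.1, size=p)
y = rng.binomial(1, expit(G @ beta_true - 0.5))

beta_meta = beta_true + rng.normal(0.0, 0.03, size=p)  # external estimates
prior_sd = 0.05

# (a) Weighted risk score from external estimates only
risk_score = G @ beta_meta

# (b) MAP estimate: maximise log-likelihood + log normal prior at beta_meta
def neg_log_post(theta):
    a, beta = theta[0], theta[1:]
    eta = a + G @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # logistic likelihood
    logprior = -0.5 * np.sum((beta - beta_meta) ** 2) / prior_sd**2
    return -(loglik + logprior)

fit = minimize(neg_log_post, np.zeros(p + 1), method="L-BFGS-B")
beta_map = fit.x[1:]
print("corr(MAP, meta):", np.corrcoef(beta_map, beta_meta)[0, 1].round(3))
```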
Abstract:
The usefulness of species distribution models (SDMs) in predicting impacts of climate change on biodiversity is difficult to assess because changes in species ranges may take decades or centuries to occur. One alternative way to evaluate the predictive ability of SDMs across time is to compare their predictions with data on past species distributions. We use data on plant distributions, fossil pollen and current and mid-Holocene climate to test the ability of SDMs to predict past climate-change impacts. We find that species showing little change in the estimated position of their realized niche, with resulting good model performance, tend to be dominant competitors for light. Different mechanisms appear to be responsible for among-species differences in model performance. Confidence in predictions of the impacts of climate change could be improved by selecting species with characteristics that suggest little change is expected in the relationships between species occurrence and climate patterns.
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models based on corrosion theory will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be determined with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined, and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, the determination of the corrosion chemistry and the lifetime estimation is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database of fuel and bed-material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system supports the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used for the minimization of corrosion risks in the design of fluidized bed boilers.
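A minimal sketch of the data-driven half of this approach: a small neural-network regressor mapping fuel and bed-material analyses to a measured corrosion rate. The feature set ([Cl %, S %, alkali index, bed temperature]) is an assumption, and the fuzzy-logic component and the real corrosion database are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Columns: [Cl %, S %, alkali index, bed temperature degC] (assumed inputs)
X = rng.uniform([0.0, 0.0, 0.1, 750], [1.0, 2.0, 3.0, 950], size=(200, 4))
corrosion_rate = 5 * X[:, 0] + 0.01 * (X[:, 3] - 750) + rng.normal(0, 0.2, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X, corrosion_rate)
print(model.predict(X[:3]))   # predicted corrosion rate, same units as target
```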
Abstract:
BACKGROUND AND AIMS: Parental history (PH) and genetic risk scores (GRSs) are separately associated with coronary heart disease (CHD), but evidence regarding their combined effects is lacking. We aimed to evaluate the joint associations and predictive ability of PH and GRSs for incident CHD. METHODS: Data for 4283 Caucasians were obtained from the population-based CoLaus Study, over a median follow-up of 5.6 years. CHD was defined as incident myocardial infarction, angina, percutaneous coronary revascularization or bypass grafting. Single nucleotide polymorphisms for CHD identified by genome-wide association studies were used to construct unweighted and weighted versions of three GRSs, comprising 38, 53 and 153 SNPs, respectively. RESULTS: PH was associated with higher values of all weighted GRSs. After adjustment for age, sex, smoking, diabetes, systolic blood pressure, and low- and high-density lipoprotein cholesterol, PH was significantly associated with CHD [HR 2.61, 95% CI (1.47-4.66)] and further adjustment for GRSs did not change this estimate. Similarly, one standard deviation change of the weighted 153-SNP GRS was significantly associated with CHD [HR 1.50, 95% CI (1.26-1.80)] and remained so after further adjustment for PH. The weighted 153-SNP GRS, but not PH, modestly improved discrimination [(C-index improvement, 0.016), p = 0.048] and reclassification [(NRI improvement, 8.6%), p = 0.027] beyond cardiovascular risk factors. After including both the GRS and PH, model performance improved further [(C-index improvement, 0.022), p = 0.006]. CONCLUSION: After adjustment for cardiovascular risk factors, PH and a weighted, polygenic GRS were jointly associated with CHD and provided additive information for the prediction of coronary events.
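The GRS construction described above reduces to a weighted sum of allele dosages; a minimal sketch on simulated genotypes (standing in for the 153-SNP panel) follows. The weights here are invented placeholders for the published per-SNP effects.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_snp = 4283, 153
dosage = rng.binomial(2, 0.25, size=(n_subj, n_snp)).astype(float)
log_hr = rng.normal(0.05, 0.02, size=n_snp)   # external per-SNP weights

grs_unweighted = dosage.sum(axis=1)           # simple risk-allele count
grs_weighted = dosage @ log_hr                # weighted by published effects

# Standardise so that a hazard model coefficient on this column is the
# HR per 1 SD of the score
grs_std = (grs_weighted - grs_weighted.mean()) / grs_weighted.std()
print(grs_std[:5].round(2))
```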
Abstract:
Trabecular bone score (TBS) is a gray-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a bone mineral density (BMD)-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in risk variable in direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% confidence interval [CI] 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor for fracture (GR = 1.32, 95% CI 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI 1.65-1.87 versus 1.70, 95% CI 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
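For reference, the gradient of risk reported above has a simple closed form under the log-linear (Poisson/Cox-type) hazard models used; a sketch of the relation:

```latex
% For a hazard model with linear predictor \beta x, the hazard ratio per
% 1 SD change of the risk variable x, taken in the direction of increased
% risk, is
\[
  \mathrm{GR} = \exp\!\left(|\beta|\,\sigma_x\right),
\]
% e.g. the reported GR of 1.44 for TBS means each 1 SD shift of TBS toward
% higher risk multiplies the expected fracture hazard by 1.44.
```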
Abstract:
The purpose of this study is to investigate the performance persistence of international mutual funds, employing a data sample of 2,168 European mutual funds investing in the Asia-Pacific region (Japan excluded). In addition, a number of performance measures are tested and compared; in particular, this study investigates whether an iterative Bayesian procedure can provide more accurate predictions of future performance. Finally, this study examines whether the cross-section of mutual fund returns can be explained by simple accounting variables and market risk. To exclude the effect of the Asian currency crisis of 1997, the study period covers the years 1999 to 2007. The overall results showed significant performance persistence for repeat winners when persistence was tested with contingency tables. The annualized alpha spreads between the top and bottom portfolios exceeded ten percent at their highest. Nevertheless, the results do not confirm the improved prediction accuracy of the Bayesian alphas.
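The contingency-table persistence test mentioned above can be sketched with Malkiel's repeat-winner Z-statistic on simulated fund returns; the Bayesian alpha machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_funds = 2168
skill = rng.normal(0, 0.02, n_funds)        # persistent component
r1 = skill + rng.normal(0, 0.05, n_funds)   # selection period
r2 = skill + rng.normal(0, 0.05, n_funds)   # holding period

w1, w2 = r1 > np.median(r1), r2 > np.median(r2)   # winners in each period
ww = int(np.sum(w1 & w2))    # repeat winners
wl = int(np.sum(w1 & ~w2))   # winners that became losers
n = ww + wl
z = (ww - 0.5 * n) / np.sqrt(0.25 * n)   # ~N(0, 1) under no persistence
print(f"WW={ww}, WL={wl}, Z={z:.2f}")
```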
Abstract:
The purpose of this study is to examine the ability of macroeconomic indicators and technical analysis to signal market crashes. The macroeconomic indicators examined were the yield spread, the Purchasing Managers Index and the Consumer Confidence Index; the technical analysis indicators were the moving average, Moving Average Convergence-Divergence and the Relative Strength Index. We studied whether commonly used macroeconomic indicators can also serve as a warning system for stock market crashes. The hypothesis is that signals of recession can be used as signals of a stock market crash and thus as the basis for a hedging strategy. The data are collected from the U.S. markets for the years 1983-2010. The empirical results show that the macroeconomic indicators were able to explain future GDP development in the U.S. over the research period and were statistically significant. A hedging strategy that combined the signals of the yield spread and the Consumer Confidence Index gave the most useful results as the basis of a hedging strategy in the selected time period: it outperformed the buy-and-hold strategy as well as all of the technical-indicator-based hedging strategies.
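A minimal, assumption-laden sketch of the combined hedging rule described above: move to cash when the yield spread is inverted and consumer confidence is falling, otherwise hold equities. All series here are simulated, not the 1983-2010 U.S. data, and the exact signal definitions are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
idx = pd.date_range("1983-01-01", periods=336, freq="MS")   # monthly, 28 yrs
spread = pd.Series(np.cumsum(rng.normal(0, 0.15, 336)) + 1.0, index=idx)
cci = pd.Series(np.cumsum(rng.normal(0, 1.0, 336)) + 100.0, index=idx)
equity_ret = pd.Series(rng.normal(0.006, 0.04, 336), index=idx)

warning = (spread < 0) & (cci.diff(3) < 0)          # both indicators flash
position = (~warning).shift(1, fill_value=True).astype(bool)  # act next month
strategy_ret = equity_ret.where(position, 0.0)      # flat (in cash) when hedged

print("buy-and-hold:", round(float((1 + equity_ret).prod()), 2),
      " hedged:", round(float((1 + strategy_ret).prod()), 2))
```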
Abstract:
The thesis examines the performance persistence of hedge funds using complementary methodologies (namely cross-sectional regressions, quantile portfolio analysis and the Spearman rank correlation test). In addition, six performance ranking metrics and six different combinations of selection and holding periods are compared. The data are gathered from the HFI and Tremont databases, covering over 14,000 hedge funds, and the time horizon spans January 1996 to December 2007. The results suggest that performance persistence definitely exists among hedge funds, and that its strength and existence vary among fund styles. The persistence depends on the metric and on the combination of selection and prediction periods applied. According to the results, the combination of a 36-month selection and holding period outperforms the other five period combinations in capturing performance persistence within the sample. Furthermore, model-free performance metrics capture persistence more sensitively than model-specific metrics. The study is the first to use MVR as a performance ranking metric, and surprisingly MVR is more sensitive in detecting persistence than the other performance metrics employed.
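The Spearman rank-correlation test used above can be sketched as follows: rank funds on a metric in the selection period and correlate those ranks with holding-period ranks. Returns are simulated; no particular ranking metric from the thesis is implemented.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_funds = 500
skill = rng.normal(0, 1, n_funds)                      # persistent component
metric_selection = skill + rng.normal(0, 2, n_funds)   # selection period
metric_holding = skill + rng.normal(0, 2, n_funds)     # holding period

rho, pval = spearmanr(metric_selection, metric_holding)
print(f"Spearman rho = {rho:.3f}, p = {pval:.1e}")     # rho > 0 => persistence
```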
Abstract:
Thesis: A liquid-cooled, direct-drive, permanent-magnet, synchronous generator with helical, double-layer, non-overlapping windings formed from a copper conductor with a coaxial internal coolant conduit offers an excellent combination of attributes to reliably provide economic wind power for the coming generation of wind turbines with power ratings between 5 and 20 MW. A generator based on the liquid-cooled architecture proposed here will be reliable and cost effective. Its smaller size and mass will reduce build, transport, and installation costs. Summary: Converting wind energy into electricity and transmitting it to an electrical power grid to supply consumers is a relatively new and rapidly developing method of electricity generation. In the most recent decade, the increase in wind energy's share of overall energy production has been remarkable. Thousands of land-based and offshore wind turbines have been commissioned around the globe, and thousands more are being planned. The technologies have evolved rapidly and are continuing to evolve, and wind turbine sizes and power ratings are continually increasing. Many of the newer wind turbine designs feature drivetrains based on Direct-Drive, Permanent-Magnet, Synchronous Generators (DD-PMSGs). Because they are low-speed, high-torque machines, air-cooled DD-PMSGs must become very large in diameter to generate higher levels of power. The largest direct-drive wind turbine generator in operation today, rated just below 8 MW, is 12 m in diameter and approximately 220 tonnes. To generate higher powers, traditional DD-PMSGs would need to become extraordinarily large. A 15 MW air-cooled direct-drive generator would be of colossal size and tremendous mass and no longer economically viable. One alternative to increasing diameter is instead to increase torque density. In a permanent magnet machine, this is best done by increasing the linear current density of the stator windings. However, greater linear current density results in more Joule heating, and the additional heat cannot be removed practically using a traditional air-cooling approach. Direct liquid cooling is more effective, and when applied directly to the stator windings, higher linear current densities can be sustained, leading to substantial increases in torque density. The higher torque density, in turn, makes possible significant reductions in DD-PMSG size. Over the past five years, a multidisciplinary team of researchers has applied a holistic approach to explore the application of liquid cooling to permanent-magnet wind turbine generator design. The approach has considered wind energy markets and the economics of wind power, system reliability, electromagnetic behaviors and design, thermal design and performance, mechanical architecture and behaviors, and the performance modeling of installed wind turbines. This dissertation is based on seven publications that chronicle the work. The primary outcomes are the proposal of a novel generator architecture, a multidisciplinary set of analyses to predict the behaviors, and experimentation to demonstrate some of the key principles and validate the analyses. The proposed generator concept is a direct-drive, surface-magnet, synchronous generator with fractional-slot, duplex-helical, double-layer, non-overlapping windings formed from a copper conductor with a coaxial internal coolant conduit to accommodate liquid coolant flow. The novel liquid-cooling architecture is referred to as LC DD-PMSG. The torque-density argument is made concrete in the sizing relation shown after the publication summary below.
The first of the seven publications summarized in this dissertation discusses the technological and economic benefits and limitations of DD-PMSGs as applied to wind energy. The second publication addresses the long-term reliability of the proposed LC DD-PMSG design. Publication 3 examines the machine's electromagnetic design, and Publication 4 introduces an optimization tool developed to quickly define basic machine parameters. The static and harmonic behaviors of the stator and rotor wheel structures are the subject of Publication 5. And finally, Publications 6 and 7 examine steady-state and transient thermal behaviors. There have been a number of ancillary concrete outcomes associated with the work, including the following.
· Intellectual Property (IP) for direct liquid cooling of stator windings via an embedded coaxial coolant conduit, IP for a lightweight wheel structure for low-speed, high-torque electrical machinery, and IP for numerous other details of the LC DD-PMSG design
· Analytical demonstrations of the equivalent reliability of the LC DD-PMSG; validated electromagnetic, thermal, structural, and dynamic prediction models; and an analytical demonstration of the superior partial-load efficiency and annual energy output of an LC DD-PMSG design
· A set of LC DD-PMSG design guidelines and an analytical tool to establish optimal geometries quickly and early on
· Proposed 8 MW LC DD-PMSG concepts for both inner and outer rotor configurations
Furthermore, three technologies introduced could be relevant across a broader spectrum of applications. 1) The cost optimization methodology developed as part of this work could be further improved to produce a simple tool to establish base geometries for various electromagnetic machine types. 2) The layered sheet-steel element construction technology used for the LC DD-PMSG stator and rotor wheel structures has potential for a wide range of applications. And finally, 3) the direct liquid-cooling technology could be beneficial in higher-speed electromotive applications such as vehicular electric drives.
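The torque-density argument in the summary above rests on the classical air-gap shear-stress sizing relation; a sketch of that relation, under the usual idealizing assumptions:

```latex
% Sizing relation (sigma_t: tangential air-gap stress, D: air-gap diameter,
% L: active length, A: linear current density, \hat{B}: peak air-gap flux
% density). Torque = stress x rotor surface area x radius:
\[
  T = \frac{\pi}{2}\,\sigma_t\,D^{2}L,
  \qquad
  \sigma_t \propto A\,\hat{B},
\]
% so at fixed diameter and flux density, torque grows with the linear
% current density A -- which is what direct liquid cooling of the windings
% makes sustainable.
```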
Abstract:
The present study compares the performance of stochastic and fuzzy models for the analysis of the relationship between clinical signs and diagnosis. Data obtained for 153 children concerning diagnosis (pneumonia, other non-pneumonia diseases, absence of disease) and seven clinical signs were divided into two samples, one for analysis and the other for validation. The former was used to derive relations by multi-discriminant analysis (MDA) and by fuzzy max-min compositions (fuzzy), and the latter was used to assess the predictions drawn from each type of relation. MDA and fuzzy were closely similar in terms of prediction, correctly allocating 75.7 to 78.3% of patients in the validation sample, and displayed only a single instance of disagreement: a patient with a low level of toxemia was misclassified as not diseased by MDA and correctly classified as somehow ill by fuzzy. Concerning the relations, each method provided different information, revealing different aspects of the relations between clinical signs and diagnoses. Both methods agreed in pointing to X-ray, dyspnea, and auscultation as the signs best related to pneumonia, but only fuzzy was able to detect relations of heart rate, body temperature, toxemia and respiratory rate with pneumonia. Moreover, only fuzzy was able to detect a relationship between heart rate and absence of disease, which allowed the detection of six malnourished children whose diagnoses as healthy are, indeed, disputable. The conclusion is that even though fuzzy set theory might not improve prediction, it certainly enhances clinical knowledge, since it detects relationships not visible to stochastic models.
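The fuzzy max-min composition at the heart of the method can be shown concretely: given a patient-to-sign membership matrix R and a sign-to-diagnosis relation S, the composed patient-to-diagnosis relation is (R∘S)[i, k] = max_j min(R[i, j], S[j, k]). The memberships below are illustrative, not the study's fitted relations.

```python
import numpy as np

def max_min_compose(R, S):
    # result[i, k] = max over j of min(R[i, j], S[j, k])
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

# 2 patients x 3 signs (X-ray, dyspnea, auscultation), memberships in [0, 1]
R = np.array([[0.9, 0.7, 0.8],
              [0.1, 0.2, 0.0]])
# 3 signs x 2 diagnoses (pneumonia, no disease)
S = np.array([[0.95, 0.05],
              [0.60, 0.30],
              [0.80, 0.10]])

print(max_min_compose(R, S))   # patient 1 strongly indicates pneumonia
```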