908 results for: Prediction of random effects


Relevance:

100.00%

Publisher:

Abstract:

We consider prediction techniques based on accelerated failure time models with random effects for correlated survival data. Besides the Bayesian approach through the empirical Bayes estimator, we also discuss the use of a classical predictor, the Empirical Best Linear Unbiased Predictor (EBLUP). To illustrate the use of these predictors, we consider an application to a real data set from the oil industry. More specifically, the data set involves the mean time between failures of petroleum-well equipment in the Bacia Potiguar. The goal of this study is to predict the risk/probability of failure in order to support a preventive maintenance program. The results show that both methods are suitable for predicting future failures, supporting sound decisions on the deployment and economical use of resources for preventive maintenance.
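
For readers who want to experiment, the sketch below shows EBLUP-style prediction of random effects in an ordinary linear mixed model with statsmodels; the abstract's setting is an accelerated failure time survival model, so this is only the linear analogue, and the data file and column names (well_failures.csv, log_mtbf, well_id, and so on) are illustrative assumptions.

```python
# Minimal sketch: EBLUP-type prediction of group random effects in a
# linear mixed model (the linear analogue of the abstract's AFT setting).
# The CSV file and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("well_failures.csv")  # hypothetical failure records

# Random intercept per well; log mean time between failures as response.
model = smf.mixedlm("log_mtbf ~ age + pressure", df, groups=df["well_id"])
fit = model.fit()

# fit.random_effects holds the predicted (EBLUP / empirical Bayes)
# random intercept for each well.
for well, effect in fit.random_effects.items():
    print(well, float(effect.iloc[0]))
```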

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62E16, 65C05, 65C20.

Relevance:

100.00%

Publisher:

Abstract:

Background: A variety of methods for prediction of peptide binding to major histocompatibility complex (MHC) have been proposed. These methods are based on binding motifs, binding matrices, hidden Markov models (HMMs), or artificial neural networks (ANNs). There has been little prior work on the comparative analysis of these methods. Materials and Methods: We compared the performance of six methods, including binding matrices and motifs, ANNs, and HMMs, applied to the prediction of peptide binding to two human MHC class I molecules. Results: The selection of the optimal prediction method depends on the amount of available data (the number of peptides of known binding affinity to the MHC molecule of interest), the biases in the data set, and the intended purpose of the prediction (screening of a single protein versus mass screening). When little or no peptide data are available, binding motifs are the most useful alternative to random guessing or to using a complete overlapping set of peptides for selection of candidate binders. As the number of known peptide binders increases, binding matrices and HMMs become more useful predictors. ANNs and HMMs are the predictive methods of choice for MHC alleles with more than 100 known binding peptides. Conclusion: The ability of bioinformatic methods to reliably predict MHC binding peptides, and thereby potential T-cell epitopes, has major implications for clinical immunology, particularly in the area of vaccine design.
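
As a concrete illustration of the simplest method in this comparison, the sketch below scores a 9-mer peptide with a position-specific binding matrix (summed per-position log-odds scores); the matrix entries are random placeholders, not a published MHC matrix.

```python
# Minimal sketch: scoring a 9-mer peptide with a position-specific
# binding matrix (log-odds scores per residue and position).
# The matrix entries below are made-up placeholders, not a published
# MHC binding matrix.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(0)
matrix = rng.normal(size=(9, len(AMINO_ACIDS)))  # placeholder matrix

def score_peptide(peptide: str, matrix: np.ndarray) -> float:
    """Sum the per-position log-odds scores of a 9-mer."""
    return sum(matrix[i, AMINO_ACIDS.index(aa)] for i, aa in enumerate(peptide))

print(score_peptide("SIINFEKLM", matrix))  # higher score = better binder
```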

Relevance:

100.00%

Publisher:

Abstract:

A finite-element method is used to study the elastic properties of random three-dimensional porous materials with highly interconnected pores. We show that Young's modulus, E, is practically independent of Poisson's ratio of the solid phase, nu(s), over the entire solid fraction range, and Poisson's ratio, nu, becomes independent of nu(s) as the percolation threshold is approached. We represent this behaviour of nu in a flow diagram. This interesting but approximate behaviour is very similar to the exactly known behaviour in two-dimensional porous materials. In addition, the behaviour of nu versus nu(s) appears to imply that information in the dilute porosity limit can affect behaviour in the percolation threshold limit. We summarize the finite-element results in terms of simple structure-property relations, instead of tables of data, to make it easier to apply the computational results. Without using accurate numerical computations, one is limited to various effective medium theories and rigorous approximations like bounds and expansions. The accuracy of these equations is unknown for general porous media. To verify a particular theory it is important to check that it predicts both isotropic elastic moduli, i.e. prediction of Young's modulus alone is necessary but not sufficient. The subtleties of Poisson's ratio behaviour actually provide a very effective method for showing differences between the theories and demonstrating their ranges of validity. We find that for moderate- to high-porosity materials, none of the analytical theories is accurate and, at present, numerical techniques must be relied upon.
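
For orientation, structure-property relations of this kind are often written as a power law in the solid fraction near the percolation threshold; the form below is a generic illustrative example of such a relation, not necessarily the one fitted in the paper.

```latex
% Generic percolation-type structure-property relation (illustrative
% form only; the fitted relations in the paper may differ).
%   E(\phi): Young's modulus at solid fraction \phi
%   E_s:     modulus of the fully dense solid
%   \phi_c:  percolation threshold;  m: fitted exponent
\[
  E(\phi) = E_s \left( \frac{\phi - \phi_c}{1 - \phi_c} \right)^{m},
  \qquad \phi_c < \phi \le 1 .
\]
```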

Relevance:

100.00%

Publisher:

Abstract:

Hospitals nowadays collect vast amounts of data in patient records. These data hold valuable knowledge that can be used to improve hospital decision making, and data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, related to inpatient hospitalization were collected from a Portuguese hospital. The goal was to predict generic hospital Length of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best model was obtained by the Random Forest method, which achieved a high coefficient of determination (0.81). This model was then opened up using a sensitivity analysis procedure that revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized, and the associated medical specialty. The extracted knowledge confirmed that the obtained predictive model is credible and potentially valuable for supporting the decisions of hospital managers.
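
A minimal sketch of the winning modeling step is given below, using scikit-learn's RandomForestRegressor, with permutation importance as a simple stand-in for the sensitivity analysis mentioned in the abstract; the file and column names are hypothetical.

```python
# Minimal sketch: Random Forest regression of length of stay, with
# permutation importance as a simple stand-in for the sensitivity
# analysis described above. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("inpatient.csv")            # hypothetical data set
X, y = df.drop(columns="length_of_stay"), df["length_of_stay"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("R^2:", rf.score(X_te, y_te))          # coefficient of determination

# Rank inputs by how much shuffling each one degrades predictions.
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1])[:3]:
    print(name, round(score, 3))
```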

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Risks of significant infant drug exposure through breast milk are poorly defined for many drugs, and large-scale population data are lacking. We used population pharmacokinetic (PK) modeling to predict fluoxetine exposure levels of infants via mother's milk in a simulated population of 1000 mother-infant pairs. METHODS: Using our original data on fluoxetine PK in 25 breastfeeding women, a population PK model was developed with NONMEM and parameters, including milk concentrations, were estimated. An exponential distribution model was used to account for individual variation. Simulation with random and distribution-constrained assignment of doses, dosing times, feeding intervals and milk volumes was conducted to generate 1000 mother-infant pairs with characteristics such as the steady-state serum concentrations (Css) and the infant dose relative to the maternal weight-adjusted dose (relative infant dose: RID). Full bioavailability and a conservative point estimate of 1-month-old infant CYP2D6 activity at 20% of the adult value (adjusted by weight), according to a recent study, were assumed for infant Css calculations. RESULTS: A linear 2-compartment model was selected as the best model. Derived parameters, including milk-to-plasma ratios (mean: 0.66; SD: 0.34; range: 0-1.1), were consistent with the values reported in the literature. The estimated RID was below 10% in >95% of infants. The model-predicted median infant-mother Css ratio was 0.096 (range 0.035-0.25); the literature-reported mean was 0.07 (range 0-0.59). Moreover, the predicted incidence of an infant-mother Css ratio >0.2 was less than 1%. CONCLUSION: Our in silico model prediction is consistent with clinical observations, suggesting that substantial systemic fluoxetine exposure in infants through human milk is rare, but further analysis should include active metabolites. Our approach may be valid for other drugs. [Supported by CIHR and the Swiss National Science Foundation (SNSF).]
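
The relative infant dose is conventionally computed as the infant dose via milk (mg/kg/day) divided by the maternal weight-adjusted dose (mg/kg/day). The Monte Carlo sketch below illustrates that calculation for a simulated population; the milk-to-plasma distribution (mean 0.66, SD 0.34) is taken from the abstract, while the dosing regimen, milk intake and maternal plasma-level distribution are assumptions, so this is not the authors' NONMEM model.

```python
# Simplified Monte Carlo sketch of the relative infant dose (RID):
#   RID = infant dose (mg/kg/day) / maternal dose (mg/kg/day).
# The M/P mean and SD come from the abstract; every other number
# (maternal dose/weight, milk intake, plasma levels) is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 1000                                   # simulated mother-infant pairs

dose_mg, maternal_wt = 20.0, 70.0          # assumed fluoxetine regimen
maternal_dose = dose_mg / maternal_wt      # mg/kg/day
milk_intake = 0.15                         # L/kg/day, standard assumption
cp = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n)  # assumed plasma, mg/L

mp_ratio = np.clip(rng.normal(0.66, 0.34, size=n), 0.0, None)  # M/P ratio
infant_dose = cp * mp_ratio * milk_intake  # mg/kg/day received via milk

rid = 100 * infant_dose / maternal_dose
print(f"RID < 10% in {np.mean(rid < 10):.0%} of simulated infants")
```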

Relevance:

100.00%

Publisher:

Abstract:

We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes.
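
A standard way to combine simple predictors under squared loss is the exponentially weighted average forecaster, sketched below; this is a generic construction in the spirit of the abstract, not necessarily the authors' exact procedure.

```python
# Minimal sketch: exponentially weighted average of several simple
# predictors under squared loss (generic construction, not necessarily
# the authors' procedure).
import numpy as np

def combine(y, experts, eta=0.5):
    """y: (T,) target sequence; experts: (K, T) expert predictions."""
    K, T = experts.shape
    w = np.ones(K)
    preds = np.empty(T)
    for t in range(T):
        preds[t] = w @ experts[:, t] / w.sum()          # weighted average
        w *= np.exp(-eta * (experts[:, t] - y[t]) ** 2)  # penalize sq. error
    return preds

rng = np.random.default_rng(2)
y = rng.normal(size=200)
# Three toy experts: zero, last value, damped last value
# (the wrap-around of np.roll at t=0 is ignored in this toy example).
experts = np.vstack([np.zeros(200), np.roll(y, 1), 0.5 * np.roll(y, 1)])
print("avg squared error:", np.mean((combine(y, experts) - y) ** 2))
```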

Relevance:

100.00%

Publisher:

Abstract:

We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments in the theory of prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
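
A minimal sketch of one such randomized scheme follows: each round, an expert is sampled with probability proportional to its exponential weight and its prediction is used. Again, this is a generic construction in the abstract's spirit, not necessarily the authors' algorithm.

```python
# Minimal sketch: randomized prediction of a binary sequence by
# sampling one expert per round in proportion to its exponential weight.
import numpy as np

def predict_bits(bits, experts, eta=1.0, seed=0):
    """bits: (T,) 0/1 sequence; experts: (K, T) 0/1 expert predictions."""
    rng = np.random.default_rng(seed)
    K, T = experts.shape
    w = np.ones(K)
    mistakes = 0
    for t in range(T):
        k = rng.choice(K, p=w / w.sum())                 # sample an expert
        mistakes += experts[k, t] != bits[t]
        w *= np.exp(-eta * (experts[:, t] != bits[t]))   # penalize errors
    return mistakes / T

rng = np.random.default_rng(3)
bits = (rng.random(500) < 0.7).astype(int)
experts = np.vstack([np.zeros(500, int), np.ones(500, int)])  # toy experts
print("average mistake rate:", predict_bits(bits, experts))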

Relevance:

100.00%

Publisher:

Abstract:

Overall introduction: Longitudinal studies have been designed to investigate prospectively, from their beginning, the pathway leading from health to frailty and to disability. Knowledge about determinants of healthy ageing and health behaviour (resources), as well as risks of functional decline, is required to propose appropriate preventive interventions. Functional status in older people is important with regard to clinical outcome in general, healthcare need and mortality. Part I: Results and interventions from LUCAS (Longitudinal Urban Cohort Ageing Study). Authors: J. Anders, U. Dapp, L. Neumann, F. Pröfener, C. Minder, S. Golgert, A. Daubmann, K. Wegscheider, W. von Renteln-Kruse. Methods: The LUCAS core project is a longitudinal cohort of urban community-dwelling people aged 60 years and older, recruited in 2000/2001. Further LUCAS projects are cross-sectional comparative and interventional studies (RCTs). Results: The emphasis will be on geriatric medical care in a population-based approach, also discussing different forms of access (Dapp et al. BMC Geriatrics 2012, 12:35; http://www.biomedcentral.com/1471-2318/12/35): longitudinal data from the LUCAS urban cohort (n = 3,326) will be presented covering 10 years of observation, including the prediction of functional decline, need of nursing care, and mortality using a self-completed screening tool; interventions to prevent functional decline focus on first (pre-clinical) signs of pre-frailty before entering the frailty cascade ("Active Health Promotion in Old Age", "geriatric mobility centre") or disability ("home visits"). Conclusions: The LUCAS research consortium was established to study particular aspects of functional competence and its changes with ageing, to detect pre-clinical signs of functional decline, and to address questions on how to maintain functional competence and prevent adverse outcomes in different settings. The multidimensional database allows the exploration of several further questions. Gait performance was examined with the GAITRite® system. Supported by the Federal Ministry for Education and Research (BMBF Funding No. 01ET1002A). Part II: Selected results from the Lausanne cohort 65+ (Lc65+) Study (Switzerland). Authors: Prof. Brigitte Santos-Eggimann, Dr. Laurence Seematter-Bagnoud, Prof. Christophe Büla, Dr. Stéphane Rochat. Methods: The Lc65+ cohort was launched in 2004 with the random selection of 3,054 eligible individuals aged 65 to 70 (birth years 1934-1938) in the non-institutionalized population of Lausanne (Switzerland). Results: Information is collected about life-course social and health-related events, socio-economic, medical and psychosocial dimensions, lifestyle habits, limitations in activities of daily living, mobility impairments, and falls. Gait performance is objectively measured using body-fixed sensors. Frailty is assessed using Fried's frailty phenotype. Follow-up consists of annual self-completed questionnaires, as well as physical examination and physical and mental performance tests every three years. Lausanne cohort 65+ (Lc65+), design and longitudinal outcomes: the baseline data collection was completed among 1,422 participants in 2004-2005 through self-completed questionnaires, face-to-face interviews, physical examination and tests of mental and physical performance. Information about institutionalization, self-reported health services utilization, and death is also assessed. An additional random sample (n = 1,525) of subjects aged 65-70 was recruited in 2009 (birth years 1939-1943). Lecture no. 4, alcohol intake and gait parameters, prevalent and longitudinal associations in the Lc65+ study: the association between alcohol intake and gait performance was investigated.

Relevance:

100.00%

Publisher:

Abstract:

Substantial collective flow is observed in collisions between lead nuclei at the Large Hadron Collider (LHC), as evidenced by the azimuthal correlations in the transverse momentum distributions of the produced particles. Our calculations indicate that the global v1 flow, which at RHIC peaked at negative rapidities (known as the third flow component, or antiflow), at the LHC turns toward forward rapidities (to the same side and direction as the projectile residue). Potentially this can provide a sensitive barometer to estimate the pressure and transport properties of the quark-gluon plasma. Our calculations also take into account the initial-state center-of-mass rapidity fluctuations and demonstrate that these are crucial for v1 simulations. In order to better study the transverse momentum dependence of the flow, we suggest a new "symmetrized" v1S(pt) function, and we also propose a new method to disentangle the global v1 flow from the contribution generated by random fluctuations in the initial state. This will enhance the possibilities of studying the collective global v1 flow both in the STAR Beam Energy Scan program and at the LHC.
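
For reference, directed flow is the standard first Fourier harmonic of the azimuthal particle distribution with respect to the reaction plane; the symmetrized v1S(pt) variant is introduced by the authors, and its precise definition is given in the paper itself.

```latex
% Standard definition of directed flow (first azimuthal Fourier
% harmonic with respect to the reaction-plane angle \Psi_{RP}):
\[
  v_1(p_t, y) = \left\langle \cos\!\left(\phi - \Psi_{RP}\right) \right\rangle ,
\]
% with the average running over particles at given transverse
% momentum p_t and rapidity y.
```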

Relevance:

100.00%

Publisher:

Abstract:

Aims: Plasma concentrations of imatinib differ largely between patients on the same dosage, owing to large inter-individual variability in pharmacokinetic (PK) parameters. As the drug concentration at the end of the dosage interval (Cmin) correlates with treatment response and tolerability, monitoring of Cmin is suggested for therapeutic drug monitoring (TDM) of imatinib. Due to logistic difficulties, however, random sampling during the dosage interval is often performed in clinical practice, rendering the results uninformative regarding Cmin. Objectives: (I) To extrapolate randomly measured imatinib concentrations to the more informative Cmin using classical Bayesian forecasting. (II) To extend the classical Bayesian method to account for correlation between PK parameters. (III) To evaluate the predictive performance of both methods. Methods: 31 paired blood samples (random and trough levels) were obtained from 19 cancer patients receiving imatinib. Two Bayesian maximum a posteriori (MAP) methods were implemented: (A) a classical method ignoring correlation between PK parameters, and (B) an extended one accounting for correlation. Both methods were applied to estimate individual PK parameters, conditional on the random observations and covariate-adjusted priors from a population PK model. The PK parameter estimates were used to calculate trough levels. Relative prediction errors (PE) were analyzed to evaluate accuracy (one-sample t-test) and to compare precision between the methods (F-test to compare variances). Results: Both Bayesian MAP methods allowed unbiased predictions of individual Cmin compared to observations: (A) -7% mean PE (95% CI -18 to 4%, p = 0.15) and (B) -4% mean PE (95% CI -18 to 10%, p = 0.69). Relative standard deviations of actual observations from predictions were 22% (A) and 30% (B), i.e. comparable to the reported intra-individual variability. Precision was not improved by taking correlation between PK parameters into account (p = 0.22). Conclusion: Clinical interpretation of randomly measured imatinib concentrations can be assisted by Bayesian extrapolation to the maximum likelihood Cmin. Classical Bayesian estimation can be applied for TDM without the need to include correlation between PK parameters. Both methods could be adapted in the future to evaluate other individual pharmacokinetic measures correlated with clinical outcomes, such as the area under the curve (AUC).
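
A simplified sketch of the classical MAP step (method A) is given below: given a single randomly timed concentration, the individual clearance and volume are estimated by maximizing a log-posterior with lognormal priors, and the trough is then computed from a steady-state one-compartment oral model. All numerical values are illustrative assumptions, not the published imatinib population model.

```python
# Simplified sketch of classical Bayesian MAP extrapolation (method A):
# estimate individual CL and V from one randomly timed level, then
# extrapolate to the trough (Cmin). Steady-state one-compartment oral
# model; all numbers are illustrative, not the published imatinib model.
import numpy as np
from scipy.optimize import minimize

KA, F, DOSE, TAU = 0.6, 1.0, 400.0, 24.0      # assumed absorption/dosing
POP = np.log([14.0, 250.0])                   # assumed pop. CL (L/h), V (L)
OMEGA = np.array([0.3, 0.3])                  # assumed between-subject SD (log)
SIGMA = 0.2                                   # assumed residual error (log)

def conc(t, cl, v):
    """Steady-state concentration at time t after dose."""
    ke = cl / v
    a = F * DOSE * KA / (v * (KA - ke))
    return a * (np.exp(-ke * t) / (1 - np.exp(-ke * TAU))
                - np.exp(-KA * t) / (1 - np.exp(-KA * TAU)))

def neg_log_post(log_theta, t_obs, c_obs):
    """Negative log-posterior: residual term + lognormal prior term."""
    cl, v = np.exp(log_theta)
    res = (np.log(c_obs) - np.log(conc(t_obs, cl, v))) / SIGMA
    prior = (log_theta - POP) / OMEGA
    return 0.5 * (np.sum(res**2) + np.sum(prior**2))

t_obs, c_obs = 6.0, 1.8                       # random level: 6 h, 1.8 mg/L
fit = minimize(neg_log_post, POP, args=(t_obs, c_obs))
cl, v = np.exp(fit.x)
print("predicted Cmin:", round(conc(TAU, cl, v), 2), "mg/L")
```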

Relevance:

100.00%

Publisher:

Abstract:

This thesis studies the predictability of market switching and delisting events on the OMX First North Nordic multilateral stock exchange using financial statement information and market information from 2007 to 2012. The study was conducted in three stages. In the first stage, the relevant theoretical framework and an initial variable pool were constructed. Then, an explanatory analysis of the initial variable pool was performed in order to narrow it down and identify relevant variables; this analysis was conducted using the self-organizing map methodology. In the third stage, predictive modeling was carried out with random forest and support vector machine methodologies. The explanatory analysis was able to identify relevant variables, and the results indicate that market switching and delisting events can be predicted to some extent. The empirical results also support the usefulness of financial statement and market information in the prediction of market switching and delisting events.
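
A minimal sketch of the third (predictive) stage could look like the following, comparing a random forest and a support vector machine on a binary switching/delisting label by cross-validated AUC; the data file, column names and evaluation metric are hypothetical choices, not the thesis's exact setup.

```python
# Minimal sketch of the predictive stage: random forest vs. support
# vector machine on a binary "switched/delisted" label, compared by
# cross-validated AUC. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("first_north.csv")               # hypothetical data set
X, y = df.drop(columns="event"), df["event"]

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),  # scaling matters for SVMs
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} ± {auc.std():.3f}")
```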

Relevance:

100.00%

Publisher:

Abstract:

Genome-wide marker information can improve the reliability of breeding value predictions for young selection candidates in genomic selection. However, the cost of genotyping limits its use to elite animals, and how such selective genotyping affects the predictive ability of genomic selection models is an open question. We performed a simulation study to evaluate the quality of breeding value predictions for selection candidates based on different selective genotyping strategies in a population undergoing selection. The genome consisted of 10 chromosomes of 100 cM each. After 5,000 generations of random mating with a population size of 100 (50 males and 50 females), generation G(0) (the reference population) was produced via full factorial mating between the 50 males and 50 females of generation 5,000. Different levels of selection intensity (selecting the animals with the largest yield deviation values) in G(0), or random sampling (no selection), were used to produce the offspring of G(0), denoted G(1). Five genotyping strategies were used to choose 500 animals in G(0) to be genotyped: 1) Random: randomly selected animals; 2) Top: animals with the largest yield deviation values; 3) Bottom: animals with the lowest yield deviation values; 4) Extreme: the animals with the 250 largest and the 250 lowest yield deviation values; and 5) Less Related: the least genetically related animals. The number of individuals in G(0) and G(1) was fixed at 2,500 each, and different levels of heritability were considered (0.10, 0.25, and 0.50). Additionally, all 5 selective genotyping strategies (Random, Top, Bottom, Extreme, and Less Related) were applied to an indicator trait in generation G(0), and the results were evaluated for the target trait in generation G(1), with the genetic correlation between the 2 traits set to 0.50. The 5 genotyping strategies applied to individuals in G(0) (the reference population) were compared in terms of their ability to predict the genetic values of the animals in G(1) (the selection candidates). Lower correlations between genomic-based estimates of breeding values (GEBV) and true breeding values (TBV) were obtained when using the Bottom strategy. For the Random, Extreme, and Less Related strategies, the correlation between GEBV and TBV became slightly larger as selection intensity decreased and was largest when no selection occurred. These 3 strategies were better than the Top approach. In addition, the Extreme, Random, and Less Related strategies had smaller predictive mean squared errors (PMSE), followed by the Top and Bottom methods. Overall, the Extreme genotyping strategy led to the best predictive ability of breeding values, indicating that animals with extreme yield deviation values in a reference population are the most informative when training genomic selection models.
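
The selection step of four of the five genotyping strategies is easy to make concrete; the sketch below picks 500 of 2,500 animals from placeholder yield deviations (the Less Related strategy is omitted because it requires a kinship matrix).

```python
# Minimal sketch of four of the five genotyping strategies: choosing
# which 500 of 2,500 reference animals to genotype based on their
# yield deviations. ("Less Related" needs a kinship matrix and is
# omitted.) The yield deviations here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
yd = rng.normal(size=2500)                 # placeholder yield deviations
n = 500

order = np.argsort(yd)                     # indices sorted ascending by yd
strategies = {
    "Random":  rng.choice(2500, size=n, replace=False),
    "Top":     order[-n:],                 # largest yield deviations
    "Bottom":  order[:n],                  # smallest yield deviations
    "Extreme": np.concatenate([order[:n // 2], order[-(n // 2):]]),
}
for name, idx in strategies.items():
    print(f"{name:8s} mean YD of genotyped animals: {yd[idx].mean():+.2f}")
```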