908 results for "Prediction of random effects"
Abstract:
The purpose of this study was to validate and cross-validate the Beunen-Malina-Freitas method for non-invasive prediction of adult height in girls. A sample of 420 girls aged 10–15 years from the Madeira Growth Study were measured at yearly intervals and then 8 years later. Anthropometric dimensions (lengths, breadths, circumferences, and skinfolds) were measured; skeletal age was assessed using the Tanner-Whitehouse 3 method and menarcheal status (present or absent) was recorded. Adult height was measured and predicted using stepwise, forward, and maximum R2 regression techniques. Multiple correlations, mean differences, standard errors of prediction, and error boundaries were calculated. A sample of the Leuven Longitudinal Twin Study was used to cross-validate the regressions. Age-specific coefficients of determination (R2) between predicted and measured adult height varied between 0.57 and 0.96, while standard errors of prediction varied between 1.1 and 3.9 cm. The cross-validation confirmed the validity of the Beunen-Malina-Freitas method in girls aged 12–15 years, but at lower ages the cross-validation was less consistent. We conclude that the Beunen-Malina-Freitas method is valid for the prediction of adult height in girls aged 12–15 years. It is applicable to European populations or populations of European ancestry.
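Below is a minimal Python sketch of the kind of age-specific multiple regression described above: adult height predicted from current anthropometry, with the coefficient of determination (R^2) and standard error of prediction reported. The predictor set, the synthetic data, and all coefficients are illustrative assumptions, not the published Beunen-Malina-Freitas equations.

```python
# Illustrative only: age-specific regression of adult height on current
# anthropometry, with R^2 and standard error of estimate (SEE).
import numpy as np

rng = np.random.default_rng(0)
n = 120                                    # girls in one age group (synthetic)
height = rng.normal(155, 7, n)             # current stature, cm
sitting_height = rng.normal(82, 4, n)      # cm
skeletal_age = rng.normal(12.5, 1.0, n)    # years (TW3-style scale)
adult_height = 0.8 * height + 0.3 * sitting_height + 1.5 * skeletal_age \
               + rng.normal(0, 2.5, n)     # synthetic "measured" adult height

X = np.column_stack([np.ones(n), height, sitting_height, skeletal_age])
beta, *_ = np.linalg.lstsq(X, adult_height, rcond=None)
pred = X @ beta

resid = adult_height - pred
r2 = 1 - resid.var() / adult_height.var()
see = np.sqrt(resid @ resid / (n - X.shape[1]))   # standard error of estimate
print(f"R^2 = {r2:.2f}, SEE = {see:.1f} cm")
```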
Abstract:
The length of stay of preterm infants in a neonatology service has become an issue of growing concern, considering, on the one hand, the health conditions of mothers and infants and, on the other hand, the scarce resources of healthcare facilities. Thus, a pro-active problem-solving strategy has to be put in place, either to improve the quality of service provided or to reduce the inherent financial costs. This work therefore focuses on the development of a diagnosis decision support system, framed as a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a case-based problem-solving methodology for computing that caters for the handling of incomplete, unknown, or even contradictory information. The proposed model has been quite accurate in predicting the length of stay (overall accuracy of 84.9%) while reducing the computational time by around 21.3%.
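The following Python sketch shows the retrieve-and-reuse core of a case-based reasoning predictor for length of stay. The feature names and case values are hypothetical, and the paper's Logic Programming layer for scoring incomplete or contradictory information is not reproduced here.

```python
# Hypothetical case base: each row is (gestational age [weeks],
# birth weight [kg], Apgar score); stays are the observed lengths of stay.
import numpy as np

cases = np.array([[30.0, 1.4, 7],
                  [27.5, 0.9, 5],
                  [34.0, 2.1, 8],
                  [29.0, 1.2, 6]])
stays = np.array([35, 60, 14, 42])   # days

def predict_stay(new_case, k=2):
    """Retrieve the k most similar past cases and reuse their mean stay."""
    mu, sigma = cases.mean(axis=0), cases.std(axis=0)
    # Normalise features so each dimension contributes comparably to distance.
    d = np.linalg.norm((cases - mu) / sigma - (np.asarray(new_case) - mu) / sigma,
                       axis=1)
    nearest = np.argsort(d)[:k]
    return stays[nearest].mean()

print(predict_stay([28.0, 1.0, 6]))   # predicted length of stay, days
```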
Abstract:
This study tested a prediction model of suicidality in a sample of young adults. Predictor variables included perceived parental rejection, self-criticism, neediness, and depression. Participants (N = 165) responded to the Depressive Experiences Questionnaire, the Inventory for Assessing Memories of Parental Rearing Behavior, the Center for Epidemiological Studies Depression Scale, and the Suicide Behaviors Questionnaire—Revised. Perceived parental rejection, personality, and depression were assessed initially at Time 1, and depression again and suicidality were assessed 5 months later at Time 2. The proposed structural equation model fit the observed data well in a sample of young adults. Parental rejection demonstrated direct and indirect relationships with suicidality, and self-criticism and neediness each had indirect associations with suicidality. Depression was directly related to suicidality. Implications for clinical practice are discussed.
Abstract:
Prediction of the low-risk concentration of diflubenzuron for aquatic organisms and evaluation of clay and gravel in reducing its toxicity. Diflubenzuron is an insecticide that, besides its agricultural use, has been widely employed in fish farming, even though its use in this activity is prohibited. The compound is not included in the Brazilian legislation that establishes maximum permissible limits in water bodies for the protection of aquatic communities. In this work, based on toxicity data for diflubenzuron in non-target organisms, the hazardous concentration for only 5% of the species of the aquatic community (HC5) was calculated. This parameter was estimated at approximately 7 × 10^-6 mg L^-1. Such a low value is due to the extremely high toxicity of diflubenzuron to daphnids and to the large variation in sensitivity among the species tested. Two relatively low-cost, inert materials proved efficient in removing the toxicity of diflubenzuron from solutions containing the compound; among them, expanded clay reduced the toxicity of a diflubenzuron solution by approximately 50%. The results may contribute to public policies in Brazil related to the establishment of maximum permissible limits of xenobiotics in the aquatic compartment, and to the search for inert, low-cost materials with the potential to remove xenobiotics from aquaculture or agricultural effluents.
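A short Python sketch of the species-sensitivity-distribution reasoning behind an HC5: fit a log-normal distribution to per-species toxicity endpoints and take its 5th percentile. The endpoint values below are invented placeholders; only the general procedure, not the study's data or its reported HC5 of about 7 × 10^-6 mg/L, is represented.

```python
import numpy as np
from scipy import stats

# Hypothetical acute toxicity endpoints (mg/L) for different species.
endpoints_mg_L = np.array([2e-6, 8e-6, 5e-5, 3e-4, 1e-2, 0.4])
log_vals = np.log10(endpoints_mg_L)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

# HC5 = concentration below which only 5% of species are expected to be affected.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 ≈ {hc5:.1e} mg/L")
```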
Abstract:
Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems aggregate computing power to deliver considerably higher performance than a typical desktop computer can provide, in order to solve large problems in science, engineering, or business. An HPC room in the datacenter is a complex controlled environment that hosts thousands of computing nodes consuming electrical power in the range of megawatts, all of which is ultimately converted into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks in real-life production workloads, showing significant spatial and temporal heterogeneity in both temperature and power. Minor thermal issues or anomalies can therefore start a chain of events that leads to an imbalance between the heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, detecting or predicting them in time is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzed and characterized the power and thermal behavior of a Tier-0 datacenter (CINECA) during production and under abnormal thermal conditions. A Deep Learning (DL)-powered thermal hazard prediction framework is then proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system. For this thesis, I used a large-scale dataset comprising monitoring data from tens of thousands of sensors collected over around 24 months at a sampling interval of around 20 seconds.
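As an illustration of reconstruction-based anomaly detection on sensor telemetry (one common DL approach in this setting, not necessarily the thesis's architecture), the PyTorch sketch below trains a small autoencoder on windows of normal data and flags windows whose reconstruction error exceeds a threshold. Window length, data, and threshold are placeholder assumptions.

```python
import torch
import torch.nn as nn

WIN = 64  # samples per window (illustrative)

model = nn.Sequential(            # a small dense autoencoder
    nn.Linear(WIN, 16), nn.ReLU(),
    nn.Linear(16, WIN),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal_windows = torch.randn(512, WIN)     # placeholder for healthy telemetry
for _ in range(200):                        # learn to reconstruct normal data
    opt.zero_grad()
    loss = loss_fn(model(normal_windows), normal_windows)
    loss.backward()
    opt.step()

def is_anomalous(window, threshold=1.5):
    """Flag a window whose reconstruction error is far above normal."""
    with torch.no_grad():
        err = loss_fn(model(window), window).item()
    return err > threshold
```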
Abstract:
The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions of this thesis can be subdivided into three topics: (i) the use of almost surely discrete repulsive random measures (i.e., whose support points are well separated) for Bayesian model-based clustering; (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups; and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for split-merge reversible jump moves typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results that enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study growth curves of Singaporean children. Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
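To make "almost surely discrete random probability measure" concrete, the sketch below draws one via a truncated stick-breaking construction. This is the plain Dirichlet process, included only as background; the thesis works with repulsive priors and other constructions, which are not reproduced here.

```python
import numpy as np

def stick_breaking(alpha=1.0, n_atoms=50, rng=np.random.default_rng(0)):
    """Draw atoms and weights of a truncated stick-breaking random measure."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    weights = betas * np.concatenate([[1.0], np.cumprod(1 - betas)[:-1]])
    atoms = rng.normal(0.0, 1.0, size=n_atoms)   # base measure N(0, 1)
    return atoms, weights / weights.sum()        # renormalise the truncation

atoms, weights = stick_breaking()
print(atoms[:3], weights[:3])
```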
Abstract:
Objective: To find a correlation between cerebral symptoms at birth and abnormalities found at the anomaly scan, through an analysis of the sensitivity of the anomaly scan in predicting severe neonatal CMV disease. Methods (Design, Setting, Population): This was a retrospective collection of all cases of primary congenital CMV infection reported in our unit (Obstetrics and Perinatal Medicine, Policlinico di S Orsola, IRCCS, Bologna) over a period of 9 years (2013–2022). Only cases of fetal infection following confirmed maternal primary infection (MPI) in the first trimester and newborns with confirmed CMV infection on blood/saliva or urine were included. Results: Between 2014 and 2022, 69 fetuses had an antenatal diagnosis of primary CMV infection. The infection occurred after MPI in the first, second, and third trimester in 63.7% (43/69), 27.5% (19/69), and 10% (7/69) of cases, respectively. The second-trimester anomaly scan was abnormal in 10/69 (15%) fetuses: 5/69 (7%) had an extracerebral STA and 5/69 (7%) had a cerebral STA. A normal anomaly scan was found in 59/69 (86%) fetuses. Among fetuses infected in the first trimester, 12.5% (5/40) underwent TOP and 45% (18/40) had symptoms at birth. A mean follow-up of 22.4 months (range 12–48 months) was available for 68/69 (99%) live-born neonates. Conclusion: The anomaly scan was found to have a positive predictive value of 67% in fetuses infected in the first trimester. Serial ultrasound assessment is necessary to predict the risk of sequelae, which occur in 35% of cases following fetal infection in the first trimester of pregnancy. This combined evaluation by ultrasound and trimester of infection should be useful when counselling on the prognosis of cCMV infection.
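For readers less familiar with screening-test metrics, the short Python sketch below shows how sensitivity and positive predictive value are computed from a 2×2 table. The counts are invented for illustration and are not the study's data (which reports a PPV of 67% for the anomaly scan).

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity and positive predictive value from 2x2 counts."""
    sensitivity = tp / (tp + fn)   # symptomatic cases flagged by the scan
    ppv = tp / (tp + fp)           # flagged cases that are truly symptomatic
    return sensitivity, ppv

sens, ppv = screening_metrics(tp=4, fp=2, fn=14, tn=49)   # hypothetical counts
print(f"sensitivity = {sens:.0%}, PPV = {ppv:.0%}")
```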
Abstract:
The current environmental crisis is forcing the automotive industry to face tough challenges in the development of internal combustion engines in order to reduce emissions of pollutants and greenhouse gases. In this context, over the last decades the main technological solutions adopted by manufacturers have been direct injection and engine downsizing, which have raised new concerns related to the physical interaction between the fuel and the cylinder walls. The fuel spray may impact the cylinder liner wall, which is wetted by the lubricant oil, thus degrading the lubricant properties, increasing oil consumption, and contaminating the lubricant oil in the crankcase. Also, in hydrogen-fuelled internal combustion engines, the high near-wall temperature typical of the hydrogen flame is likely to cause evaporation of a portion of the lubricant oil, increasing its consumption. With regard to innovative combustion systems and their control strategies, optically accessible engines are fundamental tools for experimental investigation. However, due to the optical measurement line, optical engines suffer from a high level of blow-by, which must be accounted for. In light of the above, this thesis develops numerical methodologies intended as useful tools for supporting the design of modern engines. In particular, a one-dimensional model of lubricant oil-fuel dilution and oil evaporation has been developed and coupled with an optimization algorithm to derive a lubricant oil surrogate. Then, a quasi-dimensional blow-by model has been developed and validated against experimental data. This model has been coupled with 3D CFD simulations and also implemented directly in the 3D CFD code. Finally, 3D CFD simulations coupled with the VOF method have been performed in order to validate a methodology for studying the impact of a liquid droplet on a solid surface.
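Quasi-dimensional blow-by models commonly treat the flow past the ring pack as compressible isentropic flow through an equivalent orifice; the Python sketch below implements that textbook relation. The gap area, discharge coefficient, and gas states are placeholders, not the thesis's calibrated model.

```python
import math

def blowby_mass_flow(p_up, T_up, p_down, area, cd=0.7, gamma=1.4, R=287.0):
    """Mass flow rate [kg/s] through an equivalent ring-gap orifice."""
    r = p_down / p_up
    r_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    if r <= r_crit:   # choked flow
        psi = math.sqrt(gamma) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) /
                                                           (2.0 * (gamma - 1.0)))
    else:             # subsonic flow
        psi = math.sqrt(2.0 * gamma / (gamma - 1.0) *
                        (r ** (2.0 / gamma) - r ** ((gamma + 1.0) / gamma)))
    return cd * area * p_up / math.sqrt(R * T_up) * psi

# e.g. 60 bar cylinder pressure, 800 K, 2 bar crevice, 0.5 mm^2 gap area
print(blowby_mass_flow(60e5, 800.0, 2e5, 0.5e-6))
```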
Abstract:
Fretting fatigue is a damage process that occurs when two surfaces in contact are subjected to relative micro-slip, causing a reduced fatigue life with respect to the plain fatigue case. Fretting has now been studied in depth for over 50 years, yet no single design approach has been universally accepted. This thesis presents a method for predicting the fretting fatigue life of materials based on material-specific fatigue parameters. To validate the method, a set of fretting fatigue experiments was run using a newly designed specimen. FE analyses of the tests were also performed; the Smith-Watson-Topper (SWT) parameter was extracted and found useful for identifying which specimens failed. Finally, S-N curves were derived using two different fatigue life prediction methods (Coffin-Manson and Jahed-Varvani). The two methods were compared with the experimental results, and the Jahed-Varvani method was found to give accurate predictions of fretting fatigue life.
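As background on how an SWT value maps to a fatigue life, the Python sketch below solves the standard SWT life equation built from the strain-life (Basquin/Coffin-Manson) constants. The material constants are generic steel-like placeholders, not the parameters identified in the thesis.

```python
from scipy.optimize import brentq

# Generic strain-life constants: E [MPa], sigma_f' [MPa], b, eps_f', c.
E, sf, b, ef, c = 200e3, 900.0, -0.09, 0.5, -0.56

def swt_from_life(two_Nf):
    """SWT damage parameter (sigma_max * eps_a) predicted at 2Nf reversals."""
    return sf ** 2 / E * two_Nf ** (2 * b) + sf * ef * two_Nf ** (b + c)

def life_from_swt(swt_value):
    """Solve SWT(2Nf) = swt_value for the number of reversals 2Nf."""
    return brentq(lambda x: swt_from_life(x) - swt_value, 1e0, 1e9)

print(life_from_swt(2.0))   # reversals to failure for SWT = 2.0 MPa
```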
Abstract:
Previous earthquakes have shown that shear wall damage can lead to catastrophic failures of reinforced concrete buildings. The lateral load capacity of shear walls needs to be estimated to minimize the associated losses during catastrophic events; hence it is necessary to develop and validate reliable and stable numerical methods able to converge to reasonable estimates with minimum computational effort. The beam-column 1-D line element with a fiber-type cross-section model is a practical option that yields results in agreement with experimental data. However, shortcomings in using this model to predict the local damage response stem from the fact that the model requires fine calibration of material properties to overcome regularization and size effects. To reduce the mesh dependency of the numerical model, a regularization method based on the concept of post-yield energy is applied in this work to both the concrete and the steel material constitutive laws to predict the nonlinear cyclic response and failure mechanism of concrete shear walls. Different categories of wall specimens, known to produce different responses under in-plane cyclic loading because of their varied geometric and detailing characteristics, are considered in this study, namely: 1) scaled wall specimens designed according to the European seismic design code and 2) unique full-scale wall specimens detailed according to the U.S. design code to develop ductile behavior under cyclic loading. To test the boundaries of application of the proposed method, two full-scale walls with a mixed shear-flexure response and different values of applied axial load are also considered. The results of this study show that the use of regularized constitutive models considerably enhances the response prediction capabilities of the model with regard to the global force-drift response and the failure mode. The simulations presented in this thesis demonstrate that the proposed model is a valuable tool for researchers and engineers.
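The idea behind post-yield-energy regularization is that the softening branch of the material law is adjusted to the integration-point length so that the dissipated energy per unit area stays mesh-independent. The Python sketch below shows one generic form for a linearly softening concrete law; it is an illustration under that assumption, not the exact constitutive laws calibrated in the thesis.

```python
def regularized_ultimate_strain(f_c, eps_0, G_fc, L_ip):
    """
    f_c   : peak compressive stress [MPa]
    eps_0 : strain at peak stress [-]
    G_fc  : crushing energy per unit area [N/mm]
    L_ip  : integration-point (element) length [mm]
    """
    # Linear softening from f_c to zero dissipates 0.5*f_c*(eps_u - eps_0)
    # per unit volume; multiplying by L_ip and equating to G_fc gives eps_u.
    return eps_0 + 2.0 * G_fc / (f_c * L_ip)

# Shorter elements get a flatter softening branch (larger ultimate strain),
# keeping the total crushing energy constant across meshes.
for L in (150.0, 300.0, 600.0):
    print(L, regularized_ultimate_strain(f_c=30.0, eps_0=0.002, G_fc=60.0, L_ip=L))
```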
Abstract:
One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets in which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200kb long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments. Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
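The Python sketch below illustrates the nucleotide-level accuracy measures commonly used in gene-prediction benchmarks: sensitivity (fraction of annotated coding bases that are predicted) and "specificity" in the sense used in this field, i.e. the fraction of predicted coding bases that are truly coding (precision). The exon coordinates are invented.

```python
def coding_positions(exons):
    """Expand a list of (start, end) exon intervals into a set of positions."""
    return {p for s, e in exons for p in range(s, e + 1)}

annotated = coding_positions([(100, 250), (400, 520)])   # hypothetical truth
predicted = coding_positions([(100, 260), (480, 520)])   # hypothetical prediction

tp = len(annotated & predicted)
sensitivity = tp / len(annotated)   # TP / (TP + FN)
specificity = tp / len(predicted)   # TP / (TP + FP), i.e. precision
print(f"Sn = {sensitivity:.2f}, Sp = {specificity:.2f}")
```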
Abstract:
Given the importance of Guzera breeding programs for milk production in the tropics, the objective of this study was to compare alternative random regression models for the estimation of genetic parameters and prediction of breeding values. Test-day milk yield records (TDR) were collected monthly, with a maximum of 10 measurements per lactation. The database included 20,524 first-lactation records from 2,816 Guzera cows. TDR data were analyzed by random regression models (RRM) considering additive genetic, permanent environmental, and residual effects as random, and the effects of contemporary group (CG), calving age as a covariate (linear and quadratic effects), and the mean lactation curve as fixed. The additive genetic and permanent environmental effects were modeled by RRM using the Wilmink, Ali and Schaeffer, and cubic B-spline functions as well as Legendre polynomials. Residual variances were modeled as heterogeneous classes, grouped differently according to the model used. A multi-trait analysis using finite-dimensional models (FDM) for test-day milk records and a single-trait model for 305-day milk yield (the conventional model) were also carried out, using the restricted maximum likelihood method, as further comparisons. According to the statistical criteria adopted, the best RRM was the one using the cubic B-spline function with five random regression coefficients for the additive genetic and permanent environmental effects. However, models using the Ali and Schaeffer function or Legendre polynomials of second and fifth order for, respectively, the additive genetic and permanent environmental effects can be adopted, as little variation was observed in the genetic parameter estimates compared with those from models using the B-spline function. Therefore, owing to the lower complexity of the (co)variance estimation, the model using Legendre polynomials represented the best option for the genetic evaluation of the Guzera lactation records. An increase of 3.6% in the accuracy of the estimated breeding values was verified when using RRM. Animal rankings were very similar regardless of the RRM used to predict breeding values. For 305-day milk yield (P305), the results indicated only small to medium differences in the animals' ranking based on breeding values predicted by the conventional model or by RRM. Therefore, the sum of all RRM-predicted breeding values along the lactation period (RRM305) can be used as a selection criterion for 305-day milk production.
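The Python sketch below shows how Legendre-polynomial covariates for a random regression test-day model are typically built: days in milk (DIM) are standardised to [-1, 1] and evaluated on the first few Legendre polynomials, which then multiply an animal's random regression coefficients. The polynomial order, DIM range, and coefficient values are illustrative only.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5, dim_max=305, order=2):
    """Legendre covariates of one test day (order 2 -> 3 columns)."""
    x = -1.0 + 2.0 * (dim - dim_min) / (dim_max - dim_min)   # map DIM to [-1, 1]
    return np.array([legendre.Legendre.basis(k)(x) for k in range(order + 1)])

phi = legendre_covariates(dim=150)           # covariates at 150 days in milk
coeffs = np.array([28.0, -3.0, 1.2])         # hypothetical animal solutions
print(phi, phi @ coeffs)                     # contribution to the test-day yield
```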
Abstract:
We created a simulation based on experimental data from bacteriophage T7 that computes the developmental cycle of the wild-type phage and also of mutants that have an altered genome order. We used the simulation to compute the fitness of more than 10^5 mutants. We tested these computations by constructing and experimentally characterizing T7 mutants in which we repositioned gene 1, coding for T7 RNA polymerase. Computed protein synthesis rates for ectopic gene 1 strains were in moderate agreement with observed rates. Computed phage-doubling rates were close to observations for two of four strains, but significantly overestimated those of the other two. Computations indicate that the genome organization of wild-type T7 is nearly optimal for growth: only 2.8% of random genome permutations were computed to grow faster than wild type, the fastest by 31%. Specific discrepancies between computations and observations suggest that a better understanding of the translation efficiency of individual mRNAs and of the functions of qualitatively "nonessential" genes will be needed to improve the T7 simulation. In silico representations of biological systems can serve to assess and advance our understanding of the underlying biology. Iteration between computation, prediction, and observation should increase the rate at which biological hypotheses are formulated and tested.
Abstract:
In this paper we determine the local and global resilience of random graphs G(n,p) (p ≫ n^{-1}) with respect to the property of containing a cycle of length at least (1 − α)n. Roughly speaking, given α > 0, we determine the smallest r_g(G, α) with the property that almost surely every subgraph of G = G(n,p) having more than r_g(G, α)|E(G)| edges contains a cycle of length at least (1 − α)n (global resilience). We also obtain, for α < 1/2, the smallest r_l(G, α) such that any H ⊆ G having deg_H(v) larger than r_l(G, α) deg_G(v) for all v ∈ V(G) contains a cycle of length at least (1 − α)n (local resilience). The results above are in fact proved in the more general setting of pseudorandom graphs.
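The two resilience notions from the abstract can be restated informally in LaTeX as follows, under the standing assumptions G = G(n,p) with p ≫ n^{-1}, α > 0 (and α < 1/2 for the local version); this is only a paraphrase of the prose above, not the paper's formal statements.

```latex
\[
  r_g(G,\alpha) = \inf\Bigl\{ r :
    \text{a.a.s. every } H \subseteq G \text{ with } e(H) > r\,|E(G)|
    \text{ contains a cycle of length} \ge (1-\alpha)n \Bigr\},
\]
\[
  r_l(G,\alpha) = \inf\Bigl\{ r :
    \text{a.a.s. every } H \subseteq G \text{ with }
    \deg_H(v) > r \deg_G(v) \ \forall v \in V(G)
    \text{ contains a cycle of length} \ge (1-\alpha)n \Bigr\}.
\]
```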