925 results for Estimation Of Distribution Algorithm


Relevance: 100.00%

Abstract:

The purpose of this study was to examine, in the context of an economic model of health production, the relationship between inputs (health-influencing activities) and fitness. Primary data were collected from 204 employees of a large insurance company at the time of their enrollment in an industrially based health promotion program. The production inputs included medical care use, exercise, smoking, drinking, eating, coronary disease history, and obesity. Age, gender, and education, variables known to affect the production process, were also examined. Two estimates of fitness were used: self-report and a physiologic estimate based on exercise treadmill performance. Ordinary least squares and two-stage least squares regression analyses were used to estimate the fitness production functions. In the production of self-reported fitness status, the coefficients for the exercise, smoking, eating, and drinking inputs and for the control variable of gender were statistically significant and carried theoretically correct signs. In the production of physiologic fitness, exercise, smoking, and gender were statistically significant; exercise and gender were theoretically consistent, while smoking was not. Results are compared with previous analyses of health production.
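The two-stage least squares approach mentioned above can be sketched with synthetic data. Everything below (the variable names, the coefficients, and the instrument) is illustrative and not taken from the study; it only shows why 2SLS is used when an input such as exercise is endogenous:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 204  # sample size matching the study

# Synthetic data -- all names and coefficients are illustrative, not from
# the study. "exercise" is an endogenous input: it shares the unobserved
# confounder u with fitness, so plain OLS is biased. z is an instrument.
z = rng.normal(size=n)
u = rng.normal(size=n)
exercise = 0.8 * z + 0.5 * u + rng.normal(size=n)
fitness = 1.0 + 0.6 * exercise + 0.7 * u + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# OLS: fitness on a constant and the raw (endogenous) input.
X = np.column_stack([np.ones(n), exercise])
beta_ols = ols(X, fitness)

# 2SLS stage 1: project the endogenous input on the instrument.
Z = np.column_stack([np.ones(n), z])
exercise_hat = Z @ ols(Z, exercise)

# 2SLS stage 2: OLS of fitness on the fitted values from stage 1.
X2 = np.column_stack([np.ones(n), exercise_hat])
beta_2sls = ols(X2, fitness)

print(f"OLS slope:  {beta_ols[1]:.2f}")   # biased away from the true 0.6
print(f"2SLS slope: {beta_2sls[1]:.2f}")  # consistent for the true 0.6
```

With positive confounding, the OLS slope overstates the exercise effect, while the instrumented estimate recovers a value close to the true coefficient.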

Relevance: 100.00%

Abstract:

HIV/AIDS is a treatable although incurable disease that presents immense physical, social, and psychological challenges to those infected. As of 2009, an estimated 2.4 million people were living with HIV or AIDS in India, 0.3% of the country's population. In India the disease is difficult not only to treat but also to track, because it is associated with socio-economic factors such as illiteracy, social biases, poor sanitation, malnutrition, and social class. Nevertheless, knowing the prevalence of HIV/AIDS is important for several reasons. At the individual level, the quality of life of people living with HIV/AIDS is markedly lower than that of their counterparts without the disease. At the community level, prevalence data make it possible to identify high-risk groups, monitor prevention efforts, and allocate appropriate resources to programs targeting the reduction of HIV transmission.

Relevance: 100.00%

Abstract:

This study proposed a novel statistical method that models multiple outcomes and the missing-data process jointly using item response theory. The method follows the intent-to-treat principle in clinical trials and accounts for the correlation between the outcomes and the missing-data process, which may make it well suited to studies of chronic mental disorders. The simulation study demonstrated that when the true model is the proposed model with moderate or strong correlation, ignoring the within-subject correlation can lead to overestimation of the treatment effect and a type I error rate above the nominal level. Even when the within-subject correlation is small, the proposed model performs as well as the naïve response model. The proposed model is therefore robust across correlation settings when the data are generated by the proposed model.
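The core idea, outcomes and dropout driven by one shared latent trait, can be illustrated with a small simulation. The item parameters and sample size below are arbitrary assumptions, and this sketch only shows the bias that joint modeling is meant to avoid, not the proposed estimator itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def irt_prob(theta, a, b):
    """Two-parameter logistic item response probability P(Y = 1 | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

n = 5000
theta = rng.normal(size=n)  # shared latent trait (e.g. symptom severity)

# Outcome items and the dropout indicator all load on the SAME theta --
# the core idea of modeling outcomes and missingness jointly.
y1 = rng.random(n) < irt_prob(theta, a=1.2, b=0.0)
y2 = rng.random(n) < irt_prob(theta, a=0.9, b=0.5)
dropout = rng.random(n) < irt_prob(-theta, a=1.0, b=0.8)  # low theta -> more dropout

# A naive complete-case analysis sees only the completers, whose theta
# distribution is shifted upward, so their outcome rates are distorted.
print("P(y1=1) overall:         ", round(y1.mean(), 3))
print("P(y1=1) among completers:", round(y1[~dropout].mean(), 3))
```

Because subjects with low theta drop out more often, the completer-only rate is systematically higher than the population rate, which is exactly the distortion a joint outcome/missingness model corrects for.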

Relevance: 100.00%

Abstract:

Accurate quantitative estimation of exposure from retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, models have been developed from published exposure databases and their corresponding exposure determinants. These models are designed to be applied to exposure determinants reported by study subjects or to exposure levels assigned by an industrial hygienist, yielding quantitative exposure estimates. In an effort to improve the prediction accuracy and generalizability of these models, and considering that the limitations encountered in previous studies might stem from the limits of traditional statistical methods and concepts, this study proposed and explored data analysis methods derived from computer science, predominantly machine learning approaches. The goal was to develop models using decision trees/ensembles and neural networks to predict occupational exposure outcomes from literature-derived databases, and to compare, using cross-validation and data-splitting techniques, their predictive capacity with that of traditional regression models. Two cases were addressed: a categorical case, in which the exposure level was expressed as an exposure rating following the American Industrial Hygiene Association guidelines, and a continuous case, in which the exposure was expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used. Compared with regression estimates, decision tree/ensemble techniques were more accurate for the categorical case, while neural networks were better for estimating continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance.
Estimates derived from literature-based databases with machine learning techniques may offer an advantage when applied within methodologies that combine expert inputs with current exposure measurements, such as the Bayesian Decision Analysis tool. Using machine learning to estimate exposures more accurately from literature-based exposure databases may represent a starting point toward independence from expert judgment.
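The categorical-case comparison, a tree ensemble versus a regression baseline under cross-validation, can be sketched with scikit-learn. The dataset here is synthetic and the four-level target is only a stand-in for an AIHA-style exposure rating; nothing below reproduces the study's databases or results:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a literature-derived exposure database: rows are
# exposure records, features are exposure determinants, and the target is
# a four-level AIHA-style exposure rating. None of this is the study's data.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),  # regression baseline
    "random forest": RandomForestClassifier(random_state=0),   # tree ensemble
}

scores = {}
for name, model in models.items():
    # 5-fold cross-validation, mirroring how model families are compared.
    scores[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {scores[name]:.2f}")
```

Cross-validated accuracy on held-out folds, rather than training fit, is the appropriate basis for the comparison, since it is exactly overfitting that the study identifies as a failure mode.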

Relevance: 100.00%

Abstract:

The normal boiling point is a fundamental thermophysical property that describes the transition between the vapor and liquid phases. A reliable method for predicting it is of great importance, especially for compounds with no available experimental data. In this work, an improved second-order group contribution method for determining the normal boiling point of organic compounds was developed from experimental data for 632 organic compounds, based on the Joback first-order functional groups with some modifications and additional functional groups. The method can distinguish most structural isomers and stereoisomers, including the structural, cis-, and trans-isomers of organic compounds. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine, and bromine atoms. The fminsearch routine in MATLAB, a direct search method that uses the simplex algorithm of Lagarias et al., was used to select an optimal collection of 65 functional groups and subsequently to fit the model. The results of the new method are compared with several currently used methods and are shown to be considerably more accurate and reliable. The average absolute deviation of the normal boiling point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047%, an accuracy adequate for many practical applications.
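The fitting step can be sketched in Python with SciPy's Nelder-Mead implementation, the same simplex algorithm (Lagarias et al.) behind MATLAB's fminsearch. The three-group model below is a deliberately tiny illustration, not the paper's 65-group model, and the linear no-intercept form Tb = Σ nᵢcᵢ is an assumption for the sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Toy group-count matrix: rows = compounds, columns = functional groups
# (-CH3, -CH2-, -OH). The compounds and boiling points (K) are real, but
# the three-group model is only an illustration of the fitting procedure.
N = np.array([[2, 0, 0],   # ethane
              [2, 1, 0],   # propane
              [2, 2, 0],   # butane
              [1, 0, 1],   # methanol
              [1, 1, 1]])  # ethanol
Tb = np.array([184.6, 231.0, 272.7, 337.9, 351.4])

def sse(contribs):
    """Sum of squared errors for Tb_pred = sum_i n_i * c_i."""
    return float(np.sum((Tb - N @ contribs) ** 2))

# Nelder-Mead simplex search -- the algorithm behind MATLAB's fminsearch.
res = minimize(sse, x0=np.full(3, 50.0), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 20000})

pred = N @ res.x
aad = np.mean(np.abs(pred - Tb))  # average absolute deviation, K
print("fitted group contributions (K):", np.round(res.x, 1))
print(f"average absolute deviation: {aad:.2f} K")
```

Even this tiny fit shows why second-order corrections are needed: a single -CH2- contribution cannot match both the alkane and the alcohol series exactly, which is what group interactions and second-order terms address.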

Relevance: 100.00%

Abstract:

Carbon isotopically based estimates of CO2 levels have been generated from a record of the photosynthetic fractionation of 13C (epsilon p) in a central equatorial Pacific sediment core that spans the last ~255 ka. Contents of 13C in phytoplanktonic biomass were determined by analysis of C37 alkadienones. These compounds are exclusive products of prymnesiophyte algae, which at present grow most abundantly at depths of 70-90 m in the central equatorial Pacific. A record of the isotopic composition of dissolved CO2 was constructed from isotopic analyses of the planktonic foraminifera Neogloboquadrina dutertrei, which calcifies at 70-90 m in the same region. Values of epsilon p, derived by comparison of the organic and inorganic delta values, were transformed to yield concentrations of dissolved CO2 (ce) based on a new, site-specific calibration of the relationship between epsilon p and ce. The calibration was based on a reassessment of existing epsilon p versus ce data, which support a physiologically based model in which epsilon p is inversely related to ce. Values of PCO2, the partial pressure of CO2 that would be in equilibrium with the estimated concentrations of dissolved CO2, were calculated using Henry's law and the temperature determined from the alkenone unsaturation index UK37. Uncertainties in these values arise mainly from uncertainty about the appropriateness (particularly over time) of the site-specific relationship between epsilon p and 1/ce. These uncertainties are discussed in detail, and it is concluded that the observed record of epsilon p most probably reflects significant variations in Delta pCO2, the ocean-atmosphere disequilibrium, which appears to have ranged from ~110 µatm during glacial intervals (ocean > atmosphere) to ~60 µatm during interglacials. Fluxes of CO2 to the atmosphere would thus have been significantly larger during glacial intervals.
If this were characteristic of large areas of the equatorial Pacific, then greater glacial sinks for the equatorially evaded CO2 must have existed elsewhere. Statistical analysis of air-sea pCO2 differences and other parameters revealed significant (p < 0.01) inverse correlations of Delta pCO2 with sea surface temperature and with the mass accumulation rate of opal. The former suggests a response to the strength of upwelling; the latter may indicate either drawdown of CO2 by siliceous phytoplankton or variation in [CO2]/[Si(OH)4] ratios in upwelling waters.
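The Henry's-law step, converting an estimated dissolved CO2 concentration and an alkenone-derived temperature into an equilibrium PCO2, can be sketched as follows. The temperature/salinity fit for the solubility K0 is the standard Weiss (1974) seawater equation; the input concentration, temperature, and salinity are illustrative values, not data from the core:

```python
import numpy as np

def co2_solubility(T_kelvin, salinity):
    """CO2 solubility K0 in mol L^-1 atm^-1 (Weiss 1974 seawater fit)."""
    t = T_kelvin / 100.0
    ln_k0 = (-58.0931 + 90.5069 / t + 22.2940 * np.log(t)
             + salinity * (0.027766 - 0.025888 * t + 0.0050578 * t * t))
    return np.exp(ln_k0)

def pco2_uatm(ce_umol_per_L, T_kelvin, salinity=35.0):
    """Equilibrium PCO2 (µatm) via Henry's law: P = c_e / K0."""
    return ce_umol_per_L * 1e-6 / co2_solubility(T_kelvin, salinity) * 1e6

# Illustrative inputs, not values from the core: 10 µmol/L dissolved CO2
# at an alkenone-derived SST of 25 degrees C.
print(f"PCO2 = {pco2_uatm(10.0, 298.15):.0f} µatm")
```

Because K0 falls as temperature rises, the same dissolved CO2 concentration implies a higher equilibrium PCO2 in warmer water, which is why an independent SST estimate (here from the alkenone unsaturation index) is needed alongside ce.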