952 results for information criteria
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A total of 21,762 weight records from birth to 550 days of age, on 4,221 animals, were used to estimate covariance functions with random regression models. The models included, as random effects, the direct additive genetic, maternal additive genetic, animal permanent environmental, and maternal permanent environmental effects and, as fixed effects, contemporary group, age of dam at calving (linear and quadratic), and an orthogonal Legendre polynomial of animal age (cubic regression) as covariates. Residual variances were modeled by a variance function of order 2 to 6. Analyses with orthogonal polynomials of various orders were carried out for the direct additive genetic, maternal additive genetic, animal permanent environmental, and maternal permanent environmental effects. Models were compared by Schwarz's Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The best model according to all criteria fitted the direct additive genetic effect with a cubic polynomial, the maternal genetic effect with a quadratic polynomial, the animal permanent environmental effect with a quartic polynomial, and the maternal permanent environmental effect with a linear polynomial. Heritability estimates for the direct effect were higher at the beginning and end of the period studied, with values of 0.28 at birth, 0.21 at 240 days, and 0.24 at 550 days of age. Maternal heritability estimates were higher at 160 days of age (0.10) than at the other stages of growth. Genetic correlations ranged from moderate to high, decreasing as the interval between ages increased. Greater efficiency of selection for weight can be obtained by using post-weaning weights, the period in which the genetic variance and heritability estimates were highest.
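The fixed cubic regression on orthogonal Legendre polynomials of age described above can be sketched as follows; the helper name `legendre_covariates` and the example ages are illustrative, not taken from the study:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age_days, order, a_min=0.0, a_max=550.0):
    """Legendre polynomial covariates of animal age, as used in
    random regression models (order 3 = cubic fixed regression)."""
    # Standardize age to [-1, 1], the domain of the Legendre polynomials
    x = 2.0 * (np.asarray(age_days, float) - a_min) / (a_max - a_min) - 1.0
    # legvander returns P_0..P_order evaluated at each standardized age
    return legendre.legvander(x, order)

Z = legendre_covariates([0, 240, 550], order=3)
print(Z.shape)  # (3, 4): one row per age, columns P0..P3
```

Each row of `Z` would enter the model as the covariate vector multiplying the fixed regression coefficients; the random effects use the same basis at their respective orders.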
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The population aging process increases the number of elderly people worldwide. In Brazil, a country of continental size, this process began in the 1940s and unfolds with specific features in each region's reality. This thesis therefore aimed to evaluate the psychometric properties of an elderly quality of life (QOL) scale, the WHOQOL-old, in a population of the Northeast of Brazil. We sought to investigate the congruence between the content covered by the scale and the content deemed relevant by the participants, and also to study the validity evidence of the instrument's internal structure. To achieve these objectives we adopted a multiple-methods design, organized into two studies. For data collection, both studies used a sociodemographic questionnaire to profile the participants and the Mini Mental State Examination (MMSE), applied as an exclusion criterion. In Study I, 18 elderly residents of the cities of Natal-RN and Campina Grande-PB, with a mean age of 73.3 years (SD = 5.9), took part. They were organized into three focus groups (FG) in which they discussed the concept of QOL and what enhances or hinders it. In Study II, a quantitative approach, 335 elderly people from Campina Grande, aged between 65 and 99 years (M = 74.17, SD = 6.5), responded to the WHOQOL-old scale. The FG data were analyzed by categorical thematic content analysis. The WHOQOL-old data were analyzed by exploratory factor analysis and by computing the Akaike and Bayesian information criteria. The results of the two studies were triangulated. According to the FG discussions, health and social participation play central roles in quality of life; social participation is related to all the other influences on QOL that were raised. The participants indicated the relevance of religiosity and were divided about the importance of sexual activity. Exploratory factor analysis (EFA) extracted a six-factor model.
Two items (OLD_3 and OLD_9) did not load on any factor and were excluded; the remaining items had factor loadings > 0.3. The response categories were reduced from five to three. After these changes, the empirical model showed better fit (-2 log-likelihood = 8993.90, AIC = 9183.90, BIC = 9546.24) than the theoretical model (-2 log-likelihood = 18390.88, AIC = 18678.88, BIC = 19228.11). Despite the better information criterion values, the RMSEA remained above the ideal (0.06). We conclude that the WHOQOL-old presents psychometric parameters below the ideal when used with the Northeast population, but the modifications made its use acceptable. The WHOQOL-old uses observable variables that match the participants' perceptions of quality of life. However, new strategies must be tested to further refine the scale.
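The reported AIC and BIC follow directly from the deviance (-2 log-likelihood), the parameter count, and the sample size. A minimal check, assuming the parameter count k = 95 implied by the reported AIC and the Study II sample of n = 335:

```python
import math

def aic(deviance, k):
    # AIC = -2 log L + 2k
    return deviance + 2 * k

def bic(deviance, k, n):
    # BIC = -2 log L + k ln(n)
    return deviance + k * math.log(n)

deviance = 8993.90   # -2 log-likelihood of the empirical model
n = 335              # respondents in Study II
k = 95               # parameter count implied by the reported AIC

print(round(aic(deviance, k), 2))     # 9183.9
print(round(bic(deviance, k, n), 2))  # 9546.24
```

The BIC reproduces the reported 9546.24 under these assumptions, which is why the garbled figures in the original can be read as AIC = 9183.90 and BIC = 9546.24.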
Abstract:
Pós-graduação em Zootecnia - FCAV
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We consider model selection uncertainty in linear regression. We study theoretically and by simulation the approach of Buckland and co-workers, who proposed estimating a parameter common to all models under study by taking a weighted average over the models, using weights obtained from information criteria or the bootstrap. This approach is compared with the usual approach in which the 'best' model is used, and with Bayesian model averaging. The weighted predictor behaves similarly to model averaging, with generally more realistic mean-squared errors than the usual model-selection-based estimator.
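The weighting scheme of Buckland and co-workers can be sketched with Akaike weights, w_i ∝ exp(-ΔAIC_i/2); the AIC values and parameter estimates below are illustrative, not from the paper:

```python
import math

def akaike_weights(aics):
    """Buckland-style model weights: w_i proportional to exp(-dAIC_i / 2)."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_estimate(estimates, aics):
    # Weighted average of a parameter common to all candidate models
    w = akaike_weights(aics)
    return sum(wi * ei for wi, ei in zip(w, estimates))

aics = [100.0, 102.0, 110.0]   # hypothetical AICs of three candidate models
betas = [1.2, 1.5, 0.9]        # the common parameter estimated under each
w = akaike_weights(aics)
print([round(x, 3) for x in w])               # [0.727, 0.268, 0.005]
print(round(averaged_estimate(betas, aics), 3))  # 1.279
```

Unlike picking the single 'best' model (which would report 1.2 and ignore selection uncertainty), the weighted estimate lets close competitors contribute in proportion to their support.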
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Blue whiting (Micromesistius poutassou, http://www.marinespecies.org/aphia.php?p=taxdetails&id=126439) is a small mesopelagic planktivorous gadoid found throughout the North-East Atlantic. This dataset contains the results of a model-based analysis of larvae captured by the Continuous Plankton Recorder (CPR) during the period 1951-2005. The observations are analysed using Generalised Additive Models (GAMs) of the spatial, seasonal and interannual variation in the occurrence of larvae. The best-fitting model is chosen using the Akaike Information Criterion (AIC). The probability of occurrence in the Continuous Plankton Recorder is then normalised and converted to a probability distribution function in space (UTM projection Zone 28) and season (day of year). The best-fitting model splits the distribution into two separate spawning grounds north and south of a dividing line at 53 N. The probability distribution is therefore normalised in these two regions (i.e. the space-time integral over each of the two regions is 1). The modelled outputs are on a UTM Zone 28 grid; however, for convenience, the latitude ("lat") and longitude ("lon") of each of these grid points are also included as variables in the NetCDF file. The assignment of each grid point to either the Northern or Southern component (defined here as north/south of 53 N) is also included as a further variable ("component"). Finally, the day of year ("doy") is stored as the number of days elapsed from and including January 1 (i.e. doy = 1 on January 1); the year is thereafter divided into 180 grid points.
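The AIC-based selection among candidate smoothers of increasing flexibility can be sketched as follows; the synthetic seasonal-occurrence data and polynomial smoothers below stand in for the CPR observations and GAM terms, and are not the study's models:

```python
import numpy as np

def gaussian_aic(y, yhat, k):
    """AIC for a least-squares fit under Gaussian errors:
    AIC = n ln(RSS/n) + 2k, additive constants dropped."""
    n = len(y)
    rss = float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
doy = np.linspace(0.0, 1.0, 120)                 # standardized day of year
signal = 0.2 + 0.6 * np.exp(-((doy - 0.3) / 0.15) ** 2)  # seasonal peak
y = signal + rng.normal(0.0, 0.03, doy.size)     # noisy occurrence proxy

# Compare smoothers of increasing flexibility and keep the lowest AIC
best = min(
    ((deg, gaussian_aic(y, np.polyval(np.polyfit(doy, y, deg), doy), deg + 1))
     for deg in range(1, 7)),
    key=lambda t: t[1],
)
print("degree chosen by AIC:", best[0])
```

The penalty term stops the most flexible smoother from winning automatically: extra degrees of freedom must buy a sufficient drop in residual sum of squares.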
Abstract:
This thesis studies survival analysis techniques that deal with censoring in order to produce predictive tools for the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring means that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for handling censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. This thesis therefore presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated under censoring. Most survival FS methods depend on Cox's model, whereas machine learning classifiers (MLC) are preferred here. Few methods have adopted MLC to perform survival FS, and those cannot be used with high censoring. This thesis proposes two FS methods that use MLC to evaluate features; both use the new solution to deal with censoring and combine factor analysis with a greedy stepwise FS search that allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machine, neural network, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers, and presents a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature-ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This shows that the proposed techniques are more powerful in correctly predicting the risk of re-intervention. Consequently, they enable doctors to set an appropriate future observation plan for each patient.
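The greedy stepwise wrapper idea can be sketched as follows; a leave-one-out nearest-centroid score stands in for the thesis's neural-network and SVM wrappers, and the data are synthetic:

```python
import numpy as np

def cv_score(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier on the
    selected feature subset (stand-in for a trained MLC wrapper)."""
    Xs = X[:, feats]
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = Xs[mask & (y == 0)].mean(axis=0)   # class-0 centroid, i held out
        c1 = Xs[mask & (y == 1)].mean(axis=0)   # class-1 centroid, i held out
        pred = int(np.linalg.norm(Xs[i] - c1) < np.linalg.norm(Xs[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

def greedy_forward_fs(X, y):
    # Wrapper FS: repeatedly add the feature that most improves the score
    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining:
        score, f = max((cv_score(X, y, selected + [f]), f) for f in remaining)
        if score <= best_score:
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score

rng = np.random.default_rng(1)
n = 60
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5))
X[:, 2] += 2.0 * y          # only feature 2 carries the class signal
feats, acc = greedy_forward_fs(X, y)
print(feats, acc)           # feature 2 is picked first
```

The thesis's versions additionally let eliminated features re-enter the search and replace the single scorer with a majority-voting multiple classifier system; those refinements are omitted here for brevity.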
Abstract:
Based on theoretical considerations, an explanation for the temperature dependence of the thermal expansion and the bulk modulus is proposed, and a new equation of state (EoS) is derived. Additionally, a physical explanation for the latent heat of fusion is presented. These theoretical predictions are tested against experiments on highly symmetrical monatomic structures. The volume is not an independent variable and must be broken down into its fundamental components when its relationships to pressure and temperature are defined. Using a zero-pressure, zero-temperature reference frame, the initial parameters are defined: the volume at zero pressure and temperature [V°], the bulk modulus at zero temperature [K°], and the volume coefficient of thermal expansion at zero pressure [α°]. The newly derived EoS is tested against the experiments on perovskite and epsilon iron. The root-mean-square deviations (RMSD) of the residuals of the molar volume, pressure, and temperature are within the range of the experimental uncertainties. Separating the experiments into 200 K ranges, the new EoS was compared to the most widely used finite-strain, interatomic-potential, and empirical isothermal EoSs: the Birch-Murnaghan, the Vinet, and the Roy-Roy, respectively. Correlation coefficients, RMSDs of the residuals, and the Akaike information criterion were used to evaluate the fits. Based on these fitting parameters, the new p-V-T EoS is superior in every temperature range to the conventional isothermal EoSs investigated. The new EoS for epsilon iron reproduces the preliminary reference Earth model (PREM) densities at 6100-7400 K, indicating that the presence of light elements might not be necessary to explain the Earth's inner-core densities. It is suggested that the latent heat of fusion supplies the energy required to overcome the viscous drag resistance of the atoms. The calculated energies for melts formed from highly symmetrical packing arrangements correlate very well with experimentally determined latent heat values. An optical investigation of carbonado-diamond is also part of the dissertation: the first complete infrared FTIR absorption spectra collected for carbonado-diamond confirm the interstellar origin of the most enigmatic diamonds, known as carbonado.
Abstract:
A significant observational effort has been directed at investigating the nature of the so-called dark energy. In this dissertation we derive constraints on dark energy models using three different observables: measurements of the Hubble rate H(z) (compiled by Meng et al. in 2015); distance moduli of 580 Type Ia supernovae (Union2.1 compilation, 2011); and observations of baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB) through the so-called CMB/BAO ratio for six BAO peaks (one peak determined from the 6dFGS survey data, two from the SDSS, and three from WiggleZ). The statistical analysis used the minimum-χ² method (marginalized or minimized over h whenever possible) to constrain the cosmological parameters Ωm, ω, and δω0. These tests were applied to two parameterizations of the parameter ω of the dark energy equation of state, p = ωρ (here, p is the pressure and ρ is the energy density). In one, ω is constant and less than -1/3, known as the XCDM model; in the other, the equation-of-state parameter varies with redshift, which we call the GS model. The latter is based on arguments arising from the theory of cosmological inflation. For comparison, the ΛCDM model was also analyzed. Since comparing the cosmological models against the different observations leads to different best-fit settings, we rank the observational viability of the theoretical models with two information criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The Fisher matrix tool was incorporated into our tests to provide the parameter uncertainties of each model. We found that the complementarity of the tests is necessary so that the parameter spaces are not degenerate. From the minimization process we found, at 68%, best-fit parameters Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052 for the XCDM model, and Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059 for the GS model. Performing a marginalization we found the same values at 68%: for XCDM, Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052; for GS, Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059.
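Ranking models by AIC and BIC from their minimum χ² is mechanical once the fits are done; in the sketch below the χ² values, parameter counts, and data count are hypothetical placeholders, not the dissertation's results:

```python
import math

def aic(chi2_min, k):
    # For Gaussian likelihoods, -2 ln L_max = chi^2_min up to a constant,
    # so AIC = chi^2_min + 2k
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    # BIC = chi^2_min + k ln(n), n = number of data points
    return chi2_min + k * math.log(n)

n_data = 624               # illustrative total data count (SNe + H(z) + CMB/BAO)
models = {                 # hypothetical (chi^2_min, free parameters) per model
    "LambdaCDM": (562.3, 1),
    "XCDM":      (561.9, 2),
    "GS":        (562.0, 2),
}
for name, (chi2, k) in models.items():
    print(name, round(aic(chi2, k), 1), round(bic(chi2, k, n_data), 1))
```

With nearly equal χ², both criteria favour the model with fewer free parameters; BIC penalizes the extra parameter more heavily (ln n ≈ 6.4 here versus 2 for AIC), which is why the two criteria can rank close models differently.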