907 results for Model selection criteria
Abstract:
This study examined the relationships between the gifted selection criteria used in the Dade County Public Schools of Miami, Florida, and performance in sixth grade gifted science classes. The goal of the study was to identify significant predictors of performance in sixth grade gifted science classes; group comparisons of performance were also made. Performance was defined as the numeric average of nine weeks' grades earned in sixth grade gifted science classes. The sample consisted of 100 subjects who had been enrolled in sixth grade gifted science classes over two years at a large, multiethnic public middle school in Dade County. The predictors analyzed were I.Q. score (all scales combined), full scale I.Q. score, verbal scale I.Q. score, performance scale I.Q. score, combined Stanford Achievement Test (SAT) score (Reading Comprehension plus Math Applications), SAT Reading Comprehension score, and SAT Math Applications score. Combined SAT score and SAT Math Applications score were significantly positively correlated with performance in sixth grade gifted science. Performance scale I.Q. score was significantly negatively correlated with performance. The other predictors examined were not significantly correlated with performance. Group comparisons showed that the mean average of nine weeks' grades for the full scale I.Q. group was greater than that of the verbal and performance scale I.Q. groups. Females outperformed males at a highly significant level. Mean g.p.a. across ethnic groups was greatest for Asian students, followed by white non-Hispanic, Hispanic, and black students. Students not receiving a lunch subsidy outperformed those receiving subsidies. Comparisons of performance based on gifted qualification plan showed that the mean g.p.a. of the traditional plan and Plan B groups did not differ. Mean g.p.a. was highest for students who qualified for gifted using the automatic Math Applications criterion, followed by the automatic Reading Comprehension criterion and the Plan B Matrix score. Both automatic qualification groups outperformed the traditional group, and the traditional group outperformed the Plan B Matrix group. No significant differences in mean g.p.a. were found between the Plan B subgroups and the traditional plan group.
Abstract:
Introduction: The fluocinolone acetonide slow-release implant (Iluvien®) was approved in the UK in December 2013 for the treatment of pseudophakic eyes with DMO unresponsive to other available therapies. This approval was based on evidence from the FAME trials, which were conducted at a time when ranibizumab was not available. There is a paucity of data on the implementation of guidance on selecting patients for this treatment modality, and on the real-world outcomes of fluocinolone therapy, especially in patients who have been unresponsive to ranibizumab therapy. Method: A retrospective study of consecutive patients treated with fluocinolone between January and August 2014 at three sites was conducted to evaluate the selection criteria used, baseline characteristics, and clinical outcomes at the 3-month time point. Results: Twenty-two pseudophakic eyes of 22 consecutive patients were included. The majority of patients had prior therapy with multiple intravitreal anti-VEGF injections. Four eyes had controlled glaucoma. At baseline, mean VA and CRT were 50.7 letters and 631 μm, respectively. After 3 months, 18 patients had improved CRT, of whom 15 also had improved VA. No adverse effects were noted. One additional patient required IOP-lowering medication. Despite being unresponsive to multiple prior therapies, including laser and anti-VEGF injections, switching to fluocinolone achieved treatment benefit. Conclusion: The patient-level selection criteria proposed by the NICE guidance on fluocinolone appeared to be implemented. The data from this study provide new evidence on early outcomes following fluocinolone therapy in eyes with DMO that had not responded to laser and other intravitreal agents.
Abstract:
The study of organism movement is essential to understanding how ecosystems function. In the case of exploited marine ecosystems, this leads to an interest in the spatial strategies of fishers. One of the most widely used approaches for modelling the movement of top predators is the Lévy random walk. A random walk is a mathematical model composed of random displacements. In the Lévy case, displacement lengths follow a Lévy stable distribution; moreover, as the lengths tend to infinity (in practice, when they are large, e.g. relative to the median or the third quartile), they follow a power law characteristic of the type of Lévy random walk (Cauchy, Brownian, or strictly Lévy). In practice, besides this property being used in the converse direction without theoretical justification, distribution tails, an otherwise imprecise notion, are modelled by power laws without any discussion of the sensitivity of the results to the definition of the tail, or of the adequacy of the goodness-of-fit tests and model selection criteria. In this work, based on the observed displacements of three Peruvian anchovy fishing vessels, several tail models (log-normal, exponential, truncated exponential, power law, and truncated power law) were compared, along with two possible definitions of the distribution tail (from the median to infinity, or from the third quartile to infinity). In terms of the statistical criteria and tests used, the truncated distributions (exponential and power law) proved best. They also account for the fact that, in practice, vessels do not exceed a certain displacement length. Model choice turned out to be sensitive to the choice of the start of the tail: for a given vessel, the choice of one truncated model or the other depends on the range of values of the variable over which the model is fitted. Finally, we discuss the ecological implications of these results.
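As a rough illustration of the kind of tail-model comparison described above, the sketch below fits two of the candidate models (exponential and power-law tails) above a chosen quantile by maximum likelihood and compares them with AIC. It is a simplified assumption-laden stand-in, not the study's actual analysis: the truncated variants favoured in the work would require their own likelihoods.

```python
import numpy as np

def compare_tail_models(steps, q=0.5):
    """Compare two simple tail models for displacement lengths above a cutoff
    (median by default; q=0.75 for the third quartile) via maximum likelihood
    and AIC. Truncated exponential / truncated power-law fits are not shown."""
    x = np.asarray(steps, dtype=float)
    xmin = np.quantile(x, q)
    tail = x[x >= xmin]
    n = len(tail)

    # Exponential tail: p(x) = lam * exp(-lam * (x - xmin)); MLE lam = 1 / mean(x - xmin)
    lam = 1.0 / np.mean(tail - xmin)
    ll_exp = n * np.log(lam) - lam * np.sum(tail - xmin)

    # Power-law tail: p(x) = (alpha - 1)/xmin * (x/xmin)^(-alpha); Clauset-style MLE
    s = np.sum(np.log(tail / xmin))
    alpha = 1.0 + n / s
    ll_pl = n * np.log(alpha - 1.0) - n * np.log(xmin) - alpha * s

    # One free parameter in each model, so AIC = 2 - 2 * log-likelihood
    return {"exponential": 2 - 2 * ll_exp, "power law": 2 - 2 * ll_pl}
```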
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that it has a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Further, some discussion of model selection criteria is given. The newly developed procedures are illustrated with two simulation studies and a real dataset previously analyzed under normal and skew-normal nonlinear regression models. (C) 2010 Elsevier B.V. All rights reserved.
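By way of illustration, the minimal sketch below computes Kullback-Leibler case-deletion diagnostics from MCMC output. It relies on the commonly used CPO-based identity and a harmonic-mean CPO estimator; this is an assumed construction for illustration, not necessarily the estimator used in the paper.

```python
import numpy as np
from scipy.special import logsumexp

def kl_case_deletion(loglik):
    """Case-deletion influence diagnostics from MCMC output.

    loglik: (S, n) array of log f(y_i | theta_s) for S posterior draws and n
    observations. Uses the identity
        K(P, P_(i)) = -log CPO_i + E_post[log f(y_i | theta)],
    with CPO_i estimated by the harmonic mean of the likelihoods."""
    S = loglik.shape[0]
    log_cpo = np.log(S) - logsumexp(-loglik, axis=0)   # stable harmonic-mean CPO
    return -log_cpo + loglik.mean(axis=0)              # one KL value per observation
```

Observations with large KL values would then be flagged as influential and examined further.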
Abstract:
Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains in forecasting accuracy and estimation uncertainty from two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length; criteria that select lag and rank simultaneously perform better in this case. Second, this translates into superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
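For context, the sketch below shows plain information-criterion lag selection for a VAR with statsmodels on simulated data. The modified criteria and the joint lag/rank selection studied in the paper are not available in statsmodels and are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Three toy integrated series standing in for cointegrated macro data.
rng = np.random.default_rng(0)
y = rng.standard_normal((200, 3)).cumsum(axis=0)

# Standard lag selection: reports the lag minimizing AIC, BIC, HQIC and FPE.
order = VAR(y).select_order(maxlags=8)
print(order.selected_orders)
# A VECM with the chosen lag could then be fitted; SCCF-restricted estimation
# as proposed in the paper would require custom code.
```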
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
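For reference, the generic forms of the criteria being compared (AIC, BIC and Hannan-Quinn) can be written as below. This is a textbook sketch in terms of the maximized log-likelihood, not the exact penalization used for joint lag-rank selection in the paper.

```python
import numpy as np

def information_criteria(loglik, k, T):
    """AIC, BIC (Schwarz) and Hannan-Quinn values from a fitted model's
    maximized log-likelihood, number of free parameters k and sample size T."""
    return {
        "AIC": -2 * loglik + 2 * k,
        "BIC": -2 * loglik + k * np.log(T),
        "HQ":  -2 * loglik + 2 * k * np.log(np.log(T)),
    }
```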
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A total of 61,528 weight records from 22,246 Nellore animals born between 1984 and 2002 were used to compare different multiple-trait analysis methods for birth to mature weights. The following models were used: the standard multivariate model (MV), five reduced-rank models fitting the first 1, 2, 3, 4 and 5 genetic principal components, and five models using factor analysis with 1, 2, 3, 4 and 5 factors. Direct additive genetic random effects and residual effects were included in all models. In addition, maternal genetic and maternal permanent environmental effects were included as random effects for birth and weaning weight. The models included contemporary group as a fixed effect, and age of animal at recording (except for birth weight) and age of dam at calving as linear and quadratic effects (for birth weight and weaning weight). The maternal genetic, maternal permanent environmental and residual (co)variance matrices were assumed to be full rank. According to the model selection criteria, the model fitting the first three principal components (PC3) provided the best fit, without the need for factor analysis models. Similar estimates of phenotypic, direct additive and maternal genetic, maternal permanent environmental and residual (co)variances were obtained with models MV and PC3. Direct heritability ranged from 0.21 (birth weight) to 0.45 (weight at 6 years of age). The genetic and phenotypic correlations obtained with model PC3 were slightly higher than those estimated with model MV. In general, the reduced-rank model substantially decreased the number of parameters in the analyses without reducing the goodness of fit. © 2013 Elsevier B.V.
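To make the reduced-rank idea concrete, the sketch below approximates a genetic (co)variance matrix by its leading principal components. It is purely illustrative: in the reduced-rank models above, the rank restriction is imposed during estimation rather than applied to a full-rank estimate afterwards.

```python
import numpy as np

def leading_pc_approximation(G, n_pc):
    """Rank-n_pc approximation of a symmetric genetic (co)variance matrix G
    built from its n_pc leading eigenvalue/eigenvector pairs."""
    eigval, eigvec = np.linalg.eigh(G)           # eigenvalues in ascending order
    keep = np.argsort(eigval)[::-1][:n_pc]       # indices of the largest n_pc
    V, D = eigvec[:, keep], np.diag(eigval[keep])
    return V @ D @ V.T
```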
Abstract:
We analyzed 46,161 monthly test-day records of milk production from 7453 first lactations of crossbred dairy Gyr (Bos indicus) x Holstein cows. The following seven models were compared: standard multivariate model (M10), three reduced rank models fitting the first 2, 3, or 4 genetic principal components, and three models considering a 2-, 3-, or 4-factor structure for the genetic covariance matrix. Full rank residual covariance matrices were considered for all models. The model fitting the first two principal components (PC2) was the best according to the model selection criteria. Similar phenotypic, genetic, and residual variances were obtained with models M10 and PC2. The heritability estimates ranged from 0.14 to 0.21 and from 0.13 to 0.21 for models M10 and PC2, respectively. The genetic correlations obtained with model PC2 were slightly higher than those estimated with model M10. PC2 markedly reduced the number of parameters estimated and the time spent to reach convergence. We concluded that two principal components are sufficient to model the structure of genetic covariances between test-day milk yields. © FUNPEC-RP.
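A back-of-the-envelope sketch of the parameter saving mentioned above, assuming M10 corresponds to ten monthly test-day traits (an assumption for illustration):

```python
def n_genetic_params(k, rank=None):
    """Number of genetic (co)variance parameters for k traits: k(k+1)/2 at full
    rank versus m*k - m*(m-1)/2 for a rank-m principal-component
    parameterisation (the usual counts for reduced-rank animal models)."""
    if rank is None:
        return k * (k + 1) // 2
    return rank * k - rank * (rank - 1) // 2

print(n_genetic_params(10), n_genetic_params(10, rank=2))   # 55 vs 19 parameters
```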
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This dissertation employs and develops Bayesian methods for use in typical geotechnical analyses, with a particular emphasis on (i) the assessment and selection of geotechnical models based on empirical correlations, and (ii) the development of probabilistic predictions of the outcomes expected for complex geotechnical models. Applications to geotechnical problems are developed as follows. (1) For intact rocks, we present a Bayesian framework for model assessment to estimate Young's moduli from the uniaxial compressive strength (UCS). The approach provides uncertainty estimates for parameters and predictions, and can differentiate among the sources of error. We develop 'rock-specific' models for common rock types, and illustrate that such 'initial' models can be 'updated' to incorporate new project-specific information as it becomes available, reducing model uncertainties and improving their predictive capabilities. (2) For rock masses, we present an approach, based on model selection criteria, to select the most appropriate model among a set of candidates for estimating the deformation modulus of a rock mass from a set of observed data. Once the most appropriate model is selected, a Bayesian framework is employed to develop predictive distributions of the deformation moduli of rock masses and to update them with new project-specific data. Such Bayesian updating can significantly reduce the associated predictive uncertainty and therefore affect computed estimates of the probability of failure, which is of significant interest for reliability-based rock engineering design. (3) In the preliminary design stage of rock engineering, information about geomechanical and geometrical parameters, in situ stress, or support parameters is often scarce or incomplete. This poses difficulties in applying traditional empirical correlations, which cannot make predictions from incomplete data. We therefore propose the use of Bayesian networks to deal with incomplete data; in particular, a Naïve Bayes classifier is developed to predict the probability of occurrence of tunnel squeezing from five input parameters that are commonly available, at least partially, at the design stage.
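As an illustration of point (3), the sketch below is a minimal Gaussian Naïve Bayes classifier that simply skips missing (NaN) features when scoring, showing how predictions can still be made from incomplete inputs. It is an assumed, simplified stand-in; the dissertation's squeezing classifier and its exact inputs and likelihoods are not reproduced here.

```python
import numpy as np

class NaiveBayesMissing:
    """Gaussian Naive Bayes that ignores missing (NaN) features at prediction."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.means_ = np.array([np.nanmean(X[y == c], axis=0) for c in self.classes_])
        self.vars_ = np.array([np.nanvar(X[y == c], axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict_proba(self, X):
        probs = []
        for x in X:
            obs = ~np.isnan(x)                      # use only observed features
            log_post = np.log(self.priors_).copy()
            for k in range(len(self.classes_)):
                mu, var = self.means_[k, obs], self.vars_[k, obs]
                log_post[k] += np.sum(-0.5 * np.log(2 * np.pi * var)
                                      - 0.5 * (x[obs] - mu) ** 2 / var)
            log_post -= log_post.max()              # normalise in log space
            p = np.exp(log_post)
            probs.append(p / p.sum())
        return np.array(probs)
```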
Abstract:
Mechanistic models used for prediction should be parsimonious, as models which are over-parameterised may have poor predictive performance. Determining whether a model is parsimonious requires comparisons with alternative model formulations with differing levels of complexity. However, creating alternative formulations for large mechanistic models is often problematic, and usually time-consuming. Consequently, few are ever investigated. In this paper, we present an approach which rapidly generates reduced model formulations by replacing a model’s variables with constants. These reduced alternatives can be compared to the original model, using data-based model selection criteria, to assist in the identification of potentially unnecessary model complexity, and thereby inform reformulation of the model. To illustrate the approach, we present its application to a published radiocaesium plant-uptake model, which predicts uptake on the basis of soil characteristics (e.g. pH, organic matter content, clay content). A total of 1024 reduced model formulations were generated, and ranked according to five model selection criteria: Residual Sum of Squares (RSS), AICc, BIC, MDL and ICOMP. The lowest scores for RSS and AICc occurred for the same reduced model, in which pH-dependent model components were replaced. The lowest scores for BIC, MDL and ICOMP occurred for a further reduced model in which model components related to the distinction between adsorption on clay and organic surfaces were replaced. Both these reduced models had a lower RSS for the parameterisation dataset than the original model. As a test of their predictive performance, the original model and the two reduced models outlined above were used to predict an independent dataset. The reduced models have lower prediction sums of squares than the original model, suggesting that the latter may be overfitted. The approach presented has the potential to inform model development by rapidly creating a class of alternative model formulations, which can be compared.
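A minimal sketch of this enumerate-and-rank idea follows. Here run_model is a hypothetical stand-in for the uptake model (it must return predictions with the named components held constant), and AICc is computed from the residual sum of squares under a Gaussian error assumption. With ten replaceable components the enumeration yields 2^10 = 1024 formulations, matching the count reported above.

```python
import itertools
import numpy as np

def aicc(rss, n_obs, n_params):
    """Corrected AIC from a residual sum of squares (Gaussian errors assumed)."""
    aic = n_obs * np.log(rss / n_obs) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n_obs - n_params - 1)

def rank_reduced_models(run_model, components, y_obs, n_params_full):
    """Enumerate every reduced formulation obtained by fixing subsets of the
    replaceable components to constants, and rank them by AICc (best first)."""
    results = []
    for r in range(len(components) + 1):
        for fixed in itertools.combinations(components, r):
            pred = run_model(set(fixed))                 # hypothetical model call
            rss = float(np.sum((y_obs - pred) ** 2))
            k = n_params_full - len(fixed)               # crude parameter count
            results.append((fixed, rss, aicc(rss, len(y_obs), k)))
    return sorted(results, key=lambda t: t[2])
```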
Abstract:
Despite the success of the ΛCDM model in describing the Universe, a possible tension between early- and late-Universe cosmological measurements is calling for new independent cosmological probes. Amongst the most promising ones, gravitational waves (GWs) can provide a self-calibrated measurement of the luminosity distance. However, to obtain cosmological constraints, additional information is needed to break the degeneracy between parameters in the gravitational waveform. In this thesis, we exploit the latest LIGO-Virgo-KAGRA Gravitational Wave Transient Catalog (GWTC-3) of GW sources to constrain the background cosmological parameters together with the astrophysical properties of Binary Black Holes (BBHs), using information from their mass distribution. We expand the public code MGCosmoPop, previously used for the application of this technique, by implementing a state-of-the-art model for the mass distribution, needed to account for the presence of non-trivial features: a truncated power law with two additional Gaussian peaks, referred to as Multipeak. We then analyse GWTC-3, comparing this model with simpler and more commonly adopted ones, both with fixed and with varying cosmology, and assess their goodness of fit with different model selection criteria as well as their constraining power on the cosmological and population parameters. We also begin to explore different sampling methods, namely Markov Chain Monte Carlo and Nested Sampling, comparing their performance and evaluating the advantages of each. We find concurring evidence that the Multipeak model is favoured by the data, in line with previous results, and show that this conclusion is robust to the variation of the cosmological parameters. We find a constraint on the Hubble constant of H0 = 61.10 (+38.65, −22.43) km/s/Mpc (68% C.L.), which shows the potential of this method in providing independent constraints on cosmological parameters. The results obtained in this work have been included in [1].
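To illustrate the shape of the mass model being compared, the sketch below evaluates a schematic "Multipeak"-style primary-mass density: a truncated power law plus two Gaussian peaks. The parameter names, weights and normalisation are illustrative assumptions and do not reproduce the exact MGCosmoPop parameterisation.

```python
import numpy as np

def multipeak_pdf(m, alpha, m_min, m_max, mu1, sig1, w1, mu2, sig2, w2):
    """Schematic Multipeak-style density: truncated power law m^(-alpha) on
    [m_min, m_max] (alpha != 1 assumed) mixed with two Gaussian peaks with
    weights w1 and w2."""
    m = np.atleast_1d(np.asarray(m, dtype=float))
    inside = (m >= m_min) & (m <= m_max)
    norm = (m_max ** (1 - alpha) - m_min ** (1 - alpha)) / (1 - alpha)
    pl = np.zeros_like(m)
    pl[inside] = m[inside] ** (-alpha) / norm
    g1 = np.exp(-0.5 * ((m - mu1) / sig1) ** 2) / (sig1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((m - mu2) / sig2) ** 2) / (sig2 * np.sqrt(2 * np.pi))
    return (1 - w1 - w2) * pl + w1 * g1 + w2 * g2
```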