992 results for "Monte Carlo, Método de"
Abstract:
Stellar differential rotation is an important key to understanding hydromagnetic stellar dynamos, instabilities, and transport processes in stellar interiors, as well as to a better treatment of tides in close binary and star-planet systems. Space-borne high-precision photometry with MOST, CoRoT, and Kepler has provided large and homogeneous datasets, allowing, for the first time, the study of differential rotation in statistically robust samples covering almost all stages of stellar evolution. To this end, we introduce a method to measure a lower limit to the amplitude of surface differential rotation from high-precision, evenly sampled photometric time series such as those obtained by space-borne telescopes. It is designed for application to main-sequence late-type stars whose optical flux modulation is dominated by starspots. An autocorrelation of the time series is used to select stars that allow an accurate determination of spot rotation periods. A simple two-spot model is applied together with the Bayesian Information Criterion to preliminarily select intervals of the time series showing evidence of differential rotation with starspots of almost constant area. Finally, the significance of the differential rotation detection and a measurement of its amplitude and uncertainty are obtained by an a posteriori Bayesian analysis based on a Markov chain Monte Carlo (hereafter MCMC) approach. We apply our method to the Sun and eight other stars for which previous spot modelling has been performed, to compare our results with previous ones. The selected stars are of spectral types F, G, and K. Among the main results of this work, we find that autocorrelation is a simple method for selecting stars with a coherent rotational signal, which is a prerequisite for a successful measurement of differential rotation through spot modelling.
For a proper MCMC analysis, it is necessary to take into account the strong correlations among different parameters that exist in spot modelling. For the planet-hosting star Kepler-30, we derive a lower limit to the relative amplitude of the differential rotation. We confirm that the Sun as a star in the optical passband is not suitable for a measurement of the differential rotation, owing to the rapid evolution of its photospheric active regions. In general, our method performs well in comparison with the more sophisticated procedures used to date in the study of stellar differential rotation.
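The autocorrelation selection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the single-spot light curve, its 10-day period, the 0.1-day cadence, and the noise level are all invented for the example.

```python
import math
import random

def autocorrelation(x, max_lag):
    """Sample autocorrelation function of an evenly sampled series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / var
            for lag in range(max_lag + 1)]

# Synthetic light curve: one dark spot on a star rotating with a 10-day
# period, sampled every 0.1 d, plus white photometric noise (all invented).
random.seed(1)
period, dt = 10.0, 0.1
flux = [1.0 - 0.01 * max(0.0, math.cos(2 * math.pi * t * dt / period))
        + random.gauss(0.0, 0.001) for t in range(3000)]

acf = autocorrelation(flux, 300)
# The highest ACF peak at a nonzero lag estimates the rotation period.
best_lag = max(range(50, 301), key=lambda k: acf[k])
print(best_lag * dt)  # recovers a period close to the 10-day input
```

A star whose ACF shows no such coherent peak would fail the selection and be excluded from the spot-modelling step.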
Abstract:
In this work we study the survival cure-rate model proposed by Yakovlev (1993), considered in a competing-risks setting. Covariates are introduced to model the cure rate, and we allow some covariates to have missing values. We consider only the case in which the missing covariates are categorical, and implement the EM algorithm via the method of weights for maximum-likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment we also evaluate the impact on the parameter estimates of increasing the proportion of immune and censored individuals relative to the non-immune ones. We demonstrate the proposed methodology with a real data set on the time to graduation in the undergraduate Statistics program of the Universidade Federal do Rio Grande do Norte.
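The Monte Carlo experiment described above rests on simulating data from a mixture cure model. Below is a minimal sketch under assumed ingredients (exponential latency, a fixed censoring time, and a crude grid-search MLE standing in for the EM algorithm with the method of weights); every numeric value is illustrative.

```python
import math
import random

random.seed(2)
pi_true, lam_true, cens = 0.30, 0.50, 8.0   # cure fraction, event rate, follow-up end

def simulate(n):
    """Draw (time, event-indicator) pairs from the mixture cure model."""
    data = []
    for _ in range(n):
        if random.random() < pi_true:            # immune (cured) individual
            data.append((cens, 0))               # always censored at end of follow-up
        else:
            t = random.expovariate(lam_true)
            data.append((min(t, cens), 1 if t < cens else 0))
    return data

def loglik(pi, lam, data):
    ll = 0.0
    for t, d in data:
        if d:                                    # observed event
            ll += math.log((1 - pi) * lam) - lam * t
        else:                                    # censored: cured, or survived past t
            ll += math.log(pi + (1 - pi) * math.exp(-lam * t))
    return ll

data = simulate(500)
# Crude grid-search MLE (stand-in for the EM machinery of the thesis).
grid = [(p / 100, l / 100) for p in range(5, 96, 5) for l in range(10, 155, 5)]
pi_hat, lam_hat = max(grid, key=lambda g: loglik(g[0], g[1], data))
print(pi_hat, lam_hat)  # should land near the true (0.30, 0.50)
```

Raising `pi_true` or lowering `cens` reproduces in miniature the experiment's point: more immunes and more censoring make the estimates noisier.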
Abstract:
Peng was the first to work with DFA (Detrended Fluctuation Analysis), a technique capable of detecting long-range autocorrelation in non-stationary time series. In this study, DFA is used to obtain the Hurst exponent (H) profile of the neutron porosity logs of the 52 oil wells in the Namorado Field, located in the Campos Basin, Brazil. The purpose is to determine whether the Hurst exponent can be used to characterize the spatial distribution of the wells; that is, we check whether wells with close values of H are also spatially close together. We use both a hierarchical clustering method and a non-hierarchical one (the k-means method), and compare the two to see which provides the better result. From this, a neighbourhood index was defined to check whether a grouping generated by the k-means method, rather than at random, in fact shows spatial patterns: high values of the index indicate that the data are aggregated, while low values indicate that the data are scattered (no spatial correlation). Using the Monte Carlo method, we show that randomly combined data yield index values below the empirical one, so the H values obtained from the 52 wells are indeed geographically grouped. Cross-checking the data from standard curves against the results obtained by k-means confirms that the method is effective for correlating wells by spatial distribution.
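A compact version of DFA as used above can be written directly from its definition (integrated profile, piecewise linear detrending, log-log slope of the fluctuation function). This is an illustrative pure-Python sketch, not the authors' code; the scales and the white-noise series are invented.

```python
import math
import random

def dfa(series, scales):
    """DFA-1: returns the scaling exponent (an estimate of the Hurst-like H)."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for v in series:                 # integrated (cumulative-sum) profile
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        f2, nseg = 0.0, len(profile) // n
        for i in range(nseg):
            seg = profile[i * n:(i + 1) * n]
            xm, ym = (n - 1) / 2.0, sum(seg) / n
            sxx = sum((x - xm) ** 2 for x in range(n))
            b = sum((x - xm) * (y - ym) for x, y in enumerate(seg)) / sxx
            a = ym - b * xm          # least-squares linear trend in the window
            f2 += sum((y - (a + b * x)) ** 2 for x, y in enumerate(seg)) / n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(f2 / nseg))
    xm = sum(log_n) / len(log_n)
    ym = sum(log_f) / len(log_f)
    # Slope of log F(n) versus log n is the DFA exponent
    return (sum((x - xm) * (y - ym) for x, y in zip(log_n, log_f))
            / sum((x - xm) ** 2 for x in log_n))

random.seed(3)
white = [random.gauss(0.0, 1.0) for _ in range(4000)]
h = dfa(white, [16, 32, 64, 128, 256])
print(round(h, 2))  # uncorrelated noise should give H near 0.5
```

Persistent series (H > 0.5), such as well-log profiles, would yield a steeper log-log slope than the uncorrelated case shown here.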
Abstract:
The diffusive epidemic process (PED) is a nonequilibrium stochastic model which exhibits a phase transition to an absorbing state. In the model, healthy (A) and sick (B) individuals diffuse on a lattice with diffusion constants DA and DB, respectively. According to a Wilson renormalization-group calculation, the system presents a first-order phase transition in the case DA > DB. Several researchers performed simulations to test this conjecture, but it was not possible to observe the first-order phase transition; the explanation given was that simulations in higher dimensions would be needed. In this work we investigate the critical behavior of diffusive epidemic propagation with Lévy interactions (PEDL) in one dimension. The Lévy distribution allows diffusion interactions of all sizes, taking the one-dimensional system toward higher-dimensional behavior. We try to settle this controversy, which remains unresolved, for the case DA > DB. We use the Monte Carlo method with resuscitation: this method adds a sick individual to the system whenever the order parameter (the density of sick individuals) goes to zero. We apply finite-size scaling to estimate the critical point and the critical exponents, including the dynamical exponent z, for the case DA > DB.
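The resuscitation step can be illustrated with a much simpler absorbing-state model than the PEDL itself. The sketch below uses a plain one-dimensional contact process (not the A/B diffusive dynamics of the abstract) just to show how one sick individual is re-seeded whenever the density reaches zero; the ring size, infection rate, and number of updates are illustrative.

```python
import random

random.seed(4)
L, lam, steps = 100, 3.5, 50000    # ring size, infection rate, update attempts
infected = {random.randrange(L)}
resuscitations = 0
density_sum = 0.0

for _ in range(steps):
    site = random.choice(tuple(infected))
    if random.random() < lam / (1.0 + lam):
        # infection attempt on a random nearest neighbour
        infected.add((site + random.choice((-1, 1))) % L)
    else:
        infected.discard(site)      # recovery
    if not infected:
        # Resuscitation: re-seed one sick individual so the simulation
        # never sticks at the absorbing (all-healthy) state
        infected.add(random.randrange(L))
        resuscitations += 1
    density_sum += len(infected) / L

mean_density = density_sum / steps
print(mean_density, resuscitations)
```

Near the critical point the mean density and the resuscitation count, recorded for several lattice sizes L, are the raw inputs to a finite-size-scaling analysis.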
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Knowledge of the genome can help identify chromosomal regions and, eventually, genes that control quantitative traits (QTLs) of economic importance. In an experiment with 1,129 pigs resulting from crosses between Meishan boars and Large White and Landrace sows, the traits analysed were intramuscular fat (GIM, in %) and weight gain from 25 to 90 kg of live weight (GP, in g/day), measured in 298 F1 and 831 F2 animals, and backfat thickness (ET, in mm), measured in 324 F1 and 805 F2 animals. The F1 and F2 animals were genotyped with 29 microsatellite markers. Linkage between chromosomes 4, 6, and 7 and GIM, ET, and GP was studied. QTL analyses using Bayesian methodology were applied under three genetic models: an infinitesimal polygenic model (MPI); a finite polygenic model (MPF) with three loci; and the MPF combined with the MPI. The number of QTLs, their positions on the three chromosomes, and their phenotypic effects were estimated simultaneously. Summaries of the estimated parameters were based on the marginal posterior distributions obtained through Markov chain Monte Carlo (MCMC) algorithms. Two QTLs related to GIM were found on chromosomes 4 and 6, and two related to ET on chromosomes 4 and 7. Only when the MPI was fitted were QTLs for ET and GIM observed on chromosome 4. It was not possible to detect QTLs for GP with this methodology, which may result from the use of uninformative markers or from the absence of QTLs segregating on chromosomes 4, 6, and 7 in this population. The advantage of analysing experimental data under different genetic models was demonstrated; these analyses illustrate the usefulness and broad applicability of the Bayesian method.
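The marginal posterior summaries mentioned above come from MCMC sampling. As an illustration of the mechanics only (the QTL models are far richer), here is a random-walk Metropolis sampler for the posterior mean of simulated normal data under a flat prior; all numbers are invented.

```python
import math
import random

random.seed(5)
# Simulated phenotypes standing in for real trait records (values invented)
data = [random.gauss(2.0, 1.0) for _ in range(100)]
xbar = sum(data) / len(data)

def log_post(mu):
    """Log posterior of mu up to a constant: flat prior, known unit variance."""
    return -0.5 * sum((x - mu) ** 2 for x in data)

samples, mu = [], 0.0
for i in range(6000):
    prop = mu + random.gauss(0.0, 0.3)       # random-walk Metropolis proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                            # accept
    if i >= 1000:                            # discard burn-in
        samples.append(mu)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 2))  # matches the sample mean under a flat prior
```

The retained `samples` are a draw from the marginal posterior; their mean, quantiles, and credible intervals are the kind of summaries the abstract reports for QTL positions and effects.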
Abstract:
Pós-graduação em Biofísica Molecular - IBILCE
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Agronomia (Energia na Agricultura) - FCA
Abstract:
The James-Stein estimator is a biased shrinkage estimator with uniformly smaller risk than the sample-mean estimator for the mean of a multivariate normal distribution, except in the one- and two-dimensional cases. In this work we use more heuristic arguments and intensify the geometric treatment of the theory of the James-Stein estimator. New James-Stein-type shrinkage estimators are proposed, and the Mahalanobis metric is used to address the James-Stein estimator. To evaluate the performance of the proposed estimator relative to the sample-mean estimator, we use computer simulation by the Monte Carlo method, calculating the mean squared error. The results indicate that the new estimator performs better than the sample-mean estimator.
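The Monte Carlo risk comparison the abstract describes can be reproduced in miniature for the classical (positive-part) James-Stein estimator; the dimension, true mean, and number of replicates below are illustrative, and the new estimators proposed in the thesis are not implemented here.

```python
import random

random.seed(6)
p, trials = 10, 2000
theta = [1.0] * p                   # invented true mean vector, ||theta||^2 = 10

mse_mle = mse_js = 0.0
for _ in range(trials):
    x = [random.gauss(t, 1.0) for t in theta]       # one draw X ~ N(theta, I_p)
    shrink = max(0.0, 1.0 - (p - 2) / sum(v * v for v in x))
    js = [shrink * v for v in x]                    # positive-part James-Stein
    mse_mle += sum((a - b) ** 2 for a, b in zip(x, theta)) / trials
    mse_js += sum((a - b) ** 2 for a, b in zip(js, theta)) / trials

print(round(mse_mle, 2), round(mse_js, 2))  # shrinkage gives the smaller risk
```

The unshrunk estimator's risk equals p here, while the James-Stein risk falls below it for p >= 3, exactly the dominance property the abstract starts from.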
Abstract:
This work comprises two parts. The first part contains the theory used; the second contains two articles. The first article examines two models from the class of generalized linear models for analysing a mixture experiment that studied the effect of different diets, consisting of fat, carbohydrate, and fiber, on tumor expression in the mammary glands of female rats, measured as the proportion of rats that expressed tumors on a given diet. Mixture experiments are characterized by collinearity effects and small sample sizes; in this setting, assuming normality for the response to be maximized or minimized may be inadequate. Given this, the main characteristics of the logistic regression and simplex models are addressed. The models were compared by the AIC, BIC, and ICOMP model-selection criteria, simulated-envelope plots for the residuals of the fitted models, and odds-ratio plots with their respective confidence intervals for each mixture component. The first article concludes that the simplex regression model showed the better quality of fit and the narrowest confidence intervals for the odds ratio. The second article presents the Boosted Simplex Regression model, the boosting version of the simplex regression model, as an alternative for increasing the precision of the confidence intervals for the odds ratio of each mixture component. For this, the Monte Carlo method was used to construct the confidence intervals. Moreover, the simulated-envelope plot for the residuals of a model fitted via the boosting algorithm is presented in an innovative way. It is concluded that the Boosted Simplex Regression model was fitted successfully and that the confidence intervals for the odds ratio were accurate and slightly more precise than those of its maximum-likelihood version.
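The Monte Carlo construction of a confidence interval for an odds ratio can be sketched as parametric resampling of two binomial arms. The 2x2 counts below are hypothetical, and this is only the percentile-interval idea, not the Boosted Simplex Regression procedure itself.

```python
import random

random.seed(7)
# Hypothetical 2x2 table: events / trials in two groups
e1, n1 = 30, 100
e2, n2 = 15, 100

def odds_ratio(a, na, b, nb):
    return (a / (na - a)) / (b / (nb - b))

or_hat = odds_ratio(e1, n1, e2, n2)

# Monte Carlo percentile interval: resample each arm from its fitted binomial
sims = []
for _ in range(4000):
    s1 = sum(random.random() < e1 / n1 for _ in range(n1))
    s2 = sum(random.random() < e2 / n2 for _ in range(n2))
    if 0 < s1 < n1 and 0 < s2 < n2:          # odds ratio undefined at the edges
        sims.append(odds_ratio(s1, n1, s2, n2))

sims.sort()
lo, hi = sims[int(0.025 * len(sims))], sims[int(0.975 * len(sims))]
print(round(or_hat, 2), round(lo, 2), round(hi, 2))
```

The width of `[lo, hi]` is the quantity the second article aims to shrink by fitting the model via boosting before resampling.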