957 results for Real data
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Moreover, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
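A minimal sketch of the eigenvector-based reconstruction step this abstract describes, using a synthetic Fisher matrix over redshift bins of H(z); the binning, fiducial curve, and number of retained components are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical setup: H(z) parameterized by its values in n_bins redshift bins.
rng = np.random.default_rng(0)
n_bins = 20
z = np.linspace(0.0, 1.5, n_bins)

# Illustrative Fisher matrix for the binned H(z) values (in practice it would be
# computed analytically from the supernova likelihood, as the abstract states).
A = rng.normal(size=(200, n_bins))
F = A.T @ A / 200.0

# Principal components = eigenvectors of the Fisher matrix, ordered by eigenvalue.
eigval, eigvec = np.linalg.eigh(F)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Keep only the best-constrained modes and reconstruct H(z) as a linear
# combination of them around a fiducial curve.
n_keep = 3
H_fid = 70.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)           # illustrative fiducial curve
coeffs = rng.normal(scale=1.0 / np.sqrt(eigval[:n_keep]))  # toy coefficients with Fisher-level scatter
H_rec = H_fid + eigvec[:, :n_keep] @ coeffs

print(np.round(H_rec, 1))
```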
Abstract:
The kinetics of the ethoxylation of fatty alcohols catalyzed by potassium hydroxide was studied to obtain the rate constants needed for modeling the industrial process. Experimental data obtained in a lab-scale semibatch autoclave reactor were used to evaluate kinetic and equilibrium parameters. The kinetic model was then used to simulate the performance of an industrial-scale spray tower reactor for fatty alcohol ethoxylation. The reactor model considers that mass transfer and reaction occur independently in two distinct zones of the reactor. Good agreement between the model predictions and real data was found. These findings confirm the reliability of the kinetic and reactor models for simulating fatty alcohol ethoxylation processes under industrial conditions.
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set.
Abstract:
Computer viruses are an important risk to computational systems, endangering both corporations of all sizes and personal computers used for domestic applications. Here, classical epidemiological models for disease propagation are adapted to computer networks and, by using simple system identification techniques, a model called SAIC (Susceptible, Antidotal, Infectious, Contaminated) is developed. Real data about computer viruses are used to validate the model.
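A rough sketch of a compartmental ODE system with the four SAIC populations named above. The flow structure and rate constants here are illustrative assumptions for demonstration only, not the SAIC equations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative compartments: Susceptible (S), Antidotal (A), Infectious (I),
# Contaminated (C). Flows and rates below are assumed, not taken from the paper.
beta, alpha, delta, sigma = 0.4, 0.05, 0.1, 0.02

def rhs(t, y):
    S, A, I, C = y
    N = S + A + I + C
    new_inf = beta * S * I / N   # susceptible machines become infectious
    cured = alpha * I            # infectious machines receive the antidote
    disabled = delta * I         # infectious machines become contaminated (disabled)
    lose_prot = sigma * A        # antidotal machines slowly lose protection
    return [-new_inf + lose_prot, cured - lose_prot, new_inf - cured - disabled, disabled]

sol = solve_ivp(rhs, (0, 200), [990, 0, 10, 0], t_eval=np.linspace(0, 200, 5))
print(np.round(sol.y, 1))
```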
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switching. Problems of this kind have recently been considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps.
Abstract:
The inverse Weibull distribution has the ability to model failure rates that are quite common in reliability and biological studies. A three-parameter generalized inverse Weibull distribution with decreasing and unimodal failure rate is introduced and studied. We provide a comprehensive treatment of the mathematical properties of the new distribution, including expressions for the moment generating function and the rth generalized moment. The mixture model of two generalized inverse Weibull distributions is investigated, and the identifiability property of the mixture model is demonstrated. For the first time, we propose a location-scale regression model based on the log-generalized inverse Weibull distribution for modeling lifetime data. In addition, we develop some diagnostic tools for sensitivity analysis. Two applications to real data are given to illustrate the potential of the proposed regression model.
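A minimal sketch of the density and failure-rate function of a generalized inverse Weibull distribution, assuming the common three-parameter form F(x) = exp(-γ(β/x)^α); the paper's exact parameterization may differ.

```python
import numpy as np

def giw_cdf(x, alpha, beta, gamma=1.0):
    # Assumed generalized inverse Weibull cdf: F(x) = exp(-gamma * (beta / x) ** alpha)
    return np.exp(-gamma * (beta / x) ** alpha)

def giw_pdf(x, alpha, beta, gamma=1.0):
    # Density obtained by differentiating the cdf above.
    return gamma * alpha * beta ** alpha * x ** (-(alpha + 1)) * giw_cdf(x, alpha, beta, gamma)

def giw_hazard(x, alpha, beta, gamma=1.0):
    # Failure (hazard) rate h(x) = f(x) / (1 - F(x)); unimodal for this family.
    return giw_pdf(x, alpha, beta, gamma) / (1.0 - giw_cdf(x, alpha, beta, gamma))

x = np.linspace(0.1, 5.0, 50)
print(np.round(giw_hazard(x, alpha=2.0, beta=1.0, gamma=1.5), 3))
```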
Abstract:
A five-parameter distribution, the so-called beta modified Weibull distribution, is defined and studied. The new distribution contains, as special submodels, several important distributions discussed in the literature, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among others. The new distribution can be used effectively in the analysis of survival data since it accommodates monotone, unimodal and bathtub-shaped hazard functions. We derive the moments and examine the order statistics and their moments. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to illustrate the importance and flexibility of the new distribution.
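A sketch of the beta-generated construction behind such a density, assuming the Lai-Xie-Murthy modified Weibull baseline G(x) = 1 - exp(-a x^γ e^{λx}); the parameter names and the baseline parameterization are assumptions, not necessarily those of the paper.

```python
import numpy as np
from scipy.special import beta as beta_fn

def mw_cdf(x, a, gam, lam):
    # Assumed modified Weibull baseline cdf: G(x) = 1 - exp(-a * x**gam * exp(lam * x))
    return 1.0 - np.exp(-a * x ** gam * np.exp(lam * x))

def mw_pdf(x, a, gam, lam):
    # Baseline density g(x) obtained by differentiating G(x).
    return a * x ** (gam - 1.0) * (gam + lam * x) * np.exp(lam * x) * (1.0 - mw_cdf(x, a, gam, lam))

def beta_mw_pdf(x, a, gam, lam, p, q):
    # Beta-generated density: f(x) = g(x) * G(x)**(p-1) * (1-G(x))**(q-1) / B(p, q)
    G = mw_cdf(x, a, gam, lam)
    return mw_pdf(x, a, gam, lam) * G ** (p - 1.0) * (1.0 - G) ** (q - 1.0) / beta_fn(p, q)

x = np.linspace(0.01, 3.0, 50)
print(np.round(beta_mw_pdf(x, a=1.0, gam=0.5, lam=0.8, p=2.0, q=1.5), 3))
```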
Abstract:
A bathtub-shaped failure rate function is very useful in survival analysis and reliability studies, but the well-known lifetime distributions do not have this property. For the first time, we propose a location-scale regression model based on the logarithm of an extended Weibull distribution which has the ability to deal with bathtub-shaped failure rate functions. We use the method of maximum likelihood to estimate the model parameters, and some inferential procedures are presented. We reanalyze a real data set under the new model and the log-modified Weibull regression model. We perform a model check based on martingale-type residuals and generated envelopes, and use the AIC and BIC statistics to select appropriate models.
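A minimal sketch of the martingale-type residuals mentioned above for a parametric lifetime fit with censoring. A plain Weibull model and synthetic data are used as a stand-in; this is not the paper's extended Weibull regression.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative censored sample: times t and event indicator d (1 = failure, 0 = censored).
rng = np.random.default_rng(1)
t = rng.weibull(1.5, size=100) * 2.0
d = (rng.uniform(size=100) < 0.8).astype(float)

def negloglik(par):
    k, lam = np.exp(par)                                  # shape and scale, kept positive
    logh = np.log(k / lam) + (k - 1) * np.log(t / lam)    # log hazard
    H = (t / lam) ** k                                    # cumulative hazard
    return -(np.sum(d * logh) - np.sum(H))

fit = minimize(negloglik, x0=[0.0, 0.0])
k_hat, lam_hat = np.exp(fit.x)

# Martingale-type residuals: observed events minus fitted cumulative hazard.
martingale = d - (t / lam_hat) ** k_hat
print(np.round(martingale[:10], 2))
```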
Abstract:
In this paper, we present various diagnostic methods for polyhazard models. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single hazard models, such as the Weibull and log-logistic models, is that they accommodate a wide range of nonmonotone hazard shapes, such as bathtub and multimodal curves. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. The computation of the likelihood displacement, as well as the normal curvature in the local influence method, is discussed. Finally, an example with real data is given for illustration.
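Polyhazard models add the hazards of two or more latent components. A minimal sketch with two Weibull components (a poly-Weibull model) and illustrative parameters, showing how the summed hazard can take a bathtub shape:

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    # Weibull hazard: h(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1.0)

# Polyhazard model: total hazard is the sum of the latent components' hazards.
# A decreasing component (early failures) plus an increasing one (wear-out)
# gives a bathtub-shaped curve; the parameters are illustrative.
t = np.linspace(0.05, 5.0, 100)
h_total = weibull_hazard(t, shape=0.5, scale=1.0) + weibull_hazard(t, shape=3.0, scale=3.0)
print(np.round(h_total[::10], 3))
```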
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. Then the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and some ways to perform global influence analysis, are derived. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart.
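One way to carry out the kind of comparison described above is with the zero-inflated count models in statsmodels; the sketch below fits ZIP and ZINB to synthetic zero-inflated counts and compares AIC. The data and design matrices are illustrative, and this is not the paper's implementation.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedNegativeBinomialP,
                                               ZeroInflatedPoisson)

# Synthetic overdispersed, zero-inflated counts (illustrative data only).
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.8 * x)
counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # negative binomial draws with mean mu
counts[rng.uniform(size=n) < 0.3] = 0                       # extra structural zeros

# Fit ZIP and ZINB with the same mean and inflation design matrices, compare AIC.
zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=X).fit(disp=0)
zinb_fit = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X).fit(disp=0)
print("AIC  ZIP: %.1f   ZINB: %.1f" % (zip_fit.aic, zinb_fit.aic))
```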
Abstract:
We study in detail the so-called beta modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that the generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models some widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among several others. It also provides more flexibility to analyse complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the chf, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performance of the estimates from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.
Abstract:
The application of airborne laser scanning (ALS) technologies in forest inventories has shown great potential to improve the efficiency of forest planning activities. Precise estimates, fast assessment and relatively low complexity can explain the good results in terms of efficiency. The evolution of GPS and inertial measurement technologies, as well as the lower assessment costs observed when these technologies are applied to large-scale studies, can explain the increasing dissemination of ALS technologies. The good quality of the results can be expressed by estimates of volume and basal area with estimated errors below 8.4%, depending on the size of the sampled area, the number of laser pulses per square meter and the number of control plots. This paper analyzes the potential of an ALS assessment to produce certain forest inventory statistics in plantations of cloned Eucalyptus spp with precision equal or superior to conventional methods. The statistics of interest in this case were: volume, basal area, mean height and mean height of dominant trees. The ALS flight for data assessment covered two strips of approximately 2 by 20 km, in which clouds of points were sampled in circular plots with a radius of 13 m. Plots were sampled in different parts of the strips to cover different stand ages. From the clouds of points generated by the ALS assessment, the following statistics were calculated: overall mean height, standard error, five percentiles (the heights below which 10%, 30%, 50%, 70% and 90% of the ALS points above ground level are found), and the density of points above ground level in each percentile. The ALS statistics were used in regression models to estimate mean diameter, mean height, mean height of dominant trees, basal area and volume. Conventional forest inventory sample plots provided the real data. For volume, an exploratory assessment involving different combinations of ALS statistics allowed for the definition of the most promising relationships and fitting tests based on well-known forest biometric models. The models based on ALS statistics that produced the best results involved: the 30% percentile to estimate mean diameter (R² = 0.88 and MQE% = 0.0004); the 10% and 90% percentiles to estimate mean height (R² = 0.94 and MQE% = 0.0003); the 90% percentile to estimate dominant height (R² = 0.96 and MQE% = 0.0003); the 10% percentile and the mean height of ALS points to estimate basal area (R² = 0.92 and MQE% = 0.0016); and age and the 30% and 90% percentiles to estimate volume (R² = 0.95 and MQE% = 0.002). Among the tested forest biometric models, the best fits were provided by the modified Schumacher model using age and the 90% percentile, the modified Clutter model using age, the mean height of ALS points and the 70% percentile, and the modified Buckman model using age, the mean height of ALS points and the 10% percentile.
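A minimal sketch of the metric-then-regression workflow described above: compute per-plot height percentiles from ALS returns and regress a field-measured attribute on them. The synthetic point heights, the choice of the 30% and 90% percentiles, and the linear form are illustrative, not the paper's fitted models.

```python
import numpy as np

# Illustrative plot-level data: each entry of `heights` holds the above-ground
# heights of ALS returns that fell inside one sample plot (synthetic values).
rng = np.random.default_rng(3)
n_plots = 40
heights = [rng.gamma(shape=4.0, scale=4.0, size=rng.integers(200, 800)) for _ in range(n_plots)]

# ALS metrics per plot: mean height and the 10%/30%/50%/70%/90% height percentiles.
metrics = np.array([
    [h.mean()] + list(np.percentile(h, [10, 30, 50, 70, 90])) for h in heights
])

# Field-measured volume per plot (synthetic stand-in for the "real data"), then an
# ordinary least-squares fit of volume on the 30% and 90% percentiles.
volume = 5.0 + 0.8 * metrics[:, 2] + 1.5 * metrics[:, 5] + rng.normal(scale=2.0, size=n_plots)
X = np.column_stack([np.ones(n_plots), metrics[:, 2], metrics[:, 5]])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(np.round(coef, 2))
```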
Abstract:
We introduce the log-beta Weibull regression model based on the beta Weibull distribution (Famoye et al., 2005; Lee et al., 2007). We derive expansions for the moment generating function which do not depend on complicated functions. The new regression model represents a parametric family of models that includes as sub-models several widely known regression models that can be applied to censored survival data. We employ a frequentist analysis, a jackknife estimator, and a parametric bootstrap for the parameters of the proposed model. We derive the appropriate matrices for assessing local influences on the parameter estimates under different perturbation schemes and present some ways to assess global influences. Further, for different parameter settings, sample sizes, and censoring percentages, several simulations are performed. In addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be extended to a modified deviance residual in the proposed regression model applied to censored data. We define martingale and deviance residuals to evaluate the model assumptions. The extended regression model is very useful for the analysis of real data and can give more realistic fits than other special regression models.
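A minimal sketch of the leave-one-out jackknife standard error mentioned above, applied to a simple least-squares slope; the same recipe applies to more elaborate estimators such as a log-beta Weibull regression fit. The model and data here are illustrative only.

```python
import numpy as np

# Illustrative data and a simple least-squares slope as the estimator of interest.
rng = np.random.default_rng(4)
n = 60
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

def slope(xs, ys):
    X = np.column_stack([np.ones(len(xs)), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0][1]

theta_hat = slope(x, y)

# Leave-one-out replicates and the jackknife standard error.
reps = np.array([slope(np.delete(x, i), np.delete(y, i)) for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
print(round(theta_hat, 3), round(se_jack, 3))
```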
Abstract:
Polytomous Item Response Theory Models provides a unified, comprehensive introduction to the range of polytomous models available within item response theory (IRT). It begins by outlining the primary structural distinction between the two major types of polytomous IRT models. This focuses on the two types of response probability that are unique to polytomous models and their associated response functions, which are modeled differently by the different types of IRT model. It describes, both conceptually and mathematically, the major specific polytomous models, including the Nominal Response Model, the Partial Credit Model, the Rating Scale Model, and the Graded Response Model. Important variations, such as the Generalized Partial Credit Model, are also described, as are less common variations, such as the Rating Scale version of the Graded Response Model. Relationships among the models are also investigated, and the operation of measurement information is described for each major model. Practical examples of the major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
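A minimal sketch of category probabilities under the Graded Response Model, one of the polytomous IRT models named above; the item parameters are illustrative.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded Response Model: P(X >= k) = logistic(a * (theta - b_k)), and the
    category probabilities are differences of adjacent cumulative curves."""
    b = np.asarray(b)                                  # ordered thresholds b_1 < ... < b_{K-1}
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # P(X >= 1), ..., P(X >= K-1)
    cum = np.concatenate(([1.0], cum, [0.0]))          # add P(X >= 0) = 1 and P(X >= K) = 0
    return cum[:-1] - cum[1:]                          # P(X = k) for k = 0, ..., K-1

# A five-category item with discrimination a = 1.2 and illustrative thresholds.
probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.5, -0.5, 0.5, 1.5])
print(np.round(probs, 3), probs.sum())
```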
Abstract:
In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimating the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
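A minimal sketch of the bootstrap approach to standard errors of mixture component means, using scikit-learn's EM-based Gaussian mixture fit on synthetic univariate data; the data, component count, and number of bootstrap replicates are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic univariate two-component data (illustrative).
rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(4.0, 1.0, 100)]).reshape(-1, 1)

def fitted_means(x):
    gm = GaussianMixture(n_components=2, n_init=3, random_state=0).fit(x)
    return np.sort(gm.means_.ravel())   # sort so component labels stay comparable across fits

point = fitted_means(data)

# Bootstrap: refit the mixture on resampled data and take the spread of the fitted means.
B = 200
boot = np.array([fitted_means(data[rng.integers(0, len(data), len(data))]) for _ in range(B)])
se_boot = boot.std(axis=0, ddof=1)
print(np.round(point, 2), np.round(se_boot, 3))
```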