74 results for Discrete Data Models


Relevance: 40.00%

Abstract:

We introduce in this paper a new class of discrete generalized nonlinear models that extends the binomial, Poisson and negative binomial models to cope with count data. This class includes important special cases such as log-nonlinear, logit, probit and negative binomial nonlinear models, as well as generalized Poisson and generalized negative binomial regression models, enabling a wide range of models to be fitted to count data. We derive an iterative process for fitting these models by maximum likelihood and discuss inference on the parameters. The usefulness of the new class of models is illustrated with an application to a real data set. (C) 2008 Elsevier B.V. All rights reserved.
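The iterative maximum-likelihood fitting described above can be sketched for the simplest member of the class, a Poisson log-linear model, via iteratively reweighted least squares. This is an illustrative assumption on made-up data, not the authors' algorithm for the full nonlinear class:

```python
import math

# Made-up count data: x is a single covariate, y the observed counts.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1, 2, 4, 7, 12]

# Start from the intercept-only fit.
b0, b1 = math.log(sum(y) / len(y)), 0.0
for _ in range(40):
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    # Working response z and weights w for the log link (one IRLS step).
    z = [(b0 + b1 * xi) + (yi - mi) / mi for xi, yi, mi in zip(x, y, mu)]
    w = mu
    # Closed-form weighted least squares for intercept and slope.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swz = sum(wi * zi for wi, zi in zip(w, z))
    swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
    det = sw * swxx - swx * swx
    b0 = (swxx * swz - swx * swxz) / det
    b1 = (sw * swxz - swx * swz) / det

# At the MLE of a Poisson model with an intercept, the fitted means
# reproduce the observed total count exactly.
fitted_total = sum(math.exp(b0 + b1 * xi) for xi in x)
```

The same IRLS scheme generalizes to the other members of the class by changing the variance function and link that define z and w.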

Relevance: 40.00%

Abstract:

In survival analysis applications, the failure rate function frequently presents a unimodal shape, in which case the log-normal or log-logistic distributions are used. In this paper we are concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the results compared with the performance of the log-logistic and log-Burr XII regression models. In addition, sensitivity analysis is used to detect influential or outlying observations, and residual analysis is used to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models. (C) 2008 Published by Elsevier B.V.
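The unimodal failure rate referred to above is easy to see for the standard two-parameter Burr XII distribution, whose hazard has a closed form; the shape values c and k below are illustrative only:

```python
def burr_xii_hazard(t, c=2.0, k=1.0):
    # Standard two-parameter Burr XII: S(t) = (1 + t**c) ** (-k), so
    # h(t) = f(t) / S(t) = c * k * t**(c - 1) / (1 + t**c).
    return c * k * t ** (c - 1) / (1.0 + t ** c)

# With c > 1 the hazard rises and then falls: a unimodal failure rate.
early = burr_xii_hazard(0.1)
peak = burr_xii_hazard(1.0)
late = burr_xii_hazard(10.0)
```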

Relevance: 40.00%

Abstract:

The absorption spectrum of the acid form of pterin in water was investigated theoretically. Different procedures using continuum, discrete, and explicit models were used to include the solvation effect on the absorption spectrum, characterized by two bands. The discrete and explicit models used Monte Carlo simulation to generate the liquid structure and time-dependent density functional theory (B3LYP/6-31+G(d)) to obtain the excitation energies. The discrete model failed to give the correct qualitative effect on the second absorption band. The continuum model, in turn, has given a correct qualitative picture and a semiquantitative description. The explicit use of 29 solvent molecules, forming a hydration shell of 6 angstrom, embedded in the electrostatic field of the remaining solvent molecules, gives absorption transitions at 3.67 and 4.59 eV, in excellent agreement with the S(0)-S(1) and S(0)-S(2) absorption bands at 3.66 and 4.59 eV, respectively, that characterize the experimental spectrum of pterin in a water environment. (C) 2010 Wiley Periodicals, Inc. Int J Quantum Chem 110: 2371-2377, 2010

Relevance: 30.00%

Abstract:

The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65), both genders, with Class I and Class II Division 1 malocclusions, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial / final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width) and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between mean dental arch widths estimated by the three studied methods, prior to and after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provided reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.
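The repeated-measures one-way ANOVA used above partitions the total sum of squares into subject (dental model), method, and error components. The toy sketch below, on invented widths for three hypothetical measuring methods, shows the partition and the F statistic:

```python
# Invented intercanine widths (mm) for four dental models, each measured by
# three hypothetical methods (plaster model, photocopy, digitized image).
widths = [
    [34.1, 34.0, 34.2],
    [33.8, 33.9, 33.7],
    [34.5, 34.4, 34.6],
    [33.9, 34.0, 34.0],
]
n, k = len(widths), len(widths[0])
grand = sum(sum(row) for row in widths) / (n * k)
row_means = [sum(row) / k for row in widths]
col_means = [sum(row[j] for row in widths) / n for j in range(k)]

# Sum-of-squares partition for a two-way layout without replication.
ss_subj = k * sum((m - grand) ** 2 for m in row_means)
ss_meth = n * sum((m - grand) ** 2 for m in col_means)
ss_err = sum((widths[i][j] - row_means[i] - col_means[j] + grand) ** 2
             for i in range(n) for j in range(k))
ss_total = sum((x - grand) ** 2 for row in widths for x in row)

# F statistic for the method effect, as in a repeated-measures ANOVA.
f_stat = (ss_meth / (k - 1)) / (ss_err / ((k - 1) * (n - 1)))
```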

Relevance: 30.00%

Abstract:

The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights in field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. Tree data were divided into two random samples: one sample was used for the development of the regression equations, and the other for validation. The models were developed using simple linear regression analysis, where the dependent variable was the dry mass and the independent variables were height (h), diameter (d) and d²h. The validation was carried out using the Pearson correlation coefficient, the paired Student's t-test and the standard error of estimation. The best equations to estimate aboveground phytomass were: lnDW = -3.068 + 2.522 ln d (r² = 0.91; s_y/x = 0.67) and lnDW = -3.676 + 0.951 ln d²h (r² = 0.94; s_y/x = 0.56).
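The fitted equations can be applied directly by back-transforming from the log scale; the helper below evaluates both equations reported above (the abstract does not state the measurement units, so none are assumed here):

```python
import math

def aboveground_dry_mass(d, h=None):
    """Back-transform the fitted log-linear allometric equations.
    Uses ln(DW) = -3.676 + 0.951 ln(d^2 h) when height is given,
    otherwise ln(DW) = -3.068 + 2.522 ln(d)."""
    if h is None:
        return math.exp(-3.068 + 2.522 * math.log(d))
    return math.exp(-3.676 + 0.951 * math.log(d * d * h))
```

Note that exponentiating a log-scale fit estimates the median rather than the mean response; bias-correction factors are often applied in allometric practice.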

Relevance: 30.00%

Abstract:

In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subjected to interventions due to consumption, such as hunting or the farming of animals. The model identification allows us to estimate the probability, or the average time, for the population to reach a given level. Parameter inference for these models is obtained with the use of the likelihood profile technique, as developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or the risk of extinction of autochthonous populations. It is applied to data on the Brazilian beef cattle herd, and the time for the population to reach a certain goal level is investigated.
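A deterministic skeleton of such a model is a discrete Richards-type update with a constant offtake term; the parameter values below are invented for illustration (they are not estimates from the paper), and the paper's stochastic and likelihood-profile machinery is omitted:

```python
def richards_step(n, r=0.3, K=1000.0, q=1.5, h=20.0):
    # One time step: Richards growth toward carrying capacity K,
    # minus a constant consumption (harvest) term h.
    return n + r * n * (1.0 - (n / K) ** q) - h

# Starting above the low unstable equilibrium, the population settles
# near the upper equilibrium where growth balances offtake.
n = 200.0
for _ in range(200):
    n = richards_step(n)
```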

Relevance: 30.00%

Abstract:

In this work we report on a comparison of some theoretical models usually used to fit the dependence on temperature of the fundamental energy gap of semiconductor materials. We used in our investigations the theoretical models of Viña, Pässler-p and Pässler-ρ to fit several sets of experimental data, available in the literature for the energy gap of GaAs in the temperature range from 12 to 974 K. Performing several fittings for different values of the upper limit of the analyzed temperature range (Tmax), we were able to follow in a systematic way the evolution of the fitting parameters up to the limit of high temperatures and make a comparison between the zero-point values obtained from the different models by extrapolating the linear dependence of the gaps at high T to T = 0 K and that determined by the dependence of the gap on isotope mass. Using experimental data measured by absorption spectroscopy, we observed the non-linear behavior of Eg(T) of GaAs for T > ΘD.
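The zero-point comparison mentioned above can be reproduced numerically with the Viña form E(T) = a - b[1 + 2/(exp(Θ/T) - 1)]: extrapolating the linear high-T behavior back to T = 0 recovers a, while the true E(0) is a - b, so the offset estimates the zero-point renormalization b. The parameter values below are arbitrary illustrations, not fits to GaAs:

```python
import math

a, b, theta = 1.60, 0.06, 300.0   # arbitrary illustrative values (eV, eV, K)

def vina_gap(T):
    # Viña expression for the temperature-dependent gap.
    return a - b * (1.0 + 2.0 / (math.exp(theta / T) - 1.0))

# Linear extrapolation from two high-temperature points back to T = 0 K.
T1, T2 = 2000.0, 3000.0
slope = (vina_gap(T2) - vina_gap(T1)) / (T2 - T1)
extrapolated_e0 = vina_gap(T1) - slope * T1

# The difference from the true zero-temperature gap a - b estimates b.
zero_point = extrapolated_e0 - (a - b)
```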

Relevance: 30.00%

Abstract:

The objective of this study was to estimate calibration regressions for the dietary data measured using the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men (the HIM Study) in Brazil. A sample of 98 individuals from the HIM Study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. Dispersion graphs between the instruments show increased correlation after the correction, although the points disperse more widely when the explanatory power of the models is worse. Identification of the calibration regressions for the dietary data of the HIM Study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
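The calibration step is an ordinary linear regression of the reference instrument (the 24HR) on the QFFQ, after which corrected values are read off the fitted line. The sketch below uses invented, exactly linear intakes rather than HIM Study data, and omits the adjustment covariates:

```python
# Invented paired measurements: QFFQ (independent) vs. mean 24HR (dependent).
qffq = [1800.0, 2100.0, 2500.0, 2900.0, 3200.0]
r24h = [1700.0, 1880.0, 2120.0, 2360.0, 2540.0]

n = len(qffq)
mx = sum(qffq) / n
my = sum(r24h) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(qffq, r24h))
         / sum((x - mx) ** 2 for x in qffq))
intercept = my - slope * mx

def calibrate(qffq_value):
    # Calibration-corrected intake for a new QFFQ report.
    return intercept + slope * qffq_value
```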

Relevance: 30.00%

Abstract:

study-specific results, their findings should be interpreted with caution

Relevance: 30.00%

Abstract:

Gene clustering is a useful exploratory technique to group together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests if in fact it should be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach that requires neither a penalty on model complexity nor positive prior probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
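The sequential model-growing loop can be sketched as follows. The paper's test is the full Bayesian significance test (FBST); purely for illustration, a BIC comparison on a one-dimensional Gaussian mixture with all variances fixed at 1.0 stands in for it, and the two-cluster data below are made up:

```python
import math

data = [0.0, 0.1, 0.2, 0.3, 10.0, 10.1, 10.2, 10.3]

def fit_loglik(xs, m, iters=40):
    # EM for a 1-D mixture of m unit-variance Gaussians; returns the
    # maximized log-likelihood.
    n = len(xs)
    lo, hi = min(xs), max(xs)
    mu = [lo + (hi - lo) * i / max(m - 1, 1) for i in range(m)]
    w = [1.0 / m] * m
    for _ in range(iters):
        resp = []
        for x in xs:
            p = [w[j] * math.exp(-0.5 * (x - mu[j]) ** 2) for j in range(m)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        for j in range(m):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
    norm = math.sqrt(2.0 * math.pi)
    return sum(math.log(sum(w[j] * math.exp(-0.5 * (x - mu[j]) ** 2) / norm
                            for j in range(m))) for x in xs)

def pick_m(xs, m_max=4):
    # Grow m until the criterion stops improving, i.e. the test "accepts"
    # that m - 1 components suffice.
    best_m, best_bic = 1, None
    for m in range(1, m_max + 1):
        n_params = 2 * m - 1                # m means + (m - 1) free weights
        bic = -2.0 * fit_loglik(xs, m) + n_params * math.log(len(xs))
        if best_bic is not None and bic >= best_bic:
            return best_m
        best_m, best_bic = m, bic
    return best_m
```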

Relevance: 30.00%

Abstract:

Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life-maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decrease the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, an attempt to model the propagation dynamics of computer viruses relates them to other notable events occurring in the network, permitting preventive policies to be established in network management. Data from three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using the data collected from other viruses that formerly infected the network. Copyright (c) 2008 J. R. C. Piqueira and F. B. Cesar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
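One of the two identification techniques mentioned, autoregressive analysis, can be sketched as fitting an AR(1) coefficient to one propagation series and reusing it to forecast another. The series below is synthetic with known decay 0.8, not the collected virus data:

```python
# Synthetic "old virus" propagation series with geometric decay factor 0.8.
old_series = [1000.0]
for _ in range(20):
    old_series.append(0.8 * old_series[-1])

# Least-squares AR(1) coefficient for x_t ~ phi * x_{t-1}.
num = sum(a * b for a, b in zip(old_series[1:], old_series[:-1]))
den = sum(a * a for a in old_series[:-1])
phi = num / den

def forecast(last_value, steps):
    # Forecast a new outbreak by reusing the identified coefficient.
    return [last_value * phi ** k for k in range(1, steps + 1)]
```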

Relevance: 30.00%

Abstract:

Diagnostic methods have been an important tool in regression analysis to detect anomalies such as departures from the error assumptions and the presence of outliers and influential observations in the fitted models. Assuming censored data, we considered a classical analysis and a Bayesian analysis, assuming noninformative priors, for the parameters of the model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings algorithm steps to obtain the posterior summaries of interest. Some influence methods, such as local influence, total local influence of an individual, local influence on predictions and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
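Case-deletion influence, a simpler relative of the local-influence measures listed above, already conveys the idea: refit without each observation and see how much the estimate moves. The sketch below does this for the slope of a plain linear regression on invented data (no censoring or cure fraction):

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 2.0, 3.0, 4.0, 10.0]   # last point is a deliberate outlier

def ols_slope(x, y):
    # Ordinary least-squares slope.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

full = ols_slope(xs, ys)
# Influence of observation i: absolute change in slope when i is deleted.
influence = [abs(full - ols_slope(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]))
             for i in range(len(xs))]
most_influential = influence.index(max(influence))
```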

Relevance: 30.00%

Abstract:

The mass function of cluster-size halos and their redshift distribution are computed for 12 distinct accelerating cosmological scenarios and confronted with the predictions of the conventional flat Lambda CDM model. The comparison with Lambda CDM is performed by a two-step process. First, we determine the free parameters of all models through a joint analysis involving the latest cosmological data, using supernovae type Ia, the cosmic microwave background shift parameter, and baryon acoustic oscillations. Apart from a braneworld-inspired cosmology, it is found that the derived Hubble relation of the remaining models reproduces the Lambda CDM results approximately with the same degree of statistical confidence. Second, in order to attempt to distinguish the different dark energy models from the expectations of Lambda CDM, we analyze the predicted cluster-size halo redshift distribution on the basis of two future cluster surveys: (i) an X-ray survey based on the eROSITA satellite, and (ii) a Sunyaev-Zel'dovich survey based on the South Pole Telescope. As a result, we find that the predictions of 8 out of 12 dark energy models can be clearly distinguished from the Lambda CDM cosmology, while the predictions of 4 models are statistically equivalent to those of the Lambda CDM model, as far as the expected cluster mass function and redshift distribution are concerned. The present analysis suggests that such a technique is very competitive with independent tests probing the late-time evolution of the Universe and the associated dark energy effects.

Relevance: 30.00%

Abstract:

The exact composition of a specific class of compact stars, historically referred to as "neutron stars," is still largely unknown. Possibilities ranging from hadronic to quark degrees of freedom, including self-bound versions of the latter, have been proposed. In this work we specifically address the suitability of strange star models (including pairing interactions) in the light of new measurements available for four compact stars. The analysis shows that these data might be explained by such an exotic equation of state, which actually selects a small window in parameter space, but new precise measurements and further theoretical developments are still needed to settle the subject.

Relevance: 30.00%

Abstract:

The kinematic approach to cosmological tests provides direct evidence for the present accelerating stage of the Universe that depends neither on the validity of general relativity nor on the matter-energy content of the Universe. In this context, we consider here a linear two-parameter expansion for the deceleration parameter, q(z) = q0 + q1 z, where q0 and q1 are arbitrary constants to be constrained by the Union supernovae data. Assuming a flat Universe, we find that the best fit to the pair of free parameters is (q0, q1) = (-0.73, 1.5), whereas the transition redshift is z_t = 0.49 (+0.14/-0.07 at 1 sigma; +0.54/-0.12 at 2 sigma). This kinematic result is in agreement with some independent analyses and more easily accommodates many dynamical flat models (like Lambda CDM).
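With the quoted best-fit values, the transition redshift follows directly from the linear expansion: setting q(z_t) = 0 gives z_t = -q0/q1, consistent with the quoted z_t of about 0.49:

```python
q0, q1 = -0.73, 1.5          # best-fit values quoted above

def q(z):
    # Linear kinematic expansion of the deceleration parameter.
    return q0 + q1 * z

z_t = -q0 / q1               # redshift where the deceleration changes sign
accelerating_today = q(0.0) < 0.0
```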