986 results for Interval generalized set


Relevance:

30.00%

Publisher:

Abstract:

We introduce in this paper a new class of discrete generalized nonlinear models that extends the binomial, Poisson and negative binomial models for count data. This class includes important special cases such as log-nonlinear models, logit, probit and negative binomial nonlinear models, and generalized Poisson and generalized negative binomial regression models, which enables a wide range of models to be fitted to count data. We derive an iterative process for fitting these models by maximum likelihood and discuss inference on the parameters. The usefulness of the new class of models is illustrated with an application to a real data set. (C) 2008 Elsevier B.V. All rights reserved.
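
The iterative maximum-likelihood fitting mentioned above can be illustrated for the simplest member of such a class, a log-linear Poisson model, with a plain Newton-Raphson sketch. The four-point data set below is invented so that the true parameters are b0 = 0 and b1 = log 2; this is a generic sketch, not the paper's general algorithm:

```python
import math

# toy count data lying exactly on mu = exp(b0 + b1*x) with b0 = 0, b1 = log 2
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 4.0, 8.0]

b0, b1 = 0.0, 0.0
for _ in range(100):
    # score vector and information matrix of the Poisson log-likelihood
    s0 = s1 = i00 = i01 = i11 = 0.0
    for x, y in zip(xs, ys):
        mu = math.exp(b0 + b1 * x)
        s0 += y - mu          # d logL / d b0
        s1 += (y - mu) * x    # d logL / d b1
        i00 += mu             # information matrix entries
        i01 += mu * x
        i11 += mu * x * x
    det = i00 * i11 - i01 * i01
    # Newton step: beta <- beta + I^{-1} s  (2x2 inverse written out)
    b0 += (i11 * s0 - i01 * s1) / det
    b1 += (i00 * s1 - i01 * s0) / det
```

With the canonical log link the observed and expected information coincide, so the same iteration can be read as Fisher scoring.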


The generalized Birnbaum-Saunders distribution pertains to a class of lifetime models that includes both lighter- and heavier-tailed distributions. This model adapts well to lifetime data, even in the presence of outliers, and has other good theoretical properties and application perspectives. However, statistical inference tools may not exist in closed form for this model. Hence, simulation and numerical studies are needed, which require a random number generator. Three different ways to generate observations from this model are considered here. These generators are compared through a goodness-of-fit procedure and through their effectiveness in recovering the true parameter values in Monte Carlo simulations. This goodness-of-fit procedure may also be used as an estimation method, whose quality is studied here. Finally, the generalized and classical Birnbaum-Saunders models are compared on a real data set by using this estimation method.
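
One classical way to generate Birnbaum-Saunders variates is through the closed-form monotone map of a standard normal draw; a minimal sketch (the shape alpha = 0.5 and scale beta = 2.0 are arbitrary illustration values, and the classical rather than generalized model is shown):

```python
import math, random

def rbs(alpha, beta):
    # classical Birnbaum-Saunders generator: a monotone map of a N(0,1) draw
    z = random.gauss(0.0, 1.0)
    w = alpha * z / 2.0
    return beta * (w + math.sqrt(w * w + 1.0)) ** 2

random.seed(0)
sample = sorted(rbs(0.5, 2.0) for _ in range(20001))
sample_median = sample[10000]   # the BS median equals the scale parameter beta
```

Because the map from z to t is increasing, the sample median should sit close to beta, which gives a quick sanity check on the generator.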


We consider the problem of dichotomizing a continuous covariate when performing a regression analysis based on a generalized estimation approach. The problem involves estimating the cutpoint for the covariate and testing the hypothesis that the binary covariate constructed from the continuous covariate has a significant impact on the outcome. Because multiple tests are carried out to find the optimal cutpoint, the usual significance test must be adjusted to preserve the type-I error rate. We illustrate the techniques on a data set of patients given unrelated hematopoietic stem cell transplantation. Here the questions are whether the CD34 cell dose given to the patient affects the outcome of the transplant and what the smallest cell dose needed for good outcomes is. (C) 2010 Elsevier B.V. All rights reserved.
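
The multiple-testing issue from searching over cutpoints can be mimicked with a maximally selected statistic and a permutation reference distribution. This is a generic sketch on synthetic data with an invented jump at x = 0.5, not the authors' generalized-estimation procedure:

```python
import random

def max_cut_stat(x, y, cuts):
    # largest absolute difference in group means over candidate cutpoints
    best = 0.0
    for c in cuts:
        lo = [yi for xi, yi in zip(x, y) if xi <= c]
        hi = [yi for xi, yi in zip(x, y) if xi > c]
        if lo and hi:
            best = max(best, abs(sum(hi) / len(hi) - sum(lo) / len(lo)))
    return best

random.seed(2)
x = [i / 40 for i in range(40)]
y = [random.gauss(1.0 if xi > 0.5 else 0.0, 0.5) for xi in x]  # jump at 0.5
cuts = [0.2, 0.35, 0.5, 0.65, 0.8]
obs = max_cut_stat(x, y, cuts)

# permutation reference distribution: shuffling y mimics "no effect" and
# automatically accounts for the search over cutpoints
B, count = 200, 0
for _ in range(B):
    yp = y[:]
    random.shuffle(yp)
    if max_cut_stat(x, yp, cuts) >= obs:
        count += 1
p_adj = (count + 1) / (B + 1)
```

Because the permuted statistic is also maximized over the same set of cutpoints, p_adj is already adjusted for the cutpoint search.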


The modeling and analysis of lifetime data is an important aspect of statistical work in a wide variety of scientific and technological fields. Good (1953) introduced a probability distribution that is commonly used in the analysis of lifetime data. Based on this distribution, we propose, for the first time, the exponentiated generalized inverse Gaussian distribution, which extends the exponentiated standard gamma distribution (Nadarajah and Kotz, 2006). Various structural properties of the new distribution are derived, including expansions for its moments, moment generating function, and moments of the order statistics. We discuss maximum likelihood estimation of the model parameters. The usefulness of the new model is illustrated by means of a real data set. (C) 2010 Elsevier B.V. All rights reserved.
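
The "exponentiated" construction underlying the proposed distribution is generic: raising a baseline cdf G to a power a > 0 gives F = G^a, and quantiles follow by inverting that relation. A sketch with an exponential baseline (the EGIG case would replace this with the generalized inverse Gaussian cdf, which has no simple closed form):

```python
import math

def exp_exponential_cdf(x, lam, a):
    # exponentiated construction: F(x) = G(x)**a, baseline G(x) = 1 - exp(-lam*x)
    return (1.0 - math.exp(-lam * x)) ** a

def exp_exponential_quantile(u, lam, a):
    # invert F = G**a: first undo the power, then invert the baseline cdf
    return -math.log(1.0 - u ** (1.0 / a)) / lam

q30 = exp_exponential_quantile(0.3, 2.0, 1.5)   # 30% quantile for lam=2, a=1.5
```

The same inverse-transform trick gives a random number generator for any exponentiated family whose baseline quantile function is available.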


This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models in which the random effects are correlated between groups. The core idea is to handle the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. The technique is applied, with a binary response variable, to a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time.
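
The intractable integrals in a GLMM likelihood come from integrating the random effects out. The univariate prototype of such Taylor-based approximations is the Laplace approximation, which expands the log integrand to second order around its mode. A sketch for one Poisson observation with a normal random effect, checked against brute-force quadrature (the values y = 3, eta = 0.5, sigma = 1 used in the test are arbitrary):

```python
import math

def laplace_marginal(y, eta, sigma):
    # log integrand: log Poisson(y | exp(eta+u)) + log N(u | 0, sigma^2)
    def h(u):
        lam = math.exp(eta + u)
        return (y * (eta + u) - lam - math.lgamma(y + 1)
                - 0.5 * u * u / sigma ** 2 - 0.5 * math.log(2 * math.pi * sigma ** 2))
    u = 0.0
    for _ in range(50):                      # Newton iterations for the mode
        lam = math.exp(eta + u)
        u -= (y - lam - u / sigma ** 2) / (-lam - 1.0 / sigma ** 2)
    curv = math.exp(eta + u) + 1.0 / sigma ** 2   # -h''(u) at the mode
    return math.exp(h(u)) * math.sqrt(2 * math.pi / curv)

def quad_marginal(y, eta, sigma, n=4000):
    # brute-force trapezoidal quadrature as a check
    a, b = -8.0 * sigma, 8.0 * sigma
    step = (b - a) / n
    tot = 0.0
    for i in range(n + 1):
        u = a + i * step
        lam = math.exp(eta + u)
        f = math.exp(y * (eta + u) - lam - math.lgamma(y + 1)
                     - 0.5 * u * u / sigma ** 2 - 0.5 * math.log(2 * math.pi * sigma ** 2))
        tot += f if 0 < i < n else 0.5 * f
    return tot * step
```

In the multivariate, correlated-random-effects setting of the paper the same expansion is taken around a vector mode with a Hessian in place of the scalar curvature.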


Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms. Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model. Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
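
The alternation between a mean model and a dispersion model fitted to squared residuals can be sketched in its simplest form: a common mean and a two-group residual variance, where the gamma GLM step with a saturated group effect reduces to groupwise averages of squared residuals. This toy version (synthetic data, invented group variances 1 and 9) omits the random effects and mixed model equations of the actual DHGLM:

```python
import random

random.seed(1)
# two environments with the same mean but different residual variances
groups = [0] * 100 + [1] * 100
y = ([random.gauss(5.0, 1.0) for _ in range(100)]
     + [random.gauss(5.0, 3.0) for _ in range(100)])

var = [1.0, 1.0]                       # working residual variances
mu = sum(y) / len(y)
for _ in range(20):
    # mean step: weighted least squares with weights 1/variance
    w = [1.0 / var[g] for g in groups]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # dispersion step: gamma GLM (log link) on squared residuals; with a
    # saturated group effect this reduces to groupwise means of r^2
    for k in (0, 1):
        r2 = [(yi - mu) ** 2 for yi, g in zip(y, groups) if g == k]
        var[k] = sum(r2) / len(r2)
```

The two updates feed each other exactly as the two sets of mixed model equations do in the full method: the variances reweight the mean fit, and the mean fit supplies the residuals for the variance fit.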


This paper presents the techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with established frequentist practice such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variable generalized linear model, it is shown that in complicated cases LP produces better results than existing methods.
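
The classical flavor of such coincidences can be seen in the normal model with known variance: the profile predictive likelihood Lp(y*) = max over mu of f(y | mu) f(y* | mu) is maximized exactly at the sample mean, the usual frequentist predictor. A grid-search sketch on invented data:

```python
y = [1.0, 2.0, 3.0, 6.0]                 # observed data, sigma known (= 1)
n, ybar = len(y), sum(y) / len(y)

def log_pred_lik(ystar):
    # profile predictive likelihood: mu is replaced by its joint MLE,
    # the mean pooled over the data and the future value
    mu = (n * ybar + ystar) / (n + 1)
    ss = sum((yi - mu) ** 2 for yi in y) + (ystar - mu) ** 2
    return -0.5 * ss

grid = [i * 0.001 - 2.0 for i in range(10001)]
best = max(grid, key=log_pred_lik)       # maximizer of the predictive likelihood
```

Setting the derivative to zero with the envelope theorem gives y* = ybar analytically, which the grid search reproduces numerically.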


Stereo matching tries to find correspondences between locations in a pair of displaced images of the same scene in order to extract the underlying depth information. Pixel correspondence estimation suffers from occlusions, noise and bias. In this work, we introduce a novel approach that represents images by means of interval-valued fuzzy sets to overcome the uncertainty due to the above-mentioned problems, and we exploit this representation in the stereo matching algorithm. The image interval-valued fuzzification process that we propose is based on image segmentation, used in a different way from its common role in stereo vision. We introduce interval-valued fuzzy similarities to compare windows whose pixels are represented by intervals. The experimental analysis shows the effectiveness of this representation for the stereo matching problem: the new representation, together with the new similarity measure, exhibits better overall behavior than other well-known methods.
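
The window-comparison idea can be sketched with a deliberately simplified fuzzification: each pixel is replaced by the [min, max] interval over a small neighbourhood, and two windows are compared via an interval similarity aggregated over the window. Both the neighbourhood rule and the particular similarity below are illustrative assumptions, not the paper's definitions:

```python
def window_intervals(img, r, c, k=1):
    # interval-valued representation of a (2k+1)x(2k+1) window: each pixel
    # becomes the [min, max] over its 3x3 neighbourhood (a crude stand-in
    # for the segmentation-driven fuzzification described above)
    rows, cols = len(img), len(img[0])
    out = []
    for i in range(r - k, r + k + 1):
        for j in range(c - k, c + k + 1):
            neigh = [img[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))]
            out.append((min(neigh), max(neigh)))
    return out

def interval_similarity(p, q):
    # one possible similarity of two intervals with endpoints in [0, 1]
    return 1.0 - (abs(p[0] - q[0]) + abs(p[1] - q[1])) / 2.0

def window_similarity(w1, w2):
    # aggregate pixelwise interval similarities by their mean
    return sum(interval_similarity(p, q) for p, q in zip(w1, w2)) / len(w1)
```

A matcher would slide the right-image window along the epipolar line and keep the disparity that maximizes window_similarity.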


Extensions of aggregation functions to Atanassov orthopairs (often referred to as intuitionistic fuzzy sets or AIFS) usually involve replacing the standard arithmetic operations with those defined for the membership and non-membership orthopairs. One problem with such constructions is that the usual choice of operations has led to formulas which do not generalize the aggregation of ordinary fuzzy sets (where the membership and non-membership values add to 1). Previous extensions of the weighted arithmetic mean and ordered weighted averaging operator also have the absorbent element 〈1,0〉, which becomes particularly problematic in the case of the Bonferroni mean, whose generalizations are useful for modeling mandatory requirements. As well as considering the consistency and interpretability of the operations used for their construction, we hold that it is also important for aggregation functions over higher order fuzzy sets to exhibit analogous behavior to their standard definitions. After highlighting the main drawbacks of existing Bonferroni means defined for Atanassov orthopairs and interval data, we present two alternative methods for extending the generalized Bonferroni mean. Both lead to functions with properties more consistent with the original Bonferroni mean, and which coincide in the case of ordinary fuzzy values.
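
For reference, the object being generalized here is the ordinary Bonferroni mean on real inputs in [0, 1]:

```python
def bonferroni_mean(x, p, q):
    # B^{p,q}(x) = ( (1/(n(n-1))) * sum_{i != j} x_i^p * x_j^q )^{1/(p+q)}
    n = len(x)
    s = sum(x[i] ** p * x[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

Because every term couples two distinct inputs, a single zero drags the mean down when p, q > 0, which is what makes the Bonferroni mean useful for modeling mandatory requirements; the extensions discussed above aim to preserve exactly this behavior for orthopairs.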


Atanassov's intuitionistic fuzzy sets (AIFS) and interval-valued fuzzy sets (IVFS) are two generalizations of a fuzzy set, which are mathematically equivalent although semantically different. We analyze the median aggregation operator for AIFS and IVFS. Different mathematical theories have led to different definitions of the median operator. We look at the median from various perspectives: as an instance of the intuitionistic ordered weighted averaging operator, as a Fermat point in a plane, as a minimizer of input disagreement, and as an operation on distributive lattices. We underline several connections between these approaches and summarize essential properties of the median in different representations.
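
The lattice perspective yields a particularly simple candidate for interval-valued inputs: take the median of the lower ends and of the upper ends separately. This componentwise rule is only one of the definitions being compared; a sketch:

```python
from statistics import median

def interval_median(intervals):
    # componentwise (lattice) median of a list of [low, high] intervals;
    # monotonicity of the median guarantees low <= high in the result
    lows = [a for a, b in intervals]
    highs = [b for a, b in intervals]
    return (median(lows), median(highs))
```

The geometric (Fermat-point) and disagreement-minimizing definitions need not coincide with this one on even-sized or asymmetric inputs, which is precisely the kind of discrepancy the analysis above examines.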


Fire is both a widespread natural disturbance that affects the distribution of species and a tool that can be used to manage habitats for species. Knowledge of temporal changes in the occurrence of species after fire is essential for conservation management in fire-prone environments. Two key issues are: whether postfire responses of species are idiosyncratic or if multiple species show a limited number of similar responses; and whether such responses to time since fire can predict the occurrence of species across broad spatial scales. We examined the response of bird species to time since fire in semiarid shrubland in southeastern Australia using data from surveys at 499 sites representing a 100-year chronosequence. We used nonlinear regression to model the probability of occurrence of 30 species with time since fire in two vegetation types, and compared species' responses with generalized response shapes from the literature. The occurrence of 16 species was significantly influenced by time since fire: they displayed six main responses consistent with generalized response shapes. Of these 16 species, 15 occurred more frequently in mid- or later-successional vegetation (>20 years since fire), and only one species occurred more often in early succession (<5 years since fire). The models had reasonable predictive ability for eight species, some predictive ability for seven species, and were little better than random for one species. Bird species displayed a limited range of responses to time since fire; thus a small set of fire ages should allow the provision of habitat for most species. Postfire successional changes extend for decades and management of the age class distribution of vegetation will need to reflect this timescale. Response curves revealed important seral stages for species and highlighted the importance of mid- to late-successional vegetation (>20 years). 
Although time since fire clearly influences the distribution of numerous bird species, predictive models of the spatial distribution of species in fire-prone landscapes need to incorporate other factors in addition to time since fire.


Purpose: Tear meniscus height (TMH) is an indirect measurement of tear film volume. This study investigated the temporal changes in the TMH during the blink interval in the morning (8–9 am) and at the end of the day (5–6 pm) in both soft contact lens (CL) and nonlens wearers (NLW).

Methods: Fifty participants (25 CL; 25 NLW) were evaluated for their subjective symptoms, TMH, noninvasive break-up time, and bulbar hyperemia at the am and pm visits on the same day. The TMH was measured at set intervals between 2 and 15 sec during the blink interval, using an optical coherence tomographer.

Results: The NLW group revealed no changes in a variety of symptoms during the day, whereas the CL group reported an increase in dryness (P=0.03) and grittiness (P=0.02) over the day. For both groups, the TMH and calculated tear meniscus volume revealed lower values immediately after the blink and increased progressively afterwards, mainly due to reflex tearing. The am tear meniscus volume values tended to be higher than the pm values for both groups, but this was not significant (NLW P=0.13; CL P=0.82). Noninvasive break-up time deteriorated during the day for both groups but was only significant for the CL group (P=0.002), whereas bulbar hyperemia revealed no statistically significant change for either group.

Conclusions: Reflex tearing may play a substantial role in the TMH differences observed over the blink interval. Standardization of the time when a TMH measurement is performed will be valuable in comparing tear film clinical studies.


This study proposes a novel non-parametric method for construction of prediction intervals (PIs) using interval type-2 Takagi-Sugeno-Kang fuzzy logic systems (IT2 TSK FLSs). The key idea in the proposed method is to treat the left and right end points of the type-reduced set as the lower and upper bounds of a PI. This allows us to construct PIs without making any special assumption about the data distribution. A new training algorithm is developed to satisfy conditions imposed by the associated confidence level on PIs. Proper adjustment of premise and consequent parameters of IT2 TSK FLSs is performed through the minimization of a PI-based objective function, rather than traditional error-based cost functions. This new cost function covers both validity and informativeness aspects of PIs. A metaheuristic method is applied for minimization of the non-linear, non-differentiable cost function. Quantitative measures are applied for assessing the quality of PIs constructed using IT2 TSK FLSs. The demonstrated results for four benchmark case studies with homogeneous and heterogeneous noise clearly show that the proposed method is capable of generating high-quality PIs useful for decision-making.
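
The validity and informativeness aspects mentioned above are commonly quantified by the PI coverage probability (PICP) and the normalized average width (PINAW). A coverage-width cost of the form below follows the usual construction in the PI literature; the nominal level and penalty constant in cwc are illustrative, not the paper's exact objective:

```python
import math

def picp(y, lo, hi):
    # fraction of targets falling inside their prediction interval
    return sum(l <= t <= h for t, l, h in zip(y, lo, hi)) / len(y)

def pinaw(y, lo, hi):
    # mean interval width, normalized by the target range
    r = max(y) - min(y)
    return sum(h - l for l, h in zip(lo, hi)) / (len(y) * r)

def cwc(y, lo, hi, mu=0.90, eta=50.0):
    # coverage-width criterion: width is penalized heavily only when
    # coverage drops below the nominal level mu
    c, w = picp(y, lo, hi), pinaw(y, lo, hi)
    penalty = math.exp(-eta * (c - mu)) if c < mu else 0.0
    return w * (1.0 + penalty)

y  = [1.0, 2.0, 3.0, 4.0]       # toy targets and interval bounds
lo = [0.0, 1.5, 2.5, 4.5]
hi = [2.0, 2.5, 3.5, 5.0]
```

Because cwc is non-differentiable in the FLS parameters (coverage is a counting measure), a metaheuristic optimizer of the kind mentioned above is a natural fit.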


This paper considers the problem of designing an observer-based output feedback controller to exponentially stabilize a class of linear systems with an interval time-varying delay in the state vector. The delay is assumed to vary within an interval with known lower and upper bounds. The time-varying delay is not required to be differentiable, and its lower bound is not required to be zero. By constructing a set of Lyapunov–Krasovskii functionals and utilizing the Newton–Leibniz formula, a delay-dependent stabilizability condition expressed in terms of Linear Matrix Inequalities (LMIs) is derived to ensure that the closed-loop system is exponentially stable with a prescribed α-convergence rate. The design of an observer-based output feedback controller can then be carried out in a systematic and computationally efficient manner via an LMI-based algorithm. A numerical example is given to illustrate the design procedure.
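
Solving the LMIs themselves requires a semidefinite programming solver, but the behavior they certify, exponential decay under an interval time-varying delay, can be illustrated with a direct Euler simulation. The system matrices and delay profile below are invented for illustration (a plainly stable pair, with the delay oscillating inside [0.1, 0.3]):

```python
import math

# x'(t) = A x(t) + Ad x(t - tau(t)) with tau(t) inside the interval [0.1, 0.3]
A  = [[-2.0, 0.0], [0.0, -2.0]]
Ad = [[0.5, 0.0], [0.0, 0.5]]
dt, T = 0.001, 5.0
steps = int(T / dt)
hist = [[1.0, 1.0]]                       # constant initial history x(t) = x(0)
for k in range(steps):
    tau = 0.2 + 0.1 * math.sin(k * dt)    # time-varying delay in [0.1, 0.3]
    d = max(0, k - int(tau / dt))         # index of the delayed state
    x, xd = hist[-1], hist[d]
    hist.append([x[i] + dt * (sum(A[i][j] * x[j] for j in range(2))
                              + sum(Ad[i][j] * xd[j] for j in range(2)))
                 for i in range(2)])
final_norm = math.hypot(hist[-1][0], hist[-1][1])
```

The trajectory norm shrinks by roughly e^{-alpha t} for some alpha > 0 regardless of how tau wanders inside its interval, which is exactly the property the LMI conditions are designed to guarantee for the closed-loop system.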