925 results for Hierarchical logistic model
Abstract:
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
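The two-stage construction described in this abstract can be sketched as a simulation: stage one draws which parts are present, stage two distributes the unit among the present parts. All distributions and parameter values below are assumptions for illustration (independent Bernoulli incidence, log-normal weights closed to the unit sum), not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_composition(n, p_present, mu=0.0, sigma=1.0):
    """Two-stage sketch for essential zeros: stage 1 draws an
    incidence vector (which parts are present); stage 2 distributes
    the unit among the present parts via normalised log-normal
    weights (a stand-in for a conditional logistic normal draw)."""
    D = len(p_present)
    comps = np.zeros((n, D))
    for i in range(n):
        z = rng.random(D) < np.asarray(p_present)   # stage 1: incidence
        if not z.any():
            z[rng.integers(D)] = True               # keep at least one part
        k = int(z.sum())
        w = np.exp(rng.normal(mu, sigma, size=k))   # log-normal weights
        comps[i, z] = w / w.sum()                   # stage 2: closure to 1
    return comps

X = simulate_composition(5, p_present=[0.9, 0.5, 0.7])
```

Each row of `X` is a composition summing to one, with essential zeros exactly where stage one declared a part absent.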
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthen, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data from an earlier study (Kogovšek et al., 2002), conducted on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthen's partial maximum likelihood approach, called the pseudobalanced solution (Muthen, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Mass, 2001). Several analyses are carried out to compare this multilevel analysis to classic methods of analysis such as those used in Kogovšek et al. (2002), where the data were analysed only at the group (ego) level using averages over all alters within each ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that considerably enriches the interpretation of reliability and validity of hierarchical data. Within- and between-ego reliabilities, validities, and other related quality measures are defined, computed and interpreted.
Abstract:
In the midst of health care reform, Colombia has succeeded in increasing health insurance coverage and the quality of health care. In spite of this, efficiency continues to be a matter of concern, and small-area variations in health care are one of the plausible causes of such inefficiencies. In order to understand this issue, we use individual data on all births from a Contributory-Regimen insurer in Colombia. We perform two different specifications of a multilevel logistic regression model. Our results reveal that hospitals account for 20% of the variation in the probability of performing cesarean sections. Geographic area explains only one third of the variance attributable to the hospital. Furthermore, variables from both the demand and supply sides are also found to be relevant to the probability of undergoing a cesarean section. This paper contributes to previous research by using a hierarchical model and by defining hospitals as the cluster level. Moreover, we also include clinical and supply-induced demand variables.
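The reported 20% hospital share of variation corresponds to an intraclass correlation for a multilevel logistic model. A minimal sketch, assuming the common latent-variable convention of fixing the level-1 residual variance at π²/3 on the logit scale; the variance value 0.82 below is illustrative, not a figure from the paper:

```python
import math

def latent_icc(var_hospital, var_region=0.0):
    """Variance partition coefficient for a multilevel logistic model
    using the latent-variable method: the level-1 residual variance
    on the logit scale is fixed at pi^2 / 3."""
    resid = math.pi ** 2 / 3
    total = var_hospital + var_region + resid
    return var_hospital / total

# A hospital-level variance of ~0.82 on the logit scale corresponds
# to hospitals explaining ~20% of the variation:
print(round(latent_icc(0.82), 2))  # → 0.2
```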
Abstract:
We demonstrate that it is possible to link multi-chain molecular dynamics simulations with the tube model using a single-chain slip-link model as a bridge. This hierarchical approach allows a significant speed-up of simulations, permitting us to span the time scales relevant for a comparison with the tube theory. Fitting the mean-square displacement of individual monomers in molecular dynamics simulations with the slip-spring model, we show that it is possible to predict the stress relaxation. Then, we analyze the stress relaxation from slip-spring simulations in the framework of the tube theory. In the absence of constraint release, we establish that the relaxation modulus can be decomposed as the sum of contributions from fast and longitudinal Rouse modes and tube survival. Finally, we discuss some open questions regarding possible future directions that could be profitable in rendering the tube model quantitative, even for mildly entangled polymers.
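The Rouse-mode contribution in such a decomposition can be illustrated with a toy relaxation modulus: a sum over modes, with mode p relaxing on time scale τ₁/p². This is a textbook free-Rouse-chain sketch in arbitrary units (N and τ₁ below are arbitrary), not the paper's full decomposition, which also includes tube-survival terms.

```python
import numpy as np

def rouse_modulus(t, N=50, tau1=1.0):
    """Normalised stress relaxation modulus of a free Rouse chain:
    sum over N modes, with mode p decaying as exp(-2 t p^2 / tau1)."""
    p = np.arange(1, N + 1)
    return float(np.exp(-2.0 * t * p**2 / tau1).sum()) / N

ts = [0.0, 0.1, 1.0]
vals = [rouse_modulus(t) for t in ts]  # monotonically decreasing from 1
```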
Abstract:
Classical computer vision methods can only weakly emulate the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primates' visual system, which enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primates' visual system compared with what machine vision systems have achieved to date motivates scientists and researchers to further explore this area in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for an efficient hierarchical object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes the visual data in a non-linear way, focusing only on the regions of interest and hence reducing the processing time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognising objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlapping between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still under the capture–recapture methodology, relying on repeated entries in one single list have been suggested in these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indication of heterogeneity as well as a new understanding of the Zelterman and Chao's lower bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates: in the case studied here, the holding size and the country of origin. Our results confirm the presence of substantial unobserved heterogeneity supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
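The two one-list estimators named in this abstract have simple closed forms in terms of the frequency counts f1 (units seen once) and f2 (units seen twice). A minimal sketch with hypothetical counts, not figures from the Scrapie Notifications Database:

```python
import math

def zelterman(n, f1, f2):
    """Zelterman estimator of total population size from one-list
    count data: a locally truncated Poisson with rate 2*f2/f1,
    where n is the number of units observed at least once."""
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

def chao_lower_bound(n, f1, f2):
    """Chao's lower-bound estimator for the same one-list data."""
    return n + f1**2 / (2.0 * f2)

# Hypothetical counts: 180 holdings observed, 100 once, 30 twice.
print(round(zelterman(180, 100, 30)))        # → 399
print(round(chao_lower_bound(180, 100, 30))) # → 347
```

Both estimators use only the low-frequency counts, which is what makes them robust to unobserved heterogeneity among the rarely observed units.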
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed for the initial model building, modelling the variance of a parameter on a logarithmic scale, and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
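Two of the convergence devices mentioned (centring continuous covariates) and the logit link itself are easy to sketch. Everything below is illustrative: the covariate values are hypothetical, and the intercept of -2.2 is chosen only because it yields roughly the 10% event rate reported; no effect sizes from the trials are reproduced.

```python
import numpy as np

def centre(x):
    """Centre a continuous covariate about its mean, one of the
    devices used to ease convergence of the mixed logistic model."""
    x = np.asarray(x, dtype=float)
    return x - x.mean()

def p_event(eta):
    """Logit link: map a linear predictor to an event probability."""
    return 1.0 / (1.0 + np.exp(-eta))

age = centre([35, 42, 50, 61])   # hypothetical covariate values
# An intercept of -2.2 on the logit scale gives roughly a 10% rate;
# the zero coefficient keeps the sketch free of invented effects:
probs = p_event(-2.2 + 0.0 * age)
```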
Abstract:
Modern buildings are designed to enhance the match between the environment, the spaces and the people carrying out work, so that the well-being and the performance of the occupants are in harmony. Building services are systems that facilitate a healthy working environment within which workers' productivity can be optimised. However, the maintenance of these services is fraught with problems that may contribute up to 50% of the total life cycle cost of the building. Maintenance support is one area which is not usually designed into the system, as this is not common practice in the services industry. The other areas of shortfall for future designs are client requirements, commissioning, facilities management data and post-occupancy evaluation feedback, all of which need to be adequately planned for so that this information is captured and documented for use in future designs. At the University of Reading an integrated approach has been developed to assemble the multitude of aspects inherent in this field. This approach records requirements and measured achievements for the benefit of both building owners and practitioners. This integrated approach can be represented in a Through Life Business Model (TLBM) format using the concept of Integrated Logistic Support (ILS). The prototype TLBM developed utilises the tailored tools and techniques of ILS for building services. This TLBM approach will facilitate the successful development of a databank that would be invaluable in capturing essential data (e.g. reliability of components) for enhancing future building services designs, life cycle costing and decision making by practitioners, in particular facilities managers.
Abstract:
Based on integrated system optimisation and parameter estimation, a method is described for on-line steady-state optimisation which compensates for model-plant mismatch and solves a non-linear optimisation problem by iterating on a linear-quadratic representation. The method requires real process derivatives, which are estimated using a dynamic identification technique. The utility of the method is demonstrated using a simulation of the Tennessee Eastman benchmark chemical process.
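The core idea of solving a non-linear problem by iterating on a quadratic representation can be sketched in one dimension. This is a generic illustration under stated assumptions: plain finite differences stand in for the paper's dynamic identification of process derivatives, and the test objective is arbitrary.

```python
import math

def optimise_quadratic_iteration(f, x0, h=1e-4, tol=1e-6, max_iter=100):
    """Repeatedly minimise a local quadratic model of f built from
    estimated first and second derivatives (finite differences here),
    stepping to each quadratic model's minimiser."""
    x = x0
    for _ in range(max_iter):
        g = (f(x + h) - f(x - h)) / (2 * h)          # gradient estimate
        H = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # curvature estimate
        if H <= 0:
            break                                     # quadratic model has no minimum
        step = g / H                                  # minimiser of the local quadratic
        x -= step
        if abs(step) < tol:
            break
    return x

# Arbitrary non-linear objective with minimum at x = ln 2:
x_star = optimise_quadratic_iteration(lambda x: math.exp(x) - 2.0 * x, 0.0)
```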
Abstract:
We present, pedagogically, the Bayesian approach to composed error models under alternative, hierarchical characterizations; demonstrate, briefly, the Bayesian approach to model comparison using recent advances in Markov Chain Monte Carlo (MCMC) methods; and illustrate, empirically, the value of these techniques to natural resource economics and coastal fisheries management in particular. The Bayesian approach to fisheries efficiency analysis is interesting for at least three reasons. First, it is a robust and highly flexible alternative to the commonly applied frequentist procedures that dominate the literature. Second, the Bayesian approach is extremely simple to implement, requiring only a modest addition to most natural-resource economists' tool-kits. Third, despite its attractions, applications of Bayesian methodology in coastal fisheries management are few.
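The MCMC machinery the abstract refers to can be illustrated with the smallest possible example: a random-walk Metropolis sampler for the mean of normal data. This is a generic textbook sketch (flat prior, known unit variance, made-up data), not the paper's composed error model.

```python
import numpy as np

def metropolis_normal_mean(data, n_draws=5000, step=0.5, seed=0):
    """Random-walk Metropolis: posterior draws for the mean of normal
    data with known unit variance and a flat prior. Proposals are
    accepted with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)

    def log_post(mu):
        return -0.5 * np.sum((data - mu) ** 2)

    mu = data.mean()                # start at the sample mean
    draws = []
    for _ in range(n_draws):
        prop = mu + step * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(mu):
            mu = prop               # accept the proposal
        draws.append(mu)
    return np.array(draws)

draws = metropolis_normal_mean([1.8, 2.1, 2.4, 1.9, 2.3])
```

With this data the posterior is N(2.1, 1/5), so the chain's mean should settle near 2.1.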
Abstract:
This work proposes a unified neurofuzzy modelling scheme. To begin with, the initial fuzzy base construction method is based on fuzzy clustering utilising a Gaussian mixture model (GMM) combined with the analysis of variance (ANOVA) decomposition in order to obtain more compact univariate and bivariate membership functions over the subspaces of the input features. The mean and covariance of the Gaussian membership functions are found by the expectation maximisation (EM) algorithm, with the merit of revealing the underlying density distribution of the system inputs. The resultant set of membership functions forms the basis of the generalised fuzzy model (GFM) inference engine. The model structure and parameters of this neurofuzzy model are identified via supervised subspace orthogonal least squares (OLS) learning. Finally, instead of providing a deterministic class label as model output, as is conventional, a logistic regression model is applied to present the classifier's output, in which the sigmoid-type logistic transfer function scales the outputs of the neurofuzzy model to class probabilities. Experimental validation results are presented to demonstrate the effectiveness of the proposed neurofuzzy modelling scheme.
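The final scaling step is just the standard logistic (sigmoid) transfer function, which maps any real-valued model output to a probability in (0, 1). A minimal sketch; the input value is arbitrary, not an output of the neurofuzzy model:

```python
import math

def logistic(z):
    """Sigmoid transfer function mapping a real-valued score to a
    class probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# A score of 0 maps to complete uncertainty between the two classes:
print(round(logistic(0.0), 2))  # → 0.5
```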
Abstract:
This paper presents a hierarchical clustering method for semantic Web service discovery. The method aims to improve the accuracy and efficiency of traditional service discovery based on the vector space model. Each Web service is converted into a standard vector format from its Web service description document. With the help of WordNet, a semantic analysis is conducted to reduce the dimension of the term vector and to perform semantic expansion to better match the user's service request. The process and algorithm of hierarchical-clustering-based semantic Web service discovery are discussed, and the approach is validated on a dataset.
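The vector-space side of this approach can be sketched with a tiny agglomerative clustering over cosine distances. The term vectors below are hypothetical stand-ins for vectors derived from service description documents, and the single-linkage merge rule is a simplification, not necessarily the paper's algorithm.

```python
import numpy as np

def cosine_dist(a, b):
    """Cosine distance between two term-weight vectors."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def single_link_clusters(X, threshold):
    """Tiny agglomerative sketch: merge services whose term vectors
    lie within a cosine-distance threshold (single linkage)."""
    n = len(X)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_dist(X[i], X[j]) < threshold:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels

# Hypothetical term vectors for four services (rows: services,
# columns: term weights after dimension reduction):
services = np.array([
    [1.0, 0.9, 0.0, 0.0],
    [0.9, 1.0, 0.1, 0.0],
    [0.0, 0.1, 1.0, 0.8],
    [0.0, 0.0, 0.9, 1.0],
])

labels = single_link_clusters(services, threshold=0.5)
```

The first two services share terms, as do the last two, so the sketch recovers two clusters.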
Abstract:
The benefits of breastfeeding for children's health have been highlighted in many studies. The innovative aspect of the present study lies in its use of a multilevel model, a technique that has rarely been applied to studies on breastfeeding. The data reported were collected from a larger study, the Family Budget Survey (Pesquisa de Orcamentos Familiares), carried out between 2002 and 2003 in Brazil, which involved a sample of 48 470 households. A representative national sample of 1477 infants aged 0-6 months was used. The statistical analysis was performed using a multilevel model with two levels, grouped by region. In Brazil, breastfeeding prevalence was 58%. The factors that negatively influenced breastfeeding were more than four residents living in the same household [odds ratio (OR) = 0.68, 90% confidence interval (CI) = 0.51-0.89] and maternal age of 30 years or more (OR = 0.68, 90% CI = 0.53-0.89). The factors that positively influenced breastfeeding were higher socio-economic level (OR = 1.37, 90% CI = 1.01-1.88), families with more than two infants under 5 years (OR = 1.25, 90% CI = 1.00-1.58) and residence in rural areas (OR = 1.25, 90% CI = 1.00-1.58). Although the majority of mothers were aware of the value of maternal milk and breastfed their babies, the prevalence of breastfeeding remains lower than the rate advised by the World Health Organization, and the number of residents living in the same household together with maternal age of 30 years or older were both factors associated with cessation of breastfeeding before 6 months.
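The odds ratios reported throughout this abstract compare odds, not probabilities. A minimal sketch of the arithmetic; the probability values below are illustrative inputs chosen to reproduce an OR near 0.68, not the study's fitted group-level prevalences:

```python
def odds_ratio(p_group, p_reference):
    """Odds ratio between two groups, computed from their event
    probabilities: (p1/(1-p1)) / (p0/(1-p0))."""
    odds = lambda p: p / (1.0 - p)
    return odds(p_group) / odds(p_reference)

# Illustrative only: a group prevalence of 48.5% against a reference
# prevalence of 58% corresponds to an odds ratio of about 0.68:
print(round(odds_ratio(0.485, 0.58), 2))  # → 0.68
```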