88 results for Generalized Logistic Model
Abstract:
There are two main types of data sources for income distributions in China: household survey data and grouped data. Household survey data are typically available for isolated years and individual provinces. In comparison, aggregate or grouped data are typically available more frequently and usually have national coverage. In principle, grouped data allow investigation of changes in inequality over longer, continuous periods of time, and identification of patterns of inequality across broader regions. Nevertheless, a major limitation of grouped data is that only mean income and the income shares of quintile or decile groups of the population are reported. Directly using grouped data reported in this format is equivalent to assuming that all individuals in a quintile or decile group have the same income, which potentially distorts the estimate of inequality within each region. The aim of this paper is to apply an improved econometric method designed for grouped data to study income inequality in China. A generalized beta distribution is employed to model income inequality in China at various levels and periods of time. The generalized beta distribution is more general and flexible than the lognormal distribution used in past research, and it relaxes the assumption of a uniform distribution of income within quintile and decile groups of the population. The paper studies the nature and extent of inequality in rural and urban China over the period 1978 to 2002. Income inequality in the whole of China is then modeled using a mixture of province-specific distributions. The estimated results are used to study trends in national inequality, and to discuss the empirical findings in the light of economic reforms, regional policies, and the globalization of the Chinese economy.
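Because income shares are scale-free, the shape parameters of a generalized beta distribution of the second kind (GB2) can in principle be recovered from grouped shares alone. The following is a minimal sketch of that idea, not the authors' estimator: it fits the GB2 shape parameters to hypothetical cumulative decile income shares by matching Lorenz-curve ordinates.

```python
import numpy as np
from scipy.special import betainc, betaincinv
from scipy.optimize import minimize

def gb2_lorenz(u, a, p, q):
    """Lorenz ordinates L(u) of the GB2(a, b, p, q) distribution.

    The scale b cancels out of the Lorenz curve, so income shares
    identify only the shape parameters.  Requires q > 1/a so that
    the mean exists.
    """
    t = betaincinv(p, q, u)              # quantile on the beta scale
    return betainc(p + 1.0 / a, q - 1.0 / a, t)

# Hypothetical cumulative decile income shares (bottom 10%, ..., 90%).
u = np.arange(0.1, 1.0, 0.1)
observed = np.array([0.03, 0.07, 0.12, 0.18, 0.25,
                     0.34, 0.44, 0.57, 0.73])

def loss(theta):
    a, p, q = np.exp(theta)              # positivity via log-parametrisation
    if q * a <= 1.0:                     # mean must exist
        return np.inf
    return np.sum((gb2_lorenz(u, a, p, q) - observed) ** 2)

res = minimize(loss, x0=np.log([2.0, 1.0, 1.0]), method="Nelder-Mead")
a_hat, p_hat, q_hat = np.exp(res.x)
print(a_hat, p_hat, q_hat)
```

Since the scale parameter drops out of the shares, the reported mean income would then pin down the scale separately.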
Abstract:
A mixture model incorporating long-term survivors has been adopted in biostatistics for settings where some individuals may never experience the failure event under study; the surviving fraction may be considered cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is plausible that the environmental conditions and facilities shared within a clinic affect both the proportion cured and the failure risk of the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended for the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to a data set from a multi-centre clinical trial of carcinoma as an illustration. Simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates obtained.
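For concreteness, here is a minimal sketch of the basic long-term survivor mixture model, without the paper's random effects or GLMM machinery: the population survivor function is S(t) = pi + (1 - pi) * S_u(t), with an exponential latency distribution assumed for the uncured and entirely simulated, hypothetical data.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, t, delta):
    """Negative log-likelihood of a simple long-term survivor mixture:
    a cured fraction pi that never fails, plus an exponential failure
    time with rate lam for the uncured.  (No random effects here.)
    """
    pi = 1.0 / (1.0 + np.exp(-theta[0]))             # cure fraction, logit scale
    lam = np.exp(theta[1])                           # failure rate, log scale
    surv = pi + (1.0 - pi) * np.exp(-lam * t)        # population survivor fn
    dens = (1.0 - pi) * lam * np.exp(-lam * t)       # density for failures
    return -np.sum(delta * np.log(dens) + (1.0 - delta) * np.log(surv))

# Hypothetical data: follow-up times and event indicators (1 = failure).
rng = np.random.default_rng(0)
n, true_pi, true_lam = 500, 0.3, 0.5
cured = rng.random(n) < true_pi
t_fail = np.where(cured, np.inf, rng.exponential(1.0 / true_lam, n))
t_cens = rng.exponential(4.0, n)
t = np.minimum(t_fail, t_cens)
delta = (t_fail <= t_cens).astype(float)

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, delta))
print(1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1]))  # pi_hat, lam_hat
```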
Abstract:
Ligaments undergo finite strain, displaying hyperelastic behaviour as the initially tangled fibrils straighten out, combined with viscoelastic behaviour (strain rate sensitivity). In the present study, the anterior cruciate ligament of the human knee joint is modelled in three dimensions to gain an understanding of the stress distribution over the ligament due to motion imposed on its ends, determined from experimental studies. A three-dimensional, finite strain material model of ligaments has recently been proposed by Pioletti in Ref. [2]. It is attractive because it separates the elastic stress from that due to the present strain rate and that due to the past history of deformation. However, it treats the ligament as isotropic and incompressible. While the second assumption is reasonable, the first is clearly untrue. In the present study, an alternative model of the elastic behaviour due to Bonet and Burton (Ref. [4]) is generalized. Bonet and Burton consider finite strain with constant moduli for the fibres and for the matrix of a transversely isotropic composite. In the present work, the fibre modulus is first made to increase exponentially from zero with an invariant that provides a measure of the stretch in the fibre direction. At 12% strain in the fibre direction, a new reference state is adopted, after which the material modulus is held constant, as in Bonet and Burton's model. The strain rate dependence can be added either using Pioletti's isotropic approximation or by making the effect depend on the strain rate in the fibre direction only. A solid model of a ligament is constructed, based on experimentally measured sections, and the deformation is predicted using explicit integration in time. This approach simplifies the coding of the material model, but its stability is limited by the substantial damping implied by the nonlinear dependence of stress on strain rate. At present, an artificially high density is used to provide stability, while the dynamics are removed from the solution using artificial viscosity. The result is a quasi-static solution incorporating the effect of strain rate. Alternative approaches to material modelling and integration that may yield a better model are discussed.
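A one-dimensional caricature of the fibre law just described, under assumed parameter values: stress rises exponentially from zero over the toe region and switches to a constant modulus at 12% fibre strain. The actual model is a three-dimensional, transversely isotropic finite-strain law; this sketch covers only the fibre-direction response.

```python
import numpy as np

def fiber_stress(stretch, c=1.0, k=30.0, eps_lin=0.12):
    """Toy fibre law: stress grows exponentially from zero in the toe
    region (tangled fibrils straightening) and becomes linear once the
    strain reaches eps_lin (12%), after which a constant modulus is
    used.  Parameter values c and k are illustrative only.
    """
    eps = np.asarray(stretch) - 1.0                 # engineering strain
    sigma_onset = c * (np.exp(k * eps_lin) - 1.0)   # stress at transition
    modulus = c * k * np.exp(k * eps_lin)           # tangent at transition
    toe = c * (np.exp(k * np.clip(eps, 0.0, None)) - 1.0)  # slack below 0
    linear = sigma_onset + modulus * (eps - eps_lin)
    return np.where(eps <= eps_lin, toe, linear)

print(fiber_stress([1.0, 1.06, 1.12, 1.20]))
```

The two branches share the same value and tangent at the transition strain, so the switch to a constant modulus does not introduce a stress discontinuity.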
Abstract:
We extend a recent construction for an integrable model describing Josephson tunneling between identical BCS systems to the case where the BCS systems have different single particle energy levels. The exact solution of this generalized model is obtained through the Bethe ansatz.
Abstract:
For Markov processes on the positive integers with the origin as an absorbing state, Ferrari, Kesten, Martinez and Picco studied the existence of quasi-stationary and limiting conditional distributions by characterizing quasi-stationary distributions as fixed points of a transformation Phi on the space of probability distributions on {1, 2, ...}. In the case of a birth-death process, the components of Phi(nu) can be written down explicitly for any given distribution nu. Using this explicit representation, we show that Phi preserves likelihood ratio ordering between distributions. A conjecture of Kryscio and Lefevre concerning the quasi-stationary distribution of the SIS logistic epidemic follows as a corollary.
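For a finite-state illustration, the quasi-stationary distribution of the SIS logistic epidemic can be computed directly as the normalised left eigenvector of the generator restricted to the transient states. The rates below use a standard logistic parametrisation with illustrative values; the construction, not the specific numbers, is the point.

```python
import numpy as np

def sis_qsd(N=100, beta=2.0, gamma=1.0):
    """Quasi-stationary distribution of the SIS logistic epidemic on
    {1, ..., N}: the normalised left eigenvector, for the eigenvalue
    of largest real part, of the generator restricted to the
    non-absorbing states (state 0 is absorbing).
    """
    Q = np.zeros((N, N))                        # states 1..N
    for i, n in enumerate(range(1, N + 1)):
        birth = beta * n * (N - n) / N          # new infection: n -> n+1
        death = gamma * n                       # recovery: n -> n-1 (or absorbed)
        if i + 1 < N:
            Q[i, i + 1] = birth
        if i - 1 >= 0:
            Q[i, i - 1] = death
        Q[i, i] = -(birth + death)              # absorption mass leaks out at n=1
    w, v = np.linalg.eig(Q.T)                   # left eigenvectors of Q
    k = np.argmax(w.real)                       # slowest decay mode
    qsd = np.abs(v[:, k].real)
    return qsd / qsd.sum()

nu = sis_qsd()
print(nu.argmax() + 1)                          # mode of the QSD, near N(1 - gamma/beta)
```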
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, the last being a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process in which non-parametric methods such as decision trees and generalized additive models are used to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. The paper is motivated by a medical problem in which interest centres on developing a risk stratification system for the morbidity of 1,710 cardiac patients, given a suite of demographic, clinical and preoperative variables. Although the methods are applied specifically to this case study, they can be applied in any field, irrespective of the type of response.
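A minimal sketch of the two-step modelling stage, with synthetic stand-in data in place of the cardiac study: a non-parametric learner (here a decision tree) screens for important variables, and a parsimonious parametric model is then fitted on the screened set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for the cardiac data: X holds demographic,
# clinical and preoperative variables; y is binary morbidity.
rng = np.random.default_rng(1)
X = rng.normal(size=(1710, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=1710) > 0).astype(int)

# Step 1: a non-parametric learner screens for important variables.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
keep = np.argsort(tree.feature_importances_)[::-1][:3]

# Step 2: a parsimonious parametric model on the screened variables.
model = LogisticRegression().fit(X[:, keep], y)
print(keep, model.coef_)
```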
Abstract:
A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in Computational Biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.
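For context, here is a minimal implementation of one of the classical samplers the new sampler is said to encompass: plain Metropolis-Hastings with a symmetric Gaussian proposal. This is not the generalized sampler itself, only the familiar baseline it generalizes.

```python
import numpy as np

def metropolis_hastings(logp, x0, n_steps=10000, step=1.0, seed=0):
    """Metropolis-Hastings with a symmetric Gaussian random-walk
    proposal, so the acceptance ratio reduces to a ratio of target
    densities (the Metropolis case).
    """
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Example target: standard normal log-density.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
print(chain.mean(), chain.std())
```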
Abstract:
Research into the etiology of social phobia has lagged far behind that of descriptive and maintaining factors. The current paper reviews data from a variety of sources that have some bearing on questions of the origins of social fears. Areas examined include genetic factors, temperament, childrearing, negative life events, and adverse social experiences. Epidemiological data are examined in detail and factors associated with social phobia such as cognitive distortions and social skills are also covered. The paper concludes with an initial model that draws together some of the current findings and aims to provide a platform for future research directions.
Abstract:
The generalized secant hyperbolic distribution (GSHD) proposed in Vaughan (2002) includes a wide range of unimodal symmetric distributions, with the Cauchy and uniform distributions being the limiting cases, and the logistic and hyperbolic secant distributions being special cases. The current article derives an asymptotically efficient rank estimator of the location parameter of the GSHD and suggests the corresponding one- and two-sample optimal rank tests. The rank estimator derived is compared to the modified MLE of location proposed in Vaughan (2002). By combining these two estimators, a computationally attractive method for constructing an exact confidence interval of the location parameter is developed. The statistical procedures introduced in the current article are illustrated by examples.
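The optimal rank scores vary with the GSHD shape parameter. As a concrete special case, at the logistic member of the family the asymptotically efficient rank estimator of location reduces to the classical Hodges-Lehmann estimator (the median of pairwise Walsh averages), sketched below.

```python
import numpy as np

def hodges_lehmann(x):
    """Median of the pairwise Walsh averages (x_i + x_j) / 2 over all
    i <= j: the rank estimator of location that is asymptotically
    efficient at the logistic distribution, one special case of the
    GSHD family.
    """
    x = np.asarray(x)
    i, j = np.triu_indices(len(x))        # all pairs with i <= j
    return np.median((x[i] + x[j]) / 2.0)

sample = np.random.default_rng(2).logistic(loc=3.0, size=200)
print(hodges_lehmann(sample))             # close to the true location, 3.0
```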
Abstract:
Eukaryotic genomes display segmental patterns of variation in various properties, including GC content and degree of evolutionary conservation. DNA segmentation algorithms are aimed at identifying statistically significant boundaries between such segments. Such algorithms may provide a means of discovering new classes of functional elements in eukaryotic genomes. This paper presents a model and an algorithm for Bayesian DNA segmentation and considers the feasibility of using it to segment whole eukaryotic genomes. The algorithm is tested on a range of simulated and real DNA sequences, and the following conclusions are drawn. Firstly, the algorithm correctly identifies non-segmented sequence, and can thus be used to reject the null hypothesis of uniformity in the property of interest. Secondly, estimates of the number and locations of change-points produced by the algorithm are robust to variations in algorithm parameters and initial starting conditions and correspond to real features in the data. Thirdly, the algorithm is successfully used to segment human chromosome 1 according to GC content, thus demonstrating the feasibility of Bayesian segmentation of eukaryotic genomes. The software described in this paper is available from the author's website (www.uq.edu.au/~uqjkeith/) or upon request to the author.
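A toy version of the underlying computation, assuming a single change-point: the posterior over the change-point location in a 0/1 sequence (e.g. a per-base G+C indicator) under independent Bernoulli segments with Beta(1, 1) priors. The paper's algorithm handles an unknown number of change-points; this sketch shows only the Bayesian marginal-likelihood mechanics.

```python
import numpy as np
from scipy.special import betaln

def changepoint_posterior(seq):
    """Posterior over a single change-point c, where s[0:c] and s[c:n]
    are independent Bernoulli segments with Beta(1, 1) priors on their
    success probabilities and a uniform prior on c.
    """
    s = np.asarray(seq, dtype=float)
    n = len(s)
    cum = np.concatenate([[0.0], np.cumsum(s)])
    def seg_logml(a, b):                 # log marginal likelihood of s[a:b]
        k, m = cum[b] - cum[a], b - a
        return betaln(k + 1.0, m - k + 1.0) - betaln(1.0, 1.0)
    logpost = np.array([seg_logml(0, c) + seg_logml(c, n)
                        for c in range(1, n)])
    logpost -= logpost.max()             # stabilise before exponentiating
    post = np.exp(logpost)
    return post / post.sum()

# Hypothetical sequence: GC-poor first half, GC-rich second half.
rng = np.random.default_rng(3)
seq = np.concatenate([rng.random(300) < 0.35, rng.random(300) < 0.55])
post = changepoint_posterior(seq)
print(post.argmax() + 1)                 # most probable change-point (~300)
```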
Abstract:
The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
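The impute-within-Gibbs mechanics can be illustrated on a much simpler model than the paper's hierarchical multinomial: below, a bivariate normal with known covariance, in which missing second components are drawn from their conditional distribution at each iteration before the mean is updated. All data and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical bivariate data with some second components missing,
# mimicking item non-response in one survey wave.  Sigma is treated
# as known to keep the sketch short; only the impute-then-update
# pattern carries over to the paper's model.
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
Y = rng.multivariate_normal([1.0, 2.0], Sigma, size=200)
miss = rng.random(200) < 0.3                # 30% of y2 treated as missing
mu = np.zeros(2)
draws = []
for it in range(2000):
    # Imputation step: draw missing y2 from its conditional given y1.
    cond_mean = mu[1] + Sigma[0, 1] / Sigma[0, 0] * (Y[miss, 0] - mu[0])
    cond_var = Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0]
    Y[miss, 1] = cond_mean + np.sqrt(cond_var) * rng.normal(size=miss.sum())
    # Parameter step: draw mu from its conditional under a flat prior.
    mu = rng.multivariate_normal(Y.mean(axis=0), Sigma / len(Y))
    draws.append(mu)
print(np.mean(draws[500:], axis=0))         # posterior mean after burn-in
```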
Abstract:
The Perk-Schultz model may be expressed in terms of the solution of the Yang-Baxter equation associated with the fundamental representation of the untwisted affine extension of the general linear quantum superalgebra U-q[gl(m/n)], with a multiparametric coproduct action as given by Reshetikhin. Here, we present analogous explicit expressions for solutions of the Yang-Baxter equation associated with the fundamental representations of the twisted and untwisted affine extensions of the orthosymplectic quantum superalgebras U-q[osp(m/n)]. In this manner, we obtain generalizations of the Perk-Schultz model.
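For reference, a solution of the Yang-Baxter equation here means an operator R(u), depending on a spectral parameter and acting on two factors of a triple tensor product, that satisfies

```latex
R_{12}(u)\, R_{13}(u+v)\, R_{23}(v) \;=\; R_{23}(v)\, R_{13}(u+v)\, R_{12}(u),
```

where the subscripts indicate the pair of tensor factors on which R acts nontrivially.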
Abstract:
Pharmacodynamics (PD) is the study of the biochemical and physiological effects of drugs. The construction of optimal designs for dose-ranging trials with multiple periods is considered in this paper, where the outcome of the trial (the effect of the drug) is a binary response: the success or failure of a drug to bring about a particular change in the subject after a given amount of time. The carryover effect of each dose from one period to the next is assumed to be proportional to the direct effect. It is shown for a logistic regression model that the efficiency of the optimal parallel (single-period) or crossover (two-period) design is substantially greater than that of a balanced design. The optimal designs are also shown to be robust to misspecification of the parameter values. Finally, the parallel and crossover designs are combined to provide the experimenter with greater flexibility.
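A sketch of the single-period comparison: the Fisher information of a two-parameter logistic dose-response model under a candidate two-point design versus a balanced design, with assumed parameter values and illustrative doses. Carryover effects and the multi-period structure of the paper are omitted.

```python
import numpy as np

def logistic_info(doses, weights, beta):
    """Fisher information of the logistic model
    P(success | d) = expit(b0 + b1 * d) for a design putting the
    given weights on the given doses (single-period case only).
    """
    info = np.zeros((2, 2))
    for d, w in zip(doses, weights):
        p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * d)))
        x = np.array([1.0, d])
        info += w * p * (1.0 - p) * np.outer(x, x)
    return info

beta = np.array([-2.0, 1.0])                  # assumed parameter values
balanced = logistic_info([0, 1, 2, 3, 4], [0.2] * 5, beta)
two_point = logistic_info([0.45, 3.55], [0.5, 0.5], beta)
# D-efficiency of the balanced design relative to the two-point design:
print((np.linalg.det(balanced) / np.linalg.det(two_point)) ** 0.5)
```

For this model the D-optimal single-period design is known to put equal weight on the two doses where the response probability is roughly 0.176 and 0.824; the two doses above approximate those points for the assumed beta.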
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process; it is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application and need to be specified with the help of domain experts. We are building an application of the OCM for use in ATC. To develop a realistic model, we are calibrating the probability and timing models that comprise each process using data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task; its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data and provides an extrapolation to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distributions seen in the student data, the mean and the tail of the distributions are similar.
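A minimal sketch of the classification component: a logistic regression fitted to hypothetical conflict-detection trials, with invented feature names (miss distance, time to closest approach) standing in for whatever the experiments actually recorded.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical calibration data: each row is a conflict-detection
# trial described by two geometry features, with the subject's
# "conflict / no conflict" response as the label.
rng = np.random.default_rng(5)
sep_dist = rng.uniform(0, 10, 400)          # predicted miss distance (nm)
time_to_ca = rng.uniform(0, 120, 400)       # time to closest approach (s)
logit = 3.0 - 0.8 * sep_dist - 0.01 * time_to_ca
label = rng.random(400) < 1 / (1 + np.exp(-logit))

X = np.column_stack([sep_dist, time_to_ca])
clf = LogisticRegression().fit(X, label)

# The fitted model can then drive the Classify process of a virtual
# operator, including extrapolation beyond the calibration scenarios.
print(clf.predict_proba([[2.0, 30.0]])[0, 1])
```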
Abstract:
The University of Queensland, Australia has developed Fez, a world-leading user interface and management system for Fedora-based institutional repositories, which bridges the gap between a repository and its users. Christiaan Kortekaas, Andrew Bennett and Keith Webster will review this open-source software, which gives institutions the power to create a comprehensive repository solution without the hassle.