961 results for discrete-choice models
Abstract:
In this paper, we look at three models (mixture, competing risk and multiplicative) involving two inverse Weibull distributions. We study the shapes of the density and failure-rate functions and discuss graphical methods to determine if a given data set can be modelled by one of these models.
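As a hedged illustration of the kind of shape analysis described above, the sketch below evaluates the density and failure-rate (hazard) functions of a two-component inverse Weibull mixture; the mixing weight, shape and scale parameters, and helper names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def inv_weibull_pdf(x, alpha, beta):
    """Density of the inverse Weibull distribution with shape alpha and scale beta."""
    return alpha * beta**alpha * x**(-alpha - 1) * np.exp(-(beta / x)**alpha)

def inv_weibull_cdf(x, alpha, beta):
    """CDF of the inverse Weibull distribution: F(x) = exp(-(beta/x)^alpha)."""
    return np.exp(-(beta / x)**alpha)

def mixture_pdf_and_hazard(x, p, a1, b1, a2, b2):
    """Density and failure rate of a two-component inverse Weibull mixture."""
    f = p * inv_weibull_pdf(x, a1, b1) + (1 - p) * inv_weibull_pdf(x, a2, b2)
    F = p * inv_weibull_cdf(x, a1, b1) + (1 - p) * inv_weibull_cdf(x, a2, b2)
    return f, f / (1.0 - F)          # hazard = f(x) / (1 - F(x))

# Illustrative parameters only: mixing weight 0.4 and two distinct components.
x = np.linspace(0.05, 10, 400)
pdf, hazard = mixture_pdf_and_hazard(x, p=0.4, a1=1.5, b1=1.0, a2=3.0, b2=4.0)
```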
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step-size control cannot rely on the usual integration formulas. A step-size control scheme suited to this table-driven velocity and position calculation instead uses the difference between the result of one large step and that of two small steps. This variable-time-step method automatically chooses a suitable step size for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy between the variable and fixed step sizes depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size achieves the required accuracy on the first run, whereas a fixed step size may require several runs to verify the simulation accuracy, or a conservative step size that leads to longer run times.
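A minimal sketch of the step-doubling idea described above, comparing one large step with two half steps and shrinking or growing the step from their difference, is given below for a single particle; the force law, tolerance, and adjustment factors are illustrative assumptions rather than details of the paper's table-driven update.

```python
import numpy as np

def step(x, v, dt, force):
    """One semi-implicit (symplectic) Euler update of velocity and position."""
    v_new = v + dt * force(x, v)
    x_new = x + dt * v_new
    return x_new, v_new

def adaptive_step(x, v, dt, force, tol=1e-6, dt_max=1e-2):
    """Advance one particle, choosing dt by comparing 1 big step against 2 half steps."""
    while True:
        x_big, v_big = step(x, v, dt, force)
        x_half, v_half = step(x, v, dt / 2, force)
        x_two, v_two = step(x_half, v_half, dt / 2, force)
        err = max(abs(x_big - x_two), abs(v_big - v_two))
        if err <= tol:
            # Accept the more accurate two-half-step result; allow dt to grow.
            return x_two, v_two, min(dt * 1.5, dt_max)
        dt *= 0.5                     # Reject and retry with a smaller step.

# Illustrative force: gravity plus linear drag.
force = lambda x, v: -9.81 - 0.1 * v
x, v, dt = 1.0, 0.0, 1e-3
for _ in range(1000):
    x, v, dt = adaptive_step(x, v, dt, force)
```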
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research that arise from our conceptualization of desistance.
Abstract:
Computer-assisted learning has an important role in the teaching of pharmacokinetics to health sciences students because it transfers the emphasis from the purely mathematical domain to an 'experiential' domain in which graphical and symbolic representations of actions and their consequences form the major focus for learning. Basic pharmacokinetic concepts can be taught by experimenting with the interplay of dose and dosage interval with drug absorption (e.g. absorption rate, bioavailability), drug distribution (e.g. volume of distribution, protein binding) and drug elimination (e.g. clearance) in determining drug concentrations, using library ('canned') pharmacokinetic models. Such 'what if' approaches are found in calculator-simulators such as PharmaCalc, Practical Pharmacokinetics and PK Solutions. Others, such as SAAM II, ModelMaker, and Stella, represent the 'systems dynamics' genre, which requires the user to conceptualise a problem and formulate the model on-screen using symbols, icons, and directional arrows. The choice of software should be determined by the aims of the subject/course, the experience and background of the students in pharmacokinetics, and institutional factors including the price and networking capabilities of the package(s). Enhanced learning may result if the computer teaching of pharmacokinetics is supported by tutorials, especially where the techniques are applied to solving problems in which the link with healthcare practice is clearly established.
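As a rough counterpart to the 'what if' experimentation described above, the sketch below simulates plasma concentrations for a standard one-compartment model with first-order absorption under repeated oral dosing; the dose, dosing interval, and parameter values are illustrative assumptions and are not tied to any of the packages named.

```python
import numpy as np

def conc_single_dose(t, dose, F, ka, V, CL):
    """One-compartment, first-order absorption: concentration after a single oral dose."""
    ke = CL / V                                   # elimination rate constant
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def conc_repeated_doses(t, dose, tau, F, ka, V, CL):
    """Superpose single-dose profiles for doses given every tau hours."""
    c = np.zeros_like(t)
    for t_dose in np.arange(0.0, t.max(), tau):
        after = t >= t_dose
        c[after] += conc_single_dose(t[after] - t_dose, dose, F, ka, V, CL)
    return c

# Illustrative values: 500 mg every 8 h, F = 0.9, ka = 1.2 /h, V = 40 L, CL = 5 L/h.
t = np.linspace(0, 72, 721)
c = conc_repeated_doses(t, dose=500, tau=8, F=0.9, ka=1.2, V=40, CL=5)
```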
Abstract:
We solve the Sp(N) Heisenberg and SU(N) Hubbard-Heisenberg models on the anisotropic triangular lattice in the large-N limit. These two models may describe, respectively, the magnetic and electronic properties of the family of layered organic materials κ-(BEDT-TTF)2X. The Heisenberg model is also relevant to the frustrated antiferromagnet Cs2CuCl4. We find rich phase diagrams for each model. The Sp(N) antiferromagnet is shown to have five different phases as a function of the size of the spin and the degree of anisotropy of the triangular lattice. The effects of fluctuations at finite N are also discussed. For parameters relevant to Cs2CuCl4 the ground state either exhibits incommensurate spin order, or is in a quantum-disordered phase with deconfined spin-1/2 excitations and topological order. The SU(N) Hubbard-Heisenberg model exhibits an insulating dimer phase, an insulating box phase, a semi-metallic staggered flux phase (SFP), and a metallic uniform phase. The uniform and SFP phases exhibit a pseudogap. A metal-insulator transition occurs at intermediate values of the interaction strength.
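For reference, a hedged sketch of the spin model referred to above: the standard nearest-neighbour Heisenberg Hamiltonian on the anisotropic triangular lattice, with exchange J along the chains and J' on the diagonal (frustrating) bonds. The notation is generic and not copied from the paper.

```latex
H \;=\; J \sum_{\langle i j \rangle_{\mathrm{chain}}} \mathbf{S}_i \cdot \mathbf{S}_j
\;+\; J' \sum_{\langle i j \rangle_{\mathrm{diag}}} \mathbf{S}_i \cdot \mathbf{S}_j
```

The ratio J'/J measures the degree of anisotropy: J' = 0 gives decoupled chains, while J' = J recovers the isotropic triangular lattice.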
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
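A minimal sketch of the isolation step described above, under the assumption that each error class k comes with a known feature matrix F_k: the observed residual vector is projected onto the column space of each F_k and attributed to the class whose subspace leaves the smallest orthogonal remainder. The matrices and residual below are synthetic placeholders, not the ASM 1 quantities from the paper.

```python
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """Return the index of the error class whose feature subspace best explains the residual.

    residual         : (n,) residual vector generated by the observer
    feature_matrices : list of (n, m_k) feature matrices, one per error class
    """
    best_class, best_norm = None, np.inf
    for k, F in enumerate(feature_matrices):
        # Least-squares fit gives the orthogonal projection of the residual onto span(F).
        coeff, *_ = np.linalg.lstsq(F, residual, rcond=None)
        leftover = np.linalg.norm(residual - F @ coeff)
        if leftover < best_norm:
            best_class, best_norm = k, leftover
    return best_class, best_norm

# Synthetic example: three error classes with 6-dimensional residuals.
rng = np.random.default_rng(0)
features = [rng.standard_normal((6, 2)) for _ in range(3)]
r = features[1] @ np.array([0.7, -1.3])        # residual lying in the class-1 subspace
print(isolate_error_class(r, features))         # expected: (1, ~0.0)
```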
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, both on the grounds of frequency response and mutual filter orthogonality.
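For concreteness, the sketch below constructs the kind of 2-D Gabor receptive-field model the paper argues against: a Gaussian envelope multiplied by a sinusoidal carrier. The Cauchy alternative advocated in the paper is not reproduced here, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gabor_rf(size, sigma, wavelength, theta, phase=0.0, gamma=0.5):
    """2-D Gabor receptive field: Gaussian envelope times sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier runs along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# Illustrative even-symmetric filter: 31x31 pixels, oriented at 45 degrees.
rf = gabor_rf(size=31, sigma=4.0, wavelength=8.0, theta=np.pi / 4)
```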
Abstract:
Some efficient solution techniques for solving models of noncatalytic gas-solid and fluid-solid reactions are presented. These models include those with non-constant diffusivities, for which the formulation reduces to that of a convection-diffusion problem. For such models, a large Thiele modulus leads to a singular perturbation problem for which classical numerical methods can present difficulties. For the convection-diffusion-like case, the time-dependent partial differential equations are transformed by a semi-discrete Petrov-Galerkin finite element method into a system of ordinary differential equations of initial-value type that can be readily solved. With a constant diffusivity in slab geometry, the convection-like terms are absent, and a fitted-mesh finite difference method combined with a predictor-corrector method is used to solve the problem. Both methods are found to converge, and general reaction-rate forms can be treated. These methods are simple and highly efficient for arbitrary particle geometry and parameters, including a large Thiele modulus.
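As a hedged, stand-alone illustration of the fitted-mesh finite difference idea mentioned above, the sketch below solves a model singularly perturbed reaction-diffusion problem, -eps*u'' + u = 1 with u(0) = u(1) = 0, where a small eps plays the role of an inverse squared Thiele modulus, on a piecewise-uniform (Shishkin-type) mesh. The test equation and mesh choice are assumptions for illustration only, not the reaction models treated in the paper.

```python
import numpy as np

def shishkin_mesh(N, eps):
    """Piecewise-uniform mesh refined in the O(sqrt(eps)) boundary layers."""
    tau = min(0.25, 2.0 * np.sqrt(eps) * np.log(N))
    left = np.linspace(0.0, tau, N // 4 + 1)
    mid = np.linspace(tau, 1.0 - tau, N // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, N // 4 + 1)
    return np.unique(np.concatenate([left, mid, right]))

def solve_reaction_diffusion(N=64, eps=1e-4):
    """Central differences for -eps*u'' + u = 1, u(0) = u(1) = 0, on a fitted mesh."""
    x = shishkin_mesh(N, eps)
    n = len(x)
    A = np.zeros((n, n)); b = np.ones(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 0.0       # boundary conditions
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = -eps * 2.0 / (hl * (hl + hr))     # diffusion, left neighbour
        A[i, i + 1] = -eps * 2.0 / (hr * (hl + hr))     # diffusion, right neighbour
        A[i, i] = -A[i, i - 1] - A[i, i + 1] + 1.0      # diffusion diagonal + reaction
    return x, np.linalg.solve(A, b)

x, u = solve_reaction_diffusion()
```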
Abstract:
In this paper theoretical models have been established that can account for gas transmission through nanocomposite laminates, consisting of an oxide layer of finite permeability containing defects, on a polymer sheet of finite thickness. The defect shapes can be either long cracks or rectangular holes. The models offer a choice of exact numerical calculations or fast and intuitive analytical approximations. Experimental measurements of oxygen permeation through four different SiOx/poly(ethylene terephthalate) samples that were strained to produce distributions of cracks showed good agreement with the results predicted by the approximate analytic model. A key practical consequence of this observation is that, because of the logarithmic dependence of transmission on the width of a crack, for a given strain it is better to have a small number of large cracks rather than a large number of small cracks.
Abstract:
Public sector organizations traditionally have been associated with the internal process (bureaucratic) model of organizational culture. Public choice and management theory have suggested that public sector managers can learn from the experience of private sector management, and need to move away from the internal process model of organizational culture. Because of these influences on managers, the current research proposes that managers' perceptions of the ideal organizational culture would no longer reflect the internal process model. Public sector managers' perceptions of the current culture, as well as their perceptions of the ideal culture, were measured. A mail-out survey was conducted in the Queensland (a state of Australia) public sector. Responses to a competing values culture inventory were received from 222 managers. Results indicated that a reliance on the internal process model persists, while managers desired cultural models other than the internal process model, as hypothesized.
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. The models assume different mass transfer mechanisms within the porous carbon particle: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore and micropore diffusion (BM). These models are discriminated using single-component kinetic data of ethane and propane, as well as multicomponent kinetic data of their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is included in all models to account for the heterogeneity of the system. It is found that, in general, the models assuming a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
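As one hedged example of the simplest model class above, the finite-mass-exchange (linear driving force) picture, the sketch below integrates dq/dt = k(q* - q) with a Langmuir isotherm supplying the equilibrium loading q*. The rate constant and isotherm parameters are illustrative assumptions, not the fitted Ajax or Norit values.

```python
import numpy as np

def langmuir_loading(P, q_max, b):
    """Equilibrium loading q* at gas-phase pressure P (Langmuir isotherm)."""
    return q_max * b * P / (1.0 + b * P)

def ldf_uptake(t, P, k_ldf, q_max, b, q0=0.0):
    """Linear driving force kinetics dq/dt = k_ldf * (q* - q), explicit Euler in time."""
    q_star = langmuir_loading(P, q_max, b)
    q = np.empty_like(t)
    q[0] = q0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        q[i] = q[i - 1] + dt * k_ldf * (q_star - q[i - 1])
    return q

# Illustrative values loosely in the range of light hydrocarbons on activated carbon.
t = np.linspace(0.0, 600.0, 601)              # seconds
q = ldf_uptake(t, P=20.0, k_ldf=0.01, q_max=3.0, b=0.05)
```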
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it is applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one-major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two-major-gene model with a heritability value of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
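To make the simulation set-up concrete, the hedged sketch below generates phenotypes for the six basic generations under a one-major-gene-plus-polygenes model (additive major-gene effect, polygenic value, environmental error). The gene effect, variances, and population size are illustrative assumptions, and no likelihood fitting (the JSA step itself) is attempted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_generation(genotype_probs, n, a=2.0, mu=50.0, var_poly=1.0, var_env=1.0):
    """Phenotype = mu + major-gene effect + polygenic value + environmental error.

    genotype_probs: probabilities of the major-gene genotypes (AA, Aa, aa), whose
    additive effects are (+a, 0, -a). For simplicity the same polygenic variance
    is used in every generation (a deliberate simplification of the real model).
    """
    major = rng.choice([a, 0.0, -a], size=n, p=genotype_probs)
    polygenic = rng.normal(0.0, np.sqrt(var_poly), n)
    error = rng.normal(0.0, np.sqrt(var_env), n)
    return mu + major + polygenic + error

n = 200  # individuals per generation (illustrative)
generations = {
    "P1": simulate_generation([1.0, 0.0, 0.0], n),    # AA
    "P2": simulate_generation([0.0, 0.0, 1.0], n),    # aa
    "F1": simulate_generation([0.0, 1.0, 0.0], n),    # Aa
    "F2": simulate_generation([0.25, 0.5, 0.25], n),  # 1:2:1 segregation
    "B1": simulate_generation([0.5, 0.5, 0.0], n),    # F1 x P1
    "B2": simulate_generation([0.0, 0.5, 0.5], n),    # F1 x P2
}
```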
Abstract:
For the improvement of genetic material suitable for on-farm use under low-input conditions, participatory and formal plant breeding strategies are frequently presented as competing options. A common frame of reference for describing the mechanisms and purposes of breeding strategies will facilitate clearer descriptions of the similarities and differences between participatory plant breeding and formal plant breeding. In this paper an attempt is made to develop such a common framework by means of a statistically inspired language that acknowledges the importance of both on-farm trials and research-centre trials as sources of information for on-farm genetic improvement. Key concepts are the genetic correlation between environments, and the heterogeneity of phenotypic and genetic variance over environments. Classic selection response theory is taken as the starting point for comparing selection trials (on farm and at the research centre) with respect to the expected genetic improvement in a target environment (low-input farms). The variance-covariance parameters that form the input for selection response comparisons traditionally come from a mixed model fitted to multi-environment trial data. In this paper we propose a recently developed class of mixed models, namely multiplicative mixed models, also called factor-analytic models, for modelling genetic variances and covariances (correlations). Multiplicative mixed models allow genetic variances and covariances to depend on quantitative descriptors of the environment, and confer a high flexibility in the choice of variance-covariance structure without requiring the estimation of a prohibitively large number of parameters. As a result, detailed considerations regarding selection response comparisons are facilitated. The statistical machinery involved is illustrated on an example data set consisting of barley trials from the International Center for Agricultural Research in the Dry Areas (ICARDA). Analysis of the example data showed that participatory plant breeding and formal plant breeding are better interpreted as providing complementary rather than competing information.
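A hedged sketch of the classic selection-response comparison that the paper takes as its starting point: expected genetic gain in the target (low-input, on-farm) environment from direct selection on farm versus indirect selection at the research centre, driven by the genetic correlation between the two environments. The formulas are the textbook direct and correlated response expressions; all numeric inputs are illustrative assumptions.

```python
import numpy as np

def direct_response(i, h2_target, var_g_target):
    """Expected gain in the target environment from selecting in the target environment."""
    return i * np.sqrt(h2_target) * np.sqrt(var_g_target)

def correlated_response(i, h2_selection, r_g, var_g_target):
    """Expected gain in the target environment from selecting in another environment."""
    return i * np.sqrt(h2_selection) * r_g * np.sqrt(var_g_target)

# Illustrative inputs: selection intensity i (top 10% selected), heritabilities in the
# two environments, genetic correlation between them, and the genetic variance of the
# trait as expressed in the low-input target environment.
i = 1.76
on_farm = direct_response(i, h2_target=0.2, var_g_target=1.0)
on_station = correlated_response(i, h2_selection=0.5, r_g=0.6, var_g_target=1.0)
print(f"on-farm selection: {on_farm:.2f}, research-centre selection: {on_station:.2f}")
```

Whether on-farm or research-centre selection is expected to deliver more gain in the target environment then reduces to comparing these two quantities for the estimated heritabilities and genetic correlation.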