217 results for Discrete choice models
Abstract:
Six alternative structural models of individualism-collectivism are reviewed and empirically compared in a confirmatory factor analysis of questionnaire data from an Australian student sample (N=340). Central to the debate about the structure of this broad social attitude are the issues of (1) polarity (are individualism and collectivism bipolar opposites, or orthogonal factors?) and (2) dimensionality (are individualism and collectivism themselves higher-order constructs subsuming several more specific factors and, if so, what are they?). The data from this Australian sample support a model that represents individualism and collectivism as a higher-order bipolar factor hierarchically subsuming several bipolar reference-group-specific individualisms and collectivisms. Copyright (C) 2001 John Wiley & Sons, Ltd.
Abstract:
This paper assesses the capacity of local communities and sub-national governments to influence patterns of tourism development, within the context of a globalizing economy. Through a comparison of the contrasting examples of Hawaii and Queensland, the paper indicates the consequences of different approaches to land use regulation. It points to the importance of planning and policy processes that integrate community interests, in order to achieve long-term, sustainable tourism development. Effective regulation of development can minimize the social and environmental impacts of tourism. The paper illustrates how community organizations and sub-national governments can articulate local interests, despite the global demands of investors for more deregulated markets in land.
Abstract:
Impulsivity based on Gray's [Gray, J. A. (1982). The neuropsychology of anxiety: an enquiry into the function of the septo-hippocampal system. New York: Oxford University Press; (1991). The neurophysiology of temperament. In J. Strelau & A. Angleitner (Eds.), Explorations in temperament: international perspectives on theory and measurement. London: Plenum Press] physiological model of personality was hypothesised to be more predictive of goal-oriented criteria within the workplace than scales derived from Eysenck's [Eysenck, H. J. (1967). The biological basis of personality. Springfield, IL: Charles C. Thomas] physiological model of personality. Results confirmed the hypothesis and also showed that Gray's scale of Impulsivity was generally a better predictor than attributional style and interest in money. Results were interpreted as providing support for Gray's Behavioural Activation System, which moderates response to reward. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The Eysenck Personality Questionnaire-Revised (EPQ-R), the Eysenck Personality Profiler Short Version (EPP-S), and the Big Five Inventory (BFI-V4a) were administered to 135 postgraduate students of business in Pakistan. Whilst the Extraversion and Neuroticism scales from the three questionnaires were highly correlated, it was found that Agreeableness was most highly correlated with Psychoticism in the EPQ-R and Conscientiousness was most highly correlated with Psychoticism in the EPP-S. Principal component analyses with varimax rotation were carried out. The analyses generally suggested that the five-factor model, rather than the three-factor model, was more robust and better for interpreting all the higher-order scales of the EPQ-R, EPP-S, and BFI-V4a in the Pakistani data. Results show that the superiority of the five-factor solution stems from the inclusion of a broader variety of personality scales in the input data, whereas Eysenck's three-factor solution seems to be best when a less complete but possibly more important set of variables is input. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Map algebra is a data model and simple functional notation for studying the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra that handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids, enabling spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra that incorporates templates and special processing constructs is described. The programming language is called MapScript. Example program scripts are presented that perform diverse neighborhood analyses for descriptive, model-based and process-based analysis.
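MapScript code is not shown in the abstract, so as a hedged illustration in Python with NumPy, here is a minimal sketch of the template idea: a boolean window that selects which neighbors feed a focal (neighborhood) operation on a grid layer. The function and template names are invented for illustration, not taken from MapScript.

```python
import numpy as np

def focal_apply(grid, template, func):
    """Apply `func` to the neighborhood of every cell selected by `template`.

    `template` is a boolean mask (odd dimensions) defining the window shape,
    analogous to the template data type described in the abstract. Edge cells
    use only the part of the window that falls inside the grid.
    """
    rows, cols = grid.shape
    tr, tc = template.shape
    hr, hc = tr // 2, tc // 2
    out = np.empty_like(grid, dtype=float)
    for i in range(rows):
        for j in range(cols):
            # Clip the window to the grid boundary.
            r0, r1 = max(i - hr, 0), min(i + hr + 1, rows)
            c0, c1 = max(j - hc, 0), min(j + hc + 1, cols)
            win = grid[r0:r1, c0:c1]
            # Matching slice of the template for the clipped window.
            mask = template[r0 - (i - hr):tr - ((i + hr + 1) - r1),
                            c0 - (j - hc):tc - ((j + hc + 1) - c1)]
            out[i, j] = func(win[mask])
    return out

# A 3x3 "plus"-shaped template: a focal mean over von Neumann neighbors.
plus = np.array([[False, True, False],
                 [True,  True, True],
                 [False, True, False]])
layer = np.arange(16, dtype=float).reshape(4, 4)
smoothed = focal_apply(layer, plus, np.mean)
```

The same `focal_apply` with a different `func` (e.g. `np.max` or a counting rule) gives mathematical-morphology dilations or cellular-automaton update rules, which is the generality the template type is meant to provide.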
Abstract:
In this paper, we look at three models (mixture, competing risk and multiplicative) involving two inverse Weibull distributions. We study the shapes of the density and failure-rate functions and discuss graphical methods to determine if a given data set can be modelled by one of these models. (C) 2001 Elsevier Science Ltd. All rights reserved.
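As a hedged sketch of the quantities the paper studies, the density and failure-rate function of a two-component inverse Weibull mixture can be computed as below. The parameterisation F(x) = exp(-(scale/x)^shape) is one common form of the inverse Weibull and is assumed here, as are all numerical parameter values.

```python
import math

def inv_weibull_pdf(x, shape, scale):
    """Density of the inverse Weibull with CDF F(x) = exp(-(scale/x)**shape)."""
    if x <= 0:
        return 0.0
    z = (scale / x) ** shape
    return shape * z / x * math.exp(-z)

def inv_weibull_cdf(x, shape, scale):
    return math.exp(-(scale / x) ** shape) if x > 0 else 0.0

def mixture_pdf(x, p, s1, sc1, s2, sc2):
    """Two-component mixture density: p*f1 + (1-p)*f2."""
    return p * inv_weibull_pdf(x, s1, sc1) + (1 - p) * inv_weibull_pdf(x, s2, sc2)

def mixture_failure_rate(x, p, s1, sc1, s2, sc2):
    """Failure rate h(x) = f(x) / (1 - F(x)) of the mixture."""
    surv = 1.0 - (p * inv_weibull_cdf(x, s1, sc1)
                  + (1 - p) * inv_weibull_cdf(x, s2, sc2))
    return mixture_pdf(x, p, s1, sc1, s2, sc2) / surv
```

Plotting `mixture_pdf` and `mixture_failure_rate` over a range of x is the kind of shape inspection the graphical methods in the paper rely on; the inverse Weibull failure rate is unimodal and tends to zero for large x.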
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, the step size cannot be controlled through the usual integration formulas. A step size control scheme for use with the table-driven velocity and position calculation uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable step size for each particle at each step according to the conditions. Simulation using a fixed time step is compared with simulation using a variable time step. The difference in computation time for the same accuracy using a variable step size (compared to the fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
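The one-big-step versus two-small-steps comparison is generic and can be sketched independently of the paper's table-driven update. Below is a minimal illustration using explicit Euler on a toy falling-particle model; the gravity and drag values, tolerances, and growth rules are all assumptions for the sketch, not the authors' scheme.

```python
import math

def euler_step(state, h):
    """One explicit Euler step for a particle falling with linear drag.

    g and c are illustrative values, not from the paper.
    """
    x, v = state
    g, c = 9.81, 0.5
    return (x + h * v, v + h * (-g - c * v))

def adaptive_step(state, h, tol):
    """Take one accepted step; the error estimate is the difference between
    one big step of size h and two half steps of size h/2."""
    while True:
        big = euler_step(state, h)
        half = euler_step(euler_step(state, h / 2), h / 2)
        err = max(abs(b - s) for b, s in zip(big, half))
        if err <= tol:
            # Accept the more accurate two-half-step result; grow h if the
            # error is comfortably below tolerance.
            h_next = 2 * h if err < tol / 4 else h
            return half, h, h_next
        h /= 2  # Reject and retry with a smaller step.

def integrate(state, t_end, h0=0.1, tol=1e-6):
    """Drive the adaptive stepper from t=0 to t_end."""
    t, h = 0.0, h0
    while t < t_end:
        state, used, h = adaptive_step(state, min(h, t_end - t), tol)
        t += used
    return state
```

Run once with a given tolerance, this delivers the required accuracy on the first run, which is the advantage the abstract claims over repeated fixed-step trials.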
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
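The abstract does not specify the statistical model, so purely as an illustration of "rate, not state": one might model an individual's offending rate as a log-linear function of time and study its decline, with desistance appearing as the rate approaching zero rather than as a binary switch. All parameter values below are hypothetical.

```python
import math

def offending_rate(t, b0=1.0, b1=-0.15):
    """Hypothetical log-linear offending rate lambda(t) = exp(b0 + b1*t).

    A negative b1 makes the rate decline with time, so desistance is a
    gradual process rather than a discrete offend/desist state.
    """
    return math.exp(b0 + b1 * t)

def expected_offenses(t0, t1, n=1000):
    """Expected offense count over [t0, t1]: the integral of the rate,
    approximated here by the trapezoidal rule."""
    h = (t1 - t0) / n
    total = 0.5 * (offending_rate(t0) + offending_rate(t1))
    total += sum(offending_rate(t0 + i * h) for i in range(1, n))
    return total * h
```

In this framing, an empirical question like "has this person desisted?" becomes "how fast is lambda(t) declining?", which matches the process view advocated in the paper.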
Abstract:
Computer assisted learning has an important role in the teaching of pharmacokinetics to health sciences students because it transfers the emphasis from the purely mathematical domain to an 'experiential' domain in which graphical and symbolic representations of actions and their consequences form the major focus for learning. Basic pharmacokinetic concepts can be taught by experimenting with the interplay between dose and dosage interval with drug absorption (e.g. absorption rate, bioavailability), drug distribution (e.g. volume of distribution, protein binding) and drug elimination (e.g. clearance) on drug concentrations using library ('canned') pharmacokinetic models. Such 'what if' approaches are found in calculator-simulators such as PharmaCalc, Practical Pharmacokinetics and PK Solutions. Others such as SAAM II, ModelMaker, and Stella represent the 'systems dynamics' genre, which requires the user to conceptualise a problem and formulate the model on-screen using symbols, icons, and directional arrows. The choice of software should be determined by the aims of the subject/course, the experience and background of the students in pharmacokinetics, and institutional factors including price and networking capabilities of the package(s). Enhanced learning may result if the computer teaching of pharmacokinetics is supported by tutorials, especially where the techniques are applied to solving problems in which the link with healthcare practices is clearly established.
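As a sketch of the "what if" interplay between dose and dosage interval that the abstract describes, a one-compartment IV-bolus model with repeated dosing takes only a few lines. This is plain Python rather than any of the packages named, and the parameter values are illustrative, not clinical recommendations.

```python
import math

def concentration(t, dose, vd, cl, tau):
    """Plasma concentration at time t for repeated IV bolus dosing.

    One-compartment model: each dose adds (dose/vd) * exp(-k * elapsed),
    with elimination rate constant k = cl / vd (cl = clearance,
    vd = volume of distribution). Doses are given at 0, tau, 2*tau, ...
    and superposed.
    """
    k = cl / vd
    c, n = 0.0, 0
    while n * tau <= t:
        c += (dose / vd) * math.exp(-k * (t - n * tau))
        n += 1
    return c

# "What if" experiment: same daily dose, but given every 12 h instead of 24 h.
trough_q24 = concentration(23.0, dose=500, vd=50, cl=5, tau=24)
trough_q12 = concentration(23.0, dose=250, vd=50, cl=5, tau=12)
```

Varying `dose`, `tau`, `vd`, and `cl` and plotting the resulting curves is exactly the kind of experiential exploration the calculator-simulators mentioned above support.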
Abstract:
We solve the Sp(N) Heisenberg and SU(N) Hubbard-Heisenberg models on the anisotropic triangular lattice in the large-N limit. These two models may describe, respectively, the magnetic and electronic properties of the family of layered organic materials κ-(BEDT-TTF)2X. The Heisenberg model is also relevant to the frustrated antiferromagnet Cs2CuCl4. We find rich phase diagrams for each model. The Sp(N) antiferromagnet is shown to have five different phases as a function of the size of the spin and the degree of anisotropy of the triangular lattice. The effects of fluctuations at finite N are also discussed. For parameters relevant to Cs2CuCl4 the ground state either exhibits incommensurate spin order, or is in a quantum disordered phase with deconfined spin-1/2 excitations and topological order. The SU(N) Hubbard-Heisenberg model exhibits an insulating dimer phase, an insulating box phase, a semi-metallic staggered flux phase (SFP), and a metallic uniform phase. The uniform and SFP phases exhibit a pseudogap. A metal-insulator transition occurs at intermediate values of the interaction strength.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code models themselves using the simulation packages available to them. Quality assurance of such models is difficult: while benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
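The final step, localising the residual in a subspace, can be sketched generically: given a residual vector and a feature matrix per error class, pick the class whose column space explains the residual best. This is a hedged sketch of subspace-based isolation, not the paper's exact observer design; the class names and signatures below are invented for illustration.

```python
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """Pick the error class whose feature subspace best explains the residual.

    For each class, project the residual onto span(F) by least squares and
    measure the leftover; the class with the smallest leftover is isolated.
    """
    best, best_leftover = None, np.inf
    for name, F in feature_matrices.items():
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        leftover = np.linalg.norm(residual - F @ coeffs)
        if leftover < best_leftover:
            best, best_leftover = name, leftover
    return best

# Toy example: three hypothetical error classes with 4-dimensional signatures.
classes = {
    "kinetics": np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]),
    "stoichiometry": np.array([[0.0], [0.0], [1.0], [0.0]]),
    "transport": np.array([[0.0], [0.0], [0.0], [1.0]]),
}
r = np.array([0.9, -0.4, 0.02, 0.01])  # residual dominated by the first subspace
```

In practice the feature matrices would come from the error classification in step one, and the residual from the observer in step two.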
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, both on the grounds of frequency response and mutual filter orthogonality.
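One frequency-response issue often raised against the Gabor model (whether it is this paper's exact argument is not stated in the abstract) is that an even-symmetric Gabor filter has a nonzero DC response, so it responds to uniform illumination. A minimal numerical check:

```python
import numpy as np

def gabor_even(x, sigma, freq):
    """Even-symmetric (cosine-phase) Gabor filter: Gaussian times cosine."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x)

x = np.arange(-8.0, 9.0, 1.0)          # unit-spaced samples, -8..8
g = gabor_even(x, sigma=1.0, freq=0.25)
dc_response = g.sum()                  # response to a constant (DC) input
# Analytically the DC gain is sqrt(2*pi)*sigma*exp(-2*(pi*sigma*freq)**2),
# which is nonzero for any finite bandwidth.
```

Filter families built to have exactly zero DC response (the log-Gabor family is a well-known example) avoid this by construction, which is one motivation for looking beyond the Gabor model.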
Abstract:
Some efficient solution techniques for solving models of noncatalytic gas-solid and fluid-solid reactions are presented. These models include those with non-constant diffusivities, for which the formulation reduces to that of a convection-diffusion problem. A singular perturbation problem results for such models in the presence of a large Thiele modulus, for which classical numerical methods can present difficulties. For the convection-diffusion-like case, the time-dependent partial differential equations are transformed by a semi-discrete Petrov-Galerkin finite element method into a system of ordinary differential equations of the initial-value type that can be readily solved. In the presence of a constant diffusivity, in slab geometry the convection-like terms are absent, and the combination of a fitted-mesh finite difference method with a predictor-corrector method is used to solve the problem. Both methods are found to converge, and general reaction rate forms can be treated. These methods are simple and highly efficient for arbitrary particle geometry and parameters, including a large Thiele modulus. (C) 2001 Elsevier Science Ltd. All rights reserved.
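The fitted-mesh idea can be illustrated on a model slab problem with a large Thiele modulus phi. The sketch below uses a generic Shishkin-type piecewise-uniform mesh and central differences on a linear reaction-diffusion equation with a known exact solution; it is an illustration of the technique, not the authors' scheme or reaction model.

```python
import numpy as np

def shishkin_mesh(n, phi):
    """Piecewise-uniform mesh that puts half the points inside the boundary
    layer of width tau near x = 1 (the layer width scales like 1/phi)."""
    tau = min(0.5, 2.0 * np.log(n) / phi)
    left = np.linspace(0.0, 1.0 - tau, n // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, n // 2 + 1)
    return np.concatenate([left, right[1:]])

def solve_reaction_diffusion(phi, n=64):
    """Solve u'' = phi**2 * u, u(0)=0, u(1)=1 on a fitted (Shishkin) mesh.

    A model slab problem with a large Thiele modulus phi; the exact solution
    is sinh(phi*x)/sinh(phi), with a boundary layer at x = 1.
    """
    x = shishkin_mesh(n, phi)
    m = len(x)
    A = np.zeros((m, m))
    b = np.zeros(m)
    A[0, 0] = 1.0                      # Dirichlet condition u(0) = 0
    A[-1, -1] = 1.0                    # Dirichlet condition u(1) = 1
    b[-1] = 1.0
    for i in range(1, m - 1):
        # Central second-difference on the nonuniform mesh.
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = 2.0 / (hl * (hl + hr))
        A[i, i + 1] = 2.0 / (hr * (hl + hr))
        A[i, i] = -A[i, i - 1] - A[i, i + 1] - phi**2
    return x, np.linalg.solve(A, b)
```

On a uniform mesh with the same number of points, the boundary layer at x = 1 is badly under-resolved for large phi; clustering half the points in the layer is what keeps the error uniformly small, which is the point of the fitted-mesh method.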