977 results for Binary choice models
Abstract:
We study the distribution of energy level spacings in two models describing coupled single-mode Bose-Einstein condensates. Both models have a fixed number of degrees of freedom, which is small compared to the number of interaction parameters, and is independent of the dimensionality of the Hilbert space. We find that the distribution follows a universal Poisson form independent of the choice of coupling parameters, which is indicative of the integrability of both models. These results complement those for integrable lattice models where the number of degrees of freedom increases with increasing dimensionality of the Hilbert space. Finally, we also show that for one model the inclusion of an additional interaction which breaks the integrability leads to a non-Poisson distribution.
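As a rough illustration of the level-spacing test described above (a minimal sketch, not the authors' code), the nearest-neighbour spacings of a computed spectrum can be histogrammed and compared against the Poisson form P(s) = exp(-s) expected for integrable systems:

import numpy as np

# Minimal sketch: nearest-neighbour level-spacing distribution of a spectrum,
# rescaled so the mean spacing is 1, compared against the Poisson form exp(-s).
def spacing_distribution(levels, bins=50):
    E = np.sort(np.asarray(levels, dtype=float))
    s = np.diff(E)
    s /= s.mean()                                  # crude unfolding: unit mean spacing
    hist, edges = np.histogram(s, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist, np.exp(-centres)         # data histogram and Poisson reference

# Example with a synthetic, uncorrelated spectrum (Poisson-like by construction);
# in a real analysis the levels would come from diagonalising the model Hamiltonian.
centres, hist, poisson = spacing_distribution(np.random.uniform(0.0, 1.0, 2000))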
Abstract:
The adsorption of two dissociating and two non-dissociating aromatic compounds from dilute aqueous solutions on an untreated, commercially available activated carbon (B.D.H.) was investigated systematically. All adsorption experiments were carried out in pH-controlled aqueous solutions. The experimental isotherms were fitted to four different models (Langmuir homogeneous model, Langmuir binary model, Langmuir-Freundlich single model and Langmuir-Freundlich double model). The variation of the model parameters with solution pH was studied and used to gain further insight into the adsorption process. The relationship between the model parameters, the solution pH and the pK(a) was used to predict the adsorption capacity for the molecular and ionic forms of the solutes in other solutions. A relationship was sought to predict the effect of pH on the adsorption systems and to estimate the maximum adsorption capacity of the carbon at any pH at which the solute is reasonably well ionized. N2 and CO2 adsorption were used to characterize the carbon. X-ray Photoelectron Spectroscopy (XPS) measurements were used for surface elemental analysis of the activated carbon.
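For concreteness, a single-site Langmuir fit of the kind listed above can be set up as in the following minimal sketch; the data values and starting guesses are hypothetical and not taken from the study:

import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: fit a single-site Langmuir isotherm
# q = q_max * K * C / (1 + K * C) to equilibrium data (C, q).
def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])    # hypothetical equilibrium concentrations (mmol/L)
q = np.array([0.8, 1.4, 2.1, 3.0, 3.5, 3.8])     # hypothetical uptakes (mmol/g)

(q_max, K), _ = curve_fit(langmuir, C, q, p0=[4.0, 2.0])
print(f"q_max = {q_max:.2f} mmol/g, K = {K:.2f} L/mmol")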
Abstract:
Standard factorial designs may sometimes be inadequate for experiments that aim to estimate a generalized linear model, for example for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments; it uses a criterion that allows for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulating the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
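As a hedged illustration of the kind of quantity such a design search evaluates (the data points and parameter guess are ours, not the paper's), the D-criterion for an exact design under a logistic model is the log-determinant of the Fisher information at a local parameter value:

import numpy as np

# Illustrative sketch: D-criterion (log-determinant of the Fisher information)
# for an exact design under a first-order logistic model with an intercept.
# beta is a local guess at the parameters; X holds one design point per row.
def log_det_information(X, beta):
    F = np.column_stack([np.ones(len(X)), X])   # model matrix: intercept + linear terms
    eta = F @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = p * (1.0 - p)                           # binomial weights
    M = F.T @ (F * W[:, None])                  # Fisher information matrix
    return np.linalg.slogdet(M)[1]

# Compare a 2^2 factorial with a hypothetical non-factorial candidate design:
beta = np.array([0.0, 1.0, -0.5])
factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
candidate = np.array([[-1, 0.2], [0.3, 1], [1, -0.6], [-0.4, -1]])
print(log_det_information(factorial, beta), log_det_information(candidate, beta))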
Abstract:
The present study addresses the problem of predicting the properties of multicomponent systems from those of the corresponding binary systems. Two types of multicomponent polynomial models have been analysed. A probabilistic interpretation of the parameters of the Polynomial model, which explicitly relates them to the Gibbs free energies of the generalised quasichemical reactions, is proposed. The presented treatment provides a theoretical justification for such parameters. A methodology for estimating the ternary interaction parameter from the binary ones is presented. The methodology provides a way in which power-series multicomponent models, where no projection is required, could be incorporated into the Calphad approach.
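One widely used polynomial form of this kind (given only for orientation; the paper's exact models may differ) is the Redlich-Kister expansion of the excess Gibbs energy, in which a ternary term extends the binary series:

$$
G^{\mathrm{ex}} \;=\; \sum_{i<j} x_i x_j \sum_{v \ge 0} {}^{v}\!L_{ij}\,(x_i - x_j)^{v} \;+\; x_1 x_2 x_3\, L_{123},
$$

where the ${}^{v}L_{ij}$ are the binary interaction parameters and $L_{123}$ is the ternary interaction parameter that such a methodology seeks to estimate from the binary ones.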
Abstract:
Pharmacodynamics (PD) is the study of the biochemical and physiological effects of drugs. The construction of optimal designs for dose-ranging trials with multiple periods is considered in this paper, where the outcome of the trial (the effect of the drug) is considered to be a binary response: the success or failure of a drug to bring about a particular change in the subject after a given amount of time. The carryover effect of each dose from one period to the next is assumed to be proportional to the direct effect. It is shown for a logistic regression model that the efficiency of the optimal parallel (single-period) or crossover (two-period) design is substantially greater than that of a balanced design. The optimal designs are also shown to be robust to misspecification of the values of the parameters. Finally, the parallel and crossover designs are combined to provide the experimenter with greater flexibility.
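A hedged sketch of the kind of model described (our notation, not necessarily the authors'): for subject $i$ receiving dose $d_{ij}$ in period $j$, a logistic model with carryover proportional to the direct effect can be written as

$$
\operatorname{logit} P(Y_{ij} = 1) \;=\; \beta_0 + \beta_1 d_{ij} + \lambda\,\beta_1 d_{i,j-1},
$$

with $d_{i,0} = 0$ and $\lambda$ the proportionality constant; the design problem is then to choose the dose sequences so that the information about $(\beta_0, \beta_1, \lambda)$ is maximised.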
Abstract:
The Access to Allied Psychological Services component of Australia's Better Outcomes in Mental Health Care program enables eligible general practitioners to refer consumers to allied health professionals for affordable, evidence-based mental health care, via 108 projects conducted by Divisions of General Practice. The current study profiled the models of service delivery across these projects, and examined whether particular models were associated with differential levels of access to services. We found that 76% of projects were retaining their allied health professionals under contract, 28% via direct employment, and 7% in some other way; that allied health professionals were providing services from GPs' rooms in 63% of projects, from their own rooms in 63%, and from a third location in 42%; and that the referral mechanism of choice was direct referral in 51% of projects, a voucher system in 27%, a brokerage system in 24%, and a register system in 25%. Many of these models were being used in combination. No model was predictive of differential levels of access, suggesting that the approach of adapting models to the local context is proving successful.
Abstract:
We explore the implications of refinements in the mechanical description of planetary constituents on the convection modes predicted by finite-element simulations. The refinements consist in the inclusion of incremental elasticity, plasticity (yielding) and multiple simultaneous creep mechanisms in addition to the usual visco-plastic models employed in the context of unified plate-mantle models. The main emphasis of this paper rests on the constitutive and computational formulation of the model. We apply a consistent incremental formulation of the non-linear governing equations avoiding the computationally expensive iterations that are otherwise necessary to handle the onset of plastic yield. In connection with episodic convection simulations, we point out the strong dependency of the results on the choice of the initial temperature distribution. Our results also indicate that the inclusion of elasticity in the constitutive relationships lowers the mechanical energy associated with subduction events.
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified with the help of domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model, we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside of the calibration data. A simple strategy is used to calibrate the timing component of the model, and the results for reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
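The classification step of such a calibration might look like the following minimal sketch; the features, synthetic labels and model settings are hypothetical stand-ins for the experimental measurements described above:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch (not the OCM implementation): fit a logistic regression to
# subjects' conflict/no-conflict classifications, so the fitted model can stand in
# for the Classify process when the simulated operator faces new scenarios.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 20, n),     # hypothetical feature: predicted miss distance (nm)
    rng.uniform(0, 300, n),    # hypothetical feature: time to closest approach (s)
])
y = (X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 2, n) < 10).astype(int)  # synthetic labels

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[5.0, 120.0]]))   # probability the operator calls "conflict"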
Abstract:
An interactive hierarchical Generative Topographic Mapping (HGTM) [HGTM] has been developed to visualise complex data sets. In this paper, we build a more general visualisation system by extending the HGTM visualisation system in 3 directions: (1) We generalize HGTM to noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM) developed in [Kabanpami]. (2) We give the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode the user interactively selects "regions of interest" as in [HGTM], whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of LTMs is employed. (3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool to improve our understanding of the visualisation plots, since they can highlight the boundaries between data clusters. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. We illustrate our approach on a toy example and apply our system to three more complex real data sets.
Abstract:
A recent method for phase equilibria, the AGAPE method, has been used to predict activity coefficients and excess Gibbs energies for binary mixtures with good accuracy. The theory, based on a generalised London potential (GLP), accounts for intermolecular attractive forces. Unlike existing prediction methods, for example UNIFAC, the AGAPE method uses only information derived from accessible experimental data and molecular information for the pure components. At present, the AGAPE method has some limitations, namely that the mixtures must consist of small, non-polar compounds with no hydrogen bonding, at low to moderate pressures and at conditions below the critical conditions of the components. The distinction between vapour-liquid equilibria and gas-liquid solubility is rather arbitrary, and it seems reasonable to extend these ideas to solubility. The AGAPE model uses a molecular lattice-based mixing rule. By judicious use of computer programs, a methodology was created to examine a body of experimental gas-liquid solubility data for gases such as carbon dioxide, propane, n-butane or sulphur hexafluoride, which all have critical temperatures a little above 298 K, dissolved in benzene, cyclohexane and methanol. Within this methodology the value of the GLP as an ab initio combining rule for such solutes in very dilute solutions in a variety of liquids has been tested. Using the GLP as a mixing rule involves the computation of rotationally averaged interactions between the constituent atoms, and new calculations have had to be made to discover the magnitude of the unlike-pair interactions. These numbers have been seen as significant in their own right in the context of the behaviour of infinitely dilute solutions. A method for extending this treatment to "permanent" gases has also been developed. The findings from the GLP method and from the more general AGAPE approach have been examined in the context of other models for gas-liquid solubility, both "classical" and contemporary, in particular those derived from equation-of-state methods and from reference-solvent methods.
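For orientation only (this is the classical London dispersion formula, not necessarily the generalised form used in AGAPE), the attractive interaction between two unlike species $i$ and $j$ at separation $r$ is often written as

$$
E_{ij}(r) \;=\; -\frac{3}{2}\,\frac{I_i I_j}{I_i + I_j}\,\frac{\alpha_i \alpha_j}{r^{6}},
$$

where $I$ denotes an ionisation energy and $\alpha$ a polarisability; a generalised potential of this type is what supplies the unlike-pair interactions discussed above.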
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable, or independent, variables in the cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed. All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
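As a point of reference (a standard textbook approximation, not the exact models developed in the thesis), the expected cost per unit time for the (Q,R) system with a linear backorder cost and no period of grace is often written as

$$
C(Q,R) \;=\; \frac{A D}{Q} \;+\; h\!\left(\frac{Q}{2} + R - \mu_L\right) \;+\; \frac{\pi D}{Q} \int_{R}^{\infty} (x - R)\, f_L(x)\, dx,
$$

where $A$ is the ordering cost, $D$ the demand rate, $h$ the holding cost, $\pi$ the backorder cost per unit, and $f_L$ the lead-time demand density with mean $\mu_L$; the exact models referred to above replace such approximations with the full cost expressions.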
Abstract:
The spatial distribution of self-employment in India: evidence from semiparametric geoadditive models, Regional Studies. The entrepreneurship literature has rarely considered spatial location as a micro-determinant of occupational choice. It has also ignored self-employment in developing countries. Using Bayesian semiparametric geoadditive techniques, this paper models spatial location as a micro-determinant of self-employment choice in India. The empirical results suggest the presence of spatial occupational neighbourhoods and a clear north–south divide in self-employment when the entire sample is considered; however, spatial variation in the non-agriculture sector disappears to a large extent when individual factors that influence self-employment choice are explicitly controlled. The results further suggest non-linear effects of age, education and wealth on self-employment.
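A geoadditive predictor of the kind estimated here typically has the structured additive form (notation illustrative, not taken from the paper):

$$
\eta_i \;=\; f_1(\mathrm{age}_i) + f_2(\mathrm{education}_i) + f_3(\mathrm{wealth}_i) + f_{\mathrm{spat}}(s_i) + \mathbf{x}_i^{\top}\boldsymbol{\beta},
$$

where the binary self-employment choice is linked to $\eta_i$ through a probit or logit link, the $f_k$ are smooth functions capturing the non-linear effects of age, education and wealth, $f_{\mathrm{spat}}$ is a spatial effect over the individual's location $s_i$, and $\boldsymbol{\beta}$ collects the remaining linear effects.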
Abstract:
Based on Bayesian Networks, methods were created that address protein sequence-based bacterial subcellular location prediction. Distinct predictive algorithms for the eight bacterial subcellular locations were created. Several variant methods were explored. These variations included differences in the number of residues considered within the query sequence - which ranged from the N-terminal 10 residues to the whole sequence - and residue representation - which took the form of amino acid composition, percentage amino acid composition, or normalised amino acid composition. The accuracies of the best performing networks were then compared to PSORTB. All individual location methods outperform PSORTB except for the Gram+ cytoplasmic protein predictor, for which accuracies were essentially equal, and for outer membrane protein prediction, where PSORTB outperforms the binary predictor. The method described here is an important new approach to method development for subcellular location prediction. It is also a new, potentially valuable tool for candidate subunit vaccine selection.
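One of the residue representations mentioned, percentage amino acid composition, can be computed as in the following minimal sketch (the example sequence and window length are hypothetical):

from collections import Counter

# Illustrative sketch: turn a protein sequence into the percentage amino acid
# composition feature vector, optionally restricted to the N-terminal residues.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def percentage_composition(sequence, n_terminal=None):
    seq = sequence[:n_terminal] if n_terminal else sequence  # e.g. first 10 residues or whole sequence
    counts = Counter(seq)
    total = len(seq)
    return [100.0 * counts.get(aa, 0) / total for aa in AMINO_ACIDS]

# Hypothetical example: composition of the first 10 residues of a toy sequence.
features = percentage_composition("MKKTAIAIAVALAGFATVAQA", n_terminal=10)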
Abstract:
Recently, we have developed the hierarchical Generative Topographic Mapping (HGTM), an interactive method for visualization of large high-dimensional real-valued data sets. In this paper, we propose a more general visualization system by extending HGTM in three ways, which allows the user to visualize a wider range of data sets and better support the model development process. 1) We integrate HGTM with noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM). This enables us to visualize data of inherently discrete nature, e.g., collections of documents, in a hierarchical manner. 2) We give the user a choice of initializing the child plots of the current plot in either interactive, or automatic mode. In the interactive mode, the user selects "regions of interest," whereas in the automatic mode, an unsupervised minimum message length (MML)-inspired construction of a mixture of LTMs is employed. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. 3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool to improve our understanding of the visualization plots, since they can highlight the boundaries between data clusters. We illustrate our approach on a toy example and evaluate it on three more complex real data sets. © 2005 IEEE.
Abstract:
Determination of the so-called optical constants (the complex refractive index N, which is usually a function of the wavelength, and the physical thickness D) of thin films from experimental data is a typical inverse non-linear problem. It is still a challenge to the scientific community because of the complexity of the problem and its basic and technological significance in optics. Usually, solutions are sought with models having 3-10 parameters. Best estimates of these parameters are obtained by minimization procedures. Herein, we discuss the choice of orthogonal polynomials for the dispersion law of the thin-film refractive index. We show the advantage of their use compared with the Sellmeier, Lorentz or Cauchy models.
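For comparison (standard textbook forms, not the specific parameterisations of this work), the Cauchy and Sellmeier dispersion laws and an orthogonal-polynomial alternative can be written as

$$
n(\lambda) = A + \frac{B}{\lambda^{2}} + \frac{C}{\lambda^{4}}, \qquad
n^{2}(\lambda) = 1 + \sum_i \frac{B_i \lambda^{2}}{\lambda^{2} - C_i}, \qquad
n(\lambda) \approx \sum_{k} c_k P_k(\tilde{\lambda}),
$$

where the $P_k$ are orthogonal polynomials (for example Chebyshev or Legendre) in a wavelength variable $\tilde{\lambda}$ rescaled to the fitting interval, and the coefficients $c_k$ are the parameters estimated by the minimization.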