961 results for discrete choice models
Abstract:
Yordan Yordanov, Andrey Vasilev - This work studies methods for solving discrete-time optimal control problems with an infinite horizon and explicit controls. A justification is given for a procedure for solving such problems, based on Lagrange multipliers, that is often used in the economics literature. Necessary conditions for optimality are derived from the Bellman equations, and sufficient conditions for optimality are provided under assumptions commonly used in economics.
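For orientation, here is a minimal sketch of the standard discrete-time, infinite-horizon problem that such a Lagrange-multiplier procedure targets; the notation (u, g, beta, lambda_t) is generic and assumed, not taken from the paper.

```latex
% Generic discrete-time, infinite-horizon optimal control problem (notation assumed)
\begin{align*}
  &\max_{\{c_t\}_{t \ge 0}} \ \sum_{t=0}^{\infty} \beta^{t}\, u(c_t, x_t)
   \quad \text{subject to} \quad x_{t+1} = g(x_t, c_t), \quad x_0 \ \text{given},\\[2pt]
  &\text{Lagrangian:}\quad
   \mathcal{L} = \sum_{t=0}^{\infty} \beta^{t}\bigl[\, u(c_t, x_t)
     + \lambda_t \bigl( g(x_t, c_t) - x_{t+1} \bigr) \bigr],\\[2pt]
  &\text{Bellman equation:}\quad
   V(x) = \max_{c}\ \bigl\{\, u(c, x) + \beta\, V\!\bigl(g(x, c)\bigr) \bigr\}.
\end{align*}
```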
Abstract:
The current study investigated whether 4- to 6-year-old children’s task solution choice was influenced by the past proficiency of familiar peer models and the children’s personal prior task experience. Peer past proficiency was established through behavioral assessments of interactions with novel tasks alongside peer and teacher predictions of each child’s proficiency. Based on these assessments, one peer model with high past proficiency and one age-, sex-, dominance-, and popularity-matched peer model with lower past proficiency were trained to remove a capsule using alternative solutions from a three-solution artificial fruit task. Video demonstrations of the models were shown to children after they had either a personal successful interaction or no interaction with the task. In general, there was not a strong bias toward the high past-proficiency model, perhaps due to a motivation to acquire multiple methods and the salience of other transmission biases. However, there was some evidence of a model-based past-proficiency bias; when the high past-proficiency peer matched the participants’ original solution, there was increased use of that solution, whereas if the high past-proficiency peer demonstrated an alternative solution, there was increased use of the alternative social solution and novel solutions. Thus, model proficiency influenced innovation.
Decoherence models for discrete-time quantum walks and their application to neutral atom experiments
Abstract:
We discuss decoherence in discrete-time quantum walks in terms of a phenomenological model that distinguishes spin and spatial decoherence. We identify the dominating mechanisms that affect quantum-walk experiments realized with neutral atoms walking in an optical lattice. From the measured spatial distributions, we determine with good precision the amount of decoherence per step, which provides a quantitative indication of the quality of our quantum walks. In particular, we find that spin decoherence is the main mechanism responsible for the loss of coherence in our experiment. We also find that the sole observation of ballistic (rather than diffusive) expansion in position space is not a good indicator of the range of coherent delocalization. We provide further physical insight by distinguishing the effects of short- and long-time spin dephasing mechanisms. We introduce the concept of coherence length in the discrete-time quantum walk, which quantifies the range of spatial coherences. Unexpectedly, we find that quasi-stationary dephasing does not modify the local properties of the quantum walk, but instead affects spatial coherences. For a visual representation of decoherence phenomena in phase space, we have developed a formalism based on a discrete analogue of the Wigner function. We show that the effects of spin and spatial decoherence differ dramatically in momentum space.
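As a rough illustration of the kind of model discussed (not the authors' code), the sketch below simulates a 1D Hadamard-coin discrete-time quantum walk with a crude stochastic spin-dephasing channel; the step count and dephasing probability are assumed values, and the decohered distribution is obtained by averaging trajectories.

```python
# Minimal sketch: 1D discrete-time quantum walk with Hadamard coin. With probability
# p_dephase per step the relative phase between the two spin components is flipped;
# averaging over many such trajectories mimics spin dephasing.
import numpy as np

def walk_distribution(steps=40, p_dephase=0.1, rng=None):
    rng = rng or np.random.default_rng()
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)   # psi[position, spin]
    psi[steps, 0] = 1.0                     # walker starts at the origin, spin up
    coin = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ coin.T                  # coin operation on the spin index
        if rng.random() < p_dephase:        # stochastic dephasing event
            psi[:, 1] *= -1.0               # flip the relative spin phase
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]            # spin-up component moves right
        new[:-1, 1] = psi[1:, 1]            # spin-down component moves left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)   # position distribution of one trajectory

# Decohered position distribution = average over many stochastic trajectories.
dist = np.mean([walk_distribution() for _ in range(200)], axis=0)
print("variance of position:", ((np.arange(-40, 41) ** 2) * dist).sum())
```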
Abstract:
We summarise the properties and the fundamental mathematical results associated with basic models which describe coagulation and fragmentation processes in a deterministic manner and in which cluster size is a discrete quantity (an integer multiple of some basic unit size). In particular, we discuss Smoluchowski's equation for aggregation, the Becker-Döring model of simultaneous aggregation and fragmentation, and more general models involving coagulation and fragmentation.
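As a concrete (illustrative) example of the first of these models, the sketch below integrates the discrete Smoluchowski coagulation equation with a constant kernel, truncated at an assumed maximum cluster size.

```python
# Discrete Smoluchowski coagulation equation with constant kernel K(i, j) = 1:
#   dc_k/dt = 1/2 * sum_{i+j=k} c_i c_j  -  c_k * sum_j c_j
# Forward-Euler integration, truncated at a finite maximum cluster size (illustrative only).
import numpy as np

def smoluchowski_step(c, dt):
    """One Euler step; c[k-1] is the concentration of size-k clusters."""
    kmax = len(c)
    gain = np.zeros_like(c)
    for k in range(2, kmax + 1):                 # size-k clusters formed by mergers i + j = k
        i = np.arange(1, k)
        gain[k - 1] = 0.5 * np.sum(c[i - 1] * c[k - i - 1])
    loss = c * c.sum()                           # size-k clusters consumed by any merger
    return c + dt * (gain - loss)

c = np.zeros(100)
c[0] = 1.0                                       # monodisperse initial condition: monomers only
for _ in range(2000):
    c = smoluchowski_step(c, dt=0.005)
print("total mass (approximately conserved):", np.sum(np.arange(1, 101) * c))
```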
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, since it can be implemented in both modelling approaches using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. We looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of the department's staff and customers. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system; this experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall, we found that for our case study both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
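For readers unfamiliar with the discrete-event side of the comparison, here is a minimal event-queue sketch of a fitting-room queue with a simple reactive (balking) rule; the arrival rate, service time, and balking threshold are invented for illustration and are not the case-study values.

```python
# Minimal discrete-event simulation of a fitting room with a fixed number of cubicles.
# Customers queue when all cubicles are busy and balk (leave) if the queue is too long.
import heapq, random

def simulate(n_cubicles=4, arrival_rate=1.0, mean_try_on=5.0, balk_at=6, horizon=480.0):
    random.seed(1)
    events = [(random.expovariate(arrival_rate), "arrival")]   # (time, kind) event queue
    busy, queue, served, balked = 0, 0, 0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if busy < n_cubicles:
                busy += 1
                heapq.heappush(events, (t + random.expovariate(1 / mean_try_on), "departure"))
            elif queue < balk_at:
                queue += 1
            else:
                balked += 1                      # reactive behaviour: customer walks away
        else:                                    # departure frees a cubicle
            served += 1
            if queue > 0:
                queue -= 1                       # next customer in the queue takes the cubicle
                heapq.heappush(events, (t + random.expovariate(1 / mean_try_on), "departure"))
            else:
                busy -= 1
    return served, balked

print(simulate())
```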
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Survival or longevity is an economically important trait in beef cattle. The main obstacle to its inclusion in selection criteria is the delayed recording of phenotypic data and the high computational demand of including survival in proportional hazards models. Thus, identification of a longevity-correlated trait that could be recorded early in life would be very useful for selection purposes. We estimated the genetic relationship of survival with productive and reproductive traits in Nellore cattle, including weaning weight (WW), post-weaning growth (PWG), muscularity (MUSC), scrotal circumference at 18 months (SC18), and heifer pregnancy (HP). Survival was measured in discrete time intervals and modeled through a sequential threshold model. Five independent bivariate Bayesian analyses were performed, each pairing cow survival with one of the five productive and reproductive traits. Posterior mean estimates for heritability (standard deviation in parentheses) were 0.55 (0.01) for WW, 0.25 (0.01) for PWG, 0.23 (0.01) for MUSC, and 0.48 (0.01) for SC18. The posterior mean estimates (95% confidence interval in parentheses) for the genetic correlation with survival were 0.16 (0.13-0.19), 0.30 (0.25-0.34), 0.31 (0.25-0.36), 0.07 (0.02-0.12), and 0.82 (0.78-0.86) for WW, PWG, MUSC, SC18, and HP, respectively. Based on the high posterior mean estimates of the genetic correlation and heritability (0.54) for HP, the expected progeny difference for HP can be used to select bulls for longevity, as well as for post-weaning gain and muscle score.
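For reference, a generic sequential threshold model for survival scored in discrete intervals takes a form like the following (the notation is the usual one for such models and is assumed, not copied from the paper):

```latex
% Generic sequential threshold model for survival recorded in discrete intervals
\begin{align*}
  \Pr(\text{animal } i \text{ survives interval } t \mid \text{alive at } t-1)
    &= \Phi\bigl(\tau_t - \ell_i\bigr),\\
  \ell_i &= \mathbf{x}_i^{\top}\boldsymbol{\beta} + a_i,
    \qquad a_i \sim \mathcal{N}(0, \sigma_a^2),
\end{align*}
% where \ell_i is the latent liability, \tau_t are interval-specific thresholds,
% a_i is the additive genetic effect, and \Phi is the standard normal cdf.
```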
Abstract:
Context. The distribution of chemical abundances and their variation with time are important tools for understanding the chemical evolution of galaxies. In particular, the study of chemical evolution models can improve our understanding of the basic assumptions made when modelling our Galaxy and other spirals. Aims. We test a standard chemical evolution model for spiral disks in the Local Universe and study the influence of a threshold gas density and different efficiencies in the star formation rate (SFR) law on radial gradients of abundance, gas, and SFR. The model is then applied to specific galaxies. Methods. We adopt a one-infall chemical evolution model where the Galactic disk forms inside-out by means of infall of gas, and we test different thresholds and efficiencies in the SFR. The model is scaled to the disk properties of three Local Group galaxies (the Milky Way, M31 and M33) by varying its dependence on the star formation efficiency and the timescale for the infall of gas onto the disk. Results. Using this simple model, we are able to reproduce most of the observed constraints available in the literature for the studied galaxies. The radial oxygen abundance gradients and their time evolution are studied in detail. The present day abundance gradients are more sensitive to the threshold than to other parameters, while their temporal evolutions are more dependent on the chosen SFR efficiency. A variable efficiency along the galaxy radius can reproduce the present day gas distribution in the disk of spirals with prominent arms. The steepness in the distribution of stellar surface density differs from massive to lower mass disks, owing to the different star formation histories. Conclusions. The most massive disks seem to have evolved faster (i.e., with more efficient star formation) than the less massive ones, thus suggesting a downsizing in star formation for spirals. The threshold and the efficiency of star formation play a very important role in the chemical evolution of spiral disks. For instance, an efficiency varying with radius can be used to regulate the star formation. The oxygen abundance gradient can steepen or flatten in time depending on the choice of this parameter.
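As a minimal illustration of the ingredients varied in the paper, the sketch below evaluates a Schmidt-type star formation law with a gas surface-density threshold and a radius-dependent efficiency; the exponent, threshold value, and radial profiles are assumed, not the paper's calibration.

```python
# Schmidt-type star formation law with a gas surface-density threshold and a
# star formation efficiency that varies with galactocentric radius (all values assumed).
import numpy as np

def sfr_surface_density(sigma_gas, radius_kpc, *,
                        k=1.5,                    # Schmidt-law exponent (assumed)
                        sigma_threshold=7.0,      # Msun/pc^2, below this no star formation
                        nu0=1.0, r_scale=8.0):    # efficiency normalisation and radial scale
    nu = nu0 * np.exp(-(radius_kpc - r_scale) / r_scale)    # radius-dependent efficiency
    sfr = nu * sigma_gas ** k
    return np.where(sigma_gas > sigma_threshold, sfr, 0.0)  # threshold switches SF off

r = np.linspace(2, 18, 9)                 # galactocentric radii in kpc
sigma = 40.0 * np.exp(-r / 4.0)           # assumed exponential gas profile, Msun/pc^2
print(np.round(sfr_surface_density(sigma, r), 2))
```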
Abstract:
Objective: We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, from which one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems to be less critical. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
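A compact sketch of the feature-extraction and classification pipeline described above is given below, using synthetic signals in place of the clinical EEG recordings (which are not reproduced here); the wavelet, the per-band statistics, and the SVM settings are assumptions for illustration.

```python
# Statistical summaries of discrete wavelet transform sub-bands feeding a Gaussian-kernel SVM,
# evaluated with cross-validation. Synthetic signals stand in for the EEG recordings.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                        # per sub-band statistics (assumed choice)
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return feats

rng = np.random.default_rng(0)
# Placeholder "normal" vs "epileptic" classes: white noise vs noise with spiky bursts.
normal = [rng.normal(size=1024) for _ in range(50)]
spiky = [rng.normal(size=1024) + 5 * (rng.random(1024) < 0.02) for _ in range(50)]
X = np.array([dwt_features(s) for s in normal + spiky])
y = np.array([0] * 50 + [1] * 50)

svm = SVC(kernel="rbf", gamma="scale")      # standard SVM with a Gaussian (RBF) kernel
print("cross-validated accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```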
Abstract:
We present a model in which agents show discrete behavior in their actions but hold continuous opinions that are updated through interaction with other agents. This new updating rule is applied to both the voter and Sznajd models for interaction between neighbors, and its consequences are discussed. The appearance of extremists is naturally observed and seems to be a characteristic of this model.
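A toy reading of such a continuous-opinion/discrete-action rule is sketched below; the fixed-step update and the parameter values are my own simplification, not necessarily the authors' exact rule.

```python
# Agents on a ring hold continuous opinions; their observable action is the sign of the
# opinion. On each interaction an agent observes a random neighbour's action and shifts
# its own continuous opinion a fixed step towards that action (voter-like interaction).
import numpy as np

rng = np.random.default_rng(42)
n, steps, step_size = 200, 50_000, 0.2
opinion = rng.uniform(-1, 1, size=n)              # continuous internal opinions

for _ in range(steps):
    i = rng.integers(n)
    j = (i + rng.choice([-1, 1])) % n             # pick one of the two ring neighbours
    opinion[i] += step_size * np.sign(opinion[j]) # only the neighbour's discrete action is seen

actions = np.sign(opinion)
print("fraction choosing +1:", (actions > 0).mean())
print("fraction of extremists (|opinion| > 10):", (np.abs(opinion) > 10).mean())
```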
Abstract:
In this article, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noise under three kinds of performance criteria related to the final value of the expectation and variance of the output. In the first problem the aim is to minimise the final variance of the output subject to a restriction on its final expectation, in the second it is to maximise the final expectation of the output subject to a restriction on its final variance, and in the third the performance criterion is a linear combination of the final variance and expectation of the output of the system. We present explicit sufficient conditions for the existence of an optimal control strategy for these problems, generalising previous results in the literature. We conclude the article by presenting a numerical example of an asset-liability management model for pension funds with regime switching.
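In generic notation (the symbols are assumed; y_T denotes the output at the final time T and u the control strategy), the three criteria read roughly as:

```latex
% Three mean-variance criteria on the final output y_T (generic form; symbols assumed)
\begin{align*}
  \text{(i)}   &\quad \min_{u}\ \operatorname{Var}(y_T) \quad \text{subject to } \mathbb{E}(y_T) \ge \gamma,\\
  \text{(ii)}  &\quad \max_{u}\ \mathbb{E}(y_T) \quad\ \ \text{subject to } \operatorname{Var}(y_T) \le \sigma^2,\\
  \text{(iii)} &\quad \min_{u}\ \xi\, \operatorname{Var}(y_T) - \rho\, \mathbb{E}(y_T), \qquad \xi, \rho \ge 0.
\end{align*}
```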
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, we use diagnostic measures based on case deletion, referred to as global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence; the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes, and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
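To make the role of the link function concrete, the sketch below maps a linear predictor to the discrete-time (grouped) hazard through four links commonly used for such models; the abstract does not name the four links used, so logit, probit, complementary log-log, and log-log are an assumption, as are the numerical values.

```python
# Grouped (discrete-time) survival: the conditional probability of failure in interval j
# is obtained from a linear predictor eta_j through an inverse link function.
import numpy as np
from scipy.stats import norm

def hazard(eta, link):
    """Inverse link: discrete hazard h_j = P(fail in interval j | survived to j)."""
    if link == "logit":
        return 1 / (1 + np.exp(-eta))
    if link == "probit":
        return norm.cdf(eta)
    if link == "cloglog":
        return 1 - np.exp(-np.exp(eta))
    if link == "loglog":
        return np.exp(-np.exp(-eta))
    raise ValueError(link)

# Survival to the end of interval k is the product of (1 - hazard) over earlier intervals.
alpha = np.array([-2.0, -1.5, -1.0])        # interval-specific intercepts (assumed values)
beta, x = 0.8, 1.0                          # covariate effect and covariate (assumed)
eta = alpha + beta * x
for link in ("logit", "probit", "cloglog", "loglog"):
    h = hazard(eta, link)
    print(link, np.round(np.cumprod(1 - h), 3))
```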
Abstract:
We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383], by including an additional rate constant which takes into account the different reactivity of the initial pore surface having attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than earlier random pore models. The results clearly indicate the improvement of model predictions in the low conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification as compared to the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
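For orientation, the sketch below integrates the classical continuum random pore model rate law together with a crude two-rate-constant weighting that mimics a more reactive initial surface; this is only an illustration of the idea of an extra initial-reactivity constant, not the discrete model of the paper.

```python
# Classical random pore model rate law, dX/dt = k * (1 - X) * sqrt(1 - psi * ln(1 - X)),
# with an effective rate constant blended between k0 (original surface, assumed weight
# decaying with conversion) and k (subsequently exposed surface). All values assumed.
import numpy as np

def conversion_profile(k=1.0, k0=2.0, psi=4.0, w0_decay=5.0, dt=1e-3, t_end=3.0):
    X, t, history = 0.0, 0.0, []
    while X < 0.999 and t < t_end:
        w0 = np.exp(-w0_decay * X)          # assumed weight of the original surface left
        k_eff = w0 * k0 + (1 - w0) * k      # blend initial and newly exposed reactivity
        rate = k_eff * (1 - X) * np.sqrt(1 - psi * np.log(1 - X))
        X += rate * dt
        t += dt
        history.append((t, X, rate))
    return history

profile = conversion_profile()
print("time to 50% conversion:", next(t for t, X, _ in profile if X >= 0.5))
```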
Abstract:
Testing ecological models for management is an increasingly important part of the maturation of ecology as an applied science. Consequently, we need to work at applying fair tests of models with adequate data. We demonstrate that a recent test of a discrete time, stochastic model was biased towards falsifying the predictions. If the model was a perfect description of reality, the test falsified the predictions 84% of the time. We introduce an alternative testing procedure for stochastic models, and show that it falsifies the predictions only 5% of the time when the model is a perfect description of reality. The example is used as a point of departure to discuss some of the philosophical aspects of model testing.
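The point about fair tests can be illustrated with a small Monte Carlo sketch: when "reality" is generated by the model itself, a test based on a 95% simulation envelope should reject only about 5% of the time. The population model and all numbers below are invented for illustration, not taken from the paper.

```python
# If the stochastic model is a perfect description of reality, a fair test should falsify
# its predictions only at the nominal error rate. Here "reality" and the model are the same
# stochastic Ricker-type growth process, and the test checks whether the observed final
# abundance lies inside the model's 95% simulation envelope.
import numpy as np

rng = np.random.default_rng(7)

def simulate_population(n0=50, r=0.1, k=200, sigma=0.2, steps=20):
    n = float(n0)
    for _ in range(steps):                       # stochastic Ricker-type growth
        n *= np.exp(r * (1 - n / k) + rng.normal(0, sigma))
    return n

rejections, trials = 0, 500
for _ in range(trials):
    observed = simulate_population()             # "reality" generated by the same model
    replicates = np.array([simulate_population() for _ in range(200)])
    lo, hi = np.quantile(replicates, [0.025, 0.975])
    rejections += not (lo <= observed <= hi)
print("false rejection rate:", rejections / trials)   # should be close to 0.05
```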