946 results for dynamic stochastic general equilibrium models
Abstract:
In this thesis we attempt a probabilistic analysis of some physically realizable, though complex, storage and queueing models. It is essentially a mathematical study of the stochastic processes underlying these models. Our aim is to gain an improved understanding of the behaviour of such models, which may widen their applicability. Different inventory systems with random lead times, server vacations, bulk demands, varying ordering levels, etc. are considered. We also study some finite and infinite capacity queueing systems with bulk service and server vacations, and obtain the transient solution in certain cases. Each chapter of the thesis is provided with its own introduction and some important references.
Abstract:
The objective of the study of "Queueing models with vacations and working vacations" was twofold: to minimize the server idle time and to improve the efficiency of the service system. Keeping this in mind, we considered queueing models in different set-ups in this thesis. Chapter 1 introduced the concepts and techniques used in the thesis and also provided a summary of the work done. In Chapter 2 we considered an M/M/2 queueing model, where one of the two heterogeneous servers takes multiple vacations. We studied the performance of the system with the help of busy period analysis and computation of the mean waiting time of a customer in the stationary regime. A conditional stochastic decomposition of the queue length was derived. To improve the efficiency of this system we came up with a modified model in Chapter 3. In this model the vacationing server attends to customers during vacation at a slower service rate. Chapter 4 analyzed a working vacation queueing model in a more general setting. The introduction of the N-policy makes this MAP/PH/1 model different from all working vacation models available in the literature. A detailed analysis of the performance of the model was provided with the help of measures such as the mean waiting time of a customer who receives service in normal mode and in vacation mode.
Abstract:
In this thesis we have presented several inventory models of utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a two-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system state distribution. However, numerical procedures can be developed for solving these problems.
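As a minimal illustration of the two-dimensional description mentioned above (assuming, for concreteness, Poisson demand arrivals, exponential service times, and an (s, S) replenishment policy, none of which is specific to this thesis), the system state can be tracked by the pair
\[
\{(N(t), I(t)) : t \ge 0\}, \qquad N(t) \in \{0, 1, 2, \dots\}, \quad I(t) \in \{0, 1, \dots, S\},
\]
where N(t) is the number of customers present and I(t) is the on-hand inventory level; the joint process is a continuous-time Markov chain whose stationary distribution must, outside special cases, be computed numerically.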
Abstract:
In this paper, we study some dynamic generalized information measures between a true distribution and an observed (weighted) distribution, useful in life length studies. Further, some bounds and inequalities related to these measures are also studied.
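For orientation, the observed (weighted) distribution referred to above is usually understood as follows: if the true lifetime density is f and w is a non-negative weight function with finite mean, the weighted density is
\[
f_w(x) = \frac{w(x)\, f(x)}{E[w(X)]}, \qquad E[w(X)] = \int w(x)\, f(x)\, dx,
\]
so the dynamic measures compare f with f_w over the residual life beyond a given age t.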
Abstract:
In this paper, the residual Kullback–Leibler discrimination information measure is extended to conditionally specified models. The extension is used to characterize some bivariate distributions. These distributions are also characterized in terms of proportional hazard rate models and weighted distributions. Moreover, we also obtain some bounds for this dynamic discrimination function by using the likelihood ratio order and some preceding results.
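For reference, the residual (dynamic) Kullback–Leibler discrimination between lifetime densities f and g, evaluated at age t, is commonly written as
\[
D_{KL}(f, g; t) = \int_t^{\infty} \frac{f(x)}{\bar F(t)} \,\log \frac{f(x)/\bar F(t)}{g(x)/\bar G(t)}\, dx,
\]
where \bar F and \bar G denote the corresponding survival functions; the extension in the paper applies this idea to conditionally specified bivariate models.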
Abstract:
Recently, the reciprocal subtangent has been used as a useful tool to describe the behaviour of a density curve. Motivated by this, in the present article we extend the concept to weighted models. Characterization results are proved for models viz. gamma, Rayleigh, equilibrium, residual lifetime, and proportional hazards. An identity under the weighted distribution is also obtained when the reciprocal subtangent takes the form of a general class of distributions. Finally, extensions of the reciprocal subtangent for weighted models in the bivariate and multivariate cases are introduced and some useful results are proved.
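For context, the reciprocal subtangent of a density curve f is commonly taken to be
\[
\eta(x) = -\frac{f'(x)}{f(x)} = -\frac{d}{dx} \log f(x),
\]
which is constant precisely for the exponential density; characterizations of the families listed above typically exploit the specific functional form that \eta takes for each model.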
Characterizations of Bivariate Models Using Some Dynamic Conditional Information Divergence Measures
Abstract:
In this article, we study some relevant information divergence measures, viz. the Renyi divergence and Kerridge's inaccuracy measures. These measures are extended to conditionally specified models and are used to characterize some bivariate distributions using the concepts of weighted and proportional hazard rate models. Moreover, some bounds are obtained for these measures using the likelihood ratio order.
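For reference, the two measures studied are, in their unconditional forms, the Renyi divergence of order \alpha and Kerridge's inaccuracy measure:
\[
D_\alpha(f \| g) = \frac{1}{\alpha - 1} \log \int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx, \quad \alpha > 0,\ \alpha \neq 1, \qquad
K(f, g) = -\int f(x) \log g(x)\, dx;
\]
the article works with conditional and dynamic (residual) versions of these quantities.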
Abstract:
The classical methods of analysing time series by the Box–Jenkins approach assume that the observed series fluctuates around changing levels with constant variance; that is, the time series is assumed to be homoscedastic. However, financial time series exhibit heteroscedasticity in the sense that they possess a non-constant conditional variance given the past observations. So the analysis of financial time series requires the modelling of such variances, which may depend on some time-dependent factors or on their own past values. This led to the introduction of several classes of models to study the behaviour of financial time series; see Taylor (1986), Tsay (2005), Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available to analyse conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then to study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
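As a point of reference (not necessarily the exact specification adopted in the thesis), the canonical log-normal stochastic volatility model for a return series r_t is
\[
r_t = \sigma_t \varepsilon_t, \qquad \log \sigma_t^2 = \mu + \phi\left( \log \sigma_{t-1}^2 - \mu \right) + \eta_t,
\]
with \varepsilon_t and \eta_t independent Gaussian noise sequences; the study replaces such Gaussian assumptions on the volatility sequence with non-Gaussian alternatives and develops the associated inference.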
Abstract:
Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only one million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.
Abstract:
Pairs trading investment strategies are based on price deviations between pairs of correlated stocks and have been widely implemented by investment funds, which take long and short positions in the selected stocks when divergences arise and realize a profit by closing the position upon convergence. A mean-reversion model is described to analyze the dynamics followed by the price spread between ordinary and preferred shares of the same company in the same market. The long-run convergence mean is obtained with a moving-average filter; subsequently, the parameters of the mean-reversion model are estimated with a Kalman filter under a state-space formulation over the historical series. A backtest of the algorithmic pairs trading strategy based on the proposed model indicates potential profits in financial markets that are observed out of equilibrium. Applications of the results could reveal opportunities to improve portfolio performance, correct mispricing, and better withstand periods of low returns.
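A minimal sketch of the filtering step described above, assuming the spread follows a discretized Ornstein–Uhlenbeck (mean-reverting) process observed with noise; the function and parameter names (kalman_ou_spread, kappa, q, r) are illustrative and not taken from the thesis:

    import numpy as np

    def kalman_ou_spread(y, mu, kappa, q, r, x0=None, p0=1.0):
        """Scalar Kalman filter for a discretized mean-reverting (OU) spread.

        State:       x_t = (1 - kappa) * x_{t-1} + kappa * mu_t + w_t,  w_t ~ N(0, q)
        Observation: y_t = x_t + v_t,                                   v_t ~ N(0, r)
        """
        n = len(y)
        x = np.empty(n)   # filtered state estimates
        p = np.empty(n)   # filtered state variances
        x_prev = y[0] if x0 is None else x0
        p_prev = p0
        for t in range(n):
            # Predict
            a = 1.0 - kappa                       # state transition coefficient
            x_pred = a * x_prev + kappa * mu[t]   # E[x_t | y_1..y_{t-1}]
            p_pred = a * a * p_prev + q
            # Update
            k = p_pred / (p_pred + r)             # Kalman gain
            x_prev = x_pred + k * (y[t] - x_pred)
            p_prev = (1.0 - k) * p_pred
            x[t], p[t] = x_prev, p_prev
        return x, p

    # Hypothetical usage: y = observed spread series, mu = moving-average long-run mean
    # x_hat, p_hat = kalman_ou_spread(y, mu, kappa=0.1, q=1e-4, r=1e-3)

Here the long-run mean mu fed into the filter would come from the moving-average filter mentioned in the abstract, and the filtered spread x is what a trading rule would compare against entry and exit thresholds.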
Abstract:
The purpose of this expository article is to present a self-contained overview of some results on the characterization of the optimal value function of a stochastic target problem as a (discontinuous) viscosity solution of a certain dynamic programming PDE and its application to the problem of hedging contingent claims in the presence of portfolio constraints and large investors.
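For context, a stochastic target problem in the sense used here asks for the smallest initial capital from which a target can be reached with certainty; schematically,
\[
v(t, x) = \inf\bigl\{\, y \in \mathbb{R} : \exists\, \nu \in \mathcal{U} \ \text{such that} \ Y^{t, x, y, \nu}_T \ge g\bigl(X^{t, x, \nu}_T\bigr) \ \text{a.s.} \,\bigr\},
\]
where X is the controlled state process (e.g. asset prices), Y the associated wealth process, \mathcal{U} the set of admissible (constrained) portfolios, and g the payoff to be super-replicated; the dynamic programming PDE characterizes this value function v in the viscosity sense.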
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms to obtain, in an automatic way, the symbolic expressions of the residual generators that enhance the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation, and identification are stated as constraint satisfaction problems in continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by a reasoning in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial and empirical analysis of the differences between interval-based and statistical-based techniques is also presented. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
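A toy sketch of the detection idea (not the thesis's algorithms): with an interval model whose parameters lie in a box, a fault is indicated whenever the measured output leaves the predicted envelope, i.e. the residual interval does not contain zero. The helper below, with illustrative names, approximates the envelope by evaluating the model at the vertices of the parameter box, which is exact only for models monotone in the parameters; the consistency techniques used in the thesis propagate the intervals rigorously.

    import itertools
    import numpy as np

    def interval_residual_test(y_meas, u, theta_lo, theta_hi, model):
        # Evaluate the model at every vertex of the parameter box to get an
        # approximate output envelope [y_lo, y_hi] (exact for monotone models).
        vertices = itertools.product(*zip(theta_lo, theta_hi))
        outputs = np.array([model(u, np.array(v)) for v in vertices])
        y_lo, y_hi = outputs.min(axis=0), outputs.max(axis=0)
        # Fault indicated at samples where the measurement leaves the envelope,
        # i.e. the residual interval y_meas - [y_lo, y_hi] does not contain zero.
        return (y_meas < y_lo) | (y_meas > y_hi)

    # Hypothetical usage with a first-order model y_k = a * y_{k-1} + b * u_k:
    # def model(u, th):
    #     a, b = th
    #     y = np.zeros(len(u))
    #     for k in range(1, len(u)):
    #         y[k] = a * y[k - 1] + b * u[k]
    #     return y
    # fault_flags = interval_residual_test(y_meas, u, [0.7, 0.4], [0.9, 0.6], model)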
Abstract:
This thesis presents population dynamics models that can be applied to predict the rate of spread of the Neolithic transition (the change from hunter-gathering to farming economies) across the European continent, which took place about 9000 to 5000 years ago. The first models in this thesis provide predictions at a continental scale. We develop population dynamics models with explicit kernels and apply realistic data. We also derive a new time-delayed reaction-diffusion equation which yields speeds about 10% slower than previous models. We also deal with a regional variability: the slowdown of the Neolithic front when reaching the North of Europe. We develop simple reaction-diffusion models that can predict the measured speeds in terms of the non-homogeneous distribution of pre-Neolithic (Mesolithic) populations in Europe, which were present at higher densities in the North of the continent. Such models can explain the observed speeds.
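For comparison, the classical (non-delayed) Fisher–KPP reaction-diffusion model predicts a front speed of
\[
v = 2\sqrt{a D},
\]
where a is the initial population growth rate and D the diffusion coefficient; the time-delayed equation derived in the thesis corrects this baseline downwards by roughly 10% by accounting for the delay between reproduction and migration (the generation time).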
Abstract:
The Argentine ant (Linepithema humile) is among the most invasive species: native to South America, it has now invaded numerous areas around the world. This doctoral thesis attempts a first integrated, multi-scale analysis of the distribution of the Argentine ant using ecological niche models. According to the results obtained, the Argentine ant is predicted to reach a wider distribution than its current one. The predictions obtained from the models agree with the currently known distribution and, in addition, indicate areas near the coast and the main rivers as highly favourable for the species. These results support the idea that the Argentine ant is currently not in equilibrium with its environment. Moreover, under climate change, the distribution of the Argentine ant is expected to extend toward higher latitudes in both hemispheres and to undergo a retraction in the tropics at global scales.
Combining altimetric/gravimetric and ocean model mean dynamic topography models in the GOCINA region