70 results for Dynamic parameters
Abstract:
This paper shows that tourism specialisation can help to explain the observed high growth rates of small countries. For this purpose, two models of growth and trade are constructed to represent the trade relations between two countries. One country is large and rich, has its own source of sustained growth, and produces a tradable capital good. The other is a small, poor economy that lacks an engine of growth of its own and produces tradable tourism services. The poor country exports tourism services to, and imports capital goods from, the rich economy. In one model tourism is a luxury good, while in the other the expenditure elasticity of tourism imports is unitary. Two main results are obtained. In the long run, the tourism country overcomes decreasing returns and grows permanently because its terms of trade continuously improve. Since the tourism sector is relatively less productive than the capital good sector, tourism services become relatively scarcer and hence more expensive than the capital good. Moreover, along the transition the growth rate of the tourism economy remains well above that of the rich country for a long time. The growth rate differential between countries is particularly high when tourism is a luxury good, since in this case tourism demand rises faster. As a result, investment in the small economy is boosted and its terms of trade improve markedly.
Abstract:
In terms of execution time and data usage, parallel/distributed applications can behave variably even when the same input data set is used. Certain environment-related performance aspects can dynamically affect application behaviour, such as memory capacity, network latency, the number of nodes, and node heterogeneity, among others. It is important to consider that the application may run on different hardware configurations, and the application developer cannot guarantee that performance tuning for one particular system remains valid for other configurations. Dynamic analysis of applications has proven to be the best approach to performance analysis for two main reasons. First, it offers a very convenient solution from the developers' point of view while they design and evaluate their parallel applications. Second, it adapts better to the application during execution. This approach requires neither developer intervention nor even access to the application's source code. The application is analysed at run time, searching for and analysing possible bottlenecks and optimisations. To optimise the execution of the bioinformatics application mpiBLAST, we analysed its behaviour to identify the parameters involved in its performance, such as: memory usage, network usage, I/O patterns, the file system used, the processor architecture, the size of the biological database, the size of the query sequence, the distribution of sequences within the databases, the number of database fragments, and/or the granularity of the jobs assigned to each process. Our goal is to determine which of these parameters have the greatest impact on application performance and how to adjust them dynamically to improve it. Analysing the performance of mpiBLAST, we found a set of data that reveals a certain level of serialisation within the execution. Recognising the impact of the characterisation of the sequences within the different databases, and a relationship between worker capacity and the granularity of the current workload, these could be tuned dynamically. Other improvements include optimisations related to the parallel file system and the possibility of running on multiple multicore nodes. The work grain size is influenced by factors such as the database type, the database size, and the ratio between workload size and worker capacity.
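As a rough illustration of the dynamic tuning idea described above, the sketch below picks a work-grain size (sequences per job) from worker capacity and the remaining workload. The heuristic and all names (`pick_granularity`, `worker_capacity`, the factoring-style split) are assumptions for illustration, not mpiBLAST internals.

```python
import math

def pick_granularity(remaining_seqs, n_workers, worker_capacity, min_grain=16):
    """Return how many query sequences to bundle into the next job."""
    # Coarse grain amortizes scheduling overhead; fine grain improves load
    # balance near the end of the run, where stragglers serialize execution.
    balanced = remaining_seqs / (4 * n_workers)   # factoring-style split (assumed)
    return max(min_grain, min(worker_capacity, math.ceil(balanced)))

# Example: 100k sequences, 64 workers, each worker handles up to 512 at once.
print(pick_granularity(100_000, 64, 512))   # -> 391 early in the run
print(pick_granularity(2_000, 64, 512))     # -> 16 near the end (finer grain)
```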
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
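The mechanics can be sketched in a few lines under strong assumptions: a toy AR(1) stands in for the dynamic latent variable model, the conditional moment is E[y_t | y_{t-1}], and a Gaussian Nadaraya-Watson smoother over a long simulation replaces simple averaging. The bandwidth, sample sizes, and single orthogonality condition are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(theta, n, rng):
    # Toy model: y_t = theta * y_{t-1} + e_t, e_t ~ N(0, 1).
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = theta * y[t - 1] + rng.standard_normal()
    return y

def nw_conditional_mean(x_sim, y_sim, x_eval, h):
    """Kernel-smoothed estimate of E[y | x] from a long simulation."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w * y_sim).sum(axis=1) / w.sum(axis=1)

def objective(theta, y_obs, n_sim=10_000, h=0.3, seed=0):
    rng = np.random.default_rng(seed)        # common random numbers across theta
    ys = simulate(theta, n_sim, rng)
    m_hat = nw_conditional_mean(ys[:-1], ys[1:], y_obs[:-1], h)
    # Orthogonality of the moment residual with the conditioning variable.
    g = ((y_obs[1:] - m_hat) * y_obs[:-1]).mean()
    return g ** 2

y_obs = simulate(0.6, 500, np.random.default_rng(42))
fit = minimize_scalar(lambda t: objective(t, y_obs),
                      bounds=(-0.95, 0.95), method="bounded")
print(fit.x)  # should land near the true value 0.6
```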
Dynamic Stackelberg game with risk-averse players: optimal risk-sharing under asymmetric information
Abstract:
The objective of this paper is to clarify the interactive nature of the leader-follower relationship when both players are endogenously risk-averse. The analysis is placed in the context of a dynamic closed-loop Stackelberg game with private information. The case of a risk-neutral leader, very often discussed in the literature, is only a borderline possibility in the present study. Each player in the game is characterized by a risk-averse type which is unknown to his opponent. The goal of the leader is to implement an optimal incentive compatible risk-sharing contract. The proposed approach provides a qualitative analysis of adaptive risk behavior profiles for asymmetrically informed players in the context of dynamic strategic interactions modelled as incentive Stackelberg games.
Abstract:
The objective of this paper is to re-examine risk and effort attitudes in the context of strategic dynamic interactions stated as a discrete-time finite-horizon Nash game. The analysis is based on the assumption that players are endogenously risk- and effort-averse. Each player is characterized by distinct risk- and effort-aversion types that are unknown to his opponent. The goal of the game is the optimal risk- and effort-sharing between the players. It generally depends on the individual strategies adopted and, implicitly, on the players' types or characteristics.
Abstract:
Health services are highly complex systems of great importance worldwide, especially at certain critical moments. Emergency departments may be among the most dynamic and changeable areas of all health services, and at the same time the most vulnerable to such changes. Improving these departments can be considered one of the great challenges facing any hospital administrator, and simulation provides a way to examine such a complex system without endangering the patients being treated. The objective of this work has been to model an emergency department and to develop a simulator that implements this model, in order to explore the behaviour and characteristics of such an emergency service. Using the simulator offers the possibility of visualising the model's behaviour under different parameters, and it will serve as the core of a decision support system that can be used in emergency departments. The model has been developed with agent-based modelling (ABM) techniques, which allow the creation of models functionally closer to reality than queueing or system dynamics models, by permitting the inclusion of the individuality involved in modelling at the level of people. The agents of the presented model, internally described as state machines, represent all the staff of the emergency department and the patients who use this service. An analysis of the model through its implementation in the simulator shows that the system behaves similarly to a real emergency department.
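To make the "agents as state machines" idea concrete, here is a minimal sketch of a patient agent in that spirit. The states, transition rules, and resource handling are illustrative assumptions, not the thesis's actual model.

```python
from enum import Enum, auto
import random

class State(Enum):
    ARRIVAL = auto()
    TRIAGE = auto()
    WAITING = auto()
    TREATMENT = auto()
    DISCHARGED = auto()

class PatientAgent:
    def __init__(self, acuity):
        self.state = State.ARRIVAL
        self.acuity = acuity  # 1 (critical) .. 5 (minor)

    def step(self, free_doctors):
        # One simulation tick: the transition depends on state and resources.
        if self.state is State.ARRIVAL:
            self.state = State.TRIAGE
        elif self.state is State.TRIAGE:
            self.state = State.WAITING
        elif self.state is State.WAITING and free_doctors > 0:
            self.state = State.TREATMENT
            return "doctor_claimed"
        elif self.state is State.TREATMENT:
            if random.random() < 0.3:       # treatment finishes stochastically
                self.state = State.DISCHARGED
                return "doctor_released"
        return None

patients = [PatientAgent(acuity=random.randint(1, 5)) for _ in range(5)]
doctors = 2
for _ in range(10):                          # ten ticks of the simulation clock
    for p in patients:
        event = p.step(doctors)
        if event == "doctor_claimed":
            doctors -= 1
        elif event == "doctor_released":
            doctors += 1
print([p.state.name for p in patients])
```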
Abstract:
General signaling results in dynamic Tullock contests have long been missing, the reason being the tractability of the problems. In this paper, an uninformed contestant with valuation vx competes against an informed opponent whose valuation is either high (vh) or low (vl). We show that: (i) when the hierarchy of valuations is vh ≥ vx ≥ vl, there is no pooling, since sandbagging is too costly for the high type; (ii) when the order of valuations is vx ≥ vh ≥ vl, there is no separation if vh and vl are close, because sandbagging is cheap due to the proximity of valuations; however, if vh and vx are close, there is no pooling, as the first-period cost of pooling is high; (iii) for valuations satisfying vh ≥ vl ≥ vx, there is no separation if vh and vl are close, since bluffing in the first period is cheap for the low-valuation type; conversely, if vx and vl are close, there is no pooling, as bluffing in the first stage is too costly. JEL: C72, C73, D44, D82. KEYWORDS: Signaling, Dynamic Contests, Non-existence, Sandbag Pooling, Bluff Pooling, Separating
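For readers unfamiliar with the primitive such contests build on, the textbook Tullock contest success function (lottery form with exponent r = 1) and the resulting expected payoff are given below; the abstract does not state the exact form used, so this is the standard version, not taken from the paper.

```latex
% Standard Tullock success function and expected payoff for player i:
p_i(x_i, x_j) = \frac{x_i}{x_i + x_j}, \qquad
\mathbb{E}[u_i] = v_i \, \frac{x_i}{x_i + x_j} - x_i .
```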
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
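The simulated MIL idea can be sketched under toy assumptions: the model is y ~ N(theta, 1), the statistic Zn is the sample mean, and the unknown density of Zn is approximated by a Gaussian kernel density over S simulated samples. All sizes and the model are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde

def statistic(y):
    return y.mean()

def sim_log_likelihood(theta, z_obs, n, S, seed=0):
    rng = np.random.default_rng(seed)        # common random numbers across theta
    z_sims = np.array([statistic(theta + rng.standard_normal(n))
                       for _ in range(S)])
    # Kernel density of the statistic, evaluated at the observed value.
    return np.log(gaussian_kde(z_sims)(z_obs)[0])

rng = np.random.default_rng(1)
y_obs = 0.7 + rng.standard_normal(200)
z_obs = statistic(y_obs)
fit = minimize_scalar(lambda t: -sim_log_likelihood(t, z_obs, 200, 2000),
                      bounds=(-2.0, 2.0), method="bounded")
print(fit.x)  # should be near 0.7
```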
Abstract:
Climate change in the 21st century is a reality; there is abundant scientific evidence indicating that the warming of the climate system is unequivocal. Nevertheless, there are also many uncertainties regarding the impacts this global climate change may bring. The objective of this project is to study the possible future evolution of three climate variables, namely the near-surface diurnal temperature range (DTR), the near-surface mean temperature (MT) and monthly precipitation (PL_mes), and to assess the exposure that different land covers and different biogeographical regions of the European continent may experience under these possible patterns of change. To this end, Global Climate Models that project climate variables have been used to anticipate the possible future climate. Using the Tetyn software application, the climate parameters have been extracted from the data sets of the Tyndall Centre for Climate Change Research, for the future (TYN SC) and for the past (CRU TS). The variables obtained have been processed with geographic information system (GIS) tools to obtain the patterns of change of the variables for each land cover. The results show great variability, which increases over time, among the different climate models and scenarios considered, highlighting the uncertainty associated with climate modelling, with the generation of emission scenarios, and with the dynamic, non-deterministic nature of the climate system. In general, however, they show that glaciers will be one of the land covers most exposed to climate change, and the Mediterranean one of the most vulnerable regions.
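The per-land-cover change computation described above reduces to differencing two gridded fields and summarising by class. A minimal sketch follows, with synthetic arrays standing in for the TYN SC (future) and CRU TS (baseline) grids; real work would read the actual rasters with GIS tooling.

```python
import numpy as np

baseline = np.random.default_rng(0).normal(10.0, 2.0, size=(180, 360))  # e.g. MT, past
future = baseline + np.random.default_rng(1).normal(1.5, 0.5, size=baseline.shape)
land_cover = np.random.default_rng(2).integers(0, 5, size=baseline.shape)  # class ids

delta = future - baseline   # pattern of change per grid cell
for cls in np.unique(land_cover):
    mask = land_cover == cls
    print(f"land cover {cls}: mean change {delta[mask].mean():+.2f}, "
          f"spread {delta[mask].std():.2f}")
```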
Abstract:
The goal of this paper is to study the frequency of new product introductions in monopoly markets where demand is subject to transitory saturation. We focus on those types of goods for which consumers purchase at most one unit of each variety but make repeat purchases in the same product category. The model considers infinitely-lived, forward-looking consumers and firms. We show that the share of potential surplus that a monopolist is able to appropriate increases with the frequency of introduction of new products and with the intensity of transitory saturation. If the latter is sufficiently strong, then the rate of introduction of new products is higher than socially desirable (excessive dynamic product diversity).
Abstract:
The final objective of this project is to build a bug-tracking system (Sistema Traçador d'Errors), but perhaps more important is the goal of learning new technologies, which are often available to users yet unknown to them.
Abstract:
This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on projecting patterns onto a measuring surface and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information about the surface can be calculated. The implemented technique projects a unique pattern, so it can be used to measure moving surfaces. The structure of the pattern is a grid in which the colors of the slits are selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid have two codewords (which permits them to be reconstructed very precisely), while pixels belonging to horizontal and vertical slits also have a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
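As a sketch of the slit-color assignment described above, the standard recursive de Bruijn sequence construction B(k, n) is shown below: every window of n consecutive colors is unique, which is what makes each slit locally identifiable. The color list and window length are illustrative assumptions, not the paper's exact parameters.

```python
def de_bruijn(k, n):
    """Return a de Bruijn sequence B(k, n) as a list of symbol indices."""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

colors = ["red", "green", "blue"]            # one color set per slit direction
indices = de_bruijn(len(colors), 3)          # 3^3 = 27 slits, unique 3-windows
slit_colors = [colors[i] for i in indices]
print(len(slit_colors), slit_colors[:9])
```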
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation and path restoration mechanisms. Our focus is on the dynamic bandwidth management results, although the main goal of the system is the integration of the different mechanisms that manage the reserved paths (bandwidth, restoration, and spare capacity planning). The objective is to avoid conflicts between these mechanisms. The system is able to dynamically manage a logical network such as a virtual path network in ATM or a label switched path network in MPLS. The system has been designed to be modular, in the sense that it can be activated or deactivated, and it can be applied to only a sub-network. The system design and implementation are based on a multi-agent system (MAS). We also include details of its architecture and implementation.
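A minimal sketch of the kind of per-path monitoring loop such a MAS-based bandwidth manager might run is given below; the thresholds, step size, and data structure are assumptions for illustration, not the paper's actual architecture.

```python
from dataclasses import dataclass

@dataclass
class LogicalPath:
    name: str
    reserved_mbps: float
    used_mbps: float

def monitor_step(path, grow=0.8, shrink=0.3, step_mbps=10.0):
    """Adjust the reservation of one logical path (e.g. an ATM VP or MPLS LSP)."""
    utilization = path.used_mbps / path.reserved_mbps
    if utilization > grow:
        path.reserved_mbps += step_mbps      # ask for more capacity
        return "grow"
    if utilization < shrink and path.reserved_mbps > step_mbps:
        path.reserved_mbps -= step_mbps      # release spare capacity
        return "shrink"
    return "hold"

lsp = LogicalPath("lsp-7", reserved_mbps=100.0, used_mbps=90.0)
print(monitor_step(lsp), lsp.reserved_mbps)  # -> grow 110.0
```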
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (non-detects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
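The isometric logratio step mentioned above can be sketched with one common basis choice (pivot coordinates); the toy 3-part composition is illustrative, and the study's actual basis and data are not assumed here.

```python
import numpy as np

def ilr(x):
    """Map a D-part composition (rows summing to 1) to R^(D-1)."""
    x = np.asarray(x, dtype=float)
    D = x.shape[-1]
    z = []
    for i in range(1, D):
        gm = np.exp(np.log(x[..., :i]).mean(axis=-1))  # geometric mean of first i parts
        z.append(np.sqrt(i / (i + 1)) * np.log(gm / x[..., i]))
    return np.stack(z, axis=-1)

comp = np.array([[0.70, 0.25, 0.05],
                 [0.60, 0.30, 0.10]])
print(ilr(comp))   # coordinates now live in ordinary real space
```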