162 results for Dynamic Traffic Assignment
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Traffic forecasts provide essential input for the appraisal of transport investment projects. However, according to recent empirical evidence, long-term predictions are subject to high levels of uncertainty. This paper quantifies uncertainty in traffic forecasts for the tolled motorway network in Spain. Uncertainty is quantified in the form of a confidence interval for the traffic forecast that includes both model uncertainty and input uncertainty. We apply a stochastic simulation process based on bootstrapping techniques. Furthermore, the paper proposes a new methodology to account for capacity constraints in long-term traffic forecasts. Specifically, we suggest a dynamic model in which the speed of adjustment is related to the ratio between the actual traffic flow and the maximum capacity of the motorway. This methodology is applied to a specific public policy that consists of suppressing the toll on a certain motorway section before the concession expires.
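The capacity-constrained adjustment the paper proposes can be pictured as a damped-growth recursion combined with a bootstrap over growth inputs. Below is a minimal sketch, assuming a logistic-style rule in which annual growth is scaled by (1 - volume/capacity); the figures, function names and error magnitudes are illustrative assumptions, not taken from the paper.

import numpy as np

def project_traffic(v0, growth, capacity, years):
    """Project annual traffic, damping growth as flow nears capacity.
    The growth rate is scaled by (1 - v/capacity), so the speed of
    adjustment falls as the volume/capacity ratio rises."""
    path = [v0]
    for _ in range(years):
        v = path[-1]
        path.append(v * (1.0 + growth * max(0.0, 1.0 - v / capacity)))
    return np.array(path)

rng = np.random.default_rng(42)
hist_growth = rng.normal(0.03, 0.015, size=25)   # stand-in for observed annual growth rates

# Bootstrap both input and model uncertainty: resample historical growth rates
# with replacement, add a model-error draw, and project 20 years ahead.
finals = []
for _ in range(5000):
    g = rng.choice(hist_growth, size=hist_growth.size, replace=True).mean()
    g += rng.normal(0.0, 0.005)                  # crude stand-in for model uncertainty
    finals.append(project_traffic(30_000, g, capacity=90_000, years=20)[-1])

lo, hi = np.percentile(finals, [2.5, 97.5])
print(f"95% interval for year-20 traffic: [{lo:,.0f}, {hi:,.0f}] veh/day")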
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all inline) and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since the traffic originating from different sources can be forwarded together in a single optical route, i.e., taking a "tour" across different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named the lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing and wavelength assignment. The ILP model may be used for several objectives; however, this article focuses on two: maximizing the network throughput, and minimizing the number of OEO conversions used. Experiments show that G+ can route all the traffic using only half of the OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at near-optimal results in polynomial time.
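As a concrete, much-simplified instance of the kind of ILP the abstract describes, the sketch below maximizes carried traffic on a three-node line network with a per-wavelength grooming capacity. The topology, capacities and demand sizes are invented for illustration, and the model omits wavelength assignment and OEO counting; it is not the article's formulation. Requires the PuLP package.

import pulp

links = ["AB", "BC"]
cap = {l: 4 for l in links}            # grooming capacity per wavelength (e.g. 4 low-speed slots)
demands = {                            # demand name -> (links on its route, size in slots)
    "dAB": (["AB"], 2),
    "dBC": (["BC"], 3),
    "dAC": (["AB", "BC"], 2),
}

prob = pulp.LpProblem("toy_grooming", pulp.LpMaximize)
x = {d: pulp.LpVariable(f"x_{d}", cat="Binary") for d in demands}

# Objective: maximize carried traffic (network throughput).
prob += pulp.lpSum(size * x[d] for d, (_, size) in demands.items())
# Capacity: groomed traffic on each link fits within the wavelength.
for l in links:
    prob += pulp.lpSum(size * x[d] for d, (route, size) in demands.items() if l in route) <= cap[l]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for d in demands:
    print(d, "accepted" if x[d].value() == 1 else "blocked")

On this instance the solver must block one demand on link BC (3 + 2 > 4), and the throughput-maximizing choice is to carry dAB and dBC while blocking dAC.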
Abstract:
We consider a dynamic model where traders in each period are matched randomly into pairs who then bargain over the division of a fixed surplus. When agreement is reached, the traders leave the market. Traders who do not reach an agreement return in the next period, in which they are matched again, as long as their deadline has not yet expired. New traders enter exogenously in each period. We assume that traders within a pair know each other's deadline. We define and characterize the stationary equilibrium configurations. Traders with longer deadlines fare better than traders with short deadlines. It is shown that the heterogeneity of deadlines may cause delay. It is then shown that a centralized mechanism that controls the matching protocol, but does not interfere with the bargaining, eliminates all delay. Even though this efficient centralized mechanism is not as good for traders with long deadlines, it is shown that in a model where all traders can choose which mechanism to
Abstract:
The proposed game is a natural extension of the Shapley and Shubik assignment game to the case where each seller owns a set of different objects, instead of only one indivisible object. We propose definitions of pairwise stability and group stability that are adapted to our framework. Existence of both pairwise and group stable outcomes is proved. We study the structure of the group stable set and finally prove that the set of group stable payoffs forms a complete lattice, with one optimal group stable payoff for each side of the market.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for the estimation of general DLV models.
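The mechanics (simulate at length at a trial parameter, kernel-smooth the simulation to obtain conditional moments, then match moments) can be illustrated on a toy AR(1) model. A minimal sketch follows; the model, kernel, bandwidth and moment condition are all illustrative assumptions, not the estimator's actual target, which is latent-variable models where direct conditioning is infeasible.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
eps_data = rng.normal(size=500)
eps_sim = rng.normal(size=10_000)     # fixed draws: common random numbers across trial parameters

def simulate(rho, eps):
    # AR(1): y_t = rho * y_{t-1} + eps_t
    y = np.zeros(eps.size)
    for t in range(1, eps.size):
        y[t] = rho * y[t - 1] + eps[t]
    return y

def nw_conditional_mean(x_sim, y_sim, x_eval, h=0.3):
    # Nadaraya-Watson (Gaussian kernel) estimate of E[y | x] at the points x_eval
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w * y_sim).sum(axis=1) / w.sum(axis=1)

data = simulate(0.6, eps_data)        # "observed" series, true rho = 0.6
x_obs, y_obs = data[:-1], data[1:]

def objective(rho):
    sim = simulate(rho, eps_sim)      # long simulation at the trial parameter
    m = nw_conditional_mean(sim[:-1], sim[1:], x_obs)
    g = np.mean((y_obs - m) * x_obs)  # moment condition E[(y - E[y|x]) x] = 0
    return g ** 2

res = minimize_scalar(objective, bounds=(0.0, 0.95), method="bounded")
print("estimated rho:", round(res.x, 3))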
Abstract:
In the literature on risk, one generally assumes that uncertainty is uniformly distributed over the entire working horizon when the absolute risk-aversion index is negative and constant. From this perspective, the risk is totally exogenous, and thus independent of endogenous risks. The classic procedure is "myopic" with regard to potential changes in the future behavior of the agent due to inherent random fluctuations of the system. The agent's attitude to risk is rigid. Although often criticized, the most widely used hypothesis for the analysis of economic behavior is risk-neutrality. This borderline case must be treated with prudence in a dynamic stochastic context. The traditional measures of risk-aversion are generally too weak for making comparisons between risky situations, given the dynamic complexity of the environment. This can be highlighted in concrete problems in finance and insurance, contexts for which the Arrow-Pratt measures (in the small) give ambiguous results.
Abstract:
The objective of this paper is to re-evaluate the attitude to effort of a risk-averse decision-maker in an evolving environment. In the classic analysis, the space of efforts is generally discretized. More realistically, this new approach employs a continuum of effort levels. The presence of multiple possible efforts and performance levels provides a better basis for explaining real economic phenomena. The traditional approach (see Laffont & Tirole, 1993; Salanié, 1997; Laffont & Martimort, 2002, among others) does not take into account the potential effect of the system dynamics on the agent's attitude to effort over time. In the context of a Principal-agent relationship, it is not only the Principal's incentives that can induce the private agent to exert a high effort, but also the evolution of the dynamic system. Incentives can be ineffective when the environment does not encourage the agent to invest a high effort. This explains why some effici
Abstract:
The demand for computational power has been driving improvements in the High Performance Computing (HPC) area, generally represented by the use of distributed systems such as clusters of computers running parallel applications. In this area, fault tolerance plays an important role in providing high availability, isolating the application from the effects of faults. Performance and availability form an inseparable pair for some kinds of applications, so fault-tolerant solutions must take both constraints into consideration when they are designed. In this dissertation, we present some side-effects that fault-tolerant solutions may exhibit when recovering a failed process. These effects may cause degradation of the system, affecting mainly overall performance and availability. We introduce RADIC-II, a fault-tolerant architecture for message passing based on the RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers) architecture. RADIC-II retains as far as possible the RADIC features of transparency, decentralization, flexibility and scalability, while incorporating a flexible dynamic redundancy feature that allows some recovery side-effects to be mitigated or avoided.
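The performance/availability trade-off at recovery time can be pictured with a toy placement routine: restarting the failed node's processes on a surviving worker overloads it, whereas a dynamically added spare absorbs the restart. This is a conceptual sketch only; the class and function names are invented and it does not reproduce RADIC-II's actual protocol.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    procs: list = field(default_factory=list)
    alive: bool = True

def recover(failed: Node, survivors: list, spares: list) -> Node:
    """Place the failed node's processes, preferring a spare node.
    Falling back to the least-loaded survivor degrades its performance,
    which is the recovery side-effect dynamic redundancy mitigates."""
    target = spares.pop(0) if spares else min(survivors, key=lambda n: len(n.procs))
    target.procs.extend(failed.procs)   # restart from checkpoints held elsewhere
    failed.procs.clear()
    return target

nodes = [Node("n0", ["p0"]), Node("n1", ["p1"]), Node("n2", ["p2"])]
spares = [Node("s0")]

nodes[1].alive = False                  # fault detected (e.g., missed heartbeats)
tgt = recover(nodes[1], [n for n in nodes if n.alive], spares)
print(f"recovered on {tgt.name}: {tgt.procs}")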
Abstract:
This paper shows that tourism specialisation can help to explain the observed high growth rates of small countries. For this purpose, two models of growth and trade are constructed to represent the trade relations between two countries. One of the countries is large, rich, has its own source of sustained growth and produces a tradable capital good. The other is a small, poor economy, which lacks its own engine of growth and produces tradable tourism services. The poor country exports tourism services to and imports capital goods from the rich economy. In one model tourism is a luxury good, while in the other the expenditure elasticity of tourism imports is unitary. Two main results are obtained. In the long run, the tourism country overcomes decreasing returns and grows permanently because its terms of trade continuously improve. Since the tourism sector is relatively less productive than the capital good sector, tourism services become relatively scarcer and hence more expensive than the capital good. Moreover, along the transition the growth rate of the tourism economy remains well above that of the rich country for a long time. The growth rate differential between countries is particularly high when tourism is a luxury good, since in this case the tourism demand increases faster. As a result, investment in the small economy is boosted and its terms of trade improve substantially.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
Dynamic Stackelberg game with risk-averse players: optimal risk-sharing under asymmetric information
Abstract:
The objective of this paper is to clarify the interactive nature of the leader-follower relationship when both players are endogenously risk-averse. The analysis is placed in the context of a dynamic closed-loop Stackelberg game with private information. The case of a risk-neutral leader, very often discussed in the literature, is only a borderline possibility in the present study. Each player in the game is characterized by a risk-averse type which is unknown to his opponent. The goal of the leader is to implement an optimal incentive compatible risk-sharing contract. The proposed approach provides a qualitative analysis of adaptive risk behavior profiles for asymmetrically informed players in the context of dynamic strategic interactions modelled as incentive Stackelberg games.
Abstract:
The objective of this paper is to re-examine attitudes to risk and effort in the context of strategic dynamic interactions, stated as a discrete-time finite-horizon Nash game. The analysis is based on the assumption that players are endogenously risk- and effort-averse. Each player is characterized by distinct risk- and effort-aversion types that are unknown to his opponent. The goal of the game is the optimal risk- and effort-sharing between the players. It generally depends on the individual strategies adopted and, implicitly, on the players' types or characteristics.
Abstract:
A multiple-partners assignment game with heterogeneous sales and multi-unit demands consists of a set of sellers that own a given number of indivisible units of (potentially many different) goods and a set of buyers who value those units and want to buy at most an exogenously fixed number of units. We define a competitive equilibrium for this generalized assignment game and prove its existence by using only linear programming. In particular, we show how to compute equilibrium price vectors from the solutions of the dual linear program associated with the primal linear program defined to find optimal assignments. Using only linear programming tools, we also show (i) that the set of competitive equilibria (pairs of price vectors and assignments) has a Cartesian product structure: each equilibrium price vector is part of a competitive equilibrium with all optimal assignments, and vice versa; (ii) that the set of (restricted) equilibrium price vectors has a natural lattice structure; and (iii) how this structure is translated into the set of agents' utilities that are attainable at equilibrium.
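The duality route to equilibrium prices can be demonstrated on a tiny instance: solve the optimal-assignment LP, then read candidate prices off the duals of the unit-availability constraints. The 2-buyer/3-unit valuations below are invented, and the dual solution returned by SciPy's HiGHS backend is just one of possibly many equilibrium price vectors.

import numpy as np
from scipy.optimize import linprog

v = np.array([[5.0, 8.0, 2.0],      # buyer 0's valuation of each unit
              [7.0, 3.0, 1.0]])     # buyer 1's valuation of each unit
nb, nu = v.shape

# Variables x[i, j] flattened row-wise; maximize total value = minimize -value.
c = -v.ravel()
A, b = [], []
for i in range(nb):                  # each buyer takes at most 1 unit (demand cap)
    row = np.zeros(nb * nu); row[i * nu:(i + 1) * nu] = 1; A.append(row); b.append(1)
for j in range(nu):                  # each unit is sold at most once
    row = np.zeros(nb * nu); row[j::nu] = 1; A.append(row); b.append(1)

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, None), method="highs")
x = res.x.reshape(nb, nu).round(2)
prices = -res.ineqlin.marginals[nb:]  # duals of the unit constraints (sign-flipped)
print("assignment:\n", x)
print("equilibrium unit prices:", prices.round(2))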
Abstract:
We study two cooperative solutions of a market with indivisible goods modeled as a generalized assignment game: Set-wise stability and Core. We first establish that the Set-wise stable set is contained in the Core and that it contains the non-empty set of competitive equilibrium payoffs. We then state and prove three limit results for replicated markets. First, the sequence of Cores of replicated markets converges to the set of competitive equilibrium payoffs as the number of replicas tends to infinity. Second, the Set-wise stable set of a two-fold replicated market already coincides with the set of competitive equilibrium payoffs. Third, for any number of replicas there is a market with a Core payoff that is not a competitive equilibrium payoff.
Abstract:
General signaling results in dynamic Tullock contests have long been missing, the reason being the limited tractability of such problems. In this paper, an uninformed contestant with valuation vx competes against an informed opponent whose valuation is either high (vh) or low (vl). We show that: (i) when the hierarchy of valuations is vh ≥ vx ≥ vl, there is no pooling, since sandbagging is too costly for the high type; (ii) when the order of valuations is vx ≥ vh ≥ vl, there is no separation if vh and vl are close, because sandbagging is cheap due to the proximity of valuations; however, if vh and vx are close, there is no pooling, as the first-period cost of pooling is high; (iii) for valuations satisfying vh ≥ vl ≥ vx, there is no separation if vh and vl are close, because bluffing in the first period is cheap for the low-valuation type; conversely, if vx and vl are close, there is no pooling, as bluffing in the first stage is too costly. JEL: C72, C73, D44, D82. KEYWORDS: Signaling, Dynamic Contests, Non-existence, Sandbag Pooling, Bluff Pooling, Separating.
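For reference, the one-shot contest underlying such models uses the lottery (Tullock) success function, and its complete-information benchmark has a standard closed form; the two-period signaling structure studied in the paper builds on this stage payoff but is not reproduced here.

% One-shot Tullock contest: with efforts x_i, x_j >= 0, player i of valuation
% v_i wins with the lottery success function and maximizes the expected payoff
\[
  p_i(x_i, x_j) \;=\; \frac{x_i}{x_i + x_j}, \qquad
  \pi_i \;=\; v_i\,\frac{x_i}{x_i + x_j} \;-\; x_i .
\]
% Under complete information, the mutual best replies give
\[
  x_i^{*} \;=\; \frac{v_i^{2}\, v_j}{(v_i + v_j)^{2}}, \qquad
  p_i^{*} \;=\; \frac{v_i}{v_i + v_j}.
\]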