953 results for Adjustment cost models
Abstract:
WDM (Wavelength-Division Multiplexing) optical networks are currently the most popular way to transfer large amounts of data. Each connection is assigned a route and a wavelength on every link along that route. Finding the required route and wavelength is known as the RWA (Routing and Wavelength Assignment) problem. This work describes possible cost-model-based solutions to the RWA problem. There are many different optimization objectives, and the cost models discussed here are built on those objectives. Cost models yield efficient solutions and algorithms. The multicommodity flow model is treated in this work as the basis for the RWA cost models. Heuristic methods for solving the RWA problem are also covered. The final part of the work discusses implementations of a few of the models and various ways of improving the cost models.
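To make the RWA problem concrete, the following is a minimal sketch of a common greedy heuristic: route each demand on a shortest path and assign the lowest-indexed wavelength that is free on every link of that path (first-fit). The function name first_fit_rwa, the use of NetworkX, and all parameters are illustrative assumptions, not the cost models described in the thesis above.

    # Minimal sketch of a greedy RWA heuristic: shortest-path routing plus
    # first-fit wavelength assignment. Illustrative only.
    import networkx as nx

    def first_fit_rwa(graph, demands, num_wavelengths):
        """graph: undirected NetworkX graph; demands: list of (src, dst) pairs."""
        # usage[edge] = set of wavelengths already taken on that edge
        usage = {tuple(sorted(e)): set() for e in graph.edges()}
        assignment = {}
        for src, dst in demands:
            path = nx.shortest_path(graph, src, dst)
            links = [tuple(sorted((path[i], path[i + 1]))) for i in range(len(path) - 1)]
            for w in range(num_wavelengths):
                if all(w not in usage[link] for link in links):
                    for link in links:
                        usage[link].add(w)
                    assignment[(src, dst)] = (path, w)
                    break
            else:
                assignment[(src, dst)] = None  # blocked: no free wavelength on this route
        return assignment

    if __name__ == "__main__":
        # Tiny usage example on a ring of four nodes.
        g = nx.cycle_graph(4)
        print(first_fit_rwa(g, [(0, 2), (1, 3), (0, 1)], num_wavelengths=2))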
Abstract:
In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment – a behavior which may be justified by the presence of kinked adjustment costs – have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model which introduces infrequent information in a kinked adjustment cost model by assuming that agents do not continuously observe the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state- and time-dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from the ones obtained with full information adjustment cost models.
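The flavor of such a rule can be conveyed with a short simulation: the agent adjusts only when the perceived gap between the control variable and the (frictionless) optimum leaves an inaction band, and the optimum is observed only at discrete information dates. This is a hedged sketch of a generic trigger rule under infrequent information; the band width, observation interval, shock size and the reset-to-optimum behavior are illustrative assumptions, not the paper's optimal rule.

    # Illustrative simulation of a two-sided trigger rule with infrequent
    # information: the frictionless optimum follows a random walk, but the
    # agent only observes it every `obs_interval` periods and adjusts the
    # control variable when the perceived gap leaves the inaction band.
    # All parameter values are hypothetical.
    import random

    def simulate_trigger_rule(periods=200, band=1.0, obs_interval=5, shock_sd=0.3, seed=0):
        rng = random.Random(seed)
        optimum, control = 0.0, 0.0
        perceived_optimum = 0.0          # last observed value of the optimum
        path = []
        for t in range(periods):
            optimum += rng.gauss(0.0, shock_sd)   # frictionless optimum evolves every period
            if t % obs_interval == 0:             # information arrives only at these dates
                perceived_optimum = optimum
            gap = perceived_optimum - control
            if abs(gap) > band:                   # kinked adjustment cost => inaction band
                control = perceived_optimum       # adjust to the perceived optimum
            path.append((t, optimum, control))
        return path

    if __name__ == "__main__":
        for t, opt, ctrl in simulate_trigger_rule()[:10]:
            print(f"t={t:3d}  optimum={opt:6.2f}  control={ctrl:6.2f}")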
Abstract:
We extend the macroeconomic literature on (S,s)-type rules by introducing infrequent information into a kinked adjustment cost model. We first show that optimal individual decision rules are both state- and time-dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. In our model, a vast number of agents act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. These results are in contrast with those obtained with full information adjustment cost models.
Abstract:
This thesis describes the procedure and results from four years research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique VERT (Venture Evaluation and Review Technique) was used to model the pre-tender costs of public health, heating ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data was updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From this data alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost significant items were isolated for closer examination. The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
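The Monte Carlo idea described above can be sketched in a few lines: each services element has a cost range represented by a probability distribution (uniform, normal or beta, as in the study), and repeated sampling builds up a frequency distribution of the total services cost. The element names, cost ranges and trial count below are hypothetical, not data from the study.

    # Minimal Monte Carlo sketch of pre-tender cost estimation with cost
    # ranges represented by probability distributions. Illustrative values only.
    import random
    import statistics

    def sample_element(spec, rng):
        kind = spec["dist"]
        if kind == "uniform":
            return rng.uniform(spec["low"], spec["high"])
        if kind == "normal":
            return rng.gauss(spec["mean"], spec["sd"])
        if kind == "beta":  # beta on [0,1], rescaled to the element's cost range
            return spec["low"] + rng.betavariate(spec["a"], spec["b"]) * (spec["high"] - spec["low"])
        raise ValueError(f"unknown distribution: {kind}")

    elements = {
        "heating_ventilating": {"dist": "normal", "mean": 120_000, "sd": 15_000},
        "electrical":          {"dist": "uniform", "low": 80_000, "high": 110_000},
        "lifts":               {"dist": "beta", "a": 2, "b": 5, "low": 40_000, "high": 90_000},
    }

    def monte_carlo_total(elements, trials=10_000, seed=1):
        rng = random.Random(seed)
        return [sum(sample_element(spec, rng) for spec in elements.values())
                for _ in range(trials)]

    if __name__ == "__main__":
        totals = sorted(monte_carlo_total(elements))
        print("mean total:", round(statistics.mean(totals)))
        print("5th-95th percentile:", round(totals[500]), "-", round(totals[9500]))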
Abstract:
The Slot and van Emde Boas Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P and PSPACE become robust, i.e. machine independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model of functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, which was conjectured to be the key ingredient for obtaining a reasonable space cost model. By a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
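For readers unfamiliar with the evaluation mechanism mentioned above, here is a minimal textbook sketch of Krivine's abstract machine for the call-by-name lambda-calculus with de Bruijn indices. The term encoding and function name are illustrative assumptions; this is the standard machine, not the pointer-free, space-optimized variant studied in the dissertation.

    # Minimal Krivine abstract machine for the call-by-name lambda-calculus,
    # using de Bruijn indices. Terms: ("var", n), ("lam", body), ("app", t, u).
    # Textbook version, for illustration only.

    def krivine(term):
        env, stack = [], []               # env: list of closures (term, env); stack: pending arguments
        while True:
            tag = term[0]
            if tag == "app":              # push the argument as a closure, evaluate the function
                _, fun, arg = term
                stack.append((arg, env))
                term = fun
            elif tag == "lam" and stack:  # pop an argument closure into the environment
                env = [stack.pop()] + env
                term = term[1]
            elif tag == "var":            # look up the closure bound to the index and enter it
                term, env = env[term[1]]
            else:                         # a lambda with an empty stack: weak head normal form
                return term, env

    if __name__ == "__main__":
        # (\x. \y. x) applied to (\z. z): reduces to \y. x with x bound to the identity.
        ident = ("lam", ("var", 0))
        const = ("lam", ("lam", ("var", 1)))
        print(krivine(("app", const, ident)))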
Abstract:
We present an envelope theorem for establishing first-order conditions in decision problems involving continuous and discrete choices. Our theorem accommodates general dynamic programming problems, even with unbounded marginal utilities. And, unlike classical envelope theorems that focus only on differentiating value functions, we accommodate other endogenous functions such as default probabilities and interest rates. Our main technical ingredient is how we establish the differentiability of a function at a point: we sandwich the function between two differentiable functions from above and below. Our theory is widely applicable. In unsecured credit models, neither interest rates nor continuation values are globally differentiable. Nevertheless, we establish an Euler equation involving marginal prices and values. In adjustment cost models, we show that first-order conditions apply universally, even if optimal policies are not (S,s). Finally, we incorporate indivisible choices into a classic dynamic insurance analysis.
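The "sandwiching" step mentioned above can be stated as a standard lemma, written here in LaTeX only for illustration and not quoted from the paper:

    % Sandwich argument for differentiability (standard statement, for illustration):
    g(x) \le f(x) \le h(x) \ \text{near } x_0, \qquad
    g(x_0) = f(x_0) = h(x_0), \qquad
    \nabla g(x_0) = \nabla h(x_0)
    \;\Longrightarrow\;
    f \text{ is differentiable at } x_0 \text{ with } \nabla f(x_0) = \nabla g(x_0).

The point is that the function of interest need not be known to be smooth in advance; it suffices to trap it between two differentiable functions that touch it at the point of evaluation and share a gradient there.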
Abstract:
This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the fingerprinting map; instead, the map is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall and Motley-Keenan. kNN (K-Nearest Neighbour) and WkNN (Weighted K-Nearest Neighbour) were used as location estimation algorithms to determine the accuracy of the proposed technique. This work uses analytical and measurement tools to determine which path loss propagation models are better suited to location estimation applications based on the Received Signal Strength Indicator (RSSI). The study presents different proposals for choosing the most appropriate values of the models' parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan to account for the thickness of walls, are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allow the propagation loss to be estimated for several environments. Results obtained from two testing scenarios showed the reliability of the adjustments, yielding smaller errors between the measured and predicted values.
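The following is a minimal sketch of a Motley-Keenan style path-loss prediction and of how a model-generated fingerprint map can feed a kNN position estimate. The reference loss, path-loss exponent, wall attenuations and grid values are illustrative assumptions, not the adjusted parameters proposed in the paper.

    # Sketch: Motley-Keenan style path loss and a kNN position estimate from a
    # predicted-RSSI fingerprint map (no site survey). Illustrative constants only.
    import math

    def motley_keenan_loss(distance_m, walls, ref_loss_db=40.0, exponent=3.0):
        """Path loss = reference loss + 10*n*log10(d) + sum of wall attenuations (dB)."""
        wall_loss = sum(count * att_db for count, att_db in walls)
        return ref_loss_db + 10.0 * exponent * math.log10(max(distance_m, 1.0)) + wall_loss

    def predicted_rssi(tx_power_dbm, distance_m, walls):
        return tx_power_dbm - motley_keenan_loss(distance_m, walls)

    def knn_locate(measured, fingerprint_map, k=3):
        """fingerprint_map: {position: [rssi per access point]}; measured: [rssi per access point]."""
        dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        nearest = sorted(fingerprint_map.items(), key=lambda kv: dist(kv[1], measured))[:k]
        xs = [p[0] for p, _ in nearest]
        ys = [p[1] for p, _ in nearest]
        return sum(xs) / k, sum(ys) / k   # unweighted kNN: average of the k closest grid points

    if __name__ == "__main__":
        # Fingerprint map generated analytically for two access points:
        # each grid point stores the RSSI predicted from distance and wall count per AP.
        tx = 0.0
        grid = {
            (0, 0): [predicted_rssi(tx, 2.0, []), predicted_rssi(tx, 9.0, [(2, 6.9)])],
            (0, 5): [predicted_rssi(tx, 5.0, [(1, 6.9)]), predicted_rssi(tx, 6.0, [(1, 6.9)])],
            (5, 5): [predicted_rssi(tx, 8.0, [(2, 6.9)]), predicted_rssi(tx, 2.0, [])],
        }
        print(knn_locate([-62.0, -75.0], grid, k=2))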
Abstract:
The questions addressed in the first two articles of my thesis seek to understand the economic factors that affect the term structure of interest rates and the risk premium. I build nonlinear general equilibrium models that incorporate bonds of different maturities. Specifically, the first article aims to understand the relationship between macroeconomic factors and the level of the risk premium in a New Keynesian general equilibrium framework with uncertainty. Uncertainty in the model comes from three sources: productivity shocks, monetary shocks and preference shocks. The model features two types of real rigidities, namely habit formation in preferences and capital stock adjustment costs. The model is solved by second-order perturbation methods and calibrated to the US economy. Since the risk premium is by nature a compensation for risk, the second-order approximation implies that the risk premium is a linear combination of the volatilities of the three shocks. The results show that, with the calibrated parameters, real shocks (productivity and preferences) play a more important role than monetary shocks in determining the level of the risk premium. I show that, contrary to previous work (in which productive capital is fixed), the effect of the habit-formation parameter on the risk premium depends on the degree of capital adjustment costs. When capital adjustment costs are so high that the capital stock is fixed in equilibrium, an increase in the habit-formation parameter raises the risk premium. In contrast, when agents can freely adjust the capital stock at no cost, the effect of the habit-formation parameter on the risk premium is negligible. This result is explained by the fact that when the capital stock can be adjusted without cost, an additional consumption-smoothing channel opens up for agents, so the effect of habit formation on the risk premium is weakened. In addition, the results show that the way the central bank conducts its monetary policy affects the risk premium: the more aggressively the central bank responds to inflation, the lower the risk premium, and vice versa. This is because fighting inflation lowers the variance of inflation, and the risk premium due to inflation risk falls accordingly.

In the second article, I extend the first article by using Epstein-Zin recursive preferences and by allowing the conditional volatilities of the shocks to vary over time. The use of this framework is motivated by two considerations. First, recent studies (Doh, 2010; Rudebusch and Swanson, 2012) have shown that these preferences are appropriate for asset-pricing analysis in general equilibrium models. Second, heteroscedasticity is a common feature of economic and financial data, which implies that, unlike in the first article, uncertainty varies over time. The framework in this article is therefore more general and more realistic than that of the first article. The main objective of this article is to examine the impact of conditional volatility shocks on the level and dynamics of interest rates and the risk premium. Since the risk premium is constant at a second-order approximation, the model is solved by perturbation methods with a third-order approximation, which yields a time-varying risk premium. The advantage of introducing conditional volatility shocks is that they induce additional state variables that contribute to the dynamics of the risk premium. I show that the third-order approximation implies that risk premia have an ARCH-M (Autoregressive Conditional Heteroscedasticity in Mean) representation of the kind introduced by Engle, Lilien and Robins (1987). The difference is that, in this model, the parameters are structural and the volatilities are the conditional volatilities of economic shocks rather than those of the variables themselves. I estimate the model parameters by the simulated method of moments (SMM) using US data. The estimation results show evidence of stochastic volatility in all three shocks. Moreover, the contribution of the conditional volatilities of the shocks to the level and dynamics of the risk premium is significant. In particular, the effects of the conditional volatilities of the productivity and preference shocks are significant. The conditional volatility of the productivity shock contributes positively to the means and standard deviations of the risk premia, and these contributions vary with bond maturity. The conditional volatility of the preference shock contributes negatively to the means and positively to the variances of the risk premia. As for the monetary policy volatility shock, its impact on the risk premia is negligible.

The third article (co-written with Eric Schaling and Alain Kabundi, revised and resubmitted to the journal Economic Modelling) deals with heterogeneity in the formation of inflation expectations across economic groups and its impact on monetary policy in South Africa. The main question is whether different groups of economic agents form their inflation expectations in the same way and whether they perceive the monetary policy of the central bank (the South African Reserve Bank) in the same way. We specify an inflation-forecasting model that allows us to test whether inflation expectations are anchored to the central bank's inflation target band (3%-6%). The data are survey data collected by the central bank from three groups of agents: financial analysts, firms and trade unions. We exploit the panel structure of the data to test for heterogeneity in inflation expectations and to infer each group's perception of monetary policy. The results show evidence of heterogeneity in the way the different groups form their expectations. Financial analysts' expectations are anchored to the inflation target band, whereas those of firms and trade unions are not. Indeed, firms and trade unions attach significant weight to inflation lagged one period, and their forecasts vary with realized (lagged) inflation, which indicates that the central bank is not perceived as fully credible by these agents.
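For reference, the generic ARCH-M form mentioned above (Engle, Lilien and Robins, 1987) lets the conditional mean depend on a conditional variance. The display below is that generic form, written only for illustration; the thesis' representation replaces the single conditional variance with the conditional variances of the structural shocks and ties the coefficients to structural parameters.

    % Generic ARCH-M form (Engle, Lilien and Robins, 1987), shown for reference only.
    y_t = \beta' x_t + \delta\, h_t + \varepsilon_t, \qquad
    \varepsilon_t \mid \Psi_{t-1} \sim N(0, h_t), \qquad
    h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^{2}.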
Abstract:
Most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. The warranty of the products may cover the time from their installation to the end of the warranty period. Prior to the commissioning of the building, the products are in a dormant mode (i.e., not operated) but protected by the warranty. For such products, both the usage intensity and the failure patterns differ from those of products in continuous use. This paper develops warranty cost models for repairable products with a dormant mode from both the manufacturer's and the buyer's perspectives. Relationships between the failure patterns in the dormant mode and in the operational mode are also discussed. Numerical examples and sensitivity analysis are used to demonstrate the applicability of the methodology derived in the paper.
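A heavily hedged sketch of the general accounting idea only: the expected warranty cost on the manufacturer's side can be written as the expected number of failures over the warranty window times an average repair cost, with a lower failure intensity while the product is dormant than once it is operated. The power-law intensities, parameter values and function names below are illustrative assumptions, not the models developed in the paper.

    # Sketch of expected warranty cost with a dormant phase followed by an
    # operational phase. Power-law expected-failure counts and all numbers are
    # hypothetical, for illustration only.

    def expected_failures(duration, scale, shape):
        """Expected failures of a minimally repaired item over `duration` (power-law form)."""
        return (duration / scale) ** shape if duration > 0 else 0.0

    def expected_warranty_cost(dormant_years, warranty_years, repair_cost,
                               dormant_scale=20.0, dormant_shape=1.0,
                               operational_scale=8.0, operational_shape=1.5):
        operational_years = max(warranty_years - dormant_years, 0.0)
        n_dormant = expected_failures(dormant_years, dormant_scale, dormant_shape)
        n_operational = expected_failures(operational_years, operational_scale, operational_shape)
        return repair_cost * (n_dormant + n_operational)

    if __name__ == "__main__":
        # Two-year warranty, of which the first six months are pre-commissioning (dormant).
        print(round(expected_warranty_cost(0.5, 2.0, repair_cost=300.0), 2))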
Abstract:
Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners who are contemplating the upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities from the suburban neighborhood to the Central Business District (CBD) and utilize similar right-of-way (ROW) categories. The research is aimed at developing a method to assist transportation planners and decision makers in determining the more feasible of the two systems. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT, but has significantly lower Operating and Maintenance (OM) costs. This dissertation examines the factors impacting capacity and costs, and develops capacity-based cost models for the LRT and BRT systems. Various ROW categories and alignment configurations of the systems are also considered in the developed cost models. Kikuchi's fleet size model (1985) and a cost allocation method are used to develop the cost models that estimate capacity and costs. Comparisons between LRT and BRT are complicated by the many possible transportation planning and operation scenarios. Finally, a user-friendly computer interface integrated with the established capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was developed in Microsoft Visual Basic to facilitate the process and guide users through the comparison. The cost models and the LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.
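The capacity-based fleet-size arithmetic underlying such cost models can be sketched briefly: the headway must be short enough for the chosen vehicle size to carry the peak demand, and the fleet must cover one round trip per headway. This is generic transit math with hypothetical inputs, not Kikuchi's (1985) model or the LBCostor implementation.

    # Simplified capacity-based fleet-size sketch. Generic transit arithmetic
    # with hypothetical inputs, for illustration only.
    import math

    def fleet_size(route_length_km, avg_speed_kmh, peak_demand_pph, vehicle_capacity,
                   dwell_min_per_stop, stops, min_headway_min=3.0):
        # Headway (min) so that vehicles/hour * capacity covers the peak demand.
        headway = max(min_headway_min, 60.0 * vehicle_capacity / peak_demand_pph)
        # Round-trip time: running time both ways plus dwell time at stops.
        running_min = 2.0 * route_length_km / avg_speed_kmh * 60.0
        round_trip_min = running_min + 2.0 * dwell_min_per_stop * stops
        vehicles = math.ceil(round_trip_min / headway)
        return headway, vehicles

    if __name__ == "__main__":
        # Hypothetical 12 km corridor: a larger LRT vehicle versus a smaller BRT bus.
        print("LRT:", fleet_size(12, 30, 3000, vehicle_capacity=220, dwell_min_per_stop=0.5, stops=15))
        print("BRT:", fleet_size(12, 28, 3000, vehicle_capacity=90, dwell_min_per_stop=0.5, stops=15))

In this toy comparison the smaller BRT vehicle forces a shorter headway and a larger fleet, which is the kind of trade-off between capital and operating cost the dissertation's models quantify.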
Abstract:
This dissertation focuses mainly on coordinated pricing and inventory management problems; the related background is provided in Chapter 1. Several periodic-review models are then discussed in Chapters 2, 3, 4 and 5, respectively. Chapter 2 analyzes a deterministic single-product model in which a price adjustment cost is incurred if the current selling price is changed from that of the previous period. We develop exact algorithms for the problem under different conditions and find that the computational complexity varies significantly with the cost structure. Moreover, our numerical study indicates that dynamic pricing strategies may outperform static pricing strategies even when the price adjustment cost accounts for a significant portion of the total profit. Chapter 3 develops a single-product model in which the demand of a period depends not only on the current selling price but also on past prices through the so-called reference price. Strongly polynomial time algorithms are designed for the case with no fixed ordering cost, and a heuristic is proposed for the general case together with an error bound estimate. Moreover, we illustrate through numerical studies that incorporating the reference price effect into coordinated pricing and inventory models can have a significant impact on firms' profits. Chapter 4 discusses the stochastic version of the model in Chapter 3 when customers are loss averse. It extends the associated results developed in the literature and proves that a reference-price-dependent base-stock policy is optimal under certain conditions. Rather than dealing with specific problems, Chapter 5 establishes the preservation of supermodularity in a class of optimization problems. This property and its extensions include several existing results in the literature as special cases and provide powerful tools, as we illustrate through their applications to several operations problems: the stochastic two-product model with cross-price effects, the two-stage inventory control model, and the self-financing model.
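The reference-price mechanism of Chapter 3 is commonly modeled with an exponentially smoothed past price entering demand; the display below is that standard textbook formulation, written only for illustration and not necessarily the dissertation's exact specification.

    % Standard exponential-smoothing reference price and a linear
    % reference-price effect in demand (illustrative form only).
    r_t = \alpha\, r_{t-1} + (1 - \alpha)\, p_{t-1}, \qquad 0 \le \alpha < 1,
    \qquad
    d_t(p_t, r_t) = a - b\, p_t + \gamma\, (r_t - p_t).

Here a positive gap between the reference price and the posted price is perceived as a gain and a negative gap as a loss, which is where the loss aversion studied in Chapter 4 enters.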
Abstract:
Habitat destruction and fragmentation are known to strongly affect dispersal by altering the quality of the environment between populations. As a consequence, lower landscape connectivity is expected to enhance extinction risks through a decrease in gene flow and the resulting negative effects of genetic drift, accumulation of deleterious mutations and inbreeding depression. Such phenomena are particularly harmful for amphibian species, characterized by disjunct breeding habitats. Because the dispersal behaviour of amphibians is poorly understood, it is crucial to develop new tools that allow us to determine the influence of landscape connectivity on the persistence of populations. In this study, we developed a new landscape genetics approach that aims at identifying land-uses affecting genetic differentiation, without a priori assumptions about associated ecological costs. We surveyed genetic variation at seven microsatellite loci for 19 Alpine newt (Mesotriton alpestris) populations in western Switzerland. Using strips of varying widths that define a dispersal corridor between pairs of populations, we were able to identify land-uses that act as dispersal barriers (i.e. urban areas) and corridors (i.e. forests). Our results suggest that habitat destruction and landscape fragmentation might in the near future affect common species such as M. alpestris. In addition, by identifying relevant landscape variables influencing population structure without unrealistic assumptions about dispersal, our method offers a simple and flexible investigative tool as an alternative to least-cost models and other approaches.
Abstract:
The present paper makes progress in explaining the role of capital for inflation and output dynamics. We follow Woodford (2003, Ch. 5) in assuming Calvo pricing combined with a convex capital adjustment cost at the firm level. Our main result is that capital accumulation affects inflation dynamics primarily through its impact on the marginal cost. This mechanism is much simpler than the one implied by the analysis in Woodford's text. The reason is that his analysis suffers from a conceptual mistake, as we show. The latter obscures the economic mechanism through which capital affects inflation and output dynamics in the Calvo model, as discussed in Woodford (2004).
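The marginal-cost channel referred to above runs through the standard Calvo-pricing Phillips curve, reproduced below as a reminder only; the paper's full model with firm-level capital adjustment costs is richer than this baseline, and the slope coefficient shown is the one of the basic model.

    % Standard Calvo-pricing (New Keynesian) Phillips curve in log deviations,
    % shown only as a reminder of the marginal-cost channel.
    \pi_t = \beta\, E_t \pi_{t+1} + \kappa\, \widehat{mc}_t, \qquad
    \kappa = \frac{(1 - \theta)(1 - \beta\theta)}{\theta},

so anything that changes firms' real marginal cost, including the accumulated capital stock and the cost of adjusting it, feeds into inflation through the marginal-cost term.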
Abstract:
Background, aim, and scope: A coupled life cycle costing and life cycle assessment has been performed for car-bodies of the Korean Tilting Train eXpress (TTX) project using European and Korean databases, with the objective of assessing environmental and cost performance to aid materials and process selection. More specifically, the potential of polymer composite car-body structures for the TTX has been investigated.
Materials and methods: This assessment includes the cost of both carriage manufacturing and use phases, coupled with the life cycle environmental impacts of all stages from raw material production, through carriage manufacture and use, to end-of-life scenarios. Metallic carriages were compared with two composite options: hybrid steel-composite and full-composite carriages. The total planned production for this regional Korean train was 440 cars, with an annual production volume of 80 cars.
Results and discussion: The coupled analyses were used to generate plots of cost versus energy consumption and environmental impacts. The results show that the raw material and manufacturing phase costs are approximately half of the total life cycle costs, whilst their environmental impact is relatively insignificant (3-8%). The use phase of the car-body has the largest environmental impact for all scenarios, with near negligible contributions from the other phases. Since steel rail carriages weigh more (27-51%), the use phase cost is correspondingly higher, resulting in both the greatest environmental impact and the highest life cycle cost. Compared to the steel scenario, the hybrid composite variant has a lower life cycle cost (16%) and a lower environmental impact (26%). Though the full composite rail carriage may have the highest manufacturing cost, it results in the lowest total life cycle costs and lowest environmental impacts.
Conclusions and recommendations: This coupled cost and life cycle assessment showed that the full composite variant was the optimum solution. This case study showed that coupling technical cost models with life cycle assessment offers an efficient route to accurately evaluate economic and environmental performance in a consistent way.