965 results for Adjustment cost models


Relevance:

100.00%

Publisher:

Abstract:

Organised by Knowledge Exchange and the Nordbib programme, 11 June 2012, 8:30-12:30, Copenhagen, adjacent to the Nordbib conference 'Structural frameworks for open, digital research'. [Image: participants in a break-out discussion during the workshop on cost models.] The Knowledge Exchange and the Nordbib programme organised a workshop on cost models for the preservation and management of digital collections. The rapid growth of the digital information that a wide range of institutions must preserve underlines the need for robust cost modelling. Such models should enable these institutions to assess what resources are needed to sustain their digital preservation activities and to compare different preservation solutions in order to select the most cost-efficient alternative. To justify the costs, institutions also need to describe the expected benefits of preserving digital information. The workshop provided an overview of existing models and demonstrated the functionality of some current cost tools. It considered the specific economic challenges of preserving research data and addressed the benefits of investing in the preservation of digital information. Finally, the workshop discussed international collaboration on cost models. The aim of the workshop was to facilitate understanding of the economics of data preservation and to discuss the value of developing an international benchmarking model for the costs and benefits of digital preservation. The workshop took place at the Danish Agency for Culture and was scheduled directly before the Nordbib conference 'Structural frameworks for open, digital research'.

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment (behavior that may be justified by the presence of kinked adjustment costs) have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model which introduces infrequent information into a kinked adjustment cost model by assuming that agents do not continuously observe the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state and time dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from those obtained with full-information adjustment cost models.
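The bunching mechanism described here lends itself to a small simulation. The sketch below is illustrative only, not the paper's model: it assumes a symmetric (S,s) inaction band, Gaussian shocks to the frictionless optimum, and a fixed interval between information releases, with all parameter values invented.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000        # number of agents
T = 200           # periods
band = 1.0        # half-width of the (S,s) inaction band
info_every = 10   # periods between information releases
sigma = 0.25      # std. dev. of shocks to the frictionless optimum

target = np.zeros(N)      # frictionless optimal level (latent)
perceived = np.zeros(N)   # last observed value of the target
control = np.zeros(N)     # agents' actual decision variable

adjust_share = np.zeros(T)
for t in range(T):
    # The frictionless optimum drifts every period...
    target += rng.normal(0.0, sigma, N)
    # ...but agents only observe it at infrequent information dates.
    if t % info_every == 0:
        perceived = target.copy()
    # (S,s) rule: adjust only when the perceived gap leaves the band.
    gap = perceived - control
    adjusting = np.abs(gap) > band
    control[adjusting] = perceived[adjusting]
    adjust_share[t] = adjusting.mean()

# Adjustment is bunched at information dates: agents act together.
print(adjust_share[:30].round(3))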

Relevance:

100.00%

Publisher:

Abstract:

We extend the macroeconomic literature on (S,s)-type rules by introducing infrequent information into a kinked adjustment cost model. We first show that optimal individual decision rules are both state- and time-dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. In our model, a vast number of agents act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. These results are in contrast with those obtained with full-information adjustment cost models.


Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the procedure and results from four years of research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique, VERT (Venture Evaluation and Review Technique), was used to model the pre-tender costs of public health, heating, ventilating, air-conditioning, fire protection, lift and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which had previously defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data were updated and adjusted using mechanical and electrical pre-tender cost indices and factors for location, selection of contractor, contract sum, height and site conditions. Ranges of cost, time and performance data were represented by probability density functions defined by constant, uniform, normal and beta distributions. These variables, together with a network of the interrelationships between services components, provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From these data, alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost-significant items were isolated for closer examination. The resultant models successfully combined cost, time and performance factors and gave the quantity surveyor an appreciation of the cost ranges associated with the various engineering services design options.
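The Monte Carlo machinery described here can be sketched in a few lines. The example below is not the VERT model itself: the installations, cost figures and distribution parameters are all invented, but the distribution families (constant, uniform, normal, beta) and the percentile-style output mirror those named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 50_000

# Illustrative cost ranges per installation (in GBP 1,000s); the
# distribution families mirror those used in the study.
heating   = rng.normal(420.0, 35.0, n_sims)            # normal
electrics = rng.uniform(300.0, 380.0, n_sims)          # uniform
lifts     = 150.0 + 60.0 * rng.beta(2.0, 5.0, n_sims)  # beta, skewed
fire      = np.full(n_sims, 90.0)                      # constant

total = heating + electrics + lifts + fire

# Cumulative-frequency view of total services cost.
for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(total, q):,.0f}")
```

Reading off the spread between the 10th and 90th percentiles is what gives the estimator a cost range, and hence a measure of risk, rather than a single point figure.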

Relevance:

90.00%

Publisher:

Abstract:

In today's fiercely competitive product market, product warranty has come to play an important role. The warranty period offered by the manufacturer or dealer has been progressively increasing since the beginning of the 20th century. Currently, a large number of products are sold with long-term warranty policies in the form of extended warranties, warranties for used products, service contracts and lifetime warranty policies. Lifetime warranties are a relatively new concept. Modelling failures during the warranty period and the costs of such policies is complex, since the lifespan covered by these policies is not well defined, and it is often difficult to characterise life measures over the longer coverage period given the usage patterns, the maintenance activities undertaken and the uncertainty of costs over the period. This paper focuses on defining lifetime, developing lifetime warranty policies, and building models for predicting failures and estimating costs for lifetime warranty policies.
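As one concrete, hypothetical illustration of a lifetime warranty cost model (not the paper's own, which is more elaborate), the sketch below prices a free-repair policy for a repairable item whose failures follow a power-law (Weibull) intensity under minimal repair; all parameter values are invented.

```python
import numpy as np

def expected_lifetime_warranty_cost(W, beta, eta, c_repair, rate=0.0):
    """Expected cost of a free-repair warranty over [0, W] years.

    Failures follow a power-law intensity
        lambda(t) = (beta / eta) * (t / eta)**(beta - 1),
    a common choice for repairable items under minimal repair.
    Costs are continuously discounted at `rate`.
    """
    ts = np.linspace(1e-9, W, 10_000)
    intensity = (beta / eta) * (ts / eta) ** (beta - 1.0)
    discounted = intensity * np.exp(-rate * ts)
    return c_repair * np.trapz(discounted, ts)

# 30-year "lifetime" coverage, wear-out (beta > 1), 5% discount rate.
print(expected_lifetime_warranty_cost(W=30, beta=1.6, eta=12.0,
                                      c_repair=80.0, rate=0.05))
```

The long horizon is exactly what makes these policies hard to price: small changes in the shape parameter `beta` compound over decades of coverage.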

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
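The sketch below conveys the flavour of such a high-level cost model, though it is not Skel's actual one: a task farm's parallel run time is modelled as evenly divided work plus a per-task coordination overhead, from which a predicted speedup follows. All figures are invented.

```python
def farm_cost(t_task, n_tasks, n_workers, t_overhead_per_task=0.0):
    """Simplified cost model for a task-farm skeleton: work is spread
    evenly across workers, plus a per-task coordination overhead."""
    work = t_task * n_tasks / n_workers
    overhead = t_overhead_per_task * n_tasks
    return work + overhead

t_seq = 1.0 * 10_000  # sequential time: 10k tasks of 1 time unit each
for w in (1, 4, 12, 24):
    t_par = farm_cost(1.0, 10_000, w, t_overhead_per_task=0.01)
    print(f"{w:2d} workers: predicted speedup {t_seq / t_par:4.1f}x")
```

A model of this shape is what lets a programmer decide, before refactoring, whether introducing a farm at a given point is worth it: if the predicted speedup at the target core count is marginal, the refactoring is skipped.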

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the fingerprinting map; instead, the map is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall and Motley-Keenan. kNN (k-nearest neighbour) and WkNN (weighted k-nearest neighbour) were used as location estimation algorithms to determine the accuracy of the proposed technique. This work relies on analytical and measurement tools to determine which path loss propagation models are better suited to location estimation applications based on the Received Signal Strength Indicator (RSSI). The study presents different proposals for choosing the most appropriate values of the models' parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan, that take wall thickness into account are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allows the propagation loss to be estimated for several environments. Results obtained from two testing scenarios showed the reliability of the adjustments, yielding smaller errors between the measured and predicted values.
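A condensed sketch of the pipeline described, with invented geometry, transmit power and wall attenuation: a fingerprint map is generated analytically from a Motley-Keenan-style multi-wall path loss model (a log-distance term plus a loss per traversed wall), and WkNN then estimates position in signal space.

```python
import numpy as np

def motley_keenan(d, pl_d0=40.0, n=2.0, wall_losses=()):
    """Multi-wall path loss (dB): log-distance term plus one attenuation
    term per wall crossed (this is where thickness-adjusted coefficients
    would enter)."""
    return pl_d0 + 10.0 * n * np.log10(np.maximum(d, 0.1)) + sum(wall_losses)

# Analytically generated fingerprint map: predicted RSSI at each grid
# point from two access points, with no site survey.
tx_power = 15.0  # dBm
aps = np.array([[0.0, 0.0], [20.0, 10.0]])
grid = np.array([[x, y] for x in range(0, 21, 2)
                        for y in range(0, 11, 2)], float)
dists = np.linalg.norm(grid[:, None, :] - aps[None, :, :], axis=2)
fingerprints = tx_power - motley_keenan(dists, wall_losses=(6.9,))

def wknn(rssi_measured, k=3):
    """Weighted k-nearest neighbours in signal space."""
    err = np.linalg.norm(fingerprints - rssi_measured, axis=1)
    idx = np.argsort(err)[:k]
    w = 1.0 / (err[idx] + 1e-6)
    return (w[:, None] * grid[idx]).sum(axis=0) / w.sum()

print(wknn(np.array([-55.0, -62.0])))  # estimated (x, y) position
```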

Relevance:

90.00%

Publisher:

Abstract:

The questions addressed in the first two articles of my thesis seek to understand the economic factors that affect the term structure of interest rates and the risk premium. I build non-linear general equilibrium models that incorporate bonds of different maturities. Specifically, the first article aims to understand the relationship between macroeconomic factors and the level of the risk premium in a New Keynesian general equilibrium framework with uncertainty. Uncertainty in the model comes from three sources: productivity shocks, monetary shocks and preference shocks. The model features two types of real rigidities, namely habit formation in preferences and capital stock adjustment costs. The model is solved by second-order perturbation methods and calibrated to the US economy. Since the risk premium is by nature a compensation for risk, the second-order approximation implies that the risk premium is a linear combination of the volatilities of the three shocks. The results show that, with the calibrated parameters, the real shocks (productivity and preferences) play a more important role than the monetary shocks in determining the level of the risk premium. I show that, contrary to previous work (in which productive capital is fixed), the effect of the habit formation parameter on the risk premium depends on the degree of capital adjustment costs. When capital adjustment costs are so high that the capital stock is fixed in equilibrium, an increase in the habit formation parameter raises the risk premium. By contrast, when agents can freely adjust the capital stock at no cost, the effect of the habit formation parameter on the risk premium is negligible. This result reflects the fact that when the capital stock can be adjusted at no cost, an additional consumption smoothing channel opens up for agents; consequently, the effect of habit formation on the risk premium is dampened. Moreover, the results show that the way the central bank conducts its monetary policy affects the risk premium: the more aggressively the central bank responds to inflation, the lower the risk premium, and vice versa. This is because when the central bank fights inflation, the variance of inflation falls, and the risk premium due to inflation risk therefore declines. In the second article, I extend the first by using Epstein-Zin recursive preferences and by allowing the conditional volatilities of the shocks to vary over time. This framework is motivated by two considerations. First, recent studies (Doh, 2010; Rudebusch and Swanson, 2012) have shown that these preferences are well suited to asset pricing analysis in general equilibrium models. Second, heteroscedasticity is a pervasive feature of economic and financial data, which implies that, unlike in the first article, uncertainty varies over time. The framework in this article is therefore more general and more realistic than that of the first.
The main objective of this article is to examine the impact of conditional volatility shocks on the level and dynamics of interest rates and the risk premium. Since the risk premium is constant at a second-order approximation, the model is solved by perturbation methods with a third-order approximation, which yields a time-varying risk premium. The advantage of introducing conditional volatility shocks is that they induce additional state variables that contribute further to the dynamics of the risk premium. I show that the third-order approximation implies that risk premia have an ARCH-M (Autoregressive Conditional Heteroscedasticity in Mean) representation of the kind introduced by Engle, Lilien and Robins (1987). The difference is that in this model the parameters are structural and the volatilities are the conditional volatilities of economic shocks, not those of the variables themselves. I estimate the model's parameters by the simulated method of moments (SMM) using US data. The estimation results show evidence of stochastic volatility in all three shocks. Moreover, the contribution of the shocks' conditional volatilities to the level and dynamics of the risk premium is significant. In particular, the effects of the conditional volatilities of the productivity and preference shocks are significant: the conditional volatility of the productivity shock contributes positively to the means and standard deviations of risk premia, with contributions that vary with bond maturity, while the conditional volatility of the preference shock contributes negatively to the means and positively to the variances of risk premia. The impact of the monetary policy volatility shock on risk premia is negligible. The third article (co-written with Eric Schaling and Alain Kabundi, revised and resubmitted to the Journal of Economic Modelling) deals with heterogeneity in the inflation expectations of various economic groups and its impact on monetary policy in South Africa. The main question is whether different groups of economic agents form their inflation expectations in the same way and whether they perceive the monetary policy of the central bank (the South African Reserve Bank) in the same way. We specify an inflation forecasting model that allows us to test whether inflation expectations are anchored to the central bank's inflation target band (3%-6%). The data come from a survey conducted by the central bank among three groups of agents: financial analysts, firms and trade unions. We exploit the panel structure of the data to test for heterogeneity in inflation expectations and to infer each group's perception of monetary policy. The results show evidence of heterogeneity in the way the different groups form their expectations: the expectations of financial analysts are anchored to the inflation target band, whereas those of firms and trade unions are not. Indeed, firms and trade unions place a significant weight on one-period-lagged inflation, and their forecasts vary with realised (lagged) inflation, which indicates that the central bank is not perceived as fully credible by these agents.
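In symbols, the two approximations described above take roughly the following form. This is an illustrative sketch only: the alpha coefficients stand in for the model's structural loadings, and the subscripts a, m, p denote the productivity, monetary and preference shocks.

```latex
% Second order: the risk premium is constant and loads on the
% (constant) shock variances.
rp \;\approx\; \alpha_a \sigma_a^{2} + \alpha_m \sigma_m^{2} + \alpha_p \sigma_p^{2}

% Third order with stochastic volatility: the conditional variances
% become state variables, yielding an ARCH-M-type time-varying premium.
rp_t \;\approx\; \alpha_0 + \alpha_a \sigma_{a,t}^{2}
      + \alpha_m \sigma_{m,t}^{2} + \alpha_p \sigma_{p,t}^{2}
```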

Relevance:

90.00%

Publisher:

Abstract:

Most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. The warranty on such products may cover the time from their installation to the end of the warranty period. Prior to the commissioning of the building, the products are in a dormant mode (i.e., not operated) but are still protected by the warranty. For such products, both the usage intensity and the failure patterns differ from those of products in continuous use. This paper develops warranty cost models for repairable products with a dormant mode from both the manufacturer's and the buyer's perspectives. Relationships between the failure patterns in the dormant mode and in the operational mode are also discussed. Numerical examples and sensitivity analysis demonstrate the applicability of the methodology derived in the paper.
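A hypothetical minimal version of such a model, from the manufacturer's side: the warranty window is split into a dormant phase and an operational phase with different (here constant) failure intensities, and the expected free-repair cost follows. The paper's models, which relate the two modes' failure patterns to each other, are richer; all figures below are invented.

```python
def warranty_cost_with_dormant_mode(t_dormant, t_total, lam_dormant,
                                    lam_operational, c_repair):
    """Expected manufacturer's cost of a free-repair warranty when the
    product sits dormant (installed but not operated) until the building
    is commissioned, then switches to operational use. Constant failure
    intensities per mode keep the sketch simple."""
    expected_failures = (lam_dormant * t_dormant
                         + lam_operational * (t_total - t_dormant))
    return c_repair * expected_failures

# 2-year warranty, 0.5 years dormant, failures rarer while dormant.
print(warranty_cost_with_dormant_mode(0.5, 2.0, 0.02, 0.15, 120.0))
```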

Relevance:

90.00%

Publisher:

Abstract:

The article discusses the findings of research on the cost performance of structural frames. The analysis is based on a range of designs commonly found in 10-storey buildings in Australia. The study explored the best cost solutions for any particular project through generic cost models of the steel, concrete and formwork systems used in commercial buildings. The research concluded that the most important factor in choosing a structural building frame is the unique characteristics of the individual project.

Relevance:

90.00%

Publisher:

Abstract:

Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners who are contemplating an upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities, from suburban neighborhoods to the Central Business District (CBD), and utilize similar right-of-way (ROW) categories. The research is aimed at developing a method to assist transportation planners and decision makers in determining the more feasible of the two systems. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT, but has significantly lower operating and maintenance (O&M) costs. This dissertation examines the factors impacting capacity and costs, and develops capacity-based cost models for the LRT and BRT systems. Various ROW categories and alignment configurations of the systems are also considered in the developed cost models. Kikuchi's (1985) fleet size model and a cost allocation method are used to develop the cost models for estimating capacity and costs. Comparisons between LRT and BRT are complicated by the many possible transportation planning and operation scenarios. A user-friendly computer interface integrating the established capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was therefore developed in Microsoft Visual Basic to facilitate the process and guide users through the comparison operations. The cost models and the LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.
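The dissertation's capacity-based cost models are not reproduced here, but the sketch below conveys the idea with invented figures: fleet size falls out of round-trip time and headway (the intuition behind fleet-size models such as Kikuchi's), and an annualized capital-plus-O&M cost follows, capturing the LRT/BRT trade-off of higher capital versus lower operating cost.

```python
import math

def fleet_size(round_trip_min, headway_min):
    """Vehicles needed to hold a target headway over one round trip."""
    return math.ceil(round_trip_min / headway_min)

def annual_cost(fleet, capital_per_vehicle, life_years, om_per_vehicle):
    """Capacity-based annual cost: straight-line capital recovery plus
    per-vehicle operating and maintenance."""
    return fleet * (capital_per_vehicle / life_years + om_per_vehicle)

# Invented unit costs: LRT vehicles cost more and last longer; BRT
# vehicles are cheaper but carry higher per-vehicle O&M.
for mode, cap, life, om in (("LRT", 4.0e6, 30, 0.35e6),
                            ("BRT", 1.2e6, 12, 0.45e6)):
    n = fleet_size(round_trip_min=60, headway_min=6)
    cost = annual_cost(n, cap, life, om)
    print(f"{mode}: fleet {n}, annual cost ${cost / 1e6:.2f}M")
```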

Relevance:

90.00%

Publisher:

Abstract:

This dissertation focuses mainly on coordinated pricing and inventory management problems; the related background is provided in Chapter 1. Several periodic-review models are then discussed in Chapters 2, 3, 4 and 5. Chapter 2 analyzes a deterministic single-product model in which a price adjustment cost is incurred whenever the current selling price differs from that of the previous period. We develop exact algorithms for the problem under different conditions and find that the computational complexity varies significantly with the cost structure. Moreover, our numerical study indicates that dynamic pricing strategies may outperform static pricing strategies even when the price adjustment cost accounts for a significant portion of the total profit. Chapter 3 develops a single-product model in which demand in a period depends not only on the current selling price but also on past prices through the so-called reference price. Strongly polynomial time algorithms are designed for the case without a fixed ordering cost, and a heuristic is proposed for the general case together with an error bound estimation. Moreover, numerical studies illustrate that incorporating the reference price effect into coordinated pricing and inventory models can have a significant impact on firms' profits. Chapter 4 discusses the stochastic version of the model in Chapter 3 when customers are loss averse. It extends related results in the literature and proves that a reference-price-dependent base-stock policy is optimal under certain conditions. Rather than dealing with specific problems, Chapter 5 establishes the preservation of supermodularity in a class of optimization problems. This property and its extensions include several existing results in the literature as special cases and provide powerful tools, as we illustrate through applications to several operations problems: a stochastic two-product model with cross-price effects, a two-stage inventory control model, and a self-financing model.
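As a toy illustration of the reference-price effect in Chapter 3 (with an invented demand function and parameters, not the dissertation's model): demand responds to the gap between the posted price and a reference price that updates by exponential smoothing, so a deep markdown today depresses customers' willingness to pay later.

```python
def demand(p, r, a=100.0, b=2.0, c=1.5):
    """Linear demand with a reference-price effect: sales gain when the
    price sits below the customers' reference point r."""
    return max(a - b * p + c * (r - p), 0.0)

def simulate(prices, r0=30.0, theta=0.7, unit_cost=10.0):
    """Total profit of a price path when the reference price updates by
    exponential smoothing: r' = theta * r + (1 - theta) * p."""
    r, profit = r0, 0.0
    for p in prices:
        profit += (p - unit_cost) * demand(p, r)
        r = theta * r + (1.0 - theta) * p
    return profit

# A steep one-off markdown drags the reference price down and can cost
# more later than it earns today.
print(simulate([30, 30, 30, 30]))   # static pricing
print(simulate([18, 30, 30, 30]))   # deep promotion in period 1
```

With these invented numbers, the static path earns more in total than the promotion path, precisely because periods after the markdown face a lowered reference price.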