895 results for cost model
Abstract:
The cost of a road over its service life is a function of its design, the quality of construction, and maintenance strategies and operations. An optimal life-cycle cost for a road requires evaluation of the above-mentioned components. Unfortunately, road designers often neglect a very important aspect, namely, the possibility of performing future maintenance activities. Focus is mainly directed towards other aspects such as investment costs, traffic safety, aesthetic appearance, regional development and environmental effects. This doctoral thesis presents the results of a research project aimed at increasing consideration of road maintenance aspects in the planning and design process. The following subgoals were established: identify the obstacles that prevent adequate consideration of future maintenance during the road planning and design process; and examine optimisation of life-cycle costs as an approach towards increased efficiency during the road planning and design process. The research project started with a literature review aimed at evaluating the extent to which maintenance aspects are considered during road planning and design, as a potential for improving maintenance efficiency. Efforts made by road authorities to increase efficiency, especially maintenance efficiency, were evaluated. The results indicated that all the evaluated efforts had one thing in common: they ignored the interrelationship between geometrical road design and maintenance as an effective tool for increasing maintenance efficiency. Focus has mainly been on improving operating practices and maintenance procedures. This might also explain why some efforts to increase maintenance efficiency have been less successful. An investigation was conducted to identify the problems and difficulties that obstruct due consideration of maintainability during the road planning and design process. A method called “Change Analysis” was used to analyse data collected during interviews with experts in road design and maintenance. The study indicated a complex combination of problems that results in inadequate consideration of maintenance aspects when planning and designing roads. The identified problems were classified into six categories: insufficient consulting, insufficient knowledge, regulations and specifications without consideration of maintenance aspects, insufficient planning and design activities, inadequate organisation, and demands from other authorities. Several urgent needs for changes to eliminate these problems were identified. One of the problems identified in the above-mentioned study as an obstacle to due consideration of maintenance aspects during road design was the absence of a model for calculating life-cycle costs for roads. Because of this gap, the research project focused on implementing a new approach for calculating and analysing life-cycle costs for roads, with emphasis on the relationship between road design and road maintainability. Road barriers were chosen as an example. The ambition is to develop this approach to cover other road components at a later stage. A study was conducted to quantify repair rates for barriers and the associated repair costs, one of the major maintenance costs for road barriers. A method called “Case Study Research Method” was used to analyse the effect of several factors on barrier repair costs, such as barrier type, road type, posted speed and seasonal effects.
The analyses were based on documented data associated with 1,625 repairs conducted in four different geographical regions in Sweden during 2006. A model for calculating average repair costs per vehicle kilometre was created. Significant differences in barrier repair costs were found between the studied barrier types. In another study, the injuries associated with road barrier collisions and the corresponding influencing factors were analysed. The analyses in this study were based on documented data from actual barrier collisions between 2005 and 2008 in Sweden. The results were used to calculate the cost of injuries associated with barrier collisions as a part of the socio-economic cost of road barriers. The results showed significant differences in the number of injuries associated with collisions with different barrier types. To calculate and analyse life-cycle costs for road barriers, a new approach was developed based on a method called “Activity-based Life-cycle Costing”. By modelling uncertainties, the presented approach makes it possible to identify and analyse the factors crucial for optimising life-cycle costs. The study showed great potential for increasing road maintenance efficiency through road design. It also showed that road components with low investment costs might not be the best choice once maintenance and socio-economic aspects are included. The difficulties and problems faced during the collection of data for calculating life-cycle costs for road barriers indicated a great need to improve current data collection and archiving procedures. The research focused on Swedish road planning and design; however, the conclusions can be applied to other Nordic countries, where weather conditions and road design practices are similar. The general methodological approaches used in this research project may also be applied to other studies.
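The activity-based life-cycle costing with modelled uncertainties described above can be sketched as a small Monte Carlo exercise. The Python sketch below only illustrates the structure of such a calculation (investment plus discounted repair and injury costs for one barrier alternative); the cost figures, repair rate, traffic volume, horizon and discount rate are hypothetical placeholders, not values from the thesis.

import random

def life_cycle_cost(investment, repair_rate, repair_cost, injury_cost,
                    traffic_mvkm, horizon_years, discount_rate):
    """Present value of investment plus repair and injury (socio-economic)
    costs for one barrier alternative; a simplified activity-based LCC."""
    pv = investment
    for year in range(1, horizon_years + 1):
        expected_repairs = repair_rate * traffic_mvkm          # repairs per year
        annual_cost = expected_repairs * (repair_cost + injury_cost)
        pv += annual_cost / (1.0 + discount_rate) ** year      # discount to today
    return pv

def monte_carlo_lcc(n_draws=10000):
    """Propagate uncertainty in repair rate and repair cost (hypothetical ranges)."""
    draws = []
    for _ in range(n_draws):
        rate = random.triangular(0.5, 2.0, 1.0)        # repairs per million vehicle-km
        cost = random.triangular(8000, 25000, 12000)   # currency units per repair
        draws.append(life_cycle_cost(investment=400000, repair_rate=rate,
                                     repair_cost=cost, injury_cost=3000,
                                     traffic_mvkm=5.0, horizon_years=40,
                                     discount_rate=0.04))
    draws.sort()
    return draws[len(draws) // 2], draws[int(0.95 * len(draws))]  # median, 95th pct

median, p95 = monte_carlo_lcc()
print(f"median LCC: {median:,.0f}  95th percentile: {p95:,.0f}")

A sketch of this kind makes it easy to see which inputs drive the spread of the life-cycle cost, which is the sort of uncertainty analysis the abstract refers to.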
Abstract:
Millions of unconscious calculations are made daily by pedestrians walking through the Colby College campus. I used ArcGIS to build a predictive spatial model that chooses paths similar to those actually used by people on a regular basis. To make a viable model of how most travelers choose their way, I considered both the distance required and the type of traveling surface. I used an iterative process to develop a scheme for weighting travel costs, which allowed ArcMap to predict accurate least-cost paths. The accuracy was confirmed when the calculated routes were compared to satellite photography and found to overlap the well-worn “shortcuts” taken between the paved paths throughout campus.
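Outside ArcGIS, the same least-cost-path idea can be reproduced as a shortest-path search over a cost-weighted grid. The Python sketch below assumes a tiny hypothetical raster of surface weights (1.0 for pavement, 2.5 for grass; these values are illustrative, not the calibrated weighting scheme from the study) and shows how distance and surface type combine into a single travel cost that a Dijkstra search minimises.

import heapq

COST = [                       # hypothetical surface weights per grid cell
    [1.0, 1.0, 2.5, 2.5],
    [2.5, 1.0, 2.5, 2.5],
    [2.5, 1.0, 1.0, 1.0],
    [2.5, 2.5, 2.5, 1.0],
]

def least_cost_path(cost, start, goal):
    """Dijkstra over a 4-connected grid; each step costs the mean of the two cell weights."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue                                   # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (cost[r][c] + cost[nr][nc]) / 2.0   # surface-weighted step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, cell = [goal], goal                          # walk back from goal to start
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return list(reversed(path)), dist[goal]

route, total = least_cost_path(COST, (0, 0), (3, 3))
print(route, round(total, 2))

The predicted route follows the low-weight (paved) cells, illustrating how the surface weighting steers the least-cost path.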
Abstract:
Climate model projections show that climate change will further increase the risk of flooding in many regions of the world. There is a need for climate adaptation, but building new infrastructure or additional retention basins has its limits, especially in densely populated areas where open space is limited. Another solution is the more efficient use of existing infrastructure. This research investigates a method for real-time flood control by means of existing gated weirs and retention basins. The method was tested on the specific study area of the Demer basin in Belgium but is generally applicable. Today, retention basins along the Demer River are controlled by means of adjustable gated weirs based on fixed logic rules. However, because of the high complexity of the system, these rules achieve only suboptimal results. By making use of precipitation forecasts and combined hydrological-hydraulic river models, the state of the river network can be predicted. To speed up the calculations, a conceptual river model was used. The conceptual model was combined with a Model Predictive Control (MPC) algorithm and a Genetic Algorithm (GA). The MPC algorithm predicts the state of the river network depending on the positions of the adjustable weirs in the basin. The GA generates these positions in a semi-random way. Cost functions based on water levels were introduced to evaluate each generation in terms of flood damage minimization. In the final phase of this research, the influence of the most important MPC and GA parameters was investigated by means of a sensitivity study. The results show that the MPC-GA algorithm reduces the total flood volume during the historical event of September 1998 by 46% in comparison with the current regulation. Based on the MPC-GA results, some recommendations could be formulated to improve the logic rules.
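The coupling of a fast conceptual model, MPC-style prediction of water levels and a GA that proposes gate positions can be illustrated with a stripped-down genetic algorithm. In the Python sketch below, simulate_levels is merely a placeholder for the conceptual river model, and the flood thresholds, control horizon and GA settings are hypothetical, not the parameters used in the Demer study.

import random

N_WEIRS, HORIZON = 3, 12            # number of gates and control steps (hypothetical)
FLOOD_LEVEL = [2.0, 1.8, 2.2]       # damage threshold per basin, in metres (hypothetical)

def simulate_levels(schedule):
    """Placeholder for the conceptual hydrological-hydraulic model: maps a schedule
    of gate openings (rows = time steps, values in 0..1) to peak levels per basin."""
    return [2.4 - 0.8 * sum(step[i] for step in schedule) / HORIZON
            for i in range(N_WEIRS)]

def flood_cost(schedule):
    """Quadratic penalty on water levels above the flood threshold (damage proxy)."""
    return sum(max(0.0, lvl - limit) ** 2
               for lvl, limit in zip(simulate_levels(schedule), FLOOD_LEVEL))

def random_schedule():
    return [[random.random() for _ in range(N_WEIRS)] for _ in range(HORIZON)]

def evolve(pop_size=30, generations=50, mutation=0.1):
    population = [random_schedule() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=flood_cost)                 # best gate schedules first
        parents = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, HORIZON)          # one-point crossover in time
            child = [step[:] for step in a[:cut] + b[cut:]]
            if random.random() < mutation:              # perturb one gate setting
                t, w = random.randrange(HORIZON), random.randrange(N_WEIRS)
                child[t][w] = random.random()
            children.append(child)
        population = parents + children
    best = min(population, key=flood_cost)
    return best, flood_cost(best)

best_schedule, residual = evolve()
print("residual flood cost:", round(residual, 3))

In the real setup the placeholder model would be replaced by the conceptual river model driven by precipitation forecasts, and the cost function would be evaluated over the full prediction horizon at each MPC step.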
Abstract:
Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year. To clean up urban wastewater, new Federal Wastewater Systems Effluent Regulations (WSER), establishing national baseline effluent quality standards achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken because of cost and practicality. Instead, the City has relied on estimation methods and modelling approaches in the past to allow funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study in which GIS-based hydrologic and hydraulic modelling is the chosen approach. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all the local and trunk sewers under heavy rain events; 3) analysing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. This modelling approach has also allowed funds to be applied toward reducing and ultimately eliminating the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.
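The WSER reporting task mentioned above ultimately reduces to aggregating modelled overflow series into an annual discharge volume and an event count per outfall. The Python sketch below is a minimal, hypothetical illustration of that bookkeeping; the record format and the 12-hour inter-event gap are assumptions for illustration, not the City's actual reporting rules.

from datetime import datetime, timedelta

def summarize_cso(records, gap_hours=12):
    """records: (timestamp, overflow volume in m3) pairs for one CSO outfall,
    one per model time step. Returns (total volume, number of overflow events),
    where a new event starts after a dry gap of at least gap_hours."""
    total_volume, events, last_overflow = 0.0, 0, None
    for ts, volume in sorted(records):
        if volume <= 0.0:
            continue                                    # dry time step
        total_volume += volume
        if last_overflow is None or ts - last_overflow >= timedelta(hours=gap_hours):
            events += 1                                 # long enough dry spell: new event
        last_overflow = ts
    return total_volume, events

t0 = datetime(2023, 6, 1)
demo = [(t0 + timedelta(hours=h), v)
        for h, v in [(0, 120.0), (1, 80.0), (2, 0.0), (30, 45.0)]]  # two storms (made up)
print(summarize_cso(demo))   # -> (245.0, 2)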
Abstract:
Lucas (1987) has shown the surprising result that the welfare cost of business cycles is quite small. Using standard assumptions on preferences and a fully-fledged econometric model, we computed the welfare costs of macroeconomic uncertainty for the post-WWII era using the multivariate Beveridge-Nelson decomposition for trends and cycles, which considers not only business-cycle uncertainty but also uncertainty from the stochastic trend in consumption. The post-WWII period is relatively quiet, with the welfare costs of uncertainty being about 0.9% of per-capita consumption. Although changing the decomposition method substantially changed the initial results, the welfare cost of uncertainty is qualitatively small in the post-WWII era: about $175.00 a year per capita in the U.S. We also computed the marginal welfare cost of macroeconomic uncertainty using the same technique. It is about twice as large as the welfare cost, roughly $350.00 a year per capita.
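The welfare-cost figures quoted above follow the compensating-variation logic of Lucas (1987): find the proportional consumption increase that makes the household indifferent between the uncertain consumption path and a smooth one. In the textbook CRRA notation (the functional forms below are the standard ones, not necessarily the paper's exact specification):

E_0 \sum_{t=0}^{\infty} \beta^t u\big((1+\lambda)\,c_t\big) \;=\; \sum_{t=0}^{\infty} \beta^t u(\bar{c}_t), \qquad u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad \lambda \approx \frac{\gamma}{2}\,\operatorname{Var}(\log c_t).

A cost of \lambda \approx 0.9\% of per-capita consumption matches the quoted \$175 per person per year for a per-capita consumption base of roughly \$19{,}000.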
Abstract:
With standard assumptions on preferences and a fully-fledged econometric model, we computed the welfare costs of macroeconomic uncertainty for the post-war U.S. using the Beveridge-Nelson decomposition. Welfare costs are about 0.9% of per-capita consumption ($175.00) and marginal welfare costs are about twice as large.
Abstract:
In price-competition models, a positive consumer search cost alone does not generate an equilibrium with price dispersion. Dynamic switching-cost models, by contrast, consistently generate this phenomenon, which is well documented for retail prices. Although both literatures are vast, few models have attempted to combine the two frictions in a single framework. This work presents a dynamic price-competition model in which identical consumers face both search and switching costs. The equilibrium generates price dispersion. Moreover, since consumers must commit to a fixed sample of firms before prices are set, only two prices are considered before each purchase. This result does not depend on the size of the consumer's individual search cost.
Abstract:
Considerable dispersion can be observed in the prices that different commercial banks in Brazil charge for the same homogeneous package of services, a dispersion that persists over time. In an attempt to replicate this empirical observation, a simple model was developed that draws on the search-cost literature and also relies on consumer loyalty. Price data from the Brazilian banking sector are then applied to the model and several empirical exercises are carried out. These exercises allow: (i) the search costs incurred by consumers to be estimated, holding the values of the other parameters fixed; and (ii) the corresponding deadweight losses arising from those search costs to be estimated as well. When only 80% of the population is free to search for banks that charge lower fees, at a monthly interest rate of 0.5%, the estimated average search cost incurred by consumers reaches BRL 1,805.80, with the corresponding average deadweight loss on the order of BRL 233.71 per consumer.
Abstract:
This thesis develops and evaluates a business model for connected full electric vehicles (FEVs) for the European market. Despite a supportive political environment, various barriers have thus far prevented the FEV from becoming a mass-market vehicle. Besides cost, the most noteworthy of these barriers is range anxiety, a product of FEVs' limited range, the limited availability of charging infrastructure, and long recharging times. Connected FEVs, which maintain a constant connection to the surrounding infrastructure, appear to be a promising means of overcoming drivers' range anxiety. Yet their successful application requires a well-functioning FEV ecosystem, which can only be created through the collaboration of various stakeholders such as original equipment manufacturers (OEM), first tier suppliers (FTS), charging infrastructure and service providers (CISP), utilities, communication enablers, and governments. This thesis explores and evaluates what a business model jointly created by these stakeholders could look like, i.e. how stakeholders could collaborate in the design of products, services, infrastructure, and advanced mobility management to offer drivers a sensible value proposition that is at least equivalent to that of internal combustion engine (ICE) cars. It suggests that this value proposition will be an end-to-end package provided by CISPs or OEMs that comprises mobility packages (incl. pay-per-mile plans, battery leasing, and charging and battery swapping (BS) infrastructure) and FEVs equipped with an on-board unit (OBU), combined with additional services targeted at reducing range anxiety. From a theoretical point of view, the thesis answers the question of which business model framework is suitable for developing a holistic, i.e. all-stakeholder-comprising, business model for connected FEVs and defines such a business model. In doing so, the thesis provides the first comprehensive business-model-related research findings on connected FEVs, as prior work focused on the much less complex scenario featuring only “offline” FEVs.
Abstract:
This paper investigates the introduction of type dynamics into Laffont and Tirole's regulation model. The regulator and the firm are engaged in a two-period relationship governed by short-term contracts, where the regulator observes cost but cannot distinguish how much of the cost is due to effort on cost reduction and how much to the efficiency of the firm's technology, called its type. There is asymmetric information about the firm's type. Our model is developed in a framework in which the regulator learns from the firm's choice in the first period and uses that information to design the best second-period incentive scheme. The regulator is aware of the possibility of changes in types and takes that into account. We show how type dynamics build a bridge between commitment and non-commitment situations. In particular, the possibility of changing types mitigates the “ratchet effect”. We show that for a small degree of type dynamics the equilibrium exhibits separation and the welfare achieved is close to its upper bound (given by the commitment allocation).
Abstract:
This dissertation quantitatively analyses the costs of sovereign default for the economy, in a model where banks with long positions in government debt play a central role in the financial intermediation of private-sector investment and face financial frictions that limit their leverage ability. The calibration tries to resemble some features of the Eurozone, where discussions about bailout schemes and default risk have been central issues. Results show that the model captures one important cost of default pointed out by the empirical and theoretical literature on debt crises, namely the fall in investment that follows haircut episodes, which can be explained by a worsening of banks' balance-sheet conditions that limits credit to the private sector and raises their funding costs. The cost in terms of output loss is, however, not significant enough to justify the existence of debt markets and the government's incentives for debt repayment. Assuming that the government is able to relieve its constrained budget by imposing a restructuring of the debt repayment profile that allows it to cut taxes, our model generates an important difference in the output path between lump-sum and distortionary taxes. For our calibration, quantitative results show that, in terms of output and utility, it is possible that the labour-supply response generated by tax cuts dominates the investment drop caused by the credit crunch in financial markets. We abstract, however, from default costs associated with the breaking of existing contracts, external sanctions and risk spillovers between countries, which might also be relevant in addition to the financial-disruption effects. Moreover, there are considerable trade-offs between the short- and long-run paths of economic variables related to government and banks' behaviour.
Abstract:
This paper studies cost-sharing rules under dynamic adverse selection. We present a typical principal-agent model with two periods, set up in Laffont and Tirole's (1986) canonical regulation environment. When the contract is signed, the firm faces prior uncertainty about its efficiency parameter. In the second period, the firm learns its efficiency and chooses the level of cost-reducing effort. The optimal mechanism sequentially screens the firm's types and achieves a higher level of welfare than its static counterpart. The contract is indirectly implemented by a sequence of transfers, consisting of a fixed advance payment based on the reported cost estimate, and an ex-post compensation linear in cost performance.
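The indirect implementation described above (a fixed advance based on the reported cost estimate plus an ex-post compensation linear in cost performance) can be written compactly; the notation below is illustrative rather than the paper's own:

t \;=\; A(\hat{c}) \;+\; b(\hat{c})\,\big(\hat{c} - C\big),

where \hat{c} is the reported cost estimate, C the realised cost, A(\hat{c}) the advance payment and b(\hat{c}) the slope of the ex-post compensation. The slope gives the firm a stake in its cost performance, in the spirit of the linear cost-sharing rules of Laffont and Tirole (1986).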
Abstract:
The paper extends the cost-of-altruism model analyzed in Lisboa (1999). There are three types of agents: households, providers of a service, and insurance companies. Households face uncertainty about future levels of income. Providers, if hired by a household, have to choose a non-observable level of effort, perform a diagnosis and privately learn a signal. For each signal there is a procedure that maximizes the likelihood of the household obtaining the good state of nature. Finally, insurance companies offer contracts to both providers and households. The paper provides sufficient conditions for the existence of equilibrium and shows that the optimal contract induces providers to care about their income and also about the likelihood that households will obtain the good state of nature, which in Lisboa (1999) was stated as an altruism assumption. The equilibrium is inefficient in comparison with the standard moral-hazard outcome whenever high levels of effort are chosen, precisely because of the need to give providers an incentive to choose the least expensive treatment for some signals. We show, however, that an equilibrium is always constrained optimal.
Abstract:
I study the welfare cost of inflation and the effect on prices of a permanent increase in the interest rate. In the steady state, real money demand is homogeneous of degree one in income and its interest-rate elasticity is approximately equal to −1/2. Consumers are indifferent between an economy with 10% p.a. inflation and one with zero inflation if their income is 1% higher in the first economy. A permanent increase in the interest rate makes the price level drop initially and inflation adjust slowly to its steady-state level.
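A money demand that is homogeneous of degree one in income with an interest-rate elasticity of −1/2, as stated above, can be written in the standard form below; the consumer-surplus (Bailey-style) expression that follows is the textbook way of turning such a demand curve into a welfare cost, not necessarily the paper's exact derivation:

\frac{M}{P} \;=\; A\,Y\,i^{-1/2}, \qquad w(i) \;=\; \int_0^{i} A\,Y\,x^{-1/2}\,dx \;-\; i\,A\,Y\,i^{-1/2} \;=\; A\,Y\,\sqrt{i},

so the welfare cost as a share of income, w(i)/Y = A\sqrt{i}, grows with the square root of the nominal interest rate.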
Abstract:
This paper illustrates the use of the marginal cost of public funds (MCF) concept in three contexts. First, we extend Parry's (2003) analysis of the efficiency effects of excise taxes in the U.K., primarily by incorporating the distortion caused by imperfect competition in the cigarette market and distinguishing between the MCFs for per-unit and ad valorem taxes on cigarettes. Our computations show, contrary to the standard result in the literature, that the per-unit tax on cigarettes has a slightly lower MCF than the ad valorem tax on cigarettes. Second, we calculate the MCF for a payroll tax in a labour market with involuntary unemployment, using the Shapiro and Stiglitz (1984) efficiency-wage model as our framework. Our computations, based on Canadian labour market data, indicate that incorporating the distortion caused by involuntary unemployment raises the MCF by 25 to 50 percent. Third, we derive expressions for the distributionally weighted MCFs for the exemption level and the marginal tax rate of a “flat tax”, such as the one adopted by the province of Alberta. This allows us to develop a restricted but tractable version of the optimal income tax problem. Computations indicate that the optimal marginal tax rate may be quite high, even with relatively modest pro-poor distributional preferences.
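For reference, the simplest textbook expression behind MCF computations of this kind is the one for a single proportional tax on a base with a compensated elasticity; it is a generic illustration, not one of the specific formulas derived in the paper:

\mathrm{MCF} \;=\; \frac{1}{1 - \dfrac{t}{1-t}\,\varepsilon},

where t is the tax rate and \varepsilon the compensated elasticity of the taxed base with respect to its net-of-tax price. An MCF above one means that raising an additional dollar of revenue costs the private sector more than a dollar.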