980 results for cost-plus pricing
Abstract:
The purpose of this research was to estimate the cost-effectiveness of two rehabilitation interventions for breast cancer survivors, each compared to a population-based, non-intervention group (n = 208). The two services were an early home-based physiotherapy intervention (DAART, n = 36) and a group-based exercise and psychosocial intervention (STRETCH, n = 31). A societal perspective was taken, and costs included those incurred by the health-care system, the survivors and the community. Health outcomes included: (a) 'rehabilitated cases' based on changes in health-related quality of life between 6 and 12 months post-diagnosis, using the Functional Assessment of Cancer Therapy - Breast Cancer plus Arm Morbidity (FACT-B+4) questionnaire, and (b) quality-adjusted life years (QALYs) using utility scores from the Subjective Health Estimation (SHE) scale. Data were collected using self-reported questionnaires, medical records and program budgets. A Monte-Carlo modelling approach was used to test for uncertainty in cost and outcome estimates. The proportion of rehabilitated cases was similar across the three groups. From a societal perspective, compared with the non-intervention group, the DAART intervention appeared to be the most efficient option, with an incremental cost of $1344 per QALY gained, whereas the incremental cost per QALY gained from the STRETCH program was $14,478. Both DAART and STRETCH are low-cost, low-technology health-promoting programs and represent excellent public health investments.
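The cost-per-QALY figures above are incremental cost-effectiveness ratios (ICERs). A minimal sketch of how an ICER and a Monte-Carlo uncertainty interval for it might be computed; all numbers and distributions below are hypothetical illustrations, not values from the study:

```python
import random

def icer(cost_new, cost_base, qaly_new, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Point estimate from hypothetical mean costs and QALYs
point = icer(2500.0, 1800.0, 0.95, 0.43)

# Monte-Carlo: resample costs and outcomes from assumed normal
# distributions and read off a 95% uncertainty interval for the ICER.
random.seed(0)
samples = sorted(
    icer(random.gauss(2500.0, 300.0), random.gauss(1800.0, 250.0),
         random.gauss(0.95, 0.05), random.gauss(0.43, 0.05))
    for _ in range(10_000)
)
lo, hi = samples[250], samples[9_749]   # 2.5th and 97.5th percentiles
```

In practice the sampling distributions would come from the observed cost and utility data rather than assumed normals.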
Abstract:
The paper presents a spreadsheet-based multiple account framework for cost-benefit analysis which incorporates all the usual concerns of cost-benefit analysts, such as shadow-pricing to account for market failure, distribution of net benefits, sensitivity and risk analysis, the cost of public funds, and environmental effects. The approach is generalizable to a wide range of projects and situations and offers a number of advantages to both analysts and decision-makers, including transparency, a check on internal consistency, and a detailed summary of project net benefits disaggregated by stakeholder group. Of particular importance is the ease with which this framework allows a project to be evaluated from alternative decision-making perspectives and under alternative policy scenarios where the trade-offs among the project's stakeholders can readily be identified and quantified.
Abstract:
This paper proposes a transmission and wheeling pricing method based on the monetary flow tracing along power flow paths: the monetary flow-monetary path method. Active and reactive power flows are converted into monetary flows by using nodal prices. The method introduces a uniform measurement for transmission service usages by active and reactive powers. Because monetary flows are related to the nodal prices, the impacts of generators and loads on operation constraints and the interactive impacts between active and reactive powers can be considered. Total transmission service cost is separated into more practical line-related costs and system-wide cost, and can be flexibly distributed between generators and loads. The method is able to reconcile transmission service cost fairly and to optimize transmission system operation and development. The case study on the IEEE 30 bus test system shows that the proposed pricing method is effective in creating economic signals towards the efficient use and operation of the transmission system.
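The core conversion the method describes, pricing active and reactive flows into a single monetary flow and sharing line costs in proportion to it, might be sketched as follows. The prices and flows are hypothetical illustrations, not values from the IEEE 30-bus case:

```python
def monetary_flow(p_mw, q_mvar, price_p, price_q):
    """Convert a line's active and reactive power flows into a single
    monetary flow ($/h) using the nodal prices of each service."""
    return p_mw * price_p + q_mvar * price_q

def allocate_line_cost(line_cost, monetary_flows):
    """Share a line's service cost among users in proportion to the
    monetary flow each user sends over it."""
    total = sum(monetary_flows.values())
    return {user: line_cost * mf / total for user, mf in monetary_flows.items()}

# Hypothetical line: generator G1 sends 50 MW at $30/MWh plus 10 MVAr
# at $2/MVArh; G2 contributes a smaller monetary flow.
flows = {"G1": monetary_flow(50.0, 10.0, 30.0, 2.0), "G2": 480.0}
shares = allocate_line_cost(100.0, flows)   # G1 pays 76.0, G2 pays 24.0
```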
Abstract:
This thesis investigates the pricing-to-market (PTM) behaviour of the UK export sector. Unlike previous studies, this study econometrically tests for seasonal unit roots in the export prices prior to estimating PTM behaviour; prior studies have seasonally adjusted the data automatically. The results show that monthly export prices contain few seasonal unit roots, implying that there is a loss of information about the data generating process of the series when PTM is estimated using seasonally-adjusted data. Prior studies have also ignored the econometric properties of the data despite the existence of autoregressive conditional heteroscedasticity (ARCH) effects in such data; the standard approach has been to estimate PTM models using Ordinary Least Squares (OLS). For this reason, both EGARCH and GJR-EGARCH (hereafter GJR) estimation methods are used to estimate both a standard and an Error Correction Model (ECM) of PTM. The results indicate that PTM behaviour varies across UK sectors. The variables used in the PTM models are co-integrated, and an ECM is a valid representation of pricing behaviour. The study also finds that price adjustment is slower when the analysis is performed on real prices, i.e., data adjusted for inflation. There is strong evidence of ARCH effects, meaning that the PTM parameters of prior studies have been inefficiently estimated. Surprisingly, there is very little evidence of asymmetry, which suggests that exporters appear to PTM at a relatively constant rate. This finding might also explain the failure of prior studies to find evidence of asymmetric exposure to foreign exchange (FX) rates. This study also provides a cross-sectional analysis to explain the implications for observed PTM of producers' marginal cost, market share and product differentiation. The cross-sectional regressions are estimated using OLS, Generalised Method of Moments (GMM) and Logit estimations.
Overall, the results suggest that market share affects PTM positively: exporters with smaller market shares are more likely to practise PTM. Product differentiation, by contrast, is negatively associated with PTM, so industries with highly differentiated products are less likely to adjust their prices. Marginal costs, however, do not appear to be significantly associated with PTM. Exporters practise PTM to limit the pass-through of FX rate effects to their foreign customers, but they also avoid exploiting PTM to the full, since doing so can substantially reduce their profits.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in the cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models can be compared with the increases in computational costs. Since the exact model proved preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma-distributed. Lastly, the actual supply quantity is itself allowed to follow a distribution.
All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
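As a rough illustration of the kind of cost expression minimised for the (Q,R) system, here is the standard approximate annual cost under normally distributed lead-time demand with a linear backorder cost. The symbols and parameter roles are generic textbook assumptions, not the thesis's exact models:

```python
import math

def normal_loss(z):
    """Standard normal loss function L(z) = phi(z) - z * (1 - Phi(z))."""
    phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi - z * (1.0 - Phi)

def qr_annual_cost(D, Q, R, K, h, p, mu_L, sigma_L):
    """Approximate annual cost of a (Q,R) policy: ordering + holding +
    linear backorder penalty, with normal lead-time demand.
    D: annual demand, K: fixed order cost, h: holding cost/unit/year,
    p: backorder cost/unit, mu_L/sigma_L: lead-time demand mean/sd."""
    z = (R - mu_L) / sigma_L
    expected_shortage = sigma_L * normal_loss(z)     # units short per cycle
    ordering = K * D / Q
    holding = h * (Q / 2.0 + R - mu_L)               # cycle + safety stock
    shortage = p * expected_shortage * D / Q
    return ordering + holding + shortage
```

Minimising this expression over Q and R gives the policy parameters; the thesis's exact models replace these approximations with exact expectations.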
Abstract:
Background: Screening for congenital heart defects (CHDs) relies on antenatal ultrasound and postnatal clinical examination; however, life-threatening defects often go undetected. Objective: To determine the accuracy, acceptability and cost-effectiveness of pulse oximetry as a screening test for CHDs in newborn infants. Design: A test accuracy study determined the accuracy of pulse oximetry. Acceptability of testing to parents was evaluated through a questionnaire, and to staff through focus groups. A decision-analytic model was constructed to assess cost-effectiveness. Setting: Six UK maternity units. Participants: These were 20,055 asymptomatic newborns at ≥ 35 weeks’ gestation, their mothers and health-care staff. Interventions: Pulse oximetry was performed prior to discharge from hospital and the results of this index test were compared with a composite reference standard (echocardiography, clinical follow-up and follow-up through interrogation of clinical databases). Main outcome measures: Detection of major CHDs – defined as causing death or requiring invasive intervention up to 12 months of age (subdivided into critical CHDs causing death or intervention before 28 days, and serious CHDs causing death or intervention between 1 and 12 months of age); acceptability of testing to parents and staff; and the cost-effectiveness in terms of cost per timely diagnosis. Results: Fifty-three of the 20,055 babies screened had a major CHD (24 critical and 29 serious), a prevalence of 2.6 per 1000 live births. Pulse oximetry had a sensitivity of 75.0% [95% confidence interval (CI) 53.3% to 90.2%] for critical cases and 49.1% (95% CI 35.1% to 63.2%) for all major CHDs. When 23 cases were excluded, in which a CHD was already suspected following antenatal ultrasound, pulse oximetry had a sensitivity of 58.3% (95% CI 27.7% to 84.8%) for critical cases (12 babies) and 28.6% (95% CI 14.6% to 46.3%) for all major CHDs (35 babies).
False-positive (FP) results occurred in 1 in 119 babies (0.84%) without major CHDs (specificity 99.2%, 95% CI 99.0% to 99.3%). However, of the 169 FPs, there were six cases of significant but not major CHDs and 40 cases of respiratory or infective illness requiring medical intervention. The prevalence of major CHDs in babies with normal pulse oximetry was 1.4 (95% CI 0.9 to 2.0) per 1000 live births, as 27 babies with major CHDs (6 critical and 21 serious) were missed. Parent and staff participants were predominantly satisfied with screening, perceiving it as an important test to detect ill babies. There was no evidence that mothers given FP results were more anxious after participating than those given true-negative results, although they were less satisfied with the test. White British/Irish mothers were more likely to participate in the study, and were less anxious and more satisfied than those of other ethnicities. The incremental cost-effectiveness ratio of pulse oximetry plus clinical examination compared with examination alone is approximately £24,900 per timely diagnosis in a population in which antenatal screening for CHDs already exists. Conclusions: Pulse oximetry is a simple, safe, feasible test that is acceptable to parents and staff and adds value to existing screening. It is likely to identify cases of critical CHDs that would otherwise go undetected. It is also likely to be cost-effective given current acceptable thresholds. The detection of other pathologies, such as significant CHDs and respiratory and infective illnesses, is an additional advantage. Other pulse oximetry techniques, such as perfusion index, may enhance detection of aortic obstructive lesions.
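The headline accuracy figures can be reproduced from the reported counts. A small sketch, with the true-positive and false-negative counts inferred from the abstract's sensitivity and prevalence figures (26 of 53 major CHDs detected):

```python
def screening_accuracy(tp, fn, fp, tn):
    """Sensitivity and specificity from screening outcome counts."""
    return tp / (tp + fn), tn / (tn + fp)

# 53 babies had major CHDs, of whom 26 screened positive (27 were missed);
# 169 false positives occurred among the 20,002 babies without major CHDs.
sens, spec = screening_accuracy(tp=26, fn=27, fp=169, tn=20_002 - 169)
# sens ~ 0.491 and spec ~ 0.992, matching the reported 49.1% and 99.2%
```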
Abstract:
This paper analyzes a communication network facing users with a continuous distribution of delay cost per unit time. Priority queueing is often used as a way to provide differentiated services for users with different delay sensitivities. Delay is a key dimension of network service quality, so priority is a valuable but limited resource that should be optimally allocated. We investigate the allocation of priority in queues via a simple bidding mechanism. In our mechanism, arriving users can decide not to enter the network at all, or can submit a bid announcing their delay-sensitivity value. A user entering the network obtains priority over all users who make lower bids, and is charged according to a payment function designed on an exclusion-compensation principle. The payment function is proved to be incentive compatible, so the equilibrium bidding behavior leads to an implementation of the "cµ-rule". Maximizing social welfare or revenue by appropriately setting the reserve payment is also analyzed.
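The "cµ-rule" the mechanism implements serves, at each decision point, the waiting class with the largest product of delay cost rate c and service rate µ. A minimal sketch with hypothetical jobs:

```python
def c_mu_order(jobs):
    """cµ-rule: serve jobs in decreasing order of (delay cost rate c)
    times (service rate mu), i.e. urgent-and-quick work first."""
    return sorted(jobs, key=lambda j: j["c"] * j["mu"], reverse=True)

jobs = [
    {"id": "A", "c": 1.0, "mu": 2.0},   # c*mu = 2.0
    {"id": "B", "c": 3.0, "mu": 1.0},   # c*mu = 3.0
    {"id": "C", "c": 0.5, "mu": 5.0},   # c*mu = 2.5
]
order = [j["id"] for j in c_mu_order(jobs)]   # ['B', 'C', 'A']
```

In the paper's mechanism, each c is not observed directly but revealed through the user's bid, which the incentive-compatible payment function makes truthful.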
Abstract:
After the ten Regional Water Authorities (RWAs) of England and Wales were privatized in November 1989, the successor Water and Sewerage Companies (WASCs) faced a new regulatory regime that was designed to promote economic efficiency while simultaneously improving drinking water and environmental quality. As legally mandated quality improvements necessitated a costly capital investment programme, the industry's economic regulator, the Office of Water Services (Ofwat), implemented a retail price index (RPI)+K pricing system, which was designed to compensate the WASCs for their capital investment programme while also encouraging gains in economic efficiency. In order to analyse jointly the impact of privatization, as well as the impact of increasingly stringent economic and environmental regulation on the WASCs' economic performance, this paper estimates a translog multiple output cost function model for the period 1985–1999. Given the significant costs associated with water quality improvements, the model is augmented to include the impact of drinking water quality and environmental quality on total costs. The model is then employed to determine the extent of scale and scope economies in the water and sewerage industry, as well as the impact of privatization and economic regulation on economic efficiency.
Abstract:
Carbon pricing policy is a fundamental theoretical and practical cornerstone of the fight against climate change, involving both short-term and long-term policies. A quantitative global stabilisation target range for the stock of greenhouse gases in the atmosphere is needed, because it is an important and useful foundation for shaping a comprehensive climate pricing policy. A global stabilisation target range is a long-term policy for controlling climate change and the events that ensue from excessive increases in temperature. Setting long-term objectives in the fight against climate change is essential to avoiding catastrophic consequences; short-term policies, which aim at advances in emission reduction, therefore have to be consistent with the pre-defined long-term stabilisation goals. Short-term policy means using price-driven instruments such as taxes and tradable quotas. These instruments allow broad flexibility in the parameters of emission reduction, and provide opportunities and incentives by which the costs of mitigation and abatement can be kept down. Because taxes and tradable quotas give flexibility in how, where and when emission reduction is accomplished, agreements reached between states and companies may result in an appropriate, environment-conscious emission scheme that fits the long-term objectives.
Abstract:
For the last three decades, the Capital Asset Pricing Model (CAPM) has been the dominant model for calculating expected return. In the early 1990s, Fama and French (1992) developed the Fama and French three-factor model by adding two additional factors to the CAPM. However, even with these models, estimates of the expected return have been found to be inaccurate (Elton, 1999; Fama & French, 1997). Botosan (1997) introduced a new approach to estimating the expected return. This approach employs an equity valuation model to calculate the internal rate of return (IRR), often called the 'implied cost of equity capital', as a proxy for the expected return, and it has been gaining popularity among researchers. A critical review of the literature will help inform hospitality researchers regarding the issue and encourage them to implement the new approach in their own studies.
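The implied cost of equity is the discount rate that equates today's price with the present value of forecast payoffs. A sketch using a simple terminal-growth valuation and bisection root-finding; the cash flows and growth rate are hypothetical, and the literature uses specific equity valuation models (e.g. residual income) rather than this generic form:

```python
def implied_cost_of_equity(price, cash_flows, terminal_growth):
    """Bisection for the discount rate r that equates price with the
    present value of forecast cash flows plus a Gordon terminal value."""
    def pv(r):
        v = sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows, 1))
        terminal = cash_flows[-1] * (1.0 + terminal_growth) / (r - terminal_growth)
        return v + terminal / (1.0 + r) ** len(cash_flows)

    lo, hi = terminal_growth + 1e-4, 1.0   # pv(r) is decreasing on this range
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if pv(mid) > price:
            lo = mid        # value too high -> implied rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical: price 60.33, three years of $5 flows, 2% terminal growth
r = implied_cost_of_equity(60.33, [5.0, 5.0, 5.0], 0.02)   # ~0.10 (10%)
```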
Abstract:
The purpose of this paper is to compare prices for a popular quick-service restaurant chain (i.e. McDonald's) across countries throughout the world using the "Big Mac Index" published by The Economist. The index was originally developed to measure the valuation of international currencies against the U.S. dollar. The analysis in this study examines the relationship between the price of a Big Mac and other variables such as the cost of beef, price elasticity, and income. Finally, these relationships are reviewed to draw inferences concerning the use of demand, costs, and competition in setting prices.
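The index's arithmetic is straightforward: the implied purchasing-power-parity rate is the ratio of the local to the U.S. Big Mac price, and comparing it with the actual exchange rate gives the currency's over- or under-valuation. A sketch with hypothetical numbers:

```python
def big_mac_valuation(local_price, us_price, exchange_rate):
    """Implied PPP rate and over/under-valuation of the local currency.
    exchange_rate is local currency units per U.S. dollar."""
    implied_ppp = local_price / us_price
    valuation = (implied_ppp - exchange_rate) / exchange_rate
    return implied_ppp, valuation

# Hypothetical: a Big Mac costs 390 yen locally and $5.00 in the U.S.,
# with an actual rate of 130 yen per dollar.
ppp, val = big_mac_valuation(390.0, 5.0, 130.0)
# ppp = 78.0 yen/$; val = -0.4, i.e. the yen is 40% undervalued
```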
Abstract:
This paper presents a study on the implementation of Real-Time Pricing (RTP) based Demand Side Management (DSM) of water pumping at a clean water pumping station in Northern Ireland, with the intention of minimising electricity costs and maximising the usage of electricity from wind generation. A Genetic Algorithm (GA) was used to create pumping schedules based on system constraints and electricity tariff scenarios. Implementation of this method would allow the water network operator to make significant savings on electricity costs while also helping to mitigate the variability of wind generation.
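A GA for this kind of scheduling problem typically encodes a day's pumping as one on/off bit per hour and evolves the bit string against tariff costs and operating constraints. A minimal sketch; the tariff profile, pump size and demand constraint are hypothetical, not the paper's data:

```python
import random

random.seed(1)
HOURS = 24
TARIFF = [0.05 if h < 7 else 0.15 for h in range(HOURS)]  # $/kWh RTP profile
DEMAND = 10          # required pumping hours per day (reservoir constraint)
PUMP_KW = 200.0

def cost(schedule):
    """Electricity cost of an hourly on/off schedule, with a heavy
    penalty when the daily pumping requirement is not met exactly."""
    energy = sum(PUMP_KW * TARIFF[h] for h in range(HOURS) if schedule[h])
    return energy + 1000.0 * abs(sum(schedule) - DEMAND)

def evolve(pop_size=40, generations=200):
    """Tiny GA: elitist selection, one-point crossover, one-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(HOURS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, HOURS)
            child = a[:cut] + b[cut:]
            flip = random.randrange(HOURS)
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()   # tends to pack pumping into the cheap 00:00-07:00 hours
```

A wind-following objective, as in the paper, would add a term rewarding pumping in hours of forecast wind surplus.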
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08