855 results for Travel Cost Method
Abstract:
Objective. To determine the cost-effectiveness of averting the burden of disease. We used secondary population data and meta-analyses of various government-funded services and interventions to investigate the costs and benefits of various levels of treatment for rheumatoid arthritis (RA) and osteoarthritis (OA) in adults using a burden of disease framework. Method. Population burden was calculated for both diseases in the absence of any treatment as years lived with disability (YLD), ignoring the years of life lost. We then estimated the proportion of burden averted with current interventions, the proportion that could be averted with optimally implemented current evidence-based guidelines, and the direct treatment cost-effectiveness ratio in dollars per YLD averted for both treatment levels. Results. The majority of people with arthritis sought medical treatment. Current treatment for RA averted 26% of the burden, with a cost-effectiveness ratio of $19,000 per YLD averted. Optimal, evidence-based treatment would avert 48% of the burden, with a cost-effectiveness ratio of $12,000 per YLD averted. Current treatment of OA in Australia averted 27% of the burden, with a cost-effectiveness ratio of $25,000 per YLD averted. Optimal, evidence-based treatment would avert 39% of the burden, with an unchanged cost-effectiveness ratio of $25,000 per YLD averted. Conclusion. While the precise dollar costs in each country will differ, the relativities at this level of coverage should remain the same. There is no evidence that closing the gap between evidence and practice would result in a drop in efficiency.
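The cost-effectiveness ratio used in this abstract is simply the direct treatment cost divided by the YLDs averted. A minimal sketch, with hypothetical burden and spending totals chosen only to reproduce the quoted $19,000 per YLD figure (they are not taken from the study):

```python
def cost_per_yld_averted(total_cost, total_burden_yld, fraction_averted):
    """Direct treatment cost divided by the years lived with
    disability (YLD) averted by treatment."""
    yld_averted = total_burden_yld * fraction_averted
    return total_cost / yld_averted

# Hypothetical figures: a total burden of 100,000 YLD, of which 26% is
# averted at a direct cost of $494 million.
print(round(cost_per_yld_averted(494_000_000, 100_000, 0.26)))  # 19000
```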
Abstract:
This economic evaluation was part of the Australian National Evaluation of Pharmacotherapies for Opioid Dependence (NEPOD) project. Data from four trials of heroin detoxification methods, involving 365 participants, were pooled to enable a comprehensive comparison of the cost-effectiveness of five inpatient and outpatient detoxification methods. This study took the perspective of the treatment provider in assessing resource use and costs. Two short-term outcome measures were used: achievement of an initial 7-day period of abstinence, and entry into ongoing post-detoxification treatment. The mean costs per episode of the various detoxification methods ranged widely: AUD $491 for buprenorphine-based outpatient detoxification; AUD $605 for conventional outpatient; AUD $1404 for conventional inpatient; AUD $1990 for rapid detoxification under sedation; and AUD $2689 for detoxification under anaesthesia. An incremental cost-effectiveness analysis was carried out using conventional outpatient detoxification as the base comparator. The buprenorphine-based outpatient detoxification method was found to be the most cost-effective method overall, and rapid opioid detoxification under sedation was the most cost-effective inpatient method.
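An incremental cost-effectiveness analysis compares each method against the base comparator as extra cost per extra unit of outcome. A sketch using the per-episode costs above; the abstinence rates are invented purely for illustration (the trial outcomes are not given in the abstract):

```python
def icer(cost_new, effect_new, cost_base, effect_base):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect relative to the base comparator."""
    return (cost_new - cost_base) / (effect_new - effect_base)

# Per-episode costs (AUD) from the abstract; abstinence rates are hypothetical.
conventional_outpatient = (605, 0.20)   # base comparator
buprenorphine_outpatient = (491, 0.25)

ratio = icer(*buprenorphine_outpatient, *conventional_outpatient)
print(round(ratio))  # negative ICER: cheaper AND more effective, i.e. dominant
```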
Abstract:
A number of systematic conservation planning tools are available to aid in making land use decisions. Given the increasing worldwide use and application of reserve design tools, including measures of site irreplaceability, it is essential that methodological differences and their potential effect on conservation planning outcomes are understood. We compared the irreplaceability of sites for protecting ecosystems within the Brigalow Belt Bioregion, Queensland, Australia, using two alternative reserve system design tools, Marxan and C-Plan. We set Marxan to generate multiple reserve systems that met targets with minimal area; the first scenario ignored spatial objectives, while the second selected compact groups of areas. For each scenario, Marxan calculates the irreplaceability of a site as the proportion of solutions in which it occurs. In contrast, C-Plan uses a statistical estimate of irreplaceability as the likelihood that each site is needed in all combinations of sites that satisfy the targets. We found that sites containing rare ecosystems are almost always irreplaceable regardless of the method. Importantly, Marxan and C-Plan gave similar outcomes when spatial objectives were ignored. Marxan with a compactness objective defined twice as much area as irreplaceable, including many sites with relatively common ecosystems. However, targets for all ecosystems were met using a similar amount of area in C-Plan and Marxan, even with compactness. The importance of differences in the outcomes of using the two methods will depend on the question being addressed; in general, the use of two or more complementary tools is beneficial.
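Marxan's selection-frequency measure described above can be sketched in a few lines: run the optimiser many times and record the fraction of solutions that contain each site. The toy data below are invented solely to show the calculation:

```python
from collections import Counter

def selection_frequency(solutions):
    """Fraction of reserve-system solutions in which each site occurs;
    a value of 1.0 marks the site as irreplaceable in Marxan's sense."""
    counts = Counter(site for solution in solutions for site in solution)
    return {site: n / len(solutions) for site, n in counts.items()}

# Four alternative reserve systems that all meet the targets (toy data).
solutions = [{"A", "B"}, {"A", "C"}, {"A", "B", "D"}, {"A", "C"}]
freq = selection_frequency(solutions)
print(freq["A"])  # 1.0: selected in every solution, hence irreplaceable
print(freq["D"])  # 0.25: rarely needed
```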
Abstract:
A new methodology is proposed for the analysis of generation capacity investment in a deregulated market environment. This methodology proposes to make the investment appraisal using a probabilistic framework. The probabilistic production simulation (PPC) algorithm is used to compute the expected energy generated, taking into account system load variations and plant forced outage rates, while the Monte Carlo approach has been applied to model the electricity price variability seen in a realistic network. The model is able to capture the price and hence the profitability uncertainties for generator companies. Seasonal variation in the electricity prices and the system demand are independently modeled. The method is validated on the IEEE RTS system, augmented with realistic market and plant data, by using it to compare the financial viability of several generator investments applying either conventional or directly connected generator (powerformer) technologies. The significance of the results is assessed using several financial risk measures.
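The pricing side of such an appraisal can be sketched with a simple Monte Carlo loop: sample an electricity price, compute annual profit, and summarise the distribution with an expected value and a risk measure. All plant figures below are invented, and the paper's PPC-based energy calculation is replaced by a fixed expected energy for brevity:

```python
import random
import statistics

def simulate_profits(n_sims, energy_mwh, price_mean, price_sd,
                     variable_cost, fixed_cost, seed=0):
    """Monte Carlo sketch: sample a yearly average electricity price
    (normal, floored at zero) and return the annual profit distribution."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_sims):
        price = max(0.0, rng.gauss(price_mean, price_sd))
        profits.append(energy_mwh * (price - variable_cost) - fixed_cost)
    return profits

# Hypothetical generator: 500 GWh/year, $45/MWh mean price, $30/MWh fuel cost.
profits = simulate_profits(10_000, energy_mwh=500_000, price_mean=45.0,
                           price_sd=10.0, variable_cost=30.0,
                           fixed_cost=5_000_000)
expected_profit = statistics.mean(profits)
prob_of_loss = sum(p < 0 for p in profits) / len(profits)  # a simple risk measure
```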
Abstract:
Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone condition. The objective of this work was to compare the Tropical Rapid Appraisal of Riparian Condition (TRARC) method to a satellite image-based approach. TRARC was developed for rapid assessment of the environmental condition of savanna riparian zones. The comparison assessed mapping accuracy, representativeness of TRARC assessment, cost-effectiveness, and suitability for multi-temporal analysis. Two multi-spectral QuickBird images captured in 2004 and 2005 and coincident field data covering sections of the Daly River in the Northern Territory, Australia were used in this work. Both field and image data were processed to map riparian health indicators (RHIs) including percentage canopy cover, organic litter, canopy continuity, stream bank stability, and extent of tree clearing. Spectral vegetation indices, image segmentation and supervised classification were used to produce RHI maps. QuickBird image data were used to examine whether the spatial distribution of TRARC transects provided a representative sample of ground-based RHI measurements. Results showed that TRARC transects were required to cover at least 3% of the study area to obtain a representative sample. The mapping accuracy and costs of the image-based approach were compared to those of the ground-based TRARC approach. Results showed that TRARC was more cost-effective at smaller scales (1–100 km), while image-based assessment becomes more feasible at regional scales (100–1000 km). Finally, the ability to use both the image and field-based approaches for multi-temporal analysis of RHIs was assessed. Change detection analysis demonstrated that image data can provide detailed information on gradual change, while the TRARC method was only able to identify changes at a grosser scale.
In conclusion, results from both methods were considered to complement each other if used at appropriate spatial scales.
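A typical spectral vegetation index used in such image-based mapping is the NDVI, computed per pixel from the red and near-infrared bands. The reflectance values below are invented for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: higher values indicate
    denser green vegetation (e.g. continuous riparian canopy)."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances: dense canopy vs a cleared or eroded bank.
print(round(ndvi(0.45, 0.05), 2))  # 0.8: dense vegetation
print(round(ndvi(0.20, 0.15), 2))  # 0.14: sparse cover
```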
Abstract:
In developed countries travel time savings can account for as much as 80% of the overall benefits arising from transport infrastructure and service improvements. In developing countries they are generally ignored in transport project appraisals, notwithstanding their importance. One reason for ignoring these benefits in developing countries is that there is insufficient empirical evidence to support the conventional models for valuing travel time where work patterns, particularly of the poor, are diverse and it is difficult to distinguish between work and non-work activities. The exclusion of time saving benefits may lead to a bias against investment decisions that benefit the poor and understate the poverty reduction potential of transport investments in Least Developed Countries (LDCs). This is because the poor undertake most travel and transport by walking and headloading on local roads, tracks and paths, and improvements to local infrastructure and services bring them large time saving benefits through modal shifts. The paper reports on an empirical study to develop a methodology for valuing rural travel time savings in the LDCs. Apart from identifying the theoretical and empirical issues in valuing travel time savings in the LDCs, the paper presents and discusses the results of an analysis of data from Bangladesh. Some of the study findings challenge the conventional wisdom concerning time saving values. The Bangladesh study suggests that the western concept of dividing travel time savings into working and non-working time savings is broadly valid in the developing country context. The study validates the use of preference methods in valuing non-working time savings; however, the stated preference (SP) method is more appropriate than the revealed preference (RP) method.
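The conventional valuation the paper builds on can be reduced to a simple weighted sum: working time valued at the wage rate and non-working time at a fraction of it. Both the fraction and all the numbers below are assumptions for illustration; in practice the values would come from the stated-preference surveys the study recommends:

```python
def time_saving_benefit(work_hours_saved, nonwork_hours_saved,
                        wage_rate, nonwork_fraction=0.3):
    """Appraisal sketch: annual benefit of travel time savings, with
    working time valued at the wage rate and non-working time at an
    assumed fraction of it."""
    return (work_hours_saved * wage_rate
            + nonwork_hours_saved * nonwork_fraction * wage_rate)

# Hypothetical rural scheme: 10,000 working and 40,000 non-working hours
# saved per year, at a wage of $0.50/hour.
print(round(time_saving_benefit(10_000, 40_000, 0.50)))  # 11000
```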
Abstract:
This paper presents a new method for the optimisation of the mirror element spacing arrangement and operating temperature of linear Fresnel reflectors (LFR). The specific objective is to maximise available power output (i.e. exergy) and operational hours whilst minimising cost. The method is described in detail and compared to an existing design method prominent in the literature. Results are given in terms of the exergy per total mirror area (W/m2) and cost per exergy (US $/W). The new method is applied principally to the optimisation of an LFR in Gujarat, India, for which cost data have been gathered. It is recommended to use a spacing arrangement such that the onset of shadowing among mirror elements occurs at a transversal angle of 45°. This results in a cost per exergy of 2.3 $/W. Compared to the existing design approach, the exergy averaged over the year is increased by 9% to 50 W/m2 and an additional 122 h of operation per year are predicted. The ideal operating temperature at the surface of the absorber tubes is found to be 300 °C. It is concluded that the new method is an improvement over existing techniques and a significant tool for any future design work on LFR systems.
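The two figures of merit used here, exergy per mirror area and cost per exergy, are simple ratios. A sketch with a hypothetical plant size and capital cost chosen only to reproduce the quoted 50 W/m2 and 2.3 $/W:

```python
def exergy_per_area(exergy_w, mirror_area_m2):
    """Year-averaged available power output per unit mirror area (W/m2)."""
    return exergy_w / mirror_area_m2

def cost_per_exergy(total_cost_usd, exergy_w):
    """Capital cost per unit of available power output (US $/W)."""
    return total_cost_usd / exergy_w

# Hypothetical LFR plant: 10,000 m2 of mirrors yielding 500 kW of exergy
# on average, at an assumed capital cost of $1.15 million.
print(exergy_per_area(500_000, 10_000))     # 50.0 W/m2
print(cost_per_exergy(1_150_000, 500_000))  # 2.3 $/W
```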
Abstract:
OBJECTIVES: To assess whether blood pressure control in primary care could be improved with the use of patient held targets and self monitoring in a practice setting, and to assess the impact of these on health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences, and costs. DESIGN: Randomised controlled trial. SETTING: Eight general practices in south Birmingham. PARTICIPANTS: 441 people receiving treatment in primary care for hypertension but not controlled below the target of < 140/85 mm Hg. INTERVENTIONS: Patients in the intervention group received treatment targets along with facilities to measure their own blood pressure at their general practice; they were also asked to visit their general practitioner or practice nurse if their blood pressure was repeatedly above the target level. Patients in the control group received usual care (blood pressure monitoring by their practice). MAIN OUTCOME MEASURES: Primary outcome: change in systolic blood pressure at six months and one year in both intervention and control groups. Secondary outcomes: change in health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences of method of blood pressure monitoring, and costs. RESULTS: 400 (91%) patients attended follow up at one year. Systolic blood pressure in the intervention group had significantly reduced after six months (mean difference 4.3 mm Hg (95% confidence interval 0.8 mm Hg to 7.9 mm Hg)) but not after one year (mean difference 2.7 mm Hg (- 1.2 mm Hg to 6.6 mm Hg)). No overall difference was found in diastolic blood pressure, anxiety, health behaviours, or number of prescribed drugs. Patients who self monitored lost more weight than controls (as evidenced by a drop in body mass index), rated self monitoring above monitoring by a doctor or nurse, and consulted less often. 
Overall, self monitoring did not cost significantly more than usual care (251 pounds sterling (437 dollars; 364 euros) (95% confidence interval 233 pounds sterling to 275 pounds sterling) versus 240 pounds sterling (217 pounds sterling to 263 pounds sterling)). CONCLUSIONS: Practice based self monitoring resulted in small but significant improvements of blood pressure at six months, which were not sustained after a year. Self monitoring was well received by patients, anxiety did not increase, and there was no appreciable additional cost. Practice based self monitoring is feasible and results in blood pressure control that is similar to that in usual care.
Abstract:
Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques in use were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson Systems Development (JSD). As current cost estimating methods do not take account of these developments, their non-universality means they cannot provide adequate estimates of effort and hence cost. In order to address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other method, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification as opposed to the largely data-oriented view traditionally used by FPA. The method uses counts of various attributes of a JSD specification to develop a metric which provides an indication of the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and utilising this figure to predict the effort and hence cost of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
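The productivity-based step described above (size metric divided by past productivity to yield effort) can be sketched directly. All figures below are hypothetical; the abstract does not give the actual project data:

```python
def estimate_effort(new_size, past_sizes, past_efforts):
    """Top-down sketch: derive productivity (size units per person-month)
    from completed projects, then scale the new project's size metric."""
    productivity = sum(past_sizes) / sum(past_efforts)
    return new_size / productivity

# Hypothetical figures: sizes in JSD-FPA-style function points,
# efforts in person-months, for three past projects.
effort = estimate_effort(300, past_sizes=[200, 400, 600],
                         past_efforts=[20, 40, 60])
print(effort)  # 30.0 person-months
```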
Abstract:
The work described in the following pages was carried out at various sites in the Rod Division of the Delta Metal Company. Extensive variation in the level of activity in the industry during the years 1974 to 1975 had led to certain inadequacies being observed in the traditional cost control procedure. In an attempt to remedy this situation it was suggested that a method be found of constructing a system to improve the flexibility of cost control procedures. The work involved an assimilation of the industrial and financial environment via pilot studies, which later proved invaluable in homing in on the really interesting and important areas. Weaknesses in the current systems which came to light made the methodology of data collection and the improvement of cost control and profit planning procedures easier to adopt. Because the project required an investigation of the implications of cost behaviour for profit planning and control, the next stage of the research work was to utilise the on-site experience to examine the nature of cost behaviour at a detailed level. The analysis of factory costs then showed that certain costs, which were the most significant, exhibited a stable relationship with respect to some known variable, usually a specific measure of output. These costs were then formulated in a cost model, to establish accurate standards in a complex industrial setting and so provide a meaningful comparison against which to judge actual performance. The necessity of a cost model was reinforced by the fact that the cost behaviour found to exist was, in the main, a step function, a complexity which the traditional cost and profit planning procedures could not incorporate. Already implemented from this work is the establishment of the post of information officer to co-ordinate data collection and information provision.
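The step-function cost behaviour mentioned above contrasts with the linear standards of traditional costing: cost jumps each time output crosses a capacity threshold. A toy illustration, with all numbers invented:

```python
import math

def step_cost(output, capacity_per_unit, cost_per_unit):
    """Step-function cost model: each additional block of capacity
    (e.g. a machine or crew) adds a fixed cost once output crosses
    its threshold, unlike a smooth linear standard cost."""
    units_needed = math.ceil(output / capacity_per_unit)
    return units_needed * cost_per_unit

# Hypothetical: each crew handles 500 tonnes/month at a cost of 2,000.
print(step_cost(400, 500, 2_000))  # 2000: one crew suffices
print(step_cost(900, 500, 2_000))  # 4000: output crosses into a second step
```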
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimisation were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed.
All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
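The (Q,R) policy described above can be illustrated with a minimal simulation, assuming truncated-normal demand and a constant lead time as in the thesis; the thesis itself works with analytic cost models, and every numerical parameter below is invented:

```python
import random

def simulate_qr(Q, R, demand_mean, demand_sd, lead_time, periods, seed=1):
    """Toy continuous-review (Q,R) policy: order a fixed quantity Q whenever
    order cover (stock on hand plus on order) falls to R or below.
    Demand is normal truncated at zero; shortages become backorders."""
    rng = random.Random(seed)
    on_hand = R + Q          # start fully stocked
    pipeline = []            # outstanding orders as (arrival_period, quantity)
    orders = 0
    for t in range(periods):
        on_hand += sum(q for arr, q in pipeline if arr <= t)   # receive orders
        pipeline = [(arr, q) for arr, q in pipeline if arr > t]
        on_hand -= max(0.0, rng.gauss(demand_mean, demand_sd)) # truncated demand
        cover = on_hand + sum(q for _, q in pipeline)
        if cover <= R:                                         # re-order point hit
            pipeline.append((t + lead_time, Q))
            orders += 1
    return on_hand, orders

stock, orders = simulate_qr(Q=100, R=50, demand_mean=10, demand_sd=3,
                            lead_time=4, periods=200)
print(orders)  # roughly one order per Q/mean-demand = 10 periods
```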
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to heat conduction in two-dimensional bodies, where the thermal diffusivity is piecewise constant. We extend the MFS proposed in Johansson and Lesnic [A method of fundamental solutions for transient heat conduction, Eng. Anal. Bound. Elem. 32 (2008), pp. 697–703] for one-dimensional heat conduction with the sources placed outside the space domain of interest, to the two-dimensional setting. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be obtained efficiently with small computational cost.
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional inverse Stefan problem for the heat equation by extending the MFS proposed in [5] for the one-dimensional direct Stefan problem. The sources are placed outside the space domain of interest and in the time interval (-T, T). Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate and stable results can be obtained efficiently with small computational cost.
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the backward heat conduction problem (BHCP). We extend the MFS in Johansson and Lesnic (2008) [5] and Johansson et al. (in press) [6] proposed for one and two-dimensional direct heat conduction problems, respectively, with the sources placed outside the space domain of interest. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate and stable results can be obtained efficiently with small computational cost.
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional parabolic inverse Cauchy–Stefan problem, where boundary data and the initial condition are to be determined from the Cauchy data prescribed on a given moving interface. In [B.T. Johansson, D. Lesnic, and T. Reeve, A method of fundamental solutions for the one-dimensional inverse Stefan problem, Appl. Math. Model. 35 (2011), pp. 4367–4378], the inverse Stefan problem was considered, where only the boundary data is to be reconstructed on the fixed boundary. We extend the MFS proposed in Johansson et al. (2011) and show that the initial condition can also be simultaneously recovered, i.e. the MFS is appropriate for the inverse Cauchy–Stefan problem. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be efficiently obtained with small computational cost.