975 results for Stochastic Translog Cost Frontier
Abstract:
Temporary Traffic Control Plans (TCPs), which provide construction phasing to maintain traffic during construction operations, are an integral component of highway construction project design. Using the initial design, designers develop estimated quantities for the required TCP devices, and these estimates become the basis for bids submitted by highway contractors. However, actual as-built quantities are often significantly different from the engineer's original estimate. The total cost of TCP phasing on highway construction projects amounts to 6–10% of the total construction cost. Variations between engineer-estimated quantities and final quantities contribute to reduced cost control and an increased chance of cost-related litigation, and affect bid rankings and selection. Statistical analyses of over 2000 highway construction projects were performed to determine the sources of variation, which were then used as the basis for developing an automated hybrid prediction model that uses multiple regressions and heuristic rules to provide accurate TCP quantities and costs. The predictive accuracy of the model was demonstrated through several case studies.
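The abstract does not specify the model's functional form. As an illustration only, the sketch below combines a linear regression estimate with heuristic adjustment rules in the general spirit of such a hybrid predictor; every predictor, coefficient and rule here is a hypothetical assumption, not taken from the study.

```python
# Hypothetical sketch of a regression + heuristic-rule hybrid predictor.
# All coefficients, predictors and rules are illustrative assumptions,
# not the model described in the abstract.

def regression_estimate(project_length_km, duration_months, lanes):
    """Linear regression component (hypothetical coefficients)."""
    return 120.0 + 35.0 * project_length_km + 8.0 * duration_months + 15.0 * lanes

def apply_heuristic_rules(base_qty, urban, night_work):
    """Heuristic adjustments layered on top of the regression output."""
    qty = base_qty
    if urban:          # assumed rule: urban projects need denser channelisation
        qty *= 1.20
    if night_work:     # assumed rule: night work adds lighted devices
        qty *= 1.10
    return round(qty)

def predict_tcp_quantity(project_length_km, duration_months, lanes,
                         urban=False, night_work=False):
    base = regression_estimate(project_length_km, duration_months, lanes)
    return apply_heuristic_rules(base, urban, night_work)

if __name__ == "__main__":
    # Example: a 4.5 km, 10-month, 4-lane urban project with night work.
    print(predict_tcp_quantity(4.5, 10, 4, urban=True, night_work=True))
```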
Abstract:
The objective of this research was to develop a model to estimate future freeway pavement construction costs in Henan Province, China. A comprehensive set of factors contributing to the cost of freeway pavement construction was included in the model formulation. These factors reflect regional characteristics, topography and altitude variation, the cost of labour, material and equipment, and time-related variables such as index numbers of labour, material and equipment prices. An Artificial Neural Network model using the Back-Propagation learning algorithm was developed to estimate the cost of freeway pavement construction. A total of 88 valid freeway cases were obtained from freeway construction projects let by the Henan Transportation Department during the period 1994–2007. Data from a random selection of 81 freeway cases were used to train the Neural Network model and the remaining data were used to test its performance. The tested model was then used to predict freeway pavement construction costs in 2010 based on predictions of the input values. In addition, this paper suggests a correction for the predicted value of future freeway pavement construction costs. Since future freeway pavement construction costs are affected by many factors, the predictions obtained by the proposed method, and therefore the model, will need to be tested once actual data are obtained.
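As a hedged illustration of the general approach described above, the sketch below trains a small back-propagation network on synthetic data with an 81/7 train/test split mirroring the abstract; the features, architecture and data are assumptions, not the Henan dataset or the published network.

```python
# Illustrative back-propagation network for cost estimation (synthetic data).
# The architecture, features and data below are assumptions for demonstration,
# not the network or dataset described in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cases = 88                                   # same case count as the abstract
X = rng.uniform(0, 1, size=(n_cases, 5))       # hypothetical cost factors
y = 10 + X @ np.array([3.0, 2.0, 1.5, 4.0, 0.5]) + rng.normal(0, 0.2, n_cases)

# 81 cases for training, the rest for testing, as in the abstract.
train_idx = rng.choice(n_cases, size=81, replace=False)
test_idx = np.setdiff1d(np.arange(n_cases), train_idx)

scaler = StandardScaler().fit(X[train_idx])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X[train_idx]), y[train_idx])

pred = model.predict(scaler.transform(X[test_idx]))
print("mean absolute error on held-out cases:", np.abs(pred - y[test_idx]).mean())
```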
Abstract:
Highway construction works have a significant bearing on all aspects of sustainability. With increasing public awareness and government regulatory measures, the construction industry is experiencing a cultural shift to recognise, embrace and pursue sustainability. Stakeholders are now keen to identify sustainable alternatives and the financial implications of including them on a life-cycle basis. They need tools that can aid the evaluation of investment options. To date, however, there have not been many financial assessments of the sustainability aspects of highway projects, because existing life-cycle costing analysis (LCCA) models tend to focus on economic issues alone and are not able to deal with sustainability factors. This paper provides insights into the current practice of life-cycle cost analysis, and into the identification and quantification of sustainability-related cost components in highway projects, through a literature review, questionnaire surveys and semi-structured interviews. The results can serve as a platform for highway project stakeholders to develop practical tools to evaluate highway investment decisions and reach an optimum balance between financial viability and sustainability deliverables.
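Where a life-cycle costing model underlies such assessments, its core computation is a discounted sum of cost streams. The sketch below shows a minimal version with sustainability-related components added as extra annual cost streams; the component names, amounts, analysis period and discount rate are all hypothetical.

```python
# Minimal life-cycle cost calculation with sustainability-related components.
# Cost figures, component names, analysis period and discount rate are hypothetical.

def life_cycle_cost(initial, annual_costs, discount_rate):
    """Net present value of the initial cost plus a list of annual cost dicts."""
    npv = initial
    for year, costs in enumerate(annual_costs, start=1):
        npv += sum(costs.values()) / (1 + discount_rate) ** year
    return npv

annual = [{"maintenance": 120_000, "emissions_levy": 15_000, "noise_mitigation": 5_000}
          for _ in range(30)]                      # assumed 30-year analysis period
print(f"Life-cycle cost: {life_cycle_cost(25_000_000, annual, 0.04):,.0f}")
```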
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
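For context only, the sketch below solves the unconstrained, full-state-information finite-horizon LQ problem by the standard backward Riccati recursion; the constrained, output-feedback, stochastic setting analysed in the paper does not admit this simple closed form, and the system matrices here are arbitrary assumptions.

```python
# Baseline sketch: finite-horizon LQR via backward Riccati recursion
# (unconstrained, full state information).  The matrices are arbitrary
# assumptions for illustration, not a system from the paper.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)            # state cost weight
R = np.array([[0.1]])    # input cost weight
N = 20                   # horizon length

P = Q.copy()             # terminal cost weight (assumed equal to Q)
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    P = Q + A.T @ P @ (A - B @ K)                        # Riccati update
    gains.append(K)
gains.reverse()          # gains[k] applies at time step k

x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x                      # u_k = -K_k x_k
    x = A @ x + B @ u                      # noise-free propagation for brevity
print("final state:", x.ravel())
```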
Abstract:
Background Lumbar Epidural Steroid Injections (ESIs) have previously been shown to provide some degree of pain relief in sciatica. The Number Needed To Treat (NNT) to achieve 50% pain relief has been estimated at 7 from the results of randomised controlled trials. Pain relief is temporary. They remain one of the most commonly provided procedures in the UK. It is unknown whether this pain relief represents good value for money. Methods 228 patients were randomised into a multi-centre double-blind randomised controlled trial. Subjects received up to 3 ESIs or intra-spinous saline, depending on the response to and fall-off after the first injection. All other treatments were permitted. All received a review of analgesia, education and physical therapy. Quality of life was assessed using the SF-36 at 6 time points and compared using independent-sample t-tests. Follow-up was up to 1 year. Missing data were imputed using last observation carried forward (LOCF). QALYs (quality-adjusted life years) were derived from preference-based health values (summary health utility scores). The SF-6D health state classification was derived from SF-36 raw score data. Standard gambles (SG) were calculated using Model 10. SG scores were calculated on trial results; LOCF was not used for this. Instead, average SG scores were derived for a subset of patients with observations for all visits up to week 12. Incremental QALYs were derived as the difference in the area between the SG curves for the active and placebo groups. Results The SF-36 domains showed a significant improvement in pain at week 3, but this was not sustained (mean 54 Active vs 61 Placebo, P<0.05). Other domains did not show any significant gains compared with placebo. For the derivation of SG, the number in the sample in each period differed. In week 12, average SG scores for active and placebo converged; in other words, the health gain for the active group as measured by SG was achieved by the placebo group by week 12. The incremental QALY gained for a patient under the trial protocol compared with the standard care package was 0.0059350, equivalent to an additional 2.2 days of full health. The cost per QALY gained to the provider from a patient management strategy administering one epidural, as suggested by the results, was £25,745.68. This result was derived assuming that the QALY gain calculated for patients under the trial protocol would approximate that under a patient management strategy based on the trial results (one ESI). This is above the threshold suggested by some as a cost-effective treatment. Conclusions The transient benefit in pain relief afforded by ESIs does not appear to be cost-effective. Further work is needed to develop more cost-effective conservative treatments for sciatica.
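A small worked check of the reported figures, assuming the standard definitions (a QALY converted to days of full health by multiplying by 365; cost per QALY equal to incremental cost divided by incremental QALY); the implied incremental cost is a back-calculation, not a number reported in the abstract.

```python
# Worked check of the reported cost-effectiveness figures.
# The implied incremental cost is a back-calculation, not reported in the abstract.
incremental_qaly = 0.0059350
cost_per_qaly = 25_745.68          # GBP, as reported

days_of_full_health = incremental_qaly * 365          # ~2.2 days, as reported
implied_incremental_cost = cost_per_qaly * incremental_qaly

print(f"QALY gain as days of full health: {days_of_full_health:.1f}")
print(f"Implied incremental cost per patient: £{implied_incremental_cost:.2f}")
```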
Abstract:
A recent decision of the Queensland Civil and Administrative Tribunal dealt with the liability of a purchaser to pay a termination penalty where a contract for the purchase of a residential property was terminated during the ‘cooling-off’ period. The decision is Lucy Cole Prestige Properties Broadbeach Pty Ltd ATF Gaindri FT Trust t/as Lucy Cole Prestige Properties Broadbeach Pty Ltd v Kastrissios [2013] QCAT 653.
Abstract:
This chapter focuses on demonstrating the role of Design-Led Innovation (DLI) as an enabler of success for Small to Medium Enterprises (SMEs) within high-growth environments. The chapter is targeted toward businesses that may have previously been exposed to the concept of design at a product level and now seek to better understand its value through implementation at a strategic level. The decision to engage in the DLI process is made by firms that want to remain competitive as they struggle to compete in high-cost environments, such as the Australian economy at present. The results presented in this chapter outline the challenges in adopting the DLI process and the implications it can have. An understanding of the value of DLI in practice, as an enabler of business transformation in Australia, is of benefit to government and the broader design community.
Abstract:
The accuracy of early cost estimates is critical to the success of construction projects. The selected tender price (the client's building cost) has usually been treated in previous research as a holistic dependent variable when examining early-stage estimates. Unlike other components of construction cost, the amount of contingencies is decided by clients/consultants in consideration of early project information. Cost drivers of contingency estimates are associated with uncertainty and complexity, and include project size, schedule, ground conditions, construction site access, market conditions and so on. A path analysis of 133 UK school building contracts was conducted to identify the impacts of nine major cost drivers on the determination of contingencies by different clients/cost estimators. This research finds that gross floor area (GFA), schedule and the requirement for air conditioning have statistically significant impacts on contingency determination. The mediating role of schedule between gross floor area and contingencies (GFA→Schedule→Contingencies) was confirmed with the Sobel test. The total effects of the three variables on contingency estimates were obtained with consideration of this indirect effect. The squared multiple correlation (SMC) of contingencies (0.624) indicates that the three identified variables explain 62.4% of the variance in contingencies, which is comparatively satisfactory considering the heterogeneity among different estimators, unknown estimating techniques and different projects.
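For readers unfamiliar with the test, the sketch below shows the form of the Sobel statistic for an indirect effect such as GFA→Schedule→Contingencies; the path coefficients and standard errors are hypothetical placeholders, not values from this study.

```python
# Sobel test for an indirect (mediated) effect a*b, as used for
# GFA -> Schedule -> Contingencies.  The coefficients and standard errors
# below are hypothetical placeholders, not values from the study.
import math
from scipy.stats import norm

a, se_a = 0.42, 0.08     # assumed path GFA -> Schedule
b, se_b = 0.35, 0.10     # assumed path Schedule -> Contingencies (GFA controlled)

sobel_z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
p_value = 2 * (1 - norm.cdf(abs(sobel_z)))
print(f"Sobel z = {sobel_z:.2f}, two-tailed p = {p_value:.4f}")
```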
Abstract:
In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, in which we develop simulation-based optimal design methods to search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models that require searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and require optimisation using the methods presented in this paper.
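The sketch below is a generic, simplified skeleton of a simulation-based search over a discrete design grid (number of subjects and samples per subject) with a cost-penalised utility; the one-compartment model, noise levels, cost figures and the crude precision-based utility are placeholders and do not reproduce the paper's pharmacokinetic model or Bayesian utility.

```python
# Skeleton of a simulation-based search over a discrete design grid.
# The one-compartment model, priors, cost figures and the crude utility
# (precision of a naive estimator) are placeholders, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)

def simulate_and_score(n_subjects, n_samples, n_sims=200):
    times = np.linspace(1.0, 24.0, n_samples)        # assumed sampling window (h)
    estimates = []
    for _ in range(n_sims):
        k_i = rng.lognormal(mean=np.log(0.15), sigma=0.3, size=n_subjects)
        conc = np.exp(-np.outer(k_i, times))          # unit dose, one compartment
        obs = conc * np.exp(rng.normal(0, 0.1, conc.shape))   # lognormal noise
        # naive per-subject log-linear fit, then population mean
        slopes = [-np.polyfit(times, np.log(c), 1)[0] for c in obs]
        estimates.append(np.mean(slopes))
    precision = 1.0 / np.var(estimates)
    cost = 50 * n_subjects + 5 * n_subjects * n_samples   # assumed cost model
    return precision - 0.1 * cost                          # cost-penalised utility

designs = [(n, m) for n in (5, 10, 20) for m in (3, 5, 8)]
best = max(designs, key=lambda d: simulate_and_score(*d))
print("best (subjects, samples per subject):", best)
```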
Abstract:
A statistical approach is used in the design of a battery-supercapacitor energy storage system for a wind farm. The design exploits the technical merits of the two energy storage media, in terms of the differences in their specific power and energy densities and their ability to accommodate different rates of change in the charging/discharging powers. By treating the input wind power as random and using a proposed coordinated power-flow control strategy for the battery and the supercapacitor, the approach evaluates the energy storage capacities, the corresponding expected life cycle cost/year of the storage media, and the expected cost/year of unmet power dispatch. A computational procedure is then developed for the design of a least-cost/year hybrid energy storage system to realize wind power dispatch at a specified confidence level.
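The abstract does not detail the coordinated control strategy; one common approach, assumed here purely for illustration, is a first-order low-pass split in which the battery follows the slow component of the power mismatch and the supercapacitor absorbs the fast residual.

```python
# Illustrative low-pass power split between battery and supercapacitor.
# The first-order filter and its time constant are assumptions; the abstract's
# coordinated control strategy is not specified in detail.
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0                                   # time step (s)
t = np.arange(0, 3600, dt)
wind_minus_dispatch = 0.5 * np.sin(2 * np.pi * t / 900) + 0.2 * rng.normal(size=t.size)

tau = 120.0                                # assumed filter time constant (s)
alpha = dt / (tau + dt)
battery_ref = np.zeros_like(wind_minus_dispatch)
for k in range(1, t.size):                 # first-order low-pass filter
    battery_ref[k] = battery_ref[k - 1] + alpha * (wind_minus_dispatch[k] - battery_ref[k - 1])
supercap_ref = wind_minus_dispatch - battery_ref   # fast residual

print("battery ref std: %.3f, supercap ref std: %.3f"
      % (battery_ref.std(), supercap_ref.std()))
```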
Abstract:
Background In 2002/03 the Queensland Government responded to high rates of alcohol-related harm in discrete Indigenous communities by implementing alcohol management plans (AMPs), designed to include supply and harm reduction and treatment measures. Tighter alcohol supply and carriage restrictions were introduced in 2008 following indications of reductions in violence and injury. Despite the plans being in place for over a decade, no comprehensive independent review has assessed to what extent the designed aims were achieved and what effect the plans have had on Indigenous community residents and service providers. This study will describe the long-term impacts of Queensland's AMPs on important health, economic and social outcomes. Methods/Design The project has two main studies: 1) an outcome evaluation using de-identified epidemiological data on injury, violence and other health and social indicators across Queensland, including de-identified databases compiled from relevant routinely available administrative data sets, and 2) a process evaluation to map the nature, timing and content of intervention components targeting alcohol. The process evaluation will also be used to assess the fidelity with which the designed intervention components have been implemented, their uptake, community responses to them, and their perceived impacts on alcohol supply and consumption, injury, violence and community health. Interviews and focus groups with Indigenous residents and service providers will be used. The study will be conducted in all 24 of Queensland's Indigenous communities affected by alcohol management plans. Discussion This evaluation will report on whether the original aims of the AMPs were achieved and what impact the plans have had on Indigenous residents and service providers. A central outcome will be the establishment of relevant databases describing the parameters of the changes seen. This will permit comprehensive and rigorous surveillance systems to be put in place and provided to communities, empowering them with the best credible evidence to judge future policy and program requirements for themselves. The project will inform impending alcohol policy and program adjustments in Queensland and other Australian jurisdictions. The project has been approved by the James Cook University Human Research Ethics Committee (approval numbers H4967 and H5241).
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first shows that Data Synchronization Error (DSE) is the most inherent factor among the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), both of which have been widely applied in the past decade. Accelerations collected by a wired sensor system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were made to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques and the use of channel projection for the time-domain OMA technique to cope with DSE are recommended.
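As a minimal, hypothetical illustration of the study's central variable, the sketch below injects a DSE as a constant sample shift on one channel of a synthetic two-channel record and then picks a natural frequency by FDD, i.e. from the first singular value of the cross-spectral density matrix; the signals are synthetic, not the laboratory bridge data.

```python
# Minimal illustration: inject a data synchronisation error (DSE) as a constant
# sample shift on one channel, then pick a natural frequency by FDD
# (SVD of the cross-spectral density matrix).  Signals are synthetic.
import numpy as np
from scipy.signal import csd

fs, T = 100.0, 60.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(3)
f_n = 2.5                                   # assumed modal frequency (Hz)
mode = np.sin(2 * np.pi * f_n * t) * np.exp(-0.02 * t)   # decaying modal response
ch1 = mode + 0.05 * rng.normal(size=t.size)
ch2 = 0.8 * mode + 0.05 * rng.normal(size=t.size)
ch2 = np.roll(ch2, 5)                       # DSE: 5-sample (50 ms) shift on channel 2

channels = [ch1, ch2]
nper = 1024
freqs = None
G = np.zeros((2, 2, nper // 2 + 1), dtype=complex)
for i in range(2):
    for j in range(2):
        freqs, G[i, j] = csd(channels[i], channels[j], fs=fs, nperseg=nper)

# The first singular value of G(f) peaks at the modal frequency.
s1 = np.array([np.linalg.svd(G[:, :, k], compute_uv=False)[0] for k in range(freqs.size)])
print("identified frequency: %.2f Hz" % freqs[np.argmax(s1)])
```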
Abstract:
INTRODUCTION: Increasing health care costs, limited resources and increased demand make cost-effective and cost-efficient delivery of Adolescent Idiopathic Scoliosis (AIS) management paramount. Rising implant costs in deformity correction surgery have prompted analysis of whether high implant densities are justified. The objective of this study was to analyse the costs of thoracoscopic scoliosis surgery, comparing initial learning curve costs with those of the established technique and with the costs involved in posterior instrumented fusion reported in the literature. METHODS: 189 consecutive cases from April 2000 to July 2011 were assessed with a minimum of 2 years follow-up. Information was gathered from a prospective database covering perioperative factors, clinical and radiological outcomes, complications and patient-reported outcomes. The patients were divided into three groups to allow comparison: 1. a learning curve cohort, 2. an intermediate cohort, and 3. a third cohort of patients treated using our established technique. Hospital finance records and implant manufacturer figures were corrected to 2013 costs. A literature review of AIS management costs and implant density in similar curve types was performed. RESULTS: The mean pre-op Cobb angle was 53° (95% CI 0.4) and was corrected post-op to a mean of 22.9° (CI 0.4). The overall complication rate was 20.6%, primarily in the first cohort, with a rate of 5.6% in the third cohort. The average total cost was $46,732, with operating room costs of $10,301 (22.0%) and ICU costs of $4,620 (9.8%). The mean number of screws placed was 7.1 (CI 0.04) with a single rod used for each case, giving average implant costs of $14,004 (29.9%). Comparison of the three groups revealed higher implant costs as the technique evolved to that in use today, from $13,049 in Group 1 to $14,577 in Group 3 (P<0.001). Conversely, operating room costs reduced from $10,621 in Group 1 to $7,573 (P<0.001) in Group 3. ICU stay was reduced from an average of 1.2 to 0 days. In-patient stay was significantly (P=0.006) lower in Groups 2 and 3 (5.4 days) than in Group 1 (5.9 days), i.e. a reduction in cost of approximately $6,140. CONCLUSIONS: The evolution of our thoracoscopic anterior scoliosis correction has resulted in an increase in the number of levels fused and a reduction in the complication rate. Implant costs have risen as a result; however, there has been a concurrent decrease in the costs generated by operating room use, ICU and in-patient stay with increasing experience. Literature review of equivalent curve types treated posteriorly shows similar perioperative factors but higher implant density, 69-83% compared with the 50% in this study. Thoracoscopic scoliosis surgery presents a low-density, reliable, efficient and effective option for selected curves. This cost analysis of thoracoscopic scoliosis surgery, using financial records and a prospectively collected database of all patients since 2000, demonstrates a clear cost advantage compared with equivalent posterior instrumentation and fusion.
Abstract:
Level crossing risk continues to be a significant safety concern for rail operations around the world. Over the last decade or so, a third of railway-related fatalities in Australia occurred as a direct result of collisions between road and rail vehicles. Importantly, nearly half of these collisions occurred at railway level crossings with no active protection, such as flashing lights or boom barriers. Current practice is to upgrade level crossings that have no active protection. However, the total number of level crossings across Australia exceeds 23,500, and targeting the proportion of these that are considered high risk (e.g. public crossings with passive controls) would cost in excess of AU$3.25 billion based on equipment, installation and commissioning costs of warning devices that are currently type approved. Low-cost level crossing warning devices provide a potentially effective control for reducing risk; however, over the last decade, significant barriers and legal issues in both Australia and the US have hindered their adoption. These devices are designed to have significantly lower lifecycle costs compared with traditional warning devices. They often make use of alternative technologies for train detection, wireless connectivity and solar energy supply. This paper describes the barriers that have been encountered in the adoption of these devices in Australia, including the challenges associated with: (1) determining requisite safety levels for such devices; (2) legal issues relating to the duty of care obligations of railway operators; and (3) issues of tort liability around the use of less-than-fail-safe equipment. The paper provides an overview of a comprehensive safety justification that was developed as part of a project funded by a collaborative rail research initiative established by the Australian government, and describes the conceptual framework and processes being used to justify adoption. The paper summarises key points from peer review and discusses prospective barriers that may need to be overcome for future adoption. A successful outcome from this process would result in the development of a guideline for decision-making, providing a precedent for adopting low-cost level crossing warning devices in other parts of the world. The framework described in this paper is also relevant to the review and adoption of analogous technologies in rail and other safety-critical industries.
Abstract:
This paper examines the properties of various approximation methods for solving stochastic dynamic programs in structural estimation problems. The problem addressed is evaluating the expected value of the maximum of available choices. The paper shows that approximating this by the maximum of expected values frequently has poor properties. It also shows that choosing a convenient distributional assumption for the errors and then solving exactly conditional on that assumption leads to small approximation errors even if the distribution is misspecified. © 1997 Cambridge University Press.
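A small numerical illustration of that point, assuming i.i.d. Type I extreme value (Gumbel) errors, for which the expected maximum has the well-known closed form γ + log Σ_j exp(v_j): the Monte Carlo estimate matches the closed form, while the maximum of expected values understates it. The choice values used are arbitrary.

```python
# Illustration: E[max_j (v_j + e_j)] versus max_j E[v_j + e_j].
# With i.i.d. Type I extreme value (Gumbel) errors the expectation has the
# closed form gamma + log(sum_j exp(v_j)); the Monte Carlo estimate should
# match it, while the max-of-expected-values understates it.
import numpy as np

rng = np.random.default_rng(4)
v = np.array([1.0, 0.5, 0.0])              # arbitrary choice-specific values
eps = rng.gumbel(0.0, 1.0, size=(200_000, v.size))

emax_mc = (v + eps).max(axis=1).mean()                    # Monte Carlo E[max]
emax_closed = np.euler_gamma + np.log(np.exp(v).sum())    # logit closed form
max_of_means = (v + eps.mean(axis=0)).max()               # max of expected values

print(f"E[max] (Monte Carlo):   {emax_mc:.3f}")
print(f"E[max] (closed form):   {emax_closed:.3f}")
print(f"max of expected values: {max_of_means:.3f}")
```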