Abstract:
A prototype fluorescence-based biosensor has been developed for the antibody-based detection of food-related contaminants. Its performance was characterised, showing a typical antibody binding signal of 200-2000 mV, short-term noise of 9.1 mV, and a baseline slope of -0.016 mV/s over 4 h. Bulk signal detection repeatability (n=23) and reproducibility (n=3) were less than 2.4% CV. The biosensor detection unit was evaluated using two food-related model systems, proving its ability to monitor both binding, using commercial products, and inhibition, through the development of an assay. This assay development potential was evaluated by observing the biosensor's performance whilst appraising several labelled-antibody and glass-slide configurations. The molecular interaction between biotin and an anti-biotin antibody was shown to be inhibited by 41% due to the presence of biotin in a sample. A food toxin (domoic acid) calibration curve was produced, with %CVs ranging from 2.7 to 7.8% and a midpoint of approximately 17 ng/ml, with further optimisation possible. The ultimate aim of this study was to demonstrate the working principles of this innovative biosensor as a potential portable tool with the opportunity for interchangeable assays. The biosensor design is applicable to the requirements of routine food contaminant analysis with respect to performance, functionality and cost. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
BACKGROUND: For many, physical activity has been engineered out of daily life, leading to high levels of sedentariness and obesity. Multi-faceted physical activity interventions, combining individual, community and environmental approaches, have the greatest potential to improve public health, but few have been evaluated. METHODS: Approximately 100 000 people may benefit from improved opportunities for physical activity through an urban regeneration project in Northern Ireland, the Connswater Community Greenway. Using the macro-simulation PREVENT model, we estimated its potential health impacts and cost-effectiveness. To do so, we modelled its potential impact on the burden from cardiovascular disease, namely ischaemic heart disease, type 2 diabetes mellitus and stroke, and from colon and breast cancer, by the year 2050, if feasible increases in physical activity were to be achieved. RESULTS: If 10% of those classified as 'inactive' (performing less than 150 minutes of moderate activity/week) became 'active', 886 incident cases (1.2%) and 75 deaths (0.9%) could be prevented, with an incremental cost-effectiveness ratio of £4469/disability-adjusted life year. For effectiveness estimates as low as 2%, the intervention would remain cost-effective (£18 411/disability-adjusted life year). Small gains in average life expectancy and disability-adjusted life expectancy could be achieved, and the Greenway population would benefit from 46 fewer years lived with disability. CONCLUSION: The Greenway intervention could be cost-effective at improving physical activity levels. Although the direct health gains are predicted to be small for any individual, summed over an entire population they are substantial. In addition, the Greenway is likely to have much wider benefits beyond health.
Abstract:
Background: Increasing emphasis is being placed on the economics of health care service delivery, including home-based palliative care. Aim: This paper analyzes resource utilization and costs of a shared-care demonstration project in rural Ontario (Canada) from the public health care system's perspective. Design: To provide enhanced end-of-life care, the shared-care approach ensured exchange of expertise and knowledge and coordination of services in line with the understood goals of care. Resource utilization and costs were tracked over the 15-month study period from January 2005 to March 2006. Results: Of the 95 study participants (average age 71 years), 83 had a cancer diagnosis (87%); the non-cancer diagnoses (12 patients, 13%) included mainly advanced heart diseases and COPD. Community Care Access Centre and Enhanced Palliative Care Team-based homemaking and specialized nursing services were the most frequented offerings, followed by equipment/transportation services and palliative care consults for pain and symptom management. Total costs for all patient-related services (in 2007 CAN$) were $1,625,658.07, or $17,112.19 per patient and $117.95 per patient day. Conclusion: While higher than expenditures previously reported for a cancer-only population in an urban Ontario setting, the costs were still within the parameters of the US Medicare Hospice Benefits, on a par with the per diem funding assigned for long-term care homes, and lower than both average alternate-level-of-care and hospital costs within the Province of Ontario. The study results may assist service planners in the appropriate allocation of resources and service packaging to meet the complex needs of palliative care populations. © 2012 The Author(s).
Abstract:
Background: Recently both the UK and US governments have advocated the use of financial incentives to encourage healthier lifestyle choices but evidence for the cost-effectiveness of such interventions is lacking. Our aim was to perform a cost-effectiveness analysis (CEA) of a quasi-experimental trial, exploring the use of financial incentives to increase employee physical activity levels, from a healthcare and employer’s perspective.
Methods: Employees used a ‘loyalty card’ to objectively monitor their physical activity at work over 12 weeks. The Incentive Group (n=199) collected points and received rewards for minutes of physical activity completed. The No Incentive Group (n=207) self-monitored their physical activity only. Quality of life (QOL) and absenteeism were assessed at baseline and 6 months follow-up. QOL scores were also converted into productivity estimates using a validated algorithm. The additional costs of the Incentive Group were divided by the additional quality-adjusted life years (QALYs) or productivity gained to calculate incremental cost-effectiveness ratios (ICERs). Cost-effectiveness acceptability curves (CEACs) and the population expected value of perfect information (EVPI) were used to characterize and value the uncertainty in our estimates.
Results: The Incentive Group performed more physical activity over 12 weeks and by 6 months had achieved greater gains in QOL and productivity, although these mean differences were not statistically significant. The ICERs were £2,900/QALY and £2,700 per percentage increase in overall employee productivity. Whilst the confidence intervals surrounding these ICERs were wide, CEACs showed a high chance of the intervention being cost-effective at low willingness-to-pay (WTP) thresholds.
Conclusions: The Physical Activity Loyalty card (PAL) scheme is potentially cost-effective from both a healthcare and employer’s perspective but further research is warranted to reduce uncertainty in our results. It is based on a sustainable “business model” which should become more cost-effective as it is delivered to more participants and can be adapted to suit other health behaviors and settings. This comes at a time when both UK and US governments are encouraging business involvement in tackling public health challenges.
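The ICER arithmetic described in the trial abstract above is simply incremental cost divided by incremental effect. A minimal sketch with hypothetical numbers (not the study's actual cost and QALY data):

```python
def icer(cost_intervention, cost_control, effect_intervention, effect_control):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    delta_cost = cost_intervention - cost_control
    delta_effect = effect_intervention - effect_control
    if delta_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical example: the intervention costs £29 more per person and
# yields 0.01 extra QALYs, giving an ICER of about £2,900/QALY,
# the same order as the figure reported above.
print(icer(129.0, 100.0, 0.51, 0.50))
```

The same function works for the productivity denominator: divide the extra cost by percentage points of productivity gained instead of QALYs.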
Abstract:
Reliable detection of JAK2-V617F is critical for accurate diagnosis of myeloproliferative neoplasms (MPNs); in addition, sensitive mutation-specific assays can be applied to monitor disease response. However, there has been no consistent approach to JAK2-V617F detection, with assays varying markedly in performance, affecting clinical utility. Therefore, we established a network of 12 laboratories from seven countries to systematically evaluate nine different DNA-based quantitative PCR (qPCR) assays, including those in widespread clinical use. Seven quality control rounds involving over 21,500 qPCR reactions were undertaken using centrally distributed cell line dilutions and plasmid controls. The two best-performing assays were tested on normal blood samples (n=100) to evaluate assay specificity, followed by analysis of serial samples from 28 patients transplanted for JAK2-V617F-positive disease. The most sensitive assay, which performed consistently across a range of qPCR platforms, predicted outcome following transplant, with the mutant allele detected a median of 22 weeks (range 6-85 weeks) before relapse. Four of seven patients achieved molecular remission following donor lymphocyte infusion, indicative of a graft vs MPN effect. This study has established a robust, reliable assay for sensitive JAK2-V617F detection, suitable for assessing response in clinical trials, predicting outcome and guiding management of patients undergoing allogeneic transplant.
Abstract:
Pressure myography studies have played a crucial role in our understanding of vascular physiology and pathophysiology. Such studies depend upon the reliable measurement of changes in the diameter of isolated vessel segments over time. Although several software packages are available to carry out such measurements on small arteries and veins, no such software exists to study smaller vessels (<50 µm in diameter). We provide here a new, freely available open-source algorithm, MyoTracker, to measure and track changes in the diameter of small isolated retinal arterioles. The program has been developed as an ImageJ plug-in and uses a combination of cost analysis and edge enhancement to detect the vessel walls. In tests performed on a dataset of 102 images, automatic measurements were found to be comparable to manual ones. The program was also able to track both fast and slow constrictions and dilations during intraluminal pressure changes and following the application of several drugs. Variability in automated measurements during video analysis, as well as processing times, was also investigated and is reported. MyoTracker is a new software tool to assist during pressure myography experiments on small isolated retinal arterioles. It provides fast and accurate measurements with low levels of noise and works with both individual images and videos. Although the program was developed to work with small arterioles, it is also capable of tracking the walls of other types of microvessels, including venules and capillaries. It also works well with larger arteries, and therefore may provide an alternative to other packages developed for larger vessels when its features are considered advantageous.
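As an aside, the wall-detection step can be illustrated with a toy gradient-based edge finder on a one-dimensional intensity profile. This is only a loose approximation of MyoTracker's approach, which the abstract describes as combining cost analysis with edge enhancement; the profile values below are invented for illustration:

```python
def wall_positions(profile):
    """Locate the two vessel walls as the strongest falling and rising edges."""
    grad = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    left = min(range(len(grad)), key=lambda i: grad[i])   # strongest bright-to-dark step
    right = max(range(len(grad)), key=lambda i: grad[i])  # strongest dark-to-bright step
    return left, right

# Bright background with a dark lumen between indices 3 and 6:
profile = [200, 200, 200, 40, 40, 40, 40, 200, 200, 200]
l, r = wall_positions(profile)
print("diameter (pixels):", r - l)  # 4 for this toy profile
```

Tracking a diameter over time is then just applying this to successive frames of a video.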
Abstract:
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs — both on the client and server sides — to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely-used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
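The parity computation that such a system offloads to GPUs is standard RAID-6 P/Q arithmetic over GF(2^8). A minimal pure-Python sketch of the math (illustrative only; a real implementation vectorises this across whole blocks):

```python
def gf_mul2(b):
    """Multiply a byte by x (i.e. by 2) in GF(2^8) with the 0x11d polynomial."""
    b <<= 1
    return (b ^ 0x11d) & 0xff if b & 0x100 else b

def raid6_parity(stripes):
    """Return (P, Q) parity bytes for one byte column of the data stripes."""
    p = q = 0
    for byte in reversed(stripes):  # Horner's scheme: q = d0 + 2*(d1 + 2*(d2 + ...))
        p ^= byte                   # P is the plain XOR parity
        q = gf_mul2(q) ^ byte       # Q weights stripe i by 2**i in GF(2^8)
    return p, q

p, q = raid6_parity([0x12, 0x34, 0x56])  # p == 0x70, q == 0x3f
```

With both P and Q, any two lost stripes in the array can be reconstructed, which is exactly the work the paper parallelises during rebuilds.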
Abstract:
This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
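A high-level cost model of the kind mentioned above can be illustrated with a toy task-farm model: predicted wall time is the number of task batches each worker processes plus a per-worker overhead. This is a hedged sketch, far simpler than Skel's actual cost models, with invented parameters:

```python
import math

def farm_time(t_task, n_tasks, workers, t_overhead):
    """Predicted wall time: task batches per worker plus per-worker setup cost."""
    return math.ceil(n_tasks / workers) * t_task + t_overhead * workers

def best_workers(t_task, n_tasks, max_workers, t_overhead):
    """Pick the worker count the model predicts to be fastest."""
    return min(range(1, max_workers + 1),
               key=lambda w: farm_time(t_task, n_tasks, w, t_overhead))

# With cheap per-worker overhead the model favours using every core;
# with expensive overhead it recommends fewer workers.
print(best_workers(1.0, 240, 24, 0.1))  # 24 on this toy input
```

A refactoring tool can evaluate such a model before rewriting the code, which is how predicted performance can guide the choice of refactoring.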
Abstract:
BACKGROUND: As the world population ages, the requirement for cost-effective methods of treating chronic disease conditions increases. In terms of oral health, there is a rapidly increasing number of dentate elderly with a high burden of maintenance. Population surveys indicate that older individuals are keeping their teeth for longer and are a higher caries risk group. Atraumatic Restorative Treatment (ART) could be suitable for patients in nursing homes or house-bound elderly, but very little research has been done on its use in adults.
OBJECTIVES: To compare the cost-effectiveness of ART and a conventional technique (CT) for restoring carious lesions as part of a preventive and restorative programme for older adults.
METHODS: In this randomized clinical trial, 82 patients with carious lesions were randomly allocated to receive either ART or conventional restorations. Treatment costs were measured based on treatment time, materials and labour. For the ART group, the cost of care provided by a dentist was also compared to the cost of having a hygienist provide treatment. Effectiveness was measured as the percentage of restorations that survived after a year.
RESULTS: Eighty-two patients received 260 restorations: 128 ART and 132 conventional restorations. 91.1% of the restorations were on one surface only. After a year, 252 restorations were assessed in 80 patients. The average costs for ART and conventional restorations were €16.86 and €28.71, respectively; the restoration survival percentages were 91.1% and 97.7%, respectively. This resulted in cost-effectiveness ratios of 0.18 (ART) and 0.29 (CT). When the cost of a hygienist providing ART was inserted into the analysis, the resulting ratio was 0.14.
CONCLUSIONS: Atraumatic restorative treatment was found to be a more cost-effective alternative for treating older adults after 1 year, compared to conventional restorations, especially in out-of-surgery facilities and using an alternative workforce such as hygienists. Atraumatic restorative treatment can be a useful tool to provide dental care for frail and fearful individuals who might not access dental treatment routinely.
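The cost-effectiveness ratios reported above are treatment cost per percentage point of one-year restoration survival; a quick arithmetic check of the reported figures:

```python
def ce_ratio(cost_eur, survival_pct):
    """Cost per percentage point of one-year restoration survival."""
    return cost_eur / survival_pct

# Reproduce the reported ratios from the abstract's own numbers
# (lower is better, so ART dominates the conventional technique here).
assert abs(ce_ratio(16.86, 91.1) - 0.18) < 0.01   # ART provided by a dentist
assert abs(ce_ratio(28.71, 97.7) - 0.29) < 0.01   # conventional technique
```

The 0.14 ratio for hygienist-provided ART follows the same formula with a lower (unreported) treatment cost in the numerator.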
Abstract:
Applications that cannot tolerate the loss of accuracy that results from binary arithmetic demand hardware decimal arithmetic designs. Binary arithmetic in quantum-dot cellular automata (QCA) technology has been extensively investigated in recent years. However, only limited attention has been paid to QCA decimal arithmetic. In this paper, two cost-efficient binary-coded decimal (BCD) adders are presented. One is based on the carry-flow adder (CFA), using a conventional correction method. The other uses the carry-lookahead (CLA) algorithm and is the first QCA CLA decimal adder proposed to date. Compared with previous designs, both decimal adders achieve better performance in terms of latency and overall cost. The proposed CFA-based BCD adder has the smallest area with the least number of cells. The proposed CLA-based BCD adder is the fastest, with an increase in speed of over 60% when compared with the previous fastest decimal QCA adder. It also has the lowest overall cost, with a reduction of over 90% when compared with the previous most cost-efficient design.
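The conventional correction method used by the CFA-based design is the classic add-6 rule: add the BCD digits in binary, then add 6 whenever the 4-bit result exceeds 9, skipping the six unused codes 1010-1111. A behavioural sketch of the rule (software model only, not the QCA circuit):

```python
def bcd_digit_add(a, b, carry_in=0):
    """Add two BCD digits (0-9); return (sum_digit, carry_out)."""
    s = a + b + carry_in
    if s > 9:
        return (s + 6) & 0xf, 1   # add-6 correction skips codes 1010-1111
    return s, 0

def bcd_add(x_digits, y_digits):
    """Ripple the correction through multi-digit BCD numbers (LSD first)."""
    out, carry = [], 0
    for a, b in zip(x_digits, y_digits):
        d, carry = bcd_digit_add(a, b, carry)
        out.append(d)
    return out + ([carry] if carry else [])

print(bcd_add([9, 9], [1, 0]))  # 99 + 1 -> [0, 0, 1], i.e. 100
```

A CLA-based design computes the same corrected digits but derives the carries in parallel rather than rippling them, which is where its speed advantage comes from.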
Abstract:
Bag of Distributed Tasks (BoDT) applications can benefit from decentralised execution on the Cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of Cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the Cloud can be generated. A heuristic algorithm, which considers the user's preferences for performance and cost, is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
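The underlying performance/cost trade-off can be sketched as a scoring function over candidate VM counts, weighted by the user's preference. This is only an illustrative model, not the paper's heuristic; it assumes hourly billing so that cost and runtime genuinely pull in opposite directions:

```python
import math

def plan_score(vms, total_task_seconds, price_per_vm_hour, w_perf):
    """User-weighted blend of predicted runtime and monetary cost."""
    runtime_h = total_task_seconds / vms / 3600.0
    cost = vms * price_per_vm_hour * math.ceil(runtime_h)  # billed per whole hour
    return w_perf * runtime_h + (1.0 - w_perf) * cost

def choose_vms(max_vms, **kw):
    """Pick the VM count with the best (lowest) score."""
    return min(range(1, max_vms + 1), key=lambda v: plan_score(v, **kw))

# A pure performance preference uses every VM; a pure cost preference
# avoids paying for part-hours across many machines.
fast = choose_vms(8, total_task_seconds=7200, price_per_vm_hour=1.0, w_perf=1.0)
cheap = choose_vms(8, total_task_seconds=7200, price_per_vm_hour=1.0, w_perf=0.0)
```

Sweeping `w_perf` between 0 and 1 traces out the whole trade-off curve that an execution planner would explore.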
Abstract:
Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and is also the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to the hardware designs of the equivalent ring-LWE scheme: in addition to resting on a stronger security proof, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
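For readers unfamiliar with LWE, a toy standard-lattice encryption scheme can be sketched in a few lines. The parameters below are tiny and completely insecure, and this is a simplified Regev-style construction for illustration, not the paper's hardware design:

```python
import random

q, n = 97, 8          # modulus and dimension: far too small for real security
random.seed(1)

s = [random.randrange(q) for _ in range(n)]                 # secret key

def keygen(m_rows=16):
    """Public key: random matrix A and noisy products b = A*s + e (mod q)."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m_rows)]
    e = [random.choice([-1, 0, 1]) for _ in range(m_rows)]  # small error terms
    b = [(sum(ai * si for ai, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b

def encrypt(bit, A, b):
    """Sum a random subset of rows; embed the bit in the high half of q."""
    rows = random.sample(range(len(A)), 4)
    u = [sum(A[r][j] for r in rows) % q for j in range(n)]
    v = (sum(b[r] for r in rows) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """Strip <u, s>; the residue is near 0 for bit 0 and near q/2 for bit 1."""
    d = (v - sum(ui * si for ui, si in zip(u, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

A, b = keygen()
```

The discrete Gaussian sampler discussed above replaces the crude ±1 error choice here with properly distributed noise, which is why its speed and precision dominate hardware cost.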
Abstract:
The recent remarkable growth in bandwidth of both wired optical and wireless access networks supports a burst of new high-bandwidth Internet applications, such as peer-to-peer file sharing, cloud storage, on-line gaming and video streaming. Within this scenario, the convergence of fixed and wireless access networks offers significant opportunities for network operators to satisfy user demands and simultaneously reduce the cost of implementing and running separate wireless and wired networks. The integration of wired and wireless networks can be accomplished within several scenarios and at several levels. In this thesis we focus on converged radio-over-fiber architectures, particularly on two application scenarios: converged optical 60 GHz wireless networks, and wireless overlay backhauling over bidirectional colorless wavelength division multiplexing passive optical networks (WDM-PONs). In the first application scenario, optical 60 GHz signal generation using external modulation of an optical carrier by means of lithium niobate (LiNbO3) Mach-Zehnder modulators (MZMs) is considered. The performance of different optical modulation techniques that are robust against fiber dispersion is assessed, and dispersion mitigation strategies are identified. The study is extended to 60 GHz carriers digitally modulated with data and to systems employing subcarrier multiplexed (SCM) mm-wave channels. In the second application scenario, the performance of WDM-PONs employing reflective semiconductor optical amplifiers (RSOAs), transmitting an overlay orthogonal frequency-division multiplexing (OFDM) wireless signal, is assessed analytically and experimentally, with the relevant system impairments being identified. It is demonstrated that the intermodulation due to the beating of the baseband signal and the wireless signal at the receiver can seriously impair the wireless channel.
Performance degradation of the wireless channel caused by the RSOA gain modulation owing to the downstream baseband data is also assessed, and system design guidelines are provided.
Abstract:
The meteorological and chemical transport model WRF-Chem was implemented to forecast PM10 concentrations over Poland. WRF-Chem version 3.5 was configured with three one-way nested domains, using GFS meteorological data and the TNO MACC II emissions. Forecasts with a 48 h lead time were run for a winter and a summer period of 2014. WRF-Chem in general captures the variability in observed PM10 concentrations, but underestimates some peak concentrations during winter-time. The peaks coincide either with stable atmospheric conditions during nighttime in the lower part of the planetary boundary layer or with days of very low surface temperatures. Such episodes lead to increased combustion for residential heating, for which hard coal is the main fuel in Poland. This suggests that a key to improving the model's performance for the peak concentrations is to focus on the simulation of PBL processes and on a high-resolution distribution of emissions in WRF-Chem.
Abstract:
Energy-using Products (EuPs) contribute significantly to the United Kingdom’s CO2 emissions, both in the domestic and non-domestic sectors. Policies that encourage the use of more energy efficient products (such as minimum performance standards, energy labelling, enhanced capital allowances, etc.) can therefore generate significant reductions in overall energy consumption and hence, CO2 emissions. While these policies can impose costs on the producers and consumers of these products in the short run, the process of product innovation may reduce the magnitude of these costs over time. If this is the case, then it is important that the impacts of innovation are taken into account in policy impact assessments. Previous studies have found considerable evidence of experience curve effects for EuP categories (e.g. refrigerators, televisions, etc.), with learning rates of around 20% for both average unit costs and average prices; similar to those found for energy supply technologies. Moreover, the decline in production costs has been accompanied by a significant improvement in the energy efficiency of EuPs. Building on these findings and the results of an empirical analysis of UK sales data for a range of product categories, this paper sets out an analytic framework for assessing the impact of EuP policy interventions on consumers and producers which takes explicit account of the product innovation process. The impact of the product innovation process can be seen in the continuous evolution of the energy class profiles of EuP categories over time; with higher energy classes (e.g. A, A+, etc.) entering the market and increasing their market share, while lower classes (e.g. E, F, etc.) lose share and then leave the market. Furthermore, the average prices of individual energy classes have declined over their respective lives, while new classes have typically entered the market at successively lower “launch prices”. 
Based on two underlying assumptions regarding the shapes of the “lifecycle profiles” for the relative sales and the relative average mark-ups of individual energy classes, a simple simulation model is developed that can replicate the observed market dynamics in terms of the evolution of market shares and average prices. The model is used to assess the effect of two alternative EuP policy interventions – a minimum energy performance standard and an energy-labelling scheme – on the average unit cost trajectory and the average price trajectory of a typical EuP category, and hence the financial impacts on producers and consumers.
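The lifecycle-profile idea described above can be sketched as a tiny simulation: each energy class's relative sales follow an assumed rise-and-fall profile around its launch year, and market shares are the normalised result. The triangular profile and launch years here are hypothetical, not the paper's calibrated assumptions:

```python
def relative_sales(age, peak=4, life=10):
    """Triangular lifecycle profile: ramp up to a sales peak, then decline to zero."""
    if age < 0 or age > life:
        return 0.0
    return age / peak if age <= peak else (life - age) / (life - peak)

def market_shares(year, launches):
    """Share of each energy class in a given year, given {class: launch_year}."""
    sales = {c: relative_sales(year - y) for c, y in launches.items()}
    total = sum(sales.values()) or 1.0
    return {c: s / total for c, s in sales.items()}

# Hypothetical launch years: newer (higher) classes enter later,
# gain share, and eventually displace the older ones.
launches = {"C": 2000, "B": 2004, "A": 2008}
shares = market_shares(2009, launches)
```

Layering a similar lifecycle profile for relative average mark-ups on top of this yields the average-price trajectory, and a policy intervention is then modelled by removing or delaying particular classes.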