173 results for Value analysis (Cost control)
Abstract:
Objectives Commercial sex is licensed in Victoria, Australia, such that sex workers are required to have regular tests for sexually transmitted infections (STIs). However, the incidence and prevalence of STIs in sex workers are very low, especially since there is almost universal condom use at work. We aimed to conduct a cost-effectiveness analysis of the financial cost of the testing policy versus the health benefits of averting the transmission of HIV, syphilis, chlamydia and gonorrhoea to clients. Methods We developed a simple mathematical transmission model, informed by conservative parameter estimates from all available data, linked to a cost-effectiveness analysis. Results We estimated that under current testing rates, it costs over $A90 000 in screening costs for every chlamydia infection averted (and $A600 000 in screening costs for each quality-adjusted life year (QALY) saved) and over $A4 000 000 for every HIV infection averted ($A10 000 000 in screening costs for each QALY saved). At an assumed willingness to pay of $A50 000 per QALY gained, HIV testing should be conducted no more often than approximately every 40 weeks and chlamydia testing no more often than approximately once per year; in comparison, current requirements are testing every 12 weeks for HIV and every 4 weeks for chlamydia. Conclusions Mandatory screening of female sex workers at current testing frequencies is not cost-effective for the prevention of disease in their male clients. The current testing rate required of sex workers in Victoria is excessive. Screening intervals for sex workers should be based on local STI epidemiology and not fixed by legislation.
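The interval-versus-cost trade-off in the Results can be sketched numerically. A minimal sketch, assuming a fixed per-test cost, a diminishing-returns relationship between testing frequency and infections averted, and illustrative QALY weights — none of these parameter values come from the study:

```python
import math

def cost_per_qaly(tests_per_year, test_cost=25.0,
                  max_averted_per_year=1e-4, k=0.5, qalys_per_infection=5.0):
    """Screening cost per QALY gained at a given testing frequency (illustrative only).

    Diminishing returns are modelled as 1 - exp(-k * tests_per_year): each extra
    test averts fewer additional infections, so cost per QALY rises with frequency.
    """
    infections_averted = max_averted_per_year * (1 - math.exp(-k * tests_per_year))
    annual_cost = test_cost * tests_per_year
    return annual_cost / (infections_averted * qalys_per_infection)

# Testing every 12 weeks (~4.3 tests/year) vs every 40 weeks (~1.3 tests/year)
icer_12wk = cost_per_qaly(52 / 12)
icer_40wk = cost_per_qaly(52 / 40)
assert icer_40wk < icer_12wk  # less frequent testing costs less per QALY gained
```

Under these assumptions the incremental cost per QALY grows with testing frequency, which is the mechanism behind the abstract's conclusion that very short mandated intervals are not cost-effective.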
Abstract:
The functions of the volunteer functions inventory were combined with the constructs of the theory of planned behaviour (i.e., attitudes, subjective norms, and perceived behavioural control) to establish whether a stronger, single explanatory model prevailed. Undertaken in the context of episodic, skilled volunteering by individuals who were retired or approaching retirement (N = 186), the research advances prior studies, which either examined the predictive capacity of each model independently or compared their explanatory value. Using hierarchical regression analysis, the functions of the volunteer functions inventory (when controlling for demographic variables) explained an additional 7.0% of variability in individuals’ willingness to volunteer over and above that accounted for by the theory of planned behaviour. Significant predictors in the final model included attitudes, subjective norms and perceived behavioural control from the theory of planned behaviour, and the understanding function from the volunteer functions inventory. It is proposed that the items comprising the understanding function may represent a deeper psychological construct (e.g., self-actualisation) not accounted for by the theory of planned behaviour. The findings highlight the potential benefit of combining these two prominent models in terms of improving understanding of volunteerism and providing a single parsimonious model for raising rates of this important behaviour.
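The incremental-variance logic of the hierarchical regression can be illustrated on synthetic data: fit the theory of planned behaviour block first, then add a VFI-style predictor and compare R². The variable names and effect sizes below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 186  # sample size matching the study; the data themselves are synthetic
attitude = rng.normal(size=n)
norms = rng.normal(size=n)
pbc = rng.normal(size=n)              # perceived behavioural control
understanding = rng.normal(size=n)    # VFI understanding function (assumed effect)
willingness = (0.5 * attitude + 0.3 * norms + 0.2 * pbc
               + 0.3 * understanding + rng.normal(size=n))

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_tpb = r_squared(np.column_stack([attitude, norms, pbc]), willingness)
r2_full = r_squared(np.column_stack([attitude, norms, pbc, understanding]), willingness)
delta_r2 = r2_full - r2_tpb  # incremental variance explained by the VFI block
assert delta_r2 > 0
```

The study's reported figure of 7.0% corresponds to this `delta_r2` quantity (after also controlling for demographics, which this sketch omits).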
Abstract:
In this study, the biodiesel properties and the effects of microalgae oil methyl ester/petroleum diesel blends on a direct injection compression ignition (CI) diesel engine are investigated. Oil methyl esters were obtained from the marine dinoflagellate Crypthecodinium cohnii and from waste cooking oil. The experiment was conducted using a four-cylinder, turbo-charged common rail direct injection diesel engine at four loads (25%, 50%, 75% and 100%). Three blends (10%, 20% and 50%) of microalgae oil methyl ester and a 20% blend of waste cooking oil methyl ester were compared to petroleum diesel. To establish the suitability of the fuels for a CI engine, the effects of the three microalgae fuel blends at different engine loads were assessed by measuring engine performance, i.e. indicated mean effective pressure (IMEP), brake mean effective pressure (BMEP), in-cylinder pressure, maximum pressure rise rate, brake-specific fuel consumption (BSFC), brake thermal efficiency (BTE), heat release rate and gaseous emissions (NO, NOx and unburned hydrocarbons (UHC)). Results were then compared to engine performance characteristics for operation with a 20% waste cooking oil/petroleum diesel blend and petroleum diesel. In addition, physical and chemical properties of the fuels were measured. Use of microalgae methyl ester reduced the instantaneous cylinder pressure and engine output torque, when compared to that of petroleum diesel, by a maximum of 4.5% at the 50% blend at full throttle. The lower calorific value of the microalgae oil methyl ester blends increased the BSFC, which ultimately reduced the BTE by up to 4% at higher loads. Minor reductions of IMEP and BMEP were recorded for both the microalgae and the waste cooking oil methyl ester blends at low loads, with a maximum of 7% reduction at 75% load compared to petroleum diesel. Furthermore, compared to petroleum diesel, gaseous emissions of NO and NOx increased for operation with biodiesel blends. At full load, NO and NOx emissions increased by 22% when 50% microalgae blends were used.
Petroleum diesel and a 20% blend of waste cooking oil methyl ester had similar UHC emissions, but those of the microalgae oil methyl ester/petroleum diesel blends were reduced by at least 50% for all blends and engine conditions. The tested microalgae methyl esters contain some long-chain, polyunsaturated fatty acid methyl esters (FAMEs) (C22:5 and C22:6) not commonly found in terrestrial-crop-derived biodiesels, yet all fuel properties met, or were very close to, the ASTM 6751-12 and EN 14214 standards. Therefore, Crypthecodinium cohnii-derived microalgae biodiesel/petroleum diesel blends of up to 50% are projected to meet all fuel property standards, and the engine performance and emission results from this study clearly show their suitability for regular use in diesel engines.
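The BSFC/BTE relationship the abstract relies on is simple arithmetic: brake thermal efficiency is the energy delivered per kWh (3.6 MJ) divided by the fuel energy consumed to produce it, so a fuel with a lower calorific value needs a higher BSFC and ends up with a lower BTE. A sketch with illustrative calorific values and consumption figures (not the measured ones from the study):

```python
def brake_thermal_efficiency(bsfc_g_per_kwh, lhv_mj_per_kg):
    """BTE = energy output per kWh (3.6 MJ) / fuel energy consumed per kWh."""
    fuel_energy_mj = bsfc_g_per_kwh / 1000.0 * lhv_mj_per_kg  # g -> kg, then MJ
    return 3.6 / fuel_energy_mj

# Illustrative values: the biodiesel blend's lower calorific value raises BSFC,
# which more than offsets it and lowers BTE, as the abstract describes.
bte_diesel = brake_thermal_efficiency(bsfc_g_per_kwh=210, lhv_mj_per_kg=42.5)
bte_blend = brake_thermal_efficiency(bsfc_g_per_kwh=228, lhv_mj_per_kg=39.5)
assert bte_blend < bte_diesel
```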
Abstract:
Air transport is a critical link to regional, rural and remote communities in Australia. Air services provide important economic and social benefits, but very little research has been done on assessing the value of regional aviation. This research provides the first empirical evidence of short- and long-run causality between regional aviation and economic growth. The authors analysed 88 regional airports in Australia over the period 1985–86 to 2010–11 to determine the catalytic impacts of regional air transport on regional economic growth. The analysis was conducted using annual data on total airport passenger movements (representing the level of airport activity) and real aggregate taxable income (representing economic growth). A significant bi-directional relationship was established: airports have an impact on regional economic growth, and the economy directly impacts regional air transport. The economic significance of regional air transport confirms the importance of the airport as infrastructure for regional councils and the need for them to maintain and develop local airports. Funding should be targeted at airports directly to support regional development.
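Short-run causality findings like this are typically established with a Granger-style comparison of a restricted regression (a series' own past only) against an unrestricted one (adding the other series' past). A one-lag sketch on synthetic series — the study's actual econometric specification is not reproduced here:

```python
import numpy as np

def granger_f(y, x):
    """One-lag Granger-style F-statistic: does x[t-1] improve prediction of y[t]
    beyond y[t-1]? A sketch of the causality idea, not the study's full model."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    Y, ylag, xlag = y[1:], y[:-1], x[:-1]

    def rss(cols):
        X = np.column_stack([np.ones(len(Y))] + cols)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return float(r @ r)

    rss_r = rss([ylag])        # restricted: y's own past only
    rss_u = rss([ylag, xlag])  # unrestricted: add x's past
    return (rss_r - rss_u) / (rss_u / (len(Y) - 3))

# Synthetic example: "income" (y) driven by lagged "passenger movements" (x)
rng = np.random.default_rng(1)
x = rng.normal(size=300).cumsum()
y = np.empty(300)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + rng.normal(size=299)

assert granger_f(y, x) > granger_f(x, y)  # causality runs x -> y in this toy data
```

A bi-directional relationship, as reported in the abstract, would show large F-statistics in both directions.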
Abstract:
Ever-growing populations in cities are associated with a major increase in road vehicles and air pollution. The overall high levels of urban air pollution have been shown to pose a significant risk to city dwellers. However, the impacts of very high but temporally and spatially restricted pollution, and thus exposure, are still poorly understood. Conventional approaches to air quality monitoring are based on networks of static and sparse measurement stations. However, these are prohibitively expensive to deploy at the density needed to capture spatio-temporal heterogeneity and identify pollution hotspots, which is required for the development of robust real-time strategies for exposure control. Current progress in developing low-cost micro-scale sensing technology is radically changing the conventional approach to allow real-time information in a capillary form. But the question remains whether there is value in the less accurate data they generate. This article illustrates the drivers behind current rises in the use of low-cost sensors for air pollution management in cities, whilst addressing the major challenges for their effective implementation.
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. 
This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
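The strategy comparison can be sketched with a toy discrete-time stochastic Lotka-Volterra model. All rates, starting populations and thresholds below are made up for illustration; the paper's model and budget constraints are richer:

```python
import random

def min_prey_under(strategy, steps=200, seed=42):
    """Toy stochastic Lotka-Volterra run; returns the minimum prey population seen.
    All rates and starting values are illustrative, not the paper's parameters."""
    rng = random.Random(seed)
    prey, pred = 100.0, 20.0
    min_prey = prey
    for _ in range(steps):
        prey += 0.1 * prey - 0.01 * prey * pred + rng.gauss(0, 1)
        pred += 0.005 * prey * pred - 0.2 * pred + rng.gauss(0, 0.5)
        pred = max(pred - strategy(pred), 0.0)  # management removal
        prey = max(prey, 0.0)
        min_prey = min(min_prey, prey)
    return min_prey

no_control = lambda p: 0.0
eradication = lambda p: p                    # remove every predator immediately
upper_trigger = lambda p: max(p - 15.0, 0.0)  # cull only when above a threshold

# With removal targets always met, immediate eradication protects the prey best,
# matching the paper's first scenario; the paper's key result is that this
# ranking changes once removal success declines at low predator density.
assert min_prey_under(eradication) >= min_prey_under(upper_trigger)
```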
Abstract:
The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or on setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of absent surveys required to minimize the net expected cost. Given detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution using stochastic dynamic programming. Application of the approach to the eradication programme of Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter. © 2006 Blackwell Publishing Ltd/CNRS.
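The stopping rule can be sketched with Bayes' rule: each survey that finds nothing shrinks the probability the species persists, and surveying stops once the expected escape damage no longer justifies the survey cost. A simplified sketch with illustrative parameters (the paper's stochastic dynamic programme also handles discounting and parameter ranges):

```python
def posterior_present(prior, detect_prob, n_absent):
    """P(species still present | n consecutive surveys found nothing)."""
    miss = (1 - detect_prob) ** n_absent
    return prior * miss / (prior * miss + (1 - prior))

def stopping_time(prior, detect_prob, survey_cost, escape_cost):
    """Years of absent surveys before expected damage drops below survey cost."""
    n = 0
    while posterior_present(prior, detect_prob, n) * escape_cost > survey_cost:
        n += 1
    return n

# Illustrative: 60% annual detection probability, $1000 surveys, $1M escape damage
years = stopping_time(prior=0.5, detect_prob=0.6, survey_cost=1000.0, escape_cost=1e6)
assert years == 8
# Higher damage from premature declaration warrants more years of surveying
assert stopping_time(0.5, 0.6, 1000.0, 1e7) > years
```

This captures the trade-off in the abstract: declaring eradication too soon risks escape and damage, while surveying forever wastes money.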
Abstract:
To produce commercially valuable ketocarotenoids in Solanum tuberosum, the 4,4′ β-oxygenase (crtW) and 3,3′ β-hydroxylase (crtZ) genes from Brevundimonas spp. have been expressed in the plant host under constitutive transcriptional control. The CRTW and CRTZ enzymes are capable of modifying endogenous plant carotenoids to form a range of hydroxylated and ketolated derivatives. The host (cv. Désirée) produced significant levels of nonendogenous carotenoid products in all tissues, but at the apparent expense of the economically critical metabolite, starch. Carotenoid levels increased in both wild-type and transgenic tubers following cold storage; however, stability during heat processing varied between compounds. Subcellular fractionation of leaf tissues revealed the presence of ketocarotenoids in thylakoid membranes, but not predominantly in the photosynthetic complexes. A dramatic increase in the carotenoid content of plastoglobuli was determined. These findings were corroborated by microscopic analysis of chloroplasts. In tuber tissues, esterified carotenoids, representing 13% of the total pigment found in wild-type extracts, were sequestered in plastoglobuli. In the transgenic tubers, this proportion increased to 45%, with esterified nonendogenous carotenoids in place of endogenous compounds. Conversely, nonesterified carotenoids in both wild-type and transgenic tuber tissues were associated with amyloplast membranes and starch granules.
Abstract:
The structural features of fatty acids in biodiesel, including degree of unsaturation, percentage of saturated fatty acids and average chain length, influence important fuel properties such as cetane number, iodine value, density, kinematic viscosity, higher heating value and oxidation stability. The composition of fatty acid esters within the fuel should therefore be in the correct ratio to ensure fuel properties are within international biodiesel standards such as ASTM 6751 or EN 14214. This study scrutinises the influence of fatty acid composition and individual fatty acids on fuel properties. Fuel properties were estimated based on published equations, and measured according to standard procedures ASTM D6751 and EN 14214 to confirm the influences of the fatty acid profile. Based on fatty acid profile-derived calculations, the cetane number of the microalgal biodiesel was estimated to be 11.6, but measured 46.5, which emphasises the uncertainty of the method used for cetane number calculation. Multi-criteria decision analysis (MCDA), PROMETHEE-GAIA, was used to determine the influence of individual fatty acids on fuel properties in the GAIA plane. Polyunsaturated fatty acids increased the iodine value and had a negative influence on cetane number. Kinematic viscosity was negatively influenced by some long chain polyunsaturated fatty acids such as C20:5 and C22:6 and some of the more common saturated fatty acids C14:0 and C18:0. The positive impact of average chain length on higher heating value was also confirmed in the GAIA plane.
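The profile-based estimation idea can be sketched as a mass-fraction-weighted average of per-ester property values. The per-FAME cetane numbers below are assumed for illustration and are not the published coefficients the study used (published equations often also involve iodine value and saponification number):

```python
# Illustrative per-ester cetane numbers (assumed values, not the study's).
# Saturated FAMEs tend to be high; long-chain PUFAs like C20:5/C22:6 are low.
CETANE = {"C14:0": 66.2, "C16:0": 74.5, "C18:0": 86.9, "C18:1": 55.0,
          "C20:5": 21.0, "C22:6": 19.0}

def estimate_cetane(profile):
    """Profile-weighted cetane estimate; profile maps FAME -> mass fraction (~sums to 1)."""
    return sum(frac * CETANE[fame] for fame, frac in profile.items())

saturated_rich = {"C16:0": 0.5, "C18:0": 0.5}
pufa_rich = {"C20:5": 0.5, "C22:6": 0.5}
# Polyunsaturated profiles depress the estimated cetane number, as the GAIA
# analysis in the abstract indicates.
assert estimate_cetane(pufa_rich) < estimate_cetane(saturated_rich)
```

The large gap the abstract reports between estimated (11.6) and measured (46.5) cetane number illustrates why such weighted-profile estimates must be validated against standard-procedure measurements.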
Abstract:
One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. As design criteria transition from empirical to mechanistic-empirical, soil test methods and equipment that measure properties such as stiffness and modulus and how they relate to Florida materials are needed. Requirements for the selected equipment are that they be portable, cost-effective, reliable, accurate, and repeatable. A second objective is that the selected equipment measures soil properties without the use of nuclear materials. The current device used to measure soil compaction is the nuclear density gauge (NDG). Equipment evaluated in this research included lightweight deflectometers (LWD) from different manufacturers, a dynamic cone penetrometer (DCP), a GeoGauge, a Clegg impact soil tester (CIST), a Briaud compaction device (BCD), and a seismic pavement analyzer (SPA). Evaluations were conducted over ranges of measured densities and moistures. Testing (Phases I and II) was conducted in a test box and test pits. Phase III testing was conducted on materials found on five construction projects located in the Jacksonville, Florida, area. Phase I analyses determined that the GeoGauge had the lowest overall coefficient of variance (COV). In ascending order of COV were the accelerometer-type LWD, the geophone-type LWD, the DCP, the BCD, and the SPA, which had the highest overall COV. As a result, the BCD and the SPA were excluded from Phase II testing. In Phase II, measurements obtained from the selected equipment were compared to the modulus values obtained by the static plate load test (PLT), the resilient modulus (MR) from laboratory testing, and the NDG measurements. To minimize soil and moisture content variability, the single spot testing sequence was developed.
At each location, test results obtained from the portable equipment under evaluation were compared to the values from adjacent NDG, PLT, and laboratory MR measurements. Correlations were developed through statistical analysis. Target values were developed for various soils for verification on similar soils that were field tested in Phase III. The single spot testing sequence was also employed in Phase III, with field testing performed on A-3 and A-2-4 embankments, limerock-stabilized subgrade, limerock base, and graded aggregate base found on Florida Department of Transportation construction projects. The Phase II and Phase III results provided potential trend information for future research—specifically, data collection for in-depth statistical analysis for correlations with the laboratory MR for specific soil types under specific moisture conditions. With the collection of enough data, stronger relationships could be expected between measurements from the portable equipment and the MR values. Based on the statistical analyses and the experience gained from extensive use of the equipment, the combination of the DCP and the LWD was selected for in-place soil testing for compaction control acceptance. Test methods and developmental specifications were written for the DCP and the LWD. The developmental specifications include target values for the compaction control of embankment, subgrade, and base materials.
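The repeatability ranking in Phase I rests on the coefficient of variation (COV): standard deviation as a percentage of the mean over repeated readings. A minimal sketch with synthetic modulus readings (the device values below are invented):

```python
import statistics

def cov_percent(readings):
    """Coefficient of variation: the repeatability metric used to rank the devices."""
    return 100 * statistics.stdev(readings) / statistics.mean(readings)

# Synthetic repeated readings (illustrative units): a tightly clustered device
# versus a scattered one. The GeoGauge ranked lowest-COV in Phase I.
device_a = [12.1, 12.3, 12.0, 12.2]
device_b = [11.0, 14.5, 9.8, 13.9]
assert cov_percent(device_a) < cov_percent(device_b)  # lower COV = more repeatable
```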
Abstract:
In 2005, Ginger Myles and Hongxia Jin proposed a software watermarking scheme based on converting jump instructions or unconditional branch statements (UBSs) into calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and integrity check. If the program is tampered with, the fingerprint and integrity checks change and the target address will not be computed correctly. In this paper, we present an attack based on tracking stack pointer modifications to break the scheme and provide implementation details. The key element of the attack is to remove the fingerprint and integrity check generating code from the program after disassociating the target address from the fingerprint and integrity value. Using debugging tools that give the attacker extensive control to track stack pointer operations, we perform both subtractive and watermark replacement attacks. The major steps in the attack are automated, resulting in a fast and low-cost attack.
Abstract:
We study the influence of the choice of template in tensor-based morphometry. Using 3D brain MR images from 10 monozygotic twin pairs, we defined a tensor-based distance in the log-Euclidean framework [1] between each image pair in the study. Relative to this metric, twin pairs were found to be closer to each other on average than random pairings, consistent with evidence that brain structure is under strong genetic control. We also computed the intraclass correlation and associated permutation p-value at each voxel for the determinant of the Jacobian matrix of the transformation. The cumulative distribution function (CDF) of the p-values was found at each voxel for each of the templates and compared to the null distribution. Surprisingly, there was very little difference between CDFs of statistics computed from analyses using different templates. As the brain with least log-Euclidean deformation cost, the mean template defined here avoids the blurring caused by creating a synthetic image from a population, and when selected from a large population, avoids bias by being geometrically centered, in a metric that is sensitive enough to anatomical similarity that it can even detect genetic affinity among anatomies.
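The log-Euclidean distance used in this framework is the Frobenius norm of the difference of matrix logarithms of symmetric positive-definite (deformation) tensors. A sketch via eigendecomposition, using small diagonal tensors as stand-ins for the study's deformation data:

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(S1, S2):
    """Log-Euclidean metric: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(spd_log(S1) - spd_log(S2), ord="fro")

A = np.diag([2.0, 1.0, 1.0])
B = np.diag([4.0, 1.0, 1.0])
I = np.eye(3)
assert np.isclose(log_euclidean_distance(A, I), np.log(2))  # |log 2 - log 1|
assert log_euclidean_distance(A, B) < log_euclidean_distance(I, B)
```

Under this metric, "closer" twin pairs means smaller deformation distances between their images, which is what the genetic-affinity finding in the abstract rests on.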
Abstract:
Monitoring pedestrian and cyclist movement is an important area of research in transport, crowd safety, urban design and human behaviour assessment. Media Access Control (MAC) address data have recently been used as a potential source of information for extracting features of people's movement. MAC addresses are unique identifiers of the WiFi and Bluetooth wireless interfaces in smart electronic devices such as mobile phones, laptops and tablets. The unique number of each WiFi and Bluetooth MAC address can be captured and stored by MAC address scanners. MAC address data thus allow for unannounced, non-participatory tracking of people. The use of MAC data for tracking people has recently been applied to mass events, shopping centres, airports, train stations, etc. For travel time estimation, setting up a scanner with a high antenna gain is usually recommended for highways and main roads to track vehicle movements, whereas high gains can have drawbacks in the case of pedestrians and cyclists. Pedestrians and cyclists mainly move in built-up areas and city pathways where there is significant noise from other, fixed WiFi and Bluetooth devices. High antenna gains cover wide areas, which results in scanning more samples from pedestrians' and cyclists' MAC devices. However, anomalies (such as fixed devices) may also be captured, which increases the complexity and processing time of data analysis. On the other hand, low-gain antennas yield fewer anomalies in the data, but at the cost of a lower overall sample size of pedestrian and cyclist data. This paper studies the effect of antenna characteristics on MAC address data for travel-time estimation for pedestrians and cyclists. The results of the empirical case study compare the effects of low and high antenna gains in order to suggest an optimal setup for increasing the accuracy of pedestrians' and cyclists' travel-time estimation.
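Travel-time estimation from MAC detections amounts to matching device identifiers between two scanners and differencing timestamps. A minimal sketch with invented detection tuples (real pipelines also filter anomalies such as fixed devices, which is the paper's point about antenna gain):

```python
from statistics import median

def travel_times(upstream, downstream):
    """Match (mac, timestamp) detections between two scanners; return per-device
    travel times in the same time units as the timestamps."""
    first_up = {}
    for mac, t in upstream:
        first_up.setdefault(mac, t)  # keep the first upstream sighting per device
    times = []
    for mac, t in downstream:
        if mac in first_up and t > first_up[mac]:
            times.append(t - first_up[mac])
    return times

# Invented detections in seconds; dd:4 was never seen upstream and is dropped.
up = [("aa:1", 0), ("bb:2", 5), ("cc:3", 8)]
down = [("aa:1", 120), ("bb:2", 140), ("dd:4", 150)]
tt = travel_times(up, down)
assert len(tt) == 2 and median(tt) == 127.5
```

A higher-gain antenna raises the chance each device appears at both scanners (a bigger `tt`), but also adds fixed-device noise that must be filtered out before the median is trustworthy.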
Abstract:
The design-build (DB) delivery method has been widely used in the United States due to its reputed superior cost and time performance. However, rigorous studies have produced inconclusive support and only in terms of overall results, with few attempts being made to relate project characteristics with performance levels. This paper provides a larger and more finely grained analysis of a set of 418 DB projects from the online project database of the Design-Build Institute of America (DBIA), in terms of the time-overrun rate (TOR), early start rate (ESR), early completion rate (ECR) and cost overrun rate (COR) associated with project type (e.g., commercial/institutional buildings and civil infrastructure projects), owners (e.g., Department of Defense and private corporations), procurement methods (e.g., ‘best value with discussion’ and qualifications-based selection), contract methods (e.g., lump sum and GMP) and LEED levels (e.g., gold and silver). The results show ‘best value with discussion’ to be the dominant procurement method and lump sum the most frequently used contract method. The DB method provides relatively good time performance, with more than 75% of DB projects completed on time or before schedule. However, with more than 50% of DB projects overrunning on cost, the DB advantage of cost saving remains uncertain. ANOVA tests indicate that DB projects within different procurement methods have significantly different time performance and that different owner types and contract methods significantly affect cost performance. In addition to contributing to empirical knowledge concerning the cost and time performance of DB projects with new solid evidence from a large sample size, the findings and practical implications of this study are beneficial to owners in understanding the likely schedule and budget implications involved for their particular project characteristics.
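The ANOVA tests reported here compare between-group to within-group variance of overrun rates across project categories. A self-contained one-way F-statistic sketch on invented overrun data (not the DBIA sample):

```python
def anova_f(groups):
    """One-way ANOVA F-statistic: between-group over within-group mean squares."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented cost-overrun rates for two contract methods: clearly separated
# groups give a large F; overlapping groups give a small one.
separated = anova_f([[0.05, 0.06, 0.04], [0.12, 0.13, 0.11]])
overlapping = anova_f([[0.05, 0.12, 0.06], [0.04, 0.13, 0.11]])
assert separated > 10 > overlapping
```

A large F (compared against the F-distribution's critical value) is what licenses statements like "contract methods significantly affect cost performance".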
Abstract:
AIMS: To examine changes in illicit drug consumption between peak holiday season (23 December-3 January) in Australia and a control period two months later in a coastal urban area, an inland semi-rural area and an island populated predominantly by vacationers during holidays. DESIGN: Analysis of representative daily composite wastewater samples collected from the inlet of the major wastewater treatment plant in each area. SETTING: Three wastewater treatment plants. PARTICIPANTS: Wastewater treatment plants serviced approximately 350,000 persons in the urban area, 120,000 in the semi-rural area and 1100-2400 on the island. MEASUREMENTS: Drug residues were analysed using liquid chromatography coupled to a tandem mass spectrometer. Per capita drug consumption was estimated. Changes in drug use were quantified using Hedges' g. FINDINGS: During the holidays, cannabis consumption in the semi-rural area declined (g = -2.8) as did methamphetamine (-0.8), whereas cocaine (+1.5) and ecstasy (+1.6) use increased. In the urban area, consumption of all drugs increased during holidays (cannabis +1.6, cocaine +1.2, ecstasy +0.8 and methamphetamine +0.3). In the vacation area, methamphetamine (+0.7), ecstasy (+0.7) and cocaine (+1.1) use increased, but cannabis (-0.5) use decreased during holiday periods. CONCLUSIONS: While the peak holiday season in Australia is perceived as a period of increased drug use, this is not uniform across all drugs and areas. Substantial declines in drug use in the semi-rural area contrasted with substantial increases in urban and vacation areas. Per capita drug consumption in the vacation area was equivalent to that in the urban area, implying that these locations merit particular attention for drug use monitoring and harm minimisation measures.
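Wastewater-based estimates back-calculate per-capita consumption from influent concentration and flow, and the holiday-versus-control comparison uses Hedges' g. A sketch with invented numbers (the flow, concentration and the excretion-correction factor are assumptions, not the study's values):

```python
def per_capita_load_mg(conc_ng_per_l, flow_l_per_day, correction, population):
    """Back-calculate consumption (mg/day per 1000 people) from influent data.
    `correction` folds in excretion fraction and in-sewer stability (assumed here)."""
    load_mg = conc_ng_per_l * flow_l_per_day * correction / 1e6  # ng -> mg
    return load_mg / population * 1000

def hedges_g(a, b):
    """Standardised difference between holiday (a) and control (b) daily loads,
    with the usual small-sample bias correction."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    g = (ma - mb) / sp
    return g * (1 - 3 / (4 * (na + nb) - 9))

# Invented influent data: 1000 ng/L at 1e8 L/day serving 350,000 people
dose = per_capita_load_mg(1000, 1e8, correction=1.0, population=350_000)
assert abs(dose - 1e5 / 350) < 1e-6  # ~285.7 mg/day per 1000 people

# Invented holiday vs control daily loads: positive g = higher holiday use
assert abs(hedges_g([3, 4, 5], [1, 2, 3]) - 1.6) < 1e-9
```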