185 results for Cost analysis
Abstract:
Researching administrative history is problematic. A trail of authoritative documents is often hard to find, and useful summaries can be difficult to organise, especially if source material is in paper formats in geographically dispersed locations. In the absence of documents, the reasons for particular decisions and the rationale underpinning particular policies can be obscured as key personnel advance in their professions and retire. The rationale for past decisions may be lost for practical purposes; and if an organisation's memory of events is diminished, its learning through experience is also diminished. Publishing this document aims to avoid unnecessary duplication of effort by other researchers who need to investigate how policies of charging for public sector information have been justified. The author compiled this work within a limited time period, and it does not pretend to be a complete or comprehensive analysis of the issues.

A significant part of the role of government is to provide a framework of legally enforceable rights and obligations that can support individuals and non-government organisations in their lawful activities. Accordingly, claims that governments should be more 'business-like' need careful scrutiny. A significant supply of goods and services occurs as non-market activity where neither benefits nor costs are quantified within conventional accounting systems or in terms of money. Where a government decides to provide information as a service (information from land registries is archetypical), the transactions occur as a political decision made under a direct or clearly delegated authority of a parliament with the requisite constitutional powers. This is not a market transaction, and the language of the market confuses attempts to describe how governments allocate resources.

Cost recovery can be construed as an aspect of taxation, which is a sole prerogative of a parliament. The issues are fundamental to political constitutions, but they become more complicated where states cede some taxing powers to a central government as part of a federal system. Nor should the absence of markets necessarily be construed as 'market failure' or even 'government failure': the absence is often attributable to particular technical, economic and political constraints that preclude the operation of markets. Arguably, greater care is needed in distinguishing between the polity and markets in raising revenues and allocating resources, and that needs to start by removing unhelpful references to 'business' in the context of government decision-making.
Abstract:
This paper focuses on the varying approaches and methodologies adopted in calculating holding costs, with a focus on greenfield development. While there may be some consistency in embracing first principles of holding cost theory, a review of the literature reveals a considerable lack of uniformity in this regard. There is even less clarity in quantitative determination, especially in Australia, where only limited empirical analysis has been undertaken. Despite a growing quantum of research on various elements of housing affordability, the matter of holding costs has not been well addressed, despite its place in the Australian Government's highly prioritised housing research agenda. The end result has been a modicum of qualitative commentary on holding costs, with few attempts at finer-grained analysis that exposes a quantified level of holding cost calculated with underlying rigour. Holding costs can take many forms, but they inevitably involve the computation of "carrying costs" of an initial outlay that has yet to fully realise its ultimate yield. Although sometimes considered a "hidden" cost, it is submitted that holding costs prospectively represent a major determinant of value. If so, then considered in the context of housing affordability, they are potentially pervasive.
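As a rough illustration of the "carrying cost" idea above, the sketch below compounds a cost of capital on an initial development outlay over the holding period. The outlay, rate and period are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch: holding ("carrying") cost as the opportunity cost of an
# initial outlay compounding over the holding period. Figures are illustrative
# assumptions, not values from the paper.

def holding_cost(outlay: float, annual_rate: float, years: float) -> float:
    """Compound carrying cost of capital tied up until yield is realised."""
    return outlay * ((1 + annual_rate) ** years - 1)

# Example: $500,000 of land and infrastructure outlay held for 3 years
# at a 7% cost of capital while approvals and construction proceed.
cost = holding_cost(500_000, 0.07, 3)
print(f"Carrying cost over holding period: ${cost:,.0f}")  # ~= $112,522
```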
Abstract:
Over recent decades there has been growing interest in the role of non-motorized modes in the overall transport system (especially walking and cycling for private purposes), and many government initiatives have been taken to encourage these active modes. However, relatively little research attention has been given to the paid form of non-motorized travel, which can be called non-motorized public transport (NMPT). This involves cycle-powered vehicles that can carry several passengers (plus the driver) and a small amount of goods, and that provide flexible hail-and-ride services; effectively, they are non-motorized taxis. Common forms include the cycle-rickshaw (Bangladesh, India), becak (Indonesia), cyclo (Vietnam, Cambodia), bicitaxi (Colombia, Cuba), velo-taxi (Germany, Netherlands), and pedicab (UK, Japan, USA).

The popularity of NMPT is widespread in developing countries, where it caters for a wide range of mobility needs. For instance, in Dhaka, Bangladesh, rickshaws are the preferred mode for non-walk trips and have a higher mode share than cars or buses. Factors that underlie the continued existence and popularity of NMPT in many developing countries include its positive contribution to social equity, its micro- and macro-economic significance, employment creation, and suitability for narrow and crowded streets. Although top speeds are lower than motorized modes, NMPT is competitive and cost-effective for the short-distance door-to-door trips that make up the bulk of travel in many developing cities. In addition, NMPT is often the preferred mode for vulnerable groups such as women, children and elderly people. NMPT is more prominent in developing countries, but its popularity and significance are also gradually increasing in several developed countries of Asia and Europe and parts of North America, where NMPT usage has tended to broaden from tourism to public transport. This shift is due to a number of factors, including the eco-sustainable nature of NMPT; its operating flexibility (such as in areas where motorized vehicle access is restricted or discouraged through pricing); and the dynamics it adds to the urban fabric. Whereas NMPT may once have been seen as a "dying" mode, in many cities it is maintaining or increasing its significance, with potential for further growth.

This paper will examine and analyze global trends in NMPT in both developing and developed country contexts, covering issues such as usage patterns; NMPT policy and management practices; technological development; and the operational integration of NMPT into the overall transport system. It will look at how NMPT policies, practices and usage have changed over time and at the differing trends in developing and developed countries. In particular, it will use Dhaka, Bangladesh as a case study in recognition of its standing as the major NMPT city in the world. The aim is to highlight NMPT issues and trends and their significance for shaping future policy towards NMPT in developing and developed countries. The paper will be of interest to transport planners, traffic engineers, urban and regional planners, environmentalists, economists and policy makers.
Abstract:
Highway construction often requires significant capital input and therefore often has serious financial implications for developers, owners and operators. The recent industry-wide focus on sustainability has added a new dimension to the evaluation of highway projects, particularly the economics of 'going green'. Comprehensive analysis of whole-of-life highway development that responds to sustainability challenges is one of the primary concerns for stakeholders. Principles of engineering economics and life cycle costing have been used to determine the incremental capacity investments for highway projects. However, the consideration of costs and issues associated with sustainability is still very limited in current studies on highway projects. Previous studies have identified that highway project investments are primarily concerned with direct market costs that can be quantified through life cycle costing analysis (LCCA), but they tend to ignore costs that are difficult to calculate, such as those related to environmental and social elements. On a more positive note, these studies showed that the inclusion of such costs is an essential part of the overall development investment and a primary concern for decision making by stakeholders. This paper discusses a research attempt to identify and categorise sustainability cost elements for highway projects. Through a questionnaire survey, a set of sustainability cost elements for highway projects has been proposed. These cost elements are incorporated as extensions of some existing LCCA models in order to produce a holistic financial picture of a highway project. It is expected that a new LCCA model will be established to serve as a suitable decision-making tool for highway project stakeholders.
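To make the LCCA idea concrete, here is a minimal sketch of a whole-of-life cost calculation that discounts conventional and sustainability-related cost streams to a present value. The cost categories, figures and discount rate are hypothetical; the paper's proposed model and its survey-derived cost elements may differ.

```python
# Sketch of an extended life cycle costing analysis (LCCA): the present value
# of conventional and sustainability-related cost streams. The cost categories
# and figures are illustrative assumptions, not the paper's proposed model.

def present_value(cost: float, year: int, discount_rate: float) -> float:
    return cost / (1 + discount_rate) ** year

# year -> {category: cost} over a notional 30-year analysis period
cash_flows = {
    0:  {"construction": 40_000_000},
    10: {"resurfacing": 3_000_000, "carbon_offset": 400_000},
    20: {"resurfacing": 3_000_000, "noise_mitigation": 250_000},
}

rate = 0.05
lcc = sum(
    present_value(cost, year, rate)
    for year, items in cash_flows.items()
    for cost in items.values()
)
print(f"Whole-of-life cost (NPV at {rate:.0%}): ${lcc:,.0f}")
```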
Abstract:
It is now well known that pesticide spraying by farmers has an adverse impact on their health. This is especially so in developing countries, where pesticide spraying is undertaken manually, and the estimated health costs are large. Studies to date have examined farmers' exposure to pesticides, the costs of ill-health and their determinants based on information provided by farmers; hence, some doubt has been cast on the reliability of such studies. In this study, we rectify this situation by conducting surveys among two groups of farmers: farmers who perceive that their ill-health is due to exposure to pesticides and who obtained treatment, and farmers whose ill-health has been diagnosed by doctors and who have been treated in hospital for exposure to pesticides. In the paper, cost comparisons between the two groups of farmers are made. Furthermore, regression analysis of the determinants of health costs shows that the quantity of pesticides used per acre per month, the frequency of pesticide use and the number of pesticides used per hour per day are the most important determinants of medical costs for both samples. The results have important policy implications.
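A minimal sketch of the kind of regression described above, using ordinary least squares on synthetic data; the variable names mirror the stated determinants, but the dataset and model specification are hypothetical.

```python
# Illustrative regression of medical costs on pesticide-use determinants,
# mirroring the kind of analysis described above. Column names and data are
# hypothetical; the paper's actual dataset and specification may differ.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "qty_per_acre_month": rng.gamma(2.0, 1.5, n),   # quantity of pesticide
    "spray_frequency":    rng.poisson(4, n),        # applications per month
    "pesticides_per_day": rng.integers(1, 4, n),    # distinct products used
})
df["medical_cost"] = (
    150 * df["qty_per_acre_month"]
    + 80 * df["spray_frequency"]
    + 60 * df["pesticides_per_day"]
    + rng.normal(0, 100, n)
)

X = sm.add_constant(df[["qty_per_acre_month", "spray_frequency",
                        "pesticides_per_day"]])
model = sm.OLS(df["medical_cost"], X).fit()
print(model.summary())
```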
Abstract:
Greyback canegrubs cost the Australian sugarcane industry around $13 million per annum in damage and control. A novel and cost-effective biocontrol bacterium could play an important role in the integrated pest management program currently in place to reduce damage and the associated control costs. During the course of this project, terminal restriction fragment length polymorphism (TRFLP), 16S rDNA cloning, suppressive subtractive hybridisation (SSH) and entomopathogen-specific PCR screening were used to investigate the little-studied canegrub-associated microflora in an attempt to discover novel pathogens from putatively diseased specimens. The microflora associated with these soil-dwelling insects was found to be both highly diverse and divergent between individual specimens. Dominant members detected in live specimens were predominantly from taxa of known insect symbionts, while dominant sequences amplified from dead grubs were homologous to putatively saprophytic bacteria and bacteria able to grow during refrigeration. A number of entomopathogenic bacteria were identified, such as Photorhabdus luminescens and Pseudomonas fluorescens. Dead canegrubs need to be analysed prior to decomposition if these bacteria are to be isolated. Novel strategies to enrich putative pathogen-associated sequences (SSH and PCR screening) were shown to be promising approaches for pathogen discovery and the investigation of canegrub-associated microflora. However, due to inter- and intra-grub community diversity, dead-grub decomposition and PCR-specific methodological limitations (PCR bias, primer specificity, BLAST database restrictions, 16S gene copy number and heterogeneity), recommendations have been made to improve the efficiency of such techniques. Improved specimen collection procedures and the utilisation of emerging high-throughput sequencing technologies may be required to examine these complex communities in more detail. This is the first study to perform a whole-grub analysis and comparison of greyback canegrub-associated microbial communities. This work also describes the development of a novel V3-PCR-based SSH technique, the first SSH technique to use V3-PCR products as starting material and to specifically compare the bacterial species present in a complex community.
Abstract:
Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions, in the areas of developing low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that resulted in this thesis; this should serve to inform their activities applicable to this research. A number of HVs were tested for various characteristics, and these tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A "novel roughness" measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes; further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to conclusions regarding suspension and wheel force frequencies, their transmission to the pavement, and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented. It used the correlation coefficient between two elements of a HV to determine dynamic load sharing, and was validated against a mature dynamic load-sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on a HV had been used for a test case vs. a control case for two different-sized air lines. Dynamic load sharing was shown to be improved at the air springs for the test case of the large longitudinal air lines; the statistically significant improvement at the air springs varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air-line flow events in the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and "novel roughness" values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces, ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing from alterations to currently available damping characteristics combined with the implementation of large longitudinal air lines.
In-service testing of HV suspensions was found to be possible within a documented range from below the error margin to an error of approximately 16 percent. These results were in comparison with either the manufacturer’s certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11. OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
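A sketch of the two load-sharing measures mentioned above, applied to synthetic wheel-force signals: the correlation coefficient between two axle-group elements, and one common formulation of a load sharing coefficient. The signal model is an assumption, and the exact form of de Pont's dynamic load sharing coefficient used in the thesis may differ.

```python
# Sketch of two dynamic load-sharing measures for a multi-axle group, in the
# spirit of the approach described above: (a) the correlation coefficient
# between the wheel-force signals of two axle-group elements, and (b) a common
# formulation of the load sharing coefficient (mean share of the group's
# instantaneous load carried by one wheel, relative to an even static share).
# The signals here are synthetic; real data would come from wheel-force
# transducers or strain gauges.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)                          # 10 s at 200 Hz
base = 45_000 + 4_000 * np.sin(2 * np.pi * 1.5 * t)   # shared body-bounce load
f1 = base + rng.normal(0, 800, t.size)                # wheel force, element 1 (N)
f2 = base + rng.normal(0, 800, t.size)                # wheel force, element 2 (N)

# (a) correlation coefficient between the two elements' wheel forces:
# values near 1 indicate the elements load-share dynamically.
r = np.corrcoef(f1, f2)[0, 1]

# (b) load sharing coefficient for element 1 (perfect sharing -> 1.0).
n_elements = 2
lsc1 = np.mean(n_elements * f1 / (f1 + f2))

print(f"correlation coefficient: {r:.3f}, LSC element 1: {lsc1:.3f}")
```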
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image, according to the mean square criterion as well as human visual sensitivities. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
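As an illustration of the distribution-modelling step, the sketch below fits a generalized Gaussian shape parameter to synthetic subband coefficients by matching absolute-moment ratios. The thesis formulates this as a least squares problem on a nonlinear function of the shape parameter; the simpler moment estimator here is a stand-in for that step, not the thesis's algorithm.

```python
# Sketch of fitting a generalized Gaussian distribution (GGD) to wavelet
# subband coefficients by matching the ratio of the first and second absolute
# moments, a standard moment-based estimator used as a stand-in for the
# least squares formulation described above.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(coeffs: np.ndarray) -> float:
    """Estimate the GGD shape parameter beta for zero-mean coefficients."""
    m1 = np.mean(np.abs(coeffs))          # first absolute moment
    m2 = np.mean(coeffs ** 2)             # second moment
    rho = m1 ** 2 / m2                    # observed moment ratio
    # Solve rho = Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b)) for b.
    f = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b)) - rho
    return brentq(f, 0.05, 10.0)

# Laplacian-like synthetic "subband" (true shape parameter = 1).
rng = np.random.default_rng(2)
subband = rng.laplace(0.0, 1.0, 50_000)
print(f"estimated shape parameter: {ggd_shape(subband):.2f}")  # ~= 1.0
```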
Abstract:
In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. It is currently thought that concrete properties are highly influenced by the type of water used and its proportion in the concrete mix, but in fact there is little knowledge of the effects of different, alternative water sources used in concrete mix design. Therefore, identifying the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate, and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, and investigates the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for use in concrete mixing plants because of its great benefit in reducing the cost of waste disposal and in environmental conservation. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program at three representative sites across South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were then analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998. These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH, and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. These results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the Multi-Criteria Decision Making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided water suitable for the manufacture of concrete from washout water. These results were compared with a series of alternative water sources, including treated effluent, sea water and dam water, which were subject to the same testing parameters as the reference set. Analysis of these results found that, despite having higher levels of both organic and inorganic constituents, the waters complied with the parameter thresholds given in the American Standard Test Method (ASTM) C913-08. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.
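A minimal sketch of the PCA step in the chemometric analysis described above, using hypothetical water-quality parameters. Note that the sign of a principal component is arbitrary, so in practice the ordering would be anchored against a known clean reference (e.g. the tap-water sample); the study additionally applied SIMCA, PROMETHEE and GAIA, which are not shown here.

```python
# Sketch of ranking water samples along the first principal component of
# standardised water-quality parameters. Parameter names and values are
# hypothetical; PC1's sign is arbitrary, so orient the ordering against a
# known clean reference before reading it as clean-to-unclean.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

samples = pd.DataFrame(
    {
        "pH":         [7.1, 8.9, 9.4, 7.3, 8.2],
        "alkalinity": [40, 310, 420, 65, 180],          # mg/L as CaCO3
        "tds":        [220, 3_800, 5_900, 450, 1_700],  # total dissolved solids, mg/L
    },
    index=["tap", "siteA", "siteB", "dam", "effluent"],
)

scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(samples))
pc1 = scores[:, 0]
if pc1[samples.index.get_loc("tap")] > 0:   # orient so "tap" sits at the clean end
    pc1 = -pc1
ranking = samples.index[np.argsort(pc1)]
print("cleanest -> least clean:", list(ranking))
```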
Abstract:
Agriculture's contribution to radiative forcing is principally through its historical release of carbon in soil and vegetation to the atmosphere and through its contemporary release of nitrous oxide (N2O) and methane (CH4). The sequestration of carbon in soils now depleted in soil organic matter is a well-known strategy for mitigating the buildup of CO2 in the atmosphere. Less well recognized are other mitigation potentials. A full-cost accounting of the effects of agriculture on greenhouse gas emissions is necessary to quantify the relative importance of all mitigation options. Such an analysis shows nitrogen fertilizer, agricultural liming, fuel use, N2O emissions, and CH4 fluxes to have additional significant potential for mitigation. By evaluating all sources in terms of their global warming potential, it becomes possible to directly evaluate greenhouse policy options for agriculture. A comparison of temperate and tropical systems illustrates some of these options.
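A worked example of the global-warming-potential comparison: converting per-hectare gas fluxes to CO2 equivalents so all sources sit on one scale. The GWP factors below are the commonly cited IPCC AR5 100-year values and the flux figures are illustrative; the paper's analysis may rest on different factors.

```python
# Worked example: converting N2O and CH4 fluxes to CO2 equivalents via
# 100-year global warming potentials, so mitigation options can be compared
# on one scale. GWP factors are IPCC AR5 values; fluxes are illustrative.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265}

fluxes_kg_per_ha = {"CO2": 1_200.0, "CH4": 3.0, "N2O": 2.5}

co2e = {gas: mass * GWP_100[gas] for gas, mass in fluxes_kg_per_ha.items()}
total = sum(co2e.values())
for gas, eq in co2e.items():
    print(f"{gas}: {eq:,.0f} kg CO2e/ha ({eq / total:.0%} of total)")
```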
Abstract:
Accurate owner budget estimates are critical to the initial decision-to-build process for highway construction projects. However, transportation projects have historically experienced significant construction cost overruns from the time the decision to build was taken by the owner. This paper addresses the problem of why highway projects overrun their predicted costs. It identifies the owner risk variables that contribute to significant cost overrun and then uses factor analysis, expert elicitation, and the nominal group technique to establish groups of importance-ranked owner risks. Stepwise multivariate regression analysis is also used to investigate any correlation of the percentage of cost overrun with these risks, together with attributes such as highway project type, indexed cost, geographic location, and project delivery method. The research results indicate a correlation between the reciprocal of project budget size and percentage cost overrun. This can be useful for owners in determining more realistic decision-to-build highway budget estimates by taking into account the economies of scale associated with larger projects.
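A sketch of the reported budget-size relationship: regressing percentage cost overrun on the reciprocal of project budget, so overruns shrink as budgets grow. The data are synthetic and the coefficients illustrative, not the paper's estimates.

```python
# Sketch of the reported relationship: percentage cost overrun regressed on
# the reciprocal of project budget size (economies of scale). Data are
# synthetic; coefficients are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
budget_m = rng.uniform(5, 500, 80)            # project budget, $ millions
overrun_pct = 2.0 + 120.0 / budget_m + rng.normal(0, 2, 80)

X = sm.add_constant(1.0 / budget_m)           # reciprocal of budget size
fit = sm.OLS(overrun_pct, X).fit()
print(fit.params)                             # intercept ~2, slope ~120

# Implied decision-to-build adjustment: expected overrun for a $20M project.
b0, b1 = fit.params
print(f"expected overrun at $20M: {b0 + b1 / 20:.1f}%")
```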
Abstract:
High reliability of railway power systems is one of the essential criteria for ensuring the quality and cost-effectiveness of railway services. Evaluation of reliability at the system level is essential not only for scheduling maintenance activities but also for identifying reliability-critical components. Various methods to compute the reliability of individual components or regularly structured systems have been developed and proven effective. However, they are not adequate for evaluating complicated systems with numerous interconnected components, such as railway power systems, or for locating the reliability-critical components. Fault tree analysis (FTA) integrates the reliability of individual components into the overall system reliability through quantitative evaluation, and identifies the critical components by minimum cut sets and sensitivity analysis. The paper presents the reliability evaluation of railway power systems by FTA and investigates the impact of maintenance activities on overall reliability. The applicability of the proposed methods is illustrated by case studies on AC railways.
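A minimal sketch of the cut-set arithmetic behind FTA: system unreliability via the rare-event approximation over minimum cut sets, plus a simple sensitivity ranking. The component names, probabilities and cut sets are hypothetical, not from the paper's case studies.

```python
# Sketch of the fault tree analysis (FTA) step described above: the rare-event
# approximation combines component failure probabilities through minimum cut
# sets into a system-level figure, and a finite-difference sensitivity ranks
# the reliability-critical components. All figures are illustrative.
from math import prod

# Annual failure probabilities of (hypothetical) railway power components.
p = {"transformer": 0.02, "rectifier": 0.05, "feeder": 0.03, "breaker": 0.01}

# Minimum cut sets: the system fails if every component in any one set fails.
cut_sets = [{"transformer"}, {"rectifier", "feeder"}, {"feeder", "breaker"}]

def system_unreliability(probs: dict) -> float:
    # Rare-event approximation: sum of cut-set failure probabilities.
    return sum(prod(probs[c] for c in cs) for cs in cut_sets)

q = system_unreliability(p)
print(f"system unreliability ~= {q:.4f}")

# Sensitivity: change in system unreliability per +1% component failure prob.
for comp in p:
    bumped = dict(p, **{comp: p[comp] * 1.01})
    print(f"{comp}: dQ = {system_unreliability(bumped) - q:.2e}")
```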
Abstract:
Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven.

Objectives: To review randomised controlled trials (RCTs) of TM or STS on all-cause mortality and all-cause and CHF-related hospitalisations in patients with CHF, as a non-invasive remote model of specialised disease-management intervention.

Methods and Results: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts. Two reviewers independently extracted all data. Study eligibility and participants: We included any RCT comparing TM or STS to usual care of patients with CHF. Studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated.

Results: Thirty RCTs of STS and TM were identified (25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482)). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants) and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0.66 [95% CI 0.54-0.81], p<0.0001) and STS showed a similar trend (RR 0.88 [95% CI 0.76-1.01], p=0.08). Both TM (RR 0.79 [95% CI 0.67-0.94], p=0.008) and STS (RR 0.77 [95% CI 0.68-0.87], p<0.0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge, self-care, and functional class were observed.

Conclusion: TM and STS both appear to be effective interventions for improving outcomes in patients with CHF.
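For reference, the arithmetic behind a risk ratio and its 95% CI from two-by-two counts is sketched below. The counts are hypothetical, and the review itself pooled study-level estimates with meta-analytic methods, so this only illustrates the effect measure being reported.

```python
# Sketch of the effect measure reported above: a risk ratio (RR) with 95% CI
# from event counts in intervention and usual-care arms. Counts are
# hypothetical; the review pooled study-level estimates, not raw counts.
import math

events_tm, n_tm = 120, 1400        # deaths, patients (telemonitoring arm)
events_uc, n_uc = 170, 1310        # deaths, patients (usual-care arm)

risk_tm, risk_uc = events_tm / n_tm, events_uc / n_uc
rr = risk_tm / risk_uc
# Standard error of log(RR) for a two-by-two table.
se_log_rr = math.sqrt(1/events_tm - 1/n_tm + 1/events_uc - 1/n_uc)
lo, hi = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))
print(f"RR {rr:.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```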