Abstract:
Organisations are constantly seeking new ways to improve operational efficiency. This study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A number of optimisation techniques are proposed to explore and assess alternative execution scenarios. The objective function is represented by a cost structure that captures different process dimensions. An experimental evaluation is conducted to analyse the performance and scalability of the optimisation techniques: integer linear programming (ILP), hill climbing, tabu search, and our earlier proposed hybrid genetic algorithm approach. The findings demonstrate that the hybrid genetic algorithm is scalable and performs better than the other techniques. Moreover, we argue that the use of ILP is unrealistic in this setting and that it cannot handle complex cost functions such as the ones we propose. Finally, we show how cost-related insights can be gained from improved execution scenarios and how these can be utilised to put forward recommendations for reducing process-related cost and overhead within organisations.
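As a minimal sketch of one of the evaluated techniques, the following hill-climbing search explores alternative execution scenarios for a single activity under a time/cost trade-off. The scenario encoding, cost model and weights are illustrative assumptions, not the paper's actual cost environment.

```python
def scenario_cost(resources, work=100.0, rate=8.0, w_time=0.6, w_cost=0.4):
    """Weighted trade-off between duration and spend for one activity.
    Illustrative cost structure only: more resources shorten the duration
    (work / resources) but raise the spend (rate * resources)."""
    duration = work / resources
    spend = rate * resources
    return w_time * duration + w_cost * spend

def hill_climb(resources=1, max_resources=50):
    """Greedy local search over the number of assigned resources:
    move to the cheaper neighbouring scenario until no improvement remains."""
    while True:
        candidates = [r for r in (resources - 1, resources + 1)
                      if 1 <= r <= max_resources]
        best = min(candidates, key=scenario_cost)
        if scenario_cost(best) >= scenario_cost(resources):
            return resources  # local optimum reached
        resources = best

print(hill_climb())  # settles at the resourcing level balancing time vs cost
```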
Abstract:
The phase relations have been investigated experimentally at 200 and 500 MPa as a function of water activity for one of the least evolved rhyolite compositions (Indian Batt Rhyolite) and for a more evolved composition (Cougar Point Tuff XV) from the 12·8-8·1 Ma Bruneau-Jarbidge eruptive center of the Yellowstone hotspot. Particular priority was given to the accurate determination of the water content of the quenched glasses using infrared spectroscopic techniques. Comparison of the compositions of natural and experimentally synthesized phases confirms that high temperatures (>900°C) and extremely low melt water contents (<1·5 wt % H₂O) are required to reproduce the natural mineral assemblages. In melts containing 0·5-1·5 wt % H₂O, the liquidus phase is clinopyroxene (excluding Fe-Ti oxides, which are strongly dependent on fO₂), and the liquidus temperature of the more evolved Cougar Point Tuff sample (BJR; 940-1000°C) is at least 30°C lower than that of the Indian Batt Rhyolite lava sample (IBR2; 970-1030°C). For the composition BJR, comparison of the compositions of the natural and experimental glasses indicates a pre-eruptive temperature of at least 900°C. The compositions of clinopyroxene and pigeonite pairs can be reproduced only for water contents below 1·5 wt % H₂O at 900°C, or lower water contents if the temperature is higher. For the composition IBR2, a minimum temperature of 920°C is necessary to reproduce the main phases at 200 and 500 MPa. At 200 MPa, the pre-eruptive water content of the melt is constrained to the range 0·7-1·3 wt % at 950°C and 0·3-1·0 wt % at 1000°C. At 500 MPa, the pre-eruptive temperatures are slightly higher (by 30-50°C) for the same ranges of water concentration. The experimental results are used to explore possible proxies to constrain the depth of magma storage. The crystallization sequence of tectosilicates is strongly dependent on pressure between 200 and 500 MPa. In addition, the normative Qtz-Ab-Or contents of glasses quenched from melts coexisting with quartz, sanidine and plagioclase depend on pressure and melt water content, assuming that the normative Qtz and Ab/Or contents of such melts are mainly dependent on pressure and water activity, respectively. The combination of results from the phase equilibria and from the compositions of glasses indicates that the depth of magma storage for the IBR2 and BJR compositions may be in the range 300-400 MPa (~13 km) and 200-300 MPa (~10 km), respectively.
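The pressure-to-depth figures quoted above follow from the lithostatic relation h = P/(ρg). A quick check of the arithmetic, assuming a mean crustal density of ~2700 kg/m³ (the abstract does not state the density used):

```python
def lithostatic_depth_km(pressure_mpa, density=2700.0, g=9.81):
    """Depth (km) corresponding to a lithostatic pressure (MPa): h = P/(rho*g)."""
    return pressure_mpa * 1e6 / (density * g) / 1e3

# Midpoints of the storage-pressure ranges quoted in the abstract:
print(lithostatic_depth_km(350))  # ~13.2 km for IBR2 (300-400 MPa)
print(lithostatic_depth_km(250))  # ~9.4 km for BJR (200-300 MPa)
```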
Abstract:
Energy efficiency as a concept has gained significant attention over the last few decades, as governments and industries around the world have grappled with issues such as rapid population growth and expanding needs for energy, the cost of supplying infrastructure for growing spikes in peak demand, the finite nature of fossil-based energy reserves, and managing transition timeframes for expanding renewable energy supplies. Over the last decade in particular, there has been significant growth in understanding the complexity and interconnectedness of these issues, and the centrality of energy efficiency to the engineering profession. Furthermore, there has been a realisation amongst various government departments and education providers that the associated knowledge and skill sets needed to achieve energy efficiency goals are not being sufficiently developed in vocational or higher education. Within this context, this poster discusses the emergence of a national energy efficiency education agenda in Australia, to support embedding such knowledge throughout the engineering curriculum and throughout career pathways. In particular, the poster provides insights into the national priorities for capacity building in Australia, and how these are influencing the engineering education community, from undergraduate education through to postgraduate studies and professional development. The poster is intended to assist in raising awareness of the central role of energy efficiency within engineering, significant initiatives by major government, professional, and training organisations, and the increasing availability of high-quality energy efficiency engineering education resources. The authors acknowledge the support for and contributions to this poster by the federal Department of Resources, Energy and Tourism, through members of the national Energy Efficiency Advisory Group for engineering education.
Abstract:
This paper reports on an Australian study that explored the costs and benefits, both tangible and intangible, of National Assessment Programme, Literacy and Numeracy (NAPLAN) testing of Year 9 students in three Queensland schools. The study commenced with a review of pertinent studies and other related material about standardised testing in Australia, the USA and the UK. Information about NAPLAN testing and reporting, and the pedagogical impacts of standardised testing, was identified; however, little was found about administrative costs to schools. A social constructivist perspective and a multiple case study approach were used to explore the actions of school managers and teachers in three Brisbane secondary schools. The study found that the costs of NAPLAN testing to schools fell into two categories: preparation of students for the testing, and administration of the tests. Whilst many of the costs could not be quantified, they were substantial and varied according to the education sector in which the school operated. The benefits to schools of NAPLAN testing were found to be limited. The findings have implications for governments, curriculum authorities and schools, leading to the conclusion that, from a school perspective, the benefits of NAPLAN testing do not justify the costs.
Abstract:
Blasting is an integral part of large-scale open-cut mining that often occurs in close proximity to population centers and often results in the emission of particulate material and gases potentially hazardous to health. Current air quality monitoring methods rely on limited numbers of fixed sampling locations to characterize a complex fluid environment and to collect sufficient data to confirm model effectiveness. This paper describes the development of a methodology to address the need for a more precise approach capable of characterizing blasting plumes in near-real time. Building the system required the modification of an opto-electrical dust sensor, the SHARP GP2Y10, and its integration into a small fixed-wing aircraft and a multi-rotor copter, allowing data to be streamed during flight. The paper also describes the calibration of the optical sensor against an industry-grade dust-monitoring device, the DustTrak 8520, demonstrating a high correlation between the two, with correlation coefficients (R²) greater than 0.9. The laboratory and field tests demonstrate the feasibility of coupling the sensor with the UAVs; however, further work must be done in the areas of sensor selection and calibration as well as flight planning.
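A minimal sketch of the kind of calibration described: fitting co-located readings from the low-cost sensor against the reference instrument and checking R². The sample arrays and the choice of a linear model are illustrative assumptions, not the authors' calibration data or procedure.

```python
import numpy as np

# Hypothetical co-located readings: raw GP2Y10 output vs DustTrak reference.
gp2y10_raw = np.array([0.12, 0.25, 0.41, 0.58, 0.74, 0.90])
dusttrak = np.array([0.10, 0.22, 0.40, 0.55, 0.71, 0.88])

# Least-squares linear calibration: reference = slope * raw + intercept.
slope, intercept = np.polyfit(gp2y10_raw, dusttrak, 1)

# Coefficient of determination (R^2) for the fitted calibration curve.
predicted = slope * gp2y10_raw + intercept
ss_res = np.sum((dusttrak - predicted) ** 2)
ss_tot = np.sum((dusttrak - dusttrak.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"calibrated = {slope:.3f} * raw + {intercept:.3f}, R^2 = {r_squared:.3f}")
```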
Abstract:
The Bruneau–Jarbidge eruptive center of the central Snake River Plain in southern Idaho, USA produced multiple rhyolite lava flows with volumes of <10 km³ to 200 km³ each from ~11.2 to 8.1 Ma, most of which follow its climactic phase of large-volume explosive volcanism, represented by the Cougar Point Tuff, from 12.7 to 10.5 Ma. These lavas represent the waning stages of silicic volcanism at a major eruptive center of the Yellowstone hotspot track. Here we provide pyroxene compositions and thermometry results from several lavas that demonstrate that the demise of the silicic volcanic system was characterized by sustained, high pre-eruptive magma temperatures (mostly ≥950 °C) prior to the onset of exclusively basaltic volcanism at the eruptive center. Pyroxenes display a variety of textures in single samples, including solitary euhedral crystals as well as glomerocrysts, crystal clots and annealed microgranular inclusions of pyroxene ± magnetite ± plagioclase. Pigeonite and augite crystals are unzoned, and there are no detectable differences in major and minor element compositions according to textural variety: mineral compositions in the microgranular inclusions and crystal clots are identical to those of phenocrysts in the host lavas. In contrast to members of the preceding Cougar Point Tuff, which host polymodal glass and mineral populations, pyroxene compositions in each of the lavas are characterized by single rather than multiple discrete compositional modes. Collectively, the lavas reproduce and extend the range of Fe–Mg pyroxene compositional modes observed in the Cougar Point Tuff to more Mg-rich varieties. The compositionally homogeneous populations of pyroxene in each of the lavas, as well as the lack of core-to-rim zonation in individual crystals, suggest that individual eruptions were each fed by compositionally homogeneous magma reservoirs, and similarities with the Cougar Point Tuff suggest consanguinity of such reservoirs with those that supplied the polymodal Cougar Point Tuff. Pyroxene thermometry results obtained using QUILF equilibria yield pre-eruptive magma temperatures of 905 to 980 °C, and individual modes consistently record higher Ca contents and higher temperatures than pyroxenes with equivalent Fe–Mg ratios in the preceding Cougar Point Tuff. As is the case with the Cougar Point Tuff, evidence for up-temperature zonation within single crystals, which would be consistent with recycling of sub- or near-solidus material from antecedent magma reservoirs by rapid reheating, is extremely rare. Also, the absence of intra-crystal zonation, particularly at crystal rims, is not easily reconciled with cannibalization of caldera fill that subsided into pre-eruptive reservoirs. The textural, compositional and thermometric results are instead consistent with minor re-equilibration to higher temperatures of the unerupted crystalline residue from the explosive phase of volcanism, or perhaps with newly generated magmas from source materials very similar to those of the Cougar Point Tuff. Collectively, the data suggest that most of the pyroxene compositional diversity represented by the tuffs and lavas was produced early in the history of the eruptive center and that compositions across this range were preserved or duplicated through much of its lifetime.
Mineral compositions and thermometry of the multiple lavas suggest that unerupted magmas residual to the explosive phase of volcanism may have been stored at sustained, high temperatures long after that phase ended. If so, such persistent high temperatures and large eruptive magma volumes likewise require an abundant and persistent supply of basalt magmas to the lower and/or mid-crust, consistent with the tectonic setting of a continental hotspot.
Abstract:
The aim of this project was to evaluate the cost-effectiveness of hand hygiene interventions in resource-limited hospital settings. Using data from north-east Thailand, the research found that such interventions are likely to be very cost-effective in intensive care unit settings as a result of reduced incidence of methicillin-resistant Staphylococcus aureus bloodstream infection alone. This study also found evidence that the World Health Organization's (WHO) multimodal intervention is effective, and that adding goal-setting, reward incentives, or accountability strategies to the WHO intervention could further improve compliance.
Abstract:
A roll-to-roll compatible, high-throughput process is reported for the production of a highly conductive, transparent planar electrode comprising an interwoven network of silver nanowires and single-walled carbon nanotubes embedded in poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS). The planar electrode has a sheet resistance of between 4 and 7 Ω □⁻¹ and a transmission of >86% between 800 and 400 nm, with a figure of merit of between 344 and 400 Ω⁻¹. The nanocomposite electrode is highly flexible and retains its low sheet resistance after bending at a radius of 5 mm up to 500 times. Organic photovoltaic devices containing the planar nanocomposite electrodes had efficiencies of ∼90% of those of control devices that used indium tin oxide as the transparent conducting electrode.
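For context, a common figure of merit for transparent electrodes is the conductivity ratio FoM = Z₀ / (2·Rs·(T^(−1/2) − 1)), with Z₀ ≈ 377 Ω the impedance of free space. Assuming this is the definition used (the abstract does not state it), the quoted numbers are roughly consistent, as the sketch below shows:

```python
Z0 = 376.73  # impedance of free space, ohms

def figure_of_merit(sheet_resistance, transmittance):
    """sigma_dc / sigma_opt figure of merit for a transparent electrode."""
    return Z0 / (2.0 * sheet_resistance * (transmittance ** -0.5 - 1.0))

# Worst-case values from the abstract: Rs = 7 ohm/sq, T = 0.86.
print(figure_of_merit(7.0, 0.86))  # ~344, matching the lower quoted value
```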
Abstract:
This paper presents an unmanned aircraft system (UAS) that uses a probabilistic model for autonomous front-on environmental sensing or photography of a target. The system is based on low-cost, readily available sensors and is designed to operate in dynamic environments, with the general intent of improving the capabilities of dynamic waypoint-based navigation systems for a low-cost UAS. The behavioural dynamics of target movement are modelled for the design of a Kalman filter and Markov model-based prediction algorithm. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case in order to predict a future state of the target, thus delivering a new waypoint for autonomous navigation. The results of applying the system to aerial filming with a low-cost UAS are presented, achieving the desired goal of a maintained front-on perspective without significant constraint on the route or pace of target movement.
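The Haversine formula mentioned above gives the great-circle distance between two latitude/longitude fixes, which can feed the target-motion prediction. A standard implementation follows; the example coordinates are hypothetical, and the surrounding Kalman/Markov prediction pipeline is not shown:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two fixes given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Distance between two successive target fixes (hypothetical coordinates):
print(haversine(-27.4698, 153.0251, -27.4705, 153.0260))  # ~120 m
```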
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, those algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be less than the migration cost incurred in moving to the new placement. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption and the total inter-VM traffic flow of the new placement. The GA has been implemented and evaluated experimentally, and the results show that it outperforms two well-known algorithms for the VM placement problem.
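A sketch of the core idea: a penalty-based objective that charges the placement's energy and traffic costs plus a penalty for every VM that must migrate from its current host. The encoding, cost models and penalty weight are illustrative assumptions, not the paper's actual GA.

```python
# Hypothetical cost models: energy grows with the number of hosts in use;
# traffic cost accrues when communicating VM pairs sit on different hosts.
def energy_cost(placement):
    return 10.0 * len(set(placement.values()))

TRAFFIC_PAIRS = [("vm1", "vm2", 5.0), ("vm2", "vm3", 2.0)]

def traffic_cost(placement):
    return sum(flow for a, b, flow in TRAFFIC_PAIRS
               if placement[a] != placement[b])

def fitness(new_placement, current_placement, migration_penalty=3.0):
    """Penalty-based objective: energy + inter-VM traffic + a penalty term
    for each VM that must migrate from its current host (illustrative)."""
    migrations = sum(1 for vm, host in new_placement.items()
                     if current_placement.get(vm) != host)
    return (energy_cost(new_placement) + traffic_cost(new_placement)
            + migration_penalty * migrations)

current = {"vm1": "h1", "vm2": "h1", "vm3": "h2"}
proposal = {"vm1": "h1", "vm2": "h1", "vm3": "h1"}  # consolidate onto h1
print(fitness(proposal, current))  # one host, no traffic cost, one migration
```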
Abstract:
The design-build (DB) delivery method has been widely used in the United States due to its reputed superior cost and time performance. However, rigorous studies have produced inconclusive support, and only in terms of overall results, with few attempts being made to relate project characteristics to performance levels. This paper provides a larger and more finely grained analysis of a set of 418 DB projects from the online project database of the Design-Build Institute of America (DBIA), in terms of the time-overrun rate (TOR), early start rate (ESR), early completion rate (ECR) and cost-overrun rate (COR) associated with project type (e.g., commercial/institutional buildings and civil infrastructure projects), owners (e.g., Department of Defense and private corporations), procurement methods (e.g., ‘best value with discussion’ and qualifications-based selection), contract methods (e.g., lump sum and GMP) and LEED levels (e.g., gold and silver). The results show ‘best value with discussion’ to be the dominant procurement method and lump sum the most frequently used contract method. The DB method provides relatively good time performance, with more than 75% of DB projects completed on time or ahead of schedule. However, with more than 50% of DB projects overrunning their budgets, the DB advantage of cost saving remains uncertain. ANOVA tests indicate that DB projects under different procurement methods have significantly different time performance and that different owner types and contract methods significantly affect cost performance. In addition to contributing new solid evidence from a large sample to empirical knowledge concerning the cost and time performance of DB projects, the findings and practical implications of this study help owners understand the likely schedule and budget implications of their particular project characteristics.
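A minimal illustration of the kind of ANOVA test reported, comparing cost-overrun rates across two contract methods with a one-way F-test. The groups and values are invented for the example, not DBIA data.

```python
from scipy.stats import f_oneway

# Hypothetical cost-overrun rates (%) grouped by contract method.
lump_sum = [4.2, 6.1, 3.8, 7.4, 5.0]
gmp = [1.9, 2.8, 3.5, 2.2, 4.1]

f_stat, p_value = f_oneway(lump_sum, gmp)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant effect
```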
Abstract:
Genome-wide association studies (GWAS) have proven a powerful hypothesis-free method for identifying common disease-associated variants. Even quite large GWAS, however, have at best identified moderate proportions of the genetic variants contributing to disease heritability. To provide cost-effective genotyping of common and rare variants, to map the remaining heritability and to fine-map established loci, the Immunochip Consortium has developed a 200,000-SNP chip that has been produced in very large numbers at a fraction of the cost of GWAS chips. This chip provides a powerful tool for immunogenetics gene mapping.
Abstract:
Cost estimating has been acknowledged as a crucial component of construction projects. Depending on available information and project requirements, cost estimates evolve in tandem with the project lifecycle stages: conceptualisation, design development, execution and facility management. The premium placed on the accuracy of cost estimates is crucial to producing project tenders and, eventually, to budget management. Notwithstanding the initial slow pace of its adoption, Building Information Modelling (BIM) has successfully addressed a number of challenges previously characteristic of traditional approaches in the architecture, engineering and construction (AEC) industry, including poor communication, the prevalence of islands of information and frequent rework. It is therefore conceivable that BIM can be leveraged to address specific shortcomings of cost estimation. The impetus for leveraging BIM models for accurate cost estimation is to align budgeted and actual cost. This paper hypothesises that the accuracy of BIM-based estimation, as a more efficient process mirror of traditional cost estimation methods, can be enhanced by simulating the factor variables of traditional cost estimation. Through literature reviews and preliminary expert interviews, this paper explores the factors that could potentially lead to more accurate cost estimates for construction projects. The findings show numerous factors that affect cost estimates, ranging from project information and its characteristics to the project team, clients, contractual matters and other external influences. This paper makes a particular contribution to the early phase of BIM-based project estimation.
Abstract:
Cost estimating is a key task within Quantity Surveyors' (QS) offices. Provision of an accurate estimate is vital to ensure that the objectives of the client are met by staying within the client's budget. Building Information Modelling (BIM) is an evolving technology that has gained attention in construction industries all over the world. Benefits from the use of BIM include cost and time savings, provided the processes used by the procurement team are adapted to maximise those benefits. BIM can be used by QSs to automate aspects of quantity take-off and the preparation of estimates, decreasing turnaround time and helping to control errors and inaccuracies. The Malaysian government has decided to require the use of BIM for its projects beginning in 2016. However, slow uptake of BIM is reported both within companies and in supporting collaboration within the Malaysian industry. It has been recommended that QSs start evaluating the impact of BIM on their practices. This paper reviews the perspectives of QSs in Malaysia towards the use of BIM to achieve more dependable results in their cost estimating practice. The objectives of this paper include identifying strategies for improving practice and the potential adoption drivers that lead QSs to use BIM in their construction projects. From the expert interviews, it was found that, despite still using traditional methods and not practising BIM, the interviewees have acquired some limited knowledge of BIM. There are several drivers that could potentially motivate them to employ BIM in their practices: client demands, innovation beyond traditional methods, speed in estimating costs, reduced time and costs, improvement in practices and self-awareness, efficiency in projects, and competition from other companies. The findings of this paper identify the potential drivers for encouraging Malaysian Quantity Surveyors to exploit BIM in their construction projects.
Abstract:
Individuals with limb amputation fitted with conventional socket-suspended prostheses often experience socket-related discomfort, leading to a significant decrease in quality of life. Bone-anchored prostheses are increasingly acknowledged as a viable alternative method of attaching an artificial limb. In this case, the prosthesis is attached directly to the residual skeleton through a percutaneous fixation. To date, a few osseointegration fixations are commercially available, and several devices are at different stages of development, particularly in Europe and the US. [1-15] Clearly, surgical procedures are currently booming worldwide; indeed, Australia and Queensland, in particular, have one of the fastest-growing populations of recipients. Previous studies involving either screw-type implants or press-fit fixations for bone anchorage have focused on biomechanical aspects as well as the clinical benefits and safety of the procedure. In principle, bone-anchored prostheses should eliminate lifetime expenses associated with sockets and, consequently, potentially alleviate the financial burden of amputation for governmental organizations. Unfortunately, publications focusing on cost-effectiveness are sparse. In fact, only one study, published by Haggstrom et al (2012), reported that "despite significantly fewer visits for prosthetic service the annual mean costs for osseointegrated prostheses were comparable with socket-suspended prostheses". Consequently, governmental organizations such as Queensland Artificial Limb Services (QALS) face a number of challenges in adjusting financial assistance schemes so that they are fair and equitable to clients fitted with bone-anchored prostheses. Clearly, more scientific evidence extracted from governmental databases is needed to further consolidate analyses of the financial burden associated with both methods of attachment (i.e., conventional socket prostheses and bone-anchored prostheses). The purpose of the presentation is to share the current outcomes of a cost-analysis study led by QALS. The specific objectives are:
• To outline methodological avenues for assessing the cost-effectiveness of bone-anchored prostheses compared with conventional socket prostheses;
• To highlight the potential obstacles and limitations in cost-effectiveness analyses of bone-anchored prostheses;
• To present cohort results of a cost-effectiveness analysis (QALY vs cost), including the determination of fair incremental cost-effectiveness ratios (ICERs), as well as a cost-benefit analysis comparing costs and key outcome indicators (e.g., QTFA, TUG, 6MWT, activities of daily living) over QALS funding cycles for both methods of attachment.
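A minimal sketch of the ICER computation named in the objectives above: incremental cost divided by incremental effectiveness (QALYs), compared against a willingness-to-pay threshold. All figures are placeholders, not QALS results.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Placeholder annual figures per client (hypothetical, not QALS data):
ratio = icer(cost_new=25000.0, cost_old=18000.0, qaly_new=0.80, qaly_old=0.65)
print(f"ICER = ${ratio:,.0f} per QALY gained")  # ~$46,667 per QALY

WILLINGNESS_TO_PAY = 50000.0  # hypothetical threshold, dollars per QALY
print("cost-effective" if ratio < WILLINGNESS_TO_PAY else "not cost-effective")
```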