868 results for Cost Over run
Abstract:
How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short-period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power law function and extrapolating duration from the decay of the function. The second method draws relations between clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow magnitudes to be determined to within 0.2 to 0.4 units of magnitude. This error is within the range of analyst hand-picks and is within the acceptable limits of uncertainty when quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens. Mount St. Helens is a well-studied volcano with many instruments placed at varying distances from the vent. This fact makes the 2004-2008 eruption a good place to calibrate and refine methodologies that can be applied to volcanoes with limited networks.
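The first method, fitting a power law to the coda and extrapolating its duration, can be sketched as follows. This is a minimal illustration on a synthetic, unclipped coda envelope; the function names, the noise threshold, and the numbers are assumptions for the sketch, not values from the study:

```python
import numpy as np

def fit_coda_power_law(t, amp):
    """Fit amp(t) ~ A0 * t**(-p) by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(t), np.log(amp), 1)
    return np.exp(intercept), -slope          # A0, decay exponent p

def extrapolate_duration(A0, p, noise_level):
    """Time at which the modelled coda decays to the noise floor."""
    return (A0 / noise_level) ** (1.0 / p)

# Synthetic unclipped tail of a coda (arbitrary amplitude units)
t = np.linspace(5.0, 30.0, 200)               # seconds after event onset
amp = 400.0 * t ** -1.5                       # true A0 = 400, p = 1.5

A0, p = fit_coda_power_law(t, amp)
duration = extrapolate_duration(A0, p, noise_level=2.0)
```

The key idea is that the fit only needs the unclipped part of the coda, so a duration (and hence a duration magnitude) can still be estimated when the peak of the record is saturated.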
Abstract:
This research was undertaken with the objective of studying software development project risk, risk management, project outcomes, and their inter-relationships in the Indian context. Validated instruments were used to measure risk, risk management, and project outcome in software development projects undertaken in India. A second-order factor model was developed for risk with five first-order factors. Risk management was also identified as a second-order construct with four first-order factors. These structures were validated using confirmatory factor analysis. Variation in risk across categories of selected organization/project characteristics was studied through a series of one-way ANOVA tests. A regression model was developed for each of the risk factors by linking it to risk management factors and project/organization characteristics. Similarly, regression models were developed for the project outcome measures, linking them to risk factors. Integrated models linking risk factors, risk management factors, and project outcome measures were tested through structural equation modeling. The quality of the software developed was seen to have a positive relationship with risk management and a negative relationship with risk. The other outcome variables, namely time overrun and cost overrun, had strong positive relationships with risk. Risk management did not have a direct effect on the overrun variables; risk was seen to act as an intervening variable between risk management and the overrun variables.
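The per-outcome regression step described above can be illustrated with a small synthetic sketch: regress a project outcome (here a cost-overrun score) on five first-order risk factors. All data, coefficients, and variable names below are invented for illustration, not taken from the study:

```python
import numpy as np

# Synthetic data: 120 projects, five first-order risk-factor scores
rng = np.random.default_rng(0)
n = 120
risk = rng.normal(size=(n, 5))               # risk-factor scores per project
beta = np.array([0.8, 0.5, 0.3, 0.2, 0.1])   # assumed positive risk -> overrun links
cost_overrun = risk @ beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares: intercept plus the five risk factors
X = np.column_stack([np.ones(n), risk])
coef, *_ = np.linalg.lstsq(X, cost_overrun, rcond=None)
# coef[1:] recovers the (positive) risk -> overrun relationships
```

In the study itself these single-equation models are then combined into integrated structural equation models, which is what lets risk be identified as an intervening variable rather than a mere covariate.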
Abstract:
The alliance project delivery method is used for approximately one third of all Australian government infrastructure projects representing $8-$10 billion per annum. Despite its widespread use, little is known about the differences between estimated project cost and actual cost over the project lifecycle. This paper presents the findings of research into 14 Australian government alliance case studies investigating the observed cost uplift over each project’s lifecycle. I find that significant cost uplift is likely and that this uplift is greater than that afflicting traditional delivery methods. Furthermore, most of the cost uplift occurs at a different place in the project lifecycle, namely between Business Case and Contractual Commitment.
Abstract:
India's energy challenges are three-pronged: the presence of a large energy-poor majority lacking access to modern energy; the need to expand the energy system to bridge this access gap as well as to meet the requirements of a fast-growing economy; and the desire to partner with global economies in mitigating the threat of climate change. The presence of 364 million people without access to electricity and 726 million relying on biomass for cooking, out of a total rural population of 809 million, indicates the seriousness of the challenge. In this paper, we discuss an innovative approach to address this challenge, which intends to take advantage of recent global developments and untapped capabilities possessed by India. The intention is to use the climate change mitigation imperative as a stimulus and adopt a public-private-partnership-driven ‘business model’ with innovative institutional, regulatory, financing, and delivery mechanisms. Some of the innovations are: creation of rural energy access authorities within the government system as leadership institutions; establishment of energy access funds to enable transitions from the regime of "investment/fuel subsidies" to "incentive-linked" delivery of energy services; integration of business principles to facilitate affordable and equitable energy sales and carbon trade; and treatment of entrepreneurs as implementation targets. This proposal targets 100% access to modern energy carriers by 2030 through a judicious mix of conventional and biomass energy systems, with an investment of US$35 billion over 20 years. The estimated annual cost of universal energy access is about US$9 billion, for a GHG mitigation potential of 213 Tg CO2e at an abatement cost of US$41/tCO2e. It is a win-win situation for all stakeholders.
Households benefit from modern energy carriers at affordable cost; entrepreneurs run profitable energy enterprises; carbon markets have access to CERs; the government has the satisfaction of securing energy access to rural people; and globally, there is a benefit of climate change mitigation.
Abstract:
Developing countries are heavily burdened by limited access to safe drinking water and subsequent water-related diseases. Numerous water treatment interventions combat this public health crisis, encompassing both traditional and less-common methods. Of these, water disinfection serves as an important means to provide safe drinking water. Existing literature discusses a wide range of traditional treatment options and encourages the use of multi-barrier approaches including coagulation-flocculation, filtration, and disinfection. Most sources do not delve into approaches specifically appropriate for developing countries, nor do they exclusively examine water disinfection methods. The objective of this review is to focus on an extensive range of chemical, physico-chemical, and physical water disinfection techniques to provide a compilation, description, and evaluation of the options available. Such an objective provides further understanding and knowledge to better inform water treatment interventions and explores alternate means of water disinfection appropriate for developing countries. Appropriateness for developing countries corresponds to the effectiveness of an available, easy-to-use disinfection technique at providing safe drinking water at a low cost. Among chemical disinfectants, SWS sodium hypochlorite solution is preferred over sodium hypochlorite bleach due to its consistent concentration. Tablet forms are highly recommended chemical disinfectants because they are effective and very easy to use, but also because they are stable. Examples include sodium dichloroisocyanurate, calcium hypochlorite, and chlorine dioxide, which vary in cost depending on location and availability. Among physico-chemical disinfection options, electrolysis, which produces mixed oxidants (MIOX), provides a highly effective disinfection option with a higher upfront cost but a very low cost over the long term. Among physical disinfection options, solar disinfection (SODIS) applications are effective, but they treat only a fixed volume of water at a time. They come with higher initial costs but very low ongoing costs. Additional effective disinfection techniques may be suitable depending on location, availability, and cost.
Abstract:
In the last few years, technical debt has been used as a useful means of making the intrinsic cost of internal software quality weaknesses visible. This visibility is made possible by quantifying that cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a time period, of not eliminating that item. Previous work on technical debt has mainly focused on estimating principal and interest and on performing a cost-benefit analysis. This cost-benefit analysis allows one to determine whether removing technical debt is profitable and to prioritize which technical debt items should be fixed first. In these previous works, however, technical debt is flat over time. The introduction of new factors into the estimate may produce non-flat models that allow more accurate predictions. These factors should be used to estimate principal and interest and to perform the associated cost-benefit analysis. In this paper, we take a step forward by introducing uncertainty about the interest, together with time frame factors, so that it becomes possible to depict a number of possible future scenarios. Estimates obtained without considering the possible evolution of the interest over time may be less accurate, as they consider simplistic scenarios without changes.
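The flat versus non-flat distinction can be made concrete with a small sketch. Below, a flat model charges the same interest every period, while a simple non-flat model lets the interest compound over the time frame; the function, the growth rate, and all numbers are hypothetical illustrations, not the paper's estimation model:

```python
def accumulated_interest(interest_per_period, periods, growth=0.0):
    """Total cost of *not* removing a technical debt item over a time frame.

    growth = 0 reproduces a flat model (the same interest every period);
    growth > 0 is a simple non-flat model in which the interest compounds
    as the system evolves.
    """
    total, interest = 0.0, interest_per_period
    for _ in range(periods):
        total += interest
        interest *= 1.0 + growth
    return total

principal = 40.0                               # cost of fixing the item now
flat = accumulated_interest(5.0, 10)           # flat model: 10 * 5 = 50
growing = accumulated_interest(5.0, 10, 0.1)   # non-flat model: ~79.7
```

In this toy scenario, fixing the item is already profitable under the flat model (50 > 40), but the non-flat model nearly doubles the projected interest, which is exactly why the time profile of interest changes the cost-benefit analysis and the prioritization of items.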
Abstract:
As all environmental problems are caused by human systems of design, sustainability can be seen as a design problem. Given the massive energy and material flows through the built environment, sustainability simply cannot be achieved without the re-design of our urban areas. ‘Eco-retrofitting’, as used here, means modifying buildings and/or urban areas to create net positive social and environmental impacts – both on site and off site. While this has probably not been achieved anywhere as yet, myriad but untapped eco-solutions are already available which could be up-scaled to the urban level. It is now well established that eco-retrofitting buildings and cities with appropriate design technology can pay for itself through lower health costs, productivity increases and resource savings. Good design would also mean happier human and ecological communities at a much lower cost over time. In fact, good design could increase life quality and the life support services of nature while creating sustainable ‘economic’ growth. The impediments are largely institutional and intellectual, which can be encapsulated in the term ‘managerial’. There are, however, also systems design solutions to the managerial obstacles that seem to be stalling the transition to sustainable systems designs. Given the sustainability imperative, then, why is the adoption of better management systems so slow? The oral presentation will show examples of ways in which built environment design can create environments that not only reduce the ongoing damage of past design, but could theoretically generate net positive social and ecological outcomes over their life cycle. These illustrations show that eco-retrofitting could cost society less than doing nothing - especially given the ongoing renovations of buildings - but for managerial hurdles.
The paper outlines how traditional managerial approaches stand in the way of ‘design for ecosystem services’, and lists some management solutions that have long been identified but are not yet widely adopted. Given the pervasive nature of these impediments and their alternatives, they are presented by way of examples. A sampling of eco-retrofitting solutions is also listed to show that eco-retrofitting is a win-win-win solution that stands ready to be implemented by people having management skills and/or positions of influence.
Abstract:
Aim The aim of this paper was to explore the concept of expertise in nursing from the perspective of how it relates to current driving forces in health care, and to discuss the potential barriers to acceptance of nursing expertise in a climate in which quantification of value and cost containment run high on agendas. Background Expert nursing practice can be argued to be central to high-quality, holistic, individualized patient care. However, changes in government policy which have led to the inception of comprehensive guidelines or protocols of care are in danger of relegating the ‘expert nurse’ to being an icon of the past. Indeed, it could be argued that expert nurses are an expensive commodity within the nursing workforce. Consequently, this change to the use of clinical guidelines calls into question how expert nursing practice will develop within this framework of care. Method The article critically reviews the evidence related to the role of the expert nurse in an attempt to identify the key concepts and ideas, and how the inception of care protocols has implications for that role. Conclusion Nursing expertise which focuses on the provision of individualized, holistic care and is based largely on intuitive decision making cannot, and should not, be reduced to being articulated in positivist terms. However, the dominant power and decision-making focus in health care means that nurses must be confident in articulating the value of a concept which may be outside the scope of knowledge of those with whom they are debating. Relevance to clinical practice The principles of abduction or fuzzy logic may be useful in assisting nurses to explain, in terms which others can comprehend, the value of nursing expertise.
Abstract:
This paper explores the concept of expertise in intensive care nursing practice from the perspective of its relationship to the current driving forces in healthcare. It discusses the potential barriers to acceptance of nursing expertise in a climate in which quantification of value and cost containment run high on agendas. It argues that nursing expertise which focuses on the provision of individualised, holistic care and which is based largely on intuitive decision-making cannot, and should not, be reduced to being articulated in positivist terms. The principles of abduction or fuzzy logic, derived from computer science, may be useful in assisting nurses to explain, in terms which others can comprehend, the value of nursing expertise.
Abstract:
Study/Objective This paper describes a program of research examining emergency messaging during the response and early recovery phases of natural disasters. The objective of this suite of studies is to develop message construction frameworks and channels that maximise community compliance with instructional messaging. The research has adopted a multi-hazard approach and considers the impact of formal emergency messages, as well as informal messages (e.g., social media posts), on community compliance. Background In recent years, media reports have consistently demonstrated highly variable community compliance with instructional messaging during natural disasters. Footage of individuals watching a tsunami approaching from the beach or being over-run by floodwaters is disturbing and indicates the need for a clearer understanding of decision making under stress. This project’s multi-hazard approach considers the time lag between knowledge of the event and desired action, as well as how factors such as message fatigue, message ambiguity, and the interplay of messaging from multiple media sources are likely to play a role in an individual’s compliance with an emergency instruction. Methods To examine effective messaging strategy, we conduct a critical analysis of the literature to develop a framework for community consultation and design experiments to test the potential for compliance improvement. Results Preliminary results indicate that there is, as yet, little published evidence on which to base decisions about emergency instructional messages to threatened communities. Conclusion The research described here will contribute to improvements in emergency instructional message compliance by generating an evidence-based framework that takes into account behavioural compliance theory, the psychology of decision making under stress, and multiple channels of communication including social media.
Abstract:
This paper describes a program of research examining emergency messaging during the response and early recovery phases of natural disasters. The objective of this suite of studies is to develop message construction frameworks and channels that maximise community compliance with instructional messaging. The research has adopted a multi-hazard approach and considers the impact of formal emergency messages, as well as informal messages (e.g., social media posts), on community compliance. In recent years, media reports have consistently demonstrated highly variable community compliance with instructional messaging during natural disasters. Footage of individuals watching a tsunami approaching from the beach or being over-run by floodwaters is disturbing and indicates the need for a clearer understanding of decision making under stress. This project’s multi-hazard approach considers the time lag between knowledge of the event and desired action, as well as how factors such as message fatigue, message ambiguity, and the interplay of messaging from multiple media sources are likely to play a role in an individual’s compliance with an emergency instruction. To examine effective messaging strategy, we conduct a critical analysis of the literature to develop a framework for community consultation and design experiments to test the potential for compliance improvement. Preliminary results indicate that there is, as yet, little published evidence on which to base decisions about emergency instructional messages to threatened communities. The research described here will contribute to improvements in emergency instructional message compliance by generating an evidence-based framework that takes into account behavioural compliance theory, the psychology of decision making under stress, and multiple channels of communication including social media.
Abstract:
This paper presents a branch and bound type algorithm for the problem of finding a transportation schedule which minimises the total transportation cost, where the transportation cost over each route is assumed to be a piecewise linear continuous convex function with increasing slopes. The algorithm is an extension of the work of Balachandran and Perry, in which the transportation cost over each route is assumed to be a piecewise linear discontinuous function with decreasing slopes. A numerical example illustrating the algorithm is solved.
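The cost structure assumed here, piecewise linear, continuous, and convex with increasing slopes, can be evaluated with a short sketch. The breakpoints and marginal costs below are illustrative values, not taken from the paper:

```python
def route_cost(x, breakpoints, slopes):
    """Cost of shipping x units on a route whose marginal cost is piecewise
    constant with non-decreasing slopes, making the total cost a piecewise
    linear continuous convex function of x.

    breakpoints: upper end of each segment, e.g. [10, 25, inf]
    slopes:      marginal cost on each segment, in non-decreasing order
    """
    cost, prev = 0.0, 0.0
    for bp, s in zip(breakpoints, slopes):
        seg = min(x, bp) - prev       # units shipped within this segment
        if seg <= 0:
            break
        cost += s * seg
        prev = bp
    return cost

# First 10 units cost 2 each, the next 15 cost 3, anything beyond 25 costs 5
cost_30 = route_cost(30, [10, 25, float("inf")], [2.0, 3.0, 5.0])
# 10*2 + 15*3 + 5*5 = 90
```

Increasing slopes are what make the per-route cost convex; the branch and bound machinery is needed because summing such costs over all routes still leaves a combinatorial choice of which segments each route operates on.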
Abstract:
Objective Foodborne illnesses in Australia, including salmonellosis, are estimated to cost over $A1.25 billion annually. The weather has been identified as influential on salmonellosis incidence, as cases increase during summer; however, time series modelling of salmonellosis is challenging because outbreaks cause strong autocorrelation. This study assesses whether a switching model is an improved method of estimating weather–salmonellosis associations. Design We analysed weather and salmonellosis in South-East Queensland between 2004 and 2013 using two common regression models and a switching model, each with 21-day lags for temperature and precipitation. Results The switching model best fit the data, as judged by its substantial improvement in deviance information criterion over the regression models, its less autocorrelated residuals, and its control of seasonality. The switching model estimated that a 5°C increase in mean temperature and 10 mm of precipitation were associated with increases in salmonellosis cases of 45.4% (95% CrI 40.4%, 50.5%) and 24.1% (95% CrI 17.0%, 31.6%), respectively. Conclusions Switching models improve on traditional time series models in quantifying weather–salmonellosis associations. A better understanding of how temperature and precipitation influence salmonellosis may identify where interventions can be made to lower the health and economic costs of salmonellosis.
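Percent changes like those reported here are conventionally derived from a log-link count model, where a coefficient β and a covariate change Δ give a percent change of 100·(exp(β·Δ) − 1). The sketch below is illustrative only: the coefficients are back-calculated from the reported percentages and are not the fitted model parameters:

```python
import math

def percent_change(beta, delta):
    """Percent change in expected case counts for a delta-unit covariate
    change under a log-link count model."""
    return 100.0 * (math.exp(beta * delta) - 1.0)

# A temperature coefficient of ~0.0749 per degree C (an assumption,
# back-calculated) reproduces the reported 45.4% rise per 5 C increase.
temp_rise = percent_change(0.0749, 5.0)

# Likewise, ~0.0216 per mm reproduces the 24.1% rise per 10 mm precipitation.
precip_rise = percent_change(0.0216, 10.0)
```

Reading coefficients this way is what allows the lagged temperature and precipitation terms to be summarised as single interpretable percentages with credible intervals.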