919 results for Cost Over run


Relevance:

100.00%

Publisher:

Abstract:

How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal are necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power law function and extrapolating duration from the decay of the function. The second method draws relations between clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow for magnitudes to be determined within 0.2 to 0.4 units of magnitude. This error is within the range of analyst hand-picks and is within the acceptable limits of uncertainty when quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens. Mount St. Helens is a well-studied volcano with many instruments placed at varying distances from the vent. This fact makes the 2004-2008 eruption a good place to calibrate and refine methodologies that can be applied to volcanoes with limited networks.
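As a rough illustration of the first method described above, the following is a minimal sketch in Python. It assumes hypothetical coda-envelope samples from the unclipped tail of an event and placeholder duration-magnitude coefficients (neither is taken from the study): the coda is fitted with a power-law decay in log-log space and the duration is extrapolated to the time at which the envelope falls back to the pre-event noise level.

```python
import numpy as np

# Hypothetical coda samples (seconds after onset, envelope amplitude in counts)
# taken from the unclipped tail of the coda.
t = np.array([20.0, 25.0, 30.0, 40.0, 50.0, 60.0])
amp = np.array([900.0, 650.0, 480.0, 300.0, 210.0, 160.0])

# Fit a power-law decay A(t) = A0 * t**(-p) by linear regression in log-log space.
slope, log_a0 = np.polyfit(np.log10(t), np.log10(amp), 1)
a0 = 10.0 ** log_a0
decay = -slope  # positive decay exponent

# Extrapolate to the time at which the coda falls to the pre-event noise level;
# take that time as the full event duration.
noise_level = 20.0  # counts, assumed pre-event noise amplitude
duration = (a0 / noise_level) ** (1.0 / decay)

# Duration magnitude with placeholder coefficients (real coefficients are
# station- and network-specific and are not given in the abstract).
md = 2.0 * np.log10(duration) - 0.87
print(f"extrapolated duration: {duration:.1f} s, Md ~ {md:.2f}")
```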

Relevance:

100.00%

Publisher:

Abstract:

This research was undertaken with the objective of studying software development project risk, risk management, project outcomes, and their inter-relationships in the Indian context. Validated instruments were used to measure risk, risk management, and project outcome in software development projects undertaken in India. A second-order factor model was developed for risk with five first-order factors. Risk management was also identified as a second-order construct with four first-order factors. These structures were validated using confirmatory factor analysis. Variation in risk across categories of selected organization/project characteristics was studied through a series of one-way ANOVA tests. A regression model was developed for each of the risk factors by linking it to the risk management factors and project/organization characteristics. Similarly, regression models were developed for the project outcome measures, linking them to the risk factors. Integrated models linking risk factors, risk management factors, and project outcome measures were tested through structural equation modeling. The quality of the software developed was seen to have a positive relationship with risk management and a negative relationship with risk. The other outcome variables, namely time overrun and cost overrun, had strong positive relationships with risk. Risk management did not have a direct effect on the overrun variables; rather, risk was seen to act as an intervening variable between risk management and the overrun variables.
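As a minimal sketch of the kind of outcome-on-risk regression described above, the following Python example uses synthetic stand-in data with illustrative column names (not the study's instruments or data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: three risk-factor scores and a cost-overrun outcome
# for 60 hypothetical projects (column names are illustrative only).
rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "team_risk": rng.uniform(1, 5, n),
    "requirements_risk": rng.uniform(1, 5, n),
    "planning_risk": rng.uniform(1, 5, n),
})
df["cost_overrun"] = (0.1 * df["team_risk"]
                      + 0.2 * df["requirements_risk"]
                      + rng.normal(0, 0.2, n))

# Ordinary least squares: cost overrun regressed on the risk factors,
# mirroring the outcome-on-risk regressions described in the abstract.
X = sm.add_constant(df[["team_risk", "requirements_risk", "planning_risk"]])
print(sm.OLS(df["cost_overrun"], X).fit().summary())
```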

Relevance:

90.00%

Publisher:

Abstract:

Background: Breast cancer (BC) causes more deaths than any other cancer among women in Catalonia. Early detection has contributed to the observed decline in BC mortality. However, there is debate on the optimal screening strategy. We performed an economic evaluation of 20 screening strategies taking into account the cost over time of screening and subsequent medical costs, including diagnostic confirmation, initial treatment, follow-up and advanced care. Methods: We used a probabilistic model to estimate the effect and costs over time of each scenario. The effect was measured as years of life (YL), quality-adjusted life years (QALY), and lives extended (LE). Costs of screening and treatment were obtained from the Early Detection Program and hospital databases of the IMAS-Hospital del Mar in Barcelona. The incremental cost-effectiveness ratio (ICER) was used to compare the relative costs and outcomes of different scenarios. Results: Strategies that start at ages 40 or 45 and end at 69 predominate when the effect is measured as YL or QALYs. Biennial strategies 50-69, 45-69 or annual 45-69, 40-69 and 40-74 were selected as cost-effective for both effect measures (YL or QALYs). The ICER increases considerably when moving from biennial to annual scenarios. Moving from no screening to biennial 50-69 years represented an ICER of 4,469€ per QALY. Conclusions: A reduced number of screening strategies have been selected for consideration by researchers, decision makers and policy planners. Mathematical models are useful to assess the impact and costs of BC screening in a specific geographical area.
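As a worked illustration of the ICER used to compare scenarios, the following sketch uses purely illustrative cost and QALY totals (not the study's estimates) to show how the incremental ratio is formed:

```python
# Incremental cost-effectiveness ratio (ICER) between two screening strategies:
# ICER = (cost_new - cost_ref) / (effect_new - effect_ref).
# The figures below are illustrative placeholders, not the study's estimates.

def icer(cost_ref, effect_ref, cost_new, effect_new):
    """Incremental cost per unit of effect (e.g., euros per QALY gained)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

no_screening = {"cost": 1_000_000.0, "qaly": 10_000.0}
biennial_50_69 = {"cost": 1_450_000.0, "qaly": 10_100.0}

print(icer(no_screening["cost"], no_screening["qaly"],
           biennial_50_69["cost"], biennial_50_69["qaly"]))  # euros per QALY gained
```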

Relevance:

90.00%

Publisher:

Abstract:

Developing countries are heavily burdened by limited access to safe drinking water and the water-related diseases that follow. Numerous water treatment interventions combat this public health crisis, encompassing both traditional and less common methods. Of these, water disinfection serves as an important means of providing safe drinking water. Existing literature discusses a wide range of traditional treatment options and encourages the use of multi-barrier approaches including coagulation-flocculation, filtration, and disinfection. Most sources do not delve into approaches specifically appropriate for developing countries, nor do they exclusively examine water disinfection methods. The objective of this review is to focus on an extensive range of chemical, physico-chemical, and physical water disinfection techniques to provide a compilation, description, and evaluation of the options available. Such an objective provides further understanding and knowledge to better inform water treatment interventions and explores alternative means of water disinfection appropriate for developing countries. Appropriateness for developing countries corresponds to the effectiveness of an available, easy-to-use disinfection technique at providing safe drinking water at a low cost. Among chemical disinfectants, SWS sodium hypochlorite solution is preferred over sodium hypochlorite bleach because of its consistent concentration. Tablet forms are highly recommended chemical disinfectants because they are effective, very easy to use, and stable. Examples include sodium dichloroisocyanurate, calcium hypochlorite, and chlorine dioxide, which vary in cost depending on location and availability. Among physico-chemical options, electrolysis producing mixed oxidants (MIOX) provides a highly effective disinfection option with a higher upfront cost but a very low cost over the long term. Among physical options, solar disinfection (SODIS) applications are effective but treat only a fixed volume of water at a time; they come with higher initial costs but very low ongoing costs. Additional effective disinfection techniques may be suitable depending on location, availability, and cost.

Relevance:

90.00%

Publisher:

Abstract:

In the last few years, technical debt has been used as a useful means of making the intrinsic cost of internal software quality weaknesses visible. This visibility is made possible by quantifying this cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a time period, of not eliminating a technical debt item. Previous works on technical debt are mainly focused on estimating principal and interest, and on performing a cost-benefit analysis. This cost-benefit analysis makes it possible to determine whether removing technical debt is profitable and to prioritize which technical debt items should be fixed first. Nevertheless, in these previous works technical debt is flat over time. However, introducing new factors into the estimation of technical debt may produce non-flat models that allow more accurate predictions. These factors should be used to estimate principal and interest, and to perform cost-benefit analyses related to technical debt. In this paper, we take a step forward by introducing uncertainty about the interest, together with the time frame, as factors, so that it becomes possible to depict a number of possible future scenarios. Estimates obtained without considering the possible evolution of the interest over time may be less accurate, as they consider simplistic scenarios without changes.
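To make the non-flat idea concrete, here is a minimal sketch, with entirely illustrative figures, of how the cost of keeping a technical debt item can be projected over a time frame under several interest-growth scenarios and compared with paying the principal now:

```python
# Sketch: projected cost of keeping vs. removing a technical debt item over a
# time frame, with the recurring interest following scenario-specific growth
# rates rather than staying flat. All figures are illustrative.
principal = 40.0      # person-hours to remove the debt item now
base_interest = 5.0   # person-hours of extra work per release if kept
releases = 8          # planning time frame, in releases

scenarios = {"optimistic": 0.00, "expected": 0.10, "pessimistic": 0.25}

for name, growth in scenarios.items():
    # Interest grows release by release as the affected code is touched more often.
    kept_cost = sum(base_interest * (1.0 + growth) ** r for r in range(releases))
    print(f"{name:12s} keep: {kept_cost:6.1f} h   remove now: {principal:.1f} h")
```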

Relevance:

80.00%

Publisher:

Abstract:

In this study, 20 Brazilian public schools were assessed regarding the implementation of good manufacturing practices and sanitation standard operating procedures. We used a checklist comprising 10 parts (facilities and installations, water supply, equipment and tools, pest control, waste management, personal hygiene, sanitation, storage, documentation, and training), for a total of 69 questions. The cost of implementing corrections for the nonconformities found was also determined, so that prioritization decisions could be based on technical data. The average nonconformity percentage at the schools with respect to the prerequisite program was 36%; 66% of the schools had inadequate installations, 65% inadequate waste management, 44% inadequate documentation, and 35% inadequate water supply and sanitation. The initial cost of the changes was estimated at US$24,438, with monthly investments of 1.55% of the amount initially required. This would add US$0.015 to the cost of each meal served over a one-year payback of the investment. Thus, we have concluded that such modifications are economically feasible and that they should be considered as technical requirements when prerequisite program implementation priorities are established.

Relevance:

80.00%

Publisher:

Abstract:

Coming Into Focus presents a needs assessment related to Iowans with brain injury and a state action plan to improve Iowa's ability to meet those needs. Support for this project came from a grant from the Office of Maternal and Child Health to the Iowa Department of Public Health, Iowa's lead agency for brain injury. The report describes the needs of people with brain injuries in Iowa, the status of services to meet those needs, and a plan for improving Iowa's system of supports. Brain injury can result from a skull fracture or penetration of the brain, a disease process such as tumor or infection, or a closed head injury, such as shaken baby syndrome. Traumatic brain injury is a leading cause of death and disability in children and young adults (Fick, 1997). In the United States there are as many as 2 million brain injuries per year, with 300,000 severe enough to require hospitalization. Some 50,000 lives are lost every year to TBI. Eighty to ninety thousand people have moderate to acute brain injuries that result in disabling conditions which can last a lifetime. These conditions can include physical impairments, memory defects, limited concentration, communication deficits, emotional problems, and deficits in social abilities. In addition to the personal pain and challenges to survivors and their families, the financial cost of brain injuries is enormous. For traumatic brain injuries, it is estimated that in 1995 Iowa hospitals charged some $38 million for acute care for injured persons. National estimates put the lifetime cost for one person with brain injury at $4 million (Schootman and Harlan, 1997). By this estimate, new injuries in 1995 could eventually cost over $7 billion. Dramatic improvements in medicine and the development of emergency response systems mean that more people who sustain brain injuries are being saved. How can we ensure that supports are available to this emerging population? We have called the report Coming Into Focus because, despite its prevalence and the personal and financial costs to society, brain injury is poorly understood. The Iowa Department of Public Health, the Iowa Advisory Council on Head Injuries State Plan Task Force, the Brain Injury Association of Iowa, and the Iowa University Affiliated Program have worked together to begin answering this question. A great deal of good information already existed; this project brought that information together, gathered new information where it was needed, and carried out a process for identifying what needs to be done in Iowa and what the priorities will be.

Relevance:

80.00%

Publisher:

Abstract:

Traditional inventory models focus on risk-neutral decision makers, i.e., characterizing replenishment strategies that maximize expected total profit, or equivalently, minimize expected total cost over a planning horizon. In this paper, we propose a framework for incorporating risk aversion in multi-period inventory models as well as multi-period models that coordinate inventory and pricing strategies. In each case, we characterize the optimal policy for various measures of risk that have been commonly used in the finance literature. In particular, we show that the structure of the optimal policy for a decision maker with exponential utility functions is almost identical to the structure of the optimal risk-neutral inventory (and pricing) policies. Computational results demonstrate the importance of this approach not only to risk-averse decision makers, but also to risk-neutral decision makers with limited information on the demand distribution.
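As a simplified, single-period illustration of the contrast drawn above (the paper itself treats multi-period inventory and pricing models), the following sketch compares the order quantity that maximizes expected profit with the one that maximizes expected exponential utility; the demand distribution, prices, and risk-aversion coefficient are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.normal(100.0, 30.0, 50_000).clip(min=0.0)  # assumed demand distribution

price, cost, salvage = 10.0, 6.0, 2.0

def profit(q, d):
    # Newsvendor-style profit: sales revenue + salvage of leftovers - purchase cost.
    return price * np.minimum(q, d) + salvage * np.maximum(q - d, 0.0) - cost * q

qs = np.arange(60, 161)

# Risk-neutral: maximize expected profit.
q_neutral = qs[np.argmax([profit(q, demand).mean() for q in qs])]

# Risk-averse: maximize expected exponential utility u(x) = -exp(-a*x).
a = 0.01  # assumed coefficient of absolute risk aversion
q_averse = qs[np.argmax([(-np.exp(-a * profit(q, demand))).mean() for q in qs])]

print(q_neutral, q_averse)  # the risk-averse order quantity is typically smaller
```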

Relevance:

80.00%

Publisher:

Abstract:

Background A whole-genome genotyping array has previously been developed for Malus using SNP data from 28 Malus genotypes. This array offers the prospect of high throughput genotyping and linkage map development for any given Malus progeny. To test the applicability of the array for mapping in diverse Malus genotypes, we applied the array to the construction of a SNP-based linkage map of an apple rootstock progeny. Results Of the 7,867 Malus SNP markers on the array, 1,823 (23.2 %) were heterozygous in one of the two parents of the progeny, 1,007 (12.8 %) were heterozygous in both parental genotypes, whilst just 2.8 % of the 921 Pyrus SNPs were heterozygous. A linkage map spanning 1,282.2 cM was produced, comprising 2,272 SNP markers, 306 SSR markers and the S-locus. The length of the M432 linkage map was increased by 52.7 cM with the addition of the SNP markers, whilst marker density increased from 3.8 cM/marker to 0.5 cM/marker. Just three regions in excess of 10 cM remain where no markers were mapped. We compared the positions of the mapped SNP markers on the M432 map with their predicted positions on the ‘Golden Delicious’ genome sequence. A total of 311 markers (13.7 % of all mapped markers) mapped to positions that conflicted with their predicted positions on the ‘Golden Delicious’ pseudo-chromosomes, indicating the presence of paralogous genomic regions or misassignments of genome sequence contigs during the assembly and anchoring of the genome sequence. Conclusions We incorporated data for the 2,272 SNP markers onto the map of the M432 progeny and have presented the most complete and saturated map of the full 17 linkage groups of M. pumila to date. The data were generated rapidly in a high-throughput semi-automated pipeline, permitting significant savings in time and cost over linkage map construction using microsatellites. The application of the array will permit linkage maps to be developed for QTL analyses in a cost-effective manner, and the identification of SNPs that have been assigned erroneous positions on the ‘Golden Delicious’ reference sequence will assist in the continued improvement of the genome sequence assembly for that variety.

Relevance:

80.00%

Publisher:

Abstract:

The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
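The following is a minimal sketch of the incremental time-stepping idea, with toy stand-in functions rather than the actual radiation code: a cheap calculation based on the optically thin, cloud-sensitive terms is repeated every model step, and its change since the last full calculation is added to that full heating rate.

```python
import numpy as np

def full_heating_rate(cloud):
    # Stand-in for the expensive full correlated-k heating-rate calculation.
    return -1.5 + 0.8 * cloud

def thin_heating_rate(cloud):
    # Stand-in for the cheap calculation using only the optically thin,
    # cloud-sensitive monochromatic terms.
    return 0.8 * cloud

cloud_fraction = np.array([0.2, 0.25, 0.4, 0.6, 0.55, 0.3])  # one value per model step
full_every = 3  # call the full radiation scheme every 3 model steps

for step, cloud in enumerate(cloud_fraction):
    if step % full_every == 0:
        hr_full = full_heating_rate(cloud)     # last full calculation
        thin_ref = thin_heating_rate(cloud)    # thin-term value at that time
        heating = hr_full
    else:
        # Add the increment from the fast-changing cloud field since the last full call.
        heating = hr_full + (thin_heating_rate(cloud) - thin_ref)
    print(f"step {step}: heating rate {heating:+.2f} K/day")
```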

Relevance:

80.00%

Publisher:

Abstract:

Maine implemented a hospital rate-setting program in 1984, at approximately the same time as Medicare started the Prospective Payment System (PPS). This study examines the effectiveness of the program in controlling costs over the period 1984-1989. Hospital costs in Maine are compared to costs in 36 non-rate-setting states and 11 other rate-setting states. Changes in cost per equivalent admission, cost per adjusted patient day, cost per capita, admissions, and length of stay are described and analyzed using multivariate techniques. A number of supply and demand variables that were expected to influence costs independently of rate-setting were controlled for in the study. Results indicate the program was effective in containing costs measured in terms of cost per adjusted patient day; however, this was not true for the other two cost variables. The average length of stay in Maine hospitals increased during the period, indicating an association with rate-setting. Several supply variables, especially the number of beds per 1,000 population, were strongly associated with the cost and use of hospitals.

Relevance:

80.00%

Publisher:

Abstract:

Aircraft Operator Companies (AOCs) are always willing to keep the cost of a flight as low as possible. These costs can be modelled as a function of fuel consumption, time of flight, and fixed costs (overflight charges, maintenance, etc.), which are strongly dependent on atmospheric conditions, the presence of winds, and aircraft performance. For this reason, much research effort is being put into the development of numerical and graphical techniques for defining the optimal trajectory. This paper presents a different approach to accommodating AOC preferences, adding value to their activities, through the development of a tool called the aircraft trajectory simulator. This tool is able to simulate the actual flight of an aircraft under the constraints imposed. The simulator is based on a point-mass model of the aircraft. The aim of this paper is to evaluate the errors of a 3DoF aircraft model with BADA data against real data from a Flight Data Recorder (FDR). Therefore, to validate the proposed simulation tool, a comparative analysis of the state variable vector is made between an actual flight and the same flight reproduced in the simulator. Finally, an example of a cruise phase is presented in which a conventional levelled flight is compared with a continuous climb flight. The comparison results show the potential benefits of following user-preferred routes for commercial flights.
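As an illustration of the kind of trip-cost function referred to above, the following is a minimal sketch with purely illustrative prices and fuel/time figures (not the paper's model or BADA data):

```python
# Minimal sketch of an AOC trip-cost function of the form described in the
# abstract: cost = fuel_price * fuel_burned + time_cost * flight_time + fixed.
# All prices and figures are illustrative assumptions.

def trip_cost(fuel_kg, time_min, fuel_price=0.85, time_cost=25.0, fixed=1200.0):
    """Total cost in euros: fuel (eur/kg), time-related costs (eur/min), fixed charges."""
    return fuel_price * fuel_kg + time_cost * time_min + fixed

# Comparing a conventional levelled cruise with a continuous climb profile
# (hypothetical fuel/time figures for the same city pair).
levelled = trip_cost(fuel_kg=5200.0, time_min=118.0)
continuous_climb = trip_cost(fuel_kg=5050.0, time_min=121.0)
print(levelled, continuous_climb)
```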

Relevance:

80.00%

Publisher:

Abstract:

Epoxidized soybean oil (ESO) is a chemical product long used as a co-stabilizer and secondary plasticizer for poly(vinyl chloride) (PVC), that is, as a material whose maximum amount in a PVC compound is limited. Its use as a primary plasticizer, i.e., as the main plasticizing component of a PVC compound, and as a feedstock for other plasticizers from renewable sources has grown in recent years, mainly owing to performance improvements and the lower cost of ESO compared with traditional plasticizers. The epoxidation of soybean oil is a well-known reaction that takes place in two liquid phases, with reactions in both phases and mass transfer between them. The most widely used industrial process relies on in-situ formation of performic acid through the gradual addition of the main reagent, hydrogen peroxide, to a stirred mixture of formic acid and refined soybean oil. Industrially, the process is run in batch mode, with the hydrogen peroxide feed controlled so that heat generation does not exceed the cooling capacity of the system. The cycle takes between 8 and 12 hours to reach the desired conversion, making production capacity dependent on relatively heavy investments in mechanically stirred reactors, which pose several safety risks. Previous studies have not explored in depth some potential areas for optimization and for reducing process limitations, such as intensifying heat transfer, which would allow the total reaction time to be shortened. This work experimentally evaluates, and proposes a model for, the epoxidation of soybean oil under conditions of maximum heat removal, which allows all the reagents to be added at the start of the reaction, simplifying the process. A model was fitted to the experimental data. The heat-transfer coefficient, whose theoretical estimation can incur significant errors, was calculated from empirical data and included in the model, adding an important source of variability relative to previous models. The study proposes a theoretical basis for potential alternatives to the processes currently adopted, seeking to understand the conditions that are necessary and feasible at industrial scale for shortening the reaction cycle, and it may also support future studies on implementing a continuous reactor, which would be more efficient and safer, for this process.
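As an illustration of how an overall heat-transfer coefficient can be backed out of plant measurements rather than estimated theoretically, here is a minimal sketch based on a cooling-jacket energy balance and a log-mean temperature difference; all numbers are hypothetical, not the study's data:

```python
import math

# Sketch: estimating the overall heat-transfer coefficient U of a jacketed
# reactor from plant measurements, using Q = U * A * dT_lm. All numbers are
# hypothetical.

m_dot = 6.0              # cooling-water flow rate, kg/s
cp = 4186.0              # water heat capacity, J/(kg*K)
t_in, t_out = 25.0, 33.0 # jacket inlet/outlet temperature, degC
t_reactor = 60.0         # reaction mass temperature, degC (assumed well mixed)
area = 12.0              # jacket heat-transfer area, m^2

q = m_dot * cp * (t_out - t_in)           # heat removed by the jacket, W

# Log-mean temperature difference between reactor contents and cooling water.
dt1, dt2 = t_reactor - t_in, t_reactor - t_out
dt_lm = (dt1 - dt2) / math.log(dt1 / dt2)

u = q / (area * dt_lm)                    # overall coefficient, W/(m^2*K)
print(f"Q = {q / 1000:.1f} kW, U ~ {u:.0f} W/(m2*K)")
```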