968 results for Cost estimation of environmental protection
Abstract:
Bridge construction responds to the need for environmentally friendly design of motorways, facilitating passage through sensitive natural areas and the bypassing of urban areas. However, according to numerous research studies, bridge construction suffers substantial budget overruns. It is therefore necessary that decision makers have, early in the planning process, reliable estimates of the final cost based on previously constructed projects. At the same time, the current European financial crisis reduces the capital available for investment, and financial institutions are even less willing to finance transportation infrastructure. Consequently, it is all the more necessary today to estimate the budget of high-cost construction projects, such as road bridges, with reasonable accuracy, so that state funds are invested with lower risk and projects are designed with the highest possible efficiency. In this paper, a Bill-of-Quantities (BoQ) estimation tool for road bridges is developed to support the decisions made at the preliminary planning and design stages of highways. Specifically, a Feed-Forward Artificial Neural Network (ANN) with a hidden layer of 10 neurons is trained to predict the superstructure material quantities (concrete, pre-stressed steel and reinforcing steel) using the width of the deck, the adjusted length of span or cantilever and the type of bridge as input variables. The training dataset includes actual data from 68 recently constructed concrete motorway bridges in Greece. According to the relevant metrics, the developed model captures the complex interrelations in the dataset very well and demonstrates strong generalisation capability. Furthermore, it outperforms the linear regression models developed for the same dataset. The proposed cost estimation model therefore stands as a useful and reliable tool for the construction industry, as it enables planners to reach informed decisions on the technical and economic planning of concrete bridge projects from their early implementation stages.
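For illustration only (the abstract does not provide code), a minimal sketch of this kind of model using scikit-learn is shown below; the feature encoding and the numbers are hypothetical placeholders, not the paper's 68-bridge dataset.

```python
# Minimal sketch of a feed-forward ANN of the kind described above:
# one hidden layer of 10 neurons mapping (deck width, adjusted span
# length, bridge type) to superstructure material quantities.
# All data below are illustrative placeholders, not the paper's dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X columns: deck width [m], adjusted span/cantilever length [m],
# bridge type encoded as an integer category (assumed encoding).
X = np.array([[13.5, 32.0, 0],
              [11.0, 45.0, 1],
              [14.2, 28.0, 0],
              [12.0, 60.0, 2]])
# y columns: concrete [m3], pre-stressed steel [kg], reinforcing steel [kg]
y = np.array([[520.0, 11200.0, 68000.0],
              [610.0, 16800.0, 74000.0],
              [480.0,  9500.0, 61000.0],
              [750.0, 24000.0, 92000.0]])

# In practice the targets would also be scaled and the model validated
# on held-out bridges; this sketch only shows the model structure.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[12.5, 40.0, 1]]))  # predicted material quantities
```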
Abstract:
Objectives The increasing prevalence of overweight and obesity worldwide continues to compromise population health and creates a wider societal cost in terms of productivity loss and premature mortality. Despite extensive international literature on the cost of overweight and obesity, findings are inconsistent between Europe and the USA, and particularly within Europe. Studies vary on issues of focus, specific costs and methods. This study aims to estimate the healthcare and productivity costs of overweight and obesity for the island of Ireland in 2009, using both top-down and bottom-up approaches.
Methods Costs were estimated across four categories: healthcare utilisation, drug costs, work absenteeism and premature mortality. Healthcare costs were estimated using Population Attributable Fractions (PAFs). PAFs were applied to national cost data for hospital care and drug prescribing. PAFs were also applied to social welfare and national mortality data to estimate productivity costs due to absenteeism and premature mortality.
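For context (the abstract does not quote the equations), population attributable fractions of this kind are usually computed with the standard Levin formula for a risk factor with prevalence p and relative risk RR, and the attributable cost is then obtained by scaling the corresponding national cost; the study's exact specification may differ:

\mathrm{PAF} = \frac{p\,(RR - 1)}{1 + p\,(RR - 1)}, \qquad \text{attributable cost} = \mathrm{PAF} \times \text{total disease cost}.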
Results The healthcare costs of overweight and obesity in 2009 were estimated at €437 million for the Republic of Ireland (ROI) and €127.41 million for Northern Ireland (NI). Productivity loss due to overweight and obesity was up to €865 million for ROI and €362 million for NI. The main drivers of healthcare costs are cardiovascular disease, type II diabetes, colon cancer, stroke and gallbladder disease. In terms of absenteeism, low back pain is the main driver in both jurisdictions, and for productivity loss due to premature mortality the primary driver of cost is coronary heart disease.
Conclusions The costs are substantial, and urgent public health action is required in Ireland to address the problem of increasing prevalence of overweight and obesity, which if left unchecked will lead to unsustainable cost escalation within the health service and unacceptable societal costs.
Abstract:
This paper describes the first use of inter-particle force measurement in reworked aerosols to better understand the mechanics of dust deflation and its consequent ecological ramifications. Dust is likely to carry hydrocarbons and micro-organisms, including human pathogens and cultured microbes, and thereby poses a threat to plants, animals and humans. Present-day global aerosol emissions are substantially greater than in 1850; however, the projected influx rates are highly disputable. This uncertainty, in part, has roots in the lack of understanding of deflation mechanisms. A growing body of literature shows that, if carbon emissions continue to increase, plant transpiration drops and soil water retention is enhanced, allowing more greenery to grow and less dust to flux. On the other hand, a small but important body of geochemistry literature shows that increasing emission and global temperature lead to extreme climates, decalcification of surface soils containing soluble carbonate polymorphs and hence a greater chance of deflation. The consistency of loosely packed reworked silt provides background data against which the resistance of dust’s bonding components (carbonates and water) can be compared. The use of macro-scale phenomenological approaches to measure dust consistency is trivial. Instead, consistency can be measured in terms of the inter-particle stress state. This paper describes a semi-empirical parametrisation of the inter-particle cohesion forces in terms of the balance of contact-level forces at the instant of particle motion. We put forward the hypothesis that the loss of Ca2+-based pedogenic salts is responsible for much of the dust influx and that surficial drying plays a less significant role.
Abstract:
Through a case-study analysis of Ontario's ethanol policy, this thesis addresses a number of themes that are consequential to policy and policy-making: spatiality, democracy and uncertainty. First, I address the 'spatial debate' in Geography pertaining to the relevance and affordances of a 'scalar' versus a 'flat' ontoepistemology. I argue that policy is guided by prior arrangements, but is by no means inevitable or predetermined. As such, scale and network are pragmatic geographical concepts that can effectively address the issue of the spatiality of policy and policy-making. Second, I discuss the democratic nature of policy-making in Ontario through an examination of the spaces of engagement that facilitate deliberative democracy. I analyze to what extent these spaces fit into Ontario's environmental policy-making process, and to what extent they were used by various stakeholders. Last, I take seriously the fact that uncertainty and unavoidable injustice are central to policy, and examine the ways in which this uncertainty shaped the specifics of Ontario's ethanol policy. Ultimately, this thesis is an exercise in understanding sub-national environmental policy-making in Canada, with an emphasis on how policy-makers tackle the issues they are faced with in the context of environmental change, political-economic integration, local priorities, individual goals, and irreducible uncertainty.
Abstract:
So far, in the bivariate set-up, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partners’ risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
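For reference (a standard form recalled here, not quoted from the abstract), the vector hazard rate of Johnson and Kotz (1975) for a joint survival function \bar F(x_1, x_2) is usually written as the negative gradient of its logarithm:

h(x_1, x_2) = \left( -\frac{\partial}{\partial x_1} \log \bar F(x_1, x_2),\; -\frac{\partial}{\partial x_2} \log \bar F(x_1, x_2) \right),

and the approach above builds on this quantity for data carrying multiple causes of failure.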
Abstract:
In networks with small buffers, such as optical packet switching (OPS) based networks, the convolution approach (CA) is presented as one of the most accurate methods used for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic and bufferless networks, the enhanced convolution approach (ECA) is a good solution. However, both methods (CA and ECA) present a high computational cost for a high number of connections. Two new mechanisms (UMCA and ISCA), based on the Monte Carlo method, are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
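As a generic illustration of Monte Carlo probability estimation in this setting (not the UMCA or ISCA mechanisms themselves, whose details the abstract does not give), the sketch below estimates the overflow probability of a bufferless link carrying a hypothetical mix of heterogeneous on-off connections.

```python
# Illustrative Monte Carlo estimate of the overflow probability on a
# bufferless link carrying heterogeneous on-off connections.  Generic
# sketch only; not the UMCA/ISCA mechanisms proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Each connection class: (number of connections, peak rate, activity prob.)
classes = [(40, 1.0, 0.3), (10, 5.0, 0.1)]   # hypothetical traffic mix
capacity = 40.0                              # link capacity (same rate units)

def overflow_probability(n_samples=200_000):
    load = np.zeros(n_samples)
    for n_conn, peak, p_on in classes:
        # Number of simultaneously active connections per class is binomial.
        active = rng.binomial(n_conn, p_on, size=n_samples)
        load += active * peak
    # In a bufferless link, loss occurs whenever instantaneous load exceeds capacity.
    return np.mean(load > capacity)

print(f"Estimated overflow probability: {overflow_probability():.2e}")
```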
Abstract:
The theta-logistic is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation. The parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify factors disposing towards accurate estimation of theta by simulation of populations regulated according to the theta-logistic model. Factors investigated were measurement error, environmental perturbation and length of time series. Large measurement errors bias estimates of theta towards zero. Where estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta. Where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large. By contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results have practical conclusions for the design of long-term population surveys. Estimation of the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of time series beyond 10 or 20 years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for estimation of theta. (C) 2008 Elsevier B.V. All rights reserved.
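A minimal sketch of the kind of simulation described, assuming the commonly used theta-Ricker discrete-time form of the theta-logistic; parameter values and noise levels are illustrative, not those of the study:

```python
# Theta-logistic (theta-Ricker) population with environmental noise,
# observed with multiplicative measurement error.  Illustrative values only.
import numpy as np

rng = np.random.default_rng(1)

r0, K, theta = 0.5, 1000.0, 1.0   # intrinsic growth rate, carrying capacity, shape
sigma_env, sigma_obs = 0.1, 0.2   # environmental and measurement error SDs
T = 30                            # length of the time series (years)

N = np.empty(T)
N[0] = 200.0
for t in range(T - 1):
    env = rng.normal(0.0, sigma_env)                    # environmental perturbation
    N[t + 1] = N[t] * np.exp(r0 * (1.0 - (N[t] / K) ** theta) + env)

counts = N * np.exp(rng.normal(0.0, sigma_obs, size=T))  # observed (noisy) counts
print(np.round(counts[:5]))
```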
Abstract:
The paper presents the techno-economic modelling of the CO2 capture process in coal-fired power plants. An overall model is being developed to compare carbon capture and sequestration options at locations within the UK, and for studies of the sensitivity of the cost of disposal to changes in the major parameters of the most promising solutions identified. Technological options for CO2 capture have been studied and cost estimation relationships (CERs) calculated for the chosen options. The models created relate to capital, operation and maintenance costs. A total annualised cost per unit of plant electricity output and per amount of CO2 avoided has also been developed. The influence of interest rates and plant life has been analysed as well. The CERs are included as an integral part of the overall model.
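For context (the abstract does not state the equations), annualised-cost and CO2-avoided figures of this type are conventionally built from the capital recovery factor and a comparison with a reference plant without capture; the exact definitions used in the paper may differ:

\mathrm{CRF} = \frac{i\,(1+i)^{n}}{(1+i)^{n} - 1}, \qquad \text{annualised cost} = \mathrm{CRF} \times \text{capital cost} + \text{O\&M},

\text{cost of CO}_2\ \text{avoided} = \frac{COE_{\text{capture}} - COE_{\text{ref}}}{e_{\text{ref}} - e_{\text{capture}}},

where i is the interest rate, n the plant life in years, COE the cost of electricity (per MWh) and e the specific CO2 emission rate (tCO2 per MWh).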
Abstract:
Hidden Markov Models (HMMs) have been successfully applied to different modelling and classification problems from different areas in recent years. An important step in using HMMs is the initialisation of the model parameters, as the subsequent learning of the HMM’s parameters depends on these values. This initialisation should take into account knowledge about the problem addressed and should also use optimisation techniques to estimate the best initial parameters given a cost function and, consequently, the best log-likelihood. This paper proposes the initialisation of Hidden Markov Model parameters using the optimisation algorithm Differential Evolution, with the aim of obtaining the best log-likelihood.
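A minimal sketch of this idea, using a hypothetical 2-state, 2-symbol discrete HMM with a scaled forward-algorithm likelihood and scipy's differential_evolution; the paper's model structure and cost function are not specified here:

```python
# Differential Evolution search for HMM parameters that maximise the
# log-likelihood of an observation sequence (minimise its negative).
# Toy 2-state, 2-symbol parameterisation and data; illustrative only.
import numpy as np
from scipy.optimize import differential_evolution

obs = np.array([0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0])  # toy observation sequence

def neg_log_likelihood(params):
    pi0, a00, a11, b00, b10 = params
    pi = np.array([pi0, 1.0 - pi0])                    # initial state distribution
    A = np.array([[a00, 1.0 - a00], [1.0 - a11, a11]]) # transition matrix
    B = np.array([[b00, 1.0 - b00], [b10, 1.0 - b10]]) # emission matrix
    alpha = pi * B[:, obs[0]]                          # scaled forward recursion
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return -log_lik

bounds = [(0.01, 0.99)] * 5                            # keep probabilities in (0, 1)
result = differential_evolution(neg_log_likelihood, bounds, seed=0)
print("initial HMM parameters:", np.round(result.x, 3))
print("best log-likelihood:", -result.fun)
```

The parameters found this way would then serve as the starting point for standard Baum-Welch training.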
Abstract:
It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than in physical world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere, controlled using the wand joystick, to the position where they thought a previously displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference of real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of the sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens’ power law indicated a high degree of accuracy during the estimation of object locations. This accuracy was higher than in other studies which were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy when estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
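For reference (the fitted constants are not reproduced here), Stevens' power law relates the estimated distance \hat d to the physical distance d as

\hat d = k\, d^{a} \quad\Longleftrightarrow\quad \log \hat d = \log k + a \log d,

so the exponent a and constant k can be obtained by ordinary least squares on the log-transformed data; values of a and k close to 1 indicate accurate distance estimation.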
Abstract:
This article combines institutional and resource-based arguments to show that the institutional distance between the home and the host country, and the headquarters’ financial performance, have a relevant impact on the environmental standardization decision in multinational companies. Using a sample of 135 multinational companies in three different industries with headquarters and subsidiaries based in the USA, Canada, Mexico, France, and Spain, we find that a high environmental institutional distance between headquarters’ and subsidiaries’ countries deters the standardization of environmental practices. On the other hand, high-profit headquarters are willing to standardize their environmental practices, rather than taking advantage of countries with lax environmental protection to undertake more pollution-intensive activities. Finally, we show that headquarters’ financial performance also exerts a moderating effect on the relationship between the environmental institutional distance between countries and environmental standardization within the multinational company.
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
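As a sketch consistent with the description above (the weighting and exact form used in the paper are assumptions here), the cost function can be written as

J(\mathbf{p}) = \sum_{k} \frac{\left[ D^{\mathrm{obs}}(z_k) - D^{\mathrm{param}}(z_k; \mathbf{p}) \right]^{2}}{\sigma_k^{2}},

where D^{\mathrm{obs}}(z_k) is the ‘observed’ GWD inferred from observations at level z_k, D^{\mathrm{param}} is the drag computed by the parametrisation scheme with gravity wave parameters \mathbf{p}, and \sigma_k is an error weighting; the genetic algorithm searches for the \mathbf{p} that minimizes J.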