954 results for "Investment cost minimisation"
Abstract:
During tunnel construction, rock mass classification is widely used in tunnel design and construction, and it provides the basic information for estimating tunnel investment and ensuring safety. Rapid classification of the rock mass is therefore important to avoid delaying construction. At present, tunnel engineers usually design a tunnel from initial survey data obtained by probe drilling, so the initial surrounding-rock classification often differs considerably from the conditions actually encountered during construction. Because the initial classification lacks credibility, surrounding-rock classification must be carried out in real time during construction and the results fed back to the designers and constructors. Finding a quick wall-rock classification method is therefore important both for keeping the project on schedule and for avoiding construction delays. Many, though not all, tunnels and underground works suffer collapse during construction. Although an accidental collapse in a large civil or geotechnical engineering project may appear to be a local event, when it occurs it can cause casualties, disrupted production, construction delays, environmental damage, and additional capital cost. How to prevent underground structures from collapsing, and how to handle a collapse when it does occur, has long been a difficult problem in both theory and practice, so it is important to develop effective solutions and technical measures to prevent and control collapse. Based on tunnel collapses that occurred in Chengde, this paper analyses the main mechanisms leading to collapse and summarises the treatment methods applied when collapse happened; this may be useful for future tunnel construction in the Chengde area. The work is based on surrounding-rock classification and tunnel support tasks carried out during tunnel construction in the Chengde area.
It aims to solve four important problems in tunnel design and construction. 1) The relationship between rock rebound strength and uniaxial compressive strength. We first performed rebound tests on the tunnel face, then selected rock samples for point-load testing; from the test records we sought the relationship between rebound strength and uniaxial compressive strength. 2) The relationship between the value [BQ] and the value Q. Field investigations were carried out at the tunnel face to record rock type, rock strength, degree of weathering, rock-mass structure, joint condition, groundwater condition, and so on. The surrounding rock was then classified using the two classification methods, and on this basis the relationship between [BQ] and Q was analysed. 3) The mechanisms leading to tunnel collapse and their treatment in the Chengde area. Based on tunnel collapses that occurred in Chengde, the paper analyses the main causes of collapse and summarises the treatment methods used when collapse happened. 4) The properties of the steel frame grid, obtained by numerical simulation. A 3D numerical model of the steel frame grid was first built in ADINA to determine its mechanical properties by simulation. Then, based on a geological structure model of the rock mass, a numerical model of the steel frame grid installed in the tunnel was built in FLAC3D and the tunnel construction process was simulated, so that the support effect in the tunnel could be evaluated from the numerical results.
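The first task, relating rebound strength to uniaxial compressive strength, is in essence a curve-fitting problem. A minimal sketch is shown below; the paired measurements are hypothetical, not from the thesis, and the power-law form is one commonly assumed empirical shape, fitted by least squares in log space:

```python
import numpy as np

# Hypothetical paired measurements (NOT from the thesis): Schmidt rebound
# number R from tunnel-face tests and uniaxial compressive strength UCS
# (MPa) estimated from point-load tests on rock pieces.
R = np.array([28.0, 32.0, 36.0, 40.0, 45.0, 50.0, 55.0])
UCS = np.array([31.0, 42.0, 55.0, 70.0, 92.0, 118.0, 149.0])

# Assume a power-law relation UCS = a * R**b; fitting it is a linear
# least-squares problem after taking logarithms of both sides.
b, log_a = np.polyfit(np.log(R), np.log(UCS), 1)
a = np.exp(log_a)

def ucs_from_rebound(r):
    """Predict uniaxial compressive strength (MPa) from a rebound number."""
    return a * r ** b

print(f"UCS ~= {a:.4f} * R^{b:.2f}")
```

With real test records, the same fit gives a site-specific correlation that can be applied at the face for rapid classification.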
Abstract:
In the engineering reinforcement of rock and soil masses, engineers must consider how to obtain a better reinforcing effect at a lower reinforcing expense, which is, in fact, the aim of reinforcement design. To accomplish this, they must research not only the reinforcing materials and their structures but also several important geological factors, such as the structure and properties of the rock and soil mass. This paper studies and discusses how to improve the reinforcing effect according to engineering geomechanical principles in the reinforcement of engineering soil and rock masses. The author studies the theory, technology, and practice of geotechnical reinforcement based on engineering geomechanics, taking as examples the soil treatment at Zhengzhou Airport, the analysis of slope reinforcement on the left bank of the Wuqiangxi Hydropower Station, and the reinforcement design of the No. 102 landslide and a distinctive sand-slide slope on the Sichuan-Tibet Highway. For convenience of discussion, the paper comprises two parts. In the first part (Chapters 1 to 5), approaching the research and its application from the viewpoint of soil mass engineering geomechanics, the author mainly discusses the reinforcement of soft ground by dynamic consolidation and its application. In the second part (Chapters 6 to 11), new technologies for rock slope reinforcement and their application are discussed. The author finds that adopting the principles and methods of rock mass engineering geomechanics not only yields a better reinforcing effect but also leads to new reinforcing technologies. Zhengzhou Airport is an important airport in the Central Plains. It lies on Yellow River alluvial deposits, and the stratum structure is complex and heterogeneous.
The airport area is very large, so differential settlement can easily occur, damaging the airport and potentially causing aircraft accidents; since there was no comparable experience of treating such a foundation, foundation treatment became a principal problem. During the treatment process, dynamic compaction was adopted after comparison with other methods using the theory of synthetic integration. Dynamic compaction is an important foundation consolidation method and was used successfully at Zhengzhou Airport. For a fill foundation, controlling the thickness of the fill so that the treatment meets the design requirements, and determining the optimum fill thickness, is a difficult problem. To address it, the author proposed a calculation method for evaluating the fill thickness. The method considers not only the self-settlement of the fill but also the settlement of the ground surface under the applied load, so as to ensure that the settlement occurring during the service period satisfies the design requirements. The method was shown to be correct by using it to choose a reasonable dynamic compaction energy for foundation treatment. At the same time, to examine the effect of dynamic compaction, many monitoring methods were adopted, including static loading tests, modulus-of-resilience tests, deep pore-pressure tests, static cone penetration tests, and measurement of pore-volume variation. From these tests, the author summarised the pattern of accumulation and dissipation of pore pressure in Yellow River alluvial deposits under dynamic compaction, correctly delineated the property changes of silt and clay under dynamic compaction, determined the bearing capacity of the foundation after treatment, and assessed the reinforcing effect of dynamic consolidation from the variation of soil particles at the microscopic scale and from soil density parameters.
It can be concluded that the compactness of the soil is proportional to the dynamic compaction energy. This conclusion provides a reference for research on the "problem of soil structure: the central problem of soil mechanics in the 21st century". Strengthening rock masses is also important in water conservancy and electric power engineering. Slip-resistant piles and anchoring adits filled with reinforced concrete are usually adopted in engineering practice to strengthen rock masses and are very important, but they have deficiencies: the weakest section cannot be highlighted, monitoring is inconvenient, and the diameter of the pile or adit is very large. The author and his supervisor, Professor Yang Zhifa, invented the prestressed slip-resistant pile and the prestressed concrete-filled anchoring adit, exploiting the superior tensile behaviour of prestressed structures (this invention is to be published). These inventions overcome the disadvantages of conventional slip-resistant piles and concrete-filled anchoring adits and simultaneously provide engineering prospecting, strengthening, drainage, and monitoring functions, so they achieve a better strengthening effect, are more convenient to monitor, and are more economical than traditional methods. Drainage is an important factor in the treatment of rock masses and slopes. Since traditional drainage holes are often clogged, leading to incidents, Professor Yang Zhifa invented a method and device for guided seepage using fibre bundles; applying it in prestressed slip-resistant piles and prestressed concrete-filled anchoring adits would be very effective.
In this paper, the author takes the concrete-filled anchoring adits used to strengthen the Wuqiangxi left bank as an example to simulate the strengthening effect after consolidation by prestressed slip-resistant piles, and takes the No. 102 landslide on the Sichuan-Tibet Highway as an example to simulate the application of slip-resistant piles and the new drainage technology. The author also proposes a treatment method for flowing sand on the Sichuan-Tibet Highway, which will benefit the study of strengthening similar works. The paper contains five novelties arising from the author's theoretical study and engineering practice: 1. Summarising the pattern of pore-water-pressure accumulation and dissipation in Yellow River alluvial and diluvial soil under dynamic consolidation, which is instructive for future construction under analogous engineering geological conditions and had not been researched before. 2. Putting forward the concept of microscopic density D, based on the microstructural study of soil samples. Using D to assess the reinforcing effect of dynamic consolidation is shown to be appropriate by comparing the D values of Zhengzhou Airport's ground soil before and after dynamic consolidation, providing a more convenient assessment method for engineering practice. 3. Based on in-depth research into soil mass engineering geology, engineering rock and soil science, and soil mechanics, together with considerable field experiments, improving the consolidation method in airport construction: from the conventional method, which first dynamically compacts the original ground surface and then fills and consolidates or compacts layer by layer, to an upgraded method, which performs dynamic consolidation after placing the full fill to a certain earth-filling depth.
The result of the dynamic consolidation not only complies with the specifications but also reduces the soil treatment investment by 10 million RMB. 4. Proposing a method for calculating the height of the filled soil by estimating the potential displacement of the original ground surface and the filled soil under the possible load, selecting an appropriate dynamic compaction energy, and determining the effective height of the filled soil; the method proved effective and scientific. 5. Following the thought of the Engineering Geomechanics Meta-Synthetic Methodology (EGMS), patenting two inventions (now at the publication stage, with Professor Yang Zhi-fa, the cooperative tutor, and others) that integrate multiple functions, namely engineering geological investigation, reinforcement, drainage, and strength remediation, in one body, from the viewpoint of the breakage mechanism of rock slopes.
Abstract:
With the continual increase in both the amount of wastewater treated and the treatment rate, more and more sewage sludge is being produced. An economic assessment of the different sludge disposal and treatment technologies led to the conclusion that composting is an effective way to make sewage sludge harmless, stable, and reusable as a resource. Sludge is normally treated by landfill, composting, incineration, and so on, at a cost of roughly 300-1,000 yuan per ton. Among these, landfill is the cheapest and easiest to operate, but it merely postpones pollution rather than eliminating it. Incineration reduces the sludge volume dramatically, but it requires a very high initial investment and is difficult to keep running. Composting yields a usable product that offsets the treatment cost, making it the most economical option; compost is safe when correctly used, so composting is an important way to treat sewage sludge. Oxygen is a key control factor in aerobic composting, with strong effects on temperature and on the microorganisms. The gas-gathering and transfer system of an online oxygen-monitoring system for composting was improved to prolong the monitoring system's running period. The changes in oxygen concentration at the various aerobic composting stages were studied, leading to the conclusions that the oxygen concentration changes much faster while it is rising than while it is declining, and that the better the aerobic condition, the sooner the monitoring system starts to work.
The minimum oxygen concentration during a ventilation cycle often falls at the beginning of the composting period and then rises; at the same time, the oxygen concentration changes quickly in the early composting stage (temperature increase), much more slowly in the middle stage (continuous thermophilic stage), and hardly at all in the late stage (temperature decline). Oxygen concentrations were measured with the real-time online monitoring system, and water content was analysed from samples taken during composting. It was found that water content (WC) and oxygen concentration both influence the composting process, and that the controlling factor varies between stages. Essentially, water and oxygen control composting because water restricts the transfer of oxygen into the composting substrate, and the most influential factor for both WC and oxygen is the composition of the composting pile. In the temperature-increase stage, few microorganisms are present and their activity is low, so oxygen can meet microbial demand and WC is the dominant factor. In the high-temperature (continuous thermophilic) stage, the process is controlled by both WC and oxygen, essentially by WC, though their influence is not strong. In the temperature-decline stage, WC and oxygen have little influence on the process. It was also found that the process can differ even with identical components, so mixing the components evenly can prevent WC from concentrating in particular spots and keeps the pile aerobic. In short, the aerobic state is the most important factor in the composting process, and suitable bulking material helps in controlling it.
Abstract:
N.W. Hardy and M.H. Lee. The effect of the product cost factor on error handling in industrial robots. In Maria Gini, editor, Detecting and Resolving Errors in Manufacturing Systems. Papers from the 1994 AAAI Spring Symposium Series, pages 59-64, Menlo Park, CA, March 1994. The AAAI Press. Technical Report SS-94-04, ISBN 0-929280-60-1.
Abstract:
Previous research argues that large non-controlling shareholders enhance firm value because they deter expropriation by the controlling shareholder. We propose that the conflicting incentives faced by large shareholders may induce a nonlinear relationship between the relative size of large shareholdings and firm value. Consistent with this prediction, we present evidence that there are costs of having a second (and third) largest shareholder, especially when the largest shareholdings are similar in size. Our results are robust to various relative size proxies, firm performance measures, model specifications, and potential endogeneity issues.
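The nonlinearity hypothesised here is typically tested by adding a squared term to the firm-value regression. The sketch below is purely illustrative (synthetic data and coefficients, not the paper's specification or sample): a significant negative coefficient on the squared relative-size term would indicate an inverted-U relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (NOT the paper's): firm value q as an inverted-U
# function of the second-largest shareholder's stake s relative to
# the largest shareholder's stake, plus noise.
s = rng.uniform(0.0, 1.0, 500)
q = 1.0 + 0.8 * s - 1.0 * s**2 + rng.normal(0.0, 0.05, 500)

# Regress q on [1, s, s^2] by ordinary least squares.
X = np.column_stack([np.ones_like(s), s, s**2])
beta, *_ = np.linalg.lstsq(X, q, rcond=None)
print(beta)  # close to [1.0, 0.8, -1.0] by construction
```

The curvature implies firm value peaks at an interior relative size, consistent with costs appearing when large shareholdings are similar in size.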
Abstract:
Libraries perform diverse functions in today's social environment. They participate in creating intellectual and social capital, and they increase the economic benefits accruing to their users and to society as a whole. The article discusses the main approaches and research methods for assessing the economic benefits of library operation. It focuses on cost-benefit analysis (CBA), the contingent valuation method (CVM), the consumer surplus method for determining added value for the user, and return-on-investment (ROI) methodology. Various research projects conducted worldwide in this area are also reviewed.
Abstract:
Basic Income has been defined as a relatively small income that the public administration unconditionally provides to all its members as a citizenship right. Its principal objective is to guarantee the entire population an income sufficient to satisfy basic living needs, but it could have other positive effects, such as a more equal income distribution or reduced tax fraud, as well as drawbacks, such as labour supply disincentives. In this essay we present the arguments for and against this policy and ultimately set out how it could be financed under the current tax and social benefit system in Navarra. The research also examines the main economic implications of the proposal in terms of static income redistribution, and discusses other relevant dynamic uncertainties.
Abstract:
Background: Many African countries are rapidly expanding HIV/AIDS treatment programs. Empirical information on the cost of delivering antiretroviral therapy (ART) for HIV/AIDS is needed for program planning and budgeting. Methods: We searched published and gray sources for estimates of the cost of providing ART in service delivery (non-research) settings in sub-Saharan Africa. Estimates were included if they were based on primary local data for input prices. Results: 17 eligible cost estimates were found. Of these, 10 were from South Africa. The cost per patient per year ranged from $396 to $2,761. It averaged approximately $850/patient/year in countries outside South Africa and $1,700/patient/year in South Africa. The most recent estimates for South Africa averaged $1,200/patient/year. Specific cost items included in the average cost per patient per year varied, making comparison across studies problematic. All estimates included the cost of antiretroviral drugs and laboratory tests, but many excluded the cost of inpatient care, treatment of opportunistic infections, and/or clinic infrastructure. Antiretroviral drugs comprised an average of one third of the cost of treatment in South Africa and one half to three quarters of the cost in other countries. Conclusions: There is very little empirical information available about the cost of providing antiretroviral therapy in non-research settings in Africa. Methods for estimating costs are inconsistent, and many estimates combine data drawn from disparate sources. Cost analysis should become a routine part of operational research on the treatment rollout in Africa.
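The review's headline figures can be tied together with simple arithmetic. The sketch below back-computes the implied antiretroviral-drug spend per patient-year from the reported averages and cost shares (the shares used are the review's approximate ranges, applied here only for illustration):

```python
# Costs in US$ per patient per year, as reported in the review.
non_sa_avg = 850    # average outside South Africa
sa_avg = 1700       # average in South Africa
sa_recent = 1200    # most recent South African estimates

# ARV drugs were roughly one third of treatment cost in South Africa
# and one half to three quarters elsewhere; implied ARV spend:
arv_sa = sa_avg / 3
arv_non_sa_range = (non_sa_avg * 0.5, non_sa_avg * 0.75)

print(round(arv_sa), arv_non_sa_range)  # 567 (425.0, 637.5)
```

The spread in the non-drug remainder is one reason the review cautions that estimates with different included cost items are hard to compare.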
Abstract:
The objective of unicast routing is to find a path from a source to a destination. Conventional routing has been used mainly to provide connectivity. It lacks the ability to provide any kind of service guarantees and smart usage of network resources. Improving performance is possible by being aware of both traffic characteristics and current available resources. This paper surveys a range of routing solutions, which can be categorized depending on the degree of the awareness of the algorithm: (1) QoS/Constraint-based routing solutions are aware of traffic requirements of individual connection requests; (2) Traffic-aware routing solutions assume knowledge of the location of communicating ingress-egress pairs and possibly the traffic demands among them; (3) Routing solutions that are both QoS-aware as (1) and traffic-aware as (2); (4) Best-effort solutions are oblivious to both traffic and QoS requirements, but are adaptive only to current resource availability. The best performance can be achieved by having all possible knowledge so that while finding a path for an individual flow, one can make a smart choice among feasible paths to increase the chances of supporting future requests. However, this usually comes at the cost of increased complexity and decreased scalability. In this paper, we discuss such cost-performance tradeoffs by surveying proposed heuristic solutions and hybrid approaches.
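A minimal sketch of a category (1) scheme, bandwidth-constrained shortest-path routing, is shown below. This is a common heuristic, not an algorithm taken from the survey: prune links whose residual bandwidth cannot carry the request, then run Dijkstra on the surviving topology.

```python
import heapq

def qos_route(graph, src, dst, demand):
    """Bandwidth-constrained shortest path: skip links whose residual
    bandwidth is below the demand, then run Dijkstra on link costs.
    graph: {u: [(v, cost, bandwidth), ...]} (directed)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost, bw in graph.get(u, []):
            if bw < demand:
                continue  # constraint check: link is infeasible
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None  # no feasible path for this demand
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy topology: the cheaper path s-a-t lacks bandwidth for a 50-unit flow.
g = {
    "s": [("a", 1, 10), ("b", 1, 100)],
    "a": [("t", 1, 10)],
    "b": [("t", 2, 100)],
}
print(qos_route(g, "s", "t", demand=50))  # ['s', 'b', 't']
```

Traffic-aware variants in categories (2) and (3) differ mainly in the link cost used here, e.g. penalising links that are critical for future ingress-egress demands.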
Abstract:
As the Internet has evolved and grown, an increasing number of nodes (hosts or autonomous systems) have become multihomed, i.e., a node is connected to more than one network. Mobility can be viewed as a special case of multihoming—as a node moves, it unsubscribes from one network and subscribes to another, which is akin to one interface becoming inactive and another active. The current Internet architecture has been facing significant challenges in effectively dealing with multihoming (and consequently mobility). The Recursive INternet Architecture (RINA) [1] was recently proposed as a clean-slate solution to the current problems of the Internet. In this paper, we perform an average-case cost analysis to compare the multihoming / mobility support of RINA, against that of other approaches such as LISP and MobileIP. We also validate our analysis using trace-driven simulation.
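The flavour of such an average-case cost analysis can be sketched abstractly. The model and every parameter value below are illustrative assumptions, not the paper's actual cost model: expected signalling cost is taken as location updates per move plus lookups (and any triangle-routing forwarding penalty) per new session.

```python
def avg_cost(move_rate, session_rate, update_cost, lookup_cost,
             forward_cost=0.0):
    """Expected signalling cost per unit time for a mobility scheme:
    updates on each move, plus lookup and forwarding penalties on
    each new session (all parameters are illustrative)."""
    return move_rate * update_cost + session_rate * (lookup_cost + forward_cost)

# Hypothetical parameters (NOT from the paper): a MobileIP-style scheme
# pays a forwarding penalty via the home agent on every session, while a
# LISP-like mapping system pays a higher lookup cost but no such penalty.
mobileip_like = avg_cost(move_rate=2, session_rate=10,
                         update_cost=3.0, lookup_cost=1.0, forward_cost=2.0)
lisp_like = avg_cost(move_rate=2, session_rate=10,
                     update_cost=3.0, lookup_cost=2.0)
print(mobileip_like, lisp_like)  # 36.0 26.0
```

Which scheme wins depends on the move-to-session ratio, which is why the paper's comparison is parameterised over workload and validated with trace-driven simulation.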
Abstract:
Buildings consume 40% of Ireland's total annual energy, translating to 3.5 billion (2004). The EPBD directive (effective January 2003) places an onus on all member states to rate the energy performance of all buildings in excess of 50m2. Energy and environmental performance management systems do not exist for residential buildings, and for non-residential buildings they consist of an ad-hoc integration of wired building management systems and monitoring & targeting systems. These systems are unsophisticated and do not easily lend themselves to cost-effective retrofit or integration with other enterprise management systems. It is commonly agreed that a 15-40% reduction in building energy consumption is achievable by operating buildings efficiently compared with typical practice. Existing research has identified that the level of information available to building managers from existing building management and environmental monitoring systems (BMS/EMS) is insufficient to perform the required performance-based building assessment. The cost of installing additional sensors and meters is extremely high, primarily due to the estimated cost of wiring and the associated labour. From this perspective, wireless sensor technology provides the capability to deliver reliable sensor data at the temporal and spatial granularity required for building energy management. In this paper, a wireless sensor network mote hardware design and implementation is presented for a building energy management application. Appropriate sensors were selected and interfaced with the developed system, based on user requirements, to meet both the building monitoring and metering requirements.
Besides the sensing capability, actuation and interfacing to external meters/sensors are provided to perform the management control and data-recording tasks associated with minimising energy consumption in the built environment and with developing building information models (BIM) that enable the design and development of energy-efficient spaces.
Abstract:
Background: Elective repeat caesarean delivery (ERCD) rates have been increasing worldwide, prompting obstetric discourse on the risks and benefits for mother and infant. These increasing rates also have major economic implications for the health care system. Given the dearth of information on the cost-effectiveness of mode of delivery, the aim of this paper was to perform an economic evaluation of the costs and short-term maternal health consequences of a trial of labour after one previous caesarean delivery compared with ERCD for low-risk women in Ireland. Methods: Using a decision analytic model, a cost-effectiveness analysis (CEA) was performed in which the measure of health gain was quality-adjusted life years (QALYs) over a six-week time horizon. A review of the international literature was conducted to derive representative estimates of adverse maternal health outcomes following a trial of labour after caesarean (TOLAC) and ERCD. Delivery/procedure costs were derived from primary data collection and combined both "bottom-up" and "top-down" costing estimations. Results: Maternal morbidities emerged in twice as many cases in the TOLAC group as in the ERCD group. However, TOLAC was found to be the more cost-effective method of delivery because it was substantially less expensive than ERCD (€1,835.06 versus €4,039.87 per woman, respectively) and QALYs were modestly higher (0.84 versus 0.70). Our findings were supported by probabilistic sensitivity analysis. Conclusions: Clinicians need to be well informed of the benefits and risks of TOLAC among low-risk women. Ideally, clinician-patient discourse would address differences in length of hospital stay and postpartum recovery time.
While it is premature to advocate a policy of TOLAC across maternity units, the results of the study prompt further analysis and repeat iterations, encouraging future studies to synthesise previous research with new and relevant evidence under a single comprehensive decision model.
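In cost-effectiveness terms, the reported result means TOLAC dominates ERCD: it is both cheaper and more effective, so no incremental cost-effectiveness ratio (ICER) needs to be traded off against a willingness-to-pay threshold. A minimal sketch of the standard comparison, using the figures from the abstract:

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of option A versus B;
    returns 'dominant' when A is both cheaper and more effective."""
    d_cost = cost_a - cost_b
    d_qaly = qaly_a - qaly_b
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant"
    return d_cost / d_qaly  # cost per QALY gained

# Per-woman figures over the six-week horizon, as reported above:
print(icer(1835.06, 0.84, 4039.87, 0.70))  # TOLAC vs ERCD -> 'dominant'
```

Probabilistic sensitivity analysis, as used in the paper, would re-evaluate this comparison over draws from the input distributions rather than at point estimates.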
Abstract:
The work presented in this thesis describes the development of low-cost sensing and separation devices with electrochemical detection for health applications. The research employs macro-, micro-, and nanotechnology. The first sensing device developed was a toner-based micro-device. Early microfluidic devices were based on glass or quartz, which are often expensive to fabricate; the introduction of new materials, such as plastics, offered a route to fast prototyping and the development of disposable devices. One such microfluidic device is based on the lamination of laser-printed polyester films using a computer, printer, and laminator. The resulting toner-based microchips demonstrated potential viability for chemical assays coupled with several detection methods, particularly chip-electrophoresis-chemiluminescence (CE-CL) detection, which had never been reported in the literature. Following the toner-based microchip, a three-electrode micro-configuration was developed on an acetate substrate. This is the first time that micro-electrodes made from gold, silver, and platinum have been fabricated on acetate by patterning and deposition techniques, using the central fabrication facilities at the Tyndall National Institute. The electrodes were designed so that the three-electrode configuration is integrated as part of the fabrication process, and since the electrodes are on acetate, the dicing step is automatically eliminated. The stability of these sensors was investigated using electrochemical techniques, with excellent outcomes, and following this generalised testing the sensors were coupled with capillary electrophoresis. The final sensing devices were on a macro scale and involved the modification of screen-printed electrodes.
Screen-printed electrodes (SPEs) are generally far less sensitive than more expensive electrodes such as gold, boron-doped diamond, and glassy carbon. To enhance their sensitivity, the SPEs were treated with gold and palladium nanoparticles. A further modification was then introduced: the carbonaceous material carbon monolith was drop-cast onto the SPE, and the metal nanoparticles were electrodeposited onto the monolith material.
Cost savings from relaxation of operational constraints on a power system with high wind penetration
Abstract:
Wind energy is predominantly a nonsynchronous generation source. Large-scale integration of wind generation with existing electricity systems, therefore, presents challenges in maintaining system frequency stability and local voltage stability. Transmission system operators have implemented system operational constraints (SOCs) in order to maintain stability with high wind generation, but imposition of these constraints results in higher operating costs. A mixed integer programming tool was used to simulate generator dispatch in order to assess the impact of various SOCs on generation costs. Interleaved day-ahead scheduling and real-time dispatch models were developed to allow accurate representation of forced outages and wind forecast errors, and were applied to the proposed Irish power system of 2020 with a wind penetration of 32%. Savings of at least 7.8% in generation costs and reductions in wind curtailment of 50% were identified when the most influential SOCs were relaxed. The results also illustrate the need to relax local SOCs together with the system-wide nonsynchronous penetration limit SOC, as savings from increasing the nonsynchronous limit beyond 70% were restricted without relaxation of local SOCs. The methodology and results allow for quantification of the costs of SOCs, allowing the optimal upgrade path for generation and transmission infrastructure to be determined.
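The effect of an SNSP-style constraint on generation cost and curtailment can be illustrated with a single-period toy dispatch. The demand, wind, and cost figures below are hypothetical and the model is far simpler than the study's interleaved scheduling and dispatch MIP; it only shows the mechanism by which relaxing the nonsynchronous limit reduces both cost and curtailment.

```python
def dispatch(demand, wind_avail, snsp_limit, thermal_cost=80.0):
    """Single-period dispatch under a system nonsynchronous penetration
    (SNSP) limit: wind is zero-cost but capped at snsp_limit * demand;
    the remainder is met by thermal plant at thermal_cost per MWh."""
    wind_used = min(wind_avail, snsp_limit * demand)
    curtailed = wind_avail - wind_used
    cost = (demand - wind_used) * thermal_cost
    return cost, curtailed

# Hypothetical hour: 5000 MW demand, 4000 MW of available wind.
for limit in (0.5, 0.7, 0.9):
    cost, curt = dispatch(5000, 4000, limit)
    print(f"SNSP limit {limit:.0%}: cost {cost:,.0f}, curtailed {curt:,.0f} MW")
```

In the full problem, local SOCs act as additional binding constraints, which is why the study finds little benefit from raising the SNSP limit beyond 70% unless they are relaxed together.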
Abstract:
This research investigates whether a reconfiguration of maternity services, which collocates consultant- and midwifery-led care, reflects demand and value for money in Ireland. Qualitative and quantitative research is undertaken to investigate demand and an economic evaluation is performed to evaluate the costs and benefits of the different models of care. Qualitative research is undertaken to identify women’s motivations when choosing place of delivery. These data are further used to inform two stated preference techniques: a discrete choice experiment (DCE) and contingent valuation method (CVM). These are employed to identify women’s strengths of preferences for different features of care (DCE) and estimate women’s willingness to pay for maternity care (CVM), which is used to inform a cost-benefit analysis (CBA) on consultant- and midwifery-led care. The qualitative research suggests women do not have a clear preference for consultant or midwifery-led care, but rather a hybrid model of care which closely resembles the Domiciliary Care In and Out of Hospital (DOMINO) scheme. Women’s primary concern during care is safety, meaning women would only utilise midwifery-led care when co-located with consultant-led care. The DCE also finds women’s preferred package of care closely mirrors the DOMINO scheme with 39% of women expected to utilise this service. Consultant- and midwifery-led care would then be utilised by 34% and 27% of women, respectively. The CVM supports this hierarchy of preferences where consultant-led care is consistently valued more than midwifery-led care – women are willing to pay €956.03 for consultant-led care and €808.33 for midwifery-led care. A package of care for a woman availing of consultant- and midwifery-led care is estimated to cost €1,102.72 and €682.49, respectively. The CBA suggests both models of care are cost-beneficial and should be pursued in Ireland. 
This reconfiguration of maternity services would maximise women’s utility, while fulfilling important objectives of key government policy.