928 results for least-cost diet
Abstract:
Risks and uncertainties are part and parcel of any project, as projects are planned under many assumptions; managing those risks is therefore key to project success. Although risk is present in almost all projects, large-scale construction projects are the most vulnerable. Risk is by nature subjective; however, managing risk subjectively poses the danger of non-achievement of project goals. This study introduces an analytical framework for managing risk in projects. All the risk factors are identified, their effects are analyzed, and alternative responses are derived, with cost implications, for mitigating the identified risks. A decision-making framework is then formulated using a decision tree. Expected monetary values are derived for each alternative, and the response requiring the least cost is selected. The entire methodology is explained through a case study of an oil pipeline project in India, and its effectiveness in managing projects is demonstrated. © INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING.
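The expected-monetary-value selection described in this abstract can be made concrete with a small sketch. The responses, probabilities, and costs below are invented for illustration, not taken from the case study.

```python
# Hedged sketch of expected-monetary-value (EMV) selection among risk
# responses. All probabilities and costs are hypothetical.

def emv(outcomes):
    """Expected monetary value: sum of probability * cost over branches."""
    return sum(p * cost for p, cost in outcomes)

# Each risk response maps to the (probability, cost) branches of its
# subtree in the decision tree (costs in arbitrary monetary units).
responses = {
    "reroute_pipeline": [(0.7, 120.0), (0.3, 300.0)],
    "extra_insurance":  [(0.9, 150.0), (0.1, 400.0)],
    "do_nothing":       [(0.5, 50.0),  (0.5, 500.0)],
}

emvs = {name: emv(branches) for name, branches in responses.items()}
least_cost = min(emvs, key=emvs.get)  # response with the lowest EMV
```

Here the selected response is the one whose probability-weighted cost across its decision-tree branches is smallest, mirroring the least-cost criterion in the abstract.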
Abstract:
Feasibility studies of industrial projects consist of multiple analyses carried out sequentially. This is time-consuming, and each analysis screens out alternatives based solely on its own merits. In cross-country petroleum pipeline project selection, market analysis determines the throughput requirement and the supply and demand points. Technical analysis identifies technological options and alternative pipeline routes. Economic and financial analyses derive the least-cost option. The impact assessment addresses environmental issues and often suggests alternative sites, routes, technologies, and/or implementation methodologies, necessitating revision of the technical and financial analyses. This report suggests an integrated approach to feasibility analysis, presented as a case application of a cross-country petroleum pipeline project in India.
Abstract:
The integration of automation (specifically Global Positioning Systems (GPS)) and Information and Communications Technology (ICT) through the creation of a Total Jobsite Management Tool (TJMT) in construction contractor companies can revolutionize the way contractors do business. The key to this integration is the collection and processing of real-time GPS data produced on the jobsite for use in project management applications. This research study established the need for an effective planning and implementation framework to assist construction contractor companies in navigating the terrain of GPS and ICT use. An Implementation Framework was developed using the Action Research approach. The framework consists of three components: (i) an ICT Infrastructure Model, (ii) an Organizational Restructuring Model, and (iii) a Cost/Benefit Analysis. The conceptual ICT infrastructure model shows decision makers within highway construction companies how to collect, process, and use GPS data for project management applications. The organizational restructuring model assists companies in analyzing and redesigning business processes, data flows, core job responsibilities, and their organizational structure in order to obtain the maximum benefit at the least cost in implementing GPS as a TJMT. A cost-benefit analysis, which identifies and quantifies the costs and benefits (both direct and indirect), was performed in the study to clearly demonstrate the advantages of using GPS as a TJMT. Finally, the study revealed that, in order to successfully implement a program to utilize GPS data as a TJMT, it is important for construction companies to understand the various implementation and transitioning issues that arise when adopting this new technology and business strategy.
In the study, Factors for Success were identified and ranked to allow a construction company to understand the factors that may contribute to or detract from the prospect for success during implementation. The Implementation Framework developed as a result of this study will serve to guide highway construction companies in the successful integration of GPS and ICT technologies for use as a TJMT.
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation, and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes that trade off cost to cetacean conservation against cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
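The least-cost routing step described above can be sketched with Dijkstra's algorithm over a small resistance grid. This is a minimal illustration, not the dissertation's implementation; the grid and cell costs are invented.

```python
# Sketch: least-cost path over a resistance surface via Dijkstra's
# algorithm on a 4-neighbour grid. The cost of a route is the summed
# resistance of every cell it enters, including the start cell.
import heapq

def least_cost_route(cost, start, end):
    """Return the total cost of the cheapest path from start to end."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# The high-resistance centre cell mimics an area of dense cetacean use;
# the cheapest route detours around it.
surface = [
    [1, 1, 1],
    [1, 9, 1],
    [1, 1, 1],
]
```

Scaling the high-resistance cells by a multiplier, as the chapter describes, shifts the optimum between conservation-cheap and distance-cheap routes.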
Essential inputs to these decision frameworks are the species distributions. The two preceding chapters comprise species distribution models from the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
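The ROC-based thresholding step can be sketched as follows. This is a generic illustration with synthetic scores, not the chapter's model output: it scans candidate thresholds and keeps the one minimizing the sum of false positive and false negative rates.

```python
# Sketch: choose the presence/absence threshold that minimizes the sum
# of false positive rate (FPR) and false negative rate (FNR).
import numpy as np

def optimal_threshold(y_true, y_score):
    """Threshold over observed scores minimizing FPR + FNR."""
    best_t, best_err = None, np.inf
    for t in np.unique(y_score):
        pred = y_score >= t
        fpr = np.mean(pred[y_true == 0])   # absences wrongly flagged
        fnr = np.mean(~pred[y_true == 1])  # presences missed
        if fpr + fnr < best_err:
            best_t, best_err = t, fpr + fnr
    return best_t

# Synthetic labels (0 = absence, 1 = presence) and model scores.
y = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.4, 0.6, 0.7, 0.9])
```

Minimizing FPR + FNR is equivalent to maximizing Youden's J statistic, a common choice of "optimal" operating point on the ROC curve.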
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, reporting spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased oil spill and ocean noise risks associated with growing container ship and oil tanker traffic in British Columbia’s continental shelf waters.
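The CDS stratum estimator mentioned above follows the standard line-transect formula D = n / (2wLp), where n is the number of detections, w the truncation width, L the total transect length, and p the average detection probability. A back-of-envelope sketch, with all numbers invented:

```python
# Sketch of the conventional distance sampling (CDS) density estimator:
# D = n / (2 * w * L * p). Illustrative figures only.
def cds_density(n, width_km, length_km, p_detect):
    """Animals per km^2 from line-transect counts within truncation width."""
    return n / (2 * width_km * length_km * p_detect)

# 40 detections over 200 km of transect, 0.5 km truncation half-width,
# and an average detection probability of 0.8 within that strip.
density = cds_density(40, 0.5, 200.0, 0.8)
abundance = density * 1000.0  # scale by a hypothetical 1000 km^2 stratum
```

DSM replaces this single stratum-wide figure with a spatial model of density fitted to environmental predictors, which is why it offers greater spatial precision.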
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation, industry, and other stakeholders to game scenarios toward optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.
Abstract:
South Florida continues to become increasingly developed and urbanized. My exploratory study examines connections between land use and water quality. The main objectives of the project were to develop an understanding of how land use has affected water quality in Miami-Dade canals, and an economic optimization model to estimate the costs of the best management practices necessary to improve water quality. Results indicate that Miami-Dade County land use and water quality are correlated. Statistical factor and cluster analysis show that agricultural areas are associated with higher concentrations of nitrogen, while urban areas commonly have higher levels of phosphorus than agricultural areas. The economic optimization model shows that urban areas can improve water quality by lowering fertilizer inputs. Agricultural areas can also implement methods to improve water quality, although doing so may be more expensive than in urban areas. It is important to keep solutions in mind when looking toward future water quality improvements in South Florida.
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR), and topographical derivatives. Principal components analysis is further used to test for and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing shows the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
Abstract:
Development of reliable methods for optimised energy storage and generation is one of the most imminent challenges in modern power systems. This paper presents an adaptive approach to the load-leveling problem using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve the inverse problem, taking into account both the time-dependent efficiencies and the generation/storage availability of each energy storage technology. In this analysis a direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties associated with the confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
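A first-kind Volterra equation has the form ∫₀ᵗ K(t,s) x(s) ds = f(t), with the unknown x appearing only under the integral. A minimal numerical sketch (not the paper's collocation scheme) discretizes the integral with a midpoint rule at collocation points tᵢ = ih, which yields a lower-triangular linear system:

```python
# Sketch: direct numerical solution of a Volterra integral equation of
# the first kind via a midpoint quadrature rule. The resulting matrix is
# lower triangular, so the system solves step by step in time.
import numpy as np

def solve_volterra_first_kind(K, f, T, n):
    h = T / n
    t = np.arange(1, n + 1) * h       # collocation points t_i = i*h
    s = (np.arange(n) + 0.5) * h      # midpoints of the n subintervals
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = K(t[i], s[: i + 1]) * h
    return np.linalg.solve(A, f(t))

# With K ≡ 1 and f(t) = t, the exact solution is x(s) ≡ 1.
x = solve_volterra_first_kind(lambda t, s: np.ones_like(s), lambda t: t, 1.0, 50)
```

The paper's method additionally handles piecewise continuous kernels and regularization; this sketch only shows the basic quadrature-and-triangular-solve structure.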
Abstract:
This report is the product of a first-year research project in the University Transportation Centers Program. This project was carried out by an interdisciplinary research team at The University of Iowa's Public Policy Center. The project developed a computerized system to support decisions on locating facilities that serve rural areas while minimizing transportation costs. The system integrates transportation databases with algorithms that specify efficient locations and allocate demand efficiently to service regions; the results of these algorithms are used interactively by decision makers. The authors developed documentation for the system so that others could apply it to estimate the transportation and route requirements of alternative locations and identify locations that meet certain criteria with the least cost. The system was developed and tested on two transportation-related problems in Iowa, and this report uses these applications to illustrate how the system can be used.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
To meet electricity demand, electric utilities develop growth strategies for generation, transmission, and distribution systems. For a long time, those strategies have been developed by applying least-cost methodology, in which the cheapest stand-alone resources are simply added, instead of analyzing complete portfolios. As a consequence, least-cost methodology is biased in favor of fossil fuel-based technologies, completely ignoring the benefits of adding non-fossil fuel technologies, especially renewable energies, to generation portfolios. For this reason, this thesis introduces modern portfolio theory (MPT) to gain a more profound insight into a generation portfolio’s performance using generation cost and risk metrics. We discuss all the necessary assumptions and modifications to this finance technique for its application in power systems planning, and we present a real case analysis. Finally, the results of this thesis are summarized, pointing out the main benefits and the scope of this new tool in the context of electricity generation planning.
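The core MPT calculation the thesis applies to generation planning can be sketched in a few lines: portfolio cost is the weighted mean of technology costs, and portfolio risk is the standard deviation implied by the cost covariance matrix. All figures below are invented for illustration, not drawn from the thesis.

```python
# Sketch: mean-variance view of a two-technology generation portfolio.
# Cost in $/MWh; covariance entries capture fuel-price volatility.
import numpy as np

def portfolio_cost_risk(weights, costs, cov):
    w = np.asarray(weights)
    cost = float(w @ np.asarray(costs))              # expected $/MWh
    risk = float(np.sqrt(w @ np.asarray(cov) @ w))   # cost std deviation
    return cost, risk

costs = [60.0, 45.0]              # e.g. gas vs. wind levelized cost
cov = [[64.0, 0.0], [0.0, 4.0]]   # gas fuel-price risk dwarfs wind's
cost, risk = portfolio_cost_risk([0.5, 0.5], costs, cov)
```

A stand-alone least-cost rule would pick only the cheapest resource; the mean-variance view shows how mixing technologies can lower portfolio risk for a modest cost premium.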
Abstract:
The goal of Vehicle Routing Problems (VRP) and their variations is to transport a set of orders with the minimum number of vehicles at the least cost. Most approaches are designed to solve specific problem variations independently, whereas in real-world applications different constraints must be handled concurrently. This research extends solutions obtained for the traveling salesman problem with time windows to a much wider class of route planning problems in logistics. The work describes a novel approach that: supports a heterogeneous fleet of vehicles; dynamically reduces the number of vehicles; respects individual capacity restrictions; satisfies pickup and delivery constraints; and takes Hamiltonian paths (rather than cycles). The proposed approach uses Monte-Carlo Tree Search, in particular Nested Rollout Policy Adaptation. For the evaluation of the work, real data from industry was obtained and tested, and the results are reported.
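Nested Rollout Policy Adaptation (NRPA) can be illustrated on a toy routing instance: finding the cheapest Hamiltonian path from a depot. The 4-city distance matrix is invented, and real VRP variants would add capacity, pickup/delivery, and time-window checks inside the rollout; this sketch only shows the nested search with policy adaptation.

```python
# Compact NRPA sketch: level-0 rollouts sample paths under a softmax
# policy; higher levels adapt the policy toward the best path found.
import math
import random

random.seed(0)

D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
N = len(D)

def rollout(policy):
    """Sample one path from city 0, weighting moves by exp(policy)."""
    path, visited = [0], {0}
    while len(path) < N:
        cur = path[-1]
        moves = [c for c in range(N) if c not in visited]
        weights = [math.exp(policy.get((cur, c), 0.0)) for c in moves]
        nxt = random.choices(moves, weights=weights)[0]
        path.append(nxt)
        visited.add(nxt)
    cost = sum(D[a][b] for a, b in zip(path, path[1:]))
    return -cost, path  # NRPA maximizes the score, so negate cost

def adapt(policy, path, alpha=1.0):
    """Shift policy weights toward the moves of the best path."""
    new = dict(policy)
    visited = {0}
    for cur, chosen in zip(path, path[1:]):
        moves = [c for c in range(N) if c not in visited]
        z = sum(math.exp(policy.get((cur, c), 0.0)) for c in moves)
        for c in moves:
            p = math.exp(policy.get((cur, c), 0.0)) / z
            new[(cur, c)] = new.get((cur, c), 0.0) - alpha * p
        new[(cur, chosen)] = new.get((cur, chosen), 0.0) + alpha
        visited.add(chosen)
    return new

def nrpa(level, policy, iterations=10):
    if level == 0:
        return rollout(policy)
    best_score, best_path = -math.inf, None
    for _ in range(iterations):
        score, path = nrpa(level - 1, policy)
        if score > best_score:
            best_score, best_path = score, path
        policy = adapt(policy, best_path)
    return best_score, best_path

score, path = nrpa(2, {})
```

The nesting lets higher levels reuse the best sequence found at lower levels, concentrating search without an explicit tree in memory.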
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. A cascade is then caused by these initial adopters, and other people start to adopt the product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given to the individuals in the target set. Restricting the diffusion to one time period yields a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to one time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial time algorithm.
More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
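To make the WTSS problem statement concrete, here is a brute-force toy on a four-node path graph. The weights, thresholds, and edges are invented; the dissertation's exact approaches use integer programming, not enumeration, but the objective is the same: a minimum-weight seed set whose threshold diffusion activates the whole network.

```python
# Toy Weighted Target Set Selection: seeds activate free of charge, then
# a node activates once its active neighbours reach its threshold.
from itertools import combinations

weights = {0: 3, 1: 2, 2: 2, 3: 1}       # cost of seeding each node
threshold = {0: 1, 1: 1, 2: 2, 3: 1}     # active neighbours needed
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def diffuses_to_all(seed):
    """Spread activation from the seed set until a fixpoint."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in weights:
            if v not in active and sum(u in active for u in adj[v]) >= threshold[v]:
                active.add(v)
                changed = True
    return len(active) == len(weights)

def min_weight_target_set():
    """Enumerate all seed sets; keep the cheapest that activates everyone."""
    best, best_w = None, float("inf")
    nodes = list(weights)
    for r in range(len(nodes) + 1):
        for seed in combinations(nodes, r):
            w = sum(weights[v] for v in seed)
            if w < best_w and diffuses_to_all(seed):
                best, best_w = set(seed), w
    return best, best_w
```

Enumeration is exponential in the node count, which is exactly why the tree polytopes and branch-and-cut machinery above are needed at the 10,000-node scale.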
Abstract:
Monitoring of nitrogen and phosphorus in streams and rivers throughout Iowa is an essential element of the Iowa Nutrient Reduction Strategy (INRS). Sampling and analysis of surface water is necessary to develop periodic estimates of the amounts of nitrogen and phosphorus transported from Iowa. Surface and groundwater monitoring provides the scientific evidence needed to document the effectiveness of nutrient reduction practices and the impact they have on water quality. Lastly, monitoring data informs decisions about where and how best to implement nutrient reduction practices, by both point sources and nonpoint sources, to provide the greatest benefit at the least cost. The impetus for this report comes from the Water Resources Coordination Council (WRCC) which states in its 2014‐15 Annual Report “Efforts are underway to improve understanding of the multiple nutrient monitoring efforts that may be available and can be compared to the nutrient WQ monitoring framework to identify opportunities and potential data gaps to better coordinate and prioritize future nutrient monitoring efforts.” This report is the culmination of those efforts.
Abstract:
Background: Given escalating rates of chronic disease, broad-reach and cost-effective interventions to increase physical activity and improve dietary intake are needed. The cost-effectiveness of a Telephone Counselling intervention to improve physical activity and diet, targeting adults with established chronic diseases in a low socio-economic area of a major Australian city, was examined. Methodology/Principal Findings: A cost-effectiveness modelling study was conducted using data collected between February 2005 and November 2007 from a cluster-randomised trial that compared Telephone Counselling with a “Usual Care” (brief intervention) alternative. Economic outcomes were assessed using a state-transition Markov model, which predicted the progress of participants through five health states relating to physical activity and dietary improvement for ten years after recruitment. The costs and health benefits of Telephone Counselling, Usual Care, and an existing practice (Real Control) group were compared. Telephone Counselling compared to Usual Care was not cost-effective ($78,489 per quality adjusted life year gained). However, the Usual Care group did not represent existing practice and is not a useful comparator for decision making. Comparing Telephone Counselling outcomes to existing practice (Real Control), the intervention was found to be cost-effective ($29,375 per quality adjusted life year gained). Usual Care (brief intervention) compared to existing practice (Real Control) was also cost-effective ($12,153 per quality adjusted life year gained). Conclusions/Significance: This modelling study shows that a decision to adopt a Telephone Counselling program over existing practice (Real Control) is likely to be cost-effective. Choosing the ‘Usual Care’ brief intervention over existing practice (Real Control) shows a lower cost per quality adjusted life year, but the lack of supporting evidence for efficacy or sustainability is an important consideration for decision makers.
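The dollars-per-QALY figures above are incremental cost-effectiveness ratios (ICERs): the difference in cost divided by the difference in quality-adjusted life years between two strategies. The per-person costs and QALYs below are invented, chosen only so the arithmetic lands near the abstract's $29,375 comparison.

```python
# Sketch of the ICER arithmetic behind the reported comparisons.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: delta cost / delta QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-person figures: the intervention costs $1,175 more
# and gains 0.04 QALYs relative to the comparator.
ratio = icer(2175.0, 0.54, 1000.0, 0.50)  # ≈ 29375 $/QALY
```

A strategy is typically called cost-effective when its ICER against the relevant comparator falls below a willingness-to-pay threshold, which is why the choice of comparator (Usual Care vs. Real Control) changes the conclusion.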
The economics of behavioural approaches to improving health must be made explicit if decision makers are to be convinced that allocating resources toward such programs is worthwhile.
Abstract:
The INFORMAS food prices module proposes a step-wise framework to measure the cost and affordability of population diets. The price differential and the tax component of healthy and less healthy foods, food groups, meals and diets will be benchmarked and monitored over time. Results can be used to model or assess the impact of fiscal policies, such as ‘fat taxes’ or subsidies. Key methodological challenges include: defining healthy and less healthy foods, meals, diets and commonly consumed items; including costs of alcohol, takeaways, convenience foods and time; selecting the price metric; sampling frameworks; and standardizing collection and analysis protocols. The minimal approach uses three complementary methods to measure the price differential between pairs of healthy and less healthy foods. Specific challenges include choosing policy relevant pairs and defining an anchor for the lists. The expanded approach measures the cost of a healthy diet compared to the current (less healthy) diet for a reference household. It requires dietary principles to guide the development of the healthy diet pricing instrument and sufficient information about the population’s current intake to inform the current (less healthy) diet tool. The optimal approach includes measures of affordability and requires a standardised measure of household income that can be used for different countries. The feasibility of implementing the protocol in different countries is being tested in New Zealand, Australia and Fiji. The impact of different decision points to address challenges will be investigated in a systematic manner. We will present early insights and results from this work.