955 results for Dynamic Marginal Cost
Abstract:
The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies, organisation structure and delays in material and information flow all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as subsequent academic and operational studies have shown that orders and/or inventories can exhibit significant variability even when customer demand and lead time are deterministic. This widening range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system the measures were intended to assess. To explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation, using the dynamic discrete event simulation software Simul8. During the simulation the five SCOR Supply Chain Performance Attributes (Reliability, Responsiveness, Flexibility, Cost and Utilisation) were continuously monitored and compared to established targets.
Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) for three independent supply chain roles were made, and the degree of dynamic behaviour in the supply chain measured using the ratio of the standard deviation of upstream demand to the standard deviation of downstream demand. Factors employed to build the detailed model include variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles and operational settings adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete event simulation model. The project makes its primary contribution to knowledge by measuring the impact on supply chain dynamics of applying a representative performance measurement system.
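The bullwhip metric described above (standard deviation of upstream orders divided by standard deviation of downstream demand) can be sketched in a few lines; the weekly demand and order figures below are hypothetical, not data from the study:

```python
import statistics

def bullwhip_ratio(upstream_orders, downstream_demand):
    """Ratio of upstream order variability to downstream demand variability.

    A ratio greater than 1 indicates demand amplification (the bullwhip effect).
    """
    return statistics.stdev(upstream_orders) / statistics.stdev(downstream_demand)

# Hypothetical weekly series: a retailer's customer demand, and the orders
# that retailer places on its wholesaler.
demand = [100, 102, 98, 101, 99, 103, 97, 100]
orders = [100, 112, 84, 110, 88, 116, 82, 108]

print(round(bullwhip_ratio(orders, demand), 2))
```

A ratio well above 1, as in this toy series, is exactly the amplification the performance-measurement responses in the study could worsen or dampen.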
Abstract:
From a manufacturing perspective, the efficiency of manufacturing operations (such as process planning and production scheduling) is a key element in enhancing manufacturing competence. Process planning and production scheduling functions have traditionally been treated as two separate activities, which has resulted in a range of inefficiencies. These include infeasible process plans, non-available/overloaded resources, high production costs, long production lead times, and so on. Above all, dynamic changes are unlikely to be dealt with efficiently. Although much research has been conducted on integrating process planning and production scheduling to generate optimised solutions that improve manufacturing efficiency, a gap remains in achieving the competence required for the current global competitive market. In this research, the concept of a multi-agent system (MAS) is adopted as a means to address this gap. A MAS consists of a collection of intelligent autonomous agents able to solve complex problems. These agents possess their individual objectives and interact with each other to fulfil the global goal. This paper describes a novel use of an autonomous agent system to facilitate the integration of process planning and production scheduling functions to cope with unpredictable demands, in terms of uncertainties in product mix and demand pattern. The novelty lies in the currency-based iterative agent bidding mechanism, which allows process planning and production scheduling options to be evaluated simultaneously so as to search for an optimised, cost-effective solution. This agent-based system aims to achieve manufacturing competence by enhancing the flexibility and agility of manufacturing enterprises.
Abstract:
In this paper, we report a simple fibre laser torsion sensor system using an intracavity tilted fibre grating as a torsion encoded loss filter. When the grating is subjected to twist, it induces loss to the cavity, thus affecting the laser oscillation build-up time. By measuring the build-up time, both twist direction and angle on the grating can be monitored. Using a low-cost photodiode and a two-channel digital oscilloscope, we have characterised the torsion sensing capability of this fibre laser system and obtained a torsion sensitivity of ~412µs/(rad/m) in the dynamic range from -150° to +150°.
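Using the reported sensitivity, a measured change in build-up time maps linearly to a twist rate; a back-of-envelope sketch (the measured value below is hypothetical):

```python
# Reported sensitivity of the fibre laser torsion sensor: ~412 µs per (rad/m).
SENSITIVITY_US_PER_RAD_PER_M = 412.0

def twist_rate(delta_buildup_us):
    """Twist rate in rad/m inferred from a build-up-time change in µs.

    The sign of the change indicates the twist direction.
    """
    return delta_buildup_us / SENSITIVITY_US_PER_RAD_PER_M

# Hypothetical reading: a 206 µs increase in build-up time.
print(twist_rate(206.0))  # 0.5 rad/m
```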
Abstract:
Fibre overlay is a cost-effective technique to alleviate wavelength blocking in some links of a wavelength-routed optical network by increasing the number of wavelengths in those links. In this letter, we investigate the effects of overlaying fibre in an all-optical network (AON) based on GÉANT2 topology. The constraint-based routing and wavelength assignment (CB-RWA) algorithm locates where cost-efficient upgrades should be implemented. Through numerical examples, we demonstrate that the network capacity improves by 25 per cent by overlaying fibre on 10 per cent of the links, and by 12 per cent by providing hop reduction links comprising 2 per cent of the links. For the upgraded network, we also show the impact of dynamic traffic allocation on the blocking probability. Copyright © 2010 John Wiley & Sons, Ltd.
Abstract:
We have implemented a dynamic strain sensor using a Polymer Optical Fiber Bragg Grating (POFBG). In this paper, we investigate an approach for making such systems cheaper through the use of easy-to-handle multimode fiber. A Vertical-Cavity Surface-Emitting Laser is used to decrease the cost of the interrogation system, and a photodetector converts the reflected light into an electrical signal.
Abstract:
This paper examines the methodological aspect of climate change, particularly the aggregation of costs and benefits induced by climate change on individuals, societies, economies and the whole ecosystem. Assessing the total and/or marginal costs of environmental change is difficult because of the wide range of factors that must be taken into account. The study therefore tries to capture the complexity of cost assessment for climate change, covering several critical factors such as scenarios and modeling, valuation and estimation, and equity and discounting.
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies were used to examine shipboard power system behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage, and component harmonic behavior. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for shipboard power system fault detection, given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies.
This modeling methodology can be utilized to evaluate and predict the future behavior of the NPS components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before it is integrated into the system.
Abstract:
Inverters play key roles in connecting sustainable energy (SE) sources to local loads and the ac grid. Although there has been a rapid expansion in the use of renewable sources in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to the line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies. The dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system; thus, in order to have satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to develop the state-space-averaged model of the SSBI under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, was built, and the validity of the obtained models was verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis was performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
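The perturbation-and-linearization step described above amounts to extracting the Jacobians of the averaged model dx/dt = f(x, u) at an operating point. A minimal numerical sketch, using central differences on a generic illustrative nonlinearity (not the SSBI's actual averaged equations):

```python
def jacobian(f, x0, u0, wrt="x", eps=1e-6):
    """Numerically linearise dx/dt = f(x, u) at (x0, u0).

    Returns A = df/dx when wrt == "x", or B = df/du when wrt == "u",
    via central differences.
    """
    base = x0 if wrt == "x" else u0
    n = len(f(x0, u0))
    J = []
    for i in range(n):
        row = []
        for j in range(len(base)):
            hi, lo = list(base), list(base)
            hi[j] += eps
            lo[j] -= eps
            if wrt == "x":
                fh, fl = f(hi, u0), f(lo, u0)
            else:
                fh, fl = f(x0, hi), f(x0, lo)
            row.append((fh[i] - fl[i]) / (2 * eps))
        J.append(row)
    return J

# Illustrative averaged model (two states, one input), NOT the SSBI's:
# dx1/dt = -2*x1 + x2*u1,  dx2/dt = x1 - x2
def f(x, u):
    return [-2 * x[0] + x[1] * u[0], x[0] - x[1]]

A = jacobian(f, [1.0, 1.0], [0.5], wrt="x")  # small-signal state matrix
B = jacobian(f, [1.0, 1.0], [0.5], wrt="u")  # small-signal input matrix
print(A, B)
```

The eigenvalues of A at each operating point are what an eigenvalue sensitivity analysis, like the one mentioned in the abstract, would then sweep across the operating range.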
Abstract:
The exploration and development of oil and gas reserves located in harsh offshore environments are characterized by high risk. Some of these reserves would be uneconomical if produced using conventional drilling technology, due to increased drilling problems and prolonged non-productive time. Seeking new ways to reduce drilling cost and minimize risks has led to the development of Managed Pressure Drilling techniques, which address the drawbacks of conventional overbalanced and underbalanced drilling. As managed pressure drilling techniques evolve, there are many unanswered questions related to safety and operating pressure regimes, and quantitative risk assessment techniques are often used to answer them. Quantitative risk assessment is conducted for the various stages of drilling operations: drilling ahead, tripping, casing and cementing. A diagnostic model for analyzing the rotating control device, the main component of managed pressure drilling techniques, is also studied. The Noisy-OR logic concept is explored to capture the unique relationship between casing and cementing operations in leading to well integrity failure, and to model the critical components of the constant bottom-hole pressure technique of managed pressure drilling during tripping. Relevant safety functions and inherent safety principles are utilized to improve well integrity operations. A loss function modelling approach enabling dynamic consequence analysis is adopted to study blowout risk for real-time decision making. The aggregation of the blowout loss categories (production, asset, human health, environmental response and reputation losses) leads to risk estimation using a dynamically determined probability of occurrence.
Lastly, the various sub-models developed for the stages/sub-operations of drilling and the consequence modelling approach are integrated for a holistic risk analysis of drilling operations.
Abstract:
Zooplankton was studied at eight stations in the marginal ice zone (MIZ) of the Barents Sea, in May 1999, along two transects across the ice edge. At each station, physical background measurements and zooplankton samples were taken every 6 h over a 24 h period at five discrete depth intervals. Cluster analysis revealed separation of open water stations from all ice stations, as well as a high similarity level among replicates belonging to a particular station. Based on five replicates per station, analysis of variance (ANOVA) confirmed significant differences (P < 0.05) in abundances of the main mesozooplankton taxa among stations. Relations between the zooplankton community and environmental parameters were established using redundancy analysis (CANOCO). In total, 55% of mesozooplankton variability within the studied area was explained by eight variables with significant conditional effects: depth stratum, fluorescence, temperature, salinity, bottom depth, latitude, bloom situation, and ice concentration. GLM models supported the supposition of a clear negative relationship between the concentration of Oithona similis and overall mesozooplankton diversity. The analyses showed a dynamic relationship between mesozooplankton distribution and hydrological conditions on a short-term scale. Furthermore, our study demonstrated that variability in the physical environment of the dynamic MIZ of the Barents Sea has a measurable effect on the Arctic pelagic ecosystem.
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
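The resistance-surface routing above is essentially a shortest-path computation over a raster. A minimal sketch on a tiny hypothetical grid (not the dissertation's data or code), where each step costs one unit of distance plus a conservation penalty scaled by the tradeoff multiplier:

```python
import heapq

def least_cost_route(grid, start, end, multiplier=1.0):
    """Dijkstra over a 4-connected raster of conservation costs.

    Each step costs 1 (distance) plus multiplier * cost of the entered cell,
    so a larger multiplier pushes routes further from high-density cells.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + multiplier * grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical raster: high values mark a whale aggregation in the middle column.
grid = [[0, 9, 0],
        [0, 9, 0],
        [0, 0, 0]]
route = least_cost_route(grid, (0, 0), (0, 2), multiplier=1.0)
print(route)  # detours around the high-cost cells
```

Sweeping `multiplier` from 0 upward reproduces the family of routes (shortest to most conservation-friendly) plotted in the tradeoff view.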
Essential to the input of these decision frameworks are distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred to estimate potential biological removal, per Marine Mammal Protection Act requirements in the U.S., all the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
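The threshold step described above, picking the cut-off that balances false positive and false negative rates, is commonly done by maximising Youden's J (sensitivity + specificity − 1) along the ROC curve. A self-contained sketch on toy predicted probabilities (not the chapter's data):

```python
def optimal_threshold(scores, labels):
    """Return the score cut-off maximising sensitivity + specificity - 1
    (Youden's J), i.e. jointly minimising false positive and false negative
    rates for presence (1) vs. absence (0) labels."""
    best_t, best_j = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Toy predicted occurrence probabilities and observed presence/absence.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
print(optimal_threshold(scores, labels))
```

Cells scoring at or above the chosen threshold are mapped as presence, the rest as absence, which is how the probability surfaces become the binary maps described in the chapter.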
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
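The CDS estimator mentioned above follows the standard distance-sampling form D = n / (2wLp): detections divided by the covered strip area, corrected by the fitted detection probability. A sketch with assumed values (not Raincoast's actual effort or sightings):

```python
def cds_abundance(n_detections, line_length_km, strip_half_width_km,
                  detection_prob, stratum_area_km2):
    """Conventional Distance Sampling stratum estimate.

    Density = n / (2 * w * L * p); abundance = density * stratum area.
    """
    covered_area = 2 * strip_half_width_km * line_length_km  # km² searched
    density = n_detections / (covered_area * detection_prob)  # animals/km²
    return density * stratum_area_km2

# Hypothetical stratum: 40 sightings over 500 km of transect, 2 km truncation
# half-width, fitted detection probability 0.5, stratum area 10,000 km².
print(round(cds_abundance(40, 500, 2.0, 0.5, 10_000)))
```

This single-number-per-stratum output is exactly the limitation the abstract notes: DSM instead models density as a spatial surface within the stratum.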
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent for gaming conservation, industry and stakeholders towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.
Abstract:
Urban problems have several features that make them inherently dynamic. Large transaction costs all but guarantee that homeowners will do their best to consider how a neighborhood might change before buying a house. Similarly, stores face large sunk costs when opening, and want to be sure that their investment will pay off in the long run. In line with those concerns, different areas of Economics have made recent advances in modeling those questions within a dynamic framework. This dissertation contributes to those efforts.
Chapter 2 discusses how to model an agent’s location decision when the agent must learn about an exogenous amenity that may be changing over time. The model is applied to estimating the marginal willingness to pay to avoid crime, in which agents are learning about the crime rate in a neighborhood, and the crime rate can change in predictable (Markovian) ways.
Chapters 3 and 4 concentrate on location decision problems when there are externalities between decision makers. Chapter 3 focuses on the decision of business owners to open a store, when its demand is a function of other nearby stores, either through competition, or through spillovers on foot traffic. It uses a dynamic model in continuous time to model agents’ decisions. A particular challenge is isolating the contribution of spillovers from the contribution of other unobserved neighborhood attributes that could also lead to agglomeration. A key contribution of this chapter is showing how we can use information on storefront ownership to help separately identify spillovers.
Finally, chapter 4 focuses on a class of models in which families prefer to live close to similar neighbors. This chapter provides the first simulation of such a model in which agents are forward looking, and shows that this leads to more segregation than would be observed with myopic agents, which is the standard in this literature. The chapter also discusses several extensions of the model that can be used to investigate relevant questions such as the arrival of a large contingent of high-skilled tech workers in San Francisco, the immigration of Hispanic families to several southern American cities, large changes in local amenities, such as the construction of magnet schools or metro stations, and the flight of wealthy residents from cities in the Rust Belt, such as Detroit.