13 results for Pareto Chart
in Aston University Research Archive
Inventory parameter management and focused continuous improvement for repetitive batch manufacturers
Abstract:
This thesis proposes a methodology to assist repetitive batch manufacturers in adopting certain aspects of Lean Production principles. The methodology concentrates on reducing inventory by setting appropriate batch sizes, taking account of the effect of sequence-dependent set-ups and of the identification and elimination of bottlenecks. It uses a simple analysis technique, based on Pareto analysis and a modified economic batch quantity (EBQ), to allocate items to period order day classes according to a combination of each item's annual usage value and set-up cost. The period order day classes to which items are allocated are determined by the constraint limits in three measured dimensions: capacity, administration and finance. The methodology overcomes the limitations of MRP in the area of sequence-dependent set-ups and provides a simple way of setting planning parameters that takes this effect into account: bottlenecks are systematically identified and eliminated through set-up reduction, allowing batch sizes, and hence inventory, to fall. It aims to help traditional repetitive batch manufacturers along a route to continual improvement by: highlighting those areas where change would bring the greatest benefit; modelling the effect of proposed changes; quantifying the benefits that could be gained by implementing them; and simplifying the effort required to perform the modelling. In short, the methodology increases flexibility through managed inventory reduction, rationally decreasing batch sizes while accounting for sequence-dependent set-ups and bottlenecks. It was realised as a software modelling tool and validated through a case study approach.
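As a rough illustration of the batch-sizing logic described above, the sketch below computes a classic EBQ and maps the resulting reorder interval onto a period order day class. All figures, the holding-cost rate, and the class boundaries are illustrative assumptions, not values taken from the thesis.

```python
from math import sqrt

def ebq(annual_demand, setup_cost, unit_cost, holding_rate=0.25):
    """Classic economic batch quantity: sqrt(2 * D * S / (c * i))."""
    return sqrt(2 * annual_demand * setup_cost / (unit_cost * holding_rate))

def reorder_interval_days(annual_demand, batch_qty, working_days=240):
    """Days of demand covered by one batch at this batch size."""
    return batch_qty / (annual_demand / working_days)

CLASSES = [1, 2, 5, 10, 20]   # hypothetical period order day classes

# Illustrative item: 12,000 units/year, 150 set-up cost, 4.00 unit cost
q = ebq(12_000, 150, 4.0)
days = reorder_interval_days(12_000, q)
cls = min(CLASSES, key=lambda c: abs(c - days))   # nearest allowed class
print(f"EBQ = {q:.0f} units -> {days:.1f} days -> class {cls}")
```

In the full method, the chosen class would additionally be checked against the capacity, administration and finance constraint limits before being accepted.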
Abstract:
In recent years, UK industry has seen explosive growth in the number of 'Computer Aided Production Management' (CAPM) system installations. Of the many CAPM systems, materials requirement planning/manufacturing resource planning (MRP/MRPII) is the most widely implemented. Despite the huge investment in MRP systems, over 80 percent are said to have failed within 3 to 5 years of installation. Many people now assume that Just-In-Time (JIT) is the best manufacturing technique. However, those who have implemented JIT have found that it too has many problems. The author argues that the success of a manufacturing company will come not from a system that complies with a single technique, but from the integration of many techniques and the ability to make them complement each other in a specific manufacturing environment. This dissertation examines the potential for integrating MRP with JIT and Two-Bin systems to reduce the operational costs of managing bought-out inventory. Within this framework it shows that controlling MRP is essential to facilitate the integration process. The behaviour of MRP systems depends on the complex interactions between the numerous control parameters used, and methodologies/models based on the Pareto principle are developed to set these parameters. The idea is to use business targets to set a coherent set of parameters which not only enables those targets to be realised, but also facilitates JIT implementation. The approach is illustrated in the context of an actual manufacturing plant, IBM Havant (a high-volume electronics assembly plant in which the majority of materials are bought out). The parameter-setting models are applicable to controlling bought-out items in a wide range of industries and do not depend on specific MRP software. They have produced successful results in several companies and are now being developed as commercial products.
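A minimal sketch of the Pareto-principle classification that underlies this kind of parameter setting: rank bought-out items by annual usage value, split them into ABC classes on cumulative share, and attach a control policy to each class. The 80/95 cut-offs, item data, and policy mapping are illustrative assumptions, not the dissertation's models.

```python
def abc_classify(items, a_cut=0.80, b_cut=0.95):
    """Rank items by annual usage value and split on cumulative share
    (classic Pareto/ABC cuts; the 80/95 thresholds are illustrative)."""
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(items.values())
    classes, running = {}, 0.0
    for name, value in ranked:
        running += value / total
        classes[name] = "A" if running <= a_cut else ("B" if running <= b_cut else "C")
    return classes

POLICY = {"A": "JIT (frequent, small deliveries)",
          "B": "MRP (time-phased orders)",
          "C": "Two-Bin (simple reorder point)"}

usage = {"P1": 90_000, "P2": 40_000, "P3": 8_000, "P4": 2_500, "P5": 900}
for part, cls in abc_classify(usage).items():
    print(part, cls, POLICY[cls])
```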
Abstract:
Erbium-doped fibre amplifiers (EDFAs) are a key technology for the design of all-optical communication systems and networks. The superiority of EDFAs lies in their negligible intermodulation distortion across high-speed multichannel signals, low intrinsic losses, slow gain dynamics, and gain over a wide range of optical wavelengths. Owing to the long lifetime of the excited state, however, EDFAs are subject to the effect of cross-gain saturation, with gain saturation and recovery times of between a few hundred microseconds and 10 milliseconds. In wavelength division multiplexed (WDM) optical networks with EDFAs, the number of channels traversing an EDFA can change because of a faulty link or a system reconfiguration, and it has been found that, due to this variation in channel number along the EDFA chain, the output powers of surviving channels can change in a very short time. The resulting power transient is one of the problems that deteriorates system performance. In this thesis, the transient phenomenon in wavelength-routed WDM optical networks with EDFA chains was investigated, using different input signal powers for circuit-switched networks. A simulator for the EDFA gain dynamic model was developed to compute the magnitude and speed of the power transients in non-self-saturated EDFAs, both single and chained. A dynamic model of a chain of self-saturated EDFAs and its simulator were also developed to compute the magnitude and speed of the power transients and the optical signal-to-noise ratio (OSNR). We found that the OSNR transient magnitude and speed are a function of both the output power transient and the number of EDFAs in the chain; the OSNR value predicts the level of quality of service in the network. It was also found that the power transients for self-saturated and non-self-saturated EDFAs are close in magnitude in gain-saturated EDFA networks. Moreover, cross-gain saturation degrades the performance of packet-switching networks because of their varying traffic characteristics, and the magnitude and speed of the output power transients increase along the EDFA chain. An investigation was carried out on asynchronous transfer mode (ATM) and WDM Internet protocol (WDM-IP) traffic networks using different traffic patterns based on the Pareto and Poisson distributions. The simulator was used to examine the magnitude and speed of the power transients in Pareto- and Poisson-distributed traffic at different bit rates, with specific focus on 2.5 Gb/s. Numerical and statistical analysis showed that the power swing increases if the burst-ON/burst-OFF intervals in the packet bursts are long. This is because the gain dynamics are fast during strong or long-duration signal pulses, owing to stimulated-emission avalanche depletion of the excited ions. An increase in output power level could therefore lead to error bursts that affect system performance.
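As a toy illustration of the heavy-tailed traffic model referenced above, the following sketch draws Pareto-distributed burst-ON/burst-OFF durations. The shape parameter and minimum burst length are assumed values; note that NumPy's pareto() samples a Lomax variate, so it is shifted and scaled here to the classical Pareto form.

```python
import numpy as np

rng = np.random.default_rng(42)

def pareto_on_off(n_bursts, shape=1.5, min_len=1.0):
    """Alternating burst-ON/burst-OFF durations with heavy-tailed
    (classical Pareto) lengths and minimum duration min_len."""
    on = (rng.pareto(shape, n_bursts) + 1) * min_len
    off = (rng.pareto(shape, n_bursts) + 1) * min_len
    return on, off

on, off = pareto_on_off(10_000)
# Long ON periods keep the amplifier depleted for longer, so deeper gain
# excursions follow: compare the mean vs. the 99th-percentile burst length.
print(f"mean ON = {on.mean():.2f}, p99 ON = {np.percentile(on, 99):.2f}")
```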
Abstract:
Supply chain formation is the process by which a set of producers within a network determine the subset of those producers able to form a chain to supply goods to one or more consumers at the lowest cost. The problem has been tackled in a number of ways, including auctions, negotiations, and argumentation-based approaches. In this paper we show how it can be cast as the optimization of a pairwise cost function. Optimizing this class of energy function is NP-hard, but efficient approximations to the global minimum can be obtained using loopy belief propagation (LBP). Here we detail a max-sum LBP-based approach to the supply chain formation problem, involving decentralized message-passing between supply chain participants. Our approach is evaluated against a well-known decentralized double-auction method and an optimal centralized technique, and shows several improvements on the auction method: it obtains better solutions for most network instances that allow for competitive equilibrium (in Walsh and Wellman's terms, a set of producer costs permitting a Pareto-optimal state in which agents in the allocation receive non-negative surplus and agents outside it would acquire non-positive surplus by participating in the supply chain), while also optimally solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions. © 2012 Wiley Periodicals, Inc.
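Max-sum message passing is exact on acyclic graphs, where it reduces to dynamic programming. The sketch below runs min-sum (the same scheme phrased for costs rather than utilities) on a toy two-edge chain of pairwise costs; the state spaces and cost tables are invented for illustration and are far simpler than the paper's networks, where loopy propagation is required.

```python
# Pairwise costs on a chain of discrete variables (e.g. "which upstream
# producer does each stage buy from"). On a chain, the forward/backward
# passes below compute the exact minimum-cost assignment (Viterbi-style).
states = [0, 1]                                   # two hypothetical producers
pair_cost = [
    {(a, b): abs(a - b) + 0.5 * b for a in states for b in states},  # stage 0-1
    {(a, b): 2.0 * a + (1 - b) for a in states for b in states},     # stage 1-2
]

def min_sum_chain(pair_cost, states):
    msg = [dict.fromkeys(states, 0.0)]   # message into the first variable
    back = []
    for c in pair_cost:                  # forward pass: one message per edge
        m, bp = {}, {}
        for b in states:
            best_a = min(states, key=lambda a: msg[-1][a] + c[(a, b)])
            m[b] = msg[-1][best_a] + c[(best_a, b)]
            bp[b] = best_a
        msg.append(m)
        back.append(bp)
    last = min(states, key=lambda s: msg[-1][s])
    total = msg[-1][last]
    x = [last]                           # backward pass: recover assignment
    for bp in reversed(back):
        x.append(bp[x[-1]])
    return list(reversed(x)), total

assignment, cost = min_sum_chain(pair_cost, states)
print("optimal assignment:", assignment, "total cost:", cost)
```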
Abstract:
Background. The secondary structure of folded RNA sequences is a good model for mapping phenotype onto genotype, as represented by the RNA sequence. Computational studies of the evolution of ensembles of RNA molecules towards target secondary structures yield valuable clues to the mechanisms behind the adaptation of complex populations. The relationship between the space of sequences and structures, the organization of RNA ensembles at mutation-selection equilibrium, the time of adaptation as a function of the population parameters, the presence of collective effects in quasispecies, and the optimal mutation rates to promote adaptation are all issues that can be explored within this framework. Results. We investigate the effect of microscopic mutations on the phenotype of RNA molecules during their in silico evolution and adaptation. We calculate the distribution of the effects of mutations on fitness, the relative fractions of beneficial and deleterious mutations, and the corresponding selection coefficients for populations evolving under different mutation rates. Three situations are explored: mutation-selection equilibrium (optimized population) in three different fitness landscapes, the dynamics during adaptation towards a goal structure (adapting population), and the behaviour under periodic population bottlenecks (perturbed population). Conclusions. The ratio between the number of beneficial and deleterious mutations experienced by a population of RNA sequences increases with the value of the mutation rate µ at which evolution proceeds. In contrast, the selective value of mutations remains almost constant, independent of µ, indicating that adaptation occurs through an increase in the number of beneficial mutations, with little variation in the average effect they have on fitness. Statistical analyses of the distribution of fitness effects reveal that small effects, either beneficial or deleterious, are well described by a Pareto distribution. These results are robust under changes in the fitness landscape, notably when, in addition to selecting a target secondary structure, specific subsequences or low-energy folds are required. A population perturbed by bottlenecks behaves similarly to an adapting population, struggling to return to the optimized state; whether it survives in the long run or goes extinct depends critically on the length of the time interval between bottlenecks. © 2010 Stich et al; licensee BioMed Central Ltd.
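Testing whether small fitness effects follow a Pareto distribution typically involves fitting the tail index by maximum likelihood. A self-contained sketch on synthetic stand-in data (the shape parameter, cut-off x_min, and sample size are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for measured selection coefficients: magnitudes of
# small-effect mutations drawn from a classical Pareto with x_min = 0.01.
effects = (rng.pareto(2.3, 5_000) + 1) * 0.01

def pareto_shape_mle(x, x_min):
    """Maximum-likelihood shape (tail index) of a Pareto distribution:
    alpha_hat = n / sum(log(x_i / x_min))."""
    x = np.asarray(x)
    return len(x) / np.log(x / x_min).sum()

print(f"fitted tail index: {pareto_shape_mle(effects, 0.01):.2f}")  # ~2.3
```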
Abstract:
Using a modified deprivation (or poverty) function, in this paper we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution, using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the income is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, the presence of an inflexion point in the poverty function means there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy. © 2006 Elsevier B.V. All rights reserved.
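A quick numeric check of the log-normal claim, not the paper's modified deprivation function: with headcount poverty P(X < z) = Φ((ln z − μ)/σ) and (μ, σ²) recovered from a given 'global' mean and variance, raising the mean lowers poverty while raising the variance increases it. The poverty line and moments below are arbitrary illustrative values.

```python
from math import erf, log, sqrt

def headcount_lognormal(mean, var, z):
    """Fraction of the population below poverty line z when income is
    log-normal with the given 'global' mean and variance."""
    sigma2 = log(1 + var / mean**2)        # recover log-space parameters
    mu = log(mean) - sigma2 / 2
    x = (log(z) - mu) / sqrt(sigma2)
    return 0.5 * (1 + erf(x / sqrt(2)))    # standard normal CDF

z = 1.0                                     # poverty line (arbitrary units)
print(headcount_lognormal(2.0, 1.0, z))     # baseline:        ~0.109
print(headcount_lognormal(3.0, 1.0, z))     # higher mean:     less poverty
print(headcount_lognormal(2.0, 4.0, z))     # higher variance: more poverty
```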
Abstract:
To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of applying these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents can learn the value of their decisions by linearly scalarizing their reward signals at the local level, and acceptable system-wide behaviour results; however, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time-consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setup, in which the analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive weight algorithm which interacts with the underlying local multi-objective solvers and allows for better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions that exhibit a larger hypervolume, and (iii) achieves a greater spread in the objective space.
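To make the weight-to-policy non-linearity concrete, here is a small sketch with three hypothetical fixed policies and their two-objective returns: sweeping the linear scalarization weight selects one policy per weight, and the mapping is piecewise constant (linear scalarization can also only reach points on the convex hull of the Pareto front).

```python
# Candidate policies with fixed two-objective returns (both maximized);
# names and values are illustrative, not from the paper's experiments.
policies = {
    "track-quality": (9.0, 2.0),
    "balanced":      (6.0, 6.0),
    "low-comms":     (2.0, 9.0),
}

def best_for_weight(w):
    """Policy maximizing the linear scalarization w*r1 + (1-w)*r2."""
    return max(policies, key=lambda p: w * policies[p][0] + (1 - w) * policies[p][1])

for w in [i / 10 for i in range(11)]:
    print(f"w = {w:.1f} -> {best_for_weight(w)}")
```

Long runs of weights map to the same policy, which is why uniform weight sweeps are a slow way to discover distinct system trade-offs.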
Abstract:
In this paper we study the self-organising behaviour of smart camera networks that use market-based handover of object-tracking responsibilities to achieve an efficient allocation of objects to cameras. Specifically, we compare previously known homogeneous configurations, in which all cameras use the same marketing strategy, with heterogeneous configurations, in which each camera uses its own, possibly different, marketing strategy. Our first contribution is to establish that such heterogeneity of marketing strategies can lead to system-wide outcomes which are Pareto superior to those possible in homogeneous configurations. However, since the particular configuration required for Pareto efficiency in a given scenario will not be known in advance, our second contribution is to show how online learning of marketing strategies at the individual camera level can lead to high-performing heterogeneous configurations from the system point of view, extending the Pareto front compared with the homogeneous case. Our third contribution is to show that in many cases the dynamic behaviour resulting from online learning leads to global outcomes which extend the Pareto front even compared with static heterogeneous configurations. Our evaluation considers results obtained from an open source simulation package as well as data from a network of real cameras. © 2013 IEEE.
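The Pareto comparisons above reduce to non-dominated filtering over per-configuration outcome vectors. A generic sketch, with invented objectives and values (both maximized):

```python
def pareto_front(points):
    """Return the non-dominated subset of objective tuples (maximized)."""
    def dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (tracking utility, -communication cost) per configuration
outcomes = [(0.9, -40), (0.7, -10), (0.8, -25), (0.6, -30), (0.7, -15)]
print(pareto_front(outcomes))   # [(0.9, -40), (0.7, -10), (0.8, -25)]
```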
Abstract:
We study heterogeneity among nodes in self-organizing smart camera networks, which use strategies based on social and economic knowledge to target communication activity efficiently. We compare homogeneous configurations, when cameras use the same strategy, with heterogeneous configurations, when cameras use different strategies. Our first contribution is to establish that static heterogeneity leads to new outcomes that are more efficient than those possible with homogeneity. Next, two forms of dynamic heterogeneity are investigated: nonadaptive mixed strategies and adaptive strategies, which learn online. Our second contribution is to show that mixed strategies offer Pareto efficiency consistently comparable with the most efficient static heterogeneous configurations. Since the particular configuration required for high Pareto efficiency in a scenario will not be known in advance, our third contribution is to show how decentralized online learning can lead to more efficient outcomes than the homogeneous case. In some cases, outcomes from online learning were more efficient than all other evaluated configuration types. Our fourth contribution is to show that online learning typically leads to outcomes more evenly spread over the objective space. Our results provide insight into the relationship between static, dynamic, and adaptive heterogeneity, suggesting that all have a key role in achieving efficient self-organization.
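A minimal sketch of the two dynamic forms contrasted above: a nonadaptive mixed strategy samples a pure strategy from fixed probabilities each round, while an adaptive node learns online. Epsilon-greedy value tracking stands in here for whatever learner is actually used; the strategy names and payoffs are invented.

```python
import random

random.seed(1)
STRATEGIES = ["active", "passive", "smooth"]              # illustrative names
PAYOFF = {"active": 1.0, "passive": 0.4, "smooth": 0.7}   # toy payoffs

def mixed_strategy(probs):
    """Nonadaptive mixed strategy: sample a pure strategy each round."""
    return random.choices(STRATEGIES, weights=probs, k=1)[0]

def epsilon_greedy(estimates, eps=0.1):
    """Adaptive strategy: exploit the best estimate, explore occasionally."""
    if random.random() < eps:
        return random.choice(STRATEGIES)
    return max(estimates, key=estimates.get)

# Nonadaptive: fixed probabilities, no feedback.
mixed_total = sum(PAYOFF[mixed_strategy([0.3, 0.3, 0.4])] for _ in range(100))

# Adaptive: running-average value estimates updated online.
estimates = dict.fromkeys(STRATEGIES, 0.0)
adaptive_total = 0.0
for _ in range(100):
    s = epsilon_greedy(estimates)
    adaptive_total += PAYOFF[s]
    estimates[s] += 0.1 * (PAYOFF[s] - estimates[s])

print(f"mixed: {mixed_total:.1f}, adaptive: {adaptive_total:.1f}")
```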
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots that decrease the reliability of the chip. Given this, our research builds a simulation platform to evaluate a variety of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler based on a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we identify several drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, they neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike these algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another, and it also improves performance on heterogeneous architectures. An efficient Pareto front can be obtained with the EA when optimizing for multiple objectives.
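An illustrative toy of the multi-objective EA idea, not the paper's scheduler: evolve task-to-core mappings on an invented big/little core set, keeping a non-dominated archive over two minimized objectives, a makespan proxy and a peak-heat proxy.

```python
import random

random.seed(0)
N_TASKS = 8
CORES = ["big0", "big1", "little0", "little1"]           # invented platform
SPEED = {"big0": 2.0, "big1": 2.0, "little0": 1.0, "little1": 1.0}
HEAT  = {"big0": 3.0, "big1": 3.0, "little0": 1.0, "little1": 1.0}
WORK  = [random.uniform(1, 5) for _ in range(N_TASKS)]   # toy task sizes

def evaluate(mapping):
    """Two objectives to minimize: makespan and peak per-core heat proxy."""
    load = {c: 0.0 for c in CORES}
    heat = {c: 0.0 for c in CORES}
    for task, core in enumerate(mapping):
        load[core] += WORK[task] / SPEED[core]
        heat[core] += WORK[task] * HEAT[core]
    return max(load.values()), max(heat.values())

def dominates(p, q):   # minimization: p at least as good everywhere, not equal
    return all(a <= b for a, b in zip(p, q)) and p != q

def mutate(mapping):
    child = list(mapping)
    child[random.randrange(N_TASKS)] = random.choice(CORES)
    return child

# Simple (mu+1)-style loop maintaining a non-dominated archive.
archive = [[random.choice(CORES) for _ in range(N_TASKS)] for _ in range(10)]
for _ in range(500):
    child = mutate(random.choice(archive))
    f_child = evaluate(child)
    if not any(dominates(evaluate(a), f_child) for a in archive):
        archive.append(child)
        archive = [a for a in archive if not dominates(f_child, evaluate(a))]

print(sorted(evaluate(a) for a in archive))   # approximate Pareto front
```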
Abstract:
Insulated-gate bipolar transistor (IGBT) power modules find widespread use in numerous power conversion applications where their reliability is of significant concern. Standard IGBT modules are fabricated for general-purpose applications, while little design effort has been devoted to bespoke applications. Conventional IGBT design can, however, be improved by multiobjective optimization techniques. This paper proposes a novel design method that considers die-attachment solder failures induced by short power cycling and baseplate solder fatigue induced by thermal cycling, which are among the major failure mechanisms of IGBTs. Thermal resistance is calculated analytically, and the plastic work design is obtained with a high-fidelity finite-element model which has been validated experimentally. The objective of minimizing the plastic work, together with the constraint functions, is formulated with a surrogate model. The nondominated sorting genetic algorithm II (NSGA-II) is used to search for the Pareto-optimal solutions and the best design. This combination yields an effective approach to optimizing the physical structure of power electronic modules, taking account of historical environmental and operational conditions in the field.
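One NSGA-II ingredient that is easy to show in isolation is the crowding distance used to keep the Pareto front well spread. A sketch over a hypothetical front of (thermal resistance, plastic work) pairs; the values are invented, not from the paper:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for objective tuples in one non-dominated
    front; boundary points get infinite distance so selection preserves
    the extremes of the front."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for rank in range(1, n - 1):
            i = order[rank]
            prev_v = front[order[rank - 1]][k]
            next_v = front[order[rank + 1]][k]
            dist[i] += (next_v - prev_v) / (hi - lo)   # normalized gap
    return dist

# Hypothetical (thermal resistance, plastic work) values on one front
front = [(0.30, 1.9), (0.34, 1.5), (0.40, 1.2), (0.55, 1.0)]
print(crowding_distance(front))   # [inf, 1.18, 1.40, inf]
```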
Abstract:
The frequency, time, and place of charging and discharging have a critical impact on the Quality of Experience (QoE) of using Electric Vehicles (EVs). EV charging and discharging scheduling schemes should consider both the QoE of using an EV and the load capacity of the power grid. In this paper, we design a traveling plan-aware scheduling scheme for EV charging in the driving pattern, and a cooperative EV charging and discharging scheme in the parking pattern, to improve the QoE of using an EV and enhance the reliability of the power grid. For traveling plan-aware scheduling, the assignment of EVs to Charging Stations (CSs) is modeled as a many-to-one matching game and the Stable Matching Algorithm (SMA) is proposed. For cooperative EV charging and discharging in the parking pattern, the electricity exchange between charging EVs and discharging EVs in the same parking lot is formulated as a many-to-many matching model with ties, for which we develop the Pareto Optimal Matching Algorithm (POMA). Simulation results indicate that the SMA significantly improves the average system utility for EV charging in the driving pattern, and that the POMA increases the amount of electricity offloaded from the grid, which helps enhance the reliability of the power grid.
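For a many-to-one matching game, a standard starting point is EV-proposing deferred acceptance (Gale-Shapley) with station capacities. The sketch below is that textbook algorithm on invented preference data, not the paper's SMA, whose details (utilities, ties) go beyond it.

```python
def stable_match(ev_prefs, cs_prefs, capacity):
    """EV-proposing deferred acceptance for many-to-one matching of EVs to
    charging stations. With full preference lists and total capacity >=
    number of EVs (as in the toy data), every EV is eventually placed."""
    free = list(ev_prefs)
    next_idx = {ev: 0 for ev in ev_prefs}          # next station to propose to
    assigned = {cs: [] for cs in cs_prefs}
    rank = {cs: {ev: r for r, ev in enumerate(prefs)}
            for cs, prefs in cs_prefs.items()}
    while free:
        ev = free.pop()
        cs = ev_prefs[ev][next_idx[ev]]
        next_idx[ev] += 1
        assigned[cs].append(ev)
        if len(assigned[cs]) > capacity[cs]:       # over capacity: evict worst
            worst = max(assigned[cs], key=lambda e: rank[cs][e])
            assigned[cs].remove(worst)
            free.append(worst)
    return assigned

ev_prefs = {"ev1": ["cs1", "cs2"], "ev2": ["cs1", "cs2"], "ev3": ["cs1", "cs2"]}
cs_prefs = {"cs1": ["ev2", "ev1", "ev3"], "cs2": ["ev1", "ev3", "ev2"]}
print(stable_match(ev_prefs, cs_prefs, {"cs1": 1, "cs2": 2}))
```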
Abstract:
The rising number of disaster victims globally poses a complex challenge for disaster management authorities. Moreover, to accomplish a successful transition between preparedness and response, it is important to consider the different features inherent in each type of disaster. Floods are among the most frequent and harmful disasters, hence the need for a disaster-preparedness tool supporting efficient and effective flood management. The purpose of this article is to introduce a method for simultaneously defining the proper location of shelters and distribution centers, along with the allocation of prepositioned goods and the distribution decisions required to serve flood victims. The tool combines a raster geographical information system (GIS) with an optimization model: the GIS determines the flood hazard of city areas in order to assess the flood situation and discard floodable facilities, and the multi-commodity, multimodal optimization model is then solved to obtain the Pareto frontier of two criteria, distance and cost. The methodology was applied to a case study of the 2007 flood in Villahermosa, Mexico, and the results were compared with an optimized scenario of the guidelines followed by the Mexican authorities, showing that the developed method improved the performance measures. Furthermore, the results showed that adequate care could be provided to affected people with fewer facilities than the current approach requires, and demonstrated the advantages of considering more than one distribution center for relief prepositioning.
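A compact sketch of the two-stage idea on invented data: filter out flood-hazard sites as the GIS step would, then enumerate the remaining shelter subsets and keep the non-dominated (distance, cost) pairs as the Pareto frontier. The real model is a multi-commodity, multimodal program, not this brute-force toy.

```python
from itertools import combinations

# Toy data: candidate sites with (opening cost, flood-hazard flag) as a
# raster GIS would classify them, plus neighbourhood-to-site distances.
sites = {"S1": (50, False), "S2": (30, True), "S3": (40, False), "S4": (20, False)}
dist = {"N1": {"S1": 2, "S3": 6, "S4": 9},
        "N2": {"S1": 7, "S3": 3, "S4": 4}}

usable = [s for s, (_, flooded) in sites.items() if not flooded]  # GIS filter

def evaluate(subset):
    """Total opening cost and total nearest-shelter distance."""
    cost = sum(sites[s][0] for s in subset)
    distance = sum(min(d[s] for s in subset if s in d) for d in dist.values())
    return distance, cost

candidates = [evaluate(c) for r in range(1, len(usable) + 1)
              for c in combinations(usable, r)]
frontier = [p for p in candidates
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in candidates)]
print(sorted(frontier))   # non-dominated (distance, cost) trade-offs
```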