11 results for Algorithm efficiency

in Aston University Research Archive


Relevance:

40.00%

Publisher:

Abstract:

Supply chain operations directly affect service levels. Decisions on amending facilities are generally made on the basis of overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, an efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, the selection of a facility is based on service level maximization and not just cost minimization, as the analysis filters all feasible solutions utilizing the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm: a Branch and Efficiency (B&E) algorithm is deployed for the solution of this problem. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
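
The abstract does not reproduce the B&E formulation itself, but its DEA building block can be illustrated. The sketch below is a minimal Python example assuming the standard input-oriented CCR envelopment model (not necessarily the paper's exact model): each candidate facility is treated as a DMU with its inputs and outputs, scored by a linear program, and only efficient candidates are kept, which is the kind of filtering the efficiency cuts rely on. The data and the `dea_ccr_efficiency` helper are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (standard envelopment form).
    X: (n, m) inputs, Y: (n, s) outputs; decision variables [theta, lambdas]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    # inputs:  sum_j lambda_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[[o]].T, X.T])
    # outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun

# Hypothetical candidate warehouses: inputs (cost, lead time), output (service level).
X = np.array([[120.0, 2.0], [100.0, 3.5], [150.0, 1.5], [110.0, 4.0]])
Y = np.array([[0.95], [0.90], [0.97], [0.85]])
scores = [dea_ccr_efficiency(X, Y, o) for o in range(len(X))]
efficient = [o for o, t in enumerate(scores) if t > 0.999]   # keep only efficient units
print(scores, efficient)
```

In a branch-and-bound style search, a score like this could drive an "efficiency cut" that discards inefficient candidate configurations before the cost objective is optimized.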

Relevance:

30.00%

Publisher:

Abstract:

Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs requires huge computer resources in terms of memory and CPU time. This paper proposes a neural-network back-propagation Data Envelopment Analysis to address this problem for the very large-scale datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, so it can be a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA.
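
As a rough illustration of the idea only (not the paper's architecture, features, or training set-up), a small feed-forward network can be trained on DEA scores computed conventionally for a subset of DMUs and then used to approximate the scores of the remaining units without solving one linear program per unit. Everything below, including the placeholder data and targets, is hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical DMU features: 3 inputs and 3 outputs per unit, 5000 units.
features = rng.random((5000, 6))
# Placeholder targets standing in for DEA scores obtained with a conventional
# solver (e.g. the CCR model sketched earlier) on a 500-unit training subset.
train_idx = rng.choice(5000, size=500, replace=False)
train_scores = rng.random(500)

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(features[train_idx], train_scores)

# Approximate efficiency for all 5000 DMUs in one cheap forward pass.
approx_scores = net.predict(features)
print(approx_scores[:5])
```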

Relevance:

30.00%

Publisher:

Abstract:

In printed circuit board (PCB) assembly, the efficiency of the component placement process depends on two interrelated issues: the sequence of component placement (the component sequencing problem) and the assignment of component types to feeders of the placement machine (the feeder arrangement problem). In cases where components of the same type are assigned to more than one feeder, the component retrieval problem should also be considered. Because of their inseparable relationship, this paper adopts a hybrid genetic algorithm to solve these three problems simultaneously for a type of PCB placement machine known as the sequential pick-and-place (PAP) machine. The objective is to minimise the total distance travelled by the placement head in assembling all components on a PCB. The algorithm is also compared with the methods proposed by other researchers in order to examine its effectiveness and efficiency.
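
The hybrid GA itself is not given in the abstract; the sketch below only illustrates the sequencing sub-problem with a plain permutation genetic algorithm (order crossover plus swap mutation) that minimises the head's travel distance over hypothetical board coordinates, leaving out feeder arrangement and component retrieval entirely.

```python
import numpy as np

rng = np.random.default_rng(42)
points = rng.random((20, 2)) * 300.0           # hypothetical placement locations (mm)

def tour_length(order):
    """Total distance travelled by the placement head for one visiting order."""
    path = points[order]
    return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

def order_crossover(p1, p2):
    """OX1: copy a slice from parent 1, fill the rest in parent-2 order."""
    n = len(p1)
    a, b = sorted(rng.choice(n, size=2, replace=False))
    child = [-1] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in child[a:b + 1]]
    for i in range(n):
        if child[i] == -1:
            child[i] = fill.pop(0)
    return child

POP, ELITE, GENS = 60, 20, 300
pop = [list(rng.permutation(len(points))) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=tour_length)
    elite = pop[:ELITE]                         # truncation selection + elitism
    children = []
    while len(children) < POP - ELITE:
        i, j = rng.choice(ELITE, size=2, replace=False)
        child = order_crossover(elite[i], elite[j])
        a, b = rng.choice(len(child), size=2, replace=False)
        child[a], child[b] = child[b], child[a]     # swap mutation
        children.append(child)
    pop = elite + children
print("best head travel distance:", tour_length(min(pop, key=tour_length)))
```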

Relevance:

30.00%

Publisher:

Abstract:

The generalised transportation problem (GTP) is an extension of the linear Hitchcock transportation problem. However, it does not have the unimodularity property, which means that a linear programming method (such as the simplex method) cannot guarantee an integer solution. This is a major difference between the GTP and the Hitchcock transportation problem. Although some special algorithms, such as the generalised stepping-stone method, have been developed, they are based on the linear programming model and relax the GTP's integer solution requirement. This paper proposes a genetic algorithm (GA) to solve the GTP, and a numerical example is presented to show the algorithm and its efficiency.
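
For readers unfamiliar with why unimodularity fails, one common textbook statement of the GTP (not taken from the paper) is the Hitchcock model with cell multipliers e_ij:

```latex
\min \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij} x_{ij}
\quad\text{s.t.}\quad
\sum_{j=1}^{n} x_{ij} \le a_i \;\;(i=1,\dots,m),\qquad
\sum_{i=1}^{m} e_{ij} x_{ij} = b_j \;\;(j=1,\dots,n),\qquad
x_{ij} \ge 0 \text{ and integer.}
```

When all e_ij = 1 this reduces to the Hitchcock problem, whose constraint matrix is totally unimodular, so basic LP solutions are automatically integral; with general multipliers that property is lost, which is why the paper turns to a GA to obtain integer solutions directly.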

Relevance:

30.00%

Publisher:

Abstract:

Emerging vehicular comfort applications pose a completely new set of requirements, such as maintaining end-to-end connectivity, packet routing, and reliable communication for Internet access while on the move. One of the biggest challenges is to provide good quality of service (QoS), such as low packet delay, while coping with fast topological changes. In this paper, we propose a clustering algorithm based on the minimal path loss ratio (MPLR), which should help spectrum efficiency and reduce data congestion in the network. The vehicular nodes that experience minimal path loss are selected as the cluster heads. The performance of the MPLR clustering algorithm is evaluated by the rate of change of cluster heads, the average number of clusters, and the average cluster size. Vehicular traffic models derived from the Traffic Wales data are fed as input to the motorway simulator. A mathematical analysis of the rate of change of cluster heads is derived, which validates the MPLR algorithm and is compared with the simulated results. The mathematical and simulated results are in good agreement, indicating the stability of the algorithm and the accuracy of the simulator. The MPLR system is also compared with a V2R system, with the MPLR system performing better. © 2013 IEEE.
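
The abstract does not spell out the MPLR metric, so the following is only a simplified, hypothetical sketch of the idea: nodes with the lowest mean path loss to their one-hop neighbours are elected cluster heads greedily, with a free-space path loss model standing in for measured values; the function names, radio range and positions are all invented for the example.

```python
import numpy as np

def path_loss_db(d, f_mhz=5900.0):
    """Free-space path loss in dB (a stand-in for the measured path loss)."""
    return 20 * np.log10(d) + 20 * np.log10(f_mhz) - 27.55

def elect_cluster_heads(positions, radio_range=300.0):
    """Greedy head election: nodes with the lowest mean path loss to their
    one-hop neighbours become heads; their unassigned neighbours join them."""
    n = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    neighbours = (d > 0) & (d <= radio_range)
    mean_pl = np.array([
        path_loss_db(d[i, neighbours[i]]).mean() if neighbours[i].any() else np.inf
        for i in range(n)
    ])
    unassigned = set(range(n))
    heads, membership = [], {}
    for i in np.argsort(mean_pl):
        if i not in unassigned:
            continue
        heads.append(int(i))
        members = [j for j in np.flatnonzero(neighbours[i]) if j in unassigned]
        for j in [i] + members:
            membership[int(j)] = int(i)
            unassigned.discard(j)
    return heads, membership

# Hypothetical snapshot of 30 vehicles on a 2 km motorway strip (metres).
rng = np.random.default_rng(7)
positions = np.column_stack([rng.random(30) * 2000.0, rng.random(30) * 10.0])
heads, membership = elect_cluster_heads(positions)
print("cluster heads:", heads)
```

Re-running the election as positions evolve would yield the stability measures mentioned in the abstract (rate of change of heads, number of clusters, cluster size).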

Relevance:

30.00%

Publisher:

Abstract:

We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which halts the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
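
Two of the numerical ingredients named here, Tikhonov regularization and the GCV choice of the regularization parameter, are standard and can be sketched for a generic discretized linear system; this is not the MFS system itself, just the textbook formulas applied to a hypothetical ill-conditioned example.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov solution x = (A^T A + lam*I)^{-1} A^T b, with lam chosen by
    minimising GCV(lam) = ||A x - b||^2 / trace(I - A (A^T A + lam*I)^{-1} A^T)^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best_gcv, best_lam, best_x = np.inf, None, None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)               # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        gcv = np.linalg.norm(A @ x - b) ** 2 / (m - f.sum()) ** 2
        if gcv < best_gcv:
            best_gcv, best_lam, best_x = gcv, lam, x
    return best_lam, best_x

# Tiny illustration on a hypothetical ill-conditioned system with noisy data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)   # ill-conditioned
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
lam, x = tikhonov_gcv(A, b, np.logspace(-14, 0, 60))
print("chosen regularization parameter:", lam)
```

In the MFS setting, A would be the collocation matrix of fundamental solutions for the current mixed direct problem and b the corresponding boundary data; the GCV scan over a log-spaced grid of parameters is repeated at every iteration of the alternating algorithm.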

Relevance:

30.00%

Publisher:

Abstract:

We propose two algorithms involving the relaxation of either the given Dirichlet data (boundary displacements) or the prescribed Neumann data (boundary tractions) on the over-specified boundary in the case of the alternating iterative algorithm of Kozlov et al. [16] applied to Cauchy problems in linear elasticity. A convergence proof of these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed method.

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a joint load balancing and hotspot mitigation protocol for mobile ad-hoc networks (MANETs), which we term the 'load_energy balance + hotspot mitigation protocol (LEB+HM)'. We argue that although ad-hoc wireless networks have limited network resources (bandwidth and power), are prone to frequent link/node failures and carry high security risk, existing ad-hoc routing protocols do not put emphasis on maintaining robust links/nodes, on efficient use of network resources, or on maintaining the security of the network. Typical route selection metrics used by existing ad-hoc routing protocols are shortest hop, shortest delay, and loop avoidance. This routing philosophy tends to cause traffic concentration on certain regions or nodes, leading to heavy contention, congestion and resource exhaustion, which in turn may result in increased end-to-end delay, packet loss and faster battery depletion, degrading the overall performance of the network. Also, in most existing on-demand ad-hoc routing protocols, intermediate nodes are allowed to send a route reply (RREP) to the source in response to a route request (RREQ). In such a situation a malicious node can send a false optimal route to the source so that data packets are directed to or through it, and tamper with them as it wishes. It is therefore desirable to adopt routing schemes which can dynamically disperse traffic load, detect and remove any possible bottlenecks, and provide some form of security to the network. In this paper we propose a combined adaptive load_energy balancing and hotspot mitigation scheme that aims at evenly distributing network traffic load and energy, mitigating any possible occurrence of hotspots, and providing some form of security to the network. This combined approach is expected to yield high reliability, availability and robustness, which best suits any dynamic and scalable ad-hoc network environment. Dynamic source routing (DSR) was used as the underlying protocol for the implementation of our algorithm. Simulation comparison of our protocol with the original DSR shows that our protocol achieves fewer node/link failures, a more even distribution of battery energy, and better network service efficiency.
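
The abstract does not give the LEB+HM metric explicitly, so the snippet below is only a hypothetical illustration of the kind of combined load/energy route score such a scheme might apply to the candidate routes returned by DSR route discovery; the function names, weighting and toy data are invented for the example.

```python
def route_cost(route, residual_energy, queue_load, alpha=0.5):
    """Hypothetical combined metric: penalise routes whose weakest node has
    little battery left and routes passing through heavily loaded nodes."""
    bottleneck = min(residual_energy[n] for n in route)
    load = sum(queue_load[n] for n in route)
    return alpha / max(bottleneck, 1e-9) + (1.0 - alpha) * load

def select_route(candidates, residual_energy, queue_load):
    """Among the routes found during route discovery, pick the cheapest one."""
    return min(candidates, key=lambda r: route_cost(r, residual_energy, queue_load))

# Toy example: two candidate routes from node 0 to node 4.
residual_energy = {0: 0.9, 1: 0.2, 2: 0.8, 3: 0.7, 4: 0.9}
queue_load = {0: 1, 1: 5, 2: 2, 3: 1, 4: 0}
print(select_route([[0, 1, 4], [0, 2, 3, 4]], residual_energy, queue_load))
```

Here the shorter route through the depleted, congested node 1 is rejected in favour of the longer but healthier route, which is the basic behaviour a load/energy-aware metric is meant to produce.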

Relevance:

30.00%

Publisher:

Abstract:

The utilization of solar energy by photovoltaic (PV) systems has received much research and development (R&D) attention across the globe. In the past decades, a large number of PV arrays have been installed. Since the installed PV arrays often operate in harsh environments, non-uniform aging can occur and adversely affect the performance of PV systems, especially in the middle and late periods of their service life. Owing to the high cost of replacing aged PV modules with new ones, it is appealing to improve the energy efficiency of aged PV systems. For this purpose, this paper presents a PV module reconfiguration strategy to achieve the maximum power generation from non-uniformly aged PV arrays without significant investment. The proposed reconfiguration strategy is based on the cell-unit structure of PV modules, the operating voltage limit of the grid-connected converter, and the resulting bucket effect on the maximum short-circuit current. The objectives are to analyze all the potential reorganization options of the PV modules, find the maximum power point and express it in a proposition. This proposition is further developed into a novel implementable algorithm to calculate the maximum power generation and the corresponding reconfiguration of the PV modules. The immediate benefits of this reconfiguration are the increased total power output and the maximum power point voltage information for global maximum power point tracking (MPPT). A PV array simulation model is used to illustrate the proposed method under three different cases. Furthermore, an experimental rig is built to verify the effectiveness of the proposed method. The proposed method opens an effective approach to condition-based maintenance of aging PV arrays.
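
The paper's full algorithm (cell-unit reconfiguration under the converter voltage limit) is not reproduced here; the toy model below only illustrates the bucket effect the strategy exploits: a series string's current is capped by its weakest module, so regrouping modules of similar degradation raises total output. All module numbers are invented for the illustration.

```python
def string_power(modules):
    """Power of one series string under the bucket effect: the string current is
    capped by the weakest module's short-circuit current; each module is modelled
    as contributing a fixed voltage near its operating point."""
    current = min(i_sc for _, i_sc in modules)
    voltage = sum(v for v, _ in modules)
    return voltage * current

def array_power(strings):
    """Total output of parallel strings (simply summed in this toy model)."""
    return sum(string_power(s) for s in strings)

# Hypothetical non-uniformly aged modules: (voltage in V, short-circuit current in A).
modules = [(30.0, 8.5), (30.0, 8.4), (30.0, 6.1), (30.0, 5.9),
           (30.0, 8.3), (30.0, 8.6)]

# Original layout: each string mixes healthy and aged modules.
original = [[modules[0], modules[1], modules[2]],
            [modules[3], modules[4], modules[5]]]
# Reconfigured layout: modules of similar degradation grouped together.
regrouped = [[modules[0], modules[1], modules[4]],
             [modules[2], modules[3], modules[5]]]

print("original:", array_power(original), "W  regrouped:", array_power(regrouped), "W")
```

In this toy case the regrouped layout produces more power because the aged modules no longer drag down the current of the healthy strings, which is the intuition behind enumerating reconfiguration options and picking the maximum-power one.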

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a surrogate-model-based optimization of a doubly-fed induction generator (DFIG) machine winding design for maximizing power yield. Based on site-specific wind profile data and the machine's previous operational performance, the DFIG's stator and rotor windings are optimized to match maximum efficiency with the operating conditions for rewinding purposes. Particle swarm optimization-based surrogate optimization techniques are used in conjunction with the finite element method to optimize the machine design, utilizing the limited available information on the site-specific wind profile and generator operating conditions. A response surface method in the surrogate model is developed to formulate the design objectives and constraints. In addition, the machine tests and efficiency calculations follow IEEE Standard 112-B. Numerical and experimental results validate the effectiveness of the proposed technologies.
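
As a loose sketch of the workflow described (not the paper's FEM model, design variables, or settings), a quadratic response surface can be fitted to a handful of evaluated designs and then searched with a basic particle swarm; the `true_efficiency` function below is a hypothetical stand-in for the expensive finite element evaluation of a winding design.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_efficiency(x):
    """Stand-in for expensive FEM efficiency evaluations of a winding design
    with two normalised design variables (hypothetical)."""
    return 0.95 - 0.02 * (x[..., 0] - 0.6) ** 2 - 0.03 * (x[..., 1] - 0.4) ** 2

def fit_quadratic_surface(X, y):
    """Least-squares quadratic response surface in two variables."""
    x1, x2 = X[:, 0], X[:, 1]
    Phi = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coeffs

def surrogate(x, c):
    x1, x2 = x[..., 0], x[..., 1]
    return c[0] + c[1] * x1 + c[2] * x2 + c[3] * x1 * x2 + c[4] * x1 ** 2 + c[5] * x2 ** 2

# 1. Sample a handful of designs and "evaluate" them (FEM in the real workflow).
samples = rng.random((15, 2))
coeffs = fit_quadratic_surface(samples, true_efficiency(samples))

# 2. Maximise the surrogate with a basic particle swarm.
n_particles, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.random((n_particles, 2))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), surrogate(x, coeffs)
gbest = pbest[np.argmax(pbest_val)]
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    val = surrogate(x, coeffs)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("surrogate optimum near", gbest)
```

In the full workflow the sampled evaluations would come from the finite element model under the site-specific operating conditions, and the surrogate optimum would be re-checked against FEM (and the IEEE 112-B efficiency tests) before the rewinding decision.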