Abstract:
This article presents a laser tracker position optimization code based on the tracker uncertainty model developed by the National Physical Laboratory (NPL). The code can find optimal tracker positions for generic measurements involving one tracker or a network of many trackers and an arbitrary set of targets. The optimization is performed using pattern search or, optionally, a genetic algorithm (GA) or particle swarm optimization (PSO). Different objective-function weightings can be defined for the uncertainties of individual points, the distance uncertainties between point pairs, and the angular uncertainties among three points. Constraints on tracker position limits and minimum measurement distances have also been implemented. Furthermore, position optimization that takes lines-of-sight (LOS) within complex CAD geometry into account has also been demonstrated. The code is simple to use and can be a valuable measurement planning tool.
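As a hedged illustration of the kind of search this abstract describes, the sketch below runs a minimal particle swarm over a single tracker position, minimizing a weighted sum of single-point uncertainties with a minimum-measurement-distance constraint handled by a penalty. The radial uncertainty model and all constants (k_r, k_a, r_min) are placeholders, not the NPL model, and line-of-sight checks are omitted.

```python
import numpy as np

def point_uncertainty(tracker, target, k_r=0.5e-6, k_a=15e-6):
    """Placeholder uncertainty model (NOT the NPL model): combined
    uncertainty grows with range (ranging term) and with range times
    an angular term (encoder term)."""
    r = np.linalg.norm(target - tracker)
    return np.hypot(k_r * r, k_a * r)  # metres

def objective(tracker, targets, weights):
    """Weighted sum of single-point uncertainties."""
    return sum(w * point_uncertainty(tracker, t) for w, t in zip(weights, targets))

def optimize_tracker(targets, weights, bounds, r_min=1.5,
                     n_particles=40, iters=200, seed=0):
    """Minimal particle swarm over one tracker position, with the
    minimum-measurement-distance constraint as a penalty term."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, 3))
    v = np.zeros_like(x)
    def penalized(p):
        pen = sum(max(0.0, r_min - np.linalg.norm(t - p)) for t in targets)
        return objective(p, targets, weights) + 1e3 * pen
    pbest, pval = x.copy(), np.array([penalized(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([penalized(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

targets = [np.array([0.0, 0.0, 1.0]), np.array([2.0, 1.0, 1.5]), np.array([-1.0, 2.0, 0.5])]
pos, u = optimize_tracker(targets, [1.0, 1.0, 1.0], ([-5, -5, 0], [5, 5, 3]))
print(np.round(pos, 2), u)
```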
Abstract:
Measurement and variation control of geometrical Key Characteristics (KCs), such as the flatness and gap of joint faces or the coaxiality of cabin sections, is a crucial issue in the assembly of large components in the aerospace industry. Aiming to control geometrical KCs and attain the best-fit posture, an optimization algorithm based on KCs for large-component assembly is proposed. This approach treats posture best fit, a key activity in Measurement Aided Assembly (MAA), as a two-phase optimization problem. In the first phase, the global measurement coordinate systems of the digital model and the shop floor are unified with minimum error based on singular value decomposition, and the current posture of the components being assembled is optimally solved in terms of the minimum variation of all reference points. In the second phase, the best posture of the movable component is determined by minimizing the variation of multiple KCs, under the constraint that every KC conforms to its product specification. Optimization models and process procedures for these two phases, based on Particle Swarm Optimization (PSO), are proposed. In each model, every posture to be calculated is encoded as a six-dimensional particle (three translation and three rotation parameters). Finally, an example in which two cabin sections of a satellite mainframe structure are assembled is used to verify the effectiveness of the proposed approach, models and algorithms. The experimental results show that the approach is promising and provides a foundation for further study and application. © 2013 The Authors.
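The first phase described above, unifying coordinate systems by singular value decomposition, is the standard Kabsch/Umeyama best fit; a minimal sketch follows, together with the 6-D posture encoding (three translations, three rotations) the abstract assigns to each PSO particle. The KC-constrained second phase is not reproduced here.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares rigid transform (R, t) aligning points P onto Q
    via singular value decomposition, as in the first-phase
    coordinate-system unification."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def pose_to_matrix(p):
    """6-D posture (tx, ty, tz, rx, ry, rz) -> rotation matrix and
    translation vector, matching the particle encoding in the abstract."""
    tx, ty, tz, rx, ry, rz = p
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])

# Example: recover a known transform from noisy reference points.
rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, (8, 3))
R_true, t_true = pose_to_matrix([0.2, -0.1, 0.3, 0.05, 0.02, -0.04])
Q = P @ R_true.T + t_true + rng.normal(0, 1e-4, P.shape)
R, t = best_fit_transform(P, Q)
print(np.max(np.abs(R - R_true)), np.max(np.abs(t - t_true)))
```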
Abstract:
AMS subject classification: 49J52, 90C30.
Abstract:
Relay selection is an effective method for improving the performance of cooperative communication. However, the Channel State Information (CSI) used in relay selection can be outdated, yielding severe performance degradation in cooperative communication systems. In this paper, we investigate relay selection under outdated CSI in a Decode-and-Forward (DF) cooperative system to improve its outage performance. We formulate an optimization problem in which the set of forwarding relays is chosen to minimize the outage probability conditioned on the outdated CSI of all the decodable relays' links. We then propose a novel multiple-relay selection strategy based on the solution of this optimization problem. Simulation results show that the proposed strategy achieves a large improvement in outage performance compared with existing relay selection strategies for combating outdated CSI reported in the literature.
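A hedged sketch of the selection idea follows: assuming a Gauss-Markov ageing model for the outdated CSI (correlation rho), it estimates by Monte Carlo the outage probability conditioned on the outdated estimates for each candidate relay subset, and picks the subset with minimum conditional outage. The equal power split and MRC combining are illustrative assumptions, not necessarily the paper's system model.

```python
import numpy as np
from itertools import combinations

def conditional_outage(h_hat, rho, snr, rate, n_mc, rng):
    """Monte-Carlo outage probability conditioned on outdated estimates
    h_hat, under a Gauss-Markov ageing model: h | h_hat is complex
    Gaussian with mean rho*h_hat and per-dimension variance (1-rho^2)/2.
    Equal power split over selected relays, MRC at the destination."""
    k = len(h_hat)
    noise = (rng.standard_normal((n_mc, k)) + 1j * rng.standard_normal((n_mc, k))) \
            * np.sqrt((1 - rho**2) / 2)
    h = rho * np.asarray(h_hat) + noise
    snr_rx = (snr / k) * (np.abs(h)**2).sum(axis=1)
    return np.mean(np.log2(1 + snr_rx) < rate)

def select_relays(h_hat, rho, snr=10.0, rate=1.0, n_mc=20000, seed=0):
    """Exhaustively pick the decodable-relay subset with minimum
    conditional outage (fine for small relay sets)."""
    rng = np.random.default_rng(seed)
    best, best_p = None, 1.0
    for r in range(1, len(h_hat) + 1):
        for s in combinations(range(len(h_hat)), r):
            p = conditional_outage([h_hat[i] for i in s], rho, snr, rate, n_mc, rng)
            if p < best_p:
                best, best_p = s, p
    return best, best_p

h_hat = [0.9 + 0.2j, 0.4 - 0.1j, 1.1 + 0.0j]   # outdated estimates (made up)
print(select_relays(h_hat, rho=0.8))
```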
Abstract:
Many practical routing algorithms are heuristic, ad hoc and centralized, rendering generic and optimal path configurations difficult to obtain. Here we study a scenario in which selected nodes in a given network communicate with fixed routers, and we employ statistical-physics methods to obtain optimal routing solutions subject to a generic cost. Based on the analytical derivation, a distributive message-passing algorithm capable of optimizing the path configuration in real instances is devised, greatly simplified by expanding the cost function around the optimized flow. Good algorithmic convergence is observed in most parameter regimes. Applying the algorithm, we study and compare the pros and cons of balanced traffic configurations against consolidated traffic, which has important implications for practical communication and transportation networks. Interesting macroscopic phenomena are observed in the optimized states as an interplay between the communication density and the cost functions used. © 2013 IEEE.
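For flavour, the sketch below implements the simplest distributive message-passing routing scheme: min-sum (distance-vector) updates toward a fixed router, which converge to shortest paths for additive link costs. The paper's algorithm generalizes this to generic, interacting costs; that generalization is not reproduced here.

```python
import numpy as np

def min_sum_routing(n, edges, router, iters=None):
    """Distance-vector / min-sum message passing toward a fixed router:
    each node repeatedly updates its cost-to-router from its neighbours'
    messages until no message changes."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    cost = np.full(n, np.inf)
    cost[router] = 0.0
    nxt = [None] * n
    for _ in range(iters or n):
        updated = False
        for u in range(n):
            for v, w in adj[u]:
                if cost[v] + w < cost[u]:
                    cost[u], nxt[u] = cost[v] + w, v
                    updated = True
        if not updated:
            break
    return cost, nxt

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 4.0), (2, 3, 1.0), (1, 3, 5.0)]
cost, nxt = min_sum_routing(4, edges, router=3)
print(cost, nxt)   # cost-to-router and next hop for every node
```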
Abstract:
Porosity development in mesostructured colloidal silica nanoparticles depends on the removal of the organic templates and co-templates, which is often carried out by calcination at high temperatures (500-600 °C). In this study, a mild detemplation method based on oxidative Fenton chemistry has been investigated. The Fenton reaction involves the generation of OH radicals via a redox Fe3+/Fe2+ cycle, with iron used as the catalyst and H2O2 as the oxidant source. Improved material properties are anticipated since Fenton chemistry operates under milder conditions than calcination. However, the general application of this methodology is not straightforward, owing to limitations in the hydrothermal stability of the particular system under study. The objective of this work is threefold: 1) reducing the residual Fe in the resulting solid, as it can be detrimental to the application of the material; 2) shortening the reaction time by optimizing the reaction temperature so as to minimize possible particle agglomeration; and 3) investigating the structural and textural properties of the resulting material in comparison with calcined counterparts. It appears that Fenton detemplation can be optimized by shortening the reaction time significantly at low Fe concentration. The milder detemplation conditions give rise to enhanced properties in terms of surface area, pore volume, structural preservation, low Fe residue and a high degree of surface hydroxylation, and the colloidal particles are stable during storage. A relative particle size increase of 0.11% per hour has been determined.
Abstract:
In this paper, we focus on the design of bivariate EDAs for discrete optimization problems and propose a new approach named HSMIEC. Whereas current EDAs require much time in the statistical learning process because the relationships among the variables are complicated, our approach employs the Selfish Gene theory (SG) together with a Mutual Information and Entropy based Cluster (MIEC) model to optimize the probability distribution of the virtual population. The model uses a hybrid sampling method that considers both clustering accuracy and clustering diversity, and an incremental learning and resampling scheme is used to optimize the parameters of the variable correlations. On several benchmark problems, our experimental results demonstrate that HSMIEC often performs better than other EDAs such as BMDA, COMIT, MIMIC and ECGA. © 2009 Elsevier B.V. All rights reserved.
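As a rough illustration of mutual-information-driven model building in a bivariate EDA, the sketch below implements a generic MIMIC-flavoured chain EDA: variables are greedily ordered by pairwise mutual information on the selected individuals, and each variable is sampled conditioned on its predecessor in the chain. This is not the HSMIEC model; the Selfish Gene component and the MIEC clustering are omitted.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information between two binary variables."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def bivariate_eda(fitness, n_vars, pop=200, sel=0.5, gens=50, seed=0):
    """Generic bivariate chain EDA: order variables greedily by pairwise
    MI on the selected individuals (starting arbitrarily at variable 0),
    then sample each variable from its conditional on the previous one."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_vars))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        S = P[np.argsort(f)[-int(sel * pop):]]        # truncation selection
        order, remaining = [0], set(range(1, n_vars))
        while remaining:
            prev = order[-1]
            nxt = max(remaining, key=lambda j: mutual_info(S[:, prev], S[:, j]))
            order.append(nxt)
            remaining.remove(nxt)
        new = np.empty_like(P)
        new[:, order[0]] = rng.random(pop) < S[:, order[0]].mean()
        for prev, cur in zip(order, order[1:]):
            for b in (0, 1):
                mask_s = S[:, prev] == b
                p = S[mask_s, cur].mean() if mask_s.any() else 0.5
                mask_n = new[:, prev] == b
                new[mask_n, cur] = rng.random(mask_n.sum()) < p
        P = new
    f = np.array([fitness(ind) for ind in P])
    return P[f.argmax()], f.max()

best, val = bivariate_eda(lambda x: x.sum(), n_vars=20)   # OneMax smoke test
print(best, val)
```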
Abstract:
Surface modification by means of nanostructures is of interest for enhancing boiling heat transfer in various applications, including the organic Rankine cycle (ORC). With the goal of obtaining rough and dense aluminum oxide (Al2O3) nanofilms, the optimal combination of process parameters for electrophoretic deposition (EPD) based on the uniform design (UD) method is explored in this paper. The detailed procedures for the EPD process and UD method are presented. Four main influencing conditions controlling the EPD process were identified: nanofluid concentration, deposition time, applied voltage and suspension pH. A series of tests was carried out based on the UD experimental design, and a regression model and statistical analysis were applied to the results. Sensitivity analyses of the effect of the four main parameters on the roughness and deposited mass of the Al2O3 films were also carried out. The results showed that the Al2O3 nanofilms were deposited compactly and uniformly on the substrate. Within the range of the experiments, the preferred combination of process parameters was determined to be a nanofluid concentration of 2 wt.%, a deposition time of 15 min, an applied voltage of 23 V and a suspension pH of 3, yielding a roughness and deposited mass of 520.9 nm and 161.6 × 10−4 g/cm2, respectively. A verification experiment carried out at these conditions gave values of roughness and deposited mass within 8% of those expected from the UD approach. It is concluded that uniform design is useful for the optimization of electrophoretic deposition, requiring only 7 tests compared with 49 for the orthogonal design method.
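A hedged sketch of the regression-plus-sensitivity step follows, using ordinary least squares on a hypothetical 7-run, 4-factor design: the design matrix and response values are made up for illustration, and a main-effects model is fitted because 7 runs cannot support a full quadratic with interactions.

```python
import numpy as np

# Hypothetical 7-run, 4-factor uniform-design-style matrix
# (concentration wt.%, time min, voltage V, pH) -- illustrative values only.
X = np.array([
    [1.0, 20, 35, 6], [1.5, 35, 11, 3], [2.0, 15, 23, 3],
    [2.5, 30, 47, 8], [3.0, 10, 29, 5], [3.5, 25, 17, 7],
    [4.0,  5, 41, 4],
], dtype=float)
y = np.array([402., 388., 521., 310., 455., 365., 290.])  # roughness, nm (made up)

def design_matrix(X):
    """Main-effects (linear) model: 7 runs support 5 coefficients;
    the paper's actual regression model may differ."""
    return np.column_stack([np.ones(len(X)), X])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def predict(x):
    return design_matrix(np.atleast_2d(x)) @ beta

# One-at-a-time sensitivity across each factor's range, others at centre
centre = X.mean(axis=0)
for j, name in enumerate(["conc", "time", "voltage", "pH"]):
    lo, hi = centre.copy(), centre.copy()
    lo[j], hi[j] = X[:, j].min(), X[:, j].max()
    print(name, (predict(hi) - predict(lo)).item())
```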
Abstract:
A distance-based inconsistency indicator, defined by the third author for the consistency-driven pairwise comparisons method, is extended to the incomplete case. The corresponding optimization problem is transformed into an equivalent linear programming problem. The results can be applied while the matrix is being filled in, giving the decision maker automatic feedback: as soon as a serious error occurs among the matrix elements, even one due to a misprint, a significant increase in the inconsistency index is reported. High inconsistency can thus be flagged not only at the end of the process of filling in the matrix but also during completion. Numerical examples are provided.
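Assuming the indicator is the triad-based (Koczkodaj-style) distance inconsistency, a minimal sketch follows: it scans all complete triads of an incomplete matrix, skipping missing entries, so it can be re-evaluated while the matrix is being filled in. The linear programming transformation is not reproduced.

```python
from itertools import combinations

def koczkodaj_inconsistency(M):
    """Distance-based (triad) inconsistency of a pairwise comparison
    matrix. M[i][j] may be None for missing entries (incomplete case);
    triads with a missing element are skipped, which is how the index
    can be monitored during matrix completion."""
    n = len(M)
    worst, where = 0.0, None
    for i, j, k in combinations(range(n), 3):
        x, y, z = M[i][j], M[j][k], M[i][k]
        if None in (x, y, z):
            continue
        ii = min(abs(1 - z / (x * y)), abs(1 - x * y / z))
        if ii > worst:
            worst, where = ii, (i, j, k)
    return worst, where

# A 4x4 matrix still being filled in (upper triangle only); M[0][3]
# was entered as 8 by mistake -- a consistent value would be near 6.
M = [[1,    2,    3,    8],
     [None, 1,    1.5,  3],
     [None, None, 1,    2],
     [None, None, None, 1]]
print(koczkodaj_inconsistency(M))   # flags the triad containing the error
```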
Abstract:
We present a general model to find the best allocation of a limited amount of supplements (extra minutes added to a timetable in order to reduce delays) on a set of interfering railway lines. By the best allocation, we mean the solution under which the weighted sum of expected delays is minimal. Our aim is to finely adjust an already existing and well-functioning timetable. We model this inherently stochastic optimization problem using two-stage recourse models from stochastic programming, building upon earlier research from the literature. We present an improved formulation that allows an efficient solution using a standard algorithm for recourse models. We show that our model may be solved within any of the following theoretical frameworks: linear programming, stochastic programming and convex non-linear programming, and we present a comparison of these approaches based on a real-life case study. Finally, we introduce stochastic dependency into the model and present a statistical technique to estimate the model parameters from empirical data.
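A minimal sketch of the scenario-based recourse formulation follows: first-stage supplements per segment, second-stage delays per scenario, with the delay propagation d_j = max(0, d_{j-1} + eps_j - s_j) linearized in the usual way (valid because delays are minimized in the objective). The single-line setting, disturbance distribution and budget constraint are illustrative simplifications of the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

def allocate_supplements(eps, probs, weights, budget):
    """Two-stage recourse LP: first-stage supplements s_j per segment,
    second-stage delays d[w, j] per scenario w. Linearization:
    d_j >= d_{j-1} + eps_j - s_j and d_j >= 0."""
    W, m = eps.shape                 # scenarios x segments
    n = m + W * m                    # s variables, then d variables
    c = np.zeros(n)
    for w in range(W):
        c[m + w * m: m + (w + 1) * m] = probs[w] * np.asarray(weights)
    A, b = [], []
    for w in range(W):
        for j in range(m):
            row = np.zeros(n)
            row[j] = -1.0                      # -s_j
            row[m + w * m + j] = -1.0          # -d_{w,j}
            if j > 0:
                row[m + w * m + j - 1] = 1.0   # +d_{w,j-1}
            A.append(row)
            b.append(-eps[w, j])
    row = np.zeros(n)
    row[:m] = 1.0                              # total supplement budget
    A.append(row)
    b.append(budget)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, None)] * n)
    return res.x[:m], res.fun

rng = np.random.default_rng(0)
eps = rng.exponential(1.0, (50, 4))            # primary disturbances, minutes
s, exp_delay = allocate_supplements(eps, np.full(50, 1 / 50), [1, 1, 1, 2], budget=4.0)
print(np.round(s, 2), round(exp_delay, 2))
```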
Abstract:
The major barrier to the practical optimization of pavement preservation programming has always been that, for formulations in which the identity of individual projects is preserved, the solution space grows exponentially with the problem size, to an extent where it becomes unmanageable for traditional analytical optimization techniques within reasonable limits. This is the problem of combinatorial explosion, that is, the exponential growth of the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, and the trade-off considerations required between preventive maintenance, rehabilitation and reconstruction, are further factors contributing to the solution complexity. In this research study, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm.

A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output of this program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, together with the impact on the overall performance of the network in terms of the performance levels of the recommended optimal M&R strategy.

The results show that SCE is very efficient and consistent in simultaneously considering the trade-off between various pavement preservation strategies while preserving the identity of the individual network segments. The flexibility of the technique is also demonstrated: by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems. It is recommended that for large networks some form of decomposition be applied to aggregate sections exhibiting similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections connected to it. In this way the problem size, and hence the solution time, can be greatly reduced to a more manageable solution space.

The study concludes that the robust search characteristics of SCE are well suited to solving the combinatorial problems in long-term network-level pavement M&R programming and provide a rich area for future research.
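For readers unfamiliar with SCE, a much-simplified continuous sketch follows: the population is ranked, dealt into complexes, each complex is evolved by simplex-style reflection/contraction of its worst point, and the complexes are then shuffled back together. The dissertation's actual formulation encodes discrete M&R strategies per segment; this toy version only shows the algorithmic skeleton.

```python
import numpy as np

def sce_minimize(f, bounds, n_complexes=4, pts_per_complex=8,
                 shuffles=50, evolutions=5, seed=0):
    """Simplified Shuffled Complex Evolution: rank the population, deal
    it into complexes like cards, evolve each complex by reflecting its
    worst point through the centroid of the others (contraction, then a
    random restart, on failure), then shuffle and repeat."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    n = n_complexes * pts_per_complex
    X = rng.uniform(lo, hi, (n, len(lo)))
    F = np.array([f(x) for x in X])
    for _ in range(shuffles):
        order = np.argsort(F)
        X, F = X[order], F[order]
        for c in range(n_complexes):
            idx = np.arange(c, n, n_complexes)       # dealing scheme
            for _ in range(evolutions):
                worst = idx[np.argmax(F[idx])]
                others = idx[idx != worst]
                centroid = X[others].mean(axis=0)
                trial = np.clip(2 * centroid - X[worst], lo, hi)  # reflection
                ft = f(trial)
                if ft >= F[worst]:                   # contraction step
                    trial = (centroid + X[worst]) / 2
                    ft = f(trial)
                    if ft >= F[worst]:               # random restart
                        trial = rng.uniform(lo, hi)
                        ft = f(trial)
                X[worst], F[worst] = trial, ft
    best = np.argmin(F)
    return X[best], F[best]

# Smoke test on a convex bowl
x, fx = sce_minimize(lambda v: ((v - 1.5)**2).sum(), ([-5, -5], [5, 5]))
print(np.round(x, 3), round(fx, 6))
```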
Abstract:
Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies a parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections while considering the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), a PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically to optimize adaptive traffic signals with TSP. Macroscopic delay models, such as the Highway Capacity Manual (HCM) delay model, cannot accurately consider the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is itself a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool for designing and evaluating different TSP strategies under both actuated and adaptive signal control.
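A hedged sketch of the global parallel-GA pattern follows: genetic operators run on the master while fitness evaluations are farmed out to a process pool, which is what buys the speed-up needed for real-time use. The delay_proxy fitness is a made-up stand-in; the dissertation's microscopic delay model and TSP logic are not reproduced.

```python
import numpy as np
from multiprocessing import Pool

def delay_proxy(splits):
    """Toy stand-in for the microscopic delay model: penalise green
    splits that deviate from hypothetical demand ratios via a crude
    saturation-ratio term. NOT the dissertation's delay model."""
    demand = np.array([0.4, 0.2, 0.3, 0.1])
    g = splits / splits.sum()
    x = demand / np.maximum(g, 1e-6)
    return float(np.sum(x**2))

def parallel_ga(pop=60, gens=40, n_phases=4, seed=0):
    """Global parallel GA: selection, crossover and mutation on the
    master; fitness evaluated concurrently in a worker pool."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(5, 60, (pop, n_phases))        # green times in seconds
    with Pool() as pool:
        for _ in range(gens):
            fit = np.array(pool.map(delay_proxy, P))
            order = np.argsort(fit)
            elite = P[order[: pop // 2]]
            # uniform crossover of two random elite parents + mutation
            a = elite[rng.integers(0, len(elite), pop)]
            b = elite[rng.integers(0, len(elite), pop)]
            mask = rng.random((pop, n_phases)) < 0.5
            P = np.where(mask, a, b) + rng.normal(0, 1.0, (pop, n_phases))
            P = np.clip(P, 5, 60)
            P[0] = elite[0]                        # elitism
        fit = np.array(pool.map(delay_proxy, P))
    best = np.argmin(fit)
    return P[best], fit[best]

if __name__ == "__main__":
    splits, d = parallel_ga()
    print(np.round(splits, 1), round(d, 3))
```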
Abstract:
Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to be able to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights on the design of AID for arterial street applications.
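As an illustration of the preprocessing pipeline described above, the sketch below applies a two-level Haar DWT and min-max normalization to a synthetic window of detector speeds, producing the kind of feature vector that would feed the ANN classifier. The wavelet family, window length and normalization ranges are assumptions, not the dissertation's settings.

```python
import numpy as np

def haar_dwt(x, levels=2):
    """Single-channel Haar DWT: returns the level-`levels` approximation
    coefficients and the detail coefficients of each level. Stands in
    for the DWT step; the dissertation's wavelet family is not given here."""
    x = np.asarray(x, float)
    details = []
    for _ in range(levels):
        if len(x) % 2:                       # pad odd-length signals
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        details.append((x[0::2] - x[1::2]) / np.sqrt(2))
        x = approx
    return x, details

def minmax_normalize(v, lo, hi):
    """Scale features to [0, 1] given assumed detector-specific ranges."""
    return (np.asarray(v, float) - lo) / (hi - lo)

# 30-s speed samples from one detector over a 4-min window (synthetic);
# the drop midway mimics an incident signature.
speeds = np.array([62, 60, 61, 58, 35, 28, 25, 27], float)
approx, details = haar_dwt(speeds, levels=2)
features = np.concatenate([minmax_normalize(approx, 0, 80),
                           minmax_normalize(details[-1], -40, 40)])
print(np.round(features, 3))   # input vector for the ANN classifier
```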
Abstract:
The total time a customer spends in a business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.

The research described in this dissertation develops an open queueing network model for the quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network models (MOQN) so that customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination.

The main findings of the research are as follows. First, parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity. Second, under highly utilized servers the coordination time introduced by parallelization overwhelms the savings from parallelization, since the waiting time increases significantly and thus the cycle-time increases. Third, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology employed by the company is high, more division is required to improve the cycle-time; when the required coordination time is high, consolidation is required to improve the cycle-time.
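As a small worked illustration of the cycle-time reasoning, the sketch below uses the standard M/M/c (Erlang C) waiting-time formula to compare a serial two-activity process against a parallel redesign carrying a fixed coordination time; taking the maximum of the branch sojourn times is only a crude fork-join approximation. The arrival rate, service rates and coordination time are made up, and this is not the dissertation's MOQN model.

```python
import math

def erlang_c(c, a):
    """Probability of waiting in an M/M/c queue with offered load a = lam/mu."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - a / c))
    return last / (s + last)

def mmc_sojourn(lam, mu, c):
    """Mean time in system (wait + service) for M/M/c; requires lam < c*mu."""
    a = lam / mu
    wq = erlang_c(c, a) / (c * mu - lam)
    return wq + 1 / mu

# A two-activity serial process vs. a parallel redesign with coordination.
lam = 0.8                                   # customers per minute
serial = mmc_sojourn(lam, mu=0.5, c=2) + mmc_sojourn(lam, mu=1.0, c=1)
coord = 2.0                                 # minutes to merge parallel branches
# max of branch sojourn times: a crude fork-join approximation
parallel = max(mmc_sojourn(lam, 0.5, 2), mmc_sojourn(lam, 1.0, 1)) + coord
print(round(serial, 2), round(parallel, 2))
```

At high utilization the waiting terms dominate, so the fixed coordination time can make the parallel design slower overall, which is exactly the trade-off the findings describe.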