872 results for Particle swarm optimization algorithm PSO


Relevância: 30.00%

Resumo:

Using the risk measure CVaR in financial analysis has become increasingly popular in recent years. In this paper we apply CVaR to portfolio optimization. The problem is formulated as a two-stage stochastic programming model, and the SRA algorithm, a recently developed heuristic algorithm, is applied to minimize CVaR.
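The abstract does not state the model explicitly; a standard way to write CVaR minimization (the Rockafellar-Uryasev formulation, which is consistent with a two-stage stochastic programming treatment) is the following sketch, where x is the portfolio weight vector, ξ the random returns, r(x, ξ) the portfolio return, α the confidence level, and (·)⁺ = max(·, 0):

```latex
\min_{x \in X,\ \zeta \in \mathbb{R}} \quad
\zeta \;+\; \frac{1}{1-\alpha}\,
\mathbb{E}\!\left[\bigl(-r(x,\xi) - \zeta\bigr)^{+}\right]
```

The first-stage decision is the pair (x, ζ); the expectation plays the role of the second-stage recourse cost.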

Relevância: 30.00%

Resumo:

The CVaR risk measure has become increasingly important in assessing portfolio risk. Minimizing CVaR over the entire portfolio can be formulated as a two-stage stochastic programming problem. The SRA algorithm is a recently developed heuristic for solving stochastic programming problems; in this paper it is applied to minimize the CVaR risk measure of a portfolio.

Relevância: 30.00%

Resumo:

We present a general model to find the best allocation of a limited amount of supplements (extra minutes added to a timetable in order to reduce delays) on a set of interfering railway lines. By the best allocation, we mean the solution under which the weighted sum of expected delays is minimal. Our aim is to finely adjust an already existing and well-functioning timetable. We model this inherently stochastic optimization problem by using two-stage recourse models from stochastic programming, building upon earlier research from the literature. We present an improved formulation, allowing for an efficient solution using a standard algorithm for recourse models. We show that our model may be solved using any of the following theoretical frameworks: linear programming, stochastic programming and convex non-linear programming, and present a comparison of these approaches based on a real-life case study. Finally, we introduce stochastic dependency into the model, and present a statistical technique to estimate the model parameters from empirical data.
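For reference, a generic two-stage recourse model of the kind mentioned above can be written as follows; the mapping of x to supplement allocations and of y to resulting delays is illustrative, not necessarily the authors' exact formulation:

```latex
\min_{x \ge 0}\; c^{\top}x + \mathbb{E}_{\xi}\bigl[Q(x,\xi)\bigr]
\quad \text{s.t.}\quad Ax \le b,
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0}\; q^{\top}y
\quad \text{s.t.}\quad Wy \ge h(\xi) - Tx .
```

Here x allocates the limited supplements across the lines (first stage), ξ represents the random primary delays, and the recourse cost q^T y penalizes the weighted delays that remain once the disturbance is realized.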

Relevância: 30.00%

Resumo:

Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections while considering the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically for optimizing adaptive traffic signals with TSP. Macroscopic delay models such as the Highway Capacity Manual (HCM) delay model cannot accurately account for the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit the transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool for designing and evaluating different TSP strategies under both actuated and adaptive signal control.
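The abstract does not describe the PGA's implementation; the following is a minimal island-model GA sketch in Python under assumed choices (a hypothetical chromosome [cycle length, green splits] and a placeholder delay surrogate standing in for the microscopic delay model):

```python
import random

# Minimal island-model parallel GA sketch (illustrative only; the dissertation's
# actual PGA, chromosome encoding, and delay model are not specified in the abstract).
# A candidate timing plan is encoded as [cycle_length, g1, g2, g3], where the green
# splits g_i are fractions of the cycle -- a hypothetical encoding.

def estimated_delay(plan):
    """Placeholder for the microscopic delay model; returns a surrogate delay value."""
    cycle, *splits = plan
    # Hypothetical smooth surrogate: penalize deviation from an arbitrary target.
    return (cycle - 90) ** 2 + sum((s - 1.0 / len(splits)) ** 2 * 1e4 for s in splits)

def random_plan():
    splits = [random.random() for _ in range(3)]
    total = sum(splits)
    return [random.uniform(60, 150)] + [s / total for s in splits]

def evolve(island, n_keep=10):
    """One GA generation on a single island: selection, crossover, mutation."""
    island.sort(key=estimated_delay)
    parents = island[:n_keep]
    children = []
    while len(children) < len(island) - n_keep:
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]               # arithmetic crossover
        child = [x * random.uniform(0.95, 1.05) for x in child]   # mutation
        children.append(child)
    return parents + children

def island_ga(n_islands=4, pop=40, generations=50, migrate_every=10):
    islands = [[random_plan() for _ in range(pop)] for _ in range(n_islands)]
    for gen in range(generations):
        islands = [evolve(isl) for isl in islands]   # each island could run in parallel
        if gen % migrate_every == 0:
            # Ring migration: each island's best individual moves to the next island.
            bests = [min(isl, key=estimated_delay) for isl in islands]
            for i, isl in enumerate(islands):
                isl[-1] = bests[i - 1]
    return min((p for isl in islands for p in isl), key=estimated_delay)

print(island_ga())
```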

Relevância: 30.00%

Resumo:

Access to healthcare is a major problem in which patients are deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare costs, poor quality of healthcare, and deterioration of general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open in order to provide access for immediate or same-day healthcare needs, thereby alleviating the problem of poor access to healthcare. This research formulates a non-linear, discrete, stochastic mathematical model of the Advanced Access appointment scheduling policy. The model objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized with probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic that combines a greedy construction algorithm with a neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access appointment policy against existing policies. Trade-offs between profit and access to healthcare are established, and analysis of the input parameters was performed. The trade-off curve is a characteristic curve and was observed to be concave, implying that there exists an access level at which the clinic can be operated at its optimal realizable profit. The results also show that, in many scenarios, clinics can improve access without any decrease in profit by switching from an existing scheduling policy to the Advanced Access policy. Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected demand, the variation in demand, and the ratio of demand for same-day versus advance appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling policy trade-offs that healthcare clinic managers must make.
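As an illustration only, the greedy-construction plus neighborhood-improvement idea might look like the sketch below; the profit curve, no-show rate, and same-day demand are invented placeholders, not the dissertation's model:

```python
# Sketch of a greedy-construction + neighborhood-improvement heuristic for choosing
# how many slots per day to keep open for same-day (Advanced Access) demand.
SLOTS = 20          # appointment slots per day (assumed)
MIN_SAME_DAY = 5    # minimum open slots required by the access constraint (assumed)

def expected_profit(open_same_day):
    """Hypothetical expected profit: pre-booked slots earn revenue net of a 15%
    no-show rate; open slots earn revenue only up to the mean same-day demand."""
    prebooked = SLOTS - open_same_day
    same_day_seen = min(open_same_day, 8.0)   # assumed mean same-day demand of 8
    return 0.85 * prebooked + same_day_seen

def greedy_construct():
    """Open same-day slots one at a time while each addition improves profit."""
    x = MIN_SAME_DAY
    while x < SLOTS and expected_profit(x + 1) > expected_profit(x):
        x += 1
    return x

def neighborhood_improve(x, radius=2):
    """Check nearby slot mixes and move to the best feasible neighbor until no gain."""
    improved = True
    while improved:
        improved = False
        for cand in range(max(MIN_SAME_DAY, x - radius), min(SLOTS, x + radius) + 1):
            if expected_profit(cand) > expected_profit(x):
                x, improved = cand, True
    return x

best = neighborhood_improve(greedy_construct())
print(best, expected_profit(best))
```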

Relevância: 30.00%

Resumo:

Numerical optimization is a technique in which a computer explores combinations of design parameters to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation and automatically switches among its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation.

Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, Modified Newton Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and when it is present in the optimization.

In modern optimization problems, a single objective function evaluation may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function. Once enough data about the behavior of the objective function have been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions involving from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
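A minimal Pareto-front filter (generic, not MOHO itself, whose switching logic is not given in the abstract) shows how the non-dominated designs forming the trade-off curve can be identified:

```python
# Minimal Pareto-front filter for a two-objective minimization problem
# (e.g., minimize drag, minimize -volume). Generic illustration only.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example: (drag, -volume) pairs for four candidate penetrator shapes (made-up numbers).
designs = [(1.0, -0.50), (0.8, -0.40), (1.2, -0.55), (0.9, -0.52)]
print(pareto_front(designs))   # the trade-off curve: no member improves both objectives
```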

Relevância: 30.00%

Resumo:

With the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. The generation of audible noise and the associated vibration modes is characteristic of all electric motors, but it is especially problematic in low-speed sensorless rotary actuation applications that use the high-frequency voltage injection technique. This dissertation is aimed at optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time.

A neural network based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created from noise measurements taken in a purpose-built chamber. The developed noise model is then integrated into the high-frequency-injection-based sensorless control scheme so that appropriate trade-offs and mitigation techniques can be devised, improving the position estimation and control performance while keeping the noise below a specified level. Genetic algorithms were used to incorporate the noise optimization parameters into the developed control algorithm.

A novel wavelet-based filtering approach is also proposed for the low-speed sensorless control algorithm. This filter is capable of extracting the position information at low injection voltages where conventional filters fail. In practice, this filtering approach can be used to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration.

Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve the position estimation performance. The results obtained are important original contributions that can help in choosing optimal parameters for sensorless control algorithms in many practical applications.
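As a generic illustration of wavelet-based filtering (the dissertation's actual filter design is not described in the abstract), a PyWavelets-based denoising sketch of a noisy phase-current signal containing a weak injected-carrier component might look like this; the wavelet, decomposition level, and threshold rule are assumptions:

```python
import numpy as np
import pywt   # PyWavelets

fs = 20_000                                         # sampling rate (assumed), Hz
t = np.arange(0, 0.05, 1 / fs)
carrier = 0.02 * np.sin(2 * np.pi * 1_000 * t)      # weak injected-carrier response
fundamental = 1.0 * np.sin(2 * np.pi * 10 * t)      # low-speed fundamental current
noisy = fundamental + carrier + 0.05 * np.random.randn(t.size)

# Decompose, soft-threshold the detail coefficients, and reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")

# The carrier band falls in one of the detail scales; isolating that scale (and zeroing
# the others) would give the position-bearing component used by a sensorless estimator.
print(denoised.shape)
```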

Relevância: 30.00%

Resumo:

This dissertation presents a system-wide approach, based on genetic algorithms, for the optimization of transfer times for an entire bus transit system. Optimization of transfer times in a transit system is a complicated problem because of the large set of binary and discrete values involved. The combinatorial nature of the problem imposes a computational burden and makes it difficult to solve by classical mathematical programming methods.

The genetic algorithm proposed in this research searches for an optimal solution to the transfer time optimization problem by finding a combination of adjustments to the timetables of all the routes in the system. It makes use of the existing scheduled timetables and the ridership demand at all transfer locations, and it takes into consideration the randomness of bus arrivals.

Data from Broward County Transit are used to compute total transfer times. The proposed genetic algorithm-based approach proves capable of producing substantial time savings over the existing transfer times within a reasonable computation time. The dissertation also addresses issues related to spatial and temporal modeling, variability in bus arrival and departure times, walking time, and the integration of scheduling and ridership data.
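A toy fitness evaluation for such a timetable-offset search might look like the sketch below; the routes, headways, and transfer demand are hypothetical, and the deterministic wait-time model ignores the arrival randomness that the dissertation's approach accounts for:

```python
from itertools import product

HEADWAY = {"route_A": 30, "route_B": 20}          # headways in minutes (assumed)
TRANSFERS = [("route_A", "route_B", 120)]         # (from, to, transferring riders/hour)

def wait_time(offset_from, offset_to, headway_to):
    """Minutes from an arrival of the feeder route to the next departure of the receiver."""
    return (offset_to - offset_from) % headway_to

def total_transfer_time(offsets):
    """Fitness: total passenger-weighted transfer wait time for a set of route offsets."""
    total = 0.0
    for frm, to, riders in TRANSFERS:
        total += riders * wait_time(offsets[frm], offsets[to], HEADWAY[to])
    return total

# Brute-force check over whole-minute offsets (a GA would search this space instead).
best = min(product(range(30), range(20)),
           key=lambda o: total_transfer_time({"route_A": o[0], "route_B": o[1]}))
print(best, total_transfer_time({"route_A": best[0], "route_B": best[1]}))
```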

Relevância: 30.00%

Resumo:

Today, over 15,000 Ion Mobility Spectrometry (IMS) analyzers are employed at security checkpoints worldwide to detect explosives and illicit drugs. Current portal IMS instruments and other electronic-nose technologies detect explosives and drugs by analyzing samples containing headspace air and loose particles residing on a surface. Canines can outperform these systems at sampling and detecting low-vapor-pressure explosives and drugs, such as RDX, PETN, cocaine, and MDMA, because these biological detectors target the volatile signature compounds available in the headspace rather than the non-volatile parent compounds of the explosives and drugs.

In this dissertation research, volatile signature compounds available in the headspace over explosive and drug samples were detected using solid-phase microextraction (SPME) as a headspace sampling tool coupled to an IMS analyzer. A Genetic Algorithm (GA) technique was developed to optimize the operating conditions of a commercial IMS (GE Itemizer 2), leading to the successful detection of plastic explosives (Detasheet, Semtex H, and C-4) and illicit drugs (cocaine, MDMA, and marijuana). Short sampling times (between 10 s and 5 min) were adequate to extract and preconcentrate sufficient analytes (> 20 ng) representing the volatile signatures in the headspace of a 15 mL glass vial or a quart-sized can containing ≤ 1 g of the bulk explosive or drug.

Furthermore, a research-grade IMS with flexibility for changing operating conditions and physical configurations was designed and fabricated to accommodate future research into different analytes or configurations. The design and construction of the FIU-IMS were facilitated by computer modeling and simulation of ion behavior within an IMS. The simulation method developed uses SIMION/SDS and was evaluated with experimental data collected using a commercial IMS (PCP Phemto Chem 110). The FIU-IMS instrument has performance comparable to the GE Itemizer 2 (average resolving power of 14, resolution of 3 between two drugs and two explosives, and LODs ranging from 0.7 to 9 ng).

The results of this dissertation further advance the concept of targeting volatile components to presumptively detect concealed bulk explosives and drugs by SPME-IMS, and the new FIU-IMS provides a flexible platform for future IMS research projects.

Relevância: 30.00%

Resumo:

A wireless mesh network is a mesh network implemented over a wireless network system such as a wireless LAN. Wireless Mesh Networks (WMNs) are promising for numerous applications such as broadband home networking, enterprise networking, transportation systems, health and medical systems, security surveillance systems, etc., and they have therefore received considerable attention from both industrial and academic researchers. This dissertation explores schemes for resource management and optimization in WMNs by means of network routing and network coding.

We propose three optimization schemes. (1) First, a triple-tier optimization scheme is proposed for the load-balancing objective. The first tier achieves long-term routing optimization; the second tier, using the optimization results obtained from the first tier, performs short-term adaptation to deal with the impact of dynamic channel conditions. A greedy sub-channel allocation algorithm is developed as the third tier to further reduce the congestion level in the network. We conduct thorough theoretical analysis to show the correctness of our design and establish the properties of our scheme. (2) Then, a Relay-Aided Network Coding scheme called RANC is proposed to improve the performance gain of network coding by exploiting the physical-layer multi-rate capability in WMNs. We conduct rigorous analysis to find the design principles and study the trade-off in the performance gain of RANC. Based on the analytical results, we provide a practical solution by decomposing the original design problem into two sub-problems: a flow partition problem and a scheduling problem. (3) Lastly, a joint optimization scheme of routing in the network layer and network-coding-aware scheduling in the MAC layer is introduced. We formulate the network optimization problem and exploit its structure via dual decomposition. We find that the original problem is composed of two sub-problems, a routing problem in the network layer and a scheduling problem in the MAC layer, which are coupled through the link capacities. We solve the routing problem with two different adaptive routing algorithms and provide a distributed coding-aware scheduling algorithm. The experimental results show that the proposed schemes can significantly improve network performance.
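The abstract names a greedy sub-channel allocation algorithm without detailing it; a generic greedy allocation that balances load across sub-channels could be sketched as follows (the inputs and the simple load-based congestion model are assumptions):

```python
# Generic greedy sub-channel allocation: process links in decreasing order of offered
# load and assign each to the sub-channel that currently carries the least load.
def greedy_subchannel_allocation(link_loads, n_channels):
    """link_loads: {link_id: offered load}; returns (assignment, per-channel load)."""
    channel_load = [0.0] * n_channels
    assignment = {}
    for link, load in sorted(link_loads.items(), key=lambda kv: -kv[1]):
        ch = min(range(n_channels), key=lambda c: channel_load[c])
        assignment[link] = ch
        channel_load[ch] += load
    return assignment, channel_load

links = {"l1": 8.0, "l2": 5.0, "l3": 4.0, "l4": 3.0, "l5": 2.0}
assign, loads = greedy_subchannel_allocation(links, n_channels=3)
print(assign)   # heaviest links spread across channels
print(loads)    # resulting per-channel congestion levels
```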

Relevância: 30.00%

Resumo:

Recently, various evolutionary computation techniques have been used in areas such as parameter estimation of linear and nonlinear dynamic processes, including processes subject to uncertainty. This motivates the use of algorithms such as the particle swarm optimizer (PSO) in these areas. However, little is known about the convergence of this algorithm, and most existing analyses and studies have concentrated on experimental results. The objective of this work is therefore to propose a new structure for PSO that allows the convergence of the algorithm to be analyzed more thoroughly in an analytical way. To this end, PSO is restructured into a matrix form and reformulated as a piecewise linear system. The parts are analyzed separately, and the insertion of a forgetting factor is proposed to guarantee that the most significant part of this system has eigenvalues inside the unit circle. The convergence of the algorithm as a whole is also analyzed, using an almost-sure convergence criterion applicable to switched systems. Experimental tests are then carried out to verify the behavior of the eigenvalues after the insertion of the forgetting factor. Subsequently, traditional parameter identification algorithms are combined with the matrix PSO so that the identification results are as good as or better than identification with PSO alone or with the traditional algorithms alone. The results show that the particles converge within a bounded region and that the functions obtained after combining the matrix PSO algorithm with the conventional algorithms generalize better for the system presented. The conclusion is that the hybridization, despite limiting the search for a fitter particle in PSO, guarantees a minimum level of performance for the algorithm and still makes it possible to improve on the result obtained with the traditional algorithms, allowing the approximated system to be represented at a greater number of frequencies.
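For reference, a minimal global-best PSO in vector form is sketched below; the paper's matrix reformulation and piecewise-linear analysis are not spelled out in the abstract, so the placement of the forgetting factor lam on the inertia term is an assumption:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, lam=0.98, seed=0):
    """Standard global-best PSO; lam is a hypothetical forgetting factor damping inertia."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = lam * w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: identify the two parameters of y = a*u + b from noisy data by least squares.
rng = np.random.default_rng(1)
u = np.linspace(0, 1, 50)
y = 2.0 * u + 0.5 + 0.01 * rng.standard_normal(50)
sse = lambda p: np.sum((p[0] * u + p[1] - y) ** 2)
print(pso(sse, dim=2))   # should land close to (2.0, 0.5)
```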

Relevância: 30.00%

Resumo:

Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.

Relevância: 30.00%

Resumo:

Knowledge-based radiation treatment is an emerging concept in radiotherapy. It mainly refers to techniques that can guide or automate treatment planning in the clinic by learning from prior knowledge. Different models have been developed to realize it; one, proposed by Yuan et al. at Duke, targets lung IMRT planning. This model can automatically determine both the beam configuration and the optimization objectives with non-coplanar beams, based on patient-specific anatomical information. Although plans automatically generated by this model demonstrate dosimetric quality equivalent to or better than clinically approved plans, its validity and generality are limited by the empirical assignment of a coefficient called the angle spread constraint, defined in the beam efficiency index used for beam ranking. To eliminate these limitations, a systematic study of this coefficient is needed to provide evidence for its optimal value.

To achieve this purpose, eleven lung cancer patients with complex tumor shapes, whose clinically approved plans used non-coplanar beams, were retrospectively studied in the framework of the automatic lung IMRT planning algorithm. The primary and boost plans of three patients were treated as different cases because of their different target sizes and shapes. A total of 14 lung cases were thus re-planned using the knowledge-based automatic lung IMRT planning algorithm, varying the angle spread constraint from 0 to 1 in increments of 0.2. A modified beam angle efficiency index was adopted to guide the beam selection. Great effort was made to ensure that the quality of the plans associated with every angle spread constraint value was as good as possible. Important dosimetric parameters for the PTV and OARs, quantitatively reflecting plan quality, were extracted from the DVHs and analyzed as a function of the angle spread constraint for each case. Comparisons of these parameters between clinical and model-based plans were evaluated with two-sample Student's t-tests, and regression analysis was performed on a composite index, built from the percentage errors between the dosimetric parameters of the model-based plans and those of the clinical plans, as a function of the angle spread constraint.

Results show that model-based plans generally have quality equivalent to or better than clinically approved plans, both qualitatively and quantitatively. All dosimetric parameters in the automatically generated plans, except those for the lungs, are statistically better than or comparable to those in the clinical plans. On average, reductions of more than 15% are observed in the conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, together with increases of 8% and 3% in V5 and V20 for the lungs, respectively. The intra-plan comparison among model-based plans demonstrates that plan quality does not change much once the angle spread constraint exceeds 0.4. Further examination of the variation of the composite index as a function of the angle spread constraint shows that 0.6 is the optimal value, yielding statistically the best achievable plans.
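A sketch of the analysis workflow described above (parameter sweep, two-sample t-tests, composite index of percentage errors) is given below with made-up numbers; only the workflow, not the study's data or findings, is illustrated:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
asc_values = np.arange(0.0, 1.2, 0.2)               # angle spread constraint: 0 to 1, step 0.2
clinical = {"PTV_CI": rng.normal(1.25, 0.05, 14)}   # 14 cases; one dosimetric parameter shown

composite = []
for asc in asc_values:
    # Hypothetical model-based values that improve as asc grows, then level off.
    model = (clinical["PTV_CI"] * (1.0 - 0.15 * min(asc, 0.6) / 0.6)
             + rng.normal(0, 0.02, 14))
    t_stat, p_val = ttest_ind(model, clinical["PTV_CI"])        # two-sample t-test
    pct_err = 100.0 * (model - clinical["PTV_CI"]) / clinical["PTV_CI"]
    composite.append(np.mean(np.abs(pct_err)))                   # one possible composite index
    print(f"asc={asc:.1f}  t={t_stat:+.2f}  p={p_val:.3f}  composite={composite[-1]:.1f}")

# A regression of `composite` against `asc_values` would then locate the value beyond
# which further increases no longer improve plan quality.
```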

Relevância: 30.00%

Resumo:

Over a 17-month record of vertical particle flux, the annual fluxes of dry weight, carbonate and organic carbon were 25.8, 9.4 and 2.4 g/m**2/yr, respectively. Parallel to the trap deployments, the pelagic system structure was recorded with high vertical and temporal resolution. Within a distinct seasonal cycle of vertical particle flux, zooplankton faecal pellets of various sizes, shapes and contents were collected by the traps in different proportions and quantities throughout the year (range: 0-4,500 × 10**3/m**2/d). The remains of different groups of organisms showed distinct seasonal variations in abundance. In early summer there was a small maximum in the diatom flux, followed by pulses of tintinnids, radiolarians, foraminiferans and pteropods between July and November. Food web interactions in the water column were important in controlling the quality and quantity of sinking material. For example, changes in the population structure of the dominant herbivores, the break-down of regenerating summer populations of microflagellates and protozooplankton, and the collapse of a pteropod-dominated community each resulted in marked sedimentation pulses. These data from the Norwegian Sea indicate the mechanisms which either accelerate or counteract the loss of material via sedimentation. They involve variations in the structure of the pelagic system and operate on long (e.g. the annual plankton succession) and short (e.g. the end of new production, sporadic grazing by swarm feeders) time scales. Investigating the water column at high temporal resolution, in parallel with drifting sediment trap deployments and shipboard experiments on the dominant zooplankters, is a promising approach to better understanding both the origin and the fate of material sinking to the sea floor.