805 results for Minimization Algorithm
Abstract:
In this work, an evolutionary tuning algorithm is validated in plants composed of a grid-connected inverter. The optimization aims at tuning the slopes of the P-Ω and Q-V curves so that the system is stable, well damped, and has minimum settling time. Simulation and experimental results are presented to prove the feasibility of the proposed approach. However, the experimental results reveal a compromising effect of grid frequency oscillations on the active power transfer. An additional loop is therefore proposed to compensate for this effect, ensuring a constant active power flow. © 2011 IEEE.
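For illustration, a minimal evolution-strategy loop of the kind such droop-slope tuning could use is sketched below. The cost function is a hypothetical placeholder for the paper's simulated stability/settling-time criterion, and the slope ranges and names (m_p, n_q) are invented for the sketch.

```python
# Minimal (mu + lambda) evolution strategy over two droop slopes.
# cost() is a hypothetical surrogate; in practice it would evaluate the
# simulated settling time and stability of the grid-connected inverter.
import random

def cost(m_p, n_q):
    # Placeholder: penalize distance from some nominal slopes.
    return (m_p - 0.01) ** 2 + (n_q - 0.05) ** 2

def evolve(generations=50, mu=5, lam=20, sigma=0.01):
    pop = [(random.uniform(0, 0.1), random.uniform(0, 0.1)) for _ in range(mu)]
    for _ in range(generations):
        children = []
        for _ in range(lam):
            m_p, n_q = random.choice(pop)              # pick a parent
            children.append((abs(m_p + random.gauss(0, sigma)),
                             abs(n_q + random.gauss(0, sigma))))
        pop = sorted(pop + children, key=lambda s: cost(*s))[:mu]
    return pop[0]

print("tuned slopes:", evolve())
```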
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Bound-constrained minimization is a subject of active research. To assess the performance of existing solvers, numerical evaluations and comparisons are carried out. The present work highlights arbitrary decisions that may have a crucial effect on the conclusions of numerical experiments. Accordingly, a detailed evaluation based on performance profiles is applied to the comparison of bound-constrained minimization solvers. Extensive numerical results are presented and analyzed.
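For context, performance profiles (in the sense of Dolan and Moré) compare solvers by the fraction of problems each solves within a factor τ of the best solver on that problem. A minimal sketch, with invented timing data:

```python
# Performance profiles: r[s][p] = t[s][p] / min over solvers of t[.][p];
# rho_s(tau) = fraction of problems solver s solves within factor tau of
# the best. The timing matrix below is illustrative; np.inf marks failures.
import numpy as np

times = np.array([
    [1.0, 2.0, 4.0, 8.0],   # solver A
    [1.5, 1.0, 16.0, 2.0],  # solver B
])  # shape: (n_solvers, n_problems)

ratios = times / times.min(axis=0)

def profile(solver_ratios, tau):
    return np.mean(solver_ratios <= tau)

for tau in (1.0, 2.0, 4.0):
    print(tau, [profile(r, tau) for r in ratios])
```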
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F(P)‖∞ of the map F(P) that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F(P)‖q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ‖F(P)‖1, which is solved by the classic min-cut/max-flow algorithm, referred to often as the Graph Cut algorithm. We notice that a minimization problem for ‖F(P)‖q, q ∈ [1,∞), is identical to that for ‖F(P)‖1, when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ‖F(P)‖1 minimization problem also solves the one for ‖F(P)‖q with q ∈ [1,∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F(P)‖q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F(P)‖q-minimization problems converge to a solution of the ‖F(P)‖∞-minimization problem (the fact that ‖F(P)‖∞ = lim_{q→∞} ‖F(P)‖q is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-scenario) running times of the algorithms, as well as the influence of the choice of seeds on the output.
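The reduction stated above is mechanical: to minimize ‖F(P)‖q with an ℓ1 (min-cut/max-flow) solver, raise every weight to the power q before cutting. A minimal sketch, assuming networkx and an invented graph with seeds s and t:

```python
# GC(sum)-style solution of the ||F(P)||_q problem: replace w(e) by w(e)**q
# and run classic min-cut/max-flow. Graph, weights, and seeds are invented.
import networkx as nx

def gc_sum(G, s, t, q):
    H = nx.DiGraph()
    for u, v, d in G.edges(data=True):
        c = d["weight"] ** q                 # the w -> w**q reduction
        H.add_edge(u, v, capacity=c)
        H.add_edge(v, u, capacity=c)
    cut_value, (S, T) = nx.minimum_cut(H, s, t)
    return S, T                              # object / background labeling

G = nx.Graph()
G.add_edge("s", "a", weight=2.0)
G.add_edge("a", "t", weight=1.0)
G.add_edge("s", "t", weight=1.5)
print(gc_sum(G, "s", "t", q=2))
```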
Abstract:
Intra-session network coding has been shown to offer significant gains in terms of achievable throughput and delay in settings where one source multicasts data to several clients. In this paper, we consider a more general scenario where multiple sources transmit data to sets of clients over a wireline overlay network. We propose a novel framework for efficient rate allocation in networks where intermediate network nodes have the opportunity to combine packets from different sources using randomized network coding. We formulate the problem as the minimization of the average decoding delay in the client population and solve it with a gradient-based stochastic algorithm. Our optimized inter-session network coding solution is evaluated in different network topologies and is compared with basic intra-session network coding solutions. Our results show the benefits of proper coding decisions and effective rate allocation for lowering the decoding delay when the network is used by concurrent multicast sessions.
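A minimal projected stochastic-gradient sketch in the spirit of the rate allocation described above; the per-session delay model w_k / r_k and the capacity budget C are hypothetical stand-ins for the paper's decoding-delay objective and network constraints:

```python
# Minimize sum_k w_k / r_k subject to sum_k r_k <= C with noisy gradients.
# The analytic optimum allocates r_k proportional to sqrt(w_k).
import random

w = [3.0, 1.0, 2.0]        # per-session workloads (illustrative)
C = 6.0                    # shared capacity budget
r = [C / len(w)] * len(w)  # initial rates

for step in range(2000):
    lr = 0.5 / (1 + step)
    for k in range(len(r)):
        noise = random.gauss(0, 0.01)        # stochastic gradient noise
        grad = -w[k] / r[k] ** 2 + noise     # d/dr_k of w_k / r_k
        r[k] = max(1e-3, r[k] - lr * grad)
    total = sum(r)
    if total > C:                            # rescale back onto the budget
        r = [x * C / total for x in r]       # (simple projection heuristic)

print([round(x, 2) for x in r])
```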
Abstract:
The hierarchical properties of potential energy landscapes have been used to gain insight into thermodynamic and kinetic properties of protein ensembles. It may also be possible to use them to direct computational searches for thermodynamically stable macroscopic states, i.e., computational protein folding. To this end, we have developed a top-down search procedure in which conformation space is recursively dissected according to the intrinsic hierarchical structure of a landscape's effective-energy barriers. This procedure generates an inverted tree similar to the disconnectivity graphs generated by local-minima clustering methods, but it differs fundamentally in the manner in which the portion of the tree to be computationally explored is selected. A key ingredient is a branch-selection algorithm that takes advantage of statistically predictive properties of the landscape to guide searches down the tree branches that are most likely to lead to the physically relevant macroscopic states. Using the computational folding of a β-hairpin-forming peptide as an example, we show that such predictive properties indeed exist and can be used for structure prediction by free-energy global minimization.
Abstract:
We present a derivative-free optimization algorithm coupled with a chemical process simulator for the optimal design of individual and complex distillation processes using a rigorous tray-by-tray model. The proposed approach serves as an alternative tool to the various models based on nonlinear programming (NLP) or mixed-integer nonlinear programming (MINLP). This is accomplished by combining the advantages of using a commercial process simulator (Aspen Hysys), including especially suited numerical methods developed for the convergence of distillation columns, with the benefits of the particle swarm optimization (PSO) metaheuristic algorithm, which does not require gradient information and has the ability to escape from local optima. Our method inherits the superstructure developed in Yeomans, H.; Grossmann, I. E. Optimal design of complex distillation columns using rigorous tray-by-tray disjunctive programming models. Ind. Eng. Chem. Res. 2000, 39 (11), 4326–4335, in which the nonexisting trays are considered as simple bypasses of liquid and vapor flows. The implemented tool provides the optimal configuration of distillation column systems, which includes continuous and discrete variables, through the minimization of the total annual cost (TAC). The robustness and flexibility of the method are demonstrated through the successful design and synthesis of three distillation systems of increasing complexity.
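A minimal, self-contained PSO sketch of the kind coupled to the simulator; the quadratic objective is a hypothetical stand-in for the TAC evaluation that Aspen Hysys would return for a candidate column configuration:

```python
# Plain global-best particle swarm optimization.
import random

def tac(x):
    # Hypothetical smooth surrogate for the simulator's TAC evaluation.
    return sum((xi - 3.0) ** 2 for xi in x)

def pso(dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                      # personal bests
    gbest = min(pbest, key=tac)                      # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if tac(pos[i]) < tac(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=tac)
    return gbest

print(pso())
```

In the paper's setting, each tac() call would be a full column simulation, and discrete tray decisions would be encoded alongside the continuous variables.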
Abstract:
The 'moving targets' algorithm for training recurrent networks is reviewed and applied to a task that demonstrates the ability of this algorithm to use distant contextual information. Some practical difficulties are discussed, especially with regard to the minimization process. Results on the performance and computational requirements of several different second-order minimization algorithms are presented for moving-target problems.
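For context, a small sketch comparing second-order and quasi-Newton minimizers with SciPy, using the Rosenbrock function as a stand-in for a moving-targets error surface:

```python
# Compare minimizers that use curvature information (exact or estimated).
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])
for method in ("BFGS", "Newton-CG", "trust-ncg"):
    res = minimize(rosen, x0, jac=rosen_der,
                   hess=rosen_hess if method != "BFGS" else None,
                   method=method)
    print(f"{method:10s} iters={res.nit:3d} f={res.fun:.2e}")
```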
Abstract:
2000 Mathematics Subject Classification: 90C25, 68W10, 49M37.
Abstract:
Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve the performance of the network by focusing on physical layer issues. During the past few years, researchers have started to take cooperative transmission into consideration in routing, and there has been a growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, packet collision minimization using cooperative routing has not been addressed yet. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with reducing the search space (branch and bound space reduction). The proposed strategy builds up the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we solve the problem by decoupling the optimal power allocation scheme from optimal route selection; the problem is thus formulated as an Integer Non-Linear Programming problem, which is solved using a branch and bound space-reduced method. In the second, the cooperative routing problem is solved by decoupling the transmission power and the relay node selection from the route selection; after solving the routing problem, the power allocation is applied to the selected route. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
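A minimal branch-and-bound sketch showing the bound-based pruning of the search space that such methods rely on; a 0/1 knapsack stands in for the dissertation's large-scale MINLP:

```python
# Branch and bound with an LP-relaxation (fractional) upper bound.
# Items are sorted by value/weight so the fractional bound is valid.
values = [3, 10, 7, 4]
weights = [1, 4, 3, 2]
capacity = 6
best = 0

def upper_bound(i, value, room):
    # Best possible completion from item i, allowing a fractional item.
    for v, w in zip(values[i:], weights[i:]):
        if w <= room:
            room -= w
            value += v
        else:
            return value + v * room / w
    return value

def branch(i, value, room):
    global best
    if i == len(values):
        best = max(best, value)
        return
    if upper_bound(i, value, room) <= best:
        return                       # prune: subtree cannot beat incumbent
    if weights[i] <= room:
        branch(i + 1, value + values[i], room - weights[i])  # take item i
    branch(i + 1, value, room)                               # skip item i

branch(0, 0, capacity)
print(best)  # 14
```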
Abstract:
Supply chain operations directly affect service levels. Decisions on the amendment of facilities are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, an efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can easily be implemented. With the proposed algorithm, the selection of a facility is based on service level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm: a Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
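A minimal sketch of the DEA scoring that underlies such efficiency cuts: the input-oriented CCR model in multiplier form, solved as an LP with SciPy. The input/output data for the candidate facilities (DMUs) are illustrative:

```python
# CCR (multiplier form): maximize u.y_o s.t. v.x_o = 1,
# u.y_j - v.x_j <= 0 for all DMUs j, and u, v >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])  # inputs per DMU
Y = np.array([[5.0], [4.0], [4.0]])                  # outputs per DMU

def ccr_efficiency(o):
    n_u, n_v = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n_v)])           # maximize u.y_o
    A_eq = np.concatenate([np.zeros(n_u), X[o]])[None]   # v.x_o = 1
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun  # efficiency score; 1.0 means DEA-efficient

for j in range(len(X)):
    print(j, round(ccr_efficiency(j), 3))
```

An efficiency cut would then exclude candidate configurations whose score falls below 1.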
Abstract:
Demand response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementation of DR algorithms using a centralized agent can be problematic for scalability reasons, and there are issues related to data privacy and robustness to communication failures. It is therefore desirable to use a scalable decentralized algorithm for the implementation of DR. In this paper, a hierarchical DR scheme is proposed for peak minimization based on Dantzig-Wolfe decomposition (DWD). In addition, a time-weighted maximization option is included in the cost function, which improves the quality of service for devices seeking to receive their desired energy sooner rather than later. This paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of upper and lower cost bounds after each DWD iteration.
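A minimal sketch of the bound bookkeeping mentioned in the last sentence, using the standard column-generation result for a minimization with per-block convexity constraints: the restricted master problem (RMP) value is an upper bound, and adding each subproblem's optimal (negative) reduced cost gives a valid lower bound. The numbers are illustrative:

```python
# Bound tracking after one Dantzig-Wolfe iteration (minimization).
def dwd_bounds(z_rmp, reduced_costs):
    upper = z_rmp                       # RMP solution is master-feasible
    lower = z_rmp + sum(reduced_costs)  # Lagrangian-style lower bound
    return lower, upper

lb, ub = dwd_bounds(z_rmp=100.0, reduced_costs=[-4.0, -1.5])
print(lb, ub, "gap:", ub - lb)          # iterate until the gap closes
```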
Abstract:
Lipidic mixtures present a particular phase-change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and their inadequacy increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the Crystal-T algorithm. Considering the non-ideality of both the liquid and solid phases, this algorithm determines the temperatures at which the first and the last crystals of the mixture melt. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess-Gibbs-energy-based equations, with the group contribution UNIFAC model for the calculation of the activity coefficients of both the liquid and solid phases. Very low deviations between theoretical and experimental data evidenced the strength of the algorithm, broadening the scope of SLE modeling.
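For context, a minimal sketch of the classical SLE liquidus relation that such procedures build on, under the simplifying assumptions of an ideal liquid (γ = 1) and a pure-component solid; the property values are illustrative, and the full Crystal-T algorithm additionally needs UNIFAC activity coefficients for both phases:

```python
# Liquidus temperature from ln(x * gamma) = (dH_fus / R) * (1/T_m - 1/T).
import math
from scipy.optimize import brentq

R = 8.314         # J/(mol K)
dH_fus = 45000.0  # J/mol, illustrative heat of fusion
T_m = 330.0       # K, illustrative pure-component melting point

def liquidus_T(x, gamma=1.0):
    f = lambda T: math.log(x * gamma) - (dH_fus / R) * (1 / T_m - 1 / T)
    return brentq(f, 150.0, T_m)      # root-find the melting temperature

for x in (0.9, 0.7, 0.5):
    print(x, round(liquidus_T(x), 2))
```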
Abstract:
PURPOSE: To compare the Full Threshold (FT) and SITA Standard (SS) strategies in glaucomatous patients undergoing automated perimetry for the first time. METHODS: Thirty-one glaucomatous patients who had never undergone perimetry underwent automated perimetry (Humphrey, program 30-2) with both FT and SS on the same day, with an interval of at least 15 minutes. The order of the examinations was randomized, and only one eye per patient was analyzed. Three analyses were performed: a) all the examinations, regardless of the order of application; b) only the first examinations; c) only the second examinations. In order to calculate the sensitivity of both strategies, the following criteria were used to define abnormality: glaucoma hemifield test (GHT) outside normal limits, pattern standard deviation (PSD) <5%, or a cluster of 3 adjacent points with p<5% on the pattern deviation probability plot. RESULTS: When the results of all examinations were analyzed regardless of the order in which they were performed, the number of depressed points with p<0.5% on the pattern deviation probability map was significantly greater with SS (p=0.037), and the sensitivities were 87.1% for SS and 77.4% for FT (p=0.506). When only the first examinations were compared, there were no statistically significant differences in the number of depressed points, but the sensitivity of SS (100%) was significantly greater than that obtained with FT (70.6%) (p=0.048). When only the second examinations were compared, there were no statistically significant differences in either the number of depressed points or the sensitivities of SS (76.5%) and FT (85.7%) (p=0.664). CONCLUSION: SS may have a higher sensitivity than FT in glaucomatous patients undergoing automated perimetry for the first time. However, this difference tends to disappear in subsequent examinations.
Abstract:
The network of HIV counseling and testing centers in São Paulo, Brazil is a major source of data used to build epidemiological profiles of the client population. We examined HIV-1 incidence from November 2000 to April 2001, comparing epidemiological and socio-behavioral data of recently-infected individuals with those with long-standing infection. A less sensitive ELISA was employed to identify recent infection. The overall incidence of HIV-1 infection was 0.53/100/year (95% CI: 0.31-0.85/100/year): 0.77/100/year for males (95% CI: 0.42-1.27/100/year) and 0.22/100/year (95% CI: 0.05-0.59/100/year) for females. Overall HIV-1 prevalence was 3.2% (95% CI: 2.8-3.7%), being 4.0% among males (95% CI: 3.3-4.7%) and 2.1% among females (95% CI: 1.6-2.8%). Recent infections accounted for 15% of the total (95% CI: 10.2-20.8%). Recent infection correlated with being younger and male (p = 0.019), and was more common among younger males and older females.