998 results for Lagrangian methods


Relevance:

60.00%

Publisher:

Abstract:

A new approach to solving the Optimal Power Flow problem is described, making use of some recent findings, especially in the area of primal-dual methods for convex programming. In this approach, equality constraints are handled by Newton's method, inequality constraints on voltages and transformer taps by the logarithmic barrier method, and the remaining inequality constraints by the augmented Lagrangian method. Numerical test results are presented, showing the effective performance of this algorithm. © 2001 IEEE.
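The constraint split described above, a logarithmic barrier for bound-type inequalities combined with an augmented Lagrangian for the remaining constraints, can be illustrated on a toy problem. This is a schematic sketch only, not the paper's OPF algorithm: the objective, the constraints, and all parameter values below are invented for illustration, and the inner solve uses plain gradient descent rather than Newton's method.

```python
def barrier_auglag(iters=30):
    """Minimise (x-2)^2 + (y-2)^2 subject to x + y = 3 (handled by an
    augmented Lagrangian) and x <= 1 (handled by a logarithmic barrier),
    using gradient descent on the merit function.  All constants are
    illustrative."""
    x, y = 0.5, 0.5                  # start strictly inside the barrier
    lam, rho, mu = 0.0, 10.0, 1.0    # multiplier, penalty, barrier parameter
    for _ in range(iters):
        for _ in range(2000):        # approximate inner minimisation
            h = x + y - 3.0          # equality-constraint residual
            gx = 2*(x - 2) + lam + rho*h + mu/(1.0 - x)  # barrier term: -mu*log(1-x)
            gy = 2*(y - 2) + lam + rho*h
            xn = x - 1e-3*gx
            if xn >= 1.0:            # keep the iterate strictly inside x < 1
                xn = 0.5*(x + 1.0)
            x, y = xn, y - 1e-3*gy
        lam += rho*(x + y - 3.0)     # first-order multiplier update
        mu *= 0.5                    # tighten the barrier
    return x, y
```

With the bound x <= 1 active at the solution, the minimiser is (1, 2); the iterates approach it as mu shrinks and lam converges.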

Relevance:

60.00%

Publisher:

Abstract:

In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD) that we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski that was called RCRCQ. We show that RCPLD is enough to ensure the convergence of an augmented Lagrangian algorithm and that it asserts the validity of an error bound. We also provide proofs and counter-examples that show the relations of RCRCQ and RCPLD with other known constraint qualifications. In particular, RCPLD is strictly weaker than CPLD and RCRCQ, while still stronger than Abadie's constraint qualification. We also verify that the second order necessary optimality condition holds under RCRCQ.

Relevance:

60.00%

Publisher:

Abstract:

Bound-constrained minimization is a subject of active research. To assess the performance of existing solvers, numerical evaluations and comparisons are carried out. Arbitrary decisions that may have a crucial effect on the conclusions of numerical experiments are highlighted in the present work. As a result, a detailed evaluation based on performance profiles is applied to the comparison of bound-constrained minimization solvers. Extensive numerical results are presented and analyzed.
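A performance profile, in the sense the abstract refers to, reports for each solver the fraction of problems it finishes within a factor tau of the best solver on that problem. A minimal sketch of computing one point of such a profile (the solver names and runtimes in the usage example are made up):

```python
def performance_profile(times, tau):
    """Fraction of problems each solver finishes within a factor tau of
    the best solver on that problem.  times[s][p] is the runtime of
    solver s on problem p, with float('inf') marking a failure."""
    n = len(next(iter(times.values())))
    best = [min(times[s][p] for s in times) for p in range(n)]
    return {s: sum(ts[p] <= tau * best[p] for p in range(n)) / n
            for s, ts in times.items()}

# Hypothetical runtimes on three problems:
runs = {"solver_A": [1.0, 2.0, float("inf")],
        "solver_B": [2.0, 1.0, 3.0]}
```

At tau = 2, solver_A solves 2 of 3 problems within twice the best time (the failure never counts), while solver_B solves all 3; sweeping tau traces out the full profile curve.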

Relevance:

60.00%

Publisher:

Abstract:

We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
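Combining these relations with those of the previous abstract (RCPLD strictly weaker than CPLD and RCRCQ, stronger than Abadie's CQ), the constraint qualifications line up in the following implication chain, summarised here for convenience:

```latex
\mathrm{CPLD} \Rightarrow \mathrm{RCPLD}, \qquad
\mathrm{RCRCQ} \Rightarrow \mathrm{RCPLD}, \qquad
\mathrm{RCPLD} \Rightarrow \mathrm{CRSC} \Rightarrow \mathrm{CPG}, \qquad
\mathrm{RCPLD} \Rightarrow \text{Abadie's CQ}
```

An arrow means the left-hand condition implies (is stronger than) the right-hand one.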

Relevance:

60.00%

Publisher:

Abstract:

The solution of structural reliability problems by the First-Order Reliability Method requires optimization algorithms to find the smallest distance between a limit state function and the origin of standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on HLRF, but uses a new differentiable merit function with Wolfe conditions to select the step length in line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to the classical augmented Lagrangian method, to HLRF and to the improved HLRF (iHLRF) algorithms, in the solution of 25 benchmark problems from the literature. The new proposed HLRF algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
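The classical HLRF iteration that the article builds on can be sketched in a few lines: repeatedly project the current point onto the linearised limit state, and read off the reliability index as the norm of the converged design point. The limit state used in the usage note is a generic made-up linear example, not one of the article's 25 benchmark problems.

```python
def hlrf(g, u0, iters=50, h=1e-6):
    """HLRF iteration in standard Gaussian space: at each step, project
    onto the linearisation of the limit state g(u) = 0.  The reliability
    index is the norm of the converged design point u*."""
    u = list(u0)
    for _ in range(iters):
        gu = g(u)
        grad = []
        for i in range(len(u)):          # forward-difference gradient of g
            up = u[:]
            up[i] += h
            grad.append((g(up) - gu) / h)
        norm2 = sum(gi * gi for gi in grad)
        dot = sum(gi * ui for gi, ui in zip(grad, u))
        u = [(dot - gu) * gi / norm2 for gi in grad]   # HLRF update
    beta = sum(ui * ui for ui in u) ** 0.5
    return u, beta
```

For the linear limit state g(u) = 3 - u1 - u2 the design point is (1.5, 1.5) and beta = 3/sqrt(2); for nonlinear limit states the same update is iterated but, as the article stresses, convergence is no longer guaranteed, which motivates its merit-function line search.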

Relevance:

60.00%

Publisher:

Abstract:

Mineral dust is an important component of the Earth's climate system and provides essential nutrients to oceans and rain forests. During atmospheric transport, dust particles directly and indirectly influence weather and climate. The strength of dust sources and the characteristics of the transport, in turn, might be subject to climatic changes. Earth system models help to better understand these complex mechanisms.

This thesis applies the global climate model ECHAM5/MESSy Atmospheric Chemistry (EMAC) for simulations of the mineral dust cycle under different climatic conditions. The prerequisite for suitable model results is the determination of the model setup reproducing the most realistic dust cycle in the recent climate. Simulations with this setup are used to gain new insights into properties of the transatlantic dust transport from Africa to the Americas, and adaptations of the model's climate forcing factors allow for investigations of the impact of climatic changes on the dust cycle.

In the first part, the most appropriate model setup is determined through a number of sensitivity experiments. It uses the dust emission parametrisation from Tegen et al. (2002) and a spectral resolution of T85, corresponding to a horizontal grid spacing of about 155 km. Coarser resolutions are not able to accurately reproduce emissions from important source regions such as the Bodele Depression in Chad or the Taklamakan Desert in Central Asia. Furthermore, the representation of ageing and wet deposition of dust particles in the model requires a basic sulphur chemical mechanism. This setup is recommended for future simulations with EMAC focusing on mineral dust.

One major branch of the global dust cycle is the long-range transport from the world's largest dust source, the Sahara, across the Atlantic Ocean. Seasonal variations of the main transport pathways, to the Amazon Basin in boreal winter and to the Caribbean during summer, are well known and understood, and are corroborated in this thesis. Both Eulerian and Lagrangian methods give estimates of the typical transport times from the source regions to deposition on the order of nine to ten days. Previously, a huge proportion of the dust transported across the Atlantic Ocean has been attributed to emissions from the Bodele Depression. However, the contribution of this hot spot to the total transport is very low in the present results, although the overall emissions from this region are comparable. Both the model results and previously analysed data sets, such as satellite products, involve uncertainties, and this controversy about dust transport from the Bodele Depression calls for future investigation and clarification.

The aforementioned characteristics of the transatlantic dust transport change only slightly in simulations representing climatic conditions of the Little Ice Age in the middle of the last millennium, with a mean near-surface cooling of 0.5 to 1 K. However, intensification of the West African summer monsoon during the Little Ice Age is associated with higher dust emissions from North African source regions and wetter conditions in the Sahel. Furthermore, the Indian monsoon and dust emissions from the Arabian Peninsula, which are affected by this circulation, are intensified during the Little Ice Age, whereas the annual global dust budget is similar in both climate epochs. Simulated dust emission fluxes are particularly influenced by the surface parameters. Modifications of the model do not affect these in this thesis, so that all differences in the results can be ascribed to changed forcing factors, such as greenhouse gas concentrations. Owing to meagre comparison data sets, the verification of the results presented here is problematic. Deeper knowledge about the dust cycle during the Little Ice Age can be obtained by future simulations, based on this work, additionally using improved reconstructions of surface parameters. Better evaluation of such simulations would be possible by refining the temporal resolution of reconstructed dust deposition fluxes from existing ice and marine sediment cores.

Relevance:

40.00%

Publisher:

Abstract:

Over the past several decades, it has become apparent that anthropogenic activities have resulted in the large-scale enhancement of the levels of many trace gases throughout the troposphere. More recently, attention has been given to the transport pathway taken by these emissions as they are dispersed throughout the atmosphere. The transport pathway determines the physical characteristics of emissions plumes and therefore plays an important role in the chemical transformations that can occur downwind of source regions. For example, the production of ozone (O3) is strongly dependent upon the transport its precursors undergo. O3 can initially be formed within air masses while still over polluted source regions. These polluted air masses can experience continued O3 production or O3 destruction downwind, depending on the air mass's chemical and transport characteristics. At present, however, there are a number of uncertainties in the relationships between transport and O3 production in the North Atlantic lower free troposphere. The first phase of the study presented here used measurements made at the Pico Mountain observatory and model simulations to determine transport pathways for US emissions to the observatory. The Pico Mountain observatory was established in the summer of 2001 in order to address the need to understand the relationships between transport and O3 production. Measurements from the observatory were analyzed in conjunction with simulations from the Lagrangian particle dispersion model (LPDM) FLEXPART, in order to determine the transport pathway for events observed at the Pico Mountain observatory during July 2003. A total of 16 events were observed, 4 of which were analyzed in detail. The transport time for these 16 events varied from 4.5 to 7 days, while the transport altitudes over the ocean ranged from 2 to 8 km, but were typically less than 3 km.
In three of the case studies, eastward advection and transport in a weak warm conveyor belt (WCB) airflow was responsible for the export of North American emissions into the FT, while transport in the FT was governed by easterly winds driven by the Azores/Bermuda High (ABH) and transient northerly lows. In the fourth case study, North American emissions were lofted to 6-8 km in a WCB before being entrained in the same cyclone's dry airstream and transported down to the observatory. The results of this study show that the lower marine FT may provide an important transport environment where O3 production may continue, in contrast to transport in the marine boundary layer, where O3 destruction is believed to dominate. The second phase of the study presented here focused on improving the analysis methods that are available with LPDMs. While LPDMs are popular and useful for the analysis of atmospheric trace gas measurements, identifying the transport pathway of emissions from their source to a receptor (the Pico Mountain observatory in our case) using the standard gridded model output can be difficult or impossible, particularly during complex meteorological scenarios. The transport study in phase 1 was limited to only 1 month out of more than 3 years of available data, and included only 4 case studies out of the 16 events, specifically due to this confounding factor. The second phase of this study addressed this difficulty by presenting a method to clearly and easily identify the pathway taken by only those emissions that arrive at a receptor at a particular time, by combining the standard gridded output from forward (i.e., concentrations) and backward (i.e., residence time) LPDM simulations, greatly simplifying similar analyses.
The ability of the method to successfully determine the source-to-receptor pathway, restoring this Lagrangian information that is lost when the data are gridded, is proven by comparing the pathway determined from this method with the particle trajectories from both the forward and backward models. A sample analysis is also presented, demonstrating that this method is more accurate and easier to use than existing methods using standard LPDM products. Finally, we discuss potential future work that would be possible by combining the backward LPDM simulation with gridded data from other sources (e.g., chemical transport models) to obtain a Lagrangian sampling of the air that will eventually arrive at a receptor.
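The combination step described above, pairing the forward (concentration) grid with the backward (residence-time) grid, amounts to an element-wise overlap: cells visited by only one of the two simulations cannot lie on the source-to-receptor pathway and drop out. The sketch below is a simplification under assumed inputs; the sparse dictionary-of-cells representation is an invented stand-in for actual gridded LPDM output.

```python
def source_to_receptor_pathway(forward_conc, backward_rt):
    """Overlap of a forward plume with a backward retroplume released at
    the receptor.  Both inputs map grid cells (t, y, x) to values on the
    same grid and output times; the result is normalised so that the
    pathway weights sum to one."""
    overlap = {cell: c * backward_rt.get(cell, 0.0)
               for cell, c in forward_conc.items()}
    total = sum(overlap.values())
    return {cell: v / total for cell, v in overlap.items()} if total else overlap
```

Only cells where both the forward concentration and the backward residence time are nonzero survive, which is what restores the source-to-receptor pathway otherwise lost in the gridding.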

Relevance:

40.00%

Publisher:

Abstract:

In this dissertation a new numerical method for solving Fluid-Structure Interaction (FSI) problems in a Lagrangian framework is developed, where solids of different constitutive laws can suffer very large deformations and fluids are considered to be Newtonian and incompressible. For that, we first introduce a meshless discretization based on local maximum-entropy interpolants. This allows discretizing a spatial domain with no need for tessellation, avoiding the limitations of a mesh. Later, the Stokes flow problem is studied. The Galerkin meshless method based on a max-ent scheme suffers from instabilities for this problem, and therefore stabilization techniques are discussed and analyzed. An unconditionally stable method is finally formulated based on a Douglas-Wang stabilization. Then, a Lagrangian expression for fluid mechanics is derived. This allows us to establish a common framework for fluid and solid domains, such that their interaction is naturally accounted for. The resulting equations are also in need of stabilization, which is corrected with a technique analogous to that used for the Stokes problem. The fully Lagrangian framework for fluid/solid interaction is completed with simple point-to-point and point-to-surface contact algorithms. The method is finally validated, and some numerical examples show the potential scope of applications.

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at low level, using shared memory and coalesced accesses to memory. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.

Relevance:

30.00%

Publisher:

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, therefore achieving high GPU occupancy. The experimental results obtained for the simulated and real hyperspectral data sets reveal speedups up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
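SISAL itself solves a sequence of nonsmooth convex subproblems, but the linear mixing model underneath it is simple. As a didactic simplification (not part of SISAL, which estimates the endmembers themselves), the two-endmember special case of abundance inversion has a closed form:

```python
def unmix_two(pixel, e1, e2):
    """Fraction f of endmember e1 in a pixel modelled as f*e1 + (1-f)*e2
    under the linear mixing model: a least-squares fit over the spectral
    bands, clipped to the physically meaningful range [0, 1]."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    return min(max(num / den, 0.0), 1.0)
```

For a noise-free mixture the fit recovers the fraction exactly; with more endmembers the same least-squares idea becomes a constrained problem per pixel, which is why pixel-by-pixel GPU parallelism pays off.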

Relevance:

30.00%

Publisher:

Abstract:

"Series title: SpringerBriefs in Applied Sciences and Technology, ISSN 2191-530X"

Relevance:

30.00%

Publisher:

Abstract:

The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the sale products use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and for their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
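Lagrangian relaxation of resource constraints, the decomposition idea referred to above, is easiest to see on a toy problem. The sketch below dualises the single capacity constraint of a 0/1 knapsack, so the items decouple, and minimises the resulting dual bound by a projected subgradient method. This illustrates the relaxation technique only; it is not the paper's network RM dynamic program, and all numbers in the usage note are invented.

```python
def lagrangian_bound(values, weights, cap, steps=200):
    """Upper bound on max sum(v_i * x_i) s.t. sum(w_i * x_i) <= cap,
    x binary, obtained by dualising the capacity constraint with a
    multiplier lam >= 0 and minimising the dual by subgradient steps."""
    lam, best = 0.0, float("inf")
    for k in range(1, steps + 1):
        # With the capacity dualised, items decouple: take i iff v_i > lam * w_i.
        dual = lam * cap + sum(max(0.0, v - lam * w)
                               for v, w in zip(values, weights))
        best = min(best, dual)               # keep the tightest bound seen
        picked = sum(w for v, w in zip(values, weights) if v - lam * w > 0)
        lam = max(0.0, lam - (cap - picked) / k)   # projected subgradient step
    return best
```

For values (6, 10), weights (1, 2) and capacity 2, the best integer value is 10 and the Lagrangian dual bound approaches 11, which here coincides with the LP bound, mirroring the kind of equivalence between relaxations that the paper establishes for network RM.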

Relevance:

30.00%

Publisher:

Abstract:

A Lagrangian treatment of the quantization of first class Hamiltonian systems with constraints, with Hamiltonians linear and quadratic in the momenta respectively, is performed. The first-reduce-then-quantize and the first-quantize-then-reduce (Dirac's) methods are compared. A source of ambiguities in the latter approach is pointed out, and its relevance to issues concerning self-consistency and equivalence with the first-reduce method is emphasized. One of the main results is the relation between the propagator obtained à la Dirac and the propagator in the full space. As an application of the formalism developed, quantization on coset spaces of compact Lie groups is presented. In this case it is shown that a natural selection of a Dirac quantization allows for full self-consistency and equivalence. Finally, the specific case of the propagator on the two-dimensional sphere S2, viewed as the coset space SU(2)/U(1), is worked out. © 1995 American Institute of Physics.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. Thus, we can show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
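The splitting idea described above, two copies of the unknown deformation tied together by an augmented Lagrangian, reduces in a scalar toy setting to the familiar alternating scheme below. The data term (u - d)^2 and regularity term alpha*v^2 are invented stand-ins for the actual GAF functional, chosen only to make the alternating structure visible.

```python
def split_auglag(d, alpha, rho=1.0, iters=500):
    """Minimise (u - d)**2 + alpha * v**2 subject to u = v by alternating
    exact minimisations of the augmented Lagrangian
    (u-d)**2 + alpha*v**2 + lam*(u - v) + (rho/2)*(u - v)**2."""
    u = v = lam = 0.0
    for _ in range(iters):
        u = (2 * d - lam + rho * v) / (2 + rho)    # data-term step (v fixed)
        v = (lam + rho * u) / (2 * alpha + rho)    # regularity-term step (u fixed)
        lam += rho * (u - v)                       # multiplier update
    return u, v
```

The joint minimiser of (u - d)^2 + alpha*u^2 is u = d/(1 + alpha); for d = 2 and alpha = 1, both copies converge to 1, and the same alternation carries over when u and v are full deformation fields.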

Relevance:

30.00%

Publisher:

Abstract:

Integer linear programming is a robust approach that can quickly solve large instances of discrete optimization problems. However, problems constantly grow in complexity and sometimes impose strict limits on computation time. It then becomes necessary to develop specialized methods to solve these problems approximately, while computing bounds on their optimal values in order to prove the quality of the solutions obtained. We propose to explore an integer reformulation approach guided by Lagrangian relaxation. After the identification of a strong Lagrangian relaxation, a systematic process yields a second integer formulation. This reformulation, more compact than that of Dantzig and Wolfe, has exactly the same integer solutions as the initial formulation, but improves its linear bound, which becomes equal to the Lagrangian bound. The reformulation approach unifies and generalizes known formulations and bounding methods. Moreover, it offers a simple way to obtain smaller reformulations in exchange for weaker bounds. These reformulations remain large, which is why we also describe specialized methods for solving their linear relaxations. Finally, we apply the reformulation approach to two location problems. This leads to new formulations for these problems; some are very large, but our specialized solution methods make them practical.