887 results for nonlinear optimization problems
Abstract:
In this paper we address the problem of separating and recovering convolutively mixed autoregressive processes in a Bayesian framework. Solving this problem requires the ability to solve integration and/or optimization problems involving complicated posterior distributions. We therefore propose efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) methods. We present three algorithms. The first is a classical Gibbs sampler that generates samples from the posterior distribution. The other two are stochastic optimization algorithms that optimize either the marginal distribution of the sources, or the marginal distribution of the parameters of the sources and mixing filters, conditioned on the observations. Simulation results are presented.
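As a rough illustration of the Gibbs-sampling idea only (not the paper's convolutive-mixture model), the sketch below runs a Gibbs sampler for a single AR(1) process with Gaussian noise, alternately drawing the AR coefficient and the noise variance from standard conjugate conditionals; the priors and simulated data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process: x_t = a*x_{t-1} + e_t, e_t ~ N(0, sigma2).
a_true, sigma2_true, T = 0.8, 0.5, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(sigma2_true))

# Gibbs sampler with a flat prior on a and an inverse-gamma(a0, b0) prior on sigma2.
a0, b0 = 2.0, 1.0
a, sigma2 = 0.0, 1.0
samples = []
for it in range(2000):
    # Conditional of a given sigma2: Gaussian (conjugate linear-regression update).
    sxx = np.dot(x[:-1], x[:-1])
    sxy = np.dot(x[:-1], x[1:])
    a = rng.normal(sxy / sxx, np.sqrt(sigma2 / sxx))
    # Conditional of sigma2 given a: inverse-gamma (sampled via a gamma draw).
    resid = x[1:] - a * x[:-1]
    shape = a0 + 0.5 * (T - 1)
    rate = b0 + 0.5 * np.dot(resid, resid)
    sigma2 = 1.0 / rng.gamma(shape, 1.0 / rate)
    samples.append((a, sigma2))

post = np.array(samples[500:])          # discard burn-in
print("posterior mean of (a, sigma2):", post.mean(axis=0))
```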
Abstract:
Based on the idea of homotopy mapping, the gradient regularization algorithm for solving nonlinear inverse problems is improved. Path tracking effectively widens the convergence range of the gradient regularization algorithm. For updating the regularization parameter, a continuation parameter-update method with an adjustable descent rate is proposed by introducing a quasi-Sigmoid function; it achieves good computational efficiency while keeping the iteration stable, and it also gives the algorithm strong robustness to observation noise. Practical examples show that the method has a wide convergence range and high computational efficiency, and yields good inversion results even in the presence of strong observation noise.
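The following is a minimal sketch of the general idea of a regularized gradient (Gauss-Newton) iteration whose regularization parameter is decreased along a sigmoid-shaped continuation path; the forward model, schedule parameters, and noise level are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

# Illustrative nonlinear inverse problem: recover (decay rate, amplitude) of a
# decaying exponential observed with noise.
t = np.linspace(0.0, 5.0, 50)

def forward(p):
    return p[1] * np.exp(-p[0] * t)

def jacobian(p):
    e = np.exp(-p[0] * t)
    return np.column_stack([-p[1] * t * e, e])

rng = np.random.default_rng(1)
p_true = np.array([0.7, 2.0])
d = forward(p_true) + 0.05 * rng.normal(size=t.size)

# Regularized Gauss-Newton iteration with a quasi-sigmoid continuation of the
# regularization parameter: alpha starts large (stable, damped steps) and
# decays smoothly so the iteration can converge to an accurate solution.
p = np.array([0.1, 0.5])                # starting point far from p_true
alpha0, k0, rate = 10.0, 8.0, 1.0       # schedule parameters (illustrative)
for k in range(30):
    alpha = alpha0 / (1.0 + np.exp(rate * (k - k0)))   # sigmoid-shaped decay
    J = jacobian(p)
    r = d - forward(p)
    p = p + np.linalg.solve(J.T @ J + alpha * np.eye(2), J.T @ r)

print("recovered parameters:", p, "true:", p_true)
```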
Abstract:
Global warming of the oceans is expected to alter the environmental conditions that determine the growth of a fishery resource. Most climate change studies are based on models and scenarios that focus on economic growth, or they concentrate on simulating the potential losses or costs to fisheries due to climate change. However, analysis that addresses model optimization problems in order to better understand the complex dynamics of climate change and marine ecosystems is still lacking. This paper presents a simple algorithm that computes transitional dynamics in order to quantify the effect of climate change on the European sardine fishery. The model results indicate that global warming will not necessarily lead to a monotonic decrease in expected biomass levels. Our results show that if the resource is exploited optimally, then in the short run increases in the surface temperature of the fishery ground are compatible with higher expected biomass and economic profit.
Abstract:
This paper describes Mateda-2.0, a MATLAB package for estimation of distribution algorithms (EDAs). This package can be used to solve single and multi-objective discrete and continuous optimization problems using EDAs based on undirected and directed probabilistic graphical models. The implementation contains several methods commonly employed by EDAs. It is also conceived as an open package to allow users to incorporate different combinations of selection, learning, sampling, and local search procedures. Additionally, it includes methods to extract, process and visualize the structures learned by the probabilistic models. This way, it can unveil previously unknown information about the optimization problem domain. Mateda-2.0 also incorporates a module for creating and validating function models based on the probabilistic models learned by EDAs.
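Mateda-2.0 itself is a MATLAB package; the Python sketch below only illustrates the basic EDA loop it implements (truncation selection, learning a probabilistic model, sampling a new population), using a factorized Gaussian model and an assumed test objective rather than any Mateda-2.0 API.

```python
import numpy as np

def sphere(x):
    # Simple continuous test objective (to be minimized).
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
dim, pop_size, n_select, n_gen = 10, 200, 50, 60

# Initial population drawn uniformly from [-5, 5]^dim.
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
for gen in range(n_gen):
    fitness = sphere(pop)
    # Selection: keep the best individuals (truncation selection).
    best = pop[np.argsort(fitness)[:n_select]]
    # Learning: fit a factorized Gaussian (univariate marginals), the simplest
    # probabilistic model an EDA can use.
    mu = best.mean(axis=0)
    sigma = best.std(axis=0) + 1e-12
    # Sampling: draw a new population from the learned model; keep the best
    # selected individual (simple elitism).
    pop = rng.normal(mu, sigma, size=(pop_size, dim))
    pop[0] = best[0]

print("best objective value found:", sphere(pop[:1])[0])
```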
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization, and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions and the use of these decompositions, together with majorization theory, in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix into a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance as the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for LTV scalar channels. For both the case of known LTV channels and the case of unknown channels with wide-sense stationary uncorrelated scattering (WSSUS) statistics, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
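As a small illustration of the difference co-array idea mentioned above, the sketch below forms all pairwise differences of an assumed set of pilot tone indices, showing how M physical pilots give rise to up to M^2 co-array lags; the pilot placement is illustrative, not the one used in the thesis.

```python
import numpy as np

# Physical pilot tone indices (an illustrative placement, not the thesis design).
pilots = np.array([0, 1, 4, 9, 15])     # M = 5 physical pilots

# Difference co-array: the set of all pairwise index differences p_i - p_j.
# Up to M^2 "co-pilot" lags can be formed from M physical pilots, which is what
# lets subspace methods (MUSIC/ESPRIT) resolve more multipath delays than the
# number of physical pilots alone would allow.
diffs = (pilots[:, None] - pilots[None, :]).ravel()
co_array = np.unique(diffs)

print("M =", pilots.size, "physical pilots")
print("distinct co-array lags:", co_array.size)
print(co_array)
```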
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
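A minimal sketch of this kind of computation, for symmetric allocations under one simple access model (the collector reads a fixed-size random subset of nodes), is given below; the node counts, access size, and budget are assumed values for illustration, not results from the thesis.

```python
from math import comb, ceil

def success_prob(n, r, budget, m):
    """P(successful recovery) when a budget (object size normalized to 1) is
    spread equally over m of n nodes and a data collector reads a uniformly
    random subset of r nodes. Recovery needs at least ceil(m / budget) of the
    m nonempty nodes (hypergeometric tail)."""
    need = ceil(m / budget)
    total = comb(n, r)
    prob = 0.0
    for k in range(max(need, r - (n - m)), min(m, r) + 1):
        prob += comb(m, k) * comb(n - m, r - k) / total
    return prob

# Example: 20 nodes, collector reads 5 of them, total budget 3x the object size.
n, r, budget = 20, 5, 3.0
for m in (3, 6, 10, 15, 20):
    print(f"spread over m={m:2d} nodes -> P(success) = {success_prob(n, r, budget, m):.3f}")
```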
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power needed to search high-dimensional spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
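The sketch below is a generic real-coded genetic algorithm, not the hGA or vGA of this work: it maximizes an overall evaluation measure formed as the product of two assumed preference functions, using tournament selection, blend crossover, and Gaussian mutation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative preference functions on two design parameters, each returning a
# degree of satisfaction in [0, 1]; their product serves as the overall measure.
def pref_strength(x):
    return np.exp(-(x[0] - 2.0) ** 2)        # prefers x[0] near 2.0

def pref_cost(x):
    return 1.0 / (1.0 + x[1] ** 2)           # prefers small x[1]

def overall(x):
    return pref_strength(x) * pref_cost(x)   # preference combination rule

pop_size, n_gen = 60, 80
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 5.0])
pop = rng.uniform(lo, hi, size=(pop_size, 2))
best_x, best_f = pop[0].copy(), overall(pop[0])

for gen in range(n_gen):
    fit = np.array([overall(x) for x in pop])
    if fit.max() > best_f:
        best_f, best_x = fit.max(), pop[np.argmax(fit)].copy()
    # Tournament selection: each slot gets the better of two random individuals.
    i = rng.integers(pop_size, size=pop_size)
    j = rng.integers(pop_size, size=pop_size)
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    # Blend crossover between shuffled parent pairs, then Gaussian mutation.
    partners = parents[rng.permutation(pop_size)]
    alpha = rng.uniform(size=(pop_size, 1))
    pop = np.clip(alpha * parents + (1 - alpha) * partners
                  + rng.normal(0.0, 0.05, size=parents.shape), lo, hi)

print("best design:", best_x, "overall preference:", best_f)
```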
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
Abstract:
The emergence of new telecommunication services has caused an enormous increase in data traffic on transmission networks. To meet this growing demand, new technologies have been developed and deployed over the years, and one of the main advances is in optical transmission, owing to the large information-carrying capacity of optical fiber. The technology that currently best exploits the capacity of this transmission medium is Wavelength Division Multiplexing (WDM), which allows several signals to be transmitted over a single optical fiber. WDM optical networks have become very complex, with enormous transmission capacity (terabits per second), in order to meet the explosion in demand for bandwidth. In this context, it is extremely important that the resources of these networks be used intelligently and in an optimized way. One of the greatest challenges in an optical network is choosing a route and selecting an available wavelength to serve a connection request while using as few resources as possible. This problem is quite complex and became known as the Routing and Wavelength Assignment (RWA) problem. Many studies have sought an efficient solution to this problem, but it is not always possible to combine good performance with low execution time, a fundamental requirement in telecommunication networks. Genetic algorithms (GAs) have been used to find solutions to optimization problems such as the RWA problem, and have obtained results superior to traditional heuristic solutions found in the literature. This dissertation briefly presents the concepts of optical networks and genetic algorithms, and describes a formulation of the RWA problem suitable for solution by a genetic algorithm.
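As a small illustration of the wavelength-continuity constraint at the core of the RWA problem (not the genetic-algorithm formulation developed in the dissertation), the sketch below applies a first-fit heuristic to a few assumed connection requests with precomputed routes.

```python
# First-fit wavelength assignment: for a candidate route (list of links), pick
# the lowest-index wavelength that is free on every link of the route.
# Topology, routes, and wavelength count are illustrative assumptions.

NUM_WAVELENGTHS = 4
used = {}   # link -> set of wavelengths already assigned on that link

def first_fit(route):
    """Return the first wavelength free on all links of the route, or None."""
    for w in range(NUM_WAVELENGTHS):
        if all(w not in used.get(link, set()) for link in route):
            for link in route:
                used.setdefault(link, set()).add(w)
            return w
    return None   # request blocked: no single wavelength is free end to end

# Connection requests, each with a precomputed route (sequence of links).
requests = [
    [("A", "B"), ("B", "C")],
    [("A", "B"), ("B", "D")],
    [("B", "C"), ("C", "D")],
    [("A", "B"), ("B", "C"), ("C", "D")],
]
for k, route in enumerate(requests):
    print(f"request {k}: wavelength {first_fit(route)}")
```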
Abstract:
Optimization methods that adopt first- and/or second-order optimality conditions are efficient, and these iterative methods are normally developed and analyzed through the mathematical analysis of n-dimensional Euclidean space, which is local in character. Such methods lead to iterative algorithms that are used to compute global minimizers of a nonlinear function, mainly nonconvex and multimodal ones, with success depending on the position of the starting points. The Topographical Global Optimization method is a clustering algorithm, grounded in elementary concepts of graph theory, whose purpose is to generate good starting points for local search methods from points distributed uniformly in the interior of the feasible region. The objective of this work is to apply the Topographical Global Optimization method, together with a robust and effective interior-point feasible-directions method, to optimization problems with linear and/or nonlinear equality and/or inequality constraints that form feasible sets with nonempty interiors. For each of these problems, a hyper-rectangle enclosing the feasible set is also defined, within which the sample points are generated.
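A minimal sketch of the topograph selection step, under simplifying assumptions (a bound-constrained multimodal test function in place of the constrained problems treated in the work, and a quasi-Newton local search in place of the interior-point feasible-directions method), is shown below.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):
    # Multimodal test objective (illustrative stand-in for the problems in the work).
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * x.size

# 1) Sample points uniformly inside a hyper-rectangle enclosing the feasible set.
lo, hi, n_pts, k = -5.12, 5.12, 300, 6
pts = rng.uniform(lo, hi, size=(n_pts, 2))
vals = np.array([f(p) for p in pts])

# 2) Topograph step: keep a point as a starting point if its objective value is
#    lower than that of all of its k nearest neighbours.
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)
neighbours = np.argsort(d2, axis=1)[:, :k]
starts = [pts[i] for i in range(n_pts) if np.all(vals[i] < vals[neighbours[i]])]

# 3) Run a local search from each selected start and keep the best result.
best = min((minimize(f, s, bounds=[(lo, hi)] * 2) for s in starts),
           key=lambda r: r.fun)
print(f"{len(starts)} starting points, best value found: {best.fun:.4f} at {best.x}")
```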
Abstract:
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that 1) is almost fully, or at least densely, connected, 2) occurs in all or almost all graph instances, and 3) has maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as a) a max-min two-person game optimization problem, or b) a min-min soft margin optimization problem. Both formulations lead to the same solution when a partial Lagrangian method is used to solve the optimization problems. Experiments on synthetic data and on real social network data show that the proposed method is able to reliably find soft cliques in graph data, even if the data are distorted by random noise or unreliable observations.
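The sketch below illustrates only the clique-ness measure described above, counting the edges missing from a vertex subset of a small assumed graph; the game-theoretic and soft-margin formulations are not reproduced.

```python
from itertools import combinations

# Toy undirected graph given as a set of edges (frozensets of endpoints).
edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4), (4, 5)]}

def missing_edges(vertices):
    """Number of edges that would have to be added to make `vertices` a clique.
    Zero means the subset already is a clique; small values indicate a dense
    'soft clique' in the sense described above."""
    return sum(1 for u, v in combinations(vertices, 2)
               if frozenset((u, v)) not in edges)

print(missing_edges({1, 2, 3}))        # 0 -> a true clique
print(missing_edges({1, 2, 3, 4}))     # 1 -> one edge short of a clique
print(missing_edges({1, 2, 3, 4, 5}))  # 4 -> far from a clique
```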
Abstract:
The notion of coupling within a design, particularly within the context of Multidisciplinary Design Optimization (MDO), is much used but ill-defined. There are many different ways of measuring design coupling, but these measures vary in both their conceptions of what design coupling is and how such coupling may be calculated. Within the differential geometry framework which we have previously developed for MDO systems, we put forth our own design coupling metric for consideration. Our metric is not commensurate with similar types of coupling metrics, but we show that it both provides a helpful geometric interpretation of coupling (and of uncoupledness in particular) and exhibits greater generality and potential for analysis than those similar metrics. Furthermore, we discuss how the metric might be profitably extended to time-varying problems and show how the metric's measure of coupling can be applied to multi-objective optimization problems (in unconstrained optimization and in MDO).
Abstract:
The solution time of the online optimization problems inherent to Model Predictive Control (MPC) can become a critical limitation when working in embedded systems. One proposed approach to reducing the solution time is to split the optimization problem into a number of reduced-order problems, solve these reduced-order problems in parallel, and select the solution which minimises a global cost function. This approach is known as Parallel MPC. Its potential disturbance-rejection capabilities are illustrated using a simulation example. The algorithm is implemented in a linearised model of a Boeing 747-200 under nominal flight conditions and with an induced wind disturbance. Under significant output disturbances, Parallel MPC provides a significant improvement in performance when compared to Multiplexed MPC (MMPC) and Linear Quadratic Synchronous MPC (SMPC).
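A toy sketch of the decomposition idea only is given below, with a generic quadratic cost standing in for the MPC objective (the Boeing 747 model and MPC constraints are not reproduced): each reduced-order problem optimizes one assumed block of decision variables with the rest held at a nominal value, and the candidate with the lowest global cost is selected.

```python
import numpy as np
from scipy.optimize import minimize

# Global cost: a strictly convex quadratic over the full decision vector u,
# standing in for the MPC objective (illustrative, not the 747 model).
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
H = A.T @ A + n * np.eye(n)
g = rng.normal(size=n)
cost = lambda u: 0.5 * u @ H @ u + g @ u

# Split the decision variables into blocks; each reduced-order problem optimizes
# only its own block while the remaining entries are held at a nominal value (0).
blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(0, 8, 2)]

candidates = []
for idx in blocks:                       # in Parallel MPC these solves run in parallel
    def reduced(v, idx=idx):
        u = np.zeros(n)
        u[idx] = v
        return cost(u)
    res = minimize(reduced, np.zeros(idx.size))
    u = np.zeros(n)
    u[idx] = res.x
    candidates.append(u)

# Select the candidate that minimises the global cost function.
best = min(candidates, key=cost)
print("reduced-problem costs:", [round(cost(u), 3) for u in candidates])
print("selected cost:", round(cost(best), 3),
      " full-problem optimum:", round(cost(np.linalg.solve(H, -g)), 3))
```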
Abstract:
A Penning trap system called the Lanzhou Penning Trap (LPT) is now being developed for precise mass measurements at the Institute of Modern Physics (IMP). One of the key components is a 7 T actively shielded superconducting magnet with a clear warm bore of 156 mm. The required field homogeneity is 3 × 10^-7 over two 1 cubic centimeter volumes lying 220 mm apart along the magnet axis. We introduce a two-step method which combines linear programming and a nonlinear optimization algorithm for designing the multi-section superconducting magnet. This method is fast and flexible for handling arbitrarily shaped homogeneous volumes and coils. With the help of this method an optimal design for the LPT superconducting magnet has been obtained.
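A one-dimensional toy version of such a two-step procedure is sketched below: linear programming chooses loop currents on a fixed axial grid so that the on-axis field meets a (deliberately loose) homogeneity tolerance, and a nonlinear solver then refines the loop positions. The loop radius, grid, target field, and tolerance are all illustrative assumptions and are far from the LPT specification.

```python
import numpy as np
from scipy.optimize import linprog, minimize

MU0 = 4e-7 * np.pi
R = 0.2                                  # loop radius [m] (illustrative)

def bz(z_loops, currents, z_eval):
    """On-axis field of coaxial circular loops: superposition of the textbook
    single-loop formula B_z = mu0*I*R^2 / (2*(R^2 + d^2)^(3/2))."""
    d = z_eval[:, None] - z_loops[None, :]
    return (MU0 * R**2 / 2.0) * ((R**2 + d**2) ** -1.5) @ currents

# Target: B_z close to B0 over a small region around z = 0.
z_eval = np.linspace(-0.05, 0.05, 21)
B0 = 1e-3                                # modest target field [T] for the toy problem
tol = 1e-5                               # loose toy tolerance (1%), not 3e-7

# Step 1 (linear programming): loop positions fixed on a grid; choose currents
# minimizing total |I| subject to |B_z(z) - B0| <= tol. Split I into I+ - I-.
z_grid = np.linspace(-0.3, 0.3, 13)
d = z_eval[:, None] - z_grid[None, :]
G = (MU0 * R**2 / 2.0) * (R**2 + d**2) ** -1.5      # B = G @ I (linear in currents)
A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
b_ub = np.concatenate([np.full(z_eval.size, B0 + tol),
                       np.full(z_eval.size, -(B0 - tol))])
c = np.ones(2 * z_grid.size)                        # minimize sum(I+) + sum(I-)
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
assert lp.success, "LP step infeasible for this tolerance"
I = lp.x[:z_grid.size] - lp.x[z_grid.size:]

# Step 2 (nonlinear optimization): keep the LP currents and refine the loop
# positions, which enter the field nonlinearly, to reduce the worst-case error.
err = lambda z: np.max(np.abs(bz(z, I, z_eval) - B0))
nl = minimize(err, z_grid, method="Nelder-Mead",
              options={"maxiter": 2000, "xatol": 1e-5, "fatol": 1e-12})
print("max field error after LP step:", err(z_grid))
print("max field error after refinement:", err(nl.x))
```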